By direct computation of the pmf of X^m, it is clear that it is not a Poisson variate. The inverse of the transformation g(x) = x^m is h(y) = y^(1/m), which is not an integer for almost all values of y; i.e., the pmf of X^m at x is zero for almost all integers x.
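This can be checked numerically. Below is a minimal sketch (the rate λ = 3 and the summation cutoff are arbitrary choices of mine, purely for illustration): the pmf of Y = X^2 puts mass only on perfect squares, so it vanishes at integers like 2, whereas any Poisson pmf is strictly positive on all of {0, 1, 2, ...}.

```python
import math

def poisson_pmf(k, lam):
    """Poisson pmf P(X = k)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 3.0  # illustrative rate, arbitrary

def y_pmf(y, lam):
    """pmf of Y = X^2: mass sits only on perfect squares k^2."""
    r = math.isqrt(y)
    return poisson_pmf(r, lam) if r * r == y else 0.0

# Y puts zero mass on non-squares such as 2:
print(y_pmf(2, lam))                    # 0.0
# ... while it agrees with P(X = 2) at the square 4:
print(y_pmf(4, lam) == poisson_pmf(2, lam))  # True

# E(Y) = E(X^2) = lam + lam^2 = 12; a Poisson variate with that mean
# would have positive pmf at 2, so Y cannot be Poisson.
mean_y = sum(y * y_pmf(y, lam) for y in range(0, 2000))
print(poisson_pmf(2, mean_y) > 0)       # True
```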
By analogy with the log-normal distribution, you should be able to have a log-Poisson distribution. That would take care of the case where m is some constant.
If m is the sum of several random variables, then you could get a Gaussian distribution in log space, so m = a + b + c + d + e + f.
1. If m is a non-random constant, then one can use the name Poisson pd on the ordered set {0 = 0^m, 1 = 1^m, 2^m, 3^m, 4^m, ...}
2. The name log-Poisson should accordingly be reserved for e^x instead; indeed, y = e^x is log-normally distributed if x is normal.
3. If m is random, no particular name can be assigned; also, no particular distribution can be implied without information on the joint pd of X and m.
4. For the limit pd of sums like a log x + b log x + c log x + d log x + ..., the CLT is applicable only in very special circumstances, which in particular have to take into account non-zero correlation between the coefficients a, b, c, d, ... and the rv x in such a way that the terms are iid; this is due to the fact that if a, b, c, d, ... are iid, then a log x, b log x, c log x, d log x, ... are not independent.
In the spirit of writing qualifications ... I would add to Joachim's #4 that the second paragraph in my original post also assumes that a, b, c, ... all contribute equally to m. If factor c accounts for 99% of the effect on m, then the CLT will not help even if the remaining 1% is composed of hundreds of other factors.
What is likely is that a, b, c, d, ... are correlated to some extent and that not all the factors contribute equally. In this case you could run a factor analysis to reduce a, b, c, d, ... to a small number of independent factors; the number of significant factors will determine how close the distribution of m is to Gaussian.
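As a rough sketch of that idea (using PCA via SVD as a simple stand-in for a full factor analysis; the mixing matrix and noise level are hypothetical choices of mine), four correlated factors driven by two latent variables show only two significant components in the eigenvalue spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# hypothetical setup: four observed factors a, b, c, d driven by
# only two latent variables, plus a little independent noise
latent = rng.normal(size=(n, 2))
mix = np.array([[1.0, 0.2],
                [0.9, 0.1],
                [0.1, 1.0],
                [0.2, 0.8]])
factors = latent @ mix.T + 0.1 * rng.normal(size=(n, 4))

# PCA via SVD of the centered data: the explained-variance ratios show
# how many independent factors really drive m = a + b + c + d
X = factors - factors.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained)  # two dominant components, two near zero
```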
@Joachim, There are two problems with your point #4.
First, the central limit theorem requires independence, but it does not require identical distributions (for non-identical summands it does need a mild regularity condition, e.g. of Lindeberg type). The CLT will be just fine with a (Gaussian) + b (uniform) + c (negative binomial) + d (beta), and it will guarantee that the result converges to an overall Gaussian distribution (though you may need more than four factors to get a distribution that is indistinguishable from Gaussian at any given sample size). You do not even have to know the distribution of d, which happens to be uniform (minimum = e, maximum = f) where e is normal(2,7) and f is Poisson(17). In the distribution of m, only the distribution of d will count as an independent factor of m, not e and f.
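A small simulation illustrates the point (all distribution parameters below are my own illustrative picks, not taken from the thread): summing four independent but differently distributed factors already pulls the standardized sum toward Gaussian shape, though with only four terms some skewness remains, consistent with the caveat that more factors may be needed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# independent but NOT identically distributed summands
a = rng.normal(0.0, 1.0, n)
b = rng.uniform(-2.0, 2.0, n)
c = rng.negative_binomial(5, 0.5, n).astype(float)
d = rng.beta(2.0, 2.0, n) * 4.0
m = a + b + c + d

# standardize and compare skewness/kurtosis with the Gaussian
# reference values (0 and 3)
z = (m - m.mean()) / m.std()
skew = np.mean(z**3)
kurt = np.mean(z**4)
print(skew, kurt)  # close to (0, 3), but not exact with only 4 factors
```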
Second, you need a "not" in the sentence "...that if a, b, c, d, ... are not iid, then a log x, b log x, c log x, d log x, ... are not independent."
1. Seemingly, you are missing the key message. I do not need a "not" in note 4. Namely, I am claiming the necessity of dependence between the terms a log x, b log x, c log x, d log x, ... even if a, b, c, d are independent. More precisely and more generally:
if a ≠ 0, b ≠ 0, ... are independent, and if y is not concentrated at one point, then a*y, b*y, ... are NOT independent.
2. Contrary to your suggestion that "if a, b, c, d, ... are not iid, then a log x, b log x, c log x, d log x, ... are not independent", please consider the following example:
If x1 and x2 are iid exponential, then u1:=x1/(x1+x2) and u2:=x2/(x1+x2) are not independent (even though they both have the same uniform distribution on [0,1]), but multiplied by y:=x1+x2 they become independent, since we return to x1 and x2, respectively.
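A quick simulation of this example (sample size and seed are arbitrary choices of mine): the ratios u1 and u2 are perfectly negatively correlated since u1 + u2 = 1, yet multiplying by y recovers x1 and x2, which are independent.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x1 = rng.exponential(1.0, n)
x2 = rng.exponential(1.0, n)
y = x1 + x2
u1, u2 = x1 / y, x2 / y

# u1 and u2 are perfectly negatively correlated (u1 + u2 = 1) ...
print(np.corrcoef(u1, u2)[0, 1])          # -1.0
# ... yet u1*y and u2*y are just x1 and x2 again, hence independent
print(np.corrcoef(u1 * y, u2 * y)[0, 1])  # ≈ 0
```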
Let me present a proof of the proposition in 1. above in a special but typical case: namely, when a, b and y are independent rvs with finite second moments, and additionally the first two possess means E(a) and E(b) not equal to 0. Then
E( (a*y)*(b*y) ) = E(a) * E(b) * E(y^2)
E(a*y) * E(b*y) = E(a) * E(b) * E(y)^2
The two expressions are not equal unless the variance of y equals zero.
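The gap between the two expressions is exactly E(a) E(b) Var(y), which a simulation confirms (the particular distributions below are my own illustrative choices with E(a) = 2, E(b) = 1.5, Var(y) = 4):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
# independent a, b, y with E(a), E(b) nonzero and Var(y) > 0
a = rng.uniform(1.0, 3.0, n)      # E(a) = 2
b = rng.exponential(1.5, n)       # E(b) = 1.5
y = rng.normal(1.0, 2.0, n)       # E(y) = 1, Var(y) = 4

lhs = np.mean(a * y * b * y)            # E[(a*y)*(b*y)] = E(a)E(b)E(y^2)
rhs = np.mean(a * y) * np.mean(b * y)   # E(a*y)*E(b*y) = E(a)E(b)E(y)^2
# the gap equals E(a) * E(b) * Var(y) = 2 * 1.5 * 4 = 12,
# so a*y and b*y are correlated, hence not independent
print(lhs - rhs)
```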
Maybe this is a better phrasing: regardless of whether a, b, c, d, ... are independent or not, a log x, b log x, c log x, d log x, ... will be dependent because all contain a log x term.
For #2 in your reply, could you please rephrase it to fit the original example as much as possible, using x, m, and "a, b, c, d, ..."? I am having trouble seeing how the example shows that if a, b, c, d, ... are dependent, then a log x, b log x, c log x, d log x, ... can be independent despite the fact that each term has a log x component.
Let U, V, W be iid uniform on [0,1]. Define a := U, b := 1 - U, m = a + b = 1, and x = 1/(V*W). Consequently, y := ln(x) = -ln(V) - ln(W) is standard Erlang of order 2 (= gamma with shape parameter 2). Hence
E exp{-s*y} = 1/(1+s)^2
Using this, one can obtain the two-dimensional Laplace transform of the pair (a*y, b*y), defined by E{exp(-s*a*ln(x) - t*b*ln(x))}, equal to 1/(s+1) * 1/(t+1), which means that the two random variables are independent standard exponentials, despite the coefficients a and b being deterministically, linearly dependent.
Obviously, this example does not fulfil all the requirements; however, it proves that expecting a necessary correlation between a*y and b*y is false unless additional conditions on the joint probability distributions are imposed.
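This counterexample can also be verified numerically (sample size and seed are arbitrary picks of mine): with a = U and b = 1 - U deterministically dependent, the products a*y and b*y nevertheless show the moments and vanishing correlation of independent standard exponentials.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400_000
U = rng.uniform(size=n)
V = rng.uniform(size=n)
W = rng.uniform(size=n)
a, b = U, 1.0 - U
y = -np.log(V) - np.log(W)   # y = ln(x) with x = 1/(V*W), Erlang(2)

p, q = a * y, b * y
# deterministically dependent coefficients (a + b = 1), yet the
# products behave like independent standard exponentials:
print(np.corrcoef(p, q)[0, 1])   # ≈ 0
print(p.mean(), p.var())         # ≈ 1, 1 (standard exponential moments)
```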