let pZ(t) be the normal density and pX(t) be any density such that a1*pX(t) < pZ(t) everywhere, for some 0 < a1 < 1
.
define q(t) = (pZ(t) - a1*pX(t))/(1-a1)
q is a density (positive, integrates to unity ...)
.
and pZ(t) = a1*pX(t) + (1-a1)*q(t) so that you get the normal density as a mixture ...
.
the result is exact but of course, if pX is taken as a "well known" density (say, uniform on [a, b], with a1 depending on (a, b)), q will very likely not be a "well known" density !
(two half-Gaussian densities, maybe ?)
.
if your question is about a mixture of two "well known" densities, everything will depend on what you call "approximated", as pointed out by Jochen above
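for what it's worth, a minimal numerical sketch of this decomposition (a sketch only, with assumed choices: Z standard normal, pX uniform on [a, b] = [-1, 1], and a 0.9 safety factor in a1 so the strict inequality holds):

import numpy as np
from scipy.stats import norm, uniform

a, b = -1.0, 1.0
pZ = norm(0, 1).pdf
pX = uniform(loc=a, scale=b - a).pdf   # 1/(b-a) on [a, b], 0 elsewhere

# on [-1, 1] the normal density is smallest at the endpoints, so this a1
# keeps a1*pX(t) strictly below pZ(t) everywhere
a1 = 0.9 * (b - a) * pZ(b)

t = np.linspace(-6.0, 6.0, 20001)
dt = t[1] - t[0]
q = (pZ(t) - a1 * pX(t)) / (1.0 - a1)   # the residual density q(t)

print("min q(t):", q.min())              # nonnegative everywhere
print("integral of q:", (q * dt).sum())  # ~ 1, so q is a density
print("max reconstruction error:", np.abs(a1 * pX(t) + (1.0 - a1) * q(t) - pZ(t)).max())  # 0 up to rounding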
1)
Let X ~ (approximately) Normal, Y ~ some other distribution, and Z = u*X + (1-u)*Y. The distribution of the mixture Z will be approximately normal for the trivial case that u -> 1, that is, when the "not-normal component" of the mixture is negligibly small.
2)
Let X1, X2, ... Xn ~ some distributions with finite means and not too extreme variances, and Z = u1*X1+u2*X2+...+un*Xn (with all u>0 and u1+u2+...+un=1). For larger n, the distribution of Z will approximate the normal distribution (central limit theorem).
3)
Let X, Y ~ Normal with same mu and sigma, then any mixture of these is also normally distributed. When the mus and/or sigmas are not identical, but sufficiently similar, the mixture distribution is still approximately normal (see the sketch below).
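To make case 3 concrete, a rough sketch with assumed parameter values (a 50/50 mixture of N(0, 1) and N(0.3, 1.1), compared against a single normal with the mixture's own mean and variance):

import numpy as np
from scipy.stats import norm

w = 0.5
mu1, s1 = 0.0, 1.0
mu2, s2 = 0.3, 1.1

t = np.linspace(-6.0, 6.0, 2001)
mix_pdf = w * norm(mu1, s1).pdf(t) + (1 - w) * norm(mu2, s2).pdf(t)

# mean and variance of the mixture (standard moment formulas)
m = w * mu1 + (1 - w) * mu2
v = w * (s1**2 + mu1**2) + (1 - w) * (s2**2 + mu2**2) - m**2
ref_pdf = norm(m, np.sqrt(v)).pdf(t)

# with mus/sigmas this close, the two curves are nearly indistinguishable
print("max |mixture pdf - normal pdf|:", np.abs(mix_pdf - ref_pdf).max())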
I want to use a mixture of two normal distributions or a mixture of two half-normal distributions. A mixture of exponential and gamma distributions may also be taken. In my work, the normality condition is necessary for any type of mixture to be considered.
So, Fabrice Clerot and Jochen Wilhelm, kindly point me to some references that can provide evidence for the cases you mentioned.
@ Jochen: your answer gives results about *linear combinations* of Gaussians (or other variables), not about *mixtures*. Those are quite different things: the linear combination of two independent Gaussians is a Gaussian, not their mixture (in general)... And for the first example, Z = u X + (1-u) Y is not the mixture of X and Y, and has a quite different density than the mixture!
I think that's just a matter of notation, and it does not change the result for examples 1 (provided you define Z as the variable whose density fZ is u fX + (1-u) fY) and 3 (since the density of the common Gaussian factorizes). However, I'm not sure the central limit theorem applies to mixtures as well; it is really about sums of random variables, not about sums of their distribution functions...
@ Jochen : at the beginning of the Wikipedia page you're citing:
« A distinction needs to be made between a random variable whose distribution function or density is the sum of a set of components (i.e. a mixture distribution) and a random variable whose value is the sum of the values of two or more underlying random variables, in which case the distribution is given by the convolution operator »
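To make the quoted distinction concrete, a small simulation sketch with assumed toy distributions X ~ N(0, 1), Y ~ N(4, 1) and u = 0.5:

import numpy as np

rng = np.random.default_rng(0)
u, n = 0.5, 200000
x = rng.normal(0.0, 1.0, n)
y = rng.normal(4.0, 1.0, n)

# linear combination: every draw is u*x + (1-u)*y, i.e. the convolution of the
# scaled variables; for independent normals this is N(2, 0.5), unimodal
lin = u * x + (1 - u) * y

# mixture: each draw comes from X with probability u, otherwise from Y,
# i.e. the density is u*fX + (1-u)*fY; here it is clearly bimodal
pick = rng.random(n) < u
mix = np.where(pick, x, y)

print("linear combination: mean %.2f, var %.2f" % (lin.mean(), lin.var()))
print("mixture:            mean %.2f, var %.2f" % (mix.mean(), mix.var()))
# same mean (2), very different variances: the mixture variance includes the
# between-component term u*(1-u)*(muX - muY)**2 = 4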