As Nils already wrote, the PDF of a normal distribution can be arbitrarily large. As the variance (sigma^2) goes to zero, the mode of the PDF (i.e., its maximum, which is located at the mean) goes to infinity. In the limit of zero variance, the PDF of a normal distribution can be regarded as a Dirac delta distribution, which is zero everywhere except at the location of the mean, where its value is infinite.
Yes, the PDF of the normal distribution is not bounded; with sigma chosen small enough, the PDF value at the mean exceeds any given positive real number.
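A quick numerical check of this (a minimal sketch using scipy; the sigma values are arbitrary choices):

```python
from scipy.stats import norm

mu = 0.0
for sigma in [1.0, 0.1, 0.01, 0.001]:
    # Density at the mode (the mean) is 1 / sqrt(2*pi*sigma^2).
    peak = norm.pdf(mu, loc=mu, scale=sigma)
    print(f"sigma = {sigma:>6}  ->  f(mu) = {peak:.3f}")
# At sigma = 1 the peak is ~0.399; at sigma = 0.001 it is ~398.9,
# and it grows without bound as sigma -> 0.
```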
Hi Shehkroz. If we associate the PDF with small frequency intervals (P2 accumulated - P1 accumulated) versus the variable average (L2 - L1)/(P2 - P1) related to an ordered Lorenz curve, then the PDF tends to (delta P)^2 / delta L. The function L(P accumulated), or Lorenz curve, increases or decreases monotonically (according to the previous ordering) for any detected increment of P (cumulative population), so the PDF may be very high, but not infinite for models made of real data; see the sketch below. That is my view for now, because my mind is not prepared to understand either the "big infinite" or the "tiny infinite", unless you give me better reasons to convince me otherwise. Thanks, emilio
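To illustrate the point that density estimates built from finite real data stay finite, here is a small sketch (the sample is hypothetical; any binned estimate with positive bin width behaves this way):

```python
import numpy as np

# Hypothetical sample standing in for "real data".
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=0.05, size=10_000)

# Empirical density: (fraction of points in bin) / (bin width).
density, edges = np.histogram(data, bins=50, density=True)
bin_width = edges[1] - edges[0]
print("largest empirical density:", density.max())
print("hard upper bound 1/bin_width:", 1.0 / bin_width)
# The estimate can be large when the data cluster tightly, but it
# is always finite: each bin has positive width, so the density
# value can never exceed 1 / bin_width.
```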
The probability density function f(x) of a Normal(mu, sigma^2) distribution attains its maximum at x = mu.
The value is f(mu) = 1/sqrt(2 pi sigma^2).
If you are interested in the maximum of the density over all sigma > 0, then this behaves like the supremum of 1/sigma, which is infinite.
If you instead restrict sigma to values bounded away from zero, say sigma >= c for some c > 0 (for instance the lower endpoint of a confidence interval obtained from estimating sigma), then the maximum is always finite and equals 1/sqrt(2 pi c^2).
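A short check of the peak formula and the bound (the mu, sigma, and c values are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm

mu = 2.5
for sigma in [0.5, 1.0, 2.0]:
    peak = norm.pdf(mu, loc=mu, scale=sigma)         # density at the mean
    formula = 1.0 / np.sqrt(2.0 * np.pi * sigma**2)  # 1/sqrt(2 pi sigma^2)
    print(f"sigma={sigma}: pdf at mean={peak:.6f}, formula={formula:.6f}")

# With sigma bounded below by c > 0 the peak never exceeds
# 1/sqrt(2*pi*c^2); without such a bound the supremum is infinite.
c = 0.5
print("bound for sigma >= c:", 1.0 / np.sqrt(2.0 * np.pi * c**2))
```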
I think the log-likelihood of a Gaussian component in a GMM is -(x - mean(x))^2 / (2*sigma^2) + log(expression_you_wrote). For the exact formula, see the slides at http://www.cse.psu.edu/~rcollins/CSE586Spring2010/lectures/cse586gmmemPart1_6pp.pdf, in particular the slide with the heading 'Maximum Likelihood'.
If d is a single point, then x = mean(x) and sigma = 0, so you get a 0/0 + log(constant/0) situation.
If d is a vector and sigma is very small, the log-likelihood is still finite and eventually very negative, because the -(x - mean(x))^2 / (2*sigma^2) term dominates as sigma shrinks, as can be seen from the formula on that slide.
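A small sketch of this behaviour (the data vector d and the sigma values are hypothetical):

```python
import numpy as np

def gaussian_loglik(x, mu, sigma):
    # Sum over i of log N(x_i | mu, sigma^2):
    #   -n/2 * log(2*pi*sigma^2) - sum((x_i - mu)^2) / (2*sigma^2)
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
            - np.sum((x - mu) ** 2) / (2.0 * sigma**2))

# Points not exactly at the mean: the quadratic term dominates as
# sigma -> 0 and drives the log-likelihood toward -infinity.
d = np.array([0.0, 0.1, -0.1])
for sigma in [1.0, 0.01, 0.001]:
    print(sigma, gaussian_loglik(d, mu=d.mean(), sigma=sigma))

# A single point fitted exactly (x == mean, sigma -> 0) is the
# degenerate case: the quadratic term is 0 while the log term
# diverges to +infinity, which is the 0/0 situation above.
```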