The most successful classifier, GMM-UBM, uses the log-likelihood of the training data under a model with parameters such as means and variances. The drawback of taking the log is that a Gaussian density can underflow to 0 when a component has a very low variance and the data values change a lot; log(0) is undefined, so a component that returns 0 densities ends up producing NaN values. As a result, the GMM becomes completely useless for comparison with the test data, because it also returns NaN when the likelihood of a test sample is evaluated. We can selectively delete the components with NaN values, but this deletion makes the number of mixture components different for each class. Is there any way to minimise or fix this NaN problem in GMM-UBM when running the EM algorithm?
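
For reference, here is a minimal NumPy sketch (my own illustrative code, not from any speaker-verification toolkit) of how the naive E-step and log-likelihood computation produce the NaN / -inf values described above when a component's variance collapses:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))      # 1-D training data
X[0] = 50.0                                   # one far-away point

# Two-component GMM; the second component has collapsed to a tiny variance.
weights = np.array([0.5, 0.5])
means   = np.array([0.0, 0.1])
vars_   = np.array([1.0, 1e-12])

def gauss_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Naive E-step: for the far-away point both densities underflow to 0,
# so the responsibility normaliser is 0 and 0/0 gives NaN.
dens = weights * gauss_pdf(X, means, vars_)       # shape (200, 2)
resp = dens / dens.sum(axis=1, keepdims=True)     # invalid value warning
print(np.isnan(resp).any())                       # True

# Naive log-likelihood: log(0) = -inf, which poisons the total.
print(np.log(dens.sum(axis=1)).sum())             # -inf
```

Once the responsibilities contain NaN, the M-step updates of the means, variances, and weights become NaN as well, which is exactly how the whole model ends up unusable.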