The Gaussian Mixture Model with Universal Background Model (GMM-UBM) approach builds the UBM from all of the training data (or from a separate dataset) and then uses each individual class's training data to obtain its own GMM, typically by adapting the UBM. This makes it fast to produce a speaker-, speech-, or class-dependent model for each individual. However, in my experiments the GMM-UBM classification rate is lower than that of the Support Vector Machine (SVM). As I understand it, the SVM is a supervised, discriminative classifier. It trains on all of the data at once, and tuning its cost (C) and gamma parameters takes a lot of time, but it still gives better classification accuracy than the GMM-UBM.

So my question is: why do people use the GMM-UBM in most cases? (A sketch of the setup I am comparing is below.)
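To make the comparison concrete, here is a minimal sketch of the two pipelines I have in mind, assuming scikit-learn and NumPy. The GMM-UBM part uses a simplified mean-only MAP adaptation, and the function names (`train_ubm`, `map_adapt_means`, `llr_score`, `train_svm`), component counts, and hyperparameter grids are only illustrative choices, not a definitive implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# --- GMM-UBM: one UBM on pooled data, then adapt a model per speaker/class ---
def train_ubm(all_features, n_components=64):
    """Fit the Universal Background Model on features pooled over all classes."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          max_iter=200, random_state=0)
    ubm.fit(all_features)
    return ubm

def map_adapt_means(ubm, class_features, relevance=16.0):
    """Mean-only MAP adaptation of the UBM toward one class's training data."""
    resp = ubm.predict_proba(class_features)        # (T, C) component posteriors
    n_k = resp.sum(axis=0)                          # soft counts per component
    f_k = resp.T @ class_features                   # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]      # data-dependent adaptation weights
    new_means = alpha * (f_k / np.maximum(n_k, 1e-8)[:, None]) + (1 - alpha) * ubm.means_

    # Reuse the UBM's weights and covariances; only the means are adapted.
    class_gmm = GaussianMixture(n_components=ubm.n_components, covariance_type="diag")
    class_gmm.weights_ = ubm.weights_
    class_gmm.covariances_ = ubm.covariances_
    class_gmm.means_ = new_means
    class_gmm.precisions_cholesky_ = 1.0 / np.sqrt(ubm.covariances_)
    return class_gmm

def llr_score(class_gmm, ubm, test_features):
    """Average log-likelihood ratio of the adapted model against the UBM."""
    return class_gmm.score(test_features) - ubm.score(test_features)

# --- SVM baseline: grid-search the cost (C) and gamma parameters ---
def train_svm(train_features, train_labels):
    """RBF-kernel SVM with a small grid over C and gamma (the slow part)."""
    grid = {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
    clf = GridSearchCV(SVC(kernel="rbf"), grid, cv=3)
    clf.fit(train_features, train_labels)
    return clf
```

With this setup, a test segment would be scored against each class with something like `llr_score(map_adapt_means(ubm, class_feats), ubm, test_feats)` and the class with the highest ratio chosen, while the SVM baseline simply calls `clf.predict(test_feats)`; this is the comparison in which the SVM comes out ahead for me.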