Maybe you need to reduce the dimensionality of your data set? Have you tried reducing the number of features (PCA, covariance analysis, canonical variates, etc.)? If that fails, you could decimate the data (undersample it). Perhaps a few more details would elicit more responses.
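For concreteness, here is a minimal sketch of both suggestions using scikit-learn, assuming your features are already stacked into a `frames` array of shape (n_frames, n_features); the array name and sizes are made up for illustration:

```python
# Illustrative only: `frames` stands in for your real stacked feature vectors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.standard_normal((100_000, 60))   # placeholder for the real features

# 1) Reduce the number of features, keeping e.g. 95% of the variance.
pca = PCA(n_components=0.95)
frames_reduced = pca.fit_transform(frames)
print(frames_reduced.shape)

# 2) Decimate (undersample) the data, e.g. keep every 4th frame.
frames_decimated = frames_reduced[::4]
print(frames_decimated.shape)
```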
Basically, I am trying to build a Universal Background Model (UBM) for speaker recognition. I have around 7000 speech utterances, each around 30 seconds to 1 minute long, which adds up to a large amount of data, and when I apply a GMM to it I get a memory error. I have written my own code for this, which works perfectly on small data sets but fails on bigger ones; it may also be that my initial estimates of the means, variances, and weights are not good enough.
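For what it's worth, one workaround along the lines suggested above would be to fit the UBM on a random subsample of the pooled frames and let a k-means initialisation provide the starting means/weights instead of a rough guess. This is just a sketch: scikit-learn's GaussianMixture stands in for the original code, and `all_frames` is a placeholder for the pooled feature matrix:

```python
# Rough sketch: train the UBM on a subsample of frames so that EM fits in memory.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
all_frames = rng.standard_normal((500_000, 39))   # placeholder for pooled MFCC frames

# Keep, say, 10% of the frames, drawn uniformly at random.
idx = rng.choice(all_frames.shape[0], size=all_frames.shape[0] // 10, replace=False)
subset = all_frames[idx]

# GaussianMixture initialises means/weights with k-means by default,
# which is usually more robust than a rough hand-made initial estimate.
ubm = GaussianMixture(n_components=256, covariance_type="diag", max_iter=50, verbose=1)
ubm.fit(subset)
```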
Yes, big-data tooling (specifically map-reduce) should help you solve the memory issues. For example, assuming the problem is that the library builds huge matrices for some of its calculations, map-reduce lets you split the job into smaller tasks. You would, however, need to rewrite the algorithm in a map-reduce architecture or find code/libraries that already implement it that way.
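To make that concrete, here is a minimal sketch of one EM iteration for a diagonal-covariance GMM written in a map-reduce style: each "map" task computes partial sufficient statistics over a chunk of frames that fits in memory, and the "reduce" step sums them and re-estimates the parameters. All names and sizes are illustrative; a real MapReduce framework (or a library built on one) would distribute the map tasks for you:

```python
import numpy as np

def map_step(chunk, weights, means, variances):
    """Compute partial sufficient statistics for one chunk of frames."""
    # Responsibilities r[n, k] proportional to w_k * N(x_n | mu_k, diag(var_k)).
    log_prob = -0.5 * (
        np.sum(np.log(2 * np.pi * variances), axis=1)                     # (K,)
        + np.sum((chunk[:, None, :] - means) ** 2 / variances, axis=2)    # (N, K)
    )
    log_weighted = log_prob + np.log(weights)
    log_norm = np.logaddexp.reduce(log_weighted, axis=1, keepdims=True)
    resp = np.exp(log_weighted - log_norm)                                # (N, K)
    # Zeroth-, first- and second-order statistics for this chunk.
    return resp.sum(axis=0), resp.T @ chunk, resp.T @ (chunk ** 2)

def reduce_step(stats, total_frames):
    """Sum the per-chunk statistics and re-estimate the GMM parameters."""
    n_k = sum(s[0] for s in stats)
    f_k = sum(s[1] for s in stats)
    s_k = sum(s[2] for s in stats)
    weights = n_k / total_frames
    means = f_k / n_k[:, None]
    variances = np.maximum(s_k / n_k[:, None] - means ** 2, 1e-6)  # floor for stability
    return weights, means, variances

# One EM iteration over chunks small enough to fit in memory (toy data).
rng = np.random.default_rng(0)
data = rng.standard_normal((100_000, 13))
K = 8
weights = np.full(K, 1.0 / K)
means = data[rng.choice(len(data), K, replace=False)]
variances = np.ones((K, data.shape[1]))

stats = [map_step(chunk, weights, means, variances)
         for chunk in np.array_split(data, 10)]
weights, means, variances = reduce_step(stats, len(data))
```

The point of the split is that only one chunk's responsibilities ever have to be materialised at a time, so the peak memory use is bounded by the chunk size rather than by the full utterance set.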