07 April 2018

Hi folks,

I'm a first year PhD student trying to wrap my head around some concepts concerning classification using RBFs.

Background:

I was looking at the Netlab toolkit's implementation of RBF networks, which uses a GMM to set the RBF centres. My data is the standard MNIST dataset, which has 784 input dimensions (pixels). The data is strongly correlated, with a large number of zeros in each sample, i.e. only some of the 784 pixels are set for any given output class {0 to 9}.

My Problem:

Netlab's RBF models use spherical covariance by default. While my covariance values are non-zero, I think my input dimension (784) is causing the activation probability (i.e. the probability P(X|J) of the data conditioned on each component density) to underflow to zero, which stops the EM algorithm for GMM training from converging and occasionally makes it report error values of 'Inf' on some cycles.
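To illustrate what I mean by the underflow, here is a quick sketch (in Python/NumPy rather than MATLAB; the dimension and variance are chosen to resemble my setup, and the sample vector is just a hypothetical stand-in). The direct spherical-Gaussian density goes to exactly 0.0 in double precision, while the same quantity computed in the log domain stays finite:

```python
import numpy as np

d = 784      # MNIST input dimension
var = 20.0   # a spherical variance similar in scale to my covars values
x = np.full(d, 0.5)   # hypothetical normalised sample (stand-in for a digit)
mu = np.zeros(d)      # hypothetical component centre

sq = np.sum((x - mu) ** 2)

# Direct density: exp(-||x-mu||^2 / (2*var)) / (2*pi*var)^(d/2)
# The normalising constant overflows to inf for d = 784, so p underflows to 0.
norm = np.float64(2.0 * np.pi * var) ** (d / 2)
p = np.exp(-0.5 * sq / var) / norm
print(p)   # 0.0 in double precision

# Log-domain version of the same density stays finite
log_p = -0.5 * sq / var - 0.5 * d * np.log(2.0 * np.pi * var)
print(log_p)
```

So the individual component densities P(X|J) are genuinely representable only in log space at this dimensionality, which would explain why EM's responsibilities degenerate.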

I have absolutely no clue how to get around this. Do I need to reduce the dimensionality of my data? Should I add some Gaussian noise so that the elements are non-zero?
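On the dimensionality-reduction idea, the sketch below is the kind of thing I had in mind: centre the data and project onto the top principal directions via SVD before fitting the GMM. The matrix here is random stand-in data, and 50 components is an arbitrary choice, not something I have validated:

```python
import numpy as np

# Hypothetical stand-in for the MNIST design matrix (n_samples x 784);
# in practice this would be the actual digit data.
rng = np.random.default_rng(0)
X = rng.random((500, 784))

# Centre the data and take the top-k principal directions via SVD (PCA)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 50
X_reduced = Xc @ Vt[:k].T   # now n_samples x 50, a far friendlier
                            # dimension for a spherical GMM

print(X_reduced.shape)
```

Would fitting the spherical GMM on `X_reduced` instead of the raw 784-dimensional pixels be a reasonable way to keep the component densities from underflowing?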

Any guidance in the subject would be greatly appreciated.

Thanks,

Kam

My Covariance values:

mix =

  struct with fields:

          type: 'gmm'
           nin: 784
      ncentres: 11
    covar_type: 'spherical'
        priors: [0.0610 0.0700 0.0780 0.1210 0.0970 0.0320 0.0870 0.1220 0.1370 0.1340 0.0610]
       centres: [11×784 double]
        covars: [27.3526 17.8717 14.9184 18.6905 17.8717 23.8131 14.9184 18.3074 15.3307 15.3307 15.6931]
          nwts: 8646
