I generated a list of artificial observation sequences from a known HMM. I then tried to recover the HMM parameters by training on those sequences with the Viterbi and EM algorithms: at each iteration I inferred the hidden state sequences with Viterbi and updated the model, repeating until the sum of the likelihoods of all the training examples was stable and no longer increased. With this procedure, the final model ends up far away from the original one. I used a left-to-right HMM, and I initialized the starting model randomly but close to the original model. Does anyone have a suggestion as to why HMM training with the Viterbi and EM algorithms doesn't converge to the original model?
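
To make the training loop concrete, here is a minimal NumPy sketch of the kind of Viterbi (segmental k-means) training I mean, assuming discrete symbol emissions. The function names, the eps smoothing, the structural-zero mask, and the stopping tolerance are illustrative choices, not my exact code:

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path and its log score for one observation sequence."""
    T, N = len(obs), len(log_pi)
    delta = np.empty((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                  # backtrack the best path
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta[-1].max()

def viterbi_train(sequences, pi, A, B, n_iter=100, tol=1e-6, eps=1e-3):
    """Viterbi (segmental k-means) training: decode each sequence with the
    current model, then re-estimate pi, A, B from counts over the decoded paths."""
    N, M = B.shape
    A_mask, pi_mask = (A > 0), (pi > 0)             # keep left-to-right structural zeros
    prev_ll = -np.inf
    for _ in range(n_iter):
        with np.errstate(divide="ignore"):          # log(0) = -inf for forbidden transitions
            log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
        pi_c = np.full(N, eps)                      # eps-smoothed count accumulators
        A_c = np.full((N, N), eps)
        B_c = np.full((N, M), eps)
        total_ll = 0.0
        for obs in sequences:
            path, ll = viterbi(obs, log_pi, log_A, log_B)
            total_ll += ll
            pi_c[path[0]] += 1
            for t in range(len(obs) - 1):
                A_c[path[t], path[t + 1]] += 1
            for t, o in enumerate(obs):
                B_c[path[t], o] += 1
        pi = (pi_c * pi_mask) / (pi_c * pi_mask).sum()
        A = (A_c * A_mask) / (A_c * A_mask).sum(axis=1, keepdims=True)
        B = B_c / B_c.sum(axis=1, keepdims=True)
        if total_ll - prev_ll < tol:                # stop once the total Viterbi score plateaus
            break
        prev_ll = total_ll
    return pi, A, B, total_ll
```

The mask is there so that re-estimation keeps the left-to-right structural zeros fixed; without it, the eps smoothing would leak probability into transitions the model topology forbids.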