05 February 2016

HMMs were predominantly used in speech recognition in the past, and they are still used wherever the input is a one-dimensional signal. A number of articles now report their use for gesture recognition as well. Every few months, newer articles proposing HMM-based solutions appear; interestingly, the problem still remains unsolved, and new HMM papers keep queuing up. How do these efforts line up on a timeline as a single thread of progress?

Can probabilistic estimation of a gesture (say, the trajectory of a two-dimensional shape) ultimately yield a deterministic solution through abstract features?
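To make the question concrete: one common way a probabilistic HMM score becomes a deterministic answer is through maximum-likelihood classification. A minimal sketch, assuming a 2D trajectory has already been quantized into discrete direction symbols (e.g. by chain-code features) and assuming toy, hand-picked model parameters (the names `forward_prob`, `B_circle`, `B_line` and all numbers below are hypothetical illustrations, not from any paper):

```python
# Sketch: forward algorithm for a discrete HMM, used to score a quantized
# 2D gesture trajectory. Each gesture class gets its own HMM; the final
# decision is the deterministic argmax over probabilistic scores.

def forward_prob(pi, A, B, obs):
    """P(obs | model) via the forward algorithm.
    pi: initial state probabilities; A: state transition matrix;
    B: emission matrix B[state][symbol]; obs: list of symbol indices."""
    n = len(pi)
    # Initialization: alpha_1(i) = pi_i * B_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # Induction: alpha_t(j) = (sum_i alpha_{t-1}(i) * A_ij) * B_j(o_t)
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two toy 2-state models over 4 direction symbols (hypothetical parameters).
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B_circle = [[0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4]]
B_line   = [[0.7, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.7]]

obs = [0, 0, 3, 3]  # quantized trajectory of an observed gesture
scores = {"circle": forward_prob(pi, A, B_circle, obs),
          "line":   forward_prob(pi, A, B_line, obs)}
best = max(scores, key=scores.get)  # deterministic decision from probabilistic scores
```

So the estimation itself stays probabilistic, but once the abstract features (the symbol sequence) and trained models are fixed, the argmax makes the final classification deterministic.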

I would appreciate explanation and justification from fellow researchers.
