A subject performs arm movements in a self-paced manner. Movement parameters are captured by sensors and the data are streamed in real time to a computer. There is a fair amount of variability in the sensor readings: the attached figure shows data points captured from 3 sensors during 10 repetitions of 2 types of movement.

A couple of important points:

1) Regarding training data: many subjects will be performing the same types of movements, in slightly different ways. Given the small number of repetitions that each subject will contribute for each class (say, around 10), we want to take advantage of across-subject structure (see the first sketch after these points).

2) Regarding classification: classification needs to happen in "real time", i.e. while the data is being streamed. We can assume a typical movement duration (say, 5 seconds) to define a "page size" of data to use for classification, though that duration is quite variable, both within and across subjects. Ideally, the classifier will return a class more frequently, e.g. once per second; obviously, at the beginning of a new movement there will be large uncertainty about the class, which should shrink as more data is streamed (see the second sketch below).
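To make point 1 concrete, here is a minimal sketch of one common way to exploit across-subject structure: pool every subject's repetitions into a single training set, and validate with leave-one-subject-out splits so the model is scored on generalizing to unseen subjects rather than memorizing subject-specific quirks. The array names, shapes, and the choice of scikit-learn logistic regression are illustrative assumptions, not part of the original question.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: one feature row per repetition.
rng = np.random.default_rng(0)
n_subjects, n_reps, n_feat = 8, 10, 15          # e.g. 3 sensors x 5 summary features
X = rng.normal(size=(n_subjects * n_reps * 2, n_feat))
y = np.tile([0, 1], n_subjects * n_reps)        # 2 movement classes
groups = np.repeat(np.arange(n_subjects), n_reps * 2)   # subject ID per row

# Pool everyone's repetitions, but score on held-out subjects so the
# model must capture structure shared across subjects.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("accuracy per held-out subject:", scores)
```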
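And for point 2, a minimal streaming sketch under the stated assumptions (a roughly 5 s page, one class estimate per second): a sliding buffer feeds fixed-length summary features to any pre-trained probabilistic classifier, so early, short windows simply yield flatter, less certain posteriors. The sampling rate, the feature function, and the `StreamingClassifier` wrapper are all hypothetical.

```python
import numpy as np
from collections import deque

FS = 100                 # assumed sampling rate in Hz (not given in the question)
PAGE = 5 * FS            # "page size" matching the ~5 s typical movement
STEP = 1 * FS            # emit an updated class estimate once per second

class StreamingClassifier:
    """Sliding-buffer wrapper around any pre-trained model with predict_proba."""

    def __init__(self, clf):
        self.clf = clf
        self.buffer = deque(maxlen=PAGE)   # keeps at most one 5 s page of samples
        self.n_seen = 0

    def _features(self):
        # Toy fixed-length features (per-channel mean and std), so that short
        # early windows and full pages live in the same feature space.
        w = np.asarray(self.buffer)
        return np.concatenate([w.mean(axis=0), w.std(axis=0)])[None, :]

    def push(self, sample):
        """Feed one streamed sample; return class posteriors once per second."""
        self.buffer.append(sample)
        self.n_seen += 1
        if self.n_seen % STEP == 0:
            return self.clf.predict_proba(self._features())[0]
        return None   # no new estimate on this sample

    def reset(self):
        """Call at movement onset; the first estimates will be the most uncertain."""
        self.buffer.clear()
        self.n_seen = 0
```

For this to behave sensibly, the same feature function would have to be used at training time, and truncated windows could be included in the training set so the classifier has seen early-window statistics as well as full pages.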
