Hello people,

I am using a machine learning approach to detect gestures with a 9-axis MEMS IMU sensor.

I collect the raw data from the accelerometer, gyroscope and magnetometer for the relevant gestures made in the air with the IMU sensor, compute features, train a model (e.g. an SVM from scikit-learn), save it, and later use it to predict gestures.
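For reference, this is roughly what my pipeline looks like (a minimal sketch; `compute_features` here is just a stand-in using per-axis means and standard deviations, not my actual feature computation):

```python
import numpy as np
import joblib
from sklearn.svm import SVC

def compute_features(samples):
    # samples: (n_readings, 9) array of accel/gyro/mag readings.
    # Placeholder features: per-axis mean and standard deviation.
    return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

def train_and_save(gestures, labels, path="gesture_svm.joblib"):
    # gestures: list of (n_readings, 9) arrays, labels: list of gesture names.
    X = np.array([compute_features(g) for g in gestures])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    joblib.dump(clf, path)

def predict_gesture(samples, path="gesture_svm.joblib"):
    # Load the saved model and predict the gesture for one new recording.
    clf = joblib.load(path)
    return clf.predict([compute_features(samples)])[0]
```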

I am able to record the data, train the model and detect gestures successfully, but only in one orientation. The model cannot predict, for example, a '0' drawn in the air while holding the IMU sensor rotated 90 degrees clockwise, or in any other orientation for that matter, because it has not been trained on that orientation. Could someone shed some light on how to detect gestures irrespective of the orientation in which I hold the IMU sensor? Please ask questions if this information is insufficient. Thanks in advance :)
