Detailed Description:
I am working on a steering wheel angle sensor that measures the absolute angle of the steering wheel. Because steering angle sensors use gears and several joints, i.e. purely mechanical hardware, errors creep into the sensor values over time despite an initial calibration, caused by wear of the mechanical parts as well as environmental and road conditions (e.g. offset, phase change, flattening of the signal, delay).
In short, these measurement errors distort my result. For example, where the calibrated (close-to-ideal) sensor shows a peak in the velocity-vs-time curve, an error (hysteresis) in the measured signal can flatten that peak or make it disappear entirely, which affects my final task.
I have a tolerance of, say, 1.20 degrees for hysteresis. That is why I want a detailed picture of my signal: I want to observe whether changes such as offset, delay, or attenuation have occurred in it. This will help me decide whether to reduce the number of sensors used for my task, change the sensor hardware to reduce the hysteresis, or take some other action to reduce it.
Below is what I have done up till now; I am not yet sure whether it is right or wrong. I am getting some values for hysteresis, but I have a few questions about these techniques. If someone can suggest how to improve them, or a better approach altogether, that would be great guidance.
I have an ideal sensor signal (recorded under the ideal conditions we want) and values from one sensor over 6 different car drives. I will explain just one example: my first drive and its relation to the reference sensor data.
The reference signal and the sensor signal are each a 1x1626100 double array for one sensor reading; in all readings, the time stamps of the ideal and measured signals are identical.
In short, I want to find the hysteresis of the sensor signal relative to the reference signal.
I have applied a few estimation techniques to accomplish this.
1- I applied a Gaussian technique to estimate the error in my data, but I am not satisfied with it: I am not getting good or expected values, possibly due to outliers or other errors.
I subtracted (Ref - measured signal), calculated the mean and standard deviation of the difference signal after applying my limitations, and then drew a Gaussian curve with that mean and standard deviation. I drew two lines, one at mean + standard deviation and one at mean - standard deviation; the distance between them is what I call the hysteresis (loss).
Please have a look at the attached figure. I have attached a figure of a trimodal Gaussian curve, but in some plots, such as picture 3, my data is shifted. Can anyone tell me why this happens and how to remove it? It occurs in all plots, but in figure 3 it is clearest.
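To make clear what I compute in this step, here is a minimal sketch in NumPy (rather than my MATLAB) on a synthetic stand-in signal, since I cannot attach the full drive data here; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the reference and measured signals
# (in the real data these are the 1x1626100 arrays from one drive).
ref = np.sin(np.linspace(0, 10 * np.pi, 10000))
meas = ref + rng.normal(loc=0.05, scale=0.4, size=ref.size)  # offset + noise

diff = ref - meas                 # difference signal (Ref - measured)
mu = np.mean(diff)                # mean of the difference signal
sigma = np.std(diff)              # standard deviation of the difference

# Hysteresis band as described above: distance between the
# (mu + sigma) and (mu - sigma) lines, i.e. 2 * sigma.
hysteresis = (mu + sigma) - (mu - sigma)
```

Note that a constant sensor offset shows up here as a non-zero mean, shifting the whole Gaussian curve sideways without changing its width.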
2- In this method I applied a regression-line technique to the upper and lower values of the difference signal.
I took the difference of my signals (Ref - measured signal, after applying my limitation on the signal).
I fit regression lines above and below the difference signal, i.e. on the upper and lower values separately; the distance between the upper and lower regression lines is what I call the hysteresis (loss). Please see figures 3 and 4 for a clearer view.
The problem with this technique is that I define the cut-off values for the upper and lower regression lines by hand after looking at the data, e.g. up = 0.08, low = -0.08.
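To illustrate this step, here is a NumPy sketch on a synthetic difference signal. Instead of my fixed cut-offs it splits the data at percentiles, which is one possible way to avoid choosing the values by hand (the percentile levels are themselves just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 5000)                 # hypothetical time axis
diff = 0.02 + rng.normal(scale=0.4, size=t.size)  # stand-in difference signal

# Split at percentiles instead of hand-picked cut-offs (e.g. +/-0.08),
# so the thresholds adapt to the data.
upper_mask = diff >= np.percentile(diff, 90)
lower_mask = diff <= np.percentile(diff, 10)

# Fit one regression line through the upper values and one through the lower.
up_slope, up_int = np.polyfit(t[upper_mask], diff[upper_mask], 1)
lo_slope, lo_int = np.polyfit(t[lower_mask], diff[lower_mask], 1)

# Vertical distance between the two lines, evaluated at the mid-point,
# taken as the hysteresis estimate.
mid = t[t.size // 2]
hysteresis = (up_slope * mid + up_int) - (lo_slope * mid + lo_int)
```

A drift in the difference signal over time would show up here as non-zero slopes of the two lines.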
3- I have also applied the RMSE technique, but I have a few questions that confuse me.
Since I am dealing with static data, I treated my reference signal as the actual values and my sensor signal as the measured values, and applied the RMSE formula:
square_error = sig_diff_lim.^2;    % element-wise squared error
mse = mean(square_error);          % mean squared error
msedivided = mse/numel(drv(2));    % note: numel(drv(2)) is 1, so this divides by 1
rmse = sqrt(mse);                  % root mean squared error
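The same computation as a NumPy sketch on synthetic stand-in signals (names illustrative); it also shows that RMSE comes out in the signal's own units (degrees for angle data), while MSE is in squared units:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reference and measured signals.
ref = np.sin(np.linspace(0, 4 * np.pi, 2000))
meas = ref + rng.normal(scale=0.3, size=ref.size)

sig_diff = ref - meas             # difference signal
mse = np.mean(sig_diff ** 2)      # mean squared error (squared units)
rmse = np.sqrt(mse)               # root mean squared error (signal units)
```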
4- I have also applied a correlation function. I do not think it works well with my data, but it gave me some good insight into it.
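For illustration, one concrete thing a correlation function can give me is an estimate of the delay between the measured and reference signals, from the lag of the cross-correlation peak. A NumPy sketch on synthetic white-noise stand-in data (a real steering signal is much smoother, but the method is the same; all names and the lag value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, delay = 5000, 25              # hypothetical length and lag in samples

base = rng.normal(size=n + delay)
ref = base[delay:]               # stand-in for the reference signal
meas = base[:n]                  # same signal, delayed by `delay` samples

# Full cross-correlation; the lag of the peak estimates the delay
# of the measured signal relative to the reference.
xcorr = np.correlate(meas - meas.mean(), ref - ref.mean(), mode="full")
lags = np.arange(-n + 1, n)
est_delay = lags[np.argmax(xcorr)]
```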
Some questions that also need clarification, if possible:
1- What is the difference between RMSE and MSE? I know the basics, but I would like to know in which applications RMSE or MSE is used, and which one fits my case and my data.
2- I have now described the Gaussian, regression, and RMSE techniques. One request: can someone explain which technique is best to use, given the issues with the Gaussian and regression techniques I mentioned above? A short explanation of the pros and cons of each would be enough for me.
I hope I have given you a clear and detailed picture of what I want to do and what I expect, and I hope to get some positive and valuable responses.
Sorry for the long story; I am not an expert and still learning, so any useful responses will improve my knowledge and help me accomplish my task/goal.
Thanks in advance for your cooperation.