I have real-life time-series data in which distortions occasionally occur and need to be detected. The detection itself is simple: I compute the difference between the signal and predicted values, and outliers in that residual clearly identify the distortions. There is some noise, of course, but three- or six-sigma thresholding does the job. However, when no distortions are present, the noise level is so low that the estimated sigma shrinks, the threshold becomes very tight, and I catch false positives because ordinary noise samples end up above it.

So the problem is, in a way, making the thresholding more robust. Any ideas?
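One common way to robustify this kind of residual thresholding is to estimate the noise scale with the median absolute deviation (MAD) instead of the sample standard deviation, and to floor the threshold at some minimum scale so that near-noiseless stretches cannot produce a vanishingly small cutoff. A minimal sketch (the function name, the `k` multiplier, and the `min_scale` floor are my assumptions, not from the question):

```python
import numpy as np

def detect_distortions(residuals, k=6.0, min_scale=0.05):
    """Flag outliers in prediction residuals using a robust scale estimate.

    The noise scale is estimated with the MAD, which is largely insensitive
    to the distortions themselves, and floored at `min_scale` (an assumed,
    problem-dependent constant) so that very quiet segments do not yield a
    tiny threshold and hence false positives.
    """
    residuals = np.asarray(residuals, dtype=float)
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med))
    sigma = 1.4826 * mad  # MAD-to-sigma factor for Gaussian noise
    threshold = k * max(sigma, min_scale)
    return np.abs(residuals - med) > threshold
```

The `min_scale` floor encodes the smallest distortion amplitude worth reporting; anything below it is treated as noise regardless of how quiet the segment is.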
