For measured data x = [x(1) x(2) … x(N)] affected by noise, smoothing is a common choice. One option is uniform smoothing: replace each value x(k) by the mean of the vector s = [x(k-m) … x(k) … x(k+m)], which is valid for m+1 ≤ k ≤ N-m (with m ≥ 1). For indices k closer to either end of the data, the vector s is truncated accordingly. Another option is non-uniform smoothing, i.e. a weighted mean of the vector s, giving larger weights to the values near x(k) and smaller weights to the values x(i) near the extremities of s. Does anybody know of a theory concerning the best choice of smoothing window?
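
To make the two variants concrete, here is a minimal sketch (assuming NumPy is available; the function name `smooth` and the triangular weights are only illustrative choices, not a claim about the best window):

```python
import numpy as np

def smooth(x, m, weights=None):
    """Smooth x with a window of half-width m.

    weights: optional array of length 2*m+1; if None, a uniform mean is used.
    Near the ends of the data the window is truncated, as described above.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    w = np.ones(2 * m + 1) if weights is None else np.asarray(weights, dtype=float)
    y = np.empty(N)
    for k in range(N):
        lo = max(0, k - m)
        hi = min(N, k + m + 1)
        # part of the weight vector that matches the (possibly truncated) window
        wk = w[lo - (k - m): hi - (k - m)]
        y[k] = np.dot(wk, x[lo:hi]) / wk.sum()
    return y

# Example: noisy samples of a sine wave
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
x = np.sin(t) + 0.3 * rng.standard_normal(t.size)

m = 5
y_uniform = smooth(x, m)  # equal weights over the window
# triangular weights: largest at the centre x(k), smallest at the extremities of s
y_weighted = smooth(x, m, weights=1 + m - np.abs(np.arange(-m, m + 1)))
```

The triangular window is just one way of weighting the central values more heavily; the question of which window (and which half-width m) is "best" for a given noise level is exactly what is being asked.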
