Suppose that in Kalman filtering the measurements are low-pass filtered before being used in the Kalman filter. What happens to the measurement covariance matrix? Does low-pass filtering influence the Kalman filtering procedure?
When you use low-pass filtered measurements, their noise variances get lower. But do not be too optimistic: such filtering inevitably introduces time lags whenever a dynamic quantity is measured. Those lags are equivalent to additional measurement errors that must be accounted for by increasing the corresponding noise variances in the Kalman filter. Thus you lose essentially all of the potential benefit of pre-filtering. The Kalman filter is itself a good filter for measurement denoising, provided that a correct noise covariance matrix is specified.
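To see the trade-off numerically, here is a minimal sketch; the first-order filter, its coefficient alpha, and the ramp signal are all assumptions chosen for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n, dt = 10_000, 0.01
    t = np.arange(n) * dt
    truth = 2.0 * t                          # a ramp: the measured quantity is dynamic
    y = truth + rng.normal(0.0, 0.5, n)      # raw measurements, noise std 0.5

    alpha = 0.05                             # first-order low-pass coefficient (arbitrary)
    yf = np.empty(n)
    yf[0] = y[0]
    for k in range(1, n):
        yf[k] = (1.0 - alpha) * yf[k - 1] + alpha * y[k]

    print("raw      error variance:", np.var(y - truth))    # ~0.25, pure noise
    print("filtered error variance:", np.var(yf - truth))   # much smaller, but...
    print("filtered steady lag:    ", np.mean((yf - truth)[n // 2:]))  # ...a bias appears

On a static signal the lag term disappears and pre-filtering looks like a pure win; it is the dynamics that turn the lag into an additional error.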
To get an idea, you can analyze what happens with a fixed-gain observer $\dot{\hat x}=A\hat x+Bu+L(y-C\hat x)$ of the linear time-invariant system $\dot x=Ax+Bu,\ y=Cx$. If you feed the observer a filtered version $y_f$ of the measured signal $y$, you can easily show that the estimation error will converge not to zero but to a term depending on filtered derivatives of $y$.
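A minimal sketch of that calculation, assuming the pre-filter is first order with cutoff $\omega_c$: with the estimation error $e := x - \hat x$,
$$\dot{\hat x} = A\hat x + Bu + L(y_f - C\hat x) \;\Rightarrow\; \dot e = (A - LC)\,e - L\,(y_f - y).$$
For $\dot y_f = \omega_c (y - y_f)$ we have $y_f - y = -\dot y_f/\omega_c$, so
$$\dot e = (A - LC)\,e + \frac{1}{\omega_c}\,L\,\dot y_f,$$
and as long as $y$ keeps changing this forcing term never vanishes: $e$ converges to the steady response to $L\dot y_f/\omega_c$, not to zero.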
The two previous answers are good. Let me just add one strange example.
If the measurements are expected to show a fairly large amount of variation, but the sensor pre-conditions them (for example, low-pass filters them), the system using those measurements may actually end up discarding them as invalid.
Here's a potential problem: a sensor already filters its own data when you, the system designer, didn't expect it to. One possible fault mode of certain sensors is to freeze at a previous measurement value and continue to transmit that value. If the system designer adds a criterion to discard readings that are frozen at a single value, then, oops, the pre-filtered sensor's readings may now consistently trigger that alarm.
You might think that filtering data twice is rather innocuous, but not necessarily.
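As a hypothetical illustration (the window length, quantization step, and filter coefficient are all made up), a naive stuck-sensor check can start to false-alarm once the sensor silently smooths and quantizes its own output:

    import numpy as np

    def looks_frozen(readings, window=20):
        # Naive fault check: flag the sensor if the last `window` readings
        # are all identical -- the classic stuck-at fault signature.
        tail = readings[-window:]
        return len(tail) == window and bool(np.all(tail == tail[0]))

    rng = np.random.default_rng(1)
    raw = 25.0 + rng.normal(0.0, 0.2, 2000)        # noisy but perfectly healthy sensor

    # Suppose the sensor low-pass filters heavily, then quantizes to its
    # reporting resolution (0.1 units here) before transmitting.
    alpha = 0.005
    smoothed = np.empty_like(raw)
    smoothed[0] = raw[0]
    for k in range(1, raw.size):
        smoothed[k] = (1.0 - alpha) * smoothed[k - 1] + alpha * raw[k]
    reported = np.round(smoothed, 1)

    print(looks_frozen(raw))        # False: noise keeps raw values distinct
    print(looks_frozen(reported))   # very likely True: smoothing + quantization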
I think the previous answers clear up the doubt. However, I may add: the effectiveness of the Kalman filter will depend on the quality of the signal. By using a low-pass filter you are modifying the signal as well; if the filter parameters are such that it removes the noise alone, that is good, but to implement the Kalman filter you will still need the noise covariances, so I am not sure what can be achieved by low-pass filtering. On the other hand, removing the mean from the data may help convergence and avoid bias in the estimates. I write this from experience working on real seismic data as part of my PhD work (1978-81) at IIT Delhi.
Let me add that if you can live with a delayed state estimate, then, since delaying all the inputs to a system delays its response by the same amount, filtering the measurements (with a corresponding delay of the input u, to keep things coherent) lets the Kalman filter work with a smaller measurement covariance. This can be good.
This can be especially good if your system model does not account for the physics behind part of the measurement noise. In that case, filtering the noise has the added benefit of allowing a smaller process covariance. This can be interesting as well.
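A minimal sketch of the alignment trick, assuming a scalar plant, a 5-tap moving-average pre-filter, and its (Nf-1)/2-sample group delay; the Kalman filter then estimates the delayed state x[k-d]:

    import numpy as np
    from collections import deque

    rng = np.random.default_rng(2)
    n, q, sigma_v = 500, 1e-4, 0.5            # steps, process noise variance, raw noise std

    # Hypothetical scalar plant: x[k+1] = x[k] + u[k] + w[k],  y[k] = x[k] + v[k]
    u = 0.01 * np.sin(0.05 * np.arange(n))
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + u[k] + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, sigma_v, n)

    Nf = 5                                    # moving-average pre-filter length
    d = (Nf - 1) // 2                         # its group delay in samples
    ybuf = deque(maxlen=Nf)                   # last Nf raw measurements
    ubuf = deque([0.0] * (d + 2), maxlen=d + 2)   # delay line: ubuf[0] == u[k-d-1]

    xh, P = 0.0, 1.0
    R = sigma_v**2 / Nf    # averaging Nf samples cuts the variance Nf-fold (optimistic:
                           # overlapping windows leave some correlation between updates)
    for k in range(n):
        ybuf.append(y[k]); ubuf.append(u[k])
        if len(ybuf) < Nf:
            continue
        yf = np.mean(ybuf)                    # filtered measurement ~ observes x[k-d]
        ud = ubuf[0]                          # the input that drove x[k-d-1] -> x[k-d]
        xh, P = xh + ud, P + q                # predict the *delayed* state x[k-d]
        K = P / (P + R)                       # update with the smaller covariance R
        xh, P = xh + K * (yf - xh), (1.0 - K) * P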
There is an extension to the Kalman filter equations that correctly handles time-correlated noise (e.g. https://www.ion.org/publications/abstract.cfm?articleID=5568 ). If you precisely model the correlations of the pre-filtered input data by a covariance function or matrix, these extended equations yield predictions as precise as those for uncorrelated white noise. White noise passed through a first-order low-pass filter has an autocorrelation of the form exp(-|t|/τ).
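As an illustration (the first-order Gauss-Markov model and the discrete system matrices $F$, $H$ are assumptions), such exponentially correlated noise can alternatively be handled by state augmentation: model the measurement noise as $v_{k+1} = e^{-\Delta t/\tau} v_k + \eta_k$, whose autocorrelation is $\sigma^2 e^{-|t|/\tau}$, and append it to the state:
$$\begin{bmatrix} x_{k+1}\\ v_{k+1} \end{bmatrix} = \begin{bmatrix} F & 0\\ 0 & e^{-\Delta t/\tau} \end{bmatrix} \begin{bmatrix} x_k\\ v_k \end{bmatrix} + \begin{bmatrix} w_k\\ \eta_k \end{bmatrix}, \qquad y_k = \begin{bmatrix} H & 1 \end{bmatrix} \begin{bmatrix} x_k\\ v_k \end{bmatrix}.$$
The standard Kalman equations then apply, with the caveat that the augmented measurement is noise-free, which makes the measurement covariance singular and calls for some numerical care.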
I have an additional question. Let's consider a different situation: the Kalman filter runs with a sample period of about 0.1 s, while measurements arrive with a period of about 0.01 s, so for each Kalman filtering step we have 10 measurements. Is it a good idea to feed the Kalman filter the average of those 10 measurements? Or, for example, a weighted average in which newer measurements get more weight?
Aleksey Mazur, this has more to do with the sampling theorem than with Kalman filters. The best approach is a decimation filter with a low-pass cutoff frequency below 5 Hz, the Nyquist frequency for the filter's 10 Hz update rate. Averaging goes part of the way, but you probably want a steeper cutoff than that.
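For example, scipy.signal.decimate applies exactly such an anti-aliasing low-pass filter before downsampling; a minimal sketch with a made-up 100 Hz signal:

    import numpy as np
    from scipy.signal import decimate

    fs = 100.0                                # measurement rate [Hz]
    t = np.arange(0.0, 10.0, 1.0 / fs)
    rng = np.random.default_rng(3)
    y = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.normal(size=t.size)

    # Anti-aliasing low-pass filter + keep every 10th sample -> 10 Hz stream.
    # The default zero_phase=True filters forward and backward (offline only);
    # use zero_phase=False for a causal, real-time-style filter (adds delay).
    y10 = decimate(y, 10)
    print(y.size, "->", y10.size)             # 1000 -> 100 samples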