Least squares would be the simplest and most efficient option, but it doesn't include a prediction-correction step the way the Kalman filter does. There are different flavors of Kalman filter, including adaptive Kalman filters, but all are more complex than the basic Kalman filter. What is your application?
I would say that a filter that is both simpler and performs better is not easy to find. What you do have are more complex filters with improved performance: the extended Kalman filter, the unscented Kalman filter, nonlinear filters, (nonlinear) model predictive estimators, and so on. Usually, the nonlinear filters have to be designed for a specific application/platform/model, so they are not as simple as the Kalman filter and its extended versions.
As a compromise, for a general problem, I would suggest (linear) model predictive estimators, as they are conceptually simple and allow the definition of constraints on the model state variables. They do, however, need increased computational power.
The application is predicting eye positions (from video-based eye-tracker systems) a short time into the future, in an attempt to compensate for the system latency.
The problem is this: we get a stream of eye positions at discrete but constant time intervals (times Ti, with deltaT constant). Once processed by the system, these positions are actually available only at times Ti + latency (so Ti lies in the past), and from them we need to predict the eye positions at times Ti + latency + VariableT (the "near future"). Usually both the latency and VariableT are longer than deltaT, so we do not predict Ti+1 but deeper into the future (~Ti+3). Furthermore, neither the latency nor VariableT is a multiple of deltaT, so we do not even predict some Ti+n but a point between two observations (~Ti+2.7).
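To make the timing concrete, here is a small numeric sketch in Python. The specific values are illustrative assumptions only (e.g. a 60 Hz tracker), chosen so that the horizon comes out near the ~2.7 samples mentioned above:

```python
# Illustrative values only -- actual numbers depend on the tracker.
delta_t = 1.0 / 60.0     # sampling interval (~16.7 ms, assuming a 60 Hz tracker)
latency = 0.030          # sample taken at Ti becomes available at Ti + latency
variable_t = 0.015       # additional look-ahead required by the application

# Total prediction horizon, expressed in samples:
horizon = (latency + variable_t) / delta_t
print(horizon)           # ~2.7 -> predict "between" observations Ti+2 and Ti+3
```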
Of course, as you may already suppose, I do not have unlimited computational power, and I would like to keep the algorithm as simple as possible, so as not to extend the system latency even further and thereby make the prediction less accurate.
Which approaches and strategies would you recommend?
One idea (just that) is to use 'standard' filters, I mean IIR or FIR; they are simpler to implement. And these filters are fine for prediction (time series, scalar signals, etc.); I used them for traffic prediction and compared them against a Kalman filter too.
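Just to sketch how simple this can get (the two-tap choice and the fractional step count are illustrative assumptions, not tuned values), linear extrapolation is itself a two-tap FIR predictor:

```python
def fir_linear_extrapolate(x_prev, x_curr, steps=2.7):
    """Predict 'steps' samples ahead by linear extrapolation.
    Equivalent to an FIR filter with taps [1 + steps, -steps]."""
    return (1.0 + steps) * x_curr - steps * x_prev
```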
You could train a neural network, perhaps a deep neural network (DNN). They are quite successful nowadays in image and speech recognition. You need image recognition, so it may be a useful choice... They take quite a long time to train, but not much time to apply.
See for instance http://www.idsia.ch/~juergen/vision.html or the page of Geoffrey Hinton, http://www.cs.utoronto.ca/~hinton/, where there are many papers from his group on DNNs.
Usually, a discrete Kalman filter assumes a constant sampling rate for both the prediction and the update step. However, it is straightforward to handle multi-rate measurements and non-constant sampling periods: you only have to change the update equations and matrices to reflect the elapsed time, by setting deltaT = T(i) - T(i-1).
You can also predict farther into the future by changing the value of deltaT for the prediction step, bearing in mind that this prediction is based on the current measurements, and that, as you look further into the future, your uncertainty (and errors) will grow.
I think that, given your goal (estimating the state at T(i+3)), this will not be problematic.
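To make that concrete, here is a minimal sketch of a 1D constant-velocity Kalman filter with a per-sample deltaT and a separate, possibly fractional, prediction horizon. The model and the noise values are illustrative assumptions, not tuned for eye tracking:

```python
import numpy as np

def transition(dt):
    """State transition for a 1D constant-velocity model: x = [pos, vel]."""
    return np.array([[1.0, dt],
                     [0.0, 1.0]])

def process_noise(dt, q):
    """Piecewise-constant white-acceleration process noise."""
    return q * np.array([[dt**3 / 3, dt**2 / 2],
                         [dt**2 / 2, dt]])

H = np.array([[1.0, 0.0]])   # we only measure position
R = np.array([[0.5]])        # measurement noise variance (assumed)
q = 0.1                      # process noise intensity (assumed)

def step(x, P, z, dt):
    """One predict/update cycle; dt = T(i) - T(i-1) may vary per sample."""
    F = transition(dt)
    x = F @ x                                 # predict state
    P = F @ P @ F.T + process_noise(dt, q)    # predict covariance
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (np.atleast_1d(z) - H @ x)    # correct with measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P

def predict_ahead(x, horizon):
    """Extrapolate the estimate 'horizon' seconds ahead; the horizon
    need not be a multiple of the sampling interval (e.g. 2.7*deltaT)."""
    return transition(horizon) @ x

# Usage sketch:
# x, P = np.array([z0, 0.0]), np.eye(2)
# for z, dt in zip(measurements, intervals):
#     x, P = step(x, P, z, dt)
#     predicted_position = predict_ahead(x, 0.045)[0]  # latency + VariableT
```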
If I understand your problem correctly, you are trying to predict the future eye position (the direction vector the eye is "looking" along, or a 2D representation?) based on a sequence of eye-position measurements. The prediction is made by propagating the state estimate, built from the latest and previous measurements, forward in time. This is a classic tracking/prediction problem, except that the dynamics of the motion can be varied and unpredictable. A standard Kalman filter is not very useful without a good model of the system dynamics, which seems to be the situation here. At a minimum, an adaptive Kalman filter would be needed to reset the filter based on the error between the measurement and the estimated position at the time of the measurement. Alternatively, or in addition, extra "state noise" (process noise) could be injected into the state propagation to shorten the filter's memory and make it more responsive to dynamic changes (at the expense of increased prediction error).
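A minimal sketch of that reset idea, on a scalar random-walk model so the mechanics are easy to see (the gate threshold and noise levels are illustrative assumptions):

```python
def adaptive_scalar_kf(zs, r=0.5, q=0.01, q_boost=1.0, gate=3.0):
    """zs: measurements; r: measurement noise variance; q: nominal
    process noise. Returns the filtered estimates."""
    x, p = zs[0], 1.0                  # initialize from the first sample
    out = [x]
    for z in zs[1:]:
        p += q                         # predict (random walk: state holds)
        nu = z - x                     # innovation
        s = p + r                      # innovation variance
        if nu * nu > gate**2 * s:      # improbably large innovation:
            p += q_boost               # assume the dynamics changed (e.g. a
            s = p + r                  # saccade) and inflate the uncertainty
                                       # so the filter forgets its history
        k = p / s                      # Kalman gain
        x += k * nu                    # correct
        p *= (1.0 - k)
        out.append(x)
    return out
```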
I would suggest looking at alpha-beta filters. If you are not familiar with them, Wikipedia has a reasonable introduction. They are very computationally efficient and can give performance similar to a Kalman filter's. As with the Kalman filter, an adaptive version would probably be needed to reset things if the eye suddenly moved in a different direction.
You might also want to consider the coordinate system in which the filter is modeled and the order of the filter. XY, radial, and polar are possible coordinate choices. I would guess a 2nd-order filter (estimating position and velocity) would be sufficient.
A trick that can be used to configure alpha-beta filters is to look at the time history of the gains (the correction multipliers applied to each observation to correct the estimated state) of a Kalman filter (or another filter that appears to work well) and adjust alpha and beta to give a similar time history. In a simple dynamic model such as this, you will find that, for a given initial state and measurement-error covariance, the time history of the gains (for a given sample interval) is always the same. This leads to the obvious conclusion that a lookup table could be used to provide the gains (see the sketch below). It doesn't get any more computationally efficient than that.
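For illustration, a minimal sketch of an alpha-beta tracker that takes its gains from a small pre-computed table and extrapolates to the required horizon. The table entries are placeholders, not recommended values; in practice they would be copied from the gain history of a well-tuned Kalman filter as described above:

```python
# Placeholder gain schedules: large gains while the filter settles,
# then the steady-state values (would be taken from a Kalman run).
ALPHA_TABLE = [0.9, 0.7, 0.6, 0.5, 0.45, 0.4]
BETA_TABLE  = [0.5, 0.3, 0.2, 0.1, 0.07, 0.05]

def alpha_beta_predict(zs, dt, horizon):
    """zs: measured positions; dt: sample interval (s); horizon: how far
    ahead (s) to extrapolate. Returns predicted positions at t_i + horizon."""
    x, v = zs[0], 0.0                      # position and velocity estimates
    preds = []
    for i, z in enumerate(zs[1:]):
        j = min(i, len(ALPHA_TABLE) - 1)   # clamp to steady-state gains
        alpha, beta = ALPHA_TABLE[j], BETA_TABLE[j]
        x_pred = x + v * dt                # predict one sample ahead
        r = z - x_pred                     # residual (innovation)
        x = x_pred + alpha * r             # correct position
        v = v + (beta / dt) * r            # correct velocity
        preds.append(x + v * horizon)      # horizon need not be n*dt
    return preds
```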
I got further with the implementation, but I stuck with the Kalman filter (although I am evaluating other prediction filters in parallel).
I am using the discrete Kalman filter (though a hybrid version could be advantageous, given the nature of my problem).
Now I have two specific questions for the expert community:
In my implementation, if I inspect the values of the Kalman gain (used to correct the estimates based on the difference between the last estimate and the last measurement), I observe an interesting alternating behaviour: the gain alternates between >0.9 for the odd predictions and >0.4 for the even predictions. Both values settle, but they keep alternating, and the roughly 2:1 gap remains even after 25,000 samples! Is this normal behaviour for the Kalman gain?
Remember that I am using the Kalman filter to PREDICT FUTURE POSITIONS and not just to clean up the signal. Another finding is that, for a general signal, my predictions are better at the beginning and get worse, to the point of being completely wrong, towards the end. I would expect exactly the opposite behaviour from the filter. Can anyone help me with this? (It could actually be a programming error, but I have no clue where the bug is.)
"...my predictions are better at the beginning and getting worse to completely false at the end." That is correct.
The problem is that if you do not know the process noise exactly, the Kalman filter may diverge strongly at the end. If so, a more accurate prediction is provided by the p-step unbiased FIR (UFIR) predictor, which ignores the noise statistics. This predictor is iterative and has the Kalman structure, but with a deterministic bias-correction gain. It is a new solution; you can find it on the Internet.
If you want to simplify the predictor even more and your state-space model is polynomial, then you can use the batch UFIR predictor, which is just a convolution.
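To illustrate the convolution idea (this is a generic sketch, not the exact algorithm from the papers below; the window length N, the polynomial degree, and the horizon p are illustrative assumptions): fitting a low-degree polynomial to the last N samples by least squares and evaluating it p steps past the newest sample is linear in the data, so the whole predictor collapses to a fixed set of FIR weights:

```python
import numpy as np

def fir_prediction_weights(N=10, degree=2, p=2.7):
    """Convolution weights h such that h @ window predicts p steps past
    the newest sample in the window (fractional p is fine)."""
    t = np.arange(N)                      # sample times within the window
    V = np.vander(t, degree + 1)          # polynomial basis (highest power first)
    # Least-squares fit: coeffs = pinv(V) @ window. The prediction
    # evaluates the polynomial at t = N - 1 + p, so the weights are:
    t_pred = np.array([(N - 1 + p) ** k for k in range(degree, -1, -1)])
    return t_pred @ np.linalg.pinv(V)

h = fir_prediction_weights()
# To predict: np.dot(h, last_10_positions)   # oldest sample first
```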
In our experiments and applications, both of these solutions always gave better results than the Kalman filter under unknown noise statistics and uncertainties.
Y. S. Shmaliy and L. Arceo-Miquel, Efficient predictive estimator for holdover in GPS-based clock synchronization, IEEE Trans. on Ultrason., Ferroelec., and Freq. Control, vol. 55, no. 10, pp. 2131-2139, Oct. 2008.
A complete theory of the batch predictive filter is given in:
Y. S. Shmaliy, An unbiased p-step predictive FIR filter for a class of noise-free discrete-time models with independently observed states, Signal, Image and Video Processing, vol. 3, no. 2, pp. 127-135, June 2009.
The Kalman-like form is given in:
Y. S. Shmaliy, An iterative Kalman-like algorithm ignoring noise and initial conditions, IEEE Trans. Signal Processing, vol. 59, no. 6, pp. 2465-2473, Jun. 2011.
In the last paper (the Kalman-like case), set p = 0 in (22)-(26) to find the filtering estimate. Then increase p in (32) and go to the p-step prediction. A comparison with the Kalman predictor is given in Fig. 4. Ask me if you need any assistance.