I think your question makes little sense without specifying the problem in more detail. The error of all these methods depends on the derivatives of the ''true'' function (if any) that generated the data -- and of course I assume you are speaking about interpolation of a given data set, and in addition that the data can be treated as ''exact'', since you want to interpolate. Piecewise cubic Hermite interpolation has a smaller error constant than the standard (Hermite) cubic spline, but on the other hand it needs the first derivatives at all points; if you estimate those derivatives by local cubic polynomial interpolation, the error constant grows beyond that of the spline. In both cases the error involves the fourth derivative of the unknown function and the fourth power of the grid size, whereas the error of piecewise linear interpolation involves the second derivative and the second power of the grid size. Hence which method is better also depends on how large those derivatives are and on the given grid size. And, of course, high-degree polynomial interpolation of a large data set makes no sense. You may play a little with these methods here:
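The error orders mentioned above (fourth power of the grid size for the cubic spline, second power for piecewise linear) can be checked numerically. The sketch below, assuming SciPy is available and using sin as a stand-in ''true'' function with bounded derivatives, measures the maximum interpolation error on successively finer grids; halving the grid size should roughly quarter the linear error and divide the cubic-spline error by sixteen.

```python
import numpy as np
from scipy.interpolate import CubicSpline

f = np.sin  # stand-in "true" function with well-behaved derivatives

def max_error(n):
    """Max abs. error of linear and cubic-spline interpolation on n nodes."""
    x = np.linspace(0.0, np.pi, n)        # interpolation grid, size h ~ pi/(n-1)
    xf = np.linspace(0.0, np.pi, 2001)    # fine grid for measuring the error
    e_lin = np.max(np.abs(np.interp(xf, x, f(x)) - f(xf)))
    e_cub = np.max(np.abs(CubicSpline(x, f(x))(xf) - f(xf)))
    return e_lin, e_cub

for n in (11, 21, 41):
    e_lin, e_cub = max_error(n)
    print(f"n={n:3d}  linear error={e_lin:.2e}  cubic spline error={e_cub:.2e}")
```

Doubling n (i.e. halving the grid size) should show the linear error shrinking by about a factor of 4 and the spline error by about a factor of 16, matching the h^2 and h^4 behaviour described above.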
Can anyone kindly explain to me why my reply above was down-voted? I am asking not because of the down-voting itself, but rather out of curiosity. As I understand it, cubic spline interpolation largely encompasses linear and quadratic as well as higher-degree polynomial interpolation. It can fit most data sets more smoothly thanks to its greater adaptability. So, what's wrong?
First, one might ask: how well must one interpolate to solve the problem at hand?
If there is no error and an underlying physical process, use that underlying physical process to set up a model for the data to be interpolated. Fit the model to the data and interpolate using the model.
If there is error in the data, what do you know about the error?
Are there ways to minimize the error in interpolation? What are they? Will any provide the accuracy needed for the problem at hand?
The questions I pose here are meant for discussion, for the reader or others to answer.
Babak, you may be interested in "Interpolating for the location of remote sensor data" as an example of using an underlying physical process to set up a model for the data to be interpolated.
As far as I know, the best techniques are PCHIP (piecewise cubic Hermite interpolating polynomial), Akima (and its modified variant, makima), and B-splines, but in the end it will depend on your case.
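All three techniques are available in SciPy, which makes it easy to compare them on your own data. A minimal sketch, assuming step-like data where overshoot matters: PCHIP preserves monotonicity and stays within the data range, while an interpolating cubic B-spline may overshoot the jump.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, Akima1DInterpolator, make_interp_spline

# Step-like sample data: a case where the choice of method is visible.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

xf = np.linspace(0.0, 5.0, 501)
pchip = PchipInterpolator(x, y)(xf)        # shape-preserving: no overshoot
akima = Akima1DInterpolator(x, y)(xf)      # reduced wiggle vs. a plain cubic spline
bspl = make_interp_spline(x, y, k=3)(xf)   # interpolating cubic B-spline

print("max of PCHIP:    ", pchip.max())    # stays within [0, 1]
print("max of B-spline: ", bspl.max())     # typically overshoots above 1 here
```

Which behaviour is "best" depends on the case, as said above: the B-spline is smoother (continuous second derivative), while PCHIP trades some smoothness for shape preservation.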
It really depends on your data and its properties.
For example, if your data follows y = sin(x), then linear interpolation would not be very accurate (this also depends on your grid, i.e. the spacing between consecutive data points).
Whereas using polynomial interpolation for data that follows y = mx + c is again a bad move; linear interpolation is the better fit there.
So, I would suggest you first understand the shape/trend of your data set (which in most real-life scenarios is non-linear), and choose the best-fitting method based on that.