The diff function calculates differences between successive data points, so for n data points x and y, diff(y)./diff(x) returns n-1 values. How can I get n values of diff(y)./diff(x)?
Actually, diff() doesn't necessarily do this. "Normally", yes, diff() gives you the difference between successive elements of a vector; however, if your variables are symbolic it will instead differentiate. That said, although I am not sure I am understanding your question correctly, you may want to define a step size dx (a small spacing between samples) so that dy = diff(data)/dx gives you arbitrarily accurate approximations of the derivative. The entire point of the diff() function is to "lose data": whether you feed in a vector or a matrix, you get an output of smaller dimensions.
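To make the size reduction concrete, here is a small sketch using NumPy's diff, which behaves like Matlab's diff for numeric vectors (the sample function y = x² is just an illustration):

```python
# NumPy analogue of Matlab's diff: successive differences shorten
# the vector by one element, so diff(y)/diff(x) has n-1 values.
import numpy as np

x = np.linspace(0.0, 1.0, 1000)   # n = 1000 sample points
y = x**2                          # illustrative data

dydx = np.diff(y) / np.diff(x)    # finite differences: only 999 values
print(len(x), len(dydx))          # 1000 999
```

This is exactly the "lost element" the question is about: the differences naturally live between the sample points, not on them.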
Thank you very much, Andrew Messing, for your reply. I do have small steps between successive data points (practically delta x = dx, delta y = dy), so that is not my problem. My problem is simply to use the diff function in a way that does not lose a data point: for example, if I have 1000 x values and 1000 y values, how can I get 1000 values of dy/dx? By using a certain combination of Newton forward, backward, or central differences? Can I do it?
For the same problem, I prefer to use the central difference in the interior of the domain (with second-order accuracy) and the backward/forward difference at the end/start point. It is straightforward and simple, with low computational cost. If your mesh is non-uniform but structured, the formulation has been derived and is ready to be used in Dehghan et al., ..., Renewable Energy (2015).
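The scheme described above can be sketched as follows; this is a NumPy illustration (in Matlab the built-in gradient function, and in NumPy np.gradient, implement essentially the same idea), with sin(x) as an assumed test function:

```python
# Second-order central differences in the interior, one-sided
# differences at the two endpoints, so the output has the same
# length n as the input.
import numpy as np

def derivative_same_length(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d = np.empty_like(y)
    # forward difference at the first point
    d[0] = (y[1] - y[0]) / (x[1] - x[0])
    # central differences in the interior (second-order on a uniform
    # mesh; on a non-uniform mesh use the corrected weights instead)
    d[1:-1] = (y[2:] - y[:-2]) / (x[2:] - x[:-2])
    # backward difference at the last point
    d[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])
    return d

x = np.linspace(0.0, 2.0 * np.pi, 1000)
dydx = derivative_same_length(x, np.sin(x))  # 1000 values, approx. cos(x)
```

This directly answers the "1000 in, 1000 out" question: the one-sided formulas at the boundaries supply the two values that diff() drops.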
Dear @Abdulghefar, since diff is built into Matlab, it is not possible to get n values of diff(y)./diff(x) from n data points: each difference is an approximation to the limit, associated with the midpoint of two consecutive values. However, if you fit a cubic spline between every two consecutive values, you obtain a piecewise polynomial of degree 3 that can be evaluated over the whole interval, and you then get an approximation to the derivative at all n points.
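The spline suggestion looks like this in code; scipy's CubicSpline is used here as a stand-in for Matlab's spline/ppval workflow, and sin(x) is an assumed test function:

```python
# Fit a cubic spline through the n samples, then evaluate the
# spline's derivative at all n nodes.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2.0 * np.pi, 1000)
y = np.sin(x)

spline = CubicSpline(x, y)
dydx = spline(x, 1)   # first derivative of the spline at all n nodes
```

Because the spline is a genuine function of x, its derivative is defined everywhere on the interval, so no data point is lost.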
I already use a cubic spline to make delta x small (so that delta x = dx), so this is not my problem. But I want to avoid curve fitting: I think interpolation represents physical data better than curve fitting (in cases where no theoretical relationship governs the physical data).
I believe your question is more philosophical. You do lose data during differentiation if the only data you have is a discrete set of values; what you can do is COMPLETE your differentiation result in the most reasonable way, predicting the right derivative, but the data is gone anyway.
Interpolation (with a small-degree polynomial) is fine as long as you are sure your data is exact (say, it is extracted from a formula; but in that case, why not use symbolic differentiation?). Otherwise, least-squares fitting has the advantage of smoothing out, or filtering, the errors. Avoid high-degree polynomial interpolation because of the large oscillations you get, especially if your data is equally spaced.
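The smoothing advantage of least squares can be seen in a short NumPy experiment; the cubic test function, noise level, and fitted degree are all illustrative assumptions:

```python
# A low-degree least-squares fit filters the noise before
# differentiating, whereas differencing noisy samples directly
# amplifies the noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y_clean = x**3 - x                               # assumed true signal
y_noisy = y_clean + rng.normal(0.0, 0.01, x.size)

coeffs = np.polyfit(x, y_noisy, deg=3)           # low-degree LS fit
dydx_fit = np.polyval(np.polyder(coeffs), x)     # derivative of the fit
dydx_raw = np.gradient(y_noisy, x)               # differencing raw samples

true_d = 3 * x**2 - 1                            # exact derivative
err_fit = np.max(np.abs(dydx_fit - true_d))
err_raw = np.max(np.abs(dydx_raw - true_d))
```

With noise of standard deviation 0.01 and spacing of about 0.01, direct differencing magnifies the noise by roughly 1/dx, while the fitted derivative stays close to the true one.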
By the way, if you are using Matlab, take a look at Chebfun (http://www.chebfun.org/): it does interpolation, fitting, differentiation, etc. of discrete data in a very smart and transparent way.
I agree with @Andrei about avoiding high-degree polynomial interpolation because of the large oscillations you get. But we have to be careful with the least-squares approximation: it minimizes the sum of squared residuals, which does not control the slopes that determine the derivatives, and this can lead to large errors in derivatives approximated by this method.
@Abedallah: you are right. You could actually fit the data using a different norm, e.g. a Sobolev norm, which involves the derivatives. In this way, minimizing the norm guarantees fewer oscillations. Or you could use regularization (it is almost equivalent), adding a term to the objective function that grows with oscillations. However, in practice the least-squares fit is cheap and works quite well.
Yes indeed, dear @Muhammad Mujtaba Shaikh, cubic spline interpolation will be useful; it then yields derivatives at all nodes.
I do agree with you, dear @Andrei: the least-squares approximation is cheap and convenient when it fits the problem; when derivatives matter, the Sobolev norm will take care of this.