Based on your experience, what is the best method to represent a smooth function f: Taylor, Lagrange, Newton, Hermite, spline, parametric polynomial or trigonometric approximation, or the least-squares method? Each method has its strengths, along with its own conditions and required information. Which method have you used in practice? Did it work? Does the method have drawbacks? Which method would you suggest that young researchers use, and why?
http://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem
http://en.wikipedia.org/wiki/Polynomial_interpolation
Queer question. If you have a continuous function f given in some way, i.e. for arbitrary x \in [a,b] you have a way to find f(x), why then use an approximation?
The scenario that most of the present answers are dealing with is that f is given by a list of x-values and a corresponding list of y-values ('function values'). In such a situation it would be nice if we could infer f(x) also for x not in the list of x-values, with a precision that matches the precision with which the given y-values are consistent with the underlying 'definition of f'. A universal and very practical approach derives from table interpolation, which was the method of choice in astronomy for hundreds of years:
Create your x,y-lists sufficiently dense that local quadratic interpolation is sufficient to find the value of f(x). This is the format in which (as far as I know) Mathematica represents all solutions of differential equations obtained by numerical integration. There is a simple, though not widely known, method which is computationally twice as expensive and which yields the interpolated function as a smooth one (i.e. derivatives of all orders exist if the defining quantities are taken in R and not as computer numbers).
Appended is a link to this method.
http://demonstrations.wolfram.com/SmoothlyInterpolatingASetOfData/
So many topics are involved here. It depends on the dynamics of the system and its control forces and on how much information you know beforehand (a priori): whether there are abrupt changes in the function and how many of them, or whether the function is continuous; whether the function has a periodic pattern or you are not interested in its harmonics. Each system has its own approach, but the rule of thumb is: the more information you include in the inference, the less uncertainty is left in your estimates. So I prefer those approaches which require intense homework. Splines, least squares, the Kalman filter (which can be used to infer parameters of the approximating equation), exponential smoothing (if you allow inertia in your approximating function) and so on are all special cases of a wider variational approach to probabilities and their densities, which in practice is based on maximum relative entropy / Bayes' theorem. This can explain why you have certain weight factors in linear programming solutions or (to jump between fields) in the Pontryagin maximum principle.

The biggest drawback of the variational approach (which I always use) is the huge amount of time spent on doing "homework" when analysing the system and deciding which information to use when updating the probabilities. So the variational approach does not work for "get rich quick" design scenarios. In my view, however, this drawback is really an advantage when a perfect solution is needed and the research duration is not too limited: be it 3D mouse development based on inertial sensors, a robotics application, an inverted pendulum system, mass airflow measurement or vision analysis.

I think young researchers should start in two directions simultaneously:
1. Least squares and cubic splines, because these are simple to explain and to derive (starting from the axiomatic definitions of derivative and integral). Step by step the student can then proceed to the method of Lagrange multipliers, see that exponential smoothing is a wider method than least squares, and continue to Euler-Lagrange and so on.
2. The second simultaneous path is Monte Carlo sampling and probability theory: play with different approaches until they grow into Bayes' theorem and start using particle filters and nested or similar sampling methods.

Very frequently, to develop O(n log n) approaches, the right online algorithm is the golden compromise and combination of Monte Carlo and the variational approach, which has the best set of tradeoffs. As one professor once said: "Give me an optimization problem and I will solve it in 3 (at most 10) iterations." He, of course, did not mention how long the scientific exploration takes.
Thank you @Renaldas for the feedback. So you think that students should first learn least squares and splines before learning the ideas of interpolation; I teach interpolation before these two subjects for mathematics students; however, for non-math students, learning least squares and splines before interpolation might work.
Univariate interpolation could be taught after students get acquainted with Taylor series. Least squares could be taught later, after the total derivative is clear and when students are familiar with functions of several variables. The cubic spline is a very good topic after classical constant linear acceleration (which is usually known to students before university), making the assumption that acceleration is a linear function of time, and then going into first-order ODEs after it.
I prefer simple functions, since they are more appropriate in measure theory.
Dear Abedallah, the situation does not depend only on the continuity property; we also need information about other aspects, such as the range of f([a,b]), the oscillatory or non-oscillatory nature of the graph of f, and probably more sophisticated things. If f is a slowly changing function, then both Taylor and polynomial approximation are suitable. But if we have big peaks, then splines are more suitable. The least-l2-norm approximation, although the most popular, does not always work. There are also computational problems, since the pseudo-inverse is a 'difficult' matrix. From your personal experience, can Bernstein polynomial regression solve the ill-conditioning problems of simple polynomial regression, or is the gain negligible? (I ask because I tried Bernstein once and found no difference.) Anyway, your question is very interesting, I think.
Mind you, I am a chemist and cannot be as good as the scholars participating here. My simple answer is: I think that when there are different choices in a case study, it is better to follow the easiest (rational approximation).
I found this reference interesting.
Dear Abedallah, once I had to calculate Bessel functions of complex arguments to solve a boundary integral equation for wave problems. I tried various methods and found that series expansions were successful in terms of both accuracy and convergence properties. Of course, some Bessel functions show singularities, so this may not be the story you expected, but it was one of my experiences.
Keying on Costas Drossos's comment about measure theory, perhaps followers of this thread will find the following lecture notes interesting:
P. Cannarsa, T. D'Aprile, Lecture Notes on Measure Theory and Functional Analysis,
Università di Roma, 2006/07:
http://www.mat.uniroma2.it/~cannarsa/cam_0607.pdf
See, for example, Chapter 7 (Functions of bounded variation and absolutely continuous functions), starting on page 175.
Yes indeed, dear @Costas, simple functions are suitable in measure theory and applications. Dear @Demetris, the kind of information about other aspects determines, in some cases, the type of the solution. The Bernstein polynomials should give the same solution; however, the solution is more stable.
Yes, there is no single answer to the question, I agree with you dear @Asmat.
Dear @Nizar, thank you for the interesting file about rational approximation. The rational approximation is a strong one, however, not all systems deal with it.
Thank you, dear @Terumi, for sharing your experience; the series expansions are successful in both accuracy and convergence. The Bessel functions show singularities in some cases. What are these cases, if you don't mind?
Dear Nizar, have you tried to theoretically explain the success of Pade approximation? I have also noticed that success, but I have not found any relevant theory.
I think that one should first define what "best approximation" means: uniform, pointwise, asymptotic and so on. Indeed, any decision depends on the aim of the approximation.
Dear @James, thank you for the Lecture Notes on Measure Theory and Functional Analysis by Cannarsa and D'Aprile. It covers the interactions of the field of approximation and functions of bounded variation and absolutely continuous functions.
Dear @Demetris, I think the success of Pade approximation is because it approximates using rational functions and thus it better approximates functions of rational behavior. Dear @Juan-Esteban, thank you for your contribution that is highly appreciated. I agree with you that we should first define what kind of approximation: Uniform, pointwise or asymptotic because any decision depends on the approximation type. Thank you @Ahmad for the feedback.
Compared to you, dear Demetris Christopoulos, I am a beginner, since I am a chemist. When the Pade approximation is computed and compared with the Maclaurin series for a case in which both apply, close answers are obtained. Since this is approximation, such a difference is expected.
Honestly, I do not know a theory like you.
I have used feed-forward artificial neural networks to approximate continuous functions and found them to be the best.
Dear @Muhammad, thank you for sharing your experience; the following link explains feedforward artificial neural networks (if you have a better resource, please share it):
http://en.wikipedia.org/wiki/Feedforward_neural_network
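To make the neural-network suggestion a bit more concrete, here is a minimal sketch (my own illustration, not Muhammad's code) of a one-hidden-layer feed-forward network trained by plain gradient descent to approximate a smooth function; the target function, layer width and learning rate are arbitrary choices, and the fit is only rough.

```python
import numpy as np

# Target: a smooth function on an interval; here sin(x) on [0, 2*pi] as a stand-in.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with tanh units; weights start as small random numbers.
n_hidden = 20
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.01
for epoch in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass for the mean squared error loss.
    grad_out = 2.0 * err / len(x)
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1.0 - h ** 2)
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(axis=0)

    # Plain gradient-descent update (a real application would use a better optimizer).
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g

print("max abs error:", np.max(np.abs(np.tanh(x @ W1 + b1) @ W2 + b2 - y)))
```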
Dear Abedallah,
Thank you for this interesting question, but I am afraid its scope is a bit too wide; so I agree very much with Juan-Esteban, who has suggested defining the aim of the approximation first.
I think it would also be useful to provide some more information about the original function f. For example, is it available analytically? Is it differentiable? Perhaps the values of f are simply accessible numerically through some black-box routine, or perhaps the function f is known only on a large but limited set of points in [a, b].
Last but not least, what do you want to do with the approximation? There are many possibilities we can think of. For example, the original function is too complicated and we need an approximation that would be easier to handle mathematically. Or perhaps the original function is too costly numerically and we need an approximation to speed up the evaluation, etc.
The Taylor, Lagrange, Newton, Hermite, spline, parametric polynomial, trigonometric and least-squares methods are all used to represent a smooth function. Each method has its strengths and needs a different kind of information. Other methods, such as best uniform or least-deviation methods, have no compact form for the solution.
Since college, I always do this using splines. I even think I should upgrade myself, but it always has worked.
Dear @Alexandre, splines are a very good choice; if you like, you can go further and use splines with high-order approximation, in which some geometric information is also used.
All of the above are right, depending on the question.
You have to specify your function more clearly - which space is it from?
If it is C^0 only, you can choose a piecewise linear one;
if it is periodic, sin and cos are good.
If it is decaying, exponentials will do,
and so on.
For details, wikipedia is OK (http://en.wikipedia.org/wiki/Smooth_function).
For a comparison of errors and numerical methods, you can check
http://arxiv.org/pdf/physics/0510176.pdf
Let's make a synopsis:
1) If f(x) gives a slowly changing graph, then polynomial and Taylor approximation are OK.
2) If f(x) has a periodicity, then Fourier analysis is a 'must'.
3) If f(x) blows up or down, then exponential-like functions should be used.
4) If f(x) is irregular, then a local 3rd-order (cubic) spline approximation must be done (a small sketch follows after this list).
5) In any case, the 'least' property has to be taken with care: the L2 norm does not always work.
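As a concrete illustration of point 4 above (my own sketch, not part of the synopsis), a cubic spline through sampled data can be built with SciPy's CubicSpline; the test function and node count are invented for the example.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sample an "irregular" function at a modest number of nodes.
x = np.linspace(-1.0, 1.0, 21)
y = np.abs(x) + 0.1 * np.sin(10.0 * x)   # kink at 0, wiggles elsewhere

spline = CubicSpline(x, y)               # piecewise cubic, C^2 between nodes

x_fine = np.linspace(-1.0, 1.0, 1001)
truth = np.abs(x_fine) + 0.1 * np.sin(10.0 * x_fine)
print("max deviation from the data-generating rule:",
      np.max(np.abs(spline(x_fine) - truth)))
```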
Thank you @Markus and @Demetris for the detailed answers and the attached links. I like the list of choices depending on the nature of the function.
It also depends on which precision you want to obtain in the final result.
For instance, for very smooth functions (with derivatives of high order), if you want machine precision, Lagrange interpolation (constructed on optimal nodes such as the Chebyshev ones) is the right choice: it is simple and converges very fast.
Otherwise, if you have an (only) continuous function, every method will converge very slowly, and then piecewise linear functions are the simpler choice.
In other words, the space where the function lives is very important for choosing the approximation scheme, but the required precision can refine your choice.
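A small sketch of this point (mine, with the classical Runge function as an arbitrary test case): interpolating on Chebyshev nodes instead of equispaced ones keeps the maximum error under control.

```python
import numpy as np
from numpy.polynomial import Polynomial

f = lambda t: 1.0 / (1.0 + 25.0 * t ** 2)   # classical Runge example
n = 15                                       # polynomial degree

# Chebyshev nodes of the first kind on [-1, 1] versus equispaced nodes.
k = np.arange(n + 1)
x_cheb = np.cos((2.0 * k + 1.0) * np.pi / (2.0 * (n + 1)))
x_equi = np.linspace(-1.0, 1.0, n + 1)

# A degree-n least-squares fit through n+1 points is the interpolating polynomial.
p_cheb = Polynomial.fit(x_cheb, f(x_cheb), n)
p_equi = Polynomial.fit(x_equi, f(x_equi), n)

t = np.linspace(-1.0, 1.0, 2001)
print("max error, Chebyshev nodes :", np.max(np.abs(p_cheb(t) - f(t))))
print("max error, equispaced nodes:", np.max(np.abs(p_equi(t) - f(t))))
```

The equispaced version shows the familiar Runge oscillations near the endpoints, while the Chebyshev-node interpolant stays uniformly accurate.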
Thank you @Maria for introducing the concept of precision. In some cases the Chebyshev approximation is the right choice.
For different types of approximation, I have used Fourier series for periodic functions, Chebyshev polynomials for analytically known functions required to be approximated to high accuracy, and cubic splines for functions not required to be very accurate. Each has worked well in the right circumstances.
If the function f is smooth, I suggest an expansion in orthogonal polynomials (least squares), because this approximation is the best in the L^2 norm and is near-best, with minimal Lebesgue constant, in the uniform norm. But if the regularity of f is weak, I suggest piecewise approximation based on orthogonal polynomials, because this leads to the best result on the subintervals and relatively good precision at the nodes. If f has a singularity in the interior of the interval, I prefer piecewise or rational approximation based on orthogonal polynomials. I can suggest the book: T. J. Rivlin, An Introduction to the Approximation of Functions.
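To make the first suggestion concrete, here is a small sketch (my own, not taken from Rivlin's book) of a discrete least-squares expansion in orthogonal polynomials, using NumPy's Legendre class purely for convenience; the target function and degree are arbitrary.

```python
import numpy as np
from numpy.polynomial import Legendre

# A smooth target on [-1, 1], sampled densely.
f = lambda t: np.exp(t) * np.cos(3.0 * t)
x = np.linspace(-1.0, 1.0, 400)

# Discrete least-squares fit in the Legendre basis: best in the (discrete) L^2 sense,
# and for smooth f typically near-best in the uniform norm as well.
p = Legendre.fit(x, f(x), deg=12)

t = np.linspace(-1.0, 1.0, 2001)
print("max error of the degree-12 Legendre fit:", np.max(np.abs(p(t) - f(t))))
```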
Thank you @John for sharing your experience about the Fourier, Chebyshev, and Spline approximations. I am interested in your experience using the Chebyshev approximation.
Dear @Payam, thank you for suggesting Rivlin's book on the approximation of functions; the rational Chebyshev approximation is stable and gives the solution with the least uniform error.
Lech Solarz, Military University of Technology
In my opinion spline approximation is very useful (approximation, not interpolation).
If your knowledge about the function emerges from experimental data, then I think it is the best.
Least-squares methods are used when the values of the given function at the different knots are known to contain errors, e.g. data available from a market study. When the functional values are correct to round-off error only, the suitable approximation method is interpolation, preferably a cubic spline, which takes care of the values of f as well as its slope and its curvature. You may use the concept of the B-spline. Prof. SANJAY SEN
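A small illustration of this distinction (my own sketch; the test function, noise level and smoothing parameter are invented): a smoothing B-spline for data with errors versus an interpolating cubic spline for exact data, both via SciPy.

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 40)
y_exact = np.sin(x)
y_noisy = y_exact + rng.normal(scale=0.05, size=x.size)

# s > 0: least-squares smoothing B-spline, appropriate for data with errors.
tck_smooth = splrep(x, y_noisy, s=len(x) * 0.05 ** 2)
# s = 0: interpolating cubic spline, appropriate for data correct to round-off.
tck_interp = splrep(x, y_exact, s=0)

xf = np.linspace(0.0, 4.0, 801)
print("smoothing spline, max error   :", np.max(np.abs(splev(xf, tck_smooth) - np.sin(xf))))
print("interpolating spline, max error:", np.max(np.abs(splev(xf, tck_interp) - np.sin(xf))))
```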
Thank you, dear @Lech, for the great feedback about spline approximation when the knowledge about the function emerges from experimental data.
Dear Professor @Sanjay, your opinion and contribution are highly appreciated: the discrete least-squares methods are widely used when the values of the given function at the knots are known to contain errors, e.g. data from a market study. Interpolation methods are used when the functional values are exact.
My dear @Andrea sent me the following message with his contribution: "When it comes to interpolation in a given interval, I have found Lagrange interpolation polynomials quite fine, in particular if there are not many points. Of course, due attention is to be given never to utilize the method outside the interval."
Thanks to my dear @Nilupul for answering the question and sharing it with me: "I have tried polynomial and LS approximations so far, when those seemed suitable for the occasion. But your question seems so wide. I have to agree with most of the people saying that the choice of the approximation depends on what happens between the points. Second, I think the importance you give to the approximation task should be considered: whether the task is all about approximation, or only a part of a larger context.
Last, I would look for the availability of built-in or add-in functions (within my ability) to automate the approximation if necessary."
As answered by Prof. Sanjay Sen, for accurately given data one should use splines or B-splines, which can incorporate different kinds of boundary values, including periodic ones.
Least squares is used when you wish to fit the data with a function of some known type or characteristic whose parameters are to be determined.
The best method to approximate a continuous function f over [a,b] is directly linked to, and depends on, the unknown dynamics of this function on that interval.
Splines are the best option because they are not only simple but, to a great extent, deterministic in preserving or honouring the original data.
I would not use polynomial approximations, because the degree of the polynomial needed to converge to f may increase rapidly (see Bernstein's proof of the Weierstrass theorem). The rate of convergence of spline functions should, in general, convince us all that they are the class of functions to be used in approximation (see Hämmerlin-Hoffmann, Numerical Mathematics). Moreover, their B-spline representation offers the required flexibility in cases where we want to change the continuity and the degree of the spline easily.
Thank you, dear @Sylantyev; the best approximation depends on the unknown dynamics of the function on the interval [a,b]. Perhaps one of the best choices is spline approximation, as pointed out by dear @Mohammad and @Nikolaos.
Yes, dear @Nikolaos, polynomial approximations of high degree exhibit the oscillation phenomenon, which makes the error term diverge to infinity (see Hämmerlin-Hoffmann, Numerical Mathematics). The B-splines offer flexibility in many cases.
I think the aim of the approximation, the space, the nature of the function, and the needed precision must be defined first. In measure theory, simple functions are best. I have experimented with polynomial and spline approximations for random behaviour such as ultrasonic structural noise measurement. The experiments justified the function extracted from the noise and also gave a picture of the behaviour of matter at micro scales.
Dear @Fairuz, thank you for sharing your experience of approximation in measure theory. I understand that you have 'experimented with polynomial and spline approximations for random behaviour such as ultrasonic structural noise measurement. The experiments justified the function extracted from the noise and also gave a picture of the behaviour of matter at micro scales.'
This is an excellent feedback.
I advise you to use a least-squares method with monic Chebyshev polynomials. I implemented this method in Mathematica, not only to approximate functions but also for systems of differential equations with badly conditioned initial and boundary value problems. You can find more details in some publications presented on my ResearchGate profile.
In my opinion, Prof. Sanjay Sen described it rightly; therefore, I agree with him.
Dear Ulrich, can you provide me the (x,y) data used in the Mathematica example that you have mentioned here?
(I want to investigate something, since my current research is in sigmoid curves identification)
Thank you.
Dear Demetris,
the point of the Mathematica demo is that you can add and delete interpolation points and also shift existing ones, so there is no point in knowing the points that were arbitrarily chosen for the initial configuration of the interactive application. If you download the free Wolfram CDF reader, you can read the Mathematica code, from which all data are obvious. The important point is the formula for the smooth sigmoid, which I have so far not been able to find anywhere in the literature. The surprisingly simple formula for it can be read from the linked page without having the CDF reader installed.
Hope this helps.
Let me know if you still have questions.
Dear Ulrich, your sigmoid animation is attached.
Correct me if I am wrong:
*We have a set of pairs (xi,yi) from an unknown function f.
*We use local quadratic interpolation for every three consecutive pairs.
*We join the answers.
Where do you use the sigmoid mentioned? As a weight function?
(I do not want to download one more application)
Thank you
A very good and modern method is the fuzzy transform (F-transform). It is a universal approximator, and by setting proper parameters you can obtain a result with the desired properties and, of course, of arbitrary precision. The fundamental paper is
@article{Perf:FT,
author = {Perfilieva, I.},
year = {2006},
title = {Fuzzy Transforms: Theory and Applications},
journal = {Fuzzy Sets and Systems},
volume = {157},
pages = {993-1023}
}
There are many other papers and applications. See also the recent
@article{novetal:INS2014,
author = {Nov\'ak, V. and Perfilieva, I. and Hol\v{c}apek, M. and Kreinovich, V.},
title = {Filtering out high frequencies in time series using {F}-transform},
year = {2014},
volume = {274},
journal = {Information Sciences},
pages = {192-209}
}
Vilem Novak
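To give a feel for the F-transform mentioned above, here is a minimal discrete sketch (my own, assuming a uniform triangular fuzzy partition; it is not code from the cited papers). The inverse F-transform keeps the slow component of the signal and attenuates the fast one, in the spirit of the 2014 paper on filtering out high frequencies.

```python
import numpy as np

def f_transform(f_vals, x, n_nodes):
    """Direct and inverse (discrete) F-transform with triangular basic functions."""
    nodes = np.linspace(x[0], x[-1], n_nodes)
    h = nodes[1] - nodes[0]
    # Triangular basic functions A_k: a partition of unity over [x[0], x[-1]].
    A = np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)
    F = (A @ f_vals) / A.sum(axis=1)   # direct F-transform components F_k
    f_hat = F @ A                      # inverse F-transform on the grid
    return F, f_hat

x = np.linspace(0.0, 1.0, 1001)
signal = np.sin(2.0 * np.pi * x) + 0.3 * np.sin(40.0 * np.pi * x)  # slow + fast part
F, f_hat = f_transform(signal, x, 15)
print("F-transform components:", np.round(F, 3))
print("max |inverse F-transform - slow part|:",
      np.max(np.abs(f_hat - np.sin(2.0 * np.pi * x))))
```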
Dear Demetris,
for given x you single out the two triplets of x_i-values that are next to x. Both triplets give a value by quadratic interpolation. These two answers are added with w and 1-w as weights, where w is defined as sigmoid(p) with a parameter 0
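If I read this recipe correctly, a rough sketch could look like the following (my own reconstruction; a plain logistic is used as a stand-in for Ulrich's sigmoid, whose exact formula I do not know, so this version is only approximately smooth at the nodes).

```python
import numpy as np

def quad_eval(xs, ys, t):
    """Value at t of the quadratic through three points (Lagrange form)."""
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    return (y0 * (t - x1) * (t - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (t - x0) * (t - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (t - x0) * (t - x1) / ((x2 - x0) * (x2 - x1)))

def smooth_interp(x_tab, y_tab, t, steepness=6.0):
    """Blend the two neighbouring quadratic interpolants with a sigmoid weight."""
    i = np.clip(np.searchsorted(x_tab, t) - 1, 1, len(x_tab) - 3)
    left = quad_eval(x_tab[i - 1:i + 2], y_tab[i - 1:i + 2], t)
    right = quad_eval(x_tab[i:i + 3], y_tab[i:i + 3], t)
    # Position of t inside [x_i, x_{i+1}], mapped to a weight w in (0, 1).
    s = (t - x_tab[i]) / (x_tab[i + 1] - x_tab[i])
    w = 1.0 / (1.0 + np.exp(-steepness * (s - 0.5)))
    return (1.0 - w) * left + w * right

x_tab = np.linspace(0.0, np.pi, 30)
y_tab = np.sin(x_tab)
print(smooth_interp(x_tab, y_tab, 1.234), np.sin(1.234))
```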
Thank you dear @Vilem for introducing the Fuzzy transform and the articles of you and Perfilieva.
Dear Abedallah,
The answer really depends on the problem at hand as mentioned by many of the above responses.
I am an engineer and I have used the function approximations either as a response surface in reliability analysis or as an approximation in statistical data analysis. I have used least squares, Fourier and spline mostly. The implementation of these methods is an easy task by using matlab.
An important issue here is parsimony: we want a good approximation with a small number of parameters.
Thank you Ulrich, I quickly read your article and found it very interesting (due to the stationarity arguments and least-action procedures), so I have to read it again more carefully!
I will come back after reading it again.
Dear @Behrouz, that is a really interesting experience; how satisfied are you, as an engineer, with these approximations? Which approximation are you going to use in your next application?
If the derivatives are available, cubic splines worked very well in my experience.
I used them as an entry point for a program for fast and precise calculation of log-Gamma function. I can hardly recall the details, since it was almost 10 years ago.
Above, I should have written "first derivative" instead of "derivatives".
Dear @Milen, thank you for the feedback about your experience with approximation.
As stated by many others, the answer strongly depends on your "boundary conditions". Things change, for instance, if you are interested in the "best" approximation according to a specific norm, if you want "approximation" rather than "interpolation", if you approximate just a "static" function or a function describing a dynamical system, if the model complexity is a concern for you, if you are interested in a circuit implementation of the approximate function, and so on. For instance, I used piecewise-linear approximations of both static functions and vector fields describing nonlinear dynamical systems, in view of their (digital) circuit implementations. For data-fitting purposes oriented to microcontroller implementations, instead, I used splines. For other test cases, the "best" solution could be different.
Hope this helps!
It is best to have an idea of the physics behind your function, as the order of the approximation should not be higher than the order of the differential equation (for example) describing the physical process. For example, a temperature field is best fitted with splines or polynomials of order 2; for stream functions it would be order 4, etc.
You might as well check the spectral approximation techniques.
There is a very good project and a book about approximation theory developed by the numerical analysis group at Oxford University:
http://www.chebfun.org/
http://www.chebfun.org/ATAP/
Dear @Marco, thank you for the feedback on using piecewise-linear approximations of both static functions and vector fields, and splines for data-fitting purposes oriented to microcontroller implementations. It is also important to have an idea about the boundary conditions and the physics behind the approximation, as dear @Evgene explained. Thank you, dear @Ramy, for the interesting link about the approximation theory project developed by the numerical analysis group at Oxford University.
Dear Abedallah, sorry to get back late; I did not see the question directed at me.
In civil engineering, for advanced engineering mathematics, we use Fourier series to find the solution of partial differential equations such as the 1D and 2D wave equations.
I am reading about wavelets these days, as they are appropriate for dynamic and seismic analysis.
Dear @Behrouz, thank you for the answer; wavelets give very practical solutions for PDE models.
Dear @Emma Perracchione sent me the answer in a message, which I am sharing with you: "I need more information. Specifically, for a large number of points, methods such as Lagrange or Newton interpolation are not recommended (I am sure you know). Moreover, the choice of a method depends on how the points are distributed in the space (scattered data or regular data). I work on the partition of unity method, which makes use of radial basis functions as local approximants (suitable for a large number of scattered data)."
Dear @Emma, thank you for introducing radial basis functions into this forum. I did not work on radial basis functions, but I know that Larry Schumaker is one of the people who worked with RBFs for geometric modeling purposes.
Dear Abedallah:
In my experience, I have used piecewise-linear approximations, but mostly to approximate ODEs and their solutions. However, I can modestly say, and share with the other colleagues in this forum, that the choice depends on the problem: the number of points, the oscillatory behaviour, and so on. On the other hand, for a rough approximation I used polynomial interpolation to get the torque-velocity curve for electric motors.
Please do not hesitate to contact me at [email protected].
Best wishes,
Andrés
Dear @Andres, thank you for sharing your valuable comments on the approximation methods you used; it seems you used polynomial interpolation comfortably to get the torque-velocity curve for electric motors. We know that the least error occurs when the interpolation nodes are taken to be the roots of the Chebyshev polynomials.
Your question is rather ambiguous. One possible situation that corresponds to your question is this: You have a table of a set of x and f(x), where f(x) is a smooth but very complicated function whose evaluation requires considerable computation time, and you want to use the value of f(x) for any given value of x, which is not necessarily in the table, in a certain elaborate computer program. As many answers to your question imply, the use of suitable interpolation of the table is a solution. However, if you get a simpler function φ(x) to approximate f(x) with high accuracy, it makes the main elaborate program much shorter than the case of incorporating the table and an interpolation subprogram, especially if the table is a long one.
I have experience of obtaining such a function φ(x). You can find examples in our paper, "Approximation to Landau's distribution functions for the ionization energy loss of fast electrons" (https://www.researchgate.net/publication/243629907_Approximations_to_Landau%27s_distribution_functions_for_the_ionization_energy_loss_of_fast_electrons?ev=prf_pub). In optimizing the adjustable parameters in φ(x), you should use the best-approximation method, defined as the method that minimizes the maximum deviation, rather than the method of least squares. An example of FORTRAN code for the best-approximation method is given in our report, "ALESQ, a Nonlinear Least-Squares Fit Code, and TSOLVE, a Nonlinear Best-Approximation Code: Second Revised Edition" (https://www.researchgate.net/publication/259457102_ALESQ_a_Nonlinear_Least-Squares_Fit_Code_and_TSOLVE_a_Nonlinear_Best-Approximation_Code_Second_Revised_Edition?ev=prf_pub).
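Since the report concerns a best-approximation (minimax) code, the underlying idea can be sketched briefly: on a discrete grid, the minimax polynomial fit is a linear program. The formulation below is a generic illustration of that idea, not the TSOLVE algorithm; the test function, degree and grid are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

# Minimise the maximum deviation t of a degree-d polynomial from f on a grid:
#   minimise t  subject to  -t <= p(x_i) - f(x_i) <= t.
f = lambda x: np.exp(-x) * np.sin(5.0 * x)
d = 6
x = np.linspace(0.0, 2.0, 200)
V = np.vander(x, d + 1)                    # columns x^d, ..., x, 1

# Unknowns: the d+1 polynomial coefficients followed by t.
c = np.r_[np.zeros(d + 1), 1.0]
A_ub = np.block([[V, -np.ones((len(x), 1))],
                 [-V, -np.ones((len(x), 1))]])
b_ub = np.r_[f(x), -f(x)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (d + 1) + [(0, None)])

print("minimax error on the grid:", res.x[-1])
```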
As already pointed out in this discussion, the answer depends on the kind of question you are asking. I can imagine a mathematical method that is optimal for the general problem of approximating a continuous function on some interval using one particular measure of the error. But this does not mean that this method will be best
* for a different measure, or
* for certain subclasses of functions that, besides being continuous, satisfy some further condition, e.g. bounded variation, greater smoothness, and/or some asymptotic condition at one or both endpoints.
In my opinion, the answer - even when taken in a strict mathematical rather than a more practical point of view - depends largely on
a) some further information on the function to be approximated
b) the requirements that the approximating function has to satisfy (e.g. limited variation, smoothness, being in some space of functions relevant for the problem under consideration, etc.)
Point b) could for example rule out rational approximations that have poles in the approximation interval.
Of course, from a more practical point of view, there are some "standard methods" that work well for most low-accuracy requirements, and some other methods that are adequate for many high-accuracy requirements.
Besides answering partly the question which approximation approach is optimal as regards accuracy, the above mentioned "standard methods" should also satisfy the requirement that they are rather efficiently computable.
Furthermore, the question of "best" approximation depends also on the dimensionality of your problem. In problems in ten dimensions, say, the answer will be quite different from that in one variable, even for "equal" conditions a) and b).
This also applies to the case of finite intervals vs. infinite intervals.
Thus, in summary, I suggest to sharpen the question with respect to the points discussed above.
Contribution from Raoul BILOMBO:
The iterative methods of approximation are very stable and consistent for solving dynamic optimal management problems.
The analysis of their approximation in finite dimension can be carried out with the help of the implicit function theorem, using the discretization step as a small parameter.
One thus obtains results on existence, local uniqueness and convergence, as well as error estimates in different norms.
The modelling can be done in a scientific language such as MATLAB or SCILAB.
These management methods are very useful for optimal control in the applied sciences.
Key words: iterative methods; implicit functions; dynamic optimal management.
Depends on what you mean by "best".
I have long believed (and may be able to prove it if I can refer to my class notes from about 70 years ago) that if I fit polynomials progressively by least squares to any continuously differentiable function, I shall arrive at the best useful approximation.
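A toy version of what I understand Tony to describe (my own sketch; the target function and stopping threshold are invented): fit polynomials of increasing degree by least squares and stop when the residual stops improving.

```python
import numpy as np
from numpy.polynomial import Polynomial

f = lambda x: np.exp(x) * np.sin(3.0 * x)      # any smooth target will do
x = np.linspace(-1.0, 1.0, 300)
y = f(x)

prev_err = np.inf
for deg in range(1, 25):
    p = Polynomial.fit(x, y, deg)              # least-squares fit of degree `deg`
    err = np.max(np.abs(p(x) - y))
    if prev_err - err < 1e-12:                 # no meaningful improvement: stop
        break
    prev_err = err
    print(f"degree {deg:2d}: max residual {err:.2e}")
```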
Dear @Raoul, thank you for sharing your experiences. Iterative methods are a good choice in many cases and work well for solving dynamic optimal management problems.
Dear @Tony, thank you for sharing your valuable experience of using the progressive least-squares method to obtain the best solution.
Read the paper:
C. Mortici, Product approximation via asymptotic integration, Amer. Math.
Monthly 117 (5) (2010), 434-441.
@Valentin
If your professor had published in arXiv, I would read it. O quae mutatio rerum! Scientific journals, once the vehicles of dissemination of knowledge, now often work as sinks for knowledge.
First of all, would you give a little background (if such can be given) about the function in question?
If I were faced with such a problem, first:
1) I would like to know the origins of the 'mysterious' continuous function.
2) From the formulation of your question I would guess that you have a set of points of that continuous function on the interval [a,b].
3) Moreover, I guess you would like to know what sort of approximation would give you the best approximation of your unknown continuous function.
4) As a mathematician I would like to know: in what sense are you looking for the best approximation? I.e.
a) Do you want the approximation to minimize the L^2 distance between your
mystery function and the approximation? I mean, do you look for the best candidate in the sense of the L^2 norm? Or do you just want to approximate it near a particular point, in which case of course
a Taylor expansion would be good if you have an analytical expression?
5) Well, if you formulate the question in more detail, perhaps I can give more accurate suggestions... This is just a Friday night response :-).
regards
Samuli
Dear @Valentin, thank you for the paper:
C. Mortici, Product approximation via asymptotic integration, Amer. Math. Monthly 117 (5) (2010), 434-441.
Dear @Ulrich, thank you for you opinion.
Dear @Samuli, thank you for the detailed approach to explaining possible methods of approximation; I am interested in feedback about the approximation methods you are using and whether you are satisfied with them or not. Mathematically, the polynomial of least deviation (in the \infty norm) is the best approximation; however, finding this polynomial is a challenging issue. Therefore, least squares (the second norm) and the methods of interpolation or expansion are used because of their straightforward construction.
Hi Abedallah
Of course every norm/metric has its advantages, but the original question addresses smooth functions, and to find the best approximation I think you really should know some properties of your 'mystery' function.
In fact, I seriously think you need to know the 'phenomenon' which creates the smooth function, after which you can argue which norm/metric you should use to create the best approximation.
Of course, strictly mathematically: given a set of values of a smooth function on the
interval [a,b], you really cannot say which interpolation/spline/approximation method would produce 'the best approximation' unless you decide how you want to measure the distance between two functions. After you decide which metric/distance you use, you can COMPARE different approximations and decide which is best under that particular metric!
As in the first post, the best might be Taylor, Lagrange, Newton, Hermite, spline, parametric polynomial or trigonometric approximation, or the least-squares method: THIS REALLY DEPENDS ON HOW YOU WANT TO MEASURE THE DISTANCE TO THE APPROXIMATION!
BUT: OF COURSE, IF THE PROBLEM COMES, FOR EXAMPLE, FROM SOME CLEAR APPLICATION, YOU PROBABLY HAVE A CLEAR INTUITIVE IDEA OF WHICH METRIC YOU SHOULD USE...
If you wish, you can send me a dataset and perhaps a description of the origin of your function.
kind regards
Samuli Piipponen
Hi
As far as I know, the spline method gives the smoothest function. Fortunately, there are many kinds of splines from which you can choose, depending on your problem.
Dear @Samuli and @Fateme, thank you for the feedback about your experience in approximation. Splines are a proper choice in many cases, including design purposes.
I have limited experience using wavelets to model functions on a closed domain. What I learned is that Chebyshev polynomials are the most practical solution for approximating a function. http://en.wikipedia.org/wiki/Chebyshev_polynomials
Especially read the section 'As a basis set'. The fact that they form a basis makes the coefficients easy to compute.
@Sieuwert
You certainly mean 'orthonormal basis' instead of 'basis' in your last sentence. Even with this amelioration it is a dubious statement, since whether the coefficients are easy to compute depends on the computational expression for the scalar product.
It is given that the function is continuous on [a,b].
It is enough if we prove that the function is differentiable on ]a,b[.
In other words, we need to verify the hypotheses of Rolle's theorem.
Then, by using Rolle's theorem, we can find values in ]a,b[ where the derivative vanishes, after which we can determine the nature of the function.
Thanks to all for sharing their experiences.
Yes, dear @Liyong, the Bernstein polynomial basis has been used to construct a polynomial approximation in order to prove the Weierstrass theorem.
I suggest you try Chebyshev polynomials. For a nice implementation, see the ChebFun package written by Lloyd Trefethen and his group at Oxford. It is built to work in Matlab. Trefethen also has a fine book on the topic, Approximation Theory and Approximation Practice, SIAM, 2013.
Thank you, dear @Charles; the Chebyshev polynomials are a very good choice because of their uniform behavior. Did you use this kind of approximation yourself?
Bernstein polynomials would be my choice, unless one really needs speedy convergence (I am assuming that f 'lives' on the interval I=[0,1]). Why Bernstein?
1. It is a very constructive uniform approximation, albeit slow.
2. For a given n \in N, the {B_{n,i}} form a partition of unity, which, given their nonnegativity, allows us to treat (B_{n,i}(x)) as a probability distribution vector for any x \in I.
3. When the original function f is differentiable on I, not only does the corresponding sequence of Bernstein polynomials uniformly approximate f, but the sequence of its derivatives approximates the derivative f' on I. This holds true for higher-order derivatives as well, and is also true for multivariate Bernstein polynomials with f defined on a corresponding cube. (A small sketch follows below.)
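For concreteness, here is a small sketch of the Bernstein operator B_n on [0,1] illustrating points 1-3 (my own code, with an arbitrary test function); the slow, roughly O(1/n), decrease of the error is visible in the output.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein approximation B_n f evaluated at points x in [0, 1]."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for i in range(n + 1):
        # B_{n,i}(x) >= 0 and the B_{n,i} sum to 1 (partition of unity).
        basis = comb(n, i) * x ** i * (1.0 - x) ** (n - i)
        total += f(i / n) * basis
    return total

f = lambda t: np.sin(np.pi * t)
x = np.linspace(0.0, 1.0, 501)
for n in (10, 50, 200):
    err = np.max(np.abs(bernstein(f, n, x) - f(x)))
    print(f"n = {n:3d}: max |B_n f - f| = {err:.3e}")
```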
Dear @Roman, thank you for your contribution. I agree with you that the Bernstein polynomials are an excellent choice. Moreover, in addition to the properties you mentioned, it has been shown that they are the most numerically stable basis.
I agree with Ulrich Mutze. If there is a continuous function, one can easily find a value using that very function. Approximation theory tells us that when we cannot solve a problem exactly, we solve it by assigning an approximate function.
Spline Approximation of Functions and Data
http://folk.uio.no/in329/nchap5.pdf
Perhaps my monograph on using (monic) Chebyshev polynomials to find approximate solutions of differential equations will give you some ideas: https://www.researchgate.net/publication/235256942_Application_of_computer_algebra_in_symbolic_computations_and_boundary-value_problems_of_the_theory_of_shells . See the second part. The algorithm, based on the least-squares method, was implemented within the Mathematica system. See also my other publications on RG from around the beginning of the XXI century.