It depends on many things: the dimension; the type of data you have (values at points, averages); whether the evaluation points are fixed or you can choose them; the smoothness class of the integrand; and so on. You need to be more specific in your question.
The answer depends very much on the nature of your problem, as Yuliya already pointed out. For lower dimensions (two, three), one may use Gaussian product-type rules, but they use relatively many points, and for special problems there are more efficient cubature methods, e.g., based on Voronoi tessellation; see for instance: Allal Guessab and Gerhard Schmeisser, "Construction of positive definite cubature formulae and approximation of functions via Voronoi tessellations", Advances in Computational Mathematics, Volume 32 (1), Jan 2010.
For high-dimensional problems, Gaussian product-type methods are usually too expensive.
But even in the one-dimensional case, Gaussian quadrature is not always the best.
One can expect good results only if the integrand is smooth and/or a suitable positive weight function is known for which a Gaussian quadrature is known or readily obtainable. One may also sometimes use a combination of coordinate transformations and standard Gauss-Legendre or Gauss-Jacobi quadrature with good results; this makes it possible to avoid the somewhat involved computation of Gaussian rules for nonstandard weight functions. Here, quite often the singularity structure of the integrand in the complex plane is of importance: the coordinate transformation should "move the singularities away from the interval of integration". An example is the use of Möbius transformations in the context of integrands with sharp peaks near the endpoints (see my bibliography).
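As an illustration of the coordinate-transformation idea described above (a minimal Python sketch of my own, not taken from the post; the function name `gauss5` and the choice of integrand are illustrative), consider an integrand with an inverse-square-root endpoint singularity. The substitution x = t^2 turns it into a smooth one, after which a standard composite Gauss-Legendre rule converges rapidly. The 5-point nodes and weights are the standard tabulated values.

```python
import math

# Standard tabulated 5-point Gauss-Legendre nodes and weights on [-1, 1]
NODES = [0.0, 0.5384693101056831, -0.5384693101056831,
         0.9061798459386640, -0.9061798459386640]
WEIGHTS = [0.5688888888888889, 0.4786286704993665, 0.4786286704993665,
           0.2369268850561891, 0.2369268850561891]

def gauss5(f, a, b, panels=8):
    """Composite 5-point Gauss-Legendre rule on [a, b] with equal panels."""
    h = (b - a) / panels
    total = 0.0
    for i in range(panels):
        mid = a + (i + 0.5) * h
        total += sum(w * f(mid + 0.5 * h * x) for x, w in zip(NODES, WEIGHTS))
    return total * 0.5 * h

# I = ∫_0^1 exp(x)/sqrt(x) dx has a singularity at x = 0.
# Direct quadrature converges slowly; the substitution x = t^2 gives
# I = 2 ∫_0^1 exp(t^2) dt, which is smooth and handled almost exactly.
direct = gauss5(lambda x: math.exp(x) / math.sqrt(x), 0.0, 1.0)
transformed = gauss5(lambda t: 2.0 * math.exp(t * t), 0.0, 1.0)
```

With the same number of function evaluations, the transformed version is accurate to near machine precision, while the direct version is off by several percent because of the singularity in the first panel.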
Also, Gaussian quadrature is tailored to polynomial approximation (up to the weight function). If the integrand does not fit into this scheme, Gaussian quadrature will not really give good results.
For highly oscillatory integrands, for instance, Filon-type or Levin-type quadrature methods are much superior.
Also, there is the celebrated double-exponential quadrature as an alternative tool. But this, too, has some pitfalls; see, e.g., http://www.keisu.t.u-tokyo.ac.jp/research/techrep/data/2011/METR11-43.pdf
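For reference, a bare-bones double-exponential (tanh-sinh) rule is just a trapezoidal sum in a transformed variable (a hedged sketch of my own; the function name and step-size choices are illustrative). One pitfall of the kind the report above discusses: for endpoint-singular integrands, the abscissa tanh((pi/2) sinh t) rounds to exactly ±1 in double precision once |t| is moderately large, so a naive implementation like this one should only be trusted on smooth integrands.

```python
import math

def tanh_sinh(f, h=0.1, level=35):
    """Double-exponential (tanh-sinh) rule for ∫_{-1}^{1} f(x) dx.

    Trapezoidal sum in t after the substitution x = tanh((pi/2) sinh t);
    the transformed integrand decays double-exponentially in t.
    """
    total = 0.0
    for k in range(-level, level + 1):
        t = k * h
        s = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(s)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(s) ** 2  # dx/dt
        total += w * f(x)
    return h * total

# Smooth test case: ∫_{-1}^{1} exp(x) dx = e - 1/e
result = tanh_sinh(math.exp)
```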
If you do not know much about the integrand, there are automatic quadrature methods that provide more or less sure-fire techniques. But even for these there are limitations, e.g., the computed result may fail to depend smoothly on parameters even when the integrand does.
There is also the issue of whether you can calculate your integrand to essentially arbitrary precision, or whether it is obtained from experimental data, say. In the latter case, I would not expect Gaussian quadrature to be the best possible method.
Thus, the question of optimal quadrature method has to be answered for each specific problem anew.
To learn more about the question, I suggest you read a wonderful book titled:
Handbook of Computational Methods for Integration by Prem K. Kythe and Michael R. Schäferkotter.
In this volume, all types of integration are presented.
In addition, you can find a further generalization in this book:
Differential Quadrature and Its Application in Engineering by Chang Shu.
In this volume Shu generalizes the Gauss integral quadrature rule: he proposes an integral quadrature that does not require a fixed discretization, so the point distribution can be selected arbitrarily.
In the tests we have done, this integral quadrature works very well.
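The idea of a quadrature rule on an arbitrarily chosen point distribution can be illustrated by computing interpolatory weights from the moment equations (a minimal Python sketch under my own naming, not Shu's actual algorithm): the weights are chosen so the rule is exact for all monomials up to degree n-1, where n is the number of nodes.

```python
def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def weights_for_nodes(nodes, a, b):
    """Quadrature weights for arbitrary nodes on [a, b], exact for
    polynomials of degree len(nodes) - 1 (moment/Vandermonde system)."""
    n = len(nodes)
    A = [[x ** i for x in nodes] for i in range(n)]          # sum_j w_j x_j^i
    m = [(b ** (i + 1) - a ** (i + 1)) / (i + 1) for i in range(n)]  # moments
    return solve(A, m)

nodes = [0.0, 0.3, 0.7, 1.0]          # arbitrary point distribution
w = weights_for_nodes(nodes, 0.0, 1.0)
```

Note that for many nodes the Vandermonde system becomes badly conditioned, which is one reason practical methods use recurrences or transformations instead.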
In the only book I ever read on this topic (preparing for an exam as a student), which was the textbook at MSU, it was written that the only methods that work in high dimensions are Monte Carlo methods. :)
It is the simple Simpson's one-third rule of integration!
In earlier times, when no computers were available, more complicated rules of numerical integration were developed. Now that we have fast computers, however, we can make the subintervals as small as we like, and therefore the one-third rule of integration is the best.
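For concreteness, the composite Simpson one-third rule is only a few lines (a sketch; the function name is mine):

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's 1/3 rule on [a, b]; n (number of subintervals)
    must be even. Error decreases like h^4 for smooth integrands."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0
```

For example, `simpson(math.sin, 0.0, math.pi)` approximates the exact value 2 to better than single-precision accuracy already with 100 subintervals.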
When it comes to the general case, there is really no "best method" for calculating an integral numerically. Such a method, if it existed, would be both reliable and efficient in solving a problem for which the exact result is not known a priori. In other words, it would be able to achieve a user-defined precision while minimizing the number of function calls, and it would be able to do all that for any class of integrand and any dimension of the problem. This is simply not possible. As Yuliya and Herbert (excellent post, BTW) have already asked: do singularities exist? Do we know their locations? How smooth is the integrand? Etc.
In the case of a one-dimensional integrand without singularities, methods of adaptive quadrature (AQ), without necessarily being the best, may still be considered standard general-purpose methods that are reasonably simple, reliable and efficient. Internally they use (as a workhorse) one or more routines based on Gaussian, Simpson or even trapezoidal rules. They do not subdivide the interval of integration uniformly, but rather choose selectively where to evaluate the integrand, placing more or fewer points in highly or weakly oscillatory regions, so as to achieve a predefined precision at a significantly reduced cost. Global AQ methods may also be considered; they are usually more reliable but also more difficult to implement.
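A classic local AQ scheme of the kind just described is adaptive Simpson, which halves an interval whenever the usual error estimate |S_left + S_right - S_whole| exceeds 15 times the local tolerance (a hedged sketch; the names are mine):

```python
import math

def _adaptive(f, a, b, fa, fm, fb, whole, tol):
    """Recursive step: compare the Simpson value on [a, b] ('whole')
    with the sum of the two half-interval Simpson values."""
    m = 0.5 * (a + b)
    lm, rm = 0.5 * (a + m), 0.5 * (m + b)
    flm, frm = f(lm), f(rm)
    left = (m - a) / 6.0 * (fa + 4.0 * flm + fm)
    right = (b - m) / 6.0 * (fm + 4.0 * frm + fb)
    if abs(left + right - whole) <= 15.0 * tol:
        # Accept, with Richardson correction
        return left + right + (left + right - whole) / 15.0
    return (_adaptive(f, a, m, fa, flm, fm, left, tol / 2.0) +
            _adaptive(f, m, b, fm, frm, fb, right, tol / 2.0))

def adaptive_simpson(f, a, b, tol=1e-10):
    """Adaptive Simpson quadrature of f on [a, b] to tolerance tol."""
    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    whole = (b - a) / 6.0 * (fa + 4.0 * fm + fb)
    return _adaptive(f, a, b, fa, fm, fb, whole, tol)
```

Note how evaluations at the interval endpoints and midpoint are passed down and reused, so refinement in difficult regions costs only the new interior points.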
For high-dimensional problems, the cost of deterministic methods increases exponentially with the dimension. So I agree with Milen, who suggests going for Monte Carlo methods. Here again, reduced cost can be achieved using a stochastic AQ strategy, for example Monte Carlo integration with recursive stratified sampling.
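In its plainest form, Monte Carlo integration over a d-dimensional box looks as follows (an illustrative sketch with my own naming; stratified or quasi-random sampling, as mentioned above, would reduce the variance further). The error decreases like 1/sqrt(N) regardless of dimension, which is exactly why these methods win in high dimensions.

```python
import random

def monte_carlo(f, lows, highs, n=100_000, seed=0):
    """Plain Monte Carlo estimate of ∫ f over the box [lows, highs]."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    vol = 1.0
    for lo, hi in zip(lows, highs):
        vol *= hi - lo
    total = 0.0
    for _ in range(n):
        x = [lo + (hi - lo) * rng.random() for lo, hi in zip(lows, highs)]
        total += f(x)
    return vol * total / n

# ∫ over the 6-dimensional unit cube of (x_1 + ... + x_6); exact value is 3.
est = monte_carlo(sum, [0.0] * 6, [1.0] * 6)
```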
You can find an interesting tool for integration and differentiation approximations at the following link:
http://software.dicam.unibo.it/gdiq-tool
The tool is named GDIQ Tool (Generalized Differential and Integral Quadrature Tool).
You can find the whole theory in the book:
Francesco Tornabene and Nicholas Fantuzzi, Mechanics of Laminated Composite Doubly-Curved Shell Structures: The Generalized Differential Quadrature Method and the Strong Formulation Finite Element Method, Esculapio, 2014.
http://www.amazon.co.uk/dp/887488687X
Best regards,
Francesco Tornabene
Taking the degree of exactness of an integration method as a measure of its goodness: an n-point Gaussian quadrature is exact for polynomials of degree 2n-1, while an (n+1)-point Newton-Cotes formula is exact only for polynomials of degree n (or n+1 when n is even).
I have a program (written in C) that compares most of the available methods of numerical integration of many orders, even the more obscure methods, such as Lobatto. I also have a little program that will compute the coefficients for Gauss quadrature of any order and write out a function that you can use directly. I often use 999-point GQ when I want to compare something to the "exact" solution when no analytical integral is available. I also have a program somewhere that I wrote to calculate the weights for any order of Newton-Cotes. I'll put these on my FTP site if anyone is interested. BTW, contrary to what some may insist, two applications of 5-point GQ on half-intervals is not as accurate as 10-point GQ, nor is 2x10-point as accurate as 20-point, etc. There is at least one reason to use Lobatto over GQ: when the endpoints are critical, as is the case in heat and mass transfer. GQ doesn't include the endpoints.
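Computing Gauss-Legendre coefficients for any order, as mentioned above, can be done with Newton's method on the Legendre polynomial P_n (a self-contained sketch with my own naming, not the poster's C program): the nodes are the roots of P_n and the weights are w_i = 2 / ((1 - x_i^2) P_n'(x_i)^2). The example also checks the degree-of-exactness claim: 5 points integrate x^8 exactly.

```python
import math

def _legendre(n, x):
    """Return (P_n(x), P_n'(x)) via the three-term recurrence."""
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    dp = n * (x * p1 - p0) / (x * x - 1.0)
    return p1, dp

def gauss_legendre(n):
    """n-point Gauss-Legendre nodes and weights on [-1, 1]."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # asymptotic guess
        for _ in range(50):                              # Newton on P_n(x)=0
            p, dp = _legendre(n, x)
            dx = p / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        _, dp = _legendre(n, x)
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights

xs, ws = gauss_legendre(5)  # exact for polynomials up to degree 9
```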
It depends on the regularity of the function. If the function is C^infinity, use Gauss quadrature, but note that the error depends directly on the derivatives of the function.
It is related to the problem, generally. But I think that Clenshaw-Curtis quadrature is one of the best methods for approximating integrals (see [L. N. Trefethen, Spectral Methods in MATLAB, Society for Industrial and Applied Mathematics, Philadelphia, 2000]).
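For the curious, Clenshaw-Curtis can be written directly from the Chebyshev cosine expansion, since the integral of T_j over [-1, 1] is 2/(1 - j^2) for even j and 0 for odd j (a short sketch, names mine; a production version would compute the coefficients with an FFT, as Trefethen describes):

```python
import math

def clenshaw_curtis(f, n=32):
    """(n+1)-point Clenshaw-Curtis rule for ∫_{-1}^{1} f(x) dx (n even).

    Samples f at the Chebyshev points cos(k*pi/n), forms the even
    Chebyshev coefficients a_j, and sums a_j * ∫ T_j = a_j * 2/(1-j^2).
    """
    fk = [f(math.cos(math.pi * k / n)) for k in range(n + 1)]
    total = 0.0
    for j in range(0, n + 1, 2):
        # a_j via the discrete cosine sum (endpoints weighted by 1/2)
        a = (fk[0] + (-1) ** j * fk[n]) / 2.0
        a += sum(fk[k] * math.cos(math.pi * j * k / n) for k in range(1, n))
        a *= 2.0 / n
        if j == 0 or j == n:
            a /= 2.0
        total += a * 2.0 / (1.0 - j * j)
    return total
```

Like Gauss quadrature, this converges geometrically for analytic integrands, while using nested, easily computed nodes.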
Is it correct to say that if we have a wide range of tabulated, empirically determined functions, the best integration method is the one with an optimal combination of speed and round-off error? Which methods are better then: Simpson's? Trapezoidal? Any of the methods mentioned in this thread so far?
The spectral methods described in Trefethen's book and used in MATLAB are definitely very promising for one-dimensional problems. One reason is that they can be implemented quite efficiently and promise exponential accuracy. The question is how to apply these methods to multidimensional quadrature problems. Using them in a product-type way, as is also done with Gauss rules, is probably limited to low-dimensional problems. But I think this point is already under investigation.
A further promising way to tackle quadrature problems is extrapolation. For example, this is essentially the basis of Romberg quadrature: one extrapolates the sequence of quadrature results obtained from trapezoidal or midpoint rules with regularly decreasing subinterval lengths, in a way that reuses the integrand evaluations from previous subinterval lengths. Instead of subdivision methods, extrapolation can also be used if the quadrature problem can be transformed into a series of terms that can then be summed using extrapolation. A simple example: you have a series expansion of the integrand and can integrate this series termwise. A further possibility is to introduce a family of integrands J(a) depending on a parameter a, such that J(0) is the integrand of the original problem, e.g. J(a) = exp(-a x^2) f(x). The evaluation of the integral I(a) for J(a) may then be easier for a > 0, since long-range effects are suppressed by the exponential factor, say. In this way, one obtains I(a) for various values of a and may extrapolate to I(0).
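The Romberg scheme mentioned here, i.e. Richardson extrapolation of trapezoidal sums with halved step sizes and reuse of previous function values, can be sketched as follows (illustrative naming):

```python
import math

def romberg(f, a, b, levels=10):
    """Romberg quadrature: Richardson extrapolation of trapezoidal sums.

    R[i][0] is the trapezoidal rule with 2^i panels, computed by reusing
    R[i-1][0] and adding only the new midpoints; R[i][j] eliminates the
    h^(2j) error term.
    """
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # only the newly introduced odd-indexed points are evaluated
        s = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * s
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]
```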
Whether an extrapolation approach is successful depends on the one hand on the regularity of the integrand, and on the other hand on the availability of an estimate of the error of the sequence of quadrature results used. The latter estimate can then be used to choose a "good" extrapolation method. One also has a chance to estimate the errors and the stability of the extrapolation method used (for instance for the J transformation; see my bibliography).
In principle, such extrapolation approaches can also be used for multidimensional problems, for instance by using subdivisions similarly to the Romberg method. However, the problem is then that reusing previous evaluations of the integrand is less effective than in the one-dimensional case. Making the integrand parameter-dependent and extrapolating to the limit can also be an approach for high-dimensional problems, but probably not for all types of integrands.
It depends on the problem. For example, if the integrand is a smooth function, Gaussian quadrature methods are best, but if the integrand is not smooth, it is better to use local methods such as Simpson's rule. It also depends on how smooth the integrand is: for a function that has problems only at some points, it is better to use an adaptive quadrature method, but if a function is nowhere differentiable (for example, Brownian motion), the trapezoidal rule is appropriate. If high accuracy does not matter, Monte Carlo methods can be useful, especially in high dimensions.
Dr. A. H. Bhrawy, I proposed a numerical quadrature for the integration of polynomials and even for general mathematical functions; would it be good here? Article: An Accurate Quadrature for the Numerical Integration of Poly...
I think Simpson's rule is a method of numerical integration that improves on the trapezoidal method. It is faster and more accurate, which is explained by the fact that each Simpson panel includes a midpoint that provides a better approximation.
Maybe it is better to first state (1) what exactly you mean by "best", (2) whether you are talking about a general application or a specific problem, and (3) more about the numerical integration itself (in FEM to compute element characteristics, time integration for transient analysis, or something else).
With thanks for your kind attention, have a very nice and healthy day and future.