This is a good question which is very difficult to answer in general. Normally, a good deal of insight into the underlying problem is required.
For example, consider the problem of computing the square root of a floating point number x = m * 2^e, where the mantissa m is between 1 and 2 and the exponent e is an integer. If e = 2k is even, then sqrt(x) = sqrt(m) * 2^k, and if e = 2k+1 is odd, then sqrt(x) = sqrt(2m) * 2^k. It follows that you only need the ability to generate a good initial guess for sqrt(y) when y is between 1 and 4. You can do this by approximating the square root function on this interval with a straight line. The best uniform approximation is really good in this case.
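Here is a minimal Python sketch of that recipe. The straight line y/3 + 17/24 is my own working of the best uniform linear approximation to sqrt on [1, 4] (its error equioscillates with magnitude 1/24); the rest is plain Newton iteration.

```python
import math

def sqrt_newton(x):
    """Square root via range reduction plus Newton's method (a sketch)."""
    assert x > 0.0
    m, e = math.frexp(x)              # x = m * 2**e with m in [0.5, 1)
    if e % 2:                         # odd exponent: y = 2m lies in [1, 2)
        y, k = 2.0 * m, (e - 1) // 2
    else:                             # even exponent: y = 4m lies in [2, 4)
        y, k = 4.0 * m, e // 2 - 1
    # Initial guess: best uniform linear fit to sqrt on [1, 4], max error 1/24.
    r = y / 3.0 + 17.0 / 24.0
    for _ in range(4):                # each step roughly doubles the digits
        r = 0.5 * (r + y / r)
    return math.ldexp(r, k)           # sqrt(x) = sqrt(y) * 2**k

print(sqrt_newton(2.0), math.sqrt(2.0))
```

Four Newton steps from a guess that is already within a few percent are more than enough to reach double precision.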
Sometimes you can generate a good initial guess using a "light" version of the algorithm which you will ultimately use to refine the solution. For example, when computing a firing solution for a piece of artillery, you can wrap a Newton loop around a time integrator which computes the shell's trajectory using a low-order method and a large time step. This typically converges quickly, especially if combined with table lookup and interpolation. You can then shift to a high-order method and a small time step and achieve much faster convergence compared with using the expensive method from the start.
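As a toy illustration of that pattern, here is a sketch in Python. All the physical constants, the drag model, and the tolerances are made-up illustrative numbers, and I use a secant update on the angle rather than a true Newton step (which would require differencing the integrator).

```python
import math

def shell_range(angle, v0=300.0, dt=0.05, drag=1e-4, g=9.81):
    """The cheap "light" model: explicit Euler with a large time step
    and quadratic drag.  Returns the horizontal range in metres."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while True:
        v = math.hypot(vx, vy)
        x_new, y_new = x + vx * dt, y + vy * dt
        vx -= drag * v * vx * dt
        vy -= (g + drag * v * vy) * dt
        if y_new < 0.0:                          # landed: interpolate to y = 0
            return x + (x_new - x) * y / (y - y_new)
        x, y = x_new, y_new

def firing_angle(target, a0=0.1, a1=0.7, tol=1.0, maxit=30):
    """Secant iteration on the launch angle using only the cheap model."""
    r0, r1 = shell_range(a0) - target, shell_range(a1) - target
    for _ in range(maxit):
        a2 = a1 - r1 * (a1 - a0) / (r1 - r0)
        r2 = shell_range(a2) - target
        if abs(r2) < tol:                        # a metre is plenty for a seed
            return a2
        a0, r0, a1, r1 = a1, r1, a2, r2
    return a1

seed = firing_angle(3000.0)    # hand this to the expensive high-order solver
print(math.degrees(seed))
```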
Ordinary differential equations are frequently solved using a combination of an explicit method and an implicit one. The explicit method (the predictor) is used to initialize the guess for the solution of the implicit method (the corrector). These methods are known as predictor/corrector methods for ODEs.
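The simplest such pair is an explicit Euler predictor followed by a single pass of the trapezoidal-rule corrector; this particular pairing is my choice of illustration (Adams-Bashforth/Adams-Moulton pairs work the same way).

```python
import math

def pec_step(f, t, y, h):
    """One predict-evaluate-correct step: explicit Euler predicts,
    the (implicit) trapezoidal rule corrects using that prediction."""
    y_pred = y + h * f(t, y)                            # predictor
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))   # corrector, one pass

# Test problem: y' = -2y, y(0) = 1, exact solution exp(-2t)
f = lambda t, y: -2.0 * y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y, t = pec_step(f, t, y, h), t + h
print(y, math.exp(-2.0 * t))   # computed vs exact value at t = 1
```

Iterating the corrector to convergence would solve the implicit equation exactly; in practice one or two passes from the predicted value are usually enough.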
If you have a specific problem in mind, then it would be interesting to hear some of the details.
Hi! I faced the same problem. I can suggest a better algorithm which has the same convergence rate as Newton-Raphson but does not have its limitations: Muller's method. It can obtain complex roots as well, and requires three guesses to begin with. You can place the guesses where you think your roots might be, which lets you hunt in three regions at once. Even if you do not place the three guesses at decent locations, the algorithm will find the root. The method does not require evaluation of derivatives of the function, and the chances of it failing to converge are slim. MATLAB and Mathematica do not ship a package for it, but you can code it yourself in Fortran/C/C++/Python (a short Python sketch is included at the end of this post). You can find more details about the method in any of these three books:
1) Numerical Recipes, by Press, Teukolsky, Vetterling, and Flannery
2) Numerical Methods for Engineers, by Chapra and Canale
3) Introductory Methods of Numerical Analysis, by Sastry
I used the method to solve an equation for which all known root-finding methods had failed badly, but Muller's method gave me the correct solution with decent accuracy; see Section 5 of the attached report.
Technical Report Spatial dispersion of elastic waves in a bar characterized b...
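As promised above, here is a minimal Python sketch of the Muller iteration: fit a parabola through the three most recent iterates and step to its nearest root. It is bare-bones (no safeguards against degenerate parabolas or a zero denominator), just enough to show the shape of the algorithm.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, maxit=100):
    """Muller's method: derivative-free, and happily wanders into the
    complex plane even from real starting guesses."""
    for _ in range(maxit):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)              # parabola through the 3 points
        b = a * h2 + d2
        disc = cmath.sqrt(b * b - 4 * a * f2)
        den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * f2 / den                     # step to the nearer root
        x0, x1, x2 = x1, x2, x2 + dx
        if abs(dx) < tol:
            return x2
    return x2

# z**2 + 1 has no real roots, yet real guesses still find z = -i:
print(muller(lambda z: z * z + 1, 0.5, 1.0, 1.5))
```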
The response by Mikkelsen is a good one. I especially like the first two sentences.
More information is needed to give a proper response to your question. Is the function whose zeros you seek a real valued function of a single real variable? Is it a function from R^n into R^n for n>1? Is it a complex function of a complex variable? Hopefully its derivative exists in a neighborhood about the zeros you're looking for, or else Newton's method will have trouble finding them.
Recently, I worked on the problem of solving f(z) = 0, where f is a particular complex function of a complex variable. There are multiple zeros, and I was looking for all of them. The main obstacle in my case was obtaining some good initial guesses. I used both Newton's method and Muller's method, and neither gave me all of the zeros because I didn't have good enough initial guesses. I missed some of the most important zeros using these methods as stand-alone methods.
However, I found a great reference for such problems (freely available online) as follows:
L. M. Delves and J. N. Lyness, "A numerical method for locating the zeros of an analytic function", Mathematics of Computation 21(100), Sept. 1967.
The method presented there does not require that you know any initial guesses. Rather, it finds zeros without any initial guesses provided that you have a general idea of where they are. Specifically, if f is analytic on and inside a contour C that encloses the zeros you're looking for (and does not pass through one of them), then the method determines (1) how many there are inside C and (2) what they are. Delves and Lyness still suggest polishing the computed zeros with Newton's method, but I've found that they are pretty accurate without polishing them. (I polish them anyway though.)
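To give a flavour of how it works, here is a small Python sketch of the contour-integral "moments" that drive the method, namely s_p = (1/(2*pi*i)) times the integral of z^p f'(z)/f(z) around C. This shows only the counting step plus the first moment for a circular C; the full Delves-Lyness algorithm builds a polynomial out of such moments when several zeros lie inside, and it assumes f' is available.

```python
import numpy as np

def zero_moments(f, df, center, radius, pmax=1, n=4096):
    """Moments s_p of the zeros of f inside the circle |z - center| = radius,
    computed with the trapezoidal rule (spectrally accurate on a circle).
    s_0 = number of zeros inside, s_1 = their sum, and so on."""
    e = np.exp(2j * np.pi * np.arange(n) / n)
    z = center + radius * e
    w = radius * e * df(z) / f(z) / n      # dz / (2*pi*i) folded into weights
    return [np.sum(z**p * w) for p in range(pmax + 1)]

# Example: f(z) = z**2 + 1 has exactly one zero (z = i) inside |z - 0.8i| = 0.5
f, df = lambda z: z * z + 1, lambda z: 2 * z
s0, s1 = zero_moments(f, df, center=0.8j, radius=0.5)
print(round(s0.real), s1 / s0)   # count = 1; s1/s0 is a Newton-ready guess
```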
If your function (call it f) happens to be a real function of a real variable and you're trying to find real x such that f(x) = 0, then perhaps you can extend f to the complex plane (simply replace real x by complex z and solve f(z) = 0 instead). Any zeros whose imaginary parts are nonzero (beyond some tolerance) can be discarded. Of course, it had better be the case that f(z) is analytic on and inside C. When you polish the resulting zeros with Newton's method, set any small imaginary part to 0 and use the real-valued function f(x).
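A sketch of that filter-and-polish step, assuming you already have a list of candidate complex zeros from the contour method (the function name and tolerances here are mine):

```python
def polish_real(f, df, zeros, imag_tol=1e-8, newton_iters=3):
    """Keep only the (numerically) real zeros, drop their tiny imaginary
    parts, and polish each with a few real Newton steps."""
    polished = []
    for z in zeros:
        if abs(z.imag) < imag_tol:
            x = z.real
            for _ in range(newton_iters):
                x -= f(x) / df(x)
            polished.append(x)
    return polished

# Candidates for x**2 - 2 = 0; the genuinely complex one is discarded
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
print(polish_real(f, df, [1.414 + 1e-12j, 0.3 + 0.9j]))
```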
The upshot is that the above reference can be used to determine extremely good initial guesses, requiring only one or two iterations of Newton's method for polishing them.
I look forward to reading the paper referenced by Wilson in detail. At first glance, I am thrilled by the method's potential for parallelism. Any algorithm which builds so heavily on numerical integration should be "easy" to parallelize.
Another idea is to use a homotopic method, e.g. H(t) with H(1) = f, the function for which you seek zeroes, and H(0) = m, a model function for which you know all the zeroes. Then, the algorithm can follow the zeroes as you vary t from 0 to 1. The zeroes at t should be good initial values at t + dt for small dt.
This should work also in multivariate contexts. Of course, problems still to be solved are the choice of a suitable model function m and the behaviour of the zeroes as t varies; more on this below.
Could you please describe the problem which Muller's method was unable to solve? I have tested the method on multiple problems, and it gave me good results every time. This makes me curious about your problem.
So the idea is to take the first zero x=1 of H(0) = m as the initial approximation for the (hopefully fast-converging) calculation of the zero z_1(eps) of H(eps), where eps > 0 is small, then the second m-zero x=2 as the initial approximation for the calculation of the zero z_2(eps) of H(eps), ..., and the n-th zero x=n of m as the initial approximation for the calculation of the zero z_n(eps) of H(eps). Starting from the zeroes z_i(eps), you then go on to compute the zeroes of H(2*eps), and so on, up to the calculation of the zeroes of H(1) = f.
The calculation of the "next" n zeroes from the previous ones could be done via some standard solver like Newton-Raphson in parallel, or via a simultaneous solver for all n roots. There may be complications for multiple or closely spaced zeroes of f or of H(t), 0 < t < 1. I guess that these problems could be mitigated by a larger spacing and/or a better distribution of the zeroes of m in this example.
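A minimal sketch of this continuation loop, using the linear homotopy H(t, z) = (1 - t) m(z) + t f(z). The concrete f and m below are my own illustrative choices; m(z) = z^3 - 1 has three well-spread known zeroes, and for this particular pair the root paths stay well separated all the way from t = 0 to t = 1.

```python
import numpy as np

f,  df = lambda z: z**3 + z + 1, lambda z: 3*z**2 + 1   # target function
m,  dm = lambda z: z**3 - 1,     lambda z: 3*z**2       # model: zeroes known

def H(t, z):  return (1 - t) * m(z) + t * f(z)          # linear homotopy
def dH(t, z): return (1 - t) * dm(z) + t * df(z)

def newton(t, z, tol=1e-12, maxit=50):
    """Refine one zero of H(t, .) starting from the previous zero."""
    for _ in range(maxit):
        step = H(t, z) / dH(t, z)
        z -= step
        if abs(step) < tol:
            break
    return z

roots = [np.exp(2j * np.pi * k / 3) for k in range(3)]  # zeroes of m at t = 0
for t in np.linspace(0.0, 1.0, 101)[1:]:                # march t up to 1
    roots = [newton(t, z) for z in roots]               # previous zeroes seed

print(roots)                        # the three (complex) roots of z**3 + z + 1
print([abs(f(z)) for z in roots])   # residuals should be near machine zero
```

Each of the n root updates is independent of the others, so the inner list comprehension is exactly the part that could run in parallel.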
As I recall, the version of Muller's method I used required initial guesses for all zeros of interest. Your reply seems to suggest you only need three, so you may have something different from what I had available at the time.
Due to proprietary concerns, I can't in good conscience tell you what the function is, but I can tell you what I believe went wrong in utilizing Muller's method. There was actually a family of functions whose zeros I was looking for. That is, I wanted all complex z such that f(z,c) = 0, where c is a given complex constant. The value of c depends on the particular problem a user might be interested in, so the possibilities are endless. Unfortunately, the number of zeros varies considerably as a function of c. Just by plotting |f(z,c)|, I could easily see how many zeros there are for a particular value of c, but I did not know how to enumerate them in general (as a function of c).
So, the main problem I was having was that I didn't know for sure how many zeros I was looking for (in general). However, I was able to analyze the problem enough to come up with initial guesses for some of the zeros. For those, Muller's method had no trouble refining my guesses. But it did not return the zeros that I had no initial guesses for. It appeared that I was one zero short, and the zero I was missing ironically happened to be the most important one.
Now if somehow I had been able to enumerate and determine initial guesses for all of the zeros, I have no doubt that Muller's method would have worked well. But that's precisely the advantage of the method described by Delves and Lyness: you don't need to know how many zeros you're looking for, and you don't need to supply any initial guesses.
As described above, it appears that the homotopic method also requires knowing how many zeros you're looking for. Therefore, it looks like the method of Delves and Lyness has an advantage over that method.
The method of Delves and Lyness is very nice for analytic functions. But it offers no direct generalization to multivariate cases that, I think, are much more demanding.
Thanks for the information. As I understand it, in most cases the number of roots is determined by the degree of the equation. In my problem, the equation was of arbitrary degree, so the number of roots was unknown. Second, the coefficients of the equation were complex as well, which increased the complexity of the problem; obviously, the roots were complex too. The Muller's algorithm I applied required three guesses to begin with; the guesses could be real or complex. The algorithm then did a very good job, as I have discussed in my paper mentioned above.