I already know there are a few methods to compute it. A good review can be read at http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf What I'm interested in is your opinion on what you consider to be the best method. The dimension of the matrix A can be large. I'm trying to see whether my method is comparable to the best practice.
Use the Taylor expansion exp(At) = I + tA + (1/2)(tA)^2 + (1/6)(tA)^3 + ... The series terminates exactly when A is nilpotent, i.e. A^p = 0 for some p; more generally, the Cayley-Hamilton theorem lets you reduce exp(At) to a polynomial in A of degree at most n-1. This is a standard approach that you can find in Continuous-Time Systems, Springer, 2007, p. 224. The book is available on my site here.
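As a small illustration of the terminating-series case, here is a sketch in Python/NumPy (the function name `expm_taylor_nilpotent` is mine, not from the book): for a nilpotent A the Taylor series is a finite sum, so no truncation error is introduced.

```python
import numpy as np

def expm_taylor_nilpotent(A, t):
    """Matrix exponential via the Taylor series.
    The series terminates exactly when A is nilpotent (A^p = 0, p <= n)."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, n + 1):         # degree < n suffices for nilpotent A
        term = term @ (t * A) / k     # term = (tA)^k / k!
        if not np.any(term):          # series terminated: (tA)^k = 0
            break
        result += term
    return result

# Strictly upper-triangular matrices are nilpotent, so the series is exact:
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
E = expm_taylor_nilpotent(A, 2.0)
# E = I + (tA) + (tA)^2/2 = [[1, 2, 2], [0, 1, 2], [0, 0, 1]]
```

For a general (non-nilpotent) A the raw series does not terminate and converges slowly when t‖A‖ is large, which is exactly the problem the scaling-and-squaring methods discussed below address.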
If everyone thinks that the Cayley-Hamilton method is the one to beat, I have to agree. The accuracy is there. However, I personally have reservations about using it to compute the transition matrix of a large system. When the eigenvalues of the characteristic equation are all real and distinct, the method is quite straightforward to program. But the eigenvalues can also be complex or repeated, which complicates the implementation.
One often-used, robust and stable algorithm is the "scaling and squaring" approach, which can be found, e.g., in MATLAB's expm routine. More details can be found in N. Higham's textbook "Functions of Matrices".
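The idea behind scaling and squaring can be sketched in a few lines: scale A by 2^s so its norm is small, exponentiate the scaled matrix with a cheap local approximant, then square the result s times using exp(A) = (exp(A/2^s))^(2^s). This is only a toy sketch with a truncated Taylor core (production codes such as SciPy's `scipy.linalg.expm` use a Padé approximant with a more careful choice of s, per Higham); `expm_scaling_squaring` is a name I made up.

```python
import numpy as np

def expm_scaling_squaring(A, taylor_terms=12):
    """Toy scaling-and-squaring: scale so ||A / 2^s||_1 <= 1,
    apply a short Taylor series, then square s times."""
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 0 else 0
    B = A / 2**s
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, taylor_terms + 1):
        term = term @ B / k          # term = B^k / k!
        E += term
    for _ in range(s):               # undo the scaling by repeated squaring
        E = E @ E
    return E
```

Because the series is only applied to a matrix of norm at most 1, a dozen terms already give near machine-precision accuracy; the squaring step is where a careless implementation can lose accuracy, which is why Higham's analysis matters in practice.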
However, if A is large and sparse and you are really interested in exp(At)v, i.e. the product of exp(At) with a vector v, a variety of other methods opens up. These are usually better suited to the large-scale case.
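For instance, SciPy ships `scipy.sparse.linalg.expm_multiply` (based on the Al-Mohy and Higham action-of-the-exponential algorithm), which computes exp(tA)v without ever forming the dense matrix exp(tA). A sketch on a large tridiagonal (1-D Laplacian stencil) matrix; the sizes and t value here are arbitrary choices of mine:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Large sparse A: computing exp(tA) densely would need O(n^2) memory.
n = 10_000
A = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n))

v = np.zeros(n)
v[n // 2] = 1.0          # unit impulse in the middle of the domain

t = 0.5
w = expm_multiply(t * A, v)   # exp(tA) @ v, matrix exponential never formed
```

For a moderate n one can check the result against the dense route, `scipy.linalg.expm(t * A.toarray()) @ v`; the two agree to machine precision, but only the sparse route scales to n in the millions.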
Try the scaling and squaring approach (with Padé approximation), as suggested by Patrick. You may refer to Prof. Nick Higham's homepage for more details.