Matrix inversion is a good example of conservatism in science. Unless you have very large sparse matrices (e.g. larger than 1000 x 1000, with most matrix elements zero), there is only one method I can recommend: Penrose's pseudo-inverse (also known as the Moore-Penrose inverse), which works for arbitrary m x n matrices and in all cases yields a meaningful result, which for invertible matrices reduces to the usual matrix inverse. Most books on linear algebra don't mention the method. Mathematica implements it, under the name PseudoInverse, so that it also works for complex-valued matrices. The method is a simple byproduct of the deep and insightful singular value decomposition, also known as the SVD. You have to study it if you want to know the state of the art in inverting matrices. With the classical methods (such as the one mentioned in the first answer) you always have a problem getting accurate results for matrices with nearly linearly dependent rows or columns. In the many cases in which I made comparisons, pseudo-inversion was always much faster and much more accurate than LU decomposition or Cholesky decomposition.
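A minimal sketch of the idea in Python/NumPy (my own illustration, not the poster's Mathematica or Numerical Recipes code): compute the SVD A = U S V^H, invert only the nonzero singular values in S, and multiply the factors back in reverse order.

```python
import numpy as np

def pinv_via_svd(A, rtol=1e-15):
    """Moore-Penrose pseudoinverse of an arbitrary m x n matrix via SVD."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    # Invert only singular values above a relative tolerance;
    # the rest are treated as exactly zero (rank deficiency).
    cutoff = rtol * max(A.shape) * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    return Vh.conj().T @ (s_inv[:, None] * U.conj().T)

# For an invertible matrix the pseudoinverse reduces to the ordinary inverse:
A = np.array([[4.0, 7.0], [2.0, 6.0]])
assert np.allclose(pinv_via_svd(A), np.linalg.inv(A))

# For a rectangular (3 x 2) matrix it still gives a meaningful result:
B = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(pinv_via_svd(B) @ B)   # ~ 2 x 2 identity, since B has full column rank
```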
It looks like computing the pseudoinverse is more computationally intensive than computing the inverse. My understanding is that if we want the solution of the linear equations AX=B to be sparse, we use the Gaussian elimination method, but if we want the solution to have the least norm, we compute it using the pseudoinverse.
Whether the solution is sparse is determined by the problem, not by our wishes. If you need a fairly accurate solution, the pseudoinverse is nearly always the fastest method (if you use a state-of-the-art algorithm). That in the case of a non-invertible, possibly non-square, matrix A you always get the x of minimal norm that minimizes |Ax - b| (I write vectors lower case) is a nice property. If, however, A is invertible, x is uniquely determined anyway, and then this property plays no role. What does play a role is that from the singular values, which the method delivers automatically, you can reliably estimate the error of the solution.
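To illustrate that last point (a NumPy sketch with my own example matrix): the ratio of the largest to smallest singular value bounds how strongly input errors are amplified in the solution.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])   # nearly linearly dependent rows
s = np.linalg.svd(A, compute_uv=False)      # singular values, descending

cond = s[0] / s[-1]                         # condition number sigma_max / sigma_min
print(f"condition number ~ {cond:.1e}")
# The relative error in x can be amplified by up to ~cond times the
# relative error in b (and in A), so with machine epsilon ~1e-16 we
# expect at best roughly cond * 1e-16 relative accuracy in x.
```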
Of course, if the solution is unique we do not have any choice. But if the system is underdetermined, such as x1 + 2*x2 + 3*x3 = 6, we can choose a solution like (6, 0, 0), which is sparse, or (3/7, 6/7, 9/7), which has minimal norm, as pointed out above.
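In NumPy terms (an illustrative sketch of this very example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0]])   # one equation, three unknowns
b = np.array([6.0])

x_min = np.linalg.pinv(A) @ b     # minimum-norm solution
print(x_min)                      # [3/7, 6/7, 9/7] ~ [0.4286, 0.8571, 1.2857]

x_sparse = np.array([6.0, 0.0, 0.0])  # a sparse solution, chosen by hand
print(np.linalg.norm(x_sparse), np.linalg.norm(x_min))  # 6.0 vs ~1.604
```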
To my knowledge, it is better to avoid computing the inverse of a matrix A unless it has some desirable property; for instance, when A is a diagonal matrix. In fact, even for a trivial example where A is, say, 2x2, rounding error may lead to a totally wrong answer. In computational mathematics, a common way to "compute" the inverse of a matrix is by solving a linear system. Take A^{-1} b for example. We should not compute A^{-1} first and then multiply it by b; instead, we should obtain it by solving Ax=b with some appropriate method, where x is an approximation of A^{-1} b.
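A small NumPy sketch of that recommendation (my own illustration): solve the system directly rather than forming A^{-1} explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Preferred: solve the system directly (LU-based here, via LAPACK).
x_solve = np.linalg.solve(A, b)

# Discouraged: form the explicit inverse, then multiply.
x_inv = np.linalg.inv(A) @ b

# Both agree for well-conditioned A, but solve() does less work
# and typically leaves a smaller residual:
print(np.linalg.norm(A @ x_solve - b), np.linalg.norm(A @ x_inv - b))
```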
In a word, the inverse of a matrix is more of theoretical interest than numerical interest.
You share with us an opinion that nearly all numerical textbooks propagate but that I nevertheless consider noxious obscurantism. First, one obviously should take into account whether one has only a single problem of the form A x = b or many: A x_1 = b_1, ..., A x_n = b_n. If n is sufficiently large, the inverse matrix is obviously more economical. Second, if you use the Penrose inverse rather than one of the old-fashioned methods (like LU decomposition, which unfortunately still takes up the most space in textbooks), you will not get 'totally wrong answers'.
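The economics can be sketched as follows (my own NumPy illustration, not from the thread): pay the O(n^3) cost once, after which each additional right-hand side costs only a matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 300, 1000                  # one matrix, many right-hand sides
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, k))   # columns b_1, ..., b_k

# Pay the O(n^3) cost once...
A_pinv = np.linalg.pinv(A)

# ...then each solve is only a matrix-vector product, O(n^2):
X = A_pinv @ B                    # all k solutions at once
print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))  # small relative residual
```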
Thank you for pointing out the problem. That I suggested solving a corresponding linear system instead of computing the inverse explicitly is partly because my research area is linear system solvers. Frankly speaking, I have less knowledge of Penrose inverse implementations than of linear system solvers.
As for the multiple right-hand-side linear system AX=B, I know there are some well-known methods (e.g., block, global, or seed Krylov subspace methods) that are applied to large linear systems. Maybe the Penrose inverse is also an effective alternative. Could you please recommend some references for the numerical implementation of the Penrose inverse? Thank you so much.
The Penrose inverse is obtained by inverting the diagonal factor resulting from the singular value decomposition (SVD). My source for the SVD algorithm is Press et al., Numerical Recipes in C, 2nd edition, Cambridge. Unfortunately, the algorithm there is for real matrices only (i.e. not complex). When switching to this method many years ago, I did some comparisons with the LU decomposition from the same book and found my implementation of the Penrose inverse (based on the SVD from Press et al., as I said) much faster and by far more accurate than LU. A bit more is in my earlier answer beginning 'Matrix inversion is a good example of conservatism in science.'
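For readers without access to Numerical Recipes, an illustrative check in NumPy (whose pinv is likewise SVD-based and, unlike the NR routine, handles complex matrices): the result satisfies the four Penrose conditions that define the pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
P = np.linalg.pinv(A)        # SVD-based, works for complex matrices

H = lambda M: M.conj().T     # Hermitian (conjugate) transpose
assert np.allclose(A @ P @ A, A)      # 1) A P A = A
assert np.allclose(P @ A @ P, P)      # 2) P A P = P
assert np.allclose(H(A @ P), A @ P)   # 3) A P is Hermitian
assert np.allclose(H(P @ A), P @ A)   # 4) P A is Hermitian
```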