Yes, Jose Vegas is right. In fact, if det(A) is non-zero you should have A·Inv(A) = Inv(A)·A = I, the identity matrix. However, for large n it is difficult to compute det(A). If you apply the Gauss elimination method and, at some point during the elimination process, a diagonal element becomes zero and cannot be made non-zero by an elementary row exchange, then the matrix is singular and the inverse does not exist.
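For illustration, a minimal NumPy sketch of this elimination test (the function name and the tolerance `1e-12` are my own choices; in floating point one tests against a tolerance rather than exact zero):

```python
import numpy as np

def is_singular_by_elimination(A, tol=1e-12):
    """Gaussian elimination with partial pivoting: if at some step no
    pivot above `tol` can be found by a row exchange, report singular.
    (The tolerance is an assumption; an exact zero is rare in floats.)"""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))   # best pivot reachable by row exchange
        if abs(U[p, k]) < tol:                # no usable pivot -> singular
            return True
        U[[k, p]] = U[[p, k]]                 # swap rows k and p
        U[k+1:] -= np.outer(U[k+1:, k] / U[k, k], U[k])  # eliminate below pivot
    return False
```

For example, `[[1, 2], [2, 4]]` (second row twice the first) is reported singular, while `[[2, 1], [1, 1]]` is not.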
If A is square you need det(A) nonzero. If A is not square and you don't need the other condition A·A^(-1) = I, then you are looking for "left inverses", and that is a long story. I would check Wikipedia for help. Cheers!
It depends on the matrix. If it has integer entries, you can do Gauss-Jordan elimination: if you don't end up with a zero row, your matrix is invertible. Of course, for small n, computing the determinant is more efficient. Another method is to find the eigenvalues: if zero is not among them, then again A is invertible. There are nearly ten different equivalent characterizations for this task.
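The eigenvalue test mentioned above can be sketched in a few lines of NumPy (the example matrix is my own, chosen so the eigenvalues are easy to check by hand):

```python
import numpy as np

# Invertibility via the eigenvalue test: A is invertible iff 0 is not
# an eigenvalue of A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues = np.linalg.eigvals(A)        # for this A: 1 and 3
invertible = not np.any(np.isclose(eigenvalues, 0.0))
```

Here `invertible` comes out `True`, since the eigenvalues 1 and 3 are both non-zero.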
How you choose to show existence of an inverse really depends on the matrix. There are instances where finding det(A) is far more difficult than proving [Ax = 0 implies x = 0].
If your square matrix is invertible, it must have full rank, and this implies det(A) ≠ 0.
Otherwise, the product of the matrices A and, in this case, A^-1 gives a matrix whose rank is at most the minimal rank of A and A^-1. Therefore you cannot obtain the identity matrix, which has full rank n.
This implies that only the classical inverse of A satisfies the equation A(A^-1) = I_n.
And the rank of the matrix is not full if it has
- a zero column or row,
- a column (row) that is equal to another column (row),
- a column (row) that is a linear combination of the remaining columns (rows) of the matrix.
In this way, even if the matrix is large, we may determine whether A is invertible without computing its determinant.
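Each of the bullets above produces a rank-deficient matrix, and a numerical rank computation detects all three cases without forming a determinant. A small sketch (the example matrix is mine; `matrix_rank` is itself SVD-based):

```python
import numpy as np

n = 4
A = np.eye(n)
A[:, 3] = A[:, 1] + 2 * A[:, 2]   # third bullet: one column is a linear
                                  # combination of other columns
invertible = np.linalg.matrix_rank(A) == n   # rank 3 < 4, so not invertible
```

The same check covers a zero column (rank drops by one) and a repeated column (a special case of a linear combination).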
R. Mittal's answer was sufficient in the 19th century (when nobody considered large matrices). Today the only reasonable way is to compute a singular value decomposition (SVD) and inspect the singular values (i.e. let the computer do that). If at least one of them is zero (I don't discuss the question 'what is zero' for results of lengthy numerical computations), then the matrix has no classical inverse. It does have a pseudo-inverse (Penrose inverse), though, which is closely related to the SVD and can replace the classical inverse in many applications where classically one used the inverse. Peter's contribution to that side of the problem is valuable. Of course, all this stuff is in Wikipedia, and everybody who works with matrices in all but the most trivial applications needs to know it!
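A sketch of this SVD test (the example matrix and the tolerance formula are my own choices; the tolerance shown is one common way to decide 'what is zero' relative to the largest singular value):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])              # rank 2, hence singular
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
tol = s[0] * max(A.shape) * np.finfo(float).eps   # a numerical notion of 'zero'
invertible = bool(np.all(s > tol))       # False: the smallest one is ~1e-16
A_pinv = np.linalg.pinv(A)               # the Penrose inverse exists regardless
```

Even though `A` has no classical inverse, the pseudo-inverse still satisfies A·A⁺·A = A.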
I strongly feel people are unnecessarily showing off extra knowledge without relevance to the question. It is a simple question about when the inverse exists, and the simple answer is that the matrix has to be non-singular, that is, its determinant non-zero.
For any (m,n) matrix A there is exactly one (n,m) matrix B such that the linear equation A x = y (with x an n-dimensional column vector and y an m-dimensional column vector) has a solution x = B y which is 'as good as possible'. This means that for any given y the error ||A B y - y|| is not larger than ||A x - y|| for any x. And among all x for which ||A x - y|| is minimal, the 'solution' x = B y has the smallest norm. There are extremely stable and efficient algorithms for computing B (all based on SVD) which in the case n = m outperform LU-decomposition. I'm enthusiastic about this method since it once was the key to designing a multispectral camera which, for cost efficiency, had 10 color filters on a filter wheel whose spectral transmissions were strongly overlapping. Getting a fast and stable estimate of spectral scene reflectance from the responses in the 10 overlapping spectral channels was perfectly possible with this generalized inverse. Actually, I 'invented' the method for this purpose and had to discover later that I wasn't the first. R. Mittal will not fail to recognize that again unnecessarily 'extra knowledge without relevance to the question' has thus been communicated.
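A hedged sketch of this setup: the matrix sizes (10 channels, 31 spectral bands) and the random entries below are made-up stand-ins, not the actual camera data, but they show the unique (n,m) matrix B at work:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 31))       # hypothetical: 10 overlapping channels x 31 bands
B = np.linalg.pinv(A)          # the unique (n, m) matrix described above
y = rng.random(10)             # measured channel responses
x = B @ y                      # minimum-norm least-squares reconstruction
```

Because the system is underdetermined, `x` agrees with the minimum-norm solution returned by a least-squares solver, and B satisfies A·B·A = A.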
Dear Prof. @Ulrich Mutze, Thank you for the answer. I used this method to find the solution to the degree reduction of triangular Bézier surfaces, see the link if you like:
Victor, the question was 'how do we determine ...?' If the matrix is large and the determinant is very small, then your answer is of little help. Unfortunately, there are many mathematicians and many mathematical textbooks which completely ignore the challenges arising in practical computational situations.
A simple case: a matrix with a dominant diagonal (or a permutation of such a matrix). Say, all diagonal elements are positive and all off-diagonal elements are non-positive. Another case: the product of the diagonal elements is at least N! times greater than the maximal product of off-diagonal elements taken from distinct rows and columns.
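The first case is the Levy-Desplanques theorem: a strictly diagonally dominant matrix is invertible. A small sketch of the check (the function name is my own):

```python
import numpy as np

def strictly_diagonally_dominant(A):
    """Levy-Desplanques: strict diagonal dominance guarantees invertibility."""
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d    # row sums of off-diagonal magnitudes
    return bool(np.all(d > off))
```

For example, `[[4,-1,-1],[-1,4,-1],[-1,-1,4]]` passes (4 > 2 in every row), so it is invertible without any factorization; note the test is only sufficient, so a failing matrix may still be invertible.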
Computing the determinant is a P problem, simply by using LU factorization.
Now, as far as numerical computation is concerned, another factorization should probably be used. This also depends on the structure of the matrix (Toeplitz, tridiagonal, etc.). QR is one of the most stable algorithms.
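As a sketch of the LU route (the example matrix is mine): NumPy's `slogdet` computes the determinant's sign and log-magnitude via an LU factorization, which stays finite where a raw product of pivots would over- or underflow for large n:

```python
import numpy as np

A = np.array([[4., 3.],
              [6., 3.]])
sign, logabsdet = np.linalg.slogdet(A)   # LU-based, polynomial time
det = sign * np.exp(logabsdet)           # here: 4*3 - 3*6 = -6
```

For invertibility alone, one checks `sign != 0` (a singular matrix gives sign 0 and logabsdet of minus infinity) rather than reconstructing `det` itself.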
If you are asking how to determine the inverse of a matrix, then there are many ways to find it. But if you are asking about the existence of the inverse, that is, how we show that a matrix has an inverse, then I recommend transforming the given matrix into a triangular matrix. If there is any zero element on the diagonal, then the matrix is singular (having zero determinant) and has no inverse, since the determinant of a triangular matrix equals the product of its diagonal terms.
There is another way to check whether a matrix has an inverse: reduce the matrix to row echelon form, and if a zero row appears somewhere during the process, then the matrix does not have an inverse.
I agree with Sardar. In fact, the only "civilized" way of obtaining determinants is by performing some kind of preparation, the best of which is Gaussian reduction (to be understood in a general sense) to row echelon form. Let me just mention that, in addition to row reduction, column operations are also allowed in this context.
It is good to recall the following result (since the ring of definition of the entries of the matrix was not specified): a square matrix M over a commutative ring R is invertible if and only if det(M) is a unit of R. We use the same formula M · adj(M) = det(M) · I_n, which holds in this general setting. See Brown's book, Matrices over Commutative Rings.
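A small illustration over the ring Z (the example matrix is my own): an integer matrix has an integer inverse exactly when det(M) is a unit of Z, i.e. +1 or -1, and the adjugate formula then produces that inverse:

```python
import numpy as np

M = np.array([[2, 1],
              [1, 1]])                       # det = 1, a unit of Z
det = round(np.linalg.det(M))
adj = np.array([[ M[1, 1], -M[0, 1]],
                [-M[1, 0],  M[0, 0]]])       # adjugate of a 2x2 matrix
invertible_over_Z = det in (1, -1)
M_inv = adj * det                            # 1/det = det when det = +/-1
```

By contrast, `[[2, 0], [0, 2]]` has det = 4, which is non-zero but not a unit of Z, so it is invertible over the rationals but not over the integers.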
An n by n complex matrix A is invertible if and only if A is row equivalent to I_{n}.
If the augmented matrix [A : I_{n}] can be transformed by elementary row operations into the row-equivalent form [I_{n} : P], then P is the inverse of A.
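A minimal sketch of this [A : I] reduction (the function name, the pivoting strategy, and the use of `np.isclose` as the zero test are my own choices):

```python
import numpy as np

def inverse_by_gauss_jordan(A):
    """Row-reduce [A : I]; if A reduces to I, the right block is A^{-1}."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # the augmented matrix [A : I]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting
        if np.isclose(M[p, k], 0.0):
            raise np.linalg.LinAlgError("matrix is singular")
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                      # scale pivot row to get a leading 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]       # clear column k elsewhere
    return M[:, n:]                          # the block P with A P = I
```

If A is singular, no usable pivot appears at some step and the routine raises instead of returning a spurious P.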
Let A be an m×n matrix over the complex field and let A* denote the conjugate transpose of A. We recall that a generalized inverse G of A is an n×m matrix which satisfies the first of the four Penrose equations:
(1) AXA = A, (2) XAX = X, (3) (AX)* = AX, (4) (XA)* = XA.
The Moore-Penrose inverse of A is the matrix G satisfying (1)-(4). Any matrix A admits a unique Moore-Penrose inverse, denoted A†.
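One can verify all four Penrose equations numerically with `np.linalg.pinv`, even for a rectangular, rank-deficient A (the example matrix below is my own):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [0., 0.]])                     # 3x2, rank 1
G = np.linalg.pinv(A)                        # the Moore-Penrose inverse A-dagger
eq1 = np.allclose(A @ G @ A, A)              # (1) AXA = A
eq2 = np.allclose(G @ A @ G, G)              # (2) XAX = X
eq3 = np.allclose((A @ G).conj().T, A @ G)   # (3) (AX)* = AX
eq4 = np.allclose((G @ A).conj().T, G @ A)   # (4) (XA)* = XA
```

A matrix satisfying only (1) is a generalized inverse in the sense above; it is conditions (1)-(4) together that pin G down uniquely.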
Suppose A is a square matrix; then A is invertible if and only if det(A) ≠ 0.
We can see many good and simple answers; yes, the question is simple. But only one gives a book reference. I think the best answer is det(A) ≠ 0, together with a recommendation to read a good book, for example Gantmacher, F. R., Applications of the Theory of Matrices (Interscience, 1959; Dover, 2005), or Bellman's books.
An efficient way to determine the invertibility of any real square n×n matrix A is to consider the symmetric matrix B = A·A^T, where A^T is the transpose of A. In the complex case, use instead the Hermitian matrix B = A·A*, where A* is the conjugate transpose of A. If A is invertible, so is B. Symmetric and Hermitian (in the complex case) matrices always have real eigenvalues. Then use an efficient decomposition algorithm for B, e.g. eigenvalue decomposition, Cholesky decomposition, or Householder decomposition, depending on your application and the size of the problem you want to solve. If any eigenvalue is zero, or if any diagonal element of the Cholesky factor is zero, the matrix is not invertible. Software packages are available for all these algorithms. Also, you can use a package for finding the Moore-Penrose generalized inverse and check whether it is an inverse of the original matrix or not. The Moore-Penrose generalized inverse is simply a matrix A+ such that x = A+·b is the solution of the minimization problem min ||Ax − b||. So, to solve Ax = b, find the x whose image is the closest point (the projection) of b on the range space of A. If A is invertible, then A+ coincides with the inverse of A.
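A sketch of the Cholesky variant of this test (the function name and example matrices are my own): B = A·Aᵀ is positive definite exactly when A is invertible, and NumPy's Cholesky routine raises when B is only semidefinite:

```python
import numpy as np

def invertible_via_cholesky(A):
    """B = A A^T is positive definite iff A is invertible; Cholesky
    raises exactly when B is not positive definite."""
    B = A @ A.conj().T          # symmetric (Hermitian) by construction
    try:
        np.linalg.cholesky(B)
        return True
    except np.linalg.LinAlgError:
        return False
```

Note that in floating point a nearly singular A may still pass; like all the numerical tests in this thread, this one answers the question only up to rounding error.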
My suggestion: for moderate to large matrices with arbitrary entries, do not use a traditional elimination method; use some of the approaches above for computational efficiency and arithmetic stability.
If the determinant of the matrix A (det A) is not zero, then the matrix has an inverse. This property can be found in any textbook on higher algebra or on the theory of matrices.