First, establish the equation. You have to find the determinant of the matrix obtained from the original one by subtracting a constant parameter from each entry of the main diagonal; all other elements of the original matrix stay the same. In other words, you form det(A - lam*I).
Second, after finding this determinant, set it equal to zero. Depending on the order of the matrix, you will obtain an equation (called the characteristic equation) in the constant parameter mentioned above, of the same degree as the order of the matrix.
Third, by solving this equation you will find some values (as many as the order) for the constant parameter. These are called eigenvalues (or principal values). Corresponding to each of them, you can find the eigenvectors (or principal vectors, or principal directions) using the equation established in the first step.
Fourth, you can use your own code, MATLAB, Mathematica, a TI-Nspire calculator, or some Casio calculators to find the eigenvalues and corresponding eigenvectors.
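For illustration, here is a minimal sketch of these steps in Python with SymPy (the 2 x 2 matrix is just a made-up example):

import sympy as sp

lam = sp.symbols('lam')
A = sp.Matrix([[2, 1], [1, 2]])            # example matrix
char_poly = (A - lam * sp.eye(2)).det()    # step 1: det(A - lam*I)
eigenvalues = sp.solve(char_poly, lam)     # steps 2-3: roots of the characteristic equation
# eigenvectors from the null space of (A - lam*I) for each eigenvalue
eigenvectors = [(A - v * sp.eye(2)).nullspace() for v in eigenvalues]

For this example the eigenvalues come out as 1 and 3.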
@Farzad Sir, thank you for your nice explanation. I know the method for computing the eigenvalues. I just want to know numerical methods for the same that are not computationally costly.
@Mittal Sir and Jack Son Sir, are these methods economical for large (dense) matrices?
@Amaechi Sir, you are absolutely right. But which algorithm does MATLAB use for this? It should be an economical one that maintains a balance between memory and computation.
My suggestion is to use the Matrix Diagonalisation Method. This should be followed by the fractional matrix (FM) approach. With FM you will not face any problem. However, you cannot use Inverse Matrix Analysis if it is a
Do you mean an algorithm/routine to implement yourself in your own program, or are you looking for a "ready-to-use" library? For relatively small symmetric matrices the Jacobi algorithm is quite good. For large matrices we mainly use the ScaLAPACK library. I have not been involved in massive computations for years, so I do not know which algorithm is implemented in this library.
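For reference, a minimal teaching sketch of the classical Jacobi rotation method in Python/NumPy (not a tuned implementation; real codes apply the rotations in place instead of forming full matrices):

import numpy as np

def jacobi_eigenvalues(A, tol=1e-10, max_rotations=10_000):
    # Classical Jacobi method for a real symmetric matrix A.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))      # off-diagonal part
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:                        # converged: nearly diagonal
            break
        # rotation angle that annihilates A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J                            # similarity transformation
    return np.sort(np.diag(A))                     # eigenvalues on the diagonal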
In fact, for enormous eigenvalue problems, where the matrix size is typically O(10^5)-O(10^7), calculating all the eigenvalues is an impossible job. If you do need the full spectrum, the implicitly shifted QR method for standard eigenvalue problems and the QZ method for generalized eigenvalue problems are the best choice.
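In SciPy, for instance, both dense solvers are one call away (as far as I know, the underlying LAPACK routines are of exactly these QR/QZ types):

import numpy as np
from scipy import linalg

A = np.random.rand(500, 500)
B = np.random.rand(500, 500)

w = linalg.eigvals(A)         # standard problem A x = lam x
w_gen = linalg.eigvals(A, B)  # generalized problem A x = lam B x (QZ)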
For more specific problems, for example in fluid dynamics or physics calculations, we are only concerned with part of the spectrum, and mathematically we often want to calculate the interior eigenvalues of the spectrum. For these problems, a Krylov-subspace-based solver is the best choice. In this type of method, an Arnoldi/Lanczos process is used to construct a reduced-order matrix of the original one, which contains the approximate eigenvalues we care about.
Of course, for problems of this size, we have to use a distributed-memory cluster. Some open-source packages would be of great help. I highly recommend trying one of the libraries ARPACK/PARPACK, SLEPc, PRIMME, BLZPACK, TRLAN, BLOPEX, or FEAST. These libraries are designed for large eigenvalue problems and offer various iterative approximation methods.
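A convenient way to experiment with these ideas on a single machine is SciPy's interface to ARPACK (implicitly restarted Arnoldi); the tridiagonal test matrix below is just a made-up example:

import scipy.sparse as sparse
from scipy.sparse.linalg import eigs

n = 10_000
# 1D Laplacian as a sparse test matrix
A = sparse.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc')

w_ext, v_ext = eigs(A, k=6, which='LM')   # extremal part of the spectrum
w_int, v_int = eigs(A, k=6, sigma=-2.0)   # interior eigenvalues near sigma (shift-invert)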
The most effective methods for computing the eigenvalues of an eigenvalue problem are those employed in engineering analysis for determining the eigenfrequencies and eigenmodes of free vibration of multi-degree-of-freedom systems.
The widely used methods are:
1. Transformation methods (Householder method and QR transformation method)
2. The subspace iteration methods (inverse vector iteration; a minimal sketch is given after this list)
3. Determinant search method.
Although these methods are effective for large systems, a reduction of the number of degrees of freedom using appropriate methods, e.g. the Ritz method, is recommended, especially for large eigenvalue problems resulting from FEM discretization.
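For concreteness, a minimal sketch of inverse vector iteration for the lowest eigenpair of K*phi = lam*M*phi in Python (assuming symmetric positive-definite stiffness K and mass M; a textbook sketch, not production code):

import numpy as np

def inverse_vector_iteration(K, M, tol=1e-10, max_iter=200):
    # Lowest eigenpair of K*phi = lam*M*phi for SPD matrices K, M.
    x = np.ones(K.shape[0])                   # starting vector
    lam_old = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(K, M @ x)         # one inverse-iteration step
        x = y / np.sqrt(y @ (M @ y))          # M-normalize the iterate
        lam = (x @ (K @ x)) / (x @ (M @ x))   # Rayleigh quotient estimate
        if abs(lam - lam_old) <= tol * abs(lam):
            break
        lam_old = lam
    return lam, x

# example: 2-DOF system with eigenvalues 1 and 3
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
M = np.eye(2)
lam_min, phi = inverse_vector_iteration(K, M)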
All these methods are described in detail in Chapter 13 of my book "Dynamic Analysis of Structures" (see my contributions on ResearchGate), which is addressed to engineering students.
I am attaching this chapter. There you can find computer programs in FORTRAN that can be used to compute the eigenvalues. Unfortunately, the book is written in Greek, but you can translate the chapter using Google Translate.
There are also ready-to-use subroutines and functions, e.g. the Matlab functions e=eig(A), [V,D]=eig(A).
Other books describing these methods are:
1. Bathe, K. J. and Wilson, E. L., Numerical Methods in Finite Element Analysis, Prentice-Hall, Englewood Cliffs, N.J., 1976.
2. Press, W. H., Flannery, B. P., Teukolsky, S. A. and Vetterling, W. T., Numerical Recipes in Fortran, 2nd ed., Cambridge University Press, New York, 1992.
As I mentioned above, it is better for you to find the simplest numerical method for solving the characteristic equation. I believe that the Newton-Raphson method, or, more practically, its finite-difference variant, is the best.
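As an illustration of this suggestion (for small matrices only; as the next answer argues, the characteristic equation does not scale), a minimal Python sketch, where NumPy's poly supplies the characteristic-polynomial coefficients and Newton-Raphson finds one root:

import numpy as np

def newton_raphson(f, df, x0, tol=1e-12, max_iter=100):
    # Newton-Raphson iteration for a root of f near x0.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])       # example matrix
p = np.poly1d(np.poly(A))                    # characteristic polynomial of A
lam = newton_raphson(p, p.deriv(), x0=1.0)   # one eigenvalue (here the smaller, about 1.382)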
For most people and most problems, developing private methods to diagonalise matrices is about as smart as performing brain surgery on yourself with a pocket knife and a mirror.
For this purpose, you may safely forget everything you know about characteristic equations, and instead spend your effort investigating the available numerical packages for your problem; many such packages have been developed since the 1960s. A good selection of these exists in the public domain, like EISPACK and LINPACK, but they require some programming effort to use directly.
There are certainly very good options available in Matlab, for those who can afford a license or otherwise have access to it.
Good, freely available alternatives can be found in the SciPy ecosystem for Python, https://www.scipy.org. With these (and a little bit of CPU time), you can often quite straightforwardly find all eigenvalues of a 10^4 x 10^4 dense matrix on a quite ordinary laptop, and, with some more CPU time, also the corresponding eigenvectors. But, as always in computing, there can be pitfalls, depending on the nature of your problems. The routines come with several optional parameters for tuning.
A simple example, finding all eigenvalues of a real symmetric n x n matrix with random entries, is illustrated by the code snippet below:
import time
from numpy.random import rand
from scipy.linalg import eigvalsh

n = 1000                        # linear matrix dimension
matrix = rand(n, n) - 0.5       # eigvalsh reads only one triangle, hence symmetric
t0 = time.time()
eigenvalues = eigvalsh(matrix)  # all eigenvalues, no eigenvectors
dt = time.time() - t0           # diagonalisation time in seconds
The diagonalisation time in seconds, as a function of the linear matrix dimension n, measured on a laptop from 2013, is shown below.
   n    time [s]
 100    0.0007
 200    0.0037
 300    0.0066
 400    0.0101
 500    0.0200
 600    0.0321
 700    0.0355
 800    0.0483
 900    0.0604
1000    0.0728
2000    0.4521
3000    1.5282
4000    3.4252
5000    6.3514
6000    11.4669
In this case, the process broke down for n = 7000. So there can still be much work to do, starting with a mandatory sanity check of the results. But the characteristic equation is absolutely not the thing to focus on.
For some problems I was involved with 20+ years ago, my experience was that it took significantly longer to generate the matrix than to compute its eigenvalues. In any case, the time-limiting step was digesting the results, not to speak of publishing the conclusions...