One useful approach: first convert the given differential equation into a first-order system, then put that system in matrix form; diagonalisation of the matrix (the Jordan form, in general) is then a standard tool for solving both differential equations and recurrence sequences.
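As a minimal sketch of that recipe in Python (the concrete equation y'' - y' - 2y = 0 and the initial values are my own assumptions for illustration):

import numpy as np

# Assumed example: y'' - y' - 2y = 0, rewritten as the first-order system
# x' = A x with x = (y, y').
A = np.array([[0.0, 1.0],
              [2.0, 1.0]])           # second row encodes y'' = 2y + y'

# Diagonalize A = V D V^{-1}; the eigenvalues are the exponents in e^{lambda*t}.
lam, V = np.linalg.eig(A)
print(lam)                           # -> eigenvalues -1 and 2

# Constants from the assumed initial condition x(0) = (y(0), y'(0)) = (1, 0).
c = np.linalg.solve(V, np.array([1.0, 0.0]))

def x(t):
    # x(t) = V exp(Dt) V^{-1} x(0); scaling the columns of V avoids forming exp(At)
    return (V * np.exp(lam * t)) @ c

print(x(1.0))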
Every discretization method turns your differential equation (a problem in "analysis") into a finite set of equations (a problem in "algebra"). This is so for boundary value problems as well as initial value problems. The point of discretization is to construct an algebraic problem that can be solved in finite time. Of course, the algebraic solution will only be an approximation to the solution of the differential equation.
If the original differential equation is linear, your discrete problem will also be linear, so you have a problem within linear algebra. If the problem is nonlinear, however, you will have to use some iterative technique (usually Newton's method) to find the approximate solution. But even Newton's method will only lead to a sequence of linear algebraic problems to be solved.
In a few cases, matrix algebra can be avoided. This happens in initial value problems u' = f(u) if you use explicit time stepping methods, e.g. Euler's explicit method,
u_{n+1} = u_n + h*f(u_n).
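As a minimal sketch (the test problem u' = -2u, the step size, and the initial value are assumed for illustration), the recursion is nothing but a loop:

import math

# Assumed test problem: u' = -2u, whose exact solution is u0 * e^{-2t}.
def f(u):
    return -2.0 * u

h, n_steps = 0.01, 500
u = 1.0                                  # initial value u_0
for _ in range(n_steps):
    u = u + h * f(u)                     # u_{n+1} = u_n + h*f(u_n): plain arithmetic

print(u, math.exp(-2.0 * h * n_steps))   # numerical vs. exact solution at t = 5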
As the sketch shows, you can run this recursion without any matrix or linear algebra work at all. However, as soon as you use an implicit method, e.g. Euler's implicit method,
u_{n+1} = u_n + h*f(u_{n+1}),
you have to do equation solving on each time step, and this requires algebraic work.
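For instance (same assumed test problem u' = -2u), each implicit Euler step solves g(v) = v - u_n - h*f(v) = 0 for v = u_{n+1}, here by a few Newton iterations:

import math

def f(u):
    return -2.0 * u

def df(u):
    return -2.0                          # derivative of f, needed for Newton

h, n_steps = 0.01, 500
u = 1.0
for _ in range(n_steps):
    v = u                                # initial Newton guess: previous value
    for _ in range(5):                   # a few Newton iterations per time step
        v -= (v - u - h * f(v)) / (1.0 - h * df(v))
    u = v

print(u, math.exp(-2.0 * h * n_steps))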
In boundary value problems, e.g. u'' = f(x), discretization immediately produces a linear system of equations, L*u = f, where the matrix L is large and sparse. So here you have to use linear algebra for the numerical solution.
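A minimal sketch, assuming u'' = f(x) on (0, 1) with homogeneous Dirichlet boundary conditions, standard central differences, and f(x) = sin(pi*x) as a test right-hand side:

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n  = 100
x  = np.linspace(0.0, 1.0, n + 2)[1:-1]    # interior grid points
dx = x[1] - x[0]

# Tridiagonal finite-difference Laplacian: (u_{i-1} - 2u_i + u_{i+1}) / dx^2
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2

f = np.sin(np.pi * x)                      # assumed right-hand side
u = spsolve(L.tocsr(), f)                  # solve the sparse system L*u = f

# Exact solution of u'' = sin(pi*x) with these BCs is -sin(pi*x)/pi^2:
print(np.max(np.abs(u + np.sin(np.pi * x) / np.pi**2)))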
So essentially the only numerical solution techniques that avoid matrix algebra are explicit time-stepping methods for initial value problems.
Yes, numerical methods use matrices to solve a differential equation. However, if your equation is linear, then the theory of linear algebra, matrices, and eigenvalues comes into the picture automatically; this is reflected in the fact that the solutions of the equation are linearly independent in that case.
Converting differential quantities to finite difference form will convert the differential equations to matrix equations which can then be solved algebraically.
Thank you all for your valuable answers. I am still hoping for additional answers and further investigation, especially in the field of nonlinear equations and PDEs.
For nonlinear equations and PDEs, the framework is the same. Let's say you have a PDE u_t = Lu, where L is a linear differential operator in space. Using the method of lines, you discretize space to obtain a system of ODEs. With only a slight abuse of notation, let's write it
u_t = Lu,
where the t subscript on the left refers to the time derivative, and L is now a matrix, replacing the former differential operator acting on the space variable(s).
If L is a first order operator in space, you might still get away with an explicit time stepping method (think Explicit Euler), provided that you fulfill the CFL stability condition dt/dx < C, where dt and dx are time and space mesh widths, respectively. So with an explicit scheme, you won't have to use matrix algebra. (This is the only exception.)
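A sketch of such an explicit, matrix-free scheme (the advection equation u_t = -a*u_x with upwind differences and periodic boundaries is my assumed example):

import numpy as np

a, n = 1.0, 200
dx = 1.0 / n
x  = np.arange(n) * dx
u  = np.exp(-100.0 * (x - 0.5)**2)   # assumed initial profile

dt = 0.9 * dx / a                    # CFL condition a*dt/dx < 1 fixes the time step
for _ in range(200):
    # upwind difference (u_i - u_{i-1})/dx; no matrix is built or inverted
    u -= a * dt / dx * (u - np.roll(u, 1))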
If L is a second order operator in space, however, the CFL condition becomes dt/dx^2 < C. This is prohibitive (compare stiffness) as you are forced to keep the time step incredibly small just in order to maintain stability. Therefore, you'll have to use an implicit time stepping method (think Implicit Euler). You then overcome the time step limitation, but at the cost of having to solve a linear system of the form
(I - dt*L) u_{n+1} = u_n
on every time step. That's when you have to start using linear algebra, and matrix theory.
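For instance, here is a sketch for the heat equation u_t = u_xx (the grid, time step, and boundary conditions are assumed for illustration); note that the matrix I - dt*L can be factorized once and the factorization reused on every step:

import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

n  = 100
dx = 1.0 / (n + 1)
x  = np.linspace(dx, 1.0 - dx, n)
L  = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2

dt = 0.01                            # far beyond the explicit limit dx^2/2
M  = identity(n) - dt * L
lu = splu(M.tocsc())                 # factorize once, reuse every step

u = np.sin(np.pi * x)                # assumed initial condition, u = 0 at boundaries
for _ in range(100):
    u = lu.solve(u)                  # one linear solve (I - dt*L) u_{n+1} = u_n per step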
The same happens if you have higher order operators in space. The higher order, the more you're in need of implicit methods.
Should L be a nonlinear operator, the same thing happens, only you have to solve your systems using Newton-type methods instead. (Very large systems will have to be addressed using some type of iterative methods, such as GMRES or similar).
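A sketch of one such implicit step for an assumed nonlinear example u_t = u_xx + u^2, handing the nonlinear system to SciPy's Newton-Krylov solver (Newton outer iterations with GMRES-type inner iterations):

import numpy as np
from scipy.sparse import diags
from scipy.optimize import newton_krylov

n  = 100
dx = 1.0 / (n + 1)
x  = np.linspace(dx, 1.0 - dx, n)
L  = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
dt = 0.001

u_old = np.sin(np.pi * x)            # assumed state at the current time level

def step_residual(u_new):
    # implicit Euler step equation: u_{n+1} - dt*(L u_{n+1} + u_{n+1}^2) = u_n
    return u_new - dt * (L @ u_new + u_new**2) - u_old

u_new = newton_krylov(step_residual, u_old)
print(np.max(np.abs(u_new - u_old)))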
And if you don't have any time dependency, it's still the same story. There's no time-stepping then, but you still have an equation system to solve, necessitating the use of linear algebra.
You can do this for linear differential equations: the equation defines a linear operator, so by representing it with respect to a suitable basis (its Jordan canonical form, in general) it can be represented as a matrix, which yields the solutions.
Here is an example of a method that uses matrix algebra in the solution of a linear partial differential equation of higher order in two variables, with an initial condition, whose coefficients are real-valued simple step functions.
Perhaps the simplest application of matrix algebra is this: if A is an N x N constant matrix and y is an N-component vector, then the (unique) solution of the Cauchy problem dy(t)/dt = A y(t), y(0) = z, is given by y(t) = exp(At) z.
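This formula maps directly onto the matrix exponential routine in SciPy; a small sketch (the matrix A and the initial vector z are assumed for illustration):

import numpy as np
from scipy.linalg import expm

# Assumed example: the harmonic oscillator y'' = -y as a 2x2 system.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
z = np.array([1.0, 0.0])             # initial condition y(0) = z

t = np.pi / 2
y = expm(A * t) @ z                  # y(t) = exp(At) z
print(y)                             # -> approximately [0, -1], i.e. (cos t, -sin t)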