I am working on meshless methods using radial basis function approximation. It is well known that the resulting system of linear equations is severely ill-conditioned. What approach would you suggest for solving this problem?
The most effective method is to find a good preconditioner --- but this requires having useful information about the structure of the matrix A. Often one can also use operator splitting: finding a way to write A = B + R with B easily invertible and the remainder R small --- again, this requires knowing the structure of A. Another related approach is to pass to the normal equations A*Ax = A*b and regularize them by solving [A*A + lambda I]x = A*b, hoping that the positive definiteness of A*A makes up for the worsened conditioning of the new problem.
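A minimal numpy sketch of the regularized normal equations just described (the Hilbert test matrix and the value of lambda are my own illustration, not from the thread; lambda must be tuned per problem):

```python
import numpy as np

# Regularized normal equations: [A^T A + lambda I] x = A^T b.
# The Hilbert matrix is a standard severely ill-conditioned test case.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

lam = 1e-10  # regularization parameter (assumed value, tune for your problem)
x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(np.linalg.norm(A @ x - b))  # small residual despite ill-conditioning
```

Note the trade-off mentioned above: cond(A^T A) is the square of cond(A), so the added lambda I term is what makes the system numerically solvable at all.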
On top of the "regularisation" or "diagonal loading" approach mentioned by Mr. Thomas, you might want to recast the problem as "matching pursuit", where you now need to find the largest-magnitude entries of x subject to the (ill-conditioned) constraint Ax = b.
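For concreteness, here is a toy orthogonal matching pursuit sketch (my own illustration of the idea, not from the thread; the greedy column selection and the random test problem are assumptions):

```python
import numpy as np

# Greedy matching pursuit sketch: repeatedly pick the column of A most
# correlated with the residual, then least-squares fit on the chosen columns.
def omp(A, b, n_nonzero):
    r = b.copy()
    support = []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[5, 17]] = [2.0, -1.5]
b = A @ x_true
x = omp(A, b, 2)
print(np.nonzero(x)[0])  # with these sizes OMP typically recovers {5, 17}
```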
You could also try iterative methods (such as CGLS) with a Lanczos-based preconditioner; see http://www.sciencedirect.com/science/article/pii/S0307904X13002382
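As a quick sketch of the iterative route: CGLS itself is not in SciPy, but LSQR is mathematically equivalent on the normal equations, and its `damp` option adds Tikhonov-style damping. (This is only an illustration; it does not reproduce the Lanczos-based preconditioner of the cited paper.)

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# LSQR with damping on an ill-conditioned Hilbert test system.
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = A @ np.ones(n)

# damp acts like sqrt(lambda) in Tikhonov regularization (assumed value).
x = lsqr(A, b, damp=1e-6, atol=1e-12, btol=1e-12, iter_lim=100)[0]
print(np.linalg.norm(A @ x - b))
```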
Hi! For ill-posed problems, regularization techniques are often needed; for instance, Tikhonov regularization. You can refer to Professor Lothar Reichel's homepage for more detail about ill-posed problem solvers: http://www.math.kent.edu/~reichel/research.html
For RBFs, the trick is to never form the matrix of RBF translates. That basis is terrifically ill-conditioned, and nothing you do to the matrix really helps improve the accuracy as the number of nodes increases. You should switch to one of the new stable bases.
I recommend using extended-precision software; see www.advanpix.com for examples.
Let A = [1+delta, 1; 1, 1]. If delta > the machine epsilon, you have a nonsingular system; otherwise, A is computationally singular. Math on a computer is NOT ideal math.
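This example is easy to verify directly (a small numpy check of the 2x2 matrix above, with two delta values I chose on either side of machine epsilon):

```python
import numpy as np

# Numerical vs. symbolic singularity: A = [[1+delta, 1], [1, 1]] is
# nonsingular for any delta != 0, but if delta is below machine epsilon
# the computer cannot represent the difference.
eps = np.finfo(float).eps            # about 2.2e-16

A_ok  = np.array([[1 + 1e-12, 1], [1, 1]])   # delta well above eps
A_bad = np.array([[1 + 1e-17, 1], [1, 1]])   # 1 + 1e-17 rounds to exactly 1.0

print(np.linalg.matrix_rank(A_ok))    # 2: numerically nonsingular
print(np.linalg.matrix_rank(A_bad))   # 1: computationally singular
```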
Definitely, MATLAB is the most suitable tool, as M. R. Hematiyan says. But the simple x = pinv(A)*b might not be sufficient; it depends on the pseudo-rank of the matrix A! Not all the singular values should be included when constructing the solution.
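The pseudo-rank idea can be made explicit with a truncated-SVD solve (a sketch in numpy rather than MATLAB; the tolerance is an assumed, problem-dependent choice):

```python
import numpy as np

# Truncated-SVD solution: keep only singular values above a relative
# tolerance (the "pseudo-rank"), instead of a default pinv cutoff.
def tsvd_solve(A, b, tol):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))          # pseudo-rank
    c = (U[:, :k].T @ b) / s[:k]
    return Vt[:k].T @ c, k

n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = A @ np.ones(n)
x, k = tsvd_solve(A, b, tol=1e-6)
print(k, np.linalg.norm(A @ x - b))  # pseudo-rank k < n, small residual
```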
@Edward Kansa --- Thank you! I saw your recent paper on this topic. I shall definitely try the extended-precision software. I have also come across @Scott Sarra's recent toolbox.
@M. R. Hematiyan @Jan Sikora --- The generalized inverse (pinv) might work better than the iterative methods; however, it isn't an effective way to solve this. I tried it last year; you can see the results in my short paper.
I am using this approach for tomographic image reconstruction, and I can say that this is the best way to solve the ill-conditioning problem. Expensive in the numerical sense, but very precise.
Firstly, construct your linear equations in extended precision. Many times, the equations become COMPUTATIONALLY singular if the product of the machine epsilon and the condition number is too large. Secondly, R. L. Hardy, inventor of MQ RBFs, used domain decomposition, as did his students. Try scaling, extended precision, and preconditioning; usually, a combination of approaches is most robust. Also see my EABE May 2017 paper. E. J. Kansa
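On the "try scaling" suggestion, here is a minimal sketch of symmetric diagonal (Jacobi) scaling on a badly scaled symmetric positive definite matrix (my own toy example, not an RBF matrix):

```python
import numpy as np

# Symmetric diagonal scaling: solve (D A D) y = D b, then x = D y,
# with D = diag(A)^(-1/2). This can drastically reduce the condition
# number when the ill-conditioning comes from bad row/column scaling.
A = np.array([[1e8, 1e3], [1e3, 1.0]])
b = np.array([1.0, 2.0])

d = 1.0 / np.sqrt(np.diag(A))
As = d[:, None] * A * d[None, :]      # D A D
y = np.linalg.solve(As, d * b)
x = d * y                             # recover the original unknowns

print(np.linalg.cond(A), np.linalg.cond(As))  # cond drops from ~1e8 to ~1
```

Note this only removes scaling-induced ill-conditioning; the intrinsic ill-conditioning of flat RBF bases is not of this kind, which is why Kansa recommends combining several approaches.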
For GRBFs, the Mercer expansion gives you the factorization analytically and allows one to partially dodge the singularity Edward mentions towards the flat-basis limit. See Fornberg, and also Fasshauer and McCourt. Varun, I guess this is the same as the stable-basis answer? -M
Matt, exactly. In fact, for the Gaussian, the RBF-GA method is the fastest (Lehto, Fornberg, Larsson). The Mercer series approach in conjunction with Fasshauer and McCourt's Hilbert-Schmidt SVD is a great alternative. Finally, for local interpolation, appending polynomials seems to help prevent stagnation errors under refinement. There are also new developments: RBF-RA (rational approximation in the complex plane, Wright and Fornberg), a Newton basis (de Marchi), and a weighted SVD (de Marchi).
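The "appending polynomials" idea mentioned above can be sketched concretely: augment the RBF system with a polynomial block, which also guarantees exact reproduction of polynomials up to the appended degree. (A 1-D cubic polyharmonic-spline illustration of my own; the node set and test function are assumptions.)

```python
import numpy as np

# RBF interpolation with an appended linear polynomial:
# solve [[Phi, P], [P^T, 0]] [c; g] = [f; 0].
x = np.linspace(0, 1, 9)
f = 2.0 * x + 1.0                            # a linear test function

Phi = np.abs(x[:, None] - x[None, :]) ** 3   # phi(r) = r^3 (cubic PHS)
P = np.column_stack([np.ones_like(x), x])    # constant + linear terms
n, m = len(x), P.shape[1]

M = np.block([[Phi, P], [P.T, np.zeros((m, m))]])
rhs = np.concatenate([f, np.zeros(m)])
sol = np.linalg.solve(M, rhs)
c, g = sol[:n], sol[n:]

# Evaluate the interpolant on a fine grid; linear f is reproduced exactly.
xe = np.linspace(0, 1, 101)
Pe = np.abs(xe[:, None] - x[None, :]) ** 3 @ c \
     + np.column_stack([np.ones_like(xe), xe]) @ g
print(np.max(np.abs(Pe - (2.0 * xe + 1.0))))  # error at roundoff level
```

The side constraint P^T c = 0 is what makes the cubic PHS kernel (conditionally positive definite of order 2) yield a nonsingular system on distinct nodes.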
The problem I had with the Mercer factorization is that it still tends to the polynomial Vandermonde matrix in the flat limit, which still suffers from polynomial ill-conditioning, e.g. of the mesh. I see you have a nice curvy solution to the mesh problem here: Article Curvilinear Mesh Adaptation Using Radial Basis Function Inte...
I second Kansa's recommendation of domain decomposition. I hope you all realize Prof. Kansa is an authority on this subject; Ling & Kansa [2004] is not bad :) Using the pseudo-inverse is not enough due to severe ill-conditioning when you try to increase accuracy (flatter basis functions) or add more samples.
I have a question for Mishra: in your hybrid Gaussian-cubic paper you show that using a cubic polynomial term helps to avoid the ill-conditioning. That is a very interesting insight. However, can it interpolate a DC term, i.e. reproduce a constant? I have always used a linear polynomial term.
Please find attached two options for approximating any matrix to improve its condition number. However, I don't have any reference for these approaches; please let me know if anyone has an idea about such approximations.
The picture was taken from the following reference:
Article An approximate high gain observer for speed-sensorless estim...