If you like, I can teach you, but this way it is difficult. My email is [email protected]; if you write to me, I will send you papers with methods to solve this problem!
In this case, one of the equations must be dependent on one or several of the others, because otherwise you have an overdetermined system; in other words, one of the equations is not independent of the rest.
Jemimah: Actually, you have a system of linear equations that can be expressed as
Y = XA, where Y is an (n x 1) vector, X is an (n x m) matrix, and A is an (m x 1) vector, with n greater than m. The vector A is the vector of unknowns.
Since X is a rectangular matrix, it cannot be inverted. However, X^T X (X transposed times X) is a square matrix. If this square matrix can be inverted, you then get
A = (X^T X)^{-1} (X^T Y).
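As a minimal numerical sketch of this normal-equations solution (the matrix X and vector Y below are invented purely for illustration, and NumPy is assumed to be available):

    import numpy as np

    # Invented overdetermined linear system Y = X A with n = 5 equations, m = 2 unknowns
    X = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0],
                  [1.0, 4.0],
                  [1.0, 5.0]])
    Y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

    # Normal equations: A = (X^T X)^{-1} (X^T Y)
    A_normal = np.linalg.solve(X.T @ X, X.T @ Y)

    # In practice, np.linalg.lstsq gives the same least-squares solution
    # without explicitly forming X^T X (numerically better conditioned)
    A_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)

    print(A_normal)  # intercept and slope of the least-squares fit
    print(A_lstsq)   # same values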
Consider the case of a number of nonlinear algebraic equations in several unknowns:
F_j(x_1,...,x_U)=0 for j=1,...,E
where U < E (more equations than unknowns).
If there is a solution (x_1,...,x_U) for the unknowns, then it suffices, perhaps, to solve only U of those equations.
Here, I write perhaps for the following reason:
For given x_2,...,x_U there may be several solutions of F_j(x_1,...,x_U)=0 for x_1, due to the nonlinear nature of the equations. For instance, there are normally two solutions for x_1 if F_j is quadratic in x_1. But it could be that only one of those values of x_1 also satisfies the rest of the equations F_k=0, k different from j, for the given x_2,...,x_U.
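To make this concrete, here is a tiny invented illustration (NumPy assumed): F_1 = x^2 - 1 = 0 has the two roots x = 1 and x = -1, but if a second equation F_2 = x - 1 = 0 is also imposed, only x = 1 survives.

    import numpy as np

    # Roots of the invented quadratic F_1 = x^2 - 1 = 0
    roots_F1 = np.roots([1.0, 0.0, -1.0])   # the two roots 1 and -1 (order may vary)

    # Second invented equation F_2 = x - 1 = 0
    F2 = lambda x: x - 1.0

    # Keep only those roots of F_1 that also satisfy F_2 = 0 (within a tolerance)
    common = [float(r.real) for r in roots_F1 if abs(F2(r)) < 1e-12]
    print(common)   # [1.0] -- only one of the two roots survives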
Also, there is the question of which U of the E equations to choose so as to obtain as many equations as there are unknowns.
Furthermore, there may be several solutions, so uniqueness of the solution is not guaranteed; indeed, the system may possess no solution at all.
One possible approach is to replace the system of equations by a minimization problem in the least-squares sense:
L_E(x_1,...,x_U) = sum_{j=1,...,E} F_j(x_1,...,x_U)^2 = min
Then, any local minimum of the sum is nonnegative because the sum contains only nonnegative terms. The global minimum is the smallest of the local minima; it is zero if the original system has a solution, and positive otherwise.
As in the linear case, one may minimize other norms of the vector function F, i.e. ||F||=min.
Of course, the minimization problem may be quite involved numerically.
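As a sketch of this least-squares reformulation in practice (the three equations below are invented for illustration; scipy.optimize.least_squares minimizes the sum of squares of the residual vector F):

    import numpy as np
    from scipy.optimize import least_squares

    # Invented overdetermined nonlinear system: E = 3 equations, U = 2 unknowns
    def F(x):
        x1, x2 = x
        return np.array([
            x1**2 + x2**2 - 2.0,   # F_1
            x1 - x2,               # F_2
            x1 * x2 - 1.0,         # F_3
        ])

    # least_squares minimizes sum_j F_j(x)^2 starting from an initial guess
    res = least_squares(F, x0=np.array([0.5, 0.5]))

    print(res.x)     # close to (1, 1) from this start; (-1, -1) also solves all three equations
    print(res.cost)  # 0.5 * sum of squared residuals, near zero since the system is consistent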
An interesting question is whether it is preferable to consider an alternative minimization problem instead:
Choose U of the F_j, renumbered so that they correspond to F_1,...,F_U, and minimize
L_U(x_1,...,x_U) = sum_{j=1,...,U} F_j(x_1,...,x_U)^2 = min
under the E-U constraints F_j = 0 for j = U+1,...,E.
The constraints may be added via Lagrange parameters lambda_j, and then one has to minimize
M(x_1,...,x_U,lambda_{U+1},...,lambda_E) =
L_U(x_1,...,x_U) + sum_{j=U+1,...,E} lambda_j F_j(x_1,...,x_U) = min
Making M stationary with respect to all its E arguments then leads to E equations for the U unknowns x_k and the E-U unknowns lambda_j.
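A sketch of this constrained variant for the same invented equations as above, delegating the Lagrange-multiplier machinery to scipy's SLSQP solver rather than solving the stationarity conditions of M by hand (the equation F_3 = 0 plays the role of the constraint with j > U):

    import numpy as np
    from scipy.optimize import minimize

    def F1(x): return x[0]**2 + x[1]**2 - 2.0
    def F2(x): return x[0] - x[1]
    def F3(x): return x[0] * x[1] - 1.0

    # Objective L_U = F_1^2 + F_2^2; the remaining equation F_3 = 0 is an equality constraint
    def L_U(x):
        return F1(x)**2 + F2(x)**2

    res = minimize(L_U, x0=np.array([0.5, 0.5]), method="SLSQP",
                   constraints=[{"type": "eq", "fun": F3}])

    print(res.x)    # again close to (1, 1) from this starting point
    print(res.fun)  # objective value at the constrained minimum, near zero here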
There is still a further possibility that may be worth investigating, which I would like to introduce via an example:
Consider simultaneously three quadratic equations in a single variable:
F_1 = a x^2 + b x + c = 0
F_2 = d x^2 + e x + f = 0
F_3 = g x^2 + h x + i = 0
This nonlinear system may be converted to a linear one
a y + b x + c = 0
d y + e x + f = 0
g y + h x + i = 0
by introducing the new variable y=x^2. This linear system of 3 equations for 2 unknowns x,y may be solved by any of the standard methods.
The original system has a solution only if a solution (x,y) of the linear system satisfies y=x^2.
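A sketch of this procedure, with the coefficients a,...,i invented so that the three quadratics share the root x = 1 (NumPy's lstsq solves the 3-equations / 2-unknowns linear system in the least-squares sense):

    import numpy as np

    # Invented coefficients: F_1 = x^2 - 3x + 2, F_2 = 2x^2 - x - 1, F_3 = x^2 - 1,
    # which all vanish at x = 1.  With y = x^2 the system becomes linear in (y, x):
    #   a*y + b*x = -c,   d*y + e*x = -f,   g*y + h*x = -i
    M = np.array([[1.0, -3.0],
                  [2.0, -1.0],
                  [1.0,  0.0]])
    rhs = np.array([-2.0, 1.0, 1.0])

    (y, x), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    print(x, y)                 # both equal to 1 for these coefficients
    print(np.isclose(y, x**2))  # the original quadratic system has a solution only if this holds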
Again, one may tackle the linear system by minimizing L_3(x,y), but now with the constraint y=x^2, which may also be added via a Lagrange multiplier mu, say.
Thus, one may try to linearize the original system by introducing further variables for the nonlinear terms and adding the defining equations of the new variables as additional constraints.
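For the same invented quadratics, a sketch of this constrained formulation, again handing the Lagrange-multiplier bookkeeping to scipy's SLSQP solver with y = x^2 imposed as an equality constraint:

    import numpy as np
    from scipy.optimize import minimize

    # Linearized system M @ (y, x) = rhs from the example above
    M = np.array([[1.0, -3.0],
                  [2.0, -1.0],
                  [1.0,  0.0]])
    rhs = np.array([-2.0, 1.0, 1.0])

    # L_3: sum of squared residuals of the linearized system, v = (y, x)
    def L3(v):
        r = M @ v - rhs
        return float(r @ r)

    # Defining equation of the new variable: y - x^2 = 0
    constraint = {"type": "eq", "fun": lambda v: v[0] - v[1]**2}

    res = minimize(L3, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=[constraint])
    print(res.x)   # close to (y, x) = (1, 1), i.e. x = 1 solves all three quadratics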