Kardi, I don't know what you mean by a non-negative rectangular matrix. If you make the matrix square by filling in the missing elements with zeroes, you will have a positive semi-definite matrix. Since the nonzero eigenvalues are positive, you can use canonical orthogonalization to remove the eigenvectors that correspond to zero eigenvalues. If the number of eigenvalues does not match the order of the matrix, you may have to use deflation techniques to find the proper degeneracies and their Segrè characteristics.
Thank you for your comment, Erkki. I have an m by n rectangular matrix A whose elements are non-negative, meaning they are either zero or positive real numbers. The matrix is neither symmetric nor Hermitian, so it cannot become semi-definite even if I add zeros to make it square. All the elements of the vector b are positive real numbers. I want to know the positive solution x of the linear equation Ax = b.
Let xp be a particular solution to Ax = b. If some component of xp is negative, we can look for a vector xn in the nullspace of A such that xp + xn is non-negative; since A xn = 0, the shifted vector xp + xn is still a solution of Ax = b.
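To illustrate the idea (a minimal sketch in Python/NumPy; the example matrix and the one-dimensional grid search over the nullspace coefficient are just illustrative choices of mine, not a general method):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

xp, *_ = np.linalg.lstsq(A, b, rcond=None)  # a particular solution
N = null_space(A)                           # nullspace basis, here one column

# Any xp + N @ c solves Ax = b; try coefficients until all entries are >= 0.
for c in np.linspace(-5.0, 5.0, 2001):
    x = xp + N[:, 0] * c
    if np.all(x >= -1e-12):
        print(x, A @ x)  # x is non-negative and A x reproduces b
        break
```

For a nullspace of dimension greater than one, searching for a non-negative shift is itself a feasibility problem, which is exactly what the linear programming approach below handles systematically.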
I think the equation Ax = b (b non-negative) need not always have a non-negative solution, even when A is non-negative. But Farkas' lemma assures a non-negative solution under a certain condition: Ax = b, x >= 0 is consistent if and only if A^T y >= 0 implies <b, y> >= 0. (Here < , > denotes the inner product and A^T is the transpose of A.) Further, when the matrix A is also weak monotone, a positive solution is guaranteed if b is non-negative.
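To make this concrete, here is a small numerical check in Python (the matrix, right-hand side, and certificate vector are examples I made up):

```python
import numpy as np

# A and b are non-negative, yet Ax = b has no non-negative solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
b = np.array([2.0, 1.0])

x = np.linalg.solve(A, b)
print(x)        # [ 2. -1.] -- the unique solution has a negative entry

# Farkas certificate: A^T y >= 0 componentwise but <b, y> < 0,
# so no x >= 0 with Ax = b can exist.
y = np.array([-1.0, 1.0])
print(A.T @ y)  # [0. 1.] -> non-negative
print(b @ y)    # -1.0    -> negative
```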
As I understand it, you wish to solve the constrained system AX = B with X >= 0. One classic technique for approaching this is to treat the constrained system as the input to Phase I of the classical two-phase simplex method for linear programming. Suppose that A is m x n. Let I be the m x m identity matrix and introduce m new "artificial" variables represented by the m x 1 vector W. Then apply the simplex method to the linear programming problem (LP):
min (sum of the entries of W) subject to AX + IW = B and X >= 0 and W >= 0.
Since B >= 0, LP theory tells us that the simplex method will converge in finite time to an optimal solution Xopt >= 0 and Wopt >= 0. Either Wopt = 0, in which case, Xopt is a nonnegative solution to the original constrained system, or else, Wopt has at least one positive entry, in which case, there is no nonnegative vector X that solves the original constrained system. Any textbook on linear programming (Calvert and Voxman, or Bondy and Murty, for example) will give you all the details of why this must be true, and will also explain how to implement the simplex algorithm. If you are working with A with large m and n (say greater than 50), then you should consider using some version of the revised simplex method, which is computationally more efficient.
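If you want to experiment with this, here is a minimal sketch of the Phase I construction in Python using scipy.optimize.linprog (which calls the HiGHS solvers rather than a textbook simplex code; the function name and tolerance are my own choices):

```python
import numpy as np
from scipy.optimize import linprog

def phase1_nonneg_solve(A, b, tol=1e-9):
    """Return some X >= 0 with A X = b, or None if none exists (assumes b >= 0)."""
    m, n = A.shape
    # Phase I LP: min sum(W)  subject to  A X + I W = b,  X >= 0, W >= 0
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_eq = np.hstack([A, np.eye(m)])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    if res.status != 0 or res.fun > tol:
        return None   # Wopt != 0: the constrained system is infeasible
    return res.x[:n]  # Wopt = 0: Xopt solves A X = b with X >= 0
```

On the small Farkas example above, phase1_nonneg_solve(np.array([[1., 0.], [1., 1.]]), np.array([2., 1.])) returns None, as it should.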
Note that this approach does NOT require that A be nonnegative. Also note that, in general, your problem can be infeasible even without requiring that X be nonnegative. The nonnegativity of A and of b is certainly not enough to guarantee that b is in the column space of A.
You might want to look at the book by Brualdi and Shader, Matrices of Sign-Solvable Linear Systems (Cambridge University Press, mid-1990s), on what are called sign-solvable systems.
I am not sure what you mean by a closed form solution. You might consider some sort of constrained least squares; Ravi Bapat has a chapter on constrained least squares problems in his linear algebra book for statisticians. I imagine you would perform least squares to minimize ||AX - B|| subject to the constraint X >= 0. The solutions to such problems admit "closed form" expressions that typically involve lots of products of matrices, some of which are inverses of products (if you are lucky) or generalized inverses of products (if you are not lucky). For comparison, in classical least squares, where there is no additional constraint on X, the least squares solution is X = (A^T A)^(-1) A^T B when A has full column rank (that is, A is m x n and rank(A) = n); when A does not have full column rank, the solution becomes X = (A^T A)^- A^T B, where M^- means some kind of generalized inverse of M (for example, the Moore-Penrose inverse).
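For the constrained version, SciPy ships the Lawson-Hanson non-negative least squares algorithm as scipy.optimize.nnls; a minimal sketch (the example data are mine):

```python
import numpy as np
from scipy.optimize import nnls

# nnls minimizes ||A x - b|| over x >= 0 (Lawson-Hanson algorithm).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0, 1.0])

x, residual = nnls(A, b)
print(x)         # the constrained least squares solution, here [1. 1.]
print(residual)  # ||A x - b||; zero (up to roundoff) iff A x = b exactly
```

A residual of (numerically) zero tells you that the constrained least squares minimizer is in fact an exact non-negative solution of AX = B.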