Finding a positive integral solution to a homogeneous system of linear equations may be NP-hard, but what about determining whether such a solution exists?
If you have a classic hard problem, e.g. TSP, you convert it to a decision problem by removing the objective function and instead asking whether a solution exists whose objective value meets a given bound.
For an integral matrix A of order m \times n, the null space of A is the subset N(A) of C^{n} defined by
N(A) = {x in C^{n} : Ax = 0} .
Observe that N(A) is the solution set to the homogeneous linear system Ax = 0, sometimes called the solution space of the homogeneous system Ax = 0.
To find a positive integer solution for the homogeneous system of linear equations, first compute the reduced row echelon form of the augmented matrix [A : 0]; from it we get a basis of N(A). If all components of some basis vector are positive integers (or positive rationals, which can be scaled by a common denominator), then there exists a positive integer solution for the homogeneous system.
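As a minimal sketch of this check (using SymPy's `Matrix.nullspace`; note that it only tests the sufficient condition above, namely that a single basis vector is all-positive):

```python
from sympy import Matrix, lcm

def positive_solution_from_basis(A):
    """Return a positive integer solution of A x = 0 if some null-space
    basis vector has all-positive entries, else None.  This only checks
    the sufficient condition from the text; a positive solution may also
    arise as a combination of basis vectors."""
    for v in Matrix(A).nullspace():
        if all(c > 0 for c in v):
            # Scale by the LCM of the denominators to get integer entries.
            scale = lcm([c.q for c in v])
            return [int(c * scale) for c in v]
    return None

# The null space of this A is spanned by (1, 1, 1), which is all-positive.
A = [[1, -1, 0], [0, 1, -1]]
print(positive_solution_from_basis(A))  # -> [1, 1, 1]
```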
I love the detail provided in most of the answers, especially by Peter, thank you :). Yes, as Peter mentioned (first comment, where he mentions MIPs), you will likely need to solve an integer programming problem (this should suffice, since your matrix A consists of integers and you want to check whether an integer feasible solution exists), but if anybody knows a more elegant approach to determining existence for this special case, that would be splendid to hear!
Just a reminder that answers are supposed to address the question as posed (some seem irrelevant; see the side note). The question asks "how", not whether it is possible to determine the existence of such a solution in polynomial time.
Side note: It is worth noting that the original question said "effectively", not "efficiently", two very different animals when we talk about a problem's difficulty in terms of computational complexity. "Effectively" is often thrown around as a buzzword, sometimes meaning the algorithm is correct and "gets the job done", and at other times meaning it is comparable with the best-known algorithms. This has been discussed on RG in the past: https://www.researchgate.net/post/Can_anyone_give_more_clarification_on_the_difference_between_the_effectiveness_and_efficiency_of_an_algorithm
If it were "efficiently", I'd agree with most of the answers. For a problem to be effectively solvable is simply for it to be solvable in finite time (typically with efficiency comparable to the best-known methods). "Quickly" is likewise a context-dependent notion, not a concrete mathematical definition. Some problems can be solved quickly within their respective problem domain; I've seen algorithms that quickly return answers yet have exponential running times, and I'd not call them efficient in the context of all problems, though they may be for a specific problem. The problem at hand is traditionally not an easy one to solve, so we should focus on providing answers. If the question had asked about time complexity, this distinction would be more relevant to point out.
It is also worth noting (from reading some answers) that a problem, when we talk about computational problems, is a set of problem instances. We say solutions are feasible or infeasible (hence "feasible solutions") when they meet all the constraints of the problem; these terms make the most sense in the context of optimization problems, not decision problems (or individual problem instances), unless those are poorly defined or something of that nature. It could be a slip-up, but it is important to note.
As several others have noted, this is an integer linear programming (ILP) problem. To affirm a supposition by Peter Breuer: yes, Mathematica has this built in. If the matrix is called "mat" and "x" is a variable vector of appropriate dimension, one can do something like
Solve[Join[Thread[mat . x == 0], Thread[x >= 0]], x, Integers]
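For readers without Mathematica, a rough Python equivalent of that Solve call can be sketched with SciPy's MILP interface (this assumes SciPy >= 1.9; the extra constraint sum(x) >= 1 merely excludes the trivial zero solution):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def nonzero_nonneg_integer_solution(A):
    """Search for a nonzero integer x >= 0 with A x = 0, mirroring the
    Mathematica Solve call above.  Returns None if no such x exists."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    constraints = [
        LinearConstraint(A, 0, 0),                # A x == 0
        LinearConstraint(np.ones(n), 1, np.inf),  # sum(x) >= 1 excludes x = 0
    ]
    res = milp(c=np.zeros(n), constraints=constraints,
               integrality=np.ones(n), bounds=Bounds(0, np.inf))
    return None if res.x is None else np.round(res.x).astype(int)

A = [[1, -1, 0], [0, 1, -1]]
print(nonzero_nonneg_integer_solution(A))  # some positive multiple of (1, 1, 1)
```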
The method involves branch-and-prune, though with a different initial relaxation. First we find the integer null space using the Hermite decomposition. We create "small" generators, call them "nulls", using lattice reduction. The relaxation is to ignore positivity (as opposed to the usual LP relaxation of ignoring integrality while maintaining the inequalities). With this in place, the problem is reformulated as one of finding a nonzero variable vector "y" such that y.nulls is all nonnegative. The relative sizes of the null vectors determine the branching strategy.
The process is described, with examples, at the link below. If I recall correctly, repfigit (aka "Keith number") computations are in fact done as a positive null space problem, albeit with some other inequalities thrown in.
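To illustrate the reformulation (not the actual branch-and-prune method): given integer generators "nulls" of the null space, we seek an integer coefficient vector y whose combination y.nulls is strictly positive. A brute-force toy sketch, with a hypothetical search radius in place of real branching:

```python
import itertools
import numpy as np

def positive_combination(nulls, radius=3):
    """Toy brute-force version of the reformulation: search integer
    coefficient vectors y with |y_i| <= radius for one whose combination
    y @ nulls is strictly positive.  (The real method uses branch-and-prune
    over lattice-reduced generators rather than exhaustive search.)"""
    nulls = np.asarray(nulls)
    k = nulls.shape[0]
    for y in itertools.product(range(-radius, radius + 1), repeat=k):
        x = np.asarray(y) @ nulls
        if np.all(x > 0):
            return np.asarray(y), x
    return None

# Integer generators of the null space of A = [[1, 1, -2]]:
nulls = [[1, -1, 0], [0, 2, 1]]
print(positive_combination(nulls))
```

Here neither generator is positive on its own, but the combination y = (1, 1) gives the strictly positive null vector (1, 1, 1).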
I'd like to thank all of you (Peter, Thomas, Wiwat, and Daniel Lichtblau) for all your advice, which has been helpful to me :-)
Here is my thinking:
Since AX = 0 has a positive integral solution iff it has a positive rational solution (scale by a common denominator), the problem can be regarded as a standard linear programming problem, which can be solved in polynomial time by the ellipsoid algorithm (or by the simplex method, which has exponential worst-case time but is efficient in practice). Thus we can compute all the vertices of the polyhedral convex set {X | AX = 0, X >= 0, sum_i X_i = 1} and add them up; this yields a non-negative rational vector b, and AX = 0 has a positive integral solution iff b > 0.
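A simpler LP feasibility variant of the same idea (my own sketch, not the vertex-summing procedure): since the solutions form a cone, a strictly positive rational solution of AX = 0 exists iff the system {A x = 0, x >= 1} is feasible, which SciPy's linprog can check directly:

```python
import numpy as np
from scipy.optimize import linprog

def has_positive_solution(A):
    """Check whether A x = 0 admits a strictly positive solution.

    The solution set is a cone, so a strictly positive rational solution
    exists iff the LP {A x = 0, x >= 1} is feasible; a positive rational
    solution can then be scaled to a positive integer one.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    res = linprog(c=np.zeros(n), A_eq=A, b_eq=np.zeros(m),
                  bounds=[(1, None)] * n, method="highs")
    return res.status == 0  # status 0 means a feasible point was found

print(has_positive_solution([[1, -1, 0], [0, 1, -1]]))  # True: x = (t, t, t)
print(has_positive_solution([[1, 1, 1]]))               # False: only x = 0
```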
Xiong Xu Yes, you are right, this can be handled by ordinary linear programming. The thing I missed, obvious in hindsight, is that absent upper bounds on the size of integer null vector elements, you don't need integer programming.
So here is the protocol. If we have A.x == 0 and x >= 0 (by which I mean elementwise, since x is a vector), it suffices to add the inequalities x >= 1 (again elementwise): the resulting linear program is feasible iff a strictly positive rational solution exists, and any such solution can be scaled to a positive integer one.
I'm not sure I followed your (Peter's) TSP model, and in particular I do not know whether it precludes disjoint cycles. But in any case it cannot be equivalent to, or rather a polynomial-complexity embedding into, the problem of the original post: TSP requires ILP or something of similar complexity, while the original question really requires only LP.
So what distinguishes the proposed embedding? If I understand the model, it is the integrality requirement. The TSP might have a rational solution with all nonnegative values, but multiplying through to get integers will render it no longer a TSP solution; we'll lose the 0-1-ness of the flows.
It is quite possible I am not understanding the TSP modeling correctly. But I am fairly certain it will falter as an embedding into the problem of the original question, because the complexity of TSP has to be greater (in the usual sense of not being polynomially equivalent, modulo P != NP).