The class of NP-complete problems contains only decision problems (the answer is yes or no; no enumeration). Finding all the solutions is therefore not an NP-complete problem, since it is not a decision problem. However, your decision problem (does there exist a combination with the prescribed sum?) is one of Karp's 21 original NP-complete problems. It is known as the subset sum problem, a variant of the knapsack problem. It is NP-hard. However, there is a simple dynamic programming algorithm that solves it in time polynomial in the sum (quadratic, if I remember correctly; I assume the inputs here are integers). You should check this: http://en.wikipedia.org/wiki/Subset_sum_problem
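A minimal sketch of that dynamic programming approach, assuming non-negative integer inputs (the function name is mine):

```python
def subset_sum_exists(values, target):
    """Pseudo-polynomial DP: reachable[s] is True iff some subset sums to s."""
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for v in values:
        # iterate downwards so each value is used at most once
        for s in range(target, v - 1, -1):
            if reachable[s - v]:
                reachable[s] = True
    return reachable[target]

print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 = 9
```

The table has target + 1 entries and each value updates it once, which is where the "polynomial in the sum" (rather than in the input size) running time comes from.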
The DP algorithm you are referring to is pseudo-polynomial; if it were polynomial, it would imply P = NP. That DP algorithm is for the 0-1 knapsack problem (the name "knapsack problem" is sometimes used for the fractional knapsack problem, which is a relaxation of the 0-1 version). It is worth noting that you can consider the 0-1 knapsack problem with real weights; it is at least as hard as the traditional version with integer weights. Mathematically we usually work with reals (the algorithm would still be correct), but normally the problem is stated with rational or integer weights because of input size and how instances are encoded (input size is very important when we analyze a problem). Storing real numbers requires a very different encoding than one could use for integers or rationals, which also affects input size. You do correctly identify the correspondence, though.
Also worth noting: that algorithm seeks out a single optimal solution meeting the constraints; it doesn't generate all solutions that achieve the optimum (though I'd imagine it's not too tough to modify with some careful pruning).
Now some more comments:
I'm assuming you're discussing non-negative or positive rationals, correct? If you don't restrict to non-negative values, you'll end up with an infinite set, which can't be enumerated in full.
Generation algorithms are inherently expensive to execute when they have to generate an exponential number of sequences. Normally such algorithms are analyzed with respect to the size of the set you seek to generate. What I suggest looking into are loopless algorithms. They tend to be very easy to implement, but very hard to prove theorems about. Their benefit is that it takes O(n) to generate the first list member, and each subsequent member is generated in constant time. You really can't do better than that without imposing further restrictions. Look into the literature for more details using the keywords loopless algorithm and integer partitions (or zero integer partitions or weak integer partitions if you want non-negativity).
It is worth noting that if you're interested in producing all possible combinations of N rational numbers (which is what you'd do anyway on a computer) with a sum constraint, just rescale to integers, generate all possible integer partitions, and then rescale back. This will be a huge number, so choose your inputs wisely and give yourself at least a few days to get your list.
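The rescaling step might look like this, assuming the inputs arrive as Python `Fraction`s (the function name is mine):

```python
from fractions import Fraction
from math import lcm

def rescale_to_integers(rationals):
    """Multiply by the LCM of the denominators so every value becomes an integer."""
    scale = lcm(*(q.denominator for q in rationals))
    return [int(q * scale) for q in rationals], scale

ints, scale = rescale_to_integers([Fraction(1, 2), Fraction(1, 3), Fraction(5, 6)])
print(ints, scale)  # [3, 2, 5] 6
```

Any partition of the rescaled integers divides by `scale` to recover a combination of the original rationals.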
There are some books in Combinatorial Algorithms that go into great detail on how to generate said objects. Here is a paper you can check out as a starting point: http://www.site.uottawa.ca/~ivan/F49-int-part.pdf (you can find several useful resources in the references of the paper as well).
Yes, you're right: it is pseudo-polynomial with dynamic programming. That is why I wrote "polynomial in the sum" (my English was not completely correct there, I guess). I don't remember whether it is linear or quadratic in the integer s that gives the sum, something like that (too late to check, my bed is waiting for me)... while the size of the encoding of the data is usually much smaller (on the order of log(s), or size(input)).
Hassan: I don't think I understand your example. If the alpha_i are real numbers, with i up to 8, then there are 2^8 = 256 possible combinations of alpha_i. I can compute the sum for every combination and just see whether any of them equals Z. I don't think I have to discretize; I fail to see how that helps.
Some obvious tree-trimming helps; we don't have to consider sums once they exceed Z. You can sort the alpha_i so that alpha_1 is largest, and create a binary tree whose root is the empty set, whose first branch is the binary choice of including alpha_1 in the sum or not, whose second level, for each of those two possibilities, is the binary choice of including alpha_2 or not, and so on; you can stop following any node whose sum exceeds Z.
That is the obvious algorithm that occurs to me, anyway, given your description.
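A sketch of that pruned tree walk, assuming positive values (names are mine):

```python
def subsets_summing_to(values, target):
    """Depth-first walk over include/exclude choices, pruning sums above target."""
    values = sorted(values, reverse=True)  # largest first, as suggested
    results = []

    def walk(i, chosen, total):
        if total == target:
            results.append(list(chosen))
            return  # with positive values, no extension keeps the sum at target
        if i == len(values):
            return
        if total + values[i] <= target:  # prune: never follow a sum above target
            chosen.append(values[i])
            walk(i + 1, chosen, total + values[i])
            chosen.pop()
        walk(i + 1, chosen, total)  # the branch that excludes values[i]

    walk(0, [], 0)
    return results

print(subsets_summing_to([1, 2, 3, 5], 5))  # [[5], [3, 2]]
```

The pruning only cuts branches when the values are positive; with mixed signs every branch would have to be explored.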
Thank you all for your answers. Let me clarify what is meant by the question again.
Suppose that Z = 3. We want 3 to be distributed between a1, a2, and a3 in such a way that \sum ai = 3 and each ai is at least some threshold (say 0.1).
The combination would be:
a1 a2 a3
2.8 0.1 0.1
2.7 0.1 0.2
2.7 0.2 0.1
....
In this case, we have Z = 3 and only 3 a's. This is simple to do. However, the problem quickly gets difficult when the number of a's is 4, 5, or 6 ...
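One way to enumerate these mechanically is to work in units of the threshold (0.1), so the task becomes listing ordered integer compositions; a sketch, assuming every ai is an integer multiple of the threshold (the function name is mine):

```python
def compositions(total, parts, minimum=1):
    """All ordered tuples of `parts` integers, each >= minimum, summing to total."""
    if parts == 1:
        return [(total,)] if total >= minimum else []
    out = []
    for first in range(minimum, total - minimum * (parts - 1) + 1):
        out.extend((first,) + rest
                   for rest in compositions(total - first, parts - 1, minimum))
    return out

# Z = 3 with threshold 0.1: work in tenths, i.e. split 30 units into 3 parts >= 1
tenths = compositions(30, 3)
print(len(tenths))  # 406
print([tuple(t / 10 for t in c) for c in tenths[:2]])  # [(0.1, 0.1, 2.8), (0.1, 0.2, 2.7)]
```

The count 406 is C(29,2), the number of compositions of 30 into 3 positive parts, which already hints at how fast the list grows as the number of a's increases.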
Okay, let's call the bin count B, so we have a_1 ... a_B. Let's call the number of objects to be distributed N. For mathematical simplicity I assume a bin can hold zero objects; if that is not the case, subtract B from N, put one object in each bin, and use the following to distribute the remaining N-B objects instead, with zero objects per bin allowed. In your example, I think N = Z/T, the total (Z) divided by your threshold (T).
Given N objects and 1 bin, all N objects must go in that bin, so there is 1 way to fill the 1 bin, so #ways_1 = 1.
Given N objects and 2 bins, the first bin can have from 0 to N objects, and the second is forced to hold the remainder, so there are (N+1) ways to fill the two bins. So #ways_2 = (N+1).
Given N objects and 3 bins, the first bin can have from 0 to N objects; call that count #a_1. The 2nd and 3rd must hold (N - #a_1) objects, and we know from the two-bin case that there are (1 + N - #a_1) ways to fill those. So the number of ways for 3 bins works out to the sum of:
#a_1, ways to fill remaining 2:
0, N+1
1, N
2, N-1
...
N, 1
Which is sum(1,N+1) = (N+1)*(N+2)/2. So #ways_3=(N+2)*(N+1)/2.
Then, for B=4, the same idea: sum #ways_3 over the possible values of #a_1, i.e. Sum over k=0..N of (k+1)*(k+2)/2 = (N+1)*(N+2)*(N+3)/6 = #ways_4. (In general, #ways_B = C(N+B-1, B-1), the "stars and bars" count.)
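These counts are the stars-and-bars numbers, #ways_B = C(N+B-1, B-1); a quick brute-force check of the small cases (the helper name is mine):

```python
from math import comb

def ways(n, bins):
    """Count distributions of n identical objects into bins, by brute-force recursion."""
    if bins == 1:
        return 1
    return sum(ways(n - first, bins - 1) for first in range(n + 1))

for n in range(6):
    assert ways(n, 2) == n + 1                              # #ways_2
    assert ways(n, 3) == (n + 1) * (n + 2) // 2             # #ways_3
    assert ways(n, 4) == (n + 1) * (n + 2) * (n + 3) // 6   # #ways_4
    assert ways(n, 4) == comb(n + 3, 3)                     # stars and bars
print("counts agree")
```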
Mathematica or Maple (my choice) can help you with the arithmetic and factoring.
And so on. To realize this, I would probably use a recursive C function, doing just what I said above: start with a valid value in a_B, which reduces N by some amount, and recurse to count through the possibilities in [a_1 ... a_{B-1}] with the new N (passing the whole vector of values a_1 ... a_B down to the lowest level).
The recursion stops at a_1, which always has only one thing to do: whatever "N" it is passed all goes into a_1. At that point, you have one instance of a completed vector [a_1 ... a_B], and you can evaluate it using whatever algorithm you like. When you return from the recursion to the depth handling a_2, it will change the number of elements assigned to a_2, compute the new budget for the lower layers, and recurse again to a_1 with a different budget. When it is finished with its possibilities, control returns to the a_3 depth, which does the same: change the contents of a_3 and call the a_2 level with a new budget, and so on up the chain.
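A sketch of that recursion (written in Python rather than C, for brevity; names are mine):

```python
def all_fillings(n, bins):
    """Yield every vector [a_1 ... a_bins] of non-negative counts summing to n."""
    if bins == 1:
        yield [n]  # a_1 always takes whatever budget it is passed
        return
    for count in range(n + 1):  # choose the contents of the last bin
        for rest in all_fillings(n - count, bins - 1):
            yield rest + [count]

vectors = list(all_fillings(2, 3))
print(len(vectors))  # 6 vectors, matching #ways_3 = (2+1)*(2+2)/2 = 6
```

Because it is a generator, each completed vector can be evaluated as it appears, without storing the whole (huge) list at once.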
P.S. That's off the top of my head; let me know if you see a flaw.