Looking through the literature, I realized that all the proofs of NP-hardness for Quadratic Integer Programming (QIP) are based on the claim that Binary Quadratic Integer Programming is NP-hard. Is that true?
Thanks for your reply. In this paper, problems 2 and 12 (both NP-complete) seem to be more relevant to my problem. But in neither case is x integer. Indeed, there is no clear explanation for either of them. In problem 2, it says 0
Thank you for your reply; the second paper was really insightful. But as for the paper mentioned by Ibraheem K. Ibraheem, the problems discussed in these two papers are not QIP, they are QP problems. If QP is NP-hard, how can one conclude that QIP is NP-hard too? Can you point out what I am missing? (In addition, in my problem H is positive semi-definite, so the proof for a special structure, a negative eigenvalue, doesn't help me.)
Unconstrained Quadratic Integer Programming is strongly NP-hard even when the objective function is convex. The best-known proof is by reduction from the Closest Vector Problem.
Thank you very much, Farzaneh. Although Lemma 4 in this paper was really useful for learning proofs by reduction in optimization problems, the main concern of this paper is again binary quadratic programming.
You can also easily reduce Unconstrained Quadratic 0-1 Programming to Unconstrained Quadratic Integer Programming. Just add a penalty term of the form M \sum_{i=1}^n (x_i^2 - x_i) to the cost function, where M is a large positive number. This penalty term is non-negative at every integer point, and it is zero if and only if all variables take binary values.
Not only that, but the addition of the penalty term makes the objective function convex (for M large enough). Thus, requiring convexity does not make the problem any easier.
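For concreteness, the penalty folds directly into the data of the problem: the x_i^2 terms land on the diagonal of Q and the -x_i terms on the linear part. A minimal Python sketch, where the particular formula for M is my own crude (but sufficient) choice rather than anything fixed by the argument above:

```python
import numpy as np

def penalized_instance(Q, c):
    """Fold the penalty M * sum_i (x_i^2 - x_i) into the quadratic form:
    min x'Qx + c'x over {0,1}^n becomes an unconstrained integer QP
    whose minimizers are forced to be binary."""
    n = len(c)
    # A crude but sufficient bound for M (my own choice for this sketch):
    # it dominates the range of the 0-1 objective, while any non-binary
    # integer point pays a penalty of at least 2*M.
    M = 2 * (np.abs(Q).sum() + np.abs(c).sum()) + 1
    Qp = Q + M * np.eye(n)     # x_i^2 terms sit on the diagonal
    cp = c - M * np.ones(n)    # -x_i terms shift the linear part
    return Qp, cp, M
```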
@Fatemeh: do you mean a problem of the form minimise x'Qx + c'x, subject to x_i \in S for all i,
where S is some specific finite set of integers? If so, then you can easily reduce Unconstrained Quadratic 0-1 Programming to it. Let s_1 and s_2 be the two smallest integers in S. Take your 0-1 problem and redefine the variables so that 0 maps to s_1 and 1 maps to s_2. Adjust the objective function accordingly. Then add a penalty term of the form M \sum_{i=1}^n (x_i - s_1)(x_i - s_2) to the cost function. This penalty term is zero when all variables take the value s_1 or s_2, and positive when some variables take values in S \ {s_1, s_2}. It would be negative if the variables were permitted to take values strictly between s_1 and s_2, but that cannot happen, by the definition of s_1 and s_2.
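A quick numerical check of this sign argument (a small Python sketch; the set S below is an arbitrary example of mine):

```python
# The penalty (x - s1)*(x - s2) vanishes at s1 and s2 and is strictly
# positive at every other member of S, since no element of S lies
# strictly between its two smallest elements.
S = sorted([-3, 1, 4, 7])   # an arbitrary example set of integers
s1, s2 = S[0], S[1]
for s in S:
    print(s, (s - s1) * (s - s2))   # 0 for s1 and s2; positive otherwise
```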
Regarding your nice suggestion, I have two questions:
1- How did you choose s_1 and s_2 to be the two smallest members of S? I ran some numerical simulations (grid search) up to dimension 6 and found that the most influential members of S are the minimum and the maximum integers in S when all members of S are non-negative. However, if one of them is negative, the other members of S are also involved in the optimal solution unless c = 0.
2- Following your suggestion, I can easily conclude that the decision version of binary Quadratic Integer Programming is reducible to the decision version of my problem, because given the minimum of my problem I can compute, in polynomial time, the minimum of the binary problem. But that is not the case for the optimisation version, right? I mean, if I have an optimal solution for my problem, I still cannot get an optimal solution for the binary Quadratic Integer Programming problem. Right?
@Fatemeh: the choice of s_1 and s_2 was arbitrary. You could pick any two consecutive members of S, and you could still reduce the 0-1 problem to your problem. This shows that your optimisation problem is NP-hard in the strong sense.
Going in the reverse direction (from your problem to the 0-1 problem) is also easy. Suppose the variables x_1, ..., x_n can only take values in {s_1, ..., s_m}. For i = 1, ..., n and j = 1, ..., m, define a binary variable x_{ij}, taking the value 1 if and only if x_i takes the value s_j. Express the cost function in terms of the x_{ij} variables, using the identities x_i = \sum_{j=1}^m s_j x_{ij}. Then add the following penalty term to the cost function:
M \sum_{i=1}^n \left( 1 - \sum_{j=1}^m x_{ij} \right)^2,
where M is again a large positive number. This penalty is non-negative, and takes the value zero if and only if, for all i, exactly one of the variables x_{i1}, ..., x_{im} is equal to one. Solve the resulting 0-1 problem, and convert its solution back into a solution of your problem using the above identities.
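If it helps, here is a compact Python sketch of this construction; the names A, B and to_binary_qp are mine, introduced only for illustration. The matrix A encodes the identities x = A z, and the penalty expands as M(n - 2 \cdot 1'Bz + z'B'Bz):

```python
import numpy as np

def to_binary_qp(Q, c, S, M):
    """Rewrite  min x'Qx + c'x  with  x_i in S  as a 0-1 QP over
    indicators z[i*m + j] = 1 iff x_i = S[j].  Returns (Qb, cb, const)
    so the new objective reads  z'Qb z + cb'z + const."""
    n, m = len(c), len(S)
    A = np.zeros((n, n * m))   # x = A z  (the identities above)
    B = np.zeros((n, n * m))   # (B z)_i = sum_j z_{ij}
    for i in range(n):
        A[i, i * m:(i + 1) * m] = S
        B[i, i * m:(i + 1) * m] = 1.0
    # Expand  M * sum_i (1 - sum_j z_{ij})^2 = M*(n - 2*1'Bz + z'B'Bz)
    Qb = A.T @ Q @ A + M * B.T @ B
    cb = A.T @ c - 2 * M * B.T @ np.ones(n)
    const = M * n
    return Qb, cb, const
```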
Thank you very much. Indeed, my objective is to show that binary QIP is reducible to my problem. I followed your suggestion and wrote the following:
g: \{0,1\} \to \{s_1, s_2\} is defined as y = g(x) = (s_2 - s_1)x + s_1. Adjusting the objective function, I get the following problem:
min y' \tilde{Q} y + \tilde{c}' y + D
y_i \in \{s_1, s_2\} for all i,
where \tilde{Q} = Q/(s_2 - s_1)^2, \tilde{c}' = ((s_2 - s_1)c' - 2 s_1'Q)/(s_2 - s_1)^2 and D = (s_1'Q s_1 - (s_2 - s_1)c's_1)/(s_2 - s_1)^2, assuming Q is symmetric. In \tilde{c} and D, s_1 denotes the n-dimensional vector with every entry equal to s_1.
Then, if I add the penalty term M \sum_{i=1}^n (y_i - s_1)(y_i - s_2), I can write it as M(y - s_1)'(y - s_2), where s_1 and s_2 now denote the corresponding constant vectors. This gives the following problem:
min y' (\tilde{Q} + M I) y + (\tilde{c}' - M(s_1' + s_2')) y + D + M s_1' s_2
y_i \in \{s_1, s_2, \ldots, s_p\} for all i,
where I is the identity matrix and M is a large positive number.
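For what it's worth, these identities are easy to sanity-check numerically. A short Python sketch with arbitrary example data of my own (a random symmetric Q and random c), verifying that the change of variables preserves the objective at every 0-1 point:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s1, s2 = 4, -3, 1                      # arbitrary example values
d = s2 - s1
Q = rng.standard_normal((n, n))
Q = Q + Q.T                               # make Q symmetric
c = rng.standard_normal(n)
s1v = s1 * np.ones(n)                     # s_1 as an n-vector

Qt = Q / d**2                             # \tilde{Q}
ct = (d * c - 2 * Q @ s1v) / d**2         # \tilde{c}
D = (s1v @ Q @ s1v - d * (c @ s1v)) / d**2

x = rng.integers(0, 2, size=n).astype(float)   # any 0-1 point
y = d * x + s1v                                # y = g(x)
assert np.isclose(x @ Q @ x + c @ x, y @ Qt @ y + ct @ y + D)
```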
In this way, an instance of binary QIP is transformed into an instance of my problem. Now, if there is an algorithm that gives me a solution of my problem (call it y_opt), how can I transform it back into x_opt, a solution of the binary QIP, if some y_i is neither s_1 nor s_2?
How would you argue that binary QIP is reduced to my problem? Is it enough to say that the two minima coincide, since any y with a component outside \{s_1, s_2\} incurs a positive penalty M \sum_{i=1}^n (y_i - s_1)(y_i - s_2)?
Exactly. If M is large enough, then it will never be optimal to set y_i to anything other than the two specified values.
Reductions of the kind I have described are standard tools in computational complexity theory. I recommend reading Garey & Johnson (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness.
Thank you very much. I was only familiar with Karp reductions, which apply only to decision problems. You have taught me a lot! I deeply appreciate your great help.