
Deterministic global optimization relies on convex relaxations of the non-convex problem. Certain nonlinearities (e.g. signomial functions, bilinear terms) are converted into linear underestimators so that the relaxed problem can be solved by efficient MILP solvers.
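For a bilinear term x*y, for example, the standard McCormick envelope gives exactly such linear underestimators, valid over the whole variable box. A minimal numerical sketch of that property (the box bounds below are my own illustrative choices, not from any particular model):

```python
import numpy as np

# Illustrative box bounds for the bilinear term w = x*y.
xL, xU = -1.0, 2.0
yL, yU = 0.5, 3.0

def mccormick_under(x, y):
    """Pointwise maximum of the two McCormick underestimators of x*y."""
    u1 = xL * y + yL * x - xL * yL
    u2 = xU * y + yU * x - xU * yU
    return np.maximum(u1, u2)

# Sample the box and verify the underestimator property u(x, y) <= x*y.
xs, ys = np.meshgrid(np.linspace(xL, xU, 200), np.linspace(yL, yU, 200))
gap = xs * ys - mccormick_under(xs, ys)
print("min(f - u) over the box:", gap.min())  # >= 0 (equality on parts of the boundary)
```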

Most nonlinearities are approximated by piecewise-linear functions. However, I am wondering whether these linearizations guarantee that the approximations are underestimators of the original nonconvex functions (i.e. f(x) >= u(x) for all x in dom f, where u is the underestimator).

Otherwise, the underestimator may miss the global optimum during the branch-and-bound process.
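To make the concern concrete: a secant-based piecewise-linear interpolant of a convex function lies above the function, so by itself it is not an underestimator; one common remedy is to shift each segment down by its maximum interpolation error on that segment. A small sketch of both points (f(x) = x^2, the interval and the grid are purely illustrative):

```python
import numpy as np

# Piecewise-linear (secant) interpolation of the convex function f(x) = x**2.
f = lambda x: x**2
grid = np.linspace(0.0, 2.0, 5)          # breakpoints (illustrative)
x = np.linspace(0.0, 2.0, 1001)
pwl = np.interp(x, grid, f(grid))        # secant-based interpolant

print("max(pwl - f):", (pwl - f(x)).max())      # > 0: the interpolant OVERestimates a convex f

# Shifting each segment down by its worst-case secant error restores validity.
# For f(x) = x**2 on a segment of width h, that error is h**2 / 4.
h = grid[1] - grid[0]
under = pwl - h**2 / 4.0
print("max(under - f):", (under - f(x)).max())  # <= 0 (equality at segment midpoints): valid underestimator
```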

Can the solver still converge to the global optimum if the relaxation is not an underestimator?
