Recall that in the case of minimizing a differentiable and convex function f over R^n, an optimal solution x^* is characterized by the fact that the gradient of f at x^* is the zero vector in R^n (i.e., a vector with n entries, all of them zero); we can call it 0^n for short.
If we next consider a convex function f (no longer necessarily differentiable) to be minimized over R^n, then the necessary and sufficient optimality condition for x^* is that the zero vector 0^n is one of the subgradients of f at x^* - that is, there is a place on the graph of f where the function is "flat".
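To make the condition concrete, here is a rough Python sketch with a one-dimensional example of my own choosing, f(x) = |x - 1|; the function and helper names are purely illustrative, not from any library:

# f(x) = |x - 1| is convex but not differentiable at x = 1. Away from the
# kink its only subgradient is sign(x - 1); at the kink every slope in the
# interval [-1, 1] is a subgradient, so 0 is among them and x* = 1 is optimal.

def subdifferential(x, shift=1.0):
    # Subdifferential of f(x) = |x - shift|, returned as an interval (lo, hi).
    if x > shift:
        return (1.0, 1.0)
    if x < shift:
        return (-1.0, -1.0)
    return (-1.0, 1.0)   # at the kink: all slopes between -1 and +1

def is_minimizer(x):
    lo, hi = subdifferential(x)
    return lo <= 0.0 <= hi   # optimality test: 0 belongs to the subdifferential

print(is_minimizer(1.0))   # True:  0 is a subgradient at x* = 1
print(is_minimizer(2.0))   # False: the only subgradient at x = 2 is +1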
Here is one illustration (it is surprisingly hard to find one-dimensional examples on the web):
At the kink the slope is not determined, the way it always is for differentiable functions, so in the non-differentiable case we look both left and right: to the left of the kink the derivative is negative, and to the right of it the derivative is positive. You have to imagine that the function changes character at the optimum. To the left it has a negative slope, to the right a positive one. Right at the optimum there is no derivative, but we can imagine all the possible slopes in between the left and the right one, and one of the slopes between those two "extreme" slopes is 0!! Because in this example f is convex, we have hence found that one of the subgradients is 0! And in the case of convex optimization, this is the characterization of an optimum.
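If numbers help more than words, here is a tiny Python sketch of the same "look left, look right" idea, with a made-up kinked function f(x) = max(-2x, x) of my own (so the slopes -2 and +1 are only for illustration):

# f(x) = max(-2*x, x) is convex with a kink at x = 0: slope -2 to the left,
# slope +1 to the right, and every slope in between is a subgradient at 0.

def f(x):
    return max(-2.0 * x, x)

def one_sided_slopes(f, x, h=1e-8):
    left = (f(x) - f(x - h)) / h     # slope just to the left of x
    right = (f(x + h) - f(x)) / h    # slope just to the right of x
    return left, right

left, right = one_sided_slopes(f, 0.0)
print(left, right)            # roughly -2.0 and 1.0
print(left <= 0.0 <= right)   # True: 0 lies between the two "extreme" slopes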
Sorry for "talking" too much. :-) Now do you see?
/Michael
P.S. If you also have constraints then there are additional terms in the characterization, just like in the KKT conditions.
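In LaTeX notation, one standard form is (for convex constraints g_i(x) <= 0 and under a constraint qualification such as Slater's condition; this is just the textbook statement, not anything specific to the question asked):

0 \in \partial f(x^*) + \sum_i \lambda_i \, \partial g_i(x^*), \quad \lambda_i \ge 0, \quad \lambda_i g_i(x^*) = 0 \ \forall i,

with x^* feasible; the \partial's are subdifferentials, and the multiplier terms are exactly the "additional terms" I am referring to.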
Your question is very general. Are you asking about the optimal solution of an objective function over a convex region, or about the usual linear programming simplex method? You should be more specific to get a clear answer. Best regards
In fact a second-order condition is superfluous, thanks to convexity: if some subgradient of the objective f at x^* - let's call it g^* - satisfies the variational inequality (g^*)^T (x - x^*) >= 0 for every feasible x, then x^* is optimal. Piece of cake! ;-) (The "T" is the "transpose" operator.)
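To illustrate with a toy problem of my own (not something from the question): minimize f(x) = |x| over X = [1, 3]. At x^* = 1 the subgradient is g^* = 1 and (g^*)(x - x^*) = x - 1 >= 0 for every feasible x, so x^* = 1 is optimal; at the candidate x = 2 the same test fails. A small Python check over a finite sample of X, purely as a sketch:

import numpy as np

def variational_inequality_holds(g, x_star, feasible_points):
    # Check (g)^T (x - x_star) >= 0 over a finite sample of the feasible set.
    return all(np.dot(g, x - x_star) >= 0.0 for x in feasible_points)

X_sample = np.linspace(1.0, 3.0, 201)   # finite sample of X = [1, 3]
print(variational_inequality_holds(1.0, 1.0, X_sample))   # True:  x* = 1 passes
print(variational_inequality_holds(1.0, 2.0, X_sample))   # False: x = 2 fails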
Convexity and differentiability of the constraint functions or of the payoffs are strong conditions, but they are not necessary for optimality. In my magister thesis I studied the Berge and the Nash equilibria under the condition of 0-diagonal concavity, where the functions are only semi-continuous, not necessarily continuous (see Hacene GHAROUT, "Etude des conditions d’existence des stratégies d’équilibre dans les jeux avec et sans contraintes").