How do we know whether a function is convex or not?
What are the different commands used in matlab to solve these types of problems?
I would suggest you look at the book "Linear and Nonlinear Programming" by S.G. Nash and A. Sofer. You should be able to find an answer to all your questions there. Hope this helps.
Actually, linear programming and nonlinear programming are not the same distinction as convex and nonconvex optimization. A convex optimization problem retains the key properties of a linear programming problem, while a nonconvex problem behaves like a general nonlinear programming problem.
The basic difference between the two categories is that in (a) convex optimization there can be only one optimal solution, which is globally optimal, or you might prove that there is no feasible solution to the problem, while in (b) nonconvex optimization there may be multiple locally optimal points, and it can take a lot of time to identify whether the problem has no solution or whether the solution found is global. Hence, convex optimization is much more efficient in time. From my experience, a convex problem is usually much easier to deal with than a nonconvex problem, which takes a lot of time and might lead you to a dead end.
Roughly speaking, when you have convexity you have something like the graph of x^2 (see the attached figure). You can generalise this situation to many variables and have a similar 'graph'. The benefit is that a global minimum exists and is easy to find by a variety of well-tested methods. In MATLAB there exists a package named 'cvx' which solves many such problems fast.
I agree with Photios Stavrou. Anyway, I suggest taking a look at the book "Convex Optimization" by S. Boyd and L. Vandenberghe, which is free and available online.
You can solve convex optimization problems in MATLAB by resorting to CVX or YALMIP, two free toolboxes that are also available online. They can be downloaded and installed quite easily.
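For illustration, a minimal CVX sketch for a nonnegative least-squares problem (assuming CVX is installed; the data A and b here are just made-up placeholders):

% Nonnegative least squares solved with CVX (hypothetical data).
A = randn(20, 5);  b = randn(20, 1);
cvx_begin
    variable x(5)
    minimize( norm(A*x - b) )
    subject to
        x >= 0
cvx_end
% Because the problem is convex, the returned x is a global minimizer.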
Actually, the built-in MATLAB functions are not so well performing and work reliably only for convex functionals, where a global minimum can be found. For nonconvex functions you cannot, in general, find a guaranteed global minimum with such an algorithm.
To check whether a function is convex or not, check the second derivative (the Hessian in more than one dimension), if it exists. If you do not have second derivatives, there are other methods, such as studying the subgradient or checking the defining convexity inequalities.
It depends on your interest, but for me the answer is simple. The biggest difference is that if the problem is convex, all local optima are global, and you typically only need to find one. More generally, finding global optima is difficult and really only practical for relatively small problems.
Convexity is a geometric property, not always easy to prove. If you have the Hessian (second-order derivative matrix) of your function, just verify that the eigenvalues of that matrix (eig(H)) are nonnegative and your function is convex. MATLAB uses fmincon (type help fmincon) to solve such problems. Better perhaps is to see the CVX toolbox by Boyd and Vandenberghe. Hope this helps.
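As a rough sketch of that eigenvalue check in MATLAB (assuming the Symbolic Math Toolbox; the function below is just a hypothetical example):

% Check convexity of a simple function via the eigenvalues of its Hessian.
syms x y
f = x^2 + x*y + y^2;      % hypothetical example function
H = hessian(f, [x, y]);   % symbolic Hessian (constant in this example)
eig(double(H))            % all eigenvalues >= 0  =>  f is convex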
Convex optimization means that the objective function is convex AND the search domain is convex. In such circumstances there exists exactly one minimum, and it is located inside the search domain (in the strictly convex case, where both the function and the search domain are strictly convex). Any correct minimum-finding algorithm should therefore locate the global minimum quite easily.
The non-convex case is usually much harder, even if the function is very smooth.
How do we recognize a convex function? The best way would be to check whether or not it satisfies the definition of convexity. A twice-differentiable function of many variables is convex when its Hessian (the matrix of all its second derivatives) is positive semidefinite on its entire domain.
The example presented by Demetris nicely shows how to recognize a convex function visually.
I agree with some of the above, except maybe the MATLAB experience. If you structure your problem well, MATLAB can do a decent job provided your number of unknown parameters is not ridiculous. As a practical answer to the specific question: convex optimization problems can usually be solved successfully using traditional gradient-based search techniques. Non-convex problems usually need something like an annealing approach (allowing uphill guesses from time to time), or a really good initial guess and objective-function structure, to let gradient methods find the optimum you are looking for. With noisy problems there are almost always local minima, but often with enough regularization you can achieve some form of solution that is reasonable to design from. Given that the others above spoke more to the nature of convex/non-convex, I thought I would speak to engineering methods for solution.
Convex optimization requires the minimization of a convex function over a convex domain. The solution obtained is unique. Nonconvex optimization has either a nonconvex domain or a nonconvex objective function.
Besides linear functions, which are convex, to ensure convexity one should have a positive definite second-order Hessian matrix, although there are other cases of convexity. Discrete optimization, given its nature, is nonconvex.
Matlab functions are of passable quality. They work for relatively simple problems.
I forgot to tell you a MATLAB function: lsqnonlin, a nonlinear least-squares optimizer that works well on convex problems. It is unconstrained, fairly basic, and pretty easy to use. The previous post is how I would classify them too: "passable quality".
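A minimal usage sketch (Optimization Toolbox; the residual function below is purely hypothetical):

% lsqnonlin minimizes the sum of squares of the residuals returned by fun.
fun = @(x) [x(1) - 2; 10*(x(2) - x(1)^2)];   % hypothetical residual vector
x0 = [0; 0];                                 % starting guess
x = lsqnonlin(fun, x0)                       % expected solution near [2; 4]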
Basically, we cannot always divide problems neatly into convex and non-convex optimization problems. But in MCDM, or in any decision-making problem, when checking whether a feasible answer is local or global, it is necessary to determine the convexity of the feasible set.
It should be noticed that not all nonlinear optimization problems are non-convex. There are some nonlinear convex functions which can be optimized to find global solutions, such as convex quadratic optimization. The main difference is that in non-convex optimization, finding a global solution is not guaranteed.
In structural design optimization, I believe most of the problems I come across are inherently non-convex. We probably cannot tell whether a complex design problem is convex or not without much computational effort. I would err on the conservative side and assume non-convexity. However, that is not really a major problem, since most, if not all, solution methods that we use actually first reduce the problem into a series of convex problems, if possible (and it usually is): for example, linearization by means of gradients, least-squares quadratic design-surface fitting, and so forth. There then follows a series of adjustments and re-solutions depending on where the previous solution in the series brought you. If you had a nice linear, and therefore convex, problem, obviously one set of iterations using a linear programming method would give the final solution.
One point where I should have clarified further: quadratic surface fitting will not necessarily make your problem convex, but it will make it easier to deal with. You can then, if you wish, solve this as a series of linearized problems, and indeed this is a popular approach.
A convex optimization problem is a problem that can be formulated as follows: Minimize a convex function (or maximize a concave function, which is the same) subject to constraints that form a convex set. Optimization problems that can be formulated as convex can be solved exactly and efficiently in polynomial time (polynomial in the number of input data needed to specify the problem) almost always under very general regularity conditions that are true for all practical purposes. There is a vast literature on algorithms and methodologies to treat convex optimization problems of very many kinds. Roughly speaking, the reason for the power of convexity is the property that any local solution (minimum or maximum) is also a global solution.
On the other hand, non-convex problems do not have such a nice property. Non-convex problems are in general at least NP-hard, so it is unlikely that a polynomial-time algorithm exists for general instances. These problems are usually treated by heuristic algorithms and methods known as Global Optimization. In most cases approximate solutions are sought. There is also a vast literature on non-convex optimization covering a large variety of cases, but general, exact, efficient algorithms do not exist.
A fool-proof way to solve optimization problems, convex or not, with or without constraints, is to use interval arithmetic. There is a MATLAB toolbox operating with intervals, called INTLAB, by Prof. Siegfried Rump, see: http://www.ti3.tuhh.de/rump/intlab/ (older, free versions are available, too).
Caution: the method has exponential worst-case cost (the underlying problem is NP-hard), therefore it is practical when the number of unknowns is fairly small, say fewer than 10. Nevertheless, even cases with more than 70 unknowns have been reported as successfully solved.
@Marek, I didn't find any free version of INTLAB; could you provide us with a proper link? Thanks.
@Demetris: I'm sorry to acknowledge that older, free versions of the package INTLAB are no longer easily available. Perhaps personal contact with Prof. Rump, its creator and maintainer could be of some help? He can be reached via his personal web page http://www.ti3.tu-harburg.de/rump/
In convex optimization, the minimum, if it exists, is unique, and all algorithms based on the gradient of the cost to be minimized can be used. In non-convex optimization, by contrast, there may be many local minima, so finding the global minimum is very difficult. The most used algorithms are metaheuristics such as simulated annealing.
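For example, a minimal sketch with MATLAB's simulated annealing routine (Global Optimization Toolbox; the objective is a hypothetical non-convex function):

% simulannealbnd performs simulated annealing within the given bounds.
obj = @(x) x(1)^2 + x(2)^2 + 10*sin(3*x(1));   % hypothetical multi-modal objective
x0 = [2, 2];
x = simulannealbnd(obj, x0, [-5 -5], [5 5])    % no guarantee of the global minimum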
The Hessian matrix is how you know whether a function is convex or not. If it is positive definite, the function is convex.
Convex optimization has a global minimizer; Newton's method needs only one step of iteration to get the global minimizer.
Non-convex optimization, on the other hand, might not even have a local minimizer. Take a paraboloid, for example: Newton's method will give us the saddle point instead of a minimizer.
Cherry, our lives would be much less miserable if Newton's method converged in a single step. Next, only some convex problems have a (twice!) differentiable objective function. And finally: where is the paraboloid's saddle point?
@Cherry, there exist functions that are convex but not even once differentiable, so the Hessian approach is not a characterization of convexity.
A convex optimization problem refers to an optimization problem with only one extremum point (minimum/maximum), whereas non-convex optimization problems have more than one extremum point.
You can find the global optimum of a convex optimization problem, but if your problem is non-convex, your solution procedure may end up at a local optimum, which is just one of the extremum points on the curve.
Note that if you have a linear objective function and all equality and inequality constraints are linear, the optimization problem is definitely convex. But if you have a nonlinear objective function, nonlinear inequality constraints, and linear equality constraints, the problem can be convex or non-convex (you can do some analysis to determine which). Note that if you have even one nonlinear equality constraint, independent of the form of the other constraints and the objective function, the problem is definitely a non-convex optimization problem.
If you're dealing with "black-box" type simulations, there is generally no way to prove that your function is convex. You can do partial sampling of the design space, which may let you prove that the function is non-convex, but not being able to prove it non-convex doesn't necessarily mean it's a convex function (there could be a non-convex region that you haven't caught in your sampling).
As for dealing with optimization of non-convex problems: there's a lot to try, none of it guaranteed (unless you are dealing with a small number of variables that lets you do an exhaustive search).
If you're already using MATLAB's fmincon() function, the first thing I'd try is repeating it from multiple different starting points; if that isn't good enough, you'll want to move to metaheuristics or other specialized tools.
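A sketch of that multi-start idea (Optimization Toolbox; the objective and bounds are hypothetical):

% Repeat fmincon from random starting points and keep the best result.
obj = @(x) (x(1)^2 - x(2))^2 + sin(3*x(1));   % hypothetical non-convex objective
lb = [-5; -5];  ub = [5; 5];
best = inf;
for k = 1:20
    x0 = lb + (ub - lb).*rand(2, 1);          % random start inside the bounds
    [xk, fk] = fmincon(obj, x0, [], [], [], [], lb, ub);
    if fk < best
        best = fk;  xbest = xk;
    end
end
xbest, best                                   % best local solution found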
Can anyone help me? Is the following problem a convex problem?
Objective: Min_P { Max { Sum_x P_{x,t} } }, with all linear constraints. P_{x,t} is a variable over a range of x and t, so we have T*X variables, where T is the number of values of t and X is the number of values of x; the Sum operator is over x and the Min operator is over P.
My main problem is the Min and Max operators in the objective!
Plenty of good answers have been given already. However, in response to the sub-question about MATLAB solvers (or indeed optimization algorithms in any language), it's worth being optimistic and saying that general-purpose gradient-type methods will often work quite well on differentiable functions which are non-convex. The non-convex regions need not contain any stationary points and may be traversed, albeit slowly, by a gradient algorithm on its way to a convex region where a minimum exists. A particularly useful group of techniques can be based on following an approximation to the continuous steepest-descent path, which is the solution to dx/dt = -g(x). There is a strong overlap between these methods and the so-called trust-region methods, which seek the least value of a quadratic model function subject to a limit on step size.
If you want to minimize a real-world non-convex function you may be able to do it quite efficiently if you have a good a priori estimate of where the optimum may be!
min(a,b) = (a+b)/2 - |a-b|/2
max(a,b) = (a+b)/2 + |a-b|/2
Doesn't help much, uh?
Smooth convex functions on a compact domain are guaranteed to attain a minimum. A convex function with continuous second partials is characterized by a positive semidefinite Hessian matrix. See Danchick, "Accurate numerical partials with application..." in Applied Mathematics and Computation.
Dear Behera,
a good source of information for an introduction to the problems of numerical minimization is the book "Numerical Recipes":
http://www.nr.com/
There exist many equivalent definitions of what a convex function is. A twice-differentiable function is convex, for example, if its Hessian is positive semidefinite. Linear programming, least squares, quadratic and semidefinite programming are convex problems. The standard reference is "Convex Optimization" by Stephen Boyd and Lieven Vandenberghe.
In linear programming: in n-dimensional geometry, if all the vertices of a simplex belong to the same half-space for each of its hyperplanes (the boundaries, which are the constraint equations), then the simplex is a convex body. A simple numerical test is required. The three-dimensional case, corresponding to a polyhedron defined by the planes of its facets, is an illustrative problem.
A twice-differentiable function f of a single variable defined on the interval I is
concave if and only if f ''(x) ≤ 0 for all x in the interior of I
convex if and only if f ''(x) ≥ 0 for all x in the interior of I.
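A small illustration of this second-derivative test in MATLAB (assuming the Symbolic Math Toolbox; the function is just an illustrative choice):

% Second-derivative test for convexity of a one-variable function.
syms x
f = exp(x) + x^2;                % illustrative function
d2f = simplify(diff(f, x, 2))    % gives exp(x) + 2 > 0 for all x, so f is convex on R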
Read the source link...
A non-convex function is either linear [affine] or concave. Please note f''(x) = 0 at the inflection point, a situation where f(x) can be concavo-convex. Also research the Kuhn-Tucker saddle-point theorem, which produces a simultaneous min-max optimum point, but with a 90-degree plane rotation.
Hope this gives some insight. Cheers!
http://www.economics.utoronto.ca/osborne/MathTutorial/
In reality there are only convex functions with local concavities. If the function is a closed curve, there is an ambiguous definition of the concavity.
In general, for the one-dimensional problem, f''(x) < 0 at a stationary point corresponds to a maximization case (i.e. the point maximises the function) and f''(x) > 0 corresponds to a minimization case (i.e. the point minimises the function).
For the N-dimensional case with N variables, the particular points of the analyzed function F(x1, ..., xn), found with {Gradient F} = {0}, correspond to:
point(s) which maximise the function if the Hessian matrix is negative definite;
point(s) which minimise the function if the Hessian matrix is positive definite.
We can find several maxima and several minima verifying the above definition, but only one of them is the optimal point corresponding to the optimization case.
The Hessian is the matrix of second derivatives of F. There are also other particular points, such as saddle points.
Discussion should of course include cases where the Hessian is positive (negative) semi-definite.
Any local minimum is a stationary point with a positive semi-definite Hessian. The local minimum with the lowest objective function value is then distinguished as the global minimum.
The tantalising fact is that, in general, there are no computable conditions at the point itself which enable us to identify the global minimum. In haiku form we can characterise a global minimum by saying
It's the only place
to be - if you can get there.
If not, you won't know
Hello Michael Bartholomew-Biggs, if the Hessian matrix is neither positive definite nor negative definite, the corresponding point is a particular one, like a saddle point (the seat on a horse), which is neither a maximum nor a minimum: in the corresponding plotted area it shows a maximum in one view and a minimum in the other view. I have calculated several optimization cases and I have found that for each local minimum, and for the optimal (global) one, the matrix is always positive definite in minimization, but the value of the corresponding function in each case is different: this is a trivial conclusion. Other particular points are inflexion points.
Convexity ensures that the optimal point you are searching for (maximum or minimum) is a global point, not just a local point.
Of course, but speaking of a maximum or minimum point is loose wording, since it is the function that attains its maximum or minimum at the corresponding point.
Sorry, but some answers exhibit serious problems in understanding.
- "In reality there are only Convex Functions with Local Concavities." If there is one "local concavity" (how do you define this?) the function cannot be convex.
- "A non-convex function is either linear [affine] or concave." An affine functiuon is convex AND concave.
The definition of convexity is geometric: f is called convex if its epigraph is a convex set. Epigraph denotes the set "above" the graph, i.e. the set of all (x,a) from R^(n+1) with a >= f(x).
If f is differentiable, a sufficient condition is: the graph lies above the tangential hyperplane.
If f is twice differentiable, f is convex, if the Hessian H(x) is positive semidefinite at each x.
Positive definiteness of H(x) everywhere gives strict convexity and thus guarantees a unique local (and at the same time global) minimum.
If f is convex, a local minimum is a global one.
Positive definiteness is a sufficient and not a necessary condition. Think of
f:R^2->R, f(x,y)=x^2
This function is convex, its Hessian positive semidefinite. f is constant in y for each fixed x, and the set of all minimizers is of the form (0,y) with y arbitrary.
But the original question was just how to solve the problems in matlab, so I think, theory was not really needed...
Btw., a convex function from R^n to R is continuous on the relative interior of its domain and is differentiable almost everywhere (see e.g. R.T. Rockafellar, "Convex Analysis").
It has a generalized derivative (the subdifferential) everywhere on the relative interior of its domain.
So these functions are "nice".
For optimization, the nice property is that there are no isolated local minima (the set of minimizers is convex), so an algorithm cannot get stuck at a local minimum you are not interested in, or at a saddle point.
A convex optimization problem needs not only a convex objective function, but also a convex set of feasible points. Above there was only a discussion of the objective; if there are constraints, their properties are important, too.
See e.g. Bazaraa, Sherali, Shetty, "Nonlinear Programming: Theory and Algorithms".
I agree with Martin Carlsen.
- In my opinion, a convex optimization problem is an optimization problem whose objective and constraint functions are convex.
- Definitions of convex functions can be found in the book by R.T. Rockafellar, "Convex Analysis", or in the recommended book by S. Boyd and L. Vandenberghe, "Convex Optimization".
- Geometrically, a convex function f, mapping a vector x in R^n into R, is a function satisfying the condition that the chord joining the points (x, f(x)) and (y, f(y)) always lies above the graph of f between x and y; that is, f(a x + b y) <= a f(x) + b f(y) for all x, y and all a, b >= 0 with a + b = 1.
Much has been said. The problem with nonconvex optimization is that the global optimum is, in general, only guaranteed by algorithms of exponential (non-polynomial) worst-case complexity, as opposed to convex optimization, which converges to the global optimum using polynomial-time algorithms. Sometimes the local optimum is also the isolated global optimum, as I published long ago in Computers and Structures. Rockafellar pseudo-function optimization may lead to saddle points in the case of bilinear (nonconvex) domains. The best way to find out the domain characteristics is to look at the bordered Hessian. Even if it points to a nonconvex domain, if the objective function is convex one may use a convex optimization algorithm and a multi-start procedure.
In convex optimization, there are only global minima. Also, if the objective function is differentiable, one can use algorithms of gradient type. In non-convex optimization, by contrast, even if the objective function is very regular, it is very difficult to find global minima. In order to find a global minimum, we resort to metaheuristic or genetic algorithms. These are convergent, but have the drawback of slow convergence.
I agree with Dr. Mohammed Lamine Moussaoui. For a non-convex function, finding the global solution is very difficult, because the search may end at a particular point, such as a saddle point (the seat on a horse), which is neither a maximum nor a minimum in the corresponding plotted area. To find a global solution, heuristic optimization algorithms such as PSO, GSA, ABC, etc. can be used.
For a convex optimization problem, we have only one local optimal solution which is also the global optimal solution. For a non-convex optimization problem, there are many local optimal solutions.
Be careful with the answer of Hadi Fanaee Tork: uniqueness of the local (and global) minimizer requires strict convexity. Plain convexity also allows infinitely many minimizers forming a convex set. Unrestricted one-dimensional example:
f(x) = max{-x, x, 1}
f is convex (f is not differentiable, so use the geometric definition of convexity, i.e. the epigraph is convex) and the set of minimizers is the interval [-1,1], all with the same objective value 1.
I agree partially with Luís M. C. Simões. I suppose he is speaking about classic optimization techniques such as linear programming and nonlinear programming.
Convex problems have only one local optimum point, which is also the global optimum point. Whether that point is a maximum or a minimum is determined by the second-order derivative of the function. Non-convex problems, on the other hand, may have multiple optimum points.
Basically, for a non-convex problem there is no guarantee of getting the global optimal solution. So, depending on the algorithm, you could get different local optima; even running the same algorithm several times might yield different results per run (using random initialization).
One popular definition of a convex function f:S \to R on a convex set S \subset R^n is
defined as follows.
$$ \forall x,y \in S, \forall \lambda \in [0,1],
f((1-\lambda)x+\lambda y) \le (1-\lambda)f(x)+\lambda f(y). $$
If we define a non-convex function as the negation of the above definition, then the following definition is obtained.
$$ \exists x,y \in S, \exists \lambda \in [0,1],
f((1-\lambda)x+\lambda y) > (1-\lambda)f(x)+\lambda f(y). $$
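One can also use this definition directly as a one-sided numerical test: random sampling can disprove convexity, but it can never prove it. A minimal MATLAB sketch with a hypothetical test function:

% Search for a violation of the convexity inequality on [-2, 2].
f = @(t) t.^4 - 2*t.^2;          % hypothetical (non-convex) test function
violated = false;
for k = 1:1e4
    x = 4*rand - 2;  y = 4*rand - 2;  lam = rand;
    if f((1-lam)*x + lam*y) > (1-lam)*f(x) + lam*f(y) + 1e-12
        violated = true;  break
    end
end
violated    % true means f is certainly not convex on [-2, 2]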
How is strict convexity different from simple convexity in terms of the presence of a unique optimum?
Strict convexity requires the defining inequality to hold with < instead of ≤ (for x ≠ y and \lambda strictly between 0 and 1).
According to my thinking, be it convex or strictly convex, there can be only one minimum value, but the point at which the minimum occurs is unique in the strictly convex case, whereas the minimum can occur at various points if the function is merely convex. The minimum value itself remains the same. Please correct me if I am wrong.
If the minimum for you is the function value, then you are of course right.
But if that notion referred to the global minimal function value (if it exists), then this thinking is correct for any function, be it convex or non-convex.
The only special thing for convex functions thinking in these terms is that every local minimum is also a global one.
I was speaking about the points, where the minimum occurs.
The results from above refer to the minimum point(s) of a convex function f. If f is differentiable, then there exists such a minimum point x_min in the interior of the domain of definition of f if and only if f ' (x_min) = 0. For the minimum point(s) situated on the boundary one applies Karush-Kuhn-Tucker theorem. A main very useful related result is the so-called maximum principle for convex functions. The classical variant is: let f be continuous and convex real function defined on a compact convex subset K of a (Hausdorff) locally convex space X. Then there exists at least an extreme point of K at which the maximum of f on K is attained. The result can be generalized to closed convex subsets A (instead of K), which have extreme points, and on which the maximum of f is attained. The idea of the proof is to apply Krein-Milman type results. For the latter results see R. B. Holmes, "Geometric Functional Analysis and its Applications".
If f is a strictly convex function, the function has a unique local minimum. However, in the case where f(x) = max(x^2 - 1, 0), f is a continuous convex function, and every point in [-1,1] is a local minimum. Moreover, every point in (-1,1) is also a local maximum of f.
I came across the following statement when I was going through a research paper. "Since quasi-concave functions have more than one local maximum, local maximum does not always guarantee the global maximum. Therefore, standard convex optimization algorithms cannot be directly used" . Are these statements true? Please explain.
You should look up the definitions in Wikipedia before asking questions here:
https://en.wikipedia.org/wiki/Quasiconvex_function
and then you'll see that the requirement for quasiconcavity is much weaker than the one for concavity:
f(a*x + (1-a)*y) >= min(f(x), f(y)) for all x, y in the (convex) domain of f and all a with 0 <= a <= 1.
These terminologies can leave you even more confused than before. Convex optimization is regarded as having a smooth output, whereas non-convex optimization has a non-smooth output. For an energy/convex function the output doesn't vary too much, while for a non-convex function the output can vary significantly, such as a spike in a graph. As an example, a sinusoidal wave would be a convex function, whereas a sound wave would be a non-convex function.
Good god, Ray Leroy Khuboni,
it is clear from your opinion that you are an engineer and haven't understood the concept of convexity, and of course cannot assess what its consequences are in optimization.
A sine function over the reals is by no means convex or concave. Do you remember the discussion of functions from school? One of the notions there was inflection (turning) points, where local convexity and concavity of the function interchange.
You speak of smooth and non-smooth. These are well-defined mathematical terms. A sound, e.g. one produced by the strings of a guitar without the player changing the positions of his fingers, is clearly a smooth curve, unless one of the strings breaks.
Ralf, easy with those opinions regarding engineers now :) . Some of us do happen to know about convexity and can actually use such concepts in optimization, believe it or not. All I know about mathematicians, on my side, is that there are good ones and that there are quite mediocre ones, but few are actually familiar with product design.
Hi Erdal,
it was by no means my intent to insult all engineers. I give math classes for engineers, too - and there are really good students among them. Of course also the opposite, like in every profession...
But that answer was so strange that I could not just let it pass.
The danger is that someone not so familiar with the concepts really believes those ideas are correct :-(
One mathematical definition of a convex function f(x) is
(1-b)f(x) + bf(y) >= f((1-b)x + by), for all b in [0,1] and for all x and y.
Therefore, we get the following definition of a non-convex function by taking the negation of the above definition:
(1-b)f(x) + bf(y) < f((1-b)x + by), for some b in [0,1] and some x and y.
According to these definitions, f(x)=|x|, f(x)=x^2, f(x)=max{0, x^2-1} are convex functions, and f(x)=sqrt(|x|), f(x)=max{0, sqrt(|x|)-1}, and f(x)=sin x are non-convex functions.
Moreover, other (wider or narrower, etc.) definitions of convex functions are known, e.g. a quasi-convex function f(x):
max{f(x), f(y)} >= f((1-b)x + by), for all b in [0,1] and for all x and y.
ref.) J. Ponstein: "Seven Kinds of Convexity", SIAM Rev., 9(1), 115-119, 1967.
So it seems that the difference between the definitions of convex functions and non-convex functions is not a sharp classification.
O.k., there is quasiconvexity, which is a much weaker notion than convexity (there are strict convexity and strong convexity, too, which are stricter).
Each of these notions was formulated to prove theorems which need suppositions of different strength.
Quasiconvexity is sufficient as a presupposition if its equivalent characterization is really the one needed: every lower level set of f is convex. For f:R->R, monotonicity is a sufficient criterion, so quasiconvexity is really MUCH weaker than convexity.
This has nothing to do with the convexity properties needed to prove local = global optimum or even that the minimum point is unique.
If you look at the top of the page, you'll see: The question concerned the difference between convex and nonconvex optimization problems. A convex optimization problem is a minimization problem with a convex objective (or alternatively a maximization problem with concave objective) and a convex constraint set.
This definition is clear and not "not a sharp classification".
A quasi-convex function f(x) is defined as one whose lower level sets L_f(a) = { x | f(x) <= a } are convex sets for every a.
In convex optimization we minimize a convex function over a convex set. For details, refer to the example above by @Hideo Kanemistsu.
Hi Muhammad Furqan,
you are right; that means that if the problem is given in the form
min { f(x) | g_j(x) <= 0, j = 1, ..., m } with convex f and convex g_j, then the feasible set is convex and the problem is a convex optimization problem.
Convex optimization problems have a convex objective function and convex constraints. Linear programming and convex quadratic programming are special cases of convex optimization. Boyd and Vandenberghe's book gives some nice methods to determine whether a problem is convex or not.
Convex Optimization – Boyd and Vandenberghe, Cambridge University Press
If your problem is a linear program, you can use my MATLAB code CurveLP, which is available for free in
Yaguang Yang, CurveLP-A MATLAB implementation of an infeasible interior-point algorithm for linear programming, Numerical Algorithms, April 2017, Volume 74, Issue 4, pp 967-996.
If your problem is a convex quadratic program, you can use my method
Yaguang Yang, A polynomial arc-search interior-point algorithm for convex quadratic programming, European Journal of Operational Research, Volume 215, Issue 1, 16 November 2011, Pages 25-38.
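Independently of the arc-search method above, MATLAB's Optimization Toolbox also ships a convex QP solver, quadprog; a minimal sketch with made-up data:

% Minimize 0.5*x'*H*x + f'*x subject to A*x <= b (H positive definite => convex QP).
H = [2 0; 0 4];   f = [-1; -2];
A = [1 1];        b = 1;
x = quadprog(H, f, A, b)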
A smooth convex objective function which has a critical (stationary) point in the interior of the (convex) set of feasible solutions has a global minimum at that point. More generally, for non-smooth convex functions, if at an interior point the subdifferential contains the zero subgradient, then that point is a global minimum point. If the minimum is attained on the boundary, then the Karush-Kuhn-Tucker theorem can be used. For the maximum point, one usually applies the maximum principle, which is valid under some assumptions: the maximum is attained at an extreme point of the convex, compact domain of definition (or, more generally, of a convex, finite-dimensional, closed subset having extreme points, plus conditions on the behavior of the objective function on its domain of definition).
The results mentioned above are no longer valid for non-convex functions. Therefore, optimization of the latter functions requires other techniques and is more difficult. Among these methods, when looking for a local interior optimum of a C^2 function, we first find the critical points, then study the type of the second derivative at these points (positive definite, negative definite, etc.). One can also use the type of the second derivative on a whole neighborhood (such as a positive semidefinite quadratic form, etc.). If the second derivative vanishes at a critical point, one looks for the first non-null derivative (of lowest order) at that point, then applies the Taylor formula. Obviously, there are polynomials which have a unique critical point at which the second derivative vanishes, that point not being a local optimum point.
For ease of identification, a convex function of one variable is one whose second derivative is nonnegative over the interval of interest (equivalently, whose first derivative is non-decreasing), while a concave function is one whose second derivative is nonpositive there (its first derivative is non-increasing).
Ofer Aluf,
your question is a bit confusing, since every strictly convex function is convex and every convex function is quasi-convex. So one example suffices to answer the three parts: consider e.g. x^2 over the interval [-1,1]. x^2 is strictly convex, so the minimum is attained in just one point.
More interesting: an example where this is NOT the case for convex functions.
Btw. quasi-convex functions need not be continuous on the interior of their domain while convex ones are. Quasi-convexity is a much weaker requirement than convexity, e.g. every monotone function f:R->R is quasi-convex and quasi-concave (->verify).
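A short sketch for that "(->verify)": if f is monotone and λ ∈ [0,1], then λx + (1-λ)y lies between x and y, so f(λx + (1-λ)y) lies between f(x) and f(y); hence
min{f(x), f(y)} ≤ f(λx + (1-λ)y) ≤ max{f(x), f(y)},
where the right inequality is quasi-convexity and the left one quasi-concavity.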
In your task the set Omega is not necessarily convex, since it is the union of two convex, compact sets. In that case different minima are possible for (1), but you'll need an example in R^2 (for Omega to be nonconvex). Consider e.g. f(x) = x_1^2 + x_2^2 (the squared distance from the origin) and Omega_1, Omega_2 two intersecting circles touching the unit circle.
Think about a proof why (3) is impossible.
Btw. ResearchGate is not intended to do your homeworks, you should think a bit for yourself...
Have a look at some book about Convex Analysis / Convex Optimization regarding conditions for convexity and properties of convex functions, e.g.
R.T. Rockafellar "Convex Analysis"
Bazaraa/Sherali/Shetty "Nonlinear Programming: Theory and Algorithms"
These books deal with R^n, not C - but C can be viewed as R^2 via z -> (Re z, Im z).
Thus the results for R^n can be applied.
Insightful thread!
Please, @Ralf Gollmer, is there any text you can recommend that deals with optimization/algorithms and explains this thread further?
Thanks.
Michael
Strictly convex functions on the real space have at most one minimum, which is a global minimum. This is the reason why the optimization process for a convex function is easier than for non-convex ones. In practice, when the problem is solved numerically, a convex function can appear to present several local minima located in the neighborhood of the global one.
To determine whether a function is convex or not, you can either use the general definitions:
Convex set:
Let X be a subset of a vector space E. X is called a convex set if and only if: $\forall$ x, y $\in$ X, $\forall$ $\alpha$ $\in$ [0,1], $(1-\alpha) x + \alpha y \in X$. In other words, every point of a segment with endpoints in X is contained in X.
Convex Function:
Let f be a $C^1$ function of n variables defined on a convex set E. f is convex on E if and only if: \\
$\forall (\textbf{X},\textbf{Y}) \in E \times E$ \\
$f(\textbf{Y})-f(\textbf{X}) \geq \nabla f(\textbf{X}) \cdot (\textbf{Y}-\textbf{X}) $
Or check whether the Hessian matrix is positive semidefinite or not.
Check the book: Dattorro, "Convex Optimization & Euclidean Distance Geometry".
Hi Florian Danvin,
your 'definition' of a convex function is an 'iff' theorem that is correct for differentiable f, but convexity of a function can be defined without differentiability. So e.g. f:R^n->R, f(x)=||x||_2 (the Euclidean norm - the absolute value of x for n=1) is convex, but not differentiable at the origin.
So the most general definition can be formulated as in R.T. Rockafellar, "Convex Analysis":
f:R^n->R is convex iff epi f = {(x,\mu) \in R^{n+1} | \mu \geq f(x)} is a convex set.
It turns out that convex functions are continuous on the relative interior of their effective domain and differentiable almost everywhere on their domain.
The set of minimizers is convex.
If you observe different local minima, this is possible - then they all belong to the minimizer set. If you observe results with different 'minimal' objective values, either the results are not really minima or the problem is not a convex one (i.e. the objective is not convex or the constraint set is not convex).
Best regards
Ralf
Hi Ralf Gollmer,
Yes you're right about my assumption on assuming f is C1, and your definition is indeed more general.
However, the reason why I was mentioning different minimal objectives in numerics, even with a strictly convex function, just comes from the fact that you can only converge up to machine precision. So, purely mathematically speaking, there is (numerically) no uniqueness of the minimum in this case. So they can all belong to a minimizer set, I guess.
Regards,
Florian
A convex optimization problem has a convex objective function and a convex constraint set. Any optimization problem that does not meet these conditions is a non-convex optimization problem.
As far as I know, MATLAB does not have a command to solve a general convex optimization problem, but it has a command to solve a special case of convex optimization: linear programming (linprog).
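A minimal linprog sketch (Optimization Toolbox; the data are hypothetical):

% Minimize f'*x subject to A*x <= b and x >= 0.
f = [-1; -2];
A = [1 1; 1 -1];   b = [4; 2];
lb = [0; 0];
x = linprog(f, A, b, [], [], lb)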
For special convex optimization problems, including linear programming, convex quadratic programming, and semidefinite programming, you may want to read my new book "Arc-Search Techniques for Interior-Point Methods", which is available from the Amazon website:
https://www.amazon.com/Arc-Search-Techniques-Interior-Point-Methods-Yaguang/dp/0367487284
Jin, here is the MATLAB description for fmincon:
Find minimum of constrained nonlinear multivariable function.
Although there is no detailed description of the algorithm, I believe that it is designed for general (non-convex) nonlinear optimization. But it can also be used to solve linear optimization problems (see the options).
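For completeness, a minimal fmincon sketch on a convex problem (Optimization Toolbox; the data are made up):

% Minimize a convex quadratic subject to one linear inequality constraint.
obj = @(x) 0.5*x'*[2 0; 0 4]*x + [-1; -2]'*x;
x0 = zeros(2, 1);
A = [1 1];   b = 1;
x = fmincon(obj, x0, A, b)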
Interior-point methods are widely used for convex optimization. Three special convex optimization problems (linear programming, convex quadratic programming, and semidefinite programming) admit polynomial-time interior-point algorithms. My new book (Arc-Search Techniques for Interior-Point Methods) discusses interior-point methods for these special convex optimization problems. The table of contents can be found on ResearchGate:
https://www.researchgate.net/project/Arc-search-techniques-for-interior-point-methods
The book can be ordered from Amazon website
https://www.amazon.com/Arc-Search-Techniques-Interior-Point-Methods-Yaguang/dp/0367487284
To my knowledge, this problem has no exact analytical solution in general; convexity is also hard to recognize. You can use some methods to identify whether a function is convex or not:
1. Using the definition of convexity, adding new variables, ...
2. Using its derivative or Hessian properties
3. Using geometric convexity properties: combinations of convex functions, ...
4. By intuition 😁, or try graphing it
Regards