Convex programming is a wide-ranging field, and many different methods are used to solve each particular class of convex problems. After so many years of implementation, do you really believe that any unsolved problems still exist?
The theory of convex analysis is very mature, but algorithms for discrete convex problems (see the book "Discrete Convex Analysis" by Kazuo Murota) are still not as mature as their continuous counterparts. Plus, there is always the possibility (from a theoretical perspective) of discovering new tests to verify whether a given model is convex or not (Boyd has done a lot of work on this).
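As an aside, this line of work on automated convexity verification can be tried out directly. Here is a minimal sketch, assuming the CVXPY package, of how its disciplined convex programming (DCP) ruleset certifies, or fails to certify, convexity; the sqrt(x^2 + 1) example is the classic one from the DCP literature:

```python
# Minimal sketch of DCP-style convexity verification (assumes CVXPY installed).
import cvxpy as cp

x = cp.Variable()

# sqrt(x^2 + 1) is convex, but the DCP composition rules cannot certify
# sqrt (concave, nondecreasing) applied to a convex argument:
uncertified = cp.sqrt(cp.square(x) + 1)
print(uncertified.curvature)       # UNKNOWN

# The same function rewritten as a 2-norm of an affine expression
# is recognized immediately:
certified = cp.norm(cp.hstack([x, 1]), 2)
print(certified.curvature)         # CONVEX
print(certified.is_dcp())          # True
```

The point is that such a test is syntactic, not semantic: it certifies convexity from the way the expression is composed, which is exactly why new and sharper tests remain an open research direction.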
I believe there are still open issues in the area of decomposition, for example automatically discovering special structure (e.g. blocks) in the problem to help with parallelization and so on. Also, while Dantzig's decomposition method was (re-)studied extensively from an implementation point of view 10-15 years ago, I don't think any studies are available on the performance of such methods on clusters of many-core servers, i.e. on how best to extract maximum performance from heterogeneous clusters of shared-memory multi-core machines with fast interconnects...
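To illustrate the "automatically discovering block structure" idea in its simplest form, here is a toy sketch (assuming SciPy; not any production method) that finds perfectly separable blocks in a constraint matrix via connected components of its row/column incidence graph. Real decomposition codes use far more sophisticated (hyper)graph partitioners:

```python
# Toy sketch: detect independent blocks in a sparse constraint matrix.
import numpy as np
from scipy.sparse import coo_matrix, csr_matrix, bmat
from scipy.sparse.csgraph import connected_components

def find_blocks(A):
    """Group rows and columns of sparse matrix A into independent blocks."""
    m, n = A.shape
    A = A.tocoo()
    # Bipartite graph: node i is row i, node m+j is column j,
    # with an edge wherever A[i, j] != 0.
    graph = coo_matrix(
        (np.ones(A.nnz), (A.row, m + A.col)), shape=(m + n, m + n)
    )
    k, labels = connected_components(graph, directed=False)
    return [(np.where(labels[:m] == c)[0],    # rows of block c
             np.where(labels[m:] == c)[0])    # columns of block c
            for c in range(k)]

# Two decoupled 2x2 blocks assembled into one 4x4 system:
A = bmat([[csr_matrix(np.ones((2, 2))), None],
          [None, csr_matrix(np.ones((2, 2)))]])
for rows, cols in find_blocks(A):
    print("block: rows", rows, "cols", cols)
```

Each detected block can then be handed to a separate worker, which is the trivial case; the open question is doing something comparably automatic for nearly-decomposable structure on modern heterogeneous clusters.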
Of course, many convex optimization problems can be solved by totally new nature-inspired methods, like cuckoo search, particle swarm optimization, genetic algorithms, and so on. Probably this way of optimization will soon overtake the mainstream (i.e. calculus-based) approach.
Dimitri, I have worked with many meta-heuristic methods such as the ones you mention (and many others too, e.g. Simulated Annealing, DDE, DE, Tabu Search, Path Relinking, Scatter Search, VNS, Nested Partitions, etc.). My experience is that if your problem is convex, these general-purpose meta-heuristics will not come close to the algorithms for convex programming, which always take advantage of calculus-based theorems. Try it out for yourself: for a convex function of many variables (I especially suggest the Trid function; google "Trid Function"; it is convex), try a Newton-based method, a trust-region method, or the BFGS method, and see how much faster it converges to the global optimum than any of the meta-heuristics you mentioned. I attended the Mathematical Programming annual meeting in Ann Arbor, MI, in the mid-nineties, where John Holland himself, as the keynote speaker, said he was convinced that GAs (and most other global-optimization meta-heuristics) are very good at getting you to the vicinity of the global (or a near-global) optimum; but to actually get there, you need a local-search method, which in NLP translates to a Newton-based or trust-region method started from the best point found by the GA (or EA, or whatever...).
Meta-heuristics, if tuned properly, are very good at finding a "good" region of the problem space; after that, you need some domain knowledge to get to the optimum. In the case of convex programming, this knowledge corresponds to the convexity properties of your problem, and in fact, for such problems you don't need the meta-heuristics at all.
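The experiment Ioannis suggests is easy to reproduce. A minimal sketch, assuming SciPy, comparing BFGS against a population-based meta-heuristic (differential evolution) on the convex Trid function; the closed-form optimum f* = -d(d+4)(d-1)/6 is the known value for this test function:

```python
# BFGS vs. differential evolution on the convex Trid function (assumes SciPy).
import numpy as np
from scipy.optimize import minimize, differential_evolution

def trid(x):
    # Trid function: convex; global minimum at x_i = i*(d+1-i).
    return np.sum((x - 1.0) ** 2) - np.sum(x[1:] * x[:-1])

d = 10
x0 = np.zeros(d)

bfgs = minimize(trid, x0, method="BFGS")
print("BFGS: f* = %.6f, function evals = %d" % (bfgs.fun, bfgs.nfev))

# Differential evolution on the same problem; bounds of +-d^2 contain
# the optimizer, whose coordinates never exceed d^2.
de = differential_evolution(trid, [(-d * d, d * d)] * d, seed=0)
print("DE:   f* = %.6f, function evals = %d" % (de.fun, de.nfev))

print("known optimum:", -d * (d + 4) * (d - 1) / 6.0)   # -210 for d = 10
```

On a run like this, BFGS typically reaches the optimum in a few dozen function evaluations, while the meta-heuristic needs thousands, which is exactly the gap the calculus-based methods exploit on convex problems.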
Ioannis, what is your opinion of the active-set methodology? I am working with it, and I find the relevant FORTRAN implementations to be very fast. Have you ever used such a method in your research field?
Dimitri, I haven't worked on constrained NLPs for some time now (other than box-constrained NLPs), but when I did, the active-set method indeed had the best performance. For more recent developments, I have heard many good things about the "Feasibility Pump" method (it applies mostly to MIP problems, though).
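For what it's worth, one classic active-set code of exactly the FORTRAN lineage Dimitri mentions is still shipped with SciPy: the Lawson-Hanson NNLS algorithm, which solves min ||Ax - b|| subject to x >= 0 by iteratively updating an active set of bound constraints. A minimal sketch:

```python
# Lawson-Hanson NNLS: a classic active-set method (assumes SciPy/NumPy).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

x, resid = nnls(A, b)
print("solution:", x)     # coordinates pinned at 0 are the active bounds
print("residual:", resid)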
Unresolved problem: does the \epsilon-subdifferential of Rockafellar et al. determine a convex function up to an additive constant, for a fixed \epsilon > 0? For \epsilon = 0 it does :)
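For reference, and to fix notation, the standard definition in question is

\partial_\epsilon f(x) = \{\, g : f(y) \ge f(x) + \langle g, y - x \rangle - \epsilon \ \text{for all } y \,\},

which for \epsilon = 0 reduces to the ordinary subdifferential \partial f(x); knowing \partial f at every point is known to determine a proper convex function up to an additive constant, and the question is whether the same rigidity survives for a fixed \epsilon > 0.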
A large set of open problems is offered by the theory of quasiconvexity, which can be viewed as a continuous limit of the mathematical programming problem with infinitely many linear constraints. In other words, the minimizers - a gradient e = grad u or a field y = curl v - satisfy differential constraints everywhere, curl e = 0 and div y = 0, respectively; these are the limits of the finite difference constraints.
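To make those constraints explicit, they are the classical vector-calculus identities

\operatorname{curl}(\operatorname{grad} u) = 0 \quad \text{and} \quad \operatorname{div}(\operatorname{curl} v) = 0,

so the admissible fields are (locally) the curl-free gradients and the divergence-free curls; in the limit, each finite system of linear constraints is replaced by these pointwise differential relations.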
This monograph contains 70 pages of interesting material, starting on page 2 with conjugacy. I agree with you that it is a good place to start when looking into convex programming.
For a review that brings Murota's introduction up to date with current views of convex programming and of Murota's approach, see:
A. Barvinok, review of Murota's "Discrete Convex Analysis", Bulletin of the Amer. Math. Soc. 41 (2004), no. 3, 395-400.