I need to theoretically prove the convergence rate of my optimization algorithm. Could you suggest some review papers or other material for understanding the concept of a convergence rate, its different types, and the process of deriving one?
Proving the convergence rate from a mathematical point of view, particularly for modern algorithms, is very difficult. But if you pose a specific problem and compare different algorithms on the time they need to reach the optimal solution, you may get your answer.
Similar to what Mohamed has said: convergence is usually highly problem-dependent, so in each case you may need different techniques to tackle the issue.
If you can at least clarify what type of problem you are solving, it would be easier to give a non-generic recommendation.
Hi Felipe, the problem is convex (or strongly convex) optimization with a composite objective (a sum of convex functions plus a regularizer). It is a machine learning problem: training on large data sets.
There are plenty of algorithms for so-called Empirical Risk Minimization (ERM) in machine learning, and convergence proofs are available for these methods.
You can start with these works:
Lin, Lu, and Xiao. An Accelerated Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization.
S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization (SDCA). Journal of Machine Learning Research, 14:567–599, 2013. http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf
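For the composite structure described above (smooth convex loss plus a regularizer), the basic method whose rate is analyzed in this literature is proximal gradient descent. Here is a minimal, hypothetical sketch (not taken from the cited papers; the data and parameters are made up) on a lasso-type objective, where the classical result is an O(1/k) rate on the objective gap with step size 1/L:

```python
import numpy as np

# Composite objective: F(x) = 0.5||Ax - b||^2 + lam*||x||_1
# (smooth convex part + nonsmooth convex regularizer).
rng = np.random.default_rng(1)
m, n, lam = 80, 40, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the smooth gradient

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    r = A @ x - b
    return 0.5 * r @ r + lam * np.abs(x).sum()

# Proximal gradient (ISTA): gradient step on the smooth part,
# prox step on the regularizer.
x = np.zeros(n)
history = [objective(x)]
for _ in range(500):
    x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    history.append(objective(x))

print("initial objective:", history[0], " final objective:", history[-1])
```

With step size 1/L the ISTA objective values are non-increasing, so `history` should decrease monotonically; plotting the objective gap against the iteration counter is a simple way to check an O(1/k) rate empirically before attempting the proof.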
Almost ashamed to admit it: if you simply search for "convergence rate analysis convex optimization" on Google, you will find quite a few papers, recent ones as well. Not a general recommendation, of course, but here and now it appears to work. :-)
Hi, I suggest taking the Steklov integral of the convex function: you will get a convex and smooth function. Adding a strongly convex part, you will get a strongly convex and smooth function. You can then apply Newton's method to optimize such a function; its rate of convergence is superlinear. The result of the optimization is an eps(D)-optimal point, where D is the set over which the Steklov integral is calculated. I have written some articles describing how to use the Steklov integral in optimization.
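A hypothetical one-dimensional illustration of this idea (the parameters h, mu and the starting point are made up): the Steklov average of f(t) = |t| over [x-h, x+h] has the closed form f_h(x) = x²/(2h) + h/2 for |x| ≤ h and |x| otherwise, which is convex and continuously differentiable. Adding the strongly convex term (mu/2)x² gives a smooth, strongly convex function, minimized below with a damped (backtracking) Newton method; near the solution the convergence is superlinear.

```python
import numpy as np

h, mu = 0.1, 1.0   # made-up smoothing radius and strong-convexity constant

def g(x):
    # Steklov-smoothed |x| plus a strongly convex quadratic term
    fh = x * x / (2 * h) + h / 2 if abs(x) <= h else abs(x)
    return fh + 0.5 * mu * x * x

def g1(x):  # first derivative (piecewise)
    fh1 = x / h if abs(x) <= h else np.sign(x)
    return fh1 + mu * x

def g2(x):  # second derivative (piecewise, bounded below by mu)
    return (1.0 / h if abs(x) <= h else 0.0) + mu

def damped_newton(x, tol=1e-12, max_iter=50):
    """Newton's method with Armijo backtracking for global convergence."""
    for k in range(max_iter):
        d = -g1(x) / g2(x)                                  # Newton direction
        t = 1.0
        while g(x + t * d) > g(x) + 0.25 * t * g1(x) * d:   # Armijo condition
            t *= 0.5
        x += t * d
        if abs(g1(x)) < tol:
            return x, k + 1
    return x, max_iter

x_min, iters = damped_newton(2.0)
print("minimizer:", x_min, " iterations:", iters)
```

Since g is even and convex, its minimizer is x = 0; the damped Newton iteration reaches it in a handful of steps, consistent with the fast local rate claimed above. (The damping/line search is my addition for a starting point far from the smoothed region; pure Newton can overshoot there.)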