Randomized first-order methods for convex optimization: Improved convergence rate bounds and experimental evaluations
Abstract: Huge-scale optimization problems appear in several applications, ranging from machine learning over large data sets to distributed model predictive control. Classical optimization algorithms struggle to handle these large-scale computations, and recently a number of randomized first-order methods that are simple to implement and have small per-iteration cost have been proposed. However, optimal step size selections and the corresponding convergence rates of many randomized first-order methods were still unknown. In this thesis, we hence derive convergence rate results for several randomized first-order methods for convex and strongly convex optimization problems, both with and without convex constraints. Furthermore, we have implemented these randomized first-order methods in MATLAB and evaluated their performance on l2-regularized least-squares support vector machine (SVM) classification problems. In addition, we have implemented randomized first-order projection methods for constrained convex optimization, derived associated convergence rate bounds, and evaluated the methods on l2-regularized least-squares SVM classification problems with Euclidean ball constraints on the weight vector. Based on the implementation experience, we finally discuss how data scaling/normalization and conditioning affect the convergence rates of randomized first-order methods.
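To make the class of methods described above concrete, the sketch below shows one representative randomized first-order projection method: randomized coordinate descent on an l2-regularized least-squares objective, with projection onto a Euclidean ball constraint. This is an illustrative example written in Python, not the thesis's MATLAB implementation; the function name, the uniform coordinate sampling, and the choice to project the full vector after each coordinate step are all assumptions for the sake of a self-contained demonstration.

```python
import numpy as np

def rcd_ball(X, y, lam, radius, iters=5000, seed=0):
    """Illustrative randomized coordinate descent (not the thesis code) for
        f(w) = 0.5*||X w - y||^2 + 0.5*lam*||w||^2
    subject to the Euclidean ball constraint ||w|| <= radius."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    r = -y.astype(float).copy()        # residual r = X w - y (w starts at 0)
    # Coordinate-wise Lipschitz constants give the standard 1/L_i step sizes.
    L = (X ** 2).sum(axis=0) + lam
    for _ in range(iters):
        i = rng.integers(d)            # sample a coordinate uniformly at random
        g = X[:, i] @ r + lam * w[i]   # partial derivative of f w.r.t. w_i
        step = -g / L[i]
        w[i] += step
        r += step * X[:, i]            # keep the residual consistent cheaply
        nrm = np.linalg.norm(w)
        if nrm > radius:               # project onto the Euclidean ball
            w *= radius / nrm
            r = X @ w - y              # full residual refresh after projection
    return w
```

Note the design trade-off this sketch exposes: the coordinate update and residual maintenance cost O(n) per iteration, but the ball projection rescales every coordinate, which is why per-iteration cost arguments for coordinate methods usually require separable constraints.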