from March 28 to July 1, 2016
Rio de Janeiro (Brazil)

Stochastic Convex Optimization Methods in Machine Learning


Mark Schmidt, University of British Columbia

From May 16 to May 20, 2016

We first review classic algorithms and complexity results for stochastic methods in convex optimization, and then turn our attention to the wide variety of exponentially-convergent algorithms that have been developed in the last four years. Topics will include finite-time convergence rates of classic stochastic gradient methods, stochastic average/variance-reduced gradient methods, primal-dual methods, proximal operators, acceleration, alternating minimization, non-uniform sampling, and a discussion of parallelization and non-convex problems. Applications in machine learning will be emphasized, but the principles covered in this course apply to many fields.
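To illustrate the contrast between classic stochastic gradient methods and the variance-reduced methods listed above, here is a minimal Python sketch (not part of the course materials). The least-squares objective, synthetic data, step sizes, and iteration counts are illustrative assumptions; SVRG stands in for the family of variance-reduced methods.

```python
import numpy as np

# Illustrative synthetic least-squares problem:
# f(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2
rng = np.random.default_rng(0)
n, d = 1000, 10
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true + 0.1 * rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the i-th least-squares term."""
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    """Full gradient, averaged over all n terms."""
    return A.T @ (A @ w - b) / n

def sgd(w, steps=5000, lr0=0.1):
    # Classic stochastic gradient: the gradient-estimate variance does not
    # vanish at the solution, so a decaying step size is needed and the
    # convergence rate is sublinear.
    for t in range(1, steps + 1):
        i = rng.integers(n)
        w = w - (lr0 / t) * grad_i(w, i)
    return w

def svrg(w, epochs=20, inner=n, lr=0.01):
    # SVRG: a full gradient computed at a snapshot point corrects each
    # stochastic step, so the estimate's variance shrinks as iterates
    # approach the solution; a constant step size then yields the
    # exponential (linear) convergence discussed in the course.
    for _ in range(epochs):
        w_snap = w.copy()
        g_snap = full_grad(w_snap)
        for _ in range(inner):
            i = rng.integers(n)
            w = w - lr * (grad_i(w, i) - grad_i(w_snap, i) + g_snap)
    return w

w0 = np.zeros(d)
print("SGD  error:", np.linalg.norm(sgd(w0.copy()) - w_true))
print("SVRG error:", np.linalg.norm(svrg(w0.copy()) - w_true))
```

Both methods touch one data point per inner step, so their per-iteration costs match; the difference is the occasional full-gradient pass, which buys SVRG its much faster rate.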

 

Material

Slides: L1, L2, L3, L4, L5a, L5b, L6, L7, L8

Videos: 1, 2, 3, 4, 5, 6, 7

 
