Acceleration, Smoothing, and Variance Reduction for Convex Optimization and Nash Games under Uncertainty
Uday Shanbhag
Pennsylvania State University
Wednesday, May 16, 2018
4:15 - 5:15 PM
Location: Shriram 368
Abstract:
We consider a stochastic convex optimization problem of the form E[f(x,ω)] + g(x), where g may be nonsmooth but is deterministic. Traditional stochastic approximation (SA) schemes compute a new iterate by employing a single sampled gradient ∇f(x,ω) (or a fixed-size batch of such gradients) together with a prox evaluation with respect to g. As a consequence, practical implementations may require a large number of prox evaluations, often rendering such schemes impractical. We present a variable sample-size stochastic approximation (VSSA) scheme in which the batch of noisy gradients grows at a suitable rate at each iteration. We proceed to show that such schemes achieve deterministic rates of convergence for strongly convex as well as convex nonsmooth problems (by utilizing Nesterov acceleration and iterative smoothing). We further extend these avenues to gradient and inexact best-response schemes for nonsmooth stochastic Nash games. Time permitting, we demonstrate how similar avenues allow for improving the convergence rates of stochastic quasi-Newton methods for stochastic convex problems.
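As a rough illustration of the variable sample-size idea described in the abstract, the sketch below applies a proximal stochastic gradient step in which the mini-batch of sampled gradients grows geometrically across iterations, so the gradient-noise variance shrinks while only one prox evaluation is made per iteration. The concrete problem (a least-squares loss with an l1 regularizer g), the batch-growth schedule, and the step size are illustrative assumptions, not the speaker's exact scheme.

```python
import numpy as np

def soft_threshold(z, tau):
    """Prox operator of tau * ||.||_1 (the deterministic nonsmooth term g)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def vssa_prox_gradient(grad_sample, prox, x0, step, batch_sizes):
    """Variable sample-size proximal stochastic gradient sketch.

    grad_sample(x, n) -- returns an average of n sampled gradients of f at x
    prox(z, alpha)    -- prox operator of alpha * g
    batch_sizes       -- iterable of growing mini-batch sizes N_k (assumed schedule)
    """
    x = x0.copy()
    for n_k in batch_sizes:
        g_hat = grad_sample(x, n_k)        # averaged noisy gradient; variance ~ 1/N_k
        x = prox(x - step * g_hat, step)   # single prox evaluation per iteration
    return x

if __name__ == "__main__":
    # Toy instance: f(x, omega) is a randomly sampled least-squares residual,
    # g(x) = lam * ||x||_1. All data below is synthetic.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((200, 20)), rng.standard_normal(200)
    lam = 0.1

    def grad_sample(x, n):
        # Sample n rows with replacement to form an unbiased gradient estimate.
        idx = rng.integers(0, A.shape[0], size=n)
        Ai, bi = A[idx], b[idx]
        return Ai.T @ (Ai @ x - bi) / n

    batches = [int(np.ceil(1.1 ** k)) for k in range(200)]  # geometrically growing N_k
    x_hat = vssa_prox_gradient(grad_sample,
                               lambda z, a: soft_threshold(z, lam * a),
                               np.zeros(20), step=1e-2, batch_sizes=batches)
    print(x_hat)
```

With a fixed batch size this reduces to an ordinary SA/proximal stochastic gradient scheme; the growing schedule is what trades extra sampling per iteration for far fewer prox evaluations overall.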
Operations Research Colloquia: http://or.stanford.edu/oras_seminars.html