Stochastic Nested Composition Optimization and Beyond


Mengdi Wang
Princeton University

Wednesday, November 2016
4:20 - 5:20 PM
Location: Y2E2 105


Abstract:

Classical stochastic optimization models usually involve expected-value objective functions (for example, empirical risk minimization). However, they do not apply to the minimization of a composition of two or more expected-value functions, i.e., the stochastic nested composition optimization problem. Stochastic composition optimization finds wide application in estimation, risk-averse optimization, dimension reduction, and reinforcement learning. We propose a class of stochastic compositional first-order methods. We prove that the algorithms converge almost surely to an optimal solution for convex problems (or to a stationary point for nonconvex problems), as long as such a solution exists. The convergence analysis involves the interplay of two martingales operating on different timescales. We obtain rate-of-convergence results under various assumptions and show that the algorithms achieve the optimal sample-error complexity in several important special cases. These results provide the best-known sample-complexity benchmarks for stochastic composition optimization. We demonstrate applications to statistical estimation and reinforcement learning. In addition, we introduce some recent developments in nonconvex statistical optimization.
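Below is a minimal, illustrative sketch (not taken from the talk) of the basic two-timescale idea behind stochastic compositional gradient methods: an auxiliary variable tracks the inner expectation on a faster timescale, while the decision variable takes quasi-gradient steps on a slower one. The toy problem, step-size exponents, and all function names are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: minimize f(g(x)) with
#   g(x) = E_w[(A + noise_w) x]            (inner expectation)
#   f(y) = E_v[||y - (b + noise_v)||^2]    (outer expectation)
# so the minimizer drives A x toward b.
d = 5
A = rng.standard_normal((d, d))
b = rng.standard_normal(d)

def sample_g(x):
    """One noisy sample of the inner map: value g_w(x) and its Jacobian."""
    A_w = A + 0.1 * rng.standard_normal((d, d))
    return A_w @ x, A_w

def sample_grad_f(y):
    """One noisy sample of the outer gradient: grad f_v(y) = 2 (y - b_v)."""
    b_v = b + 0.1 * rng.standard_normal(d)
    return 2.0 * (y - b_v)

x = np.zeros(d)   # decision variable (slow timescale)
y = np.zeros(d)   # running estimate of g(x) (fast timescale)
for k in range(1, 20001):
    alpha = 0.1 / k ** 0.75   # x step size: decays faster (slow timescale)
    beta = 1.0 / k ** 0.5     # y mixing weight: decays slower (fast timescale)
    g_val, J_w = sample_g(x)
    y = (1.0 - beta) * y + beta * g_val          # track the inner expectation
    x = x - alpha * J_w.T @ sample_grad_f(y)     # compositional quasi-gradient step

print("||A x - b||^2 after 20000 iterations:", np.linalg.norm(A @ x - b) ** 2)
```

The essential design choice is that y is updated with a larger step size than x, so the running estimate of g(x) stays accurate enough for the x-update to behave like a gradient step on the true composition f(g(x)), even though an unbiased gradient of the composition is unavailable from single samples.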




Operations Research Colloquia: http://or.stanford.edu/oras_seminars.html