Abstract
As data sizes grow and modern problem formulations become more complex, heuristic approaches with unverifiable assumptions are increasingly overtaking rigorous, conventional optimization methods at the expense of robustness. This trend can be reversed by exploiting dimensionality reduction at the core of optimization. I contend that even classical convex optimization has not yet reached its limits of scalability.
Storage and arithmetic costs are critical bottlenecks that prevent us from solving many large-scale optimization problems. For instance, semidefinite programs (SDPs) often have low-rank solutions, yet SDP algorithms require us to store a large matrix decision variable in the ambient dimension. To address this problem, we ask a fundamental question: Suppose that the solution to an optimization problem has a compact representation. Can we develop algorithms that provably approximate a solution using storage that does not exceed the size of the problem data or the size of the solution?
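To make the storage gap concrete, the following display sketches the comparison under assumed notation (an n-by-n decision variable and a rank-r solution), which is illustrative rather than taken from the talk:

```latex
% Standard-form SDP; storing X explicitly costs O(n^2) numbers,
% while a rank-r solution admits a factored representation of size O(nr).
\begin{aligned}
  &\min_{X \in \mathbb{S}^n_+} \ \langle C, X \rangle
   \quad \text{subject to} \quad \mathcal{A}(X) = b,\\
  &\text{ambient storage of } X:\ O(n^2),
   \qquad \text{rank-}r \text{ solution } X^\star = U U^\top,\ U \in \mathbb{R}^{n \times r}:\ O(nr).
\end{aligned}
```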
In this talk, we describe a new convex optimization paradigm that achieves this goal. Our key insight is to design algorithms that maintain only a small sketch of the decision variable. Combining this idea with the conditional gradient method, we introduce an algorithm that can solve huge SDP instances that are not accessible to other convex methods.
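A minimal Python sketch of this idea follows, assuming a toy least-squares SDP objective over the trace-constrained spectrahedron and a plain Gaussian sketch of the decision variable; the function name, the objective, the dense eigendecomposition, and the sketch size are illustrative assumptions, not the exact algorithm presented in the talk.

```python
import numpy as np

def sketchy_cg(A_list, b, n, alpha=1.0, r=5, iters=300, seed=0):
    """Illustrative conditional gradient for
    min_{X >= 0, tr X = alpha} 0.5 * ||A(X) - b||^2, with A(X)_i = <A_i, X>.
    The n x n iterate X is never stored: we track only the small image
    z = A(X) and a randomized sketch S = X @ Omega of the decision variable.
    """
    rng = np.random.default_rng(seed)
    k = 2 * r + 1                           # sketch width (rule of thumb)
    Omega = rng.standard_normal((n, k))     # random test matrix
    S = np.zeros((n, k))                    # sketch of the iterate, S = X @ Omega
    z = np.zeros(len(b))                    # z = A(X), a small vector

    for t in range(iters):
        # Gradient of the objective with respect to X: G = sum_i (z_i - b_i) A_i.
        G = sum((zi - bi) * Ai for zi, bi, Ai in zip(z, b, A_list))
        # Linear minimization oracle over {X >= 0, tr X = alpha}:
        # the minimizer is alpha * v v^T, with v a minimum eigenvector of G.
        _, V = np.linalg.eigh(G)
        v = V[:, 0]
        eta = 2.0 / (t + 2.0)               # standard conditional gradient step size
        # Equivalent to X <- (1 - eta) X + eta * alpha * v v^T, applied to z and S only:
        z = (1.0 - eta) * z + eta * alpha * np.array([v @ Ai @ v for Ai in A_list])
        S = (1.0 - eta) * S + eta * alpha * np.outer(v, v @ Omega)

    # A rank-r approximation of the final iterate can be recovered from the
    # sketch alone, e.g. via a Nystrom-type formula X_hat = S (Omega^T S)^+ S^T.
    return S, Omega, z
```

The key design point this toy example tries to convey is that every quantity the iteration touches is either the size of the problem data (z and the gradient assembled from it) or the size of the compressed solution (the sketch S), never the full n-by-n matrix.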
Biography
Alp Yurtsever is a PhD candidate at Ecole Polytechnique Federale de Lausanne (EPFL), advised by Prof. Volkan Cevher. Before his PhD, he received a double-major BSc from the Electrical and Electronics Engineering and Physics Departments at Middle East Technical University in Ankara. His research interests lie in optimization and machine learning. He received the EPFL IC Doctoral Fellowship in 2013 and a student paper award at the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing in 2015. He regularly publishes in machine learning venues.