On Seven Fundamental Optimization Challenges in Machine Learning

Overview

Abstract

Many recent successes of machine learning have gone hand in hand with advances in optimization. The exchange of ideas between the two fields has worked both ways: machine learning builds on standard optimization procedures such as gradient descent, while new directions in optimization theory stem from machine learning applications. In this thesis, we discuss new developments in optimization inspired by the needs and practice of machine learning, federated learning, and data science. In particular, we consider seven key challenges of mathematical optimization that are relevant to modern machine learning applications and develop a solution to each.

Our first contribution is the resolution of a key open problem in Federated Learning: we establish the first theoretical guarantees for the well-known Local SGD algorithm in the crucially important heterogeneous data regime. As the second challenge, we close the gap between the upper and lower bounds in the theory of two incremental algorithms, Random Reshuffling (RR) and Shuffle-Once, which are widely used in practice and are in fact the default data-selection strategies for SGD in modern machine learning software. Our third contribution combines our new theory for proximal RR with Local SGD, yielding a new algorithm that we call FedRR. Unlike Local SGD, FedRR is the first local first-order method that can provably beat gradient descent in communication complexity in the heterogeneous data regime. The fourth challenge concerns the class of adaptive methods: we present the first parameter-free stepsize rule for gradient descent that provably works for any locally smooth convex objective. The fifth challenge, which we resolve in the affirmative, is whether distributed optimization with quantized updates can preserve the global linear convergence of gradient descent. Finally, in our sixth and seventh challenges, we develop new variance-reduction (VR) mechanisms applicable to the non-smooth setting, based on proximal operators and matrix splitting.
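To make the data-selection strategies mentioned above concrete, the following Python snippet is a minimal illustrative sketch, not code from the thesis: it contrasts classical with-replacement sampling against the permutation-based Random Reshuffling and Shuffle-Once rules on a toy finite-sum least-squares problem. The quadratic losses, the stepsize, the problem sizes, and the helper names grad_i and sgd are assumptions made only for illustration.

    import numpy as np

    # Toy finite-sum objective f(x) = (1/n) * sum_i f_i(x) with
    # quadratic components f_i(x) = 0.5 * (A_i x - b_i)^2 (illustrative choice).
    rng = np.random.default_rng(0)
    n, d = 32, 5
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    def grad_i(x, i):
        """Gradient of the i-th component f_i(x) = 0.5 * (A_i x - b_i)^2."""
        return (A[i] @ x - b[i]) * A[i]

    def sgd(x0, stepsize=0.01, epochs=50, rule="rr"):
        """Run SGD with one of three data-selection rules:
        'iid' - sample an index uniformly with replacement at every step,
        'rr'  - draw a fresh random permutation at the start of each epoch,
        'so'  - draw a single permutation once and reuse it in every epoch.
        """
        x = x0.copy()
        perm = rng.permutation(n)  # fixed permutation used by Shuffle-Once
        for _ in range(epochs):
            if rule == "rr":
                perm = rng.permutation(n)  # reshuffle every epoch
            for t in range(n):
                i = rng.integers(n) if rule == "iid" else perm[t]
                x -= stepsize * grad_i(x, i)
        return x

    # Compare the final distance to the least-squares minimizer.
    x_star = np.linalg.lstsq(A, b, rcond=None)[0]
    for rule in ("iid", "rr", "so"):
        x = sgd(np.zeros(d), rule=rule)
        print(f"{rule:>3}: distance to minimizer = {np.linalg.norm(x - x_star):.3e}")

The sketch only demonstrates how the three sampling rules differ mechanically; the convergence comparisons and rates themselves are the subject of the corresponding thesis chapters.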

In all cases, our theory is simpler and tighter, and relies on fewer assumptions, than the prior literature. We accompany each chapter with numerical experiments that illustrate the tightness of the proposed theoretical results.

Brief Biography

Konstantin Mishchenko received his BSc from the Moscow Institute of Physics and Technology in 2016, and his double-degree MSc from Paris-Dauphine and École normale supérieure Paris-Saclay in 2017. During his PhD at KAUST, he worked under the supervision of Peter Richtárik, was a recipient of the Dean's Award, and completed research internships at Google Brain and Amazon. Konstantin has been recognized as an outstanding reviewer for NeurIPS 2019, ICML 2020, AAAI 2020, ICLR 2021, and ICML 2021. He has published 8 conference papers at ICML, NeurIPS, AISTATS, and UAI and 1 journal paper in SIOPT, presented 6 workshop papers, and co-authored 8 more preprints, some of which are currently under peer review. After graduating from KAUST, Konstantin will join the group of Alexandre d’Aspremont and Francis Bach in Paris as a CNRS Postdoctoral Researcher.

Presenters