ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!

Location
Building 9, Level 2, Room 2325

Abstract

We introduce ProxSkip---a surprisingly simple and provably efficient method for minimizing the sum of a smooth ($f$) and an expensive nonsmooth proximable ($\psi$) function. The canonical approach to solving such problems is via the proximal gradient descent (ProxGD) algorithm, which is based on the evaluation of the gradient of $f$ and the prox operator of $\psi$ in each iteration. In this work we are specifically interested in the regime in which the evaluation of prox is costly relative to the evaluation of the gradient, which is the case in many applications. ProxSkip allows for the expensive prox operator to be skipped in most iterations: while its iteration complexity is $O(\kappa \log 1/\epsilon)$, where $\kappa$ is the condition number of $f$, the number of prox evaluations is $O(\sqrt{\kappa} \log 1/\epsilon)$ only. Our main motivation comes from federated learning, where evaluation of the gradient operator corresponds to taking a local GD step independently on all devices, and evaluation of prox corresponds to (expensive) communication in the form of gradient averaging. In this context, ProxSkip offers an effective acceleration of communication complexity. Unlike other local gradient-type methods, such as FedAvg, Scaffold, S-Local-GD and FedLin, whose theoretical communication complexity is worse than, or at best matching, that of vanilla GD in the heterogeneous data regime, we obtain a provable and large improvement without any heterogeneity-bounding assumptions.
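
To make the abstract's claim concrete, here is a sketch of the core ProxSkip iteration as we read it from the paper (stepsize $\gamma > 0$ and prox probability $p \in (0,1]$ are parameters; consult the paper for the precise statement and the choice $\gamma = 1/L$, $p \approx 1/\sqrt{\kappa}$). Starting from $x_0$ and a control variate $h_0 = 0$, each iteration first takes a shifted gradient step
$$\hat{x}_{t+1} = x_t - \gamma\,\left(\nabla f(x_t) - h_t\right),$$
then, with probability $p$, evaluates the expensive prox,
$$x_{t+1} = \mathrm{prox}_{\frac{\gamma}{p}\psi}\!\left(\hat{x}_{t+1} - \tfrac{\gamma}{p}\, h_t\right),$$
and otherwise simply sets $x_{t+1} = \hat{x}_{t+1}$; finally, the control variate is refreshed via
$$h_{t+1} = h_t + \tfrac{p}{\gamma}\,\left(x_{t+1} - \hat{x}_{t+1}\right).$$
Since the prox is evaluated in only about a $p$ fraction of the iterations, taking $p \approx 1/\sqrt{\kappa}$ turns the $O(\kappa \log 1/\epsilon)$ iteration complexity into only $O(\sqrt{\kappa} \log 1/\epsilon)$ prox evaluations. In the federated learning setting, with $\psi$ chosen as the indicator of the consensus constraint, the prox step reduces to averaging across devices, so every skipped prox evaluation is a skipped communication round.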

Brief Biography

Peter Richtarik is a Professor of Computer Science at the King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, where he leads the Optimization and Machine Learning Lab. At KAUST, he has a courtesy affiliation with the Applied Mathematics and Computational Sciences program and the Statistics program, and is a member of the Visual Computing Center and the Extreme Computing Research Center. Prof. Richtarik is a founding member and a Fellow of the Alan Turing Institute (the UK National Institute for Data Science and Artificial Intelligence), and an EPSRC Fellow in Mathematical Sciences. During 2017-2019, he was a Visiting Professor at the Moscow Institute of Physics and Technology. Prior to joining KAUST, he was an Associate Professor of Mathematics at the University of Edinburgh, and held postdoctoral and visiting positions at Université Catholique de Louvain, Belgium, and the University of California, Berkeley, USA, respectively. He received his PhD in 2007 from Cornell University, USA.

Prof. Richtarik’s research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, and high-performance computing. Through his work on randomized and distributed optimization algorithms, he has contributed to the foundations of machine learning, optimization, and randomized numerical linear algebra. He is one of the original developers of Federated Learning, a new subfield of artificial intelligence whose goal is to train machine learning models efficiently over private data stored across a large number of heterogeneous devices, such as mobile phones or hospitals, without compromising user privacy. An October 2020 Forbes article listed Federated Learning, alongside self-supervised learning and transformers, as one of three emerging areas that will shape the next generation of Artificial Intelligence technologies.

Contact Person