Federated Learning of a Mixture of Global and Local Models

Event Start
Event End
Location
KAUST

Abstract

We propose a new optimization formulation for training federated learning models. The standard formulation has the form of an empirical risk minimization problem constructed to find a single global model trained from the private data stored across all participating devices. In contrast, our formulation seeks an explicit trade-off between this traditional global model and the local models, which can be learned by each device from its own private data without any communication. Further, we develop several efficient variants of SGD (with and without partial participation and with and without variance reduction) for solving the new formulation and prove communication complexity guarantees. Notably, our methods are similar but not identical to federated averaging / local SGD, thus shedding some light on the essence of this elusive method. In particular, our methods do not perform full averaging steps and instead merely take steps towards averaging. We argue for the benefits of this new paradigm for federated learning.
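One illustrative way to write such a global/local trade-off is sketched below; the exact objective is the one defined in the talk, and the notation here is an assumption (lambda is a mixing penalty, x_i and f_i are the local model and local loss of device i, and x-bar is the average of the local models):

\min_{x_1,\dots,x_n} \ \frac{1}{n}\sum_{i=1}^{n} f_i(x_i) \;+\; \frac{\lambda}{2n}\sum_{i=1}^{n} \lVert x_i - \bar{x} \rVert^2, \qquad \bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i.

With a small lambda, each device keeps an essentially local model; with a large lambda, the local models are pulled towards a single global one.

For intuition on "steps towards averaging", the toy Python sketch below contrasts a full FedAvg-style averaging step with a partial step towards the average. The step size and device models are made up for illustration; this is not the speaker's exact algorithm.

    import numpy as np

    def full_averaging(local_models):
        """FedAvg-style synchronization: every device is reset to the exact mean."""
        avg = np.mean(local_models, axis=0)
        return np.array([avg.copy() for _ in local_models])

    def step_towards_average(local_models, step_size):
        """Partial averaging: each device moves only a fraction of the way to the mean,
        retaining an explicit mixture of its local model and the global average."""
        avg = np.mean(local_models, axis=0)
        return local_models + step_size * (avg - local_models)

    # Toy example: 3 devices, 2-dimensional models.
    models = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
    print(full_averaging(models))              # every row becomes the mean [1.0, 1.0]
    print(step_towards_average(models, 0.5))   # each row moves halfway towards the mean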

Brief Biography

Filip Hanzely is a recent PhD graduate from the group of Peter Richtarik. His research focuses on various aspects of optimization for machine learning, in particular on designing provably efficient algorithms for solving big data problems. Filip is joining the Toyota Technological Institute at Chicago (TTIC) as a research assistant professor in December.

Contact Person