Trainable Decompositions: Addressing Device Heterogeneity

Location
Building 4, Level 5, Room 5220

Abstract

In the first part of the talk, we introduce Ordered Dropout, a mechanism that imposes an ordered, nested representation of knowledge in deep neural networks (DNNs). This allows lower-footprint submodels to be extracted without retraining.
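
To make the nested structure concrete, here is a minimal PyTorch sketch of an Ordered Dropout linear layer. The class name, the discrete set of width fractions, and the masking strategy are assumptions for illustration, not the reference implementation:

```python
import torch
import torch.nn as nn

class OrderedDropoutLinear(nn.Module):
    """Illustrative Ordered Dropout layer (a sketch, not the authors' code).

    During training, a width fraction p is sampled and only the first
    ceil(p * out_features) output units are kept. Earlier units therefore
    learn the most important features, and nested submodels can later be
    sliced out by truncating the width, without retraining.
    """

    def __init__(self, in_features, out_features, widths=(0.25, 0.5, 0.75, 1.0)):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.widths = widths  # assumed discrete set of width fractions

    def forward(self, x, p=None):
        if p is None:
            # Sample a width fraction uniformly from the discrete set.
            p = self.widths[torch.randint(len(self.widths), (1,)).item()]
        k = max(1, int(round(p * self.linear.out_features)))
        out = self.linear(x)
        # Zero out the trailing units, preserving the ordered, nested structure.
        mask = torch.zeros_like(out)
        mask[..., :k] = 1.0
        return out * mask
```

At inference time, calling the layer with a fixed `p` extracts the corresponding lower-footprint submodel directly.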

In the second part of the talk, we will discuss the FjORD framework, which addresses client heterogeneity in federated learning. FjORD uses Ordered Dropout to gradually prune the model width, tailoring the model to each client's capabilities so that devices of varying power can participate, again without retraining.
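
The sketch below shows one FjORD-style federated round under the same assumptions: each client updates only the leading fraction of the model's width that fits its capability, and the server averages each coordinate over the clients that actually updated it. The `fjord_round` helper and the client dictionary layout are hypothetical, not the paper's code:

```python
import torch

def fjord_round(global_weight, clients, lr=0.1):
    """One illustrative width-heterogeneous federated round (a sketch)."""
    total = torch.zeros_like(global_weight)
    counts = torch.zeros_like(global_weight)
    for client in clients:
        # Each client trains only the first k rows its capability allows.
        k = max(1, int(round(client["p_max"] * global_weight.shape[0])))
        local = global_weight[:k].clone()
        local -= lr * client["grad"][:k]  # hypothetical local SGD step
        total[:k] += local
        counts[:k] += 1
    # Average each coordinate over the clients that touched it;
    # untouched coordinates keep the old global value.
    return torch.where(counts > 0, total / counts.clamp(min=1), global_weight)

# Usage: a weak client (half width) and a strong client (full width).
w = torch.randn(8, 4)
clients = [{"p_max": 0.5, "grad": torch.randn(8, 4)},
           {"p_max": 1.0, "grad": torch.randn(8, 4)}]
w = fjord_round(w, clients)
```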

Lastly, we will explore the Maestro framework. Maestro tackles model size by employing Ordered Dropout to discover low-rank layers, compressing the model while maintaining accuracy.
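
As a rough illustration of the idea, the following sketch factorizes a linear layer into two low-rank matrices and applies Ordered Dropout over the rank dimension, so that trailing ranks can later be truncated for compression. The class and method names are assumptions, not the Maestro reference code:

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Illustrative low-rank layer: W is factorized as B @ A (a sketch).

    Ordered Dropout over the rank dimension trains the leading rank-1
    components most often, so they capture the most important directions
    and trailing ranks can be dropped to compress the layer.
    """

    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.B = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.rank = rank

    def forward(self, x, r=None):
        if self.training and r is None:
            # Sample an effective rank (assumed uniform for illustration).
            r = torch.randint(1, self.rank + 1, (1,)).item()
        r = r or self.rank
        # Use only the first r rank-1 components.
        return x @ self.A[:r].t() @ self.B[:, :r].t()

    def truncate(self, r):
        """Permanently keep the leading r ranks (the compression step)."""
        with torch.no_grad():
            self.A = nn.Parameter(self.A[:r].clone())
            self.B = nn.Parameter(self.B[:, :r].clone())
            self.rank = r
```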

Brief Biography

Samuel Horvath is an assistant professor of Machine Learning at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). Before joining MBZUAI, he completed his MS and Ph.D. in statistics at King Abdullah University of Science and Technology (KAUST), advised by Professor Peter Richtárik. His research interests lie in distributed, collaborative, and efficient on-device ML.
