This dissertation studies learning under structural information constraints across three major paradigms: federated learning, cooperative multi-agent reinforcement learning, and black-box optimization of large language models.

Overview

In modern machine learning systems, data are often decentralized, environments are partially observable, and model internals are inaccessible, fundamentally limiting the information available for optimization and decision-making.

To address these challenges, this work develops principled algorithmic frameworks that treat learning as decision-making under uncertainty. In federated learning, we propose a combinatorial client filtering approach that formulates client participation as a non-monotone set optimization problem, enabling the selection of client subsets that improve convergence and generalization under statistical heterogeneity and communication constraints. We further introduce a decentralized personalized federated learning framework based on bi-level optimization, which jointly learns client models and collaboration graphs, allowing adaptive, asymmetric cooperation without prior knowledge of data similarity.
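As a toy illustration of the client-filtering idea, the sketch below defines an invented non-monotone set function (the objective, weights, and client "drift" values are assumptions for this example, not the dissertation's actual formulation) under which a strict subset of clients is preferable to full participation:

```python
import itertools

# Hypothetical stand-in for a filtering objective: the value of training on
# a candidate client subset. The function is non-monotone -- adding a client
# can lower the value when its update pulls the global model off course.
def subset_value(subset, bias):
    coverage = len(subset)                      # reward data coverage
    drift = abs(sum(bias[c] for c in subset))   # penalize aggregated drift
    return coverage - 1.5 * drift

# Toy per-client drift directions under statistical heterogeneity.
bias = {0: 0.9, 1: 0.8, 2: 0.1, 3: -0.1, 4: 0.7}

# Exhaustive search stands in for the combinatorial solver at this toy scale.
best = max(
    (frozenset(s)
     for r in range(len(bias) + 1)
     for s in itertools.combinations(bias, r)),
    key=lambda s: subset_value(s, bias),
)
print(sorted(best))  # → [2, 3]: a strict subset beats full participation
```

Here clients 2 and 3 have small, offsetting drifts, so filtering out the other three clients yields a higher objective value than including everyone — the kind of behavior a monotone selection rule would miss.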

In cooperative multi-agent reinforcement learning, we present a latent inference framework for decentralized, partially observable environments. The method learns compact global representations during centralized training and enables each agent to infer these latent states from its local observation history during execution, improving the robustness of coordination when the global state is unavailable.

Finally, for black-box large language models, we develop an entropy-regularized actor–critic framework for instruction optimization. By reformulating discrete prompt design as continuous latent policy optimization, the approach enables efficient exploration and learning using only interaction feedback, outperforming both human-designed prompts and existing automated methods.
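A minimal numpy sketch of the entropy-regularized, policy-gradient idea follows; the quadratic black-box scorer, latent dimensions, and hyperparameters are all invented for illustration (the actual method uses a learned actor–critic over instruction embeddings). A Gaussian policy over a continuous latent space is improved using only scalar reward evaluations, with an entropy bonus to sustain exploration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Black-box scorer standing in for LLM evaluation: a scalar reward for a
# latent instruction vector, with no gradients or internals available.
target = np.array([0.7, -0.3])
def black_box_reward(z):
    return -float(np.sum((z - target) ** 2))

mu, log_std = np.zeros(2), np.zeros(2)   # Gaussian actor over the latent space
alpha, lr = 0.01, 0.05                   # entropy weight, learning rate

for step in range(200):
    std = np.exp(log_std)
    eps = rng.normal(size=(16, 2))
    z = mu + std * eps                                 # sampled latent instructions
    r = np.array([black_box_reward(zi) for zi in z])
    adv = r - r.mean()                                 # batch baseline as a cheap critic stand-in
    # Score-function (REINFORCE) gradients for a diagonal Gaussian policy.
    g_mu = (adv[:, None] * eps / std).mean(axis=0)
    g_log_std = (adv[:, None] * (eps ** 2 - 1.0)).mean(axis=0)
    # Gaussian entropy is sum(log_std) + const, so the entropy bonus
    # contributes +alpha to the log_std gradient, delaying collapse
    # of exploration.
    mu += lr * g_mu
    log_std += lr * (g_log_std + alpha)

print(mu.round(2), round(black_box_reward(mu), 3))
```

The policy mean converges toward the reward-maximizing latent using reward evaluations alone — the same interaction-feedback-only setting that black-box instruction optimization operates in.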

Across these domains, the dissertation demonstrates that combinatorial optimization, structured collaboration, latent representation learning, and reinforcement learning provide complementary tools for overcoming information limitations. These contributions advance the development of scalable, robust, and adaptive learning systems that operate effectively under decentralized, partially observable, and black-box conditions.

Presenter

Brief Biography

Salma Kharrat is a Ph.D. candidate in Computer Science at King Abdullah University of Science and Technology (KAUST), where her research focuses on machine learning under limited information, spanning federated learning, multi-agent reinforcement learning, and large language models.

Her research has been published in leading AI and machine learning venues, including AISTATS, EMNLP, and ECAI. Her contributions include FilFL, DPFL, and ACING, which address client selection in federated learning, decentralized personalization, and instruction optimization for large language models, respectively. During her Ph.D., she was recognized with the KAUST Dean's List Award.

Salma earned her M.Sc. in Computer Science from KAUST and her engineering degree from the National School of Computer Science in Tunisia, where she ranked among the top students.

In addition to her research, she has been actively involved in teaching and mentoring, serving as an instructor and teaching assistant for machine learning and AI courses at KAUST and across Saudi Arabia, and mentoring student research projects through KAUST Academy.