KL Divergence Regularized Learning Model for Multi-Agent Decision Making
Overview
Abstract
The large population game framework has been widely adopted in biology, economics, and engineering to model and analyze strategic interactions among decision-making agents. In this framework, a population of agents select strategies for interacting with one another and repeatedly revise their strategy choices according to a decision-making model. While many existing works in the literature focus on designing decision-making models that ensure convergence of the agents’ strategy revision to a Nash equilibrium, establishing such convergence when the agents’ strategy revision is subject to time delay remains an open challenge. Such scenarios arise in multi-agent decision problems where there is delay in the propagation of traffic congestion in congestion games, in communication between the electric power utility and demand response agents in demand response games, and in information transmission between agents in network games. In this seminar, I’ll introduce our recent work on designing a new decision-making model called Kullback-Leibler (KL) divergence regularized learning. We will discuss how the new model enables a large population of agents to learn and self-organize to an effective strategy profile in population games subject to time delay, as well as the implications of the new model for engineering applications.
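To make the idea concrete, the following is a minimal illustrative sketch of what a KL-divergence-regularized strategy revision can look like in a population game. It is not the model from the talk: the linear congestion-style payoff function, the step size `eta`, and all numerical values are hypothetical. Each revision step maximizes expected payoff minus a KL-divergence penalty to the current strategy distribution, which yields a multiplicative-weights-style closed form.

```python
import numpy as np

def kl_regularized_revision(x, payoffs, eta=0.2):
    """One KL-regularized revision step for a strategy distribution x.

    Solves  max_{x'} <x', payoffs> - (1/eta) * KL(x' || x),
    whose closed-form solution is the multiplicative update below.
    """
    x_new = x * np.exp(eta * payoffs)
    return x_new / x_new.sum()  # renormalize to a probability distribution

def congestion_payoffs(x):
    """Hypothetical congestion-style payoffs: each strategy's payoff is the
    negative of a linear congestion cost that grows with the mass using it."""
    costs = np.array([2.0, 1.0, 3.0])  # assumed per-strategy cost coefficients
    return -costs * x

# Start from the uniform distribution and iterate the revision process.
x = np.ones(3) / 3
for _ in range(2000):
    x = kl_regularized_revision(x, congestion_payoffs(x))

print(np.round(x, 3))  # strategies with higher congestion cost attract less mass
```

In this toy game the fixed point equalizes the congestion costs across strategies, i.e., the Nash equilibrium of the congestion game; the KL penalty keeps each revision close to the current distribution, which is the mechanism the regularization exploits.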
Brief Biography
Shinkyu Park is an Assistant Professor of Electrical and Computer Engineering at KAUST. Prior to joining KAUST, he was an Associate Research Scholar at Princeton University, where he was engaged in cross-departmental robotics projects. He received his Ph.D. in electrical engineering from the University of Maryland, College Park in 2015 and later held postdoctoral researcher positions at the National Geographic Society (2016) and the Massachusetts Institute of Technology (2016-2019). Park’s research focuses on the design and control of multi-robot systems. His past research projects include designing animal-borne sensor networks to monitor wild animal groups in their natural habitats. He also created a fleet of urban autonomous surface vessels capable of transporting people, making deliveries, and removing trash through urban canal networks. His current research interests are in robotics, multi-robot control and coordination, feedback control theory, and game theory.