The presence of multiple objectives and multiple agents is an important and ubiquitous characteristic of many, if not most, real-world decision problems. Mathematically, this means that agents receive a reward vector rather than a scalar reward. This seemingly minor change fundamentally transforms the problem, shaping both the optimisation criteria and the solution concepts. For example, the well-known game-theoretic result that every finite (single-objective) normal-form game has a Nash equilibrium in mixed strategies no longer holds when agents care about more than one objective.
In this tutorial, we will start with what it means to care about more than one aspect of a solution and why this is pertinent for modelling multi-agent settings. We will examine what agents should optimise for in multi-objective settings and discuss the different underlying assumptions, culminating in a taxonomy of multi-objective multi-agent settings and the accompanying solution concepts. We will then present existing results and algorithmic approaches from evolutionary methods and multi-objective multi-agent reinforcement learning.
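To make the vector-reward setting and the optimisation criteria concrete, here is a minimal, purely illustrative sketch (not taken from the tutorial materials): a two-player, two-action normal-form game whose payoffs are two-dimensional vectors, together with an assumed nonlinear utility function. It contrasts two commonly used criteria, scalarised expected returns (SER, apply the utility to the expected vector return) and expected scalarised returns (ESR, take the expectation of the utility of each outcome); the game, payoff values, and utility function below are invented for illustration only.

import numpy as np

# Illustrative 2-player, 2-action game: payoff[i, j] is the 2-objective
# reward vector for the row player when row plays i and column plays j.
# (Payoff values are made up for this sketch.)
payoff = np.array([
    [[4.0, 0.0], [3.0, 1.0]],
    [[1.0, 3.0], [0.0, 4.0]],
])

def utility(v):
    # An assumed nonlinear utility: the product of the two objectives.
    return v[..., 0] * v[..., 1]

def ser(row_strategy, col_strategy):
    # Scalarised expected returns: u(E[r]) under the given mixed strategies.
    expected_vector = np.einsum('i,j,ijk->k', row_strategy, col_strategy, payoff)
    return utility(expected_vector)

def esr(row_strategy, col_strategy):
    # Expected scalarised returns: E[u(r)] under the given mixed strategies.
    outcome_utilities = utility(payoff)  # utility of each pure outcome, shape (2, 2)
    return row_strategy @ outcome_utilities @ col_strategy

row = np.array([0.5, 0.5])
col = np.array([0.5, 0.5])
print("SER:", ser(row, col))  # u([2, 2]) = 4.0
print("ESR:", esr(row, col))  # mean of {0, 3, 3, 0} = 1.5

The gap between the two values for the same mixed strategy illustrates why the choice of optimisation criterion matters in multi-objective games, and why equilibrium analysis can diverge from the single-objective case.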
Speakers

Gaurav Dixit
Gaurav is a postdoctoral scholar at the Autonomous Agents and Distributed Intelligence Lab at Oregon State University, where he earned his PhD in 2023. His work with the AI-CARING Institute aims to facilitate the collective decision-making required to pursue high-level, long-term, dynamic, and possibly ill-defined objectives that emerge from changing user preferences.
Roxana Rădulescu
Roxana is an assistant professor in the Intelligent Systems group at Utrecht University. She obtained her PhD from the Vrije Universiteit Brussel in September 2021. Her research focusses on the development of multi-agent decision-making systems where each agent is driven by different objectives and goals, under the paradigm of multi-objective multi-agent reinforcement learning.