# Optimal Cooperation, Communication, and Learning in Decentralized Systems (14w5077)

Arriving in Banff, Alberta on Sunday, October 12 and departing Friday, October 17, 2014

## Organizers

Aditya Mahajan (McGill University)

Maxim Raginsky (University of Illinois, Urbana Champaign)

Demosthenis Teneketzis (University of Michigan, Ann Arbor)

Serdar Yüksel (Queen's University)

## Objectives

The question of optimal decision making in decentralized systems arises in application domains as diverse as smart grids, cyber-physical systems, communication networks, machine learning, and information processing in organizations. Traditionally, these domains have been investigated by different research communities, each of which has developed its own mathematical tools and theories for optimal decentralized decision making. The objective of this workshop is to provide an opportunity for researchers from these communities (stochastic control, economics, information theory, and machine learning) to exchange ideas and learn the mathematical tools and techniques used by the others. The workshop will explore connections between the solution approaches of the different communities, foster collaborations, and provide an improved understanding of optimal decentralized decision making. It will address the following research themes:

**Cooperation and coordination in decentralized systems.** In decentralized systems, no decision maker (DM) knows the information known to all other DMs, yet all DMs must cooperate to achieve a common, system-wide objective. Multiple approaches have been used in the literature to achieve cooperation and coordination, including: (a) identifying optimality conditions under which a solution can be obtained by each DM from its local information (using either mathematical programming or dynamic programming); and (b) identifying projected sub-problems that are solved at each DM, with the results reconciled by iterating after exchanging information (either through a pricing mechanism or through explicit data communication subject to constraints). This workshop will bring these approaches together to develop solution techniques for a larger class of decentralized decision-making problems.

**Role of communication in decentralized systems.** Communication, or information exchange, is an important aspect of decentralized decision making for the following reasons: (a) communication generates common knowledge among DMs, and such common knowledge is useful for dynamic programming (see the previous theme) and learning (see the next theme); (b) when the DMs have an incentive to communicate, the global optimization problem is usually non-convex, whereas when such an incentive is absent, e.g., in static and partially nested teams, the global optimization problem is convex. Thus, on the one hand, the incentive to communicate makes the decentralized optimization problem harder; on the other hand, the presence of communication sometimes facilitates a dynamic programming decomposition, e.g., under partial history sharing. (c) Economic and technological constraints often restrict information exchange, which in turn restricts the available solution approaches. This workshop will create a holistic understanding of these different roles of communication in decentralized systems.
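The convexity of static teams can be made concrete with a toy example. The sketch below is an illustrative assumption, not taken from the workshop material: two DMs observe independent noisy versions of a Gaussian state and use linear strategies `u_i = k_i * y_i`. The expected quadratic team cost is then a convex quadratic in the gains `(k1, k2)`, so plain gradient descent on Monte Carlo moment estimates recovers the team-optimal gains. All variable names and numerical values are chosen for illustration only.

```python
import numpy as np

# Hypothetical static team (illustrative): state x ~ N(0, 1); DM i observes
# y_i = x + v_i with independent N(0, 0.25) noise and plays u_i = k_i * y_i.
# Team cost: E[(u1 + u2 - x)^2 + 0.1 * (u1^2 + u2^2)].
# For linear strategies this cost is a convex quadratic in (k1, k2).
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)
y1 = x + 0.5 * rng.standard_normal(n)
y2 = x + 0.5 * rng.standard_normal(n)

def cost(k):
    u1, u2 = k[0] * y1, k[1] * y2
    return np.mean((u1 + u2 - x) ** 2 + 0.1 * (u1 ** 2 + u2 ** 2))

# Gradient descent on the (sample) convex cost converges to the team optimum.
k = np.zeros(2)
for _ in range(500):
    e = k[0] * y1 + k[1] * y2 - x
    g = np.array([np.mean(2 * e * y1 + 0.2 * k[0] * y1 ** 2),
                  np.mean(2 * e * y2 + 0.2 * k[1] * y2 ** 2)])
    k -= 0.05 * g

print(k, cost(k))  # gains near the analytic optimum k1 = k2 = 2/4.75
```

By symmetry the population-optimal gains solve `2.75 k + 2 k = 2`, i.e., `k1 = k2 = 2/4.75 ≈ 0.421`; the Monte Carlo solution lands close to this value.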
**Learning in decentralized systems.** When the DMs communicate, they exchange information in order to reduce their uncertainty; this process of uncertainty reduction is generically referred to as “learning.” Existing research on learning in decentralized systems follows two complementary directions: (a) Bayesian learning (and its variants), which models the process by which the DMs form and refine their probabilistic beliefs about other DMs (including their knowledge, their strategies, etc.) and about the overall “state of nature” relevant to the problem at hand, assuming that all DMs conform to certain axioms of rational behavior; and (b) non-Bayesian learning, or learning by boundedly rational agents, which models the process of learning in repeated or sequential situations under various resource and complexity constraints. Bayesian learning describes situations in which the DMs know the system model, so all uncertainty arises from the presence of other DMs and from the local nature of communication; by contrast, bounded rationality deals with situations in which the system model is not completely known, so the DMs must learn the model from their local observations. This workshop will bring these perspectives together.
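The Bayesian belief-refinement process described above can be sketched in a few lines. The example below is a minimal illustration, not drawn from the workshop material: a single DM holds a belief over a binary “state of nature” and refines it by Bayes’ rule from noisy local observations. The observation model and all parameter values are assumptions for illustration.

```python
import numpy as np

# Minimal Bayesian learning sketch (illustrative): the DM refines its belief
# over a binary state of nature theta in {0, 1} from noisy local observations.
rng = np.random.default_rng(1)
theta = 1                        # true state, unknown to the DM
p_obs = {0: 0.3, 1: 0.7}         # P(observation = 1 | theta)

belief = np.array([0.5, 0.5])    # uniform prior over theta
for _ in range(50):
    obs = rng.random() < p_obs[theta]            # noisy local observation
    like = np.array([p_obs[0], p_obs[1]])        # P(obs = 1 | theta)
    like = like if obs else 1.0 - like           # likelihood of what was seen
    belief = like * belief                       # Bayes' rule: prior x likelihood
    belief /= belief.sum()                       # renormalize

print(belief)  # the belief concentrates on the true state
```

In a decentralized setting, each DM would run such an update on its own local observations (and on messages received from other DMs), which is why communication, by enlarging the set of observations available for the update, accelerates the reduction of uncertainty.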