Publications catalogue - books



Coordination of Large-Scale Multiagent Systems

Paul Scerri; Régis Vincent; Roger Mailler (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Theory of Computation; Computation by Abstract Devices; Mathematical Logic and Formal Languages; Software Engineering/Programming and Operating Systems

Availability

Detected institution: Not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-0-387-26193-5

Electronic ISBN

978-0-387-27972-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer Science+Business Media, Inc. 2006

Table of contents

The Effects of Locality and Asymmetry in Large-Scale Multiagent MDPs

Dmitri A. Dolgov; Edmund H. Durfee

As multiagent systems scale up, the complexity of interactions between agents (cooperative coordination in teams, or strategic reasoning in the case of self-interested agents) often increases exponentially. In particular, in multiagent MDPs, it is generally necessary to consider the joint state space of all agents, making the size of the problem and the solution exponential in the number of agents. However, often interactions between the agents are only local, which suggests a more compact problem representation. We consider a subclass of multiagent MDPs with local interactions where dependencies between agents are asymmetric, meaning that agents can affect others in a unidirectional manner. This asymmetry, which often occurs in large-scale domains with authority-driven relationships between agents, allows us to make better use of the locality of agents’ interactions. We discuss a graphical model that exploits this form of problem structure and use it to analyze the effects of locality and asymmetry on the complexity and structure of optimal policies. For problems where the solutions retain some of the compactness of problem representation, we present computationally-efficient algorithms for constructing optimal multiagent policies.

Keywords: Optimal Policy; Assembly Line; Action Space; Multiagent System; Dependency Graph.

Part I - Effects of Scaling Coordination | Pp. 3-25
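
To make the savings from locality and asymmetry concrete, here is a small sketch of the state-space bookkeeping under a hypothetical dependency graph (the graph, agent count, and state sizes are invented for illustration, not taken from the chapter): a flat multiagent MDP policy ranges over the joint state space, while an acyclic, one-directional dependency graph lets each agent condition its policy only on its own state and its parents' states.

```python
from functools import reduce

# Hypothetical asymmetric dependency graph: an edge u -> v means agent u's
# state can influence agent v's transitions, but not vice versa.
parents = {
    "a1": [],            # e.g., an "authority" agent that no one influences
    "a2": ["a1"],
    "a3": ["a1"],
    "a4": ["a2", "a3"],
}
state_sizes = {"a1": 4, "a2": 4, "a3": 4, "a4": 4}  # local state-space sizes

# Flat multiagent MDP: each policy is defined over the joint state space,
# which is exponential in the number of agents.
joint_states = reduce(lambda x, y: x * y, state_sizes.values(), 1)

# Locality + asymmetry: agent i's policy only needs to condition on its own
# state and the states of its parents in the dependency graph.
local_states = {
    agent: state_sizes[agent]
    * reduce(lambda x, y: x * y, (state_sizes[p] for p in parents[agent]), 1)
    for agent in parents
}

print("joint state space size:", joint_states)        # 256
print("per-agent conditioning sets:", local_states)   # a1: 4, a2: 16, a3: 16, a4: 64
```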

A Study of Scalability Properties in Robotic Teams

Avi Rosenfeld; Gal A. Kaminka; Sarit Kraus

In this chapter we describe how the productivity of homogeneous robots scales with group size. Economists found that the addition of workers to a group results in their contributing progressively less productivity, a concept called the Law of Marginal Returns. We study groups that differ in their coordination algorithms, and note that they display increasing marginal returns only until a certain group size. After this point the groups’ productivity drops with the addition of robots. Interestingly, the group size where this phenomenon occurs varies between groups using differing coordination methods. We define a measure of interference that enables comparison, and find a high negative correlation between interference and productivity within these groups. Effective coordination algorithms maintain increasing productivity over larger groups by reducing the team’s interference levels. Using this result we are able to examine the productivity of robotic groups in several simulated domains in thousands of trials. We find that in theory groups should always add productivity during size scale-up, but spatial limitations within domains cause robots to fail to achieve this ideal. We believe that coordination methods can be developed that improve a group’s performance by minimizing interference. We present findings on composite coordination methods that provide evidence for this claim.

Keywords: Scalability Property; Marginal Return; Search Domain; Interference Level; Marginal Gain.

Part I - Effects of Scaling Coordination | Pp. 27-51
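
The marginal-return pattern the chapter reports is easy to reproduce with a few lines of arithmetic; the productivity numbers below are invented for illustration and are not data from the chapter.

```python
# Hypothetical productivity of a homogeneous robot group as a function of
# group size (illustrative numbers only).
productivity = {1: 10.0, 2: 21.0, 3: 33.0, 4: 41.0, 5: 44.0, 6: 43.0}

# Marginal return of the k-th robot: extra productivity gained by adding it.
sizes = sorted(productivity)
marginal = {k: productivity[k] - productivity[k - 1] for k in sizes[1:]}
print(marginal)  # {2: 11.0, 3: 12.0, 4: 8.0, 5: 3.0, 6: -1.0}

# The pattern the chapter reports: marginal returns grow up to some group
# size, then shrink and eventually turn negative as interference (e.g.,
# spatial conflicts) dominates the benefit of extra robots.
peak_size = max(sizes, key=lambda k: productivity[k])
print("productivity peaks at group size", peak_size)  # 5
```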

Comparing Three Approaches to Large-Scale Coordination

Paul Scerri; Régis Vincent; Roger Mailler

Coordination of large groups of agents or robots is starting to reach a level of maturity where prototype systems can be built and tested in realistic environments. These more realistic systems require that both algorithmic and practical issues are addressed in an integrated solution. In this chapter, we look at three implementations of large-scale coordination examining common issues, approaches, and open problems. The key result of the comparison is that there is a surprising degree of commonality between the independently developed approaches, in particular the use of partial, dynamic centralization. Conversely, open issues and problems encountered varied greatly with the notable exception that debugging was a major issue for each approach.

Keywords: Team Member; MultiAgent System; Associate Network; Large Team; Conflict Propagation.

Part I - Effects of Scaling Coordination | Pp. 53-71

Decentralized Partner Finding in Multi-Agent Systems

Marcelo M. Vanzin; K. S. Barber

One of the most compelling characteristics of multi-agent systems is the ability of agents to form coalitions to share resources, create plans, share the execution of tasks, etc. The work presented in this article offers scalable methods for finding potential coalition partners when not all agents in a large multi-agent system are known to the agents in need of help to achieve their goals. A new algorithm for finding partners in a MAS, based on concepts inspired by peer-to-peer networks, is described. The proposed algorithm design aims to provide a new, completely decentralized scheme that can be used by agents to gather information needed to make decisions about potential partners. The proposed algorithm is competitive with centralized approaches in smaller multi-agent systems and provides better scalability for larger systems.

Keywords: MultiAgent System; Distribute Hash Table; Central Directory; Needed Information; Finger Table.

Part II - Scaling Existing Coordination Approaches | Pp. 75-98
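
The keywords point to distributed-hash-table machinery (finger tables), so a compact Chord-style lookup sketch may help situate the idea. This is a generic DHT routine with assumed parameters (ring size, agent names), not the chapter's partner-finding algorithm.

```python
import hashlib

M = 6                      # identifier bits (hypothetical, small for clarity)
RING = 2 ** M

def ident(name: str) -> int:
    """Hash an agent name onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % RING

def successor(key: int, nodes: list) -> int:
    """First node clockwise from key on the ring."""
    for n in sorted(nodes):
        if n >= key:
            return n
    return min(nodes)  # wrap around

def finger_table(n: int, nodes: list) -> list:
    """finger[i] = successor(n + 2^i) for i in 0..M-1."""
    return [successor((n + 2 ** i) % RING, nodes) for i in range(M)]

def in_open_interval(x: int, a: int, b: int) -> bool:
    """True if x lies strictly between a and b going clockwise on the ring."""
    return (a < x < b) if a < b else (x > a or x < b)

def next_hop(n: int, key: int, fingers: list) -> int:
    """Greedy Chord routing: closest preceding finger, else n's successor."""
    for f in reversed(fingers):
        if in_open_interval(f, n, key):
            return f
    return fingers[0]  # fingers[0] is n's immediate successor

# Example: agents joining a small ring, then one agent routes a partner query.
nodes = sorted({ident(a) for a in ["agent-A", "agent-B", "agent-C", "agent-D"]})
n = nodes[0]
print("finger table of", n, "=", finger_table(n, nodes))
print("next hop toward key 42:", next_hop(n, 42, finger_table(n, nodes)))
```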

Distributed Coordination of an Agent Society Based on Obligations and Commitments to Negotiated Agreements

Jiangbo Dang; Devendra Shrotri; Michael N. Huhns

This chapter discusses coordination from a commitment basis. Typically, commitments are established via a process of negotiation between the parties—the debtor and creditor—involved in the commitment. We define obligations to be those commitments, sometimes termed norms or social commitments, without a clearly identifiable creditor. The establishment of a commitment occurs in response to the adoption of a goal or the acceptance and performance of a task. Using a service-oriented computing (SOC) context, we describe an efficient negotiation process for establishing commitments. We then show how commitments and obligations can be used to monitor and control the aggregate behavior of a group of agents to yield coordinated progress towards the agents’ overall objective.

Keywords: Multiagent System; Agent Coordination; Negotiate Agreement; Computational Tree Logic; Package Deal.

Part II - Scaling Existing Coordination Approaches | Pp. 99-126
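
A hypothetical rendering of the chapter's vocabulary as a data structure: a commitment binds a debtor to a creditor, and an obligation is modeled here as a commitment without an identifiable creditor. The status values and the monitoring step are assumptions, not the authors' formalism.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Commitment:
    debtor: str                     # agent who owes the behavior
    condition: str                  # what the debtor has committed to bring about
    creditor: Optional[str] = None  # None marks an obligation / social commitment
    status: str = "active"          # "active", "fulfilled", or "violated"

    @property
    def is_obligation(self) -> bool:
        return self.creditor is None

def monitor(commitments, achieved_conditions):
    """Toy monitoring step: mark commitments whose condition has been observed."""
    for c in commitments:
        if c.status == "active" and c.condition in achieved_conditions:
            c.status = "fulfilled"
    return commitments

# Example: one negotiated commitment and one norm-like obligation.
cs = [
    Commitment(debtor="supplier", creditor="buyer", condition="goods-delivered"),
    Commitment(debtor="buyer", condition="report-usage"),  # obligation: no creditor
]
monitor(cs, {"goods-delivered"})
print([(c.debtor, c.status, c.is_obligation) for c in cs])
```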

A Family of Graphical-Game-Based Algorithms for Distributed Constraint Optimization Problems

Rajiv T. Maheswaran; Jonathan P. Pearce; Milind Tambe

This paper addresses the application of distributed constraint optimization problems (DCOPs) to large-scale dynamic environments. We introduce a decomposition of DCOP into a graphical game and investigate the evolution of various stochastic and deterministic algorithms. We also develop techniques that allow for coordinated negotiation while maintaining distributed control of variables. We prove monotonicity properties of certain approaches and detail arguments about equilibrium sets that offer insight into the tradeoffs involved in leveraging efficiency and solution quality. The algorithms and ideas were tested and illustrated on several graph coloring domains.

Keywords: Nash Equilibrium; Solution Quality; Congestion Game; Graphical Game; Graph Coloring Problem.

Part II - Scaling Existing Coordination Approaches | Pp. 127-146
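
As one concrete member of the broader algorithm family the abstract refers to, the sketch below runs a distributed-stochastic-search style routine for graph coloring, where each variable is controlled by its own agent and only changes color, with some probability, when doing so reduces its local conflicts. This is a generic DSA-style procedure, not necessarily one of the chapter's algorithms.

```python
import random

def local_conflicts(node, color, coloring, neighbors):
    """Number of neighbors sharing the proposed color."""
    return sum(1 for nb in neighbors[node] if coloring[nb] == color)

def dsa_coloring(neighbors, num_colors, rounds=100, p=0.7, seed=0):
    """Distributed stochastic algorithm: each agent controls one node's color."""
    rng = random.Random(seed)
    coloring = {v: rng.randrange(num_colors) for v in neighbors}
    for _ in range(rounds):
        # In a truly distributed run these decisions happen in parallel;
        # here one synchronous round is simulated against a snapshot.
        snapshot = dict(coloring)
        for v in neighbors:
            current = local_conflicts(v, snapshot[v], snapshot, neighbors)
            best = min(range(num_colors),
                       key=lambda c: local_conflicts(v, c, snapshot, neighbors))
            improvement = current - local_conflicts(v, best, snapshot, neighbors)
            # Change only with probability p, and only if it strictly helps.
            if improvement > 0 and rng.random() < p:
                coloring[v] = best
    return coloring

# Example: a 5-cycle is 3-colorable; count remaining conflicts after running.
graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
result = dsa_coloring(graph, num_colors=3)
conflicts = sum(local_conflicts(v, result[v], result, graph) for v in graph) // 2
print(result, "conflicts:", conflicts)
```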

Key-Based Coordination Strategies: Scalability Issues

Tom Wagner; John Phelps; Valerie Guralnik; Ryan VanRiper

We describe a key-based approach to multi-agent coordination, where certain coordination decisions are made only when the agent holds a coordination key. This approach is primarily decentralized, but has some centralized aspects, including synchronization of coordination decisions and schedule information sharing. The approach is described within the context of the application requirements that motivated its development. Finally, its scalability properties are discussed.

Keywords: Problem Instance; Coordination Problem; Scalability Issue; Mission Control; Service Team.

Part II - Scaling Existing Coordination Approaches | Pp. 147-171
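
The core mechanism, agents defer certain coordination decisions until they hold the coordination key, can be sketched as token passing. The round-robin hand-off and the content of the decisions below are assumptions made for illustration.

```python
from collections import deque

class Agent:
    def __init__(self, name):
        self.name = name
        self.pending = []   # coordination decisions queued until the key arrives

    def on_key(self, key):
        """Only the key holder commits coordination decisions and updates the
        shared schedule information carried with the key."""
        for decision in self.pending:
            key["schedule"].append((self.name, decision))
        self.pending.clear()

def run_round_robin(agents, rounds=2):
    key = {"schedule": []}      # the coordination key plus shared schedule info
    ring = deque(agents)
    for _ in range(rounds * len(agents)):
        holder = ring[0]
        holder.on_key(key)      # decisions are synchronized: one holder at a time
        ring.rotate(-1)         # pass the key to the next agent
    return key["schedule"]

# Example: agents queue decisions locally; they take effect only with the key.
a, b = Agent("A"), Agent("B")
a.pending.append("reserve-corridor-3")
b.pending.append("reschedule-task-7")
print(run_round_robin([a, b]))
```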

Designing Agent Utilities for Coordinated, Scalable and Robust Multi-Agent Systems

Kagan Tumer

Coordinating the behavior of a large number of agents to achieve a system level goal poses unique design challenges. In particular, problems of scaling (number of agents in the thousands to tens of thousands), observability (agents have limited sensing capabilities), and robustness (the agents are unreliable) make it impossible to simply apply methods developed for small multi-agent systems composed of reliable agents. To address these problems, we present an approach based on deriving agent goals that are aligned with the overall system goal, and can be computed using information readily available to the agents. Then, each agent uses a simple reinforcement learning algorithm [26] to pursue its own goals. Because of the way in which those goals are derived, there is no need to use difficult-to-scale external mechanisms to force collaboration or coordination among the agents, or to ensure that agents actively attempt to appropriate the tasks of agents that suffered failures. To present these results in a concrete setting, we focus on the problem of finding the subset of a set of imperfect devices that results in the best aggregate device [5]. This is a large distributed agent coordination problem where each agent (e.g., device) needs to determine whether to be part of the aggregate device. Our results show that the approach proposed in this work provides improvements of over an order of magnitude over both traditional search methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents failed midway through the simulation) the system’s performance degrades gracefully and still outperforms a failure-free and centralized search algorithm. The results also show that the gains increase as the size of the system (e.g., number of agents) increases. This latter result is particularly encouraging and suggests that this method is ideally suited for domains where the number of agents is currently in the thousands and will reach tens or hundreds of thousands in the near future.

Part II - Scaling Existing Coordination Approaches | Pp. 173-188
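
One common way to instantiate agent goals that are "aligned with the overall system goal" in this line of work is a difference-style utility: the global utility with the agent's contribution minus the global utility with that agent removed. The sketch below pairs that idea with simple epsilon-greedy learners on a made-up aggregate-device objective; the objective function, learning rate, and problem sizes are assumptions, not the chapter's experimental setup.

```python
import random

def global_utility(joined, quality):
    """Hypothetical system objective: total quality of the selected devices,
    penalized when too many imperfect devices are aggregated."""
    total = sum(q for j, q in zip(joined, quality) if j)
    return total - 0.05 * sum(joined) ** 2

def difference_utility(i, joined, quality):
    """Agent i's reward: global utility with its choice minus global utility
    with agent i left out (its 'absence' counterfactual)."""
    without = list(joined)
    without[i] = 0
    return global_utility(joined, quality) - global_utility(without, quality)

def run(num_agents=50, episodes=500, eps=0.1, seed=1):
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(num_agents)]
    # Each agent keeps a value estimate for its two actions: 0 = stay out, 1 = join.
    values = [[0.0, 0.0] for _ in range(num_agents)]
    for _ in range(episodes):
        joined = [
            (rng.randrange(2) if rng.random() < eps else int(v[1] > v[0]))
            for v in values
        ]
        for i in range(num_agents):
            r = difference_utility(i, joined, quality)
            a = joined[i]
            values[i][a] += 0.1 * (r - values[i][a])  # simple incremental update
    best = [int(v[1] > v[0]) for v in values]
    return global_utility(best, quality)

print("learned aggregate utility:", round(run(), 3))
```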

Learning Scalable Coalition Formation in an Organizational Context

Sherief Abdallah; Victor Lesser

Keywords: Multiagent System; Organizational Context; Coalition Formation; Resource Type; Combinatorial Auction.

Part III - New Approaches for Large Scale Coordination | Pp. 191-215

Multi-Agent Coordination in Open Environments

Myriam Abramson; Ranjeev Mittu

This paper proposes a new approach to multi-agent systems that leverages recent advances in networking and reinforcement learning to scale up teamwork based on joint intentions. In this approach, teamwork is subsumed by the coordination of learning agents. The intuition behind this approach is that successful coordination at the global level generates opportunities for teamwork interactions at the local level and vice versa. This unique approach scales up model-based teamwork theory with an adaptive approach to coordination. Some preliminary results are reported using a novel coordination evaluation.

Keywords: Open Environment; Reinforcement Learning; Multiagent System; Coordination Task; Team Formation.

Part III - New Approaches for Large Scale Coordination | Pp. 217-229