Publications catalog - books
Open Access Title
Foundations of Trusted Autonomy
Part of: Studies in Systems, Decision and Control
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not required | 2018 | Directory of Open Access Books | | |
| Not required | 2018 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-319-64815-6
Electronic ISBN
978-3-319-64816-3
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2018
Subject coverage
Table of contents
Foundations of Trusted Autonomy: An Introduction
Hussein A. Abbass; Jason Scholz; Darryn J. Reid
To aid in understanding the chapters to follow, a general conceptualisation of autonomy may be useful. Foundationally, autonomy is concerned with an agent that acts in an environment. However, this definition is insufficient on its own, because autonomy also requires persistence (or resilience) against the hardships that the environment imposes on the agent. An agent whose first action ends in its demise would not demonstrate autonomy. The themes of autonomy therefore include agency, persistence and action.
Pp. 1-12
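As a concrete rendering of the agency/persistence/action triad described above, here is a minimal agent-environment loop; the hazard model, the energy variable, and all names are illustrative assumptions, not drawn from the chapter.

```python
import random

# Minimal agent-environment loop illustrating the three themes above:
# agency (the agent chooses its action), action (the choice affects the
# environment's response), and persistence (the agent must survive the
# environment's hardships to keep acting). All details are illustrative.

class Environment:
    HAZARD = 0.2  # assumed chance that a step harms the agent

    def step(self, action):
        damage = 1 if random.random() < self.HAZARD else 0
        reward = 1 if action == "forage" else 0
        return reward, damage

class Agent:
    def __init__(self, energy=5):
        self.energy = energy  # persistence: the agent endures while energy > 0

    def act(self):
        return "forage" if self.energy < 10 else "rest"  # agency

env, agent = Environment(), Agent()
for t in range(100):
    if agent.energy <= 0:  # demise ends the demonstration of autonomy
        break
    reward, damage = env.step(agent.act())
    agent.energy += reward - damage
```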
Universal Artificial Intelligence
Tom Everitt; Marcus Hutter
Foundational theories have contributed greatly to scientific progress in many fields. Examples include Zermelo-Fraenkel set theory in mathematics, and universal Turing machines in computer science. Universal Artificial Intelligence (UAI) is an increasingly well-studied foundational theory for artificial intelligence, based on ancient principles in the philosophy of science and modern developments in information and probability theory. Importantly, it refrains from making unrealistic Markov, ergodicity, or stationarity assumptions on the environment. UAI provides a theoretically optimal agent AIXI and principled ideas for constructing practical autonomous agents. The theory also makes it possible to establish formal results on the motivations of AI systems. Such results may greatly enhance the trustability of autonomous agents, and guide design choices towards more robust agent architectures and incentive schemes. Finally, UAI offers a deeper appreciation of fundamental problems such as the induction problem and the exploration-exploitation dilemma.
Part I - Autonomy | Pp. 15-46
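For readers who want to see what the "theoretically optimal agent AIXI" looks like formally, the standard statement of its action rule (after Hutter) is reproduced below; this is textbook material, not quoted from the chapter. Here $a$, $o$ and $r$ denote actions, observations and rewards, $U$ is a universal Turing machine, and $\ell(q)$ is the length of program $q$.

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \left[ r_t + \cdots + r_m \right]
  \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

The inner sum is a Solomonoff-style mixture over all programs consistent with the interaction history, which is what allows the theory to avoid the Markov, ergodicity, and stationarity assumptions mentioned above.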
Goal Reasoning and Trusted Autonomy
Benjamin Johnson; Michael W. Floyd; Alexandra Coman; Mark A. Wilson; David W. Aha
This chapter discusses the topic of Goal Reasoning and its relation to Trusted Autonomy. Goal Reasoning studies how autonomous agents can extend their reasoning capabilities beyond their plans and actions, to consider their goals. Such a capability allows a Goal Reasoning system to react more intelligently to unexpected events or changes in the environment. We present two models of Goal Reasoning: Goal-Driven Autonomy (GDA) and goal refinement. We then discuss several research topics related to each, and how they relate to the topic of Trusted Autonomy. Finally, we discuss several directions of ongoing work that are particularly interesting in the context of the chapter: using a model of inverse trust as a basis for adaptive autonomy, and studying how Goal Reasoning agents may choose to rebel (i.e., act contrary to a given command).
Part I - Autonomy | Pp. 47-66
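To make the GDA cycle more concrete, here is a hedged sketch of one pass through it: detect discrepancies between expectations and observations, explain them, formulate new goals, and manage the goal agenda. The function names and the priority scheme are hypothetical, not the chapter's formulation.

```python
# Hypothetical one-pass sketch of the Goal-Driven Autonomy (GDA) cycle.
# `expectations` are predicates over an observation; `explain` and
# `formulate` stand in for the explanation-generation and goal-formulation
# components; goals are dicts with an assumed "priority" key.

def gda_cycle(expectations, observation, explain, formulate, goals):
    """Run one GDA pass and return the updated, re-prioritised goal agenda."""
    discrepancies = [e for e in expectations if not e(observation)]
    for d in discrepancies:
        cause = explain(d, observation)    # discrepancy explanation
        new_goal = formulate(cause)        # goal formulation
        if new_goal is not None and new_goal not in goals:
            goals.append(new_goal)         # goal management (here: append)
    # goal management, continued: highest-priority goals first
    return sorted(goals, key=lambda g: g.get("priority", 0), reverse=True)

# Example: a low battery violates an expectation and spawns a recharge goal.
goals = gda_cycle(
    expectations=[lambda obs: obs["battery"] > 0.2],
    observation={"battery": 0.1},
    explain=lambda d, obs: "battery_low",
    formulate=lambda cause: {"name": "recharge", "priority": 5},
    goals=[],
)
```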
Social Planning for Trusted Autonomy
Tim Miller; Adrian R. Pearce; Liz Sonenberg
In this chapter, we describe social planning mechanisms for constructing and representing explainable plans in human-agent interactions, addressing one aspect of what it will take to meet the requirements of a trusted autonomous system. Social planning is automated planning in which the planning agent maintains and reasons with an explicit model of the other agents, human or artificial, with which it interacts, including the humans’ goals, intentions, and beliefs, as well as their potential behaviours. The chapter includes a brief overview of the challenge of planning in human-agent teams, and an introduction to a recent body of technical work in multi-agent epistemic planning. The benefits of planning in the presence of nested belief reasoning and first-person multi-agent planning are illustrated in two scenarios, hence indicating how social planning could be used for planning human-agent interaction explicitly as part of an agent’s deliberation.
Part I - Autonomy | Pp. 67-86
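To make "nested belief reasoning" concrete, below is a toy representation of nested beliefs; it only illustrates the idea of nesting and is not the chapter's epistemic planning formalism, which is based on modal logics.

```python
# Toy nested-belief store: facts of the form
# Believes(robot, Believes(human, "door_unlocked")).

from dataclasses import dataclass

@dataclass(frozen=True)
class Believes:
    agent: str
    content: object  # a proposition (str) or another Believes

kb = {
    Believes("robot", "door_locked"),
    Believes("robot", Believes("human", "door_unlocked")),
}

def holds(belief, kb):
    """Check whether a (possibly nested) belief is recorded in the store."""
    return belief in kb

# The robot believes the door is locked, yet also believes the human
# believes the opposite -- exactly the kind of divergence a social planner
# must reason about when choosing communicative or corrective actions.
assert holds(Believes("robot", Believes("human", "door_unlocked")), kb)
```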
A Neuroevolutionary Approach to Adaptive Multi-agent Teams
Bobby D. Bryant; Risto Miikkulainen
A multi-agent architecture called the Adaptive Team of Agents (ATA) is introduced, wherein homogeneous agents dynamically adopt specific roles in a team in order to address all the sub-tasks necessary to meet the team's goals. Artificial neural networks are then trained by neuroevolution to produce an example of such a team, trained to solve the problem posed by a simple strategy game. The evolutionary algorithm is found to induce the necessary in situ adaptivity of behavior into the agents, even when they are controlled by stateless feed-forward networks.
Part I - Autonomy | Pp. 87-115
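A minimal neuroevolution sketch in the spirit of the chapter: a population of stateless feed-forward controllers, shared by a homogeneous team, is evaluated and evolved by elitist mutation. The fitness function, network dimensions, and all parameters are stand-in assumptions, not the authors' strategy game.

```python
import math
import random

# Evolve the weights of a one-hidden-layer, stateless feed-forward
# controller. Fitness below is a crude stand-in for role adoption: it
# rewards controllers that map distinct observations to distinct actions.

IN, HID, OUT = 4, 6, 3                  # assumed observation/action sizes
N_WEIGHTS = IN * HID + HID * OUT

def forward(w, obs):
    """Feed-forward pass with tanh hidden units; returns the chosen action."""
    h = [math.tanh(sum(obs[i] * w[i * HID + j] for i in range(IN)))
         for j in range(HID)]
    base = IN * HID
    out = [sum(h[j] * w[base + j * OUT + k] for j in range(HID))
           for k in range(OUT)]
    return out.index(max(out))

def fitness(w):
    obs = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(20)]
    return len({forward(w, o) for o in obs})  # number of distinct roles

population = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
              for _ in range(30)]
for gen in range(50):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]               # keep the best controllers
    population = elite + [[x + random.gauss(0, 0.1) for x in parent]
                          for parent in elite for _ in range(2)]
best = max(population, key=fitness)
```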
The Blessing and Curse of Emergence in Swarm Intelligence Systems
John Harvey
In an increasingly complex and interconnected world, there is a growing need for autonomous systems that can control systems beyond the capabilities of human operators. One of the key issues to be addressed in developing trusted autonomous systems is dealing with the phenomenon of "emergence", either by taking advantage of it or by avoiding it. Swarm intelligence systems, based on the interaction of a large number of relatively simple agents, rely on emergent intelligence for their problem-solving capabilities. When used in trusted autonomous systems, the emergent behaviour of swarm intelligence systems can be both a blessing and a curse.
Part I - Autonomy | Pp. 117-124
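The classic illustration of emergence from simple local rules is a boids-style swarm. The sketch below, with purely illustrative parameters, shows how two local rules (cohesion and separation) produce a stable group structure that no individual rule specifies.

```python
import random

# Boids-style sketch: each agent follows only local rules. Cohesion pulls
# it toward the group centre; separation jitters it away when crowded.
# The group contracts into a cohesive cluster that neither rule mentions:
# the cluster is the emergent behaviour. Parameters are illustrative.

N, STEPS = 20, 100
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]

for _ in range(STEPS):
    cx = sum(p[0] for p in pos) / N
    cy = sum(p[1] for p in pos) / N
    for p in pos:
        # cohesion: drift toward the group centre
        p[0] += 0.05 * (cx - p[0])
        p[1] += 0.05 * (cy - p[1])
        # separation: move randomly if too close to the centre
        if abs(p[0] - cx) + abs(p[1] - cy) < 0.5:
            p[0] += random.uniform(-0.3, 0.3)
            p[1] += random.uniform(-0.3, 0.3)
```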
Trusted Autonomous Game Play
Michael Barlow
Game play by humans has always required autonomy and trust: autonomy, because a person chooses to play and takes in-game actions; and trust, because the other players must adopt a lusory attitude and abide by the rules of the game. The chapter highlights several areas of digital (computer) game design and development that will be revolutionised by the technology and framework of Trusted Autonomy: TA game AI that displays emotional and other (non-logical) forms of intelligence; TA games that are aware of themselves and of the player, and self-modify to enhance play; TA communities that create a safe, fulfilling, and non-toxic environment; and TA augmented reality games that keep the player and the wider community safe during play.
Part I - Autonomy | Pp. 125-132
The Role of Trust in Human-Robot Interaction
Michael Lewis; Katia Sycara; Phillip Walker
As robots become increasingly common in a wide variety of domains—from military and scientific applications to entertainment and home use—there is an increasing need to define and assess the trust humans have when interacting with robots. In human interaction with robots and automation, previous work has found that humans often tend to either overuse automation, especially in cases of high workload, or underuse it, both of which make negative outcomes more likely. Furthermore, this is not limited to naive users but extends to experienced ones as well. Robotics brings a new dimension to previous work on trust in automation, as robots are envisioned by many to work as teammates with their operators in increasingly complex tasks. In this chapter, our goal is to highlight previous work on trust in automation and human-robot interaction and to draw conclusions and recommendations based on the existing literature. We believe that, while significant progress has been made in recent years, especially in quantifying and modeling trust, there are still several areas where more investigation is needed.
Part II - Trust | Pp. 135-159
Trustworthiness of Autonomous Systems
S. Kate Devitt
Effective robots and autonomous systems must be trustworthy. This chapter examines models of trustworthiness from a philosophical and empirical perspective to inform the design and adoption of autonomous systems. Trustworthiness is a property of trusted agents or organisations that engenders trust in other agents or organisations. Trust is a complex phenomenon defined differently depending on the discipline. This chapter aims to bring different approaches under a single framework for investigation, with three sorts of questions: Who or what is trustworthy?–metaphysics. How do we know who or what is trustworthy?–epistemology. What factors influence whom or what we should trust?–normativity. A two-component model of trust is used that incorporates competence (skills, reliability and experience) and integrity (motives, honesty and character). It is supposed that human levels of competence yield the highest trust, whereas trust is reduced at sub-human and super-human levels. The threshold for trustworthiness of an agent or organisation in a particular context is a function of their relationship with the truster and the potential impacts of decisions. Building trustworthy autonomous systems requires obeying the norms of logic, rationality and ethics under pragmatic constraints–even though experts disagree on these principles. Autonomous systems may need sophisticated social identities, including empathy and reputational concerns, to build human-like trust relationships. Ultimately, transdisciplinary research drawing on metaphysical, epistemological and normative human and machine theories of trust is needed to design trustworthy autonomous systems for adoption.
Part II - Trust | Pp. 161-184
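The two-component model and the context-dependent threshold described above can be caricatured in a few lines of code. The weights and the threshold function below are assumptions made purely for illustration, not values from the chapter.

```python
# Toy rendering of the two-component trust model: trustworthiness combines
# competence (skills, reliability, experience) and integrity (motives,
# honesty, character); the threshold for trusting depends on the
# relationship with the truster and the potential impact of the decision.
# All weights and constants below are assumptions.

def trustworthiness(competence, integrity):
    """Scores in [0, 1]; equal weighting is an assumption."""
    return 0.5 * competence + 0.5 * integrity

def trust_threshold(impact, relationship_strength):
    """Higher-impact decisions demand more; familiarity lowers the bar."""
    return min(1.0, 0.4 + 0.5 * impact - 0.2 * relationship_strength)

def should_trust(competence, integrity, impact, relationship_strength):
    return trustworthiness(competence, integrity) >= trust_threshold(
        impact, relationship_strength)

# A highly competent but opaque system can still fall below the threshold
# for a high-impact decision with an unfamiliar truster:
print(should_trust(competence=0.9, integrity=0.4,
                   impact=0.9, relationship_strength=0.1))
```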
Trusted Autonomy Under Uncertainty
Michael Smithson
The relationship between trust and uncertainty has not been fully developed in current frameworks on trust, including trust in autonomous systems. This chapter presents an investigation of this relationship. It begins with a survey of trust and distrust in general, followed by a focus on human-robot interaction (HRI). Thereafter, the roles of uncertainty in trust and distrust are elucidated, and the impacts of different kinds and sources of uncertainty are elaborated.
Part II - Trust | Pp. 185-201