Publications catalogue - books


Open Access Title

Foundations of Trusted Autonomy

Part of: Studies in Systems, Decision and Control

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Trusted Autonomy; Automation Technology; Autonomous Systems; Self-Governance; Trusted Autonomous Systems; Design of Algorithms and Methodologies

Availability
Detected institution | Publication year | Browse | Download | Request
Not required | 2018 | Directory of Open Access Books (open access) | Download directly | -
Not required | 2018 | SpringerLink (open access) | Download directly | -

Information

Resource type:

books

Print ISBN

978-3-319-64815-6

Electronic ISBN

978-3-319-64816-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2018

Table of contents

The Need for Trusted Autonomy in Military Cyber Security

Andrew Dowse

Information systems in the early 21st century have become a critical enabler of increased value to the business, or, as people in Defence might call it, a ‘force multiplier’.

Part II - Trust | Pp. 203-213

Reinforcing Trust in Autonomous Systems: A Quantum Cognitive Approach

Peter D. Bruza; Eduard C. Hoenkamp

We investigated whether an autonomous system can be provided with reasoning that maintains trust between human and system even when human and autonomous system reach discrepant conclusions. Tversky and Kahneman’s research and the vast literature following it distinguish two modes of human decision making: System 1, which is fast, emotional, and automatic, and System 2, which is slower, more deliberative, and more rational. Autonomous systems are thus far endowed with System 2. So when interacting with such a system, humans may follow System 1, unaware that their autonomous partner follows System 2. This can easily confuse the user when a discrepant decision is reached, eroding their trust in the autonomous system. Hence we investigated whether trust in the message could interfere with trust in its source, namely the autonomous system. For this we presented participants with images that might or might not be genuine, and found that they often distrusted the image (e.g., as photoshopped) when they distrusted its content. We present a quantum cognitive model that explains this interference. We speculate that enriching an autonomous system with this model will allow it to predict when its decisions may confuse the user, take pro-active steps to prevent this, and with it reinforce and maintain trust in the system.

Part II - Trust | Pp. 215-224
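The abstract above does not reproduce the model's equations. As generic background only, quantum cognitive models of this kind typically account for such interference through a violation of the classical law of total probability; the form below is that standard expression, offered as an assumption about the flavour of the model rather than the chapter's exact formulation. Here A is an illustrative label for "the source (the autonomous system) is trusted" and B for "the content is judged genuine"; neither symbol is taken from the chapter.

\[
  P_q(B) \;=\; P(A)\,P(B \mid A) \;+\; P(\neg A)\,P(B \mid \neg A)
  \;+\; 2\sqrt{P(A)\,P(B \mid A)\,P(\neg A)\,P(B \mid \neg A)}\,\cos\theta
\]

The final term is the interference term: \(\cos\theta = 0\) recovers the classical decomposition, while a nonzero \(\theta\) means judgements about the content and about its source do not factor classically, which is one way to capture distrust in an image spilling over into distrust of the system that presented it.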

Learning to Shape Errors with a Confusion Objective

Jason Scholz

Errors are the enemy of classification systems, so minimising the total probability of error is an understandable objective in statistical machine learning classifiers. However, for open-world application in trusted autonomous systems, not all errors are equal in terms of their consequences. So, the ability for users and designers to define an objective function that distributes errors according to preference criteria might elevate trust. Previous approaches in cost-sensitive classification have focussed on dealing with distribution imbalances by cost-weighting the probability of classification. A novel alternative is proposed that learns a ‘confusion objective’ and is suitable for integration with modular Deep Network architectures. The approach demonstrates an ability to control the error distribution during training of supervised networks via back-propagation, at the cost of an increase in total errors. Theory is developed for the new confusion objective function and compared with cross-entropy and squared loss objectives. The capacity for error shaping is demonstrated via a range of empirical experiments using a shallow and a deep network. The classification of handwritten digits from up to three independent databases demonstrates that the desired error performance is maintained across unforeseen data distributions. Some significant and unique forms of error control are demonstrated and their limitations investigated.

Part II - Trust | Pp. 225-245
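The abstract above does not give the loss function itself; the following is a minimal illustrative sketch, under assumption, of the general idea it describes: a cost-sensitive objective in which a user-chosen penalty matrix redistributes errors among classes and is trained by ordinary back-propagation. The name confusion_penalty_loss, the use of PyTorch, and the penalty-matrix layout are illustrative choices, not the chapter's actual ‘confusion objective’.

import torch

def confusion_penalty_loss(logits, targets, penalty):
    # logits  : (batch, n_classes) raw network outputs
    # targets : (batch,) integer class labels
    # penalty : (n_classes, n_classes) cost matrix; penalty[i, j] is the cost
    #           of predicting class j when the true class is i (zero diagonal).
    probs = torch.softmax(logits, dim=1)   # predicted class distribution
    row_costs = penalty[targets]           # (batch, n_classes) cost of every possible prediction
    return (probs * row_costs).sum(dim=1).mean()

# Toy usage: make confusing true class 0 with predicted class 2 ten times
# costlier than any other error, then back-propagate as usual.
torch.manual_seed(0)
n_classes = 3
penalty = torch.ones(n_classes, n_classes) - torch.eye(n_classes)
penalty[0, 2] = 10.0
logits = torch.randn(8, n_classes, requires_grad=True)
targets = torch.randint(0, n_classes, (8,))
loss = confusion_penalty_loss(logits, targets, penalty)
loss.backward()   # error shaping enters training through the gradient
print(loss.item(), logits.grad.shape)

With the penalty set to ones minus the identity this reduces to minimising the expected total error probability; skewing individual entries shapes which confusions the network avoids, at the price the abstract notes of a possible increase in total errors.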

Developing Robot Assistants with Communicative Cues for Safe, Fluent HRI

Justin W. Hart; Sara Sheikholeslami; Brian Gleeson; Elizabeth Croft; Karon MacLean; Frank P. Ferrie; Clément Gosselin; Denis Laurendeau

The Collaborative Advanced Robotics and Intelligent Systems (CARIS) laboratory at the University of British Columbia studies the development of robotic systems that are capable of autonomous human-robot interaction. This chapter will provide an overview of our laboratory’s activities and methodologies. We first discuss a recently concluded, multi-institutional, three-year project to develop autonomous robot assistants which aid in assembly operations at manufacturing facilities. Next we discuss the primary methodology employed by our laboratory, by which we identify communicative cues used in interactions between people, describe these cues in detail, and then implement and test them on robots. This is followed by an overview of recent communicative cue studies carried out by our group and our collaborators. We conclude by discussing current and future work.

Part II - Trust | Pp. 247-270

Intrinsic Motivation for Truly Autonomous Agents

Ron Sun

In this chapter, I argue that agents need human-like intrinsic motivation in order to achieve true autonomy, which is especially important for dealing with complex, uncertain, or unpredictable environments. A computational cognitive architecture is presented that incorporates not only cognitive capabilities necessary for the functioning of agents but also intrinsic (as well as derived) motivation for agents. With this model, an agent is able to function properly and autonomously in complex environments, as demonstrated by a wide range of simulations.

Part III - Trusted Autonomy | Pp. 273-292

Computational Motivation, Autonomy and Trustworthiness: Can We Have It All?

Kathryn Merrick; Adam Klyne; Medria Hardhienata

Computational motivation (such as curiosity, novelty-seeking, achievement, affiliation and power motivation) facilitates open-ended goal generation by artificial agents and robots. This further supports diversity, adaptation and cumulative, life-long learning by machines. However, as machines acquire greater autonomy, this may begin to affect human perception of their trustworthiness. Can machines be self-motivated, autonomous and trustworthy? This chapter examines the impact of self-motivated autonomy on trustworthiness in the context of intrinsically motivated agent swarms.

Part III - Trusted Autonomy | Pp. 293-316

Are Autonomous-and-Creative Machines Intrinsically Untrustworthy?

Selmer Bringsjord; Naveen Sundar Govindarajulu

Given what has been discovered in the case of human cognition, this principle seems plausible: An artificial agent that is both autonomous (A) and creative (C) will tend to be, from the viewpoint of a rational, fully informed agent, (U) untrustworthy. After briefly explaining the intuitive, internal structure of this disturbing (in the context of the human sphere) principle, we provide a more formal rendition of the principle designed to apply to the realm of intelligent artificial agents. The more-formal version makes use of some basic structures available in one of our cognitive-event calculi, and can be expressed as a (confessedly — for reasons explained — naïve) theorem. We prove the theorem, and provide simple demonstrations of it in action, using a novel theorem prover (ShadowProver). We end by pointing toward some future defensive engineering measures that should be taken in light of the theorem.

Part III - Trusted Autonomy | Pp. 317-335

Trusted Autonomous Command and Control

Noel Derwort

This chapter will use a demonstration scenario, with intersecting vignettes, to investigate a path to where, by 2030, the promise of autonomous systems will have been realised. Like many steps humans have taken in technological evolution, some will be deliberate and cautious whereas others will happen out of necessity, perceived or real. Looking back through historical scenarios can provide an insight into the steps taken, their triggering events and the temporal factors, in order to identify potential futures. The crucible of conflict has always been a fertile ground for evolutionary change, particularly those changes pushing the boundaries of moral and ethical thinking. From the flimsy flying machines at the beginning of World War One, humanity saw the development of transcontinental aircraft and rockets in a little over three decades, culminating in the first operational use of an atomic weapon. High levels of digitisation and automation became central to the most capable weapon systems, where human interaction could often result in degraded system performance, occasionally with disastrous consequences. Ongoing warfare and disregard for the global rules-based order enabled an environment where the combination of cyber and social media capabilities developed into a crude but effective ‘trusted autonomous command and control’ system. Like all such steps taken before, this was embraced by global major powers and exploited to its maximum potential.

Part III - Trusted Autonomy | Pp. 337-346

Trusted Autonomy in Training: A Future Scenario

Leon D. Young

Being able to trust your teacher has been a pivotal assumption within training systems. The advent of autonomous systems capable of delivering training in innovative and traditional ways raises a number of questions. The premise of this book allows us to examine how autonomous, that is non-human, systems will impact training and learning environments. The following section seeks to explore the future of trusted autonomy within a training context through both an extrapolation of current trends and creative thought.

Part III - Trusted Autonomy | Pp. 347-353

Future Trusted Autonomous Space Scenarios

Russell Boyce; Douglas Griffin

This chapter describes the nature of the space environment that makes autonomous space systems a desirable application; describes the various types of space activities in near-Earth and deep space missions, and gives examples of autonomous systems deployed in space to date; outlines the current state of the art at the intersection of trusted autonomous systems and autonomous space systems; and then presents a variety of possible future trusted autonomous space scenarios.

Part III - Trusted Autonomy | Pp. 355-364