Publications catalog - books

Intelligent Virtual Agents: 7th International Conference, IVA 2007 Paris, France, September 17-19, 2007 Proceedings

Catherine Pelachaud ; Jean-Claude Martin ; Elisabeth André ; Gérard Chollet ; Kostas Karpouzis ; Danielle Pelé (eds.)

In conference: 7th International Workshop on Intelligent Virtual Agents (IVA). Paris, France. September 17, 2007 - September 19, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Computers and Education

Availability
Detected institution    Publication year    Browse    Download    Request
Not detected            2007                SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74996-7

Electronic ISBN

978-3-540-74997-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007
Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Dynamic Behavior Qualifiers for Conversational Agents

Maurizio Mancini; Catherine Pelachaud

We aim at defining conversational agents that exhibit qualitatively distinctive behaviors. To this aim we provide a small set of parameters to allow one to define behavior profiles and then leave to the system the task of animating the agent. Our approach is to manipulate the behavior tendency of the agent depending on its communicative intention and emotional state.

In this paper we define the concepts of Baseline and Dynamicline. The Baseline of an agent is defined as a set of fixed parameters that represent the personalized agent behavior, while the Dynamicline is a set of parameter values derived both from the Baseline and from the current communicative goals and emotional state.

- Behavior Models | Pp. 112-124
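
A minimal Python sketch of the Baseline/Dynamicline distinction described in the abstract above; the parameter names and the scaling rule are illustrative assumptions, not the authors' published model.

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    """Fixed expressivity parameters that personalize an agent's behavior."""
    gesture_amplitude: float  # how wide gestures tend to be (0..1)
    gesture_frequency: float  # how often gestures are produced (0..1)
    head_movement: float      # tendency to move the head (0..1)

def dynamicline(baseline: Baseline, arousal: float, emphasis: float) -> Baseline:
    """Derive momentary parameter values (the 'Dynamicline') from the fixed
    Baseline plus the agent's current emotional state and communicative
    intention. The linear scaling below is a placeholder rule."""
    scale = 1.0 + 0.5 * arousal + 0.3 * emphasis
    return Baseline(
        gesture_amplitude=min(1.0, baseline.gesture_amplitude * scale),
        gesture_frequency=min(1.0, baseline.gesture_frequency * scale),
        head_movement=min(1.0, baseline.head_movement * scale),
    )

# A calm, restrained agent becomes more expansive when aroused and emphatic.
calm_agent = Baseline(gesture_amplitude=0.3, gesture_frequency=0.4, head_movement=0.2)
print(dynamicline(calm_agent, arousal=0.8, emphasis=0.5))
```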

Creating Rapport with Virtual Agents

Jonathan Gratch; Ning Wang; Jillian Gerten; Edward Fast; Robin Duffy

Recent research has established the potential for virtual characters to establish rapport with humans through simple contingent nonverbal behaviors. We hypothesized that the contingency, not just the frequency, of positive feedback is crucial for creating rapport. The primary goal in this study was evaluative: can an agent generate behavior that engenders feelings of rapport in human speakers, and how does this compare to human-generated feedback? A secondary goal was to answer the question: is contingency (as opposed to frequency) of agent feedback crucial when it comes to creating feelings of rapport? Results suggest that contingency matters for creating rapport and that agent-generated behavior was as good as human listeners in creating rapport. A “virtual human listener” condition performed worse than the other conditions.

- Feedback Models | Pp. 125-138
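
To make the contingency-versus-frequency distinction concrete, here is a hedged Python sketch: contingent feedback is triggered by speaker events, while frequency-matched feedback produces the same number of backchannels at random moments. The event and behavior names are invented for illustration, not taken from the study.

```python
import random

SPEAKER_CUES = {"pause", "gaze_at_listener", "prosodic_stress"}  # assumed cues

def contingent_feedback(events):
    """Backchannel only when a speaker event licenses it (contingent timing)."""
    return ["nod" if e in SPEAKER_CUES else None for e in events]

def frequency_matched_feedback(events, n_backchannels):
    """Same number of backchannels, but placed at random, non-contingent moments."""
    slots = [None] * len(events)
    for i in random.sample(range(len(events)), k=n_backchannels):
        slots[i] = "nod"
    return slots

stream = ["speech", "pause", "speech", "gaze_at_listener", "speech", "speech"]
contingent = contingent_feedback(stream)
matched = frequency_matched_feedback(stream, n_backchannels=sum(x is not None for x in contingent))
print(contingent)  # nods aligned with speaker cues
print(matched)     # same rate of nods, timing decoupled from the cues
```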

Incremental Multimodal Feedback for Conversational Agents

Stefan Kopp; Thorsten Stocksmeier; Dafydd Gibbon

Just like humans, conversational computer systems should not listen silently to their input and then respond. Instead, they should enforce the speaker-listener link by attending actively and giving feedback on an utterance while perceiving it. Most existing systems produce direct feedback responses to decisive (e.g. prosodic) cues. We present a framework that conceives of feedback as a more complex system, resulting from the interplay of conventionalized responses to eliciting speaker events and the multimodal behavior that signals how internal states of the listener evolve. A model for producing such incremental feedback, based on multi-layered processes for perceiving, understanding, and evaluating input, is described.

- Feedback Models | Pp. 139-146
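
A hedged Python sketch of the incremental idea: feedback is produced while the utterance unfolds, driven by how the (simulated) perception and understanding layers evolve. The word-level scoring and the listener behaviors are toy placeholders, not the framework's actual processes.

```python
def incremental_feedback(words):
    """Yield (word, listener_feedback) while an utterance is still being heard.
    Comprehension is faked as the share of non-filler words perceived so far."""
    perceived, understood = 0, 0
    for word in words:
        perceived += 1                          # perception layer: heard a word
        if word not in {"uh", "um", "erm"}:
            understood += 1                     # understanding layer: content word
        ratio = understood / perceived
        if ratio > 0.7:
            yield word, "nod + 'mhm'"           # signal good comprehension
        elif ratio < 0.4:
            yield word, "raise eyebrows"        # signal trouble understanding
        else:
            yield word, None                    # keep listening silently

for word, feedback in incremental_feedback("um so uh the meeting moved to Friday".split()):
    print(f"{word:>8} -> {feedback}")
```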

Searching for Prototypical Facial Feedback Signals

Dirk Heylen; Elisabetta Bevacqua; Marion Tellier; Catherine Pelachaud

Embodied conversational agents should be able to provide feedback on what a human interlocutor is saying. We are compiling a list of facial feedback expressions that signal attention and interest, grounding, and attitude. As expressions need to serve many functions at the same time and most of the component signals are ambiguous, it is important to get a better idea of the many-to-many mappings between displays and functions. We asked people to label several dynamic expressions as a probe into this semantic space. We compare simple signals and combined signals in order to find out whether a combination of signals can have a meaning of its own, i.e., whether the meaning of single signals differs from the meaning attached to their combination. Results show that in some cases a combination of signals alters the perceived meaning of the backchannel.

- Feedback Models | Pp. 147-153
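
The many-to-many mapping between displays and functions can be pictured with a small data structure, sketched below in Python; the signal and meaning labels are hypothetical, not the ones used in the study.

```python
# Hypothetical labels; the real signals and meanings come from the study's annotations.
SINGLE_SIGNALS = {
    "raised_eyebrows": {"interest", "surprise"},
    "head_nod": {"agreement", "understanding"},
    "frown": {"confusion", "disagreement"},
}

# A combination can carry a meaning that neither component signal has on its own.
COMBINED_SIGNALS = {
    frozenset({"frown", "head_nod"}): {"reluctant agreement"},
    frozenset({"raised_eyebrows", "head_nod"}): {"strong interest"},
}

def possible_meanings(signals):
    """Meanings of a display: the combination's own label if it has one,
    otherwise the union of the (ambiguous) single-signal meanings."""
    return COMBINED_SIGNALS.get(frozenset(signals)) or set().union(
        *(SINGLE_SIGNALS.get(s, set()) for s in signals)
    )

print(possible_meanings({"frown"}))              # ambiguous single signal
print(possible_meanings({"frown", "head_nod"}))  # combination alters the meaning
```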

Fluid Semantic Back-Channel Feedback in Dialogue: Challenges and Progress

Gudny Ragna Jonsdottir; Jonathan Gratch; Edward Fast; Kristinn R. Thórisson

Participation in natural, real-time dialogue calls for behaviors supported by perception-action cycles from around 100 msec and up. Generating certain kinds of such behaviors, namely envelope feedback, has been possible since the early 90s. Real-time backchannel feedback related to the content of a dialogue has been more difficult to achieve. In this paper we describe our progress in allowing virtual humans to give rapid within-utterance, content-specific feedback in real-time dialogue. We present results from human-subject studies of content feedback, which show that content feedback to a particular phrase or word in human-human dialogue comes 560-2500 msec from the phrase’s onset, 1 second on average. We also describe a system that produces such feedback with an autonomous agent in limited topic domains, present performance data for this agent in human-agent interaction experiments, and discuss technical challenges in light of the observed human-subject data.

- Feedback Models | Pp. 154-160
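
A minimal Python sketch of the timing constraint reported above: content-specific feedback should land roughly 560-2500 ms after the onset of the triggering phrase, about one second on average. The keyword triggers and feedback phrases are placeholders, not the agent's actual mechanism.

```python
import random

FEEDBACK_WINDOW_MS = (560, 2500)  # range observed in the human-human dialogues
TARGET_DELAY_MS = 1000            # average delay reported in the paper

# Assumed keyword triggers, for illustration only.
TRIGGERS = {"crashed": "oh no", "promoted": "congratulations", "finished": "great"}

def schedule_content_feedback(phrase_onset_ms, phrase):
    """Return (feedback_time_ms, feedback) for a content-specific backchannel,
    or None if nothing in the phrase calls for one."""
    for keyword, feedback in TRIGGERS.items():
        if keyword in phrase:
            delay = random.gauss(TARGET_DELAY_MS, 300)
            delay = max(FEEDBACK_WINDOW_MS[0], min(delay, FEEDBACK_WINDOW_MS[1]))
            return phrase_onset_ms + delay, feedback
    return None

print(schedule_content_feedback(12_000, "and then my laptop crashed"))
```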

T2D: Generating Dialogues Between Virtual Agents Automatically from Text

Paul Piwek; Hugo Hernault; Helmut Prendinger; Mitsuru Ishizuka

The Text2Dialogue (T2D) system that we are developing allows digital content creators to generate attractive multi-modal dialogues presented by two virtual agents—by simply providing textual information as input. We use Rhetorical Structure Theory (RST) to decompose text into segments and to identify rhetorical discourse relations between them. These are then “acted out” by two 3D agents using synthetic speech and appropriate conversational gestures. In this paper, we present version 1.0 of the T2D system and focus on the novel technique that it uses for mapping rhetorical relations to question–answer pairs, thus transforming (monological) text into a form that supports dialogues between virtual agents.

- Dialogues | Pp. 161-174
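
A hedged Python sketch of the core mapping idea: a rhetorical relation between two text segments becomes a question from one agent and an answer from the other. The relation names and question templates are assumptions for illustration, not T2D's actual rules.

```python
# Hypothetical question templates keyed by RST relation name.
QUESTION_TEMPLATES = {
    "cause": "Why is that?",
    "condition": "Under what condition?",
    "elaboration": "Can you tell me more about it?",
}

def relation_to_dialogue(nucleus, satellite, relation):
    """Turn one RST relation (nucleus, satellite, relation name) into a short
    two-agent question-answer exchange."""
    question = QUESTION_TEMPLATES.get(relation, "And then?")
    return [("Agent A", nucleus), ("Agent B", question), ("Agent A", satellite)]

for speaker, line in relation_to_dialogue(
    nucleus="The flight was cancelled.",
    satellite="A storm had closed the airport.",
    relation="cause",
):
    print(f"{speaker}: {line}")
```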

So Let’s See: Taking and Keeping the Initiative in Collaborative Dialogues

Sabine Payr

In order to create and maintain social relationships with human users in mixed-initiative dialogues, IVAs have to give off coherent signals of claiming or relinquishing leadership in discourse. Quantitative and qualitative analyses of human-human collaborative task-solving dialogues from the Ohio State University Quake Corpus reveal that discursive dominance is a shared achievement of the speakers, given, taken, or kept in a consensual way, up to the point where they incur “costs” in terms of efficiency in solving the task. Some verbal signals can be identified as relevant to this process.

- Dialogues | Pp. 175-182

Health Document Explanation by Virtual Agents

Timothy W. Bickmore; Laura M. Pfeifer; Michael K. Paasche-Orlow

We describe the design and evaluation of a virtual agent that explains health documents to patients. The prevalence and impact of low health literacy is presented as a motivation for such agents, given that face-to-face interaction with health providers is cited as one of the most effective means of communicating with these patients. We analyze the form and distribution of pointing gestures used by experts in explaining health documents, and use this data to develop a computational model of agent-based document explanation. This model is evaluated in a randomized controlled trial. Preliminary results indicate that patients with low health literacy are more satisfied with health document explanation by a virtual agent compared to a human.

- Applications | Pp. 183-196

Virtual Patients for Clinical Therapist Skills Training

Patrick Kenny; Thomas D. Parsons; Jonathan Gratch; Anton Leuski; Albert A. Rizzo

Virtual humans offer an exciting and powerful potential for rich interactive experiences. Fully embodied virtual humans are growing in capability, ease, and utility. As a result, they present an opportunity for expanding research into burgeoning virtual patient medical applications. In this paper we consider the ways in which one may go about building and applying virtual human technology to the virtual patient domain. Specifically we aim to show that virtual human technology may be used to help develop the interviewing and diagnostics skills of developing clinicians. Herein we proffer a description of our iterative design process and preliminary results to show that virtual patients may be a useful adjunct to psychotherapy education.

- Applications | Pp. 197-210

Integrating a Virtual Agent into the Real World: The Virtual Anatomy Assistant Ritchie

Volker Wiendl; Klaus Dorfmüller-Ulhaas; Nicolas Schulz; Elisabeth André

Augmented realities, which are partially real and partially virtual, open up new ways for humans to interact with Embodied Conversational Agents (ECAs), since they allow users to meet ECAs in the physical space. Nevertheless, attempts to integrate ECAs as digital overlays in a physical space have been rare. Obvious reasons are the high demands such an integration puts on the animation of ECAs as virtual augmentations of the physical space, on their capability to perceive not only the virtual but also the physical world, and on reactive behavior control. In this paper, we describe our technical contributions towards solving these challenges. To illustrate our ideas, we present the virtual anatomy assistant Ritchie, which monitors the user’s actions in a physical space and dynamically responds to them.

- Applications | Pp. 211-224