Publications catalogue - books
Intelligent Virtual Agents: 7th International Conference, IVA 2007 Paris, France, September 17-19, 2007 Proceedings
Catherine Pelachaud ; Jean-Claude Martin ; Elisabeth André ; Gérard Chollet ; Kostas Karpouzis ; Danielle Pelé (eds.)
Conference: 7th International Conference on Intelligent Virtual Agents (IVA), Paris, France, September 17-19, 2007
Abstract/description – provided by the publisher
Not available.
Keywords – provided by the publisher
User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Computers and Education
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-74996-7
Electronic ISBN
978-3-540-74997-4
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Simulation Level of Detail for Virtual Humans
Cyril Brom; Ondřej Šerý; Tomáš Poch
Graphical level of detail (LOD) is a set of techniques for coping with limited computational resources by reducing the graphical detail of the scene far from the observer. Simulation LOD analogously reduces the quality of the simulation in unseen places. In contrast to graphical LOD, simulation LOD has remained almost unstudied. As part of our ongoing work on a large virtual-storytelling game populated by tens of complex virtual humans, we have developed and implemented a set of simulation LOD algorithms for simplifying the virtual space and the behaviour of virtual humans. The main feature of our technique is that it allows for several degrees of detail, i.e. for varying the quality of the simulation. In this paper, we summarise the main lessons learned, introduce the prototype implementation, called IVE, and discuss the possibility of scaling our technique to other applications featuring virtual humans.
- Rendering and Analysis | Pp. 1-14
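The distance-based degradation this abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' IVE implementation; the thresholds, class names, and behaviour labels are all invented for the example.

```python
# Sketch: assigning a discrete simulation LOD degree to each virtual
# human based on its distance from the observer, and simplifying its
# simulated behaviour at coarser degrees. All values are placeholders.

from dataclasses import dataclass

# Hypothetical distance thresholds: (max distance, LOD degree).
LOD_THRESHOLDS = [(10.0, 3), (30.0, 2), (60.0, 1)]

@dataclass
class VirtualHuman:
    name: str
    distance_to_observer: float

def lod_level(agent: VirtualHuman) -> int:
    """Return a simulation LOD degree: 3 = full detail, 0 = coarsest."""
    for max_dist, level in LOD_THRESHOLDS:
        if agent.distance_to_observer <= max_dist:
            return level
    return 0  # far away: simulate only abstract state

def simulate_step(agent: VirtualHuman) -> str:
    level = lod_level(agent)
    if level == 3:
        return "full behaviour: animation, path-finding, interactions"
    if level == 2:
        return "simplified behaviour: no fine-grained animation"
    if level == 1:
        return "abstract behaviour: coarse state updates only"
    return "frozen: state extrapolated when observed again"
```

The key design point the paper emphasises is that the LOD degree is graded rather than binary, so simulation quality can vary smoothly with the observer's position.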
Towards Natural Gesture Synthesis: Evaluating Gesture Units in a Data-Driven Approach to Gesture Synthesis
Michael Kipp; Michael Neff; Kerstin H. Kipp; Irene Albrecht
Virtual humans still lack naturalness in their nonverbal behaviour. We present a data-driven solution that moves towards a more natural synthesis of hand and arm gestures by recreating gestural behaviour in the style of a human performer. Our algorithm exploits the concept of gesture units to make the produced gestures a continuous flow of movement. We empirically validated the use of gesture units in the generation process and show that it causes the virtual human to be perceived as more natural.
- Rendering and Analysis | Pp. 15-28
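The gesture-unit idea, grouping consecutive gestures into one continuous flow of movement instead of retracting to a rest pose between them, can be sketched schematically. This is a hypothetical illustration of the concept, not the paper's algorithm; the gap threshold is invented.

```python
# Sketch: merge consecutive gestures into one gesture unit (performed
# as a continuous flow, without retraction between them) whenever the
# pause separating them is short. Threshold is a placeholder value.

def group_into_units(gestures, max_gap=0.5):
    """gestures: list of (start_time, end_time) tuples, sorted by start.
    Returns a list of units, each a list of gestures executed as one
    continuous movement phrase."""
    units = []
    for g in gestures:
        if units and g[0] - units[-1][-1][1] <= max_gap:
            units[-1].append(g)   # continue the current gesture unit
        else:
            units.append([g])     # retract to rest, start a new unit
    return units
```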
3D Audiovisual Rendering and Real-Time Interactive Control of Expressivity in a Talking Head
Jean-Claude Martin; Christophe d’Alessandro; Christian Jacquemin; Brian Katz; Aurélien Max; Laurent Pointal; Albert Rilliard
The integration of virtual agents in real-time interactive virtual applications raises several challenges. The rendering of the movements of the virtual character in the virtual scene (locomotion of the character or rotation of its head) and the binaural 3D rendering of the synthetic speech during these movements need to be spatially coordinated. Furthermore, the system must enable real-time adaptation of the agent's expressive audiovisual signals to the user's ongoing actions. In this paper, we describe a platform that we have designed to address these challenges, comprising: (1) modules enabling real-time synthesis and spatial rendering of the synthetic speech, (2) modules enabling real-time 3D rendering of facial expressions using a GPU-based 3D graphics engine, and (3) the integration of these modules within an experimental platform using gesture as an input modality. A new model of phoneme-dependent human speech directivity patterns is included in the speech synthesis system, so that the agent can move through the virtual scene with realistic 3D visual and audio rendering. Future applications of this platform include perceptual studies of multimodal perception and interaction, an expressive real-time question-and-answer system, and interactive arts.
- Rendering and Analysis | Pp. 29-36
Semantic Segmentation of Motion Capture Using Laban Movement Analysis
Durell Bouchard; Norman Badler
Many applications that utilize motion capture data require small, discrete, semantic segments of data, but most motion capture collection processes produce long sequences of data. The smaller segments are often created from the longer sequences manually. This segmentation process is very laborious and time-consuming. This paper presents an automatic motion capture segmentation method based on movement qualities derived from Laban Movement Analysis (LMA). LMA provides a good compromise between high-level semantic features, which are difficult to extract for general motions, and low-level kinematic features, which often yield unsophisticated segmentations. The LMA features are computed using a collection of neural networks trained with temporal variance in order to create a classifier that is more robust with regard to input boundaries. The actual segmentation points are derived through simple time series analysis of the LMA features.
- Rendering and Analysis | Pp. 37-44
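The final step the abstract mentions, deriving segmentation points by simple time-series analysis of the per-frame movement-quality features, might look roughly like the following. This is an assumed illustration (picking local minima of a feature signal below a threshold), not the paper's actual analysis.

```python
# Sketch: treat frames where a movement-quality feature (e.g. an LMA
# Effort value emitted by the classifiers) dips to a local minimum
# below a threshold as boundaries between motion segments.

def segment_points(feature, threshold=0.2):
    """feature: per-frame values in [0, 1]. Returns frame indices
    taken as segmentation points between motions."""
    points = []
    for i in range(1, len(feature) - 1):
        is_local_min = feature[i] <= feature[i - 1] and feature[i] <= feature[i + 1]
        if feature[i] < threshold and is_local_min:
            points.append(i)
    return points
```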
A Computational Model of Culture-Specific Conversational Behavior
Dušan Jan; David Herrera; Bilyana Martinovski; David Novick; David Traum
This paper presents a model for simulating cultural differences in the conversational behavior of virtual agents. The model provides parameters for differences in proxemics, gaze and overlap in turn taking. We present a review of literature on these factors and show results of a study where native speakers of North American English, Mexican Spanish and Arabic were asked to rate the realism of the simulations generated based on different cultural parameters with respect to their culture.
- Culture and Identity | Pp. 45-56
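The model's three parameter families (proxemics, gaze, turn-taking overlap) can be sketched as a simple data structure. The culture labels and numeric values below are placeholders for illustration, not the paper's measured data.

```python
# Sketch: culture-specific conversational parameters of the kind the
# model exposes. Values are invented placeholders, not study results.

from dataclasses import dataclass

@dataclass
class CultureParams:
    proxemics_m: float      # preferred interpersonal distance (metres)
    gaze_at_partner: float  # fraction of time gazing at the partner
    turn_overlap: float     # probability of overlapping the partner's turn

CULTURES = {
    "culture_A": CultureParams(proxemics_m=1.2, gaze_at_partner=0.6, turn_overlap=0.05),
    "culture_B": CultureParams(proxemics_m=0.8, gaze_at_partner=0.4, turn_overlap=0.25),
}

def approach_distance(culture: str) -> float:
    """Distance at which an agent stops when joining a conversation."""
    return CULTURES[culture].proxemics_m
```

Simulations generated from different parameter sets can then be rated by native speakers for realism, as in the study described above.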
Ethnic Identity and Engagement in Embodied Conversational Agents
Francisco Iacobelli; Justine Cassell
In this paper we present the design, development and initial evaluation of a virtual peer that models ethnicity through culturally authentic verbal and non-verbal behaviors. The behaviors chosen for the implementation come from an ethnographic study with African-American and Caucasian children, and the evaluation of the virtual peer consists of a study in which children interacted with an African-American or a Caucasian virtual peer and then assessed its ethnicity. Results suggest that it may be possible to tip the perceived ethnicity of an embodied conversational agent by changing verbal and non-verbal behaviors instead of surface attributes, and that children engage with those virtual peers in ways that show promise for educational applications.
- Culture and Identity | Pp. 57-63
Neuroticism – A Competitive Advantage (Also) for IVAs?
Christoph Hermann; Helmuth Melcher; Stefan Rank; Robert Trappl
Real-Time Strategy (RTS) games are a challenging genre for the design of Intelligent Virtual Agents. We were interested in whether incorporating a simple emotional model into an existing bot script improves playing strength. We implemented a bot for Microsoft's RTS game "Age of Mythology". The emotional model is based on the "Big Five" personality traits and the Emotion-Connectionist Model. Four variants of the bot were evaluated, each using different personality parameters; one of the variants was designed to show "neurotic" behaviour. The emotion-driven bots were evaluated in a bot-versus-bot setup, playing matches against the game's default script. Evaluation results indicate a significant increase in playing strength, with the "neurotic" bot being the strongest.
- Culture and Identity | Pp. 64-71
Emotionally Expressive Head and Body Movement During Gaze Shifts
Brent Lance; Stacy C. Marsella
Current state-of-the-art virtual characters fall far short of characters produced by skilled animators. One reason for this is that the physical behaviors of virtual characters do not adequately express the emotions and attitudes of the character. A key deficiency of virtual characters is that their gaze behavior is not emotionally expressive. This paper describes work on expressing emotion through head movement and body posture during gaze shifts, with the intent to integrate a model of emotionally expressive eye movement into this work in the future. The paper further describes an evaluation showing that users can recognize the emotional states generated by the model.
- Behavior Models | Pp. 72-85
Fuzzy Similarity of Facial Expressions of Embodied Agents
Radosław Niewiadomski; Catherine Pelachaud
In this paper we propose an algorithm based on fuzzy similarity which models the concept of resemblance between facial expressions of an Embodied Conversational Agent (ECA). The algorithm measures the degree of visual resemblance between any two facial expressions. We also present an evaluation study in which we examined users' perception of the similarity of facial expressions. Finally, we describe an application of this algorithm to generate complex facial expressions of an ECA.
- Behavior Models | Pp. 86-98
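One common way to compute a fuzzy degree of resemblance between two expressions represented as vectors of membership values in [0, 1] (e.g. intensities of facial animation parameters) is the min/max formulation sketched below. This is a standard fuzzy-similarity measure offered as an illustration; it is not necessarily the exact measure used in the paper.

```python
# Sketch: min/max fuzzy similarity between two facial expressions,
# each given as a vector of membership values in [0, 1].

def fuzzy_similarity(a, b):
    """Degree of resemblance in [0, 1]: the sum of pointwise minima
    divided by the sum of pointwise maxima. Identical vectors score
    1.0; vectors with disjoint support score 0.0."""
    assert len(a) == len(b), "expression vectors must have equal length"
    num = sum(min(x, y) for x, y in zip(a, b))
    den = sum(max(x, y) for x, y in zip(a, b))
    return 1.0 if den == 0 else num / den
```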
The Behavior Markup Language: Recent Developments and Challenges
Hannes Vilhjálmsson; Nathan Cantelmo; Justine Cassell; Nicolas E. Chafai; Michael Kipp; Stefan Kopp; Maurizio Mancini; Stacy Marsella; Andrew N. Marshall; Catherine Pelachaud; Zsofi Ruttkay; Kristinn R. Thórisson; Herwin van Welbergen; Rick J. van der Werf
Since the beginning of the SAIBA effort to unify key interfaces in the multi-modal behavior generation process, the Behavior Markup Language (BML) has both gained ground as an important component in many projects worldwide and continued to undergo further refinement. This paper reports on the progress made in the last year in further developing BML. It discusses some of the key challenges the effort is facing, and reviews a number of projects that are already making use of BML or support its use.
- Behavior Models | Pp. 99-111
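To give a flavour of what a BML block looks like, the sketch below composes one with Python's standard library. The element names (`speech`, `gesture`) and the sync-point reference follow BML's general scheme, but exact attribute names varied across early BML drafts, so treat this as a schematic example rather than canonical BML.

```python
# Sketch: building a small BML request that schedules a beat gesture
# to start at the start of a speech behavior. Attribute names are
# approximations of the BML scheme, not a validated BML document.

import xml.etree.ElementTree as ET

def make_bml(bml_id: str, text: str) -> str:
    bml = ET.Element("bml", id=bml_id)
    speech = ET.SubElement(bml, "speech", id="s1")
    ET.SubElement(speech, "text").text = text
    # Cross-behavior synchronization: gesture g1 starts with speech s1.
    ET.SubElement(bml, "gesture", id="g1", type="beat", start="s1:start")
    return ET.tostring(bml, encoding="unicode")
```

In the SAIBA pipeline, a block like this sits between intent planning and behavior realization: the planner emits BML, and a realizer turns it into synchronized speech and animation.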