Publications catalog - books

Intelligent Virtual Agents: 7th International Conference, IVA 2007 Paris, France, September 17-19, 2007 Proceedings

Catherine Pelachaud ; Jean-Claude Martin ; Elisabeth André ; Gérard Chollet ; Kostas Karpouzis ; Danielle Pelé (eds.)

Conference: 7th International Workshop on Intelligent Virtual Agents (IVA). Paris, France. September 17-19, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

User Interfaces and Human Computer Interaction; Artificial Intelligence (incl. Robotics); Information Systems Applications (incl. Internet); Computers and Education

Availability
Detected institution: not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74996-7

Electronic ISBN

978-3-540-74997-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

AVSML: An XML-Based Markup Language for Web Information Integration in 3D Virtual Space

Yasuhiko Kitamura; Yatsuho Shibata; Keisuke Tokuda; Kazuki Kobayashi; Noriko Nagata

A 3D virtual space can visually present spatial structure to users, and it has been applied to many fields such as city planning, navigation, education, and entertainment. In a 3D virtual space, an agent can guide a user in an interactive manner [5]. Various platforms for building 3D virtual spaces and languages for controlling the agents have been proposed [1,2,3,4]. For example, VKSC (Virtual Kobe Sanda Campus) is a 3D virtual space of the Kobe Sanda Campus of Kwansei Gakuin University [1]. In VKSC, an agent called Suzie guides a user around the campus upon request.

- Poster Session | Pp. 385-386
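The abstract above describes an XML-based markup for driving agents in a 3D space but does not give the AVSML schema, so the snippet below is only a hypothetical illustration of the idea: an invented scene/agent markup parsed into a flat command list. All element and attribute names are assumptions, not the actual AVSML vocabulary.

```python
# Hypothetical illustration only: the real AVSML schema is not described in the
# abstract above, so the element and attribute names here are invented.
import xml.etree.ElementTree as ET

MARKUP = """
<scene name="campus-tour">
  <agent id="guide">
    <move target="library" speed="1.2"/>
    <say>Welcome to the campus. The library is straight ahead.</say>
  </agent>
</scene>
"""

def parse_agent_script(xml_text):
    """Turn an XML agent script into a flat list of (agent_id, action, payload)."""
    root = ET.fromstring(xml_text)
    commands = []
    for agent in root.iter("agent"):
        agent_id = agent.get("id")
        for action in agent:
            if action.tag == "move":
                commands.append((agent_id, "move", dict(action.attrib)))
            elif action.tag == "say":
                commands.append((agent_id, "say", action.text.strip()))
    return commands

if __name__ == "__main__":
    for cmd in parse_agent_script(MARKUP):
        print(cmd)
```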

Simulation Environment for Anticipatory Behaving Agents from the Artificial Life Domain

Karel Kohout; Pavel Nahodil

Our research is focused on the simulation of agents known as animates. The architecture of these agents is mainly inspired by nature; therefore, they are sometimes called artificial creatures. The main contribution of this paper is the description of the designed simulation environment architecture for the Artificial Life (ALife) domain, named the World of Artificial Life (WAL). Our platform incorporates results of research in the domain of hybrid agent architectures.

- Poster Session | Pp. 387-388

Towards an Architecture for Aligned Speech and Gesture Production

Stefan Kopp; Kirsten Bergmann

The automatic production of speech and gesture is one of the most challenging issues in building embodied conversational agents, due to the intricacy of how the two modalities seem to “align” with each other. Based on experiences from computational approaches and inspired by current theoretical modeling attempts, we propose an architecture to simulate how on-the-spot speech and gesture production might run and bring about more natural multimodal behavior.

- Poster Session | Pp. 389-390

Avatar Puppetry Using Real-Time Audio and Video Analysis

Sylvain Le Gallou; Gaspard Breton; Renaud Séguier; Christophe Garcia

We present a system consisting of a lifelike agent animated in real time using video and audio analysis of the user. This kind of system could be used for Instant Messaging, where an avatar controlled like a puppet is displayed instead of the webcam stream. The overall system combines video analysis based on Active Appearance Models and audio analysis based on Hidden Markov Models. The parameters from these two modules are sent to a control system driving the animation engine. The video analysis extracts the head orientation, and the audio analysis provides the phonetic string used to move the lips.

- Poster Session | Pp. 391-392
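A minimal sketch of the two-channel pipeline described in the abstract above: a video analyser (standing in for the Active Appearance Model fit) supplies head orientation, an audio analyser (standing in for the HMM phonetic decoder) supplies a phoneme string, and a control frame drives the animation engine. Class and method names are placeholders, not the authors' implementation.

```python
# Sketch of the two-channel control flow described above; the class and method
# names are placeholders, not the authors' actual implementation.
from dataclasses import dataclass

@dataclass
class ControlFrame:
    head_orientation: tuple   # (yaw, pitch, roll) from the video channel
    phonemes: list            # phoneme string from the audio channel

class VideoAnalyzer:
    """Stands in for the Active Appearance Model fit on a webcam frame."""
    def head_pose(self, frame):
        return (0.0, 0.0, 0.0)  # placeholder pose estimate

class AudioAnalyzer:
    """Stands in for the HMM-based phonetic decoder."""
    def phonemes(self, audio_chunk):
        return ["h", "e", "l", "o"]  # placeholder phoneme string

class AnimationEngine:
    def apply(self, control: ControlFrame):
        # Drive the avatar's head from the pose and its lips from the phonemes.
        print("head:", control.head_orientation, "lips:", control.phonemes)

def puppetry_step(frame, audio_chunk, video, audio, engine):
    """One real-time step: analyse both modalities, then animate the avatar."""
    control = ControlFrame(video.head_pose(frame), audio.phonemes(audio_chunk))
    engine.apply(control)

if __name__ == "__main__":
    puppetry_step(frame=None, audio_chunk=None,
                  video=VideoAnalyzer(), audio=AudioAnalyzer(),
                  engine=AnimationEngine())
```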

Double Appraisal for Synthetic Characters

Sandy Louchart; Ruth Aylett; Joao Dias

The paper describes a double appraisal-based emotion system for synthetic characters. This approach gives intelligent agents the ability to make decisions with respect to the emotional states of others, thus implementing aspects of the theory of mind concept and laying the basis for cognitive empathy.

- Poster Session | Pp. 393-394
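A toy sketch of the double-appraisal idea from the abstract above: each candidate action is appraised once from the agent's own perspective and once through the modelled emotional states of the other characters, and the choice weighs both. The scoring function and the empathy weight are assumptions, not the authors' system.

```python
# Minimal sketch of double appraisal: appraise each candidate action for the
# agent itself and for the (modelled) others, then combine. Weights are assumed.
def appraise(agent_state, action):
    """Return a scalar emotional value of `action` from one agent's perspective."""
    return agent_state.get(action, 0.0)

def double_appraisal_choice(self_state, other_states, candidate_actions,
                            empathy_weight=0.5):
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        own = appraise(self_state, action)
        # Second appraisal: project the action through the others' modelled states.
        others = sum(appraise(s, action) for s in other_states) / max(len(other_states), 1)
        score = (1 - empathy_weight) * own + empathy_weight * others
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Example: "comfort" scores low for the agent itself but high for the other.
print(double_appraisal_choice({"taunt": 1.0, "comfort": 0.2},
                              [{"taunt": -1.0, "comfort": 0.8}],
                              ["taunt", "comfort"]))
```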

An Expressive Avatar for Instant Messaging Endowed with Emotional Intelligence

Alena Neviarouskaya; Helmut Prendinger; Mitsuru Ishizuka

In this paper, we propose to endow a graphical representation of a user in Instant Messaging – an avatar – with the ability to recognize and express emotions and to display social nonverbal behaviour, on the basis of textual affect sensing and interpretation of the communicative functions conveyed by online conversations. The developed Affect Analysis Model, integrated with Instant Messaging (IM) media, supports the recognition of affect from text so that the avatar can be animated in an appropriate and expressive manner.

- Poster Session | Pp. 395-396
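An illustrative sketch of text-driven avatar animation in the spirit of the abstract above: a small affect lexicon (including emoticons) yields an emotion label and intensity that select an animation. The lexicon, emotion labels, and animation names are invented, not the authors' Affect Analysis Model.

```python
# Toy sketch: lexicon-based affect sensing selects an avatar animation.
# All entries below are illustrative assumptions.
AFFECT_LEXICON = {
    "great": ("joy", 0.8), "love": ("joy", 0.9), "sad": ("sadness", 0.7),
    "angry": ("anger", 0.8), "wow": ("surprise", 0.6),
    ":)": ("joy", 0.6), ":(": ("sadness", 0.6),
}

ANIMATIONS = {"joy": "smile", "sadness": "frown", "anger": "scowl",
              "surprise": "raise_brows", "neutral": "idle"}

def sense_affect(message):
    """Return the strongest (emotion, intensity) cue found in the message."""
    best = ("neutral", 0.0)
    for token in message.lower().split():
        emotion, intensity = AFFECT_LEXICON.get(token, ("neutral", 0.0))
        if intensity > best[1]:
            best = (emotion, intensity)
    return best

def animate_avatar(message):
    emotion, intensity = sense_affect(message)
    return {"animation": ANIMATIONS[emotion], "intensity": intensity}

print(animate_avatar("wow I love this :)"))
```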

ALICIA

Marco Paleari; Brian Duffy; Benoit Huet

One of the most important social abilities for effective social interaction with people is the capacity to understand, feel, and ultimately express emotions. In this paper we present an architecture based on the BDI paradigm that employs a three-layered approach and couples it with an emotion engine, which simulates the generation of affective states based on Scherer's component process theory and influences decision making.

- Poster Session | Pp. 397-398
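A rough sketch of how an appraisal step in the spirit of Scherer's component process theory could bias BDI-style plan selection, as in the architecture outlined above. The appraisal checks, numeric weights, and plan structure are simplified assumptions, not the ALICIA implementation.

```python
# Simplified sketch: sequential appraisal checks produce an affective state
# that biases plan selection. Values and formulas are assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    novelty: float             # 0..1, how unexpected the event is
    pleasantness: float        # -1..1, intrinsic (un)pleasantness
    goal_conduciveness: float  # -1..1, helps or hinders current goals
    coping_potential: float    # 0..1, how well the agent can deal with it

def appraise(event: Event) -> dict:
    """Coarse affective state from the appraisal checks."""
    arousal = 0.5 * event.novelty + 0.5 * (1 - event.coping_potential)
    valence = 0.5 * event.pleasantness + 0.5 * event.goal_conduciveness
    return {"valence": valence, "arousal": arousal}

def select_plan(plans, affect):
    """Bias plan selection: negative, high-arousal states favour safer plans."""
    cautious = affect["valence"] < 0 and affect["arousal"] > 0.5
    key = (lambda p: p["safety"]) if cautious else (lambda p: p["expected_gain"])
    return max(plans, key=key)

plans = [{"name": "explore", "expected_gain": 0.9, "safety": 0.3},
         {"name": "retreat", "expected_gain": 0.2, "safety": 0.9}]
print(select_plan(plans, appraise(Event(0.9, -0.6, -0.4, 0.2))))
```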

Towards a Unified Model of Social and Environment-Directed Agent Gaze Behaviour

Christopher Peters

When considering the gaze behaviours of embodied agents, and necessarily the underlying processes of visual attention that help guide them, most work to date has focused either on models for controlling gaze in social situations, for example when one or more ECAs and humans are interacting, or on controlling gaze directed at the environment in a more general sense, typically to help AVAs accomplish tasks such as navigation. We are designing a model of visual attention that attempts to amalgamate these concepts into a unified design, in order to produce intelligent virtual agents that behave in a social manner towards social stimuli while also remaining attentive to events when mobile in the general environment.

- Poster Session | Pp. 399-400
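One possible way to merge social and environment-directed attention into a single gaze decision, sketched below: every stimulus receives a combined score from bottom-up visual salience and a social relevance term, damped by habituation. The weights and fields are assumptions, not the authors' model.

```python
# Sketch of a unified gaze-target choice: social relevance and bottom-up
# salience are blended into one score. Weights and fields are assumed.
from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str
    salience: float           # bottom-up visual salience, 0..1
    social_relevance: float   # e.g. an interlocutor who is speaking, 0..1
    habituation: float = 0.0  # grows while the stimulus is being looked at

def choose_gaze_target(stimuli, w_social=0.6, w_env=0.4):
    def score(s):
        return (w_social * s.social_relevance + w_env * s.salience) * (1 - s.habituation)
    return max(stimuli, key=score)

scene = [Stimulus("speaking user", salience=0.3, social_relevance=0.9),
         Stimulus("passing car", salience=0.8, social_relevance=0.0),
         Stimulus("doorway", salience=0.4, social_relevance=0.1)]
print(choose_gaze_target(scene).name)  # -> "speaking user"
```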

Automatic Generation of Expressive Gaze in Virtual Animated Characters: From Artists Craft to a Behavioral Animation Model

Rossana B. Queiroz; Leandro M. Barros; Soraia R. Musse

We present a model for the automatic generation of expressive gaze in virtual agents. Our main focus is the eye behavior associated with expressiveness. Our approach is to collect data from animated computer graphics films and codify such observations into an animation framework. The main contribution is the modeling of an animation system, calibrated with empirical observations, in order to generate realistic eye motion. Results show that this approach generates convincing animations that improve the empathy of virtual agents.

- Poster Session | Pp. 401-402
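An illustrative sketch of codifying observed eye behaviour into animation data, as the abstract above proposes: each expressive state maps to gaze parameters that a simple keyframe generator reads. The parameter values are invented placeholders, not measurements from the paper.

```python
# Illustrative only: expressive states mapped to gaze parameters, then turned
# into keyframes. All numbers are invented placeholders.
GAZE_STYLES = {
    # state: (saccade amplitude in degrees, fixation time in s, blink rate per min)
    "attentive": (5.0, 1.2, 8),
    "nervous":   (12.0, 0.4, 25),
    "bored":     (18.0, 2.5, 5),
}

def gaze_keyframes(state, duration_s, fps=30):
    """Emit (frame, yaw_offset) keyframes alternating left/right saccades."""
    amplitude, fixation, _ = GAZE_STYLES[state]
    frames_per_fix = max(int(fixation * fps), 1)
    keyframes, direction = [], 1
    for frame in range(0, int(duration_s * fps), frames_per_fix):
        keyframes.append((frame, direction * amplitude))
        direction = -direction
    return keyframes

print(gaze_keyframes("nervous", duration_s=2.0))
```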

Analyzing Gaze During Face-to-Face Interaction

Stephan Raidt; Gérard Bailly; Frédéric Elisei

We present here the analysis of multimodal data gathered during realistic face-to-face interaction of a target speaker with a number of interlocutors. Video and gaze have been monitored with an experimental setup using coupled cameras and screens with integrated eye trackers. With the aim of understanding the functions of gaze in social interaction and developing a coherent gaze control model for our talking heads, we investigate the influence of cognitive state and social role on the observed gaze behavior.

- Poster Session | Pp. 403-404