Publications catalog - books



Affective Computing and Intelligent Interaction: 2nd International Conference, ACII 2007, Lisbon, Portugal, September 12-14, 2007, Proceedings

Ana C. R. Paiva ; Rui Prada ; Rosalind W. Picard (eds.)

Conference: 2nd International Conference on Affective Computing and Intelligent Interaction (ACII), Lisbon, Portugal, September 12-14, 2007

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Not available.

Availability
Detected institution: Not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74888-5

Electronic ISBN

978-3-540-74889-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007
Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Combining Audio and Video by Dominance in Bimodal Emotion Recognition

Lixing Huang; Le Xin; Liyue Zhao; Jianhua Tao

We propose a novel bimodal emotion recognition approach using a boosting-based framework in which the adaptive weights for audio and visual features are determined automatically. In this way, we dynamically balance the dominance of audio and visual features at the feature level to obtain better performance.

- Posters | Pp. 729-730
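
A minimal sketch of the boosting-based fusion idea described above: each round trains a weak learner on either the audio or the visual features and keeps the one with the lowest weighted error, so the balance between modalities adapts to the data. This is an illustrative reconstruction, not the authors' code; all names and parameters are assumptions, and labels are assumed to be coded as +/-1.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def boosted_bimodal_fusion(X_audio, X_video, y, n_rounds=20):
        # AdaBoost-style loop: per round, train a depth-1 tree per modality
        # and keep the one with the lowest weighted error, so the share of
        # audio vs. visual learners emerges from the data. y must be +/-1.
        y = np.asarray(y)
        n = len(y)
        w = np.full(n, 1.0 / n)                    # sample weights
        modalities = {"audio": X_audio, "video": X_video}
        ensemble = []                              # (modality, tree, alpha)
        for _ in range(n_rounds):
            best = None
            for name, X in modalities.items():
                tree = DecisionTreeClassifier(max_depth=1)
                tree.fit(X, y, sample_weight=w)
                err = np.sum(w * (tree.predict(X) != y))
                if best is None or err < best[2]:
                    best = (name, tree, err)
            name, tree, err = best
            err = np.clip(err, 1e-10, 1.0 - 1e-10)
            alpha = 0.5 * np.log((1.0 - err) / err)    # learner weight
            pred = tree.predict(modalities[name])
            w *= np.exp(-alpha * np.where(pred == y, 1.0, -1.0))
            w /= w.sum()                           # re-normalize weights
            ensemble.append((name, tree, alpha))
        return ensemble

    def predict(ensemble, x_audio, x_video):
        # Sign of the alpha-weighted vote over all weak learners.
        score = sum(alpha * tree.predict([x_audio if m == "audio" else x_video])[0]
                    for m, tree, alpha in ensemble)
        return 1 if score >= 0 else -1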

Metric Adaptation and Representation Upgrade in an Emotion-Based Agent Model

Rodrigo Ventura; Carlos Pinto-Ferreira

The research presented here follows a biologically inspired approach, based on the hypothesis that emotions contribute decisively to humans' ability to cope with complex and dynamic environments. This hypothesis is founded on neurophysiological findings showing that damage to the emotion circuitry of the brain causes an inability to handle simple, common life tasks [1]. Inspired by these findings, an emotion-based agent model was previously presented [2], proposing a double processing of stimuli: a simple representation, designed for fast processing and immediate response to urgent situations, and a complex representation, thus slow to process, are extracted from each stimulus reaching the agent. These two representations are extracted and processed simultaneously by the two levels of the architecture: the perceptual and the cognitive levels. The parallelism of the processing is essential so that quick response to urgent situations is not compromised by the slow processing of the cognitive level. The two representations are then associated and stored in memory. When the agent faces a new situation, it matches the incoming stimulus against its memory, thus retrieving the associated images.

- Posters | Pp. 731-732
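
The double-representation scheme can be illustrated with a toy agent: a cheap perceptual cue drives an immediate reflex, while a richer representation is built more slowly, associated with the cue, and stored in memory for later recall. Everything here (the cue, the threshold, both representations) is an invented stand-in for the model in [2], not the authors' implementation.

    class EmotionBasedAgent:
        # Toy double-processing agent in the spirit of the abstract above.
        def __init__(self):
            self.memory = []                       # (cue, cognitive_image) pairs

        def fast_cue(self, stimulus):
            return sum(stimulus) / len(stimulus)   # stand-in for a simple feature

        def reflex(self, cue):
            return "avoid" if cue > 0.8 else "continue"   # invented threshold

        def slow_image(self, stimulus):
            return tuple(sorted(stimulus))         # stand-in for a rich representation

        def perceive(self, stimulus):
            cue = self.fast_cue(stimulus)          # perceptual level: fast
            reaction = self.reflex(cue)            # immediate response
            image = self.slow_image(stimulus)      # cognitive level: slow
            recalled = [img for c, img in self.memory if abs(c - cue) < 0.1]
            self.memory.append((cue, image))       # associate and store
            return reaction, recalled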

Combining Global and Local Classifiers for Lipreading

Shengping Zhang; Hongxun Yao; Yuqi Wan; Dan Wang

Lipreading has become a hot research topic in recent years, since the visual information extracted from lip movement has been shown to improve the performance of automatic speech recognition (ASR) systems, especially in noisy environments [1]-[3], [5]. There are two important issues in lipreading: 1) how to extract the most efficient features from lip image sequences, and 2) how to build lipreading models. This paper focuses mainly on how to choose more efficient features for lipreading.

- Posters | Pp. 733-734
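
Since the abstract itself concentrates on feature choice, the sketch below only illustrates the combination idea named in the title, under the assumption of score-level fusion: one classifier on whole-lip features plus one per local patch, averaged into a final posterior. The weight and the classifiers are assumptions, not the paper's model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def global_local_predict(X_global, X_patches, y, Xg_test, Xp_test, w=0.5):
        # Global classifier on whole-lip features; one local classifier per
        # patch; final decision averages global and mean-local posteriors.
        g = LogisticRegression(max_iter=1000).fit(X_global, y)
        locals_ = [LogisticRegression(max_iter=1000).fit(Xp, y) for Xp in X_patches]
        p_global = g.predict_proba(Xg_test)
        p_local = np.mean([clf.predict_proba(Xt) for clf, Xt in zip(locals_, Xp_test)],
                          axis=0)
        return (w * p_global + (1.0 - w) * p_local).argmax(axis=1)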

The Personality-Enabled Architecture for Cognition (PAC)

Stephen Read; Lynn Miller; Anna Kostygina; Gurveen Chopra; John L. Christensen; Charisse Corsbie-Massay; Wayne Zachary; Jean-Christophe LeMentec; Vassil Iordanov; Andrew Rosoff

The Personality-enabled Architecture for Cognition (PAC) is a new modeling architecture designed to create Intelligent Virtual Agents (IVAs) with specific personality traits. PAC integrates theory and empirical data from personality psychology, social psychology, cognitive science, and neuroscience to build a model of personality that is based on fundamental underlying human motivational systems.

- Posters | Pp. 735-736

Rules of Emotions: A Linguistic Interpretation of an Emotion Model for Affect Sensing from Texts

Mostafa Al Masum Shaikh; Helmut Prendinger; Mitsuru Ishizuka

This paper aims to interpret the cognitive theory of emotions known as the OCC emotion model from a computational-linguistic standpoint. Since the OCC emotions are associated with several cognitive variables, in this paper we explain how values can be assigned to these variables by analyzing and processing natural-language components.

- Posters | Pp. 737-738
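
A toy illustration of assigning values to OCC cognitive variables from text: tiny hand-made lexicons score two such variables, desirability and praiseworthiness. The lexicons and scores are invented for illustration; the paper derives such values from full linguistic analysis rather than word lookup.

    # Invented mini-lexicons: word -> contribution to an OCC variable.
    DESIRABLE = {"win": 0.8, "succeed": 0.7, "gift": 0.6}
    UNDESIRABLE = {"lose": -0.8, "fail": -0.7, "accident": -0.9}
    PRAISEWORTHY = {"help": 0.7, "rescue": 0.9}
    BLAMEWORTHY = {"cheat": -0.8, "steal": -0.9}

    def occ_variables(sentence):
        # Score two OCC cognitive variables by summing word contributions.
        words = sentence.lower().split()
        desirability = sum({**DESIRABLE, **UNDESIRABLE}.get(w, 0.0) for w in words)
        praiseworthiness = sum({**PRAISEWORTHY, **BLAMEWORTHY}.get(w, 0.0) for w in words)
        return {"desirability": desirability, "praiseworthiness": praiseworthiness}

    # occ_variables("friends help you win")
    # -> {'desirability': 0.8, 'praiseworthiness': 0.7}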

The Role of Internal States in the Emergence of Motivation and Preference: A Robotics Approach

Carlos Herrera; Alberto Montebelli; Tom Ziemke

In order to explain and model emotion, we need to attend to the role internal states play in the generation of behavior. We argue that motivational and perceptual roles emerge from the dynamical interaction between physiological processes, sensory-motor processes, and the environment. We investigate two aspects inherent to emotion appraisal and response that rely on physiological processes: the ability to categorize relations with the environment and to modulate responses, generating different action tendencies.

- Posters | Pp. 739-740
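
The modulation idea can be sketched as a controller in which a simulated physiological variable (here called "energy") changes which action tendency dominates for the same sensory input. All dynamics and constants below are illustrative assumptions, not the authors' model.

    def step(energy, food_signal, obstacle_signal, dt=0.1):
        # Internal state decays over time and reshapes action tendencies.
        energy -= 0.02 * dt                        # metabolic decay
        hunger = max(0.0, 1.0 - energy)            # motivational state
        approach = hunger * food_signal            # appetitive tendency
        avoid = (1.0 - hunger) * obstacle_signal   # caution dominates when sated
        action = "approach_food" if approach > avoid else "avoid_obstacle"
        if action == "approach_food" and food_signal > 0.9:
            energy = min(1.0, energy + 0.3)        # "eating" replenishes energy
        return energy, action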

Feature Combination for Better Differentiating Anger from Neutral in Mandarin Emotional Speech

Tsang-Long Pao; Yu-Te Chen; Jun-Heng Yeh; Yun-Maw Cheng; Charles S. Chien

Just as written language is a sequence of symbols from an elementary alphabet, speech is a sequence of elementary acoustic symbols. Speech signals convey more than the spoken words: the additional information conveyed in speech includes gender, age, accent, the speaker's identity, health, prosody, and emotion [1].

- Posters | Pp. 741-742

Affect-Insensitive Speaker Recognition by Feature Variety Training

Dongdong Li; Yingchun Yang

Inner variabilities such as emotion and stress are largely missing from traditional speaker recognition systems. The direct result is that a recognition system is easily disturbed when enrollment and authentication are performed under different emotional states. Reynolds [1] proposed a normalization technique called feature mapping, which achieved considerable success in channel-robust speaker verification. We extend the mapping idea to develop a feature variety training approach for affect-insensitive speaker recognition.

- Posters | Pp. 743-744
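
A simplified stand-in for the feature-mapping idea: estimate a per-dimension affine transform that moves features recorded under an emotional state toward neutral statistics, then train the speaker model on the union of neutral and mapped features ("feature variety training"). Reynolds' method is GMM-based; this moment-matching version is only a sketch under that assumption.

    import numpy as np

    def fit_mapping(feats_emotional, feats_neutral):
        # Per-dimension affine map pushing emotional-speech features toward
        # neutral statistics (moment matching, not Reynolds' GMM mapping).
        mu_e, sd_e = feats_emotional.mean(0), feats_emotional.std(0) + 1e-8
        mu_n, sd_n = feats_neutral.mean(0), feats_neutral.std(0) + 1e-8
        scale = sd_n / sd_e
        return scale, mu_n - scale * mu_e

    def apply_mapping(feats, scale, shift):
        return feats * scale + shift

    # Feature variety training (sketch): fit the speaker model on neutral
    # features plus the mapped features from every available emotion.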

Expressing Complex Mental States Through Facial Expressions

Xueni Pan; Marco Gillies; Tevfik Metin Sezgin; Celine Loscos

A face is capable of producing about twenty thousand different facial expressions [2]. Many researchers working on virtual characters have selected a limited set of emotional facial expressions and defined them as basic emotions: universally recognized facial expressions. These basic emotions have been well studied since 1969 and employed in many applications [3]. However, real-life communication usually entails more complicated emotions. For instance, communicative mental states such as "convinced", "persuaded" and "bored" are difficult to describe adequately with basic emotions. Since our daily face-to-face interaction is accompanied by such complex mental states, an empathic animation system should support them. Compared to basic emotions, complex mental states are harder to model because they require knowledge of temporal changes in facial displays and head movements, as opposed to a static snapshot of a facial expression. We address this by building animation models for complex emotions based on video clips of professional actors displaying these emotions.

- Posters | Pp. 745-746
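
A complex mental state can be represented as a timed sequence of facial displays and head movements rather than one static expression; the sketch below encodes such a track and looks up the active pose at a given time. The labels and timings are invented for illustration and are not taken from the paper's actor data.

    # A "convinced" display as a timed track of (facial display, head movement).
    CONVINCED = [
        (0.0, "neutral",      "still"),
        (0.4, "brow_raise",   "tilt_back"),
        (0.9, "slight_smile", "nod"),
        (1.5, "slight_smile", "still"),
    ]

    def pose_at(track, t):
        # Return the (display, head) pair active at time t.
        current = track[0][1:]
        for start, display, head in track:
            if t >= start:
                current = (display, head)
        return current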

Metaphor and Affect Detection in an ICA

T. H. Rumbell; C. J. Smith; J. A. Barnden; M. G. Lee; S. R. Glasbey; A. M. Wallington

We discuss an aspect of an affect-detection system used by intelligent conversational agents in e-drama, namely the affective interpretation of limited sorts of metaphorical utterance. We discuss how these metaphorical utterances are recognized, how they are analysed, and how their affective content is determined.

- Posters | Pp. 747-748