Publications catalog - books



Affective Computing and Intelligent Interaction: 1st International Conference, ACII 2005, Beijing, China, October 22-24, 2005, Proceedings

Jianhua Tao ; Tieniu Tan ; Rosalind W. Picard (eds.)

Conference: 1st International Conference on Affective Computing and Intelligent Interaction (ACII). Beijing, China, October 22-24, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Publication year: 2005. Platform: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-29621-8

Electronic ISBN

978-3-540-32273-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Watch and Feel: An Affective Interface in a Virtual Storytelling Environment

Rui Figueiredo; Ana Paiva

In this paper we describe a study carried out with SenToy, a tangible interface shaped like a doll that captures emotions from its user as the user performs specific gestures. SenToy was used with an application named FearNot!, a virtual storytelling environment in which characters act autonomously and in character, so that stories emerge from those autonomous actions. The integration of SenToy into FearNot! was evaluated in two ways: (1) whether users were able to manipulate the tangible interface appropriately, even while engaged in a storytelling situation, and (2) whether the emotions expressed by the users with SenToy matched those they reported having felt after the session. The results of the study show that SenToy can be used to understand how viewers reacted to the stories portrayed, and that the emotions expressed with SenToy (apart from some exceptions, reported in the paper) are indeed similar to the ones users reported having felt.

- Affective Interaction and Systems and Applications | Pp. 915-922
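The gesture-capture idea above can be pictured as a lookup from recognized doll gestures to expressed emotions. A minimal Python sketch; the gesture names below are illustrative assumptions for this sketch, not SenToy's actual gesture vocabulary:

```python
# Illustrative mapping from recognized doll gestures to expressed emotions.
# These gesture names are assumptions, not SenToy's documented gesture set.
GESTURE_TO_EMOTION = {
    "shake": "anger",
    "dance": "happiness",
    "cover_eyes": "fear",
    "bend_forward": "sadness",
}

def expressed_emotions(gesture_stream):
    """Map a stream of recognized gestures to expressed emotions,
    skipping gestures with no defined mapping."""
    return [GESTURE_TO_EMOTION[g] for g in gesture_stream if g in GESTURE_TO_EMOTION]
```

The study's comparison between expressed and reported emotions would then amount to comparing this log against post-session self-reports.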

Affective Music Processing: Challenges

Somnuk Phon-Amnuaisuk

Our states of mind change all the time. How do we determine a mapping between our states of mind and a musical stimulus? In this report, we discuss, from a bird's-eye view, the meanings and effects that music impresses upon us. We review related literature on musical meaning and computational models, and finally we discuss the impressions formed while listening to the fourth movement of the Pastoral Symphony. We point out challenges and suggest plausible computational approaches to affective music computing.

- Affective Interaction and Systems and Applications | Pp. 923-930

A User-Centered Approach to Affective Interaction

Petra Sundström; Anna Ståhl; Kristina Höök

We have built eMoto, a mobile service for sending and receiving affective messages, with the explicit aim of addressing the inner experience of emotions. eMoto is a designed artifact that carries emotional experiences achieved only through interaction. Following theories of embodiment, we argue that emotional experiences cannot be designed in, only designed for. eMoto is the result of a user-centered design approach, realized through a set of initial brainstorming methods, a [...], a [...] of body language and a [...]. eMoto is not a system that could have been designed from theory alone; it required iterative engagement with end users, in combination with theoretical work. More specifically, we show how we have managed to design an [...] and open system that allows for users' [...].

- Affective Interaction and Systems and Applications | Pp. 931-938

Designing and Redesigning an Affective Interface for an Adaptive Museum Guide

Dina Goren-Bar; Ilenia Graziola; Cesare Rocchi; Fabio Pianesi; Oliviero Stock; Massimo Zancanaro

The ideal museum guide should observe the user's affective reactions to its presentations and adapt its behavior accordingly. In this paper we describe the user-centred design of an adaptive multimedia mobile guide with an affective interface. The novel approach required a series of redesign cycles. We comment in particular on the last experiments we ran with the prototype, on users' observations during interviews, and on more objective considerations based on interaction logs. We show that the latest design is better understood by users, though there is still room for improvement.

- Affective Interaction and Systems and Applications | Pp. 939-946

Intelligent Interaction for Linguistic Learning

Vito Leonardo Plantamura; Enrica Gentile; Anna Angelini

The choice of a friendly interface for learning a foreign language is the central theme of the research illustrated in this paper. Many means are used in dialogue among human beings, and they essentially require knowledge of the linguistic terminology, the context in which the dialogue takes place, and the final goal. Technological tools that emulate human manners can be useful in communication processes that require interaction with situations of everyday life. HyperEnglish, the multimedia courseware we are developing, is a prototype for experimenting with and evaluating these hypotheses; its fundamental goal is to build an intelligent environment for learning the English language that simplifies communication and lets learners experience real-life emotions.

- Affective Interaction and Systems and Applications | Pp. 947-955

A Three-Layered Architecture for Socially Intelligent Agents: Modeling the Multilevel Process of Emotions

Christine L. Lisetti; Andreas Marpaung

In this article, we propose the design of a three-layered agent architecture inspired by the Multilevel Process Theory of Emotion (Leventhal and Scherer, 1987). Our project aims at modeling emotions on an autonomous embodied robotic agent, expanding upon our previous work (Lisetti et al., 2004). Our agent is designed to interact socially with humans, navigating an office-suite environment and engaging people in social interactions. We describe: (1) the psychological theory of emotion that inspired our design, (2) our proposed agent architecture, (3) the hardware additions that we implemented on a robot, and (4) the robot's multi-modal interface, designed especially to engage humans in natural (and hopefully pleasant) social interactions.

- Affective Interaction and Systems and Applications | Pp. 956-963
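The layered appraisal the abstract describes, reflex-like sensory-motor reactions, learned schematic associations, and deliberate conceptual reasoning in Leventhal and Scherer's terms, could be sketched roughly as follows. All class names, thresholds, and rules here are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    stimulus: str     # symbolic label for what the robot sensed
    intensity: float  # normalized stimulus intensity, 0..1

class SensoryMotorLevel:
    """Innate, reflex-like reactions to raw stimulus intensity."""
    def react(self, p):
        return "startle" if p.intensity > 0.8 else None

class SchematicLevel:
    """Learned associations between situations and past emotional episodes."""
    def __init__(self):
        self.memory = {"greeting": "joy", "obstacle": "frustration"}
    def react(self, p):
        return self.memory.get(p.stimulus)

class ConceptualLevel:
    """Deliberate reasoning; produces the final state, falling back to neutral."""
    def react(self, p, lower):
        return lower or "neutral"

class ThreeLayerAgent:
    def __init__(self):
        self.l1 = SensoryMotorLevel()
        self.l2 = SchematicLevel()
        self.l3 = ConceptualLevel()
    def appraise(self, p):
        # Lower levels feed upward; the reflex takes precedence when it fires.
        reflex = self.l1.react(p)
        schema = self.l2.react(p)
        return self.l3.react(p, reflex or schema)
```

For example, a high-intensity percept triggers the reflex layer regardless of learned associations, mirroring the multilevel precedence the theory posits.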

Multi-stream Confidence Analysis for Audio-Visual Affect Recognition

Zhihong Zeng; Jilin Tu; Ming Liu; Thomas S. Huang

Changes in a speaker’s emotion are a fundamental component of human communication. Some emotions motivate human actions while others add deeper meaning and richness to human interactions. In this paper, we explore the development of a computing algorithm that uses audio and visual sensors to recognize a speaker’s affective state. Within the framework of the Multi-stream Hidden Markov Model (MHMM), we analyze audio and visual observations to detect 11 cognitive/emotive states. We investigate the use of individual modality confidence measures as a means of estimating weights when combining likelihoods in audio-visual decision fusion. Person-independent experimental results from 20 subjects in 660 sequences suggest that using stream exponents estimated on training data improves the classification accuracy of audio-visual affect recognition.

- Affective Interaction and Systems and Applications | Pp. 964-971
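Decision fusion with confidence-derived stream exponents, as described above, amounts to a weighted combination of per-class log-likelihoods from the two streams. A minimal sketch, where the normalization scheme and the numbers are illustrative, not the paper's estimated exponents:

```python
def fuse_streams(audio_ll, video_ll, audio_conf, video_conf):
    """Fuse per-class log-likelihoods from audio and visual streams,
    weighting each stream by a confidence-derived exponent."""
    # Normalize the two confidences into exponents that sum to 1.
    total = audio_conf + video_conf
    w_a, w_v = audio_conf / total, video_conf / total
    fused = {c: w_a * audio_ll[c] + w_v * video_ll[c] for c in audio_ll}
    # Decision: the class with the highest fused log-likelihood.
    return max(fused, key=fused.get), fused

label, scores = fuse_streams(
    audio_ll={"joy": -10.0, "anger": -12.5},
    video_ll={"joy": -11.0, "anger": -9.0},
    audio_conf=0.8, video_conf=0.2,
)
```

With a highly confident audio stream, the fused decision here follows audio even though the visual stream alone would have preferred the other class.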

Investigation of Emotive Expressions of Spoken Sentences

Wenjie Cao; Chengqing Zong; Bo Xu

When we meet an emotion keyword in a sentence, or a word that does not directly express emotion but carries an attitude clue, the speaker may simply be stating a fact without any affect; the sentence may express the attitudes or emotive states of the agent in different ways; or it may be another case entirely. Given this, it seems doubtful that the exact communicative emotion function of a sentence can be determined from the “keywords” alone. This paper investigates the collective influence of several factors on the communicative emotion function of a sentence, including emotion keywords and sentence features such as mood, negation, etc. We believe the results will be useful for emotion detection in, or generation of, short sentences.

- Affective Interaction and Systems and Applications | Pp. 972-980
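The interplay of keywords, mood, and negation that the authors investigate can be illustrated with a toy rule-based sketch. The keyword lexicon, negation cues, and rules below are invented for illustration and are not the paper's actual factors or data:

```python
# Toy lexicon and negation cues -- invented for illustration only.
EMOTION_KEYWORDS = {"happy": "joy", "glad": "joy", "furious": "anger"}
NEGATIONS = {"not", "never", "hardly"}

def emotive_function(tokens):
    """Guess the communicative emotion function of a tokenized sentence
    from keywords, nearby negation, and interrogative mood."""
    emotion, negated = None, False
    for i, tok in enumerate(tokens):
        word = tok.lower().strip("?!.,")
        if word in EMOTION_KEYWORDS:
            emotion = EMOTION_KEYWORDS[word]
            # Look back up to two tokens for a negation cue.
            negated = any(t.lower() in NEGATIONS for t in tokens[max(0, i - 2):i])
    if emotion is None:
        return "none"
    if tokens[-1].endswith("?"):
        # Interrogative mood: the keyword need not express the speaker's emotion.
        return "uncertain"
    return f"negated-{emotion}" if negated else emotion
```

Even this toy version shows why keywords alone are unreliable: the same keyword yields different functions under negation or in a question.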

Affective Computing: A Review

Jianhua Tao; Tieniu Tan

Affective computing is currently one of the most active research topics, attracting increasingly intense attention. This strong interest is driven by a wide spectrum of promising applications in areas such as virtual reality, smart surveillance, and perceptual interfaces. Affective computing draws on a multidisciplinary background spanning psychology, cognitive science, physiology, and computer science. This paper emphasizes the several issues involved in the whole interactive feedback loop. Various methods for each issue are discussed in order to examine the state of the art. Finally, some research challenges and future directions are discussed.

- Affective Interaction and Systems and Applications | Pp. 981-995

Personalized Facial Animation Based on 3D Model Fitting from Two Orthogonal Face Images

Yonglin Li; Jianhua Tao

In this paper, a personalized MPEG-4 compliant facial animation system on an embedded platform is presented. We report a semi-automatic and rapid approach to personalized modeling from two orthogonal face images. The approach is easy and efficient. With multi-dimensional texture mapping, the personalized face model exhibits much more lifelike behavior. The system can be used in games, interactive services, etc.

- Affective Interaction and Systems and Applications | Pp. 996-1003
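A common way to fit a 3D model from two orthogonal views, which may approximate the approach sketched in the abstract, is to take (x, y) landmark coordinates from the frontal image and depth (z) from the profile image, reconciling the y coordinate shared by both views. A minimal illustrative sketch; the landmark values are hypothetical:

```python
def landmarks_to_3d(front, profile):
    """Combine matched 2D landmarks from a frontal view (x, y) and a
    profile view (z, y) into 3D points, averaging the shared y coordinate."""
    return [(fx, (fy + py) / 2.0, pz)
            for (fx, fy), (pz, py) in zip(front, profile)]

# Two hypothetical landmarks (nose tip, chin) in normalized image coordinates.
front = [(0.50, 0.55), (0.50, 0.90)]
profile = [(0.30, 0.55), (0.12, 0.92)]
points = landmarks_to_3d(front, profile)
```

The recovered 3D points could then drive the deformation of a generic head mesh toward the person's geometry, as MPEG-4 feature-point-based modeling typically does.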