Publication catalog - books
Affective Computing and Intelligent Interaction: 2nd International Conference, ACII 2007 Lisbon, Portugal, September 12-14, 2007 Proceedings
Ana C. R. Paiva; Rui Prada; Rosalind W. Picard (eds.)
Conference: 2nd International Conference on Affective Computing and Intelligent Interaction (ACII), Lisbon, Portugal, September 12-14, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Not available.
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2007 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-74888-5
Electronic ISBN
978-3-540-74889-2
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Early Prediction of Student Frustration
Scott W. McQuiggan; Sunyoung Lee; James C. Lester
Affective reasoning has been the subject of increasing attention in recent years. Because negative affective states such as frustration and anxiety can impede progress toward learning goals, intelligent tutoring systems should be able to detect when a student is anxious or frustrated. Being able to detect negative affective states early, i.e., before they lead students to abandon learning tasks, could permit intelligent tutoring systems sufficient time to adequately prepare for, plan, and enact affective tutorial support strategies. A first step toward this objective is to develop predictive models of student frustration. This paper describes an inductive approach to student frustration detection and reports on an experiment whose results suggest that frustration models can make predictions early and accurately.
- Evaluating Affective Systems | Pp. 698-709
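The inductive approach described in the abstract above can be illustrated with a minimal sketch: learn a single threshold over an early-interaction feature that best separates frustrated from non-frustrated training episodes, then apply it to new observation windows. The feature name, the data, and the one-dimensional decision stump are illustrative assumptions, not the paper's actual model or features.

```python
# Illustrative sketch of inductive frustration prediction.
# Feature names and thresholds are hypothetical, not from the paper.

def learn_threshold(samples):
    """Pick the error-rate threshold that best separates frustrated
    from non-frustrated early observation windows (a 1-D stump)."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(s["errors_per_min"] for s in samples):
        acc = sum((s["errors_per_min"] >= t) == s["frustrated"]
                  for s in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(threshold, window):
    """Predict frustration for a new early-interaction window."""
    return window["errors_per_min"] >= threshold

# Toy training data drawn from the first seconds of each episode.
train = [
    {"errors_per_min": 0.5, "frustrated": False},
    {"errors_per_min": 1.0, "frustrated": False},
    {"errors_per_min": 3.0, "frustrated": True},
    {"errors_per_min": 4.5, "frustrated": True},
]
t = learn_threshold(train)
print(predict(t, {"errors_per_min": 3.5}))  # True
```

Because the stump is trained only on early-window features, a prediction is available before the student abandons the task, which is the property the paper's experiment evaluates.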
A Novel Feature for Emotion Recognition in Voice Based Applications
Hari Krishna Maganti; Stefan Scherer; Günther Palm
In the context of affective computing, a significant trend in multimodal human-computer interaction focuses on determining the emotional state of users. For constructive and natural human-computer interaction, computers should be able to adapt to the user’s emotional state and respond appropriately. This work proposes a few simple and robust features for determining emotions from speech. Our approach is suitable for voice-based applications, such as call centers or interactive voice systems, which depend on telephone conversations. For a typical call center application, it is crucial to recognize and classify agitated (anger, happiness, fear, and disgust) and calm (neutral, sadness, and boredom) callers so that the system can respond appropriately. For instance, in a typical voice-based application, the system should be able to either apologize or acknowledge the caller’s problem appropriately, if necessary by directing the call to the supervisor concerned.
- Posters | Pp. 710-711
Asymmetry of Left Versus Right Lateral Face in Face Recognition
Wenfeng Chen; Chang Hong Liu; Xiaolan Fu
Prior research has found that the left side of the face is emotionally more expressive than the right side [1]. This was demonstrated in a study where the right and the left halves of a face image were combined with their mirror-reversed duplicates to make composite images. When observers were asked which composite face appeared more emotional, they selected the left-left over the right-right composite more often.
- Posters | Pp. 712-713
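The composite-face construction described above (combining one half of a face image with its mirror image) can be sketched directly on a 2-D pixel grid. The image representation as a list of rows is an assumption made for illustration.

```python
def composite(image, side="left"):
    """Build a left-left or right-right composite from a 2-D image
    (list of rows) with an even number of columns, by joining one
    half-face with its mirror-reversed duplicate."""
    half = len(image[0]) // 2
    out = []
    for row in image:
        part = row[:half] if side == "left" else row[half:]
        mirrored = part[::-1]
        out.append(part + mirrored if side == "left" else mirrored + part)
    return out

# Tiny 2x4 stand-in for a face image.
face = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
print(composite(face, "left"))   # [[1, 2, 2, 1], [5, 6, 6, 5]]
print(composite(face, "right"))  # [[4, 3, 3, 4], [8, 7, 7, 8]]
```

Observers in the cited paradigm compare the two outputs and judge which composite looks more emotional.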
Simulating Dynamic Speech Behaviour for Virtual Agents in Emotional Situations
Artemy Kotov
In this article we demonstrate the application of a linguistic model of d/r-scripts to simulate the speech behaviour of artificial agents in virtual emotional situations. We apply the model to animate the speech behaviour of a virtual agent for a span of about 10-30 seconds. This behaviour can be controlled by short emotional states expressed in communication -
- Posters | Pp. 714-715
A Definition Approach for an “Emotional Turing Test”
Dirk M. Reichardt
There are many modelling approaches for emotional agents. Can they be compared in any way? The intention of this work is to provide a basis for comparison in a small but consistent environment which focuses on the impact of emotions on the decision making of agents. We chose the public goods game with a punishment option as the scenario. Why? In this scenario it has been shown that humans display emotional reactions. An emotional agent should therefore be able to show the same emotions, and the underlying models should be capable of producing them. The simulation and test environment is designed to accommodate any emotional agent model. Ultimately, human players should not be distinguishable from artificial emotional agents.
- Posters | Pp. 716-717
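The public goods game with punishment mentioned above has a simple payoff structure that can be written out directly: contributions are pooled, multiplied, and shared equally, after which contributors may pay a cost to fine low contributors. The multiplier, punishment parameters, and the rule that high contributors punish everyone below a threshold are illustrative choices, not the paper's experimental setup.

```python
def public_goods_round(contributions, multiplier=1.6,
                       punish_cost=1.0, punish_fine=3.0,
                       punish_threshold=2.0):
    """One round of a public goods game with a punishment option.
    Each player who contributed at least `punish_threshold` pays
    `punish_cost` to fine each lower contributor `punish_fine`
    (an illustrative punishment rule)."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n  # equal split of the pot
    payoffs = [share - c for c in contributions]
    for i, ci in enumerate(contributions):
        if ci >= punish_threshold:
            for j, cj in enumerate(contributions):
                if cj < punish_threshold:
                    payoffs[i] -= punish_cost   # punisher pays the cost
                    payoffs[j] -= punish_fine   # free-rider is fined
    return payoffs

# Two cooperators and one free-rider.
print(public_goods_round([2.0, 2.0, 0.0]))
```

The punishment stage is where human players show the emotional reactions the test environment is meant to probe: fining a free-rider is costly and individually irrational, yet humans do it.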
Interpolating Expressions in Unit Selection
Marc Schröder
In expressive speech synthesis, a key challenge is the generation of flexibly varying expressive tone while maintaining the high quality achieved with unit selection speech synthesis methods. Existing approaches have either concentrated on achieving high synthesis quality with no flexibility, or they have aimed at parametric models, requiring the use of parametric synthesis technologies such as diphone, formant or HMM-based synthesis.
- Posters | Pp. 718-720
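The interpolation idea above, blending a neutral voice with an expressive one by a continuous weight, can be sketched as linear interpolation over a set of prosody parameters. The parameter names and values here are hypothetical stand-ins, not the actual unit-selection features used in the paper.

```python
def interpolate(neutral, expressive, alpha):
    """Linearly blend two prosody parameter sets: alpha = 0 gives the
    neutral voice, alpha = 1 the fully expressive one. Parameter
    names are illustrative."""
    return {k: (1 - alpha) * neutral[k] + alpha * expressive[k]
            for k in neutral}

neutral    = {"f0_mean": 120.0, "f0_range": 30.0, "rate": 1.0}
expressive = {"f0_mean": 180.0, "f0_range": 80.0, "rate": 1.3}
print(interpolate(neutral, expressive, 0.5))
```

The appeal of such a scheme is flexibility: any intermediate degree of expressivity is reachable without recording a separate voice database per expression, which is exactly the trade-off against fixed high-quality unit selection that the abstract describes.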
Induction and Evaluation of Affects for Facial Motion Capture
Gaspard Breton; Florence Février; Eric Jamet; Géraldine Rouxel
In this study, we are interested in capturing the facial configuration of affects in order to use them for Embodied Conversational Agents. In order to create a believable affect, it is necessary to capture natural affects that can be learnt and replayed. However, until now, animation data have been extracted from videos, and their description is far from sufficient to generate realistic facial expressions. It seems that believable results cannot be obtained without using 3D motion capture. This is why in this study we set up a protocol for affect induction in a motion capture situation with manipulated subjects who are unaware of the real goals. Similarly to [1], we induce natural affects in order to capture the related facial expressions.
- Posters | Pp. 721-722
Ontology-Driven Affective Chinese Text Analysis and Evaluation Method
Linhong Xu; Hongfei Lin
This paper studies text affective analysis at three different levels: words, sentences and discourses. Firstly, an affective lexicon ontology is constructed by combining manual and automatic classification methods with reference to existing emotional classifications. Affective analysis of sentences and discourses is then carried out using the emotional information in the affective lexicon ontology. For sentence-level recognition, lexical emotion information and semantic features are added to Conditional Random Fields, and the emotional chain of a text document is generated. Finally, the affective structure of a text document is evaluated by two methods: single-sentence evaluation and joint-sentence evaluation. In analyzing discourses, affective tone is acquired through the Emotional Largest Chain method and a Support Vector Machine method.
- Posters | Pp. 723-724
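The pipeline above, from an affective lexicon to a per-sentence label to a document-level "emotional chain", can be sketched with a toy lexicon lookup in place of the paper's lexicon ontology and Conditional Random Fields. The lexicon entries and the majority-vote rule are illustrative assumptions.

```python
# Toy affective lexicon standing in for the paper's lexicon ontology.
LEXICON = {"happy": "joy", "delighted": "joy",
           "sad": "sorrow", "angry": "anger"}

def sentence_emotion(sentence):
    """Majority emotion among lexicon hits; 'neutral' if none.
    (The paper uses CRFs with lexical and semantic features here.)"""
    hits = [LEXICON[w] for w in sentence.lower().split() if w in LEXICON]
    if not hits:
        return "neutral"
    return max(set(hits), key=hits.count)

def emotional_chain(document):
    """Sequence of sentence-level emotions across a document."""
    return [sentence_emotion(s) for s in document]

doc = ["I was so happy and delighted",
       "then the news made me sad"]
print(emotional_chain(doc))  # ['joy', 'sorrow']
```

A discourse-level affective tone could then be derived from the chain, e.g. by finding its longest run of one emotion, loosely analogous to the Emotional Largest Chain idea.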
Modeling the Dynamic Nonlinear Nature of Emotional Phenomena
Luís Morgado; Graça Gaspar
The study of emotional phenomena, particularly the development of emotion models for intelligent agent implementation, has been based mainly on a perspective of emotion as a human phenomenon and on the assumption that emotions can be divided into discrete and independent categories. We propose an alternative model that emphasizes the continuous nonlinear nature of emotion processes, where emotional phenomena are rooted not in high-level cognitive or even nervous structures, but in biophysical principles that are pervasive among biological organisms.
- Posters | Pp. 725-726
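A continuous nonlinear view of emotion like the one advocated above is often formalized as a dynamical system: intensity responds nonlinearly (with saturation) to stimulus and decays over time. The specific update equation below is an illustrative stand-in, not the authors' model.

```python
import math

def step(intensity, stimulus, decay=0.3, gain=1.0, dt=0.1):
    """One Euler step of a toy emotion-intensity process: a saturating
    (tanh) response to stimulus plus exponential decay. Parameters
    and the equation itself are illustrative, not the paper's model."""
    d = gain * math.tanh(stimulus) - decay * intensity
    return intensity + dt * d

# Sustained stimulus drives intensity toward a saturation level
# instead of growing without bound.
i = 0.0
for _ in range(50):
    i = step(i, stimulus=2.0)
print(round(i, 3))
```

The point of the sketch is the contrast with discrete-category models: intensity here is a continuous trajectory with its own dynamics rather than a switch between independent labels.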
Deception Detection Via Blob Motion Pattern Analysis
Fan Xia; Hong Wang; Junxian Huang
Deception detection is one of the most difficult problems in the area of affect recognition and expression research. Recently, non-verbal methods of detecting deception have appeared promising. Thomas [1] presented a proof-of-concept study based on blob analysis of video clips of suspect interviews and mock experiments. In this paper, we present our recent research toward developing an automated deception detection system. We propose a blob motion pattern analysis approach to this problem.
- Posters | Pp. 727-728
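The core of a blob motion pattern analysis like the one proposed above is tracking the centroid of a segmented blob (e.g. a hand or head region) across frames and summarizing its displacement. The tiny binary frames and the single-blob assumption below are illustrative, not the paper's actual segmentation or features.

```python
def blob_centroid(frame, threshold=0):
    """Centroid (row, col) of all pixels above threshold in a 2-D
    frame, or None if no foreground pixel is found."""
    pts = [(r, c) for r, row in enumerate(frame)
           for c, v in enumerate(row) if v > threshold]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def motion_pattern(frames):
    """Per-frame centroid displacement of the (single) bright blob --
    a crude stand-in for blob trajectory features."""
    cs = [blob_centroid(f) for f in frames]
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(cs, cs[1:])]

# A 3x3 blob moving one pixel to the right between two frames.
f1 = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
f2 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
print(motion_pattern([f1, f2]))  # [(0.0, 1.0)]
```

Statistics over such displacement sequences (frequency, amplitude, stillness) are the kind of motion pattern features a deception classifier could then consume.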