Publications catalog - books

Affective Computing and Intelligent Interaction: 2nd International Conference, ACII 2007 Lisbon, Portugal, September 12-14, 2007 Proceedings

Ana C. R. Paiva; Rui Prada; Rosalind W. Picard (eds.)

Conference: 2nd International Conference on Affective Computing and Intelligent Interaction (ACII), Lisbon, Portugal, September 12-14, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution | Publication year | Browse | Download | Request
Not detected | 2007 | SpringerLink | - | -

Information

Resource type:

books

Print ISBN

978-3-540-74888-5

Electronic ISBN

978-3-540-74889-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Affective Video Data Collection Using an Automobile Simulator

Tevfik Metin Sezgin; Peter Robinson

The correlation between driver state and safety has increased interest in automated systems that can infer the physical and mental state of drivers [2,3]. So far, most research on automated recognition of driver state has focused on physical state. More recently, researchers have proposed inferring the mental states of drivers by extracting drivers’ facial expressions from video data. Although systems that infer physical state have been evaluated in simulators or in real driving conditions, this has not been the case for systems inferring mental states, which rely primarily on video data. Consequently, issues regarding the design of an experimental setup for collecting such affective data have not been explored. Here, we describe our experimental setup for collecting video data from drivers.

- Posters | Pp. 770-771

Mixed Feelings About Using Phoneme-Level Models in Emotion Recognition

Hannes Pirker

This study deals with the application of MFCC-based models to both the recognition of emotional speech and the recognition of emotions in speech. More specifically, it investigates the performance of phone-level models. First, results from performing forced alignment for phonetic segmentation on GEMEP, a novel multimodal corpus of acted emotional utterances, are presented; the newly acquired segmentations are then used for experiments in emotion recognition.

- Posters | Pp. 772-773
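
A minimal sketch of the MFCC front end the abstract above refers to, assuming the librosa library and a placeholder file name (utterance.wav), neither of which is named in the paper; the pooling step below is a generic stand-in, not the phone-level models the paper evaluates:

    import librosa
    import numpy as np

    # Placeholder input file; the GEMEP corpus is not assumed here
    y, sr = librosa.load("utterance.wav", sr=16000)

    # 13 MFCCs per 25 ms frame with a 10 ms hop, a common speech front end
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr),
                                hop_length=int(0.010 * sr))

    # Utterance-level summary (mean and standard deviation per coefficient);
    # phone-level modelling, as in the paper, would instead score frames
    # against per-phone models after forced alignment
    features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    print(features.shape)  # (26,)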

Emotional Faces in Peripheral Crowding

Tian Ran; Wenfeng Chen; Xiaolan Fu

The crowding effect refers to the deficit in identifying viewed targets, such as letters, numerals, line segments, or grating patches, when other shapes are nearby. The effect is reduced when the distractors differ from the target in color, contrast, or binocular disparity. With feature-singleton targets, the crowding effect decreases dramatically as the number of distractors increases, for both simple orientation tasks and more complex letter-identification tasks [4]. With a target that is not a salient feature singleton, however, an increasing number of distractors worsens rather than improves perception of the target [2].

- Posters | Pp. 774-775

Affect Related Acoustic Features of Speech and Their Modification

Dandan Cui; Fanbo Meng; Lianhong Cai; Liuyi Sun

This paper presents our recent work on the investigation of affect-related acoustic features of Mandarin speech and their modification. A syllabic F0 contour model, the so-called "Gravity Model", is proposed; it describes the detailed changes of the F0 contour at the syllable level between different emotions. Different representations of the spectrum, together with different modification methods, are then tried and compared; a maximal-mean amplitude representation proves to be the best, with its corresponding modification method.

- Posters | Pp. 776-777
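
A minimal sketch of syllable-level F0 contour extraction along the lines the abstract above describes, assuming librosa's pYIN pitch tracker and a placeholder file name (syllable.wav); the quadratic fit is a generic illustration, not the paper's Gravity Model:

    import librosa
    import numpy as np

    # Placeholder input: one syllable of speech
    y, sr = librosa.load("syllable.wav", sr=16000)

    # Frame-level F0 estimates via pYIN; unvoiced frames come back as NaN
    f0, voiced_flag, _ = librosa.pyin(y, fmin=60.0, fmax=500.0, sr=sr)
    f0 = f0[voiced_flag]  # keep voiced frames only

    # Fit a quadratic over normalized time as a compact description of the
    # syllabic F0 contour; editing such coefficients per emotion is a
    # generic stand-in for the modification methods compared in the paper
    t = np.linspace(0.0, 1.0, num=len(f0))
    coeffs = np.polyfit(t, f0, deg=2)
    print(coeffs)  # curvature, slope, offset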