Publications catalog - books



Affective Computing and Intelligent Interaction: 1st International Conference, ACII 2005, Beijing, China, October 22-24, 2005, Proceedings

Jianhua Tao; Tieniu Tan; Rosalind W. Picard (eds.)

In conference: 1st International Conference on Affective Computing and Intelligent Interaction (ACII). Beijing, China. October 22-24, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability
Detected institution: not detected
Publication year: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-29621-8

Electronic ISBN

978-3-540-32273-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Real-Time Facial Expression Recognition System Based on HMM and Feature Point Localization

Yumei Fan; Ning Cheng; Zhiliang Wang; Jiwei Liu; Changsheng Zhu

Recognizing facial expressions in real time is difficult for a computer, and no fully satisfactory method has yet been proposed, at home or abroad. In this paper, we present an effective automated system for recognizing facial gestures in real time. Following psychologists' classification of facial expressions, we define four basic expressions, localize key facial feature points precisely, and extract the contours of the facial components. We analyze and record the movements of these components across sequential frames to recognize facial gestures. Since different expressions can share some of the same movements, a good facial expression model is needed to describe the relation between expression states and the observed states of facial-component movements in order to achieve good recognition results. The HMM meets this requirement: we present a facial expression model based on HMMs and obtain good real-time recognition results.

- Affective Face and Gesture Processing | Pp. 210-217
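
The HMM-based recognition step described above can be illustrated with the forward algorithm, which scores a quantized movement sequence under competing expression models and picks the best-scoring model. The two toy two-state models and the observation coding below are invented for illustration; they are not the authors' trained parameters:

```python
def forward_likelihood(obs, start, trans, emit):
    """Likelihood of an observation sequence under an HMM (forward algorithm)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

# Two toy 2-state expression models; observations are quantized
# facial-component movements (0 = brow movement, 1 = mouth movement).
smile = dict(start=[0.6, 0.4],
             trans=[[0.7, 0.3], [0.4, 0.6]],
             emit=[[0.2, 0.8], [0.5, 0.5]])
frown = dict(start=[0.5, 0.5],
             trans=[[0.6, 0.4], [0.5, 0.5]],
             emit=[[0.9, 0.1], [0.7, 0.3]])

seq = [1, 1, 0, 1]  # an observed movement sequence
scores = {name: forward_likelihood(seq, **m)
          for name, m in (("smile", smile), ("frown", frown))}
best = max(scores, key=scores.get)  # model with the highest likelihood wins
```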

Discriminative Features Extraction in Minor Component Subspace

Wenming Zheng; Cairong Zou; Li Zhao

In this paper, we propose a new method for extracting discriminative features for classification from a given training dataset. The proposed method combines the advantages of the null space method and the maximum margin criterion (MMC) method while overcoming their drawbacks. The improved performance of the proposed method is confirmed by face recognition experiments.

- Affective Face and Gesture Processing | Pp. 218-223
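
The maximum margin criterion mentioned in this abstract selects projection directions w that maximize w^T (Sb - Sw) w, i.e. the leading eigenvectors of the between-class minus within-class scatter. A minimal pure-Python sketch on invented 2-D data (the paper's null-space combination is not reproduced here):

```python
def mean(xs):
    d = len(xs[0])
    return [sum(x[i] for x in xs) / len(xs) for i in range(d)]

def scatter(xs, m):
    """Sum of outer products of deviations from the mean m."""
    d = len(m)
    S = [[0.0] * d for _ in range(d)]
    for x in xs:
        for i in range(d):
            for j in range(d):
                S[i][j] += (x[i] - m[i]) * (x[j] - m[j])
    return S

# Invented 2-D features for two classes, separable along the first axis.
cls_a = [[0.0, 0.0], [0.2, 0.1], [-0.1, -0.1]]
cls_b = [[3.0, 0.1], [3.2, -0.1], [2.9, 0.0]]
ma, mb = mean(cls_a), mean(cls_b)
mg = [(ma[i] + mb[i]) / 2 for i in range(2)]  # global mean (equal class sizes)

S_between = scatter([ma, mb], mg)
Sa, Sb = scatter(cls_a, ma), scatter(cls_b, mb)
S_within = [[Sa[i][j] + Sb[i][j] for j in range(2)] for i in range(2)]
M = [[S_between[i][j] - S_within[i][j] for j in range(2)] for i in range(2)]

# Power iteration for the leading eigenvector of Sb - Sw:
# the most discriminative direction under MMC.
w = [1.0, 1.0]
for _ in range(50):
    w = [sum(M[i][j] * w[j] for j in range(2)) for i in range(2)]
    norm = sum(c * c for c in w) ** 0.5
    w = [c / norm for c in w]
```

On this data the recovered direction aligns with the first axis, the one that separates the classes.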

An Information Acquiring Channel: Lip Movement

Xiaopeng Hong; Hongxun Yao; Qinghui Liu; Rong Chen

This paper argues that lip movement is a viable channel for information acquisition. The case is made through two applications built on lip-movement information alone: lip-reading and lip-movement utterance recognition. In experiments to date, the speaker-dependent lip-reading system achieves 68% accuracy, while utterance recognition achieves over 99.5% accuracy in the test-independent (TI) condition and nearly 100% in the test-dependent (TD) condition. It follows that the lip-movement channel is effective and can be applied on its own.

- Affective Face and Gesture Processing | Pp. 232-238

Content-Based Affective Image Classification and Retrieval Using Support Vector Machines

Qingfeng Wu; Changle Zhou; Chaonan Wang

In this paper a new method for classifying and retrieving affective images is proposed. First, users describe the affective semantics of images with adjective words; the data gathered with the Semantic Differential method are processed to obtain the main factors of affect and establish an affective space; low-level visual features of the images are extracted to construct a visual feature space; and the correlation between the affective space and the visual feature space is learned with SVMs. A prototype system embodying the trained SVMs has been implemented; it classifies images automatically and supports affective image retrieval. The experimental results demonstrate the effectiveness of this method.

- Affective Face and Gesture Processing | Pp. 239-247
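
The correlation step described above maps low-level visual features to affective classes with SVMs. As a hedged sketch, a simple Pegasos-style stochastic subgradient trainer for a linear SVM stands in for a full SVM library; the two features and the class labels are invented for illustration:

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200):
    """Pegasos-style stochastic subgradient trainer for a linear SVM."""
    random.seed(0)
    d = len(data[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in random.sample(range(len(data)), len(data)):
            t += 1
            eta = 1.0 / (lam * t)
            x, y = data[i], labels[i]
            margin = y * (sum(w[j] * x[j] for j in range(d)) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss subgradient step
                w = [w[j] + eta * y * x[j] for j in range(d)]
                b += eta * y
    return w, b

# Invented two-dimensional "visual features" for two affective classes.
X = [[0.9, 0.2], [0.8, 0.3], [0.7, 0.1], [0.1, 0.8], [0.2, 0.9], [0.15, 0.7]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
pred = [1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1 for x in X]
```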

A Novel Real Time System for Facial Expression Recognition

Xiaoyi Feng; Matti Pietikäinen; Abdenour Hadid; Hongmei Xie

In this paper, a fully automatic, real-time system is proposed for recognizing seven basic facial expressions (anger, disgust, fear, happiness, neutral, sadness and surprise) that is insensitive to illumination changes. First, the face is located and normalized using an illumination-insensitive skin model and face segmentation; then the basic Local Binary Patterns (LBP) operator, which is invariant to monotonic grey-level changes, is used for facial feature extraction; finally, a coarse-to-fine scheme is used for expression classification. Theoretical analysis and experimental results show that the proposed system performs well under variable illumination and some degree of head rotation.

- Affective Face and Gesture Processing | Pp. 248-256
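
The basic LBP operator cited above thresholds each pixel's neighbours against the centre pixel and packs the results into an 8-bit code; because only the ordering of grey levels matters, the code is unchanged by monotonic grey-level transformations. A minimal sketch on a single 3x3 patch (pixel values invented):

```python
def lbp_code(patch):
    """Basic 8-neighbour LBP code for a 3x3 grayscale patch: each neighbour
    is compared to the centre and the results are packed into 8 bits."""
    c = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, p in enumerate(ring):
        if p >= c:
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
# A monotonic grey-level change (gain and offset) leaves the code unchanged,
# which is the illumination robustness the system relies on.
brighter = [[2 * v + 5 for v in row] for row in patch]
```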

Fist Tracking Using Bayesian Network

Peng Lu; Yufeng Chen; Mandun Zhang; Yangsheng Wang

This paper presents a Bayesian-network-based multi-cue fusion method for robust, real-time fist tracking. First, a new strategy, which employs recent work in face recognition, is used to create an accurate color model of the fist automatically. Second, the color and motion cues are used to generate candidate positions for the fist. The posterior probability of each candidate is then evaluated by a Bayesian network that fuses the color and appearance cues. Finally, the fist position is taken as the hypothesis that maximizes the posterior probability. Experimental results show that the algorithm is real-time and robust.

- Affective Face and Gesture Processing | Pp. 257-262
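
The fusion step described above can be sketched as evaluating, for each candidate fist position, a posterior proportional to the product of a prior and independent cue likelihoods, then keeping the MAP hypothesis. The positions and probability values below are hypothetical:

```python
def map_position(candidates):
    """Fuse independent cue likelihoods into a posterior over candidate
    fist positions and return the MAP estimate."""
    scores = {pos: prior * p_color * p_app
              for pos, prior, p_color, p_app in candidates}
    total = sum(scores.values())
    posteriors = {pos: s / total for pos, s in scores.items()}
    best = max(posteriors, key=posteriors.get)
    return best, posteriors

# Hypothetical candidates proposed by the colour and motion cues:
# (position, prior, colour-cue likelihood, appearance-cue likelihood).
candidates = [
    ((120, 80), 0.4, 0.7, 0.9),   # consistent in both cues
    ((200, 60), 0.3, 0.8, 0.2),   # good colour match, poor appearance match
    ((90, 140), 0.3, 0.3, 0.4),
]
best, post = map_position(candidates)
```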

Grounding Affective Dimensions into Posture Features

Andrea Kleinsmith; P. Ravindra De Silva; Nadia Bianchi-Berthouze

Many areas of today’s society are seeing an increased need for systems capable of interacting with users on an affective level through a variety of modalities. Our focus has been on affective posture recognition; however, a deeper understanding of the relationship between emotions and their postural expression is required. The goal of this study was to identify the affective dimensions that human observers use when discriminating between postures, and to investigate the possibility of grounding this affective space in a set of posture features. Using multidimensional scaling, arousal, valence, and action tendency were identified as the main factors in the evaluation process. Our results showed that low-level posture features could indeed discriminate effectively between the affective dimensions.

- Affective Face and Gesture Processing | Pp. 263-270
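
Multidimensional scaling, as used above, recovers coordinates from pairwise dissimilarities. A minimal classical-MDS sketch (double centering plus power iteration, one recovered dimension) on invented dissimilarities between four postures; the paper's actual ratings and dimensions are not reproduced:

```python
def classical_mds_1d(D):
    """Recover the leading MDS coordinate from a distance matrix:
    double-centre B = -1/2 * J D^2 J, then scale its top eigenvector."""
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(D2[i]) / n for i in range(n)]
    grand = sum(row) / n
    B = [[-0.5 * (D2[i][j] - row[i] - row[j] + grand) for j in range(n)]
         for i in range(n)]
    v = [1.0] + [0.0] * (n - 1)  # asymmetric start for power iteration
    lam = 0.0
    for _ in range(100):
        v = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(c * c for c in v) ** 0.5
        v = [c / lam for c in v]
    return [lam ** 0.5 * c for c in v]

# Invented dissimilarities between four postures on one affective dimension.
pts = [0.0, 1.0, 2.0, 5.0]
D = [[abs(p - q) for q in pts] for p in pts]
coords = classical_mds_1d(D)  # recovers the layout up to shift and sign
```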

Intonation Modelling and Adaptation for Emotional Prosody Generation

Zeynep Inanoglu; Steve Young

This paper proposes an HMM-based approach to generating emotional intonation patterns. A set of models was built to represent syllable-length intonation units. In a classification framework, the models were able to detect a sequence of intonation units from raw fundamental frequency values. Using the models in a generative framework, we were able to synthesize smooth and natural-sounding pitch contours. As a case study for emotional intonation generation, Maximum Likelihood Linear Regression (MLLR) adaptation was used to transform the neutral model parameters with a small amount of happy and sad speech data. Perceptual tests showed that listeners could identify the speech with the sad intonation 80% of the time. On the other hand, listeners formed a bimodal distribution in their ability to detect the system-generated happy intonation, and on average they were able to detect happy intonation only 46% of the time.

- Affective Speech Processing | Pp. 286-293
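
MLLR, as used above, adapts Gaussian mean parameters with a shared affine transform estimated from a small amount of adaptation data, so every model shifts even when only some were observed. A scalar least-squares sketch with invented F0 values (real MLLR estimates full transform matrices from EM statistics):

```python
def mllr_scalar(neutral_means, adapt_data):
    """Least-squares fit of a shared affine transform mu' = a * mu + b
    from (component index, adaptation sample) pairs."""
    xs = [neutral_means[c] for c, _ in adapt_data]
    ys = [obs for _, obs in adapt_data]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * yv for x, yv in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Invented neutral F0 means (Hz) for three syllable-length intonation models.
neutral = [120.0, 150.0, 200.0]
# A little "sad" adaptation data: (model index, observed F0), lower and flatter.
sad = [(0, 112.0), (1, 136.0), (2, 176.0), (0, 112.0), (2, 176.0)]
a, b = mllr_scalar(neutral, sad)
adapted = [a * m + b for m in neutral]  # all three means transformed at once
```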

Application of Psychological Characteristics to D-Script Model for Emotional Speech Processing

Artemy Kotov

The d-script model was originally developed for describing affective (emotional) mass-media texts and, with extensions, also applies to emotional speech synthesis. The model distinguishes units for “rational” inference (r-scripts) from units for “emotional” processing of meaning (d-scripts). Based on a psycholinguistic study, we demonstrate relations between classes of emotional utterances in the d-script model and the psychological characteristics of informants. The study proposes a theoretical framework for an affective agent that simulates given psychological characteristics in its emotional speech behaviour.

- Affective Speech Processing | Pp. 294-302

A Hybrid GMM and Codebook Mapping Method for Spectral Conversion

Yongguo Kang; Zhiwei Shuang; Jianhua Tao; Wei Zhang; Bo Xu

This paper proposes a new mapping method combining GMM and codebook mapping to transform the spectral envelope in a voice conversion system. After analyzing the over-smoothing problem of the GMM mapping method in detail, we propose converting the basic spectral envelope with the GMM method and converting the envelope-subtracted spectral details with a combined GMM and phone-tied codebook mapping method. Objective evaluations based on performance indices show that the proposed mapping method improves on the GMM mapping method by 27.2017% on average, and listening tests confirm that the proposed method effectively reduces the over-smoothing problem of the GMM method while avoiding the discontinuity problem of codebook mapping.

- Affective Speech Processing | Pp. 303-310
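
The GMM mapping function referenced above predicts target spectra as a posterior-weighted sum of per-component conditional means; the averaging across components between the means is also the source of the over-smoothing the paper addresses. A scalar sketch with invented parameters:

```python
import math

def gmm_convert(x, comps):
    """Scalar GMM mapping function: posterior-weighted sum of per-component
    conditional means, F(x) = sum_m p(m|x) * (mu_y + (cov_xy / var_x) * (x - mu_x))."""
    liks = [w * math.exp(-0.5 * (x - mu_x) ** 2 / var_x)
            / math.sqrt(2.0 * math.pi * var_x)
            for w, mu_x, var_x, mu_y, cov_xy in comps]
    total = sum(liks)
    return sum((lik / total) * (mu_y + (cov_xy / var_x) * (x - mu_x))
               for lik, (w, mu_x, var_x, mu_y, cov_xy) in zip(liks, comps))

# Two invented components: (weight, mu_x, var_x, mu_y, cov_xy).
comps = [(0.5, 0.0, 1.0, 10.0, 0.8),
         (0.5, 5.0, 1.0, 20.0, 0.8)]
# Near a component mean the mapping follows that component; midway between the
# means the posteriors split evenly and the outputs average, which is the
# over-smoothing effect the hybrid codebook stage is meant to counter.
```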