Publications catalog - books

Affective Computing and Intelligent Interaction: 1st International Conference, ACII 2005, Beijing, China, October 22-24, 2005, Proceedings

Jianhua Tao; Tieniu Tan; Rosalind W. Picard (eds.)

In conference: 1st International Conference on Affective Computing and Intelligent Interaction (ACII). Beijing, China. October 22-24, 2005

Abstract/description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability
Institution detected: Not detected
Year of publication: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-29621-8

Electronic ISBN

978-3-540-32273-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Expression Recognition Using Elastic Graph Matching

Yujia Cao; Wenming Zheng; Li Zhao; Cairong Zhou

In this paper, we propose a facial expression recognition method based on the elastic graph matching (EGM) approach. The EGM approach is widely considered very effective due to its robustness against face position and lighting variations. Among the feature extraction methods that have been used with EGM, we choose the Gabor wavelet transform for its good performance. In order to effectively represent facial expression information, we choose the fiducial points from the local areas where the distortion caused by expression is obvious. The better performance of the proposed method is confirmed on the JAFFE facial expression database in comparison with some previous works. We achieve an average expression recognition rate as high as 93.4%. Moreover, we obtain face recognition results simultaneously in our experiments.

- Affective Face and Gesture Processing | Pp. 8-15
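
A minimal illustrative sketch of the building blocks this abstract names: Gabor "jets" sampled at fiducial points and a normalized jet similarity of the kind used in elastic graph matching. The filter parameters, fiducial point and toy images are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): Gabor jets at fiducial points and
# a simple normalized jet similarity, the core ingredients of EGM-style matching.
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """Real part of a Gabor filter for one scale/orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_jet(image, point, wavelengths=(4, 8, 16), n_orient=8, size=31):
    """Stack of Gabor responses sampled at a single fiducial point."""
    r, c = point
    half = size // 2
    patch = image[r - half:r + half + 1, c - half:c + half + 1]
    jet = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(size, lam, theta=np.pi * k / n_orient)
            jet.append(float(np.sum(patch * kern)))
    return np.array(jet)

def jet_similarity(j1, j2):
    """Normalized dot product used to compare jets between graphs."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

# Toy usage: compare the same fiducial point in two synthetic "face" images.
img_a = np.random.rand(128, 128)
img_b = img_a + 0.05 * np.random.rand(128, 128)   # slightly perturbed copy
fiducial = (64, 64)                                # e.g., a mouth-corner point
print(jet_similarity(gabor_jet(img_a, fiducial), gabor_jet(img_b, fiducial)))
```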

The Bunch-Active Shape Model

Jingcai Fan; Hongxun Yao; Wen Gao; Yazhou Liu; Xin Liu

Active Shape Model (ASM) is one of the most powerful statistical tools for face image alignment. In this paper, we propose a novel method, called the Bunch-Active Shape Model (Bunch-ASM), based on the standard ASM, to automatically locate facial feature points for face recognition. In Bunch-ASM, the eyes are localized by a face detector and the matching strategy used in Elastic Bunch Graph Matching (EBGM) is introduced. Experimental results show that the Bunch-ASM performs much better than the standard ASM and the ASM with iris refinement.

- Affective Face and Gesture Processing | Pp. 16-23
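
A hedged sketch of the "bunch" matching idea borrowed from EBGM: instead of comparing a candidate landmark profile against a single mean model profile (as in standard ASM), compare it against a bunch of stored example profiles and keep the best score. Profile extraction and search-window details below are placeholders, not the paper's implementation.

```python
# Sketch of bunch matching for one landmark search step (assumed details).
import numpy as np

def best_bunch_score(candidate, bunch):
    """Highest normalized correlation between a candidate profile and a bunch."""
    c = (candidate - candidate.mean()) / (candidate.std() + 1e-12)
    best = -np.inf
    for proto in bunch:
        p = (proto - proto.mean()) / (proto.std() + 1e-12)
        best = max(best, float(np.mean(c * p)))
    return best

def search_landmark(profiles_along_normal, bunch):
    """Pick the position along the search normal whose profile best matches the bunch."""
    scores = [best_bunch_score(prof, bunch) for prof in profiles_along_normal]
    return int(np.argmax(scores))

# Toy usage: 11 candidate positions, profiles of length 9, a bunch of 5 examples.
rng = np.random.default_rng(0)
bunch = [rng.normal(size=9) for _ in range(5)]
candidates = [rng.normal(size=9) for _ in range(11)]
candidates[7] = bunch[2] + 0.01 * rng.normal(size=9)   # plant the true match
print(search_landmark(candidates, bunch))               # -> 7
```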

Facial Signs of Affect During Tutoring Sessions

Dirk Heylen; Mattijs Ghijsen; Anton Nijholt; Rieks op den Akker

An emotionally intelligent tutoring system should be able to take into account relevant aspects of the mental state of the student when providing feedback. The student’s facial expressions, put in context, could provide cues with respect to this state. We discuss the analysis of the facial expressions displayed by students interacting with an Intelligent Tutoring System and our attempts to relate expression, situation and mental state, building on Scherer’s component process model of emotion appraisal.

- Affective Face and Gesture Processing | Pp. 24-31

Towards Unsupervised Detection of Affective Body Posture Nuances

P. Ravindra De Silva; Andrea Kleinsmith; Nadia Bianchi-Berthouze

Recently, researchers have been modeling three to nine discrete emotions for creating affective recognition systems. However, in everyday life, humans use a rich and powerful language for defining a large variety of affective states. Thus, one of the challenging issues in affective computing is to give computers the ability to recognize a variety of affective states using unsupervised methods. In order to explore this possibility, we describe affective postures representing four emotion categories using low-level descriptors. We applied multivariate analysis to recognize and categorize these postures into nuances of these categories. The results obtained show that low-level posture features may be used for this purpose, leaving the naming issue to interactive processes.

- Affective Face and Gesture Processing | Pp. 32-39
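
An illustrative sketch, under stated assumptions, of the unsupervised step described above: clustering low-level posture descriptors without labels to surface nuances within categories. The feature layout (e.g., joint angles or limb distances), cluster count and synthetic data are placeholders, not the authors' setup.

```python
# Unsupervised grouping of synthetic posture descriptors (illustration only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Pretend each row is a posture described by 12 low-level features
# (e.g., relative joint positions, body openness, head inclination).
postures = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(40, 12)),   # one hypothetical nuance
    rng.normal(loc=1.0, scale=0.3, size=(40, 12)),   # another nuance
])

features = StandardScaler().fit_transform(postures)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))   # rough sizes of the discovered posture groups
```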

Face Alignment Under Various Poses and Expressions

Shengjun Xin; Haizhou Ai

In this paper, we present a face alignment system that deals with various poses and expressions. In addition to the global shape model, we use component shape models, such as a mouth shape model and a contour shape model, to achieve a more powerful representation of face components under complex pose and expression variations. Different from the 1-D profile texture feature in classical ASM, we use a 2-D local texture feature for higher accuracy; in order to achieve high robustness and fast speed, it is represented by Haar-wavelet features as in [5]. Extensive experiments are reported to show its effectiveness.

- Affective Face and Gesture Processing | Pp. 40-47
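
A minimal sketch, with assumed geometry and data, of the kind of 2-D local texture measurement mentioned above: a two-rectangle Haar-like response computed from an integral image, as opposed to a classical 1-D ASM profile. It is an illustration of the feature type, not the paper's feature set.

```python
# Two-rectangle Haar-like response via an integral image (illustration only).
import numpy as np

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image (exclusive upper bounds)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return float(total)

def haar_horizontal(ii, r, c, h, w):
    """Left-minus-right two-rectangle response over a local landmark patch."""
    left = rect_sum(ii, r, c, r + h, c + w // 2)
    right = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return left - right

img = np.random.rand(64, 64)
ii = integral_image(img)
print(haar_horizontal(ii, 20, 20, 12, 16))   # one local texture response
```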

Face Tracking Using Mean-Shift Algorithm: A Fuzzy Approach for Boundary Detection

Farhad Dadgostar; Abdolhossein Sarrafzadeh; Scott P. Overmyer

Face and hand tracking are important areas of research, related to adaptive human-computer interfaces and affective computing. In this article we introduce two new methods for boundary detection of the human face in video sequences: (1) edge density thresholding, and (2) fuzzy edge density. We have analyzed these algorithms based on two main factors: convergence speed and stability against white noise. The results show that the “fuzzy edge density” method has an acceptable convergence speed and significant robustness against noise. Based on these results, we believe that this method of boundary detection, together with mean-shift and its variants such as the cam-shift algorithm, can achieve fast and robust tracking of the face in noisy environments, which makes it a good candidate for use with cheap cameras and real-world applications.

- Affective Face and Gesture Processing | Pp. 56-63
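
A hedged sketch of the core update underlying mean-shift tracking, the tracker this boundary-detection work is meant to feed: move a window to the centroid of a per-pixel weight map (e.g., a skin-colour probability image) until it stops moving. The weight map, window size and starting point are made up; the edge-density/fuzzy boundary step is not reproduced.

```python
# One mean-shift window update over a weight map, iterated to convergence.
import numpy as np

def mean_shift_step(weights, center, half_win):
    """Move the window centre to the weight-map centroid inside the window."""
    r, c = center
    r0, r1 = max(r - half_win, 0), min(r + half_win + 1, weights.shape[0])
    c0, c1 = max(c - half_win, 0), min(c + half_win + 1, weights.shape[1])
    win = weights[r0:r1, c0:c1]
    total = win.sum()
    if total <= 0:
        return center
    rows, cols = np.mgrid[r0:r1, c0:c1]
    return (int(round((rows * win).sum() / total)),
            int(round((cols * win).sum() / total)))

def track(weights, start, half_win=15, max_iter=20):
    center = start
    for _ in range(max_iter):
        new_center = mean_shift_step(weights, center, half_win)
        if new_center == center:          # converged
            break
        center = new_center
    return center

# Toy usage: a blob of high "face probability" away from the starting window.
prob = np.zeros((120, 120))
prob[70:90, 80:100] = 1.0
print(track(prob, start=(65, 70)))        # drifts toward the blob centre
```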

Modelling Nonrigid Object from Video Sequence Under Perspective Projection

Guanghui Wang; Yantao Tian; Guoqiang Sun

The paper focuses on the problem of estimating the 3D structure and motion of a nonrigid object from a monocular video sequence. Many previous methods for this problem utilize extensions of the factorization technique based on a rank constraint on the tracking matrix, where the 3D shape of the nonrigid object is expressed as a weighted combination of a set of shape bases. All these solutions are based on the assumption of an affine camera model. This assumption becomes invalid and causes large reconstruction errors when the object is close to the camera. The main contribution of this paper is that we extend these methods to the general perspective camera model. The proposed algorithm iteratively updates the shape and motion from weak perspective projection to full perspective projection by refining the scalars corresponding to the projective depths. Extensive experiments on real sequences validate the effectiveness and improvements of the proposed method.

- Affective Face and Gesture Processing | Pp. 64-71
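
A compact restatement, in assumed notation rather than the paper's own, of the standard shape-basis formulation this abstract builds on: the nonrigid shape as a weighted combination of K shape bases, the resulting rank constraint on the tracking matrix under an affine camera, and the projective depths that an iterative scheme refines toward full perspective.

```latex
% Assumed notation, not taken from the paper.
% Nonrigid shape in frame i as a combination of K shape bases:
S_i = \sum_{k=1}^{K} c_{ik} B_k, \qquad B_k \in \mathbb{R}^{3 \times P}

% Affine (weak-perspective) factorization of the 2F x P tracking matrix W:
W =
\begin{bmatrix}
 c_{11}R_1 & \cdots & c_{1K}R_1 \\
 \vdots    &        & \vdots    \\
 c_{F1}R_F & \cdots & c_{FK}R_F
\end{bmatrix}
\begin{bmatrix} B_1 \\ \vdots \\ B_K \end{bmatrix}
\;\Rightarrow\; \operatorname{rank}(W) \le 3K

% Full perspective introduces per-point projective depths \lambda_{ij}:
\lambda_{ij}\begin{bmatrix} u_{ij} \\ v_{ij} \\ 1 \end{bmatrix}
 = P_i \begin{bmatrix} X_{ij} \\ 1 \end{bmatrix},
\qquad \lambda_{ij}^{(0)} = 1 \text{ (weak perspective)},
% then the depths are re-estimated and W is re-factorized until convergence.
```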

Sketch Based Facial Expression Recognition Using Graphics Hardware

Jiajun Bu; Mingli Song; Qi Wu; Chun Chen; Cheng Jin

In this paper, a novel system is proposed to recognize facial expressions based on face sketches produced by a programmable graphics hardware GPU (Graphics Processing Unit). Firstly, an expression subspace is set up from a corpus of images covering the seven basic expressions. Secondly, by applying a GPU-based edge detection algorithm, real-time facial expression sketch extraction is performed. Subsequently, noise elimination is carried out by a tone mapping operation on the GPU. Then, an ASM instance is trained to track the facial feature points in the sketched face image more efficiently and precisely than on a grey-level image directly. Finally, from the normalized key feature points, an eigen-expression vector is derived as the input to a MSVM (multi-SVM) based expression recognition model, which performs the expression classification. Test expression images are categorized by the MSVM into one of the seven basic expression subspaces. Experiments on a data set containing 500 pictures clearly show the efficacy of the algorithm.

- Affective Face and Gesture Processing | Pp. 72-79
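
An illustrative sketch of only the final stage named above: a multi-class SVM over feature vectors standing in for the eigen-expression vectors. The feature dimensionality, class labels and synthetic data are assumptions; the GPU sketch extraction and ASM tracking stages are not reproduced.

```python
# Multi-class SVM classification over placeholder expression feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "neutral", "sadness", "surprise"]          # seven basic classes

rng = np.random.default_rng(0)
n_per_class, dim = 40, 20        # pretend: 20-D eigen-expression vectors
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, dim))
               for i in range(len(EXPRESSIONS))])
y = np.repeat(np.arange(len(EXPRESSIONS)), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # one-vs-one multi-class SVM
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```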

Facial Expression Recognition Using HLAC Features and WPCA

Fang Liu; Zhi-liang Wang; Li Wang; Xiu-yan Meng

This paper proposes a new facial expression recognition method which combines Higher Order Local Autocorrelation (HLAC) features with Weighted PCA. HLAC features are computed at each pixel of the face image. These features are then integrated with a weight map to obtain a feature vector. We select the weights by combining a statistical method with psychological theory. Experiments on the “CMU-PITTSBURGH AU-Coded Face Expression Image Database” show that our Weighted PCA method can improve the recognition rate significantly without increasing the computation, compared with standard PCA.

- Affective Face and Gesture Processing | Pp. 88-94
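
A hedged sketch of the "weight then project" structure described above: per-pixel feature maps are multiplied by a weight map before PCA. The HLAC mask set and the statistically and psychologically derived weights are not reproduced; the feature maps and the regions emphasised here are invented for illustration.

```python
# Weighted projection of placeholder per-pixel feature maps (illustration only).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_images, h, w = 60, 32, 32
feature_maps = rng.random((n_images, h, w))     # stand-in for per-pixel HLAC maps

weight_map = np.ones((h, w))
weight_map[8:14, 6:26] = 3.0                    # hypothetical eye-region emphasis
weight_map[22:28, 10:22] = 3.0                  # hypothetical mouth-region emphasis

weighted = (feature_maps * weight_map).reshape(n_images, -1)
components = PCA(n_components=10).fit_transform(weighted)
print(components.shape)                          # (60, 10) projected feature vectors
```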

Motion Normalization

Yan Gao; Lizhuang Ma; Zhihua Chen; Xiaomao Wu

This paper presents a very simple but efficient algorithm to normalize all motion data in a database to the same skeleton length. The input motion stream is processed sequentially, and the computation for a single frame at each step requires only the results from the previous step over a neighborhood of nearby backward frames. In contrast to previous motion retargeting approaches, we simplify the constraint condition of the retargeting problem, which leads to simpler solutions. Moreover, we improve Shin et al.’s algorithm [10], which is adopted by Kovar’s widely used footskate cleanup algorithm [6], by adding one case it misses.

- Affective Face and Gesture Processing | Pp. 95-101
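
A minimal sketch of the normalization idea only: rescale each bone offset of a skeleton to a reference bone length and recompute joint positions by walking the hierarchy. The constraint handling and footskate cleanup discussed in the paper are not shown, and the tiny skeleton below is a placeholder.

```python
# Retarget bone lengths to a reference skeleton and rebuild joint positions.
import numpy as np

def normalize_skeleton(offsets, parents, ref_lengths):
    """Rescale each local offset so its length matches the reference skeleton."""
    scaled = []
    for off, ref in zip(offsets, ref_lengths):
        length = np.linalg.norm(off)
        scaled.append(off if length == 0 else off * (ref / length))
    return np.array(scaled)

def joint_positions(offsets, parents):
    """Accumulate local offsets down the hierarchy to get global positions."""
    pos = np.zeros_like(offsets)
    for j, parent in enumerate(parents):
        pos[j] = offsets[j] if parent < 0 else pos[parent] + offsets[j]
    return pos

# Toy 4-joint chain: root -> hip -> knee -> foot, retargeted to unit bone lengths.
parents = [-1, 0, 1, 2]
offsets = np.array([[0.0, 0.0, 0.0],
                    [0.0, -0.9, 0.0],
                    [0.0, -1.3, 0.0],
                    [0.0, -0.7, 0.2]])
ref_lengths = [0.0, 1.0, 1.0, 1.0]
print(joint_positions(normalize_skeleton(offsets, parents, ref_lengths), parents))
```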