

Affective Computing and Intelligent Interaction: 1st International Conference, ACII 2005, Beijing, China, October 22-24, 2005, Proceedings

Jianhua Tao ; Tieniu Tan ; Rosalind W. Picard (eds.)

Conference: 1st International Conference on Affective Computing and Intelligent Interaction (ACII). Beijing, China. October 22, 2005 - October 24, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Publication year: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-29621-8

Electronic ISBN

978-3-540-32273-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Fusing Face and Body Display for Bi-modal Emotion Recognition: Single Frame Analysis and Multi-frame Post Integration

Hatice Gunes; Massimo Piccardi

This paper presents an approach to automatic visual emotion recognition from two modalities: expressive face and body gesture. Face and body movements are captured simultaneously using two separate cameras. For each face and body image sequence, single “expressive” frames are selected manually for analysis and recognition of emotions. First, individual classifiers are trained on the individual modalities for mono-modal emotion recognition. Second, we fuse facial expression and affective body gesture information at the feature level and at the decision level. In our experiments, emotion classification using the two modalities achieved better recognition accuracy than classification using the facial modality alone. We further extend the affect analysis to whole image sequences with a multi-frame post-integration approach over the single-frame recognition results. In our experiments, post integration based on the fusion of face and body was shown to be more accurate than post integration based on the facial modality only.

- Affective Face and Gesture Processing | Pp. 102-111
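The two fusion levels described in the abstract above can be sketched in a few lines. The class probabilities, class order, and equal weighting below are illustrative assumptions, not the authors' trained models:

```python
import numpy as np

# Hypothetical per-class probability outputs from two mono-modal
# classifiers (face and body) for one test frame; the class order
# is assumed to be the same for both modalities.
face_probs = np.array([0.60, 0.25, 0.15])
body_probs = np.array([0.40, 0.45, 0.15])

def fuse_decisions(p_face, p_body, w_face=0.5):
    """Decision-level fusion: weighted sum of class posteriors."""
    fused = w_face * p_face + (1.0 - w_face) * p_body
    return fused / fused.sum()

def fuse_features(x_face, x_body):
    """Feature-level fusion: concatenate the two modality feature
    vectors before a single classifier is trained on the result."""
    return np.concatenate([x_face, x_body])

fused = fuse_decisions(face_probs, body_probs)
predicted_class = int(np.argmax(fused))
```

Decision-level fusion keeps the mono-modal classifiers intact, while feature-level fusion requires retraining one classifier on the joint vector.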

A Composite Method to Extract Eye Contour

Ke Sun; Hong Wang

An eye contour extraction method is proposed that combines a simplified version of the Active Shape Model (ASM) with a gradient method. Given the large amount of computation ASM requires, it is used only to extract the eyelids. Since the iris has a more regular shape, it is detected by the simple but fast gradient method, which is improved by introducing the gradient value into the weight matrix. Our detection method has been implemented in the C programming language, and experimental results show good accuracy and efficiency.

- Affective Face and Gesture Processing | Pp. 112-118
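The gradient-based iris step might look like the following sketch, which scores a candidate circle by the image-gradient magnitude along its boundary. The sampling scheme and toy image are assumptions, since the abstract gives no formulas:

```python
import numpy as np

def iris_circle_score(gray, cx, cy, r, n_samples=64):
    """Score a candidate iris circle (cx, cy, r) by the mean gradient
    magnitude sampled along its boundary. Using the gradient value
    itself as the weight (rather than a binary edge map) echoes the
    refinement the abstract alludes to; details here are assumptions."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, gray.shape[0] - 1)
    return float(mag[ys, xs].mean())

# Toy image: a dark disc (the "iris") on a bright background.
img = np.full((64, 64), 200.0)
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2] = 40.0

# A circle matching the disc boundary should outscore a misplaced one.
good = iris_circle_score(img, 32, 32, 10)
bad = iris_circle_score(img, 10, 10, 10)
```

In practice the score would be maximized over a small grid of candidate centers and radii inside the eye region found by ASM.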

Simulated Annealing Based Hand Tracking in a Discrete Space

Wei Liang; Yunde Jia; Yang Liu; Cheng Ge

Hand tracking is a challenging problem due to the complexity of searching a 20+ degrees-of-freedom (DOF) space for an optimal estimate of the hand configuration. This paper represents the feasible hand configurations as a discrete space, which avoids the parameter learning that general configuration-space representations require. We then propose an extended simulated annealing method with a particle filter to search for the optimal hand configuration in this discrete space, in which a simplex search running on multiple processors is designed to predict the hand motion instead of initializing the simulated annealing randomly, and a particle filter is employed to represent the state of the tracker at each layer when searching the high-dimensional configuration space. The experimental results show that the proposed method makes hand tracking more efficient and robust.

- Affective Face and Gesture Processing | Pp. 119-126
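A minimal sketch of simulated annealing over a discrete configuration space, in the spirit of the abstract above. The synthetic cost function, neighbourhood, and cooling schedule are illustrative assumptions; the paper's particle-filter and multi-processor simplex components are not reproduced:

```python
import math
import random

random.seed(0)

# Hypothetical discrete space of feasible hand configurations: each
# "configuration" is just an index with a synthetic matching cost.
# In the paper, the cost would compare a hand model with the image.
costs = [abs(i - 37) + 3.0 * math.sin(i) for i in range(100)]

def anneal(costs, t0=10.0, cooling=0.95, steps=400):
    """Simulated annealing over a discrete index space: propose a
    neighbouring configuration and accept uphill moves with a
    temperature-dependent probability."""
    state = random.randrange(len(costs))
    best_state = state
    temp = t0
    for _ in range(steps):
        step = random.choice([-3, -2, -1, 1, 2, 3])
        proposal = min(len(costs) - 1, max(0, state + step))
        delta = costs[proposal] - costs[state]
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            state = proposal
        if costs[state] < costs[best_state]:
            best_state = state
        temp *= cooling
    return best_state

best = anneal(costs)
```

The high initial temperature lets the search escape local minima; as the temperature decays, acceptance of uphill moves becomes rare and the search settles into a low-cost basin.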

Modulation of Attention by Faces Expressing Emotion: Evidence from Visual Marking

Fang Hao; Hang Zhang; Xiaolan Fu

Recent findings demonstrate that negative emotional faces (sad, angry, or fearful) tend to attract attention more than positive faces do. This study used the visual marking paradigm to test this view and to explore whether the preview benefit still exists when schematic faces are used as materials. The results showed that the preview benefit was significant in the search of affective materials. In the gap condition, negative faces were found faster than positive faces. However, this advantage did not appear in the half-element condition when negative faces served as distractors, which indicates that the view that negative faces capture attention more efficiently does not always hold.

- Affective Face and Gesture Processing | Pp. 127-134

Automatic Facial Expression Recognition Using Linear and Nonlinear Holistic Spatial Analysis

Rui Ma; Jiaxin Wang

This paper is concerned with holistic spatial analysis of facial expression images. We present a systematic comparison of machine learning methods applied to the problem of automatic facial expression recognition, including supervised and unsupervised subspace analysis, SVM classifiers, and their nonlinear versions. Image-based holistic spatial analysis is well adapted to the recognition task in that it automatically learns the inner structure of the training samples and extracts the features most pertinent for classification. Nonlinear analysis methods, which can extract higher-order dependencies among input patterns, are expected to improve classification performance. Surprisingly, the linear classifiers outperformed their nonlinear versions in our experiments. We also propose a new feature selection method named Weighted Saliency Maps (WSM). Compared to other feature selection schemes such as AdaBoost and PCA, WSM has the advantage of being simple, fast, and flexible.

- Affective Face and Gesture Processing | Pp. 144-151
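The abstract does not define Weighted Saliency Maps, but per-feature class-discriminability weighting is commonly computed with a Fisher ratio. The sketch below shows that generic idea on synthetic data; it is a stand-in, not the authors' WSM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": two expression classes that differ only in the
# first 5 of 20 pixel features.
n, d, informative = 200, 20, 5
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(0.0, 1.0, (n, d))
X1[:, :informative] += 2.0   # class difference on the informative pixels

def fisher_saliency(Xa, Xb):
    """Per-feature Fisher ratio: between-class scatter over
    within-class scatter, computed independently per pixel."""
    mu_a, mu_b = Xa.mean(0), Xb.mean(0)
    var_a, var_b = Xa.var(0), Xb.var(0)
    return (mu_a - mu_b) ** 2 / (var_a + var_b + 1e-12)

w = fisher_saliency(X0, X1)
top = np.argsort(w)[::-1][:informative]   # highest-saliency pixels
```

Such a weight map can be thresholded to select features or used to weight pixels before a linear classifier, which matches the "simple, fast and flexible" spirit claimed for WSM.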

A Novel Regularized Fisher Discriminant Method for Face Recognition Based on Subspace and Rank Lifting Scheme

Wen-Sheng Chen; Pong Chi Yuen; Jian Huang; Jianhuang Lai; Jianliang Tang

The null space of the total scatter matrix contains no useful information for pattern classification, so discarding it achieves dimensionality reduction without loss of discriminant power. Combining this subspace technique with the proposed rank lifting scheme, a new regularized Fisher discriminant (SL-RFD) method is developed to deal with the small sample size (S3) problem in face recognition. Two publicly available databases, namely the FERET and CMU PIE databases, are used to evaluate the proposed algorithm. Compared with existing LDA-based methods for the S3 problem, the proposed SL-RFD method gives the best performance.

- Affective Face and Gesture Processing | Pp. 152-159
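The null-space-discarding step can be sketched with plain numpy: project the data onto the principal directions of the total scatter matrix that carry non-zero variance. The synthetic data are an assumption, and the rank lifting scheme itself, being the paper's contribution, is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small-sample-size setting: dimensionality (64) far exceeds the number
# of training samples (3 classes x 6 images), so the total scatter
# matrix is rank deficient.
d, classes, per_class = 64, 3, 6
means = rng.normal(0.0, 5.0, (classes, d))
X = np.vstack([means[c] + rng.normal(0.0, 1.0, (per_class, d))
               for c in range(classes)])

# Discard the null space of the total scatter matrix: keep only the
# directions with non-negligible singular value. No discriminant
# information is lost, because every sample lies in this subspace.
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
rank = int((s > 1e-8).sum())   # at most n_samples - 1 after centering
P = Vt[:rank].T                # d x rank projection matrix
Z = Xc @ P                     # reduced-dimension training data
```

Fisher discriminant analysis (with the paper's rank-lifting regularization of the within-class scatter) would then be performed on `Z` instead of the original high-dimensional images.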

Hand Motion Recognition for the Vision-based Taiwanese Sign Language Interpretation

Chia-Shiuan Cheng; Pi-Fuei Hsieh; Chung-Hsien Wu

In this paper we present a system that recognizes the hand motion of Taiwanese Sign Language (TSL) using Hidden Markov Models (HMMs) through a vision-based interface. Our hand motion recognition system consists of four phases: construction of a color model, hand tracking, trajectory representation, and recognition. Our hand tracking accurately tracks the hand positions. Since the system must recognize hand motions that vary under rotation, translation, reflection, and scaling in the Cartesian coordinate system, we chose invariant features by converting the trajectory from the Cartesian to the polar coordinate system. Nine hand motion patterns are defined for TSL. Experimental results show that the proposed method successfully chooses invariant features, with a recognition accuracy of about 90%.

- Affective Face and Gesture Processing | Pp. 160-167
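The Cartesian-to-polar conversion used to obtain translation- and scale-insensitive trajectory features can be sketched as follows. The centroid reference point and radius normalisation are assumptions, and the HMM recognition stage is not shown:

```python
import math

def to_polar(trajectory):
    """Convert a hand trajectory (list of (x, y) points) to polar
    coordinates about its centroid, so the representation ignores
    translation; normalising the radius also removes scale."""
    cx = sum(x for x, _ in trajectory) / len(trajectory)
    cy = sum(y for _, y in trajectory) / len(trajectory)
    polar = [(math.hypot(x - cx, y - cy), math.atan2(y - cy, x - cx))
             for x, y in trajectory]
    r_max = max(r for r, _ in polar) or 1.0
    return [(r / r_max, a) for r, a in polar]

# A right-pointing stroke and the same stroke translated produce
# identical polar feature sequences.
stroke = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
shifted = [(10.0, 5.0), (11.0, 5.0), (12.0, 5.0)]
```

The resulting (r, theta) sequences would then be quantized into observation symbols and scored against one trained HMM per motion pattern.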

A Canonical Face Based Virtual Face Modeling

Seongah Chin

The research presented here creates a 3D virtual face based on a canonical face model derived from a clustering method applied to facial feature points. The algorithm efficiently transforms the feature points of the canonical face model into those of the new face model for input images, without creating a new face manually. In comparative experiments, we show facial models generated both manually and automatically. In conclusion, the two facial models are visually nearly identical, whereas their efficiency differs substantially.

- Affective Face and Gesture Processing | Pp. 179-186

What Expression Could Be Found More Quickly? It Depends on Facial Identities

Hang Zhang; Yuming Xuan; Xiaolan Fu

A visual search task was used to explore the role of facial identity in the processing of facial expression. Participants were asked to search for a happy or sad face in a crowd of emotional face pictures. Expression search was faster and more accurate when all the faces in a display belonged to one identity than to two, suggesting that identity variance interferes with expression recognition. At the same time, the search speed for a given expression also depended on the number of facial identities: when the faces in a display belonged to one identity, a sad face among happy faces was found faster than a happy face among sad faces; when the faces belonged to two identities, a happy face was found faster than a sad face.

- Affective Face and Gesture Processing | Pp. 195-201

Using an Avatar to Develop a System for the Predication of Human Body Pose from Moments

Song Hu; Bernard F. Buxton

Tracking people in movie sequences is not straightforward because of the human body’s articulation and the complexity of a person’s movements. In this paper we show how a person’s 3D pose can be reconstructed from corresponding silhouettes of video sequences taken from a monocular view. Currently, a virtual avatar is used to train the model for inferring the pose, and a different avatar is used to produce novel examples not in the training set in order to evaluate the approach. The approach was subsequently tested using silhouettes of walking people.

- Affective Face and Gesture Processing | Pp. 202-209
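Pose description from silhouette moments typically relies on scale-normalised central moments. The sketch below illustrates that generic idea on toy rectangles; it is an assumption about the flavour of moment features, not the paper's exact feature set:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary silhouette image."""
    ys, xs = np.nonzero(img)
    xbar, ybar = xs.mean(), ys.mean()
    return float((((xs - xbar) ** p) * ((ys - ybar) ** q)).sum())

def normalized_moment(img, p, q):
    """Scale-normalised central moment eta_pq, insensitive to the
    overall size of the silhouette."""
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2.0)

# Two silhouettes of the same "pose" at different scales: a small and a
# large rectangle. Their normalised moments agree up to discretisation.
small = np.zeros((32, 32)); small[10:20, 12:18] = 1
big = np.zeros((64, 64)); big[20:40, 24:36] = 1
eta_small = normalized_moment(small, 2, 0)
eta_big = normalized_moment(big, 2, 0)
```

A short vector of such moments per frame could serve as the silhouette descriptor from which the trained model infers the avatar's pose parameters.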