Publications catalog - books



Computer Vision: ACCV 2007: 8th Asian Conference on Computer Vision, Tokyo, Japan, November 18-22, 2007, Proceedings, Part I

Yasushi Yagi; Sing Bing Kang; In So Kweon; Hongbin Zha (eds.)

Conference: 8th Asian Conference on Computer Vision (ACCV), Tokyo, Japan, November 18-22, 2007

Abstract/description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Institution detected: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-76385-7

Electronic ISBN

978-3-540-76386-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Tracking and Classifying of Human Motions with Gaussian Process Annealed Particle Filter

Leonid Raskin; Michael Rudzsky; Ehud Rivlin

This paper presents a framework for 3D articulated human body tracking and action classification. The method is based on nonlinear dimensionality reduction from the high-dimensional data space to a low-dimensional latent space. The motion of the human body is described by a concatenation of low-dimensional manifolds that characterize different motion types. We introduce a body pose tracker that uses the learned mapping function from the low-dimensional latent space to the high-dimensional body pose space. The trajectories in the latent space provide low-dimensional representations of the body poses performed during motion and are used to classify human actions. The approach was evaluated on the HumanEva dataset as well as on our own, and the results and a comparison to other methods are presented.

- Poster Session 2: Face/Gesture/Action Detection and Recognition | Pp. 442-451
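The abstract above describes tracking in a learned low-dimensional latent space with an annealed particle filter. Below is a minimal sketch of that combination, assuming a placeholder linear latent-to-pose mapping and a toy likelihood; the paper's actual learned mapping and image-based likelihood are not reproduced.

```python
# Sketch of annealed particle filtering in a low-dimensional latent space.
# The latent-to-pose map and the likelihood are stand-ins (assumptions),
# not the authors' models.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, POSE_DIM, N_PARTICLES, N_LAYERS = 3, 30, 200, 4

# Placeholder for the learned mapping (a fixed random linear map here).
W = rng.normal(size=(POSE_DIM, LATENT_DIM))
def latent_to_pose(z):
    return W @ z

def log_likelihood(pose, observation):
    # Stand-in for an image-based likelihood (e.g., silhouette match).
    return -0.5 * np.sum((pose - observation) ** 2)

def annealed_pf_step(particles, observation):
    """One frame of annealed particle filtering in latent space."""
    for layer in range(N_LAYERS):
        beta = (layer + 1) / N_LAYERS                  # annealing schedule
        noise = 0.1 * (N_LAYERS - layer) / N_LAYERS    # shrinking diffusion
        particles = particles + rng.normal(scale=noise, size=particles.shape)
        logw = np.array([beta * log_likelihood(latent_to_pose(z), observation)
                         for z in particles])
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx]                     # resample
    return particles

particles = rng.normal(size=(N_PARTICLES, LATENT_DIM))
obs = latent_to_pose(np.ones(LATENT_DIM))              # synthetic observation
particles = annealed_pf_step(particles, obs)
print("posterior mean pose (first 5 dims):",
      latent_to_pose(particles.mean(axis=0))[:5])
```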

Gait Identification Based on Multi-view Observations Using Omnidirectional Camera

Kazushige Sugiura; Yasushi Makihara; Yasushi Yagi

We propose a method of gait identification based on multi-view gait images captured with an omnidirectional camera. We first transform omnidirectional silhouette images into panoramic ones and obtain a spatio-temporal Gait Silhouette Volume (GSV). Next, we extract frequency-domain features by Fourier analysis, based on gait periods estimated from the autocorrelation of the GSVs. Because the omnidirectional camera makes it possible to observe a straight-walking person from various views, multi-view features can be extracted from GSVs composed of multi-view images. In the identification phase, the distance between a probe and a gallery feature of the same view is calculated, and the distances for all views are then integrated for matching. Gait identification experiments with 15 subjects observed from 5 views demonstrate the effectiveness of the proposed method.

- Poster Session 2: Face/Gesture/Action Detection and Recognition | Pp. 452-461
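Two steps of the pipeline above are concrete enough to sketch: estimating the gait period by autocorrelation and extracting frequency-domain features by Fourier analysis. In this sketch the GSV is reduced to a 1-D foreground-area signal, which is a simplification of the paper's full spatio-temporal volume.

```python
# Sketch of gait-period estimation (autocorrelation) and frequency-domain
# feature extraction (FFT harmonics), on a synthetic 1-D gait signal.
import numpy as np

def gait_period(signal):
    """Estimate the dominant period via autocorrelation."""
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]
    for lag in range(1, len(ac) - 1):          # first local maximum
        if ac[lag] > ac[lag - 1] and ac[lag] > ac[lag + 1]:
            return lag
    return len(s)

def frequency_features(signal, period, n_harmonics=3):
    """Magnitudes of the first harmonics of the gait frequency."""
    spec = np.abs(np.fft.rfft(signal))
    base = len(signal) // period               # gait-frequency bin index
    return spec[[base * k for k in range(1, n_harmonics + 1)]]

t = np.arange(120)
area = 100 + 10 * np.sin(2 * np.pi * t / 30)   # synthetic 30-frame gait
p = gait_period(area)
print("period:", p, "features:", frequency_features(area, p))
```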

Gender Classification Based on Fusion of Multi-view Gait Sequences

Guochang Huang; Yunhong Wang

In this paper, we present a new method for gender classification based on the fusion of multi-view gait sequences. For each silhouette in a gait sequence, we first use a simple method to divide the silhouette into 7 parts (for the 90-degree, i.e., fronto-parallel view) or 5 parts (for the 0- and 180-degree, i.e., front and back views), and then fit ellipses to each of the regions. Next, features are extracted from each sequence by computing the ellipse parameters. For each view angle, every subject's features are normalized and combined into a feature vector; the combined feature vector contains enough information to perform well on gender recognition. The sum rule and an SVM are applied to fuse the similarity measures from the 0-, 90-, and 180-degree views. We carried out our experiments on the CASIA Gait Database, one of the largest gait databases known to us, and achieved a classification accuracy of 89.5%.

- Poster Session 2: Face/Gesture/Action Detection and Recognition | Pp. 462-471
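The per-region ellipse fitting described above is a standard moment-based computation, sketched below for a single binary region. The silhouette splitting into 5 or 7 parts and the SVM/sum-rule fusion are omitted; the rectangular test region is synthetic.

```python
# Sketch of moment-based ellipse fitting for one binary silhouette region:
# centroid, orientation, and axis lengths from second-order moments.
import numpy as np

def ellipse_params(mask):
    """Fit an ellipse to a binary region via second-order image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mxx = ((xs - cx) ** 2).mean()
    myy = ((ys - cy) ** 2).mean()
    mxy = ((xs - cx) * (ys - cy)).mean()
    cov = np.array([[mxx, mxy], [mxy, myy]])
    evals, evecs = np.linalg.eigh(cov)          # principal axes
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])
    major, minor = 2 * np.sqrt(evals[::-1])     # axis half-lengths
    return np.array([cx, cy, angle, major, minor])

# Synthetic rectangular "torso" region
mask = np.zeros((60, 40), bool)
mask[10:50, 15:25] = True
print(ellipse_params(mask))
```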

MAPACo-Training: A Novel Online Learning Algorithm of Behavior Models

Heping Li; Zhanyi Hu; Yihong Wu; Fuchao Wu

The traditional co-training algorithm, which needs a large number of unlabeled examples in advance and then trains classifiers by an iterative learning approach, is not suitable for online learning of classifiers. To overcome this barrier, we propose a novel semi-supervised learning algorithm, called MAPACo-Training, by combining co-training with the principle of Maximum A Posteriori adaptation. MAPACo-Training is an online multi-class learning algorithm and has been successfully applied to online learning of behaviors modeled by Hidden Markov Models. The proposed algorithm is tested on Li's database as well as on Schuldt's dataset.

- Poster Session 2: Learning | Pp. 472-481
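To make the combination of co-training and MAP adaptation concrete, here is a minimal sketch: two view-specific classifiers label a stream of unlabeled examples for each other, and parameters update online as a convex blend of prior and new evidence (the MAP-adaptation idea). The paper adapts HMM behavior models; plain nearest-mean Gaussian models stand in here as an assumption.

```python
# Sketch of online co-training with MAP-style parameter adaptation.
import numpy as np

class GaussClassifier:
    def __init__(self, means):
        self.means = np.array(means, float)    # one mean per class

    def predict(self, x):
        d = np.linalg.norm(self.means - x, axis=1)
        label = int(np.argmin(d))
        conf = 1.0 - d[label] / (d.sum() + 1e-9)
        return label, conf

    def map_adapt(self, x, label, weight=0.2):
        # MAP-style update: blend the prior mean with the new example.
        self.means[label] = (1 - weight) * self.means[label] + weight * x

rng = np.random.default_rng(1)
c1 = GaussClassifier([[0, 0], [4, 4]])         # view-1 feature space
c2 = GaussClassifier([[0, 4], [4, 0]])         # view-2 feature space

for _ in range(100):                           # stream of unlabeled pairs
    true = rng.integers(2)                     # hidden true class
    x1 = c1.means[true] + rng.normal(scale=0.5, size=2)
    x2 = c2.means[true] + rng.normal(scale=0.5, size=2)
    l1, conf1 = c1.predict(x1)
    l2, conf2 = c2.predict(x2)
    if conf1 > conf2:                          # confident view teaches the other
        c2.map_adapt(x2, l1)
    else:
        c1.map_adapt(x1, l2)

print("adapted view-1 means:\n", c1.means)
```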

Optimal Learning High-Order Markov Random Fields Priors of Colour Image

Ke Zhang; Huidong Jin; Zhouyu Fu; Nianjun Liu

In this paper, we present an optimised algorithm for learning parametric prior models for high-order Markov random fields (MRFs) of colour images. Compared to the priors used by conventional low-order MRFs, the learned priors have richer expressive power and can capture the statistics of natural scenes. Our optimal learning algorithm is achieved by simplifying the estimation of the partition function without compromising the accuracy of the learned model. The parameters of the MRF colour image priors are learned alternately and iteratively in an EM-like fashion by maximising their likelihood. We demonstrate the capability of the proposed learning algorithm with the application of colour image denoising. Experimental results show the superior performance of our algorithm compared to the state-of-the-art colour image priors of [1], even though we use a much smaller training image set.

- Poster Session 2: Learning | Pp. 482-491
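The denoising application mentioned above amounts to gradient descent on a MAP objective combining a data term with an MRF prior. The sketch below uses a fixed robust pairwise prior as a stand-in (an assumption); the paper's learned high-order colour priors and the EM-like learning step are not reproduced.

```python
# Sketch of MAP denoising with a fixed robust pairwise MRF prior.
import numpy as np

def denoise(noisy, lam=0.4, step=0.1, iters=50):
    x = noisy.copy()
    for _ in range(iters):
        g = x - noisy                          # data-term gradient
        for axis in (0, 1):                    # spatial neighbours
            d = np.diff(x, axis=axis)
            r = d / (1.0 + d ** 2)             # derivative of 0.5*log(1+d^2)
            grad = np.zeros_like(x)
            sl = [slice(None)] * x.ndim
            sl[axis] = slice(0, -1); grad[tuple(sl)] -= r
            sl[axis] = slice(1, None); grad[tuple(sl)] += r
            g += lam * grad                    # prior-term gradient
        x = x - step * g
    return x

rng = np.random.default_rng(2)
clean = np.zeros((32, 32, 3)); clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(scale=0.2, size=clean.shape)
out = denoise(noisy)
print("MSE noisy %.4f -> denoised %.4f" %
      (((noisy - clean) ** 2).mean(), ((out - clean) ** 2).mean()))
```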

Hierarchical Learning of Dominant Constellations for Object Class Recognition

Nathan Mekuz; John K. Tsotsos

The importance of spatial configuration information for object class recognition is widely recognized. Single isolated local appearance codes are often ambiguous. On the other hand, object classes are often characterized by groups of local features appearing in a specific spatial structure. Learning these structures can provide additional discriminant cues and boost recognition performance. However, the problem of learning such features automatically from raw images remains largely uninvestigated. In contrast to previous approaches which require accurate localization and segmentation of objects to learn spatial information, we propose learning by hierarchical voting to identify frequently occurring spatial relationships among local features directly from raw images. The method is resistant to common geometric perturbations in both the training and test data. We describe a novel representation developed to this end and present experimental results that validate its efficacy by demonstrating the improvement in class recognition results realized by including the additional learned information.

- Poster Session 2: Learning | Pp. 492-501
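The hierarchical voting idea above can be illustrated at its simplest level: count how often pairs of quantised local-feature codes co-occur at a quantised relative displacement, and keep the most frequent pairs as candidate constellations. Feature detection and quantisation are assumed already done; inputs here are synthetic (code, x, y) tuples.

```python
# Sketch of voting for frequently co-occurring spatial feature pairs.
from collections import Counter
import numpy as np

def vote_constellations(images_features, dist_bin=20.0, top_k=3):
    votes = Counter()
    for feats in images_features:              # one feature list per image
        for i, (ci, xi, yi) in enumerate(feats):
            for cj, xj, yj in feats[i + 1:]:
                dx, dy = xj - xi, yj - yi       # quantised displacement
                key = (ci, cj, round(dx / dist_bin), round(dy / dist_bin))
                votes[key] += 1
    return votes.most_common(top_k)

rng = np.random.default_rng(3)
images = []
for _ in range(50):   # synthetic class: code 7 always ~40 px right of code 3
    x, y = rng.uniform(0, 100, 2)
    feats = [(3, x, y),
             (7, x + 40 + rng.normal(), y + rng.normal()),
             (int(rng.integers(20)), *rng.uniform(0, 100, 2))]  # clutter
    images.append(feats)
print(vote_constellations(images))
```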

Multistrategical Approach in Visual Learning

Hiroki Nomiya; Kuniaki Uehara

In this paper, we propose a novel visual learning framework for developing flexible and accurate object recognition methods. Currently, most visual-learning-based recognition methods adopt a monostrategy learning framework that uses a single feature. However, real-world objects are so complex that it is quite difficult for a monostrategy method to classify them correctly, and utilizing a wide variety of features is required to distinguish them precisely. In order to utilize various features, we propose multistrategical visual learning, which integrates multiple visual learners that are trained collaboratively. Specifically, each visual learner intensively learns the examples misclassified by the other visual learners, while the other visual learners in turn learn the examples misclassified by it. As a result, a powerful object recognition method can be developed by integrating various visual learners even if each has only mediocre recognition performance.

- Poster Session 2: Learning | Pp. 502-511
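The collaborative-training loop described above can be sketched with trivial learners: each round, every learner is refit with extra weight on the examples the other learners misclassify. Nearest-centroid models over different feature subsets stand in for the visual learners (an assumption).

```python
# Sketch of multistrategy training: learners focus on each other's mistakes.
import numpy as np

class CentroidLearner:
    """Nearest-centroid classifier over a subset of feature dimensions."""
    def __init__(self, dims):
        self.dims = dims

    def fit(self, X, y, w):
        Xs = X[:, self.dims]
        self.classes = np.unique(y)
        self.centroids = np.array([
            np.average(Xs[y == c], axis=0, weights=w[y == c])
            for c in self.classes])

    def predict(self, X):
        d = ((X[:, None, self.dims] - self.centroids) ** 2).sum(-1)
        return self.classes[d.argmin(1)]

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)        # each learner sees half the cue
learners = [CentroidLearner([0, 1]), CentroidLearner([2, 3])]
weights = [np.ones(len(y)) for _ in learners]

for _ in range(5):                             # collaborative rounds
    for L, w in zip(learners, weights):
        L.fit(X, y, w)
    preds = [L.predict(X) for L in learners]
    for i, w in enumerate(weights):            # upweight examples the
        for j, p in enumerate(preds):          # *other* learners get wrong
            if j != i:
                w[p != y] *= 1.5

for i, p in enumerate(preds):
    print("learner %d accuracy: %.2f" % (i, (p == y).mean()))
```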

Cardiac Motion Estimation from Tagged MRI Using 3D-HARP and NURBS Volumetric Model

Jia Liang; Yuanquan Wang; Yunde Jia

For the analysis of tagged cardiac MR images, harmonic phase (HARP) is a promising technique with great potential for clinical use, owing to its rapidity and automation and its freedom from tag detection and tracking. However, it is usually applied to 2D images and only provides “apparent motion” information. In this paper, HARP is integrated with a nonuniform rational B-spline (NURBS) volumetric model to densely reconstruct the 3D motion of the left ventricle (LV). The NURBS model represents the anatomy of the LV compactly, and the displacement information that HARP provides within short-axis and long-axis images drives the model to deform. After estimating the motion at each phase, we smooth the NURBS models temporally to achieve a 4D, continuous, time-varying representation of LV motion. Experimental results show that the proposed strategy can estimate the 3D motion of the LV rapidly and effectively, benefiting from both HARP and the NURBS model.

- Poster Session 2: Motion and Tracking | Pp. 512-521
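The HARP step the abstract builds on is well defined: isolate one spectral harmonic peak of the tagged image with a band-pass window in the Fourier domain, then take the phase of the resulting complex image; material points are tracked as points of constant harmonic phase. A minimal sketch on a synthetic tag pattern follows; the NURBS model fitting is not sketched.

```python
# Sketch of harmonic phase (HARP) extraction from a tagged image.
import numpy as np

def harp_phase(image, peak, radius):
    """Phase of the image band-pass filtered around one harmonic peak."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (yy - peak[0]) ** 2 + (xx - peak[1]) ** 2 <= radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.angle(filtered)

# Synthetic tagged image: sinusoidal tag pattern with an 8-pixel period
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
img = 1 + np.cos(2 * np.pi * xx / 8)
# the +1st harmonic sits 64/8 = 8 bins right of the spectrum centre
phase = harp_phase(img, peak=(h // 2, w // 2 + 8), radius=4)
print("phase range: %.2f to %.2f" % (phase.min(), phase.max()))
```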

Fragments Based Parametric Tracking

C. Prakash; Balamanohar Paluri; S. Nalin Pradeep; Hitesh Shah

The paper proposes a parametric approach to color-based tracking. The method fragments a multimodal color object into multiple homogeneous, unimodal fragments. The fragmentation process consists of multi-level thresholding of the object's color space followed by an assembling step. Each homogeneous region is then modelled using a single parametric distribution, and tracking is achieved by fusing the results of the multiple parametric distributions. The advantage of the method lies in tracking complex objects under partial occlusion and various deformations such as non-rigid, orientation, and scale changes. We evaluate the performance of the proposed approach on standard and challenging real-world datasets.

- Poster Session 2: Motion and Tracking | Pp. 522-531
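The core idea above is to split a multimodal colour distribution into unimodal fragments and model each with a single Gaussian. The sketch below reduces tracking to a fused per-fragment likelihood score; the thresholding is done on one colour channel and the data are synthetic (both assumptions).

```python
# Sketch of fragment-based parametric colour modelling and fused scoring.
import numpy as np

def fragment(pixels, thresholds):
    """Split N x 3 colour pixels into fragments by binning one channel."""
    bins = np.digitize(pixels[:, 0], thresholds)
    return [pixels[bins == b] for b in range(len(thresholds) + 1)
            if (bins == b).any()]

def fit_gaussian(frag):
    return frag.mean(0), np.cov(frag.T) + 1e-6 * np.eye(3)

def fused_score(candidate_pixels, models):
    """Average per-fragment log-likelihood of a candidate region."""
    score = 0.0
    for mu, cov in models:
        d = candidate_pixels - mu
        m = np.einsum('ni,ij,nj->n', d, np.linalg.inv(cov), d)  # Mahalanobis^2
        score += -0.5 * m.mean()
    return score / len(models)

rng = np.random.default_rng(5)
# Object with two colour modes (e.g., shirt + trousers)
obj = np.vstack([rng.normal([0.2, 0.5, 0.5], 0.05, (300, 3)),
                 rng.normal([0.8, 0.3, 0.3], 0.05, (300, 3))])
models = [fit_gaussian(f) for f in fragment(obj, thresholds=[0.5])]
print("score on object region:", fused_score(obj, models))
print("score on background   :", fused_score(rng.uniform(size=(600, 3)), models))
```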

Spatiotemporal Oriented Energy Features for Visual Tracking

Kevin Cannons; Richard Wildes

This paper presents a novel feature set for visual tracking that is derived from “oriented energies”. More specifically, energy measures are used to capture a target’s multiscale orientation structure across both space and time, yielding a rich description of its spatiotemporal characteristics. To illustrate utility with respect to a particular tracking mechanism, we show how to instantiate oriented energy features efficiently within the mean shift estimator. Empirical evaluations of the resulting algorithm illustrate that it excels in certain important situations, such as tracking in clutter with multiple similarly colored objects and in environments with changing illumination. Many trackers fail when presented with these types of challenging video sequences.

- Poster Session 2: Motion and Tracking | Pp. 532-543
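An oriented-energy measurement of the kind described above can be computed by filtering a space-time volume with a quadrature pair of filters tuned to one spatiotemporal orientation and summing the squared responses. The sketch below shows a single Gabor energy channel on a synthetic moving pattern; the paper's multi-scale, multi-orientation feature set and its mean shift integration are not reproduced.

```python
# Sketch of one spatiotemporal oriented-energy channel (Gabor quadrature pair).
import numpy as np

def gabor_energy(volume, u, v, w, sigma=2.0):
    """Energy at spatiotemporal frequency (u, v, w) via a quadrature pair."""
    t, y, x = np.mgrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    t, y, x = [a - a.mean() for a in (t, y, x)]        # centre coordinates
    env = np.exp(-(x**2 + y**2 + t**2) / (2 * sigma**2))
    phase = 2 * np.pi * (u * x + v * y + w * t)
    even = (volume * env * np.cos(phase)).sum()        # cosine-phase response
    odd = (volume * env * np.sin(phase)).sum()         # sine-phase response
    return even**2 + odd**2                            # phase-invariant energy

# Synthetic volume: a pattern moving right by 1 px/frame
T, H, W = 9, 15, 15
t, y, x = np.mgrid[:T, :H, :W]
vol = np.cos(2 * np.pi * 0.2 * (x - t)).astype(float)
print("energy, rightward-tuned:", gabor_energy(vol, u=0.2, v=0.0, w=-0.2))
print("energy, leftward-tuned :", gabor_energy(vol, u=0.2, v=0.0, w=0.2))
```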