Publications catalogue - books
Computer Vision: ACCV 2007: 8th Asian Conference on Computer Vision, Tokyo, Japan, November 18-22, 2007, Proceedings, Part I
Yasushi Yagi; Sing Bing Kang; In So Kweon; Hongbin Zha (eds.)
Conference: 8th Asian Conference on Computer Vision (ACCV). Tokyo, Japan. November 18-22, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Not available.
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-76385-7
Electronic ISBN
978-3-540-76386-4
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Adaptive Multiple Object Tracking Using Colour and Segmentation Cues
Pankaj Kumar; Michael J. Brooks; Anthony Dick
We consider the problem of reliably tracking multiple objects in video, such as people moving through a shopping mall or airport. In order to mitigate difficulties arising as a result of object occlusions, mergers and changes in appearance, we adopt an integrative approach in which multiple cues are exploited. Object tracking is formulated as a Bayesian parameter estimation problem. The object model used in computing the likelihood function is incrementally updated. Key to the approach is the use of a background subtraction process to deliver foreground segmentations. This enables the object colour model to be constructed using weights derived from a distance transform operating over foreground regions. Results from foreground segmentation are also used to gain improved localisation of the object within a particle filter framework. We demonstrate the effectiveness of the approach by tracking multiple objects through videos obtained from the CAVIAR dataset.
- Poster Session 3: Motion and Tracking | Pp. 853-863
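The distance-transform weighting of the object colour model described in this abstract can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' implementation; the bin count and the per-channel RGB binning scheme are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def weighted_colour_model(frame, fg_mask, bins=8):
    """Colour histogram for one tracked object: each foreground pixel is
    weighted by its distance-transform value, so pixels deep inside the
    silhouette count more than unreliable boundary pixels.
    frame: HxWx3 uint8 image; fg_mask: HxW bool foreground segmentation."""
    weights = distance_transform_edt(fg_mask)      # 0 outside the object
    idx = frame.astype(int) // (256 // bins)       # per-channel bin index
    hist = np.zeros((bins, bins, bins))
    for y, x in zip(*np.nonzero(fg_mask)):
        r, g, b = idx[y, x]
        hist[r, g, b] += weights[y, x]
    return hist / hist.sum()                       # normalised colour model
```

In a particle filter, a model built this way would be compared against candidate-region histograms (e.g. by Bhattacharyya similarity) when evaluating the likelihood.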
Image Assimilation for Motion Estimation of Atmospheric Layers with Shallow-Water Model
Nicolas Papadakis; Patrick Héas; Étienne Mémin
The complexity of the dynamical laws governing 3D atmospheric flows, together with incomplete and noisy observations, makes it very difficult to recover atmospheric dynamics from satellite image sequences. In this paper, we address the challenging problem of jointly estimating time-consistent horizontal motion fields and pressure maps at various atmospheric depths. Based on a vertical decomposition of the atmosphere, we propose a dense motion estimator relying on a multi-layer dynamical model. Noisy and incomplete pressure maps obtained from satellite images are reconstructed according to a shallow-water model on each cloud layer, using a framework derived from data assimilation. While reconstructing dense pressure maps, this variational process estimates time-consistent horizontal motion fields related to the multi-layer model. The proposed approach is validated on a synthetic example and applied to a real-world meteorological satellite image sequence.
- Poster Session 3: Motion and Tracking | Pp. 864-874
Probability Hypothesis Density Approach for Multi-camera Multi-object Tracking
Nam Trung Pham; Weimin Huang; S. H. Ong
Object tracking with multiple cameras is more effective than tracking with a single camera. In this paper, we propose a multi-camera multi-object tracking system that can track 3D object locations even when objects are occluded in some camera views. Our system tracks objects and fuses data from multiple cameras using the probability hypothesis density (PHD) filter. This method avoids data association between observations and object states, and tracks multiple objects in the single-object state space; it therefore has a lower computational cost than methods operating in a joint state space. Moreover, our system can track a varying number of objects. The results demonstrate that our method tracks 3D object locations with high reliability.
- Poster Session 3: Motion and Tracking | Pp. 875-884
AdaBoost Learning for Human Detection Based on Histograms of Oriented Gradients
Chi-Chen Raxle Wang; Jenn-Jier James Lien
We developed a novel learning-based human detection system that can detect people of different sizes and orientations against a wide variety of backgrounds, even in crowds. To overcome the effects of geometric and rotational variations, the system automatically assigns the dominant orientation of each block-based feature encoding using rectangular- and circular-type histograms of oriented gradients (HOG), which are insensitive to lighting variations and noise in outdoor environments. Moreover, this work demonstrates that Gaussian weighting and tri-linear interpolation in HOG feature construction increase detection performance. In particular, a powerful feature selection algorithm, AdaBoost, is applied to automatically select a small set of discriminative HOG features with orientation information in order to achieve robust detection results. The overall computational time is further reduced significantly, without any performance loss, by using a cascade-of-rejectors structure whose hyperplanes and stage weights are estimated with the AdaBoost approach.
- Human Detection | Pp. 885-895
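The Gaussian-weighted orientation histogram that this abstract highlights can be sketched for a single block as follows. This is a minimal illustration, assuming unsigned orientations and a centred isotropic Gaussian; it is not the paper's full rectangular/circular two-type HOG with tri-linear interpolation.

```python
import numpy as np

def hog_block(patch, bins=9, sigma=None):
    """Gradient-orientation histogram for one detection block: each gradient
    magnitude is down-weighted by a Gaussian centred on the block, so pixels
    near the block boundary contribute less. patch: 2-D grey-level array."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    h, w = patch.shape
    if sigma is None:
        sigma = 0.5 * w                            # assumption of this sketch
    yy, xx = np.mgrid[0:h, 0:w]
    gauss = np.exp(-((yy - (h - 1) / 2) ** 2 + (xx - (w - 1) / 2) ** 2)
                   / (2 * sigma ** 2))
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, bin_idx.ravel(), (mag * gauss).ravel())
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist             # L2-normalised block
```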
Multi-posture Human Detection in Video Frames by Motion Contour Matching
Qixiang Ye; Jianbin Jiao; Hua Yu
In this paper, we propose a method for detecting moving humans in video frames by motion contour matching. First, temporal and spatial frame differences are calculated, and contour pixels are extracted by global thresholding as the basic features. Skeleton templates with multiple representative postures are then built on these features to represent multi-posture human contours. In the detection procedure, a dynamic programming algorithm finds the best global match between the built templates and the extracted contour features. Finally, a thresholding method classifies each matching result as a moving human or a negative. Scale and interpersonal contour differences are taken into account in the matching process. Experiments on real video data demonstrate the effectiveness of the proposed method.
- Human Detection | Pp. 896-904
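The dynamic-programming contour matching step can be sketched as a classic alignment between two point sequences. This is a minimal stand-in assuming Euclidean point distances and standard DTW-style moves; the paper's actual cost function and handling of scale are not reproduced here.

```python
import numpy as np

def contour_match_cost(template, contour):
    """Best global alignment cost between a skeleton-template contour and an
    extracted contour, found by dynamic programming. template: (N,2) points;
    contour: (M,2) points. Moves: advance template, advance contour, or both.
    The length-normalised cost can then be thresholded to accept a match."""
    n, m = len(template), len(contour)
    d = np.linalg.norm(template[:, None, :] - contour[None, :, :], axis=2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1],
                                            D[i - 1, j - 1])
    return D[n, m] / (n + m)        # normalised so a threshold is scale-free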
A Cascade of Feed-Forward Classifiers for Fast Pedestrian Detection
Yu-Ting Chen; Chu-Song Chen
We develop a method that can detect humans in a single image based on a new cascaded structure. In our approach, both the rectangle features and 1-D edge-orientation features are employed in the feature pool for weak-learner selection, which can be computed via the integral-image and the integral-histogram techniques, respectively. To make the weak learner more discriminative, Real AdaBoost is used for feature selection and learning the stage classifiers from the training images. Instead of the standard boosted cascade, a novel cascaded structure that exploits both the stage-wise classification information and the inter-stage cross-reference information is proposed. Experimental results show that our approach can detect people with both efficiency and accuracy.
- Human Detection | Pp. 905-914
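The integral-image technique that makes the rectangle features cheap to evaluate can be sketched as follows: after one O(HW) pass, any axis-aligned rectangle sum costs four array look-ups, regardless of its size.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column, so rectangle
    sums never need boundary special-casing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] in O(1) via four look-ups."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])
```

The integral histogram used for the edge-orientation features is the same idea applied per orientation bin: one summed-area table per bin.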
Combined Object Detection and Segmentation by Using Space-Time Patches
Yasuhiro Murai; Hironobu Fujiyoshi; Takeo Kanade
This paper presents a method for simultaneously classifying the direction of movement of objects and segmenting them using features of space-time patches. Our approach uses vector quantization to classify an object's direction of movement and to estimate its centroid by referring to a codebook of space-time patch features generated from multiple learning samples. Object regions are segmented based on probabilities calculated from the mask images of the learning samples, using the estimated centroid of the object. Even when multiple objects moving in different directions overlap and occlude one another, our method detects them individually because their directions of movement are classified. Experimental results show that object detection is more accurate with our method than with the conventional method based only on appearance features.
- Human Detection | Pp. 915-924
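The vector-quantisation voting step for centroid estimation can be sketched as follows. This is a minimal illustration assuming a codebook and per-codeword centroid offsets learned beforehand; the paper's space-time patch features themselves are not constructed here.

```python
import numpy as np

def vote_centroids(patches, positions, codebook, offsets, grid_shape):
    """Each patch feature is matched to its nearest codeword, and that
    codeword's learned centroid offset casts a vote at position + offset;
    peaks in the returned vote map indicate object centroids.
    patches: list of feature vectors; positions: list of (y, x) positions;
    codebook: (K, D) codewords; offsets: (K, 2) learned centroid offsets."""
    votes = np.zeros(grid_shape)
    for feat, pos in zip(patches, positions):
        k = np.argmin(np.linalg.norm(codebook - feat, axis=1))  # nearest codeword
        y, x = np.round(pos + offsets[k]).astype(int)
        if 0 <= y < grid_shape[0] and 0 <= x < grid_shape[1]:
            votes[y, x] += 1
    return votes
```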
Embedding a Region Merging Prior in Level Set Vector-Valued Image Segmentation
Ismail Ben Ayed; Amar Mitiche
In level set image segmentation, the number of regions is typically fixed beforehand; this number appears as a constant in the objective functional and its optimization. In this study, we propose a region merging prior so that the objective functional is also optimized with respect to the number of regions. A statistical interpretation of the functional, together with learning over a set of relevant images and segmentation examples, allows the weight of this prior to be set so as to obtain the correct number of regions. The method is investigated and validated on color images and motion maps.
- Segmentation | Pp. 925-934
A Basin Morphology Approach to Colour Image Segmentation by Region Merging
Erchan Aptoula; Sébastien Lefèvre
The problem of colour image segmentation is investigated in the context of mathematical morphology. Morphological operators are extended to colour images by means of a lexicographical ordering in a polar colour space and are then employed in the preprocessing stage. The actual segmentation is based on the watershed transformation, followed by region merging, with the procedure formalized as a process in which regions are “eroded” in order to form larger catchment basins. The result is a fully automated processing chain with multiple levels of parametrisation and flexibility, whose application is illustrated on the Berkeley segmentation dataset.
- Segmentation | Pp. 935-944
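The region-merging stage after an initial watershed over-segmentation can be sketched as a greedy fusion loop. This is a minimal stand-in assuming a colour-distance merging criterion; the paper's actual ordering is morphological (basin depth), not the mean-colour distance used here.

```python
import numpy as np

def merge_regions(labels, image, k):
    """Greedily fuse the pair of 4-adjacent regions with the most similar
    mean intensity until only k regions remain. labels: 2-D int label map
    from the initial watershed; image: 2-D intensity map."""
    labels = labels.copy()

    def adjacent_pairs():
        pairs = set()
        for a, b in ((labels[:, :-1], labels[:, 1:]),    # horizontal edges
                     (labels[:-1, :], labels[1:, :])):   # vertical edges
            m = a != b
            for p, q in zip(a[m], b[m]):
                pairs.add((min(p, q), max(p, q)))
        return pairs

    while len(np.unique(labels)) > k:
        means = {l: image[labels == l].mean() for l in np.unique(labels)}
        p, q = min(adjacent_pairs(),
                   key=lambda e: abs(means[e[0]] - means[e[1]]))
        labels[labels == q] = p                          # fuse q into p
    return labels
```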
Detecting and Segmenting Un-occluded Items by Actively Casting Shadows
Tze K. Koh; Amit Agrawal; Ramesh Raskar; Steve Morgan; Nicholas Miles; Barrie Hayes-Gill
We present a simple and practical approach for segmenting un-occluded items in a scene by actively casting shadows. By ‘items’, we refer to objects (or part of objects) enclosed by depth edges. Our approach utilizes the fact that under varying illumination, un-occluded items will cast shadows on occluded items or background, but will not be shadowed themselves.
We employ an active illumination approach, taking multiple images under different illumination directions with the illumination source close to the camera. Our approach ignores texture edges in the scene and uses only shadow and silhouette information to determine occlusions. We show that such a segmentation does not require estimating a depth map or 3D information, which can be cumbersome and expensive, and often fails due to lack of texture and the presence of specular objects in the scene. Our approach can handle complex scenes with self-shadows and specularities. Results on several real scenes, along with an analysis of failure cases, are presented.
- Segmentation | Pp. 945-955
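The core test that un-occluded items are never shadowed under any illumination direction can be sketched as follows. This is a minimal illustration assuming grey-level images and a fixed darkness-ratio threshold for shadow detection, both assumptions of this sketch rather than details from the paper.

```python
import numpy as np

def never_shadowed_mask(images, ratio=0.5):
    """Given grey-level images of the same scene lit from different
    directions, mark pixels that are in shadow in no image. Per the idea
    above, pixels of un-occluded items survive this test, because only
    occluded items and the background ever receive cast shadows.
    images: list of HxW arrays; ratio: darkness threshold (assumption)."""
    stack = np.stack(images).astype(float)             # (n_lights, H, W)
    composite = stack.max(axis=0)                      # shadow-free reference
    shadowed = stack < ratio * composite[None, :, :]   # dark w.r.t. reference
    return ~shadowed.any(axis=0)                       # True = never shadowed
```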