Publications catalog - books
Computer Analysis of Images and Patterns: 12th International Conference, CAIP 2007, Vienna, Austria, August 27-29, 2007. Proceedings
Walter G. Kropatsch; Martin Kampel; Allan Hanbury (eds.)
Conference: 12th International Conference on Computer Analysis of Images and Patterns (CAIP). Vienna, Austria. August 27-29, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Image Processing and Computer Vision; Pattern Recognition; Artificial Intelligence (incl. Robotics); Computer Graphics; Algorithm Analysis and Problem Complexity
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-74271-5
Electronic ISBN
978-3-540-74272-2
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
Connection Geometry, Color, and Stereo
Ohad Ben-Shahar; Gang Li; Steven W. Zucker
The visual systems in primates are organized around orientation with a rich set of long-range horizontal connections. We abstract this from a differential-geometric perspective, and introduce the covariant derivative of frame fields as a general framework for early vision. This paper overviews our research showing how curve detection, texture, shading, color (hue), and stereo can be unified within this framework.
- Invited Talks | Pp. 13-19
Adaptable Model-Based Tracking Using Analysis-by-Synthesis Techniques
Harald Wuest; Folker Wientapper; Didier Stricker
In this paper we present a novel analysis-by-synthesis approach for real-time camera tracking in industrial scenarios. The camera pose estimation is based on the tracking of line features, which are generated dynamically in every frame by rendering a polygonal model and extracting contours from the rendered scene. Different methods of line-model generation are investigated. Depending on the scenario and the given 3D model, either the image gradient of the frame buffer or discontinuities of the z-buffer and the normal map are used to generate a 2D edge map. The 3D control points on a contour are calculated using the depth value stored in the z-buffer. By aligning the generated features with edges in the current image, the extrinsic parameters of the camera are estimated. The camera pose used for rendering is predicted by a line-based frame-to-frame tracking that takes advantage of the generated edge features. The method is validated and evaluated with the help of ground-truth data as well as real image sequences.
- Motion Detection and Tracking | Pp. 20-27
Mixture Models Based Background Subtraction for Video Surveillance Applications
Chris Poppe; Gaëtan Martens; Peter Lambert; Rik Van de Walle
Background subtraction is a method commonly used to segment objects of interest in image sequences. By comparing new frames to a background model, regions of interest can be found. To cope with highly dynamic and complex environments, a mixture of several models has been proposed in the literature. This paper proposes a novel background subtraction technique based on the popular Mixture of Gaussians technique. Moreover, edge-based image segmentation is used to improve the results of the proposed technique. Experimental analysis shows that our system outperforms the standard system in both processing speed and detection accuracy.
- Motion Detection and Tracking | Pp. 28-35
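The Mixture-of-Gaussians background model that this technique builds on can be sketched per pixel roughly as follows. This is a minimal illustration in the spirit of the standard Stauffer-Grimson formulation; the class name, parameter values, and grayscale setting are assumptions, and the paper's edge-based refinement is not included:

```python
import numpy as np

class PixelMoG:
    """Per-pixel mixture-of-Gaussians background model (Stauffer-Grimson
    style). Hypothetical minimal sketch, not the paper's exact variant."""

    def __init__(self, k=3, alpha=0.05, var0=36.0, t_bg=0.7):
        self.w = np.full(k, 1.0 / k)   # component weights
        self.mu = np.zeros(k)          # component means (grayscale intensity)
        self.var = np.full(k, var0)    # component variances
        self.alpha = alpha             # learning rate
        self.t_bg = t_bg               # weight mass treated as background

    def update(self, x):
        """Fold intensity x into the model; return True if x is foreground."""
        match = (x - self.mu) ** 2 < 2.5 ** 2 * self.var
        hit = int(np.argmax(match)) if match.any() else None
        if hit is not None:
            # adapt the matched component and reinforce its weight
            self.mu[hit] += self.alpha * (x - self.mu[hit])
            self.var[hit] += self.alpha * ((x - self.mu[hit]) ** 2 - self.var[hit])
            self.w *= 1.0 - self.alpha
            self.w[hit] += self.alpha
        else:
            # no component explains x: replace the weakest one
            weakest = int(np.argmin(self.w))
            self.mu[weakest], self.var[weakest], self.w[weakest] = x, 36.0, 0.05
        self.w /= self.w.sum()
        # components with the best weight/spread ratio form the background
        order = np.argsort(-self.w / np.sqrt(self.var))
        bg, mass = set(), 0.0
        for j in order:
            bg.add(j)
            mass += self.w[j]
            if mass > self.t_bg:
                break
        return hit is None or hit not in bg
```

A full subtractor runs one such model per pixel (vectorised over the image in practice) and post-processes the resulting foreground mask.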
Deterministic and Stochastic Methods for Gaze Tracking in Real-Time
Javier Orozco; F. Xavier Roca; Jordi Gonzàlez
Psychological evidence shows that eye-gaze analysis is required for human-computer interaction endowed with emotion-recognition capabilities. Existing proposals analyse eyelid and iris motion using colour information and edge detectors, but eye movements are fast, which makes precise and robust tracking difficult. Instead, we propose to reduce the dimensionality of the image data by using multi-Gaussian modelling, and to estimate transitions by applying partial differences. The tracking system can handle illumination changes, low image resolution and occlusions while estimating eyelid and iris movements as continuous variables. The result is an accurate and robust 3D tracking system for eyelids and irises at standard image quality.
- Motion Detection and Tracking | Pp. 45-52
Integration of Multiple Temporal and Spatial Scales for Robust Optic Flow Estimation in a Biologically Inspired Algorithm
Cornelia Beck; Thomas Gottbehuet; Heiko Neumann
We present a biologically inspired iterative algorithm for motion estimation that combines the integration of multiple temporal and spatial scales. This work extends a previously developed algorithm that is based on mechanisms of motion processing in the human brain [1]. The temporal integration approach realizes motion detection using one reference frame and multiple past and/or future frames, leading to correct motion estimates at positions that are temporarily occluded. In addition, this mechanism enables the detection of subpixel movements and therefore achieves smoother and more precise flow fields. We combine the temporal integration with a recently proposed spatial multi-scale approach [2]. The combination further improves the optic-flow estimates when the image contains regions of different spatial frequencies, and yields a very robust and efficient algorithm for optic-flow estimation on both artificial and real-world sequences.
- Motion Detection and Tracking | Pp. 53-60
Classification of Optical Flow by Constraints
Yusuke Kameda; Atsushi Imiya
In this paper, we analyse mathematical properties of a spatial optical-flow computation algorithm. First, by numerical analysis, we derive the convergence property of a variational optical-flow computation method used for cardiac motion detection. From the convergence property of the algorithm, we clarify the condition for scheduling the regularisation parameters. This condition shows that, for accurate and stable computation with scheduled regularisation coefficients, the sampling interval for numerical computation must be controlled.
- Motion Detection and Tracking | Pp. 61-68
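The role of regularisation-parameter scheduling in variational optical-flow computation can be illustrated with a classic Horn-Schunck iteration run under a decreasing coefficient schedule. The schedule values and iteration counts below are illustrative assumptions, not the condition derived in the paper:

```python
import numpy as np

def horn_schunck(I1, I2, alphas, iters_per_alpha=10):
    """Variational (Horn-Schunck) optical flow with a schedule of
    decreasing regularisation coefficients. Illustrative sketch only."""
    Iy, Ix = np.gradient(I1.astype(float))       # spatial derivatives
    It = I2.astype(float) - I1.astype(float)     # temporal derivative
    u = np.zeros_like(It)
    v = np.zeros_like(It)

    def nbr_mean(f):
        # 4-neighbour average (periodic boundary for simplicity)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for a in alphas:                 # e.g. [10.0, 3.0, 1.0], coarse to fine
        for _ in range(iters_per_alpha):
            ub, vb = nbr_mean(u), nbr_mean(v)
            common = (Ix * ub + Iy * vb + It) / (a ** 2 + Ix ** 2 + Iy ** 2)
            u = ub - Ix * common
            v = vb - Iy * common
    return u, v
```

On a brightness ramp translated by one pixel, the schedule first produces a heavily smoothed estimate and then sharpens it as the coefficient decreases, which is the behaviour the scheduling condition is meant to keep stable.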
Target Positioning with Dominant Feature Elements
Zhuan Qing Huang; Zhuhan Jiang
We propose a dominant-feature based matching method for capturing a target in a video sequence through the dynamic decomposition of the target template. The target template is segmented via intensity bands to better distinguish itself from the local background. Dominant feature elements are extracted from such segments to measure the matching degree of a candidate target via a sum of similarity probabilities. In addition, spatial filtering and contour adaptation are applied to further refine the object location and shape. The implementation of the proposed method has shown its effectiveness in capturing the target in a moving background and with non-rigid object motion.
- Motion Detection and Tracking | Pp. 69-76
Foreground and Shadow Detection Based on Conditional Random Field
Yang Wang
This paper presents a conditional random field (CRF) approach to integrate spatial and temporal constraints for moving object detection and cast shadow removal in image sequences. Interactions among both detection (foreground/background/shadow) labels and observed data are unified by a probabilistic framework based on the conditional random field, where the interaction strength can be adaptively adjusted in terms of data similarity of neighboring sites. Experimental results show that the proposed approach effectively fuses contextual dependencies in video sequences and significantly improves the accuracy of object detection.
- Motion Detection and Tracking | Pp. 85-92
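The idea of pairwise interactions whose strength adapts to the similarity of neighboring observations can be sketched with simple ICM inference on a grid model. The label set, potentials, and similarity kernel below are illustrative assumptions, not the paper's exact CRF or its inference method:

```python
import numpy as np

def icm_labels(unary, img, beta=1.0, iters=5):
    """ICM inference on a grid CRF with labels {0: background,
    1: foreground, 2: shadow}. The pairwise penalty for disagreeing
    with a neighbor scales with the similarity of the two pixel values.
    Hypothetical sketch."""
    h, w, L = unary.shape
    labels = unary.argmin(axis=2)            # start from the unary-only labeling
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # similar neighbors push harder toward agreement
                        sim = np.exp(-abs(float(img[y, x]) - float(img[ny, nx])) / 10.0)
                        for l in range(L):
                            if l != labels[ny, nx]:
                                costs[l] += beta * sim
                labels[y, x] = int(np.argmin(costs))
    return labels
```

With uniform image data, an isolated label that weakly disagrees with its neighborhood gets smoothed away, which is the kind of contextual correction the CRF formulation provides.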
n-Grams of Action Primitives for Recognizing Human Behavior
Christian Thurau; Václav Hlaváč
This paper presents a novel approach for behavior recognition from video data. A biologically inspired action representation is derived by applying a clustering algorithm to sequences of motion images. To capture the temporal context, we express behaviors as sequences of n-grams of basic actions. Novel video sequences are classified by comparing histograms of action n-grams to stored histograms of known behaviors. Experimental validation shows a high accuracy in behavior recognition.
- Motion Detection and Tracking | Pp. 93-100
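The n-gram histogram classification step can be sketched as follows, assuming the clustering stage has already mapped motion images to primitive-action labels; the label names and the L1 histogram distance are illustrative choices, not necessarily the paper's:

```python
from collections import Counter

def ngram_hist(actions, n=2):
    """Normalized histogram of n-grams over a sequence of action labels."""
    grams = [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def hist_distance(h1, h2):
    """L1 distance between two sparse histograms."""
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

def classify(seq, references, n=2):
    """Assign the stored behavior whose n-gram histogram is closest."""
    h = ngram_hist(seq, n)
    return min(references, key=lambda name: hist_distance(h, references[name]))
```

A novel sequence is thus compared against the stored histograms of known behaviors, and the nearest one is returned.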
Human Action Recognition in Table-Top Scenarios: An HMM-Based Analysis to Optimize the Performance
Pradeep Reddy Raamana; Daniel Grest; Volker Krueger
Hidden Markov models have been used extensively and successfully for the recognition of human actions. Although well-established algorithms exist to optimize the transition and output probabilities, the type of features to use, and specifically the number of states and Gaussians, have to be chosen manually. Here we present a quantitative study on selecting the optimal feature set for recognizing the simple object-manipulation actions pointing, rotating and grasping in a table-top scenario. This study has resulted in recognition rates higher than 90%. In addition, three parameters, namely the number of HMM states, the number of Gaussians, and the number of training iterations, are considered for optimizing the recognition rate with 5 different feature sets on our motion-capture data set of 10 persons.
- Motion Detection and Tracking | Pp. 101-108
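The recognition step common to such HMM-based systems, scoring a sequence under one model per action and picking the most likely, can be sketched for discrete observations. The toy models and observation symbols below are assumptions for illustration, not the paper's feature sets:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step scaling).
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll

def recognize(obs, models):
    """Pick the action whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

Training (e.g. Baum-Welch) and the choice of the number of states and Gaussians studied in the paper happen before this step; recognition itself reduces to the likelihood comparison shown here.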