Publications catalogue - books



Pattern Recognition: 29th DAGM Symposium, Heidelberg, Germany, September 12-14, 2007. Proceedings

Fred A. Hamprecht; Christoph Schnörr; Bernd Jähne (eds.)

Conference: 29th Joint Pattern Recognition Symposium (DAGM). Heidelberg, Germany. September 12, 2007 - September 14, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Pattern Recognition; Image Processing and Computer Vision; Artificial Intelligence (incl. Robotics); Computer Graphics; Algorithm Analysis and Problem Complexity

Availability
Detected institution: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74933-2

Electronic ISBN

978-3-540-74936-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Rigid Motion Constraints for Tracking Planar Objects

Olaf Kähler; Joachim Denzler

Typical tracking algorithms exploit temporal coherence, in the sense of expecting only small object motions. Even without exact knowledge of the scene, additional spatial coherence can be exploited by expecting only a rigid 3D motion. Feature tracking benefits from knowledge of this rigidity of the scene, especially if individual features cannot be tracked by themselves due to occlusions or illumination changes. We present and compare different approaches to exploiting this spatial coherence in the context of tracking planar scenes. We also show the benefits in scenes with occlusions and changes in illumination, even without models of these distortions.
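The rigidity constraint described above is ultimately a statement about plane-induced homographies: all features on a plane undergoing the same rigid 3D motion (R, t) relative to a camera with intrinsics K must move according to a single homography H = K(R - t nᵀ/d)K⁻¹, where n and d are the plane normal and distance. The following is a minimal numerical sketch of that relation with made-up values; it is not the authors' implementation.

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography transferring image points of the plane (normal n,
    distance d) from the first view to the second, for rigid motion (R, t)."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

# Toy intrinsics and a small rigid motion (illustrative values only).
K = np.diag([800.0, 800.0, 1.0])
R = np.eye(3)                      # no rotation in this toy example
t = np.array([0.05, 0.0, 0.0])     # small sideways translation
n = np.array([0.0, 0.0, 1.0])      # fronto-parallel plane
H = plane_induced_homography(K, R, t, n, d=2.0)

# Every feature on the plane must move consistently with this single H.
x = np.array([400.0, 300.0, 1.0])  # homogeneous image point
x2 = H @ x
print(x2[:2] / x2[2])
```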

- Motion, Tracking and Optical Flow | Pp. 102-111

Detectability of Moving Objects Using Correspondences over Two and Three Frames

Jens Klappstein; Fridtjof Stein; Uwe Franke

The detection of moving objects is crucial for robot navigation and driver assistance systems. In this paper the detectability of moving objects is studied. To this end, image correspondences over two and three frames are considered, where the images are acquired by a moving monocular camera. The detection is based on the constraints that static 3D points must satisfy. These constraints (epipolar, positive depth, positive height, and trifocal constraint) are discussed briefly, and an algorithm incorporating all of them is proposed. The individual constraints differ in their effect depending on the motion of the object; thus, the detectability of a moving object is influenced by its motion. Three types of motion are investigated: parallel, lateral, and circular motion. The study of the detection limits is applied to real imagery.
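The epipolar constraint is the simplest of the constraints listed in the abstract: for a static 3D point observed in two frames, the correspondence (x1, x2) must satisfy x2ᵀ F x1 = 0, where F is the fundamental matrix induced by the camera's own motion, so correspondences with a large residual are candidates for independently moving objects. The sketch below illustrates only this one constraint via the standard Sampson error; the fundamental matrix and threshold are assumed inputs, and the paper's full algorithm combines several further constraints.

```python
import numpy as np

def sampson_error(F, x1, x2):
    """Sampson approximation of a correspondence's distance to the epipolar
    constraint x2^T F x1 = 0 (x1, x2 are homogeneous image points)."""
    Fx1, Ftx2 = F @ x1, F.T @ x2
    num = float(x2 @ F @ x1) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den

def flag_moving(F, pts1, pts2, thresh=1.0):
    """Mark correspondences violating the epipolar constraint as candidate
    moving points (thresh in squared pixels, an assumed tuning value)."""
    return np.array([sampson_error(F, p, q) > thresh
                     for p, q in zip(pts1, pts2)])
```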

- Motion, Tracking and Optical Flow | Pp. 112-121

An Analysis-by-Synthesis Camera Tracking Approach Based on Free-Form Surfaces

Kevin Koeser; Bogumil Bartczak; Reinhard Koch

We propose a model-based camera pose estimation approach that makes use of GPU-assisted analysis-by-synthesis methods on a very wide field of view (e.g. fish-eye) camera. After an initial registration, the synthesis part of the tracking is performed on graphics hardware, which simulates the internal and external parameters of the camera, thereby minimizing lens and perspective differences between a model view and a real camera image. We show how such a model is automatically created from a scene and analyze the sensitivity of the tracking to the model accuracy, in particular when free-form surfaces are represented by planar patches. We also examine accuracy and show on synthetic and real data that the system does not suffer from drift accumulation. The wide field of view of the camera and the subdivision of our reference model into many textured free-form surfaces make the system robust against moving persons and other occlusions within the environment and provide a camera pose estimate in a fixed and known coordinate system.

- Motion, Tracking and Optical Flow | Pp. 122-131

An Adaptive Confidence Measure for Optical Flows Based on Linear Subspace Projections

Claudia Kondermann; Daniel Kondermann; Bernd Jähne; Christoph Garbe

Confidence measures are important for the validation of optical flow fields, as they estimate the correctness of each displacement vector. There are several frequently used confidence measures, which have been found to be of at best intermediate quality. Hence, we propose a new confidence measure based on linear subspace projections. The results are compared to the best previously proposed confidence measures with respect to an optimal confidence. Using the proposed measure we are able to improve previous results by up to 31%.
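As a rough illustration of a subspace-projection confidence measure (a generic sketch under assumed training data and normalisation, not necessarily the authors' exact construction): learn a low-dimensional linear subspace from local flow patches and score each patch by how well the subspace reconstructs it.

```python
import numpy as np

def fit_subspace(patches, k):
    """PCA: mean and top-k principal directions of flattened flow patches."""
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, Vt[:k]

def confidence(patch, mean, basis):
    """Map the subspace reconstruction error to (0, 1]; a value near 1 means
    the flow patch is well explained by the learned subspace."""
    c = patch - mean
    residual = c - basis.T @ (basis @ c)
    return 1.0 / (1.0 + np.linalg.norm(residual) ** 2)

# Toy usage with random 5x5 two-channel "flow patches" (50 dimensions).
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 50))
mean, basis = fit_subspace(train, k=10)
print(confidence(train[0], mean, basis))
```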

- Motion, Tracking and Optical Flow | Pp. 132-141

Bayesian Model Selection for Optical Flow Estimation

Kai Krajsek; Rudolf Mester

Global optical flow techniques minimize a mixture of two terms: a data term relating the observable signal to the optical flow, and a regularization term imposing prior knowledge/assumptions on the solution. A large number of different data terms have been developed since the first global optical flow estimator proposed by Horn and Schunck [1]. Recently [2], these data terms have been classified with respect to their properties. Thus, for image sequences where certain properties of the image as well as the motion characteristics are known in advance, the appropriate data term can be chosen from this classification. In this contribution, we deal with the situation where the optimal data term is not known in advance. We apply the Bayesian evidence framework to automatically choose the optimal relative weight between two data terms as well as the regularization term based only on the given input signal.
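The Bayesian evidence framework is easiest to see on a linear-Gaussian toy problem, sketched below: for y = Xw + noise with noise precision beta and a zero-mean Gaussian prior with precision alpha, the marginal likelihood (evidence) has a closed form, and maximizing it over alpha selects the relative weight of the prior term. This is a generic illustration of the framework, not the paper's optical-flow formulation; all values are made up.

```python
import numpy as np

def log_evidence(X, y, alpha, beta):
    """Log marginal likelihood of a Bayesian linear model with prior
    precision alpha and noise precision beta."""
    N, M = X.shape
    A = alpha * np.eye(M) + beta * X.T @ X          # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)          # posterior mean
    E = 0.5 * beta * np.sum((y - X @ m) ** 2) + 0.5 * alpha * m @ m
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta) - E
            - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))

# Toy data: choose the prior weight alpha that maximizes the evidence.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=100)
alphas = np.logspace(-3, 3, 25)
best_alpha = max(alphas, key=lambda a: log_evidence(X, y, a, beta=100.0))
print(best_alpha)
```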

- Motion, Tracking and Optical Flow | Pp. 142-151

Illumination-Robust Variational Optical Flow with Photometric Invariants

Yana Mileva; Andrés Bruhn; Joachim Weickert

For years, variational methods have been among the most accurate techniques for computing the optical flow in image sequences. However, if based on the grey value constancy assumption only, such techniques are not robust enough to cope with typical illumination changes in real-world data. In our paper we tackle this problem in two ways: First, we discuss different photometric invariants for the design of illumination-robust variational optical flow methods. These invariants are based on colour information and include such concepts as spherical/conical transforms, normalisation strategies and the differentiation of logarithms. Secondly, we embed them into a suitable multichannel generalisation of the highly accurate variational optical flow technique of Brox et al. This in turn allows us to access the true potential of such invariants for estimating the optical flow. Experiments with synthetic and real-world data demonstrate the success of combining accuracy and robustness: even under strongly varying illumination, reliable and precise results are obtained.
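One of the invariants named in the abstract, the spherical transform, can be sketched as follows: writing RGB in spherical coordinates, the two angles are unchanged by a multiplicative (intensity) illumination change, so flow constraints built on the angles instead of the grey value become illumination-robust. The exact normalisation and regularisation used in the paper may differ from this minimal version.

```python
import numpy as np

def spherical_invariants(rgb):
    """rgb: array of shape (..., 3) with non-negative values.
    Returns the two angles of the spherical RGB transform, which are
    invariant to a global scaling (R, G, B) -> (s*R, s*G, s*B)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12
    theta = np.arctan2(G, R + eps)
    phi = np.arctan2(np.sqrt(R ** 2 + G ** 2), B + eps)
    return theta, phi

# Scaling a pixel by 3 (a brightness change) leaves both angles unchanged.
pix = np.array([0.4, 0.2, 0.1])
print(spherical_invariants(pix))
print(spherical_invariants(3.0 * pix))
```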

- Motion, Tracking and Optical Flow | Pp. 152-162

Online Smoothing for Markerless Motion Capture

Bodo Rosenhahn; Thomas Brox; Daniel Cremers; Hans-Peter Seidel

Tracking 3D objects from 2D image data often leads to jittery tracking results. In general, unsmooth motion is a sign of tracking errors, which, in the worst case, can cause the tracker to lose the tracked object. A straightforward remedy is to demand temporal consistency and to smooth the result. This is often done in the form of post-processing. In this paper, we present an approach for online smoothing in the scope of 3D human motion tracking. To this end, we extend an energy functional by a term that penalizes deviations from smoothness. It is shown experimentally that such online smoothing on pose parameters and joint angles leads to improved results and can even succeed in cases where tracking without temporal consistency assumptions fails completely.
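The online smoothing idea can be sketched as augmenting the per-frame tracking energy with a penalty on deviations from a smooth prediction of the pose and joint-angle parameters. The constant-velocity prediction and quadratic penalty below are illustrative assumptions, not the authors' exact functional.

```python
import numpy as np

def smoothed_energy(data_energy, pose, prev_poses, weight=1.0):
    """data_energy: callable returning the image-based tracking energy for a
    pose vector. prev_poses: previously estimated pose vectors (newest last).
    Adds a quadratic penalty on deviation from a constant-velocity prediction."""
    if len(prev_poses) >= 2:
        prediction = 2.0 * prev_poses[-1] - prev_poses[-2]
    elif prev_poses:
        prediction = prev_poses[-1]
    else:
        return data_energy(pose)            # first frame: no smoothness term
    return data_energy(pose) + weight * np.sum((pose - prediction) ** 2)
```

Minimizing this augmented energy online, rather than smoothing the trajectory afterwards, is what lets the temporal prior influence the current pose estimate during tracking.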

- Motion, Tracking and Optical Flow | Pp. 163-172

Occlusion Modeling by Tracking Multiple Objects

Christian Schmaltz; Bodo Rosenhahn; Thomas Brox; Joachim Weickert; Daniel Cremers; Lennart Wietzke; Gerald Sommer

This article introduces a technique for region-based pose tracking of multiple objects. Our algorithm uses surface models of the objects to be tracked and at least one calibrated camera view, but does not require color, texture, or other additional properties of the objects. By optimizing a joint energy defined on the pose parameters of all objects, the proposed algorithm can explicitly handle occlusions between different objects. Tracking results in simulated as well as real world scenes demonstrate the effects of occlusion and how they are handled by the proposed method.

- Motion, Tracking and Optical Flow | Pp. 173-183

Simultaneous Estimation of Surface Motion, Depth and Slopes Under Changing Illumination

Tobias Schuchert; Hanno Scharr

In this paper we extend a multi-camera model for the simultaneous estimation of 3D position, normals, and 3D motion of surface patches [17] so that it can handle brightness changes caused by changing illumination. In the target application only surface orientation and 3D motion are of interest. Thus colour-related surface properties such as the bidirectional reflectance distribution function do not need to be reconstructed. Consequently we characterize only the changes of the brightness using a second-order power series. We test two new models within a total least squares estimation framework using synthetic data with ground truth available. Motion estimation results improve considerably with respect to the brightness constancy model when brightness changes are present in the data.

- Motion, Tracking and Optical Flow | Pp. 184-193

Recursive Estimation with Implicit Constraints

Richard Steffen; Christian Beder

Recursive estimation or Kalman filtering usually relies on explicit model functions that directly and explicitly describe the effect of the parameters on the observations. However, many problems in computer vision, including all those resulting in homogeneous equation systems, are more easily described using implicit constraints between the observations and the parameters. By implicit we mean that the constraints are given by equations that are not easily solvable for the observation vector.

We present a framework that allows such implicit constraints to be incorporated as measurement equations into a Kalman filter. The algorithm may be used as a black box, simplifying the process of specifying suitable measurement equations for many problems. As a byproduct of the possibility of specifying model equations non-explicitly, some non-linearities may be avoided and better results can be achieved for certain problems.
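A generic textbook-style sketch of such an update (an assumption about the form, not necessarily the paper's exact derivation): with an implicit constraint g(x, l) = 0 between state x and observations l, linearization turns -g(x_prior, l_obs) into a pseudo-measurement with design matrix A = ∂g/∂x and noise covariance B C_ll Bᵀ, where B = ∂g/∂l, after which the usual Kalman update applies.

```python
import numpy as np

def implicit_kalman_update(x_prior, P_prior, l_obs, C_ll, g, A, B):
    """Kalman measurement update for an implicit constraint g(x, l) = 0.
    A, B: Jacobians of g w.r.t. state and observations at (x_prior, l_obs)."""
    w = -g(x_prior, l_obs)                        # pseudo-measurement
    S = A @ P_prior @ A.T + B @ C_ll @ B.T        # innovation covariance
    K = P_prior @ A.T @ np.linalg.inv(S)          # gain
    x_post = x_prior + K @ w
    P_post = (np.eye(len(x_prior)) - K @ A) @ P_prior
    return x_post, P_post

# Sanity check: the explicit model l = H x written implicitly as g = l - H x.
H = np.array([[1.0, 0.0]])
g = lambda x, l: l - H @ x
x_post, P_post = implicit_kalman_update(
    np.zeros(2), np.eye(2), np.array([0.5]), 0.01 * np.eye(1),
    g, A=-H, B=np.eye(1))
print(x_post)   # matches the standard Kalman update for this linear case
```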

- Motion, Tracking and Optical Flow | Pp. 194-203