Publications catalog - books
Computer Vision: ECCV 2002: 7th European Conference on Computer Vision Copenhagen, Denmark, May 28-31, 2002 Proceedings, Part I
Anders Heyden; Gunnar Sparr; Mads Nielsen; Peter Johansen (eds.)
Conference: 7th European Conference on Computer Vision (ECCV), Copenhagen, Denmark, May 28-31, 2002
Abstract/description – provided by the publisher
Not available.
Keywords – provided by the publisher
Image Processing and Computer Vision; Computer Graphics; Pattern Recognition; Artificial Intelligence
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2002 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-43745-1
Electronic ISBN
978-3-540-47969-7
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2002
Copyright information
© Springer-Verlag Berlin Heidelberg 2002
Table of contents
New View Generation with a Bi-centric Camera
Daphna Weinshall; Mi-Suen Lee; Tomas Brodsky; Miroslav Trajkovic; Doron Feldman
We propose a novel method for new view generation from a rectified sequence of images. Our new images correspond to a new camera model, which we call a bi-centric camera; in this model the centers of horizontal and vertical projections lie in different locations on the camera’s optical axis. This model reduces to the regular pinhole camera when the two projection centers coincide, and to the pushbroom camera when one projection center lies at infinity. We first analyze the properties of this camera model. We then show how to generate new bi-centric views from vertical cuts in the epipolar volume of a rectified sequence. Every vertical cut generates a new bi-centric view, where the specific parameters of the cut determine the location of the projection centers. We discuss and demonstrate applications, including the generation of images where the virtual camera lies behind occluding surfaces (e.g., behind the back wall of a room) or in unreachable positions (e.g., in front of a glass window). Our final application is the generation of movies taken by a simulated forward-moving camera, using as input a movie taken by a sideways-moving camera.
- Image Features / Visual Motion | Pp. 614-628
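The vertical-cut construction lends itself to a very small implementation. Below is a minimal sketch, assuming the rectified sequence is already stacked as a (T, H, W) NumPy volume and that the cut is linear in the column index; `bicentric_view`, `slope`, and `offset` are illustrative names, not the authors' API.

```python
import numpy as np

def bicentric_view(volume, slope, offset):
    """Synthesize one bi-centric view by taking a vertical cut through the
    epipolar volume: output column x is copied from frame t = slope*x + offset,
    so the cut parameters play the role of the projection-center locations."""
    T, H, W = volume.shape
    out = np.empty((H, W), dtype=volume.dtype)
    for x in range(W):
        t = int(round(slope * x + offset))
        t = min(max(t, 0), T - 1)      # clamp the cut to the recorded volume
        out[:, x] = volume[t, :, x]    # one image column per (frame, column)
    return out

# toy usage: 60 frames of a 120x160 sideways-translating sequence
frames = np.random.rand(60, 120, 160)
view = bicentric_view(frames, slope=0.25, offset=5.0)  # slope=0: pinhole frame
```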
Recognizing and Tracking Human Action
Josephine Sullivan; Stefan Carlsson
Human activity can be described as a sequence of 3D body postures. The traditional approach to recognition and 3D reconstruction of human activity has been to track motion in 3D, mainly using advanced geometric and dynamic models. In this paper we reverse this process. View-based activity recognition serves as an input to a human body location tracker, with the ultimate goal of 3D reanimation in mind. We demonstrate that specific human actions can be detected from single-frame postures in a video sequence. By recognizing the image of a person’s posture as corresponding to a particular key frame from a set of stored key frames, it is possible to map body locations from the key frames to actual frames. This is achieved using a shape matching algorithm based on qualitative similarity that computes point-to-point correspondence between shapes, together with information about appearance. As the mapping is from fixed key frames, our tracking does not suffer from the problem of having to reinitialise when it gets lost. It is effectively a closed loop. We present experimental results both for recognition and tracking for a sequence of a tennis player.
- Image Features / Visual Motion | Pp. 629-644
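As a rough illustration of the key-frame idea (not the authors' qualitative shape-matching algorithm), the sketch below scores a frame's contour points against each stored key frame with a symmetric nearest-neighbour distance and returns the best match; `shape_distance` and `recognize_posture` are hypothetical names.

```python
import numpy as np

def shape_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two 2-D point sets,
    a crude stand-in for the paper's qualitative shape matching."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def recognize_posture(frame_pts, key_frames):
    """Index of the stored key frame whose contour points best match the
    observed frame; body locations annotated on that key frame can then
    be mapped onto the actual frame."""
    return int(np.argmin([shape_distance(frame_pts, k) for k in key_frames]))

# usage with (N, 2) arrays of contour points
key_frames = [np.random.rand(80, 2) for _ in range(5)]
best = recognize_posture(np.random.rand(80, 2), key_frames)
```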
Towards Improved Observation Models for Visual Tracking: Selective Adaptation
Jaco Vermaak; Patrick Pérez; Michel Gangnet; Andrew Blake
An important issue in tracking is how to incorporate an appropriate degree of adaptivity into the observation model. Without any adaptivity, tracking fails when object properties change, for example when illumination changes affect surface colour. Conversely, if an observation model adapts too readily then, during some transient failure of tracking, it is liable to adapt erroneously to some part of the background. The approach proposed here is to adapt selectively, allowing adaptation only during periods when two particular conditions are met: that the object should be both present and in motion. The proposed mechanism for adaptivity is tested here with a foreground colour and motion model. The experimental setting itself is novel in that it uses combined colour and motion observations from a fixed filter bank, with motion used also for initialisation via a Monte Carlo proposal distribution. Adaptation is performed using a stochastic EM algorithm, during periods that meet the conditions above. Tests verify the value of such adaptivity, in that immunity to distraction from clutter of similar colour to the object is considerably enhanced.
- Image Features / Visual Motion | Pp. 645-660
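The gating condition itself is simple to express. The following sketch freezes a colour model unless the object is judged both present and in motion; the paper performs the actual adaptation with a stochastic EM algorithm, whereas this stand-in uses plain exponential forgetting with an assumed `rate` parameter.

```python
import numpy as np

def update_colour_model(model, observation, present, moving, rate=0.1):
    """Adapt the foreground colour model only while the object is both
    present and in motion; otherwise freeze it, so a transient tracking
    failure cannot drag the model onto similarly coloured background."""
    if present and moving:
        model = (1.0 - rate) * model + rate * observation
    return model

# usage: histograms as normalised numpy arrays
model = np.full(64, 1.0 / 64)
obs = np.random.dirichlet(np.ones(64))
model = update_colour_model(model, obs, present=True, moving=True)
```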
Color-Based Probabilistic Tracking
P. Pérez; C. Hue; J. Vermaak; M. Gangnet
Color-based trackers recently proposed in [,,] have proved robust and versatile at a modest computational cost. They are especially appealing for tracking tasks where the spatial structure of the tracked objects exhibits such dramatic variability that trackers based on a space-dependent appearance reference would break down very quickly. The trackers in [,,] rely on the deterministic search of a window whose color content matches a reference color histogram model.
Relying on the same principle of color histogram distance, but within a probabilistic framework, we introduce a new Monte Carlo tracking technique. The use of a particle filter allows us to better handle color clutter in the background, as well as complete occlusion of the tracked entities over a few frames.
This probabilistic approach is very flexible and can be extended in a number of useful ways. In particular, we introduce the following ingredients: multi-part color modeling to capture a rough spatial layout ignored by global histograms, incorporation of a background color model when relevant, and extension to multiple objects.
- Image Features / Visual Motion | Pp. 661-675
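A toy version of such a colour-based particle filter fits in a few lines. The sketch below illustrates the predict-reweight-resample cycle using choices the abstract does not specify (a random-walk dynamic model, a global RGB histogram, an exponential likelihood on the Bhattacharyya similarity with an assumed scale `lam`); it is not the authors' implementation.

```python
import numpy as np

def colour_histogram(patch, bins=8):
    """Global RGB histogram of an (h, w, 3) uint8 patch, normalised to sum 1."""
    h, _ = np.histogramdd(patch.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    h = h.ravel()
    return h / max(h.sum(), 1.0)

def particle_filter_step(particles, image, ref_hist, box=(24, 24),
                         noise=5.0, lam=20.0):
    """One predict-reweight-resample cycle: diffuse (y, x) particles with a
    random walk, weight each by a likelihood on the Bhattacharyya similarity
    between its window's histogram and the reference, then resample."""
    n = len(particles)
    particles = particles + np.random.randn(n, 2) * noise    # dynamics
    H, W = image.shape[:2]
    bh, bw = box
    weights = np.empty(n)
    for i, (y, x) in enumerate(particles):
        y = int(np.clip(y, 0, H - bh)); x = int(np.clip(x, 0, W - bw))
        hist = colour_histogram(image[y:y + bh, x:x + bw])
        bc = np.sum(np.sqrt(hist * ref_hist))    # Bhattacharyya coefficient
        weights[i] = np.exp(-lam * (1.0 - bc))   # colour likelihood
    weights /= weights.sum()
    idx = np.random.choice(n, size=n, p=weights)  # resample
    return particles[idx]
```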
Dense Motion Analysis in Fluid Imagery
T. Corpetti; É. Mémin; P. Pérez
Analyzing fluid motion is essential in a number of domains and can rarely be handled using generic computer vision techniques. In this particular application context, we address two distinct problems. First, we describe a dedicated dense motion estimator. The approach relies on constraints stemming from fluid motion properties and allows us to recover dense motion fields of good quality. Second, we address the problem of analyzing such velocity fields. We present a form of motion-based segmentation relying on an analytic representation of the motion field that permits the extraction of important quantities such as singularities, stream functions, or velocity potentials. The proposed method has the advantage of being robust, simple, and fast.
- Image Features / Visual Motion | Pp. 676-691
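The analytic quantities mentioned at the end have compact discrete counterparts. A minimal sketch, assuming the dense flow field is given as two NumPy arrays: the divergence and vorticity maps below localise sources/sinks and vortices; recovering the stream function and velocity potential themselves (e.g., by solving Poisson equations) is omitted.

```python
import numpy as np

def div_curl(u, v):
    """Divergence and vorticity maps of a dense flow field (u, v); their
    extrema localise sources/sinks and vortices of the fluid motion."""
    du_dy, du_dx = np.gradient(u)    # np.gradient returns (d/dy, d/dx)
    dv_dy, dv_dx = np.gradient(v)
    return du_dx + dv_dy, dv_dx - du_dy

# usage on a synthetic vortex
ys, xs = np.mgrid[-1:1:64j, -1:1:64j]
divergence, vorticity = div_curl(-ys, xs)   # pure rotation: div ~ 0
```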
A Layered Motion Representation with Occlusion and Compact Spatial Support
Allan D. Jepson; David J. Fleet; Michael J. Black
We describe a 2.5D layered representation for visual motion analysis. The representation provides a global interpretation of image motion in terms of several spatially localized foreground regions along with a background region. Each of these regions comprises a parametric shape model and a parametric motion model. The representation also contains depth ordering, so visibility and occlusion are correctly included in the estimation of the model parameters. Finally, because the number of objects, their positions, shapes and sizes, and their relative depths are all unknown, initial models are drawn from a proposal distribution and then compared using a penalized likelihood criterion. This allows us to automatically initialize new models, and to compare different depth orderings.
- Image Features / Visual Motion | Pp. 692-706
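The abstract does not spell out the penalized likelihood criterion; a BIC-style penalty is one standard choice and is enough to show how candidate interpretations with different numbers of regions or depth orderings could be compared. `penalised_log_likelihood` is a hypothetical helper, and the numbers are made up for illustration.

```python
import numpy as np

def penalised_log_likelihood(log_lik, n_params, n_obs):
    """BIC-style score: data log-likelihood minus a complexity penalty,
    so models with more regions must earn their extra parameters."""
    return log_lik - 0.5 * n_params * np.log(n_obs)

# compare two candidate interpretations of the same image data:
# one foreground layer (fewer parameters) vs. two layers (better fit)
one_layer = penalised_log_likelihood(log_lik=-1250.0, n_params=12, n_obs=10_000)
two_layers = penalised_log_likelihood(log_lik=-1180.0, n_params=24, n_obs=10_000)
best = max([("one layer", one_layer), ("two layers", two_layers)],
           key=lambda t: t[1])
```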
Incremental Singular Value Decomposition of Uncertain Data with Missing Values
Matthew Brand
We introduce an incremental singular value decomposition (SVD) of incomplete data. The SVD is developed as data arrives, and can handle arbitrary missing/untrusted values, correlated uncertainty across rows or columns of the measurement matrix, and user priors. Since incomplete data does not uniquely specify an SVD, the procedure selects one having minimal rank. For a dense p × q matrix of low rank r, the incremental method has time complexity O(pqr) and space complexity O((p + q)r), better than highly optimized batch algorithms such as MATLAB’s svd(). In cases of missing data, it produces factorings of lower rank and residual than batch algorithms applied to standard missing-data imputations. We show applications in computer vision and audio feature extraction. In computer vision, we use the incremental SVD to develop an efficient and unusually robust subspace-estimating flow-based tracker, and to handle occlusions/missing points in structure-from-motion factorizations.
- Image Features / Visual Motion | Pp. 707-720
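The core of an incremental SVD is the rank-one update when a new column arrives. The sketch below follows the standard Brand-style construction (project onto the current subspace, take the residual, refactor a small core matrix); the paper's handling of missing/untrusted values, uncertainty, priors, and rank truncation is omitted.

```python
import numpy as np

def svd_append_column(U, s, Vt, c):
    """Update the thin SVD  U @ diag(s) @ Vt  when a new column c arrives,
    without refactoring the whole matrix."""
    p = U.T @ c                      # component of c inside the subspace
    r = c - U @ p                    # residual, orthogonal to the subspace
    k = np.linalg.norm(r)
    j = r / k if k > 1e-10 else np.zeros_like(c)
    # small (rank+1) core matrix that absorbs the new column
    K = np.block([[np.diag(s), p[:, None]],
                  [np.zeros((1, len(s))), np.array([[k]])]])
    Uk, sk, Vtk = np.linalg.svd(K)
    U_new = np.hstack([U, j[:, None]]) @ Uk
    Vt_pad = np.block([[Vt, np.zeros((Vt.shape[0], 1))],
                       [np.zeros((1, Vt.shape[1])), np.ones((1, 1))]])
    return U_new, sk, Vtk @ Vt_pad

# usage: start from the SVD of a few columns, then append one more
A = np.random.rand(50, 5)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U, s, Vt = svd_append_column(U, s, Vt, np.random.rand(50))
```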
Symmetrical Dense Optical Flow Estimation with Occlusions Detection
Luis Alvarez; Rachid Deriche; Théo Papadopoulo; Javier Sánchez
Traditional techniques of dense optical flow estimation do not generally yield symmetrical solutions: the results differ if they are applied between images I1 and I2 or between images I2 and I1. In this work, we present a method to recover a dense optical flow field from two images, while explicitly taking into account the symmetry across the images as well as possible occlusions and discontinuities in the flow field. The idea is to consider both displacement vectors, from I1 to I2 and from I2 to I1, and to minimise an energy functional that explicitly encodes all those properties. This variational problem is then solved using the gradient flow defined by the Euler-Lagrange equations associated with the energy. In order to reduce the risk of being trapped in an irrelevant minimum, a focusing strategy based on a multi-resolution technique is used to converge toward the solution. Promising experimental results on both synthetic and real images are presented to illustrate the capabilities of this symmetrical variational approach to recover accurate optical flow.
- Image Features / Visual Motion | Pp. 721-735
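Symmetry also yields a cheap occlusion test: composing the forward and backward flows should approximately return each visible pixel to where it started. The sketch below implements that forward-backward consistency check (a common simplification, not the paper's variational energy); flows are assumed stored as (H, W, 2) arrays in (dx, dy) order.

```python
import numpy as np

def occlusion_mask(flow_fw, flow_bw, tol=1.0):
    """Flag occluded pixels: warp each pixel from I1 to I2 with the forward
    flow, look up the backward flow there, and mark the pixel where the
    round trip does not come back close to the start."""
    H, W = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    x2 = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, W - 1)
    y2 = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, H - 1)
    round_trip = flow_fw + flow_bw[y2, x2]   # should be ~0 where visible
    return np.linalg.norm(round_trip, axis=2) > tol
```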
Audio-Video Sensor Fusion with Probabilistic Graphical Models
Matthew J. Beal; Hagai Attias; Nebojsa Jojic
We present a new approach to modeling and processing multimedia data. This approach is based on graphical models that combine audio and video variables. We demonstrate it by developing a new algorithm for tracking a moving object in a cluttered, noisy scene using two microphones and a camera. Our model uses unobserved variables to describe the data in terms of the process that generates them. It is therefore able to capture and exploit the statistical structure of the audio and video data separately, as well as their mutual dependencies. Model parameters are learned from data via an EM algorithm, and automatic calibration is performed as part of this procedure. Tracking is done by Bayesian inference of the object location from data. We demonstrate successful performance on multimedia clips captured in real world scenarios using off-the-shelf equipment.
- Image Features / Visual Motion | Pp. 736-750
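A stripped-down flavour of the audio side: the time delay between the two microphones carries the horizontal position information that the graphical model fuses with video. The sketch below estimates the delay by cross-correlation and fits a delay-to-position calibration by least squares; the paper instead learns calibration automatically inside EM on the full generative model, so the linear map and the function names here are assumptions.

```python
import numpy as np

def interaural_delay(mic_left, mic_right):
    """Sample delay between the two microphone signals, read off the peak
    of their cross-correlation."""
    corr = np.correlate(mic_left, mic_right, mode="full")
    return int(np.argmax(corr)) - (len(mic_right) - 1)

def fit_delay_to_position(delays, positions):
    """Least-squares fit of a linear delay -> position calibration map."""
    A = np.stack([delays, np.ones_like(delays)], axis=1)
    (slope, intercept), *_ = np.linalg.lstsq(A, positions, rcond=None)
    return slope, intercept

# usage with 1-D float arrays of measured delays and known positions
delays = np.array([-8.0, -3.0, 0.0, 4.0, 9.0])
positions = np.array([40.0, 90.0, 120.0, 160.0, 210.0])
slope, intercept = fit_delay_to_position(delays, positions)
```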
Increasing Space-Time Resolution in Video
Eli Shechtman; Yaron Caspi; Michal Irani
We propose a method for constructing a video sequence of high space-time resolution by combining information from multiple low-resolution video sequences of the same dynamic scene. Super-resolution is performed simultaneously in time and in space. By “temporal super-resolution” we mean recovering rapid dynamic events that occur faster than the regular frame rate. Such dynamic events are not visible (or else are observed incorrectly) in any of the input sequences, even if these are played in “slow motion”.
The spatial and temporal dimensions are very different in nature, yet are interrelated. This leads to interesting visual tradeoffs in time and space, and to new video applications. These include: (i) treatment of artifacts (e.g., motion blur) by increasing the resolution, and (ii) combination of input sequences of different space-time resolutions (e.g., NTSC, PAL, and even high-quality still images) to generate a high-quality video sequence.
- Visual Motion | Pp. 753-768
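To make the "temporal super-resolution" idea concrete, the toy sketch below treats each observed frame value as a box average (the exposure time) over several unknown high-rate values, and recovers them by regularised least squares in 1-D; the actual method couples space and time across multiple input sequences, so this only illustrates the observation model, under assumptions of my own.

```python
import numpy as np

def temporal_superresolution(lowrate, factor):
    """Recover a high-rate 1-D signal from low-rate samples, each modelled
    as a box average of `factor` consecutive high-rate values, using
    least squares with a first-order smoothness regulariser."""
    n_low = len(lowrate)
    n_high = n_low * factor
    A = np.zeros((n_low, n_high))
    for i in range(n_low):                 # box-average observation model
        A[i, i * factor:(i + 1) * factor] = 1.0 / factor
    # finite-difference regulariser makes the system well posed
    D = np.eye(n_high, k=1)[: n_high - 1] - np.eye(n_high)[: n_high - 1]
    M = np.vstack([A, 0.1 * D])
    b = np.concatenate([lowrate, np.zeros(n_high - 1)])
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x

# usage: a rapid event sampled 4x too slowly, then recovered
t = np.linspace(0, 4 * np.pi, 32)
fast = np.sin(5 * t)                          # rapid event
observed = fast.reshape(-1, 4).mean(axis=1)   # low-rate camera, factor 4
recovered = temporal_superresolution(observed, factor=4)
```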