Publications catalogue - books

Computer Vision: ECCV 2002: 7th European Conference on Computer Vision Copenhagen, Denmark, May 28-31, 2002 Proceedings, Part IV

Anders Heyden ; Gunnar Sparr ; Mads Nielsen ; Peter Johansen (eds.)

Presented at: 7th European Conference on Computer Vision (ECCV), Copenhagen, Denmark, May 28-31, 2002

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Image Processing and Computer Vision; Computer Graphics; Pattern Recognition; Artificial Intelligence

Availability

Detected institution: Not detected
Year of publication: 2002
Browse: SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-43748-2

Electronic ISBN

978-3-540-47979-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2002

Publication rights information

© Springer-Verlag Berlin Heidelberg 2002

Table of contents

Camera Calibration with One-Dimensional Objects

Zhengyou Zhang

Camera calibration has been studied extensively in computer vision and photogrammetry, and the proposed techniques in the literature include those using 3D apparatus (two or three planes orthogonal to each other, or a plane undergoing a pure translation, etc.), 2D objects (planar patterns undergoing unknown motions), and 0D features (self-calibration using unknown scene points). This paper proposes a new calibration technique using 1D objects (points aligned on a line), thus filling the missing dimension in calibration. In particular, we show that camera calibration is not possible with free-moving 1D objects, but can be solved if one point is fixed. A closed-form solution is developed if six or more observations of such a 1D object are made. For higher accuracy, a nonlinear technique based on the maximum likelihood criterion is then used to refine the estimate. Besides the theoretical aspect, the proposed technique is also important in practice, especially when calibrating multiple cameras mounted apart from each other, where the calibration objects are required to be visible simultaneously.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 161-174
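The closed-form step in the abstract above is involved, but the maximum-likelihood refinement it mentions is easy to sketch: with the point A fixed and the other points at known distances along a stick that rotates about A, the unknowns are the intrinsics, A itself, and one direction per observation, refined by minimizing reprojection error. The following is a minimal sketch of that refinement stage only, on synthetic data; the parameterization and all numeric values are our own illustrative assumptions, not the paper's.

```python
# Sketch of the maximum-likelihood refinement for 1D calibration: a stick
# rotates about a fixed point A, with points B and C at known distances
# L_AB and L_AC along it.  Unknowns: intrinsics (fx, fy, skew, u0, v0),
# the fixed point A, and one unit direction (theta, phi) per observation.
import numpy as np
from scipy.optimize import least_squares

L_AB, L_AC = 0.6, 1.0      # known distances from A along the stick (metres)
N = 20                     # number of observations of the stick

def unpack(p):
    fx, fy, s, u0, v0 = p[:5]
    K = np.array([[fx, s, u0], [0, fy, v0], [0, 0, 1.0]])
    return K, p[5:8], p[8:].reshape(N, 2)

def direction(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def project(K, X):
    x = K @ X
    return x[:2] / x[2]

def residuals(p, a_obs, b_obs, c_obs):
    K, A, angles = unpack(p)
    res = [project(K, A) - a_obs]
    for i in range(N):
        d = direction(*angles[i])
        res.append(project(K, A + L_AB * d) - b_obs[i])
        res.append(project(K, A + L_AC * d) - c_obs[i])
    return np.concatenate(res)

# Synthetic ground truth and noisy image observations (illustrative only).
rng = np.random.default_rng(0)
K_true = np.array([[800, 0.5, 320], [0, 790, 240], [0, 0, 1.0]])
A_true = np.array([0.1, -0.2, 4.0])
ang_true = rng.uniform([0.3, 0.0], [1.2, 2 * np.pi], size=(N, 2))
a_obs = project(K_true, A_true) + rng.normal(0, 0.5, 2)
b_obs = np.array([project(K_true, A_true + L_AB * direction(*a)) for a in ang_true])
c_obs = np.array([project(K_true, A_true + L_AC * direction(*a)) for a in ang_true])
b_obs += rng.normal(0, 0.5, b_obs.shape)
c_obs += rng.normal(0, 0.5, c_obs.shape)

# Refine from a perturbed initial guess; in the paper this initial estimate
# would come from the closed-form solution.
p0 = np.concatenate([[760, 820, 0, 300, 260], A_true + 0.2, (ang_true + 0.05).ravel()])
sol = least_squares(residuals, p0, args=(a_obs, b_obs, c_obs))
print("refined fx, fy, skew, u0, v0:", np.round(sol.x[:5], 2))
```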

Automatic Camera Calibration from a Single Manhattan Image

J. Deutscher; M. Isard; J. MacCormick

We present a completely automatic method for obtaining the approximate calibration of a camera (alignment to a world frame and focal length) from a single image of an unknown scene, provided only that the scene satisfies a Manhattan world assumption. This assumption states that the imaged scene contains three orthogonal, dominant directions, and is often satisfied by outdoor or indoor views of man-made structures and environments.

The proposed method combines the calibration likelihood introduced in [] with a stochastic search algorithm to obtain a MAP estimate of the camera’s focal length and alignment. Results on real images of indoor scenes are presented. The calibrations obtained are less accurate than those from standard methods employing a calibration pattern or multiple images. However, the outputs are certainly good enough for common vision tasks such as tracking. Moreover, the results are obtained without any user intervention, from a single image, and without use of a calibration pattern.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 175-188
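The paper's stochastic MAP search is not reproduced here, but the geometric fact it builds on is classical: under the Manhattan assumption the vanishing points of orthogonal directions constrain the intrinsics, and with zero skew and the principal point assumed at the image centre, two finite vanishing points of orthogonal directions already determine the focal length. A hedged sketch of that simplified relation (the vanishing-point coordinates below are hypothetical):

```python
# Focal length from two finite vanishing points of orthogonal scene
# directions, assuming zero skew and a principal point at the image centre.
# (A classical simplification of the Manhattan-world constraint, not the
# paper's stochastic MAP search.)
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """v1, v2: image vanishing points of two orthogonal scene directions."""
    p = np.asarray(principal_point, dtype=float)
    d = np.dot(np.asarray(v1, float) - p, np.asarray(v2, float) - p)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-d))

# Hypothetical vanishing points in a 640x480 image:
print(focal_from_orthogonal_vps((1500.0, 260.0), (-80.0, 230.0), (320.0, 240.0)))
```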

What Can Be Known about the Radiometric Response from Images?

Michael D. Grossberg; Shree K. Nayar

Brightness values of pixels in an image are related to image irradiance by a non-linear function, called the radiometric response function. Recovery of this function is important since many algorithms in computer vision and image processing use image irradiance. Several investigators have described methods for recovery of the radiometric response, without using charts, from multiple exposures of the same scene. All these recovery methods are based solely on the correspondence of gray-levels in one exposure to gray-levels in another exposure. This correspondence can be described by a function we call the brightness transfer function. We show that brightness transfer functions, and thus images themselves, do not uniquely determine the radiometric response function, nor the ratios of exposure between the images. We completely determine the ambiguity associated with the recovery of the response function and the exposure ratios. We show that all previous methods break these ambiguities only by making assumptions on the form of the response function. While iterative schemes which may not converge were used previously to find the exposure ratio, we show when it can be recovered directly from the brightness transfer function. We present a novel method to recover the brightness transfer function between images from only their brightness histograms. This allows us to determine the brightness transfer function between images of different scenes whenever the change in the distribution of scene radiances is small enough. We show an example of recovery of the response function from an image sequence with scene motion by constraining the form of the response function to break the ambiguities.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 189-205
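The histogram-only recovery of the brightness transfer function described above can be illustrated with ordinary histogram specification: when the distribution of scene radiances is (approximately) shared between the two images, the function maps each gray-level of one exposure to the gray-level with the same cumulative frequency in the other. A minimal sketch under that assumption, not the authors' exact estimator:

```python
# Estimate the brightness transfer function tau, with img2 ~ tau(img1),
# from the two brightness histograms alone: tau(b) = H2^{-1}(H1(b)),
# where H1 and H2 are the cumulative histograms.
import numpy as np

def brightness_transfer_function(img1, img2, n_levels=256):
    h1, _ = np.histogram(img1, bins=n_levels, range=(0, n_levels))
    h2, _ = np.histogram(img2, bins=n_levels, range=(0, n_levels))
    c1 = np.cumsum(h1) / h1.sum()              # H1
    c2 = np.cumsum(h2) / h2.sum()              # H2
    levels = np.arange(n_levels)
    # Invert H2 by interpolation: for each value H1(b), find the gray-level
    # with the same cumulative frequency in the second image.
    return np.interp(c1, c2, levels)

# Toy check: a synthetic "second exposure" produced by a known gamma curve.
rng = np.random.default_rng(1)
img1 = rng.integers(0, 256, size=(240, 320))
img2 = (255.0 * (img1 / 255.0) ** 0.5).astype(np.uint8)
tau = brightness_transfer_function(img1, img2)
print(tau[64], 255 * (64 / 255) ** 0.5)        # estimated vs. true mapping of level 64
```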

Estimation of Illuminant Direction and Intensity of Multiple Light Sources

Wei Zhou; Chandra Kambhamettu

This paper presents a novel scheme for locating multiple light sources and estimating their intensities from a pair of stereo images of a sphere. No prior knowledge of the location and radius of the sphere is necessary. The sphere surface is not assumed to be a purely Lambertian surface; instead, it has both Lambertian and specular properties. The light source locating algorithm is based on the fact that the Lambertian intensity does not depend on the viewing direction, while the specular intensity depends strongly on it. Exploiting this fact, the viewpoint change between a pair of stereo images can be used to separate the image of the sphere into two images, one containing the Lambertian intensities and the other the specular intensities. The specular image is used to find the directions of the light sources, and the Lambertian image model is then used to find their intensities. Experiments on both synthetic and real images show that the scheme is successful and robust, recovering both the directions and the intensities of the light sources accurately.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 206-220
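The specular half of the scheme above reduces to mirror geometry: at a specular highlight the surface normal of the sphere is known from the recovered sphere geometry, and the light direction is the viewing direction reflected about that normal. A sketch of that single step, assuming the sphere centre, radius and 3D highlight position have already been recovered (all values below are hypothetical):

```python
# Light direction from a specular highlight on a sphere with known centre
# and radius, for a camera at the origin (perfect-mirror model; this is
# only the specular step of the scheme).
import numpy as np

def light_direction_from_highlight(sphere_center, radius, highlight_3d):
    """highlight_3d: 3D position of the specular highlight on the sphere."""
    c = np.asarray(sphere_center, float)
    x = np.asarray(highlight_3d, float)
    n = (x - c) / radius                       # outward unit surface normal
    v = -x / np.linalg.norm(x)                 # unit direction towards the camera
    l = 2.0 * np.dot(n, v) * n - v             # reflect the view direction about n
    return l / np.linalg.norm(l)

# Hypothetical sphere and highlight position:
center, r = np.array([0.0, 0.0, 5.0]), 1.0
highlight = center + r * np.array([0.3, 0.4, -np.sqrt(1 - 0.25)])
print(light_direction_from_highlight(center, r, highlight))
```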

3D Modelling Using Geometric Constraints: A Parallelepiped Based Approach

Marta Wilczkowiak; Edmond Boyer; Peter Sturm

In this paper, efficient and generic tools for calibration and 3D reconstruction are presented. These tools exploit geometric constraints frequently present in man-made environments and allow camera calibration as well as scene structure to be estimated with a small amount of user interaction and little prior knowledge. The proposed approach is based on primitives that naturally characterize rigidity constraints: parallelepipeds. It has been shown previously that the intrinsic metric characteristics of a parallelepiped are dual to the intrinsic characteristics of a perspective camera. Here, we generalize this idea by taking into account additional redundancies between multiple images of multiple parallelepipeds. We propose a method for the estimation of camera and scene parameters that bears strong similarities to some self-calibration approaches. Taking into account prior knowledge of scene primitives or cameras leads to simpler equations than for standard self-calibration, and is expected to improve results, as well as to allow structure and motion recovery in situations that are otherwise under-constrained. These principles are illustrated by experimental calibration results and several reconstructions from uncalibrated images.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 221-236
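The primitive underlying the approach above is easy to parameterize: a parallelepiped is determined by three edge lengths, three inter-edge angles, and a pose, and these are the quantities the rigidity constraints act on. The sketch below only builds such a primitive and projects its vertices with a pinhole camera; the parameterization and all numeric values are our own illustrative choices, not the paper's formulation:

```python
# Parameterize a parallelepiped by three edge lengths and three inter-edge
# angles, then project its eight vertices with a pinhole camera.  This only
# illustrates the primitive that the rigidity constraints act on.
import itertools
import numpy as np

def parallelepiped_vertices(lengths, angles):
    """lengths = (l1, l2, l3); angles = (a23, a13, a12) between edges, in radians."""
    l1, l2, l3 = lengths
    a, b, g = angles
    e1 = l1 * np.array([1.0, 0.0, 0.0])
    e2 = l2 * np.array([np.cos(g), np.sin(g), 0.0])
    cy = (np.cos(a) - np.cos(b) * np.cos(g)) / np.sin(g)
    e3 = l3 * np.array([np.cos(b), cy, np.sqrt(1.0 - np.cos(b) ** 2 - cy ** 2)])
    E = np.column_stack([e1, e2, e3])
    # The eight vertices are all 0/1 combinations of the three edge vectors.
    return np.array([E @ np.array(k, float) for k in itertools.product((0, 1), repeat=3)])

def project(K, R, t, X):
    Xc = X @ R.T + t                           # world -> camera
    x = Xc @ K.T
    return x[:, :2] / x[:, 2:3]

# Hypothetical camera and box (lengths in metres, angles in degrees):
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
V = parallelepiped_vertices((2.0, 1.0, 1.5), np.radians((90.0, 80.0, 70.0)))
print(project(K, np.eye(3), np.array([0.0, 0.0, 8.0]), V).round(1))
```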

Geometric Properties of Central Catadioptric Line Images

João P. Barreto; Helder Araujo

It is highly desirable that an imaging system has a single effective viewpoint. Central catadioptric systems are imaging systems that use mirrors to enhance the field of view while keeping a unique center of projection. A general model for central catadioptric image formation has already been established. The present paper exploits this model to study the catadioptric projection of lines. The equations and geometric properties of general catadioptric line imaging are derived. We show that it is possible to determine the position of both the effective viewpoint and the absolute conic in the catadioptric image plane from the images of three lines. It is also proved that it is possible to identify the type of catadioptric system and the position of the line at infinity without further information. A methodology for central catadioptric system calibration is proposed. Reconstruction aspects are discussed. Experimental results are presented. All the results presented are original and completely new.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 237-251
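A concrete consequence of the projection model above is that the central catadioptric image of a 3D line is a conic rather than a straight line. As a first step towards the properties discussed in the paper, the sketch below fits that conic to sampled edge points with a standard algebraic least-squares (null-space) fit; it illustrates the representation only, not the paper's derivations:

```python
# Fit a conic  a x^2 + b xy + c y^2 + d x + e y + f = 0  to image points
# sampled along a catadioptric line image, using the SVD null space of the
# design matrix (standard algebraic fit).
import numpy as np

def fit_conic(points):
    x, y = np.asarray(points, float).T
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    a, b, c, d, e, f = vt[-1]                  # coefficients from the smallest
    return np.array([[a, b / 2, d / 2],        # singular value, written as the
                     [b / 2, c, e / 2],        # symmetric 3x3 conic matrix
                     [d / 2, e / 2, f]])

# Hypothetical points on a circle of radius 2 centred at (1, -1); a circle
# is one possible shape of a catadioptric line image.
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
pts = np.column_stack([1 + 2 * np.cos(t), -1 + 2 * np.sin(t)])
C = fit_conic(pts)
p = np.array([3.0, -1.0, 1.0])                 # a point on the circle, homogeneous
print(p @ C @ p)                               # ~0 up to numerical noise
```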

Another Way of Looking at Plane-Based Calibration: The Centre Circle Constraint

Pierre Gurdjos; Alain Crouzil; René Payrissat

Plane-based calibration consists of recovering the internal parameters of the camera from the views of a planar pattern with a known geometric structure. The existing direct algorithms use a problem formulation based on the properties of basis vectors. They minimize algebraic distances and may require a ‘good’ choice of system normalization. Our contribution is to put this problem into a more intuitive geometric framework. A solution can be obtained by intersecting circles, called Centre Circles, whose parameters are computed from the world-to-image homographies. The Centre Circle is the camera centre locus when planar figures are in perspective correspondence, in accordance with a theorem of Poncelet. An interesting aspect of our formulation, using the Centre Circle constraint, is that we can easily transform the cost function into a sum of squared Euclidean distances. Simulations on synthetic data and an application with real images confirm the strong points of our method.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 252-266
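For context, the 'existing direct algorithms' the abstract refers to stack linear constraints on the image of the absolute conic w = K^-T K^-1 from each world-to-image homography H = [h1 h2 h3], namely h1^T w h2 = 0 and h1^T w h1 = h2^T w h2. The sketch below implements that standard algebraic formulation (not the Centre Circle method itself), with a small synthetic check whose values are hypothetical:

```python
# Classical direct plane-based calibration: each world-to-image homography
# H = [h1 h2 h3] gives two linear constraints on w = K^-T K^-1,
#   h1^T w h2 = 0   and   h1^T w h1 - h2^T w h2 = 0;
# stack them for >= 3 views, solve for w, and recover K.
import numpy as np

def _v(H, i, j):
    # Row expressing hi^T w hj as a linear form in (w11, w12, w22, w13, w23, w33).
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[0] * hj[2] + hi[2] * hj[0],
                     hi[1] * hj[2] + hi[2] * hj[1],
                     hi[2] * hj[2]])

def calibrate_from_homographies(Hs):
    rows = []
    for H in Hs:
        rows.append(_v(H, 0, 1))
        rows.append(_v(H, 0, 0) - _v(H, 1, 1))
    w11, w12, w22, w13, w23, w33 = np.linalg.svd(np.asarray(rows))[2][-1]
    W = np.array([[w11, w12, w13], [w12, w22, w23], [w13, w23, w33]])
    if W[0, 0] < 0:                            # fix the sign so W is positive definite
        W = -W
    # K K^T is proportional to W^-1.  With J the exchange matrix,
    # J W^-1 J = (J K J)(J K J)^T, so the lower Cholesky factor of J W^-1 J
    # flips back to an upper-triangular K.
    J = np.fliplr(np.eye(3))
    K = J @ np.linalg.cholesky(J @ np.linalg.inv(W) @ J) @ J
    return K / K[2, 2]

# Small synthetic check (camera and poses hypothetical).
rng = np.random.default_rng(2)
K_true = np.array([[900, 0, 320], [0, 880, 240], [0, 0, 1.0]])
Hs = []
for _ in range(4):
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    S = np.array([[0, -axis[2], axis[1]], [axis[2], 0, -axis[0]], [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(0.4) * S + (1 - np.cos(0.4)) * S @ S   # Rodrigues, 0.4 rad
    t = np.array([rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(3, 5)])
    Hs.append(K_true @ np.column_stack([R[:, 0], R[:, 1], t]))
print(np.round(calibrate_from_homographies(Hs), 1))
```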

Active Surface Reconstruction Using the Gradient Strategy

Marcel Mitran; Frank P. Ferrie

This paper describes the design and implementation of an active surface reconstruction algorithm for two-frame image sequences using passive imaging. A novel strategy based on the statistical grouping of image gradient features is used. It is shown that the gradient of the intensity in an image can successfully be used to drive the direction of the viewer’s motion. As such, an increased efficiency in the accumulation of information is demonstrated through a significant increase in the convergence rate of the depth estimator (3 to 4 times for the presented results) over traditional passive depth-from-motion. The viewer is considered to be restricted to a short baseline. A maximal-estimation framework is adopted to provide a simple approach for propagating information in a bottom-up fashion in the system. A Kalman filtering scheme is used for accumulating information temporally. The paper provides results for real-textured data to support the findings.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 267-281
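The temporal accumulation mentioned above can be illustrated with a scalar Kalman filter per pixel: each new depth-from-motion measurement, with its variance, is fused into the running estimate. A minimal sketch of that accumulation step under a static-scene assumption (it does not include the paper's gradient-driven viewpoint selection):

```python
# Per-pixel scalar Kalman update for accumulating noisy depth measurements
# over time (static scene, so the prediction step is the identity and no
# process noise is added).
import numpy as np

def kalman_depth_update(depth, var, z, z_var):
    """Fuse a new depth map z (measurement variance z_var) into the estimate."""
    gain = var / (var + z_var)                 # Kalman gain
    depth_new = depth + gain * (z - depth)     # corrected depth estimate
    var_new = (1.0 - gain) * var               # reduced uncertainty
    return depth_new, var_new

# Toy sequence: true depth 2.0 everywhere, noisy measurements fused over time.
rng = np.random.default_rng(3)
depth = np.zeros((120, 160))                   # initial estimate
var = np.full((120, 160), 1e3)                 # very uncertain prior
for _ in range(10):
    z = 2.0 + rng.normal(0.0, 0.2, depth.shape)
    depth, var = kalman_depth_update(depth, var, z, z_var=0.04)
print(float(depth.mean()), float(var.mean()))
```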

Linear Pose Estimation from Points or Lines

Adnan Ansar; Kostas Daniilidis

Estimation of camera pose from an image of points or lines with known correspondence is a thoroughly studied problem in computer vision. Most solutions are iterative and depend on nonlinear optimization of some geometric constraint, either on the world coordinates or on the projections to the image plane. For real-time applications we are interested in linear or closed-form solutions free of initialization. We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both points and lines. We present a number of simulations which compare our results to two other recent linear algorithms as well as to iterative approaches. We conclude with tests on real imagery in an augmented reality setup. We also present an analysis of the sensitivity of our algorithms to image noise.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 282-296
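As a generic point of reference for the linear methods discussed above, the sketch below implements a classical DLT-style linear pose estimate for a calibrated camera: normalize the image points by K^-1, solve for [R | t] up to scale from n >= 6 correspondences, and project the 3x3 block onto the nearest rotation. This is a standard baseline, not the authors' formulation; the synthetic camera and points are hypothetical:

```python
# DLT-style linear pose estimation for a calibrated camera: normalize the
# image points by K^-1, solve the homogeneous system for [R | t] up to
# scale, then project the 3x3 block onto the nearest rotation.
import numpy as np
from scipy.spatial.transform import Rotation

def linear_pose(X_world, x_img, K):
    n = len(X_world)
    xh = np.column_stack([x_img, np.ones(n)]) @ np.linalg.inv(K).T   # normalized points
    u, v = xh[:, 0] / xh[:, 2], xh[:, 1] / xh[:, 2]
    Xh = np.column_stack([X_world, np.ones(n)])
    A = np.zeros((2 * n, 12))
    A[0::2, 0:4] = Xh
    A[0::2, 8:12] = -u[:, None] * Xh
    A[1::2, 4:8] = Xh
    A[1::2, 8:12] = -v[:, None] * Xh
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)  # null vector, reshaped to [R | t]
    if np.linalg.det(P[:, :3]) < 0:            # resolve the overall sign ambiguity
        P = -P
    U, S, Vt = np.linalg.svd(P[:, :3])
    return U @ Vt, P[:, 3] / S.mean()          # nearest rotation, rescaled translation

# Synthetic check (camera, pose and points hypothetical).
rng = np.random.default_rng(4)
K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1.0]])
R_true = Rotation.from_euler("xyz", [0.2, -0.3, 0.1]).as_matrix()
t_true = np.array([0.3, -0.1, 4.0])
X = rng.uniform(-1, 1, size=(8, 3))
pix = (X @ R_true.T + t_true) @ K.T
x_img = pix[:, :2] / pix[:, 2:3]
R_est, t_est = linear_pose(X, x_img, K)
print(np.allclose(R_est, R_true, atol=1e-6), np.round(t_est, 3))
```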

A Video-Based Drowning Detection System

Alvin H. Kam; Wenmiao Lu; Wei-Yun Yau

This paper provides new insights into robust human tracking and semantic event detection within the context of a novel real-time video surveillance system capable of automatically detecting drowning incidents in a swimming pool. An effective background model that incorporates prior knowledge about swimming pools and aquatic environments enables swimmers to be reliably detected and tracked despite the significant presence of water ripples, splashes and shadows. Visual indicators of water crises are identified based on professional knowledge of water crisis recognition and modelled by a hierarchical set of carefully chosen swimmer descriptors. An effective alarm generation methodology is then developed to enable the timely detection of genuine water crises while minimizing the number of false alarms. The system has been tested on numerous instances of simulated water crises and potential false alarm scenarios with encouraging results.

- Calibration / Active and Real-Time and Robot Vision / Image and Video Indexing / Medical Image Understanding / Vision Systems / Engineering and Evaluations / Statistical Learning | Pp. 297-311
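The paper's background model is tailored to water ripples, splashes and shadows; purely as a generic baseline for the detection-and-tracking front end, the sketch below uses OpenCV's Gaussian-mixture background subtraction, which a system like the one described would replace with its aquatic-specific model. The video path is hypothetical:

```python
# Generic Gaussian-mixture background subtraction (OpenCV MOG2) as a
# stand-in for the paper's aquatic-specific background model; the video
# path is hypothetical.
import cv2

cap = cv2.VideoCapture("pool_camera.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground mask (0 / 127 / 255)
    # Drop shadow pixels (labelled 127 by MOG2) and clean the mask with a
    # morphological opening before extracting swimmer candidates.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # stats[i, cv2.CC_STAT_AREA] gives each blob's area; large blobs are the
    # swimmer candidates handed to the tracking and descriptor stages.
cap.release()
```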