Publications catalog - books



Computer Vision: ACCV 2007: 8th Asian Conference on Computer Vision, Tokyo, Japan, November 18-22, 2007, Proceedings, Part I

Yasushi Yagi ; Sing Bing Kang ; In So Kweon ; Hongbin Zha (eds.)

Conference: 8th Asian Conference on Computer Vision (ACCV), Tokyo, Japan, November 18-22, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-76385-7

Electronic ISBN

978-3-540-76386-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Less Is More: Coded Computational Photography

Ramesh Raskar

Computational photography combines plentiful computing, digital sensors, modern optics, actuators, and smart lights to escape the limitations of traditional cameras; it enables novel imaging applications and simplifies many computer vision tasks. However, most current computational photography methods involve taking multiple sequential photos while changing scene parameters and fusing the photos to create a richer representation. The goal of Coded Computational Photography is to modify the optics, illumination, or sensors at the time of capture so that the scene properties are encoded in a single photograph (or a few photographs). We describe several applications of coding exposure, aperture, illumination, and sensing, and describe emerging techniques to recover scene parameters from coded photographs.

- Plenary and Invited Talks | Pp. 1-12
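To make the coded-exposure idea above concrete, here is a minimal sketch that simulates 1-D motion blur under a binary flutter-shutter code and recovers the sharp signal by linear least squares; the code sequence and signal are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

# Hypothetical binary flutter-shutter code (not the sequence used in the paper).
code = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1], dtype=float)

def blur_matrix(code, n):
    """Convolution matrix for 1-D motion blur of an n-sample signal with the shutter code."""
    A = np.zeros((n + len(code) - 1, n))
    for i in range(n):
        A[i:i + len(code), i] = code
    return A

rng = np.random.default_rng(0)
signal = rng.random(64)                 # stand-in for one scan line of the scene
A = blur_matrix(code, signal.size)
blurred = A @ signal + 0.001 * rng.standard_normal(A.shape[0])

# A broad-band code keeps A well conditioned, so the sharp signal can be
# recovered from a single coded exposure by least squares.
recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)
print(np.max(np.abs(recovered - signal)))   # small reconstruction error
```

An always-open shutter corresponds to a box filter, whose convolution matrix is nearly rank-deficient; coding the exposure keeps the inversion well conditioned, which is the point of encoding scene properties at capture time.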

Optimal Algorithms in Multiview Geometry

Richard Hartley; Fredrik Kahl

This is a survey paper summarizing recent research aimed at finding guaranteed optimal algorithms for solving problems in Multiview Geometry. Many of the traditional problems in Multiview Geometry now have optimal solutions in terms of minimizing residual image-plane error. Success has been achieved in minimizing the L2 (least-squares) or L∞ (smallest maximum error) norm. The main methods involve Second Order Cone Programming, quasi-convex optimization, and branch-and-bound. The paper gives an overview of the subject while avoiding the mathematical details as far as possible; these can be found in the original papers.

- Plenary and Invited Talks | Pp. 13-34
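The quasi-convex L∞ formulation surveyed in the paper can be summarized in generic notation (not the authors'), with triangulation as the canonical example:

```latex
% Generic quasi-convex L_infinity formulation; notation is illustrative.
\begin{align*}
  \min_{\mathbf{X}} \; \max_i \;
    \frac{\left\| \mathbf{A}_i \mathbf{X} + \mathbf{b}_i \right\|_2}
         {\mathbf{c}_i^{\top}\mathbf{X} + d_i}
  \qquad \text{subject to} \qquad \mathbf{c}_i^{\top}\mathbf{X} + d_i > 0 ,
\end{align*}
% where each ratio is the reprojection error in image i and the denominator is
% the point's depth. For a fixed bound \gamma, the constraints
%   \|A_i X + b_i\|_2 \le \gamma\,(c_i^\top X + d_i)
% are second-order cones, so checking feasibility is an SOCP; bisection on
% \gamma then yields the globally optimal L_infinity solution.
```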

Machine Vision in Early Days: Japan’s Pioneering Contributions

Masakazu Ejiri

The history of machine vision started in the mid-1960s with the efforts of Japanese industry researchers. A variety of prominent vision-based systems were made possible by creating and evolving real-time image processing techniques, and were applied to factory automation, office automation, and even social automation during the 1970-2000 period. In this article, these historical attempts are briefly explained to promote understanding of the pioneering efforts that opened the door to, and formed the basis of, today’s computer vision research.

- Plenary and Invited Talks | Pp. 35-53

Coarse-to-Fine Statistical Shape Model by Bayesian Inference

Ran He; Stan Li; Zhen Lei; ShengCai Liao

In this paper, we take a predefined geometric shape as a constraint for accurate shape alignment. A shape model is divided into two parts: a fixed shape and an active shape. The fixed shape is a user-predefined simple shape with only a few landmarks, which can be easily and accurately located by a machine or a human. The active shape is composed of many landmarks along a complex shape contour. When searching for the active shape, the pose parameters are calculated from the fixed shape. Bayesian inference is introduced to make the whole shape more robust to local noise generated by the active shape, which leads to a compensation factor and a smoothing factor for a coarse-to-fine shape search. This method provides a simple and stable means for online and offline shape analysis. Experiments on cheek and face contours demonstrate the effectiveness of the proposed approach.

- Shape and Texture | Pp. 54-64
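As a rough illustration of the pose step described above, the sketch below fits a similarity transform to a few fixed-shape landmarks with a Procrustes-style least-squares (Umeyama) fit; the landmark coordinates and the choice of a similarity transform are assumptions for illustration, not details from the paper.

```python
import numpy as np

def similarity_pose(model_pts, image_pts):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping model landmarks onto their observed image positions."""
    mu_m, mu_p = model_pts.mean(axis=0), image_pts.mean(axis=0)
    M, P = model_pts - mu_m, image_pts - mu_p
    U, S, Vt = np.linalg.svd(P.T @ M)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # guard against reflections
    R = U @ np.diag([1.0, d]) @ Vt
    s = (S * np.array([1.0, d])).sum() / (M ** 2).sum()
    t = mu_p - s * (R @ mu_m)
    return s, R, t

# Hypothetical fixed-shape landmarks (e.g. eye corners and nose tip) in model
# coordinates and their observed image positions.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.3]])
image = np.array([[10.0, 20.0], [14.0, 22.0], [11.4, 21.6]])
s, R, t = similarity_pose(model, image)
active_landmark = np.array([0.5, 1.0])        # a landmark of the active shape
print(s * (R @ active_landmark) + t)          # its pose-predicted image position
```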

Efficient Texture Representation Using Multi-scale Regions

Horst Wildenauer; Branislav Mičušík; Markus Vincze

This paper introduces an efficient way of representing textures using connected regions which are formed by coherent multi-scale over-segmentations. We show that the recently introduced covariance-based similarity measure, initially applied to rectangular windows, can be used with our newly devised, irregular structure-coherent patches, increasing the discriminative power and consistency of the texture representation. Furthermore, by treating texture in multiple scales, we allow for an implicit encoding of the spatial and statistical texture properties which are persistent across scale. The meaningfulness and efficiency of the covariance-based texture representation are verified using a simple binary segmentation method based on min-cut. Our experiments show that the proposed method, despite the low-dimensional representation in use, is able to effectively discriminate textures and that its performance compares favorably with the state of the art.

- Shape and Texture | Pp. 65-74
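The covariance-based similarity measure referred to above is commonly computed as a geodesic distance between region covariance matrices of per-pixel feature vectors; the sketch below follows that common formulation, with an illustrative feature dimensionality rather than the paper's exact feature set.

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(features):
    """Covariance descriptor of a region: features is an (n_pixels, d) array of
    per-pixel features, e.g. (x, y, intensity, |Ix|, |Iy|)."""
    return np.cov(features, rowvar=False)

def covariance_distance(C1, C2):
    """Geodesic (Foerstner) distance: square root of the sum of squared logs of
    the generalized eigenvalues of the pair (C1, C2)."""
    lam = eigh(C1, C2, eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))

rng = np.random.default_rng(1)
patch_a = rng.standard_normal((500, 5))          # stand-ins for two regions'
patch_b = rng.standard_normal((500, 5)) * 1.5    # per-pixel feature vectors
print(covariance_distance(region_covariance(patch_a), region_covariance(patch_b)))
```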

Comparing Timoshenko Beam to Energy Beam for Fitting Noisy Data

Slobodan Ilić

In this paper we develop a highly flexible Timoshenko beam model for tracking large deformations in noisy data. We demonstrate that by neglecting some physical properties of the Timoshenko beam, the classical energy beam can be derived. A comparison of these two models in terms of their robustness and precision against noisy data is given. We demonstrate that the Timoshenko beam model is more robust and precise for tracking large deformations in the presence of clutter and partial occlusions. Experiments using both synthetic and real image data are performed. In the synthetic images we fit both models to noisy data and use Monte Carlo simulation to analyze their performance. In the real images we track the deformations of a pole-vault pole, rat whiskers, and a car antenna.

- Fitting | Pp. 75-84
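For readers unfamiliar with the two models being compared, the standard static Timoshenko beam equations are sketched below in generic notation (not taken from the paper); letting the shear stiffness grow unbounded recovers the Euler-Bernoulli model, whose bending energy underlies the classical energy-beam formulation.

```latex
% Static Timoshenko beam: deflection w, cross-section rotation phi,
% bending stiffness EI, shear stiffness kappa*A*G, distributed load q.
\begin{align*}
  \frac{d}{dx}\!\left[\kappa A G \left(\frac{dw}{dx} - \varphi\right)\right] + q &= 0,\\
  \frac{d}{dx}\!\left(E I \frac{d\varphi}{dx}\right)
    + \kappa A G \left(\frac{dw}{dx} - \varphi\right) &= 0 .
\end{align*}
% As kappa*A*G -> infinity, the shear term forces phi = dw/dx, which gives the
% Euler-Bernoulli beam and its classical bending energy (1/2) \int EI (w'')^2 dx,
% i.e. the "energy beam" the paper compares against.
```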

A Family of Quadratic Snakes for Road Extraction

Ramesh Marikhu; Matthew N. Dailey; Stanislav Makhanov; Kiyoshi Honda

The geographic information system industry would benefit from flexible automated systems capable of extracting linear structures from satellite imagery. Quadratic snakes allow global interactions between points along a contour, and are well suited to segmentation of linear structures such as roads. However, a single quadratic snake is unable to extract disconnected road networks and enclosed regions. We propose to use a family of cooperating snakes, which are able to split, merge, and disappear as necessary. We also propose a preprocessing method based on oriented filtering, thresholding, Canny edge detection, and Gradient Vector Flow (GVF) energy. We evaluate the performance of the method in terms of precision and recall in comparison to ground truth data. The family of cooperating snakes consistently outperforms a single snake in a variety of road extraction tasks, and our method for obtaining the GVF is more suitable for road extraction tasks than standard methods.

- Fitting | Pp. 85-94
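A minimal sketch of the Gradient Vector Flow component of the preprocessing pipeline, following the standard Xu-Prince iteration; the parameter values and the toy edge map are illustrative, and the paper's oriented-filtering front end is not reproduced here.

```python
import numpy as np

def laplacian(a):
    """Five-point Laplacian with replicated borders."""
    p = np.pad(a, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a

def gradient_vector_flow(edge_map, mu=0.2, iters=500, dt=0.5):
    """Standard Xu-Prince GVF: diffuse the edge-map gradient (fx, fy) into a
    smooth field (u, v) that still points toward edges near them."""
    fy, fx = np.gradient(edge_map)
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        u += dt * (mu * laplacian(u) - mag2 * (u - fx))
        v += dt * (mu * laplacian(v) - mag2 * (v - fy))
    return u, v

# Toy edge map: a single bright line standing in for a road response image.
edge_map = np.zeros((64, 64))
edge_map[32, :] = 1.0
u, v = gradient_vector_flow(edge_map)
print(v[20, 30] > 0)   # above the line, the diffused field points down toward it
```

The diffused field gives the snakes a capture range far larger than the raw image gradient, which is what lets a family of snakes converge onto thin road structures from coarse initializations.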

Multiperspective Distortion Correction Using Collineations

Yuanyuan Ding; Jingyi Yu

We present a new framework for correcting multiperspective distortions using collineations. A collineation describes the transformation between the images of a camera due to changes in sampling and image plane selection. We show that image distortions in many previous models of cameras can be effectively reduced via proper collineations. To correct distortions in a specific multiperspective camera, we develop an interactive system that allows users to select feature rays from the camera and position them at the desirable pixels. Our system then computes the optimal collineation to match the projections of these rays with the corresponding pixels. Experiments demonstrate that our system robustly corrects complex distortions without acquiring the scene geometry, and the resulting images appear nearly undistorted.

- Poster Session 1: Calibration | Pp. 95-105
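Under the simplifying assumption that the collineation acts as a 3x3 projective transform on the image plane, the least-squares match between selected ray projections and their target pixels can be sketched with a standard DLT fit; the correspondences below are hypothetical, and the paper's exact parameterization is not reproduced.

```python
import numpy as np

def fit_projective(src, dst):
    """Direct Linear Transform: 3x3 projective map H with dst ~ H @ src
    (homogeneous), fitted in least squares from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)

# Hypothetical data: projections of user-selected feature rays and the pixels
# the user positioned them at.
ray_projections = np.array([[10, 10], [100, 12], [95, 90], [12, 88], [55, 50]], float)
target_pixels   = np.array([[12, 11], [98, 10], [97, 92], [10, 90], [54, 51]], float)
H = fit_projective(ray_projections, target_pixels)
p = H @ np.array([55.0, 50.0, 1.0])
print(p[:2] / p[2])   # mapped position of the fifth ray's projection
```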

Camera Calibration from Silhouettes Under Incomplete Circular Motion with a Constant Interval Angle

Po-Hao Huang; Shang-Hong Lai

In this paper, we propose an algorithm for camera calibration from silhouettes under circular motion with an unknown constant interval angle. Unlike previous silhouette-based methods based on surfaces of revolution, the proposed algorithm can be applied to sparse and incomplete image sequences. Under the assumption of circular motion with a constant interval angle, the epipoles of successive image pairs remain constant and can be determined from silhouettes. A pair of epipoles formed by a certain interval angle provides a constraint on the angle and the focal length. With more pairs of epipoles recovered, the focal length can be determined as the value that best satisfies these constraints, and the interval angle can be determined concurrently. The rest of the camera parameters can be recovered from image invariants. Finally, the estimated parameters are optimized by minimizing the epipolar tangency constraints. Experimental results on both synthetic and real images demonstrate the performance of the proposed method.

- Poster Session 1: Calibration | Pp. 106-115

Mirror Localization for Catadioptric Imaging System by Observing Parallel Light Pairs

Ryusuke Sagawa; Nobuya Aoki; Yasushi Yagi

This paper describes a method of mirror localization to calibrate a catadioptric imaging system. While the calibration of a catadioptric system includes the estimation of various parameters, we focus on the localization of the mirror. The proposed method estimates the position of the mirror by observing pairs of parallel lights, which are projected from various directions. Although some earlier methods for calibrating catadioptric systems assume that the system is single viewpoint, which is a strong restriction on the position and shape of the mirror, our method does not restrict the position and shape of the mirror. Since the constraint used by the proposed method is that the relative angle of two parallel lights is constant with respect to the rigid transformation of the imaging system, we can omit both the translation and rotation between the camera and calibration objects from the parameters to be estimated. Therefore, the estimation of the mirror position by the proposed method is independent of the extrinsic parameters of a camera. We compute the error between the model of the mirror and the measurements, and then estimate the position of the mirror by minimizing this error. We test our method using both simulation and real experiments, and evaluate the accuracy thereof.

- Poster Session 1: Calibration | Pp. 116-126
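A toy version of the error being minimized, assuming (purely for illustration) a spherical mirror and a perspective camera at the origin: for a hypothesized mirror position, reflect two observed viewing rays off the mirror and compare the angle between the recovered light directions with the known angle of the parallel-light pair. The geometry, mirror shape, and numbers are assumptions, not the paper's setup.

```python
import numpy as np

def reflect_off_sphere(ray_dir, center, radius):
    """Camera at the origin views along unit ray_dir; the ray hits a spherical
    mirror (center, radius) and is reflected. Returns the reflected direction,
    i.e. the direction of the incoming parallel light (up to sign)."""
    d = ray_dir / np.linalg.norm(ray_dir)
    b = d @ center
    disc = b ** 2 - (center @ center - radius ** 2)
    t = b - np.sqrt(disc)                  # nearest intersection of ray and sphere
    p = t * d
    n = (p - center) / radius              # outward surface normal at the hit point
    return d - 2 * (d @ n) * n             # mirror reflection of the viewing ray

def angle_error(center, radius, ray_a, ray_b, known_angle):
    """Squared difference between the known angle separating a pair of parallel
    lights and the angle implied by a hypothesized mirror position."""
    va = reflect_off_sphere(ray_a, center, radius)
    vb = reflect_off_sphere(ray_b, center, radius)
    cos = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))
    return (np.arccos(np.clip(cos, -1.0, 1.0)) - known_angle) ** 2

# Hypothetical setup: two image rays observing a sphere mirror 30 cm in front of the camera.
center = np.array([0.0, 0.0, 0.3])
err = angle_error(center, 0.05, np.array([0.02, 0.0, 1.0]),
                  np.array([-0.02, 0.01, 1.0]), np.deg2rad(20.0))
print(err)   # in practice summed over many light pairs and minimized over the mirror pose
```

Because only relative angles between parallel lights enter the error, the camera-to-calibration-object translation and rotation drop out, which is the independence from extrinsic parameters the abstract emphasizes.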