Publications catalog - books



Computer Vision: ECCV 2002: 7th European Conference on Computer Vision Copenhagen, Denmark, May 28-31, 2002 Proceedings, Part III

Anders Heyden; Gunnar Sparr; Mads Nielsen; Peter Johansen (eds.)

At conference: 7th European Conference on Computer Vision (ECCV). Copenhagen, Denmark. May 28, 2002 - May 31, 2002

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Image Processing and Computer Vision; Computer Graphics; Pattern Recognition; Artificial Intelligence

Availability

Detected institution: not detected
Publication year: 2002
Browse: SpringerLink

Information

Resource type:

books

Printed ISBN

978-3-540-43746-8

Electronic ISBN

978-3-540-47977-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2002

Table of contents

Recovery of Reflectances and Varying Illuminants from Multiple Views

Q.-Tuan Luong; Pascal Fua; Yvan Leclerc

We introduce a new methodology for recovering reflectances and varying illuminants from multiple images. It opens new possibilities because it allows simultaneous recovery of varying unknown illuminants (one per image), surface albedos, and cameras’ radiometric responses. Designed to complement geometric reconstruction techniques, it only requires as input the geometry of the scene and of the cameras. Unlike photometric stereo approaches, it is not restricted to images taken from a single viewpoint. Linear and non-linear implementations in the Lambertian case are proposed; simulation results are discussed and compared to related work to demonstrate the gain in stability; and results on real images are shown.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 163-179
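The Lambertian model named in this abstract can be illustrated numerically. The sketch below is a deliberate simplification, assuming the illuminants are already known and recovering only the per-point albedos by least squares; the paper's contribution is recovering lights, albedos, and camera responses simultaneously:

```python
import numpy as np

# Simplified Lambertian sketch: intensity = albedo * (normal . light).
# Here the lights are assumed known and only the per-point albedos are
# recovered; the paper solves the harder joint problem (lights, albedos,
# and camera responses together).
rng = np.random.default_rng(0)
n_pts, n_imgs = 100, 4
normals = rng.normal(size=(n_pts, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.uniform(0.2, 1.0, size=n_pts)
lights = rng.normal(size=(n_imgs, 3))

shading = normals @ lights.T            # (n_pts, n_imgs): n . l per image
I = albedo[:, None] * shading           # Lambertian intensities, no shadows

# Per-point least squares across images: a_i = sum_k I_ik s_ik / sum_k s_ik^2
est = (I * shading).sum(axis=1) / (shading ** 2).sum(axis=1)
print(np.allclose(est, albedo))         # exact on noiseless data -> True
```

On noiseless synthetic data the closed-form per-point estimate matches the ground-truth albedos to machine precision; the joint problem the paper addresses is bilinear in lights and albedos and needs more machinery.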

Composite Texture Descriptions

Alexey Zalesny; Vittorio Ferrari; Geert Caenen; Dominik Auf der Maur; Luc Van Gool

Textures can often more easily be described as a composition of subtextures than as a single texture. The paper proposes a way to model and synthesize such “composite textures”, where the layout of the different subtextures is itself modeled as a texture, which can be generated automatically. Examples are shown for building materials with an intricate structure and for the automatic creation of landscape textures. First, a model of the composite texture is generated. This procedure comprises manual or unsupervised texture segmentation to learn the spatial layout of the composite texture and the extraction of models for each of the subtextures. Synthesis of a composite texture includes the generation of a layout texture, which is subsequently filled in with the appropriate subtextures. This scheme is refined further by also including interactions between neighboring subtextures.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 180-194

Constructing Illumination Image Basis from Object Motion

Akiko Nakashima; Atsuto Maki; Kazuhiro Fukui

We propose to construct a 3D linear image basis which spans an image space of arbitrary illumination conditions, from images of a moving object observed under a static lighting condition. The key advance is to utilize the object motion which causes illumination variance on the object surface, rather than varying the lighting, and thereby simplifies the environment for acquiring the input images. Since we then need to re-align the pixels of the images so that the same view of the object can be seen, the correspondence between input images must be solved despite the illumination variance. In order to overcome the problem, we adapt the recently introduced geotensity constraint that accurately governs the relationship between four or more images of a moving object. Through experiments we demonstrate that an equivalent 3D image basis is indeed computable and available for recognition or image rendering.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 195-209

Diffuse-Specular Separation and Depth Recovery from Image Sequences

Stephen Lin; Yuanzhen Li; Sing Bing Kang; Xin Tong; Heung-Yeung Shum

Specular reflections present difficulties for many areas of computer vision such as stereo and segmentation. To separate specular and diffuse reflection components, previous approaches generally require accurate segmentation, regionally uniform reflectance or structured lighting. To overcome these limiting assumptions, we propose a method based on color analysis and multibaseline stereo that simultaneously estimates the separation and the true depth of specular reflections. First, pixels with a specular component are detected by a novel form of color histogram differencing that utilizes the epipolar constraint. This process uses relevant data from all the stereo images for robustness, and addresses the problem of color occlusions. Based on the Lambertian model of diffuse reflectance, stereo correspondence is then employed to compute for specular pixels their corresponding diffuse components in other views. The results of color-based detection aid the stereo correspondence, which determines both separation and true depth of specular pixels. Our approach integrates color analysis and multibaseline stereo in a synergistic manner to yield accurate separation and depth, as demonstrated by our results on synthetic and real image sequences.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 210-224

Shape from Texture without Boundaries

D. A. Forsyth

We describe a shape from texture method that constructs a maximum a posteriori estimate of surface coefficients using only the deformation of individual texture elements. Our method does not need to use either the boundary of the observed surface or any assumption about the overall distribution of elements. The method assumes that texture elements are of a limited number of types of fixed shape. We show that, with this assumption and assuming generic view and texture, each texture element yields the surface gradient, unique up to a two-fold ambiguity. Furthermore, texture elements that are not from one of the types can be identified and ignored. An EM-like procedure yields a surface reconstruction from the data. The method is defined for orthographic views; an extension to perspective views appears to be complex, but possible. Examples of reconstructions for synthetic images of surfaces are provided, and compared with ground truth. We also provide examples of reconstructions for images of real scenes. We show that our method for recovering local texture imaging transformations can be used to retexture objects in images of real scenes.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 225-239

Statistical Modeling of Texture Sketch

Ying Nian Wu; Song Chun Zhu; Cheng-en Guo

Recent results on sparse coding and independent component analysis suggest that human vision first represents a visual image by a linear superposition of a relatively small number of localized, elongate, oriented image bases. With this representation, the sketch of an image consists of the locations, orientations, and elongations of the image bases, and the sketch can be visually illustrated by depicting each image base by a linelet of the same length and orientation. Built on the insight of sparse and independent component analysis, we propose a two-level generative model for textures. At the bottom-level, the texture image is represented by a linear superposition of image bases. At the top-level, a Markov model is assumed for the placement of the image bases or the sketch, and the model is characterized by a set of simple geometrical feature statistics.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 240-254
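The bottom level of such a model, an image as a linear superposition of localized, elongated, oriented bases, can be mimicked with a toy construction. The blob shape and the sketch attributes below are my own illustrative choices, not the paper's learned bases:

```python
import numpy as np

# Toy rendering of an image as a superposition of a few localized, elongated,
# oriented bases. The "sketch" is simply the list of (x, y, orientation,
# length) attributes of the bases; this is an illustrative stand-in for the
# image bases learned by sparse/independent component analysis.

def base(shape, cx, cy, theta, length, width=1.5):
    """An elongated oriented Gaussian blob centred at (cx, cy)."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    u = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta)   # along the axis
    v = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)  # across the axis
    return np.exp(-(u / length) ** 2 - (v / width) ** 2)

# The sketch: locations, orientations, and lengths of the bases.
sketch = [(16, 10, 0.0, 6.0), (32, 40, np.pi / 4, 8.0), (50, 20, np.pi / 2, 5.0)]

image = np.zeros((64, 64))
for cx, cy, theta, length in sketch:      # linear superposition of the bases
    image += base(image.shape, cx, cy, theta, length)

print(image.shape, float(image.max()))
```

Each base contributes unit peak amplitude at its centre, so with well-separated linelets the synthesized image peaks near 1; the paper's top-level Markov model would govern where such linelets are placed.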

Classifying Images of Materials: Achieving Viewpoint and Illumination Independence

Manik Varma; Andrew Zisserman

In this paper we present a new approach to material classification under unknown viewpoint and illumination. Our texture model is based on the statistical distribution of clustered filter responses. However, unlike previous 3D texton representations, we use rotationally invariant filters and cluster in an extremely low dimensional space. Having built a texton dictionary, we present a novel method of classifying a single image without requiring any a priori knowledge about the viewing or illumination conditions under which it was photographed. We argue that using rotationally invariant filters while clustering in such a low dimensional space improves classification performance and demonstrate this claim with results on all 61 textures in the Columbia-Utrecht database. We then proceed to show how texture models can be further extended by compensating for viewpoint changes using weak isotropy.

The new clustering and classification methods are compared to those of Leung and Malik (ICCV 1999), Schmid (CVPR 2001) and Cula and Dana (CVPR 2001), which are the current state-of-the-art approaches.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 255-271
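A generic texton pipeline of the kind this abstract describes can be outlined in a few lines. The tiny k-means, the stand-in "filter responses", and the chi-squared nearest-neighbour rule are simplifications assumed here for illustration, not the paper's exact filters or dictionary size:

```python
import numpy as np

# Schematic texton pipeline: cluster low-dimensional invariant filter
# responses into a small dictionary, represent each image as a histogram of
# nearest textons, classify by chi-squared nearest neighbour.
rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def histogram(X, centers):
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    h = np.bincount(labels, minlength=len(centers)).astype(float)
    return h / h.sum()

def chi2(p, q):
    return 0.5 * (((p - q) ** 2) / (p + q + 1e-10)).sum()

# Two synthetic "materials": 3-D stand-ins for rotationally invariant filter
# responses, drawn from different distributions.
mat_a = rng.normal(0.0, 1.0, size=(500, 3))
mat_b = rng.normal(2.0, 1.0, size=(500, 3))

centers = kmeans(np.vstack([mat_a, mat_b]), k=8)   # the texton dictionary
model_a, model_b = histogram(mat_a, centers), histogram(mat_b, centers)

# A novel image of material A should match model A more closely.
h = histogram(rng.normal(0.0, 1.0, size=(300, 3)), centers)
print(chi2(h, model_a) < chi2(h, model_b))
```

The point of the low-dimensional, rotationally invariant response space is that the dictionary and the histograms stay stable under viewpoint and illumination change; the sketch only shows the clustering/histogram/chi-squared mechanics.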

Estimation of Multiple Illuminants from a Single Image of Arbitrary Known Geometry

Yang Wang; Dimitris Samaras

We present a new method for the detection and estimation of multiple illuminants, using one image of any object with known geometry and Lambertian reflectance. Our method obviates the need to modify the imaged scene by inserting calibration objects of any particular geometry, relying instead on partial knowledge of the geometry of the scene. Thus, the recovered multiple illuminants can be used both for image-based rendering and for shape reconstruction. We first develop our method for the case of a sphere with known size, illuminated by a set of directional light sources. In general, each point of such a sphere will be illuminated by a subset of these sources. We propose a novel, robust way to segment the surface into regions, with each region illuminated by a different set of sources. The regions are separated by boundaries consisting of critical points (points where one illuminant is perpendicular to the normal). Our region-based recursive least-squares method is impervious to noise and missing data and significantly outperforms a previous boundary-based method using spheres. This robustness to missing data is crucial to extending the method to surfaces of arbitrary smooth geometry, other than spheres. We map the normals of the arbitrary shape onto a sphere, which we can then segment, even when only a subset of the normals is available in the scene. We demonstrate experimentally the accuracy of our method, both in detecting the number of light sources and in estimating their directions, by testing on images of a variety of synthetic and real objects.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 272-288
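The linear core that such methods build on can be shown in a few lines: on a Lambertian surface with known normals and albedo, the pixels lit by a single directional source constrain the light vector through an ordinary least-squares problem. This sketch handles only one source and omits the paper's region segmentation:

```python
import numpy as np

# Estimating one directional light from known geometry: for lit Lambertian
# pixels, I = albedo * (n . L), which is linear in the light vector L.
rng = np.random.default_rng(2)
normals = rng.normal(size=(300, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = 0.8
true_L = np.array([0.3, -0.5, 1.2])

shading = normals @ true_L
lit = shading > 0                       # keep points actually facing the light
I = albedo * shading[lit]

# Linear least-squares estimate of the light from the lit region.
est_L, *_ = np.linalg.lstsq(albedo * normals[lit], I, rcond=None)
print(np.allclose(est_L, true_L))       # exact on noiseless data -> True
```

With several sources, each surface region sees a different subset of lights, which is why the paper segments the surface at critical points before running such per-region least squares.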

The Effect of Illuminant Rotation on Texture Filters: Lissajous’s Ellipses

M. Chantler; M. Schmidt; M. Petrou; G. McGunnigle

Changes in the angle of illumination incident upon a 3D surface texture can significantly change its appearance. These changes can affect the output of texture features to such an extent that they cause complete misclassification. We present new theory and experimental results that show that changes in illumination tilt angle cause texture clusters to describe Lissajous’s ellipses in feature space. We focus on texture features that may be modelled as a linear filter followed by an energy estimation process, e.g. Laws filters, Gabor filters, ring and wedge filters. This general texture filter model is combined with a linear approximation of Lambert’s cosine law to predict that the outputs of these filters are sinusoidal functions of illuminant tilt. Experimentation with 30 real textures verifies this proposal. Furthermore, we use these results to show that the clusters of distinct textures describe different elliptical paths in feature space as illuminant tilt varies. These results have significant implications for illuminant tilt invariant texture classification.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 289-303
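The geometric claim, that two phase-shifted sinusoidal filter outputs trace an ellipse in feature space as the tilt angle varies, can be checked numerically. The amplitudes and phases below are arbitrary illustrative values; a (numerically) zero residual on the fitted conic confirms the points lie on a single conic:

```python
import numpy as np

# If two filter outputs are equal-frequency sinusoids of illuminant tilt,
# the 2-D feature trajectory is a Lissajous ellipse. Generate the sinusoids
# and verify all points satisfy one conic equation.
tau = np.linspace(0, 2 * np.pi, 100)          # illuminant tilt angle
f1 = 1.0 + 0.6 * np.cos(tau + 0.3)            # filter-1 energy output
f2 = 0.8 + 0.4 * np.cos(tau + 1.5)            # filter-2 energy output

# Fit the conic a x^2 + b xy + c y^2 + d x + e y + f = 0 through the points:
# the coefficient vector is the design matrix's (numerical) null space.
D = np.column_stack([f1**2, f1 * f2, f2**2, f1, f2, np.ones_like(f1)])
conic = np.linalg.svd(D)[2][-1]               # right singular vector, min sigma

residual = np.abs(D @ conic).max()
print(residual < 1e-8)                        # points lie on one conic -> True
```

Two equal-frequency sinusoids with distinct phases always parameterize an ellipse, which is the "Lissajous" structure the abstract exploits for tilt-invariant classification.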

On Affine Invariant Clustering and Automatic Cast Listing in Movies

Andrew Fitzgibbon; Andrew Zisserman

We develop a distance metric for clustering and classification algorithms which is invariant to affine transformations and includes priors on the transformation parameters. Such clustering requirements are generic to a number of problems in computer vision.

We extend existing techniques for affine-invariant clustering, and show that the new distance metric outperforms existing approximations to affine invariant distance computation, particularly under large transformations. In addition, we incorporate prior probabilities on the transformation parameters. This further regularizes the solution, mitigating a rare but serious tendency of the existing solutions to diverge. For the particular special case of corresponding point sets we demonstrate that the affine invariant measure we introduced may be obtained in closed form.

As an application of these ideas we demonstrate that the faces of the principal cast of a feature film can be generated automatically using clustering with appropriate invariance. This is a very demanding test, as it involves detecting and clustering over tens of thousands of images, with variation including changes in viewpoint, lighting, scale and expression.

- Texture Shading and Colour / Grouping and Segmentation / Object Recognition | Pp. 304-320
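For the corresponding-point-set case mentioned above, a simplified version of the closed form can be written down directly: minimising the residual of an affine map over corresponding points is a linear least-squares problem. The formulation below is a plain sketch without the priors on transformation parameters that the paper adds:

```python
import numpy as np

# Affine-invariant dissimilarity for corresponding point sets (simplified):
# d(X, Y) = min over affine (A, t) of ||A x_i + t - y_i||, a linear
# least-squares problem with a closed-form solution.

def affine_distance(X, Y):
    """Residual of the best affine map taking points X onto points Y."""
    Xh = np.column_stack([X, np.ones(len(X))])   # homogeneous coordinates
    params, *_ = np.linalg.lstsq(Xh, Y, rcond=None)
    return float(np.sqrt(((Xh @ params - Y) ** 2).sum()))

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 2))
A = np.array([[1.2, 0.4], [-0.3, 0.9]])
Y_affine = X @ A.T + np.array([2.0, -1.0])       # exact affine image of X
Y_random = rng.normal(size=(20, 2))

print(affine_distance(X, Y_affine) < 1e-9)       # exact affine match -> True
print(affine_distance(X, Y_random) > 0.1)        # unrelated point set -> True
```

Because the measure is zero for any affine image of the same point set, clusters formed under this distance group patterns regardless of viewpoint-induced affine deformation, which is what the cast-listing application relies on.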