Publications catalog - books
3-D Shape Estimation and Image Restoration: Exploiting Defocus and Motion Blur
Paolo Favaro; Stefano Soatto
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Not available.
Availability
Institution detected | Year of publication | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-1-84628-176-1
Electronic ISBN
978-1-84628-688-9
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Copyright information
© Springer-Verlag London Limited 2007
Subject coverage
Table of contents
Introduction
Paolo Favaro; Stefano Soatto
The sense of vision plays an important role in the life of primates, by facilitating interactions with the environment that are crucial for survival tasks. Even relatively “unintelligent” animals can easily navigate through unknown, complex, dynamic environments, avoid obstacles, and recognize prey or predators at a distance. Skilled humans can view a scene and reproduce a model of it that captures its shape (sculpture) and appearance (painting) rather accurately.
Pp. 1-13
Basic models of image formation
Paolo Favaro; Stefano Soatto
The purpose of this chapter is to derive simple mathematical models of the image formation process. The reader who is not interested in the derivation can skip the details and go directly to equation (2.9). This is all the reader needs in order to proceed to Chapters 3 through 5. The reader may want to consult briefly Section 2.4 before Chapters 6 through 8.
Pp. 14-36
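The model this chapter builds toward (equation (2.9): a measured image as the scene radiance integrated against a kernel determined by the imaging geometry) can be sketched in one dimension. Everything below — the function names, the Gaussian kernel, and the particular blur-width formula — is an illustrative assumption, not the book's derivation:

```python
import numpy as np

def psf_width(depth, focal_length, focus_dist, aperture):
    # Hypothetical thin-lens-style blur width: it grows with the
    # deviation of 1/depth from 1/focus_dist (scaling is arbitrary).
    return aperture * focal_length * abs(1.0 / depth - 1.0 / focus_dist)

def defocused_image(radiance, depth, focal_length=0.035,
                    focus_dist=1.0, aperture=0.01):
    """Blur a 1-D 'radiance' signal with a Gaussian kernel whose width
    depends on a (constant) scene depth -- a toy stand-in for eq. (2.9)."""
    sigma = max(psf_width(depth, focal_length, focus_dist, aperture) * 1e3,
                1e-6)  # 1e3 is an arbitrary pixels-per-meter scale
    x = np.arange(-10, 11, dtype=float)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(radiance, kernel, mode="same")
```

When `depth` equals `focus_dist` the kernel degenerates to a delta and the image reproduces the radiance; away from focus, a step edge is visibly smeared.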
Some analysis: When can 3-D shape be reconstructed from blurred images?
Paolo Favaro; Stefano Soatto
In the previous chapter we have seen how a scene with a certain shape, represented by a function, and a certain radiance can generate an image under a certain focus setting, summarized in equation (2.4). Our main concern from now on is to use this equation to try to infer shape and radiance given a number of images taken with different settings. Before we venture into the design of algorithms to infer shape and radiance from blurred images, however, we need to ask ourselves whether this problem can be solved at all. It is easy to concoct scenes that cannot be reconstructed based solely on defocus information. Take, for instance, a white cube illuminated uniformly on a white background. We will never be able to tell it apart from a white sphere on a white background, no matter how many defocused images we take. So, clearly, whether we can reconstruct 3-D shape depends on the radiance of the scene, which we do not know. Under what conditions, then, can we reconstruct shape and radiance? In this chapter we address this question and its many facets. We organize our analysis around the role of radiance. We first assume that it can be chosen purposefully (Sections 3.2 and 3.3), and later relax it to be arbitrary (Section 3.4). Along the way, we point out some issues concerning the hypothesis on the radiance in the design of algorithms for reconstructing shape from defocus (Section 3.3.1).
Pp. 37-49
Least-squares shape from defocus
Paolo Favaro; Stefano Soatto
The problem of inferring the 3-D shape and radiance of a scene from blurred images can be posed as the minimization of a cost functional, as we have seen in Section 3.6. There we have shown that the cost functional derived from the assumption that the noise is additive and Gaussian yields a least-squares formulation of the problem of shape from defocus. This assumption is not necessarily the most realistic, but it yields a particularly simple solution that separates the estimate of shape (shape from defocus) from that of radiance (image restoration). In this chapter we explore this approach; although based on questionable assumptions, it results in particularly simple, intuitive, and instructive algorithms. The reader should revisit Section 2.1.4 where we introduce the operator notation that we use extensively in this chapter.
Pp. 50-68
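As a toy illustration of the least-squares formulation (not the book's algorithm, which also estimates the unknown radiance), one can brute-force the depth that minimizes the squared residual between observed images and images predicted from a known radiance, under an assumed Gaussian blur model; all names and the blur laws below are hypothetical:

```python
import numpy as np

def blur(signal, sigma):
    # Gaussian blur; sigma near zero degenerates to (near) identity.
    x = np.arange(-15, 16, dtype=float)
    k = np.exp(-x**2 / (2 * max(sigma, 1e-6)**2))
    return np.convolve(signal, k / k.sum(), mode="same")

def least_squares_depth(images, sigma_models, radiance, candidates):
    # Data term: sum_i || I_i - blur(radiance, sigma_i(depth)) ||^2,
    # minimized by exhaustive search over candidate depths.
    def cost(d):
        return sum(np.sum((I - blur(radiance, s(d)))**2)
                   for I, s in zip(images, sigma_models))
    return min(candidates, key=cost)
```

With two synthetic focus settings whose assumed blur laws are sigma_1(d) = |d - 1| and sigma_2(d) = |d - 3|, the true depth is recovered because the cost vanishes there and nowhere else on a generic signal.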
Enforcing positivity: Shape from defocus and image restoration by minimizing I-divergence
Paolo Favaro; Stefano Soatto
The solution to the problem of recovering shape from defocus described in the previous chapter is based on the assumption that the discrepancy between the ideal image and the measured one is additive Gaussian noise. This assumption is clearly not germane, because it admits the possibility that the measured image is negative. In fact, given a large enough variance of the noise, even if the ideal image is positive, one cannot guarantee that the measured image (the sum of the ideal image and the noise) is positive as well. The Gaussian assumption is desirable because it yields a least-squares solution that is particularly simple, by allowing the separation of the problem of shape from defocus from that of image restoration.
Pp. 69-86
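A standard positivity-preserving scheme in this spirit (offered here as a related illustration, not the book's own algorithm) is the Richardson–Lucy iteration: its multiplicative updates minimize Csiszár's I-divergence between the observed image and the blurred estimate, and every iterate stays positive by construction:

```python
import numpy as np

def richardson_lucy(observed, kernel, n_iter=50):
    """Multiplicative I-divergence-minimizing deblurring iteration.
    Positivity of the estimate is preserved because each update
    multiplies by a ratio of positive quantities."""
    kernel = kernel / kernel.sum()
    k_flip = kernel[::-1]  # adjoint of convolution is correlation
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        pred = np.convolve(est, kernel, mode="same")
        ratio = observed / np.maximum(pred, 1e-12)
        est = est * np.convolve(ratio, k_flip, mode="same")
    return est
```

On a synthetic positive signal blurred by a known Gaussian kernel, the iteration sharpens the blurred observation back toward the original while never producing a negative value — exactly the property the least-squares formulation cannot guarantee.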
Defocus via diffusion: Modeling and reconstruction
Paolo Favaro; Stefano Soatto
In Chapter 2 we have described the formation of an image through a thin lens as the integral of the radiance of the scene, which can be thought of as the “perfect” (or sharp, or deblurred) image, against a kernel that depends on the geometry of the imaging device (equation (2.9)). In other words, a measured image is just a blurred version of the radiance, or energy distribution, of the scene. There, we thought of the radiance of the scene as describing the heat distribution on a metal plate, with the temperature representing the intensity at a given point, and heat diffusion on the plate blurring the original heat distribution (Figure 2.6). This process can be simulated by solving the heat equation, a particular kind of partial differential equation (PDE), with “time” being an index that describes the amount of blur.
Pp. 87-105
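The analogy can be simulated directly. A minimal sketch, assuming a 1-D periodic grid and explicit Euler integration of the heat equation u_t = c·u_xx: the longer the simulated "time" runs, the more the signal is blurred, mirroring how diffusion time indexes the amount of defocus:

```python
import numpy as np

def diffuse(signal, steps, c=0.2):
    # Explicit Euler steps of u_t = c * u_xx on a periodic grid
    # (np.roll wraps around). Stable for c <= 0.5; each step blurs
    # the signal a little more.
    u = signal.astype(float).copy()
    for _ in range(steps):
        laplacian = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        u += c * laplacian
    return u
```

Two properties worth noting match the physical intuition: diffusion conserves total intensity (heat), while flattening peaks, just as defocus spreads, but does not create, image energy.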
Dealing with motion: Unifying defocus and motion blur
Paolo Favaro; Stefano Soatto
In previous chapters we have assumed that images are obtained instantaneously by computing an integral of the type (2.4). In practice, physical sensors count photons, and counting is performed over a temporal window, which is called the shutter interval, because it usually corresponds to the time while the shutter of the lens remains open. If the scene is static and the camera still, this has no ill effect on the image. However, if the camera moves during this interval, or if the scene moves relative to it, then we can experience motion blur, as we have anticipated in Section 2.5.
Pp. 106-119
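A discrete sketch of this temporal integration (the function name and the constant fronto-parallel velocity are illustrative assumptions): averaging shifted copies of the scene over the shutter interval produces the characteristic smear.

```python
import numpy as np

def motion_blurred(radiance, velocity_px, exposure=1.0, n_samples=32):
    # Photon counting over the shutter interval, approximated by
    # averaging n_samples copies of the scene shifted according to a
    # constant fronto-parallel velocity (pixels per unit time).
    acc = np.zeros_like(radiance, dtype=float)
    for t in np.linspace(0.0, exposure, n_samples):
        acc += np.roll(radiance, int(round(velocity_px * t)))
    return acc / n_samples
```

With zero velocity the image is unchanged; with nonzero velocity a point source is smeared along the motion direction while its total intensity is preserved — the structure that later chapters exploit to recover motion together with shape.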
Dealing with multiple moving objects
Paolo Favaro; Stefano Soatto
In the previous chapter we have seen how one can exploit motion blur, an artifact of the relative motion between the scene and the camera during the shutter interval, to recover the 3-D structure of the scene along with its motion and the (motion-deblurred) radiance. There we have assumed that there is only one object moving. Either the scene is static and the camera is moving relative to it, or the camera is still and the scene is moving as a single rigid object. In fact, we have further imposed the restriction that motion is just fronto-parallel; that is, the direction of translation is parallel to the image plane.
Pp. 120-146
Dealing with occlusions
Paolo Favaro; Stefano Soatto
So far we have considered scenes where there is no occlusion of line of sight, so that the entire scene is visible from the image, and its shape can therefore be represented by the graph of a function with domain on the image plane. Most often, however, real scenes exhibit complex surfaces that occlude one another. For instance, a pole in front of a wall occludes part of it, and the scene (pole plus wall) cannot be represented by the graph of a function. At first this seems to be a technicality. Because we cannot see beyond the occluding object, we might as well assume that it is connected to the occluder at the occluding boundary, so it can indeed be represented by the graph of a function, albeit not a continuous one. Right? Wrong. This reasoning would be correct if we had a pinhole imaging model, but for a finite-aperture camera, one can actually see portions of the image beyond an occlusion.
Pp. 147-158
Final remarks
Paolo Favaro; Stefano Soatto
Images are only an infinitesimal snapshot, in space and time, of the physical world. And yet they are a rich source of information that humans and other animals exploit to interact with it. The sophisticated variable geometry of the lens in the human eye is known to play a role in the inference of spatial ordering, proximity, and other three-dimensional cues. In engineering systems, accommodation artifacts such as defocus and motion blur are often seen as nuisances, and their effects minimized by means of expensive optics or image capture hardware. We hope that this book will at least encourage the engineer to look at defocus and motion blur as friends, not foes.
Pp. 159-160