Publications catalog - books

3D Imaging for Safety and Security

Andreas Koschan; Marc Pollefeys; Mongi Abidi (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-1-4020-6181-3

Electronic ISBN

978-1-4020-6182-0

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer 2007

Table of contents

3D Assisted Face Recognition: A Survey

M. Hamouz; J. R. Tena; J. Kittler; A. Hilton; J. Illingworth

3D face recognition has lately been attracting ever-increasing attention. In this chapter we review the full spectrum of 3D face processing technology, from sensing to recognition. The review covers 3D face modelling, 3D-to-3D and 3D-to-2D registration, 3D-based recognition and 3D-assisted 2D-based recognition. The fusion of the 2D and 3D modalities is also addressed. The chapter complements other reviews in the face biometrics area by focusing on sensor technology and by detailing the efforts in 3D face modelling and 3D-assisted 2D face matching. A detailed evaluation of a typical state-of-the-art 3D face registration algorithm is discussed and conclusions are drawn.

Keywords: 3D sensors; 3D face models; 3D face registration; 3D face recognition.

Part I - Biometrics | Pp. 3-23

A Survey on 3D Modeling of Human Faces for Face Recognition

S. Huq; B. Abidi; S. G. Kong; M. Abidi

In its quest for more reliability and higher recognition rates, the face recognition community has been focusing more and more on 3D-based recognition. Depth information adds another dimension to facial features and provides ways to minimize the effects of pose and illumination variations, thereby achieving greater recognition accuracy. This chapter therefore reviews the major techniques for 3D face modeling, the first step in any 3D-assisted face recognition system. The reviewed techniques are laser range scanning, 3D from structured light projection, stereo vision, morphing, shape from motion, shape from space carving, and shape from shading. The concepts, accuracy, feasibility, and limitations of these techniques, and their effectiveness for 3D face recognition, are discussed.
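
Of the reviewed techniques, stereo vision is the easiest to illustrate in isolation. Below is a minimal sketch of depth recovery from a rectified stereo pair, assuming a known focal length (in pixels) and baseline; the function name and parameters are illustrative and do not come from the chapter.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Triangulate depth from a rectified stereo disparity map.

    disparity  : 2D array of pixel disparities (d = x_left - x_right)
    focal_px   : focal length in pixels (identical cameras assumed)
    baseline_m : distance between the two camera centres in metres
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0                      # zero disparity corresponds to a point at infinity
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a 4-pixel disparity with f = 800 px and B = 0.1 m gives Z = 20 m.
print(depth_from_disparity(np.array([[4.0]]), focal_px=800, baseline_m=0.1))
```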

Keywords: 3D face reconstruction; face recognition; laser range scanner; structured light; stereo vision; morphing; shape from shading; shape from motion.

Part I - Biometrics | Pp. 25-67

Automatic 3D Face Registration Without Initialization

A. Koschan; V. R. Ayyagari; F. Boughorbel; M. A. Abidi

Recently, 3D face reconstruction and recognition have gained an important role in computer vision and biometrics research. The depth information of a 3D face can help resolve the uncertainties introduced by illumination and pose variation in face recognition. The registration of data that is usually acquired from different views is a fundamental element of any reconstruction process. This chapter focuses on the problem of automatic registration of 3D face point sets through a criterion based on Gaussian fields. The method defines a straightforward energy function that is always differentiable and convex in a large neighborhood of the alignment parameters, allowing for the use of powerful standard optimization techniques. The introduced technique overcomes the need for close initialization, which is a requirement when applying the Iterative Closest Point algorithm. Moreover, the use of the Fast Gauss Transform reduces the computational complexity of the registration algorithm.
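
As a concrete illustration of the criterion described above, the following sketch evaluates a Gaussian-fields energy for a candidate rigid transform between two 3D point sets. The brute-force double sum, the choice of sigma, and the toy data are assumptions made for illustration; the chapter accelerates the sum with the Fast Gauss Transform, and in practice the energy would be handed to a standard optimizer.

```python
import numpy as np

def gaussian_field_energy(moving, fixed, rotation, translation, sigma):
    """Gaussian-fields alignment criterion between two 3D point sets.

    Sums a Gaussian of the pairwise distances between the transformed
    'moving' points and the 'fixed' points; the result is smooth in the
    transform parameters and is maximised at alignment.  The direct
    double sum is O(N*M).
    """
    transformed = moving @ rotation.T + translation         # apply the rigid transform
    diff = transformed[:, None, :] - fixed[None, :, :]      # (N, M, 3) pairwise differences
    sq_dist = np.sum(diff ** 2, axis=-1)
    return np.exp(-sq_dist / sigma ** 2).sum()

# Toy check: an aligned copy of a cloud scores higher than a shifted one.
pts = np.random.rand(50, 3)
identity, zero = np.eye(3), np.zeros(3)
print(gaussian_field_energy(pts, pts, identity, zero, sigma=0.1) >
      gaussian_field_energy(pts, pts, identity, np.array([0.5, 0.0, 0.0]), sigma=0.1))
```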

Keywords: 3D registration; range; Gaussian fields; 3D face reconstruction.

Part I - Biometrics | Pp. 69-93

A Genetic Algorithm Based Approach for 3D Face Recognition

Y. Sun; L. Yin

The ability to distinguish different people by using 3D facial information is an active research problem in the face recognition community. In this paper, we propose to use a generic model to label 3D facial features. This approach relies on our realistic face modeling technique, by which an individual face model is created from a generic model and two views of a face. In the individualized model, we label face features by their principal curvatures. Among the labeled features, “good features” are selected using a Genetic Algorithm based approach. The feature space is then formed from these new 3D shape descriptors, and each individual face is classified according to its feature space correlation. We used 105 individual models in the experiment. The experimental results show that the shape information obtained from the 3D individualized model can be used to classify and identify individual facial surfaces; the rank-4 recognition rate is 92%. The 3D individualized model provides consistent and sufficient detail to represent individual faces while using a much simpler representation than range data models. To verify the accuracy and robustness of the selected feature spaces, a similar procedure was applied to range data obtained from a 3D scanner. Using a subset of the optimal feature space derived from the Genetic Algorithm, we achieved an 87% rank-4 recognition rate. This shows that our approach provides a possible way to reduce the complexity of 3D data processing and is applicable to systems using different sources of 3D data.
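
The “good feature” selection step lends itself to a small sketch: a genetic algorithm searching over binary feature masks, with the recognition rate on a validation set plugged in as the fitness function. Everything below (population size, operators, the toy fitness) is an illustrative assumption, not the chapter's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_features_ga(fitness, n_features, pop_size=30, generations=50, p_mut=0.02):
    """Search for a binary feature mask that maximises a black-box fitness.

    fitness(mask) would typically return something like a rank-k recognition
    rate measured on a validation split.
    """
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]        # truncation selection
        cuts = rng.integers(1, n_features, size=pop_size // 2)
        children = np.array([np.concatenate((parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]))
                             for i, c in enumerate(cuts)])              # one-point crossover
        children ^= (rng.random(children.shape) < p_mut)                # bit-flip mutation
        pop = np.vstack((parents, children))
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

# Toy fitness: reward masks that keep the first ten features and drop the rest.
best_mask = select_features_ga(lambda m: m[:10].sum() - m[10:].sum(), n_features=20)
print(best_mask)
```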

Keywords: genetic algorithm; 3D face recognition; face recognition; feature selection; generic model; geometric modeling and labeling.

Part I - Biometrics | Pp. 95-118

Story of Cinderella

Alexander M. Bronstein; Michael M. Bronstein; Ron Kimmel

In this chapter, we address the question of which facial measures one could use to distinguish between people. Our starting point is the fact that the expressions of our face can, in most cases, be modeled as isometries, which we validate empirically. Then, based on this observation, we introduce a technique that enables us to distinguish between people based on the intrinsic geometry of their faces. We provide empirical evidence that the proposed geometric measures are invariant to facial expressions and relate our findings to the broad context of biometric methods, ranging from modern face recognition technologies to fairy tales and biblical stories.
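
The isometry-invariant comparison rests on embedding intrinsic (geodesic) distances with multidimensional scaling. The sketch below is plain classical MDS from a precomputed distance matrix, offered as a generic building block under that assumption rather than the authors' exact pipeline; geodesic distance computation on the facial surface is assumed to happen elsewhere.

```python
import numpy as np

def classical_mds(dist_matrix, dim=3):
    """Embed a matrix of pairwise (e.g. geodesic) distances into R^dim.

    Comparing the resulting embeddings of two facial surfaces gives a
    measure that is insensitive to isometric deformations such as
    expressions, since only the intrinsic distances enter the computation.
    """
    d = np.asarray(dist_matrix, dtype=float)
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n              # centring matrix
    b = -0.5 * j @ (d ** 2) @ j                      # double-centred squared distances
    eigval, eigvec = np.linalg.eigh(b)
    idx = np.argsort(eigval)[::-1][:dim]             # keep the largest eigenvalues
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

# Toy check: three points on a line are recovered up to a rigid motion.
pts = np.array([[0.0], [1.0], [3.0]])
print(classical_mds(np.abs(pts - pts.T), dim=1))
```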

Keywords: biometrics; isometry; face recognition; facial expression; multidimensional scaling; intrinsic geometry.

Part I - Biometrics | Pp. 119-131

Human Ear Detection From 3D Side Face Range Images

H. Chen; B. Bhanu

The ear is a relatively new class of stable biometric that is not affected by facial expressions, cosmetics, or eyeglasses. To use ear biometrics for human identification, ear detection is the first stage of an ear recognition system. In this chapter we propose two approaches for locating human ears in side face range images: (a) template matching based ear detection and (b) ear shape model based detection. For the first approach, the model template is represented by an averaged histogram of shape index, which can be computed from the principal curvatures. Ear detection is then a four-step process: step edge detection and thresholding, image dilation, connected-component labeling, and template matching. For the second approach, the ear shape model is represented by a set of discrete 3D vertices corresponding to the ear helix and anti-helix parts. Given a side face range image, step edges are extracted and the edge segments are dilated, thinned, and grouped into clusters that are potential regions containing an ear. For each cluster, we register the ear shape model with the edges; the region with the minimum mean registration error is declared the detected ear region, and during this process the ear helix and anti-helix parts are identified. Experiments are performed on a large number of real side face range images to demonstrate the effectiveness of the proposed approaches.
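
For the first approach, the shape index is the key quantity. Below is a minimal sketch using one common definition of the shape index from the principal curvatures and a simple histogram distance; the bin count, the distance measure, and the handling of umbilic points are illustrative choices rather than the chapter's.

```python
import numpy as np

def shape_index(k1, k2, eps=1e-12):
    """Shape index from principal curvatures (k1 >= k2), mapped to [0, 1].

    Strictly undefined where k1 == k2; the eps guard maps such points to
    the middle of the scale instead of raising an error.
    """
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2 + eps)

def histogram_distance(window_k1, window_k2, template_hist, bins=50):
    """Chi-square-style distance between a window's shape-index histogram
    and a precomputed (ear) template histogram; smaller means more ear-like."""
    s = shape_index(np.asarray(window_k1), np.asarray(window_k2)).ravel()
    h, _ = np.histogram(s, bins=bins, range=(0.0, 1.0), density=True)
    return float(np.sum((h - template_hist) ** 2 / (h + template_hist + 1e-12)))
```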

Keywords: ear biometrics; ear detection; range images; shape model; shape index.

Part I - Biometrics | Pp. 133-155

Synthetic Aperture Focusing Using Dense Camera Arrays

V. Vaish; G. Garg; E.-V. Talvala; E. Antunez; B. Wilburn; M. Horowitz; M. Levoy

Synthetic aperture focusing consists of warping and adding together the images in a 4D light field so that objects lying on a specified surface are aligned and thus in focus, while objects lying off this surface are misaligned and hence blurred. This provides the ability to see through partial occluders such as foliage and crowds, making it a potentially powerful tool for surveillance. In this paper, we describe the image warps required for focusing on any given focal plane, for cameras in general position, without having to perform a complete metric calibration. We show that when the cameras lie on a plane, it is possible to vary the focus through families of frontoparallel and tilted focal planes by shifting the images after an initial rectification. Being able to vary the focus by simply shifting and adding images is relatively easy to implement in hardware and facilitates a real-time implementation. We demonstrate this using an array of 30 video-resolution cameras; initial homographies and shifts are performed on per-camera FPGAs, and additions and a final warp are performed on 3 PCs.
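
The shift-and-add idea for frontoparallel focal planes is simple enough to sketch directly. The version below assumes the images have already been rectified to a reference camera, uses integer-pixel shifts, and lets np.roll wrap around at the borders; a real implementation would interpolate and crop, and none of the names below are taken from the paper.

```python
import numpy as np

def synthetic_aperture_focus(images, camera_offsets, focal_depth, focal_px):
    """Shift-and-add refocusing for a planar camera array (frontoparallel plane).

    images         : list of HxW arrays, rectified to a reference camera
    camera_offsets : (N, 2) camera positions relative to the reference, in metres
    focal_depth    : depth of the desired frontoparallel focal plane, in metres
    focal_px       : focal length in pixels

    Each image is shifted by the parallax its camera sees for points on the
    focal plane, then all images are averaged: on-plane points align and stay
    sharp, off-plane points (e.g. partial occluders) blur away.
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, camera_offsets):
        shift_x = int(round(focal_px * dx / focal_depth))
        shift_y = int(round(focal_px * dy / focal_depth))
        acc += np.roll(img, (shift_y, shift_x), axis=(0, 1))   # wraps at borders
    return acc / len(images)
```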

Keywords: light fields; synthetic aperture; projective geometry; real-time system.

Part II - Safety and Security Applications | Pp. 159-172

Dynamic Pushbroom Stereo Vision

Z. Zhu; G. Wolberg; J.R. Layne

We present a dynamic pushbroom stereo geometry model for both 3D reconstruction and moving target extraction in applications such as aerial surveillance and cargo inspection. In a dynamic pushbroom camera model, a “line scan camera” scans across the scene. Both the scanning sensor and the objects in the scene are moving, and thus the image generated is a “moving picture” with one axis being space and the other being time. We study the geometry under a linear motion model for both the sensor and the object, and we investigate the advantages of using two such scanning systems to construct a dynamic pushbroom stereo vision system for 3D reconstruction and moving target extraction. Two real examples are given using the proposed models. In the first application, a fast and practical calibration procedure and an interactive 3D estimation method are provided for 3D cargo inspection with dual gamma-ray (or X-ray) scanning systems. In the second application, dynamic pushbroom stereo mosaics are generated by using a single camera mounted on an airplane, and a unified segmentation-based stereo matching algorithm is proposed to extract both 3D structures and moving targets from urban scenes. Experimental results are given.

Keywords: video mosaicing; motion analysis; stereo vision; 3D reconstruction; moving target extraction; pushbroom stereo.

Part II - Safety and Security Applications | Pp. 173-199

3D Modeling of Indoor Environments

P. Biber; S. Fleck; T. Duckett; M. Wand

Autonomous mobile robots will play a major role in future security and surveillance tasks for large-scale environments such as shopping malls, airports, hospitals, and museums. Robotic security guards will autonomously survey such environments, unless a remote human operator takes over control. In this context a 3D model can convey much more useful information than the typical 2D maps used in many robotic applications today, both for visualizing information and as a human-machine interface for remote control. This paper addresses the challenge of building such a model of a large environment (50 m x 60 m) using data from the robot's own sensors: a 2D laser scanner and a panoramic camera. The data are processed in a pipeline that comprises automatic, semi-automatic, and manual stages. The user can interact with the reconstruction process where necessary to ensure the robustness and completeness of the model. A hybrid representation, tailored to the application, has been chosen: floors and walls are represented efficiently by textured planes, while non-planar structures like stairs and tables, which are represented by point clouds, can be added if desired. Our methods to extract these structures include simultaneous localization and mapping in 2D and wall extraction based on laser scanner range data, building textures from multiple omnidirectional images using multiresolution blending, and calculation of 3D geometry by a graph cut stereo technique. Various renderings illustrate the usability of the model for visualizing the security guard's position and environment.
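
Of the steps listed, wall extraction from 2D laser range data is the most compact to sketch. The fragment below fits a single wall line to a cluster of scan points by total least squares; it is a generic building block under the assumption that scan points have already been grouped per wall, not the paper's exact extraction procedure.

```python
import numpy as np

def fit_wall_line(points):
    """Fit a line to 2D laser points by total least squares (via SVD).

    points : (N, 2) scan points believed to belong to one wall.
    Returns (centroid, direction, rms_error); a small RMS error indicates the
    points are consistent with a flat wall seen as a line in the 2D scan.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centred = pts - centroid
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]                               # dominant direction = wall direction
    residuals = centred @ vt[1]                     # signed distances to the fitted line
    return centroid, direction, float(np.sqrt(np.mean(residuals ** 2)))

# Toy usage: noisy points along y = 0.5 x give a small RMS error.
xs = np.linspace(0.0, 5.0, 100)
pts = np.column_stack((xs, 0.5 * xs + 0.01 * np.random.randn(100)))
print(fit_wall_line(pts)[2])
```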

Keywords: 3D modeling; robotic security guard; simultaneous localization and mapping; graph cut stereo.

Part II - Safety and Security Applications | Pp. 201-223

3D Site Modelling and Verification

V. Sequeira; G. Boström; J. G. M. Gonçalves

In nuclear security it is important to detect changes made in a given installation or to track the progress of construction work at a new plant. This chapter describes a system that accepts multi-sensory, variable-scale data as input. Scalability allows for different acquisition systems and algorithms according to the size of the objects, buildings, or sites to be modeled. Semi-automated tools support the comparison between the acquired “as-built” model and the approved design model. The chapter presents examples of use in indoor and outdoor environments. It shows that it is possible at any time to redo a data acquisition of the same area without any assumptions about the exact position, the orientation, or the scanner used. Based on this new information it is possible to detect millimeter- to decimeter-scale changes in the scene.
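
A nearest-neighbour comparison between a new acquisition and a reference model is the simplest baseline for the kind of change detection described above. The sketch below assumes both point clouds are already registered in a common frame and that the reference model is available as a point cloud; it is a generic baseline, not the chapter's specific method.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_changes(as_built, reference, threshold_m=0.01):
    """Flag 'as-built' points lying farther than threshold_m from the reference.

    as_built  : (N, 3) points from the new acquisition
    reference : (M, 3) points sampled from the approved design or an earlier scan
    Returns a boolean mask over the as-built points.
    """
    dists, _ = cKDTree(reference).query(as_built)
    return dists > threshold_m

# Toy usage: one point displaced by 5 cm is flagged at a 1 cm threshold.
ref = np.random.rand(1000, 3)
new = ref.copy()
new[0] += 0.05
print(detect_changes(new, ref).sum())    # expected: 1
```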

Keywords: 3D reconstruction; change analysis; data fusion.

Part II - Safety and Security Applications | Pp. 225-247