Publications catalog - books

Advanced Concepts for Intelligent Vision Systems: 9th International Conference, ACIVS 2007, Delft, The Netherlands, August 28-31, 2007. Proceedings

Jacques Blanc-Talon; Wilfried Philips; Dan Popescu; Paul Scheunders (eds.)

Conference: 9th International Conference on Advanced Concepts for Intelligent Vision Systems (ACIVS). Delft, The Netherlands. August 28-31, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Image Processing and Computer Vision; Pattern Recognition; Computer Graphics; Artificial Intelligence (incl. Robotics)

Availability
Detected institution: not detected. Year of publication: 2007. Browse: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-74606-5

Electronic ISBN

978-3-540-74607-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

A Framework for Scalable Vision-Only Navigation

Siniša Šegvić; Anthony Remazeilles; Albert Diosi; François Chaumette

This paper presents a monocular vision framework enabling feature-oriented, appearance-based navigation in large outdoor environments containing other moving objects. The framework is based on a hybrid topological-geometrical environment representation, constructed from a learning sequence acquired during robot motion under human control. The framework achieves the desired navigation functionality without requiring global geometrical consistency of the underlying environment representation. The main advantages over conventional alternatives are unlimited scalability, real-time mapping and effortless handling of interconnected environments once loops have been properly detected. The framework has been validated in demanding, cluttered and interconnected environments, under different imaging conditions. The experiments have been performed on many long sequences acquired from moving cars, as well as in real-time large-scale navigation trials relying exclusively on a single perspective camera. The obtained results imply that a globally consistent geometric environment model is not mandatory for successful vision-based outdoor navigation.

- Computer Vision | Pp. 1-12

Visual Tracking by Hypothesis Testing

Valentin Enescu; Ilse Ravyse; Hichem Sahli

A new approach for tracking a non-rigid target is presented. Tracking is formulated as a Maximum A Posteriori (MAP) segmentation problem where each pixel is assigned a binary label indicating whether it belongs to the target or not. The label field is modeled as a Markov Random Field whose Gibbs energy comprises three terms. The first term quantifies the error in matching the object model with the object’s appearance as given by the current segmentation. Coping with the deformations of the target while avoiding optical flow computation is achieved by marginalizing this likelihood over all possible motions per pixel. The second term penalizes the lack of continuity in the labels of neighboring pixels, thereby encouraging the formation of a smoothly shaped object mask, without holes. Finally, for the sake of increasing robustness, the third term constrains the object mask to assume an elliptic shape model with unknown parameters. MAP optimization is performed iteratively, alternating between estimating the shape parameters and recomputing the segmentation using updated parameters. The latter is accomplished by discriminating each pixel via a simple hypothesis test. We demonstrate the efficiency of our approach on synthetic and real video sequences.

- Computer Vision | Pp. 13-24
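
A rough reading of the three-term energy described above, written out as a display formula; the weights λ and γ, the neighborhood system N and the ellipse parameters θ are notational assumptions, not the paper's exact formulation.

```latex
E(L) \;=\; \underbrace{\sum_{i} D_i(\ell_i)}_{\substack{\text{appearance match,}\\ \text{marginalized over per-pixel motion}}}
\;+\; \lambda \underbrace{\sum_{(i,j)\in\mathcal{N}} [\ell_i \neq \ell_j]}_{\text{label smoothness}}
\;+\; \gamma \underbrace{\sum_{i} S_i(\ell_i;\theta)}_{\text{elliptic shape prior}},
\qquad \ell_i \in \{0,1\}.
```

MAP inference then alternates between re-estimating θ from the current mask and recomputing the labels with θ fixed, as the abstract describes.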

A New Approach to the Automatic Planning of Inspection of 3D Industrial Parts

J. M. Sebastián; D. García; A. Traslosheros; F. M. Sánchez; S. Domínguez; L. Pari

This article describes a novel planning algorithm for the automatic, high-precision dimensional inspection of three-dimensional features of manufactured parts. The method is general with respect to the complexity of the part and the points at which measurements must be taken, so the range of application of the system is not limited. To this end, the analysis discretizes the configuration space of the part positioning system and the surface of the part itself. All the techniques presented here have been tested and validated on a real inspection system based on stereoscopic cameras equipped with a laser light source.

- Computer Vision | Pp. 25-36

Low Latency 2D Position Estimation with a Line Scan Camera for Visual Servoing

Peter Briër; Maarten Steinbuch; Pieter Jonker

This paper describes the implementation of a visual position estimation algorithm, using a line-scan sensor positioned at an angle over a 2D repetitive pattern. An FFT is used with direct interpretation of the phase information at the fundamental frequencies of the pattern. The algorithm is implemented in an FPGA. The goal is to provide fast position estimation on visual data, to be used as feedback information in a dynamic control system. Traditional implementations of these systems are often hampered by low update rates (<100 Hz) and/or large latencies (>10 msec). These limit the obtainable bandwidth of the control system. Presented here is an implementation of an algorithm with a high update rate (30 kHz) and low latency (100 μsec). This system can be used for a range of repetitive structures and has high robustness. Resolutions of less than 0.1 μm have been demonstrated on real products with 210x70 μm feature size.

- Computer Vision | Pp. 37-47
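
A minimal NumPy sketch of the core idea above (reading displacement directly from the FFT phase at the pattern's fundamental frequency); the function name, the synthetic sinusoid and the sign convention are assumptions for illustration, not the FPGA implementation reported in the paper.

```python
import numpy as np

def estimate_shift(scanline, period_px):
    """Estimate the sub-pixel shift of a repetitive pattern from one line scan.

    The phase of the FFT bin at the pattern's fundamental frequency is read out
    directly and converted to a displacement within one period.
    """
    n = len(scanline)
    spectrum = np.fft.rfft(scanline - scanline.mean())
    k = int(round(n / period_px))        # bin of the fundamental frequency
    phase = np.angle(spectrum[k])        # radians, in (-pi, pi]
    # A pattern shifted toward +x by d pixels shows a phase of -2*pi*d/period.
    return -phase / (2 * np.pi) * period_px

# Synthetic check: a sinusoid shifted by 8 pixels (a quarter of a 32-pixel period).
x = np.arange(1024)
line = np.cos(2 * np.pi * (x - 8.0) / 32.0)
print(estimate_shift(line, period_px=32.0))  # ~8.0
```

Because the read-out is a phase, it is ambiguous beyond one pattern period; an absolute position would have to be unwrapped across successive scan lines.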

Optimization of Quadtree Triangulation for Terrain Models

Refik Samet; Emrah Ozsavas

The aim of the study is to increase the accuracy of a terrain triangulation while maintaining or reducing the number of triangles. To this end, a non-trivial algorithm for quadtree triangulation is proposed. The proposed algorithm includes: i) a resolution parameters calculation technique and ii) three error metric calculation techniques. Simulation software is also devised to apply the proposed algorithm. Initially, a data file is read to obtain the elevation data of a terrain. After that, a 3D mesh is generated by using the original quadtree triangulation algorithm and the proposed algorithm. For each of the algorithms, two situations are analyzed: i) the situation with fixed resolution parameters and ii) the situation with dynamically changing resolution parameters. For all of the cases, terrain accuracy value and number of triangles of 3D meshes are calculated and evaluated. Finally, it is shown that dynamically changing resolution parameters improve the algorithms’ performance.

- Computer Vision | Pp. 48-59
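
As a sketch of the accuracy-versus-triangle-count trade-off discussed above, the snippet below subdivides a height field wherever an error metric exceeds a threshold; the max-deviation-from-bilinear metric and the single fixed threshold are placeholders standing in for the paper's resolution parameters and three error metrics.

```python
import numpy as np

def build_quadtree(heights, x0, y0, size, max_error, leaves):
    """Subdivide the square patch [x0, x0+size] x [y0, y0+size] until the
    interpolation error against its four corners is below max_error."""
    if size == 1:
        leaves.append((x0, y0, size))
        return
    # Bilinear prediction of the patch from its four corner elevations.
    corners = heights[np.ix_([y0, y0 + size], [x0, x0 + size])]
    ys, xs = np.meshgrid(np.linspace(0, 1, size + 1),
                         np.linspace(0, 1, size + 1), indexing="ij")
    approx = (corners[0, 0] * (1 - ys) * (1 - xs) + corners[0, 1] * (1 - ys) * xs
              + corners[1, 0] * ys * (1 - xs) + corners[1, 1] * ys * xs)
    actual = heights[y0:y0 + size + 1, x0:x0 + size + 1]
    if np.max(np.abs(actual - approx)) <= max_error:
        leaves.append((x0, y0, size))    # flat enough: keep one quad (two triangles)
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            build_quadtree(heights, x0 + dx, y0 + dy, half, max_error, leaves)

# Example: a synthetic 65x65 terrain; a smaller max_error yields more triangles.
grid = np.fromfunction(lambda y, x: np.sin(x / 10.0) * np.cos(y / 12.0), (65, 65))
leaves = []
build_quadtree(grid, 0, 0, 64, max_error=0.05, leaves=leaves)
print(len(leaves), "leaf quads ->", 2 * len(leaves), "triangles")
```

A real terrain pipeline would additionally stitch neighboring quads of different depth to avoid cracks; the sketch only illustrates how the error threshold steers the triangle count.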

Analyzing DGI-BS: Properties and Performance Under Occlusion and Noise

Pilar Merchán; Antonio Adán

This paper analyzes a new 3D recognition method for occluded objects in complex scenes. The technique uses the Depth Gradient Image Based on Silhouette (DGI-BS) representation and addresses the identification-and-pose problem under occlusion and noise. DGI-BS synthesizes both surface and contour information, avoiding restrictions concerning the layout and visibility of the objects in the scene. Firstly, the paper presents the main properties of this method compared with a set of known techniques and briefly explains the key concepts of the DGI-BS representation. Secondly, the performance of this strategy in real scenes under occlusion and noise is presented in detail.

- Computer Vision | Pp. 60-71

Real-Time Free Viewpoint from Multiple Moving Cameras

Vincent Nozick; Hideo Saito

In recent years, some Video-Based Rendering methods have advanced from off-line rendering to on-line rendering. However, very few of them can handle moving cameras while recording. Moving cameras make it possible to follow an actor in a scene, move closer to get more details, or simply adjust the framing. In this paper, we propose a new Video-Based Rendering method that creates new views of the scene live from four moving webcams. These cameras are calibrated in real time using multiple markers. Our method fully uses both CPU and GPU and hence requires only one consumer-grade computer.

- Computer Vision | Pp. 72-83

A Cognitive Modeling Approach for the Semantic Aggregation of Object Prototypes from Geometric Primitives: Toward Understanding Implicit Object Topology

Peter Michael Goebel; Markus Vincze

Object recognition has developed into the most common approach for detecting arbitrary objects based on their appearance, where viewpoint dependency, occlusions, algorithmic constraints and noise are often hindrances to successful detection. Statistical pattern analysis methods, which are able to extract features from the images and enable the classification of the image content, have reached a certain maturity and achieve excellent recognition on rather complex problems.

However, these systems do not seem directly scalable to human performance in a cognitive sense, and appearance does not contribute to understanding the structure of objects. Syntactical pattern recognition methods are able to deal with structured objects, which may be constructed from primitives generated from extracted image features. Here, a central problem is how to aggregate image primitives in order to (re-)construct objects from such primitives.

In this paper, we propose a new approach to the aggregation of object prototypes by using geometric primitives derived from features extracted from image sequences acquired from changing viewpoints. We apply syntactical rules to form representations of the implicit object topology of object prototypes by a set of fuzzy graphs. Finally, we find a superposition of a prototype graph set, which can be used for updating and learning new object recipes in a hippocampus-like episodic memory, paving the way to cognitive understanding of natural scenes. The proposed implementation is exemplified with an object similar to the Necker cube.

- Computer Vision | Pp. 84-96

A Multi-touch Surface Using Multiple Cameras

Itai Katz; Kevin Gabayan; Hamid Aghajan

In this paper we present a framework for a multi-touch surface using multiple cameras. With an overhead camera and a side-mounted camera we determine the fingertip positions. After determining the fundamental matrix that relates the two cameras, we calculate the three dimensional coordinates of the fingertips. The intersection of the epipolar lines from the overhead camera with the fingertips detected in the side camera image provides the fingertip height. Touches are detected when the measured fingertip height from the touch surface is zero. We interpret touch events as hand gestures which can be generalized into commands for manipulating applications. We offer an example application of a multi-touch finger painting program.

- Computer Vision | Pp. 97-108
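
The epipolar-geometry step described above can be sketched in a few lines: the fundamental matrix maps the overhead fingertip to an epipolar line in the side view, and a touch is declared when the side-view fingertip lies on that line at the surface height. The helper names, the row-based height proxy and the tolerance are assumptions for illustration, not the paper's calibration procedure.

```python
import numpy as np

def epipolar_line(F, p_top):
    """Epipolar line l = F @ p in the side-camera image for a fingertip detected
    at pixel p_top = (x, y) in the overhead camera."""
    l = F @ np.array([p_top[0], p_top[1], 1.0])
    return l / np.linalg.norm(l[:2])      # scale so that point-line distance is in pixels

def is_touch(F, p_top, p_side, surface_row, tol_px=3.0):
    """Declare a touch when the side-camera fingertip lies on the epipolar line of
    the overhead detection and its image row coincides with the table surface."""
    l = epipolar_line(F, p_top)
    on_line = abs(l @ np.array([p_side[0], p_side[1], 1.0])) < tol_px
    height_px = surface_row - p_side[1]   # crude height proxy: rows above the surface
    return on_line and abs(height_px) < tol_px

# Usage with a calibrated fundamental matrix F and per-frame fingertip detections
# (left commented out because F must come from calibration of the two cameras):
# touched = is_touch(F, p_top=(320, 240), p_side=(410, 478), surface_row=480)
```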

Fusion of Bayesian Maximum Entropy Spectral Estimation and Variational Analysis Methods for Enhanced Radar Imaging

Yuriy Shkvarko; Rene Vazquez-Bautista; Ivan Villalon-Turrubiates

A new fused Bayesian maximum entropy–variational analysis (BMEVA) method for enhanced radar/synthetic aperture radar (SAR) imaging is addressed, as required for high-resolution remote sensing (RS) imagery. The variational analysis (VA) paradigm is adapted by incorporating image gradient flow norm preservation into the overall reconstruction problem to control the geometrical properties of the desired solution. The metrics structure in the corresponding image representation and solution spaces is adjusted to incorporate the VA image formalism and RS model-level considerations; in particular, system calibration data and total image gradient flow power constraints. The BMEVA method aggregates the image model and system-level considerations into the fused SSP reconstruction strategy, providing a regularized balance between noise suppression and gained spatial resolution with the VA-controlled geometrical properties of the resulting solution. The efficiency of the developed enhanced radar imaging approach is illustrated through numerical simulations with real-world SAR imagery.

- Fusion, Detection and Classification | Pp. 109-120