Publications catalog - books



MICAI 2007: Advances in Artificial Intelligence: 6th Mexican International Conference on Artificial Intelligence, Aguascalientes, Mexico, November 4-10, 2007. Proceedings

Alexander Gelbukh; Ángel Fernando Kuri Morales (eds.)

Conference: 6th Mexican International Conference on Artificial Intelligence (MICAI). Aguascalientes, Mexico. November 4-10, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Image Processing and Computer Vision

Availability

Detected institution: Not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-76630-8

Electronic ISBN

978-3-540-76631-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

A Single-Frame Super-Resolution Innovative Approach

Luz A. Torres-Méndez; Marco I. Ramírez-Sosa Morán; Mario Castelán

Super-resolution refers to the process of obtaining a high-resolution image from one or more low-resolution images. In this work, we present a novel method for the super-resolution problem in the limited case where only one low-resolution image is given as input. The proposed method is based on statistical learning for inferring the high-frequency regions that distinguish a high-resolution image from a low-resolution one. These inferences are obtained from the correlation between low- and high-resolution regions that come exclusively from the image to be super-resolved, in terms of small neighborhoods. Markov random fields are used as a model to capture the local statistics of high- and low-resolution data analyzed at different scales and resolutions. Experimental results show the viability of the method.

- Image Processing, Computer Vision, and Robotics | Pp. 640-649
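As a rough illustration of the single-image idea described above, the sketch below (Python, not taken from the paper) builds low/high-resolution patch pairs from the input image itself and transfers high-frequency detail by a simple nearest-neighbor lookup; the MRF inference of the paper is deliberately replaced by that lookup, and the patch size, scale factor, and scipy interpolation are illustrative assumptions.

```python
# Minimal example-based single-image super-resolution sketch (assumes a grayscale
# float image whose sides are divisible by `factor`; brute-force search, so only
# practical for small images).
import numpy as np
from scipy.ndimage import zoom

def patches(img, size):
    h, w = img.shape
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            yield (i, j), img[i:i + size, j:j + size].ravel()

def super_resolve(lr, factor=2, size=5):
    # Training pairs come exclusively from the image to be super-resolved:
    # its own downsampled-then-upsampled copy plays the "low-resolution" role.
    blurred = zoom(zoom(lr, 1.0 / factor, order=1), factor, order=1)
    high_freq = lr - blurred                                # known high-frequency detail
    low_bank = np.array([p for _, p in patches(blurred, size)])
    hf_bank = np.array([p for _, p in patches(high_freq, size)])

    up = zoom(lr, factor, order=1)                          # initial interpolation
    out = up.copy()
    c = size // 2
    for (i, j), q in patches(up, size):
        k = np.argmin(np.sum((low_bank - q) ** 2, axis=1))  # nearest low-res neighbor
        out[i + c, j + c] += hf_bank[k].reshape(size, size)[c, c]  # add inferred detail
    return out
```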

Shadows Attenuation for Robust Object Recognition

J. Gabriel Aviña-Cervantes; Leonardo Martínez-Jiménez; Michel Devy; Andres Hernández-Gutierrez; Dora L. Almanza; Mario A. Ibarra

Shadows are useful in synthetic images, where they increase the realism of the generated scene. In natural images, however, object recognition and segmentation are often negatively affected by cast shadows. Since shadows are a physical phenomenon observed in most natural scenes, we propose a fast and reliable procedure to detect and attenuate shadow effects based on color/brightness density. Detected shadows are attenuated by locally modifying the brightness and color of regions that share the same color/brightness density. Some color artifacts (false colors on shadows) produced by the acquisition devices have been detected and discussed, and it has been noticed that they may affect some of the classical shadow removal methods. Finally, experimental results of the proposed shadow attenuation method on real images are presented and evaluated.

- Image Processing, Computer Vision, and Robotics | Pp. 650-659
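A minimal sketch of the detect-and-attenuate idea, assuming an RGB float image with values in [0, 1]; the paper's color/brightness-density model is replaced here by a plain brightness threshold, and the threshold value is an illustrative assumption.

```python
import numpy as np

def attenuate_shadows(rgb, shadow_thresh=0.35, gain=None):
    """Detect dark pixels as candidate shadows and rescale their brightness."""
    brightness = rgb.mean(axis=2)                 # crude per-pixel brightness estimate
    shadow = brightness < shadow_thresh           # candidate cast-shadow mask
    if not shadow.any():
        return rgb
    if gain is None:
        # Match the mean brightness of shadowed regions to that of lit regions.
        lit_mean = brightness[~shadow].mean() if (~shadow).any() else 1.0
        gain = lit_mean / max(brightness[shadow].mean(), 1e-6)
    out = rgb.copy()
    out[shadow] = np.clip(rgb[shadow] * gain, 0.0, 1.0)  # local brightness correction
    return out
```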

Fuzzy Directional Adaptive Recursive Temporal Filter for Denoising of Video Sequences

Alberto Rosales-Silva; Volodymyr Ponomaryov; Francisco Gallegos-Funes

In this paper we present a fuzzy directional adaptive recursive temporal filter for denoising video sequences. The use of spatio-temporal information is considered more efficient in the presence of fast motion and noise. We combine differences between images, such as angle deviations, to obtain several parameters that the proposed algorithm uses to detect motion and differentiate it from background noise. Extensive simulation results demonstrate that the proposed fuzzy filter consistently outperforms other filters by balancing the tradeoff between noise suppression and detail preservation.

- Image Processing, Computer Vision, and Robotics | Pp. 660-670
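The following sketch shows one way an adaptive recursive temporal filter can be driven by a fuzzy membership of the inter-frame difference (a large difference suggests motion, so less temporal averaging is applied); the membership function and its parameters are assumptions, not the authors' exact design.

```python
import numpy as np

def fuzzy_small(diff, a=0.02, b=0.15):
    """Membership of 'small difference' (1 = static pixel, 0 = strong motion)."""
    return np.clip((b - diff) / (b - a), 0.0, 1.0)

def temporal_denoise(frames, alpha_max=0.9):
    """frames: iterable of grayscale images in [0, 1]; yields filtered frames."""
    prev = None
    for f in frames:
        if prev is None:
            prev = f.astype(float)
            yield prev
            continue
        diff = np.abs(f - prev)
        alpha = alpha_max * fuzzy_small(diff)      # per-pixel recursion weight
        prev = alpha * prev + (1.0 - alpha) * f    # recursive temporal update
        yield prev
```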

Bars Problem Solving - New Neural Network Method and Comparison

Václav Snášel; Dušan Húsek; Alexander Frolov; Hana Řezanková; Pavel Moravec; Pavel Polyakov

The bars problem is widely used as a benchmark for the class of feature extraction tasks. In this model, an artificial data set is generated as a Boolean sum of a given number of bars. We show that the most suitable technique for feature set extraction in this case is neural-network-based Boolean factor analysis. The results are compared with several dimension reduction techniques: singular value decomposition, semi-discrete decomposition, and non-negative matrix factorization. Even though these methods are linear, it is interesting to compare them with the neural network approach, because they are well elaborated and often used for similar tasks. We also show that frequently used cluster analysis methods can bring interesting results, at least for a first insight into the data structure.

- Image Processing, Computer Vision, and Robotics | Pp. 671-682
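For concreteness, the sketch below generates the bars data set as the Boolean OR of randomly chosen horizontal and vertical bars and applies NMF, one of the linear baselines named in the abstract; the grid size, bar probability, and scikit-learn settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def make_bars(n=8, n_samples=1000, p=0.15, rng=None):
    """Each sample is the Boolean OR of a few random bars on an n x n grid."""
    rng = np.random.default_rng(rng)
    X = np.zeros((n_samples, n * n))
    for s in range(n_samples):
        img = np.zeros((n, n), dtype=bool)
        for i in range(n):
            if rng.random() < p:
                img[i, :] = True          # horizontal bar i
            if rng.random() < p:
                img[:, i] = True          # vertical bar i
        X[s] = img.ravel()
    return X

X = make_bars()
model = NMF(n_components=16, init="nndsvda", max_iter=500)  # 16 = 2n possible bars
W = model.fit_transform(X)                # per-sample loadings
H = model.components_                     # learned (ideally bar-like) factors
```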

A Coarse-and-Fine Bayesian Belief Propagation for Correspondence Problems in Computer Vision

Preeyakorn Tipwai; Suthep Madarasmi

We present the use of a multi-resolution, coarse-and-fine, pyramid image architecture to solve correspondence problems in various computer vision modules, including shape recognition through contour matching, stereo vision, and motion estimation. The algorithm works with a grid matching and an inter-grid correspondence model by message passing in a Bayesian belief propagation (BBP) network. Local smoothness and other constraints are expressed within each resolution-scale grid and also between grids, in a single paradigm. Top-down and bottom-up matching are performed concurrently for each pair of adjacent levels of the image pyramid in order to find the best-matched features at every level simultaneously. The coarse-and-fine algorithm uses the matching results in each layer to constrain the process in its two adjacent (upper and lower) layers by measuring the consistency between corresponding points in adjacent layers, so that good matches at different resolution scales constrain one another. The coarse-and-fine method helps avoid the local-minimum problem by bringing features closer at the coarse level while still providing a complete solution at the finer level. The method is used to constrain the solution with examples in shape retrieval, stereo vision, and motion estimation, demonstrating desirable properties such as rapid convergence, the ability to obtain near-optimal solutions while avoiding local minima, and immunity to the error propagation found in the coarse-to-fine approach. ...

- Image Processing, Computer Vision, and Robotics | Pp. 683-693
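The sketch below illustrates only the pyramid-constraint idea, in its simplest (coarse-to-fine) form, for stereo block matching: a disparity estimate at one level restricts the search window one level finer. The bidirectional coarse-and-fine message passing and the Bayesian belief propagation network of the paper are not reproduced; image sides are assumed to be powers of two.

```python
import numpy as np

def match_pixel(left, right, y, x, prior, radius=2, win=2):
    """Best disparity for pixel (y, x), searched within 'radius' of a prior estimate."""
    best, best_cost = prior, np.inf
    pl = left[y - win:y + win + 1, x - win:x + win + 1]
    for d in range(prior - radius, prior + radius + 1):
        if x - d - win < 0 or x - d + win + 1 > right.shape[1]:
            continue
        pr = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
        cost = float(np.sum((pl - pr) ** 2))
        if cost < best_cost:
            best, best_cost = d, cost
    return best

def pyramid_disparity(left, right, levels=3, win=2):
    pyr = [(left, right)]
    for _ in range(levels - 1):
        l, r = pyr[-1]
        pyr.append((l[::2, ::2], r[::2, ::2]))           # simple decimation pyramid
    disp = np.zeros(pyr[-1][0].shape, dtype=int)          # start at the coarsest level
    for level, (l, r) in enumerate(reversed(pyr)):
        if level > 0:                                     # propagate coarse estimate down
            disp = 2 * np.repeat(np.repeat(disp, 2, axis=0), 2, axis=1)
        new = disp.copy()
        for y in range(win, l.shape[0] - win):
            for x in range(win, l.shape[1] - win):
                new[y, x] = match_pixel(l, r, y, x, int(disp[y, x]), win=win)
        disp = new
    return disp
```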

3D Object Recognition Based on Low Frequency Response and Random Feature Selection

Roberto A. Vázquez; Humberto Sossa; Beatriz A. Garro

In this paper we propose a view-based method for 3D object recognition based on some biological aspects of infant vision. The biological hypotheses of this method concern the role of the response to low frequencies at early stages, together with some conjectures about how an infant detects subtle features (stimulating points) in an object. In order to recognize an object from different images of it (orientations from 0° to 100°), we make use of a dynamic associative memory (DAM). As infant vision responds to the low frequencies of the signal, a low-pass filter is first used to remove high-frequency components from the image. We then detect subtle features in the image by means of a random feature selection detector. Finally, the DAM is fed with this information for training and recognition. To test the accuracy of the proposal we use the Columbia Object Image Library (COIL-100) database.

- Image Processing, Computer Vision, and Robotics | Pp. 694-704
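A small sketch of the first two stages, assuming grayscale views: a Gaussian low-pass filter removes high frequencies, and a fixed set of randomly chosen pixel positions plays the role of the "stimulating points". A nearest-neighbor lookup stands in for the dynamic associative memory (DAM); the smoothing sigma and number of selected pixels are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_extractor(image_shape, n_features=256, sigma=3.0, rng=None):
    rng = np.random.default_rng(rng)
    idx = rng.choice(image_shape[0] * image_shape[1], size=n_features, replace=False)
    def extract(img):
        low = gaussian_filter(img.astype(float), sigma)   # keep low frequencies only
        return low.ravel()[idx]                           # random "stimulating points"
    return extract

def train(views, labels, extract):
    return np.array([extract(v) for v in views]), np.array(labels)

def recognize(img, memory, labels, extract):
    f = extract(img)
    return labels[int(np.argmin(np.linalg.norm(memory - f, axis=1)))]
```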

Image Processing for 3D Reconstruction Using a Modified Fourier Transform Profilometry Method

Jesus Carlos Pedraza Ortega; Jose Wilfrido Rodriguez Moreno; Leonardo Barriga Rodriguez; Efren Gorrostieta Hurtado; Tomas Salgado Jimenez; Juan Manuel Ramos Arreguin; Angel Rivas

An image processing algorithm based on the Fourier Transform Profilometry (FTP) method for 3D reconstruction is presented. The method uses a combined global and local analysis in the phase unwrapping stage and obtains better results than the simple unwrapping algorithm of the standard FTP method. A sinusoidal fringe pattern of known spatial frequency is first projected onto a reference plane and an image is acquired. The object whose shape is to be recovered is then placed in front of the reference plane, the same fringe pattern is projected, and another image is acquired. The projected pattern is distorted according to the shape of the object. The modified Fourier Transform Profilometry method is then applied to the acquired images. The digitized images contain the pixels of the object together with the fringe pattern, and the phase difference between the acquired images carries the height (depth) information. The novelty of the proposed method lies in the unwrapping algorithm, which recovers the depth information using the combined analysis mentioned above.

- Image Processing, Computer Vision, and Robotics | Pp. 705-712
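The core FTP steps can be sketched as follows: isolate the fundamental fringe lobe in the Fourier domain, recover the wrapped phase, and take the object-minus-reference phase difference. The simple row-wise np.unwrap below stands in for the paper's combined global/local unwrapping, and the carrier frequency f0 (fringes per image width) is an assumed, known parameter.

```python
import numpy as np

def wrapped_phase(fringe_img, f0, bw=0.5):
    """Per-row wrapped phase of a sinusoidal fringe image with carrier frequency f0."""
    n = fringe_img.shape[1]
    F = np.fft.fft(fringe_img, axis=1)
    freqs = np.fft.fftfreq(n) * n                              # frequency index per column
    mask = (freqs > f0 * (1 - bw)) & (freqs < f0 * (1 + bw))   # keep only the +f0 lobe
    analytic = np.fft.ifft(F * mask, axis=1)
    return np.angle(analytic)

def depth_proportional_phase(object_img, reference_img, f0):
    dphi = wrapped_phase(object_img, f0) - wrapped_phase(reference_img, f0)
    dphi = np.angle(np.exp(1j * dphi))                # re-wrap the difference to (-pi, pi]
    return np.unwrap(dphi, axis=1)                    # naive row-wise unwrapping
```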

3D Space Representation by Evolutive Algorithms

Rodrigo Montúfar-Chaveznava; Mónica Pérez-Meza

In this paper we present a system for obtaining a representation of a 3D space using evolutionary algorithms. Besides the evolutionary algorithm, the proposed system is based on the mathematical principles of stereo vision, particularly stereoscopy. Stereo vision uses two images captured by a pair of cameras, in analogy to the mammalian visual system; these images are used to partially reconstruct the scene they contain through a set of computational operations. In this work we employ only one camera, which is translated along a predetermined path and captures images at fixed intervals, providing the stereo pairs needed for reconstruction. As we cannot perform all the computations required for a full scene reconstruction, we employ an evolutionary algorithm to partially reconstruct the scene and obtain its representation. The evolutionary algorithm employed is the fly algorithm [1], which uses 3D points named "flies" to reconstruct the principal characteristics of the world following the evolutionary rules dictated by the algorithm.

- Image Processing, Computer Vision, and Robotics | Pp. 713-724
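A toy sketch of the fly-algorithm idea from [1]: a population of 3D points ("flies") is evolved so that their projections into the two images are photo-consistent. The 3x4 projection matrices, the image pair, the search bounds, and all genetic-algorithm parameters are assumptions for illustration; the actual algorithm is considerably richer.

```python
import numpy as np

def project(P, X):
    x = P @ np.append(X, 1.0)
    return (x[:2] / x[2]).astype(int)                  # pixel coordinates (u, v)

def fitness(X, img1, img2, P1, P2, win=2):
    (u1, v1), (u2, v2) = project(P1, X), project(P2, X)
    h, w = img1.shape
    if not (win <= v1 < h - win and win <= u1 < w - win and
            win <= v2 < h - win and win <= u2 < w - win):
        return -np.inf
    p1 = img1[v1 - win:v1 + win + 1, u1 - win:u1 + win + 1]
    p2 = img2[v2 - win:v2 + win + 1, u2 - win:u2 + win + 1]
    return -float(np.sum((p1 - p2) ** 2))              # higher = more photo-consistent

def evolve(img1, img2, P1, P2, bounds, n=500, gens=50, sigma=0.05, rng=None):
    rng = np.random.default_rng(rng)
    lo, hi = np.array(bounds[0]), np.array(bounds[1])
    flies = rng.uniform(lo, hi, size=(n, 3))           # initial random 3D population
    for _ in range(gens):
        scores = np.array([fitness(f, img1, img2, P1, P2) for f in flies])
        parents = flies[np.argsort(scores)[::-1][: n // 2]]   # keep the fitter half
        children = parents + rng.normal(0, sigma, parents.shape) * (hi - lo)
        flies = np.clip(np.vstack([parents, children]), lo, hi)
    return flies
```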

Knowledge Acquisition and Automatic Generation of Rules for the Inference Machine CLIPS

Veronica E. Arriola; Jesus Savage

A hierarchical representation of objects is dynamically generated from the input of a virtual vision system. It is used to analyze a sequence of actions and extract behavior rules that can be utilized by the CLIPS inference engine. The vision system is assumed to provide simplified positional and shape information about visible 3D silhouettes on a frame-by-frame basis. A virtual agent attempts to keep track of every image, without any previous knowledge about the object it represents. The hierarchy is restructured as necessary to include newly perceived images, in such a way that it also reflects factual relationships among them. Modifications between consecutive frames are internally interpreted and represented as functions that take the original world description and transform it into the next frame. A partial order is defined while checking the satisfaction of domain/codomain requirements in function composition, thus leading to the CLIPS rules.

- Image Processing, Computer Vision, and Robotics | Pp. 725-735
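To make the last step concrete, the sketch below turns one observed frame-to-frame transition into a CLIPS rule: facts describing the world before the change become conditions, and facts that appear afterwards become asserted effects. The fact templates used here, such as (position obj1 2 3), are illustrative assumptions, not the paper's representation.

```python
def to_clips_rule(name, before, after):
    """before/after: lists of fact strings, e.g. '(position obj1 2 3)'."""
    conditions = "\n  ".join(before)
    new_facts = [f for f in after if f not in before]        # effects of the transition
    actions = "\n  ".join(f"(assert {f})" for f in new_facts)
    return f"(defrule {name}\n  {conditions}\n  =>\n  {actions})"

rule = to_clips_rule(
    "move-obj1-right",
    before=["(position obj1 2 3)", "(shape obj1 box)"],
    after=["(position obj1 3 3)", "(shape obj1 box)"],
)
print(rule)   # prints a defrule ready to be loaded into CLIPS
```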

On-Line Rectification of Sport Sequences with Moving Cameras

Jean-Bernard Hayet; Justus Piater

This article proposes a global approach to the rectification of sport sequences, estimating the mapping from the video images to the terrain in the ground plane without using position sensors on the TV camera. Our strategy relies on three complementary techniques: (1) initial homography estimation using line-feature matching, (2) homography estimation with line-feature tracking, and (3) incremental homography estimation through point-feature tracking. Together, they allow continuous homography estimation over time, even during periods where the video does not contain sufficient line features to determine the homography from scratch. We illustrate the complementarity of the three techniques on a set of challenging examples.

- Image Processing, Computer Vision, and Robotics | Pp. 736-746
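A minimal sketch of the point-tracking component (technique 3), assuming point correspondences between consecutive frames are already available: the frame-to-frame homography is estimated with the DLT and chained with the previous image-to-terrain mapping. In practice a normalized DLT with outlier rejection (e.g., RANSAC) would be preferred; the tracking itself and the line-feature techniques (1) and (2) are not shown.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H (3x3) with dst ~ H @ src from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def update_image_to_terrain(H_prev, pts_prev, pts_curr):
    """Chain the frame-to-frame homography with the previous image->terrain mapping."""
    H_delta = dlt_homography(pts_curr, pts_prev)       # current frame -> previous frame
    return H_prev @ H_delta                            # current frame -> terrain
```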