Publications catalog - books

Advances in Visual Computing: 2nd International Symposium, ISVC 2006, Lake Tahoe, NV, USA, November 6-8, 2006, Proceedings, Part I

George Bebis ; Richard Boyle ; Bahram Parvin ; Darko Koracin ; Paolo Remagnino ; Ara Nefian ; Gopi Meenakshisundaram ; Valerio Pascucci ; Jiri Zara ; Jose Molineros ; Holger Theisel ; Tom Malzbender (eds.)

In conference: 2nd International Symposium on Visual Computing (ISVC), Lake Tahoe, NV, USA, November 6-8, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Software Engineering/Programming and Operating Systems; Pattern Recognition; Image Processing and Computer Vision; Artificial Intelligence (incl. Robotics); Computer Graphics; Algorithm Analysis and Problem Complexity

Availability

Detected institution: Not detected
Publication year: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-48628-2

Electronic ISBN

978-3-540-48631-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Direct Estimation of the Stereo Geometry from Monocular Normal Flows

Ding Yuan; Ronald Chung

The increasing use of active vision systems makes it necessary to determine the relative geometry between the cameras in the system at an arbitrary time. There has been some work on on-line estimation of the relative camera geometry parameters, but much of it relies on epipolar geometry, motion correspondences, or even the presence of calibration reference objects in the scene. In this paper, we describe a method that allows the relative geometry of two cameras to be estimated without assuming that their visual fields picture the same object, or that motion correspondences in each camera are fully estimated beforehand. The method starts from monocular normal flows in the two cameras and estimates the relative geometry parameters without even accessing the full optical flows. Experimental results are shown to illustrate the performance of the method.
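
The method's raw input, monocular normal flow, is the only motion component observable at a single pixel: the projection of optical flow onto the local brightness gradient. Below is a minimal numpy sketch of that standard computation (illustrative only; the paper's geometry estimation builds on top of this and is not shown):

    import numpy as np

    def normal_flow(frame0, frame1):
        # Brightness constancy gives Ix*u + Iy*v + It = 0; only the flow
        # component along the gradient (the "normal flow") is observable
        # at a single pixel, a consequence of the aperture problem.
        I0 = frame0.astype(np.float64)
        I1 = frame1.astype(np.float64)
        Iy, Ix = np.gradient(I0)              # spatial gradients
        It = I1 - I0                          # temporal derivative
        scale = -It / (Ix**2 + Iy**2 + 1e-9)
        return scale * Ix, scale * Iy         # normal-flow vector field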

Pp. 303-312

Singular Value Decomposition-Based Illumination Compensation in Video

Ki-Youn Lee; Rae-Hong Park

This paper presents a singular value decomposition (SVD)-based illumination compensation method for video with varying scene illumination. In video that does not contain scene changes, the color distributions in the RGB space differ from frame to frame, mainly due to varying illumination. In this paper, the color distribution of a scene is modeled as an ellipsoid using SVD, and the scene illumination of successive frames is preserved by a linear transformation in the RGB space. The effect of illumination change is effectively removed by the linear transformation, and similarity measures such as the normalized cross correlation, the sum of absolute differences, and the sum of squared differences of two successive image frames are preserved, which illustrates the effectiveness of the proposed algorithm. Simulation results with several synthetic and real test sequences show the robustness of the proposed method to illumination changes compared with conventional methods.
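
As a rough sketch of the idea, one can fit an ellipsoid (mean plus SVD of the centered RGB samples) to each frame's color distribution and linearly map the current frame onto a reference frame's ellipsoid. Function names and the exact alignment below are assumptions, not the paper's formulation:

    import numpy as np

    def ellipsoid(frame):
        # Flatten HxWx3 frame to Nx3 RGB samples; the SVD of the centered
        # data gives the ellipsoid's principal axes and radii.
        X = frame.reshape(-1, 3).astype(np.float64)
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt.T, s / np.sqrt(len(X))   # center, axes, radii

    def compensate(curr, ref):
        # Map curr's RGB ellipsoid onto ref's with a linear transform, so
        # successive frames keep a consistent apparent illumination.
        mu_r, A_r, s_r = ellipsoid(ref)
        mu_c, A_c, s_c = ellipsoid(curr)
        T = A_r @ np.diag(s_r / (s_c + 1e-9)) @ A_c.T
        X = curr.reshape(-1, 3).astype(np.float64)
        out = (X - mu_c) @ T.T + mu_r
        return np.clip(out, 0, 255).reshape(curr.shape).astype(np.uint8)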

Pp. 313-322

Facial Expression Transformations for Expression-Invariant Face Recognition

Hyung-Soo Lee; Daijin Kim

This paper presents a method of expression-invariant face recognition that transforms an input face image with an arbitrary expression into its corresponding neutral facial expression image. When a new face image with an arbitrary expression is queried, it is represented by a feature vector using the active appearance model (AAM). The facial expression state of the queried feature vector is then identified by the facial expression recognizer. Next, the queried feature vector is transformed into the neutral facial expression vector using the identified facial expression state via the direct or indirect facial expression transformation, where the former uses the bilinear translation directly to transform the facial expression, while the latter uses the bilinear translation to obtain relative expression parameters and transforms the facial expression indirectly by those parameters. The neutral facial expression vector is then converted into the neutral facial expression image via the AAM reconstruction. Finally, face recognition is performed by a distance-based matching technique. Experimental results show that the proposed expression-invariant face recognition method is very robust under a variety of facial expressions.
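
The bilinear transformation step can be pictured as follows: if an AAM feature vector factors into an expression-specific basis times person-specific parameters, a neutral vector is obtained by re-rendering those parameters under the neutral basis. The shapes, names, and least-squares recovery here are hypothetical, not the paper's exact procedure:

    import numpy as np

    def to_neutral(x, W, expr_id, neutral_id=0):
        # W: (num_expressions, feat_dim, person_dim), learned offline.
        # Assume x ~= W[expr_id] @ p for person-specific parameters p.
        p, *_ = np.linalg.lstsq(W[expr_id], x, rcond=None)  # recover identity
        return W[neutral_id] @ p           # re-render under neutral expression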

Pp. 323-333

A High-Speed Parallel Architecture for Stereo Matching

Sungchan Park; Hong Jeong

A stereo matching algorithm based on belief propagation (BP) achieves the low matching error of a global method, but has the disadvantage of a long processing time. In addition to a low error of less than 2.6% in the Middlebury image simulation, our new BP-based architecture offers a high-speed parallel VLSI structure with time complexity O(N) at a suitably small number of iterations, so that it can be useful as a chip in real-time applications such as robotics and navigation.
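
For reference, the core operation such an architecture parallelizes is the min-sum BP message update over disparities. The sketch below is the generic form with a linear smoothness penalty, not the paper's VLSI formulation:

    import numpy as np

    def bp_message(data_cost, incoming, lam=10.0):
        # data_cost: per-disparity matching cost at the sending pixel.
        # incoming: list of messages from the sender's other neighbors.
        D = len(data_cost)
        h = data_cost + sum(incoming)          # aggregate beliefs at sender
        msg = np.empty(D)
        for d in range(D):
            # Cost of the sender's best disparity d' plus a linear
            # smoothness penalty lam * |d - d'|.
            msg[d] = np.min(h + lam * np.abs(np.arange(D) - d))
        return msg - msg.min()                 # normalize for stability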

Pp. 334-342

Light Simulation in a Distributed Driving Simulator

Stefan Lietsch; Henning Zabel; Martin Eikermann; Veit Wittenberg; Jan Berssenbrügge

In this paper we present our work on modularizing and distributing a VR application, the Virtual Night Drive simulator. The main focus of our work is the simulation of headlights. The realistic yet still interactive visualization of these lights is essential for a usable driving simulator at night. Modern techniques like pixel and vertex shaders and volume rendering help to realize the complex task of light simulation. But there are still scenarios, especially when several cars with headlights appear in one scene, that require distributed setups to run the simulation in an interactive way. In this paper we present an architecture that supports several approaches to distributed light simulation, compare it to existing systems, and give an outlook on what remains to be done.
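
As a toy illustration of the kind of per-fragment computation a headlight shader performs, the following computes an angular cone falloff with distance attenuation; the model and its parameters are invented for illustration and are not the paper's shader:

    import numpy as np

    def headlight_intensity(points, pos, axis, cutoff_deg=30.0, exponent=8.0):
        # points: (N, 3) surface points; pos: (3,) lamp position;
        # axis: (3,) unit beam direction.
        v = points - pos
        dist = np.linalg.norm(v, axis=-1)
        cos_a = (v @ axis) / (dist + 1e-9)         # angle off the beam axis
        spot = np.clip(cos_a, 0.0, 1.0) ** exponent
        spot[cos_a < np.cos(np.radians(cutoff_deg))] = 0.0  # hard cone cutoff
        return spot / (dist**2 + 1e-9)             # inverse-square falloff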

Pp. 343-352

Self-adaptive RBF Neural Networks for Face Recognition

S. Gharai; S. Thakur; S. Lahiri; J. K. Sing; D. K. Basu; M. Nasipuri; M. Kundu

A self-adaptive radial basis function neural network (RBFNN)-based recognition of human faces is proposed in this paper. Conventionally, all the hidden layer neurons of an RBFNN are considered when generating outputs at the output layer. In this work, a confidence measure is imposed to select a subset of the hidden layer neurons for generating outputs, thereby making the RBFNN self-adaptive in choosing which hidden layer neurons to consider. This process also reduces the computation time at the output layer of the RBFNN by neglecting the ineffective RBFs. The performance of the proposed method has been evaluated on the ORL and UMIST face databases. The experimental results indicate that the proposed method can achieve excellent recognition rates and outperform some traditional face recognition approaches.
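
A minimal sketch of the self-adaptive output computation, assuming the confidence measure is a simple activation threshold (the paper's actual measure may differ):

    import numpy as np

    def rbf_predict(x, centers, widths, W, conf=0.05):
        # centers: (H, d) RBF centers; widths: (H,); W: (classes, H).
        d2 = ((centers - x) ** 2).sum(axis=1)
        phi = np.exp(-d2 / (2.0 * widths**2))   # hidden-layer activations
        active = phi >= conf                    # self-adaptive neuron subset
        # Only confident RBFs contribute, skipping the ineffective ones.
        return W[:, active] @ phi[active]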

Pp. 353-362

An Improved Representation of Junctions Through Asymmetric Tensor Diffusion

Shawn Arseneau; Jeremy R. Cooperstock

Junctions form critical features in motion segmentation, image enhancement, and object classification, to name but a few application domains. Traditional approaches to identifying junctions include convolutional methods, which require considerable tuning to handle non-trivial inputs, and diffusion techniques, which address only symmetric structure. A new approach is proposed that requires minimal tuning and can distinguish between the basic, but critically different, ‘X’ and ‘T’ junctions. It involves a multi-directional representation of gradient structure and employs asymmetric tensor diffusion to emphasize such junctions. The approach combines the desirable asymmetry of convolutional methods with the robustness of local support from diffusion.
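
A multi-directional representation of gradient structure can be sketched as a magnitude-weighted histogram over full 360-degree gradient directions, which, unlike a symmetric structure tensor, distinguishes a gradient from its opposite (the property an asymmetric diffusion step relies on). The diffusion itself is omitted here; this is illustrative only:

    import numpy as np

    def orientation_histogram(patch, bins=16):
        # Histogram of gradient directions in a pixel's neighbourhood,
        # weighted by gradient magnitude, over the full signed range.
        gy, gx = np.gradient(patch.astype(np.float64))
        ang = np.arctan2(gy, gx) % (2 * np.pi)   # keeps direction, not just axis
        mag = np.hypot(gx, gy)
        idx = (ang / (2 * np.pi) * bins).astype(int) % bins
        hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
        return hist / (hist.sum() + 1e-9)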

Pp. 363-372

Accurate Extraction of Reciprocal Space Information from Transmission Electron Microscopy Images

Edward Rosten; Susan Cox

As the study of complex systems has become dominant in physics, the link between computational and physical science has become ever more important. In particular, with the rising popularity of imaging techniques in physics, the development and application of cutting-edge computer vision techniques has become vital. Here we present novel image analysis methods which can be used to extract the position of features in diffraction patterns (reciprocal space) with unprecedented accuracy.

Our first contribution is a method for calculating the nonlinear response of photographic film by using the noise in the image, enabling the extraction of accurate intensity information. This allows high-resolution (but non-linear) film to be used in place of low-resolution (but linear) CCD cameras. The second contribution is a method for accurately localising very faint features in diffraction patterns by modelling the features and using the expectation maximization algorithm directly on the image to fit them. The accuracy of this technique has been verified by testing it on synthetic data.
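
A single-spot sketch of the EM idea, fitting a Gaussian-spot plus uniform-background mixture directly to pixel intensities; the paper fits its own feature models, so treat this as illustrative:

    import numpy as np

    def fit_spot(img, iters=30):
        # Treat normalized intensity as a sample density over pixel
        # coordinates and fit: pi * Gaussian(mu, sigma) + (1 - pi) * uniform.
        ys, xs = np.indices(img.shape)
        w = img.astype(np.float64).ravel()
        w /= w.sum()
        y, x = ys.ravel(), xs.ravel()
        mu = np.array([(w * y).sum(), (w * x).sum()])   # init at centroid
        sigma, pi = 2.0, 0.5
        u = 1.0 / img.size                              # uniform background
        for _ in range(iters):
            d2 = (y - mu[0])**2 + (x - mu[1])**2
            g = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
            # E-step: per-pixel responsibility of the spot vs. background.
            r = pi * g / (pi * g + (1 - pi) * u + 1e-12)
            wr = w * r
            # M-step: re-estimate mixture weight, center, and width.
            pi = wr.sum()
            mu = np.array([(wr * y).sum(), (wr * x).sum()]) / pi
            sigma = np.sqrt((wr * d2).sum() / (2 * pi))
        return mu, sigma        # sub-pixel spot location and width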

These methods have been applied to transmission electron microscopy data, and have already enabled discoveries which would have been impossible using previously available techniques.

Pp. 373-382

GPU Accelerated Isosurface Extraction on Tetrahedral Grids

Luc Buatois; Guillaume Caumon; Bruno Lévy

Visualizing large unstructured grids is extremely useful for understanding natural and simulated phenomena. However, informative volume visualization is difficult to achieve efficiently due to the huge amount of information to process. In this paper, we present a method to efficiently tessellate on a GPU large unstructured tetrahedral grids made of millions of cells. The method avoids data redundancy by using textures to store most of the needed data; textures are accessed through vertex texture lookups in the vertex shading unit of modern graphics cards. Results show that our method is about 2 times faster than the same CPU-based extraction and complementary to previous approaches based on GPU registers: it is less efficient for small grids, but handles grids of millions of tetrahedra in graphics memory, which was impossible with previous works. Future hardware evolutions are expected to make our approach much more efficient.
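
The per-cell work being moved to the GPU is marching-tetrahedra edge interpolation; here is a CPU reference for one tetrahedron, with the paper's texture-based data layout omitted:

    import numpy as np

    def tet_isosurface(verts, vals, iso):
        # verts: (4, 3) tetrahedron corners; vals: (4,) scalar field values.
        # The isosurface crosses an edge where the field straddles iso;
        # linear interpolation gives the crossing point. Returns 0, 3, or 4
        # points (a triangle or a quad).
        pts = []
        for a, b in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]:
            va, vb = vals[a], vals[b]
            if (va - iso) * (vb - iso) < 0:      # edge straddles the level set
                t = (iso - va) / (vb - va)
                pts.append(verts[a] + t * (verts[b] - verts[a]))
        return np.array(pts)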

Pp. 383-392

Enhancing Information on Large Scenes by Mixing Renderings

Vincent Boyer; Dominique Sobczyk

We propose a new model for the visualization of large-scale scenes. It is designed to enhance pertinent information so that it becomes quickly viewable in a large scene. It consists of mixing different kinds of rendering techniques in the same frame. This is achieved in real time during the rendering process using GPU programming. Moreover, the rendering techniques used and the key points defined by the user can be changed interactively. We present our model and a new non-photorealistic rendering technique. The images produced look better and provide more information than traditional rendering techniques.
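
One way to picture mixing renderings within a frame: blend a non-photorealistic rendering over a conventional one with a mask that peaks at user-defined key points. A CPU sketch under those assumptions (the paper does this on the GPU during rendering):

    import numpy as np

    def mix_renderings(photo, npr, keypoints, radius=80.0):
        # photo, npr: (H, W, 3) renderings of the same frame;
        # keypoints: iterable of (row, col) user-chosen points of interest.
        h, w = photo.shape[:2]
        ys, xs = np.indices((h, w))
        mask = np.zeros((h, w))
        for ky, kx in keypoints:
            d = np.hypot(ys - ky, xs - kx)
            # NPR weight falls off linearly with distance to the key point.
            mask = np.maximum(mask, np.clip(1.0 - d / radius, 0.0, 1.0))
        m = mask[..., None]
        return (m * npr + (1.0 - m) * photo).astype(photo.dtype)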

Pp. 393-400