Advances in Visual Computing: 2nd International Symposium, ISVC 2006, Lake Tahoe, NV, USA, November 6-8, 2006, Proceedings, Part I
George Bebis ; Richard Boyle ; Bahram Parvin ; Darko Koracin ; Paolo Remagnino ; Ara Nefian ; Gopi Meenakshisundaram ; Valerio Pascucci ; Jiri Zara ; Jose Molineros ; Holger Theisel ; Tom Malzbender (eds.)
Conference: 2nd International Symposium on Visual Computing (ISVC). Lake Tahoe, NV, USA. November 6-8, 2006
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Software Engineering/Programming and Operating Systems; Pattern Recognition; Image Processing and Computer Vision; Artificial Intelligence (incl. Robotics); Computer Graphics; Algorithm Analysis and Problem Complexity
Availability
Detected institution | Publication year | Browse | Download | Request
---|---|---|---|---
Not detected | 2006 | SpringerLink | |
Information
Resource type:
books
Print ISBN
978-3-540-48628-2
Electronic ISBN
978-3-540-48631-2
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11919476_81
Web-Based Interface for the Visualization of Microarray Data
B. Vanteru; J. Shaik; M. Yeasin
This paper presents the design and development of a web-based interface for the visualization of high-dimensional data such as microarray data. A coordinate-based method, namely the 3D Star Coordinate (3SC) projection technique, is used for the visualization. The proposed web-based interface enables the user to choose an existing dataset from the database, or upload a dataset, and visualize the best possible projection of the data on an applet running in the client web browser. The projection algorithm runs in Matlab on the server side for faster computation, and the results are delivered to the client machine using Java Servlets.
Pp. 813-821
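The paper does not spell out the 3SC formulation here, but a standard star-coordinates projection assigns each data dimension an anchor vector in 3D and maps each sample to the anchor-weighted sum of its coordinates. A minimal sketch under that assumption (the random unit anchors are illustrative, not the paper's learned "best" projection):

```python
import numpy as np

def star_coordinates_3d(X, axes=None, seed=0):
    """Project n-dimensional rows of X to 3D star coordinates: each
    dimension gets an anchor vector, samples are anchor-weighted sums."""
    n_dims = X.shape[1]
    if axes is None:
        rng = np.random.default_rng(seed)
        axes = rng.normal(size=(n_dims, 3))
        axes /= np.linalg.norm(axes, axis=1, keepdims=True)  # unit anchors
    return X @ axes  # shape: (n_samples, 3)
```

The projection is linear, so optimizing the "best possible" view amounts to searching over the anchor matrix.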
doi: 10.1007/11919476_82
3D and Texture Modelling of Precolombian Objects
Jorge Hernández; Flavio Prieto
In this paper we present 3D and texture modelling of Precolombian objects. Our experimental setup consists of a non-contact 3D digitizer for range-image acquisition and a high-resolution CCD camera to acquire intensity images of the target object. The texture-mapping process is described as a parameterization function with a range image; we estimate the camera orientation with calibration techniques, using a calibration pattern before data acquisition. We describe a multi-view texture-mapping strategy that addresses photography-related problems such as inhomogeneous lighting, highlights, and occlusion. For each triangle in the model, the optimal image is selected from the source images by weighting the scalar product between the image view vector and the triangle normal vector. Finally, we show highly detailed textured models of Precolombian objects.
Pp. 822-830
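The per-triangle view-selection criterion described above can be sketched as follows; this is an assumption-laden simplification (unit normals and view directions given, no occlusion or lighting weighting) of the paper's multi-view strategy:

```python
import numpy as np

def best_view_per_triangle(tri_normals, view_dirs):
    """For each triangle, pick the source image whose viewing direction
    most directly faces the triangle (largest dot of normal and -view)."""
    # tri_normals: (T, 3) unit triangle normals
    # view_dirs:   (C, 3) unit camera viewing directions
    scores = tri_normals @ (-view_dirs.T)  # (T, C) alignment scores
    return scores.argmax(axis=1)           # index of the optimal image
```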
doi: 10.1007/11919476_83
Segmentation of Triangular Meshes Using Multi-scale Normal Variation
Kyungha Min; Moon-Ryul Jung
In this paper, we present a scheme that segments triangular meshes into several meaningful patches using multi-scale normal variation. In differential geometry, there is a classical scheme that segments smooth surfaces into patches such as elliptic, hyperbolic, or parabolic regions, separated by curves such as ridges, valleys, and parabolic curves, by means of the principal curvatures of the surface. We present a similar segmentation scheme for triangular meshes. For this purpose, we develop a simple and robust scheme that approximates the principal curvatures on triangular meshes by multi-scale normal variation. Using these approximated principal curvatures and adapting the classical segmentation scheme to triangular meshes, we design a scheme that segments triangular meshes into meaningful regions. The segmentation is implemented by evaluating a measure at each vertex that quantifies the likelihood that the vertex belongs to one of the regions. We test our scheme on several face models and demonstrate its capability by segmenting them into meaningful regions.
Pp. 831-840
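One plausible reading of "multi-scale normal variation" is the average angular deviation between a vertex normal and the normals in its k-ring neighborhoods, evaluated at several ring sizes. A sketch under that assumption (neighborhood extraction from the mesh is left out):

```python
import numpy as np

def normal_variation(vertex_normal, ring_normals):
    """Mean angular deviation between a vertex normal and the unit
    normals of one of its ring neighborhoods (a single scale)."""
    cosines = np.clip(ring_normals @ vertex_normal, -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))

def multi_scale_variation(vertex_normal, rings):
    """Evaluate the variation at several neighborhood scales (rings)."""
    return [normal_variation(vertex_normal, r) for r in rings]
```

Flat neighborhoods give values near zero; high values flag ridge- or valley-like vertices, which is the kind of per-vertex measure the segmentation needs.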
doi: 10.1007/11919476_84
Integration of Multiple Methods for Class and Specific Object Recognition
Al Mansur; Md. Altab Hossain; Yoshinori Kuno
Service robots need an object recognition strategy that can work on various objects and backgrounds. Since no single method works well in all situations, several methods must be combined so that a robot can select an appropriate one automatically. In this paper we propose a scheme to classify situations depending on the characteristics of the object of interest, the background, and the user's demand. We classify situations into three categories and employ a different technique for each. We use SIFT and the biologically motivated object recognition technique developed by Serre et al. for two of the categories. Since these two methods do not work well on the remaining category, we propose a contour-based technique for it. Our experiments show that the contour-based method outperforms the other two methods for this category of situations.
Pp. 841-849
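The dispatch idea can be illustrated with a toy selector. The paper's actual category criteria are not given here, so both the predicate names and the routing rules below are purely hypothetical placeholders for its three-way classification:

```python
def choose_recognizer(textured, distinctive_shape):
    """Toy dispatcher mirroring the three-category idea: route each
    situation to one recognizer (criteria here are illustrative only)."""
    if textured:
        return "SIFT"                 # texture-rich objects suit SIFT
    if not distinctive_shape:
        return "biological-features"  # Serre et al. style features
    return "contour-based"            # the remaining category
```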
doi: 10.1007/11919476_85
An Efficient Photon Mapping Algorithm for Rendering Light-Emitting Fluids
Kyungha Min
In this paper, we present a novel and efficient algorithm for rendering light-emitting fluids such as fire. For this purpose, we extend the well-known photon mapping algorithm to volumetric environments. The photons emitted from the fluids are stored in a voxelized space instead of the k-d tree used in the original photon mapping scheme. We further propose an efficient photon collecting algorithm based on Bresenham's algorithm, which collects photons at reduced computational cost. We demonstrate the effectiveness and efficiency of the proposed algorithm by visualizing light-emitting fluids such as fire in various styles.
Pp. 850-859
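The core of a Bresenham-style photon collector is integer traversal of the voxels along a ray through the grid. A sketch of the 3D driving-axis variant (how the paper accumulates photon power per voxel is not specified, so only the traversal is shown):

```python
def bresenham_3d(p0, p1):
    """Integer 3D Bresenham: list the grid voxels visited on the
    segment p0 -> p1, stepping along the dominant (driving) axis."""
    p = list(p0)
    d = [abs(b - a) for a, b in zip(p0, p1)]
    s = [1 if b >= a else -1 for a, b in zip(p0, p1)]
    axis = d.index(max(d))                        # driving axis
    err = [2 * d[i] - d[axis] for i in range(3)]  # per-axis error terms
    voxels = [tuple(p)]
    for _ in range(d[axis]):
        p[axis] += s[axis]
        for i in range(3):
            if i != axis and err[i] > 0:          # side-axis step due
                p[i] += s[i]
                err[i] -= 2 * d[axis]
        for i in range(3):
            if i != axis:
                err[i] += 2 * d[i]
        voxels.append(tuple(p))
    return voxels
```

Collecting photons along a viewing ray then reduces to summing the photons stored in each visited voxel, which avoids the nearest-neighbor searches of a k-d tree.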
doi: 10.1007/11919476_86
Face Recognition Using 2D and 3D Multimodal Local Features
Ajmal Mian; Mohammed Bennamoun; Robyn Owens
Machine recognition of faces is very challenging because it is an interclass recognition problem and the variation in faces is very low compared to other biometrics. Global features have been extensively used for face recognition; however, they are sensitive to variations caused by expression, illumination, pose, occlusion, and makeup. We present a novel 3D local feature for automatic face recognition which is robust to these variations. The 3D features are extracted by uniformly sampling local regions of the face in locally defined coordinate bases, which makes them invariant to pose. The high descriptiveness of this feature makes it ideal for the challenging task of interclass recognition. In the 2D domain, we use the SIFT descriptor and fuse the results with the 3D approach at the score level. Experiments were performed using the FRGC v2.0 data, and the achieved verification rates at 0.001 FAR were 98.5% and 86.0% for faces with neutral and non-neutral expressions respectively.
Pp. 860-870
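Score-level fusion of the kind mentioned above typically normalizes each modality's match scores to a common range and combines them with a weighted sum. A sketch under those assumptions (the min-max normalization and the weight `w` are illustrative choices, not confirmed details of the paper):

```python
import numpy as np

def fuse_scores(scores_3d, scores_2d, w=0.5):
    """Score-level fusion: min-max normalize each modality's match
    scores, then combine them with a weighted sum."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span else np.zeros_like(s)
    return w * norm(scores_3d) + (1 - w) * norm(scores_2d)
```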
doi: 10.1007/11919476_87
Adaptive Background Generation for Video Object Segmentation
Taekyung Kim; Joonki Paik
In this paper, we present a novel method for generating a background image that applies frame differencing and a median filter to areas sensitive to illumination changes. Background generation is widely used as a preprocessing step for video-based tracking, surveillance, and object detection. The proposed method utilizes differences and motion changes between two consecutive frames to cope with illumination changes in an image sequence, and applies a median filter to adaptively generate a robust background. It enables more efficient background reconstruction with fewer frames than existing methods use.
Pp. 871-880
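A minimal version of the frame-difference-plus-temporal-median idea can be sketched as follows; the motion threshold and NaN-masking of moving pixels are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def update_background(history, prev, curr, motion_thresh=10):
    """Mask pixels that moved between consecutive frames, append the
    rest to a history, and return the temporal median as background."""
    static = np.abs(curr.astype(int) - prev.astype(int)) < motion_thresh
    history.append(np.where(static, curr.astype(float), np.nan))
    return np.nanmedian(np.stack(history), axis=0)  # ignores masked pixels
```

Because moving pixels are excluded before the median, a usable background emerges from far fewer frames than a plain per-pixel median over the whole sequence.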
doi: 10.1007/11919476_88
Omnidirectional Camera Calibration and 3D Reconstruction by Contour Matching
Yongho Hwang; Jaeman Lee; Hyunki Hong
This paper presents a novel approach to omnidirectional camera calibration and 3D reconstruction of the surrounding scene by contour matching in architectural scenes. By using a quantitative measure that considers the inlier distribution, we can estimate more precise camera model parameters and structure from motion. Since most line segments of man-made objects project to contours in omnidirectional images, contour matching is important in the camera recovery process. We propose a novel 3D reconstruction method based on contour matching across three omnidirectional views. First, two points on a contour and their viewing vectors determine an interpretation plane equation, and we obtain the contour where this plane intersects the estimated patch of the camera model. A 3D line segment is then computed from two such planes, projected to the contour in the third view, and the matching results are used to refine the camera recovery.
Pp. 881-890
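The interpretation-plane step has a compact geometric form: the plane passes through the camera center and is spanned by the two viewing vectors, so its normal is their cross product. A sketch (assuming the viewing vectors are already expressed in a common camera frame):

```python
import numpy as np

def interpretation_plane_normal(v1, v2):
    """Unit normal of the plane through the camera center spanned by
    the viewing vectors of two contour points."""
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)
```

Intersecting two such planes from two views yields the direction of the 3D line segment that is then reprojected into the third view.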
doi: 10.1007/11919476_89
Real-Time GPU-Based Simulation of Dynamic Terrain
Anthony S. Aquilio; Jeremy C. Brooks; Ying Zhu; G. Scott Owen
Although a great deal of research effort has been devoted to the realistic and efficient rendering of terrain data, the existing research mainly focuses on displaying static terrain. The realistic simulation of dynamic terrain, including vehicle-terrain interactions, is absent in most 3D graphical applications. In this paper, we present a new GPU-based algorithm for dynamic terrain simulation. A key component of this algorithm is a Dynamically-Displaced Height Map (DDHM) which is generated and manipulated on the GPU. In addition, our method achieves real-time performance by using new features of the latest graphics hardware and shader technology. We demonstrate our algorithm by simulating a ground vehicle traveling on soft terrain. Our algorithm is particularly useful for ground-based visual simulation applications as well as 3D games.
Pp. 891-900
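The essence of a displaced height map is easy to show on the CPU: lower the terrain height under the vehicle footprint each frame. The paper performs this on the GPU in a shader; the sketch below is only a CPU analogue with an assumed boolean footprint mask and fixed sink depth:

```python
import numpy as np

def displace_heightmap(height, footprint_mask, depth):
    """CPU analogue of a dynamically-displaced height map: sink the
    terrain by `depth` wherever the vehicle footprint touches it."""
    out = height.copy()
    out[footprint_mask] -= depth  # permanent deformation (e.g. ruts)
    return out
```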
doi: 10.1007/11919476_90
High-Resolution Video from Series of Still Photographs
Ge Jin; James K. Hahn
In this paper, we explore the problem of creating high-resolution video from a series of still photographs. Instead of enhancing the resolution of a video stream, we treat the generation of high-resolution video as an image synthesis problem. Using the continuous shooting mode of a digital camera, we can capture a series of still photographs at 2 to 3 frames per second. The main challenge in our approach is to synthesize the in-between frames from two consecutive still images. The image synthesis approach varies with the scene motion and image characteristics. We apply optical flow, image segmentation, image filtering, and skeleton-based image warping techniques to generate high-resolution video.
Pp. 901-910
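As a baseline for the in-between synthesis problem, the simplest approach is a linear cross-dissolve between two consecutive stills; the paper goes well beyond this with optical flow and warping, so the sketch below only illustrates the interpolation framing of the problem:

```python
import numpy as np

def inbetween_frames(frame_a, frame_b, n):
    """Synthesize n in-between frames by linear cross-dissolve between
    two consecutive stills (a baseline, not flow-based warping)."""
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]  # interior time samples
    return [(1 - t) * frame_a + t * frame_b for t in ts]
```

For a 2.5 fps burst played back at 30 fps, each pair of stills needs roughly 11 synthesized in-betweens, which is where motion-aware warping replaces this naive blend.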