Publications catalog - books
Advances in Visual Computing: 3rd International Symposium, ISVC 2007, Lake Tahoe, NV, USA, November 26-28, 2007, Proceedings, Part I
George Bebis; Richard Boyle; Bahram Parvin; Darko Koracin; Nikos Paragios; Syeda-Mahmood Tanveer; Tao Ju; Zicheng Liu; Sabine Coquillart; Carolina Cruz-Neira; Torsten Müller; Tom Malzbender (eds.)
Conference: 3rd International Symposium on Visual Computing (ISVC). Lake Tahoe, NV, USA. November 26, 2007 - November 28, 2007
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Software Engineering/Programming and Operating Systems; Pattern Recognition; Image Processing and Computer Vision; Biometrics; Artificial Intelligence (incl. Robotics); Computer Graphics
Availability
Detected institution | Year of publication | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-76857-9
Electronic ISBN
978-3-540-76858-6
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
A Quantitative Object-Level Metric for Segmentation Performance and Its Application to Cell Nuclei
Laura E. Boucheron; Neal R. Harvey; B. S. Manjunath
We present an object-level metric for segmentation performance, developed to quantify both over- and under-segmentation errors and to penalize segmentations with larger deviations in object shape. The metric is applied to the segmentation of cell nuclei in routinely stained H&E histopathology imagery. We show the correspondence between the metric terms and qualitative observations of segmentation quality, particularly the presence of over- and under-segmentation. Computing the metric requires no point-to-point or region-to-region correspondences, only simple computations on the object masks from both the segmentation and the ground truth.
- ST5: Medical Data Analysis | Pp. 208-219
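The metric definition itself is in the paper; as a rough illustration of the kind of mask-only computation the abstract refers to, the following Python sketch counts over- and under-segmented objects from two label images. The overlap threshold and pairing rule here are assumptions for this sketch, not the authors' formulation.

```python
import numpy as np

def over_under_counts(seg, gt, frac=0.2):
    """Rough over-/under-segmentation counts from two label images.

    seg, gt : 2D integer label arrays, 0 = background.
    frac    : minimum fraction of an object's area an overlap must cover
              to be counted (an assumption for this sketch).
    Returns (n_over, n_under): ground-truth objects split across several
    segments, and segments spanning several ground-truth objects.
    """
    def significant_partners(labels_a, labels_b):
        # For each nonzero label in labels_a, collect the nonzero labels in
        # labels_b overlapping at least `frac` of its area.
        partners = {}
        for a in np.unique(labels_a):
            if a == 0:
                continue
            mask = labels_a == a
            hits, counts = np.unique(labels_b[mask], return_counts=True)
            partners[a] = {b for b, c in zip(hits, counts)
                           if b != 0 and c >= frac * mask.sum()}
        return partners

    n_over = sum(len(p) > 1 for p in significant_partners(gt, seg).values())
    n_under = sum(len(p) > 1 for p in significant_partners(seg, gt).values())
    return n_over, n_under
```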
A Convex Semi-definite Positive Framework for DTI Estimation and Regularization
Radhouène Neji; Noura Azzabou; Nikos Paragios; Gilles Fleury
In this paper we introduce a novel variational method for the joint estimation and regularization of diffusion tensor fields from noisy raw data. To this end, we use the classic quadratic data fidelity term derived from the Stejskal-Tanner equation together with a new smoothness term, leading to a convex objective function. The regularization term is based on the assumption that the signal can be reconstructed as a weighted average of observations in a local neighborhood. The weights measure the similarity between tensors and are computed directly from the diffusion images. We preserve the positive semi-definiteness constraint using projected gradient descent. Experimental validation and comparison with a similar method on synthetic data with a known noise model, as well as classification of tensors toward understanding the anatomy of human skeletal muscle, demonstrate the potential of our method.
- ST5: Medical Data Analysis | Pp. 220-229
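The positive semi-definiteness constraint mentioned above is commonly enforced by the Euclidean projection onto the PSD cone, i.e. clamping negative eigenvalues to zero. The sketch below shows that projection inside a generic projected gradient step; the objective gradient and step size are placeholders, not the paper's formulation.

```python
import numpy as np

def project_to_psd(D):
    """Project a symmetric tensor onto the positive semi-definite cone
    by clamping its negative eigenvalues to zero."""
    D = 0.5 * (D + D.T)                     # enforce symmetry first
    w, V = np.linalg.eigh(D)
    return (V * np.clip(w, 0.0, None)) @ V.T

def projected_gradient_step(D, grad, step=1e-2):
    """One projected gradient descent step: descend along the (placeholder)
    objective gradient, then project back onto the PSD cone."""
    return project_to_psd(D - step * grad)

# Example: a noisy diffusion tensor with a negative eigenvalue gets repaired.
D_noisy = np.array([[1.0, 0.9, 0.0],
                    [0.9, 0.8, 0.0],
                    [0.0, 0.0, -0.05]])
print(project_to_psd(D_noisy))
```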
Robust Self-calibration from Single Image Using RANSAC
Qi Wu; Te-Chin Shao; Tsuhan Chen
In this paper, a novel approach to the self-calibration of a single image is proposed. Unlike most existing methods, we obtain the intrinsic and extrinsic parameters from a restricted set of image points in a single image. First, we show how the vanishing point, vanishing line, and foot-to-head plane homology can be used to obtain the calibration parameters, and then we show how our approach efficiently adopts RANSAC to estimate them. In addition, a noise reduction step is proposed to handle the measurement uncertainties of the input points. Results on synthetic and real scenes are presented to evaluate the performance of the proposed method.
- Calibration/Reconstruction | Pp. 230-237
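The abstract does not spell out the estimator details, but the RANSAC loop it builds on has a standard shape. The sketch below is that generic loop with caller-supplied fit and error functions as placeholders; it is not the paper's specific calibration model.

```python
import numpy as np

def ransac(data, fit_model, point_error, sample_size,
           n_iters=1000, inlier_thresh=1.0, seed=None):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample and
    keep the model with the largest consensus (inlier) set.

    fit_model(points) -> model and point_error(model, point) -> float are
    supplied by the caller and stand in for whatever is being estimated.
    """
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        idx = rng.choice(len(data), size=sample_size, replace=False)
        model = fit_model([data[i] for i in idx])
        if model is None:
            continue                        # degenerate sample
        inliers = [p for p in data if point_error(model, p) < inlier_thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    if best_inliers:                        # optional refit on all inliers
        best_model = fit_model(best_inliers)
    return best_model, best_inliers
```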
Contour Matching in Omnidirectional Images
Yongho Hwang; Jaeman Lee; Hyunki Hong
This paper presents a novel method for contour matching in architectural scenes captured by an omnidirectional camera. Since most line segments of man-made objects project to lines and contours, the contour matching problem is important for 3D analysis of omnidirectional indoor scenes. First, we compute an initial estimate of the camera parameters from corner points and correlation-based matching. Then, the edges obtained by the Canny detector are linked and divided into separate 3D line segments. Using the minimum angular error between the endpoints of each contour, we establish corresponding contours, and the initial parameters are refined iteratively from the correspondences. Simulation results demonstrate that the algorithm precisely estimates the extrinsic camera parameters through contour matching.
- Calibration/Reconstruction | Pp. 238-247
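For intuition only: in an omnidirectional image each contour endpoint corresponds to a viewing ray, and candidate contours can be compared by the angle between corresponding endpoint rays. The greedy pairing below is a simplified stand-in for the paper's matching procedure, not the authors' algorithm.

```python
import numpy as np

def angular_error(u, v):
    """Angle (radians) between two unit viewing rays."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def match_contours(contours_a, contours_b):
    """Greedily match contours by minimum summed endpoint angular error.

    Each contour is given as a pair of unit 3D rays (its two endpoints
    back-projected onto the viewing sphere).
    """
    matches, used = [], set()
    for i, (a0, a1) in enumerate(contours_a):
        best_j, best_err = None, np.inf
        for j, (b0, b1) in enumerate(contours_b):
            if j in used:
                continue
            # Endpoints may come in either order; take the cheaper pairing.
            err = min(angular_error(a0, b0) + angular_error(a1, b1),
                      angular_error(a0, b1) + angular_error(a1, b0))
            if err < best_err:
                best_j, best_err = j, err
        if best_j is not None:
            matches.append((i, best_j, best_err))
            used.add(best_j)
    return matches
```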
A Progressive Edge-Based Stereo Correspondence Method
Xiaoyuan Su; Taghi M. Khoshgoftaar
Local stereo correspondence is usually not satisfactory because neither big-window nor small-window based methods can accurately match densely textured and textureless regions at the same time. In this paper, we present a progressive edge-based stereo matching algorithm in which big-window and small-window based matches are progressively integrated, guided by the edges of the disparity map produced by big-window matching. In addition, arbitrarily shaped window matching is used for regions where neither big nor small windows can find matches, and a novel optimization method, the progressive outlier remover, is used to effectively remove outliers and noise. Empirical results show that our method is comparable to state-of-the-art stereo correspondence algorithms.
- Calibration/Reconstruction | Pp. 248-257
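The window-size trade-off described above is easy to reproduce with a plain fixed-window matcher. The sketch below is basic sum-of-absolute-differences block matching with a configurable window; it is the kind of baseline the progressive scheme improves upon, not the authors' algorithm.

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, half_win=3):
    """Naive fixed-window stereo matching on rectified grayscale images.

    For each pixel the disparity minimizing the sum of absolute differences
    over a (2*half_win+1)^2 window is chosen.  A large window smooths
    textureless regions but blurs fine detail; a small one keeps detail but
    is noisy -- the trade-off the progressive method targets.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            ref = left[y - half_win:y + half_win + 1,
                       x - half_win:x + half_win + 1].astype(np.float32)
            costs = [np.abs(ref - right[y - half_win:y + half_win + 1,
                                        x - d - half_win:x - d + half_win + 1]
                            .astype(np.float32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```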
Creating Stereoscopic (3D) Video from a 2D Monocular Video Stream
Xiaokun Li; Roger Xu; Jin Zhou; Baoxin Li
Generating stereoscopic (3D) video from a single moving camera under widely varying conditions is a challenge. We propose an efficient approach to create true stereoscopic video from a monocular video stream captured under various moving conditions. The approach consists of three major steps. First, we apply Harris' corner detector to detect distinctive feature points in a pair of image frames selected from the incoming video captured by a moving camera. Second, exploiting the frame-to-frame continuity of the video, a local-window search algorithm is developed for fast and accurate feature correspondence between the two frames. Third, a hierarchical image rectification technique is designed to guarantee success in creating a true and visually comfortable stereo image for each incoming frame. In addition, a software-based video stabilization algorithm is developed to improve stereo video generation. Extensive tests on real video collected under various conditions were performed to evaluate the performance of the proposed approach.
- Calibration/Reconstruction | Pp. 258-267
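The first step, Harris corner detection, reduces to thresholding a response computed from the image's structure tensor. The sketch below uses common default values for the sensitivity constant and window size, not the settings used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, win=5):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor averaged over a win x win window."""
    img = img.astype(np.float32)
    Iy, Ix = np.gradient(img)               # image gradients
    Sxx = uniform_filter(Ix * Ix, win)      # windowed structure tensor terms
    Syy = uniform_filter(Iy * Iy, win)
    Sxy = uniform_filter(Ix * Iy, win)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

def corner_points(img, rel_thresh=0.01):
    """Pixels whose response exceeds a fraction of the maximum response
    (a simple threshold; non-maximum suppression is omitted here)."""
    R = harris_response(img)
    ys, xs = np.nonzero(R > rel_thresh * R.max())
    return list(zip(xs, ys))
```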
3D Shape Recovery by the Use of Single Image Plus Simple Pattern Illumination
Zhan Song; Ronald Chung
This paper presents a method for recovering surface orientation, and in turn shape, from a single image captured under the projection of a simple checker-board pattern. The key features of the method are that only one image is required, that accurate correspondence between the image and the projected pattern need not be established, and that the 3D determination is much less sensitive to imaging noise and illumination conditions than intensity-based methods such as shape from shading. The method relies on the fact that surface orientations at the grid points are determined solely by the image tangents in the image data. Experiments on planar, spherical, and ribbon-like surfaces show that, with accurate calibration of the projector-and-camera system through a mechanism we proposed earlier, 3D shape can be recovered with much greater ease and precision than before.
- Calibration/Reconstruction | Pp. 268-277
Reliable Depth Map Regeneration Via a Novel Omnidirectional Stereo Sensor
Lei He; Chuanjiang Luo; Yanfeng Geng; Feng Zhu; Yingming Hao
We present a method to obtain dense 3D maps for a mobile robot equipped with a novel omnidirectional stereo vision sensor. The vision sensor is composed of a perspective camera and two hyperbolic mirrors. Once the system has been calibrated and two image points, projected by the upper and lower mirrors respectively, are matched, the 3D coordinates of the space point can be acquired by triangulation. To satisfy the reliability requirements of mobile robot navigation, we use a high-quality stereo matching algorithm, the graph cut method. An initial depth map is calculated using an efficient dynamic programming technique. With a relatively good initial map, the graph cut process converges quickly. We also describe the modifications necessary to handle panoramic images, including a deformed matching template and an adaptable template scale. Experiments show that the proposed vision system is feasible as a practical stereo sensor for accurate 3D map generation.
- Calibration/Reconstruction | Pp. 278-287
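The triangulation step can be illustrated with the standard midpoint method applied to two back-projected rays; the mirror geometry that actually produces those rays is specific to the sensor and is not modeled here.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D rays.

    o1, o2 : ray origins (e.g. the two effective viewpoints of the mirrors)
    d1, d2 : ray directions.  Returns the estimated 3D point, or None for
    near-parallel rays.
    """
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    r = o2 - o1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None                          # rays are (nearly) parallel
    t1 = (c * d - b * e) / denom             # closest point parameter, ray 1
    t2 = (b * d - a * e) / denom             # closest point parameter, ray 2
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```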
An Anti-aliasing Technique for Voxel-Based Massive Model Visualization Strategies
Gustavo N. Wagner; Alberto Raposo; Marcelo Gattass
CAD models of industrial installations usually have hundreds of millions of triangles, and for this reason they cannot be rendered interactively on the current generation of computer hardware. There are many different approaches to this problem, including the Far Voxels algorithm, which uses a hierarchical level-of-detail structure in which voxels provide a coarse representation of the model when required. This strategy yields interactive rates for large data sets because it deals well with levels of detail, culling, occlusion, and out-of-core model storage. The Far Voxels algorithm, however, has a severe aliasing problem when it is used to represent small or thin objects, which is especially visible during transitions between different levels of detail. This paper presents a new version of the Far Voxels algorithm that improves visual quality during model navigation.
- Visualization I | Pp. 288-297
Photo-Realistic Depth-of-Field Effects Synthesis Based on Real Camera Parameters
Huei-Yung Lin; Kai-Da Gu
Depth-of-field (DOF) is an important visual cue used in computer graphics and photography to convey the focus of attention. In this work, we present a method for photo-realistic DOF simulation based on the characteristics of a real camera system. Both the depth-blur relation for different camera focus settings and the nonlinear intensity response of the image sensor are modeled. The camera parameters are calibrated and used for defocus blur synthesis. For a well-focused real scene image, the DOF effect is generated by spatial convolution with a distance-dependent circular Gaussian mask. Experiments show that images synthesized using the proposed method are almost indistinguishable from real images captured by the camera.
- Visualization I | Pp. 298-309
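As a rough illustration of the synthesis step, the sketch below blurs each pixel with a Gaussian whose width grows with its distance from the focal depth. The linear sigma model is an assumption made here for brevity; the paper instead calibrates the depth-blur relation from a real camera and also models the sensor's intensity response.

```python
import numpy as np

def synthesize_dof(img, depth, focus_depth, blur_gain=2.0, max_radius=7):
    """Naive depth-of-field synthesis on a grayscale float image.

    Each pixel is blurred with a circular Gaussian whose sigma grows
    linearly with |depth - focus_depth| (an assumed relation).
    img, depth : (H, W) arrays; focus_depth in the same units as depth.
    """
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float32)
    pad = max_radius
    padded = np.pad(img.astype(np.float32), pad, mode='edge')
    yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    for y in range(h):
        for x in range(w):
            sigma = blur_gain * abs(depth[y, x] - focus_depth)
            if sigma < 1e-3:                 # in focus: keep the pixel as is
                out[y, x] = img[y, x]
                continue
            weights = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
            weights /= weights.sum()
            window = padded[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            out[y, x] = (weights * window).sum()
    return out
```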