Publications catalog - books
Image Analysis: 14th Scandinavian Conference, SCIA 2005, Joensuu, Finland, June 19-22, 2005, Proceedings
Heikki Kälviäinen; Jussi Parkkinen; Arto Kaarna (eds.)
In conference: 14th Scandinavian Conference on Image Analysis (SCIA), Joensuu, Finland, June 19-22, 2005
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Image Processing and Computer Vision; Pattern Recognition; Computer Graphics
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2005 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-26320-3
Electronic ISBN
978-3-540-31566-7
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2005
Publication rights information
© Springer-Verlag Berlin Heidelberg 2005
Subject coverage
Table of contents
doi: 10.1007/11499145_51
Non-rigid Registration Using Morphons
Andreas Wrangsjö; Johanna Pettersson; Hans Knutsson
The Morphon, a non-rigid registration method, is presented and applied to a number of registration applications. The algorithm takes a prototype image (or volume) and morphs it into a target image using an iterative, multi-resolution technique. The deformation process is carried out in three steps. The framework can be described in very general terms, but in this paper we focus on a specific implementation of the Morphon framework. The method can be employed in a wide range of registration tasks, which is shown in four very different registration examples: 2D photographs of hands and faces, 3D CT data of the hip region, and 3D MR brain images.
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 501-510
doi: 10.1007/11499145_52
Hybridization of the Ant Colony Optimization with the K-Means Algorithm for Clustering
Sara Saatchi; Chih Cheng Hung
In this paper the concept of ant colony optimization (ACO) and its learning mechanism is integrated with the K-means algorithm to solve image clustering problems. The learning mechanism of the proposed algorithm is driven by a parameter called pheromone, by which undesired solutions of the K-means algorithm are discarded. The proposed method improves the K-means algorithm by making it less dependent on initial parameters, such as randomly chosen initial cluster centers, and hence more stable.
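As a rough illustration of the pheromone idea only (not the authors' published algorithm, whose update rules differ), a K-means variant could bias each point's cluster assignment by an accumulated pheromone level, with evaporation gradually discarding poor assignments; the function name and parameters below are illustrative assumptions:

```python
import numpy as np

def pheromone_kmeans(X, k, iters=50, rho=0.1, seed=0):
    """Toy K-means with an ACO-style pheromone bias (illustrative sketch).

    pher[i, j] accumulates evidence that point i belongs to cluster j;
    rho is the evaporation rate. Assignments combine pheromone with
    inverse distance, so repeatedly reinforced assignments persist.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    pher = np.ones((len(X), k))
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        desir = pher / (d + 1e-9)          # pheromone-weighted desirability
        labels = desir.argmax(axis=1)
        pher *= (1.0 - rho)                # evaporation
        pher[np.arange(len(X)), labels] += 1.0   # deposit on chosen labels
        for j in range(k):                 # usual K-means center update
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Because the pheromone term accumulates over iterations, a single unlucky initial center placement has less influence than in plain K-means, which is the stability effect the abstract describes.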
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 511-520
doi: 10.1007/11499145_53
Incremental Locally Linear Embedding Algorithm
Olga Kouropteva; Oleg Okun; Matti Pietikäinen
A number of manifold learning algorithms have recently been proposed, including locally linear embedding (LLE). These algorithms do not merely reduce data dimensionality; they also attempt to discover the true low-dimensional structure of the data. A common feature of most of these algorithms is that they operate in a batch or offline mode. Hence, when new data arrive, the algorithms must be rerun with the old data augmented by the new. A solution to this problem is to make the algorithm online, or incremental, so that sequentially arriving data do not trigger time-consuming recalculations. In this paper, we propose an incremental version of LLE and experimentally demonstrate its advantages in terms of topology preservation. In addition, compared to the original (batch) LLE, the incremental LLE needs to solve a much smaller optimization problem.
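A common formulation of the incremental step (the authors' exact version may differ) embeds a new sample by computing its locally linear reconstruction weights over its nearest neighbors in the input space, then applying those same weights to the neighbors' existing low-dimensional coordinates; all names below are illustrative:

```python
import numpy as np

def embed_new_point(x_new, X, Y, k=5, reg=1e-3):
    """Map a new sample into an existing LLE embedding (illustrative sketch).

    X : (n, D) training data in the original space
    Y : (n, d) their precomputed low-dimensional LLE coordinates
    The new point is reconstructed from its k nearest neighbors in X;
    the same weights are then applied to the neighbors' rows of Y,
    so no batch re-embedding of the old data is needed.
    """
    dists = np.linalg.norm(X - x_new, axis=1)
    idx = np.argsort(dists)[:k]            # k nearest neighbors
    Z = X[idx] - x_new                     # neighbors centered on x_new
    G = Z @ Z.T                            # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)     # regularize for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                           # weights sum to one
    return w @ Y[idx]                      # same weights in embedding space
```

This is why the incremental variant solves a much smaller problem than batch LLE: each new point requires only a k×k linear solve instead of a full eigendecomposition over all samples.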
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 521-530
doi: 10.1007/11499145_54
On Aligning Sets of Points Reconstructed from Uncalibrated Affine Cameras
A. Bartoli; H. Martinsson; F. Gaspard; J.-M. Lavest
The reconstruction of rigid scenes from multiple images is a central topic in computer vision. Approaches merging partial 3D models in a hierarchical manner have proven the most effective to deal with large image sequences. One of the key building blocks of these hierarchical approaches is the alignment of two partial 3D models by computing a 3D transformation. This problem has been well-studied for the cases of 3D models obtained with calibrated or uncalibrated pinhole cameras.
We tackle the problem of aligning 3D models – sets of 3D points – obtained using uncalibrated affine cameras, which requires estimating 3D affine transformations between the models. We propose a factorization-based algorithm that simultaneously estimates the aligning transformations and corrected points, exactly matching the estimated transformations, such that the reprojection error over all cameras is minimized.
We experimentally compare our algorithm to other methods using simulated and real data.
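Setting aside the joint reprojection-error refinement, the core alignment step (estimating a 3D affine transformation between two corresponded point sets) reduces to a linear least-squares problem; a minimal sketch of that step only, not the authors' factorization algorithm:

```python
import numpy as np

def fit_affine_3d(P, Q):
    """Least-squares 3D affine transform mapping point set P onto Q.

    P, Q : (n, 3) corresponded point sets. Returns (A, t) such that
    Q ~ P @ A.T + t. This is the basic alignment building block only,
    not the paper's joint estimation of transformations and points.
    """
    n = len(P)
    # Homogeneous design matrix: [P | 1] @ [A.T; t] ~ Q
    H = np.hstack([P, np.ones((n, 1))])
    X, *_ = np.linalg.lstsq(H, Q, rcond=None)
    A = X[:3].T
    t = X[3]
    return A, t
```

Each of the 12 affine parameters is linear in the point coordinates, so (with at least four non-coplanar correspondences) a single `lstsq` call recovers the transformation.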
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 531-540
doi: 10.1007/11499145_55
A New Class of Learnable Detectors for Categorisation
Jiri Matas; Karel Zimmermann
A new class of image-level detectors that can be adapted by machine learning techniques to detect parts of objects from a given category is proposed. A classifier (e.g. a neural network or an AdaBoost-trained classifier) within the detector selects a relevant subset of extremal regions, i.e. regions that are connected components of a thresholded image. The properties of extremal regions render the detector very robust to illumination change. Robustness to viewpoint change is achieved by using invariant descriptors and/or by modeling shape variations with the classifier.
The approach is brought to bear on three problems: text detection, face segmentation and leopard skin detection. High detection rates were obtained for unconstrained (i.e. brightness, affine and font invariant) text detection (92%) with a reasonable false positive rate.
The time complexity of detection is approximately linear in the number of pixels, and a non-optimized implementation runs at about 1 frame per second for a 640×480 image on a high-end PC.
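The raw building block — extremal regions as connected components of a thresholded image — can be sketched as follows (the learned classifier that selects category-relevant regions, and the efficient multi-threshold enumeration, are omitted; this is an illustrative single-threshold version):

```python
import numpy as np
from collections import deque

def extremal_regions(img, threshold):
    """Connected components (4-connectivity) of pixels <= threshold.

    Returns a list of boolean masks, one per component. These are the
    raw extremal regions from which a trained classifier would pick
    category-specific ones; sketch only, single threshold.
    """
    binary = np.asarray(img) <= threshold
    visited = np.zeros_like(binary, dtype=bool)
    regions = []
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                mask = np.zeros_like(binary)    # new component
                queue = deque([(sy, sx)])
                visited[sy, sx] = True
                while queue:                    # BFS flood fill
                    y, x = queue.popleft()
                    mask[y, x] = True
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(mask)
    return regions
```

Each pixel is enqueued at most once, which is consistent with the roughly linear time complexity the abstract reports for detection.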
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 541-550
doi: 10.1007/11499145_56
Overlapping Constraint for Variational Surface Reconstruction
Henrik Aanæs; Jan Erik Solem
In this paper a counterexample is presented, illustrating a shortcoming in most variational formulations for 3D surface estimation: the lack of an overlapping constraint. A remedy is presented in the form of a penalty function, together with an analysis of the effects of this function on surface motion. For practical purposes it has only minor influence on current methods; however, the insight provided by the analysis is likely to influence future developments in the field of variational surface reconstruction.
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 551-556
doi: 10.1007/11499145_57
Integration Methods of Model-Free Features for 3D Tracking
Ville Kyrki; Kerstin Schmock
A number of approaches for 3D pose tracking have been recently introduced, most of them utilizing an edge (wireframe) model of the target. However, the use of an edge model has significant problems in complex scenes due to background, occlusions, and multiple responses. Integration of model-free information has been recently proposed to decrease these problems.
In this paper, we propose two integration methods for model-free point features to enhance the robustness and to increase the performance of real-time model-based tracking. The relative pose change between frames is estimated using an optimization approach. This allows the pose change to be integrated very efficiently in a Kalman filter. Our first approach estimates the pose change in a least squares sense while the second one uses M-estimators to decrease the effect of outliers. Experiments are presented which demonstrate that the approaches are superior in performance to earlier approaches.
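The contrast between the two proposed estimators can be illustrated with a generic robust linear fit: an M-estimator solved by iteratively reweighted least squares (IRLS) down-weights outlier residuals that a plain least-squares solution would absorb. This is a generic sketch of the mechanism, not the paper's pose parametrization; the Huber threshold `delta` is a conventional default:

```python
import numpy as np

def huber_irls(A, b, delta=1.345, iters=20):
    """Robust linear estimation with a Huber M-estimator via IRLS.

    Solves A x ~ b while down-weighting outlier residuals: residuals
    within delta robust standard deviations keep full weight, larger
    ones are weighted by delta / |standardized residual|.
    """
    x, *_ = np.linalg.lstsq(A, b, rcond=None)          # LS initialization
    for _ in range(iters):
        r = A @ x - b
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / scale
        w = np.where(u <= delta, 1.0, delta / u)       # Huber weights
        W = np.sqrt(w)[:, None]
        x, *_ = np.linalg.lstsq(A * W, b * W.ravel(), rcond=None)
    return x
```

In a tracking setting the rows of `A` would come from feature correspondences, so gross mismatches receive small weights and barely perturb the estimated pose change.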
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 557-566
doi: 10.1007/11499145_58
Probabilistic Model-Based Background Subtraction
Volker Krüger; Jakob Anderson; Thomas Prehn
Background subtraction is usually approached as a pixel-based process whose output is a (possibly thresholded) image in which each pixel reflects, independently of its neighbors, the likelihood of belonging to a foreground object. What this neglects is the correlation between pixels. In this paper we introduce a model-based background subtraction approach that exploits prior knowledge of pixel correlations for clearer and better results. Model knowledge is learned from suitable training video data and stored for fast access in a hierarchical manner. Bayesian propagation over time is used for proper model selection and tracking during model-based background subtraction; it is attractive in our application because it allows uncertainties to be handled during tracking. We have tested our approach on suitable outdoor video data.
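The conventional pixel-wise baseline the paper improves upon — each pixel modeled independently, here by a running Gaussian — can be sketched as below; class and parameter names are illustrative, and note that no information flows between neighboring pixels, which is exactly the limitation the abstract points out:

```python
import numpy as np

class PixelBackgroundModel:
    """Per-pixel running-Gaussian background subtraction (baseline sketch).

    Every pixel keeps its own mean and variance; a pixel is foreground
    when its squared deviation exceeds k^2 times its variance.
    """
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full_like(self.mean, 25.0)  # initial variance guess
        self.alpha = alpha                        # learning rate
        self.k = k                                # threshold in std devs

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.k ** 2) * self.var
        bg = ~foreground                          # update background pixels only
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return foreground
```

Each pixel's decision depends only on its own history, so correlated structures (shadows, waving vegetation, whole objects) fragment easily — the gap a model-based approach with learned pixel correlations aims to close.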
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 567-576
doi: 10.1007/11499145_59
A Bayesian Approach for Affine Auto-calibration
S. S. Brandt; K. Palander
In this paper, we propose a Bayesian approach to affine auto-calibration, by which a posterior distribution for the affine camera parameters can be constructed and prior knowledge taken into account. Moreover, due to the linearity of the affine camera model, the structure and translations can be analytically marginalised out of the posterior distribution if certain prior distributions are assumed. The marginalisation substantially reduces the dimensionality of the problem, which makes MCMC methods better suited for exploring the posterior of the intrinsic camera parameters. The experiments verify that the proposed approach is a versatile, statistically sound alternative to existing affine auto-calibration methods.
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 577-587
doi: 10.1007/11499145_60
Shape-Based Co-occurrence Matrices for Defect Classification
Rami Rautkorpi; Jukka Iivarinen
This paper discusses two statistical shape descriptors, the Edge Co-occurrence Matrix (ECM) and the Contour Co-occurrence Matrix (CCM), and their use in surface defect classification. Experiments are run on two image databases, one containing metal surface defects and the other paper surface defects. The extraction of Haralick features from the matrices is considered. The descriptors are compared to other shape descriptors, e.g. from the MPEG-7 standard. The results show that the ECM and the CCM give superior classification accuracies.
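The underlying machinery — a co-occurrence matrix over a quantized label map, from which Haralick features are computed — can be sketched generically; the paper's ECM and CCM apply this construction to quantized edge and contour directions rather than the gray levels shown here, and the function names are illustrative:

```python
import numpy as np

def cooccurrence_matrix(labels, offset=(0, 1), n_levels=8):
    """Co-occurrence matrix of a quantized 2D label map (sketch).

    C[a, b] counts pixel pairs (p, p + offset) taking values (a, b).
    offset = (dy, dx) selects the spatial relation, e.g. (0, 1) for
    horizontally adjacent pixels.
    """
    dy, dx = offset
    h, w = labels.shape
    a = labels[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = labels[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    C = np.zeros((n_levels, n_levels))
    np.add.at(C, (a.ravel(), b.ravel()), 1)   # accumulate pair counts
    return C

def haralick_contrast(C):
    """Haralick contrast feature of a co-occurrence matrix."""
    P = C / C.sum()                            # normalize to probabilities
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()
```

Other Haralick features (energy, entropy, homogeneity) are computed from the same normalized matrix, which is what makes the co-occurrence representation a compact basis for defect classification.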
- Poster Presentations 1: Image Analysis, Computer Vision, Machine Vision, and Applications | Pp. 588-597