Publications catalog - books



Progress in Spatial Data Handling: 12th International Symposium on Spatial Data Handling

Andreas Riedl; Wolfgang Kainz; Gregory A. Elmes (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2006
Browse at: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-35588-5

Electronic ISBN

978-3-540-35589-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Skeleton Based Contour Line Generalization

Krzysztof Matuk; Christopher Gold; Zhilin Li

Contour lines are a widely used representation of terrain models in both cartography and Geographical Information Systems (GIS). Since they are often presented at different scales, there is a need for generalization techniques. This paper presents an algorithm for the generalization of contour lines based on skeleton pruning. The algorithm relies on the boundary residual function and retraction of the skeleton of contour lines. The novelty of the method lies in pruning not only the internal skeleton branches but also those skeleton branches placed outside the closed contour polygon. In contrast to the original method, which was designed for closed shapes, this approach is also capable of handling open polygonal chains.

In the first step of the algorithm a simplified version of the skeleton is extracted; in the next, a simpler boundary is computed. As shown in this paper, the simpler boundary can be found in three different ways: detection of stable vertices, computation of an average vertex, and approximation of the boundary by Bézier splines.
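
A minimal sketch of the "average vertex" option for the simpler-boundary step, assuming a contour given as an (n, 2) vertex array; the function name and window size are illustrative choices, not the authors' code:

import numpy as np

def average_vertex_simplify(contour, window=5):
    # Replace each run of `window` consecutive vertices with their
    # centroid (an "average vertex") -- a crude stand-in for one of the
    # three simpler-boundary options named in the abstract.
    pts = np.asarray(contour, dtype=float)
    out = [pts[i:i + window].mean(axis=0)
           for i in range(0, len(pts) - window + 1, window)]
    return np.vstack(out)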

- Generalization | Pp. 643-658

Conflict Identification and Representation for Roads Based on a Skeleton

Stuart Thom

This paper presents a method to detect, represent and classify conflicts between roads. The dataset used in the study is OS Integrated Transport Network™ (ITN). A partial skeleton is created from a constrained Delaunay triangulation of the road network. The skeleton is used to interrogate the space between the roads, creating ‘conflict region’ features. Nine types of ‘conflict region’ are characterized, and examples of each, created from ITN data, are given. Possible uses for these features are discussed, showing how they help to orchestrate the removal of conflicts from a road network as part of an automatic generalization process.
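
As an illustration of using a triangulation to probe the space between roads, the sketch below uses scipy's plain Delaunay triangulation as a stand-in for the paper's constrained triangulation; the function name, the min_sep threshold and the road_ids input are assumptions:

import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

def conflict_seeds(points, road_ids, min_sep=10.0):
    # Midpoints of triangulation edges that span two different roads and
    # are shorter than min_sep: crude seeds for 'conflict region' features.
    # points: (n, 2) vertices sampled along road centrelines;
    # road_ids[i]: identifier of the road that point i belongs to.
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    seen, seeds = set(), []
    for simplex in tri.simplices:
        for i, j in combinations(simplex, 2):
            edge = (min(i, j), max(i, j))
            if edge in seen:
                continue
            seen.add(edge)
            if road_ids[i] != road_ids[j]:
                d = float(np.linalg.norm(points[i] - points[j]))
                if d < min_sep:
                    seeds.append(((points[i] + points[j]) / 2,
                                  road_ids[i], road_ids[j], d))
    return seeds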

- Generalization | Pp. 659-680

The ‘stroke’ Concept in Geographic Network Generalization and Analysis

Robert C. Thomson

Strokes are relatively simple linear elements readily perceived in a network. Apart from their role as graphical elements, strokes reflect lines of flow or movement within the network itself and so constitute natural functional units. Since the functional importance of a stroke is reflected in its perceived salience, strokes are a suitable basis for network generalization, through the preferential preservation of salient strokes during data reduction. In this paper an exploration of the dual functional-graphical nature of strokes is approached via a look at perceptual grouping in generalization. The identification and use of strokes are then described. The strengths and limitations of stroke-based generalization are discussed, and how the technique may be developed is also considered. Finally, the functional role of strokes in networks is highlighted by a look at recent developments in space syntax and related studies.
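
A hedged sketch of one common way to identify strokes (greedy pairing at each junction by smallest deflection angle); the paper does not prescribe this exact procedure, and the 45° threshold, the segment representation as endpoint tuples, and the helper names are assumptions:

from collections import defaultdict
import math

def _direction(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def _deflection(seg_a, seg_b, node):
    # Angle between entering `node` along seg_a and leaving along seg_b;
    # 0 means a perfectly straight continuation.
    a = seg_a[0] if seg_a[1] == node else seg_a[1]
    b = seg_b[0] if seg_b[1] == node else seg_b[1]
    d = abs(math.degrees(_direction(a, node) - _direction(node, b)))
    return min(d, 360.0 - d)

def build_strokes(segments, max_deflection=45.0):
    # segments: list of ((x1, y1), (x2, y2)) tuples with shared endpoints.
    # Returns groups of segment indices, each group forming one stroke.
    at_node = defaultdict(list)
    for i, (p, q) in enumerate(segments):
        at_node[p].append(i)
        at_node[q].append(i)
    parent = list(range(len(segments)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for node, incident in at_node.items():
        candidates = sorted(
            (_deflection(segments[i], segments[j], node), i, j)
            for k, i in enumerate(incident) for j in incident[k + 1:])
        used = set()
        for d, i, j in candidates:
            if d > max_deflection:
                break
            if i in used or j in used:
                continue
            used.update((i, j))
            parent[find(i)] = find(j)   # join the two segments into a stroke
    strokes = defaultdict(list)
    for i in range(len(segments)):
        strokes[find(i)].append(i)
    return list(strokes.values())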

- Generalization | Pp. 681-697

An Integrated Cloud Model for Measurement Errors and Fuzziness

Tao Cheng; Zhilin Li; Deren Li; Deyi Li

Two kinds of uncertainty, measurement errors and concept (or classification) fuzziness, can be differentiated in GIS data. There are many tools to handle them separately, but an integrated model is needed to assess their combined effect in GIS analysis (such as classification and overlay) and its plausible consequences for subsequent decision-making. The cloud model sheds light on the integrated modeling of fuzziness and randomness, but how to adapt the cloud model to GIS uncertainties needs to be investigated. This paper proposes an integrated formal model for measurement errors and fuzziness based upon the cloud model. It addresses the physical meaning of the cloud model's parameters and provides guidelines for setting their values. Using this new model, via multi-criteria reasoning, the combined effect of uncertainty in data and classification on subsequent decision-making can be assessed through statistical indicators, which can be used for quality assurance.
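
The forward normal cloud generator behind the cloud model can be sketched as follows; the parameter names Ex, En, He follow standard cloud-model notation (expectation, entropy, hyper-entropy), while the epsilon guard and function name are illustrative:

import numpy as np

def cloud_drops(Ex, En, He, n=1000, rng=None):
    # Forward normal cloud generator: each drop x is drawn from
    # N(Ex, En'^2), where En' ~ N(En, He^2) couples fuzziness (En)
    # with randomness about the fuzziness itself (He).
    rng = rng or np.random.default_rng()
    Enp = np.abs(rng.normal(En, He, n)) + 1e-12
    x = rng.normal(Ex, Enp)
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * Enp ** 2))  # membership of each drop
    return x, mu

How the paper maps measurement error and classification fuzziness onto these parameters is its own contribution; this snippet shows only the generator itself.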

- Uncertainty | Pp. 699-718

The Influence of Uncertainty Visualization on Decision Making: An Empirical Evaluation

Stephanie Deitrick; Robert Edsall

Uncertainty visualization is a research area that integrates visualization with the study of uncertainty. Many techniques have been developed for representing uncertainty, and there have been many participant-based empirical studies evaluating the effectiveness of specific techniques. However, there is little empirical evidence to suggest that uncertainty visualization influences, or results in, different decisions. Through a human-subjects experiment, this research evaluates whether specific uncertainty visualization methods, including texture and value, influence decisions and a user's confidence in their decisions. The results of this study indicate that uncertainty visualization may affect decisions, but the degree of influence depends on how the uncertainty is expressed.

- Uncertainty | Pp. 719-738

Modeling Uncertainty in Knowledge Discovery for Classifying Geographic Entities with Fuzzy Boundaries

Feng Qi; A-Xing Zhu

Boosting is a machine learning strategy originally designed to increase the classification accuracy of classifiers through inductive learning. This paper argues that this strategy of learning and inference actually corresponds to a cognitive model that explains the uncertainty associated with class assignments when classifying geographic entities with fuzzy boundaries. The paper presents a study that adopts the boosting strategy in knowledge discovery, which allows such uncertainty to be modeled and mapped when the discovered knowledge is used for classification. A case study of knowledge discovery for soil classification demonstrates the effectiveness of this approach.
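
In the spirit of the paper's approach (not its actual formulation), one can read a boosted ensemble's weighted class scores as fuzzy memberships and their entropy as a per-sample uncertainty indicator; the use of scikit-learn's AdaBoostClassifier below is our assumption:

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def fuzzy_memberships(X_train, y_train, X):
    # Weighted class scores from the boosted ensemble serve as fuzzy
    # membership values; their Shannon entropy measures how uncertain
    # each class assignment is.
    clf = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)
    mu = clf.predict_proba(X)                       # per-class memberships
    H = -(mu * np.log(mu + 1e-12)).sum(axis=1)      # per-sample uncertainty
    return mu, H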

- Uncertainty | Pp. 739-754

Capturing and Representing Conceptualization Uncertainty Interactively using Object-Fields

Vlasios Voudouris; Peter F. Fisher; Jo Wood

We present a method for representing, recording and managing conceptualization uncertainty. We review components of uncertainty associated with semantics and metadata, present a way of recording and visualizing uncertainty using sketching, and suggest a framework for recording and managing uncertainty and associated semantics using Object-Fields. A case study demonstrates a software prototype as proof of concept. We conclude by identifying future research challenges in supporting the dynamic exploration of uncertainty, semantics and field objects.
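
A minimal data-structure sketch of how an Object-Field might pair object identity with per-cell membership and recorded uncertainty; every name and field here is hypothetical, not the authors' prototype:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectField:
    # An identifiable object whose spatial extent is a field of per-cell
    # membership values, with a parallel field recording conceptualization
    # uncertainty (e.g. captured from analyst sketches) and free-form
    # semantic notes.
    object_id: str
    membership: np.ndarray      # values in [0, 1], one per grid cell
    uncertainty: np.ndarray     # per-cell uncertainty, same shape
    semantics: dict = field(default_factory=dict)

    def core(self, alpha=0.9):
        # Cells belonging to the object with membership >= alpha.
        return self.membership >= alpha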

- Uncertainty | Pp. 755-770

From Point Cloud to Grid DEM: A Scalable Approach

Pankaj K. Agarwal; Lars Arge; Andrew Danner

Given a set S of points in ℝ² sampled from a function f : ℝ² → ℝ, we present a scalable algorithm for constructing a grid digital elevation model (DEM). Our algorithm consists of three stages: First, we construct a quad tree on S to partition the point set into a set of non-overlapping segments. Next, for each segment σ, we compute the set of points in σ and in all segments neighboring σ. Finally, we interpolate each segment independently using points within the segment and its neighboring segments.

Data sets acquired by LIDAR and other modern mapping technologies consist of hundreds of millions of points and are too large to fit in main memory. When processing such massive data sets, the transfer of data between disk and main memory (also called I/O), rather than the CPU time, becomes the performance bottleneck. We therefore present an I/O-efficient algorithm for constructing a grid DEM. Our experiments show that the algorithm scales to data sets much larger than main memory, while existing algorithms do not. For example, using a machine with 1GB RAM, we were able to construct a grid DEM containing 1.3 billion cells (occupying 1.2GB) from a LIDAR data set of over 390 million points (occupying 20GB) in about 53 hours. Neither ArcGIS nor GRASS, two popular GIS products, was able to process this data set.
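
A toy, in-memory sketch of the three-stage pipeline (quad-tree partition, then per-segment interpolation); the paper's contribution is the I/O-efficient out-of-core version and the neighbor-gathering stage, both deliberately omitted here, and the leaf capacity and the use of scipy.interpolate.griddata are assumptions:

import numpy as np
from scipy.interpolate import griddata

def leaves(pts, cap=100000):
    # Stage 1: quad-tree split of the bounding box until each leaf holds
    # at most `cap` points. pts: (n, 3) array of x, y, z; assumes points
    # do not all share one coordinate (otherwise recursion cannot split).
    def split(p, x0, y0, x1, y1):
        if len(p) <= cap:
            return [(p, (x0, y0, x1, y1))]
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        out = []
        for (a, b, c, d) in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                             (x0, ym, xm, y1), (xm, ym, x1, y1)):
            m = (p[:, 0] >= a) & (p[:, 0] < c) & (p[:, 1] >= b) & (p[:, 1] < d)
            out += split(p[m], a, b, c, d)
        return out
    x0, y0 = pts[:, :2].min(axis=0)
    x1, y1 = pts[:, :2].max(axis=0) + 1e-9
    return split(pts, x0, y0, x1, y1)

def grid_dem(pts, res, cap=100000):
    # Stage 3 (toy): interpolate each leaf's grid cells from the points
    # inside it. The paper's version also pulls points from neighboring
    # segments so leaf boundaries interpolate seamlessly.
    for p, (x0, y0, x1, y1) in leaves(pts, cap):
        if len(p) < 4:
            continue
        gx, gy = np.meshgrid(np.arange(x0, x1, res), np.arange(y0, y1, res))
        yield (x0, y0), griddata(p[:, :2], p[:, 2], (gx, gy), method='linear')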

- Elevation Modeling | Pp. 771-788

Use of Plan Curvature Variations for the Identification of Ridges and Channels on DEM

Sanjay Rana

This paper proposes novel improvements to the traditional algorithms for identifying ridge and channel (also called ravine) topographic features on raster digital elevation models (DEMs). The overall methodology consists of two main steps: (1) smoothing the DEM by applying a mean filter, and (2) detecting ridge and channel features as cells with positive and negative plan curvature respectively, along with a decline or incline in plan curvature away from the cell in the direction orthogonal to the feature axis. The paper demonstrates a simple approach for visualizing the multi-scale structure of terrains and using it for semi-automated topographic feature identification. Despite its simplicity, the revised algorithm produced markedly better output than a comparatively sophisticated feature extraction algorithm based on conic-section analysis of terrain.
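
A hedged sketch of the two steps on a raster DEM, assuming numpy/scipy conventions; the curvature formula and threshold are common textbook choices rather than necessarily the paper's, and the orthogonal decline/incline test is omitted:

import numpy as np
from scipy.ndimage import uniform_filter

def ridge_channel_masks(dem, smooth=3, t=0.05):
    # Step 1: mean-filter the DEM (the paper's smoothing step).
    z = uniform_filter(np.asarray(dem, dtype=float), size=smooth)
    # Step 2: plan curvature from first and second derivatives.
    # np.gradient returns derivatives along (rows=y, cols=x).
    fy, fx = np.gradient(z)
    fxy, fxx = np.gradient(fx)
    fyy, _ = np.gradient(fy)
    kp = (fxx * fy**2 - 2 * fxy * fx * fy + fyy * fx**2) \
         / ((fx**2 + fy**2) ** 1.5 + 1e-12)
    # Positive plan curvature -> ridge candidate; negative -> channel.
    # Sign conventions differ between GIS packages; adjust as needed.
    return kp > t, kp < -t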

- Elevation Modeling | Pp. 789-804

An Evaluation of Spatial Interpolation Accuracy of Elevation Data

Qihao Weng

This paper makes a general evaluation of the spatial interpolation accuracy of elevation data. Six common interpolators were examined: Kriging, inverse distance to a power, minimum curvature, modified Shepard's method, radial basis functions, and triangulation with linear interpolation. The main properties and mathematical procedures of the interpolation algorithms are reviewed. To evaluate the interpolations fully, both statistical measures (root mean square error, standard deviation, and mean) and spatial accuracy measures (accuracy surfaces and spatial autocorrelation) were employed. The accuracy of spatial interpolation of elevations was found to depend primarily on input data point density and distribution, grid size (resolution), terrain complexity, and the interpolation algorithm used. Variations in interpolation parameters may significantly improve or worsen the accuracy. Further research is needed to examine the impact of terrain complexity in detail and to compare data sampling strategies. The combined use of variogram models, accuracy surfaces, and spatial autocorrelation represents a promising direction for mapping spatial data accuracy.
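
As an example of the kind of evaluation described, the sketch below implements one of the six interpolators (inverse distance to a power) together with a leave-one-out RMSE; the power, the epsilon guard and the function names are illustrative choices, not the paper's setup:

import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    # Inverse distance to a power, one of the six interpolators compared.
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + 1e-12)
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

def loo_stats(xy, z, power=2.0):
    # Leave-one-out residuals yield the statistical accuracy measures used
    # in the paper: root mean square error, mean error, standard deviation.
    idx = np.arange(len(z))
    errs = np.array([
        idw(xy[idx != i], z[idx != i], xy[i:i + 1], power)[0] - z[i]
        for i in idx])
    return np.sqrt((errs ** 2).mean()), errs.mean(), errs.std()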

- Elevation Modeling | Pp. 805-824