Publication catalogue - books



Computer Aided Systems Theory: EUROCAST 2005: 10th International Conference on Computer Aided Systems Theory, Las Palmas de Gran Canaria, Spain, February 7-11, 2005, Revised Selected Papers

Roberto Moreno Díaz; Franz Pichler; Alexis Quesada Arencibia (eds.)

Conference: 10th International Conference on Computer Aided Systems Theory (EUROCAST), Las Palmas de Gran Canaria, Spain, February 7-11, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-29002-5

Electronic ISBN

978-3-540-31829-3

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Ontology Integration for Statistical Information

Wilfried Grossmann; Markus Moschner

Metadata may be used for convenient handling of statistical information. Thus some metadata standards have emerged as guidelines for information processing within statistical information systems. Another line of development stems from documentation work for data archives, where the metadata requirements of researchers in the social sciences are considered. Different metadata standards have emerged from these streams of research. Basic ideas on integration and translation between these different lines of development are given; principles of ontology engineering play a key role as a starting point.

- Formal Approaches in Modelling | Pp. 63-68

On Recursive Functions and Well–Founded Relations in the Calculus of Constructions

J. L. Freire; E. Freire; A. Blanco

This paper presents a reflection on function construction through well-founded recursion in the type theory known as the Calculus of Inductive Constructions. It shows the expressive power of this calculus when dealing with concepts such as accessibility and noetherianity, among others. The properties of the General Recursion Scheme ([2]) and its relation with Structural Recursion in inductive types are analyzed. As a consequence, a methodology arises, which is illustrated with some examples. We use INRIA's compiler of the Calculus of Inductive Constructions: Coq [6].

- Intelligent Information Systems | Pp. 69-80
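
For reference, the accessibility predicate that underlies well-founded recursion in type theories such as the Calculus of Inductive Constructions can be stated as follows. These are the standard textbook definitions, not formulas quoted from the paper:

    \mathrm{Acc}_R(x) \;\equiv\; \forall y,\ R\,y\,x \rightarrow \mathrm{Acc}_R(y)
    \qquad
    \mathrm{well\_founded}(R) \;\equiv\; \forall x,\ \mathrm{Acc}_R(x)

A function defined by well-founded recursion over R may call itself only on R-smaller arguments, and the accessibility proof guarantees that every such chain of calls terminates.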

Longest Sorted Sequence Algorithm for Parallel Text Alignment

Tiago Ildefonso; Gabriel Pereira Lopes

This paper describes a statistically supported, language-independent method for aligning parallel texts (texts that are translations of each other, or of a common source text). This new approach is inspired by previous work by Ribeiro et al. (2000). The second statistical filter proposed by Ribeiro et al., based on Confidence Bands (CB), is replaced by the application of the Longest Sorted Sequence algorithm (LSSA), which is described in this paper. As a result, a 35% decrease in processing time and an 18% increase in the number of aligned segments were obtained for Portuguese-French alignments. Similar results were obtained for Portuguese-English alignments. Both methods are compared and evaluated over a large parallel corpus made up of Portuguese, English and French parallel texts (approximately 250 MB of text per language).

- Intelligent Information Systems | Pp. 81-90
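
As a rough illustration of the longest sorted sequence idea (a generic sketch, not the authors' code; the function name and the toy candidate pairs are assumptions), candidate correspondence points between the two texts can be filtered by keeping only the longest subsequence whose target offsets increase monotonically, which discards candidates that would cross other alignments:

    # Sketch: filter candidate alignment points (offset_in_text_a, offset_in_text_b),
    # already sorted by the first offset, by the longest strictly increasing run
    # of second offsets (classic longest-increasing-subsequence with back-pointers).
    from bisect import bisect_left

    def longest_sorted_sequence(pairs):
        tails = []      # tails[k]: smallest ending b-offset of any increasing run of length k+1
        tail_idx = []   # index into pairs of the pair that currently ends that run
        back = []       # back[i]: previous pair in the best run ending at pairs[i]
        for i, (_, b) in enumerate(pairs):
            k = bisect_left(tails, b)
            back.append(tail_idx[k - 1] if k > 0 else -1)
            if k == len(tails):
                tails.append(b)
                tail_idx.append(i)
            else:
                tails[k] = b
                tail_idx[k] = i
        if not tail_idx:
            return []
        seq, i = [], tail_idx[-1]
        while i != -1:
            seq.append(pairs[i])
            i = back[i]
        return seq[::-1]

    # Toy example: the candidate (10, 40) breaks monotonicity and is filtered out.
    candidates = [(0, 0), (10, 40), (20, 8), (30, 30), (40, 51)]
    print(longest_sorted_sequence(candidates))  # [(0, 0), (20, 8), (30, 30), (40, 51)]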

Information Retrieval and Large Text Structured Corpora

Fco. Mario Barcala; Miguel A. Molinero; Eva Domínguez

Conventional Information Retrieval Systems (IRSs), also called text indexers, deal with plain-text documents or ones with a very elementary structure. These kinds of systems are able to solve queries very efficiently, but they cannot take into account the tags that mark different sections, or at best this capability is very limited.

In contrast, documents which are part of a corpus nowadays often have a rich structure. They are structured using XML (Extensible Markup Language) [1] or in some other format which can be converted to XML in a more or less simple way. So classical IRSs built to work with these kinds of corpora will not benefit from this structure, and results will not be improved.

In addition, several of these corpora are very large and include hundreds or thousands of documents, which in turn contain millions or hundreds of millions of words. Therefore, there is a need to build efficient and flexible IRSs which work with large structured corpora.

- Intelligent Information Systems | Pp. 91-100
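
A minimal sketch of what structure-aware indexing can look like (a generic illustration, not the system described in the paper; the helper name, element tags and toy documents are assumptions): the inverted index is keyed by XML element as well as by term, so queries can be restricted to a structural section of the documents.

    # Index each XML element's own text separately, so lookups can target a tag.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def index_by_element(xml_text, doc_id, index):
        root = ET.fromstring(xml_text)
        for elem in root.iter():
            for term in (elem.text or "").lower().split():
                index[(elem.tag, term)].add(doc_id)

    index = defaultdict(set)
    index_by_element("<doc><title>isobar maps</title><body>pressure maps</body></doc>", "d1", index)
    index_by_element("<doc><title>parallel corpora</title><body>text alignment maps</body></doc>", "d2", index)

    print(index[("title", "maps")])  # only d1 uses 'maps' in its title
    print(index[("body", "maps")])   # both documents use it in the body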

Meteorological Image Descriptors

J. L. Crespo; P. Bernardos; M. E. Zorrilla; E. Mora

The objective of this paper is to obtain a visual characterization of time-evolution images, in particular synoptic maps taken from meteorology. The preliminary tasks required before image processing are reviewed. Two different types of descriptors are extracted to characterize the images: one type is subsequently used for prediction tasks, while the other is used for classification tasks. Three relevant sources of information in the images are identified as their descriptors: the local density and orientation of the isobar lines, and the number of centres of high (H) and low (L) pressure. For these descriptors, two main features are taken into account. The different procedures carried out to extract these descriptors from our images of interest are discussed.

- Intelligent Information Systems | Pp. 101-110
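
As a hedged illustration of one of the descriptors mentioned above (this is one plausible way to do it, not the authors' procedure), the local orientation of isobar lines in a grey-level map can be estimated from image gradients, since a line runs perpendicular to the intensity gradient:

    # Orientation histogram of line-like structures, weighted by gradient magnitude.
    import numpy as np

    def isobar_orientation_histogram(image, bins=8):
        gy, gx = np.gradient(image.astype(float))
        magnitude = np.hypot(gx, gy)
        # A line is perpendicular to the grey-level gradient; angles in [0, 180).
        orientation = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
        hist, _ = np.histogram(orientation, bins=bins, range=(0.0, 180.0),
                               weights=magnitude)
        return hist / (hist.sum() + 1e-12)

    # Toy example: vertical stripes behave like isobar lines oriented near 90 degrees.
    img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 128)), (128, 1))
    print(isobar_orientation_histogram(img).round(2))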

Towards a Certified and Efficient Computing of Gröbner Bases

J. Santiago Jorge; Víctor M. Gulías; José L. Freire; Juan J. Sánchez

In this paper, we present an example of the implementation and verification of a functional program. We describe our experience in developing an application in the area of symbolic computation: computing the Gröbner basis of a set of multivariate polynomials. Our aim is the formal certification of several aspects of the program, which is written in a functional language. In addition, the efficiency of the computation is another issue taken into account.

- Intelligent Information Systems | Pp. 111-120
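
For readers unfamiliar with the object being certified, the following snippet computes a Gröbner basis of a small polynomial ideal using SymPy's off-the-shelf groebner routine; it is only an illustration of the mathematical object, not the certified implementation discussed in the paper.

    # A Groebner basis (here under lexicographic order) makes ideal membership decidable.
    from sympy import groebner, symbols

    x, y = symbols("x y")
    G = groebner([x**2 + y**2 - 1, x - y], x, y, order="lex")
    print(list(G.exprs))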

CheapTB: A Low Cost of Operation Distributed Filesystem

Javier París; Víctor M. Gulías; Carlos Abalde

The overall computing power available today has made it possible for small enterprises and laboratories to develop applications that need large amounts of storage. This storage has traditionally been expensive, using proprietary technologies. With the recent increase in the storage capacity of cheap IDE and SCSI disks, it is now possible to build a cluster to be used as a storage solution. CheapTB is a distributed filesystem for this environment, with an emphasis on low cost of operation rather than performance.

- Intelligent Information Systems | Pp. 121-130

Spelling Correction on Technical Documents

M. Vilares; J. Otero; J. Graña

We describe a novel approach to spelling correction applied to technical documents, a task that requires a number of specific properties such as efficiency, safety and maintenance. In contrast to previous works, we explore the region close to the point at which recognition halts, gathering all the relevant information for the repair process in order to avoid the phenomenon of cascading errors. Our approach seems to reach the same quality as the best-performing classic techniques, but with a significant reduction in both time and space costs.

- Intelligent Information Systems | Pp. 131-139
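
A hedged sketch of the regional-repair idea (a generic illustration, not the authors' algorithm; the function names, window size and toy lexicon are assumptions): correction candidates are generated only in a small window around the point where recognition of a token fails, rather than over the whole token, which keeps the search cheap and limits cascading changes.

    # Candidate repairs differ from the token only inside a window around the error point.
    def edits1(word):
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = {a + b[1:] for a, b in splits if b}
        substitutions = {a + c + b[1:] for a, b in splits if b for c in letters}
        inserts = {a + c + b for a, b in splits for c in letters}
        return deletes | substitutions | inserts

    def repair(token, error_position, lexicon, window=3):
        lo, hi = max(0, error_position - window), error_position + window
        prefix, region, suffix = token[:lo], token[lo:hi], token[hi:]
        candidates = {prefix + r + suffix for r in edits1(region)}
        return sorted(c for c in candidates if c in lexicon)

    lexicon = {"oscilloscope", "oscillator"}
    print(repair("osciloscope", error_position=5, lexicon=lexicon))  # ['oscilloscope']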

Verification of Language Based Fault-Tolerance

Clara Benac Earle; Lars-Åke Fredlund

In this paper we target the verification of fault-tolerance aspects of distributed applications written in the Erlang programming language. Erlang programmers mostly work with ready-made language components. Our approach to verifying fault tolerance is to verify systems built using a central component of most Erlang software: a generic server component with fault-tolerance handling.

To verify such Erlang programs we automatically translate them into processes of the μCRL process algebra, generate their state spaces, and use a model checker to determine whether they satisfy correctness properties specified in the μ-calculus.

The key observation of this paper is that, due to the use of these higher-level design patterns, the state space generated from an Erlang program, even with failures occurring, is relatively small and can be generated automatically.

- Intelligent Information Systems | Pp. 140-149
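
As an example of the kind of property expressible in the μ-calculus (a standard textbook formula, not necessarily one checked by the authors), deadlock freedom states that every reachable state has at least one outgoing transition:

    \nu X.\ \bigl(\langle \mathit{true} \rangle \mathit{true} \;\wedge\; [\mathit{true}]\, X\bigr)

Read as a greatest fixed point: some action is enabled now, and the property holds again after every action.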

Applying Stacking and Corpus Transformation to a Chunking Task

José A. Troyano; Víctor J. Díaz; Fernando Enríquez; Vicente Carrillo; Fermín Cruz

In this paper we present an application of the stacking technique to a chunking task: named entity recognition. Stacking consists of applying machine learning techniques to combine the results of different models. Instead of using several corpora or several tagger generators to obtain the models needed for stacking, we have applied three transformations to a single training corpus and then used the four versions of the corpus to train a single tagger generator. Taking as baseline the results obtained with the original corpus (F value of 81.84), our experiments show that the three transformations improve this baseline (the best one reaches 84.51), and that applying stacking improves it further, reaching an F measure of 88.43.

- Intelligent Information Systems | Pp. 150-158
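
A hedged sketch of the combination step (a generic illustration, not the authors' system; the helper names, tags and toy data are assumptions): a second-level model is trained on held-out data to decide, for each combination of base-tagger outputs, which tag to emit. Here the meta-learner is a simple lookup table, and the four base taggers stand in for models trained on the original corpus and on its three transformed versions.

    # Stacking with a minimal meta-learner: map each tuple of base predictions
    # to the gold tag most often observed with it on held-out data.
    from collections import Counter, defaultdict

    def train_stacker(base_predictions, gold_labels):
        votes = defaultdict(Counter)
        for preds, gold in zip(base_predictions, gold_labels):
            votes[preds][gold] += 1
        return {preds: counts.most_common(1)[0][0] for preds, counts in votes.items()}

    def apply_stacker(stacker, preds, fallback=0):
        # Unseen combinations fall back to a designated base model's prediction.
        return stacker.get(preds, preds[fallback])

    # Toy held-out data: four base models disagree on whether a token starts an entity.
    held_out = [(("B", "B", "O", "B"), "B"), (("O", "B", "O", "O"), "O"),
                (("B", "B", "O", "B"), "B"), (("O", "B", "B", "B"), "B")]
    stacker = train_stacker([p for p, _ in held_out], [g for _, g in held_out])
    print(apply_stacker(stacker, ("O", "B", "B", "B")))  # 'B'
    print(apply_stacker(stacker, ("O", "O", "O", "B")))  # unseen combination -> 'O'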