Publications catalog - books


Transactions on Rough Sets V

James F. Peters; Andrzej Skowron (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Data Mining and Knowledge Discovery; Theory of Computation; Mathematical Logic and Formal Languages; Computation by Abstract Devices; Database Management

Availability

Detected institution: Not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-35467-3

Electronic ISBN

978-3-540-39382-5

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Zdzisław Pawlak: Life and Work

James F. Peters; Andrzej Skowron

Professor Pawlak’s most widely recognized contributions are his incisive approach to classifying objects with their attributes (features) and his introduction of approximation spaces, which establish the foundations of granular computing and provide frameworks for perception and knowledge discovery in many areas. He was with us only for a short time and yet, when we look back at his accomplishments, we realize how greatly he has influenced us with his generous spirit and creative work in many areas such as approximate reasoning, intelligent systems research, computing models, mathematics (especially rough set theory), molecular computing, pattern recognition, philosophy, art, and poetry. This article attempts to give a vignette that highlights some of Pawlak’s remarkable accomplishments. This vignette is limited to a brief coverage of Pawlak’s work in rough set theory, molecular computing, philosophy, painting, and poetry. Detailed coverage of these as well as other accomplishments by Pawlak is outside the scope of this commemorative article.

- Commemoration | Pp. 1-24
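
As a reading aid for the entry above: the lower and upper approximations that anchor Pawlak's theory are simple to state concretely. Below is a minimal Python sketch of the classical definitions over a toy, invented decision table; it illustrates the standard notions, not code from the article.

```python
from itertools import groupby

def indiscernibility_classes(universe, attrs, value):
    """Partition the universe into classes of objects that agree on every
    attribute in attrs; value(obj, a) returns obj's value of attribute a."""
    key = lambda obj: tuple(value(obj, a) for a in attrs)
    return [set(g) for _, g in groupby(sorted(universe, key=key), key=key)]

def approximations(universe, attrs, value, X):
    """Pawlak's lower and upper approximations of a concept X."""
    classes = indiscernibility_classes(universe, attrs, value)
    lower = {x for c in classes if c <= X for x in c}   # certainly in X
    upper = {x for c in classes if c & X for x in c}    # possibly in X
    return lower, upper

# Toy decision table (invented): object -> (color, shape).
table = {1: ("red", "round"), 2: ("red", "round"),
         3: ("blue", "round"), 4: ("blue", "square")}
value = lambda obj, a: table[obj][a]
X = {1, 3}                                              # target concept
print(approximations(set(table), [0, 1], value, X))     # ({3}, {1, 2, 3})
```

Objects 1 and 2 are indiscernible by the two attributes, so only object 3 is certainly in the concept, while 1 and 2 fall into the boundary.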

Rough Belief Change

Mohua Banerjee

The article aims at revisiting the notion of rough truth proposed by Pawlak in 1987 [15] and investigating some of its ‘logical’ consequences. We focus on a formal deductive apparatus that is sound and complete with respect to a semantics based on rough truth; this system turns out to be equivalent to the paraconsistent discussive logic due to Jaśkowski. A significant feature of rough truth is that a proposition and its negation may well be roughly true together. Thus, a matching notion of rough consistency was introduced in [5], and completeness of the system is proved with the help of this notion. These properties motivate a proposal of rough belief change: during change, the operative constraints on a system of beliefs are consistency preservation and deductive closure with respect to the system. Following the AGM [1] line, eight basic postulates for defining rough revision and contraction functions are presented, and interrelationships of these functions are proved. The proposal is, therefore, an example of paraconsistent belief change.

- Regular Papers | Pp. 25-38
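
For orientation (not part of the abstract): the semantic notion driving the paper, rough truth, is usually stated as below, and the statement makes it clear why a proposition and its negation can be roughly true together.

```latex
% Rough truth (Pawlak 1987), as commonly stated: \varphi is roughly true
% in an approximation space (U, R) iff the upper approximation of its
% meaning set \|\varphi\| \subseteq U is the whole universe:
\varphi \text{ is roughly true} \iff \overline{R}\,\|\varphi\| = U .
% Since \|\lnot\varphi\| = U \setminus \|\varphi\| and
% \overline{R}(U \setminus X) = U \setminus \underline{R}(X), both \varphi
% and \lnot\varphi are roughly true exactly when
% \underline{R}\|\varphi\| = \emptyset and \overline{R}\|\varphi\| = U,
% i.e. when \|\varphi\| lies entirely in the boundary region.
```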

Rough Sets and Vague Concept Approximation: From Sample Approximation to Adaptive Learning

Jan Bazan; Andrzej Skowron; Roman Swiniarski

We present a rough set approach to vague concept approximation. Approximation spaces used for concept approximation are initially defined on samples of objects (decision tables) representing partial information about concepts. Such approximation spaces defined on samples are next inductively extended to the whole object universe, which makes it possible to define the concept approximation on extensions of samples. We discuss the role of inductive extensions of approximation spaces in searching for concept approximations. However, searching for relevant inductive extensions of approximation spaces defined on samples is infeasible for compound concepts. We outline an approach that makes this search feasible by using a concept ontology specified by domain knowledge and its approximation. We also extend this approach to a framework for adaptive approximation of vague concepts by agents interacting with environments. This paper is a step toward approximate reasoning in multiagent systems (MAS), intelligent systems, and complex adaptive systems (CAS).

- Regular Papers | Pp. 39-62
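
One concrete way to read "inductive extension of an approximation space" is through generalized approximation spaces built from an uncertainty (neighborhood) function and a rough inclusion measure, in the style of Skowron and Stepaniuk: a neighborhood function trained on the sample is reused to place unseen objects in the lower approximation, the boundary, or outside the concept. The sketch below is a hypothetical illustration of that idea, not the authors' algorithm.

```python
def rough_inclusion(X, Y):
    """Degree to which X is included in Y (1.0 for empty X, by convention)."""
    return 1.0 if not X else len(X & Y) / len(X)

def classify(obj, neighborhood, concept_sample, threshold=0.8):
    """Sketch of inductive extension: the approximation learned on a sample
    is reused on an unseen object via its neighborhood I(obj) -- the sample
    objects the trained similarity relation deems indistinguishable from it."""
    degree = rough_inclusion(neighborhood(obj), concept_sample)
    if degree >= threshold:
        return "lower"      # certainly an instance of the concept
    if degree > 0.0:
        return "boundary"   # possibly an instance
    return "outside"
```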

Matching 2D Image Segments with Genetic Algorithms and Approximation Spaces

Maciej Borkowski; James F. Peters

This article introduces an approach to matching 2D image segments using approximation spaces. The rough set approach introduced by Zdzisław Pawlak provides a ground for concluding to what degree a particular set of similar image segments is a part of a set of image segments representing a norm or standard. The number of features (color difference and overlap between segments) typically used to solve the image segment matching problem is small, which means there is not enough information to permit image segment matching with high accuracy. By contrast, many more features can be used in solving the image segment matching problem using a combination of evolutionary and rough set methods. Several different uses of a Darwinian form of a genetic algorithm (GA) are introduced as a means to partition large collections of image segments into blocks of similar image segments. After filtering, the output of a GA provides a basis for finding matching segments in the context of an approximation space. A coverage form of approximation space is presented in this article; such an approximation space makes it possible to measure the extent to which a set of image segments representing a standard covers GA-produced blocks. The contribution of this article is the introduction of an approach to matching image segments in the context of an approximation space.

- Regular Papers | Pp. 63-101
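
The coverage form of approximation space mentioned in the abstract can be illustrated with plain set arithmetic: measure how far the standard set of segments covers each GA-produced block. Segment IDs and the acceptance rule below are invented for illustration.

```python
def coverage(block, standard):
    """Degree to which the standard segments cover a GA-produced block."""
    return len(block & standard) / len(block) if block else 0.0

# GA output: blocks of mutually similar image segments (hypothetical IDs).
blocks = [{1, 2, 3}, {4, 5}, {6, 7, 8, 9}]
standard = {2, 3, 4, 6, 7}          # segments representing the norm/standard

for block in blocks:
    print(sorted(block), "-> coverage", coverage(block, standard))
# Blocks covered to a sufficiently high degree count as matches.
```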

An Efficient Algorithm for Inference in Rough Set Flow Graphs

C. J. Butz; W. Yan; B. Yang

Pawlak recently introduced rough set flow graphs (RSFGs) as a graphical framework for reasoning from data. No study, however, has yet investigated the complexity of the accompanying inference algorithm, nor the complexity of inference in RSFGs. In this paper, we show that the traditional RSFG inference algorithm has exponential time complexity. We then propose a new RSFG inference algorithm that exploits the factorization in an RSFG. We prove its correctness and establish its polynomial time complexity. In addition, we show that our inference algorithm never does more work than the traditional algorithm. Our discussion also reveals that, unlike traditional rough set research, RSFGs make implicit independency assumptions regarding the problem domain.

- Regular Papers | Pp. 102-122
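
Pawlak's flow graphs attach a throughflow φ(x, y) to each branch, with certainty cer(x, y) = φ(x, y)/φ(x), and the certainty of a path is the product of its branch certainties. That makes the complexity gap in the abstract easy to see: enumerating every path through k layers is exponential in k, while propagating flow layer by layer, exploiting the factorization, is polynomial. The sketch below illustrates the factorized computation on an invented three-layer graph; it is not the authors' exact algorithm.

```python
import numpy as np

# Hypothetical 3-layer flow graph: certainty matrices between consecutive
# layers, cer[i][j] = cer(node i, node j); each row sums to 1.
cer_01 = np.array([[0.7, 0.3],
                   [0.2, 0.8]])
cer_12 = np.array([[0.5, 0.5],
                   [0.1, 0.9]])
source = np.array([0.6, 0.4])   # normalized throughflow of the input layer

# Factorized inference: one matrix-vector product per layer,
# O(layers * width^2), instead of summing the strengths of
# exponentially many individual paths.
layer1 = source @ cer_01
layer2 = layer1 @ cer_12
print(layer2)                   # flow distribution over the output layer
```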

Intelligent Algorithms for Movie Sound Tracks Restoration

Andrzej Czyżewski; Marek Dziubiński; Łukasz Litwic; Przemysław Maziewski

Two algorithms for movie sound track restoration are discussed in this paper. The first is an unpredictability measure computation applied to psychoacoustic model-based broadband noise attenuation. A learning decision algorithm based on a neural network is employed to determine the useful audio signal components that act as maskers of the noisy spectral parts. An application of a rough set decision system to this task is also considered. An iterative method for calculating the sound masking pattern is presented. The second algorithm is a routine for precise evaluation of parasitic frequency modulation (wow) that utilizes sinusoidal components extracted from the sound spectrum. The results obtained with the proposed intelligent signal processing algorithms, as well as the relationship between the two routines, are presented and discussed.

- Regular Papers | Pp. 123-145
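
The unpredictability measure named in the abstract is, in standard psychoacoustic models, a per-bin prediction error: magnitude and phase are linearly extrapolated from the two previous analysis frames, and the normalized complex-plane error is close to 0 for tonal (predictable) bins and close to 1 for noise-like bins. A sketch of that standard formulation (frame inputs are placeholders):

```python
import numpy as np

def unpredictability(r, phi, r1, phi1, r2, phi2):
    """Per-bin unpredictability, as in MPEG-style psychoacoustic models:
    predict magnitude/phase by linear extrapolation from the two previous
    frames (r1, phi1 and r2, phi2) and normalize the complex-plane
    prediction error to roughly [0, 1]."""
    r_hat = 2.0 * r1 - r2                      # predicted magnitude
    phi_hat = 2.0 * phi1 - phi2                # predicted phase
    err = np.hypot(r * np.cos(phi) - r_hat * np.cos(phi_hat),
                   r * np.sin(phi) - r_hat * np.sin(phi_hat))
    return err / (r + np.abs(r_hat) + 1e-12)   # ~0: tonal, ~1: noise-like
```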

Rough Set-Based Application to Recognition of Emotionally-Charged Animated Character’s Gestures

Bożena Kostek; Piotr Szczuko

This research study analyzes emotionally-charged animated character gestures. Animation methods and rules are first briefly reviewed in this paper, and then the experiment layout is presented. For the purpose of the experiment, the keyframe method is used to create animated objects characterized by differentiated emotions. The method comprises creating an animation by changing the properties of the temporal structure of an animated sequence. The sequence is then analyzed to identify the locations and spacing of keyframes, as well as the features that could be related to the emotions present in the animation. On the basis of this analysis, several parameters contained in feature vectors describing each object's emotions at key moments are derived. Labels assigned to particular sequences by viewers participating in subjective tests serve as the decision attribute. The rough set system is used to process the data, and rules related to various categories of emotions are derived. They are then compared with the ones used in traditional animation, and the most significant parameters are identified. The second part of the experiment checks the viewers' ability to discern a less dominant emotional charge in gestures. A time-mixing method is proposed and utilized to generate new gestures emotionally charged with differentiated intensity. Viewers' assessment of the animation quality is presented and analyzed. Conclusions and future experiments are briefly outlined.

- Regular Papers | Pp. 146-166
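
The abstract does not spell out the time-mixing method. One simple, purely hypothetical reading, sketched below, blends the keyframe times and values of a neutral and a fully emotionally-charged version of the same gesture with an intensity weight, which would yield gestures with a less dominant emotional charge:

```python
def time_mix(neutral, emotional, alpha):
    """Blend two keyframe tracks of equal length: alpha = 0 gives the
    neutral gesture, alpha = 1 the fully emotionally-charged one, and
    intermediate values a weaker emotional charge. Each track is a list
    of (time, value) keyframes. Hypothetical reading of 'time-mixing'."""
    return [((1 - alpha) * tn + alpha * te, (1 - alpha) * vn + alpha * ve)
            for (tn, vn), (te, ve) in zip(neutral, emotional)]

neutral   = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.0)]
emotional = [(0.0, 0.0), (0.3, 0.9), (1.0, 0.1)]
print(time_mix(neutral, emotional, 0.5))   # half-intensity gesture
```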

Introducing a Rule Importance Measure

Jiye Li; Nick Cercone

Association rule algorithms often generate an excessive number of rules, many of which are not significant, and it is difficult to determine which rules are more useful, interesting, and important. We introduce a rough set based Rule Importance Measure to select the most important rules. We use the ROSETTA software to generate multiple reducts; the Apriori association rule algorithm is then applied to generate rule sets for each data set based on each reduct. Some rules are generated more frequently than others across the rule sets, and we consider such rules more important: we define rule importance as the frequency with which an association rule is generated across all the rule sets. Rule importance differs from rule interestingness measures and rule quality measures in its application tasks, the processes where the measures are applied, and the contents they measure. The experimental results from an artificial data set, UCI machine learning data sets, and an actual geriatric care medical data set show that our method reduces the computational cost of rule generation and provides an effective measure of how important a rule is.

- Regular Papers | Pp. 167-189
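
The measure is fully determined by the abstract: generate one rule set per reduct and score each rule by the fraction of rule sets in which it occurs. In the paper the reducts come from ROSETTA and the rules from Apriori; the placeholder rules below are invented:

```python
from collections import Counter

def rule_importance(rule_sets):
    """Importance of a rule = frequency with which it is generated across
    the rule sets derived from different reducts."""
    counts = Counter(rule for rules in rule_sets for rule in set(rules))
    n = len(rule_sets)
    return {rule: c / n for rule, c in counts.items()}

# One rule set per reduct (placeholder rules as strings).
rule_sets = [
    {"a=1 -> d=yes", "b=0 -> d=no"},
    {"a=1 -> d=yes", "c=2 -> d=yes"},
    {"a=1 -> d=yes", "b=0 -> d=no"},
]
for rule, imp in sorted(rule_importance(rule_sets).items(),
                        key=lambda kv: -kv[1]):
    print(f"{imp:.2f}  {rule}")   # "a=1 -> d=yes" scores 1.00: most important
```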

Variable Precision Bayesian Rough Set Model and Its Application to Kansei Engineering

Tatsuo Nishino; Mitsuo Nagamachi; Hideo Tanaka

This paper proposes a rough set method to extract decision rules from human evaluation data carrying much ambiguity, such as sense and feeling. To handle ambiguous and probabilistic human evaluation data, we propose an extended decision table and a probabilistic set approximation based on a new definition of information gain. Furthermore, for our application, we propose a two-stage method to extract probabilistic if-then rules simply, using decision functions of approximate regions. Finally, we implemented a computer program of the proposed rough set method, applied it to Kansei Engineering of coffee taste design, and examined the effectiveness of the proposed method. The result shows that our proposed rough set method is definitely applicable to human evaluation data.

- Regular Papers | Pp. 190-206
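
The probabilistic set approximations used in variable precision and Bayesian rough set models admit a compact illustration: each indiscernibility class is routed to the positive, negative, or boundary region by comparing the conditional probability of the decision within the class against thresholds (in Bayesian variants the thresholds derive from the prior). The sketch below uses those standard definitions with invented data; it is not the paper's extended decision table method:

```python
def probabilistic_regions(classes, X, lower_t, upper_t):
    """Variable precision-style regions: a class goes to the positive
    region if P(X | class) >= upper_t, to the negative region if
    <= lower_t, and to the boundary otherwise. In a Bayesian variant
    the thresholds are derived from the prior P(X)."""
    pos, neg, bnd = set(), set(), set()
    for c in classes:
        p = len(c & X) / len(c)
        (pos if p >= upper_t else neg if p <= lower_t else bnd).update(c)
    return pos, neg, bnd

classes = [{1, 2, 3}, {4, 5}, {6, 7, 8}]   # indiscernibility classes
X = {1, 2, 3, 4, 7}                        # e.g. "evaluated as tasty"
print(probabilistic_regions(classes, X, lower_t=0.3, upper_t=0.7))
```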

P300 Wave Detection Based on Rough Sets

Sheela Ramanna; Reza Fazel-Rezai

The goal of P300 wave detection is to extract relevant features from the huge number of electrical signals and to detect the P300 component accurately. This paper introduces a modified approach to P300 wave detection combined with an application of rough set methods and non-rough set based methods to classify P300 signals. The modifications include an averaging method that uses Mexican hat wavelet coefficients to extract signal features. The data set has been expanded to include signals from six words, for a total of 3960 objects. Experiments with a variety of classifiers were performed. The signal data analysis includes comparisons of error rates, true positives, and false negatives using a paired t-test. It was found that false negatives are a better indicator of the efficacy of the feature extraction method than error rate, owing to the nature of the signal data. The contribution of this paper is an in-depth study of P300 wave detection using a modified averaging method for feature extraction together with rough set-based classification on an expanded data set.

- Regular Papers | Pp. 207-223
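
The feature-extraction step described above, averaging epochs and taking Mexican hat wavelet coefficients, can be sketched directly: the Ricker (Mexican hat) wavelet is the second derivative of a Gaussian, and convolving the averaged epoch with it at a few widths yields a feature vector. Epoch data and widths below are placeholders:

```python
import numpy as np

def mexican_hat(n, width):
    """Ricker (Mexican hat) wavelet: second derivative of a Gaussian,
    sampled at n points, unnormalized."""
    t = np.arange(n) - (n - 1) / 2.0
    a = t / width
    return (1.0 - a ** 2) * np.exp(-0.5 * a ** 2)

def p300_features(epochs, widths=(4, 8, 16)):
    """Average single-trial epochs (trials x samples), then correlate the
    average with Mexican hat wavelets of several widths and use the
    coefficients as classifier features. Widths are placeholders."""
    avg = epochs.mean(axis=0)   # averaging attenuates non-P300 activity
    coeffs = [np.convolve(avg, mexican_hat(10 * w, w), mode="same")
              for w in widths]
    return np.concatenate(coeffs)

epochs = np.random.randn(30, 256)    # placeholder: 30 trials, 256 samples
print(p300_features(epochs).shape)   # (3 * 256,) = (768,)
```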