Publications catalog - books

AI*IA 2007: Artificial Intelligence and Human-Oriented Computing: 10th Congress of the Italian Association for Artificial Intelligence, Rome, Italy, September 10-13, 2007. Proceedings

Roberto Basili; Maria Teresa Pazienza (eds.)

Conference: 10th Congress of the Italian Association for Artificial Intelligence (AI*IA), Rome, Italy, September 10-13, 2007

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages

Availability

Institution detected: none. Year of publication: 2007. Access: SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-74781-9

Electronic ISBN

978-3-540-74782-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

A Genetic Approach to the Automatic Generation of Fuzzy Control Systems from Numerical Controllers

Giuseppe Della Penna; Francesca Fallucchi; Benedetto Intrigila; Daniele Magazzeni

Control systems are small components that control the behavior of larger systems. In recent years, sophisticated controllers have become widespread in the hardware/software of a growing number of everyday products and appliances, so the problem of the automatic synthesis of controllers is extremely important. Several techniques have been applied to this aim. The controllers generated by these techniques are typically numerical controllers that, however, often have a huge size and limited robustness. In this paper we present an automatic iterative process, based on genetic algorithms, that can be used to compress the huge amount of information contained in such numerical controllers into smaller and more robust fuzzy control systems.

- Machine Learning | Pp. 230-241
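
The compression idea can be illustrated with a minimal genetic-algorithm sketch. This is not the authors' system: the controller, rule encoding, and all parameters below are invented for illustration, and only the general evolve-and-select loop is representative.

```python
import random

# Illustrative only: compress a large numeric controller (a lookup
# table mapping state -> action) into two threshold rules via a tiny GA.

random.seed(0)

# "Huge" numerical controller over 100 sampled states.
STATES = [i / 100 for i in range(100)]
def numeric_controller(s):
    return 0 if s < 0.33 else (1 if s < 0.66 else 2)

def rule_controller(thresholds, s):
    # Compact controller: two thresholds partition the state space.
    t1, t2 = sorted(thresholds)
    return 0 if s < t1 else (1 if s < t2 else 2)

def fitness(ind):
    # Fraction of states where the compact rules agree with the table.
    return sum(rule_controller(ind, s) == numeric_controller(s)
               for s in STATES) / len(STATES)

def evolve(pop_size=30, generations=60):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < 0.3:                            # mutation
                child[random.randrange(2)] += random.gauss(0, 0.05)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(best), fitness(best))
```

With the seed fixed, the evolved thresholds land near the table's own breakpoints (0.33 and 0.66), so the two-rule controller reproduces almost all of the table's decisions.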

Trip Around the HMPerceptron Algorithm: Empirical Findings and Theoretical Tenets

Roberto Esposito; Daniele P. Radicioni

In recent work we introduced a novel algorithm for the fast evaluation of Supervised Sequential Learning (SSL) classifiers. In this paper we point out some interesting and unexpected aspects of the learning behavior of the HMPerceptron algorithm that affect performance. This observation is the starting point of an investigation into the internal workings of the HMPerceptron, which unveils crucial details of its learning strategy. Understanding these details deepens comprehension of the algorithm while suggesting further enhancements.

- Machine Learning | Pp. 242-253
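
The family the HMPerceptron belongs to, Collins-style structured perceptrons for SSL, can be sketched in miniature as follows. The feature design (word/label emissions plus label-bigram transitions) and the toy data are assumptions made for illustration, not the paper's setup.

```python
# Toy structured perceptron: Viterbi decoding over a small label set,
# with perceptron updates on mistakes.

LABELS = ["A", "B"]

def viterbi(tokens, w):
    # Best label sequence under emission weights w[(token, label)]
    # and transition weights w[(prev_label, label)].
    best = {lab: (w.get((tokens[0], lab), 0.0), [lab]) for lab in LABELS}
    for tok in tokens[1:]:
        new = {}
        for lab in LABELS:
            score, path = max(
                (s + w.get((p, lab), 0.0) + w.get((tok, lab), 0.0), pth)
                for p, (s, pth) in best.items())
            new[lab] = (score, path + [lab])
        best = new
    return max(best.values())[1]

def features(tokens, labels):
    # Sparse feature counts for a (sequence, labeling) pair.
    feats = {}
    for i, (tok, lab) in enumerate(zip(tokens, labels)):
        feats[(tok, lab)] = feats.get((tok, lab), 0) + 1
        if i > 0:
            key = (labels[i - 1], lab)
            feats[key] = feats.get(key, 0) + 1
    return feats

def train(data, epochs=5):
    w = {}
    for _ in range(epochs):
        for tokens, gold in data:
            pred = viterbi(tokens, w)
            if pred != gold:  # reward gold features, penalize predicted
                for f, c in features(tokens, gold).items():
                    w[f] = w.get(f, 0.0) + c
                for f, c in features(tokens, pred).items():
                    w[f] = w.get(f, 0.0) - c
    return w

DATA = [(["the", "cat"], ["A", "B"]), (["the", "dog"], ["A", "B"])]
w = train(DATA)
print(viterbi(["the", "cat"], w))  # -> ['A', 'B']
```

The mistake-driven update (add gold features, subtract predicted features) is the core trait the abstract's observations about learning behavior concern.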

Instance-Based Query Answering with Semantic Knowledge Bases

Nicola Fanizzi; Claudia d’Amato; Floriana Esposito

A procedure founded on instance-based learning is presented for performing a form of analogical reasoning on knowledge bases expressed in a wide range of ontology languages. The procedure exploits a novel semi-distance measure for individuals, based on their semantics w.r.t. a number of dimensions corresponding to a committee of features represented by concept descriptions. The procedure can answer class-membership queries by analogy on the grounds of the classification of a number of training instances (the nearest ones w.r.t. the semi-distance measure). In particular, it may also predict assertions that are not logically entailed by the knowledge base. In the experimentation, where we compare the procedure to a logical reasoner, we show that it can be quite accurate and can augment the scope of applicability, outperforming previous prototypes that adopted other semantic measures.

- Machine Learning | Pp. 254-265
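
The committee-based semi-distance can be sketched as an average per-feature projection difference. The encoding below is an assumption: each feature (concept) maps an individual to 1 (known member), 0 (known non-member), or 0.5 (membership unknown to the knowledge base); the toy concepts and individuals are invented.

```python
# Minimal sketch of a committee-based semi-distance between individuals:
# a normalized Minkowski (p = 1) measure over a committee of features.

def semi_distance(a, b, features):
    return sum(abs(f(a) - f(b)) for f in features) / len(features)

# Hypothetical toy knowledge: two concepts over string individuals.
is_person = lambda x: 1.0 if x in {"anna", "marco"} else 0.0
is_student = lambda x: 1.0 if x == "anna" else (0.5 if x == "ugo" else 0.0)

F = [is_person, is_student]
print(semi_distance("anna", "marco", F))  # -> 0.5
```

Because the measure only queries concept membership, it is language-independent: any reasoner able to answer instance checks for the committee concepts can supply the projections.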

A Hierarchical Clustering Procedure for Semantically Annotated Resources

Nicola Fanizzi; Claudia d’Amato; Floriana Esposito

A clustering method is presented which can be applied to relational knowledge bases. It can be used to discover interesting groupings of resources through their (semantic) annotations expressed in the standard languages employed for modeling concepts in the Semantic Web. The method exploits a simple (yet effective and language-independent) semi-distance measure for individuals, based on the resource semantics w.r.t. a number of dimensions corresponding to a committee of features represented by a group of concept descriptions (discriminating features). The algorithm fuses a classic hierarchical clustering scheme with approaches based on medoids, since it is intended to be applied to relational representations. We discuss its complexity and its potential applications to a variety of important tasks.

- Machine Learning | Pp. 266-277
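
The reason medoids suit relational representations is that a medoid is an actual cluster element, so no "mean individual" ever has to be constructed; only a (semi-)distance between existing individuals is needed. A minimal sketch:

```python
# The medoid of a cluster is the element minimizing the summed distance
# to all other elements -- works with any distance, including semantic
# semi-distances over individuals, where averaging is undefined.

def medoid(cluster, dist):
    return min(cluster, key=lambda x: sum(dist(x, y) for y in cluster))

points = [1, 2, 2, 3, 10]
print(medoid(points, lambda a, b: abs(a - b)))  # -> 2
```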

Similarity-Guided Clause Generalization

S. Ferilli; T. M. A. Basile; N. Di Mauro; M. Biba; F. Esposito

Few works in the literature define similarity criteria between First-Order Logic formulæ, where the presence of relations means that portions of one description may be mapped in different ways onto another description, which poses serious computational problems. Hence the need for a set of general criteria able to support the comparison between formulæ. This could have many applications; this paper tackles the case of two descriptions (e.g., a definition and an observation) to be generalized, where the similarity criteria can help in focusing on the subparts of the descriptions that are more similar, and hence more likely to correspond to each other, based only on their syntactic structure. Experiments on real-world datasets prove the effectiveness of the proposal and the efficiency of the corresponding implementation in a generalization procedure.

- Machine Learning | Pp. 278-289
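
A drastically simplified stand-in for such similarity criteria is a Jaccard index over the predicate/arity signatures of two clauses. The paper's criteria are far richer (they consider term structure and mappings between literals); this sketch only conveys the "compare syntactic structure, not constants" idea, and the clause encoding is invented.

```python
def signature(clause):
    # A clause here is a list of literals ("pred", arg1, arg2, ...);
    # each literal is abstracted to its predicate/arity signature.
    return {(pred, len(args)) for pred, *args in clause}

def clause_similarity(c1, c2):
    s1, s2 = signature(c1), signature(c2)
    return len(s1 & s2) / len(s1 | s2)  # Jaccard index

c1 = [("parent", "a", "b"), ("male", "a")]
c2 = [("parent", "x", "y"), ("female", "x")]
print(clause_similarity(c1, c2))  # shared parent/2 out of 3 signatures
```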

Structured Hidden Markov Model: A General Framework for Modeling Complex Sequences

Ugo Galassi; Attilio Giordana; Lorenza Saitta

Structured Hidden Markov Model (S-HMM) is a variant of the Hierarchical Hidden Markov Model that shows interesting capabilities for extracting knowledge from symbolic sequences. In fact, the S-HMM structure provides an abstraction mechanism that allows a high-level symbolic description of the knowledge embedded in an S-HMM to be easily obtained. The paper provides a theoretical analysis of the complexity of the matching and training algorithms on S-HMMs. More specifically, it is shown that the Baum-Welch algorithm benefits from the so-called locality property, which allows specific components to be modified and retrained without retraining the full model. The problems of modeling duration and of extracting (embedding) readable knowledge from (into) an S-HMM are also discussed.

- Machine Learning | Pp. 290-301
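
The matching cost analyzed for S-HMMs builds on the standard HMM forward algorithm, whose per-sequence cost is O(T·|S|²). A plain-HMM sketch (not the S-HMM extension; the two-state weather-style model below is invented for illustration):

```python
def forward(obs, states, start, trans, emit):
    # Standard forward algorithm: P(observation sequence | model).
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

states = ["r", "s"]
start = {"r": 0.5, "s": 0.5}
trans = {"r": {"r": 0.7, "s": 0.3}, "s": {"r": 0.4, "s": 0.6}}
emit = {"r": {"u": 0.9, "n": 0.1}, "s": {"u": 0.2, "n": 0.8}}
print(forward(["u", "n"], states, start, trans, emit))  # -> 0.1915
```

The locality property discussed in the abstract amounts to this computation (and the corresponding Baum-Welch statistics) decomposing per block, so retraining one block does not require recomputing the others.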

Nearest Local Hyperplane Rules for Pattern Classification

Gábor Takács; Béla Pataki

Predicting the class of an observation from its nearest neighbors is one of the earliest approaches in pattern recognition. In addition to their simplicity, nearest neighbor rules have appealing theoretical properties, e.g. the asymptotic error probability of the plain 1-nearest-neighbor (NN) rule is at most twice the Bayes bound, which means zero asymptotic risk in the separable case. But given only a finite number of training examples, NN classifiers are often outperformed in practice. A possible modification of the NN rule to handle separable problems better is the nearest local hyperplane (NLH) approach. In this paper we introduce a new way of NLH classification that has two advantages over the original NLH algorithm. First, our method preserves the zero asymptotic risk property of NN classifiers in the separable case. Second, it usually provides better finite sample performance.

- Machine Learning | Pp. 302-313
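
The nearest-local-hyperplane idea can be sketched in 2-D: for each class, take the query's k = 2 nearest prototypes of that class, and measure the distance from the query to the line through them (the "local hyperplane"); predict the class with the smaller distance. This is a simplified geometric sketch under invented data, not the paper's algorithm, which differs precisely in how the local hyperplane is built.

```python
def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def point_to_line(p, a, b):
    # Distance from p to the (infinite) line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def nlh_classify(query, labeled):
    best_label, best_d = None, float("inf")
    for lab in {l for _, l in labeled}:
        # Two nearest same-class prototypes span the local "hyperplane".
        pts = sorted((p for p, l in labeled if l == lab),
                     key=lambda p: dist2(p, query))[:2]
        d = point_to_line(query, *pts)
        if d < best_d:
            best_label, best_d = lab, d
    return best_label

data = [((0, 0), "A"), ((2, 0), "A"), ((0, 3), "B"), ((2, 3), "B")]
print(nlh_classify((1, 1), data))  # -> 'A'
```

Compared with plain 1-NN, the query is measured against a locally interpolated class manifold rather than a single stored point, which is what helps in the finite-sample separable case.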

The JIGSAW Algorithm for Word Sense Disambiguation and Semantic Indexing of Documents

P. Basile; M. Degemmis; A. L. Gentile; P. Lops; G. Semeraro

Word Sense Disambiguation (WSD) is traditionally considered an AI-hard problem. In fact, a breakthrough in this field would have a significant impact on many relevant fields, such as information retrieval and information extraction. This paper describes JIGSAW, a knowledge-based WSD algorithm that attempts to disambiguate all words in a text by exploiting WordNet senses. The main assumption is that a Part-Of-Speech (POS)-dependent strategy for WSD can turn out to be more effective than a single strategy for all POS. Semantics provided by WSD gives an added value to applications centred on humans as users. Two empirical evaluations are described in the paper. First, we evaluated the accuracy of JIGSAW on Task 1 of the SEMEVAL-1 competition, which measures the effectiveness of a WSD algorithm in an Information Retrieval System. For the second evaluation, we used semantically indexed documents obtained through a WSD process to train a naïve Bayes learner that infers “semantic” user profiles as binary text classifiers. The goal of the second evaluation was to measure the accuracy of the user profiles in selecting relevant documents to be recommended within a document collection.

- Natural Language Processing | Pp. 314-325
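
Knowledge-based WSD of this kind can be illustrated with a gloss-overlap (Lesk-style) sketch: pick the sense whose dictionary gloss shares the most words with the context. This is a generic simplified stand-in, not JIGSAW's POS-dependent strategy, and the two-sense mini inventory below is invented rather than taken from WordNet.

```python
# Illustrative gloss-overlap disambiguation over an invented inventory.
SENSES = {
    "bank": {
        "bank#1": "financial institution that accepts deposits money",
        "bank#2": "sloping land beside a river water",
    }
}

def disambiguate(word, context):
    ctx = set(context.lower().split())
    def overlap(sense):
        # Count context words also appearing in the sense gloss.
        return len(ctx & set(SENSES[word][sense].split()))
    return max(SENSES[word], key=overlap)

print(disambiguate("bank", "he deposited money at the bank"))  # -> bank#1
```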

Data-Driven Dialogue for Interactive Question Answering

Roberto Basili; Diego De Cao; Cristina Giannone; Paolo Marocco

In this paper, a lightweight framework for dialogue-based interactive question answering, called REQUIRE, is presented. REQUIRE is a flexible and adaptive platform for domain-specific dialogue: a domain-driven dialogue system whose aim is to support the specific tasks evoked by interactive question answering scenarios. Among its benefits are its applicability across different domains, its flexibility through adaptive models of speech act recognition and planning, and its adherence to knowledge representation standards. The framework is exemplified through its application within a sexual health information service tailored to young people.

- Natural Language Processing | Pp. 326-338

GlossExtractor: A Web Application to Automatically Create a Domain Glossary

Roberto Navigli; Paola Velardi

We describe a web application, GlossExtractor, that takes as input the output of a terminology extraction web application, or a user-provided terminology, and then searches several repositories (on-line glossaries, web documents, user-specified web pages) for sentences that are candidate definitions for each of the input terms. Candidate definitions are then filtered using statistical indicators and machine-learned regular patterns. Finally, the user can inspect the acquired definitions and perform an individual or group validation. The validated glossary can then be downloaded in one of several formats.

- Natural Language Processing | Pp. 339-349
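
The pattern-based filtering step can be illustrated with a hand-written definitional pattern, keeping sentences shaped like "<term> is a/an/the ...". In the actual system these patterns are machine-learned and accompanied by statistical indicators; the single regex below is an invented stand-in.

```python
import re

def is_candidate_definition(term, sentence):
    # Keep sentences matching a simple definitional pattern:
    # "<term> is|are a|an|the ...", case-insensitive.
    pattern = r"^\s*" + re.escape(term) + r"\s+(is|are)\s+(a|an|the)\b"
    return bool(re.match(pattern, sentence, re.IGNORECASE))

print(is_candidate_definition("ontology",
                              "Ontology is a formal specification"))   # True
print(is_candidate_definition("ontology",
                              "Ontology was discussed at the meeting"))  # False
```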