Publications catalogue - books
Knowledge Science, Engineering and Management: First International Conference, KSEM 2006, Guilin, China, August 5-8, 2006, Proceedings
Jérôme Lang ; Fangzhen Lin ; Ju Wang (eds.)
In conference: 1st International Conference on Knowledge Science, Engineering and Management (KSEM). Guilin, China. August 5, 2006 - August 8, 2006
Abstract/Description - provided by the publisher
Not available.
Keywords - provided by the publisher
Not available.
Availability
| Detected institution | Publication year | Browse | Download | Request |
|---|---|---|---|---|
| Not detected | 2006 | SpringerLink | | |
Information
Resource type:
Books
Print ISBN
978-3-540-37033-8
Electronic ISBN
978-3-540-37035-2
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2006
Publication rights information
© Springer-Verlag Berlin Heidelberg 2006
Table of contents
doi: 10.1007/11811220_21
Selection of Materialized Relations in Ontology Repository Management System
Man Li; Xiaoyong Du; Shan Wang
As ontologies grow in scale and complexity, the query performance of an Ontology Repository Management System (ORMS) becomes increasingly important. This paper proposes a materialized-relations technique that speeds up query processing in an ORMS by making the implicit derived relations of an ontology explicit. The selection of which relations to materialize is the key problem, because materialization trades reduced inference time against maintenance cost and storage space, and this problem has not previously been treated formally. The paper therefore proposes a QSS model to describe an ontology's query set formally, and, based on it, gives a benefit-evaluation model and a selection algorithm for materialized relations. The method considers not only the query-response benefit of materialization but also its storage and maintenance costs. Finally, an application case shows that the selection method is effective.
- Regular Papers | Pp. 241-251
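The abstract describes a benefit-versus-cost trade-off in choosing which derived relations to materialize. As a rough illustration of that trade-off only (this is not the paper's QSS-based algorithm; the relation names, weights, and greedy strategy below are invented), a budgeted greedy selection might look like:

```python
def select_materializations(candidates, storage_budget):
    """Greedy, budgeted selection of relations to materialize,
    ranked by net benefit (query time saved minus maintenance
    cost) per unit of storage."""
    ranked = sorted(
        candidates,
        key=lambda c: (c["benefit"] - c["maintenance"]) / c["storage"],
        reverse=True,
    )
    chosen, used = [], 0
    for c in ranked:
        if c["benefit"] > c["maintenance"] and used + c["storage"] <= storage_budget:
            chosen.append(c["name"])
            used += c["storage"]
    return chosen

# Hypothetical derived relations of an ontology (all weights made up)
cands = [
    {"name": "subClassOf*", "benefit": 9.0, "maintenance": 2.0, "storage": 3},
    {"name": "partOf*",     "benefit": 4.0, "maintenance": 3.5, "storage": 1},
    {"name": "typeOf*",     "benefit": 6.0, "maintenance": 1.0, "storage": 4},
]
print(select_materializations(cands, 6))   # ['subClassOf*', 'partOf*']
```

Under the budget of 6, the transitive closure with the best net-benefit density is taken first; the next-ranked relation no longer fits, so the cheaper one is taken instead.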
doi: 10.1007/11811220_22
Combining Topological and Directional Information: First Results
Sanjiang Li
Representing and reasoning about spatial information is important in artificial intelligence and geographical information science. Relations between spatial entities are the most important kind of spatial information. Most current formalisms of spatial relations focus on a single aspect of space. This contrasts sharply with real-world applications, where several aspects are usually involved together. This paper proposes a qualitative calculus that combines a simple directional relation model with the well-known topological RCC5 model. We show by construction that the consistency of atomic networks can be decided in polynomial time.
- Regular Papers | Pp. 252-264
doi: 10.1007/11811220_23
Measuring Conflict Between Possibilistic Uncertain Information Through Belief Function Theory
Weiru Liu
Dempster-Shafer theory of evidence (DS theory) and possibility theory are two main formalisms for modelling and reasoning with uncertain information. These two theories are inter-related, as already observed and discussed in many papers (e.g. [DP82, DP88b]). One aspect common to the two theories is how to quantitatively measure the degree of conflict (or inconsistency) between pieces of uncertain information. In DS theory, this is traditionally judged by the combined mass value assigned to the empty set. Recently, two new approaches to measuring the conflict among belief functions were proposed in [JGB01, Liu06]: the former provides a distance-based method to quantify how close a pair of beliefs is, while the latter deploys a pair of values to reveal the degree of conflict between two belief functions. In possibility theory, on the other hand, conflict is measured through the degree of inconsistency of the merged information. However, this measure is not sufficient when pairs of uncertain information have the same degree of inconsistency, and at present there are no other alternatives that can further differentiate them, apart from an initial proposal based on coherence intervals ([HL05a, HL05b]). In this paper, we investigate how the two new approaches developed in DS theory can be used to measure the conflict among possibilistic uncertain information. We also examine how the reliability of a source can be assessed, in order to weaken a source when a conflict arises.
- Regular Papers | Pp. 265-277
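The traditional DS-theory conflict measure mentioned above is the mass that Dempster's rule of combination assigns to the empty set. A minimal sketch (the frame of discernment and the two mass functions are made-up examples):

```python
from itertools import product

def conflict_mass(m1, m2):
    """Mass that Dempster's rule of combination assigns to the empty
    set: the total weight of pairs of focal elements with empty
    intersection (the classical conflict measure in DS theory)."""
    return sum(
        w1 * w2
        for (a, w1), (b, w2) in product(m1.items(), m2.items())
        if not (a & b)
    )

# Frame of discernment {x, y, z}; focal elements given as frozensets
m1 = {frozenset("x"): 0.8, frozenset("xyz"): 0.2}
m2 = {frozenset("y"): 0.7, frozenset("xyz"): 0.3}
print(conflict_mass(m1, m2))   # conflict = 0.8 * 0.7
```

Only the pair ({x}, {y}) is disjoint, so the conflict is the product of their masses; the [JGB01] and [Liu06] measures the paper builds on refine this single number.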
doi: 10.1007/11811220_24
WWW Information Integration Oriented Classification Ontology Integrating Approach
Anxiang Ma; Kening Gao; Bin Zhang; Yu Wang; Ying Yin
In WWW information integration, eliminating semantic heterogeneity and achieving semantic combination is one of the key problems. This paper introduces classification ontology into WWW information integration to solve the semantic-combination problem posed by heterogeneous classification architectures in Web information integration. However, a specific domain may contain many kinds of ontology, owing to differences in website structure, domain experts, and goals, so all of these ontologies must be combined into a logically unified, integrated classification ontology in order to resolve semantic heterogeneity properly. The paper mainly discusses how to build an integrated classification ontology from individual ontologies: it presents the definition of classification ontology, analyses the conceptual mappings and relational mappings between ontologies, and resolves level conflicts among equivalent concepts.
- Regular Papers | Pp. 278-291
doi: 10.1007/11811220_25
Configurations for Inference Between Causal Statements
Philippe Besnard; Marie-Odile Cordier; Yves Moinard
When dealing with a cause, cases involving some effect due to that cause are precious, as such cases contribute to what the cause is; they must be reasoned upon if inference about causes is to take place. It thus seems that a good logic for causes would arise from a semantics based on collections of cases, called configurations, that gather instances of a given cause yielding some effect(s). Two crucial features of this analysis of causation are transitivity, which is endorsed here, and the event-based formulation, which is given up here in favour of a fact-based approach. One reason is that the proposed logic is ultimately meant to deal with both deduction (given a cause, what is to hold?) and abduction (given the facts, what could be the cause?), thus paving the way to the inference of explanations. The logic developed is shown to enjoy many desirable traits. These traits form a basic kernel which can be modified, but which cannot be extended significantly without losing adequacy with the nature of causation rules.
- Regular Papers | Pp. 292-304
doi: 10.1007/11811220_26
Taking Levi Identity Seriously: A Plea for Iterated Belief Contraction
Abhaya Nayak; Randy Goebel; Mehmet Orgun; Tam Pham
Most work on iterated belief change has focused on iterated belief revision, namely how to compute (K ∗ x) ∗ y. Historically, however, belief revision can be defined in terms of belief expansion and belief contraction, with expansion and contraction viewed as the primary operators. Accordingly, attention to iterated belief change should focus on constructions like (K + x) + y, (K − x) − y, (K + x) − y and (K − x) + y. The first two of these are relatively straightforward, but the last two are more problematic. Here we consider these latter two, and formulate iterated belief change by employing the Levi identity and the Harper identity as the guiding principles.
- Regular Papers | Pp. 305-317
doi: 10.1007/11811220_27
Description and Generation of Computational Agents
Roman Neruda; Gerd Beuster
A formalism for the logical description of computational agents and multi-agent systems is given. We explain how such a formal description can be used to configure, and reason about, multi-agent systems realizing computational-intelligence models, and demonstrate its use within a real software system. The logical description of multi-agent systems also opens the way to interaction with ontology-based distributed knowledge systems such as the Semantic Web.
- Regular Papers | Pp. 318-329
doi: 10.1007/11811220_28
Knowledge Capability: A Definition and Research Model
Ye Ning; Zhi-Ping Fan; Bo Feng
Building on the dynamic-capability and knowledge-based views, this paper explores the definition and dimensions of knowledge capability. Unlike previous literature, which treats knowledge capability as the sum of an organization's knowledge assets, this paper defines knowledge capability as comprising both knowledge assets and knowledge-operating capacities. It further proposes that knowledge capability is dynamic, that is, it is reconstructed as the environment changes. Since there are few empirical studies on the relationship between this capability and organizational performance, the paper suggests a model for further empirical study of the impact of knowledge capability on organizational performance.
- Regular Papers | Pp. 330-340
doi: 10.1007/11811220_29
Quota-Based Merging Operators for Stratified Knowledge Bases
Guilin Qi; Weiru Liu; David A. Bell
Current merging methods for stratified knowledge bases are often based on the commensurability assumption, i.e. that all knowledge bases share a common scale. However, this assumption is too strong in practice. In this paper, we propose a family of operators for merging stratified knowledge bases without the commensurability assumption. Our merging operators generalize the quota operators, an important family of merging operators in classical logic. Both the logical properties and the computational complexity of the proposed operators are studied.
- Regular Papers | Pp. 341-353
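For the classical case the abstract generalizes from, a quota operator can be read model-theoretically: an interpretation is a model of the merged base iff it satisfies at least k of the input bases. A brute-force sketch over truth assignments (the example bases are illustrative, and stratification is deliberately not modelled here):

```python
from itertools import product

def quota_merge(bases, k, atoms):
    """Classical quota operator, model-theoretically: keep every
    truth assignment satisfying at least k of the input bases."""
    models = []
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if sum(1 for base in bases if base(world)) >= k:
            models.append(world)
    return models

# Three illustrative bases over atoms p, q, encoded as predicates
b1 = lambda w: w["p"] and w["q"]
b2 = lambda w: w["p"]
b3 = lambda w: not w["q"]
merged = quota_merge([b1, b2, b3], k=2, atoms=["p", "q"])
print(merged)   # exactly the assignments where p is true
```

With k = 2 (a majority quota), both surviving assignments make p true, so the merged base entails p while staying agnostic about the conflict over q.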
doi: 10.1007/11811220_30
Enumerating Minimal Explanations by Minimal Hitting Set Computation
Ken Satoh; Takeaki Uno
We consider the problem of enumerating minimal explanations in a propositional theory. We propose a new way of characterizing the enumeration problem in terms of not only the number of minimal explanations but also the number of maximal unexplanations. A maximal unexplanation is a maximal set of abducible formulas that cannot explain the observation given a background theory. In this paper, we enumerate minimal explanations and maximal unexplanations in an interleaved manner. To the best of our knowledge, there has been no previous algorithm characterized in terms of such maximal unexplanations. We propose two algorithms to perform this task and analyze them in terms of query complexity, space complexity and time complexity.
- Regular Papers | Pp. 354-365
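The hitting-set computation named in the title can be illustrated with a naive enumerator over an explicit set family; the paper's interleaved, complexity-analyzed algorithms are far more refined, so this is only a brute-force sketch with an invented example family:

```python
from itertools import combinations

def minimal_hitting_sets(family):
    """Enumerate all minimal hitting sets of a family of sets by brute
    force, in order of increasing size, so any candidate containing a
    previously found hitting set is pruned as non-minimal."""
    universe = sorted(set().union(*family))
    minimal = []
    for size in range(1, len(universe) + 1):
        for cand in combinations(universe, size):
            c = set(cand)
            if all(c & s for s in family) and not any(m <= c for m in minimal):
                minimal.append(c)
    return minimal

family = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
print(minimal_hitting_sets(family))   # three minimal hitting sets, each of size 2
```

No single element intersects all three sets, so every two-element subset of {a, b, c} is a minimal hitting set; enumerating by increasing size makes the minimality check a simple subset test against earlier output.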