Publications catalog - books



Modeling Decisions for Artificial Intelligence: 4th International Conference, MDAI 2007, Kitakyushu, Japan, August 16-18, 2007. Proceedings

Vicenç Torra ; Yasuo Narukawa ; Yuji Yoshida (eds.)

Conference: 4th International Conference on Modeling Decisions for Artificial Intelligence (MDAI). Kitakyushu, Japan. August 16-18, 2007

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Artificial Intelligence (incl. Robotics); Mathematical Logic and Formal Languages; Computation by Abstract Devices; Data Mining and Knowledge Discovery; Simulation and Modeling; Operations Research/Decision Theory

Availability

Detected institution: not detected
Year of publication: 2007
Browse at: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-73728-5

Electronic ISBN

978-3-540-73729-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Research on Next Generation Grids Using a Fuzzy Assessment Method

Chunyang Ye; Feng Xiong; Qi Luo

The main issues to be faced by next-generation Grids are the management and exploitation of the overwhelming amount of data produced by applications and by Grid operation itself, and the intelligent use of Grid resources and services. To achieve these ambitious goals, next-generation Grids should include knowledge discovery and knowledge management functionalities, for both applications and system management. This paper proposes a fuzzy assessment method in which the assessment of next-generation Grids combines objective and subjective assessment. Tests on a sample system show that the assessment method is effective.

- Decision Making | Pp. 114-120
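As an editorial aside, assessments of the kind this abstract describes are commonly carried out as a weighted fuzzy comprehensive evaluation. The sketch below illustrates that general pattern; the criteria, weights and membership grades are assumptions for illustration, not values from the paper.

```python
# A minimal fuzzy comprehensive evaluation sketch; criteria, weights and
# membership grades are illustrative assumptions, not values from the paper.
import numpy as np

# Rows: assessment criteria (e.g. resource utilisation, service quality, cost);
# columns: membership grades over the rating levels (good, fair, poor).
membership = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.2, 0.5, 0.3],
])

# Criterion weights (an objective part might come from measurements, a
# subjective part from expert judgement); they sum to 1.
weights = np.array([0.5, 0.3, 0.2])

# Aggregate the grades per rating level and pick the dominant level.
evaluation = weights @ membership
print("fuzzy evaluation vector:", evaluation)
print("dominant rating level:", int(evaluation.argmax()))
```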

Combining Prioritized Decisions in Classification

Yaxin Bi; Shengli Wu; Gongde Guo

In this paper we present an alternative evidential method of combining prioritized decisions in order to arrive at a “consensus”, or aggregate, decision. Previous studies have suggested that, in some classification domains, better performance can be achieved by combining the first and second decisions from each evidence source. However, it is easy to illustrate that going further down a decision list, so as to include more of the preferred decisions, provides an alternative to combining only the first and second decisions. Our objective here is to examine the theoretical aspects of this alternative method, namely how extending a decision list of any length by one extra preferred decision affects classification results. We also present experimental results demonstrating the effectiveness of the alternative method.

- Decision Making | Pp. 121-132
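For orientation, combining decisions from several sources in the evidential framework typically relies on Dempster's rule of combination over mass functions whose focal elements are the sources' top-ranked decisions. The sketch below shows that rule on two assumed sources; the class labels and mass values are hypothetical and not taken from the paper.

```python
# A minimal sketch of Dempster's rule of combination for two decision sources;
# labels and masses are illustrative assumptions.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as dicts {frozenset(labels): mass}."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass (assumes conflict < 1).
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Each source keeps its first and second preferred decisions, mimicking the
# combination of prioritised decision lists discussed in the abstract.
m1 = {frozenset({"spam"}): 0.6, frozenset({"spam", "ham"}): 0.4}
m2 = {frozenset({"ham"}): 0.5, frozenset({"spam", "ham"}): 0.5}
print(combine(m1, m2))
```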

An Aggregation of Agents, Roles and Coalition Formation to Support Collaborative and Dynamic Organizations

Nora Houari; Behrouz H. Far

Capturing human resources, their capabilities and their dynamics can contribute greatly to the efficient and effective accomplishment of an organization's mission goals. In practice, however, this is often done in an ad hoc manner, leading to a large gap between the organizational specification model (at design time) and its instance model (during the life cycle). In this paper we present a generic organizational model that captures human resources, their operational and strategic capabilities, and adaptive coalition formation. The key elements of the model are capabilities, roles, agents and coalitions. We show how the resulting model can be used for controlling role functionalities and performance, as well as for capturing knowledge in context-aware applications, where it is often present only implicitly.

- Decision Making | Pp. 133-144
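The following is a minimal data-structure sketch of the model's key elements (capabilities, roles, agents, coalitions). All class and field names are illustrative assumptions, not the paper's formal model.

```python
# A toy encoding of capabilities, roles, agents and coalitions; names and
# fields are hypothetical, chosen only to make the elements concrete.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    required_capabilities: set[str]

@dataclass
class Agent:
    name: str
    capabilities: set[str]

    def can_play(self, role: Role) -> bool:
        # An agent can play a role if it covers the required capabilities.
        return role.required_capabilities <= self.capabilities

@dataclass
class Coalition:
    goal: str
    members: dict[str, Agent] = field(default_factory=dict)  # role name -> agent

    def assign(self, role: Role, agent: Agent) -> bool:
        if agent.can_play(role):
            self.members[role.name] = agent
            return True
        return False

planner = Role("planner", {"scheduling", "estimation"})
alice = Agent("alice", {"scheduling", "estimation", "reporting"})
team = Coalition(goal="release-1.0")
print(team.assign(planner, alice), list(team.members))
```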

Multidimensional Fuzzy Integrals

Yasuo Narukawa; Vicenç Torra

This paper studies multidimensional fuzzy integrals. We consider a multidimensional generalized fuzzy integral that generalizes the multidimensional Choquet and Sugeno integrals. Then, an extension of part of Fubini’s theorem is given. Finally, citation analysis is considered as an example of the application of the results presented here.

- Non Additive Measures and Concept Lattices | Pp. 145-155
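For reference, the (one-dimensional, discrete) Choquet and Sugeno integrals that the paper's generalized fuzzy integral subsumes are usually written as

$$(C)\!\int f\,d\mu=\sum_{i=1}^{n}\bigl(f(x_{(i)})-f(x_{(i-1)})\bigr)\,\mu(A_{(i)}),\qquad (S)\!\int f\,d\mu=\max_{i=1,\dots,n}\min\bigl(f(x_{(i)}),\,\mu(A_{(i)})\bigr),$$

where $f(x_{(1)})\le\dots\le f(x_{(n)})$, $f(x_{(0)}):=0$ and $A_{(i)}=\{x_{(i)},\dots,x_{(n)}\}$. The multidimensional and Fubini-type statements in the paper are built on these standard definitions; the display above is an orientation aid, not the paper's generalized integral.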

Lindig’s Algorithm for Concept Lattices over Graded Attributes

Radim Belohlavek; Bernard De Baets; Jan Outrata; Vilem Vychodil

Formal concept analysis (FCA) is a method of exploratory data analysis. The data is in the form of a table describing a relationship between objects (rows) and attributes (columns), where table entries are grades representing the degrees to which objects have attributes. The main output of FCA is a hierarchical structure (the so-called concept lattice) of conceptual clusters (so-called formal concepts) present in the data. This paper focuses on algorithmic aspects of FCA of data with graded attributes. Namely, we focus on the problem of efficiently generating all clusters present in the data together with their subconcept-superconcept hierarchy. We present theoretical foundations, the algorithm, an analysis of its efficiency, and a comparison with other algorithms.

- Non Additive Measures and Concept Lattices | Pp. 156-167
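As a companion to the entry above, the sketch below enumerates the formal concepts of a tiny crisp (binary) context by brute force. The paper's contribution is an efficient Lindig-style algorithm for the graded case; this toy example, with an assumed context, only illustrates what a formal concept is.

```python
# Brute-force enumeration of formal concepts of a small crisp context;
# the context itself is an illustrative assumption.
from itertools import combinations

objects = ["o1", "o2", "o3"]
attributes = ["a", "b", "c"]
incidence = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o2", "c"), ("o3", "c")}

def extent(attrs):
    """Objects having all attributes in attrs."""
    return frozenset(o for o in objects if all((o, a) in incidence for a in attrs))

def intent(objs):
    """Attributes shared by all objects in objs."""
    return frozenset(a for a in attributes if all((o, a) in incidence for o in objs))

concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(attributes, r):
        ext = extent(frozenset(attrs))
        concepts.add((ext, intent(ext)))  # (extent, intent) pairs are closed

for ext, itt in sorted(concepts, key=lambda c: -len(c[0])):
    print(set(ext) or "{}", set(itt) or "{}")
```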

A Version of Lebesgue Decomposition Theorem for Non-additive Measure

Jun Li; Masami Yasuda; Jianzeng Li

In this paper, Lebesgue decomposition type theorems for non-additive measures are shown under conditions such as null-additivity, converse null-additivity, weak null-additivity and σ-null-additivity. In our discussion, the monotone continuity of the set function is not required.

- Non Additive Measures and Concept Lattices | Pp. 168-173
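For orientation, the classical (σ-additive) Lebesgue decomposition that the paper transfers to non-additive measures states that, for σ-finite measures $\nu$ and $\mu$ on a measurable space $(X,\mathcal{A})$, there is a unique decomposition

$$\nu=\nu_{ac}+\nu_{s},\qquad \nu_{ac}\ll\mu,\qquad \nu_{s}\perp\mu,$$

i.e. $\nu_{ac}$ vanishes on $\mu$-null sets and $\nu_{s}$ is concentrated on a $\mu$-null set. The paper establishes analogous decompositions when additivity is weakened to the null-additivity conditions listed above; the display here is only the classical statement for comparison.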

Trees in Concept Lattices

Radim Belohlavek; Bernard De Baets; Jan Outrata; Vilem Vychodil

The paper presents theorems characterizing concept lattices which happen to be trees after removing the bottom element. Concept lattices are the clustering/classification systems provided as the output of formal concept analysis. In general, a concept lattice may contain overlapping clusters and need not be a tree. On the other hand, tree-like classification schemes are appealing and are produced as output by several classification methods. This paper attempts to help establish a bridge between concept lattices and tree-based classification methods. We present conditions on the input data that are necessary and sufficient for the output concept lattice to form a tree after its bottom element is removed. In addition, we present illustrative examples and several remarks on related efforts and future research topics.

- Non Additive Measures and Concept Lattices | Pp. 174-184
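A small worked example of the phenomenon being characterized (the example is an editorial illustration, not taken from the paper): take objects $g_1,g_2$ and attributes $m_1,m_2$, with $g_1$ having only $m_1$ and $g_2$ having only $m_2$. The concepts of this context are

$$(\{g_1,g_2\},\varnothing),\quad(\{g_1\},\{m_1\}),\quad(\{g_2\},\{m_2\}),\quad(\varnothing,\{m_1,m_2\}),$$

and removing the bottom element $(\varnothing,\{m_1,m_2\})$ leaves a tree: the top concept with two incomparable children.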

An Axiomatization of Shapley Values of Games on Set Systems

Aoi Honda; Yoshiaki Okazaki

An axiomatization of a generalized Shapley value of games is proposed. We follow Faigle and Kern in the sense that our basic material is the maximal chains of the underlying set system. This generalized Shapley value may be applicable to games on set systems that satisfy a sort of normality condition.

- Non Additive Measures and Concept Lattices | Pp. 185-192
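For reference, the classical Shapley value being generalized assigns to player $i$ in a game $v$ on the full power set of $N$ the amount

$$\phi_i(v)=\sum_{S\subseteq N\setminus\{i\}}\frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S\cup\{i\})-v(S)\bigr),$$

equivalently the average marginal contribution of $i$ over all maximal chains (orderings) of the power set. The maximal-chain viewpoint is the one carried over to general set systems in the Faigle-Kern tradition; the formula above is the classical case, shown here only for comparison.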

Formulation of Fuzzy c-Means Clustering Using Calculus of Variations

Sadaaki Miyamoto

A membership matrix of fuzzy c-means clustering is associated with the corresponding fuzzy classification rules as membership functions defined on the whole space. In this paper such functions in fuzzy c-means and possibilistic clustering are directly derived using the calculus of variations. Consequently, the present formulation generalizes ordinary fuzzy c-means, and related methods can be discussed within this framework.

- Clustering and Rough Sets | Pp. 193-203
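For orientation, standard fuzzy c-means minimizes $J_m=\sum_{i=1}^{c}\sum_{k=1}^{N}u_{ik}^{m}\,\|x_k-v_i\|^2$ subject to $\sum_{i=1}^{c}u_{ik}=1$, and the optimal memberships

$$u_{ik}=\Biggl(\sum_{j=1}^{c}\Bigl(\frac{\|x_k-v_i\|}{\|x_k-v_j\|}\Bigr)^{2/(m-1)}\Biggr)^{-1}$$

extend naturally to membership functions $u_i(x)$ defined for every point $x$ of the space. This is the standard formulation only; the paper derives such whole-space classification rules rigorously via the calculus of variations.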

CWC: A Clustering-Based Feature Weighting Approach for Text Classification

Lin Zhu; Jihong Guan; Shuigeng Zhou

Most existing text classification methods use the vector space model to represent documents, and the document vectors are weighted by the tf-idf method. However, tf-idf weighting does not take into account the fact that the weight of a feature in a document is related not only to the document, but also to the class that the document belongs to. In this paper, we present a Clustering-based feature Weighting approach for text Classification, or CWC for short. CWC takes each class in the training collection as a known cluster, and searches for feature weights iteratively to optimize the clustering objective function, so that the best clustering result is achieved and documents in different classes are best distinguished by the resulting feature weights. The performance of CWC is validated by classification over two real text collections, and experimental results show that CWC outperforms traditional kNN.

- Clustering and Rough Sets | Pp. 204-215
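To make the setting concrete, the sketch below builds tf-idf vectors for a toy corpus and computes a crude per-class feature weight (class mean relative to global mean). This heuristic is only an illustrative stand-in for CWC's iterative, clustering-based weight optimization; the corpus, labels and weighting rule are assumptions.

```python
# tf-idf vectors plus a simple per-class feature weight; a toy stand-in for
# the class-as-cluster feature weighting idea, not the paper's CWC algorithm.
import math
from collections import Counter

docs = ["cheap pills online", "buy cheap pills",
        "meeting agenda attached", "project meeting notes"]
labels = ["spam", "spam", "ham", "ham"]

vocab = sorted({w for d in docs for w in d.split()})
df = Counter(w for d in docs for w in set(d.split()))  # document frequency

def tfidf(doc):
    tf = Counter(doc.split())
    return [tf[w] * math.log(len(docs) / df[w]) for w in vocab]

vectors = [tfidf(d) for d in docs]

# Per-class feature weights: how much more a feature matters inside the class
# than globally (a crude proxy for clustering-based weight optimisation).
weights = {}
for c in set(labels):
    idx = [i for i, l in enumerate(labels) if l == c]
    weights[c] = [
        (sum(vectors[i][j] for i in idx) / len(idx)) /
        (sum(v[j] for v in vectors) / len(vectors) + 1e-9)
        for j in range(len(vocab))
    ]

print(vocab)
print({c: [round(w, 2) for w in ws] for c, ws in weights.items()})
```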