Publications catalogue - books


Modeling Decisions for Artificial Intelligence: 4th International Conference, MDAI 2007, Kitakyushu, Japan, August 16-18, 2007. Proceedings

Vicenç Torra ; Yasuo Narukawa ; Yuji Yoshida (eds.)

Conference: 4th International Conference on Modeling Decisions for Artificial Intelligence (MDAI). Kitakyushu, Japan. August 16-18, 2007

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Artificial Intelligence (incl. Robotics); Mathematical Logic and Formal Languages; Computation by Abstract Devices; Data Mining and Knowledge Discovery; Simulation and Modeling; Operations Research/Decision Theory

Availability

Detected institution: not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-73728-5

Electronic ISBN

978-3-540-73729-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

A Novel Spatial Clustering Algorithm with Sampling

Cai-Ping Hu; Xiao-Lin Qin; Jun Zhang

Spatial clustering is one of the most important spatial data mining techniques, and many spatial clustering algorithms have been proposed. One well-known density-based algorithm can discover clusters of arbitrary shape and handle noise effectively, but it has several disadvantages. First, it clusters using only spatial attributes and does not consider the non-spatial attributes present in spatial databases. Second, on large-scale spatial databases it requires a large amount of memory and incurs high I/O cost. In this paper, a novel sampling-based spatial clustering algorithm is developed that not only clusters large-scale spatial databases effectively but also considers both spatial and non-spatial attributes. Experimental results on 2-D spatial datasets show that the algorithm is feasible and efficient.

- Clustering and Rough Sets | Pp. 216-225
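The abstract above describes a density-based base algorithm (its name is lost in this record) that grows clusters of arbitrary shape from dense regions and marks sparse points as noise. A minimal sketch of that general idea, in the DBSCAN style and not the paper's actual algorithm, might look like:

```python
def density_cluster(points, eps=1.5, min_pts=3):
    """DBSCAN-style density clustering (illustrative sketch only).
    Returns {point index: cluster id}, with -1 marking noise."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    labels = {}
    cluster_id = 0
    for i, p in enumerate(points):
        if i in labels:
            continue
        neighbors = [j for j, q in enumerate(points) if dist(p, q) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        labels[i] = cluster_id      # i is a core point: start a new cluster
        queue = [j for j in neighbors if j != i]
        while queue:                # expand the cluster through core points
            j = queue.pop()
            if j in labels:
                if labels[j] == -1:
                    labels[j] = cluster_id   # noise absorbed as border point
                continue
            labels[j] = cluster_id
            nbrs = [k for k, q in enumerate(points) if dist(points[j], q) <= eps]
            if len(nbrs) >= min_pts:         # j is itself a core point
                queue.extend(k for k in nbrs if k not in labels)
        cluster_id += 1
    return labels
```

The memory and I/O drawbacks the abstract mentions are visible here: every expansion step scans the full point set, which is what sampling-based variants try to avoid.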

Algorithms for Sequential Extraction of Clusters by Possibilistic Clustering

Sadaaki Miyamoto; Youhei Kuroda

Possibilistic clustering, which is robust to noise in data, is a useful tool alongside the well-known fuzzy c-means. However, possibilistic clustering depends strongly on initial values, and an algorithm has been proposed that generates 'one cluster at a time'; this method is related to the mountain clustering algorithm. In this paper these features are reconsidered, and a number of algorithms for sequential generation of clusters, including a possibilistic medoid clustering, are proposed. These algorithms automatically determine the number of clusters. An illustrative example comparing the different sequential clustering methods is given.

- Clustering and Rough Sets | Pp. 226-236

Fuzzy c-Means for Data with Tolerance Defined as Hyper-Rectangle

Yasushi Hasegawa; Yasunori Endo; Yukihiro Hamasuna; Sadaaki Miyamoto

The paper presents new clustering algorithms based on fuzzy c-means. The algorithms can handle data with tolerance defined as a hyper-rectangle. First, the tolerance, a generalization of calculation errors or missing values, is introduced into the optimization problems of clustering. Next, the problems are solved and algorithms are constructed from the results. Finally, the usefulness of the proposed algorithms is verified through numerical examples.

- Clustering and Rough Sets | Pp. 237-248
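For reference, the base algorithm being extended is standard fuzzy c-means, which alternates a membership update and a prototype update. A plain 1-D sketch (the hyper-rectangle tolerance term of the paper is omitted here):

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=100):
    """Plain 1-D fuzzy c-means: the base algorithm the paper extends."""
    xs = sorted(xs)
    step = (len(xs) - 1) / (c - 1) if c > 1 else 0
    centers = [xs[round(i * step)] for i in range(c)]  # spread initial prototypes
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for x in xs:
            ds = [abs(x - v) or 1e-12 for v in centers]  # avoid division by zero
            U.append([1.0 / sum((ds[i] / ds[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # prototype update: mean of the data weighted by u^m
        centers = [sum(U[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(U[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers, U
```

The tolerance extension would add, per datum, a bounded displacement variable constrained to a hyper-rectangle inside the same objective, which is what the solved optimization problems in the paper provide.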

Kernel Functions Based on Fuzzy Neighborhoods and Agglomerative Clustering

Sadaaki Miyamoto; Yuichi Kawasaki

A fuzzy neighborhood model for analyzing information systems having topological structures on occurrences of keywords is proposed, and associated kernel functions are studied. Sufficient conditions under which a neighborhood defines a kernel are derived. Accordingly, agglomerative clustering algorithms that employ kernel functions become applicable. An illustrative example is given.

- Clustering and Rough Sets | Pp. 249-260
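A similarity function is a valid (Mercer) kernel exactly when every similarity matrix it produces is positive semidefinite; this is the property the paper's sufficient conditions guarantee. A sketch that checks the condition numerically for a Gaussian similarity (a stand-in here for a neighborhood-derived one) and computes the feature-space distance used by kernel agglomerative clustering:

```python
import math

def gaussian_kernel_matrix(xs, gamma=1.0):
    """K[i][j] = exp(-gamma * |x_i - x_j|^2), a classic similarity known
    to satisfy Mercer's condition."""
    return [[math.exp(-gamma * (a - b) ** 2) for b in xs] for a in xs]

def is_positive_definite(K, tol=1e-10):
    """Numerical Mercer check via a pure-Python Cholesky attempt:
    succeeds iff the matrix is (strictly) positive definite."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = K[i][i] - s
                if d <= tol:
                    return False     # a non-positive pivot: not a kernel matrix
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return True

def kernel_distance(K, i, j):
    """Squared feature-space distance ||phi(x_i) - phi(x_j)||^2, the quantity
    kernel agglomerative clustering merges on."""
    return K[i][i] - 2 * K[i][j] + K[j][j]
```

Not every symmetric similarity passes the check, which is why conditions like the paper's matter before plugging a neighborhood into a kernel method.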

c-Means Clustering on the Multinomial Manifold

Ryo Inokuchi; Sadaaki Miyamoto

In this paper, we discuss c-means clustering algorithms on the multinomial manifold. Via the probabilistic mapping from each datum to a probability distribution, the data form a Riemannian manifold with the Fisher information metric; for discrete data, the statistical manifold of the multinomial distribution is appropriate. The Euclidean distance is not appropriate on this manifold because the parameter space of the distribution is not flat. We therefore apply the Kullback-Leibler (KL) divergence or the Hellinger distance, as approximations of the geodesic distance, to hard c-means and fuzzy c-means.

- Clustering and Rough Sets | Pp. 261-268
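The core substitution the abstract describes, replacing Euclidean distance with KL divergence in hard c-means over discrete distributions, can be sketched as follows (an illustration of the idea under a naive initialisation, not the paper's code; assumes all probabilities are strictly positive):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kl_c_means(dists, c=2, iters=50):
    """Hard c-means over probability vectors, assigning each distribution to
    the prototype with the smallest KL divergence."""
    protos = dists[:c]                       # naive initialisation
    groups = [[] for _ in range(c)]
    for _ in range(iters):
        groups = [[] for _ in range(c)]
        for p in dists:
            i = min(range(c), key=lambda k: kl(p, protos[k]))
            groups[i].append(p)
        for i, g in enumerate(groups):       # prototype: renormalised mean
            if g:
                mean = [sum(col) / len(g) for col in zip(*g)]
                protos[i] = [m / sum(mean) for m in mean]
    return protos, groups
```

The Hellinger-distance variant mentioned in the abstract would swap `kl` for `sqrt(sum((sqrt(pi) - sqrt(qi))**2))` while keeping the same alternating structure.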

On a Rough Sets Based Tool for Generating Rules from Data with Categorical and Numerical Values

Hiroshi Sakai; Kazuhiro Koba; Ryuji Ishibashi; Michinori Nakata

Rough set theory has mainly been applied to data with categorical values. To handle data with numerical values, we have defined numerical patterns using the two symbols # and @, and have proposed a more flexible rough-sets-based rule generation. The key rule concepts are explicitly defined according to numerical patterns. This paper focuses on the rough-sets-based rule generation method enhanced by numerical patterns, and describes the accompanying tool programs. The tool programs are applied to data from the UCI Machine Learning Repository, and some useful rules are obtained.

- Clustering and Rough Sets | Pp. 269-281

Applying Rough Sets to Information Tables Containing Probabilistic Values

Michinori Nakata; Hiroshi Sakai

Rough sets are applied to information tables containing imprecise values that are expressed as probability distributions. A family of weighted equivalence classes is obtained, where each equivalence class is accompanied by the probability that it is the actual one. Using the family of weighted equivalence classes, we derive lower and upper approximations. These coincide with the approximations obtained from possible-worlds methods, which justifies the method of weighted equivalence classes. In addition, the method is applied to missing values interpreted probabilistically. Using weighted equivalence classes correctly derives a lower approximation even in cases where the method of Kryszkiewicz does not derive any lower approximation.

- Clustering and Rough Sets | Pp. 282-294
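As background for the weighted construction above, the classical crisp case that the paper generalises partitions objects into equivalence classes on some attributes and approximates a target set from below and above. A minimal sketch (the paper's contribution is to attach a probability weight to each class; that weighting is omitted here):

```python
def equivalence_classes(table, attrs):
    """Group object ids by their (crisp) values on `attrs`."""
    classes = {}
    for obj, row in table.items():
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def approximations(classes, target):
    """Classical rough-set approximations: the lower approximation collects
    classes fully inside `target`; the upper collects classes touching it."""
    lower = set().union(*([c for c in classes if c <= target] or [set()]))
    upper = set().union(*([c for c in classes if c & target] or [set()]))
    return lower, upper
```

Objects in the upper but not the lower approximation form the boundary region, the set the table cannot decide; the probabilistic tables of the paper make both regions weighted rather than crisp.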

Fuzzy Extension of Estimations with Randomness: The Perception-Based Approach

Yuji Yoshida

A set of perceived random events is given by a fuzzy random variable, and its estimation is represented by a functional on real random variables. The estimation of the perception regarding random events is obtained by extending the functional to fuzzy random variables. This paper discusses conditions and various properties of this perception-based extension of estimations with randomness, and investigates several examples of the extension. The results can be applied to other estimations in engineering, economics, and so on.

- Soft Computing | Pp. 295-306

Boltzmann Machine Incorporated Hybrid Neural Fuzzy System for Hardware/Software Partitioning in Embedded System Design

Yue Huang; Yong-Soo Kim

Nowadays, one of the most vital problems in embedded system co-design is hardware/software (HW/SW) partitioning. Because of roughly assumed parameters in design specifications and imprecise benchmarks for judging solution quality, embedded system designers have long sought more efficient HW/SW partitioning methods. We propose applying a hybrid neural fuzzy system incorporating a Boltzmann machine to the HW/SW partitioning problem. Its architecture is described, and its performance is evaluated against other popular algorithms. The simulation results show that the proposed system outperforms the other algorithms in both cost and performance.

- Soft Computing | Pp. 307-317

Artificial Bee Colony (ABC) Optimization Algorithm for Training Feed-Forward Neural Networks

Dervis Karaboga; Bahriye Akay; Celal Ozturk

Training an artificial neural network is an optimization task, since the goal is to find an optimal weight set during training. Traditional training algorithms have drawbacks such as getting stuck in local minima and high computational complexity. Evolutionary algorithms are therefore employed to train neural networks and overcome these issues. In this work, the Artificial Bee Colony (ABC) algorithm, which has good exploration and exploitation capabilities in searching for an optimal weight set, is used to train neural networks.

- Soft Computing | Pp. 318-329
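The standard ABC scheme the abstract builds on alternates three phases: employed bees perturb known food sources, onlookers re-sample the better sources proportionally to their quality, and scouts abandon exhausted sources. A simplified sketch of that scheme as a generic minimiser (parameter names and the `1/(1+f)` quality rule, which assumes a non-negative objective, are conventional choices, not the paper's implementation); training a network would amount to taking `f` as the training error over the weight vector:

```python
import random

def abc_minimize(f, dim, bounds=(-5.0, 5.0), n_food=10, limit=20,
                 cycles=200, seed=1):
    """Simplified Artificial Bee Colony minimisation (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbour(i):
        # v_ij = x_ij + phi * (x_ij - x_kj): perturb one dimension
        # relative to a random partner food source
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.randrange(dim)
        v = list(foods[i])
        v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = min(hi, max(lo, v[j]))
        return v

    def try_improve(i):
        v = neighbour(i)
        fv = f(v)
        if fv < fits[i]:                 # greedy selection
            foods[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):          # employed bee phase
            try_improve(i)
        # onlooker phase: sources chosen proportionally to quality 1/(1+f)
        quality = [1.0 / (1.0 + ft) for ft in fits]
        total = sum(quality)
        for _ in range(n_food):
            r, acc = rng.uniform(0, total), 0.0
            for i, q in enumerate(quality):
                acc += q
                if acc >= r:
                    break
            try_improve(i)
        for i in range(n_food):          # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]
```

The exploration/exploitation balance the abstract cites comes from this division of labour: scouts explore fresh regions while employed and onlooker bees exploit the current best sources.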