Publications catalog - books



Intelligent Data Engineering and Automated Learning: IDEAL 2005: 6th International Conference, Brisbane, Australia, July 6-8, 2005, Proceedings

Marcus Gallagher; James P. Hogan; Frederic Maire (eds.)

Conference: 6th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), Brisbane, QLD, Australia, July 6-8, 2005

Abstract/Description (provided by the publisher)

Not available.

Keywords (provided by the publisher)

Database Management; Algorithm Analysis and Problem Complexity; Artificial Intelligence (incl. Robotics); Information Storage and Retrieval; Information Systems Applications (incl. Internet); Computers and Society

Availability: SpringerLink, 2005 (no institution detected)

Information

Resource type:

book

Print ISBN

978-3-540-26972-4

Electronic ISBN

978-3-540-31693-0

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Generating Predicate Rules from Neural Networks

Richi Nayak

Artificial neural networks play an important role in pattern recognition tasks. However, because the learned network is hard to comprehend and cannot represent explanation structures, neural networks are not considered sufficient for the general representation of knowledge. This paper details a methodology that represents the knowledge of a trained network in the form of restricted first-order logic rules, and subsequently allows user interaction by interfacing with a knowledge-based reasoner.

- Learning Algorithms and Systems | Pp. 234-241
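The paper's restricted first-order rules are beyond a short sketch, but the basic idea of reading a symbolic rule off a trained unit can be illustrated with a much simpler propositional analogue. This is not the paper's method; the function, weights, and feature names below are all hypothetical:

```python
def perceptron_to_rule(weights, bias, names):
    """Render a trained linear unit as a human-readable rule:
    the unit fires when the weighted sum of inputs exceeds -bias."""
    terms = " + ".join(f"{w:g}*{n}" for w, n in zip(weights, names))
    return f"IF {terms} > {-bias:g} THEN class = positive"

print(perceptron_to_rule([2.0, -1.0], -0.5, ["x1", "x2"]))
# IF 2*x1 + -1*x2 > 0.5 THEN class = positive
```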

Improving Ensembles with Classificational Cellular Automata

Petra Povalej; Mitja Lenič; Peter Kokol

In the real world there are many examples where the synergetic cooperation of multiple entities performs better than a single one. The same fundamental idea is found in ensemble learning methods, which can improve classification accuracy. Each classifier has a specific view of the problem domain and can produce a different classification for the same observed sample. Many methods for combining classifiers into ensembles have therefore been developed; most use simple majority voting or weighted voting to combine the single classifier votes. In this paper we present a new approach for combining classifiers into an ensemble with Classificational Cellular Automata (CCA), which exploits the self-organizational abilities of cellular automata. We empirically show that CCA improves the classification accuracy of three popular ensemble methods: Bagging, Boosting and MultiBoosting. The presented results also show important advantages of CCA, such as problem independence, robustness to noise, and no need for user input.

- Learning Algorithms and Systems | Pp. 242-249
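The abstract contrasts CCA with the simple majority voting that most ensemble methods use. A minimal sketch of that voting baseline (not of CCA itself; all names are hypothetical):

```python
from collections import Counter

def majority_vote(classifier_outputs):
    """Combine per-classifier predictions by simple majority voting.

    classifier_outputs: one prediction list per classifier, all of
    equal length (one prediction per sample)."""
    n_samples = len(classifier_outputs[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in classifier_outputs)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three classifiers disagree on the middle sample; the majority wins.
preds = [["a", "a", "b"],
         ["a", "b", "b"],
         ["a", "a", "b"]]
print(majority_vote(preds))  # ['a', 'a', 'b']
```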

A Gradient BYY Harmony Learning Algorithm on Mixture of Experts for Curve Detection

Zhiwu Lu; Qiansheng Cheng; Jinwen Ma

Curve detection is a basic problem in image processing and has been extensively studied in the literature; however, it remains difficult. In this paper, we study this problem within the Bayesian Ying-Yang (BYY) learning theory, via the harmony learning principle, on a BYY system with a mixture of experts (ME). A gradient BYY harmony learning algorithm is proposed to detect curves (straight lines or circles) in a binary image. Simulation and image experiments demonstrate that this gradient algorithm can not only detect curves against noise, but also automatically determine the number of straight lines or circles during parameter learning.

- Learning Algorithms and Systems | Pp. 250-257

A Novel Anomaly Detection Using Small Training Sets

Qingbo Yin; Liran Shen; Rubo Zhang; Xueyao Li

Anomaly detection is an essential component of the protection mechanism against novel attacks. Traditional methods need very large volumes of purely normal training data, which are expensive to classify manually. A new method for anomaly intrusion detection is proposed based on supervised clustering and a Markov chain model, designed to train from a small set of normal data. After short system-call sequences are clustered, a Markov chain is used to learn the relationships among these clusters and to classify behavior as normal or abnormal. The observed behavior of the system is analyzed to infer the probability that the Markov chain of the norm profile supports it. Markov information-source entropy and conditional entropy are used to select parameters. Experiments showed that the method is effective at detecting anomalous behavior and generalizes well even when only a small training dataset is used.

- Learning Algorithms and Systems | Pp. 258-263
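The core mechanism the abstract describes, fitting a Markov chain over cluster labels of normal behavior and scoring how strongly the chain supports an observed sequence, can be sketched roughly as follows. This is a minimal illustration, not the paper's full method (the entropy-based parameter selection is omitted, and all names are hypothetical):

```python
from collections import defaultdict

def fit_markov_chain(sequences, n_states):
    """Estimate transition probabilities from sequences of cluster labels
    observed in normal behavior."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    trans = {}
    for a in range(n_states):
        total = sum(counts[a].values())
        trans[a] = {b: counts[a][b] / total for b in counts[a]} if total else {}
    return trans

def sequence_support(trans, seq, floor=1e-6):
    """Probability that the normal-profile chain supports an observed
    sequence; unseen transitions get a small floor probability."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= trans.get(a, {}).get(b, floor)
    return p

normal = [[0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2]]
chain = fit_markov_chain(normal, n_states=3)
print(sequence_support(chain, [0, 1, 2, 0]))      # 1.0: matches the profile
print(sequence_support(chain, [2, 1, 0]) < 1e-6)  # True: unseen transitions
```

A sequence whose support falls below a threshold would be flagged as anomalous.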

Induction of Linear Decision Trees with Real-Coded Genetic Algorithms and k-D Trees

Sai-cheong Ng; Kwong-sak Leung

Although genetic algorithm-based decision tree algorithms are applied successfully in various classification tasks, their execution times are quite long on large datasets. A novel decision tree algorithm, called Real-Coded Genetic Algorithm-based Linear Decision Tree Algorithm with k-D Trees (RCGA-based LDT with kDT), is proposed. In the proposed algorithm, a k-D tree is built when a new node of a linear decision tree is created. The use of k-D trees speeds up the construction of linear decision trees without sacrificing the quality of the constructed decision trees.

- Learning Algorithms and Systems | Pp. 264-271
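The speed-up in the paper comes from building a k-D tree at each new decision-tree node. The k-D tree primitive itself (build plus nearest-neighbor query) can be sketched as below; this is a generic textbook implementation, not the paper's RCGA-based algorithm:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a k-D tree, splitting on alternating coordinates."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, depth=0, best=None):
    """Find the stored point closest to target (Euclidean distance)."""
    if node is None:
        return best
    if best is None or math.dist(node["point"], target) < math.dist(best, target):
        best = node["point"]
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, depth + 1, best)
    # Search the far side only if the splitting plane is closer than the best so far.
    if abs(diff) < math.dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))  # (8, 1)
```

Pruning the far subtree is what makes queries sub-linear on average, which is where the construction-time savings come from.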

Intelligent Predictive Control of a 6-Dof Robotic Manipulator with Reliability Based Performance Improvement

Ahmet Akbas

A six-degree-of-freedom (DOF) robotic manipulator from the Stanford family is controlled with an intelligent control system designed using an Elman network and a generalized predictive control (GPC) algorithm. Three Elman networks are trained on GPC-based data and used in parallel to improve the reliability of the system through error minimization. At the end of the parallel implementation, the outputs of the networks are evaluated using torque equations to select the network with the best result. Simulation-based tests showed that the proposed controller improves the performance of the system.

- Learning Algorithms and Systems | Pp. 272-279
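The selection step, running redundant models in parallel and keeping the output with the smallest evaluation error, can be sketched in a few lines. The evaluator here is a stand-in for the paper's torque equations; all values are hypothetical:

```python
def select_best(outputs, evaluate):
    """Run several redundant models in parallel and keep the output
    with the smallest evaluation error."""
    return min(outputs, key=evaluate)

# Three hypothetical network outputs for the same command; the evaluator
# scores each against a reference value of 1.0.
candidates = [0.8, 1.05, 1.3]
print(select_best(candidates, lambda y: abs(y - 1.0)))  # 1.05
```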

Sequential Search for Decremental Edition

José A. Olvera-López; J. Ariel Carrasco-Ochoa; José Fco. Martínez-Trinidad

The edition process is an important task in supervised classification because it helps to reduce the size of the training sample. On the other hand, instance-based classifiers store the entire training set indiscriminately, which almost always contains objects that are useless or harmful for the classification process. It is therefore important to delete unnecessary objects to increase both classification speed and accuracy. In this paper, we propose an edition method based on sequential search and present an empirical comparison between it and other decremental edition methods.

- Learning Algorithms and Systems | Pp. 280-285
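A decremental edition scheme of the kind the abstract describes can be sketched as a sequential search that tentatively removes each object and keeps the removal when leave-one-out accuracy does not drop. This is a generic illustration under that assumption, not the paper's exact criterion:

```python
import math

def knn_predict(train, x):
    """1-NN label prediction; train is a list of ((features), label) pairs."""
    return min(train, key=lambda t: math.dist(t[0], x))[1]

def loo_accuracy(train):
    """Leave-one-out accuracy of 1-NN on the training set itself."""
    correct = 0
    for i, (x, y) in enumerate(train):
        rest = train[:i] + train[i + 1:]
        correct += knn_predict(rest, x) == y
    return correct / len(train)

def sequential_edit(train):
    """Decremental edition: try removing each object in turn and keep the
    removal whenever leave-one-out accuracy does not decrease."""
    edited = list(train)
    for obj in list(train):
        candidate = [t for t in edited if t is not obj]
        if len(candidate) > 1 and loo_accuracy(candidate) >= loo_accuracy(edited):
            edited = candidate
    return edited

data = [((0.0,), "a"), ((0.1,), "a"), ((0.2,), "a"),
        ((1.0,), "b"), ((1.1,), "b"), ((0.5,), "b")]  # (0.5,) is a noisy object
edited = sequential_edit(data)
print(len(edited) < len(data))  # True: the noisy object is edited out
```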

Bearing Similarity Measures for Self-organizing Feature Maps

Narongdech Keeratipranon; Frederic Maire

The neural representation of space in rats has inspired many navigation systems for robots. In particular, self-organizing maps (SOMs) are often used to give a sense of location to robots by mapping sensor information to a low-dimensional grid. For example, a robot equipped with a panoramic camera can build a 2D SOM from vectors of landmark bearings. If there are four landmarks in the robot’s environment, then the 2D SOM is embedded in a 2D manifold lying in a 4D space. In general, the set of observable sensor vectors forms a low-dimensional Riemannian manifold in a high-dimensional space. In a landmark bearing sensor space, the manifold can have a large curvature in some regions (when the robot is near a landmark, for example), making the Euclidean distance a very poor approximation of the Riemannian metric. In this paper, we present and compare three methods for measuring the similarity between vectors of landmark bearings. We also discuss a method to equip SOMs with a good approximation of the Riemannian metric. Although we illustrate the techniques with a landmark bearing problem, our approach is applicable to other types of data sets.

- Learning Algorithms and Systems | Pp. 286-293
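The abstract does not specify the three similarity measures, but the reason plain Euclidean distance fails on bearing vectors is easy to show: bearings wrap around, so 359° and 1° are only 2° apart. A minimal illustration (function names hypothetical):

```python
import math

def euclidean_bearing_distance(u, v):
    """Naive distance that treats bearings as ordinary numbers."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def angular_bearing_distance(u, v):
    """Distance using the shortest angular difference per landmark,
    so bearings of 359 deg and 1 deg are treated as 2 deg apart."""
    def wrap(d):
        return min(d % 360.0, 360.0 - d % 360.0)
    return math.sqrt(sum(wrap(a - b) ** 2 for a, b in zip(u, v)))

u = [359.0, 90.0]   # bearings (degrees) to two landmarks
v = [1.0, 92.0]
print(euclidean_bearing_distance(u, v))  # ~358.0: wildly overestimates
print(angular_bearing_distance(u, v))    # ~2.83: the true separation
```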

Efficient Spatial Clustering Algorithm Using Binary Tree

Mohsin Ali; Xue Li; Zhao Yang Dong

In this paper we present an efficient k-Means clustering algorithm for two-dimensional data. The proposed algorithm re-organizes the dataset into a nested binary tree. Data items are compared at each node with only the two nearest means with respect to each dimension and assigned to the cluster with the closer mean. The main intuition of our approach is as follows: we build the nested binary tree, scan the data in raster order by in-order traversal of the tree, and compare the data item at each node with only the two nearest means to assign it to the intended cluster. In this way we significantly reduce the computational cost by lowering the number of comparisons with means and by minimal use of the Euclidean distance formula. Our results showed that our method performs clustering much faster than the classical ones.

- Learning Algorithms and Systems | Pp. 294-301
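The paper's tree organizes 2-D data; the per-dimension idea of comparing a point with only the two bracketing means, instead of all k, can be sketched in one dimension (a simplified illustration under that assumption, not the paper's data structure):

```python
import bisect

def assign_to_nearest_of_two(value, sorted_means):
    """Assign a 1-D value by comparing it with only the two means that
    bracket it in sorted order, instead of all k means."""
    i = bisect.bisect_left(sorted_means, value)
    candidates = sorted_means[max(0, i - 1):i + 1]
    return min(candidates, key=lambda m: abs(m - value))

means = [0.0, 5.0, 10.0, 20.0]
print(assign_to_nearest_of_two(6.2, means))   # 5.0
print(assign_to_nearest_of_two(16.0, means))  # 20.0
```

Because the means are sorted, the bracketing pair is found by binary search, so each assignment needs at most two distance comparisons.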

Cluster Analysis of High-Dimensional Data: A Case Study

Richard Bean; Geoff McLachlan

Normal mixture models are often used to cluster continuous data. However, conventional approaches for fitting these models will have problems in producing nonsingular estimates of the component-covariance matrices when the dimension of the observations is large relative to the number of observations. In this case, methods such as principal components analysis (PCA) and the mixture of factor analyzers model can be adopted to avoid these estimation problems. We examine these approaches applied to the Cabernet wine data set of Ashenfelter (1999), considering the clustering of both the wines and the judges, and comparing our results with another analysis. The mixture of factor analyzers model proves particularly effective in clustering the wines, accurately classifying many of the wines by location.

- Learning Algorithms and Systems | Pp. 302-310
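The dimension-reduction step the abstract mentions, projecting high-dimensional observations onto a few principal components before fitting a mixture model, can be sketched via SVD. This is the generic PCA step only, not the mixture-of-factor-analyzers model itself (names hypothetical, NumPy assumed available):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its leading principal components via SVD,
    a common first step before fitting a mixture model when the
    dimension is large relative to the sample size."""
    Xc = X - X.mean(axis=0)            # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores on the top components

rng = np.random.default_rng(0)
# 20 observations in 50 dimensions: full covariance estimates would be singular.
X = rng.normal(size=(20, 50))
Z = pca_reduce(X, n_components=3)
print(Z.shape)  # (20, 3)
```

A normal mixture fitted to `Z` avoids the singular component-covariance estimates described in the abstract.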