Publications catalog - books
Bio-inspired Modeling of Cognitive Tasks: Second International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2007, La Manga del Mar Menor, Spain, June 18-21, 2007, Proceedings, Part I
José Mira; José R. Álvarez (eds.)
In conference: 2nd International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC). La Manga del Mar Menor, Spain. June 18-21, 2007
Abstract/Description - provided by the publisher
Not available.
Keywords - provided by the publisher
Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Image Processing and Computer Vision; Pattern Recognition; Computational Biology/Bioinformatics
Availability
| Detected institution | Publication year | Browse | Download | Request |
| --- | --- | --- | --- | --- |
| Not detected | 2007 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-73052-1
Electronic ISBN
978-3-540-73053-8
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2007
Publication rights information
© Springer-Verlag Berlin Heidelberg 2007
Table of contents
A Novel Improvement of Neural Network Classification Using Further Division of Partition Space
Lin Wang; Bo Yang; Zhenxiang Chen; Ajith Abraham; Lizhi Peng
Further Division of Partition Space (FDPS) is a novel technique for neural network classification. The partition space is the space into which samples are mapped by the trained network in order to be categorized. In traditional neural network classification it can be viewed as a line segment that is divided manually into a few parts. This paper proposes that classification performance can be improved by using FDPS, in which the partition space is divided into many partitions that are attached to different classes automatically. Experimental results show that the method performs favorably, especially with respect to optimization speed and classification accuracy.
Pp. 214-223
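The partition-space idea above can be sketched in a few lines. This is an illustrative reading, not the authors' algorithm: the function names, and the assumption that the trained network emits a scalar output in [0, 1], are mine. The output interval is split into several partitions, and each partition is attached automatically to the majority class of the training samples that land in it.

```python
import numpy as np

def assign_partitions(outputs, labels, n_parts=8):
    """Divide the scalar output space [0, 1] into n_parts intervals and
    attach each interval to the majority class of the training samples
    that map into it (empty intervals stay unassigned)."""
    edges = np.linspace(0.0, 1.0, n_parts + 1)
    idx = np.clip(np.searchsorted(edges, outputs, side="right") - 1,
                  0, n_parts - 1)
    part_class = {}
    for i in range(n_parts):
        mask = idx == i
        if mask.any():
            vals, counts = np.unique(labels[mask], return_counts=True)
            part_class[i] = vals[counts.argmax()]
    return edges, part_class

def classify(output, edges, part_class):
    """Map a new scalar network output to the class of its partition."""
    i = int(np.clip(np.searchsorted(edges, output, side="right") - 1,
                    0, len(edges) - 2))
    return part_class.get(i)

# Toy scalar outputs from some trained network, with true labels.
outputs = np.array([0.05, 0.1, 0.45, 0.5, 0.9, 0.95])
labels = np.array([0, 0, 1, 1, 2, 2])
edges, part_class = assign_partitions(outputs, labels, n_parts=4)
print(classify(0.48, edges, part_class))
```

With many partitions, disjoint regions of the output space can carry the same class label, which is the extra flexibility FDPS claims over a single manual division.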
Morphisms of ANN and the Computation of Least Fixed Points of Semantic Operators
Anthony Karel Seda
We consider a notion of morphism of neural networks and develop its properties. We show how, given any definite logic program, the least fixed point of its immediate consequence operator can be computed as the colimit of a family of neural networks determined by the program.
Pp. 224-233
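Independently of the neural network construction, the least fixed point of the immediate consequence operator can be computed by iterating it from the empty interpretation. A minimal sketch, assuming a clause encoding of my own (head, tuple-of-body-atoms):

```python
def tp(program, interpretation):
    """Immediate consequence operator T_P: a program is a set of clauses
    (head, body), where body is a tuple of atoms; a clause head is
    derived when every body atom is already in the interpretation."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def least_fixed_point(program):
    """Iterate T_P from the empty interpretation until it stabilizes;
    for a finite definite program this reaches the least fixed point."""
    current = set()
    while True:
        nxt = tp(program, current)
        if nxt == current:
            return current
        current = nxt

# p.   q :- p.   r :- p, q.
program = {("p", ()), ("q", ("p",)), ("r", ("p", "q"))}
print(sorted(least_fixed_point(program)))
```

Because T_P is monotone for definite programs, the iterates form an increasing chain, and the paper's colimit of networks plays the role of this limit of iterates.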
Predicting Human Immunodeficiency Virus (HIV) Drug Resistance Using Recurrent Neural Networks
Isis Bonet; María M. García; Yvan Saeys; Yves Van de Peer; Ricardo Grau
Predicting HIV resistance to drugs is one of many problems for which bioinformaticians have implemented and trained machine learning methods, such as neural networks. Predicting HIV resistance would be much easier if we could directly use the three-dimensional (3D) structure of the targeted protein sequences, but unfortunately we rarely have enough structural information available to train a neural network. Furthermore, prediction of the 3D structure of a protein is not straightforward. However, characteristics related to the 3D structure can be used to train a machine learning algorithm as an alternative way of taking the protein folding in 3D space into account. Following this approach, we select the amino acid energies as features to predict HIV drug resistance, using a specific neural network topology. In this paper, we demonstrate that the amino acid energies are good features to represent the HIV genotype, and that Bidirectional Recurrent Neural Networks can be used as an efficient classification method for this problem. The prediction performance obtained was greater than, or at least comparable to, previously reported results, with accuracies between 81.3% and 94.7%.
Pp. 234-243
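The bidirectional idea can be illustrated with a toy forward pass: the sequence of per-residue feature vectors is read left-to-right and right-to-left, and both final hidden states are combined into one resistance score. The layer sizes, random untrained weights, and final-state concatenation are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def rnn_pass(seq, Wx, Wh, h0):
    """Simple tanh RNN scan over a sequence of feature vectors."""
    h = h0
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def birnn_score(seq, params):
    """Bidirectional pass: scan the sequence in both directions,
    concatenate the two final states, and map them to a single
    resistant/susceptible logit."""
    Wx_f, Wh_f, Wx_b, Wh_b, w_out = params
    h_f = rnn_pass(seq, Wx_f, Wh_f, np.zeros(Wh_f.shape[0]))
    h_b = rnn_pass(seq[::-1], Wx_b, Wh_b, np.zeros(Wh_b.shape[0]))
    return float(w_out @ np.concatenate([h_f, h_b]))

rng = np.random.default_rng(0)
d_in, d_h = 4, 8  # e.g. per-residue energy features, hidden size
params = (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)),
          rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)),
          rng.normal(size=2 * d_h))
seq = rng.normal(size=(10, d_in))  # one toy protein, 10 residues
print(birnn_score(seq, params))
```

The appeal for genotype data is that every position's score depends on context on both sides of the mutation, which a unidirectional scan cannot provide.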
Error Weighting in Artificial Neural Networks Learning Interpreted as a Metaplasticity Model
Diego Andina; Aleksandar Jevtić; Alexis Marcano; J. M. Barrón Adame
Many Artificial Neural Network design algorithms or learning methods involve the minimization of an error objective function. During learning, weight values are updated following a strategy that tends to minimize the final mean error in the network's performance. Weight values are classically seen as a representation of the synaptic weights in biological neurons, and their ability to change can be interpreted as artificial plasticity inspired by this biological property of neurons. In this light, metaplasticity is interpreted in this paper as the ability to change the efficiency of artificial plasticity, giving more relevance to weight updates for less frequent activations and less relevance to frequent ones. Modeling this interpretation in the training phase, the hypothesis of improved training is tested in the Multilayer Perceptron with Backpropagation case. The results show much more efficient training while maintaining the Artificial Neural Network's performance.
Pp. 244-252
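One plausible reading of the metaplasticity rule, sketched on a single logistic unit rather than the paper's MLP (the inverse-frequency scaling and all names here are my assumptions, not the authors' exact update):

```python
import numpy as np

def metaplastic_sgd(X, y, epochs=50, eta=0.1):
    """Online logistic-unit training where each gradient step is scaled
    by the inverse empirical frequency of the sample's class, so rare
    (less frequent) activations receive larger weight changes."""
    rng = np.random.default_rng(1)
    w = np.zeros(X.shape[1])
    classes, counts = np.unique(y, return_counts=True)
    inv_freq = {c: len(y) / n for c, n in zip(classes, counts)}
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w += eta * inv_freq[y[i]] * (y[i] - p) * X[i]
    return w

# Imbalanced toy set: class 1 is rare but gets emphasized updates.
X = np.vstack([np.tile([-1.0, 1.0], (40, 1)), np.tile([1.0, 1.0], (4, 1))])
y = np.array([0] * 40 + [1] * 4)
w = metaplastic_sgd(X, y)
print((X @ w > 0).astype(int))
```

Scaling the step rather than resampling the data keeps the error surface intact; only the efficiency of each weight change (the "plasticity of the plasticity") varies with activation frequency.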
A First Approach to Birth Weight Prediction Using RBFNNs
A. Guillén; I. Rojas; J. González; H. Pomares; L. J. Herrera
This paper presents a first approach to predicting the weight of a newborn from a set of variables determined solely by the mother. The proposed model is a Radial Basis Function Neural Network (RBFNN), which has been successfully applied to many real-world problems. Predicting the weight of a newborn could be very useful when diagnosing gestational diabetes mellitus, since birth weight can be a risk factor, and also for determining whether the newborn is macrosomic. However, the design of RBFNNs remains a challenge, since there is no perfect methodology for designing an RBFNN from a reduced data set while keeping the generalization capabilities of the network. Among the many design techniques in the literature, the use of clustering algorithms as a first initialization step for the RBF centers is quite common, and many approaches have been proposed. This work presents a comparison of RBFNNs generated with several recently developed algorithms, concluding that, although the resulting RBFNNs can approximate a training data set with acceptable error, further work is needed to adapt RBFNNs to high-dimensional spaces, where the generalization capabilities may be lost.
Pp. 253-260
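The cluster-then-fit design pipeline mentioned above can be sketched as follows; this is a generic minimal RBFNN (Lloyd's k-means for the centers, least squares for the output layer), not any of the specific algorithms compared in the paper, and the toy data stands in for the birth-weight variables.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm to initialize the RBF centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, width):
    """Gaussian activation matrix for all samples and centers."""
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return np.exp(-d ** 2 / (2 * width ** 2))

def fit_rbfnn(X, y, k=10, width=1.0):
    """Cluster-initialized RBFNN: centers from k-means, linear output
    weights solved by least squares."""
    centers = kmeans(X, k)
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
    return centers, w

def predict(X, centers, w, width=1.0):
    return rbf_design(X, centers, width) @ w

# Toy regression stand-in for the maternal-variables data set.
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
centers, w = fit_rbfnn(X, y, k=10)
print(np.mean((predict(X, centers, w) - y) ** 2))
```

The paper's caveat shows up directly in this design: the number of Gaussians needed to cover the input space grows quickly with its dimension, which is why generalization degrades in high-dimensional settings.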
Filtering Documents with a Hybrid Neural Network Model
Guido Bologna; Mathieu Boretti; Paul Albuquerque
This work presents an application example of text document filtering. We compare the DIMLP neural hybrid model to several machine learning algorithms. The clear advantage of this neural hybrid system is its transparency: the classification strategy of DIMLPs is almost completely encoded in the extracted rules. In cross-validation trials, DIMLPs proved in the majority of situations to be at least as accurate as support vector machines, one of the most accurate classifiers in the text categorization domain. In the future, in order to further increase DIMLP accuracy, we believe that common-sense knowledge could easily be inserted and refined through the use of symbolic rules.
Pp. 261-271
A Single Layer Perceptron Approach to Selective Multi-task Learning
Jaisiel Madrid-Sánchez; Miguel Lázaro-Gredilla; Aníbal R. Figueiras-Vidal
A formal definition of task relatedness that would theoretically justify multi-task learning (MTL) improvements has remained elusive. The implementation of MTL with multi-layer perceptron (MLP) neural networks evoked the notion of related tasks sharing an underlying representation; this assumption of relatedness can hurt the training process if tasks are not truly related in that way. In this paper we present a novel single-layer perceptron (SLP) approach that selectively achieves knowledge transfer in a multi-tasking scenario by using a different notion of task relatedness. Experimental results show that the proposed scheme largely outperforms single-task learning (STL) with single-layer perceptrons, working robustly even in the presence of tasks that are not closely related.
Pp. 272-281
Multi-task Neural Networks for Dealing with Missing Inputs
Pedro J. García-Laencina; Jesús Serrano; Aníbal R. Figueiras-Vidal; José-Luis Sancho-Gómez
Incomplete data is a common drawback in many pattern classification applications. A classical way to deal with unknown values is missing data estimation. Most machine learning techniques can cope with missing values, but they do not orient the missing data estimation toward solving the classification task. This paper presents effective neural network approaches based on Multi-Task Learning (MTL) for pattern classification with missing inputs. These MTL networks are compared with representative procedures for handling incomplete data on two well-known data sets. The experimental results show the superiority of our approaches with respect to alternative techniques.
Pp. 282-291
Theoretical Study on the Capacity of Associative Memory with Multiple Reference Points
Enrique Mérida-Casermeiro; Domingo López-Rodríguez; Gloria Galán-Marín; Juan M. Ortiz-de-Lazcano-Lobato
An extension of Hopfield's model of associative memory is studied in the present work. In particular, this paper focuses on solutions to the two main problems of the model: the appearance of spurious patterns in the learning phase (implying the well-known and undesirable effect of storing the opposite of a pattern) and its reduced capacity (the probability of error in the retrieval phase increases as the number of stored patterns grows). A method to avoid spurious patterns is presented and studied, and an explanation of the previously mentioned effect is given. A novel technique to increase the capacity of the network is also proposed, based on the idea of using several reference points when storing patterns. It is studied in depth, and an explicit formula for the capacity of the network is provided. This formula shows that the capacity of the new model depends linearly on the number of reference points, implying an increase of capacity in this model.
Pp. 292-302
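The "opposite pattern" effect the abstract refers to is easy to exhibit in the classical (single reference point) model: with Hebbian storage, whenever a pattern is a fixed point of the sign-update dynamics, so is its negation. A minimal sketch of the baseline model only (the reference-point extension itself is not reproduced here):

```python
import numpy as np

def hebbian_weights(patterns):
    """Classical Hopfield/Hebbian storage: W is the normalized sum of
    outer products of the stored +/-1 patterns, with zero diagonal."""
    N = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates of the network state."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

patterns = np.array([[1, 1, -1, -1, 1, -1, 1, -1],
                     [1, -1, 1, -1, 1, -1, 1, 1]])
W = hebbian_weights(patterns)
# The stored pattern is a fixed point of the dynamics...
print(recall(W, patterns[0]))
# ...but so is its opposite: the spurious anti-pattern that the
# reference-point construction is designed to avoid.
print(recall(W, -patterns[0]))
```

The symmetry is immediate from the update rule: sign(W(-x)) = -sign(Wx), so the energy landscape of the classical model is unavoidably even.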
Classification and Diagnosis of Heart Sounds and Murmurs Using Artificial Neural Networks
Juan Martínez-Alajarín; José López-Candel; Ramón Ruiz-Merino
Cardiac auscultation still remains the basic technique for easily reaching a cardiac valvular diagnosis. Nowadays, auscultation can be augmented with automated computer-aided analysis systems that provide objective, accurate, documented and cost-effective diagnosis. This is particularly useful when such systems offer remote diagnosis capabilities. ASEPTIC is a telediagnosis system for cardiac sounds that allows the analysis of remote phonocardiographic signals. This paper presents the pattern recognition stage of ASEPTIC, based on feature selection from the cardiac events and classification with a multilayer perceptron artificial neural network trained with the Levenberg-Marquardt algorithm for fast convergence. Three categories of records have been considered: normal, with holosystolic murmur, and with midsystolic murmur. Experimental results show high correct classification rates for the three categories: 100%, 92.69%, and 97.57%, respectively.
Pp. 303-312