Publications catalogue - books



Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I

Joaquim Marques de Sá; Luís A. Alexandre; Włodzisław Duch; Danilo Mandic (eds.)

Conference: 17th International Conference on Artificial Neural Networks (ICANN), Porto, Portugal, September 9-13, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences

Availability

Detected institution: Not detected
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74689-8

Electronic ISBN

978-3-540-74690-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Fixed Points of the Abe Formulation of Stochastic Hopfield Networks

Marie Kratz; Miguel Atencia; Gonzalo Joya

The stability of stochastic Hopfield neural networks, in the Abe formulation, is studied. The aim is to determine whether the ability of the deterministic system to solve combinatorial optimization problems is preserved after the addition of random noise. In particular, the stochastic stability of the attractor set is analyzed: vertices, which are feasible points of the problem, should be stable, whereas interior points, which are unfeasible, should be unstable. Conditions on the noise intensity are stated so that these properties are guaranteed. This theoretical investigation establishes the foundations for the practical application of stochastic networks to combinatorial optimization.

- Neural Dynamics and Complex Systems | Pp. 599-608
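
As a hedged illustration of the system studied above, the toy Euler-Maruyama simulation below follows the Abe dynamics du/dt = W tanh(u) with additive noise; the weight matrix, noise intensity `sigma`, and step size are assumptions for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                   # symmetric weights, as in Hopfield networks
np.fill_diagonal(W, 0.0)

u = 0.1 * rng.standard_normal(n)    # start near the interior of the hypercube
dt, sigma, steps = 0.01, 0.05, 20000

for _ in range(steps):
    s = np.tanh(u)                  # Abe formulation: du/dt = W tanh(u)
    u += dt * (W @ s) + sigma * np.sqrt(dt) * rng.standard_normal(n)

print("final state:", np.round(np.tanh(u), 2))  # near a vertex if stable
```

For small enough `sigma`, trajectories settle near vertices of {-1, 1}^n, which is the stability property the paper quantifies.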

Visualization of Dynamics Using Local Dynamic Modelling with Self Organizing Maps

Ignacio Díaz-Blanco; Abel A. Cuadrado-Vega; Alberto B. Diez-González; Juan J. Fuertes-Martínez; Manuel Domínguez-González; Perfecto Reguera-Acevedo

In this work, we describe a procedure for visualizing nonlinear process dynamics using a self-organizing-map-based local-model dynamical estimator. The proposed method exploits the topology-preserving nature of the resulting estimator to extract visualizations (planes) of insightful dynamical features, which make it possible to explore nonlinear systems whose behavior changes with the operating point. Since the visualizations are obtained from a dynamical model of the process, measures of the goodness of this estimator (such as RMSE or AIC) are also applicable as measures of the trustworthiness of the visualizations. To illustrate the proposed method, an experiment analyzing the dynamics of a nonlinear system at different operating points is included.

- Neural Dynamics and Complex Systems | Pp. 609-617
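
A crude sketch of the ingredients described above, assuming a 1-D SOM, a synthetic process whose input gain varies with the operating point, and one local ARX model per map unit; the grid size, learning rates, and neighborhood width are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy nonlinear process: input gain depends on the operating point y
u = rng.uniform(-1, 1, 2000)
y = np.zeros(2001)
for k in range(2000):
    y[k + 1] = 0.8 * y[k] + (0.5 + 0.4 * np.tanh(2 * y[k])) * u[k]

X = np.column_stack([y[:-1], u])    # regressors [y_k, u_k]
T = y[1:]                           # one-step-ahead targets

# tiny online 1-D SOM on the regressor space
m = 10
codebook = 0.1 * rng.standard_normal((m, 2))
for t, x in enumerate(X):
    bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
    lr = 0.5 * (1 - t / len(X))
    for j in range(m):              # Gaussian neighborhood update
        h = np.exp(-((j - bmu) ** 2) / (2 * 1.5 ** 2))
        codebook[j] += lr * h * (x - codebook[j])

# one local ARX model per SOM unit, fitted on the samples it wins
bmus = np.argmin(((X[:, None, :] - codebook) ** 2).sum(axis=2), axis=1)
coeffs = np.full((m, 2), np.nan)
for j in range(m):
    idx = bmus == j
    if idx.sum() > 2:
        coeffs[j], *_ = np.linalg.lstsq(X[idx], T[idx], rcond=None)

# the 'component plane' of the input gain across the map
print("input-gain plane:", np.round(coeffs[:, 1], 2))
```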

Comparison of Echo State Networks with Simple Recurrent Networks and Variable-Length Markov Models on Symbolic Sequences

Michal Čerňanský; Peter Tiňo

A lot of attention is now being focused on connectionist models known under the name “reservoir computing”. The most prominent example of these approaches is a recurrent neural network architecture called an echo state network (ESN). ESNs have been successfully applied to many real-valued time-series modeling tasks and have performed exceptionally well. Using ESNs to process symbolic sequences also seems attractive. In this work we experimentally support the claim that the state space of an ESN is organized according to the Markovian architectural-bias principles when processing symbolic sequences. We compare the performance of ESNs with connectionist models that explicitly use the Markovian architectural-bias property, with variable-length Markov models, and with recurrent neural networks trained by advanced training algorithms. Moreover, we show that the number of reservoir units plays a role similar to that of the number of contexts in variable-length Markov models.

- Neural Dynamics and Complex Systems | Pp. 618-627
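
To make the setup concrete, here is a minimal ESN next-symbol predictor on a toy two-symbol Markov sequence; the alphabet, reservoir size, spectral radius, and ridge parameter are assumptions of this sketch, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy symbolic sequence over {0, 1} with Markovian structure
seq = [0]
for _ in range(5000):
    p = 0.9 if seq[-1] == 0 else 0.3        # P(next symbol = 1)
    seq.append(int(rng.random() < p))
seq = np.array(seq)

A, N = 2, 100                               # alphabet size, reservoir units
W_in = rng.uniform(-0.5, 0.5, (N, A))
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# drive the reservoir with one-hot symbols and collect its states
x, states = np.zeros(N), []
for s in seq[:-1]:
    x = np.tanh(W_in @ np.eye(A)[s] + W @ x)
    states.append(x.copy())
states = np.array(states)

# ridge-regression readout predicting the next symbol (one-hot targets)
Y = np.eye(A)[seq[1:]]
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(N), states.T @ Y)

pred = (states @ W_out).argmax(axis=1)
print("next-symbol accuracy:", (pred == seq[1:]).mean())
```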

Data Fusion and Auto-fusion for Quantitative Structure-Activity Relationship (QSAR)

Changjian Huang; Mark J. Embrechts; N. Sukumar; Curt M. Breneman

Data fusion originally referred to the process of combining multi-sensor data from different sources such that the resulting information/model is in some sense better than would be possible if these sources were used individually. In this paper the data fusion concept is extended to molecular drug design. Rather than using data from different sensor sources, different descriptor sets are used to predict activities or responses for a set of molecules, and data fusion techniques are applied in order to improve the predictive (QSAR) model on test data. This type of data fusion is referred to as auto-fusion. An effective auto-fusion functional model and alternative architectures are proposed for a predictive molecular design or QSAR model that predicts binding affinity to human serum albumin.

- Data Analysis | Pp. 628-637
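
A small stacked-fusion sketch in the spirit of the abstract: two synthetic stand-ins for descriptor sets, one ridge model per set, and a ridge meta-model fusing their predictions. All data and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in for a QSAR task: two descriptor sets, one activity
n = 300
D1 = rng.standard_normal((n, 20))
D2 = rng.standard_normal((n, 15))
activity = D1[:, 0] - 0.5 * D2[:, 1] + 0.1 * rng.standard_normal(n)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression with a bias column."""
    X1 = np.column_stack([X, np.ones(len(X))])
    w = np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ y)
    return lambda Z: np.column_stack([Z, np.ones(len(Z))]) @ w

train, test = slice(0, 200), slice(200, n)

# one model per descriptor set, then a meta-model fuses their predictions
m1 = ridge_fit(D1[train], activity[train])
m2 = ridge_fit(D2[train], activity[train])
fuser = ridge_fit(np.column_stack([m1(D1[train]), m2(D2[train])]),
                  activity[train])

P_test = np.column_stack([m1(D1[test]), m2(D2[test])])
rmse = np.sqrt(np.mean((fuser(P_test) - activity[test]) ** 2))
print("fused test RMSE:", round(float(rmse), 3))
```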

Cluster Domains in Binary Minimization Problems

Leonid B. Litinskii

It was previously found that, when minimizing a quadratic functional depending on a great number of binary variables, it is reasonable to use aggregated variables that join independent binary variables together in blocks (domains); one then succeeds in finding deeper local minima of the functional. In the present publication we investigate an algorithm for forming domains based on clustering of the connection matrix.

- Data Analysis | Pp. 638-647
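
A toy rendering of the domain idea: variables are greedily grouped by strong positive couplings in the connection matrix, and whole domains are flipped while the quadratic energy decreases. The grouping threshold is an assumption of this sketch; the paper's clustering of the connection matrix is more principled.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
J = rng.standard_normal((n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def energy(s):
    return -0.5 * s @ J @ s

# crude domain construction: group variables with strong positive couplings
unassigned, domains = set(range(n)), []
while unassigned:
    i = unassigned.pop()
    dom = [i] + [j for j in list(unassigned) if J[i, j] > 0.5]
    unassigned -= set(dom)
    domains.append(dom)

# local search over aggregated variables: flip whole domains at once
s = rng.choice([-1, 1], n)
improved = True
while improved:
    improved = False
    for dom in domains:
        s2 = s.copy()
        s2[dom] *= -1
        if energy(s2) < energy(s):
            s, improved = s2, True

print("domains:", len(domains), " final energy:", round(float(energy(s)), 2))
```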

MaxSet: An Algorithm for Finding a Good Approximation for the Largest Linearly Separable Set

Leonardo Franco; José Luis Subirats; José M. Jerez

Finding the largest linearly separable set of examples for a given Boolean function is an NP-hard problem that is relevant to neural network learning algorithms and to several problems that can be formulated as the minimization of a set of inequalities. We propose in this work a new algorithm based on finding a unate subset of the input examples, which is then used to train a perceptron to find an approximation for the largest linearly separable subset. The results of the new algorithm are compared, on a large set of benchmark functions, with those obtained by applying the Pocket learning algorithm directly to the whole set of inputs, and they show a clear improvement in the size of the linearly separable subset obtained.

- Data Analysis | Pp. 648-656
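
Since the paper uses the Pocket algorithm as its baseline, here is a minimal Pocket perceptron on a small non-separable Boolean function; the MaxSet unate-subset construction itself is not reproduced here, and the example function and iteration count are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# all examples of a 3-input Boolean function (not linearly separable)
X = np.array([[a, b, c, 1] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
y = np.array([1 if (r[0] ^ r[1]) or r[2] else -1 for r in X])

def pocket(X, y, iters=2000):
    """Pocket algorithm: perceptron that keeps the best weights seen so far."""
    w = np.zeros(X.shape[1])
    best_w, best_ok = w.copy(), 0
    for _ in range(iters):
        i = rng.integers(len(X))
        if np.sign(X[i] @ w) != y[i]:       # perceptron update on a mistake
            w = w + y[i] * X[i]
        ok = int((np.sign(X @ w) == y).sum())
        if ok > best_ok:                    # pocket the improved weights
            best_w, best_ok = w.copy(), ok
    return best_w, best_ok

w, ok = pocket(X, y)
print(f"largest separable subset found: {ok} of {len(X)} examples")
```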

Generalized Softmax Networks for Non-linear Component Extraction

Jörg Lücke; Maneesh Sahani

We develop a probabilistic interpretation of non-linear component extraction in neural networks that activate their hidden units according to a softmax-like mechanism. On the basis of a generative model that combines hidden causes using the max-function, we show how the extraction of input components in such networks can be interpreted as maximum-likelihood parameter optimization. A simple and neurally plausible Hebbian Δ-rule is derived. For approximately optimal learning, the activity of the hidden neural units is described by a generalized softmax function, and the classical softmax is recovered for very sparse input. We use the bars benchmark test to numerically verify our analytical results and to show the competitiveness of the derived learning algorithms.

- Data Analysis | Pp. 657-667
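
A loose caricature of the setting on the bars test mentioned above: hidden activations follow a softmax of the unit inputs and drive a Hebbian-style weight update. The sharpness `beta` and learning rate are invented, and this simplified competitive rule is not the paper's derived generalized softmax.

```python
import numpy as np

rng = np.random.default_rng(6)
D, H = 4, 8                     # 4x4 pixel grid, 8 hidden units (8 bars)

def bars_image(p=0.2):
    """Bars test: each horizontal/vertical bar appears independently."""
    img = np.zeros((D, D))
    for r in range(D):
        if rng.random() < p:
            img[r, :] = 1       # horizontal bar
    for c in range(D):
        if rng.random() < p:
            img[:, c] = 1       # vertical bar
    return img.ravel()

W = rng.uniform(0.1, 0.3, (H, D * D))   # non-negative generative weights
lr, beta = 0.05, 8.0

for _ in range(20000):
    y = bars_image()
    a = beta * (W @ y)
    g = np.exp(a - a.max())
    g /= g.sum()                              # softmax-like hidden activity
    W += lr * g[:, None] * (y[None, :] - W)   # Hebbian-style update
    W = np.clip(W, 1e-4, None)

print(np.round(W[0].reshape(D, D), 1))  # inspect one unit's receptive field
```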

Stochastic Weights Reinforcement Learning for Exploratory Data Analysis

Ying Wu; Colin Fyfe; Pei Ling Lai

We review a new form of immediate-reward reinforcement learning in which the individual unit is deterministic but has stochastic synapses. Four learning rules have been developed from this perspective, and we investigate their use in performing linear projection techniques such as principal component analysis, exploratory projection pursuit, and canonical correlation analysis. The method is very general and simply requires a reward function specific to the task we require the unit to perform. We also discuss how the method can be used to learn kernel mappings, and conclude by illustrating its use on a topology-preserving mapping.

- Data Analysis | Pp. 668-676
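
An illustrative single-unit example of the idea: a deterministic unit with stochastic synapses, an immediate reward equal to the normalized projected variance (so the optimum is the first principal component), and a REINFORCE-style update. The specific rule and constants below are assumptions, not one of the paper's four rules verbatim.

```python
import numpy as np

rng = np.random.default_rng(7)

# data with one dominant principal direction
C = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.standard_normal((20000, 2)) @ np.linalg.cholesky(C).T

w = rng.standard_normal(2)
sigma, lr, baseline = 0.1, 0.001, 0.0
for x in X:
    eps = sigma * rng.standard_normal(2)        # stochastic synapses
    ws = w + eps
    r = (ws @ x) ** 2 / (ws @ ws)               # reward: normalized variance
    baseline += 0.01 * (r - baseline)           # running reward baseline
    w += lr * (r - baseline) * eps / sigma**2   # REINFORCE-style update

w /= np.linalg.norm(w)
print("learned direction (up to sign):", np.round(w, 2))
print("top eigenvector of C:", np.round(np.linalg.eigh(C)[1][:, -1], 2))
```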

Post Nonlinear Independent Subspace Analysis

Zoltán Szabó; Barnabás Póczos; Gábor Szirtes; András Lőrincz

In this paper a generalization of Post Nonlinear Independent Component Analysis (PNL-ICA) to Post Nonlinear Independent Subspace Analysis (PNL-ISA) is presented. In this framework the sources to be identified can be multidimensional as well. For this generalization we prove a separability theorem: the ambiguities of this problem are essentially the same as those of linear Independent Subspace Analysis (ISA). Applying this result, we derive an algorithm that uses the mirror structure of the mixing system. Numerical simulations illustrate the efficiency of the algorithm.

- Data Analysis | Pp. 677-686
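
A sketch of the mirror structure's first stage under a hypothetical per-channel tanh distortion: each observed channel is gaussianized to approximately undo the post-nonlinearity, after which a linear ISA step (the subject of the paper, omitted here) would separate the subspaces. The sources, mixing matrix, and gaussianization heuristic are all assumptions of this sketch.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(8)
n = 20000

# two independent 2-D source subspaces, mixed linearly, then distorted
S = np.column_stack([rng.uniform(-1, 1, (n, 2)),    # subspace 1
                     rng.laplace(size=(n, 2))])     # subspace 2
A = rng.standard_normal((4, 4))
X = np.tanh(S @ A.T)        # post-nonlinear mixture: per-channel tanh

# mirror step 1: undo each channel's distortion by gaussianization
ranks = X.argsort(axis=0).argsort(axis=0)
U = (ranks + 0.5) / n                       # empirical CDF per channel
G = np.vectorize(NormalDist().inv_cdf)(U)   # approx. linear-mixture data

# mirror step 2: whiten; a linear ISA algorithm would follow from here
G -= G.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(G.T))
Z = (G @ eigvec) / np.sqrt(eigval)
print("whitened covariance close to I:",
      np.allclose(np.cov(Z.T), np.eye(4), atol=0.05))
```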

Algebraic Geometric Study of Exchange Monte Carlo Method

Kenji Nagata; Sumio Watanabe

In hierarchical learning machines such as neural networks, Bayesian learning provides better generalization performance than maximum-likelihood estimation. However, its accurate approximation by the Markov chain Monte Carlo (MCMC) method comes at a huge computational cost. The exchange Monte Carlo (EMC) method was proposed as an improvement on the MCMC method. Although its effectiveness has been shown not only in Bayesian learning but also in many other fields, the mathematical foundation of the EMC method has not yet been established. In this paper, we clarify the asymptotic behavior of the symmetrized Kullback divergence and the average exchange ratio, which are used as criteria for designing the EMC method.

- Estimation | Pp. 687-696
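
For orientation, a minimal exchange Monte Carlo (parallel tempering) run on a double-well energy: Metropolis moves within each tempered chain, plus neighbor swaps accepted with the standard exchange ratio min(1, exp((β_k − β_{k+1})(E_k − E_{k+1}))). The target energy, temperature ladder, and step sizes are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(9)

def E(x):                       # double-well energy with separated modes at ±2
    return (x**2 - 4) ** 2

betas = np.array([1.0, 0.3, 0.1, 0.03])     # inverse temperatures
xs = 3 * rng.standard_normal(len(betas))
samples = []

for _ in range(20000):
    # Metropolis update within each chain at its own temperature
    for k, beta in enumerate(betas):
        prop = xs[k] + 0.5 * rng.standard_normal()
        if rng.random() < np.exp(min(0.0, -beta * (E(prop) - E(xs[k])))):
            xs[k] = prop
    # exchange step: swap neighboring chains with the EMC acceptance ratio
    k = rng.integers(len(betas) - 1)
    log_r = (betas[k] - betas[k + 1]) * (E(xs[k]) - E(xs[k + 1]))
    if rng.random() < np.exp(min(0.0, log_r)):
        xs[k], xs[k + 1] = xs[k + 1], xs[k]
    samples.append(xs[0])

samples = np.array(samples[5000:])          # cold-chain samples after burn-in
print("fraction per mode:", np.mean(samples > 0), np.mean(samples < 0))
```

Without the exchange step, the cold chain (β = 1) would typically stay trapped in one well; the swaps let it mix between both modes, which is the behavior the paper's exchange-ratio analysis characterizes.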