Publications catalog - books



Artificial Neural Networks: ICANN 2007: 17th International Conference, Porto, Portugal, September 9-13, 2007, Proceedings, Part I

Joaquim Marques de Sá; Luís A. Alexandre; Włodzisław Duch; Danilo Mandic (eds.)

Conference: 17th International Conference on Artificial Neural Networks (ICANN), Porto, Portugal, September 9-13, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Pattern Recognition; Information Systems Applications (incl. Internet); Database Management; Neurosciences

Availability

Detected institution: none
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-74689-8

Electronic ISBN

978-3-540-74690-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Solving Deep Memory POMDPs with Recurrent Policy Gradients

Daan Wierstra; Alexander Foerster; Jan Peters; Jürgen Schmidhuber

This paper presents Recurrent Policy Gradients, a model-free reinforcement learning (RL) method that creates limited-memory stochastic policies for partially observable Markov decision problems (POMDPs) requiring long-term memory of past observations. The approach approximates a policy gradient for a Recurrent Neural Network (RNN) by backpropagating return-weighted characteristic eligibilities through time. Using a “Long Short-Term Memory” architecture, we are able to outperform other RL methods on two important benchmark tasks. Furthermore, we show promising results on a complex car driving simulation task.

- Estimation | Pp. 697-706
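The return-weighted eligibility idea at the core of the method can be sketched for a memoryless log-linear policy; this is our illustrative simplification of the recurrent case, and all names in it are ours:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def reinforce_update(theta, episodes, lr=0.1):
    """One REINFORCE step for a two-action logistic policy
    pi(a=1|x) = sigmoid(theta . x): accumulate the characteristic
    eligibility d/dtheta log pi(a|x) weighted by the episode return,
    then take a gradient-ascent step. The paper backpropagates such
    eligibilities through an RNN; this memoryless version only
    illustrates the return-weighted gradient estimator itself."""
    grad = [0.0] * len(theta)
    steps = 0
    for episode in episodes:                  # episode: [(x, a, return)]
        for x, a, ret in episode:
            p = sigmoid(sum(t * xi for t, xi in zip(theta, x)))
            for i, xi in enumerate(x):        # eligibility = (a - p) * x_i
                grad[i] += ret * (a - p) * xi
            steps += 1
    return [t + lr * g / steps for t, g in zip(theta, grad)]
```

Feeding episodes where action 1 earns a higher return than action 0 shifts `theta` so that action 1 becomes more probable.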

Soft Clustering for Nonparametric Probability Density Function Estimation

Ezequiel López-Rubio; Juan Miguel Ortiz-de-Lazcano-Lobato; Domingo López-Rodríguez; María del Carmen Vargas-González

We present a nonparametric probability density estimation model. The classical Parzen window approach builds a spherical Gaussian density around every input sample. Our method has a first stage in which hard neighbourhoods are determined for every sample. Soft clusters are then formed to merge the information coming from several hard neighbourhoods. Our proposal estimates the local principal directions to yield a specific Gaussian mixture component for each soft cluster. This allows it to outperform proposals that do not permit local parameter selection and/or lack smoothing strategies, such as manifold Parzen windows.

- Estimation | Pp. 707-716
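For reference, the classical Parzen-window baseline the abstract improves on can be sketched in one dimension; the function name and bandwidth are our choices:

```python
import math

def parzen_density(x, samples, h):
    """Classical Parzen-window estimate: place a Gaussian of
    bandwidth h around every 1-D sample and average the kernels.
    Returns the estimated density p(x)."""
    norm = 1.0 / (math.sqrt(2 * math.pi) * h)
    return sum(norm * math.exp(-0.5 * ((x - s) / h) ** 2)
               for s in samples) / len(samples)

# Density estimate at 0 from a small sample set
samples = [-1.2, -0.4, 0.1, 0.3, 0.9]
p = parzen_density(0.0, samples, h=0.5)
```

The estimate integrates to one by construction, since each kernel is itself a normalized density.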

Vector Field Approximation by Model Inclusive Learning of Neural Networks

Yasuaki Kuroe; Hajimu Kawakami

The problem of vector field approximation arises in a wide range of fields, such as motion control and computer vision. This paper proposes a method for reconstructing an entire continuous vector field from a sparse set of sample data by training neural networks. To make the approximation results possess the inherent properties of vector fields and to attain reasonable approximation accuracy with computational efficiency, we include a priori knowledge of those inherent properties in the learning problem of the neural networks, an approach we call model inclusive learning. An efficient learning algorithm is derived. Numerical experiments show that the proposed method makes it possible to reconstruct vector fields accurately and efficiently.

- Estimation | Pp. 717-726

Spectral Measures for Kernel Matrices Comparison

Javier González; Alberto Muñoz

With the emergence of data fusion techniques (kernel combinations, ensemble methods and boosting algorithms), the task of comparing distance/similarity/kernel matrices is becoming increasingly relevant. However, the choice of an appropriate metric for matrices involved in pattern recognition problems is far from trivial.

In this work we propose a general spectral framework for building metrics on matrix spaces. Within the general framework of matrix pencils, we propose a new metric for symmetric positive semi-definite matrices, called the Pencil Distance (PD). The generality of our approach is demonstrated by showing that the Kernel Alignment (KA) measure is a particular case of our spectral approach.

We illustrate the performance of the proposed measures using some classification problems.

- Estimation | Pp. 727-736
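The Kernel Alignment measure that the paper recovers as a special case is straightforward to compute; a minimal sketch, with kernel matrices represented as lists of rows:

```python
def kernel_alignment(K1, K2):
    """Empirical Kernel Alignment between two kernel matrices:
    KA = <K1, K2>_F / (||K1||_F * ||K2||_F),
    i.e. the cosine of the angle between the matrices under the
    Frobenius inner product; it lies in [0, 1] for PSD kernels."""
    dot = sum(a * b for r1, r2 in zip(K1, K2) for a, b in zip(r1, r2))
    n1 = sum(a * a for row in K1 for a in row) ** 0.5
    n2 = sum(b * b for row in K2 for b in row) ** 0.5
    return dot / (n1 * n2)
```

A kernel matrix is perfectly aligned with itself (KA = 1), and alignment drops as the two Gram matrices induce increasingly different geometries.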

A Novel and Efficient Method for Testing Non Linear Separability

David Elizondo; Juan Miguel Ortiz-de-Lazcano-Lobato; Ralph Birkenhead

The notion of linear separability is widely used in machine learning research. Learning algorithms that use this concept include neural networks (the Single Layer Perceptron and the Recursive Deterministic Perceptron) and kernel machines (Support Vector Machines). Several algorithms for testing linear separability exist. Some of these methods are computationally intensive, and several will converge if the classes are linearly separable but fail to converge otherwise. A fast and efficient test for non-linear separability is proposed, which can be used to pretest classification data sets for non-linear separability and thus avoid expensive computations. The test is based on the convex hull separability method but does not require computing the convex hull.

- Estimation | Pp. 737-746

A One-Step Unscented Particle Filter for Nonlinear Dynamical Systems

Nikolay Y. Nikolaev; Evgueni Smirnov

This paper proposes a one-step unscented particle filter for accurate nonlinear estimation. Its design involves the elaboration of a reliable one-step unscented filter that draws state samples deterministically for doing both the time and measurement updates, without linearization of the observation model. Empirical investigations show that the one-step unscented particle filter compares favourably to relevant filters on nonlinear dynamic systems modelling.

- Estimation | Pp. 747-756
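The deterministic state samples the filter relies on are standard unscented-transform sigma points. For a scalar state they can be sketched as follows; the parameter defaults are the conventional ones, not taken from the paper:

```python
import math

def sigma_points_1d(mean, var, alpha=1e-3, beta=2.0, kappa=0.0):
    """Unscented-transform sigma points and weights for a scalar
    state (n = 1). The points are chosen deterministically so that
    their weighted sample mean and variance reproduce (mean, var)
    exactly, avoiding any linearization of the model."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    points = [mean, mean + spread, mean - spread]
    wm = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * 2  # mean weights
    wc = list(wm)                                          # covariance weights
    wc[0] += 1 - alpha ** 2 + beta
    return points, wm, wc
```

Propagating these three points through a nonlinearity and re-averaging with the same weights gives the unscented approximation of the transformed mean and variance.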

Spike-Timing-Dependent Synaptic Plasticity to Learn Spatiotemporal Patterns in Recurrent Neural Networks

Masahiko Yoshioka; Silvia Scarpetta; Maria Marinaro

Assuming an asymmetric time window for spike-timing-dependent synaptic plasticity (STDP), we study spatiotemporal learning in recurrent neural networks. We first show numerical simulations of spiking neural networks in which spatiotemporal Poisson patterns (i.e., random spatiotemporal patterns generated by independent Poisson processes) are successfully memorized by the STDP-based learning rule. We then discuss the underlying mechanism of STDP-based learning, drawing on our recent analysis of associative memory in analog neural networks for periodic spatiotemporal patterns. Order parameter dynamics in the analog neural networks explains the change of time scale in the retrieval process and the shape of the STDP time window that is optimal for encoding a large number of spatiotemporal patterns. The analysis further elucidates a phase transition due to destabilization of the retrieval state. These findings on analog neural networks are consistent with previous results on spiking neural networks. This STDP-based spatiotemporal associative memory may give some insight into recent experimental results in which spatiotemporal patterns are retrieved at various time scales.

- Spatial and Spatio-Temporal Learning | Pp. 757-766
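An asymmetric exponential STDP window of the kind the abstract assumes is commonly written as follows; the constants here are illustrative textbook values, not the paper's:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP time window: weight change as a function of
    dt = t_post - t_pre (in ms). Potentiation when the presynaptic
    spike precedes the postsynaptic one (dt > 0), depression when it
    follows (dt <= 0); both effects decay exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

The sign flip at dt = 0 is what makes the window asymmetric, and its shape determines which spatiotemporal patterns the recurrent network can encode stably.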

A Distributed Message Passing Algorithm for Sensor Localization

Max Welling; Joseph J. Lim

We propose a fully distributed message passing algorithm based on expectation propagation for sensor localization. Sensors perform noisy measurements of their mutual distances and relative angles. These measurements form the basis of an iterative, local (i.e., distributed) algorithm that computes the sensors' locations together with uncertainties for these estimates. This approach offers a distributed, computationally efficient and flexible framework for information fusion in sensor networks.

- Spatial and Spatio-Temporal Learning | Pp. 767-775

An Analytical Model of Divisive Normalization in Disparity-Tuned Complex Cells

Wolfgang Stürzl; Hanspeter A. Mallot; A. Knoll

Based on the energy model for disparity-tuned neurons, we calculate probability density functions of complex cell activity for random-dot stimuli. We investigate the effects of normalization and give analytical expressions for the disparity tuning curve and its variance. We show that while normalized and non-normalized complex cells have similar tuning curves, the variance is significantly lower for normalized complex cells, which makes disparity estimation more reliable. The results of the analytical calculations are compared to computer simulations.

- Spatial and Spatio-Temporal Learning | Pp. 776-787
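The effect of divisive normalization on an energy response can be sketched as follows. The exact normalization pool in the paper differs, so this form is only illustrative; the function name and the semi-saturation constant are ours:

```python
def normalized_energy(e_left, e_right, sigma2=0.1):
    """Binocular energy response divided by the pooled monocular
    energies plus a semi-saturation constant sigma2. The division
    bounds the response (here below 2), which is what reduces the
    response variance relative to the raw, unbounded energy."""
    energy = (e_left + e_right) ** 2
    return energy / (e_left ** 2 + e_right ** 2 + sigma2)
```

For matched monocular inputs the normalized response saturates as input energy grows, while anti-phase inputs cancel, so the disparity tuning curve keeps its shape but its variance shrinks.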

Automatic Design of Modular Neural Networks Using Genetic Programming

Naser NourAshrafoddin; Ali R. Vahdat; Mohammad Mehdi Ebadzadeh

The traditional trial-and-error approach to designing neural networks is time-consuming and does not guarantee the best feasible network for a given application, so automatic approaches have gained importance and popularity. Moreover, traditional (non-modular) neural networks cannot solve complex problems well: such problems introduce a wide range of overlap in the input space which, in turn, causes large deviations from efficient learning in different regions, whereas a modular neural network reduces these effects via a divide-and-conquer approach. In this paper we introduce a different approach to the autonomous design of modular neural networks, using genetic programming to design their architectures, transfer functions and connection weights. Our approach offers important advantages over existing methods for automated neural network design: first, it prefers smaller modules to bigger ones; second, it allows neurons even in the same layer to use different transfer functions; and third, it does not require converting each individual into a neural network to obtain its fitness value during the evolution process. Several tests were performed on problems based on some of the most popular test databases. Results show that using genetic programming for automatic design of neural networks is an efficient method, comparable with existing techniques.

- Evolutionary Computing | Pp. 788-798