Publications catalogue - books



Computational and Ambient Intelligence: 9th International Work-Conference on Artificial Neural Networks, IWANN 2007, San Sebastián, Spain, June 20-22, 2007. Proceedings

Francisco Sandoval; Alberto Prieto; Joan Cabestany; Manuel Graña (eds.)

At conference: 9th International Work-Conference on Artificial Neural Networks (IWANN). San Sebastián, Spain. June 20, 2007 - June 22, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Image Processing and Computer Vision; Pattern Recognition; Computational Biology/Bioinformatics

Availability
Detected institution: none
Year of publication: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-73006-4

Electronic ISBN

978-3-540-73007-1

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Generating Random Deviates Consistent with the Long Term Behavior of Stochastic Search Processes in Global Optimization

Arturo Berrones

A new stochastic search algorithm is proposed which, in the first instance, is capable of giving a probability density from which populations of points consistent with the global properties of the associated optimization problem can be drawn. The procedure is based on the Fokker–Planck equation, which is a linear differential equation for the density. The algorithm is constructed in such a way that it involves only linear operations and a relatively small number of evaluations of the given cost function.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 1-7
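As a rough illustration of the idea sketched in the abstract above (not the linear procedure proposed in the paper), the snippet below draws a population of candidate points from the stationary Fokker–Planck density p(x) ∝ exp(-V(x)/D) associated with a toy one-dimensional cost V. The cost, grid and diffusion constant are arbitrary example choices.

```python
import numpy as np

def stationary_density(cost, grid, diffusion=0.5):
    """Stationary Fokker-Planck density p(x) ~ exp(-V(x)/D) on a 1-D grid."""
    v = np.array([cost(x) for x in grid])
    p = np.exp(-(v - v.min()) / diffusion)   # shift by the minimum for stability
    return p / np.trapz(p, grid)             # normalize to integrate to 1

def draw_population(cost, grid, size=1000, diffusion=0.5, rng=None):
    """Draw candidate points with probabilities given by the stationary density."""
    rng = np.random.default_rng() if rng is None else rng
    p = stationary_density(cost, grid, diffusion)
    return rng.choice(grid, size=size, p=p / p.sum())

if __name__ == "__main__":
    cost = lambda x: x**4 - 3 * x**2 + 0.5 * x      # toy multimodal cost
    grid = np.linspace(-3.0, 3.0, 2001)
    pop = draw_population(cost, grid, size=5000)
    print("population concentrates near the global minimum:", np.median(pop))
```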

Dynamics of Neural Networks - Some Qualitative Properties

Daniela Danciu; Vladimir Răsvan

All neural networks, both natural and artificial, are characterized by two kinds of dynamics. The first one is concerned with what we would call “learning dynamics”, in fact the sequential (discrete time) dynamics of the choice of synaptic weights. The second one is the intrinsic dynamics of the neural network viewed as a dynamical system after the weights have been established by learning. The paper deals with the second kind of dynamics. Since the emergent computational capabilities of a recurrent neural network can be achieved provided it has suitable dynamical properties when viewed as a system with several equilibria, the paper deals with those qualitative properties connected to the achievement of such dynamical properties, more precisely the gradient-like behavior. In the case of neural networks with delays, these aspects are reformulated in accordance with the state of the art of the theory of delay dynamical systems.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 8-15
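For readers unfamiliar with gradient-like behaviour in systems with several equilibria, the sketch below simulates a classical Hopfield-type network (symmetric weights, zero diagonal, asynchronous sign updates) and checks that its energy never increases. This is a textbook illustration of the property discussed above, not the delay-system analysis carried out in the paper.

```python
import numpy as np

def hopfield_energy(state, W, b):
    """Energy of a Hopfield network: E = -1/2 s^T W s - b^T s."""
    return -0.5 * state @ W @ state - b @ state

def run_async(W, b, state, steps=200, rng=None):
    """Asynchronous sign updates; with symmetric W the energy never increases."""
    rng = np.random.default_rng() if rng is None else rng
    energies = [hopfield_energy(state, W, b)]
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1.0 if W[i] @ state + b[i] >= 0 else -1.0
        energies.append(hopfield_energy(state, W, b))
    return state, energies

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8))
    W = (A + A.T) / 2
    np.fill_diagonal(W, 0.0)                     # symmetric weights, zero diagonal
    b = np.zeros(8)
    s0 = rng.choice([-1.0, 1.0], 8)
    s, E = run_async(W, b, s0, rng=rng)
    print("energy is non-increasing:",
          all(e2 <= e1 + 1e-12 for e1, e2 in zip(E, E[1:])))
```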

A Comparative Study of PCA, ICA and Class-Conditional ICA for Naïve Bayes Classifier

Liwei Fan; Kim Leng Poh

The performance of the Naïve Bayes classifier can be improved by appropriate preprocessing procedures. This paper presents a comparative study of three preprocessing procedures, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and class-conditional ICA, for the Naïve Bayes classifier. It is found that all three procedures keep improving the performance of the Naïve Bayes classifier as the number of attributes increases. Although class-conditional ICA has been found to be superior to PCA and ICA in most cases, it may not be suitable when the sample size for each class is not large enough.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 16-22
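A minimal sketch of the kind of comparison described above, assuming scikit-learn is available: PCA and FastICA are used as preprocessing steps in front of a Gaussian Naïve Bayes classifier and compared by cross-validation on a standard dataset. Class-conditional ICA (one unmixing model per class) has no off-the-shelf estimator and is omitted; the dataset and number of components are arbitrary example choices.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
pipelines = {
    "raw": make_pipeline(StandardScaler(), GaussianNB()),
    "PCA": make_pipeline(StandardScaler(), PCA(n_components=8), GaussianNB()),
    "ICA": make_pipeline(StandardScaler(),
                         FastICA(n_components=8, random_state=0), GaussianNB()),
}
for name, pipe in pipelines.items():
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name:>4}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```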

Effect of Increasing Inhibitory Inputs on Information Processing Within a Small Network of Spiking Neurons

Roberta Sirovich; Laura Sacerdote; Alessandro E. P. Villa

In this paper we study the activity of a spiking neuron A that receives background input from the network in which it is embedded and strong inputs from an excitatory unit E and an inhibitory unit I. The membrane potential of the neuron A is described by a jump diffusion model. Several types of interspike interval distributions of the strong excitatory inputs are considered as the intensity of the Poissonian inhibitory inputs increases. It is shown that, independently of their distribution, the excitatory inputs are transmitted more efficiently as inhibition increases to larger intensities.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 23-30
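A minimal simulation in the spirit of the model described above, assuming a simplified jump-diffusion membrane potential (Ornstein-Uhlenbeck drift plus Poissonian excitatory and inhibitory jumps with a threshold-and-reset spike rule). All rates, amplitudes and thresholds are invented for illustration and do not come from the paper.

```python
import numpy as np

def jump_diffusion_isi(rate_exc=30.0, rate_inh=30.0, a_exc=2.0, a_inh=-2.0,
                       mu=0.5, sigma=1.0, tau=10.0, threshold=8.0,
                       dt=0.05, t_max=2_000.0, seed=0):
    """Simulate an Ornstein-Uhlenbeck membrane potential (mV, ms) with Poissonian
    excitatory/inhibitory jumps and return the interspike intervals in ms."""
    rng = np.random.default_rng(seed)
    v, t, last_spike, isis = 0.0, 0.0, 0.0, []
    while t < t_max:
        # leaky diffusion part (Euler-Maruyama step)
        v += (-v / tau + mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # Poissonian jumps from the strong excitatory and inhibitory units (rates in Hz)
        v += a_exc * rng.poisson(rate_exc * dt / 1000.0)
        v += a_inh * rng.poisson(rate_inh * dt / 1000.0)
        t += dt
        if v >= threshold:                      # spike and reset
            isis.append(t - last_spike)
            last_spike, v = t, 0.0
    return np.array(isis)

if __name__ == "__main__":
    for lam_i in (10.0, 50.0, 100.0):           # increasing inhibitory intensity
        isi = jump_diffusion_isi(rate_inh=lam_i)
        mean_isi = isi.mean() if isi.size else float("nan")
        print(f"inhibitory rate {lam_i:5.1f} Hz -> {isi.size:4d} spikes, "
              f"mean ISI {mean_isi:8.2f} ms")
```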

An Efficient VAD Based on a Hang-Over Scheme and a Likelihood Ratio Test

O. Pernía; J. M. Górriz; J. Ramírez; C. G. Puntonet; I. Turias

The emerging applications of wireless speech communication are demanding increasing levels of performance in adverse noise environments, together with the design of high response rate speech processing systems. This is a serious obstacle to meeting the demands of modern applications, and therefore these systems often need a noise reduction algorithm working in combination with a precise voice activity detector (VAD). This paper presents a new VAD for improving speech detection robustness in noisy environments and the performance of speech recognition systems. The algorithm defines an optimum likelihood ratio test (LRT) involving Multiple and correlated Observations (MO) and assuming a jointly Gaussian probability density function (jGpdf). An analysis of the methodology for two and three observations shows the robustness of the proposed approach by means of a clear reduction of the classification error as the number of observations is increased. The algorithm is also compared to different VAD methods, including the G.729, AMR and AFE standards as well as recently reported algorithms, showing a sustained advantage in speech/non-speech detection accuracy and speech recognition performance.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 31-38
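The sketch below illustrates two of the ingredients mentioned in the abstract, a Gaussian log-likelihood ratio test per frame and a simple hang-over scheme, under strongly simplified assumptions (single-frame, zero-mean Gaussian models with known variances). It is not the multiple-observation jGpdf test of the paper nor any of the standardized VADs; the synthetic frames and thresholds are illustrative only.

```python
import numpy as np

def frame_log_lrt(frames, noise_var, speech_var):
    """Per-frame Gaussian log-likelihood ratio log p(x|speech) - log p(x|noise),
    for zero-mean samples, summed over the samples of each frame."""
    frames = np.asarray(frames, dtype=float)
    e = np.sum(frames**2, axis=1)
    n = frames.shape[1]
    return (0.5 * n * np.log(noise_var / speech_var)
            + 0.5 * e * (1.0 / noise_var - 1.0 / speech_var))

def vad_with_hangover(frames, noise_var, speech_var, threshold=0.0, hangover=8):
    """Flag a frame as speech if the LRT exceeds the threshold, then keep the
    decision active for `hangover` extra frames to cover weak speech tails."""
    llr = frame_log_lrt(frames, noise_var, speech_var)
    decisions, counter = np.zeros(len(llr), dtype=bool), 0
    for i, score in enumerate(llr):
        counter = hangover if score > threshold else max(counter - 1, 0)
        decisions[i] = score > threshold or counter > 0
    return decisions

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise = rng.normal(0.0, 1.0, (100, 160))      # noise-only frames
    speech = rng.normal(0.0, 3.0, (40, 160))      # crude stand-in for speech frames
    frames = np.vstack([noise[:50], speech, noise[50:]])
    d = vad_with_hangover(frames, noise_var=1.0, speech_var=9.0)
    print("frames flagged as speech:", int(d.sum()), "of", len(d))
```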

Analysis of Hebbian Models with Lateral Weight Connections

Pedro J. Zufiria; J. Andrés Berzal

In this paper, the behavior of some Hebbian artificial neural networks with lateral weights is analyzed. Hebbian neural networks are employed in communications and signal processing applications for implementing on-line Principal Component Analysis (PCA). Different improvements over the original Oja model have been developed in the last two decades. Among them, models with lateral weights have been designed to directly provide the eigenvectors of the correlation matrix [1,5,6,9]. The behavior of Hebbian models has traditionally been studied by resorting to an associated continuous-time formulation under some questionable assumptions which are not guaranteed in real implementations. In this paper we employ the alternative deterministic discrete-time (DDT) formulation that characterizes the average evolution of these nets and captures the influence of the time evolution of the learning gains [12]. The dynamic behavior of some of these Hebbian models is analytically characterized in this context and several simulations complement this comparative study.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 39-46
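As background for the abstract above, the snippet below implements the basic single-unit Oja rule, which extracts the leading eigenvector of the data correlation matrix. The lateral-weight variants and the DDT analysis studied in the paper are not reproduced here; data and learning rate are arbitrary example choices.

```python
import numpy as np

def oja_first_component(X, lr=0.01, epochs=50, seed=0):
    """Oja's rule: w <- w + lr * y * (x - y * w), with y = w.x.
    Converges (on average) to the leading eigenvector of the correlation matrix."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = w @ x
            w += lr * y * (x - y * w)
    return w / np.linalg.norm(w)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # correlated 2-D data whose leading direction is close to (1, 1)/sqrt(2)
    z = rng.standard_normal(2000)
    X = np.column_stack([z + 0.2 * rng.standard_normal(2000),
                         z + 0.2 * rng.standard_normal(2000)])
    w = oja_first_component(X)
    eigvec = np.linalg.eigh(X.T @ X / len(X))[1][:, -1]
    print("cosine with leading eigenvector:", abs(w @ eigvec))
```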

Power Quality Event Identification Using Higher-Order Statistics and Neural Classifiers

Juan-José González de-la-Rosa; Carlos G. Puntonet; Antonio Moreno Muñoz

This paper deals with power-quality (PQ) event detection, classification and characterization using higher-order sliding cumulants to examine the signals. Their maxima and minima are the main features, and the classification strategy is based on competitive layers. Concretely, we concentrate on the task of differentiating two types of transients (short duration and long duration). By measuring the maxima and minima of the fourth-order central cumulants, we build the two-dimensional measured feature vector. Cumulants are calculated over high-pass digitally filtered signals, to reject the low-frequency 50-Hz component. We have observed that the minima and maxima measurements produce clusters in the feature space for fourth-order cumulants; third-order cumulants are not capable of differentiating these two very similar PQ events. This experiment aims to lay the foundations of an automatic procedure for PQ event detection.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 47-54
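A hedged sketch of the feature-extraction step described above: high-pass filter away the 50 Hz component, compute the fourth-order central cumulant on sliding windows, and keep its maximum and minimum as a two-dimensional feature. Filter order, cut-off, sampling rate and window length are arbitrary example values, and the competitive-layer classifier is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def sliding_fourth_cumulant(x, window=128, step=16):
    """Fourth-order central cumulant c4 = m4 - 3*m2^2 on sliding windows."""
    vals = []
    for start in range(0, len(x) - window + 1, step):
        w = x[start:start + window] - np.mean(x[start:start + window])
        m2, m4 = np.mean(w**2), np.mean(w**4)
        vals.append(m4 - 3.0 * m2**2)
    return np.array(vals)

def pq_feature_vector(signal, fs=10_000.0, mains=50.0, window=128):
    """High-pass filter away the mains component, then return the (max, min)
    of the sliding fourth-order cumulant as a 2-D feature vector."""
    b, a = butter(4, 10 * mains, btype="highpass", fs=fs)
    filtered = filtfilt(b, a, signal)
    c4 = sliding_fourth_cumulant(filtered, window=window)
    return np.array([c4.max(), c4.min()])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.arange(0, 1.0, 1.0 / 10_000.0)
    clean = np.sin(2 * np.pi * 50.0 * t)
    transient = clean.copy()
    transient[4000:4080] += 0.8 * rng.standard_normal(80)   # short-duration event
    print("clean    :", pq_feature_vector(clean + 0.01 * rng.standard_normal(t.size)))
    print("transient:", pq_feature_vector(transient + 0.01 * rng.standard_normal(t.size)))
```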

Bio-inspired Memory Generation by Recurrent Neural Networks

Manuel G. Bedia; Juan M. Corchado; Luis F. Castillo

The knowledge about higher brain centres in insects and how they affect the insect's behaviour has increased significantly in recent years through experimental investigations. A large body of evidence suggests that higher brain centres of insects are important for learning, short-term and long-term memory, and play an important role in context generalisation. In this paper, we focus on artificial recurrent neural networks that model non-linear systems, in particular Lotka-Volterra systems. After studying the typical behavior and processes that emerge in appropriate Lotka-Volterra systems, we analyze the relationship between sequential memory encoding processes and the higher brain centres in insects in order to propose a way to develop a general 'insect-brain' control architecture to be implemented on simple robots.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 55-62
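To make the Lotka-Volterra connection concrete, the snippet below integrates a small generalized Lotka-Volterra system whose asymmetric inhibition produces sequential switching of the dominant unit, a behaviour often discussed as a substrate for sequence memory. The coupling matrix is an illustrative May-Leonard-type choice, not the architecture proposed in the paper.

```python
import numpy as np

def simulate_glv(A, b, x0, dt=0.01, steps=20_000):
    """Euler integration of generalized Lotka-Volterra dynamics
    dx_i/dt = x_i * (b_i + sum_j A_ij x_j), clipped to stay non-negative."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for k in range(steps):
        x = np.maximum(x + dt * x * (b + A @ x), 0.0)
        traj[k] = x
    return traj

if __name__ == "__main__":
    # asymmetric inhibition of this kind produces sequential switching between
    # the units ("winnerless competition") rather than a single stable winner
    A = -np.array([[1.0, 1.7, 0.5],
                   [0.5, 1.0, 1.7],
                   [1.7, 0.5, 1.0]])
    b = np.ones(3)
    traj = simulate_glv(A, b, x0=[0.3, 0.2, 0.1])
    print("dominant unit at sampled times:", np.argmax(traj[::4000], axis=1))
```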

Non-parametric Residual Variance Estimation in Supervised Learning

Elia Liitiäinen; Amaury Lendasse; Francesco Corona

The residual variance estimation problem is well known in statistics and machine learning, with many applications, for example in the field of nonlinear modelling. In this paper, we show that the problem can be formulated in a general supervised learning context. Emphasis is on two widely used non-parametric techniques known as the Delta test and the Gamma test. Under some regularity assumptions, a novel proof of convergence of the two estimators is formulated; the estimators are subsequently verified and compared on two meaningful study cases.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 63-71
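A minimal sketch of the Delta test mentioned above, assuming scipy is available: the residual (noise) variance is estimated as half the mean squared difference between the outputs of nearest-neighbour input points. The Gamma test and the convergence analysis of the paper are not covered; the test function and noise level are arbitrary example choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def delta_test(X, y):
    """Delta test estimate of the residual variance:
    0.5 * mean of (y[nn(i)] - y[i])^2, with nn(i) the nearest neighbour of X[i]."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    _, idx = cKDTree(X).query(X, k=2)   # k=2: the first neighbour is the point itself
    return 0.5 * np.mean((y[idx[:, 1]] - y) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, size=(5000, 2))
    noise_var = 0.05
    y = np.sin(3 * X[:, 0]) * X[:, 1] + rng.normal(0, np.sqrt(noise_var), len(X))
    print(f"Delta test estimate: {delta_test(X, y):.4f} "
          f"(true residual variance {noise_var})")
```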

A Study on the Use of Statistical Tests for Experimentation with Neural Networks

Julián Luengo; Salvador García; Francisco Herrera

In this work, we focus on the use of statistical techniques for the behavior analysis of Artificial Neural Networks in classification tasks. A study of the use of non-parametric tests is presented, using some well-known neural network models. The results show the need for non-parametric statistics, because the Artificial Neural Networks used do not satisfy the hypotheses required by classical parametric tests.

- Theoretical Concepts and Neuro Computational Formulations | Pp. 72-79
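A minimal sketch of the kind of non-parametric comparison the abstract argues for, assuming scipy is available: a Wilcoxon signed-rank test for a pairwise comparison and a Friedman test for several classifiers over the same datasets. The accuracy values below are made-up placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# per-dataset accuracies of three hypothetical classifiers over ten datasets
acc_mlp = np.array([0.91, 0.84, 0.78, 0.88, 0.93, 0.81, 0.76, 0.89, 0.85, 0.90])
acc_rbf = np.array([0.89, 0.86, 0.74, 0.85, 0.92, 0.79, 0.77, 0.86, 0.83, 0.88])
acc_lvq = np.array([0.87, 0.82, 0.73, 0.84, 0.90, 0.78, 0.74, 0.85, 0.81, 0.86])

# Wilcoxon signed-rank test for a pairwise comparison (no normality assumption)
stat, p = wilcoxon(acc_mlp, acc_rbf)
print(f"Wilcoxon MLP vs RBF: statistic={stat:.1f}, p-value={p:.3f}")

# Friedman test for comparing all three classifiers over the same datasets
stat, p = friedmanchisquare(acc_mlp, acc_rbf, acc_lvq)
print(f"Friedman over three models: statistic={stat:.2f}, p-value={p:.3f}")
```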