Publications catalog - books

Advances in Neural Networks: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007, Proceedings, Part I

Derong Liu ; Shumin Fei ; Zeng-Guang Hou ; Huaguang Zhang ; Changyin Sun (eds.)

In conference: 4th International Symposium on Neural Networks (ISNN). Nanjing, China. June 3-7, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Computer Communication Networks; Algorithm Analysis and Problem Complexity; Discrete Mathematics in Computer Science; Pattern Recognition

Availability

Detected institution: Not detected
Publication year: 2007
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-72382-0

Electronic ISBN

978-3-540-72383-7

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

A Robust Online Sequential Extreme Learning Machine

Minh-Tuan T. Hoang; Hieu T. Huynh; Nguyen H. Vo; Yonggwan Won

Online-sequential extreme learning machine (OS-ELM) provides a good solution for online learning with the extreme learning machine approach for single-hidden-layer feedforward networks. However, the algorithm tends to be data-dependent, i.e., the bias values need to be adjusted for each particular problem. In this paper, we propose an enhancement to OS-ELM, referred to as robust OS-ELM (ROS-ELM), which includes a systematic method for selecting the bias so that the bias follows the input weights. Hence, the proposed algorithm works well on every benchmark dataset. ROS-ELM retains all the advantages of OS-ELM, i.e., the ability to learn one-by-one or chunk-by-chunk with fixed or varying chunk size. Moreover, it outperforms OS-ELM and produces better generalization performance on benchmark datasets.

- Learning and Approximation | Pp. 1077-1086
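
The sequential phase of OS-ELM is a recursive least-squares update of the output weights, which is what makes one-by-one and chunk-by-chunk learning possible. Below is a minimal numpy sketch of that standard OS-ELM recursion, assuming a random sigmoid hidden layer and a small ridge term for numerical stability; the bias-selection rule that distinguishes ROS-ELM is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    """Hidden-layer output matrix H of a single-hidden-layer feedforward network."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid activation (assumed)

def oselm_init(X0, T0, n_hidden):
    """Initialization phase on a small boot batch (X0, T0)."""
    W = rng.normal(size=(X0.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)      # random biases
    H0 = hidden(X0, W, b)
    P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))  # ridge for stability
    beta = P @ H0.T @ T0                           # initial output weights
    return W, b, P, beta

def oselm_update(Xk, Tk, W, b, P, beta):
    """Sequential phase: fold in one chunk (Xk, Tk) of any size."""
    H = hidden(Xk, W, b)
    K = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
    P = P - P @ H.T @ K @ H @ P                    # recursive least-squares step
    beta = beta + P @ H.T @ (Tk - H @ beta)
    return P, beta
```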

Intelligence Through Interaction: Towards a Unified Theory for Learning

Ah-Hwee Tan; Gail A. Carpenter; Stephen Grossberg

Machine learning, a cornerstone of intelligent systems, has typically been studied in the context of specific tasks, including clustering (unsupervised learning), classification (supervised learning), and control (reinforcement learning). This paper presents a learning architecture within which a universal adaptation mechanism unifies a rich set of traditionally distinct learning paradigms, including learning by matching, learning by association, learning by instruction, and learning by reinforcement. In accordance with the notion of embodied intelligence, such a learning theory provides a computational account of how an autonomous agent may acquire the knowledge of its environment in a real-time, incremental, and continuous manner. Through a case study on a minefield navigation domain, we illustrate the efficacy of the proposed model, the learning paradigms encompassed, and the various types of knowledge learned.

- Learning and Approximation | Pp. 1094-1103

Uniform Approximation Capabilities of Sum-of-Product and Sigma-Pi-Sigma Neural Networks

Jinling Long; Wei Wu; Dong Nan

Investigated in this paper are the uniform approximation capabilities of sum-of-product (SOPNN) and sigma-pi-sigma (SPSNN) neural networks. It is proved that the set of functions generated by an SOPNN with its activation function in C(ℝ) is dense in C(K) for any compact set K ⊂ ℝ^d, if and only if the activation function is not a polynomial. It is also shown that if the activation function of an SPSNN is in C(ℝ), then the functions generated by the SPSNN are dense in C(K) if and only if the activation function is not a constant.

- Learning and Approximation | Pp. 1110-1116
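
Concretely, the function classes in question can be written as follows. This is a hedged rendering under a common convention for these architectures, not notation taken verbatim from the paper:

```latex
% Assumed forms: an SPSNN sums products of sigma units (inner weighted sums
% passed through the activation g); an SOPNN applies g coordinate-wise,
% with no inner weighted sum.
\[
  \mathrm{SPSNN}:\; N(x) = \sum_{k=1}^{K} \prod_{j=1}^{J}
      g\!\bigl(w_{kj}^{\top}x + \theta_{kj}\bigr),
  \qquad
  \mathrm{SOPNN}:\; N(x) = \sum_{k=1}^{K} \prod_{i=1}^{d}
      g\!\bigl(w_{ki}\,x_i + \theta_{ki}\bigr).
\]
```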

Neuro-electrophysiological Argument on Energy Coding

Rubin Wang; Zhikang Zhang

Based on an analysis of both neuro-electrophysiological experimental data and the biophysical properties of neurons, in an earlier research paper we proposed a new biophysical model that reflects the property of energy coding in neuronal activity. Building on that work, in this paper the proposed biophysical model reproduces the membrane potentials and the depolarizing membrane current from neuro-electrophysiological experimental data. In combination with our previous research results, the proposed biophysical model is again demonstrated to be more effective than known biophysical models of neurons.

- Learning and Approximation | Pp. 1127-1134

Statistical Neurodynamics for Sequence Processing Neural Networks with Finite Dilution

Pan Zhang; Yong Chen

We extend statistical neurodynamics to study the transient dynamics of sequence processing neural networks with finite dilution, and the theoretical results are supported by extensive numerical simulations. It is found that the order parameter equations are completely equivalent to those of the generating functional method, which means that the crosstalk noise follows a normal distribution even when the retrieval process fails. To verify the Gaussian assumption on the crosstalk noise, we numerically obtain its cumulants, and the third- and fourth-order cumulants are found to be indeed zero even in the non-retrieval case.

- Learning and Approximation | Pp. 1144-1152
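
The Gaussianity check described above is easy to illustrate numerically: for normally distributed data, every cumulant above the second vanishes, so unbiased sample estimates of the third and fourth cumulants (k-statistics) should be near zero. A small sketch, with synthetic Gaussian data standing in for the measured crosstalk noise:

```python
import numpy as np
from scipy.stats import kstat  # k-statistic: unbiased estimator of the n-th cumulant

rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=1.0, size=100_000)  # stand-in for crosstalk noise

print(kstat(noise, 3))  # third cumulant, ~0 for Gaussian data
print(kstat(noise, 4))  # fourth cumulant, ~0 for Gaussian data
```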

A Novel Elliptical Basis Function Neural Networks Model Based on a Hybrid Learning Algorithm

Ji-Xiang Du; Guo-Jun Zhang; Zeng-Fu Wang

In this paper, a novel elliptical basis function neural network model (EBFNN) based on a hybrid learning algorithm (HLA) is proposed. First, a geometric analytic algorithm is applied to construct the hyper-ellipsoid units of the hidden layer of the EBFNN, i.e., to initialize the structure of the EBFNN. Then, the hybrid learning algorithm is applied to adjust the centers and the shape parameters. The experimental results demonstrate that the proposed hybrid learning algorithm for the EBFNN model is feasible and efficient, and that the EBFNN is not only parsimonious but also has better generalization performance than the RBFNN.

- Learning and Approximation | Pp. 1153-1161
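
An elliptical basis function unit generalizes the spherical Gaussian RBF with a full shape matrix, so its level sets are hyper-ellipsoids rather than spheres. A minimal sketch of one unit follows; the center and shape matrix here are placeholder values, not the geometry-analytic initialization or HLA tuning described in the paper.

```python
import numpy as np

def ebf_unit(x, c, S):
    """exp(-0.5 (x-c)^T S^{-1} (x-c)); a spherical S recovers an ordinary RBF unit."""
    d = x - c
    return np.exp(-0.5 * d @ np.linalg.solve(S, d))

x = np.array([1.0, 2.0])
c = np.array([0.0, 0.0])                    # unit center (placeholder)
S = np.array([[2.0, 0.3], [0.3, 0.5]])      # ellipsoidal shape matrix (placeholder)
print(ebf_unit(x, c, S))
```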

A Multi-Instance Learning Algorithm Based on Normalized Radial Basis Function Network

Yu-Mei Chai; Zhi-Wu Yang

Multiple-instance learning is increasingly becoming one of the most promising research areas in machine learning. In this paper, a new algorithm named NRBF-MI is proposed for multi-instance learning based on a normalized radial basis function network. The algorithm defines a Compact Neighborhood of bags, on which a new method is designed for training the network structure of NRBF-MI. The behavior of the kernel function radius and its influence are analyzed. Furthermore, a new kernel function is defined for dealing with labeled bags. Experimental results show that NRBF-MI is a highly efficient algorithm for multi-instance learning.

- Learning and Approximation | Pp. 1162-1172
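
The "normalized" in NRBF means each kernel response is divided by the sum of all responses, so the hidden activations form a partition of unity. The sketch below shows only that basic forward pass, which the abstract builds on; the bag-level kernel and the Compact Neighborhood training procedure from the paper are not shown.

```python
import numpy as np

def nrbf_forward(x, centers, widths, weights):
    """Normalized RBF network output for a single input vector x."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    phi = phi / phi.sum()   # normalization step that defines an NRBF
    return phi @ weights    # linear output layer
```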

Neural Networks Training with Optimal Bounded Ellipsoid Algorithm

Jose de Jesus Rubio; Wen Yu

Compared to normal learning algorithms, for example backpropagation, the optimal bounded ellipsoid (OBE) algorithm has better properties, such as faster convergence, since it has a structure similar to the Kalman filter. OBE also has an advantage over Kalman filter training: the noise is not required to be Gaussian. In this paper the OBE algorithm is applied to train the weights of recurrent neural networks for nonlinear system identification. Both hidden layers and output layers can be updated. From a dynamic systems point of view, such training can be useful for all neural network applications requiring real-time updating of the weights. A simple simulation demonstrates the effectiveness of the suggested algorithm.

- Learning and Approximation | Pp. 1173-1182
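
An OBE-style update has the same algebraic shape as recursive least squares, which is the Kalman-filter-like structure the abstract refers to. The sketch below uses a fixed per-sample weight lam for a linear-in-parameters model; the paper's optimal choice of this weight (the step that makes the bounding ellipsoid "optimal") and the recurrent-network extension are not reproduced.

```python
import numpy as np

def obe_step(theta, P, x, y, lam=0.05):
    """One OBE-style update for a model y ≈ theta @ x with bounded noise."""
    e = y - theta @ x                                    # prediction error
    denom = (1.0 - lam) + lam * (x @ P @ x)
    P = (P - lam * np.outer(P @ x, P @ x) / denom) / (1.0 - lam)
    theta = theta + lam * (P @ x) * e                    # weighted RLS-like gain
    return theta, P
```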

Efficient Training of RBF Networks Via the BYY Automated Model Selection Learning Algorithms

Kai Huang; Le Wang; Jinwen Ma

Radial basis function (RBF) networks with Gaussian activation functions have been widely used in many applications due to their simplicity, robustness, and good approximation and generalization ability. However, training such an RBF network is still a rather difficult task in the general case, and the crucial problem is how to select the number and locations of the hidden units appropriately. In this paper, we utilize a new kind of Bayesian Ying-Yang (BYY) automated model selection (AMS) learning algorithm to select the appropriate number and initial locations of the hidden units, or Gaussians, automatically for an input data set. The experiments demonstrate that this BYY-AMS training method is quite efficient and considerably outperforms typical existing training methods on the training of RBF networks for both clustering analysis and nonlinear time series prediction.

- Learning and Approximation | Pp. 1183-1192
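
As an illustrative stand-in (not the BYY-AMS algorithm itself), automated selection of the number and initial locations of hidden units can be mimicked by fitting a deliberately over-complete Gaussian mixture and pruning components whose mixing weight collapses; the survivors then seed the RBF centers.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.default_rng(0).normal(size=(500, 2))  # placeholder input data
gmm = GaussianMixture(n_components=12, random_state=0).fit(X)  # too many on purpose
keep = gmm.weights_ > 0.02                          # prune near-empty components
centers = gmm.means_[keep]                          # surviving units seed RBF centers
print(f"selected {keep.sum()} hidden units")
```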

Unsupervised Image Categorization Using Constrained Entropy-Regularized Likelihood Learning with Pairwise Constraints

Zhiwu Lu; Xiaoqing Lu; Zhiyuan Ye

We usually identify the categories in image databases using clustering algorithms based on the visual features extracted from images. Due to the well-known gap between semantic features (e.g., categories) and visual features, the results of unsupervised image categorization may be quite disappointing. They can, of course, be improved by adding some extra semantic information. Pairwise constraints between some images are easy to provide, even when we have little prior knowledge about the image categories in a database. A semi-supervised learning algorithm is then proposed for unsupervised image categorization based on a Gaussian mixture model, incorporating such semantic information into entropy-regularized likelihood (ERL) learning; the algorithm can automatically detect the number of image categories in the database. The experiments further show that this algorithm can lead to promising results when applied to image categorization.

- Learning and Approximation | Pp. 1193-1200
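
One plausible shape for the constrained objective (an assumption for illustration, not the paper's exact formulation) is the ERL term plus a penalty that rewards must-link pairs (set M) for sharing a mixture component and punishes cannot-link pairs (set C) for doing so:

```latex
% p(k|x): posterior of component k for image x under the Gaussian mixture;
% gamma: entropy-regularization weight; lambda: constraint weight (assumed names).
\[
  \max_{\Theta}\;
  \sum_{n}\log p(x_n\mid\Theta)
  \;+\;\gamma\sum_{n}\sum_{k} p(k\mid x_n)\log p(k\mid x_n)
  \;+\;\lambda\Bigl[\sum_{(i,j)\in M}\sum_{k} p(k\mid x_i)\,p(k\mid x_j)
  \;-\;\sum_{(i,j)\in C}\sum_{k} p(k\mid x_i)\,p(k\mid x_j)\Bigr]
\]
```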