Publications catalog - books

Advances in Natural Computation: 1st International Conference, ICNC 2005, Changsha, China, August 27-29, 2005, Proceedings, Part I

Lipo Wang; Ke Chen; Yew Soon Ong (eds.)

Conference: 1st International Conference on Natural Computation (ICNC), Changsha, China, August 27-29, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Image Processing and Computer Vision; Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Pattern Recognition; Evolutionary Biology

Availability

Detected institution: Not detected
Publication year: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-28323-2

Electronic ISBN

978-3-540-31853-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

The Improved CMAC Model and Learning Result Analysis

Daqi Zhu; Min Kong; YonQing Yang

An improved online learning scheme for neural networks is proposed to speed up the learning process in cerebellar model articulation controllers (CMAC). The improved approach uses the learned times of the addressed hypercubes as the credibility (confidence) of the learned values: in the early learning stage, the update for an addressed hypercube is proportional to the inverse of the exponent of its learned times; in the later stage, it is proportional to the inverse of the learned times. With this idea, the learning speed can indeed be improved.

- Neural Network Learning Algorithms | Pp. 15-24
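
The abstract describes a concrete credit schedule, so a minimal sketch may help: the update for each addressed hypercube is scaled by 1/e^n early on and by 1/n later, where n is how many times that hypercube has been trained. The stage switch point, learning rate, and averaging over addressed cells below are assumptions; only the two credit schedules come from the abstract.

```python
import numpy as np

def cmac_credit_update(weights, counts, active_idx, target, alpha=1.0,
                       stage_switch=50):
    """One credit-weighted CMAC update over the addressed hypercubes.

    weights    : 1-D array of hypercube weights
    counts     : learned-times counter per hypercube (the "credibility")
    active_idx : indices of the hypercubes addressed by the current input
    target     : desired output for this input

    stage_switch and alpha are illustrative assumptions; the abstract
    only specifies the two credit schedules used below.
    """
    y = weights[active_idx].sum()          # CMAC output: sum of addressed cells
    error = target - y
    for i in active_idx:
        n = counts[i]
        if n < stage_switch:               # early stage: credit = 1 / e^n
            credit = 1.0 / np.exp(n)
        else:                              # later stage: credit = 1 / n
            credit = 1.0 / n
        weights[i] += alpha * credit * error / len(active_idx)
        counts[i] += 1
    return error
```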

A New Smooth Support Vector Regression Based on ε-Insensitive Logistic Loss Function

Yang Hui-zhong; Shao Xin-guang; Ding Feng

A new smooth support vector regression based on an ε-insensitive logistic loss function, L-SSVR for short, is proposed in this paper. It is similar to SSVR, but adds no heuristic smoothing parameters and uses a robust absolute loss. Taking advantage of L-SSVR, one can now treat SVM as linear programming and efficiently solve large-scale regression problems without any optimization packages. Details of the algorithm and its implementation are presented in this paper. Simulation results for both artificial and real data show remarkable improvements in generalization performance and training time.

- Neural Network Learning Algorithms | Pp. 25-32
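
Since the ε-insensitive logistic loss is the paper's central object, here is one plausible smooth form of it: each side of the hard loss max(0, |r| − ε) is replaced by a softplus, which is smooth everywhere and, consistent with the abstract's claim, introduces no extra smoothing parameter. The exact functional form is an assumption, not taken from the paper.

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def eps_insensitive_logistic_loss(r, eps=0.1):
    """One plausible smooth stand-in for the hard epsilon-insensitive
    loss max(0, |r| - eps): a softplus on each side of the tube.
    Near zero residual the loss is flat (insensitive); for large |r|
    it grows linearly, like a robust absolute loss."""
    return softplus(r - eps) + softplus(-r - eps)
```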

Neural Network Classifier Based on the Features of Multi-lead ECG

Mozhiwen; Feng Jun; Qiu Yazhu; Shu Lan

In this study, two methods for detecting the QRS waves of the electrocardiogram (ECG) were presented and compared. On the one hand, a modified version of the linear approximation distance thresholding (LADT) algorithm was studied, and the ECG features were extracted for later use. On the other hand, the Mexican-hat wavelet transform was adopted to detect the character points of the ECG. Part of the ECG features were used to train the RBF network, and all of them were then used to examine the performance of the network. The algorithms were tested on ECG signals from the MIT-BIH database. Compared with other tests, the results show that the detection ability of the Mexican-hat wavelet transform is very good owing to its quality of time-frequency representation: the ECG character points are represented by the local extrema of the transformed signals, and the correct rate of QRS detection rises to 99.9%. The classification performance is also good: the correct rate on trained waves is 100%, and on untrained waves 86.6%.

- Neural Network Learning Algorithms | Pp. 33-43
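
A rough sketch of the wavelet half of the method, using SciPy's Mexican-hat (Ricker) wavelet: QRS complexes are located at the large local extrema of the transformed signal, as the abstract describes. The scale, threshold, and refractory period below are illustrative choices (MIT-BIH records are sampled at 360 Hz), not values from the paper.

```python
import numpy as np
from scipy.signal import cwt, ricker, find_peaks

def detect_qrs(ecg, fs=360, width=10, min_rr=0.3):
    """Rough QRS detection from local extrema of a Mexican-hat CWT.

    fs, width and min_rr are illustrative assumptions. The paper's key
    observation is that ECG character points appear as local extrema of
    the transformed signal, which is what find_peaks locates here.
    """
    coeffs = cwt(ecg, ricker, widths=[width])[0]   # one Mexican-hat scale
    # QRS complexes show up as large-magnitude extrema; enforce a
    # refractory period of min_rr seconds between detections.
    peaks, _ = find_peaks(np.abs(coeffs),
                          height=3 * np.std(coeffs),
                          distance=int(min_rr * fs))
    return peaks
```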

A New Learning Algorithm for Diagonal Recurrent Neural Network

Deng Xiaolong; Xie Jianying; Guo Weizhong; Liu Jun

A new hybrid learning algorithm combining the extended Kalman filter (EKF) and the particle filter is presented, and is applied for the first time to train diagonal recurrent neural networks (DRNN). The EKF is used to train each DRNN, and the particle filter applies a resampling algorithm to optimize the particles, namely the DRNNs with their associated network weights. These methods shorten training and make the DRNN converge more quickly. Simulation results on nonlinear dynamical identification verify the validity of the new algorithm.

- Neural Network Learning Algorithms | Pp. 44-50
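
The hybrid scheme can be sketched as a particle filter wrapped around per-network EKF updates: each particle is one DRNN's weight vector, each takes an EKF step, and the population is resampled by likelihood. The EKF step itself is abstracted behind a callback here, and the Gaussian likelihood and multinomial resampling are standard particle filter choices rather than details from the paper.

```python
import numpy as np

def train_population(particles, ekf_step, x, y, noise_std=0.1, rng=None):
    """One round of the hybrid scheme sketched in the abstract.

    particles : list of DRNN weight vectors (one per particle)
    ekf_step  : assumed callback ekf_step(weights, x, y) ->
                (updated_weights, prediction) encapsulating the EKF
                training step for one network
    """
    rng = rng or np.random.default_rng()
    preds = []
    for i, w in enumerate(particles):
        particles[i], y_hat = ekf_step(w, x, y)
        preds.append(y_hat)
    # Importance weights from prediction error under a Gaussian model.
    errs = y - np.asarray(preds)
    logw = -0.5 * (errs / noise_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Multinomial resampling: networks that predict well are duplicated,
    # poor ones die out.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i].copy() for i in idx]
```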

Locally Determining the Number of Neighbors in the k-Nearest Neighbor Rule Based on Statistical Confidence

Jigang Wang; Predrag Neskovic; Leon N. Cooper

The k-nearest neighbor rule is one of the most attractive pattern classification algorithms. In practice, the value of k is usually determined by the cross-validation method. In this work, we propose a new method that locally determines the number of nearest neighbors based on the concept of statistical confidence. We define the confidence associated with decisions that are made by the majority rule from a finite number of observations and use it as a criterion to determine the number of nearest neighbors needed. The new algorithm is tested on several real-world datasets and yields results comparable to those obtained by the k-nearest neighbor rule. In contrast to the k-nearest neighbor rule, which uses a fixed number of nearest neighbors throughout the feature space, our method locally adjusts the number of neighbors until a satisfactory level of confidence is reached. In addition, the statistical confidence provides a natural way to balance the trade-off between the reject rate and the error rate by excluding patterns that have low confidence levels.

- Neural Network Learning Algorithms | Pp. 71-80
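
The idea lends itself to a short sketch: grow the neighborhood until the majority vote clears a statistical confidence threshold, and reject if it never does. The binomial test against chance voting (p = 0.5, so two classes assumed) is an illustrative stand-in for the paper's own confidence definition.

```python
import numpy as np
from scipy.stats import binomtest

def confident_knn(x, X_train, y_train, level=0.95, k_max=50):
    """Classify x by growing k until the majority vote is statistically
    confident; returns (label, k_used), or (None, k_max) as a reject.

    The confidence criterion below (one-sided binomial test against
    chance voting over two classes) is an assumption standing in for
    the paper's definition.
    """
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    for k in range(3, k_max + 1, 2):          # odd k avoids vote ties
        votes = y_train[order[:k]]
        labels, counts = np.unique(votes, return_counts=True)
        m = int(counts.max())
        # Small P(majority >= m | random voting) => confident decision.
        if binomtest(m, k, 0.5, alternative='greater').pvalue < 1 - level:
            return labels[counts.argmax()], k
    return None, k_max                        # low confidence: reject
```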

Fuzzy Self-Organizing Map Neural Network Using Kernel PCA and the Application

Qiang Lv; Jin-shou Yu

A fuzzy self-organizing map neural network using kernel principal component analysis is presented, and a two-stage hybrid learning algorithm (KPCA-FSOM) is proposed to train this network. In the first stage, the KPCA algorithm is applied to extract the features of nonlinear data. In the second stage, combining fuzzy theory and a locally-weighted distortion index to extend the basic SOM algorithm, the fuzzy SOM algorithm is presented to train the SOM network on the extracted features. A real-life application of the KPCA-FSOM algorithm to classifying data from an acrylonitrile reactor is provided. The experimental results show that this algorithm obtains better clustering and that the trained network can more effectively monitor yields.

- Neural Network Learning Algorithms | Pp. 81-90
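
A two-stage sketch of the KPCA-FSOM pipeline: kernel PCA extracts nonlinear features, then a SOM is trained on them. The SOM below uses the plain Kohonen update; the paper's fuzzy membership and locally-weighted distortion index are its own contributions and are not reproduced here. Grid size, kernel, and schedules are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_som(X, grid=(5, 5), n_components=3, epochs=20,
             lr0=0.5, sigma0=2.0, rng=None):
    """Stage 1: nonlinear feature extraction with kernel PCA.
    Stage 2: train a SOM on the extracted features (plain Kohonen
    update here, not the paper's fuzzy variant)."""
    rng = rng or np.random.default_rng(0)
    Z = KernelPCA(n_components=n_components, kernel='rbf').fit_transform(X)
    rows, cols = grid
    W = rng.normal(size=(rows * cols, n_components))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighborhood
        for z in rng.permutation(Z):
            bmu = np.argmin(((W - z) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))      # neighborhood kernel
            W += lr * h[:, None] * (z - W)
    return W, Z
```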

Self-organized Locally Linear Embedding for Nonlinear Dimensionality Reduction

Jian Xiao; Zongtan Zhou; Dewen Hu; Junsong Yin; Shuang Chen

Locally Linear Embedding (LLE) is an efficient nonlinear algorithm for mapping high-dimensional data to a low-dimensional observed space. However, the algorithm is sensitive to several parameters that must be set by hand, and the resulting maps may be invalid in the presence of noise. In this paper, the original LLE algorithm is improved by introducing the self-organizing features of DGSOM, a novel SOM model we proposed recently, to overcome these shortcomings. In the improved algorithm, nearest neighbors are selected automatically according to the topology connections derived from DGSOM. The proposed algorithm can also estimate the intrinsic dimensionality of the manifold and eliminate noise simultaneously. All these advantages are illustrated with extensive experiments and simulations.

- Neural Network Learning Algorithms | Pp. 101-109
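
For reference, the baseline the paper improves on: standard LLE with a hand-set neighbor count. The paper's contribution is to replace the fixed n_neighbors below with neighbors read off the topology connections of its DGSOM model, so the parameter no longer has to be set by hand.

```python
from sklearn.manifold import LocallyLinearEmbedding

def baseline_lle(X, n_neighbors=10, n_components=2):
    """Plain LLE: n_neighbors is the manually tuned parameter that the
    paper's DGSOM-based neighbor selection eliminates."""
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components)
    return lle.fit_transform(X)
```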

Active Learning for Probabilistic Neural Networks

Bülent Bolat; Tülay Yıldırım

In many neural network applications, selecting the best training set to represent the entire sample space is one of the most important problems. Active learning algorithms in the literature are not appropriate for Probabilistic Neural Networks (PNN). In this paper, a new active learning method is proposed for PNN and applied to several benchmark problems.

- Neural Network Learning Algorithms | Pp. 110-118

Adaptive Training of Radial Basis Function Networks Using Particle Swarm Optimization Algorithm

Hongkai Ding; Yunshi Xiao; Jiguang Yue

A novel methodology is proposed to determine the optimal number of centers and the network parameters simultaneously, based on a Particle Swarm Optimization (PSO) algorithm with matrix encoding. To tackle the structure-matching problem, a random structure-updating rule is employed to determine the current structure at each epoch. The effectiveness of the method is illustrated on a nonlinear system identification problem.

- Neural Network Learning Algorithms | Pp. 119-128
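
A sketch of the PSO half under stated assumptions: particles are matrix-encoded center positions, fitness is the fit error of an RBF network whose output weights are solved by least squares, and the structure (number of centers) is held fixed for brevity, whereas the paper also searches it with a random structure-updating rule.

```python
import numpy as np

def rbf_fitness(centers, X, y, width=1.0):
    """Fit error of an RBF net with the given centers: Gaussian hidden
    activations, output weights solved by least squares."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return ((H @ w - y) ** 2).mean()

def pso_rbf(X, y, n_centers=8, n_particles=20, iters=100,
            w_in=0.7, c1=1.5, c2=1.5, rng=None):
    """Plain PSO over matrix-encoded center positions. The number of
    centers is fixed here; the paper additionally adapts the structure
    at each epoch. Hyperparameters are conventional PSO defaults."""
    rng = rng or np.random.default_rng(0)
    dim = X.shape[1]
    P = rng.uniform(X.min(0), X.max(0), size=(n_particles, n_centers, dim))
    V = np.zeros_like(P)
    pbest = P.copy()
    pbest_f = np.array([rbf_fitness(p, X, y) for p in P])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(P.shape), rng.random(P.shape)
        # Velocity update: inertia + pull toward personal and global bests.
        V = w_in * V + c1 * r1 * (pbest - P) + c2 * r2 * (g - P)
        P = P + V
        f = np.array([rbf_fitness(p, X, y) for p in P])
        better = f < pbest_f
        pbest[better], pbest_f[better] = P[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g
```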

A Game-Theoretic Approach to Competitive Learning in Self-Organizing Maps

Joseph Herbert; JingTao Yao

Self-Organizing Maps (SOM) are a powerful tool for clustering and discovering patterns in data. Competitive learning in the SOM training process focuses on finding the neuron most similar to an input vector. Since updating a neuron benefits only part of the feature map, it can be thought of as a local optimization problem. Moving from a local optimization model to a global one requires game-theoretic techniques to analyze the SOM as a whole. A new algorithm, GTSOM, is introduced that takes cluster quality measurements into account and dynamically modifies learning rates to ensure improved quality through successive iterations.

- Neural Network Learning Algorithms | Pp. 129-138