Publications catalog - books

Advances in Neural Networks: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007, Proceedings, Part II

Derong Liu; Shumin Fei; Zengguang Hou; Huaguang Zhang; Changyin Sun (eds.)

Conference: 4th International Symposium on Neural Networks (ISNN), Nanjing, China, June 3-7, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Computer Communication Networks; Algorithm Analysis and Problem Complexity; Discrete Mathematics in Computer Science; Pattern Recognition

Availability

Publication year: 2007. Available online via SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-72392-9

Electronic ISBN

978-3-540-72393-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2007

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Stock Prediction Using FCMAC-BYY

Jiacai Fu; Kok Siong Lum; Minh Nhut Nguyen; Juan Shi

The increasing reliance on Computational Intelligence applications to predict stock market positions has resulted in extensive research on financial forecasting and trading-trend identification. Stock market price prediction applications must adapt to new incoming data and learn quickly, given the volatile nature of market movements. This paper analyses stock market price prediction based on a Fuzzy Cerebellar Model Articulation Controller – Bayesian Ying Yang (FCMAC-BYY) neural network. The model is motivated by the ancient Chinese Ying-Yang philosophy, which states that everything in the universe can be viewed as a product of a constant conflict between opposites, Ying and Yang; a perfect status is reached when Ying and Yang achieve harmony. Experiments on real stock market data (Singapore Airlines Ltd – SIA) from the Singapore Stock Exchange (SGX) and on the Ibex35 stock index show the effectiveness of FCMAC-BYY in universal approximation and prediction.

- Neural Fuzzy Systems | Pp. 346-351
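
The abstract leaves the FCMAC-BYY architecture unspecified, so the following minimal sketch illustrates only the prediction setup it describes: adaptive one-step-ahead forecasting over a sliding window of past prices. A normalized-LMS linear learner and a synthetic price path stand in for the network and for the SGX/Ibex35 data.

```python
# Minimal sketch of adaptive one-step-ahead price prediction.
# The online NLMS learner is a stand-in for FCMAC-BYY, whose
# internals the abstract does not specify; prices are synthetic.
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100.0    # synthetic price path

window, mu = 5, 0.5                                  # taps and NLMS step size
w = np.zeros(window)
errors = []
for t in range(window, len(prices)):
    x = prices[t - window:t]                         # sliding window of past prices
    e = prices[t] - w @ x                            # one-step-ahead prediction error
    w += mu * e * x / (x @ x + 1e-8)                 # normalized LMS update
    errors.append(e)

print("RMSE, last 100 steps:", np.sqrt(np.mean(np.square(errors[-100:]))))
```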

A Hybrid Rule Extraction Method Using Rough Sets and Neural Networks

Shufeng Wang; Gengfeng Wu; Jianguo Pan

Rough sets and neural networks are two techniques commonly applied to rule extraction from decision tables. Integrating the advantages of the two approaches, this paper presents a Hybrid Rule Extraction Method (HREM) using rough sets and neural networks. In the HREM, rule extraction is mainly done with rough sets, while neural networks serve only as a tool to reduce the decision table and filter its noise before the final knowledge (rule sets) is generated from the reduced decision table by rough sets. The HREM therefore avoids the difficulty of extracting rules from a trained neural network and possesses the robustness that rough-sets-based approaches lack. The effectiveness of HREM is verified by comparing experimental results with traditional rough-sets and neural-network approaches.

- Neural Fuzzy Systems | Pp. 352-361
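
A toy version of the rough-set half of HREM can be shown on a small decision table: rows are grouped into indiscernibility classes over the condition attributes, and consistent classes yield certain rules. The neural-network stage (reducing the table and filtering noise) is the paper's own contribution and is not reproduced here; the attributes below are invented for illustration.

```python
# Rough-set style certain-rule extraction from a toy decision table.
from collections import defaultdict

# decision table: (outlook, windy) -> play
table = [
    (("sunny", "no"),  "yes"),
    (("sunny", "yes"), "no"),
    (("rainy", "yes"), "no"),
    (("sunny", "no"),  "yes"),
    (("rainy", "no"),  "yes"),
]

classes = defaultdict(list)          # indiscernibility classes over conditions
for cond, dec in table:
    classes[cond].append(dec)

for cond, decisions in classes.items():
    if len(set(decisions)) == 1:     # consistent class -> certain rule
        print(f"IF outlook={cond[0]} AND windy={cond[1]} THEN play={decisions[0]}")
    else:                            # inconsistent class -> boundary region
        print(f"IF outlook={cond[0]} AND windy={cond[1]} -> boundary {set(decisions)}")
```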

Neural Network Training Using Genetic Algorithm with a Novel Binary Encoding

Yong Liang; Kwong-Sak Leung; Zong-Ben Xu

Genetic algorithms (GAs) are widely used to train the parameters of neural networks (NNs). In this paper, we investigate GAs based on our proposed novel genetic representation for training NN parameters. A splicing/decomposable (S/D) binary encoding is designed based on theoretical guidance and existing recommendations. Our theoretical and empirical investigations reveal that the S/D binary representation is better suited to GA search than other existing binary encodings. Moreover, a new genotypic distance on the S/D binary space is equivalent to the Euclidean distance on the real-valued space during GA convergence. Therefore, GAs can reliably and predictably solve problems of bounded complexity, and methods that depend on the Euclidean distance for solving different kinds of optimization problems can be used directly on the S/D binary space. This investigation demonstrates that GAs based on our proposed binary representation can efficiently and effectively train the parameters of NNs.

- Training and Learning Algorithms for Neural Networks | Pp. 371-380
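
The S/D encoding itself is not detailed in the abstract, so this sketch uses an ordinary fixed-point binary encoding as a stand-in to show the overall scheme: bit chromosomes are decoded to the real-valued weights of a small 2-2-1 network, and a plain GA (truncation selection, one-point crossover, bit-flip mutation) trains it on XOR.

```python
# GA-based NN training with a plain fixed-point binary encoding
# (a stand-in for the paper's splicing/decomposable encoding).
import numpy as np

rng = np.random.default_rng(1)
BITS, LO, HI = 12, -4.0, 4.0           # bits per weight, weight range
N_W = 9                                # weights + biases of a 2-2-1 network
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([0.0, 1.0, 1.0, 0.0])     # XOR targets

def decode(chrom):                     # fixed-point bits -> reals in [LO, HI]
    vals = chrom.reshape(N_W, BITS) @ (2.0 ** np.arange(BITS)[::-1])
    return LO + (HI - LO) * vals / (2 ** BITS - 1)

def fitness(chrom):                    # negative MSE of the decoded network
    w = decode(chrom)
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1.T + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return -np.mean((out - Y) ** 2)

pop = rng.integers(0, 2, (60, N_W * BITS))
for gen in range(300):
    fit = np.array([fitness(c) for c in pop])
    pop = pop[np.argsort(fit)[::-1]][:30]            # truncation selection
    kids = pop.copy()
    for k in range(0, 30, 2):                        # one-point crossover
        c = rng.integers(1, N_W * BITS)
        kids[k, c:], kids[k + 1, c:] = pop[k + 1, c:], pop[k, c:]
    kids ^= (rng.random(kids.shape) < 0.01).astype(kids.dtype)  # mutation
    pop = np.vstack([pop, kids])

print("best MSE:", -max(fitness(c) for c in pop))
```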

Adaptive Training of a Kernel-Based Representative and Discriminative Nonlinear Classifier

Benyong Liu; Jing Zhang; Xiaowei Chen

Adaptive training of a classifier is necessary when feature selection and sparse representation are considered. Previously, we proposed a kernel-based nonlinear classifier for simultaneous representation and discrimination of pattern features. Its batch training has a closed-form solution. In this paper we implement an adaptive training algorithm using an incremental learning procedure that exactly retains the generalization ability of batch training. It naturally yields a sparse representation. The feasibility of the presented methods is illustrated by experimental results on handwritten digit classification.

- Training and Learning Algorithms for Neural Networks | Pp. 381-390
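
A minimal sketch of the batch/incremental equivalence the abstract claims, using kernel ridge regression as a stand-in for the paper's representative-and-discriminative classifier: the inverse regularized Gram matrix is grown one sample at a time by a block-matrix (Schur complement) update, and the online coefficients match the batch closed form to machine precision.

```python
# Incremental kernel ridge regression reproducing the batch
# closed form exactly (kernel and data are invented for illustration).
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
lam = 0.1
k = lambda a, b: np.exp(-np.sum((a - b) ** 2))        # RBF kernel

# batch closed form: alpha = (K + lam*I)^-1 y
K = np.array([[k(a, b) for b in X] for a in X])
alpha_batch = np.linalg.solve(K + lam * np.eye(len(X)), y)

# incremental: grow P = (K + lam*I)^-1 one sample at a time
P = np.array([[1.0 / (k(X[0], X[0]) + lam)]])
for n in range(1, len(X)):
    kv = np.array([k(X[i], X[n]) for i in range(n)])  # kernels vs. seen samples
    Pk = P @ kv
    s = k(X[n], X[n]) + lam - kv @ Pk                 # Schur complement
    P = np.block([[P + np.outer(Pk, Pk) / s, -Pk[:, None] / s],
                  [-Pk[None, :] / s,         np.array([[1.0 / s]])]])

alpha_inc = P @ y
print("max |batch - incremental| gap:", np.abs(alpha_batch - alpha_inc).max())
```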

Indirect Training of Grey-Box Models: Application to a Bioprocess

Francisco Cruz; Gonzalo Acuña; Francisco Cubillos; Vicente Moreno; Danilo Bassi

Grey-box neural models mix differential equations, which act as white boxes, with neural networks, used as black boxes. The purpose of the present work is to show the training of a grey-box model by means of indirect backpropagation and Levenberg-Marquardt in Matlab, extending the black-box neural model to fit the discretized equations of the phenomenological model. The resulting grey-box model is tested as an estimator of a state variable of a biotechnological batch fermentation process on solid substrate, with good results.

- Training and Learning Algorithms for Neural Networks | Pp. 391-397
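
A minimal sketch of the grey-box idea under simplifying assumptions: an Euler-discretized mass balance acts as the white box, a tiny network stands in for the unknown growth kinetics, and training is "indirect" because the loss is measured on the simulated state rather than on the network output. Finite-difference gradients replace the paper's backpropagation/Levenberg-Marquardt in Matlab, and the fermentation model here is invented for illustration.

```python
# Indirect training of a grey-box model: loss on the ODE state.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.1, 60                        # Euler step and horizon
S0, X0 = 5.0, 0.1                      # initial substrate and biomass

def mu_true(S):                        # "true" kinetics, used only to fake data
    return 0.4 * S / (1.0 + S)

def simulate(mu_fn):                   # white box: discretized mass balances
    S, Xb, traj = S0, X0, []
    for _ in range(T):
        growth = mu_fn(S) * Xb
        Xb = Xb + dt * growth
        S = max(S - dt * 0.5 * growth, 0.0)
        traj.append(Xb)
    return np.array(traj)

data = simulate(mu_true)               # synthetic biomass measurements

def mu_net(S, p):                      # black box: 1-3-1 net for the kinetics
    w1, b1, w2, b2 = p[:3], p[3:6], p[6:9], p[9]
    return float(np.tanh(S * w1 + b1) @ w2 + b2)

loss = lambda p: np.mean((simulate(lambda S: mu_net(S, p)) - data) ** 2)

p = 0.1 * rng.normal(size=10)
for step in range(300):                # finite differences instead of backprop
    g = np.array([(loss(p + 1e-4 * e) - loss(p - 1e-4 * e)) / 2e-4
                  for e in np.eye(10)])
    p -= 0.3 * g

print("trajectory MSE after training:", loss(p))
```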

FNN (Feedforward Neural Network) Training Method Based on Robust Recursive Least Square Method

JunSeok Lim; KoengMo Sung

We present a robust recursive least squares algorithm for multilayer feed-forward neural network training. Recursive least squares (RLS) has been applied successfully to training multilayer feed-forward neural networks; however, RLS tends to diverge due to instability in the recursive inversion procedure. In this paper, we propose a numerically robust recursive least squares-type algorithm using prewhitening. The proposed algorithm improves the performance of RLS in infinite numerical precision as well as in finite numerical precision. Computer simulation results across the various precision cases show that the proposed algorithm improves the numerical robustness of RLS training.

- Training and Learning Algorithms for Neural Networks | Pp. 398-405
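
For reference, the conventional RLS recursion that the paper hardens is shown below for a linear-in-the-parameters layer; the divergence risk lies in the P-matrix update, and the paper's prewhitening remedy is not reproduced here.

```python
# Standard RLS recursion on a toy linear identification problem.
import numpy as np

rng = np.random.default_rng(4)
d, lam = 8, 0.99                       # taps, forgetting factor
w_true = rng.normal(size=d)
w = np.zeros(d)
P = 1e3 * np.eye(d)                    # inverse-correlation estimate

for _ in range(2000):
    x = rng.normal(size=d)
    y = w_true @ x + 0.01 * rng.normal()
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    w += k * (y - w @ x)               # a-priori error update
    P = (P - np.outer(k, Px)) / lam    # recursive inversion step (fragile part)

print("weight error:", np.linalg.norm(w - w_true))
```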

Learning Bayesian Networks Based on a Mutual Information Scoring Function and EMI Method

Fengzhan Tian; Haisheng Li; Zhihai Wang; Jian Yu

At present, most algorithms for learning Bayesian networks (BNs) use the EM algorithm to deal with incomplete data. They are inefficient because the EM algorithm must run an iterative probabilistic-reasoning process to complete the incomplete data. In this paper we present an efficient BN learning algorithm that combines the EMI method with a scoring function based on mutual information theory. The algorithm first uses the EMI method to estimate, from incomplete data, probability distributions over local structures of BNs, then evaluates BN structures with the scoring function and searches for the best one. The detailed procedure of the algorithm is described in the paper. Experimental results on the Asia and Alarm networks show that, while achieving high accuracy, the algorithm is much more efficient than two EM-based algorithms, the SEM and EM-EA algorithms.

- Training and Learning Algorithms for Neural Networks | Pp. 414-423
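
A sketch of the scoring side only, under the assumption of complete data: empirical mutual information between discrete variables ranks candidate edges for a score-and-search learner. The EMI method, which estimates these distributions from incomplete data without EM's iterations, is the paper's contribution and is not reproduced.

```python
# Empirical mutual information as an edge-ranking score on toy data.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n = 2000
A = rng.integers(0, 2, n)
B = (A ^ (rng.random(n) < 0.1)).astype(int)   # B depends on A (10% noise)
C = rng.integers(0, 2, n)                     # C is independent

def mutual_info(x, y):
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

data = {"A": A, "B": B, "C": C}
for u, v in combinations(data, 2):            # high-MI pairs (here A-B) are
    print(f"I({u};{v}) = {mutual_info(data[u], data[v]):.4f}")  # edge candidates
```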

Learning Dynamic Bayesian Networks Structure Based on Bayesian Optimization Algorithm

Song Gao; Qinkun Xiao; Quan Pan; Qingguo Li

An optimization algorithm for dynamic Bayesian networks (DBNs) based on the Bayesian optimization algorithm (BOA) is developed for learning and constructing the DBN structure. In this paper, we first introduce some basic theories and concepts of probability-model evolutionary algorithms. We then describe the basic model for constructing the DBN graph and the mechanism of DBN structure learning based on BOA. DBN structure learning based on BOA consists of two parts: the first part obtains the structure and parameters of the DBN from good solutions, and the second part produces new populations according to the obtained DBN structure. In this paper, DBN learning is achieved by a genetic algorithm based on a greedy mechanism, and DBN inference is performed by a forward-simulation algorithm. Simulation results are provided to demonstrate the effectiveness of the proposed algorithm.

- Training and Learning Algorithms for Neural Networks | Pp. 424-431
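
The two-part loop the abstract describes (fit a probabilistic model to the good solutions, then sample a new population from it) can be shown in miniature. For brevity this sketch fits independent bit marginals (UMDA-style) on a OneMax toy problem, whereas BOA, and the paper's extension of it, fit a full Bayesian network as the model.

```python
# UMDA-style miniature of the BOA loop: fit model, sample new group.
import numpy as np

rng = np.random.default_rng(6)
L, POP, ELITE = 30, 100, 30
onemax = lambda pop: pop.sum(axis=1)               # toy fitness

pop = rng.integers(0, 2, (POP, L))
for gen in range(50):
    best = pop[np.argsort(onemax(pop))[-ELITE:]]       # keep good solutions
    probs = best.mean(axis=0).clip(0.05, 0.95)         # part 1: fit model
    pop = (rng.random((POP, L)) < probs).astype(int)   # part 2: sample new group

print("best fitness:", onemax(pop).max(), "of", L)
```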

Stock Index Prediction Based on Adaptive Training and Pruning Algorithm

Jinyuan Shen; Huaiyu Fan; Shengjiang Chang

A tapped delay neural network (TDNN) with an adaptive learning and pruning algorithm is proposed to predict nonlinear stock-index time series. The TDNN is trained by recursive least squares (RLS), in which the learning-rate parameter is chosen automatically, so the network converges quickly. The architecture of the trained network is then optimized with a pruning algorithm to reduce computational complexity and enhance generalization, and the optimized network is retrained to obtain optimal parameters. Finally, the test samples are predicted with the resulting network. Simulations and comparisons show that the optimized network model not only greatly reduces computational complexity but also improves prediction precision. In our simulation, the computational complexity is reduced to 0.0556 and the mean square error on the test samples reaches 8.7961×10.

- Training and Learning Algorithms for Neural Networks | Pp. 457-464
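
A much-simplified sketch of the train/prune/retrain loop: a tapped delay line feeds a linear least-squares predictor (standing in for the TDNN and its adaptive-rate RLS), small-magnitude taps are pruned, and the survivors are retrained. The 10% pruning threshold and the noisy sinusoid are arbitrary choices for illustration.

```python
# Train -> prune -> retrain on a tapped delay line (toy series).
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(600, dtype=float)
s = np.sin(0.07 * t) + 0.05 * rng.normal(size=t.size)   # toy "index" series

D = 12                                                  # delay-line length
X = np.array([s[n:n + D] for n in range(len(s) - D)])   # delayed inputs
y = s[D:]                                               # one-step-ahead target

w = np.linalg.lstsq(X, y, rcond=None)[0]                # initial training
mse0 = np.mean((X @ w - y) ** 2)

keep = np.abs(w) > 0.1 * np.abs(w).max()                # prune small taps
w2 = np.zeros(D)
w2[keep] = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]  # retrain survivors
mse1 = np.mean((X @ w2 - y) ** 2)

print(f"taps {D} -> {keep.sum()}, MSE {mse0:.2e} -> {mse1:.2e}")
```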

An Improved Algorithm for Elman Neural Network by Adding a Modified Error Function

Zhiqiang Zhang; Zheng Tang; GuoFeng Tang; Vairappan Catherine; XuGang Wang; RunQun Xiong

The Elman neural network has been widely used in various fields, ranging from a temporal version of the Exclusive-OR function to the discovery of syntactic categories in natural language data. However, one problem often associated with this type of network is the local minima problem, which usually occurs during learning. To solve this problem, we propose an error function that harmonizes the updates of the weights connected to the hidden layer and those connected to the output layer by adding one term to the conventional error function, thereby avoiding the local minima problem caused by this disharmony. We applied this method to Boolean Series Prediction Question problems to demonstrate its validity. The results show that the proposed method avoids the local minima problem, greatly accelerates convergence, and achieves good results on the prediction tasks.

- Training and Learning Algorithms for Neural Networks | Pp. 465-473
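
The abstract does not state the added term, so the penalty in this sketch (mean squared hidden activation, weighted by beta) is purely a placeholder illustrating a loss of the form E = MSE + beta·(extra term) on an Elman-style network; the Boolean task and the finite-difference training are likewise simplifications of the paper's setup.

```python
# Elman-style network with a modified (augmented) error function.
import numpy as np

rng = np.random.default_rng(8)
seq = rng.integers(0, 2, 120)
target = np.concatenate(([seq[0]], seq[1:] ^ seq[:-1]))   # temporal-XOR targets

H = 4                                  # hidden/context units

def unpack(p):
    Wx = p[:H]
    Wh = p[H:H + H * H].reshape(H, H)
    b = p[H + H * H:2 * H + H * H]
    wo = p[2 * H + H * H:3 * H + H * H]
    return Wx, Wh, b, wo, p[-1]

def loss(p, beta=0.01):
    Wx, Wh, b, wo, bo = unpack(p)
    h = np.zeros(H)
    mse = extra = 0.0
    for x, t in zip(seq, target):
        h = np.tanh(Wx * x + Wh @ h + b)     # context = previous hidden state
        y = 1.0 / (1.0 + np.exp(-(wo @ h + bo)))
        mse += (y - t) ** 2
        extra += np.mean(h ** 2)             # placeholder for the added term
    return (mse + beta * extra) / len(seq)   # E = MSE + beta * extra

p = 0.5 * rng.normal(size=3 * H + H * H + 1)
for step in range(150):                      # finite-difference training
    g = np.array([(loss(p + 1e-4 * e) - loss(p - 1e-4 * e)) / 2e-4
                  for e in np.eye(p.size)])
    p -= 1.0 * g

print("final loss:", loss(p))
```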