Publications catalog - books

Advances in Natural Computation: 1st International Conference, ICNC 2005, Changsha, China, August 27-29, 2005, Proceedings, Part I

Lipo Wang ; Ke Chen ; Yew Soon Ong (eds.)

Conference: 1st International Conference on Natural Computation (ICNC), Changsha, China, August 27-29, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Image Processing and Computer Vision; Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Pattern Recognition; Evolutionary Biology

Availability

Detected institution: not detected
Year of publication: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-28323-2

Electronic ISBN

978-3-540-31853-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Short-Term Load Forecasting Based on Self-organizing Map and Support Vector Machine

Zhejing Bao; Daoying Pi; Youxian Sun

An approach to short-term load forecasting that combines the self-organizing map (SOM) and the support vector machine (SVM) is proposed. First, historical load data of the same type are clustered using a SOM; then the 48 daily load points are each predicted vertically with an SVM. The clustering takes factors such as date type, weather conditions, and time delay into account. The influence of the kernel function and the SVM parameters on load forecasting is discussed, and the performance of SOM-SVM is compared with that of a pure SVM. It is also shown that the usual smoothing techniques applied in preprocessing are not suitable for vertical forecasting. Finally, the approach is tested on data from the EUNITE network, and the results show that it runs quickly and with good accuracy.

- Statistical Neural Network Models and Support Vector Machines | Pp. 688-691
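
A minimal sketch of the SOM-then-SVM pipeline the abstract describes, under assumptions: a tiny hand-rolled 1-D SOM clusters synthetic daily profiles (standing in for the EUNITE data and the paper's date/weather/delay features), and one RBF-kernel SVR per cluster predicts a given time slot "vertically" from the preceding profile in the same cluster.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def train_som(X, n_units=4, epochs=50, lr=0.5):
    """Tiny 1-D self-organizing map; returns the unit weight vectors."""
    W = X[rng.choice(len(X), n_units, replace=False)].copy()
    for t in range(epochs):
        radius = max(1.0, n_units / 2 * (1 - t / epochs))
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * radius ** 2))
            W += lr * (1 - t / epochs) * h[:, None] * (x - W)
    return W

# 200 synthetic daily profiles with 48 half-hourly load points each.
days = np.sin(np.linspace(0, 2 * np.pi, 48)) * rng.uniform(0.5, 1.5, (200, 1)) \
       + rng.normal(0, 0.1, (200, 48))
som = train_som(days)
cluster = np.array([np.argmin(((som - d) ** 2).sum(axis=1)) for d in days])

# "Vertical" forecasting: one SVR per cluster and per time slot; slot 0 is
# shown, predicted from the preceding profile's value at the same slot.
models = {}
for c in np.unique(cluster):
    rows = days[cluster == c]
    if len(rows) > 2:
        models[c] = SVR(kernel="rbf", C=10.0).fit(rows[:-1, :1], rows[1:, 0])

for c, m in models.items():
    print(f"cluster {c}: predicted slot-0 load {m.predict(days[:1, :1])[0]:.3f}")
```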

A Multi-class Classifying Algorithm Based on Nonlinear Dimensionality Reduction and Support Vector Machines

Lukui Shi; Qing Wu; Xueqin Shen; Pilian He

Many pattern classification problems involve some form of dimensionality reduction. ISOMAP is a representative nonlinear dimensionality reduction algorithm that can discover low-dimensional manifolds in high-dimensional data. To speed up ISOMAP and reduce its dependence on the neighborhood size, we propose an improved algorithm that automatically selects a proper neighborhood size and an appropriate landmark set according to a stress function. An efficient multi-class classifier is obtained by combining the improved ISOMAP with SVM. Experiments show that the proposed classifier is effective for fingerprint classification.

- Statistical Neural Network Models and Support Vector Machines | Pp. 692-695
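
As a hedged illustration of the combination the abstract describes, the sketch below chains scikit-learn's stock Isomap into a multi-class SVM; the authors' improvements (automatic neighborhood-size and landmark selection via a stress function) are not reproduced, and the digits dataset stands in for fingerprints.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# nonlinear dimensionality reduction followed by a multi-class RBF SVM
clf = make_pipeline(Isomap(n_neighbors=10, n_components=20),
                    SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```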

A VSC Scheme for Linear MIMO Systems Based on SVM

Zhang Yibo; Yang Chunjie; Pi Daoying; Sun Youxian

A variable structure control (VSC) scheme for linear MIMO systems based on the support vector machine (SVM) is developed. After analyzing the characteristics of linear MIMO systems, a VSC scheme based on the exponential reaching law is adopted to track the desired trajectory. One input of the system is then trained as the output of an SVM, while the sliding-mode function, differences, and the other inputs of the system serve as the SVM's inputs. A VSC input of the black-box system can thus be obtained directly from the trained SVM once the other inputs are selected manually, and identification of the system parameters is avoided. A linear MIMO system is used to validate the scheme, and simulation results show that it achieves high identification precision and fast training.

- Statistical Neural Network Models and Support Vector Machines | Pp. 696-699
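
The paper's MIMO formulation is not given in the abstract, so the sketch below illustrates the idea on an assumed second-order single-input system: control data are generated by a VSC law with the exponential reaching law s' = -eps*sign(s) - k*s, and an SVR is then trained to reproduce the control input from the sliding variable and the states.

```python
import numpy as np
from sklearn.svm import SVR

# assumed second-order SISO plant, not the paper's example
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0])
Cs = np.array([1.0, 1.0])               # sliding surface s = Cs @ x
eps, k, dt = 0.5, 2.0, 0.01

def vsc_u(x):
    # equivalent control plus the exponential reaching law
    s = Cs @ x
    return (-Cs @ A @ x - eps * np.sign(s) - k * s) / (Cs @ B)

# roll the controlled system out and log (features, control) pairs
x = np.array([1.0, -0.5])
feats, us = [], []
for _ in range(2000):
    u = vsc_u(x)
    feats.append([Cs @ x, x[0], x[1]])
    us.append(u)
    x = x + dt * (A @ x + B * u)

# train the SVM to reproduce the VSC input from sliding variable and states
svm = SVR(kernel="rbf", C=100.0).fit(np.array(feats), np.array(us))
print("SVM vs. analytic control:", svm.predict([feats[-1]])[0], us[-1])
```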

Global Convergence of FastICA: Theoretical Analysis and Practical Considerations

Gang Wang; Xin Xu; Dewen Hu

FastICA is a popular algorithm for independent component analysis (ICA) based on negentropy; however, its convergence has not been studied comprehensively. This paper provides a global convergence analysis of FastICA together with some practical considerations for its implementation. First, the complete set of equilibria of the iteration is derived. The global convergence property is then established for the 2-channel system with the cubic nonlinearity, and the results can be generalized to the multi-channel case. In addition, two practical considerations, namely the convergence threshold for the demixing matrix and the independence restriction on the sources, are evaluated, and their influence on the separation solutions is illustrated.

- Statistical Neural Network Models and Support Vector Machines | Pp. 700-705
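
For concreteness, here is a small sketch of the one-unit FastICA iteration with the cubic nonlinearity the paper analyzes, run on an assumed synthetic 2-channel mixture; the stopping rule shows the convergence threshold on the demixing vector mentioned as a practical consideration.

```python
import numpy as np

rng = np.random.default_rng(1)
S = np.vstack([np.sign(rng.normal(size=5000)),   # two non-Gaussian sources
               rng.laplace(size=5000)])
X = np.array([[1.0, 0.5], [0.3, 1.0]]) @ S       # mixed observations

# center and whiten
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for it in range(200):
    # FastICA update with g(u) = u^3: w+ = E{z g(w'z)} - E{g'(w'z)} w
    w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    if 1 - abs(w_new @ w) < 1e-10:               # convergence threshold
        w = w_new
        break
    w = w_new
print("converged after", it + 1, "iterations, w =", w)
```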

SVM Based Nonparametric Model Identification and Dynamic Model Control

Weimin Zhong; Daoying Pi; Youxian Sun

In this paper, a nonparametric model identification and dynamic matrix control technique based on a support vector machine (SVM) with a linear kernel (SVM_DMC) is presented. First, a step-response model of the manipulated variables is obtained by system identification with the linear-kernel SVM from random test data or manufacturing data. Second, an explicit control law for a receding-horizon quadratic objective is derived through the predictive control mechanism. Finally, the approach is illustrated by simulating a system with dead-time delay. The results show that SVM_DMC performs well in predictive control and tracks the reference trajectory closely.

- Statistical Neural Network Models and Support Vector Machines | Pp. 706-709
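
The sketch below illustrates the two stages the abstract describes under assumed simplifications: a linear-kernel SVR identifies an FIR/step-response model of a first-order plant from random test data, and the standard DMC law du = (A'A + lam*I)^(-1) A'(r - y_free) then yields the explicit control increment. The plant and all tuning values are illustrative, not the paper's.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
N = 30                                  # model horizon (FIR length)

# simulate a stable first-order plant y[k+1] = 0.9 y[k] + 0.1 u[k]
u = rng.normal(size=500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k]

# linear SVR on FIR regressors: y[k] depends on u[k-1], ..., u[k-N]
X = np.array([u[k - N:k][::-1] for k in range(N, 500)])
svr = SVR(kernel="linear", C=100.0, epsilon=1e-4).fit(X, y[N:500])
h = svr.coef_.ravel()                   # identified impulse response
a = np.cumsum(h)                        # step-response coefficients

# dynamic matrix for prediction horizon P and control horizon M
P, M, lam = 10, 3, 0.1
Amat = np.zeros((P, M))
for i in range(P):
    for j in range(min(i + 1, M)):
        Amat[i, j] = a[i - j]
ref = np.ones(P)                        # unit set-point, zero free response
du = np.linalg.solve(Amat.T @ Amat + lam * np.eye(M), Amat.T @ ref)
print("first control increment:", du[0])
```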

Weighted On-line SVM Regression Algorithm and Its Application

Hui Wang; Daoying Pi; Youxian Sun

Based on the KKT conditions and the Lagrangian multiplier method, a weighted SVM regression model and its online training algorithm are developed. The standard SVM regression model treats every sample equally, with the same error requirement, which is unsuitable when different samples contribute differently to the construction of the regression model. In the new weighted model, every training sample is assigned a weight coefficient to reflect these differences. Moreover, the standard online training algorithm cannot remove redundant samples effectively, so a new method for removing them is presented. Simulation on a benchmark problem shows that the new algorithm quickly and accurately approximates nonlinear and time-varying functions while requiring less memory.

- Statistical Neural Network Models and Support Vector Machines | Pp. 716-719
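
The paper's incremental KKT-based algorithm is not reproduced here; as a stand-in, the sketch below shows the weighting idea with scikit-learn's batch SVR, whose per-sample weights scale each sample's error penalty so that newer samples of a time-varying function count more than older ones.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
t = np.linspace(0, 6, 300)
y = np.sin(t) + 0.2 * t + rng.normal(0, 0.05, t.size)   # drifting target

weights = np.exp((t - t.max()) / 2.0)   # exponential forgetting of old data
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05)
svr.fit(t.reshape(-1, 1), y, sample_weight=weights)
print("prediction at t=6:", svr.predict([[6.0]])[0])
```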

Convergence of an Online Gradient Method for BP Neural Networks with Stochastic Inputs

Zhengxue Li; Wei Wu; Guorui Feng; Huifang Lu

An online gradient method for BP neural networks is presented and discussed. The input training examples are permuted stochastically in each cycle of the iteration. A monotonicity result and a weak convergence result of a deterministic nature are proved for the method.

- Other Topics in Neural Network Models | Pp. 720-729
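
A minimal sketch of the training scheme analyzed, assuming a small one-hidden-layer network and synthetic data: online (per-example) gradient descent in which the training examples are re-permuted stochastically at the start of every cycle.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (100, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])              # target function

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, 8), 0.0
lr = 0.05

for cycle in range(200):
    for i in rng.permutation(len(X)):              # stochastic input order
        h = np.tanh(X[i] @ W1 + b1)
        err = h @ W2 + b2 - y[i]
        # backpropagation for the single example
        gh = err * W2 * (1 - h ** 2)
        W2 -= lr * err * h
        b2 -= lr * err
        W1 -= lr * np.outer(X[i], gh)
        b1 -= lr * gh

print("final MSE:", np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```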

Stochastic High-Order Hopfield Neural Networks

Yi Shen; Guoying Zhao; Minghui Jiang; Shigeng Hu

In 1984 Hopfield showed that the time evolution of a symmetric Hopfield neural network is a motion in state space that seeks out minima of the energy function (i.e., the equilibrium point set of the network). High-order Hopfield neural networks have more extensive applications than standard Hopfield networks, and their convergence has been discussed in the literature. In practice, however, a neural network is often subject to environmental noise, so it is useful and interesting to determine whether a high-order neural network still approaches some limit set under stochastic perturbation. In this paper, we give a number of useful bounds on the noise intensity under which the stochastic high-order neural network approaches its limit set. Our result removes the requirement that the connection weight matrix be symmetric and includes the classic result on Hopfield neural networks, which are a special case of stochastic high-order Hopfield neural networks. Finally, an example is given to verify the effectiveness of our results.

- Other Topics in Neural Network Models | Pp. 740-749
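
The paper's noise-intensity bounds are not stated in the abstract; the sketch below merely simulates, via Euler-Maruyama, a first-order Hopfield-type network with an asymmetric weight matrix under additive noise, illustrating the question studied: whether the trajectory still settles near the deterministic limit set when the noise intensity sigma is small.

```python
import numpy as np

rng = np.random.default_rng(5)
W = np.array([[0.0, 1.2], [0.8, 0.5]])   # asymmetric connection weights
dt, sigma = 0.01, 0.05                   # step size and noise intensity

x = rng.normal(size=2)
for _ in range(20000):
    drift = -x + W @ np.tanh(x)          # deterministic Hopfield dynamics
    x = x + dt * drift + sigma * np.sqrt(dt) * rng.normal(size=2)

print("final state:", x,
      "drift norm:", np.linalg.norm(-x + W @ np.tanh(x)))
```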

An Efficient Score Function Generation Algorithm with Information Maximization

Woong Myung Kim; Hyon Soo Lee

In this study, we propose a new algorithm that generates the score function in ICA (independent component analysis) using entropy theory. Generating the score function requires estimating the probability density function of the original signals, and the density function must be differentiable. We therefore use kernel density estimation to derive the differential form of the score function from the original signals. After rewriting the formula in convolution form to accelerate the density estimation, we apply the FFT algorithm, which computes the convolution faster. The proposed method reduces the estimation error, i.e., the density difference between the recovered and the original signals. We also introduce a constraint that enables information maximization through the smoothing parameters. In computer simulations on blind source separation problems, we estimate density functions closer to those of the original signals than the Extended Infomax algorithm and Fixed-Point ICA do, and we obtain an improved SNR (signal-to-noise ratio) between the recovered and the original signals.

- Other Topics in Neural Network Models | Pp. 760-768
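
As a sketch of the FFT-accelerated kernel density step the abstract describes: bin the samples, convolve the histogram with a Gaussian kernel via FFT, and form the score function phi(x) = -p'(x)/p(x) from the smoothed density. The bandwidth, grid, and Laplace test signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
s = rng.laplace(size=20000)                   # stand-in original signal

edges = np.linspace(-8, 8, 1025)
centers = (edges[:-1] + edges[1:]) / 2
dx = edges[1] - edges[0]
hist, _ = np.histogram(s, bins=edges, density=True)

h = 0.2                                       # smoothing parameter (bandwidth)
kern = np.exp(-0.5 * (centers / h) ** 2)
kern /= kern.sum() * dx                       # normalize to unit integral
# circular convolution via FFT; ifftshift centers the kernel at index 0
p = np.real(np.fft.ifft(np.fft.fft(hist) *
                        np.fft.fft(np.fft.ifftshift(kern)))) * dx
p = np.maximum(p, 1e-12)

score = -np.gradient(p, dx) / p               # score function phi = -p'/p
# for a unit Laplace density phi(x) = sign(x); check near x = 1:
print("estimated score at x=1:", score[np.searchsorted(centers, 1.0)])
```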

A New Criterion on Exponential Stability of a Class of Discrete Cellular Neural Networks with Time Delay

Fei Hao; Long Wang; Tianguang Chu

A new criterion for exponential stability of the equilibrium point of a class of discrete cellular neural networks (CNNs) with delay is established by the Lyapunov-Krasovskii functional method. The result exhibits a relation between the delay time and the corresponding parameters of the network. A numerical example is given to illustrate the efficiency of the proposed approach.

- Other Topics in Neural Network Models | Pp. 769-772
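
The criterion itself is not given in the abstract, so the snippet below only checks a classical norm-based sufficient condition for a delayed discrete-time network x(k+1) = A x(k) + B f(x(k - tau)) with a 1-Lipschitz activation f: if ||A|| + ||B|| < 1 in spectral norm, the origin is globally exponentially stable for any delay. This is a placeholder for, not a restatement of, the paper's result.

```python
import numpy as np

# assumed system matrices for illustration
A = np.array([[0.3, 0.1], [0.0, 0.4]])   # state feedback term
B = np.array([[0.2, 0.0], [0.1, 0.2]])   # delayed activation term

# delay-independent sufficient condition: ||A||_2 + ||B||_2 < 1
margin = np.linalg.norm(A, 2) + np.linalg.norm(B, 2)
print("||A|| + ||B|| =", margin,
      "-> exponentially stable" if margin < 1 else "-> test inconclusive")
```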