Publications catalogue - books



Advances in Natural Computation: 1st International Conference, ICNC 2005, Changsha, China, August 27-29, 2005, Proceedings, Part I

Availability

Detected institution: Not detected
Year of publication: 2005
Browse: SpringerLink

Table of contents

The Improved CMAC Model and Learning Result Analysis

Daqi Zhu; Min Kong; YonQing Yang

An improved online learning scheme for neural networks is proposed to speed up the learning process in cerebellar model articulation controllers (CMAC). In the improved approach, the learned times of the addressed hypercubes are used as the credibility (confidence) of the learned values: in the early learning stage, the update applied to the addressed hypercubes is proportional to the inverse of the exponential of the learned times, while in the later stage it is proportional to the inverse of the learned times. With this idea, the learning speed can indeed be improved.
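
For readers who want a concrete picture of the two-stage, credibility-weighted update described above, here is a minimal Python sketch. The hypercube addressing scheme, the learning rate and the epoch at which the two stages switch are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch of the credibility-weighted CMAC update described above.
# The addressing scheme, learning rate and stage-switch epoch are illustrative.

N_CELLS, N_ACTIVE = 64, 4          # hypercubes in the table, active cells per input
weights = np.zeros(N_CELLS)
hits = np.zeros(N_CELLS)           # "learned times" of each addressed hypercube

def address(x):
    """Toy mapping of a scalar input in [0, 1) onto N_ACTIVE neighbouring cells."""
    base = int(x * (N_CELLS - N_ACTIVE)) % (N_CELLS - N_ACTIVE)
    return np.arange(base, base + N_ACTIVE)

def cmac_update(x, target, epoch, switch_epoch=20, lr=0.5):
    idx = address(x)
    err = target - weights[idx].sum()
    hits[idx] += 1
    if epoch < switch_epoch:                    # early stage: 1 / exp(learned times)
        cred = np.exp(-hits[idx])
    else:                                       # later stage: 1 / learned times
        cred = 1.0 / hits[idx]
    weights[idx] += lr * err * cred / N_ACTIVE
    return err

for epoch in range(50):
    for x, t in [(0.1, 0.5), (0.4, 0.9), (0.8, 0.2)]:
        cmac_update(x, t, epoch)
```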

- Neural Network Learning Algorithms | Pp. 15-24

A New Smooth Support Vector Regression Based on ε-Insensitive Logistic Loss Function

Yang Hui-zhong; Shao Xin-guang; Ding Feng

A new smooth support vector regression based on the ε-insensitive logistic loss function, L-SSVR for short, is proposed in this paper. It is similar to SSVR, but without adding any heuristic smoothing parameters and with a robust absolute loss. Taking advantage of L-SSVR, one can now treat the SVM as a linear program and efficiently solve large-scale regression problems without any optimization packages. Details of the algorithm and its implementation are presented in this paper. Simulation results for both artificial and real data show a remarkable improvement in generalization performance and training time.

- Neural Network Learning Algorithms | Pp. 25-32

Neural Network Classifier Based on the Features of Multi-lead ECG

Mozhiwen; Feng Jun; Qiu Yazhu; Shu Lan

In this study, two methods for detecting the QRS waves of the electrocardiogram (ECG) were presented and compared. On the one hand, a modified version of the linear approximation distance thresholding (LADT) algorithm was studied and the ECG features were obtained for later use. On the other hand, the Mexican-hat wavelet transform was adopted to detect the characteristic points of the ECG. Part of the ECG features were used to train an RBF network, and then all of them were used to examine the performance of the network. The algorithms were tested on ECG signals from the MIT-BIH database. Compared with other tests, the results show that the detection ability of the Mexican-hat wavelet transform is very good owing to its time-frequency representation: the ECG characteristic points are represented by the local extrema of the transformed signals, and the correct QRS detection rate rises to 99.9%. The classification performance is also good, with a correct rate of 100% on trained waves and 86.6% on untrained waves.

- Neural Network Learning Algorithms | Pp. 33-43

A New Learning Algorithm for Diagonal Recurrent Neural Network

Deng Xiaolong; Xie Jianying; Guo Weizhong; Liu Jun

A new hybrid learning algorithm combining the extended Kalman filter (EKF) and the particle filter is presented. The new algorithm is applied for the first time to train the diagonal recurrent neural network (DRNN). The EKF is used to train the DRNN, and the particle filter applies a resampling algorithm to optimize the particles, namely the DRNNs with their network weights. These methods shorten the training and make the DRNN converge more quickly. Simulation results on nonlinear dynamical identification verify the validity of the new algorithm.

- Neural Network Learning Algorithms | Pp. 44-50

Locally Determining the Number of Neighbors in the k-Nearest Neighbor Rule Based on Statistical Confidence

Jigang Wang; Predrag Neskovic; Leon N. Cooper

The k-nearest neighbor rule is one of the most attractive pattern classification algorithms. In practice, the value of k is usually determined by cross-validation. In this work, we propose a new method that locally determines the number of nearest neighbors based on the concept of statistical confidence. We define the confidence associated with decisions that are made by the majority rule from a finite number of observations and use it as a criterion to determine the number of nearest neighbors needed. The new algorithm is tested on several real-world datasets and yields results comparable to those obtained by the k-nearest neighbor rule. In contrast to the k-nearest neighbor rule, which uses a fixed number of nearest neighbors throughout the feature space, our method locally adjusts the number of neighbors until a satisfactory level of confidence is reached. In addition, the statistical confidence provides a natural way to balance the trade-off between the reject rate and the error rate by excluding patterns that have low confidence levels.
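
The idea of growing the neighborhood until the majority vote is statistically confident can be sketched as follows. The particular confidence measure used here, a binomial test of the majority count against a 50/50 split, is a stand-in for the paper's definition; the threshold and k_max are arbitrary.

```python
import numpy as np
from scipy.stats import binom

def confident_knn_predict(X_train, y_train, x, conf_level=0.95, k_max=50):
    """Locally grow k until the majority vote is statistically confident.

    y_train must hold integer class labels. The confidence measure used here,
    a binomial test of the majority count against a 50/50 split, is an
    illustrative stand-in for the paper's definition of statistical confidence.
    """
    d = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(d)
    for k in range(1, min(k_max, len(order)) + 1):
        counts = np.bincount(y_train[order[:k]])
        # probability of a majority at least this large under a 50/50 split
        p_chance = binom.sf(counts.max() - 1, k, 0.5)
        if 1.0 - p_chance >= conf_level:
            return counts.argmax(), k
    return counts.argmax(), k          # low-confidence case: could be rejected instead

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2.5, 1, (100, 2))])
y = np.repeat([0, 1], 100)
label, k_used = confident_knn_predict(X, y, np.array([1.2, 1.2]))
print(label, k_used)
```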

- Neural Network Learning Algorithms | Pp. 71-80

Fuzzy Self-Organizing Map Neural Network Using Kernel PCA and the Application

Qiang Lv; Jin-shou Yu

A fuzzy self-organizing map neural network using kernel principal component analysis is presented, and a two-stage hybrid learning algorithm (KPCA-FSOM) is proposed to train this network. In the first stage, the KPCA algorithm is applied to extract the features of the nonlinear data. In the second stage, a fuzzy SOM algorithm, which extends the basic SOM algorithm by combining fuzzy theory with a locally weighted distortion index, is used to train the SOM network on the extracted features. A real-life application of the KPCA-FSOM algorithm to classifying data from an acrylonitrile reactor is provided. The experimental results show that this algorithm obtains better clustering and that the trained network can monitor yields more effectively.

- Neural Network Learning Algorithms | Pp. 81-90

Self-organized Locally Linear Embedding for Nonlinear Dimensionality Reduction

Jian Xiao; Zongtan Zhou; Dewen Hu; Junsong Yin; Shuang Chen

Locally Linear Embedding (LLE) is an efficient nonlinear algorithm for mapping high-dimensional data to a low-dimensional observed space. However, the algorithm is sensitive to several parameters that must be set manually, and the resulting maps may be invalid in the presence of noise. In this paper, the original LLE algorithm is improved by introducing the self-organizing features of DGSOM, a novel SOM model we proposed recently, to overcome these shortcomings. In the improved algorithm, the nearest neighbors are selected automatically according to the topological connections derived from DGSOM. The proposed algorithm can also estimate the intrinsic dimensionality of the manifold and eliminate noise simultaneously. All these advantages are illustrated with extensive experiments and simulations.

- Neural Network Learning Algorithms | Pp. 101-109

Active Learning for Probabilistic Neural Networks

Bülent Bolat; Tülay Yıldırım

In many neural network applications, the selection of the best training set to represent the entire sample space is one of the most important problems. The active learning algorithms in the literature for neural networks are not appropriate for Probabilistic Neural Networks (PNN). In this paper, a new active learning method is proposed for PNN. The method was applied to several benchmark problems.

- Neural Network Learning Algorithms | Pp. 110-118

Adaptive Training of Radial Basis Function Networks Using Particle Swarm Optimization Algorithm

Hongkai Ding; Yunshi Xiao; Jiguang Yue

A novel methodology to determine the optimum number of centers and the network parameters simultaneously, based on a Particle Swarm Optimization (PSO) algorithm with matrix encoding, is proposed in this paper. To tackle the structure-matching problem, a random structure updating rule is employed to determine the current structure at each epoch. The effectiveness of the method is illustrated on a nonlinear system identification problem.

- Neural Network Learning Algorithms | Pp. 119-128

A Game-Theoretic Approach to Competitive Learning in Self-Organizing Maps

Joseph Herbert; JingTao Yao

Self-Organizing Maps (SOM) are a powerful tool for clustering and discovering patterns in data. Competitive learning in the SOM training process focuses on finding the neuron that is most similar to an input vector. Since an update of a neuron only benefits part of the feature map, it can be thought of as a local optimization problem. Moving from a local optimization model to a global optimization model requires game-theoretic techniques to analyze the SOM as a whole. A new algorithm, GTSOM, is introduced to take cluster quality measurements into account and dynamically modify learning rates to ensure improved quality through successive iterations.

- Neural Network Learning Algorithms | Pp. 129-138

Generate Different Neural Networks by Negative Correlation Learning

Yong Liu

This paper describes two methods for generating different neural networks in an ensemble. One is based on negative correlation learning. The other is based on cross-validation with negative correlation learning, i.e., bagging with negative correlation learning. In negative correlation learning, all individual networks are trained simultaneously on the same training set. In bagging with negative correlation learning, the individual networks are trained on different data sets sampled with replacement from the training set. The performance and correct response sets of the two learning methods are compared. The purpose of this paper is to find out how to design more effective neural network ensembles.
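
As a concrete illustration of the negative correlation penalty, the toy sketch below trains a small ensemble with the usual NCL gradient (F_i - y) - lambda (F_i - Fbar); the linear base learners, lambda and the learning rate are illustrative stand-ins for the neural networks used in the paper.

```python
import numpy as np

# Toy negative correlation learning (NCL) on an ensemble of linear models.
# The base learners, lambda (lam) and the learning rate are illustrative;
# the paper trains neural networks, but the penalty gradient has the same form.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

M, lam, lr = 5, 0.5, 0.05
W = rng.standard_normal((M, 2)) * 0.1          # [weight, bias] per ensemble member
Phi = np.c_[X, np.ones(len(X))]                # design matrix [x, 1]

for _ in range(500):
    F = Phi @ W.T                              # (n, M) individual outputs
    Fbar = F.mean(axis=1, keepdims=True)       # ensemble output
    # NCL gradient w.r.t. each member's output: (F_i - y) - lam * (F_i - Fbar)
    g = (F - y[:, None]) - lam * (F - Fbar)
    W -= lr * (g.T @ Phi) / len(X)

print("ensemble MSE:", float(np.mean((Fbar[:, 0] - y) ** 2)))
```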

- Neural Network Learning Algorithms | Pp. 149-156

Learning Outliers to Refine a Corpus for Chinese Webpage Categorization

Dingsheng Luo; Xinhao Wang; Xihong Wu; Huisheng Chi

Webpage categorization has become an important topic in recent years. In a webpage, text is usually the main content, so automatic text categorization (ATC) becomes the key technique for such a task. For Chinese text categorization as well as Chinese webpage categorization, one of the basic and urgent problems is the construction of a good benchmark corpus. In this study, a machine learning approach is presented to refine a corpus for Chinese webpage categorization, where the AdaBoost algorithm is adopted to identify outliers in the corpus. The standard k-nearest neighbor (kNN) algorithm under a vector space model (VSM) is adopted to construct a webpage categorization system. Simulation results, as well as manual inspection of the identified outliers, reveal that the presented method works well.

- Neural Network Learning Algorithms | Pp. 167-178

A New Learning Algorithm Based on Lever Principle

Xiaoguang He; Jie Tian; Xin Yang

In this paper a new learning algorithm, the Lever Training Machine (LTM), is presented for binary classification. LTM is a supervised learning algorithm whose main idea is inspired by a physical principle, the lever principle. Figuratively, LTM involves rolling a hyperplane around the convex hull of the target training set and using the equilibrium positions of the hyperplane to define decision surfaces. In theory, the optimization goal of LTM is to maximize the correct rejection rate. If the distribution of the target set is convex, a set of such decision surfaces can be trained for exact discrimination without false alarms. Two mathematical experiments and a practical application to face detection confirm that LTM is an effective learning algorithm.

- Neural Network Learning Algorithms | Pp. 187-198

A Novel LDA Approach for High-Dimensional Data

Guiyu Feng; Dewen Hu; Ming Li; Zongtan Zhou

Linear Discriminant Analysis (LDA) is one of the most popular linear projection techniques for feature extraction. The major drawback of this method is that it may encounter the small sample size problem in practice. In this paper, we present a novel LDA approach for high-dimensional data. Instead of direct dimension reduction using PCA as the first step, the high-dimensional data are mapped into a relatively lower dimensional similarity space, and then the LDA technique is applied. The preliminary experimental results on the ORL face database verify the effectiveness of the proposed approach.
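
The "map to a similarity space, then apply LDA" idea might look roughly like the sketch below; the RBF similarity measure, its width and the random placeholder data are assumptions rather than the authors' choices.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Map high-dimensional samples into a similarity space (similarity to each
# training sample), then run ordinary LDA there. The RBF similarity, its width
# and the random placeholder "images" are illustrative assumptions.

rng = np.random.default_rng(1)
X_train = rng.standard_normal((40, 10304))     # e.g. flattened 112x92 face images
y_train = np.repeat(np.arange(10), 4)          # 10 subjects, 4 images each
X_test = rng.standard_normal((10, 10304))

gamma = 1e-6
S_train = rbf_kernel(X_train, X_train, gamma=gamma)   # 40-dimensional similarity space
S_test = rbf_kernel(X_test, X_train, gamma=gamma)

lda = LinearDiscriminantAnalysis().fit(S_train, y_train)
print(lda.predict(S_test))
```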

- Neural Network Learning Algorithms | Pp. 209-212

Support Vector Regression with Smoothing Property

Zhixia Yang; Nong Wang; Ling Jing

The construction of a smooth curve is actually a regression problem. This paper discusses how to use SVR to solve the problem of smooth curve reconstruction in reverse engineering. A modified support vector regression model is proposed. Numerical results show that curves fitted by the modified method are smoother than those fitted by the standard SVR when there are some bad measurement points in the data.

- Neural Network Learning Algorithms | Pp. 217-220

A Fast SMO Training Algorithm for Support Vector Regression

Haoran Zhang; Xiaodong Wang; Changjiang Zhang; Xiuling Xu

Support vector regression (SVR) is a powerful tool for solving regression problems. This paper proposes a fast Sequential Minimal Optimization (SMO) algorithm for training SVR. It first gives an analytical solution to the size-two quadratic programming (QP) problem, and then proposes a new heuristic method for selecting the working set, which leads to faster convergence. The simulation results indicate that the proposed SMO algorithm can reduce the training time of SVR, and its performance is better than that of the original SMO algorithm.

- Neural Network Learning Algorithms | Pp. 221-224

A New Predictive Vector Quantization Method Using a Smaller Codebook

Min Shi; Shengli Xie

To improve coding efficiency, a new predictive vector quantization (VQ) method is proposed in this paper. Two codebooks with different dimensionalities and different sizes are employed in the algorithm. The defined blocks are first classified based on their variance. For smooth areas, the current processing vectors are sampled into even column vectors and odd column vectors. The even column vectors are encoded with the lower-dimensional, smaller codebook; the odd ones are predicted at the decoder using the decoded pixels from intra-blocks and inter-blocks. For edge areas, the current processing vectors are encoded with the traditional codebook to maintain image quality. An efficient codebook design method is also presented to improve the quality of the resulting codebook. Experimental comparisons with other methods show the good performance of the algorithm.

- Neural Network Learning Algorithms | Pp. 229-236

Universal Approach to Study Delayed Dynamical Systems

Tianping Chen

In this paper, we propose a universal approach to study the dynamical behaviors of various neural networks with time-varying delays. A universal model is proposed, which includes most of the existing models as special cases. An effective approach to investigate global stability, first proposed in [1], is also given. It is pointed out that the approach proposed in [1] applies to systems with time-varying delays as well.

- Neural Network Architectures | Pp. 245-253

Long-Range Connections Based Small-World Network and Its Synchronizability

Liu Jie; Lu Jun-an

How crucial are the long-distance connections in small-world networks produced by the semi-random SW strategy? In this paper, we investigate some related questions by constructing a semi-random small-world network through randomly adding only 'long-range (lattice distance) connections' to a regular network. The modified network model is compared with the widely used NW small-world network. It is found that, by using the new modified small-worldify algorithm, one can obtain a better-clustered small-world network with a similar average path length. Furthermore, we find numerically that, for a dynamical network with a typical coupling scheme, the synchronizability of the small-world network formed by our procedure is no better than that of the small-world network formed by the NW algorithm, although the two classes of networks are constructed at the same cost and have similar average path lengths. These results further confirm that random coupling is, in some sense, the best candidate for such nonlocal coupling in the semi-random strategy. The main results are confirmed by extensive numerical simulations.
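
A rough way to reproduce the kind of comparison described above is sketched below with networkx: long-range shortcuts are added to a ring lattice, and the clustering coefficient and average path length are compared with an NW small-world graph built at a matching cost. Sizes and probabilities are arbitrary choices, not the paper's settings.

```python
import random
import networkx as nx

# Ring lattice with added long-range shortcuts versus the NW (Newman-Watts)
# small-world model. Sizes and probabilities are arbitrary illustrations.

n, k, extra_edges = 200, 4, 60
random.seed(0)

ring = nx.watts_strogatz_graph(n, k, p=0.0)          # regular ring lattice, no rewiring
sw = ring.copy()
added = 0
while added < extra_edges:                           # add only long-range shortcuts
    u, v = random.sample(range(n), 2)
    lattice_dist = min(abs(u - v), n - abs(u - v))
    if lattice_dist > k and not sw.has_edge(u, v):
        sw.add_edge(u, v)
        added += 1

# NW graph with roughly the same expected number of added shortcuts
nw = nx.newman_watts_strogatz_graph(n, k, p=extra_edges / (n * k / 2), seed=0)

for name, g in [("long-range SW", sw), ("NW model", nw)]:
    print(name, nx.average_clustering(g), nx.average_shortest_path_length(g))
```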

- Neural Network Architectures | Pp. 254-263

Double Synaptic Weight Neuron Theory and Its Application

Wang Shou-jue; Chen Xu; Qin Hong; Li Weijun; Bian Yi

In this paper, a novel mathematical model of a neuron, the Double Synaptic Weight Neuron (DSWN), is presented. The DSWN can simulate many kinds of neuron architectures, including Radial-Basis-Function (RBF), Hyper Sausage and Hyper Ellipsoid models, etc. Moreover, this new model has been implemented in the new CASSANN-II neurocomputer, which can be used to form various types of neural networks with multiple mathematical models of neurons. The flexibility of the DSWN in constructing neural networks has also been described. Based on the theory of Biomimetic Pattern Recognition (BPR) and high-dimensional space covering, a recognition system for omnidirectionally oriented rigid objects on a horizontal surface and a face recognition system have been implemented on the CASSANN-II neurocomputer. In these two cases, the results showed that the DSWN neural network has great potential in pattern recognition.

- Neural Network Architectures | Pp. 264-272

Comparative Study of Chaotic Neural Networks with Different Models of Chaotic Noise

Huidang Zhang; Yuyao He

In order to explore the search mechanism of the chaotic neural network (CNN), this paper first investigates the time evolutions of four chaotic noise models, namely the Logistic map, the Circle map, the Henon map, and a special two-dimensional (2-D) discrete chaotic system. Second, based on the CNN proposed by Y. He, we obtain three alternative CNNs by replacing the chaotic noise source (the Logistic map) with the Circle map, the Henon map, and the special 2-D discrete chaotic system. Third, we apply all of them to the 4-city and 10-city traveling salesman problems (TSP), respectively. The time evolutions of the energy functions and the outputs of typical neurons for each model are obtained for the 4-city TSP. The global optimization rate (GM) for the 10-city TSP is tabulated for different values of the chaotic noise scaling parameter and the decreasing speed parameter. Finally, the features and effectiveness of the four models are discussed and evaluated according to the simulation results. We confirm that chaotic noise with the symmetric structure of reverse bifurcation is necessary for a chaotic neural network to search efficiently, and that the performance of the CNN may depend on the nature of the chaotic noise.

- Neural Network Architectures | Pp. 273-282

An Algorithm for Pruning Redundant Modules in Min-Max Modular Network with GZC Function

Jing Li; Bao-Liang Lu; Michinori Ichikawa

The min-max modular neural network with Gaussian zero-crossing function (M-GZC) has a locally tuned response characteristic and an emergent incremental learning ability, but it suffers from quadratic complexity in storage space and response time. Redundant sample pruning and redundant structure pruning can be considered to overcome these weaknesses. This paper aims at the latter: it analyzes the properties of the receptive field in the M-GZC network, and then proposes a strategy for pruning redundant modules. Experiments on both structure pruning alone and structure pruning integrated with sample pruning are performed. The results show that our algorithm notably reduces both the size of the network and the response time while not changing the decision boundaries.

- Neural Network Architectures | Pp. 293-302

A Modular Structure of Auto-encoder for the Integration of Different Kinds of Information

Naohiro Fukumura; Keitaro Wakaki; Yoji Uno

Humans use many different kinds of information from different sensory organs in motion tasks. It is important in human sensing to extract useful information and to use the multiple kinds of information effectively. From the viewpoint of computational theory, we approach the integration mechanism of human sensory and motor information. In this study, a modular structure of auto-encoders is introduced to extract the intrinsic properties of a recognized object that are commonly contained in multiple kinds of information. After learning, a relaxation method using the learned model can solve the transformation between the integrated kinds of information. This model was applied to the problem of how a locomotive robot decides its leg height to climb over an obstacle from visual information.

- Neural Network Architectures | Pp. 313-321

An ART2/RBF Hybrid Neural Networks Research

Xuhua Yang; Yunbing Wei; Qiu Guan; Wanliang Wang; Shengyong Chen

Radial basis function (RBF) neural networks have been widely used for approximation and learning due to their structural simplicity. However, there are two difficulties in using traditional RBF networks: how to select the optimal number of intermediate-layer nodes, and how to select the centers of these nodes. This paper proposes a novel ART2/RBF hybrid neural network to solve these two problems. The ART2 neural network is used to select the optimal number of intermediate-layer nodes and the centers of these nodes at the same time, from which the RBF network model is obtained. Compared with traditional RBF networks, the ART2/RBF network has the optimal number of intermediate-layer nodes, optimal centers for these nodes and a smaller error.

- Neural Network Architectures | Pp. 332-335

Urban Traffic Signal Timing Optimization Based on Multi-layer Chaos Neural Networks Involving Feedback

Chaojun Dong; Zhiyong Liu; Zulian Qiu

The urban traffic system is a complex, stochastic system, so it is necessary to optimize traffic control signals to cope with the many urban traffic problems. A multi-layer chaotic neural network involving feedback (ML-CNN) was developed based on Hopfield networks and chaos theory, and it was used effectively for the optimization of urban traffic signal timing. An energy function for the network and an equation for the average delay per vehicle were also developed for the optimization computation. A simulation study was carried out at an intersection in Jiangmen city, China, which indicates that optimizing urban traffic signal timing with the ML-CNN can reduce the average delay per vehicle at the intersection by 25.1% compared with conventional timing methods. The ML-CNN could also be used in other fields.

- Neural Network Architectures | Pp. 340-344

Improving the Resultant Quality of Kohonen’s Self Organizing Map Using Stiffness Factor

Emin Germen

The performance of the Self-Organizing Map (SOM) is always influenced by the learning method. The resulting quality of the topological formation of the SOM is also highly dependent on the learning rate and the neighborhood function. In the literature, there are plenty of studies on finding proper methods to improve the quality of the SOM. In this paper, a new term, the 'stiffness factor', is proposed and used in SOM training. The effect of the stiffness factor has also been tested on a real-world problem and showed a positive influence.

- Neural Network Architectures | Pp. 353-357

A Novel Orthonormal Wavelet Network for Function Learning

Xieping Gao; Jun Zhang

This paper proposes a novel self-adaptive wavelet network model for regression analysis. The structure of this network is distinguished from those of the present models: it has four layers. This model not only overcomes the structural redundancy that present wavelet networks cannot avoid, but also solves complicated problems effectively. Thus, generalization performance is greatly improved and rapid learning can be realized. Some experiments on regression analysis are presented for illustration. Compared with existing results, the model reaches a hundredfold improvement in speed, and its generalization performance is greatly improved.

- Neural Network Architectures | Pp. 358-363

An Evolutionary Artificial Neural Networks Approach for BF Hot Metal Silicon Content Prediction

Zhao Min; Liu Xiang-guan; Luo Shi-hua

This paper presents an evolutionary artificial neural network (EANN) for the prediction of the blast furnace (BF) hot metal silicon content. The Pareto differential evolution (PDE) algorithm is used to optimize the connection weights and the network architecture (number of hidden nodes) simultaneously to improve the prediction precision. The application results show that the prediction of hot metal silicon content is successful. The data used in this paper were collected from the No. 1 BF at Laiwu Iron and Steel Group Co.

- Neural Network Architectures | Pp. 374-377

Double Robustness Analysis for Determining Optimal Feedforward Neural Network Architecture

Lean Yu; Kin Keung Lai; Shouyang Wang

This paper incorporates robustness into neural network modeling and proposes a novel two-phase robustness analysis approach for determining the optimal feedforward neural network (FNN) architecture in terms of the Hellinger distance of the probability density function (PDF) of the error distribution. The proposed approach is illustrated with an example.

- Neural Network Architectures | Pp. 382-385

Observation of Crises and Bifurcations in the Hodgkin-Huxley Neuron Model

Wuyin Jin; Qian Lin; Yaobing Wei; Ying Wu

As the stimulus frequency changes, many firing behaviors of the interspike intervals (ISIs), such as quasi-periodic, bursting, period-chaotic, chaotic and periodic firing, together with bifurcations of the chaotic attractor, appear alternately in the Hodgkin-Huxley (H-H) neuron model. The chaotic behavior is realized over a wide range of frequencies and is visualized using the ISIs, and many kinds of abrupt changes in the ISIs are observed in different frequency regions, such as boundary crises, interior crises and merging crises appearing alternately as the external signal frequency changes. There are also many periodic windows and fractal structures in the ISI dynamics. The collapse of chaos to a period-12 orbit in the ISI dynamics, caused by a saddle-node bifurcation, is identified.
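
Extracting ISIs from a simulated membrane-potential trace is the basic measurement behind these observations; a minimal sketch is given below, with a surrogate chirp signal standing in for an actual H-H integration and an arbitrary spike-detection threshold.

```python
import numpy as np

def interspike_intervals(v, t, threshold=0.0):
    """Return the interspike intervals (ISIs) of a membrane-potential trace.

    Spike times are taken as upward threshold crossings. The threshold and the
    surrogate trace below are illustrative; this is not the H-H simulation itself.
    """
    above = v >= threshold
    crossing_idx = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return np.diff(t[crossing_idx])

# Demo with a chirped surrogate "voltage" standing in for an H-H run
t = np.linspace(0.0, 1.0, 20000)
v = 40 * np.sin(2 * np.pi * (20 + 30 * t) * t) - 20
isis = interspike_intervals(v, t)
print(len(isis), isis[:5])
```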

- Neurodynamics | Pp. 390-396

An Application of Pattern Recognition Based on Optimized RBF-DDA Neural Networks

Guoyou Li; Huiguang Li; Min Dong; Changping Sun; Tihua Wu

An algorithm for Dynamic Decay Adjustment Radial Basis Function (RBF-DDA) neural networks is presented. It can adaptively determine the number of hidden-layer nodes and the center values of the data, resolving the problem of choosing the RBF parameters randomly, and the generalization ability of the RBF network is improved. When it is applied to an image pattern recognition system, the experimental results show that the recognition rate of the improved RBF neural network still reaches 97.4% even under strong disturbance. This verifies the good performance of the improved algorithm.

- Neurodynamics | Pp. 397-404

Effect of Noises on Two-Layer Hodgkin-Huxley Neuronal Network

Jun Liu; Zhengguo Lou; Guang Li

The stochastic resonance (SR) effect has been discovered in non-dynamical threshold systems such as sensory systems. This paper presents a network simulating the basic structure of a sensory system to study SR. The neuronal network consists of two layers of Hodgkin-Huxley (HH) neurons. Compared with a single HH model, subthreshold stimulating signals do not modulate the output signal-to-noise ratio, so a fixed level of noise from the environment can induce SR for various stimulating signals. Numerical experimental results also show that noise does not always deteriorate the detection of suprathreshold input signals.

- Neurodynamics | Pp. 411-419

Modeling of Short-Term Synaptic Plasticity Using Dynamic Synapses

Biswa Sengupta

This work presents a minimal time-continuous model of the target-cell-specific, use-dependent short-term synaptic plasticity (STP) observed in pyramidal cells that can account for both short-term depression and facilitation. In general it provides a concise and portable description that is useful for predicting synaptic responses to more complex patterns of stimulation, for studies of circuit dynamics, and for comparing dynamic properties across different synaptic pathways between or within preparations. The model allows computation of postsynaptic responses under either facilitation or depression at the synapse, thus exhibiting the characteristics of dynamic synapses found during short-term synaptic plasticity, for any arbitrary presynaptic spike train in the presence of realistic background synaptic noise. It therefore allows us to see the specific effect of a spike train on a neuronal lattice, both small-scale and large-scale, so as to reveal the short-term plastic behavior of neurons.

- Neurodynamics | Pp. 429-438

Stochastic Neuron Model with Dynamic Synapses and Evolution Equation of Its Density Function

Wentao Huang; Licheng Jiao; Yuelei Xu; Maoguo Gong

In most neural network models, neurons are viewed as the only computational units, while the synapses are treated as passive scalar parameters (weights). It has, however, long been recognized that biological synapses can exhibit rich temporal dynamics. These dynamics may have important consequences for computing and learning in biological neural systems. This paper proposes a novel stochastic model of a single neuron with synaptic dynamics, characterized by several stochastic differential equations. From this model, we obtain the evolution equation of the density function. Furthermore, we give an approach to reduce the evolution equation of the high-dimensional density function to the evolution equation of a one-dimensional function.

- Neurodynamics | Pp. 449-455

Learning Algorithm for Spiking Neural Networks

Hesham H. Amin; Robert H. Fujii

Spiking Neural Networks (SNNs) use inter-spike time coding to process input data. In this paper, a new learning algorithm for SNNs that uses the inter-spike times within a spike train is introduced. The learning algorithm utilizes the spatio-temporal pattern produced by the spike train input mapping unit and adjusts synaptic weights during learning. The approach was applied to classification problems.

- Neurodynamics | Pp. 456-465

Implementing Fuzzy Reasoning by IAF Neurons

Zhijie Wang; Hong Fan

The implementation of the intersection and union operations in fuzzy reasoning is explored with three Integrate-And-Fire (IAF) neurons, two serving as inputs and one as the output. We prove that if the parameter values of the neurons are set appropriately for the intersection operation, the firing rate of the output neuron is equal to or lower than the lower of the two input firing rates. We also prove that if the parameter values are set appropriately for the union operation, the firing rate of the output neuron is equal to or higher than the higher of the two input firing rates. The characteristics of the intersection and union operations implemented by IAF neurons are discussed.

- Neurodynamics | Pp. 476-479

A Method for Quantifying Temporal and Spatial Patterns of Spike Trains

Shi-min Wang; Qi-Shao Lu; Ying Du

Spike trains are treated as exact time-dependent stepwise functions called response functions. Five variables defined at sequential moments with equal intervals are introduced to characterize features of the response function, and these features can reflect the temporal patterns of the spike train. These variables have an obvious geometric meaning in expressing the response and a reasonable coding meaning in describing the spike train, since the well-known 'firing rate' is among them. The dissimilarity or distance between spike trains can be simply defined by means of these variables. The reconstruction of spike trains from these variables demonstrates that the information carried by spikes is preserved. If the spikes of a neuron ensemble are taken as a spatial sequence in each time bin, the spatial patterns of spikes can also be quantified with a group of variables similar to the temporal ones.

- Neurodynamics | Pp. 480-489

Study on Circle Maps Mechanism of Neural Spikes Sequence

Zhang Hong; Fang Lu-ping; Tong Qin-ye

To date, the problem of neural coding remains a puzzle. The intrinsic information carried in irregular neural spike sequences is not yet known, but the solution of this problem will have a direct influence on the study of neural information mechanisms. In this paper, the coding mechanism of the neural spike sequence caused by input stimuli of various frequencies is investigated based on an analysis of the H-H equation with the methods of nonlinear dynamics. The signals of external stimuli, those continuously varying physical or chemical signals, are transformed into frequency signals of the potential in many sense organs of a biological system, and then the frequency signals are transformed into irregular neural coding. This paper analyzes in detail the neuron response to stimuli of various periods and finds a possible coding rule.

- Neurodynamics | Pp. 499-507

Doubly Regularized Kernel Regression with Heteroscedastic Censored Data

Jooyong Shim; Changha Hwang

A doubly regularized likelihood estimating procedure is introduced for heteroscedastic censored regression. The proposed procedure provides estimates of both the conditional mean and the variance of the response variables, which are obtained in a two-step iterative fashion. The generalized cross-validation function and the generalized approximate cross-validation function are used alternately to estimate the tuning parameters in each step. Experimental results are then presented which indicate the performance of the proposed estimating procedure.

- Statistical Neural Network Models and Support Vector Machines | Pp. 521-527

A Prediction Interval Estimation Method for KMSE

Changha Hwang; Kyung Ha Seok; Daehyeon Cho

The kernel minimum squared error estimation (KMSE) model can be viewed as a general framework that includes kernel Fisher discriminant analysis (KFDA), the least squares support vector machine (LS-SVM), and kernel ridge regression (KRR) as particular cases. For continuous real output, the equivalence of KMSE and LS-SVM is shown in this paper. We apply standard methods for computing prediction intervals in nonlinear regression to the KMSE model. The simulation results show that LS-SVM has better performance in terms of the prediction intervals and the mean squared error (MSE). The experiment on a real data set indicates that KMSE compares favorably with the other method.
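
The regression core shared by KMSE, LS-SVM and KRR, together with a deliberately naive Gaussian prediction interval built from the residual variance, can be sketched as follows; the interval construction is a simplification of the standard nonlinear-regression intervals used in the paper, and gamma and lam are arbitrary.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

# Kernel ridge regression with a naive +-1.96 sigma prediction band.
# gamma, lam and the interval construction are illustrative simplifications.

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, (80, 1)), axis=0)
y = np.sinc(X[:, 0]) + 0.1 * rng.standard_normal(80)

gamma, lam = 0.5, 1e-2
K = rbf_kernel(X, X, gamma=gamma)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

X_new = np.linspace(-3, 3, 200)[:, None]
y_hat = rbf_kernel(X_new, X, gamma=gamma) @ alpha

sigma = np.std(y - K @ alpha)                          # residual standard deviation
lower, upper = y_hat - 1.96 * sigma, y_hat + 1.96 * sigma
```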

- Statistical Neural Network Models and Support Vector Machines | Pp. 536-545

Training Data Selection for Support Vector Machines

Jigang Wang; Predrag Neskovic; Leon N. Cooper

In recent years, support vector machines (SVMs) have become a popular tool for pattern recognition and machine learning. Training an SVM involves solving a constrained quadratic programming problem, which requires large memory and enormous amounts of training time for large-scale problems. In contrast, the SVM decision function is fully determined by a small subset of the training data, called support vectors. Therefore, it is desirable to remove from the training set the data that are irrelevant to the final decision function. In this paper we propose two new methods that select a subset of data for SVM training. Using real-world datasets, we compare the effectiveness of the proposed data selection strategies in terms of their ability to reduce the training set size while maintaining the generalization performance of the resulting SVM classifiers. Our experimental results show that a significant amount of training data can be removed by our proposed methods without degrading the performance of the resulting SVM classifiers.

- Statistical Neural Network Models and Support Vector Machines | Pp. 554-564

Modelling of Chaotic Systems with Recurrent Least Squares Support Vector Machines Combined with Reconstructed Embedding Phase Space

Zheng Xiang; Taiyi Zhang; Jiancheng Sun

A new strategy for modelling chaotic systems is presented. First, more information is acquired by utilizing the reconstructed embedding phase space. Then, modelling of the chaotic system is realized based on Recurrent Least Squares Support Vector Machines (RLS-SVM). We use the power spectrum and dynamic invariants, namely the Lyapunov exponents and the correlation dimension, as criteria, and then apply our method to the Chua's circuit time series. The comparison of dynamic invariants between the original and generated time series shows that the proposed method can capture the dynamics of the chaotic time series effectively.

- Statistical Neural Network Models and Support Vector Machines | Pp. 573-581

Fuzzy Support Vector Machines Based on α-Cut

Shengwu Xiong; Hongbing Liu; Xiaoxiao Niu

A new Fuzzy Support Vector Machine (α-FSVM) based on the α-cut is proposed in this paper. The proposed learning machine combines the membership of a fuzzy set with support vector machines. The α-cut set is introduced to distinguish the training samples in terms of the importance of the data. The more important sets are selected as new training sets to construct the fuzzy support vector machine. Benchmark two-class and multi-class datasets are used to test the effectiveness and validity of α-FSVMs. The experimental results indicate that α-FSVMs not only have higher precision but also solve the overfitting problem of support vector machines more effectively.
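
The α-cut selection step can be pictured as below: keep only the training samples whose fuzzy membership is at least alpha and train an ordinary SVM on them. The membership function used here (distance to the class mean) and the value of alpha are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Alpha-cut selection of training samples followed by a standard SVM.
# The membership function and alpha below are illustrative assumptions.

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

def membership(X, y):
    mu = np.empty(len(X))
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        mu[idx] = 1.0 - d / (d.max() + 1e-12)   # closer to the class mean -> higher membership
    return mu

alpha = 0.3
mu = membership(X, y)
keep = mu >= alpha                              # the alpha-cut of the fuzzy training set
clf = SVC(kernel="rbf", C=1.0).fit(X[keep], y[keep])
print("kept", int(keep.sum()), "of", len(X), "samples; train acc:", clf.score(X, y))
```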

- Statistical Neural Network Models and Support Vector Machines | Pp. 592-600

Mixtures of Kernels for SVM Modeling

Yan-fei Zhu; Lian-fang Tian; Zong-yuan Mao; Wei LI

Kernels are employed in Support Vector Machines (SVM) to map the nonlinear model into a higher-dimensional feature space where linear learning is adopted. The characteristics of the kernel have a great impact on the learning and predictive results of the SVM. A kernel with good fitting characteristics may not have good generalization characteristics. After research on two typical kinds of kernels, the global (polynomial) kernel and the local (RBF) kernel, a new SVM modeling method based on mixtures of kernels is proposed. Its implementation in a Lithopone calcination process demonstrates the good performance of the proposed method compared to a single kernel.
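
A convex mixture of a global (polynomial) and a local (RBF) kernel is itself a valid kernel, so the idea can be tried directly through a precomputed-kernel SVM, as sketched below; the mixing weight, degree and gamma are illustrative values, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVR

# Mixture of a global (polynomial) and a local (RBF) kernel via the
# precomputed-kernel interface. lam, degree and gamma are illustrative.

X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)
X_train, X_test, y_train = X[:150], X[150:], y[:150]

lam, degree, gamma = 0.6, 2, 0.1

def mixed_kernel(A, B):
    return lam * polynomial_kernel(A, B, degree=degree) + (1 - lam) * rbf_kernel(A, B, gamma=gamma)

model = SVR(kernel="precomputed", C=10.0)
model.fit(mixed_kernel(X_train, X_train), y_train)
y_pred = model.predict(mixed_kernel(X_test, X_train))
print(y_pred[:5])
```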

- Statistical Neural Network Models and Support Vector Machines | Pp. 601-607

Recurrent Support Vector Machines in Reliability Prediction

Wei-Chiang Hong; Ping-Feng Pai; Chen-Tung Chen; Ping-Teng Chang

Support vector machines (SVMs) have been successfully used in solving nonlinear regression and time series problems. However, the application of SVMs to reliability prediction has not been widely explored. Traditionally, recurrent neural networks are trained by back-propagation algorithms. In this study, SVM learning algorithms are applied to recurrent neural networks to predict system reliability. In addition, the parameters of the SVM model are selected by Genetic Algorithms (GAs). A numerical example from the existing literature is used to compare prediction performance. Empirical results indicate that the proposed model performs better than the other existing approaches.

- Statistical Neural Network Models and Support Vector Machines | Pp. 619-629

Gait Recognition via Independent Component Analysis Based on Support Vector Machine and Neural Network

Erhu Zhang; Jiwen Lu; Ganglong Duan

This paper proposes a method for automatic gait recognition using Fourier descriptors and independent component analysis (ICA) for the purpose of human identification at a distance. First, a simple background generation algorithm is introduced to extract the moving figures accurately and to obtain binary human silhouettes. Second, these silhouettes are described with Fourier descriptors and converted into associated one-dimensional signals. ICA is then applied to obtain the independent components of the signals. To reduce the computational cost, a fast and robust fixed-point algorithm for calculating the ICs is adopted and a criterion for selecting ICs is put forward. Finally, the nearest neighbor (NN), support vector machine (SVM) and backpropagation neural network (BPNN) classifiers are chosen for recognition, and the method is tested on the small UMD gait database and the NLPR gait database. Experimental results show that our method has encouraging recognition accuracy.

- Statistical Neural Network Models and Support Vector Machines | Pp. 640-649

An Incremental Learning Method Based on SVM for Online Sketchy Shape Recognition

Zhengxing Sun; Lisha Zhang; Enyi Tang

This paper briefly presents an incremental learning method based on SVM for online sketchy shape recognition. It collects all classification results corrected by the user and selects some important samples as retraining data according to their distance to the hyperplane of the SVM classifier. The classifier can then perform incremental learning quickly on the newly added samples, and the retrained classifier adapts to the user's drawing style. Experiments show the effectiveness of the proposed method.

- Statistical Neural Network Models and Support Vector Machines | Pp. 655-659

Blind Extraction of Singularly Mixed Source Signals

Zhigang Zeng; Chaojin Fu

In this paper, a neural network model and its associated learning rule are developed for sequential blind extraction in the case where the number of observable mixed signals is less than the number of sources. The approach is also suitable for the case in which the mixing matrix is nonsingular. Using this approach, all separable sources can be extracted one by one. A solvability analysis of the problem is also presented, and the new solvability condition is weaker than the existing solvability conditions in the literature.

- Statistical Neural Network Models and Support Vector Machines | Pp. 664-667

Palmprint Recognition Based on Unsupervised Subspace Analysis

Guiyu Feng; Dewen Hu; Ming Li; Zongtan Zhou

As feature extraction techniques, Kernel Principal Component Analysis (KPCA) and Independent Component Analysis (ICA) can both be considered generalizations of Principal Component Analysis (PCA), which has been used for palmprint recognition with satisfactory results [3]; it is therefore natural to wonder how KPCA and ICA perform on this task. In this paper, palmprint recognition using the KPCA and ICA methods is developed and compared with the PCA method. Based on the experimental results, some useful conclusions are drawn, which give a clearer picture of how these unsupervised subspace classifiers behave for palmprint recognition.
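
A side-by-side comparison of the three unsupervised subspace projections, each followed by a simple nearest-neighbour classifier, might be set up as in the sketch below; random arrays stand in for palmprint images, and the component counts, RBF width and data layout are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA, FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# PCA, KPCA and ICA feature extraction, each followed by a 1-NN classifier.
# Random arrays stand in for palmprint images; all parameters are illustrative.

rng = np.random.default_rng(0)
X_train = rng.standard_normal((100, 64 * 64))
y_train = np.repeat(np.arange(20), 5)          # 20 palms, 5 samples each
X_test = rng.standard_normal((20, 64 * 64))
y_test = np.arange(20)

for name, proj in [("PCA", PCA(n_components=30)),
                   ("KPCA", KernelPCA(n_components=30, kernel="rbf", gamma=1e-5)),
                   ("ICA", FastICA(n_components=30, max_iter=500))]:
    clf = make_pipeline(proj, KNeighborsClassifier(n_neighbors=1))
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))
```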

- Statistical Neural Network Models and Support Vector Machines | Pp. 675-678

A New Alpha Seeding Method for Support Vector Machine Training

Du Feng; Wenkang Shi; Huawei Guo; Liangzhou Chen

In order to obtain good hyperparameters for an SVM, the user needs to conduct extensive cross-validation, such as leave-one-out (LOO) cross-validation. Alpha seeding is often used to reduce the cost of SVM training. Compared with existing alpha seeding schemes, a new efficient alpha seeding method is proposed. Its good performance is demonstrated through several examples, and an interpretation from both geometrical and mathematical viewpoints is also given.

- Statistical Neural Network Models and Support Vector Machines | Pp. 679-682

Short-Term Load Forecasting Based on Self-organizing Map and Support Vector Machine

Zhejing Bao; Daoying Pi; Youxian Sun

An approach for short-term load forecasting that combines the self-organizing map (SOM) and the support vector machine (SVM) is proposed in this paper. First, historical load data of the same type are clustered using the SOM, and then the daily 48-point load values are vertically predicted based on SVM. In the clustering, factors such as date type, weather conditions and time delay are considered. In addition, the influence of the kernel function and the SVM parameters on load forecasting is discussed, and the performance of SOM-SVM is compared with pure SVM. It is shown that the usual smoothing technique in preprocessing is not suitable for vertical forecasting. Finally, the approach is tested on data from the EUNITE network, and the results show that the approach runs with high speed and good accuracy.
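
The cluster-then-predict pipeline can be sketched as follows, with k-means used as a simple stand-in for the SOM clustering stage and synthetic daily profiles in place of real load data; the 48-point vertical prediction is kept, but everything else is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

# Cluster daily load profiles, then train one SVR per cluster and per half-hour
# point, forecasting with the model of the matching cluster. K-means stands in
# for the SOM stage; all data and parameters are synthetic illustrations.

rng = np.random.default_rng(0)
days, points = 300, 48
base = np.sin(np.linspace(0, 2 * np.pi, points))
profiles = base + 0.3 * rng.standard_normal((days, points)) + rng.integers(0, 3, (days, 1))

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles[:-1])

models = {}
for c in range(3):
    idx = np.flatnonzero(clusters.labels_ == c)
    idx = idx[idx < days - 2]                        # need a "next day" target for each sample
    X_c, y_c = profiles[idx], profiles[idx + 1]      # predict tomorrow's profile from today's
    models[c] = [SVR(C=10.0).fit(X_c, y_c[:, h]) for h in range(points)]

today = profiles[-2]
c = clusters.predict(today[None, :])[0]
forecast = np.array([m.predict(today[None, :])[0] for m in models[c]])
print(forecast[:6])
```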

- Statistical Neural Network Models and Support Vector Machines | Pp. 688-691

A Multi-class Classifying Algorithm Based on Nonlinear Dimensionality Reduction and Support Vector Machines

Lukui Shi; Qing Wu; Xueqin Shen; Pilian He

Many problems in pattern classification involve some form of dimensionality reduction. ISOMAP is a representative nonlinear dimensionality reduction algorithm which can discover low-dimensional manifolds from high-dimensional data. To speed up ISOMAP and decrease its dependence on the neighborhood size, we propose an improved algorithm. It can automatically select a proper neighborhood size and an appropriate landmark set according to a stress function. A multi-class classifier with high efficiency is obtained by combining the improved ISOMAP with SVM. Experiments show that the presented classifier is effective in fingerprint classification.

- Statistical Neural Network Models and Support Vector Machines | Pp. 692-695

A VSC Scheme for Linear MIMO Systems Based on SVM

Zhang Yibo; Yang Chunjie; Pi Daoying; Sun Youxian

A variable structure control (VSC) scheme for linear MIMO systems based on the support vector machine (SVM) is developed. By analyzing the characteristics of linear MIMO systems, a VSC scheme based on the exponent reaching law is adopted to track the desired trajectory. One input of the system is then trained as the output of the SVM, while the sliding-mode function, differences and the other inputs of the system are trained as the inputs of the SVM. Thus one VSC input of the black-box system can be obtained directly from the trained SVM after the other inputs of the system are selected manually, and identification of the system parameters is avoided. A linear MIMO system is used to validate the scheme, and simulation results show that the scheme has high identification precision and fast training speed.

- Statistical Neural Network Models and Support Vector Machines | Pp. 696-699

Global Convergence of FastICA: Theoretical Analysis and Practical Considerations

Gang Wang; Xin Xu; Dewen Hu

FastICA is now a popular algorithm for independent component analysis (ICA) based on negentropy. However, the convergence of FastICA has not been comprehensively studied. This paper provides a global convergence analysis of FastICA and some practical considerations on algorithmic implementations. The exhaustive set of equilibria of the iteration is obtained first. Then the global convergence property is given for the 2-channel system with the cubic nonlinearity function, and the results can also be generalized to the multi-channel system. In addition, two practical considerations, namely the convergence threshold for the demixing matrix and the independence restriction on the sources, are evaluated and their influence on the separation solutions is illustrated.
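
The iteration being analysed is the one-unit FastICA fixed point with the cubic nonlinearity, w <- E[z (w^T z)^3] - 3w followed by normalisation; a minimal sketch on a toy two-channel mixture is given below, with an illustrative convergence threshold and demonstration mixing matrix.

```python
import numpy as np

# One-unit FastICA iteration with the cubic (kurtosis) nonlinearity:
#   w  <-  E[ z (w^T z)^3 ] - 3 w,   followed by normalisation.
# The mixing matrix, source model and threshold are for demonstration only.

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, (2, 5000))                 # two sub-Gaussian sources
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s        # mixed observations

# whiten the observations
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E / np.sqrt(d)).T @ x                        # z has identity covariance

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(100):
    w_new = (z * (w @ z) ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-9    # direction has stopped changing
    w = w_new
    if converged:
        break
print(w)
```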

- Statistical Neural Network Models and Support Vector Machines | Pp. 700-705

SVM Based Nonparametric Model Identification and Dynamic Model Control

Weimin Zhong; Daoying Pi; Youxian Sun

In this paper, a nonparametric model identification and dynamic matrix control (SVM_DMC) technique based on a support vector machine (SVM) with a linear kernel function is presented. First, a step response model involving the manipulated variables is obtained via system identification by the SVM with a linear kernel from random test data or manufacturing data. Second, an explicit control law for a receding-horizon quadratic objective is obtained through the predictive control mechanism. Finally, the approach is illustrated by a simulation of a system with dead-time delay. The results show that the SVM_DMC technique gives good predictive control performance and keeps to the reference trajectory well.

- Statistical Neural Network Models and Support Vector Machines | Pp. 706-709

Weighted On-line SVM Regression Algorithm and Its Application

Hui Wang; Daoying Pi; Youxian Sun

Based on the KKT conditions and the Lagrangian multiplier method, a weighted SVM regression model and its online training algorithm are developed. The standard SVM regression model treats every sample equally, with the same error requirement, which is not suitable when different samples contribute differently to the construction of the regression model. In the new weighted model, every training sample is given a weight coefficient to reflect the differences among samples. Moreover, the standard online training algorithm cannot remove redundant samples effectively, so a new method is presented to remove them. Simulation on a benchmark problem shows that the new algorithm can quickly and accurately approximate nonlinear and time-varying functions with less computer memory.
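
The weighting idea can be pictured in batch form with scikit-learn's per-sample weights, as in the sketch below; the exponential weighting of newer samples is an assumption, and the online, KKT-based update of the paper is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

# Batch illustration of per-sample weighting for SVR on a time-varying target.
# The exponential weighting scheme is an assumption; sample_weight plays the
# role of the per-sample coefficient in the weighted model.

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 300)
y = np.sin(t) * (1 + 0.1 * t) + 0.1 * rng.standard_normal(300)   # time-varying target
X = t[:, None]

weights = np.exp(0.3 * (t - t.max()))          # newer samples -> weight close to 1
model = SVR(C=10.0, epsilon=0.05).fit(X, y, sample_weight=weights)
y_hat = model.predict(X)
print(float(np.mean((y_hat - y) ** 2)))
```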

- Statistical Neural Network Models and Support Vector Machines | Pp. 716-719

Convergence of an Online Gradient Method for BP Neural Networks with Stochastic Inputs

Zhengxue Li; Wei Wu; Guorui Feng; Huifang Lu

An online gradient method for BP neural networks is presented and discussed. The input training examples are permuted stochastically in each cycle of iteration. Monotonicity and a weak convergence result of a deterministic nature are proved for the method.

- Other Topics in Neural Network Models | Pp. 720-729

Stochastic High-Order Hopfield Neural Networks

Yi Shen; Guoying Zhao; Minghui Jiang; Shigeng Hu

In 1984 Hopfield showed that the time evolution of a symmetric Hopfield neural network is a motion in state space that seeks out minima of the energy function (i.e., the equilibrium point set of the Hopfield neural network). High-order Hopfield neural networks have more extensive applications than Hopfield neural networks, and their convergence has also been discussed. In practice, a neural network is often subject to environmental noise. It is therefore useful and interesting to find out whether the high-order neural network system still approaches some limit set under stochastic perturbation. In this paper, we give a number of useful bounds on the noise intensity under which the stochastic high-order neural network approaches its limit set. Our result removes the requirement of symmetry of the connection weight matrix and includes the classic result on Hopfield neural networks, which are a special case of stochastic high-order Hopfield neural networks. Finally, an example is given to verify the effectiveness of our results.

- Other Topics in Neural Network Models | Pp. 740-749

An Efficient Score Function Generation Algorithm with Information Maximization

Woong Myung Kim; Hyon Soo Lee

In this study, we propose a new algorithm that generates the score function in ICA (Independent Component Analysis) using entropy theory. To generate the score function, an estimate of the probability density function of the original signals is necessary, and the density function should be differentiable. Therefore, we used a kernel density estimation method in order to derive the differential form of the score function from the original signals. After changing the formula to a convolution form to increase the speed of the density estimation, we used the FFT algorithm, which computes the convolution faster. The proposed score function generation method reduces the estimation error, i.e., the density difference between the recovered signals and the original signals. We also add a constraint that enables information maximization using smoothing parameters. In computer simulations on the blind source separation problem, we estimate a density function closer to that of the original signals than the Extended Infomax algorithm and Fixed-Point ICA, and obtain an improved SNR (Signal-to-Noise Ratio) between the recovered and original signals.

- Other Topics in Neural Network Models | Pp. 760-768

A New Criterion on Exponential Stability of a Class of Discrete Cellular Neural Networks with Time Delay

Fei Hao; Long Wang; Tianguang Chu

A new criterion for the exponential stability of the equilibrium point of a class of discrete cellular neural networks (CNNs) with delay is established by Lyapunov-Krasovskii function methods. The obtained result shows a relation between the delay time and the corresponding parameters of the network. A numerical example is given to illustrate the efficiency of the proposed approach.

- Other Topics in Neural Network Models | Pp. 769-772

A Novel Local Connection Neural Network

Shuang Cong; Guodong Li; Yisong Zheng

A new type of locally connected neural network is proposed in this paper. A so-called K-type activation function is used in its hidden layer, so that it requires less computation than other locally connected neural networks. First, the structure and algorithm of the proposed network are given. Then the function of the network and its properties are analyzed theoretically. The proposed network can be used in function approximation and modeling. Finally, numerical applications are used to verify the advantages of the proposed network compared with other locally connected neural networks.

- Other Topics in Neural Network Models | Pp. 773-777

An Unsupervised Cooperative Pattern Recognition Model to Identify Anomalous Massive SNMP Data Sending

Álvaro Herrero; Emilio Corchado; José Manuel Sáiz

In this paper, we review a visual approach and propose it for analysing computer-network activity, which is based on the use of unsupervised connectionist neural network models and does not rely on any previous knowledge of the data being analysed. The presented Intrusion Detection System (IDS) is used as a method to investigate the traffic which travels along the analysed network, detecting SNMP (Simple Network Management Protocol) anomalous traffic patterns. In this paper we have focused our attention on the study of anomalous situations generated by a MIB (Management Information Base) information transfer.

- Other Topics in Neural Network Models | Pp. 778-782

A Fast Nonseparable Wavelet Neural Network for Function Approximation

Jun Zhang; Xieping Gao; Chunhong Cao; Fen Xiao

In this paper, based on the theory of nonseparable wavelets, a novel nonseparable wavelet network model is proposed. The structure of the model is distinguished from that of the (RBF-structured) wavelet network: it is a four-layer structure, which helps overcome structural redundancy. During training, in light of the characteristics of nonseparable wavelets, a novel method of setting the initial weights is proposed; it overcomes the slow convergence of the gradient descent methodology. Some experiments with the novel model for function learning are shown. Compared with present wavelet networks and the BP network, the results in this paper show that the speed and generalization performance of the novel model are greatly improved.

- Other Topics in Neural Network Models | Pp. 783-788

ANN Ensemble Online Learning Strategy in 3D Object Cognition and Recognition Based on Similarity

Rui Nian; Guangrong Ji; Wencang Zhao; Chen Feng

In this paper, with the aid of an ANN ensemble, a supervised online learning strategy continuously accumulates omnidirectional information for 3D object cognition from 2D view sequences. The notion of similarity is introduced to resolve the paradox between information simplicity and accuracy. Images are segmented into homogeneous regions for training, corresponding to distinct model views characteristic of neighboring generalization. Real-time techniques are adopted to expand the knowledge until it is satisfactory, and insertion into the joint model views is only needed in case of impartibility. Simulation experiments have achieved encouraging results and proved the approach effective and feasible.

- Cognitive Science | Pp. 793-796

Comparison of Complexity and Regularity of ERP Recordings Between Single and Dual Tasks Using Sample Entropy Algorithm

Tao Zhang; Xiaojun Tang; Zhuo Yang

The purpose of this study is to investigate the application of sample entropy (SampEn) measures to electrophysiological studies of single- and dual-task performance. The complexity of short-duration (~s) epochs of EEG data was analysed using SampEn along with the surrogate technique. The individual tasks consisted of an auditory discrimination task and two motor tasks of varying difficulty; the dual-task conditions were combinations of one auditory and one motor task. EEG entropies were significantly lower in the dual tasks than in the single tasks. The results of this study demonstrate that entropy measurements can be a useful alternative, nonlinear approach to analyzing short-duration EEG signals on a time scale of seconds.
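
Sample entropy itself is straightforward to compute; a compact sketch is given below, using the common convention r = 0.2 times the signal's standard deviation, which may differ from the parameters used in the study.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal.

    r is taken as r_factor times the signal's standard deviation, a common
    convention; the epoch length and parameters in the study may differ.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(k):
        # templates of length k; the same n - m starting points are used for
        # k = m and k = m + 1 so the two counts are comparable
        templates = np.array([x[i:i + k] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(1000)))                    # irregular: higher SampEn
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))     # regular: lower SampEn
```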

- Cognitive Science | Pp. 806-810

Neural Network Based Emotion Estimation Using Heart Rate Variability and Skin Resistance

Sun K. Yoo; Chung K. Lee; Youn J. Park; Nam H. Kim; Byung C. Lee; Kee S. Jeong

In order to build a human-computer interface that is sensitive to a user's expressed emotion, we propose a neural-network-based emotion estimation algorithm using heart rate variability (HRV) and galvanic skin response (GSR). In this study, video clips were used to elicit basic emotions from subjects while electrocardiogram (ECG) and GSR signals were measured; these signals reflect the influence of emotion on the autonomic nervous system (ANS). Features carrying emotion-specific characteristics are extracted from the signals and fed to an artificial neural network in order to recognize emotions from newly collected signals. Results show that the proposed method is able to accurately distinguish a user's emotion.
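
As a concrete illustration of the kind of HRV features that can be derived from the RR intervals of an ECG, the sketch below computes a few standard time-domain measures; which features the paper actually feeds to its network is not stated above, so the selection here is an assumption.

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Standard time-domain HRV features from RR intervals in milliseconds
    (illustrative choice; not necessarily the paper's feature set)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),                 # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),   # short-term variability
        "pnn50": np.mean(np.abs(diff) > 50.0),  # fraction of successive
                                                # differences above 50 ms
    }
```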

- Cognitive Science | Pp. 818-824

PENCIL: A Framework for Expressing Free-Hand Sketching in 3D

Zhan Ding; Sanyuan Zhang; Wei Peng; Xiuzi Ye; Huaqiang Hu

This paper presents a framework for expressing free-hand sketching in 3D for conceptual design input. In the framework, sketch outlines are first recognized as formal rigid shapes. Then, under the control of a group of gestures and DFAs (deterministic finite automata), the framework can express the user's free sketching intent. Based on this framework, we implemented a sketch-based 3D prototype system supporting conceptual design. Users can easily and rapidly create 3D objects such as hexahedra, spheres, cones, extrusions, swept bodies, revolved bodies, lofted bodies and their assemblies by sketching and gestures.

- Cognitive Science | Pp. 835-838

A Computation Model of Korean Lexical Processing

Hyungwook Yim; Heuseok Lim; Kinam Park; Kichun Nam

This study simulates a lexical decision task in Korean using a feed-forward neural network model with a back-propagation learning rule. Reaction time is substituted by an entropy-based value called 'semantic stress'. The model demonstrates the frequency effect, the lexical status effect and the non-word legality effect, suggesting that lexical decisions are made within a structure of orthographic and semantic features. The test implies that orthographic and semantic features can be automatically applied to lexical information processing.
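
One common way to define such an entropy-based 'semantic stress' value, which the paper may or may not follow exactly, measures how far each semantic output unit's activation a_j sits from the uncommitted value 0.5 (one minus the binary entropy of a_j, averaged over the n semantic units):

```latex
S \;=\; \frac{1}{n}\sum_{j=1}^{n}\Bigl[\,a_j \log_2 a_j \;+\; (1 - a_j)\log_2(1 - a_j) \;+\; 1\,\Bigr]
```

Under this reading, S is 0 when every unit sits at 0.5 and approaches 1 as activations commit to 0 or 1, so lower stress can stand in for slower, less certain lexical decisions.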

- Cognitive Science | Pp. 844-849

Cooperative Aspects of Selective Attention

KangWoo Lee

This paper investigates the cooperative aspects of selective attention, in which primary (bottom-up) information is dynamically integrated with secondary (top-down, or context) information from different channels, and in which the secondary information provides a criterion for choosing among many target candidates. We present a computational model of selective attention that implements these cooperative behaviors. Simulation results, obtained using still and video images, are presented, showing interesting properties of the model that are not captured by the competitive aspects of selective attention alone.

- Cognitive Science | Pp. 855-866

Visual Search for Object Features

Predrag Neskovic; Leon N Cooper

In this work we present a computational algorithm that combines perceptual and cognitive information during the visual search for object features. The algorithm is initially driven purely by bottom-up information, but during the recognition process it becomes increasingly constrained by top-down information. Furthermore, we propose a concrete model for integrating information from successive saccades and demonstrate the necessity of using two coordinate systems for measuring feature locations. Across saccades, the network uses an object-based coordinate system, while during a fixation it uses a retinal coordinate system tied to the location of the fixation point. The only information the network stores during saccadic exploration is the identity of the features on which it has fixated and their locations with respect to the object-centered system.

- Cognitive Science | Pp. 877-887

Dynamic Inputs and Attraction Force Analysis for Visual Invariance and Transformation Estimation

Tomás Maul; Sapiyan Baba; Azwina Yusof

This paper aims to tackle two fundamental problems faced by multiple object recognition systems: invariance and transformation estimation. A neural normalization approach is adopted, which allows for the subsequent incorporation of invariant features. Two new approaches are introduced: dynamic inputs (DI) and attraction force analysis (AFA). The DI concept refers to a cloud of inputs that is allowed to change its configuration in order to latch onto objects thus creating object-based reference frames. AFA is used in order to provide clouds with transformation estimations thus maximizing the efficiency with which they can latch onto objects. AFA analyzes the length and angular properties of the correspondences that are found between stored-patterns and the information conveyed by clouds. The solution provides significant invariance and useful estimations pertaining to translation, scale, rotation and combinations of these. The estimations provided are also considerably resistant to other factors such as deformation, noise, occlusion and clutter.

- Cognitive Science | Pp. 893-902

Robust Face Recognition from One Training Sample per Person

Weihong Deng; Jiani Hu; Jun Guo

This paper proposes a Gabor-based PCA method using the Whiten Cosine Similarity Measure (WCSM) for face recognition from one training sample per person. A Gabor wavelet representation of the face images first derives desirable features that are robust to variations due to illumination and facial expression changes. PCA is then employed to reduce the dimensionality of the Gabor features. Finally, the Whiten Cosine Similarity Measure is proposed for classification, integrating the virtues of the whitening transformation and the cosine similarity measure. The effectiveness and robustness of the proposed method are successfully tested on the CAS-PEAL dataset using one training sample per person, which contains 6609 frontal images of 1040 subjects. The performance enhancement brought by the Gabor-based PCA features and WCSM is shown in terms of comparative performance against plain PCA features, the Mahalanobis distance and the Euclidean distance. In particular, the proposed method achieves much higher accuracy than the standard Eigenface technique in our large-scale experiment.
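
As a sketch of the general idea behind a whitened cosine similarity (a generic formulation under stated assumptions; the paper's exact normalization is not reproduced here), the measure is the cosine of the angle between two feature vectors after each retained principal direction has been rescaled to unit variance:

```python
import numpy as np

def whitened_cosine_similarity(x, y, eigvecs, eigvals):
    """Cosine similarity in the whitened PCA subspace (minimal sketch).

    eigvecs: (d, k) matrix of the leading PCA eigenvectors.
    eigvals: (k,) vector of the corresponding eigenvalues.
    """
    # Whitening transform: project onto the eigenvectors, then rescale
    # each principal direction to unit variance.
    W = np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    u, v = W @ x, W @ y
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```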

- Cognitive Science | Pp. 915-924

Modeling Human Learning as Context Dependent Knowledge Utility Optimization

Toshihiko Matsuka

Humans have the ability to flexibly adjust their information-processing strategy according to situational characteristics. However, such ability has been largely overlooked in computational modeling research on higher-order human cognition, particularly learning. The present work introduces a framework of cognitive models of human learning that takes contextual factors into account. The framework assumes that human learning is not strictly error minimization, but optimization of knowledge. A simulation study was conducted and showed that the framework successfully replicates observed psychological phenomena.

- Cognitive Science | Pp. 933-946

Automatic Text Summarization Based on Lexical Chains

Yanmin Chen; Xiaolong Wang; Yi Guan

The method of lexical chains is introduced for the first time to generate summaries from Chinese texts. The algorithm that computes lexical chains, based on the HowNet knowledge database, is modified to improve performance and suit Chinese summarization. Moreover, the construction rules for lexical chains are extended, and relationships among more lexical items are used. The algorithm first constructs lexical chains, then identifies strong chains and extracts significant sentences from the text to generate the summary. Evaluation results show that the performance of the system improves notably in both precision and recall compared with the original system.

- Cognitive Science | Pp. 947-951

A General fMRI Linear Convolution Model Based Dynamic Characteristic

Hong Yuan; Hong Li; Zhijie Zhang; Jiang Qiu

The general linear model (GLM) is one of the most popular methods for functional magnetic resonance imaging (fMRI) data analysis. The key to this model is how to construct the design matrix so that the effects of interest are modeled well and noise is separated out. In this paper, a new general linear convolution model is proposed by introducing a dynamic characteristic function as the hemodynamic response function for processing fMRI data. The method forms the design matrix by convolving the new dynamic function with the stimulus pattern in order to detect brain activation signals. The efficiency of the new method is confirmed by its application to real fMRI data: the tests showed that the activated areas evoked by visual stimuli lie mainly in the primary visual cortex.
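
To make the design-matrix construction concrete: in the standard GLM, each regressor is the stimulus timing convolved with a hemodynamic response function, and the model Y = X beta + noise is then fit by least squares. The sketch below takes the HRF as an argument because the paper substitutes its own dynamic characteristic function; the function names are illustrative.

```python
import numpy as np

def glm_design_column(stimulus, hrf):
    """One design-matrix regressor: the stimulus pattern (one value per
    scan) convolved with a sampled hemodynamic response function."""
    return np.convolve(stimulus, hrf)[: len(stimulus)]

def glm_fit(Y, X):
    """Ordinary least-squares estimate of beta in Y = X @ beta + noise."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta
```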

- Cognitive Science | Pp. 952-955

A KNN-Based Learning Method for Biology Species Categorization

Yan Dang; Yulei Zhang; Dongmo Zhang; Liping Zhao

This paper presents a novel approach to high-precision biological species categorization based mainly on the KNN algorithm. KNN has been used successfully in natural language processing (NLP); our work extends the learning method to biological data by viewing the DNA or RNA sequences of a species as special natural-language texts. The approach for constructing composition vectors of DNA and RNA sequences is described, a learning method based on the KNN algorithm is proposed, and an experimental system for species categorization is implemented. Forty-three different bacterial organisms selected randomly from EMBL are used for evaluation, and the preliminary experiments show promising precision.
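
A minimal sketch of the pipeline described above, assuming a plain k-mer frequency composition vector and Euclidean-distance KNN (the paper's exact composition-vector construction, e.g. any background-model correction, is not reproduced here):

```python
from collections import Counter
from itertools import product

import numpy as np

def composition_vector(seq, k=3):
    """k-mer frequency vector of a DNA/RNA sequence (T used for U)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    v = np.array([counts.get(km, 0) for km in kmers], dtype=float)
    return v / max(v.sum(), 1.0)

def knn_predict(query_vec, train_vecs, train_labels, k=5):
    """Plain KNN vote over Euclidean distances between composition vectors."""
    d = np.linalg.norm(train_vecs - query_vec, axis=1)
    nearest = np.argsort(d)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```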

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 956-964

Nonlinear Kernel MSE Methods for Cancer Classification

L. Shen; E. C. Tan

Combining kernel PLS (KPLS) and kernel SVD (KSVD) with the minimum-squared-error (MSE) criterion yields new machine learning methods for cancer classification, which have been successfully applied to seven publicly available cancer datasets. Besides the high accuracy of the new methods, very fast training is also obtained because the matrix inversion in the original MSE procedure is avoided. Although the KPLS-MSE and KSVD-MSE methods have equivalent accuracies, KPLS achieves the same results using significantly fewer, but more informative, components.

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 975-984

A New Algorithm of Multi-modality Medical Image Fusion Based on Pulse-Coupled Neural Networks

Wei Li; Xue-feng Zhu

In this paper, a new multi-modality medical image fusion algorithm based on pulse-coupled neural networks (PCNN) is presented. First, a multi-scale decomposition is performed on each source image; the PCNN is then used to combine the decomposition coefficients, and finally an inverse multi-scale transform is applied to the new fused coefficients to reconstruct the fused image. The algorithm exploits global features of the source images because the PCNN has global coupling and pulse-synchronization characteristics. A series of experiments is performed on multi-modality medical image fusion, such as CT/MRI, CT/SPECT and MRI/PET. The experimental results show that the new algorithm is very effective and performs well in fusing multi-modality medical images.

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 995-1001

Prediction Rule Generation of MHC Class I Binding Peptides Using ANN and GA

Yeon-Jin Cho; Hyeoncheol Kim; Heung-Bum Oh

A new method is proposed for generating rules to predict peptide binding to class I MHC proteins from the amino acid sequence of any protein with known binders and non-binders. We present an approach based on artificial neural networks (ANN) and a knowledge-based genetic algorithm (KBGA): rules are extracted from a trained neural network and then enhanced by genetic evolution. Experimental results show that the method can generate new rules for predicting MHC class I binding peptides.

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 1009-1016

Automatic Liver Segmentation of Contrast Enhanced CT Images Based on Histogram Processing

Kyung-Sik Seo; Hyung-Bum Kim; Taesu Park; Pan-Koo Kim; Jong-An Park

Pixel values of contrast-enhanced computed tomography (CE-CT) images vary randomly. In addition, in the middle part of the liver it is difficult to segregate the liver structure because neighbouring abdominal organs have similar gray-level values. In this paper, an automatic liver segmentation method using histogram processing is proposed to overcome the randomness of CE-CT images and to remove other abdominal organs. Forty CE-CT slices from ten patients were selected to evaluate the proposed method, using the normalized average area and the area error rate as evaluation measures. The experiments show that liver segmentation using histogram processing performs similarly to manual segmentation by a medical doctor.

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 1027-1030

Similarity Analysis of DNA Sequences Based on the Relative Entropy

Wenlu Yang; Xiongjun Pi; Liqing Zhang

This paper investigates the similarity of two sequences, one of the main issues in fragment clustering and classification when sequencing the genomes of microbial communities sampled directly from the natural environment. We use the relative entropy as a criterion of similarity between two sequences and discuss its characteristics for DNA sequences. A method for evaluating the relative entropy is presented and applied to pairwise sequence comparison. By combining the relative entropy with the variable lengths defined in this paper, the similarity of sequences is easily obtained. SOM and PCA are applied to cluster subsequences from different genomes, and computer simulations verify that the method works well.
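
For reference, the relative entropy (Kullback-Leibler divergence) between the word-frequency distributions P and Q of two sequences, together with a symmetrized variant often used as a distance-like quantity (whether the paper symmetrizes it is not stated above), is:

```latex
D(P \,\|\, Q) \;=\; \sum_{w} P(w)\,\log\frac{P(w)}{Q(w)},
\qquad
d_{\mathrm{sym}}(P, Q) \;=\; D(P \,\|\, Q) \;+\; D(Q \,\|\, P).
```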

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 1035-1038

Blind Clustering of DNA Fragments Based on Kullback-Leibler Divergence

Xiongjun Pi; Wenlu Yang; Liqing Zhang

In whole-genome shotgun sequencing, when DNA fragments are derived from thousands of microorganisms in an environmental sample, traditional alignment methods are impractical because of their high computational complexity. In this paper, we take as the feature vector the divergence vector, which consists of Kullback-Leibler divergences computed for different word lengths. Based on this, we use a BP neural network to identify whether two fragments come from the same microorganism and to obtain the similarity between fragments. Finally, we develop a novel method to cluster DNA fragments from different microorganisms into different groups. Experiments show that it performs well.

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 1043-1046

Neuroinformatics Research in China - Current Status and Future Research Activities

Guang Li; Jing Zhang; Faji Gu; Ling Yin; Yiyuan Tang; Xiaowei Tang

After the Chinese National Neuroinformatics Working Group was formed in 2001, neuroinformatics research progressed rapidly in China. This paper reviews the history of neuroinformatics in China, reports on current research and discusses recent trends.

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 1052-1056

Current Status and Future Research Activities in Clinical Neuroinformatics: Singaporean Perspective

Wieslaw L. Nowinski

The Biomedical Imaging Lab in Singapore has been involved in neuroinformatics research for more than a decade. We are focused on clinical neuroinformatics, developing suitable models, tools, and databases. We report here our work on construction of anatomical, vascular, and functional brain atlases as well as development of atlas-assisted neuroscience education, research, and clinical applications. We also present future research activities.

- Neuroscience Informatics, Bioinformatics, and Bio-medical Engineering | Pp. 1065-1073

Optimal TDMA Frame Scheduling in Broadcasting Packet Radio Networks Using a Gradual Noisy Chaotic Neural Network

Haixiang Shi; Lipo Wang

In this paper, we propose a novel approach, the gradual noisy chaotic neural network (G-NCNN), to find a collision-free time slot schedule in a time division multiple access (TDMA) frame in a packet radio network (PRN). In order to minimize the average time delay of the network, we aim to find an optimal schedule that has the minimum frame length and provides the maximum channel utilization. The proposed two-phase neural network approach uses two different energy functions, with which the G-NCNN finds the minimal TDMA frame length in the first phase and the NCNN maximizes the node transmissions in the second phase. Numerical examples and comparisons with previous methods show that the proposed method finds better solutions than previous algorithms. Furthermore, to show the difference between the proposed method and the hybrid of the Hopfield neural network and genetic algorithms, we perform a paired t-test between the two and show that the G-NCNN achieves significant improvements.

- Neural Network Applications: Communications and Computer Networks | Pp. 1080-1089

A Fast Online SVM Algorithm for Variable-Step CDMA Power Control

Yu Zhao; Hongsheng Xi; Zilei Wang

This paper presents a fast online support vector machine (FOSVM) algorithm for variable-step CDMA power control. The FOSVM algorithm distinguishes newly added samples and constructs the current training sample set using the Karush-Kuhn-Tucker (KKT) conditions in order to reduce the number of training samples; as a result, the training speed is effectively increased. We classify the received signals into two classes with the FOSVM algorithm, and then, according to the output label of the FOSVM and the distance from the data points to the SIR decision boundary, the variable-step power control command is determined. Simulation results show that the algorithm trains quickly and uses fewer support vectors, and that its convergence performance is better than that of the fixed-step power control algorithm.

- Neural Network Applications: Communications and Computer Networks | Pp. 1090-1099

Fourth-Order Cumulants and Neural Network Approach for Robust Blind Channel Equalization

Soowhan Han; Kwangeui Lee; Jongkeuk Lee; Fredric M. Ham

This study addresses a new blind channel equalization method using fourth-order cumulants of the channel inputs and a three-layer neural network equalizer. The proposed algorithm is robust to heavy Gaussian noise in the channel and does not require the channel to be minimum-phase. The signals at the receiver are over-sampled to ensure that the channel is described by a full column-rank matrix, which converts a single-input/single-output (SISO) finite-impulse-response (FIR) channel into a single-input/multi-output (SIMO) channel. Based on the properties of the fourth-order cumulants of the over-sampled channel inputs, an iterative algorithm is derived to estimate the deconvolution matrix that makes the overall transfer matrix transparent, i.e., reducible to the identity matrix by simple reordering and scaling. Using this estimated deconvolution matrix, which is the inverse of the over-sampled unknown channel, a three-layer neural network equalizer is implemented at the receiver. In simulation studies, the stochastic version of the proposed algorithm is tested with three-ray multi-path channels for online operation, and its performance is compared with a method based on conventional second-order statistics. Relatively good results, with fast convergence, are achieved even when the transmitted symbols are significantly corrupted by Gaussian noise.
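
For reference, the fourth-order auto-cumulant of a zero-mean signal, which (unlike second-order statistics) vanishes for Gaussian noise, takes the standard forms below; the paper's algorithm builds on cross-cumulants of the over-sampled channel inputs, which generalize these expressions.

```latex
\text{real } x:\quad c_4(x) \;=\; \mathrm{E}[x^4] \;-\; 3\bigl(\mathrm{E}[x^2]\bigr)^2,
\qquad
\text{complex } x:\quad c_4(x) \;=\; \mathrm{E}\bigl[|x|^4\bigr] \;-\; 2\bigl(\mathrm{E}[|x|^2]\bigr)^2 \;-\; \bigl|\mathrm{E}[x^2]\bigr|^2 .
```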

- Neural Network Applications: Communications and Computer Networks | Pp. 1100-1112

Equalization of a Wireless ATM Channel with Simplified Complex Bilinear Recurrent Neural Network

Dong Chul-Park; Duc-Hoai Nguyen; Sang Jeen Hong; Yunsik Lee

A new equalization method for a wireless ATM communication channel using a simplified version of the complex bilinear recurrent neural network (S-CBLRNN) is proposed in this paper. The S-CBLRNN is applied to the equalization of a wireless ATM channel for 8PSK and 16QAM. The results show that the proposed S-CBLRNN converges about 40% faster than the CBLRNN and gives very favorable results on both the MSE and SER criteria compared with the other equalizers.

- Neural Network Applications: Communications and Computer Networks | Pp. 1113-1116

A Novel Remote User Authentication Scheme Using Interacting Neural Network

Tieming Chen; Jiamei Cai

Recently, interacting neural networks have been studied, yielding the novel result that two such networks, given the same inputs, can synchronize to an identical stationary weight state. In this paper, a simple but novel authentication scheme based on interacting neural networks is proposed, which provides fully dynamic and secure remote user authentication over a completely insecure communication channel.
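
The best-known formulation of interacting neural networks is the tree parity machine (TPM) mutual-learning model; whether the paper uses exactly this architecture is an assumption, but it illustrates the synchronization idea: two machines fed identical random inputs update their weights only when their outputs agree, and their weight matrices converge to a common state that can serve as a shared secret for authentication.

```python
import numpy as np

class TreeParityMachine:
    """Tree parity machine used in interacting-neural-network schemes
    (a generic formulation, not necessarily the paper's exact network)."""

    def __init__(self, k=3, n=4, l=3, rng=None):
        self.k, self.n, self.l = k, n, l
        self.rng = rng or np.random.default_rng()
        self.w = self.rng.integers(-l, l + 1, size=(k, n))
        self.sigma = np.ones(k, dtype=int)

    def output(self, x):
        """Overall output tau for a +/-1 input matrix x of shape (k, n)."""
        self.sigma = np.sign(np.sum(self.w * x, axis=1)).astype(int)
        self.sigma[self.sigma == 0] = -1
        return int(np.prod(self.sigma))

    def update(self, x, tau_other):
        """Hebbian update, applied only when both machines' outputs agree
        (call output() on the same x first)."""
        tau = int(np.prod(self.sigma))
        if tau != tau_other:
            return
        for i in range(self.k):
            if self.sigma[i] == tau:
                self.w[i] = np.clip(self.w[i] + tau * x[i], -self.l, self.l)
```

In a sketch of such a protocol, client and server would each run one machine over a public stream of random inputs and exchanged output bits, then derive the session or authentication key from the synchronized weights.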

- Neural Network Applications: Communications and Computer Networks | Pp. 1117-1120

Genetic Algorithm Simulated Annealing Based Clustering Strategy in MANET

Xu Li

A MANET (Mobile Ad Hoc Network) is a collection of wireless mobile nodes forming a temporary communication network without the aid of any established infrastructure or centralized administration. A MANET is characterized by both a highly dynamic network topology and limited energy, which makes its efficiency depend not only on its control protocol but also on its topology and energy management. A clustering strategy can improve flexibility and scalability in network management. Using a graph-theoretic model and a hybrid genetic-annealing optimization algorithm, this paper proposes a new clustering strategy named GASA (Genetic Algorithm Simulated Annealing). Simulation indicates that this strategy achieves lower clustering cost and a dynamic balance of topology and load across the whole network, thereby prolonging the network lifetime.

- Neural Network Applications: Communications and Computer Networks | Pp. 1121-1131

An Improved Method of Feature Selection Based on Concept Attributes in Text Classification

Shasha Liao; Minghu Jiang

Feature selection and weighting are two important parts of automatic text classification. In this paper we propose a new method based on concept attributes. We use the terms of each Chinese word to extract concept attributes, and a Concept Tree (C-Tree) to assign these attributes proper weights according to their positions in the C-Tree, since this information describes the expressive power of the attributes. If the attributes are too weak to sustain the main meaning of a word, they are discarded and the original word is retained; otherwise, the attributes are selected instead of the original word. Our main aim is to balance concept features against word features by setting a shielded level as the threshold for feature selection after weighting. The experimental results show that the combined feature set carries enough information for classification and efficiently reduces useless features and noise. In our experiments, the feature dimension is reduced to a much smaller space and the categorization precision is much better than with word-based selection methods. By choosing different shielded levels, we finally select the best one, with an average categorization precision of 93.7%. We also find that the precision differences between categories are smaller when the combined features are used.

- Neural Network Applications: Expert System and Informatics | Pp. 1140-1149

Process Control and Management of Etching Process Using Data Mining with Quality Indexes

Hyeon Bae; Sungshin Kim; Kwang Bang Woo

As argued in this paper, a decision support system based on data mining and knowledge discovery is an important factor in improving productivity and yield. The proposed decision support system consists of a neural network model and a fuzzy-logic inference system. First, product results are predicted by a neural network model built from the quality indexes that represent the quality of the etching process, and the quality indexes are classified according to an expert's knowledge. Finally, the product conditions are estimated by the fuzzy inference system using rules extracted from the classified patterns. We employ data mining and intelligent techniques to find the best conditions for the etching process. The proposed decision support system is efficient and easy to implement for process management based on expert knowledge.

- Neural Network Applications: Expert System and Informatics | Pp. 1160-1169

An Improved Information Retrieval Method and Input Device Using Gloves for Wearable Computers

Jeong-Hoon Shin; Kwang-Seok Hong

In this paper, we describe a glove-based information retrieval method and input device for wearable computers. We suggest an easy and effective alphanumeric input algorithm using gloves and conduct an efficiency test. The key to the proposed device is its unique operator-to-key mapping, key-to-symbol mapping and simple algorithm. We list and discuss traditional glove-based algorithms and methods, then describe the newly proposed, improved algorithm. An efficiency test was conducted and the results were compared with other glove-based devices and algorithms for wearable computers.

- Neural Network Applications: Expert System and Informatics | Pp. 1179-1184

A Self-organized Network for Data Clustering

Liang Zhao; Antonio P. G. Damiance; Andre C. P. L. F. Carvalho

In this paper, a dynamical model for data clustering is proposed. The approach employs a network of interacting elements, each representing an attribute vector of the input data and receiving attractions from other elements within a certain region. These attractions, determined by a predefined similarity measure, drive the elements to converge to their corresponding cluster centers. With this model, neither the number of clusters nor an initial guess of the cluster centers is required. Computer simulations on clustering real images and the Iris data set are performed, and the results obtained so far are very promising.

- Neural Network Applications: Expert System and Informatics | Pp. 1189-1198

Complexity of Linear Cellular Automata over ℤ

Xiaogang Jin; Weihong Wang

A cellular automaton (CA) is not only a discrete dynamical system of infinite dimension but also an important computational model. How simple can a CA be and yet support interesting and complicated behavior? There are many unsolved problems in the theory of CA, which attract many researchers to the field, especially to the subclass of linear CA. These studies cover the topological properties, chaotic properties, invertibility, attractors and classification of linear CA. This is a survey of known results and open questions concerning D-dimensional linear CA over ℤ.
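
To fix ideas, a one-dimensional linear CA over the integers modulo m updates every cell simultaneously as a fixed linear combination of its neighbourhood, taken modulo m. The sketch below performs one synchronous step with cyclic boundary conditions; the coefficient vector passed in is illustrative, not taken from the survey.

```python
import numpy as np

def linear_ca_step(state, coeffs, m):
    """One step of a 1-D linear CA over Z_m with cyclic boundary:
    x_i(t+1) = sum_j c_j * x_(i+j)(t)  (mod m), j centred on the cell."""
    n = len(state)
    radius = len(coeffs) // 2
    new = np.zeros(n, dtype=int)
    for i in range(n):
        s = 0
        for j, c in enumerate(coeffs):
            s += c * state[(i + j - radius) % n]
        new[i] = s % m
    return new

# Example: x_i(t+1) = x_(i-1) + x_(i+1) (mod 2), i.e. the rule-90 CA.
print(linear_ca_step([0, 0, 1, 0, 0], coeffs=[1, 0, 1], m=2))
```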

- Neural Network Applications: Expert System and Informatics | Pp. 1209-1213

Applications of Genetic Algorithm for Artificial Neural Network Model Discovery and Performance Surface Optimization in Finance

Serge Hayward

This paper considers the design framework of a computational experiment in finance. An examination of the relationships between the statistics used to evaluate economic forecasts and the profitability of investment decisions reveals that only the 'degree of improvement over efficient prediction' shows robust links with profitability. When profits are not observable, this measure is proposed as an evaluation criterion for economic predictions. Combined with directional accuracy, it could be used in an estimation technique for economic behavior, as an alternative to conventional least squares. Model discovery and performance surface optimization with a genetic algorithm demonstrate improved profitability, with an inconclusive effect on statistical criteria.

- Neural Network Applications: Financial Engineering | Pp. 1214-1223

The Application of Structured Feedforward Neural Networks to the Modelling of the Daily Series of Currency in Circulation

Marek Hlaváček; Josef Čada; František Hakl

One of the most significant factors influencing the liquidity of financial markets is the amount of currency in circulation. Even though the central bank is responsible for distributing the currency, it cannot directly assess the demand for it, since demand is driven by the non-banking sector; the amount of currency in circulation therefore has to be forecast. This paper introduces a structured feedforward neural network model and discusses its applicability to forecasting the currency in circulation. The forecasting performance of the new neural network model is compared with an ARIMA model. The results indicate that the neural network performs slightly better and that both models could be applied, at least as supportive tools, for liquidity forecasting.

- Neural Network Applications: Financial Engineering | Pp. 1234-1246

The Prediction of the Financial Time Series Based on Correlation Dimension

Chen Feng; Guangrong Ji; Wencang Zhao; Rui Nian

In this paper we first analyse the chaotic characteristics of three financial time series (the Hang Seng Index (HSI), the Shanghai Stock Index and the US gold price) based on phase space reconstruction. When we adopted feedforward neural networks to predict these series, we found that the method lacked a criterion for selecting the training set, so we present a new method that uses the correlation dimension (CD) as that criterion. Experiments show the method to be effective.
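
For reference, the correlation dimension is usually estimated with the Grassberger-Procaccia procedure on the delay-embedded points of the reconstructed phase space: compute the correlation sum C(r) over a range of radii and take the slope of log C(r) versus log r in the scaling region. A minimal Python sketch (the embedding and the radius range are left to the caller; they are not parameter values taken from the paper):

```python
import numpy as np

def correlation_sum(points, r):
    """Correlation sum C(r) over delay-embedded points: the fraction of
    point pairs closer than r."""
    n = len(points)
    count = 0
    for i in range(n - 1):
        d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
        count += np.sum(d < r)
    return 2.0 * count / (n * (n - 1))

def correlation_dimension(points, radii):
    """Least-squares slope of log C(r) versus log r (minimal estimate)."""
    radii = np.asarray(radii, dtype=float)
    c = np.array([correlation_sum(points, r) for r in radii])
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope
```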

- Neural Network Applications: Financial Engineering | Pp. 1256-1265

Toward Global Optimization of ANN Supported by Instance Selection for Financial Forecasting

Sehun Lim

Artificial neural networks (ANN) are widely used in business forecasting, but often perform poorly on noisy data. Many techniques have been developed to improve ANN outcomes, such as additional algorithms, feature selection and feature weighting of the input variables, and modification of the input cases using instance selection. This paper proposes a Euclidean-distance-matrix approach to instance selection in ANN for financial forecasting. The approach optimizes the selection of relevant instances and improves prediction performance. ANN is applied to the problem of forecasting demand for corporate insurance, and the forecasting performance of two models is compared: a plain ANN and ISANN (an ANN using instance selection supported by Euclidean distance metrics). ISANN gives the best forecasting performance for corporate insurance demand.
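
A hypothetical sketch of what an instance-selection step built on a Euclidean distance matrix could look like; the keep-ratio and the 'discard the instances farthest from the rest' heuristic are assumptions for illustration, not the paper's actual selection rule.

```python
import numpy as np

def select_instances(X, keep_ratio=0.7):
    """Return indices of retained training instances.

    Builds the full Euclidean distance matrix and keeps the instances
    whose mean distance to all others is smallest, discarding likely
    outliers/noisy cases before training the ANN.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    mean_dist = d.mean(axis=1)
    k = int(len(X) * keep_ratio)
    return np.argsort(mean_dist)[:k]
```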

- Neural Network Applications: Financial Engineering | Pp. 1270-1274

Data Clustering with a Neuro-immune Network

Helder Knidel; Leandro Nunes de Castro; Fernando J. Von Zuben

This paper proposes a novel constructive learning algorithm for a competitive neural network. The proposed algorithm is developed by taking ideas from the immune system and demonstrates robustness for data clustering in the initial experiments reported here for three benchmark problems. Comparisons with results from the literature are also provided. To automatically segment the resultant neurons at the output, a tool from graph theory was used with promising results. A brief sensitivity analysis of the algorithm was performed in order to investigate the influence of the main user-defined parameters on the learning speed and accuracy of the results presented. General discussions and avenues for future works are also provided.

- Neural Network Applications: Financial Engineering | Pp. 1279-1288

Information

Type: books

Print ISBN

978-3-540-28323-2

Electronic ISBN

978-3-540-31853-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date