Publications catalog – books
Advances in Natural Computation: 1st International Conference, ICNC 2005, Changsha, China, August 27-29, 2005, Proceedings, Part I
Lipo Wang ; Ke Chen ; Yew Soon Ong (eds.)
Conference: 1st International Conference on Natural Computation (ICNC). Changsha, China. August 27-29, 2005
Abstract/Description – provided by the publisher
Not available.
Keywords – provided by the publisher
Artificial Intelligence (incl. Robotics); Image Processing and Computer Vision; Computation by Abstract Devices; Algorithm Analysis and Problem Complexity; Pattern Recognition; Evolutionary Biology
Availability
Detected institution | Publication year | Browse | Download | Request |
---|---|---|---|---|
Not detected | 2005 | SpringerLink | | |
Information
Resource type:
books
Print ISBN
978-3-540-28323-2
Electronic ISBN
978-3-540-31853-8
Publisher
Springer Nature
Country of publication
United Kingdom
Publication date
2005
Publication rights information
© Springer-Verlag Berlin Heidelberg 2005
Table of contents
doi: 10.1007/11539087_17
Generate Different Neural Networks by Negative Correlation Learning
Yong Liu
This paper describes two methods for generating different neural networks in an ensemble. One is based on negative correlation learning; the other is based on cross-validation combined with negative correlation learning, i.e., bagging with negative correlation learning. In negative correlation learning, all individual networks are trained simultaneously on the same training set. In bagging with negative correlation learning, each individual network is trained on a different data set sampled with replacement from the training set. The performance and correct response sets of the two learning methods are compared. The purpose of this paper is to find out how to design more effective neural network ensembles.
- Neural Network Learning Algorithms | Pp. 149-156
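The negative correlation penalty above can be sketched in a few lines. Below is a toy NumPy illustration (not the paper's code), assuming squared error, one fixed random tanh feature layer per ensemble member as a stand-in for a trained network, and the usual NCL gradient (F_i − y) − λ(F_i − F̄); all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus noise.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Each ensemble member: fixed random tanh features + trainable linear readout.
M, H, lam, lr = 4, 20, 0.5, 0.05
W = [rng.normal(size=(1, H)) for _ in range(M)]
b = [rng.normal(size=H) for _ in range(M)]
v = [np.zeros(H) for _ in range(M)]

def hidden(i):
    return np.tanh(X @ W[i] + b[i])              # (200, H) activations

for _ in range(500):
    F = np.stack([hidden(i) @ v[i] for i in range(M)])   # member outputs
    Fbar = F.mean(axis=0)                                # ensemble output
    for i in range(M):
        # NCL gradient: squared error plus the negative correlation
        # penalty p_i = -(F_i - Fbar)^2, giving (F_i - y) - lam*(F_i - Fbar).
        g = (F[i] - y) - lam * (F[i] - Fbar)
        v[i] -= lr * hidden(i).T @ g / len(y)

F = np.stack([hidden(i) @ v[i] for i in range(M)])
mse = np.mean((F.mean(axis=0) - y) ** 2)
print("ensemble MSE:", mse)
```

Note that averaging the NCL fixed-point conditions over members gives F̄ → y, so the penalty decorrelates members without biasing the ensemble output.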
doi: 10.1007/11539087_19
Learning Outliers to Refine a Corpus for Chinese Webpage Categorization
Dingsheng Luo; Xinhao Wang; Xihong Wu; Huisheng Chi
Webpage categorization has become an important topic in recent years. In a webpage, text is usually the main content, so automatic text categorization (ATC) becomes the key technique for this task. For Chinese text categorization, as for Chinese webpage categorization, one of the basic and urgent problems is the construction of a good benchmark corpus. In this study, a machine learning approach is presented to refine a corpus for Chinese webpage categorization, where the AdaBoost algorithm is adopted to identify outliers in the corpus. The standard k-nearest neighbor (kNN) algorithm under a vector space model (VSM) is adopted to construct a webpage categorization system. Simulation results, as well as manual inspection of the identified outliers, show that the presented method works well.
- Neural Network Learning Algorithms | Pp. 167-178
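The outlier-identification idea — samples that AdaBoost keeps re-weighting upward are likely mislabeled — can be sketched on toy data. Everything here (1-D data, decision stumps, the number of rounds and of flagged samples) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "corpus": label = sign(x), with five deliberately flipped
# labels playing the role of corpus outliers.
X = rng.uniform(-1, 1, size=200)
y = np.sign(X)
flipped = rng.choice(200, size=5, replace=False)
y[flipped] *= -1

w = np.full(200, 1 / 200)                        # AdaBoost sample weights

def best_stump(X, y, w):
    # Pick the threshold/polarity decision stump with lowest weighted error.
    best = (0.0, 1, np.inf)
    for thr in X:
        for pol in (1, -1):
            pred = pol * np.sign(X - thr + 1e-12)
            err = w[pred != y].sum()
            if err < best[2]:
                best = (thr, pol, err)
    return best

for _ in range(10):                              # boosting rounds
    thr, pol, err = best_stump(X, y, w)
    err = min(max(err, 1e-10), 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = pol * np.sign(X - thr + 1e-12)
    w *= np.exp(-alpha * y * pred)               # misclassified points gain weight
    w /= w.sum()

# The heaviest samples are the outlier candidates.
candidates = set(np.argsort(w)[-5:].tolist())
print("flagged:", sorted(candidates), "injected:", sorted(flipped.tolist()))
```

Because the flipped points are inconsistent with every reasonable stump, their weights grow multiplicatively across rounds, so the top-weight samples largely coincide with the injected outliers.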
doi: 10.1007/11539087_21
A New Learning Algorithm Based on Lever Principle
Xiaoguang He; Jie Tian; Xin Yang
In this paper a new learning algorithm, the Lever Training Machine (LTM), is presented for binary classification. LTM is a supervised learning algorithm whose main idea is inspired by a physical principle: the lever principle. Figuratively, LTM rolls a hyperplane around the convex hull of the target training set and uses the equilibrium positions of the hyperplane to define decision surfaces. In theory, the optimization goal of LTM is to maximize the correct rejection rate. If the distribution of the target set is convex, a set of such decision surfaces can be trained for exact discrimination without false alarms. Two mathematical experiments and a practical face detection application confirm that LTM is an effective learning algorithm.
- Neural Network Learning Algorithms | Pp. 187-198
doi: 10.1007/11539087_23
A Novel LDA Approach for High-Dimensional Data
Guiyu Feng; Dewen Hu; Ming Li; Zongtan Zhou
Linear Discriminant Analysis (LDA) is one of the most popular linear projection techniques for feature extraction. The major drawback of this method is that it may encounter the small sample size problem in practice. In this paper, we present a novel LDA approach for high-dimensional data. Instead of performing direct dimension reduction with PCA as a first step, the high-dimensional data are mapped into a lower-dimensional similarity space, and the LDA technique is then applied. Preliminary experimental results on the ORL face database verify the effectiveness of the proposed approach.
- Neural Network Learning Algorithms | Pp. 209-212
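A minimal sketch of the two-step idea, assuming cosine similarity to the training samples as the similarity-space mapping (the paper's exact mapping may differ) and plain two-class Fisher LDA on toy data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for high-dimensional data: two classes in 500 dimensions
# with only 40 samples, i.e. the small sample size regime.
d, n = 500, 40
X0 = rng.normal(0.0, 1.0, size=(n // 2, d))
X1 = rng.normal(0.6, 1.0, size=(n // 2, d))
X = np.vstack([X0, X1])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Step 1: map each sample into a similarity space -- here, cosine
# similarity to every training sample, giving an n-dim representation (n << d).
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = Xn @ Xn.T                                    # (n, n) similarity features

# Step 2: ordinary two-class Fisher LDA in the similarity space.
m0, m1 = S[y == 0].mean(axis=0), S[y == 1].mean(axis=0)
Sw = np.cov(S[y == 0], rowvar=False) + np.cov(S[y == 1], rowvar=False)
w = np.linalg.solve(Sw + 1e-6 * np.eye(n), m1 - m0)   # Fisher direction

proj = S @ w
thr = (proj[y == 0].mean() + proj[y == 1].mean()) / 2
acc = np.mean((proj > thr).astype(int) == y)
print("training accuracy:", acc)
```

The within-class scatter is 40×40 rather than 500×500, so the small sample size problem in the original space is sidestepped; the tiny ridge term keeps the solve well-posed.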
doi: 10.1007/11539087_25
Support Vector Regression with Smoothing Property
Zhixia Yang; Nong Wang; Ling Jing
The construction of a smoothing curve is essentially a regression problem. This paper discusses how to use SVR to solve the problem of smooth curve reconstruction in reverse engineering, and proposes a modified support vector regression model. Numerical results show that curves fitted by the modified method are smoother than those fitted by standard SVR when the data contain some bad measurement points.
- Neural Network Learning Algorithms | Pp. 217-220
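The underlying robustness property — an ε-insensitive (SVR-style) loss resists bad measurement points where least squares does not — can be illustrated with a toy fit. The trigonometric basis, the subgradient solver, and all constants below are assumptions for illustration, not the paper's modified SVR:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples of a curve, plus a few gross "bad measure points".
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + 0.02 * rng.normal(size=80)
bad = [10, 40, 70]
y[bad] += 2.0

# Small trigonometric basis (illustrative stand-in for the paper's model):
# y ~ w0*sin(2*pi*x) + w1*cos(2*pi*x) + w2.
P = np.column_stack([np.sin(2 * np.pi * x), np.cos(2 * np.pi * x), np.ones_like(x)])

# Least squares is dragged toward the outliers...
w_ls = np.linalg.lstsq(P, y, rcond=None)[0]

# ...while the epsilon-insensitive (SVR-style) loss caps their influence.
w = np.zeros(3)
eps, lr = 0.05, 0.02
for _ in range(4000):
    r = P @ w - y
    g = np.where(np.abs(r) > eps, np.sign(r), 0.0)  # subgradient of eps-tube loss
    w -= lr * (P.T @ g) / len(x)

clean = np.setdiff1d(np.arange(80), bad)
err_ls = np.mean(np.abs(P @ w_ls - y)[clean])
err_svr = np.mean(np.abs(P @ w - y)[clean])
print(err_svr < err_ls)
```

Each outlier contributes a bounded ±1 subgradient regardless of its magnitude, so the robust fit tracks the clean points while the least-squares fit is shifted by the bad ones.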
doi: 10.1007/11539087_26
A Fast SMO Training Algorithm for Support Vector Regression
Haoran Zhang; Xiaodong Wang; Changjiang Zhang; Xiuling Xu
Support vector regression (SVR) is a powerful tool for solving regression problems. This paper proposes a fast Sequential Minimal Optimization (SMO) algorithm for training SVR. It first gives an analytical solution to the size-two quadratic programming (QP) problem, and then proposes a new heuristic method for selecting the working set, which leads to faster convergence. Simulation results indicate that the proposed SMO algorithm reduces the training time of SVR and performs better than the original SMO algorithm.
- Neural Network Learning Algorithms | Pp. 221-224
doi: 10.1007/11539087_28
A New Predictive Vector Quantization Method Using a Smaller Codebook
Min Shi; Shengli Xie
To improve coding efficiency, a new predictive vector quantization (VQ) method is proposed in this paper. Two codebooks of different dimensionality and size are employed in our algorithm. Blocks are first classified based on variance. For smooth areas, the current vectors are split into even-column and odd-column vectors. The even-column vectors are encoded with the lower-dimensional, smaller codebook, while the odd ones are predicted at the decoder from decoded pixels of intra-blocks and inter-blocks. For edge areas, the current vectors are encoded with a traditional codebook to maintain image quality. An efficient codebook design method is also presented to improve the quality of the resulting codebook. Experimental comparisons with other methods show the good performance of our algorithm.
- Neural Network Learning Algorithms | Pp. 229-236
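The variance-based classification and the even/odd column split for smooth blocks can be shown directly; the block size and variance threshold below are illustrative assumptions:

```python
import numpy as np

# Two 4x8 blocks: a near-constant "smooth" block and a high-contrast "edge" block.
smooth_block = np.full((4, 8), 120) + np.arange(8)        # gentle ramp
edge_block = np.kron([[0, 255]], np.ones((4, 4), int))    # hard step edge

# Variance-based smooth/edge classification, as in the abstract
# (the threshold is an assumed illustrative value).
def is_smooth(block, thr=100.0):
    return block.var() < thr

# Smooth blocks are split into even and odd column vectors; only the even
# columns are vector-quantized, the odd ones are predicted at the decoder.
even = smooth_block[:, 0::2]         # (4, 4) vector sent to the small codebook
odd = smooth_block[:, 1::2]          # (4, 4) vector reconstructed by prediction
print(is_smooth(smooth_block), is_smooth(edge_block), even.shape, odd.shape)
```

Interleaving `even` and `odd` back column by column recovers the original block, which is why only half the columns need to pass through the quantizer for smooth areas.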
doi: 10.1007/11539087_30
Universal Approach to Study Delayed Dynamical Systems
Tianping Chen
In this paper, we propose a universal approach to studying the dynamical behaviors of various neural networks with time-varying delays. A universal model is proposed that includes most existing models as special cases. An effective approach to investigating global stability, first proposed in [1], is also given. It is pointed out that the approach proposed in [1] applies to systems with time-varying delays as well.
- Neural Network Architectures | Pp. 245-253
doi: 10.1007/11539087_31
Long-Range Connections Based Small-World Network and Its Synchronizability
Liu Jie; Lu Jun-an
How crucial are the long-distance connections in small-world networks produced by the semi-random SW strategy? In this paper, we investigate some related questions by constructing a semi-random small-world network through randomly adding only 'long-range lattice-distance connections' to a regular network. The modified network model is compared with the widely used NW small-world network. We find that, with the new modified small-worldify algorithm, one can obtain a better-clustered small-world network with a similar average path length. Furthermore, we find numerically that, for a dynamical network under a typical coupling scheme, the synchronizability of the small-world network formed by our procedure is no better than that of the small-world network formed by the NW algorithm, although the two classes of networks are constructed at the same cost and have similar average path lengths. These results further confirm that random coupling is, in some sense, the best candidate for such nonlocal coupling in the semi-random strategy. The main results are confirmed by extensive numerical simulations.
- Neural Network Architectures | Pp. 254-263
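The construction above can be sketched as a ring lattice plus randomly added connections whose lattice distance exceeds a cutoff (the cutoff n//4, the lattice size, and the number of shortcuts are assumptions), showing the characteristic drop in average path length:

```python
from collections import deque
import random

def avg_path_length(adj, n):
    # Mean shortest-path length over all connected pairs, via BFS per node.
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

random.seed(0)
n, k = 200, 2                        # ring lattice: k neighbors on each side
adj = {i: set() for i in range(n)}
for i in range(n):
    for d in range(1, k + 1):
        adj[i].add((i + d) % n)
        adj[(i + d) % n].add(i)

L_regular = avg_path_length(adj, n)

# Randomly add shortcuts whose ring (lattice) distance exceeds n // 4.
added = 0
while added < 20:
    u, v = random.sample(range(n), 2)
    if min((u - v) % n, (v - u) % n) > n // 4 and v not in adj[u]:
        adj[u].add(v)
        adj[v].add(u)
        added += 1

L_small_world = avg_path_length(adj, n)
print(L_regular, ">", L_small_world)
```

Only a handful of long-range links are needed to collapse the average path length, while the local clustering of the ring lattice is left untouched.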
doi: 10.1007/11539087_32
Double Synaptic Weight Neuron Theory and Its Application
Wang Shou-jue; Chen Xu; Qin Hong; Li Weijun; Bian Yi
In this paper, a novel mathematical neuron model, the Double Synaptic Weight Neuron (DSWN), is presented. The DSWN can simulate many kinds of neuron architectures, including Radial Basis Function (RBF), hyper-sausage, and hyper-ellipsoid models. Moreover, this new model has been implemented in the new CASSANN-II neurocomputer, which can be used to form various types of neural networks with multiple mathematical neuron models. The flexibility of the DSWN in constructing neural networks is also described. Based on the theory of Biomimetic Pattern Recognition (BPR) and high-dimensional space covering, a recognition system for omnidirectionally oriented rigid objects on a horizontal surface and a face recognition system have been implemented on the CASSANN-II neurocomputer. In these two cases, the results show that DSWN neural networks have great potential in pattern recognition.
- Neural Network Architectures | Pp. 264-272