Publications catalog - books

Advances in Neural Networks: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007, Proceedings, Part II

Derong Liu ; Shumin Fei ; Zengguang Hou ; Huaguang Zhang ; Changyin Sun (eds.)

Conference: 4th International Symposium on Neural Networks (ISNN). Nanjing, China. June 3, 2007 - June 7, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Computer Communication Networks; Algorithm Analysis and Problem Complexity; Discrete Mathematics in Computer Science; Pattern Recognition

Availability
Detected institution | Year of publication | Browse | Download | Request
Not detected | 2007 | SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-72392-9

Electronic ISBN

978-3-540-72393-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Regularization Versus Dimension Reduction, Which Is Better?

Yunfei Jiang; Ping Guo

There exist two main approaches to classifying high-dimensional data in small-sample-size settings. One is to classify the data directly in the high-dimensional space with regularization methods; the other is to reduce the data dimension first and then classify in the feature space. But which is actually better? In this paper, comparative studies of regularization and dimension reduction approaches are carried out on two typical sets of real-world high-dimensional data: Raman spectroscopy signals and stellar spectra. Experimental results show that in most cases the dimension reduction methods obtain acceptable classification results at a lower computational cost. When the number of training samples is insufficient and the class distribution is seriously unbalanced, some regularization approaches perform better than the dimension reduction ones, but at a higher computational cost.

- Training and Learning Algorithms for Neural Networks | Pp. 474-482
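
A minimal sketch of the two routes compared in this paper, run on synthetic data rather than the Raman or stellar spectra used by the authors: an L2-regularized classifier applied directly in the high-dimensional space versus PCA-based dimension reduction followed by the same classifier. All dataset sizes and model settings below are illustrative.

```python
# Illustrative comparison on synthetic data (not the paper's datasets).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Few samples, many features: the "small sample size" setting.
X, y = make_classification(n_samples=120, n_features=1000,
                           n_informative=20, random_state=0)

# Route 1: regularize directly in the high-dimensional space (L2 penalty).
regularized = LogisticRegression(C=0.1, max_iter=5000)

# Route 2: reduce dimension first, then classify in the feature space.
reduced = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=5000))

for name, clf in [("regularization", regularized), ("PCA + classifier", reduced)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```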

Integrated Analytic Framework for Neural Network Construction

Kang Li; Jian-Xun Peng; Minrui Fei; Xiaoou Li; Wen Yu

This paper investigates the construction of a wide class of single-hidden-layer neural networks (SLNNs) with or without tunable parameters in the hidden nodes. It is a challenging problem if both the parameter training and the determination of network size are considered simultaneously. Two alternative network construction methods are considered in this paper. Firstly, the discrete construction of SLNNs is introduced. The main objective is to select a subset of hidden nodes from a pool of candidates with parameters fixed ‘a priori’. This is called discrete construction since there are no parameters in the hidden nodes that need to be trained. The second approach is called continuous construction, as all the adjustable network parameters are trained over the whole parameter space throughout the network construction process. In the second approach, there is no need to generate a pool of candidates, and the network grows node by node with the adjustable parameters optimized. The main contribution of this paper is to show that network construction can be carried out using the above two alternative approaches, and that these two approaches can be integrated within a unified analytic framework, leading to potentially significantly improved model performance and/or computational efficiency.

- Training and Learning Algorithms for Neural Networks | Pp. 483-492
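
The following sketch illustrates only the "discrete construction" route described above, under simplifying assumptions: hidden-node (RBF) parameters are fixed a priori, and the network is grown by greedily selecting, one at a time, the candidate node that most reduces the output-layer least-squares error. It is not the authors' integrated analytic framework, and all names and sizes are illustrative.

```python
# Illustrative greedy forward selection of hidden nodes from a fixed pool.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

# Candidate pool: RBF nodes centred on training points, width fixed a priori.
centres = X[rng.choice(len(X), 50, replace=False)]
H = np.exp(-((X - centres.T) ** 2) / (2 * 0.2 ** 2))   # (n_samples, n_candidates)

selected = []
for _ in range(8):                                      # grow the network node by node
    best_j, best_err = None, np.inf
    for j in range(H.shape[1]):
        if j in selected:
            continue
        cols = H[:, selected + [j]]
        w, *_ = np.linalg.lstsq(cols, y, rcond=None)    # output weights by least squares
        err = np.mean((y - cols @ w) ** 2)
        if err < best_err:
            best_j, best_err = j, err
    selected.append(best_j)
    print(f"nodes={len(selected)}, training MSE={best_err:.4f}")
```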

Topographic Infomax in a Neural Multigrid

James Kozloski; Guillermo Cecchi; Charles Peck; A. Ravishankar Rao

We introduce an information maximizing neural network that employs only local learning rules, simple activation functions, and feedback in its functioning. The network consists of an input layer, an output layer that can be overcomplete, and a set of auxiliary layers comprising feed-forward, lateral, and feedback connections. The auxiliary layers implement a novel “neural multigrid,” and each computes a Fourier mode of a key infomax learning vector. Initially, a partial multigrid computes only low-frequency modes of this learning vector, resulting in a spatially correlated topographic map. As higher-frequency modes of the learning vector are gradually added, an infomax solution emerges, maximizing the entropy of the output without disrupting the map’s topographic order. When feed-forward and feedback connections to the neural multigrid are passed through a nonlinear activation function, infomax emerges in a phase-independent topographic map. Information rates estimated by Principal Components Analysis (PCA) are comparable to those of standard infomax, indicating that the neural multigrid successfully imposes a topographic order on the optimal infomax-derived bases.

- Neural Networks Structures | Pp. 500-509
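
For orientation, here is a minimal sketch of the standard infomax learning rule (Bell and Sejnowski, natural-gradient form) that the neural-multigrid construction builds on; it implements neither the auxiliary layers nor the topographic ordering described above, and all sizes are illustrative.

```python
# Standard infomax (ICA) update only; not the paper's neural multigrid.
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_samples = 4, 10000
S = rng.laplace(size=(n_sources, n_samples))        # super-Gaussian sources
A = rng.standard_normal((n_sources, n_sources))     # unknown mixing matrix
X = A @ S                                           # observed mixtures

W = np.eye(n_sources)                               # unmixing matrix to be learned
eta, batch = 0.01, 200
for epoch in range(50):
    for start in range(0, n_samples, batch):
        x = X[:, start:start + batch]
        u = W @ x
        y = 1.0 / (1.0 + np.exp(-u))                # logistic nonlinearity
        # Natural-gradient infomax update: dW = eta * (I + (1 - 2y) u^T / batch) W
        W += eta * (np.eye(n_sources) + (1 - 2 * y) @ u.T / batch) @ W

# If learning succeeded, W @ A should approach a scaled permutation matrix.
print(np.round(W @ A, 2))
```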

An Artificial Immune Network Model Applied to Data Clustering and Classification

Chenggong Zhang; Zhang Yi

A novel tree-structured artificial immune network is proposed. The trunk nodes and leaf nodes represent memory antibodies and non-memory antibodies, respectively. A link is set up between two antibodies immediately after one is reproduced from the other. By introducing well-designed immune operators such as clonal selection, cooperation, suppression and topology updating, the network evolves from a single antibody to clusters that are well consistent with the local distribution and local density of the original antigens. The framework of the learning algorithm and several key steps are described. Experiments are carried out to demonstrate the learning process and classification accuracy of the proposed model.

- Neural Networks Structures | Pp. 526-533
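
A loose, illustrative sketch of the clonal-selection and suppression steps mentioned above, not the authors' tree-structured network: antibodies are cloned and mutated towards the antigens they match, and near-duplicate antibodies are suppressed, so the surviving population follows the local distribution of the antigens. All thresholds and sizes are arbitrary.

```python
# Illustrative clonal selection / suppression loop; not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
antigens = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in ((0, 0), (4, 4))])
antibodies = rng.uniform(-1, 5, size=(1, 2))        # start from a single antibody

for _ in range(20):
    new = []
    for ag in antigens:
        d = np.linalg.norm(antibodies - ag, axis=1)
        best = antibodies[np.argmin(d)]
        # Clonal selection: clone the best match, mutated towards the antigen.
        new.append(best + 0.5 * (ag - best) + rng.normal(0, 0.05, size=2))
    antibodies = np.vstack([antibodies, new])
    # Suppression: drop antibodies closer than a threshold to one already kept.
    keep = []
    for ab in antibodies:
        if all(np.linalg.norm(ab - k) > 0.4 for k in keep):
            keep.append(ab)
    antibodies = np.array(keep)

print(f"{len(antibodies)} memory antibodies summarise {len(antigens)} antigens")
```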

Sparse Coding in Sparse Winner Networks

Janusz A. Starzyk; Yinyin Liu; David Vogel

This paper investigates a mechanism for reliable generation of sparse code in a sparsely connected, hierarchical, learning memory. Activity reduction is accomplished with local competitions that suppress activities of unselected neurons so that costly global competition is avoided. The learning ability and the memory characteristics of the proposed winner-take-all network and an oligarchy-take-all network are demonstrated using experimental results. The proposed models have the features of a learning memory essential to the development of machine intelligence.

- Neural Networks Structures | Pp. 534-541
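
A minimal sketch of the local-competition idea described above: rather than one global winner-take-all over the whole layer, each local neighbourhood keeps only its own winner, producing a sparse code without a costly global competition. The grouping into fixed blocks below is an illustrative simplification of the sparsely connected hierarchy discussed in the paper.

```python
# Illustrative local winner-take-all; block structure is a simplification.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons, group = 64, 256, 16          # 256 neurons in 16 local groups
W = rng.standard_normal((n_neurons, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)     # unit-norm synaptic weight vectors

x = rng.standard_normal(n_inputs)
activation = W @ x

# Local competition: within each group, keep the largest activation and
# suppress (zero) all the others.
sparse = np.zeros_like(activation)
for g in range(0, n_neurons, group):
    block = activation[g:g + group]
    sparse[g + np.argmax(block)] = block.max()

print(f"active units: {np.count_nonzero(sparse)} of {n_neurons}")
```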

Multi-Valued Cellular Neural Networks and Its Application for Associative Memory

Zhong Zhang; Takuma Akiduki; Tetsuo Miyake; Takashi Imamura

This paper discusses the design of multi-valued output functions for Cellular Neural Networks (CNNs) implementing associative memories. The output function of the CNN is a piecewise linear function consisting of saturation and non-saturation ranges. A new structure for the output function is defined, called the “basic waveform”. The saturation ranges with n levels are generated by adding n − 1 basic waveforms. Consequently, an associative memory for multi-valued patterns is successfully created, and computer experiment results show the validity of the proposed method. The results of this research can expand the range of applications of CNNs as associative memories.

- Neural Networks Structures | Pp. 542-551
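
A rough sketch of the construction outlined above, under the assumption that the "basic waveform" is the standard piecewise-linear CNN output f(x) = 0.5(|x + 1| - |x - 1|): summing n − 1 shifted copies yields a staircase output with n saturation levels, which is what a multi-valued associative memory requires. The exact definition used in the paper may differ.

```python
# Illustrative multi-level output built from shifted CNN output functions.
import numpy as np

def basic_waveform(x, centre=0.0):
    """Standard CNN piecewise-linear output, shifted to saturate around `centre`."""
    return 0.5 * (np.abs(x - centre + 1) - np.abs(x - centre - 1))

def multi_valued_output(x, levels=4, spacing=3.0):
    """Sum of (levels - 1) shifted basic waveforms -> `levels` saturation levels."""
    centres = (np.arange(levels - 1) - (levels - 2) / 2) * spacing
    return sum(basic_waveform(x, c) for c in centres)

x = np.linspace(-8, 8, 9)
# Four saturation levels: -3, -1, 1, 3 (x = 0 falls on a transition ramp).
print(np.round(multi_valued_output(x, levels=4), 2))
```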

Emergence of Topographic Cortical Maps in a Parameterless Local Competition Network

A. Ravishankar Rao; Guillermo Cecchi; Charles Peck; James Kozloski

A major research problem in the area of unsupervised learning is the understanding of neuronal selectivity, and its role in the formation of cortical maps. Kohonen devised a self-organizing map algorithm to investigate this problem, which achieved partial success in replicating biological observations. However, a problem in using Kohonen’s approach is that it does not address the stability-plasticity dilemma, as the learning rate decreases monotonically.

In this paper, we propose a solution to cortical map formation which tackles the stability-plasticity problem, where the map maintains stability while enabling plasticity in the presence of changing input statistics. We adapt the parameterless SOM (Berglund and Sitte 2006) and also modify Kohonen’s original approach to allow local competition in a larger cortex, where multiple winners can exist.

The learning rate and neighborhood size of the modified Kohonen’s method are set automatically based on the error between the local winner’s weight vector and its input. We used input images consisting of lines of random orientation to train the system in an unsupervised manner. Our model shows large-scale topographic organization of orientation across the cortex, which compares favorably with cortical maps measured in visual area V1 in primates. Furthermore, we demonstrate the plasticity of this map by showing that the map reorganizes when the input statistics are changed.

- Neural Networks Structures | Pp. 552-561
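
A simplified sketch of the parameterless-SOM idea (Berglund and Sitte 2006) referred to above: instead of a fixed annealing schedule, the learning rate and neighbourhood size are derived from the normalised error between the winner's weight vector and the current input. Only this core update is shown, not the local-competition, multiple-winner extension proposed in the paper; map size and inputs are illustrative.

```python
# Illustrative error-driven (parameterless) SOM update; single global winner.
import numpy as np

rng = np.random.default_rng(0)
side, dim = 10, 2
weights = rng.uniform(0, 1, size=(side, side, dim))    # 10x10 map of 2-D weights
grid = np.stack(np.meshgrid(np.arange(side), np.arange(side), indexing="ij"), axis=-1)

max_err = 1e-12
for _ in range(5000):
    x = rng.uniform(0, 1, size=dim)                    # input drawn from the unit square
    err = np.sum((weights - x) ** 2, axis=-1)
    winner = np.unravel_index(np.argmin(err), err.shape)
    max_err = max(max_err, err[winner])                # running normaliser
    rho = err[winner] / max_err                        # normalised fitting error in [0, 1]
    sigma = 1.0 + rho * side                           # neighbourhood shrinks as the map fits
    dist2 = np.sum((grid - np.array(winner)) ** 2, axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))
    weights += (rho * h)[..., None] * (x - weights)    # error-driven learning rate
```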

Graph Matching Recombination for Evolving Neural Networks

Ashique Mahmood; Sadia Sharmin; Debjanee Barua; Md. Monirul Islam

This paper presents a new evolutionary system that uses a genetic algorithm for evolving artificial neural networks (ANNs). Existing genetic algorithms (GAs) for evolving ANNs suffer from the permutation problem. Frequent and abrupt recombination in GAs also has a very detrimental effect on the quality of offspring. Evolutionary Programming (EP), on the other hand, does not use a recombination operator at all. The proposed algorithm introduces a recombination operator based on a graph matching technique to adapt the structure of ANNs dynamically and to avoid the permutation problem. The complete algorithm is designed to avoid frequent recombination and to reduce behavioral disruption between parents and offspring. The evolutionary system is implemented and applied to three medical diagnosis problems - breast cancer, diabetes and thyroid. The experimental results show that the system can dynamically evolve compact ANN structures while remaining competitive in performance.

- Neural Networks Structures | Pp. 562-568
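
An illustrative sketch of matching-based recombination for single-hidden-layer networks: hidden nodes of the two parents are aligned by the similarity of their incoming weight vectors before crossover, so that functionally similar nodes are paired regardless of their position, which is the essence of avoiding the permutation problem. The paper's graph-matching operator is more general; the simple assignment on weight distances below is a stand-in.

```python
# Illustrative node matching before crossover; not the authors' operator.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_in, n_hidden = 5, 6
parent_a = rng.standard_normal((n_hidden, n_in))        # incoming weights per hidden node
parent_b = rng.standard_normal((n_hidden, n_in))

# Cost of pairing node i of parent A with node j of parent B.
cost = np.linalg.norm(parent_a[:, None, :] - parent_b[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)                # optimal node-to-node matching

# Recombine matched pairs: each child node is taken from one parent at random.
mask = rng.random((n_hidden, 1)) < 0.5
child = np.where(mask, parent_a[rows], parent_b[cols])
print(child.shape)   # (6, 5): one recombined weight vector per matched node pair
```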

Iteratively Reweighted Fitting for Reduced Multivariate Polynomial Model

Wangmeng Zuo; Kuanquan Wang; David Zhang; Feng Yue

Recently a class of reduced multivariate polynomial models (RM) has been proposed that performs well in classification tasks involving few features and many training samples. The RM method, however, adopts a ridge least-squares estimator, overlooking the fact that the least-squares error usually does not correspond to the minimum classification error. In this paper, we propose an iteratively reweighted regression method and two novel weight functions for fitting the RM model (IRF-RM). The IRF-RM method iteratively increases the weights of samples prone to misclassification and decreases the weights of samples far from the decision boundary, making the IRF-RM model more suitable for efficient pattern classification. A number of benchmark data sets are used to evaluate the IRF-RM method. Experimental results indicate that IRF-RM achieves a higher or comparable classification accuracy compared with RM and several state-of-the-art classification approaches.

- Neural Networks Structures | Pp. 583-592
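
A schematic version of an iteratively reweighted fitting loop in the spirit of the abstract above: a ridge (regularised least-squares) fit is repeated with per-sample weights that grow for samples prone to misclassification and shrink for samples far from the decision boundary. The logistic weight function and the linear model used here are illustrative choices, not the RM model or either of the two weight functions proposed in the paper.

```python
# Illustrative iteratively reweighted ridge fit for binary classification.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)    # labels in {-1, +1}
Phi = np.hstack([X, np.ones((200, 1))])                  # linear model with bias term

lam, w = 1e-2, np.ones(200)                              # ridge parameter, sample weights
for _ in range(10):
    # Weighted ridge least squares: (Phi^T W Phi + lam I) beta = Phi^T W y
    WPhi = Phi * w[:, None]
    beta = np.linalg.solve(Phi.T @ WPhi + lam * np.eye(3), WPhi.T @ y)
    margin = y * (Phi @ beta)                            # > 0 means correctly classified
    w = 1.0 / (1.0 + np.exp(np.clip(margin, -30, 30)))   # up-weight small / negative margins

acc = np.mean(np.sign(Phi @ beta) == y)
print(f"training accuracy after reweighted fitting: {acc:.3f}")
```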

Ordering Grids to Identify the Clustering Structure

Shihong Yue; Miaomiao Wei; Yi Li; Xiuxiu Wang

Almost all well-known clustering algorithms require input parameters, and these parameters may be difficult to determine. OPTICS (Ordering Points To Identify the Clustering Structure) is a primary semi-clustering method for visualizing the data structure and determining the input parameters of a given clustering algorithm. However, the computational complexity of OPTICS is too high for it to be applied to large datasets. In this paper, we present a new semi-clustering method that partitions the data space into a number of grids and Orders all Grids To Identify the Clustering Structure; accordingly, the new method is called OGTICS. OGTICS has only linear complexity, O(n), and is thus much faster than OPTICS. Consequently, OGTICS can be applied to very large datasets.

- Neural Networks Structures | Pp. 612-619
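
A rough sketch of the grid-based idea summarised above: the data are binned into grid cells, and the non-empty cells, rather than the individual points, are ordered so that a plot of the recorded jump distances exposes the cluster structure. The greedy nearest-unvisited-cell ordering below is an illustrative stand-in for the authors' procedure, which the abstract does not specify; all sizes and thresholds are arbitrary.

```python
# Illustrative grid partitioning and cell ordering; not the OGTICS algorithm itself.
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(c, 0.4, size=(300, 2)) for c in ((0, 0), (5, 5), (5, 0))])

# Partition the space into a fixed grid and keep the centres of non-empty cells.
cell = 0.5
cells = np.unique(np.floor(data / cell).astype(int), axis=0)
centres = (cells + 0.5) * cell                          # one representative per cell

# Order cells by a greedy walk to the nearest unvisited cell; record each jump
# distance as a reachability-style value (large jumps mark cluster boundaries).
order, reach = [0], [0.0]
unvisited = set(range(1, len(centres)))
while unvisited:
    last = centres[order[-1]]
    nxt = min(unvisited, key=lambda j: np.linalg.norm(centres[j] - last))
    reach.append(np.linalg.norm(centres[nxt] - last))
    order.append(nxt)
    unvisited.remove(nxt)

print(f"{len(centres)} cells ordered; largest jumps: {np.sort(reach)[-3:]}")
```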