Publications catalog - books



Advances in Neural Networks: 4th International Symposium on Neural Networks, ISNN 2007, Nanjing, China, June 3-7, 2007, Proceedings, Part III

Derong Liu ; Shumin Fei ; Zengguang Hou ; Huaguang Zhang ; Changyin Sun (eds.)

Conference: 4th International Symposium on Neural Networks (ISNN). Nanjing, China. June 3-7, 2007

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Computation by Abstract Devices; Computer Communication Networks; Algorithm Analysis and Problem Complexity; Discrete Mathematics in Computer Science; Artificial Intelligence (incl. Robotics); Pattern Recognition

Availability

Detected institution | Publication year | Available via
Not detected | 2007 | SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-72394-3

Electronic ISBN

978-3-540-72395-0

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2007

Table of contents

Periodicity of Recurrent Neural Networks with Reaction-Diffusion and Dirichlet Boundary Conditions

Chaojin Fu; Chongjun Zhu; Boshan Chen

In this paper, a class of reaction-diffusion recurrent neural networks with time-varying delays and Dirichlet boundary conditions is considered by using an approach based on a delay differential inequality and the fixed-point theorem. Some sufficient conditions are obtained which guarantee that such reaction-diffusion recurrent neural networks have a periodic orbit and that this periodic orbit is globally attractive. The results presented in this paper improve and extend those in some existing works.

- Recurrent Neural Networks | Pp. 123-130

New Critical Analysis on Global Convergence of Recurrent Neural Networks with Projection Mappings

Chen Qiao; Zong-Ben Xu

In this paper, we present a general analysis of global convergence for recurrent neural networks (RNNs) with projection mappings in the critical case that a matrix related to the weight matrix and the activation mapping of the networks is nonnegative for some positive diagonal matrix. In contrast to existing conclusions such as in [1], the present critical stability results do not require this matrix to be symmetric, and they can be applied to general projection mappings other than nearest-point projection mappings. An example shows that the theoretical results obtained in this paper have explicit practical applications.

- Recurrent Neural Networks | Pp. 131-139
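
The class of dynamics analyzed above can be illustrated with a small sketch. The following is a generic projection-network construction, not the authors' exact model: an Euler discretization of dx/dt = P(x - alpha*(Wx + q)) - x, where P is the nearest-point projection onto a box. The matrix W (deliberately non-symmetric), the vector q, the box, and the step sizes are illustrative choices.

```python
def box_project(x, lo=0.0, hi=1.0):
    """Nearest-point projection onto the box [lo, hi]^n."""
    return [min(hi, max(lo, xi)) for xi in x]

def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def projection_rnn(W, q, x0, alpha=0.5, dt=0.05, steps=4000):
    """Euler discretization of dx/dt = P(x - alpha*(W x + q)) - x."""
    x = list(x0)
    for _ in range(steps):
        g = [gi + qi for gi, qi in zip(matvec(W, x), q)]
        z = box_project([xi - alpha * gi for xi, gi in zip(x, g)])
        x = [xi + dt * (zi - xi) for xi, zi in zip(x, z)]
    return x

# A deliberately non-symmetric W: equilibria satisfy x = P(x - alpha*(W x + q)),
# the variational-inequality (KKT) condition over the box.
W = [[2.0, 0.5], [0.2, 1.0]]
q = [-1.0, 0.3]
x_star = projection_rnn(W, q, [0.9, 0.9])      # settles near (0.5, 0.0)
```

At an equilibrium the state is a fixed point of the projected map, which for this W and q lies on the boundary of the box, so the second coordinate is clamped at zero.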

A Study on Digital Media Security by Hopfield Neural Network

Minseong Ju; Seoksoo Kim; Tai-hoon Kim

Recently, the distribution and use of digital multimedia content have become easy owing to the development of Internet application programs and related technology. However, a digital signal is easily duplicated, and the duplicates have the same quality as the original. To solve this problem, multimedia fingerprinting has been studied for copyright protection. Fingerprinting is a technique that supports copyright protection by using cryptographic methods to track redistributors of electronic information. Unlike symmetric/asymmetric schemes, in a fingerprinting scheme only the legitimate user can know the inserted fingerprint data, and the scheme guarantees anonymity until the data is redistributed. In this paper, we present a new scheme for detecting colluded multimedia fingerprints with a neural network. The proposed scheme consists of anti-collusion code generation and a neural network for error correction. The anti-collusion code, based on a BIBD (Balanced Incomplete Block Design), achieved a 100% collusion-code detection rate against the average linear collusion attack, and the Hopfield neural network, designed with an (n, k) code for error-bit correction, was confirmed to correct errors of up to 2 bits.

- Recurrent Neural Networks | Pp. 140-146
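
The idea of using a Hopfield network as an error corrector can be sketched in a few lines. This is a generic Hebbian Hopfield memory, not the paper's (n, k)-code design, and the two stored codewords are illustrative stand-ins, not BIBD anti-collusion codes: codewords are stored as attractors, and a copy corrupted in 2 bits is driven back to the nearest stored codeword.

```python
def hebbian_weights(patterns):
    """Hebbian (outer-product) weights with zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, x, sweeps=5):
    """Asynchronous updates until the state reaches an attractor."""
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            h = sum(W[i][j] * x[j] for j in range(len(x)))
            x[i] = 1 if h >= 0 else -1
    return x

# Two orthogonal +/-1 codewords of length 16 (illustrative only).
p1 = [1] * 8 + [-1] * 8
p2 = [1, -1] * 8
W = hebbian_weights([p1, p2])

noisy = list(p1)
noisy[0], noisy[5] = -noisy[0], -noisy[5]   # corrupt 2 bits
restored = recall(W, noisy)                  # recovers p1
```

With well-separated codewords, every state within Hamming distance 2 of a stored pattern falls inside its basin of attraction, which is the error-correction behavior the abstract describes.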

Two Theorems on the Robust Designs for Pattern Matching CNNs

Bing Zhao; Weidong Li; Shu Jian; Lequan Min

The cellular neural/nonlinear network (CNN) has become a useful tool for image and signal processing, robotic and biological vision, and higher brain functions. Based on our previous research, this paper establishes two new theorems on robust designs for Pattern Matching CNNs in processing binary images, which provide parameter inequalities to determine the parameter intervals for implementing the prescribed image-processing function. Three numerical simulation examples are given.

- Recurrent Neural Networks | Pp. 147-156

Iterative Learning Control Analysis Based on Hopfield Neural Networks

Jingli Kang; Wansheng Tang

The iterative learning control problem based on improved discrete-time Hopfield neural networks is considered in this paper. In every iterative learning control pass, the neural networks execute a cycle whose learning time and number of training iterations vary. Iterative learning control with improved Hopfield neural networks is formulated as a two-dimensional (2-D) Roesser model with variable coefficients. Using 2-D systems theory, sufficient conditions under which the iterative learning error approaches zero are given. It is shown that convergence of the iterative learning control problem based on Hopfield neural networks can be derived from 2-D systems theory instead of from conventional algorithms that minimize a cost function.

- Recurrent Neural Networks | Pp. 157-163
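
A 2-D Roesser model of the kind used in such ILC analyses can be simulated directly. The scalar coefficients, boundary conditions, and grid sizes below are illustrative stand-ins, not the paper's variable-coefficient model: the horizontal direction plays the role of the iteration index, the vertical direction the role of time along a trial.

```python
def roesser_2d(a11, a12, a21, a22, n_iter=20, n_time=30):
    """Scalar 2-D Roesser model: H propagates along iterations, V along time."""
    H = [[0.0] * (n_time + 1) for _ in range(n_iter + 1)]
    V = [[0.0] * (n_time + 1) for _ in range(n_iter + 1)]
    for j in range(n_time + 1):
        H[0][j] = 1.0            # tracking error of the initial trial
    for i in range(n_iter):      # V[i][0] = 0: identical resetting each trial
        for j in range(n_time):
            H[i + 1][j] = a11 * H[i][j] + a12 * V[i][j]
            V[i][j + 1] = a21 * H[i][j] + a22 * V[i][j]
    return H

# Coefficients satisfying a sup-norm contraction condition, so the error
# injected at iteration 0 decays geometrically along the iteration axis.
H = roesser_2d(0.5, 0.2, 0.1, 0.4)
```

Here |a11| + |a12| < 1 and |a21| + |a22| < 1, a crude stand-in for the paper's 2-D stability conditions; under it the learning error tends to zero as the iteration index grows.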

Stochastic Stabilization of Delayed Neural Networks

Wudai Liao; Jinhuan Chen; Yulin Xu; Xiaoxin Liao

By introducing appropriate stochastic factors into neural networks, previous results have shown that the networks can be stabilized. In this paper, stochastic stabilization of delayed neural networks is studied. First, a new Razumikhin-type theorem for stochastic functional differential equations is proposed, and a rigorous proof is given using the Itô formula, the Borel-Cantelli lemma, etc. As a corollary of the theorem, a new Razumikhin-type theorem for delayed stochastic differential equations is obtained. Next, taking these results as the theoretical basis, stabilization of delayed deterministic neural networks is examined. The result obtained in the paper shows that the neural networks can be stabilized as long as the intensity of the random perturbation is large enough. An expression for the required random intensity is presented, which is convenient for network design.

- Recurrent Neural Networks | Pp. 164-173
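
The core phenomenon, stabilization by sufficiently strong noise, can be seen already in a scalar example without delay (a simplification of the paper's setting, chosen for illustration only). For dx = a*x dt + sigma*x dW, the zero solution is unstable for a > 0 when sigma = 0, but the sample-path Lyapunov exponent is a - sigma^2/2, so noise with sigma^2 > 2a drives x(t) to zero almost surely. An Euler-Maruyama estimate:

```python
import math, random

def lyapunov_estimate(a, sigma, x0=1.0, dt=1e-3, T=10.0, paths=20, seed=0):
    """Average of log|x(T)|/T over sample paths of dx = a*x dt + sigma*x dW."""
    rng = random.Random(seed)
    n = int(T / dt)
    total = 0.0
    for _ in range(paths):
        x = x0
        for _ in range(n):
            x += a * x * dt + sigma * x * rng.gauss(0.0, math.sqrt(dt))
        total += math.log(abs(x)) / T
    return total / paths

# a = 1 is deterministically unstable; with sigma = 2 the theoretical
# exponent is a - sigma^2/2 = -1, so trajectories decay.
est = lyapunov_estimate(a=1.0, sigma=2.0)
```

The delayed, vector-valued case treated in the paper needs the Razumikhin-type machinery described above, but the qualitative message is the same: large enough multiplicative noise intensity yields stability.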

Convergence of a Recurrent Neural Network for Nonconvex Optimization Based on an Augmented Lagrangian Function

Xiaolin Hu; Jun Wang

In this paper, a recurrent neural network based on an augmented Lagrangian function is proposed for seeking local minima of nonconvex optimization problems with inequality constraints. First, each equilibrium point of the neural network corresponds to a Karush-Kuhn-Tucker (KKT) point of the problem. Second, by appropriately choosing a control parameter, the neural network is asymptotically stable at those local minima satisfying some mild conditions. The latter property of the neural network is ensured by the convexification capability of the augmented Lagrangian function. The proposed scheme is inspired by many existing neural networks in the literature and can be regarded as an extension or improved version of them. A simulation example is discussed to illustrate the results.

- Neural Networks for Optimization | Pp. 194-203
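
The kind of dynamics involved can be sketched on a toy problem. The following is a generic augmented-Lagrangian gradient flow, not the authors' exact network: minimize f(x) = (x-2)^2 subject to g(x) = x - 1 <= 0, whose KKT point is x* = 1 with multiplier lambda* = 2. The penalty parameter c and step sizes are illustrative.

```python
def aug_lagrangian_flow(c=2.0, dt=0.01, steps=5000):
    """Euler-discretized gradient flow on the augmented Lagrangian of
    minimize (x-2)^2 subject to x - 1 <= 0 (KKT point: x*=1, lambda*=2)."""
    x, lam = 0.0, 0.0
    for _ in range(steps):
        mult = max(0.0, lam + c * (x - 1.0))   # shifted multiplier estimate
        dx = -(2.0 * (x - 2.0) + mult)         # -dL/dx; here g'(x) = 1
        dlam = mult - lam                      # multiplier update direction
        x += dt * dx
        lam += dt * dlam
    return x, lam

x_eq, lam_eq = aug_lagrangian_flow()           # approaches (1.0, 2.0)
```

An equilibrium of this flow satisfies the KKT conditions, matching the first property claimed in the abstract; the augmentation term is what lets such networks remain stable at local minima of nonconvex problems.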

Multi-objective Topology Optimization of Structures Using NN-OC Algorithms

Xinyu Shao; Zhimin Chen; Mingang Fu; Liang Gao

The topology optimization problem, which involves many design variables, is commonly solved by the finite element method, which must recalculate the structure stiffness matrix in each analysis. The OC (optimality criteria) method is a good way to solve topology optimization problems; nevertheless, it cannot solve multi-objective topology optimization problems. This paper introduces an effective solution to multi-objective topology optimization problems by using neural network algorithms to improve the traditional OC method. Specifically, in each iteration, the new neural network link weight vector is calculated from the link weight vector and the compliance vector of the previous iteration; the impact factor of each optimization objective on the overall objective is then worked out in order to determine the optimal direction of each design variable.

- Neural Networks for Optimization | Pp. 204-212

A Hybrid Particle Swarm Algorithm for the Structure and Parameters Optimization of Feed-Forward Neural Network

Tang Xian-Lun; Li Yin-Guo; Zhuang Ling

A novel and efficient method combining chaos particle swarm optimization (CPSO) and discrete particle swarm optimization (DPSO) is proposed to optimize the topology and connection weights of a multilayer feed-forward artificial neural network (ANN) simultaneously. In the proposed algorithm, the topology of the neural network is optimized by DPSO and the connection weights are trained by CPSO to search for the globally optimal ANN structure and connectivity. The proposed algorithm is successfully applied to fault diagnosis and is able to eliminate the adverse effects on the diagnostic capacity of the network introduced by a redundant ANN structure. Compared with the genetic algorithm (GA), the proposed method shows superior convergence and efficiency in training the ANN, as validated by the good diagnostic results of the experiments.

- Neural Networks for Optimization | Pp. 213-218
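
The weight-training half of such schemes can be sketched with a plain global-best PSO on a tiny fixed-topology network. This illustrates PSO weight training in general; the paper's CPSO/DPSO hybrid (chaos-based weight search plus discrete topology search) is not reproduced, and the network size, swarm parameters, and the XOR task are illustrative choices.

```python
import math, random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # w packs 9 parameters: two hidden units (weights + bias) and one output.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR) / len(XOR)

def pso(dim=9, particles=30, iters=200, seed=1):
    """Global-best PSO minimizing the network's training error."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-3, 3) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pcost = [mse(p) for p in pos]
    g = min(range(particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = mse(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

weights, loss = pso()
```

The discrete topology search would sit on top of this loop, encoding which connections exist as a bit vector updated by DPSO while CPSO handles the continuous weights.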

Designing Neural Networks Using PSO-Based Memetic Algorithm

Bo Liu; Ling Wang; Yihui Jin; Dexian Huang

This paper proposes an effective particle swarm optimization (PSO) based memetic algorithm (MA) for designing artificial neural networks. In the proposed PSO-based MA (PSOMA), the evolutionary searching mechanism of PSO, characterized by individual improvement plus population cooperation and competition, is applied to perform the global search, while several adaptive high-performance fast training algorithms are employed to enhance the local search, so that the exploration and exploitation abilities of PSOMA are well balanced. Moreover, an effective adaptive Meta-Lamarckian learning strategy is employed to decide which local search method to use, so as to prevent premature convergence and to concentrate computing effort on promising neighboring solutions. Simulation results and comparisons demonstrate the effectiveness and efficiency of the proposed PSOMA.

- Neural Networks for Optimization | Pp. 219-224
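
The memetic structure described above, global PSO plus an adaptively chosen local refinement of the best solution, can be sketched as follows. Everything here is an illustrative stand-in: the objective (Himmelblau's function rather than a network training error), the two local-search operators, and the simple reward rule standing in for the paper's Meta-Lamarckian strategy.

```python
import random

def himmelblau(p):
    x, y = p
    return (x * x + y - 11.0) ** 2 + (x + y * y - 7.0) ** 2

def local_search(f, p, cost, rng, step):
    # Random-perturbation hill climbing; Lamarckian: improvements are kept.
    for _ in range(20):
        q = [pi + rng.uniform(-step, step) for pi in p]
        c = f(q)
        if c < cost:
            p, cost = q, c
    return p, cost

def psoma(f, dim=2, particles=30, iters=150, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest, pcost = [p[:] for p in pos], [f(p) for p in pos]
    g = min(range(particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    steps = [1.0, 0.05]        # coarse and fine local-search operators
    reward = [1.0, 1.0]        # running reward drives operator selection
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (0.72 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
        # Pick a local-search operator proportionally to its past reward,
        # a crude stand-in for adaptive Meta-Lamarckian learning.
        k = 0 if rng.random() < reward[0] / (reward[0] + reward[1]) else 1
        new_best, new_cost = local_search(f, gbest, gcost, rng, steps[k])
        reward[k] += gcost - new_cost    # credit = improvement achieved
        gbest, gcost = new_best, new_cost
    return gbest, gcost

best, cost = psoma(himmelblau)
```

In the paper the local-search operators are fast network-training algorithms and the candidate solutions are network designs; the selection-by-reward mechanism is what concentrates effort on whichever refinement is currently paying off.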