Publications catalog - books

Neural Information Processing: 13th International Conference, ICONIP 2006, Hong Kong, China, October 3-6, 2006, Proceedings, Part II

Irwin King ; Jun Wang ; Lai-Wan Chan ; DeLiang Wang (eds.)

Conference: 13th International Conference on Neural Information Processing (ICONIP). Hong Kong, China, October 3, 2006 - October 6, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Computer Appl. in Administrative Data Processing; Information Systems Applications (incl. Internet); Database Management; Image Processing and Computer Vision

Availability
Institution detected: none
Publication year: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-46481-5

Electronic ISBN

978-3-540-46482-2

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

Solving Multiprocessor Real-Time System Scheduling with Enhanced Competitive Scheme

Ruey-Maw Chen; Shih-Tang Lo; Yueh-Min Huang

This study adopts a method based on Hopfield Neural Networks (HNN) for solving the real-time scheduling problem. A neural network using a competitive learning rule provides a highly effective means of deriving a sound solution to the scheduling problem; moreover, the competitive scheme reduces network complexity. However, the competitive scheme is a restrictive rule and is applicable only to a limited class of scheduling problems: the processors may not be fully utilized. To handle such non-fully-utilized problems, extra neurons are introduced into the Competitive Hopfield Neural Network (CHNN): slack neurons corresponding to pseudo-processes are imposed on the CHNN. Simulation results reveal that the competitive neural network, with the proposed energy function and the integrated slack neurons, provides an appropriate approach to solving both full- and non-full-utilization multiprocessor real-time scheduling problems.

- Neurodynamic and Particle Swarm Optimization | Pp. 1108-1117

A Distributed Hybrid Algorithm for Optimized Resource Allocation Problem

Kyeongmo Park; Sungcheol Kim; Chuleui Hong

This paper presents a novel distributed Mean-field Genetic Algorithm, called MGA, for load-balancing problems in MPI environments. MGA is a hybrid of Mean Field Annealing (MFA) and a simulated-annealing-like Genetic Algorithm (SGA), combining the rapid convergence of MFA with the effective genetic operations of SGA. Our experimental results indicate that this combination of heuristic mapping methods outperforms the conventional ones in terms of communication cost, load imbalance, and maximum execution time. We also show that the distributed algorithm retains the convergence properties of the sequential algorithm while achieving almost linear speedup as the problem size increases.

- Neurodynamic and Particle Swarm Optimization | Pp. 1118-1127

A Swarm Optimization Model for Energy Minimization Problem of Early Vision

Wenhui Zhou; Lili Lin; Weikang Gu

This paper proposes a swarm optimization model, based on a multi-colony ant scheme, for the energy minimization problem of early vision. Swarm optimization is a young artificial-intelligence field that has proved suitable for various combinatorial optimization problems. Compared with general optimization problems, energy minimization in early vision has unique characteristics: higher dimensionality, a more complicated solution-space structure, and dynamic constraints. In this paper the vision energy functions are optimized by repeatedly minimizing a number of sub-problems, following the divide-and-conquer principle, with each colony allocated to optimize one sub-problem independently. An information-exchange strategy between neighboring colonies and an adaptive method for the dynamic problem then implement the global optimization. As a typical example, stereo correspondence is solved with the proposed swarm optimization model; experiments show that the method achieves good results.

- Neurodynamic and Particle Swarm Optimization | Pp. 1128-1137

PSO-Based Hyper-Parameters Selection for LS-SVM Classifiers

X. C. Guo; Y. C. Liang; C. G. Wu; C. Y. Wang

Determining the hyper-parameters, i.e. the kernel parameters and the regularization constant, is important to the performance of least squares support vector machines (LS-SVMs). This paper discusses the model-selection problem for LS-SVMs and introduces particle swarm optimization (PSO) to select the LS-SVM hyper-parameters. The proposed method requires no assumptions about the analytic properties of the generalization-performance measure or about the number of hyper-parameters. Its feasibility is evaluated on benchmark data sets, and the experimental results show that better performance can be obtained. Moreover, different kernel families are investigated with the proposed method; the results show that the best test performance is obtained with the SRBF kernel function and good performance with the RBF kernel function.

- Neurodynamic and Particle Swarm Optimization | Pp. 1138-1147
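The search loop such a PSO-based selector runs can be sketched as a basic global-best PSO. The sketch below is a minimal illustration, not the authors' implementation: the LS-SVM cross-validation error is replaced by a stand-in quadratic objective over two hypothetical log-scale hyper-parameters, and all constants (inertia weight, acceleration coefficients, swarm size) are generic defaults.

```python
import random

def pso_minimize(objective, bounds, n_particles=10, n_iter=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over a box via a basic global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep the particle inside the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in for the cross-validation error of an LS-SVM as a function of
# (log gamma, log sigma); this toy surface has its minimum at (1.0, -2.0).
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, best_f = pso_minimize(err, [(-5, 5), (-5, 5)])
```

In the actual method, `objective` would evaluate the generalization-performance measure of an LS-SVM trained with the candidate hyper-parameters, which is exactly why no analytic (e.g. differentiability) assumptions are needed.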

Training RBF Neural Networks with PSO and Improved Subtractive Clustering Algorithms

JunYing Chen; Zheng Qin

In this paper, particle swarm optimization (PSO) and an improved subtractive clustering algorithm are proposed for training RBF neural networks. PSO performs feature selection, with RBF classifiers used to evaluate the fitness of individuals. During RBF training, a supervised mean subtractive clustering algorithm (SMSCA) evolves the RBF networks dynamically on the feature subset selected by PSO. Experimental results on four datasets show that RBF networks evolved by the proposed algorithm have a simpler architecture and stronger generalization ability, with nearly the same classification performance, compared with networks evolved by other methods.

- Neurodynamic and Particle Swarm Optimization | Pp. 1148-1155

Training RBF Neural Network Via Quantum-Behaved Particle Swarm Optimization

Jun Sun; Wenbo Xu; Jing Liu

Radial Basis Function (RBF) networks are widely applied in function approximation, system identification, chaotic time-series forecasting, etc. To use an RBF network, a training algorithm is needed to determine the network parameters. The existing training algorithms, such as the Orthogonal Least Squares (OLS) algorithm, clustering, and gradient descent, have their own shortcomings. In this paper we explore the applicability of Quantum-behaved Particle Swarm Optimization (QPSO), a recently proposed evolutionary search technique, to training RBF neural networks. The QPSO-trained RBF network was tested on a nonlinear system identification problem, and the results show that it identifies the system more quickly and precisely than a network trained by the standard Particle Swarm algorithm.

- Neurodynamic and Particle Swarm Optimization | Pp. 1156-1163
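The QPSO update differs from standard PSO in that particles have no velocity: each is resampled around a stochastic attractor between its personal best and the global best, with a jump scaled by the distance to the mean of all personal bests. The sketch below is a generic textbook-style QPSO on a stand-in sphere objective, not the paper's RBF-training code; the linearly decreasing contraction-expansion coefficient `beta` is a common choice, assumed here.

```python
import math
import random

def qpso_minimize(objective, bounds, n_particles=20, n_iter=100, seed=1):
    """Basic Quantum-behaved PSO with a linearly decreasing
    contraction-expansion coefficient beta (1.0 -> 0.5)."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_f = [objective(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(n_iter):
        beta = 1.0 - 0.5 * t / n_iter
        # mean of all personal bests ("mainstream thought point")
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i in range(n_particles):
            for d in range(dim):
                phi = rng.random()
                u = 1.0 - rng.random()            # u in (0, 1], safe for log
                attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
                step = beta * abs(mbest[d] - x[i][d]) * math.log(1.0 / u)
                x[i][d] = attractor + step if rng.random() < 0.5 else attractor - step
                x[i][d] = min(max(x[i][d], bounds[d][0]), bounds[d][1])
            f = objective(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

# Stand-in objective; in the paper, this would be the RBF network's
# identification error as a function of its centers, widths, and weights.
sphere = lambda p: p[0] ** 2 + p[1] ** 2
best, best_f = qpso_minimize(sphere, [(-10, 10), (-10, 10)])
```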

Discrete Particle Swarm Optimization and EM Hybrid Approach for Naive Bayes Clustering

Jing-Hua Guan; Da-You Liu; Si-Pei Liu

This paper presents an improved Naive Bayes clustering algorithm. Parameter values are commonly estimated from incomplete data with the EM (Expectation-Maximization) algorithm, but EM has a well-known drawback: it converges to locally optimal solutions. We therefore propose a hybrid of Discrete Particle Swarm Optimization (DPSO) and EM to improve the global search performance. We apply the approach to four real-world data sets from the UCI repository and compare the clustering performance of the new algorithm with that of EM. In this comparison, the hybrid DPSO+EM algorithm is more effective and outperforms the plain EM approach.

- Neurodynamic and Particle Swarm Optimization | Pp. 1164-1173
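The EM half of the hybrid, and its local-search character, can be illustrated on the simplest mixture case. The sketch below is generic EM for a two-component 1-D Gaussian mixture, not the paper's Naive Bayes implementation; the deterministic min/max initialization is an assumption made here to keep the example reproducible.

```python
import math
import random

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: alternate computing
    posterior responsibilities (E-step) and re-estimating the parameters
    (M-step). Each pass increases the likelihood, but only toward a
    local optimum -- the weakness the DPSO hybrid is meant to address."""
    mu = [min(data), max(data)]       # crude deterministic initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return pi, mu, var

# Two well-separated synthetic clusters around 0 and 5.
rng = random.Random(1)
data = ([rng.gauss(0, 1) for _ in range(100)]
        + [rng.gauss(5, 1) for _ in range(100)])
pi, mu, var = em_gmm_1d(data)
```

In the hybrid, DPSO would search over discrete assignments/configurations globally while EM refines the continuous parameters locally from each candidate.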

Extended Particle Swarm Optimiser with Adaptive Acceleration Coefficients and Its Application in Nonlinear Blind Source Separation

Ying Gao; Zhaohui Li; Hui Zheng; Huailiang Liu

First, based on particle swarm optimization, an extended particle swarm optimizer with adaptive acceleration coefficients (EPSO_AAC) is presented. The personal best particle is replaced by the average of all personal bests in the swarm at each generation, and time-varying acceleration coefficients are obtained by establishing a nonlinear functional relationship between the coefficients and the difference between the average fitness of all particles and the fitness of the global best particle. The proposed algorithm thus uses more of the particles' information and adaptively adjusts the "cognition" and "social" components through the time-varying acceleration coefficients, improving convergence performance. The algorithm is then applied to nonlinear blind source separation: the demixing system for the nonlinear mixtures is modeled by a multi-input multi-output B-spline neural network whose weights are optimized by EPSO_AAC under the criterion of independence of its outputs. Experimental results demonstrate that the proposed algorithm is effective and has good convergence performance.

- Neurodynamic and Particle Swarm Optimization | Pp. 1174-1182
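The coefficient-adaptation idea can be sketched as a small function that feeds the standard velocity update. The exponential squashing used below is an illustrative assumption, not the paper's actual nonlinear relationship; only the qualitative behavior (the coefficients respond to the fitness gap between the swarm average and the global best) follows the abstract.

```python
import math

def adaptive_coeffs(avg_fitness, gbest_fitness, c_min=0.5, c_max=2.5):
    """Derive time-varying acceleration coefficients from the gap between
    the swarm-average fitness and the global-best fitness (minimization).
    A large gap (scattered swarm) strengthens the social pull toward the
    global best; a small gap strengthens the cognitive component."""
    gap = max(avg_fitness - gbest_fitness, 0.0)
    s = 1.0 - math.exp(-gap)                 # squash the gap into [0, 1)
    c1 = c_max - (c_max - c_min) * s         # "cognition" coefficient
    c2 = c_min + (c_max - c_min) * s         # "social" coefficient
    return c1, c2

# Per the abstract, EPSO_AAC also replaces the personal best with the
# average of all personal bests (avg_pbest) in the velocity update:
#   v = w*v + c1*r1*(avg_pbest - x) + c2*r2*(gbest - x)
```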

Application of a Hybrid Ant Colony Optimization for the Multilevel Thresholding in Image Processing

Yun-Chia Liang; Angela Hsiang-Ling Chen; Chiuh-Cheng Chyu

Our study proposes a hybrid optimization scheme that couples an ant colony optimization (ACO) algorithm with the Otsu method, making optimal thresholding more applicable and effective. The discriminant analysis underlying Otsu's method measures the separability of the gray levels in the image. The ACO-Otsu algorithm, a non-parametric and unsupervised method, is the first known application of ACO to automatic threshold selection for image segmentation. The experimental results show that ACO-Otsu speeds up Otsu's method considerably for multilevel thresholding, and that it performs well with a population size of 20 for all image types considered in this study.

- Neurodynamic and Particle Swarm Optimization | Pp. 1183-1192
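Otsu's discriminant criterion, which the ants optimize, can be sketched directly: for a set of thresholds it scores the between-class variance of the resulting gray-level classes. This is a minimal sketch on a toy bimodal histogram (an assumption for illustration); the brute-force loop at the end is the single-threshold exhaustive baseline whose cost at multiple thresholds motivates using ACO instead.

```python
def otsu_score(hist, thresholds):
    """Between-class variance of a 256-bin grayscale histogram split at
    the given thresholds; higher means better class separability."""
    total = sum(hist)
    cuts = [0] + sorted(thresholds) + [len(hist)]
    mu_total = sum(i * h for i, h in enumerate(hist)) / total
    score = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = sum(hist[lo:hi]) / total          # class probability
        if w == 0:
            continue                          # empty class contributes 0
        mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
        score += w * (mu - mu_total) ** 2
    return score

# Toy bimodal histogram: modes at bins 10-29 and 70-89 (256 bins total).
hist = [0] * 10 + [30] * 20 + [0] * 40 + [30] * 20 + [0] * 166

# Exhaustive single-threshold search; with k thresholds this grows
# combinatorially, which is where the ACO search takes over.
best_t = max(range(1, 256), key=lambda t: otsu_score(hist, [t]))
```

Any threshold falling in the empty valley between the two modes scores equally well, so `best_t` lands between bins 30 and 70.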