Publication catalog - books



MICAI 2006: Advances in Artificial Intelligence: 5th Mexican International Conference on Artificial Intelligence, Apizaco, Mexico, November 13-17, 2006, Proceedings

Alexander Gelbukh; Carlos Alberto Reyes-Garcia (eds.)

In conference: 5th Mexican International Conference on Artificial Intelligence (MICAI), Apizaco, Mexico, November 13-17, 2006

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Image Processing and Computer Vision

Availability

Detected institution: not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-49026-5

Electronic ISBN

978-3-540-49058-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

The Adaptive Learning Rates of Extended Kalman Filter Based Training Algorithm for Wavelet Neural Networks

Kyoung Joo Kim; Jin Bae Park; Yoon Ho Choi

Since the convergence of neural networks depends on the learning rates, the learning rates of a training algorithm for neural networks are very important factors. Therefore, we propose Adaptive Learning Rates (ALRs) for the Extended Kalman Filter (EKF) based training algorithm for wavelet neural networks (WNNs). The ALRs of the EKF-based training algorithm ensure the convergence of the WNN. We also derive the convergence analysis of the learning process from the discrete Lyapunov stability theorem. Several simulation results show that the EKF-based WNN with ALRs adapts to abrupt changes and high nonlinearity with satisfactory performance.

- Neural Networks | Pp. 327-337
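
As a rough illustration of the idea in the abstract above, the following Python sketch trains a tiny wavelet neural network with an EKF-style parameter update. The Mexican-hat wavelet, the network size, the noise variance and the simple decaying scale factor standing in for the adaptive learning rates are all assumptions for illustration; the paper's Lyapunov-derived ALRs are not reproduced here.

```python
# Minimal sketch (not the authors' exact algorithm): EKF-style parameter update
# for a 1-D wavelet neural network. The wavelet form, network size and the
# simple learning-rate schedule below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_wavelons = 5
# Parameter vector theta = [weights w, translations b, dilations a]
theta = np.concatenate([rng.normal(size=n_wavelons),
                        np.linspace(-1, 1, n_wavelons),
                        np.ones(n_wavelons)])

def wnn_output(theta, x):
    w, b, a = np.split(theta, 3)
    z = (x - b) / a
    psi = (1 - z**2) * np.exp(-0.5 * z**2)      # Mexican-hat mother wavelet
    return float(w @ psi)

def jacobian(theta, x, eps=1e-6):
    # Numerical Jacobian of the scalar output w.r.t. the parameters
    g = np.zeros_like(theta)
    for i in range(theta.size):
        tp = theta.copy(); tp[i] += eps
        tm = theta.copy(); tm[i] -= eps
        g[i] = (wnn_output(tp, x) - wnn_output(tm, x)) / (2 * eps)
    return g

P = np.eye(theta.size) * 10.0   # parameter covariance
R = 0.1                         # measurement noise variance (assumed)

for step in range(200):
    x = rng.uniform(-1, 1)
    target = np.sin(2 * np.pi * x)              # toy target function
    H = jacobian(theta, x)
    err = target - wnn_output(theta, x)
    S = H @ P @ H + R
    K = (P @ H) / S                             # Kalman gain
    # lr stands in for the adaptive learning rates; this decaying schedule is
    # an assumption, not the paper's Lyapunov-derived ALRs.
    lr = 1.0 / (1.0 + 0.01 * step)
    theta = theta + lr * K * err
    P = P - np.outer(K, H @ P)
```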

Multistage Neural Network Metalearning with Application to Foreign Exchange Rates Forecasting

Kin Keung Lai; Lean Yu; Wei Huang; Shouyang Wang

In this study, we propose a multistage neural network metalearning technique for financial time series prediction. First, an interval sampling technique is used to generate different training subsets. Different neural network models are then trained on these subsets to formulate different base models. Subsequently, to improve the efficiency of metalearning, the principal component analysis (PCA) technique is used as a pruning tool to generate an optimal set of base models. Finally, a neural-network-based metamodel is produced by learning from the selected base models. For illustration, the proposed metalearning technique is applied to foreign exchange rate prediction.

- Neural Networks | Pp. 338-347
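
A minimal sketch of the multistage pipeline described above, using scikit-learn. The toy series, the interval-sampling rule, the PCA-based pruning criterion and all hyperparameters are assumptions, not the authors' experimental setup.

```python
# Illustrative multistage metalearning pipeline (assumed details throughout).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.arange(1000)
series = np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)   # toy "exchange rate"

# Build (lagged inputs -> next value) samples
lags = 5
X = np.array([series[i:i + lags] for i in range(series.size - lags)])
y = series[lags:]

# Stage 1: interval sampling -> different training subsets
n_base = 8
subsets = [(X[k::n_base], y[k::n_base]) for k in range(n_base)]

# Stage 2: train one base neural network per subset
base_models = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                            random_state=k).fit(Xs, ys)
               for k, (Xs, ys) in enumerate(subsets)]

# Stage 3: PCA over the base models' predictions, used as a pruning tool
preds = np.column_stack([m.predict(X) for m in base_models])
pca = PCA().fit(preds)
loadings = np.abs(pca.components_[0])        # crude selection rule (assumed)
keep = np.argsort(loadings)[-4:]             # keep 4 "most informative" models
selected = [base_models[i] for i in keep]

# Stage 4: a neural-network metamodel learns from the selected base models
meta_X = np.column_stack([m.predict(X) for m in selected])
meta = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000,
                    random_state=0).fit(meta_X, y)
print("metamodel R^2:", meta.score(meta_X, y))
```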

Genetic Optimizations for Radial Basis Function and General Regression Neural Networks

Gül Yazıcı; Övünç Polat; Tülay Yıldırım

The topology of a neural network has a significant impact on the network’s performance. Although this is well known, finding optimal configurations is still an open problem. This paper proposes a solution to this problem for Radial Basis Function (RBF) networks and the General Regression Neural Network (GRNN), which is a kind of radial basis network. In such networks, the placement of centers has a significant effect on performance. The centers and widths of the hidden layer neuron basis functions are coded in a chromosome, and these two critical parameters are determined by optimization using genetic algorithms. Thyroid, iris and Escherichia coli bacteria datasets are used to test the algorithm proposed in this study. The most important advantage of this algorithm is that it obtains successful results using only a small part of a benchmark. Some numerical results indicate the applicability of the proposed approach.

- Neural Networks | Pp. 348-356
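
The chromosome encoding described above can be illustrated with a small sketch: centres and widths are concatenated into one real-valued vector and evolved with a simple genetic algorithm, while the output weights are fitted by least squares. The toy data, the GA operators and all parameters are assumptions, not the authors' setup.

```python
# Sketch: GA over RBF centres and widths encoded in one chromosome (assumed details).
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                               # toy regression target

n_centres = 6

def decode(chrom):
    centres = chrom[:n_centres].reshape(-1, 1)
    widths = np.abs(chrom[n_centres:]) + 1e-3     # keep widths positive
    return centres, widths

def fitness(chrom):
    centres, widths = decode(chrom)
    # Hidden-layer activations of the RBF network
    Phi = np.exp(-((X - centres.T) ** 2) / (2 * widths ** 2))
    # Output weights by least squares, then negative mean squared error
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return -np.mean((Phi @ w - y) ** 2)           # higher is better

pop = rng.normal(size=(40, 2 * n_centres))
for gen in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]  # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, 2 * n_centres)      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(scale=0.1, size=child.size) * (rng.random(child.size) < 0.2)
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best MSE:", -fitness(best))
```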

Complexity of Alpha-Beta Bidirectional Associative Memories

María Elena Acevedo-Mosqueda; Cornelio Yáñez-Márquez; Itzamá López-Yáñez

Most models of Bidirectional Associative Memories aim to ensure that all trained patterns correspond to stable states; however, this has not been possible. Also, none of the former models has been able to recall all the trained patterns. A recently proposed model, called the Alpha-Beta Bidirectional Associative Memory (BAM), recalls 100% of the trained patterns without error. Moreover, the model is non-iterative and has no stability problems. In this work, an analysis of the time and space complexity of the Alpha-Beta BAM is presented.

- Neural Networks | Pp. 357-366

A New Bi-directional Associative Memory

Roberto A. Vázquez; Humberto Sossa; Beatriz A. Garro

Hebbian hetero-associative learning is inherently asymmetric. Storing a forward association from pattern x to pattern y enables the recall of pattern y given pattern x. In general, this does not allow the recall of pattern x given pattern y: the forward association between x and y will tend to be stronger than the backward association between y and x. In this paper it is described how the dynamical associative model proposed in [10] can be extended to create a bi-directional associative memory where the forward association between x and y is equal to the backward association between y and x. This implies that storing a forward association from pattern x to pattern y enables both the recall of pattern y given pattern x and the recall of pattern x given pattern y. We give some formal results that support the functioning of the proposal, and provide some examples where the proposal finds application.

- Neural Networks | Pp. 367-380
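
For background only, the sketch below shows a classical Kosko-style BAM with a Hebbian weight matrix and forward/backward recall. It is not the dynamical model of [10] that the paper extends; it just makes the forward/backward recall setting of the abstract concrete, on invented random patterns.

```python
# Classical (Kosko-style) BAM baseline, for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n_pairs, nx, ny = 3, 16, 12
Xp = rng.choice([-1, 1], size=(n_pairs, nx))      # input patterns
Yp = rng.choice([-1, 1], size=(n_pairs, ny))      # associated output patterns

W = sum(np.outer(y, x) for x, y in zip(Xp, Yp))   # Hebbian hetero-association

def recall_forward(x):
    return np.sign(W @ x)                          # x -> y

def recall_backward(y):
    return np.sign(W.T @ y)                        # y -> x

ok_fwd = sum(np.array_equal(recall_forward(x), y) for x, y in zip(Xp, Yp))
ok_bwd = sum(np.array_equal(recall_backward(y), x) for x, y in zip(Xp, Yp))
print(f"forward recall: {ok_fwd}/{n_pairs}, backward recall: {ok_bwd}/{n_pairs}")
```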

A Hybrid Ant Algorithm for the Airline Crew Pairing Problem

Broderick Crawford; Carlos Castro; Eric Monfroy

This article analyzes the performance of Ant Colony Optimization algorithms on the resolution of the Crew Pairing Problem, one of the most critical processes in airline management operations. Furthermore, we explore the hybridization of Ant algorithms with Constraint Programming techniques. We show that, for the instances tested from Beasley’s OR-Library, this kind of hybrid algorithm obtains good results compared to the best-performing metaheuristics in the literature.

- Optimization and Scheduling | Pp. 381-391
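
A hedged sketch of the ant-colony component on a tiny set-covering view of crew pairing, where each column is a candidate pairing covering some flights. The instance, the heuristic and the pheromone rules are invented for illustration, and the paper's constraint-programming hybridization is not reproduced.

```python
# Ant-colony sketch on a toy set-covering crew-pairing instance (assumed details).
import numpy as np

rng = np.random.default_rng(4)
n_flights, n_pairings = 8, 15
covers = rng.random((n_pairings, n_flights)) < 0.3
for f in range(n_flights):                        # make sure every flight is coverable
    if not covers[:, f].any():
        covers[rng.integers(n_pairings), f] = True
cost = covers.sum(axis=1).astype(float) + 1.0     # toy pairing costs

tau = np.ones(n_pairings)                         # pheromone per pairing
alpha, beta, rho = 1.0, 2.0, 0.1
best_cost, best_sol = np.inf, None

for iteration in range(50):
    for ant in range(10):
        uncovered = np.ones(n_flights, dtype=bool)
        chosen = []
        while uncovered.any():
            gain = (covers & uncovered).sum(axis=1)        # newly covered flights
            candidates = np.flatnonzero(gain > 0)
            desir = tau[candidates] ** alpha * (gain[candidates] / cost[candidates]) ** beta
            pick = rng.choice(candidates, p=desir / desir.sum())
            chosen.append(pick)
            uncovered &= ~covers[pick]
        c = cost[chosen].sum()
        if c < best_cost:
            best_cost, best_sol = c, chosen
    tau *= (1 - rho)                              # evaporation
    tau[best_sol] += 1.0 / best_cost              # reinforce the best solution found
print("best cost:", best_cost, "pairings:", sorted(best_sol))
```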

A Refined Evaluation Function for the MinLA Problem

Eduardo Rodriguez-Tello; Jin-Kao Hao; Jose Torres-Jimenez

This paper introduces a refined evaluation function, called Φ, for the Minimum Linear Arrangement problem (MinLA). Compared with the classical evaluation function, Φ integrates additional information contained in an arrangement to distinguish among arrangements with the same value. The main characteristics of Φ are analyzed, and its practical usefulness is assessed within both a Steepest Descent (SD) algorithm and a Memetic Algorithm (MA). Experiments show that the use of Φ boosts the performance of SD and MA, leading to the improvement of some previously best known solutions.

- Optimization and Scheduling | Pp. 392-403
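
The classical MinLA cost sums the absolute label differences over the edges of the graph. The sketch below computes this cost and a hypothetical tie-breaking refinement; the secondary term used here is only an assumed stand-in, since the abstract does not give the exact definition of Φ.

```python
# Classical MinLA cost plus a hypothetical tie-breaking refinement (assumed).
import itertools

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy graph

def la_cost(arrangement):
    # arrangement[v] = label (position) assigned to vertex v
    return sum(abs(arrangement[u] - arrangement[v]) for u, v in edges)

def refined_cost(arrangement):
    lengths = sorted(abs(arrangement[u] - arrangement[v]) for u, v in edges)
    # Primary key: classical cost. Secondary key: the edge-length profile,
    # so two arrangements with equal LA can still be distinguished.
    return (sum(lengths), tuple(lengths))

best = min(itertools.permutations(range(4)), key=refined_cost)
print("best arrangement:", best, "LA =", la_cost(best))
```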

ILS-Perturbation Based on Local Optima Structure for the QAP Problem

Everardo Gutiérrez; Carlos A. Brizuela

Many problems in AI can be stated as search problems, and most of them are very complex to solve. One alternative for these problems is local search methods, which have been widely used for tackling difficult optimization problems for which no algorithm is known that can solve every instance to optimality in a reasonable amount of time. One of the most popular methods is iterated local search (ILS), which samples the set of local optima searching for a better solution. A key aspect of this algorithm is the perturbation mechanism, since it allows the algorithm to reach a new solution from the set of local optima by escaping from the previous local optimum’s basin of attraction. In order to design a good perturbation method, we need to analyze the local optima structure so that ILS leads to a good biased sampling. In this paper, the local optima structure of the Quadratic Assignment Problem, an NP-hard optimization problem, is used to determine the required perturbation size in the ILS algorithm. The analysis focuses on verifying whether the set of local optima has the “Big Valley” (BV) structure, and on how close local optima are in relation to problem size. Experimental results show that a small perturbation seems appropriate for instances having the BV structure, and for instances having a low distance among local optima even if they do not have a clear BV structure. Finally, as the local optima structure moves away from BV, a larger perturbation is needed.

- Optimization and Scheduling | Pp. 404-414
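
A minimal ILS sketch on a small random QAP instance, with the perturbation size k exposed as the parameter the abstract above discusses. The instance, the 2-exchange local search and the improvement-only acceptance rule are illustrative assumptions.

```python
# ILS for a toy QAP instance; perturbation size k is the tunable parameter.
import numpy as np

rng = np.random.default_rng(5)
n = 10
F = rng.integers(0, 10, size=(n, n)); np.fill_diagonal(F, 0)   # flows
D = rng.integers(0, 10, size=(n, n)); np.fill_diagonal(D, 0)   # distances

def cost(p):
    # QAP objective: sum_ij F[i,j] * D[p[i], p[j]]
    return int((F * D[np.ix_(p, p)]).sum())

def local_search(p):
    p = p.copy()
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                q = p.copy(); q[i], q[j] = q[j], q[i]
                if cost(q) < cost(p):
                    p, improved = q, True
    return p

def perturb(p, k):
    # k random transpositions: a small k suits "big valley" instances,
    # a larger k is needed as the local-optima structure flattens out.
    p = p.copy()
    for _ in range(k):
        i, j = rng.integers(n, size=2)
        p[i], p[j] = p[j], p[i]
    return p

best = local_search(rng.permutation(n))
for _ in range(30):
    q = local_search(perturb(best, k=3))
    if cost(q) < cost(best):
        best = q
print("best QAP cost:", cost(best))
```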

Application of Fuzzy Multi-objective Programming Approach to Supply Chain Distribution Network Design Problem

Hasan Selim; Irem Ozkarahan

A supply chain distribution network design model is developed in this paper. The goal of the model is to select the optimal numbers, locations and capacity levels of plants and warehouses so as to deliver products to the retailers at the least cost while satisfying the desired service level. A maximal covering approach is employed to state the service level. Unlike previous research in this area, coverage functions that differ among retailers according to their requested service standards are defined. Additionally, to provide a more realistic model structure, the decision maker’s imprecise aspiration levels for the goals and the demand uncertainties are incorporated into the model through a fuzzy modeling approach. Realistic computational experiments are provided to confirm the viability of the model.

- Optimization and Scheduling | Pp. 415-425
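
One common way to model imprecise aspiration levels is to attach a linear membership function to each goal and score a design by its worst membership (max-min aggregation). The sketch below illustrates that idea on invented candidate designs and aspiration levels; the paper formulates a full mathematical programming model rather than an enumeration.

```python
# Illustration of fuzzy goals with linear memberships and max-min scoring.
def linear_membership(value, best, worst):
    # 1.0 at the aspiration level `best`, 0.0 at (or beyond) `worst`
    if best < worst:                       # goal to be minimised (e.g. cost)
        return max(0.0, min(1.0, (worst - value) / (worst - best)))
    return max(0.0, min(1.0, (value - worst) / (best - worst)))   # maximise

# (total cost in $, service coverage in %) for three invented candidate networks
candidates = {"2 plants / 3 warehouses": (1_150_000, 88.0),
              "3 plants / 4 warehouses": (1_300_000, 96.0),
              "2 plants / 5 warehouses": (1_240_000, 93.0)}

def satisfaction(cost, coverage):
    mu_cost = linear_membership(cost, best=1_100_000, worst=1_400_000)
    mu_cov = linear_membership(coverage, best=95.0, worst=85.0)
    return min(mu_cost, mu_cov)            # max-min aggregation of fuzzy goals

best_design = max(candidates, key=lambda k: satisfaction(*candidates[k]))
print(best_design, "->", round(satisfaction(*candidates[best_design]), 3))
```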

Route Selection and Rate Allocation Using Evolutionary Computation Algorithms in Multirate Multicast Networks

Sun-Jin Kim; Mun-Kee Choi

In this paper, we simultaneously address the route selection and rate allocation problem in multirate multicast networks. We propose an evolutionary computation algorithm based on a genetic algorithm for this problem and elaborate on many of its elements, including the genetic representation, evaluation function, genetic operators and procedure, in order to improve solution quality and computational efficiency. Additionally, a new method using an artificial intelligence search technique, called the coevolutionary algorithm, is proposed to achieve better solutions. The results of extensive computational simulations show that the proposed algorithms provide high-quality solutions and outperform an existing approach.

- Optimization and Scheduling | Pp. 426-438
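
A hedged sketch of one possible genetic representation for the joint problem: each receiver's gene holds a candidate-path index and a discrete rate level, and fitness is total utility minus a penalty for link-capacity violations. The tiny topology, the candidate paths and the fitness form are assumptions, not the authors' design.

```python
# Toy GA for joint route selection and rate allocation (assumed representation).
import numpy as np

rng = np.random.default_rng(6)

# Candidate paths per receiver, as lists of link ids (toy topology)
paths = {0: [[0, 1], [0, 2, 3]],
         1: [[0, 2, 4], [5, 4]],
         2: [[5, 6], [0, 2, 6]]}
rates = [1.0, 2.0, 4.0]                      # allowed multicast rate levels
capacity = np.full(7, 4.0)                   # capacity of links 0..6

def fitness(chrom):
    load = np.zeros(7)
    utility = 0.0
    for r, (path_idx, rate_idx) in enumerate(chrom):
        rate = rates[rate_idx]
        utility += np.log1p(rate)            # concave utility per receiver
        for link in paths[r][path_idx]:
            load[link] = max(load[link], rate)   # multirate: a link carries the max rate
    over = np.clip(load - capacity, 0, None).sum()
    return utility - 10.0 * over             # penalise capacity violations

def random_chrom():
    return [(rng.integers(len(paths[r])), rng.integers(len(rates)))
            for r in range(len(paths))]

pop = [random_chrom() for _ in range(30)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    children = []
    for _ in range(20):
        a, b = rng.choice(10, size=2)
        child = [survivors[a][r] if rng.random() < 0.5 else survivors[b][r]
                 for r in range(len(paths))]               # uniform crossover
        if rng.random() < 0.3:                              # mutation
            r = rng.integers(len(paths))
            child[r] = (rng.integers(len(paths[r])), rng.integers(len(rates)))
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print("best fitness:", round(float(fitness(best)), 3), "genes:", best)
```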