


MICAI 2005: Advances in Artificial Intelligence: 4th Mexican International Conference on Artificial Intelligence, Monterrey, Mexico, November 14-18, 2005, Proceedings

Alexander Gelbukh ; Álvaro de Albornoz ; Hugo Terashima-Marín (eds.)

In conference: 4th Mexican International Conference on Artificial Intelligence (MICAI). Monterrey, Mexico. November 14–18, 2005

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Image Processing and Computer Vision

Availability
Detected institution: not detected
Publication year: 2005
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-29896-0

Electronic ISBN

978-3-540-31653-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

Coevolutionary Multi-objective Optimization Using Clustering Techniques

Margarita Reyes Sierra; Carlos A. Coello Coello

We propose a new version of a multiobjective coevolutionary algorithm. The main idea of the proposed approach is to concentrate the search effort on promising regions that arise during the evolutionary process, identified by a clustering mechanism applied to the set of decision variables corresponding to the known Pareto front. The proposed approach is validated on several test functions taken from the specialized literature and is compared against its previous version and against another approach representative of the state of the art in evolutionary multiobjective optimization.

- Evolutionary Computation and Genetic Algorithms | Pp. 603-612
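The "known Pareto front" the clustering mechanism operates on can be made concrete with a standard non-dominated filter; a minimal sketch for minimization problems (function names are illustrative, not taken from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The paper's approach would then cluster the decision variables of these non-dominated solutions to identify promising regions for further search.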

A Comparison of Memetic Recombination Operators for the MinLA Problem

Eduardo Rodriguez-Tello; Jin-Kao Hao; Jose Torres-Jimenez

In this paper the Minimum Linear Arrangement (MinLA) problem is studied within the framework of memetic algorithms (MA). A new dedicated recombination operator called Trajectory Crossover (TX) is introduced and its performance is compared with four previous crossover operators. It is shown that the TX crossover induces better population diversity. The MA using TX is evaluated on a set of well-known benchmark instances and compared with several state-of-the-art MinLA algorithms.

- Evolutionary Computation and Genetic Algorithms | Pp. 613-622
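The MinLA objective these operators optimize is simple to state: place the vertices of a graph on a line and sum, over all edges, the distance between the endpoints' positions. A minimal sketch of the cost function, assuming the graph is given as an edge list (names are illustrative):

```python
def minla_cost(arrangement, edges):
    """Cost of a linear arrangement: sum over all edges (u, v) of
    |pos(u) - pos(v)|, where pos is a vertex's index in the arrangement."""
    pos = {v: i for i, v in enumerate(arrangement)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)
```

For example, on the path graph 0–1–2, the natural arrangement [0, 1, 2] has cost 2, while [1, 0, 2] stretches one edge and costs 3.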

Hybrid Particle Swarm – Evolutionary Algorithm for Search and Optimization

Crina Grosan; Ajith Abraham; Sangyong Han; Alexander Gelbukh

The Particle Swarm Optimization (PSO) technique has proved its ability to deal with very complicated optimization and search problems, and several variants of the original algorithm have been proposed. This paper proposes a novel hybrid PSO–evolutionary algorithm for solving the well-known geometrical place problems. Finding the geometrical place can sometimes be a hard task: in almost all situations the geometrical place consists of more than a single point. The performance of the newly proposed PSO algorithm is compared with evolutionary algorithms; the main advantage of the PSO technique is its speed of convergence. We also propose a hybrid algorithm combining PSO and evolutionary algorithms. The hybrid combination detects the geometrical place very quickly in cases where evolutionary algorithms required more time and the conventional PSO approach even failed to find the real geometrical place.

- Evolutionary Computation and Genetic Algorithms | Pp. 623-632

Particle Swarm Optimization with Opposite Particles

Rujing Wang; Xiaoming Zhang

The particle swarm optimization algorithm is a kind of intelligent optimization algorithm, but it is prone to becoming trapped in local optima when particle velocities are small. This paper presents a novel variant, particle swarm optimization with opposite particles (PSOOP), which is guaranteed to converge to the global optimum with probability one; a global convergence analysis is also given. Finally, three function optimizations are simulated to show that PSOOP is better and more efficient than PSO with inertia weights.

- Evolutionary Computation and Genetic Algorithms | Pp. 633-640
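For reference, the baseline these PSO variants modify is the canonical inertia-weight update. A minimal single-step sketch for minimization (the parameter values w, c1, c2 are typical defaults, not taken from the paper):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update with inertia weight w.
    positions, velocities, pbest: per-particle lists of coordinates;
    gbest: the best position found by the whole swarm so far."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

Note how a particle whose velocity and cognitive pull are both zero still drifts toward gbest; the "opposite particles" idea above is one way to counter the resulting loss of exploration.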

Particle Evolutionary Swarm Optimization with Linearly Decreasing ε-Tolerance

Angel E. Muñoz Zavala; Arturo Hernández Aguirre; Enrique R. Villa Diharce

We introduce the PESO (Particle Evolutionary Swarm Optimization) algorithm for solving single-objective constrained optimization problems. The PESO algorithm proposes two perturbation operators, "c-perturbation" and "m-perturbation", whose goal is to prevent the premature convergence and poor diversity issues observed in Particle Swarm Optimization (PSO) implementations. Constraint handling is based on simple feasibility rules, enhanced with a dynamic ε-tolerance approach applicable to equality constraints. PESO is compared with, and outperforms, highly competitive algorithms representative of the state of the art.

- Evolutionary Computation and Genetic Algorithms | Pp. 641-651
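The "simple feasibility rules" mentioned in the abstract are commonly the three Deb-style rules. A hedged sketch of how they might pair with an ε-tolerance on equality constraints (the function names and interfaces are illustrative, not PESO's actual code; the ε value would decrease linearly over generations, per the title):

```python
def violation(x, ineq, eq, eps):
    """Total constraint violation: inequalities g(x) <= 0 contribute their
    positive part; equalities h(x) = 0 are relaxed to |h(x)| <= eps."""
    v = sum(max(0.0, g(x)) for g in ineq)
    v += sum(max(0.0, abs(h(x)) - eps) for h in eq)
    return v

def better(xa, xb, f, ineq, eq, eps):
    """Feasibility rules: a feasible solution beats an infeasible one; between
    feasible ones, lower objective f wins; between infeasible ones, lower
    total violation wins. Returns the preferred solution."""
    va, vb = violation(xa, ineq, eq, eps), violation(xb, ineq, eq, eps)
    if va == 0 and vb == 0:
        return xa if f(xa) <= f(xb) else xb
    if va == 0:
        return xa
    if vb == 0:
        return xb
    return xa if va <= vb else xb
```

Shrinking eps toward zero tightens the relaxed equality constraints, so early generations may roam the infeasible region while late generations are forced onto it.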

Useful Infeasible Solutions in Engineering Optimization with Evolutionary Algorithms

Efrén Mezura-Montes; Carlos A. Coello Coello

We propose an evolutionary approach to solve engineering design problems without using penalty functions. The aim is to identify and maintain infeasible solutions close to the feasible region, located in promising areas. In this way, using the genetic operators, more solutions will be generated inside the feasible region and near its boundaries; as a result, the feasible region will be sampled well enough to reach better feasible solutions. The proposed approach, which is simple to implement, is compared against typical penalty function techniques as well as state-of-the-art approaches on four mechanical design problems. The results obtained are discussed and some conclusions are provided.

- Evolutionary Computation and Genetic Algorithms | Pp. 652-662

A Hybrid Self-adjusted Memetic Algorithm for Multi-objective Optimization

Xiuping Guo; Genke Yang; Zhiming Wu

A novel memetic algorithm for multi-objective optimization problems is proposed in this paper. The uniqueness of the method is that it hybridizes scalarizing selection with Pareto selection for exploitation and exploration. To extend the spread of solutions as quickly and fully as possible, scalarizing functions defined by a widely diversified set of weights are used to sweep all regions of objective space in the first phase of each generation. In the second phase, to intensify search ability and achieve global exploration, a grid-based method discovers gaps in the existing tradeoff surface, and a fuzzy local perturbation is employed to reproduce additional "good" individuals in the missing areas. Both exploitation and exploration are made dynamic and adaptive to online optimization conditions based on a function of the progress ratio, ensuring better stability of the algorithm. Compared with several state-of-the-art approaches on the same set of multi-objective 0/1 knapsack problem instances, experimental results show that the proposed method performs better to some extent in terms of finding a near-Pareto front and a well-extended nondominated set.

- Evolutionary Computation and Genetic Algorithms | Pp. 663-672
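The scalarizing-selection phase above relies on a diversified set of weight vectors. A minimal sketch of a weighted-sum scalarization with evenly spread weights for two objectives (illustrative only; the paper's exact scalarizing functions may differ):

```python
def weighted_sum(objs, weights):
    """Scalarize an objective vector with a weight vector
    (maximization, as in the 0/1 knapsack formulation)."""
    return sum(w * f for w, f in zip(weights, objs))

def diversified_weights(n):
    """n evenly spread weight vectors for a 2-objective problem,
    sweeping from (0, 1) to (1, 0)."""
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]
```

Each weight vector emphasizes a different region of the tradeoff surface, which is how the first phase spreads solutions across objective space.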

Evolutionary Multiobjective Optimization Approach for Evolving Ensemble of Intelligent Paradigms for Stock Market Modeling

Ajith Abraham; Crina Grosan; Sang Yong Han; Alexander Gelbukh

The use of intelligent systems for stock market prediction has been widely established. This paper introduces a genetic programming technique (called Multi-Expression Programming) for the prediction of two stock indices. Its performance is then compared with an artificial neural network trained using the Levenberg–Marquardt algorithm, a support vector machine, a Takagi–Sugeno neuro-fuzzy model, and a difference boosting neural network. As evident from the empirical results, none of the five considered techniques could find an optimal solution for all four performance measures. Further, the results obtained by these five techniques are combined in an ensemble, and two well-known Evolutionary Multiobjective Optimization (EMO) algorithms, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Pareto Archive Evolution Strategy (PAES), are applied to obtain an optimal ensemble combination that optimizes the four different performance measures (objectives). We considered the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index as test data. Empirical results reveal that the resulting ensemble obtains the best results.

- Evolutionary Computation and Genetic Algorithms | Pp. 673-681
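The ensemble stage described above amounts to searching for a weight vector that combines the five predictors' outputs; the EMO algorithms evaluate candidate weight vectors against the four error measures at once. A minimal sketch of such a weighted ensemble (the function name and interface are illustrative, not from the paper):

```python
def ensemble_predict(weights, predictions):
    """Normalized weighted combination of several models' predictions.
    weights: one non-negative weight per model (the EMO decision variables);
    predictions: one list of predicted values per model, equal lengths."""
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, predictions)) / total
            for i in range(len(predictions[0]))]
```

An EMO run would score each candidate weight vector on all four performance measures and keep the non-dominated tradeoffs rather than a single winner.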

Genetic Algorithms for Feature Weighting: Evolution vs. Coevolution and Darwin vs. Lamarck

Alexandre Blansché; Pierre Gançarski; Jerzy J. Korczak

Feature weighting is a more and more important step in clustering because data become more and more complex.

An embedded local feature weighting method has been proposed in [1].

In this paper, we present a new method based on the same cost function, but performed through a genetic algorithm. The learning process can be carried out through an evolutionary approach or through a cooperative coevolutionary approach. Moreover, the genetic algorithm can be combined with the original Weighting K-means algorithm in a Lamarckian learning paradigm.

We compare hill-climbing optimization versus genetic algorithms, evolutionary versus coevolutionary approaches, and Darwinian versus Lamarckian learning on different datasets.

The results seem to show that, on the datasets where the original algorithm is efficient, the proposed methods are even better.

- Evolutionary Computation and Genetic Algorithms | Pp. 682-691
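Feature weighting of the kind described above typically enters the clustering objective as a per-feature scaling of the distance between a point and its cluster center. A hedged sketch in the style of weighted k-means (the exponent beta and the names are illustrative, not the paper's exact formulation):

```python
def weighted_sq_dist(x, center, w, beta=2.0):
    """Feature-weighted squared distance: each feature's contribution is
    scaled by its weight raised to beta, so low-weight (noisy or irrelevant)
    features barely influence cluster assignment."""
    return sum((wi ** beta) * (xi - ci) ** 2
               for xi, ci, wi in zip(x, center, w))
```

A genetic or coevolutionary search over the weight vectors w then amounts to searching for the feature relevances that minimize the clustering cost.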

A Deterministic Alternative to Genetic Algorithms That Solves to Optimality Linearly Decomposable Non-overlapping Problems in Polynomial Time

Manuel Valenzuela-Rendón; Horacio Martínez-Alfaro; Hugo Terashima-Marín

David Goldberg has defined a genetic algorithm as one which "can solve hard problems, quickly, accurately, and reliably." Among other genetic algorithms that have been developed are the Bayesian optimization algorithm (BOA), the fast messy genetic algorithm (fmGA), and the linkage learning genetic algorithm (LLGA). These algorithms have been tested on problems of bounded difficulty that are additively separable, formed by deceptive subproblems of order not greater than k, where k < ℓ. BOA, fmGA, LLGA, and other genetic algorithms are stochastic, and thus can only be assured of attaining optimality in a probabilistic sense. In this paper, we develop a deterministic algorithm that solves to optimality all linearly decomposable problems in a polynomial number of function evaluations with respect to the maximum size of the subproblems, k. The algorithm presented does not rely on a population, and does not recombine individuals or apply any other operator. Furthermore, because it is deterministic, the number of function evaluations required to find the optimum can be known in advance. The algorithm presented solves both the linkage and the optimization problems by finding the disjoint sets of related variables and the optimal values of these variables at the same time. The fact that such an algorithm can be devised has important implications for the design of GA-hard problems, and for the development and evaluation of genetic optimization algorithms.

- Evolutionary Computation and Genetic Algorithms | Pp. 692-701
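The key property such an algorithm exploits is additive separability: once the disjoint blocks of related variables are known, each block can be optimized independently by exhaustive enumeration, for a total evaluation count polynomial in the number of blocks (times 2^k per block). A minimal sketch of that second half, assuming the partition is already known (the paper's algorithm also discovers the partition deterministically, which this sketch does not attempt):

```python
from itertools import product

def solve_separable(f, blocks, length):
    """Maximize a binary function f that is an additive sum of independent
    subfunctions over disjoint variable blocks. Each block of size k is
    solved by trying all 2^k settings while the other variables stay fixed;
    separability guarantees the blockwise optima compose into the global one."""
    x = [0] * length
    for block in blocks:
        best, best_val = None, None
        for setting in product([0, 1], repeat=len(block)):
            for pos, bit in zip(block, setting):
                x[pos] = bit
            val = f(x)
            if best_val is None or val > best_val:
                best, best_val = setting, val
        for pos, bit in zip(block, best):  # lock in the block's optimum
            x[pos] = bit
    return x
```

The evaluation count here is known in advance, mirroring the determinism claim in the abstract: it is exactly the sum of 2^k over the blocks.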