Publications catalog - books



MICAI 2006: Advances in Artificial Intelligence: 5th Mexican International Conference on Artificial Intelligence, Apizaco, Mexico, November 13-17, 2006, Proceedings

Alexander Gelbukh; Carlos Alberto Reyes-Garcia (eds.)

Conference: 5th Mexican International Conference on Artificial Intelligence (MICAI). Apizaco, Mexico. November 13-17, 2006

Abstract/Description - provided by the publisher

Not available.

Keywords - provided by the publisher

Artificial Intelligence (incl. Robotics); Computation by Abstract Devices; Mathematical Logic and Formal Languages; Image Processing and Computer Vision

Availability

Institution detected: none. Publication year: 2006. Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-49026-5

Electronic ISBN

978-3-540-49058-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2006

Table of contents

A Polynomial Algorithm for 2-Cyclic Robotic Scheduling

Vladimir Kats; Eugene Levner

We solve a single-robot, m-machine cyclic scheduling problem arising in flexible manufacturing systems served by computer-controlled robots. The problem is to find the minimum cycle time for so-called 2-cyclic (or "2-degree") schedules, in which exactly two parts enter and two parts leave the production line during each cycle. An earlier polynomial-time algorithm for this problem was applicable only to the Euclidean case, where the transportation times must satisfy the triangle inequality. In this paper we study the general non-Euclidean case. Applying a geometric approach, we construct a polynomial-time algorithm of complexity O(· log ·).

- Optimization and Scheduling | Pp. 439-449

A New Algorithm That Obtains an Approximation of the Critical Path in the Job Shop Scheduling Problem

Marco Antonio Cruz-Chávez; Juan Frausto-Solís

This paper presents a new algorithm that obtains an approximation of the critical path in schedules generated using the disjunctive graph model that represents the Job Shop Scheduling Problem (JSSP). The algorithm selects a set of operations in the JSSP such that, on average, ninety-nine percent of the operations in the set belong to the critical path. A comparison of cost and performance is made between the proposed algorithm, CPA (Critical Path Approximation), and the classic algorithm, CPM (Critical Path Method). The results demonstrate that the proposed algorithm is very efficient and effective at generating neighborhoods in the simulated annealing algorithm for the JSSP.

- Optimization and Scheduling | Pp. 450-460
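The CPM baseline that the abstract compares against can be sketched compactly. A minimal illustration of critical-path computation on a disjunctive-graph-style DAG; the operation names and durations below are hypothetical, and this is plain CPM, not the paper's CPA:

```python
from collections import defaultdict

def critical_path(ops, arcs):
    """Classic CPM on a DAG. ops: {operation: processing time};
    arcs: (u, v) precedence pairs. Returns (makespan, critical path)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    # topological order (Kahn's algorithm)
    order, stack = [], [o for o in ops if indeg[o] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # longest-path relaxation in topological order
    finish = {o: ops[o] for o in ops}
    pred = {}
    for u in order:
        for v in succ[u]:
            if finish[u] + ops[v] > finish[v]:
                finish[v] = finish[u] + ops[v]
                pred[v] = u
    end = max(finish, key=finish.get)
    path = [end]
    while path[-1] in pred:
        path.append(pred[path[-1]])
    return finish[end], path[::-1]
```

On the toy graph `{'a': 3, 'b': 2, 'c': 4, 'd': 1}` with arcs a→b, a→c, b→d, c→d, the longest chain is a, c, d with makespan 8.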

A Quay Crane Scheduling Method Considering Interference of Yard Cranes in Container Terminals

Da Hun Jung; Young-Man Park; Byung Kwon Lee; Kap Hwan Kim; Kwang Ryel Ryu

Quay cranes are the most important equipment in port container terminals, because they are directly related to wharf productivity. This study proposes a heuristic search algorithm, the greedy randomized adaptive search procedure (GRASP), for constructing a schedule of quay cranes that minimizes the makespan while considering interference with yard cranes. The performance of the heuristic algorithm was tested in a numerical experiment.

- Optimization and Scheduling | Pp. 461-471
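The GRASP metaheuristic named in the abstract follows a standard two-phase template: a greedy randomized construction driven by a restricted candidate list, then local search, repeated from many starts. A minimal sketch on a toy crane-assignment problem; the task times, move-based local search, and plain makespan objective are illustrative stand-ins for the paper's full quay-crane model:

```python
import random

def grasp_crane_schedule(times, n_cranes=2, iters=50, rcl_size=3, seed=0):
    """GRASP skeleton: randomized greedy construction + local search,
    keeping the best assignment (task -> crane) found over all restarts."""
    rng = random.Random(seed)
    best, best_mk = None, float("inf")
    for _ in range(iters):
        # construction: pick randomly among the rcl_size longest remaining
        # tasks, always assigning to the currently least-loaded crane
        remaining = sorted(range(len(times)), key=lambda i: -times[i])
        loads, assign = [0.0] * n_cranes, {}
        while remaining:
            t = remaining.pop(rng.randrange(min(rcl_size, len(remaining))))
            c = loads.index(min(loads))
            assign[t] = c
            loads[c] += times[t]
        # local search: move single tasks between cranes while it helps
        improved = True
        while improved:
            improved = False
            for t in list(assign):
                c = assign[t]
                for c2 in range(n_cranes):
                    if c2 == c:
                        continue
                    trial = loads[:]
                    trial[c] -= times[t]
                    trial[c2] += times[t]
                    if max(trial) < max(loads):
                        loads, assign[t] = trial, c2
                        improved = True
                        break
        if max(loads) < best_mk:
            best, best_mk = dict(assign), max(loads)
    return best_mk, best
```

The restricted candidate list is what distinguishes GRASP from plain greedy: each restart explores a different greedy trajectory, and local search polishes each one.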

Comparing Schedule Generation Schemes in Memetic Algorithms for the Job Shop Scheduling Problem with Sequence Dependent Setup Times

Miguel A. González; Camino R. Vela; María Sierra; Inés González; Ramiro Varela

The Job Shop Scheduling Problem with Sequence Dependent Setup Times (SDST-JSP) is an extension of the Job Shop Scheduling Problem (JSP) that has attracted researchers' interest in recent years. In this paper we confront the problem by means of a memetic algorithm. We study two schedule generation schemes that are extensions of the well-known G&T algorithm for the JSP. We report results from an experimental study showing that the proposed approaches produce similar results and that both are more efficient than another genetic algorithm proposed in the literature.

- Optimization and Scheduling | Pp. 472-482
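The G&T (Giffler & Thompson) scheme that the memetic algorithm extends builds active schedules by repeatedly resolving a conflict set on the most urgent machine. A minimal sketch for the plain JSP, without setup times; the two-job instance and the first-in-list choice rule used below are hypothetical (a memetic algorithm would decode its chromosome at the `choose` step):

```python
def giffler_thompson(jobs, choose=lambda conflict: conflict[0]):
    """Giffler & Thompson active-schedule builder for the job shop.
    jobs: one route per job, each a list of (machine, duration) pairs.
    Returns a list of (job, machine, start, end) tuples."""
    nxt = [0] * len(jobs)             # index of each job's next operation
    job_ready = [0] * len(jobs)       # earliest start per job
    mach_ready = {}                   # earliest start per machine
    schedule = []
    while any(n < len(jobs[j]) for j, n in enumerate(nxt)):
        # earliest (completion, start) of every job's next operation
        cand = []
        for j, n in enumerate(nxt):
            if n < len(jobs[j]):
                m, d = jobs[j][n]
                s = max(job_ready[j], mach_ready.get(m, 0))
                cand.append((s + d, s, j, m, d))
        cmin, _, _, mstar, _ = min(cand)   # machine with minimal completion
        # conflict set: operations on mstar that could start before cmin
        conflict = [c for c in cand if c[3] == mstar and c[1] < cmin]
        _, s, j, m, d = choose(conflict)
        schedule.append((j, m, s, s + d))
        job_ready[j] = s + d
        mach_ready[m] = s + d
        nxt[j] += 1
    return schedule
```

Every schedule this loop produces is active: no operation can start earlier without delaying another, which is why G&T-style decoders are popular inside genetic and memetic algorithms.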

A Fuzzy Set Approach for Evaluating the Achievability of an Output Time Forecast in a Wafer Fabrication Plant

Toly Chen

Lot output time prediction is a critical task for a wafer fab (fabrication plant). Traditional studies focus on prediction accuracy and efficiency. Another performance measure that is just as important, but has been ignored in traditional studies, is the achievability of an output time forecast, defined as the possibility that fabrication of a wafer lot can be finished before the forecast output time. Theoretically, if a probability distribution can be obtained for the output time forecast, then the achievability can be evaluated as the cumulative probability of that distribution before the given date. However, many managerial actions are more influential on achievability. For this reason, a fuzzy set approach is proposed for evaluating the achievability of an output time forecast. The approach is composed of two parts: a fuzzy back propagation network (FBPN) and a set of fuzzy inference rules (FIRs). An example is used to demonstrate the applicability of the proposed methodology.

- Optimization and Scheduling | Pp. 483-493
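A fuzzy rule base of the kind the abstract describes can be illustrated in a few lines. The membership ranges, the two inputs (schedule slack and fab load), and the rule consequents below are invented for illustration and are not the paper's FIRs:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def achievability(slack_days, fab_load):
    """Toy fuzzy inference: each rule fires with the min of its
    antecedent memberships; the crisp output is the firing-strength
    weighted average of the rule consequents (0 = unachievable,
    1 = certainly achievable)."""
    slack_low, slack_high = tri(slack_days, -1, 0, 2), tri(slack_days, 1, 3, 5)
    load_low, load_high = tri(fab_load, 0.3, 0.5, 0.8), tri(fab_load, 0.6, 0.9, 1.2)
    rules = [
        (min(slack_high, load_low), 0.9),   # much slack, light load -> high
        (min(slack_high, load_high), 0.6),
        (min(slack_low, load_low), 0.4),
        (min(slack_low, load_high), 0.1),   # little slack, heavy load -> low
    ]
    den = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / den if den else 0.5
```

In the paper this inference stage sits on top of an FBPN that produces the forecast itself; here only the rule-evaluation mechanics are shown.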

How Good Are the Bayesian Information Criterion and the Minimum Description Length Principle for Model Selection? A Bayesian Network Analysis

Nicandro Cruz-Ramírez; Héctor-Gabriel Acosta-Mesa; Rocío-Erandi Barrientos-Martínez; Luis-Alonso Nava-Fernández

The Bayesian Information Criterion (BIC) and the Minimum Description Length principle (MDL) have been widely proposed as good metrics for model selection. Such scores basically include two terms: one for accuracy and the other for complexity. Their philosophy is to find a model that rightly balances these terms. However, it is surprising that both metrics often do not work well in practice, for they overfit the data. In this paper, we present an analysis of the BIC and MDL scores, using the framework of Bayesian networks, that supports this claim. To this end, we carry out different tests that include the recovery of gold-standard network structures as well as the construction and evaluation of Bayesian network classifiers. Finally, based on these results, we discuss the disadvantages of both metrics and propose future work to examine these limitations more deeply.

- Machine Learning and Feature Selection | Pp. 494-504
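For a discrete Bayesian network, the BIC score under analysis decomposes into a per-node maximized log-likelihood minus a parameter-count penalty of 0.5·k·log N. A minimal sketch computed from count statistics; the dict-based row format is an assumption:

```python
import math
from collections import Counter

def bic_score(data, parents):
    """BIC of a discrete Bayesian network structure.
    data: list of {variable: value} rows; parents: {var: tuple of parents}.
    Per node: max log-likelihood given its parents minus 0.5*k*log(N)."""
    n = len(data)
    score = 0.0
    for var, pa in parents.items():
        states = {row[var] for row in data}
        joint = Counter((tuple(row[p] for p in pa), row[var]) for row in data)
        marg = Counter(tuple(row[p] for p in pa) for row in data)
        # log-likelihood term: sum of c * log(c / parent-configuration count)
        ll = sum(c * math.log(c / marg[cfg]) for (cfg, _), c in joint.items())
        k = (len(states) - 1) * len(marg)   # free parameters at this node
        score += ll - 0.5 * k * math.log(n)
    return score
```

On data where B always equals A, the structure with the arc A→B scores higher than the empty structure: the likelihood gain outweighs the extra-parameter penalty. The paper's point is that this balance often tips the wrong way on realistic data.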

Prediction of Silkworm Cocoon Yield in China Based on Grey-Markov Forecasting Model

Lingxia Huang; Peihua Jin; Yong He; Chengfu Lou; Min Huang; Mingang Chen

Grey prediction and Markov chain prediction can both be used for forecasting over time, and their combination can be applied extensively in forecasting. In this paper, we studied the precision of the Grey-Markov forecasting model based on the original data of silkworm cocoon yield in China from 1950 to 1999. The precisions of the Grey-Markov forecasting model for 2000 to 2003 are 95.56%, 95.17% and 94.40% respectively, higher than those of GM(1,1) and second only to the exponential smoothing method and linear regression. The paper provides a scientific basis for the planned development of sericulture in China.

- Machine Learning and Feature Selection | Pp. 505-512
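The GM(1,1) model that Grey-Markov builds on fits a first-order grey differential equation to the cumulated series and forecasts with its exponential response. A minimal sketch; the geometric test series in the usage note is illustrative, not the cocoon-yield data:

```python
import math

def gm11(x0, steps=1):
    """GM(1,1) grey model: cumulate the series, fit x0(k) = -a*z(k) + b
    by least squares on the background values z, then forecast with the
    exponential response and de-cumulate. Returns fitted values plus
    `steps` forecasts."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # cumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    m = n - 1
    sz, sy = sum(z), sum(x0[1:])
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    # normal equations of the least-squares fit for (a, b)
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    c = b / a
    x1_hat = [(x0[0] - c) * math.exp(-a * k) + c for k in range(n + steps)]
    return [x0[0]] + [x1_hat[k] - x1_hat[k - 1] for k in range(1, n + steps)]
```

On a near-exponential series such as `[100 * 1.05 ** k for k in range(5)]`, the one-step forecast lands close to the true continuation; the Markov-chain layer of Grey-Markov then corrects the residual states of this fit.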

A Novel Hybrid System with Neural Networks and Hidden Markov Models in Fault Diagnosis

Qiang Miao; Hong-Zhong Huang; Xianfeng Fan

Condition monitoring and classification of machinery health state is of great practical significance in the manufacturing industry, because it provides updated information on machine status on-line, thus avoiding production loss and minimizing the chance of catastrophic machine failures. This is a pattern recognition problem, and a condition monitoring system based on a hybrid of neural networks and hidden Markov models (HMMs) is proposed in this paper. A neural network performs dimensionality reduction on Lipschitz exponent functions obtained from vibration data as input features, and a hidden Markov model is used for condition classification. The machinery condition is identified by selecting the HMM that maximizes the probability of a given observation sequence. Finally, the proposed method is validated using gearbox vibration data.

- Machine Learning and Feature Selection | Pp. 513-521
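The selection step described, picking the HMM that maximizes the probability of the observation sequence, rests on the forward algorithm. A minimal sketch with discrete observations; the two-state "healthy"/"faulty" models in the usage note are invented for illustration:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM), with initial
    probabilities pi, transition matrix A and discrete emission
    matrix B (rows: states, columns: symbols)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    ll = math.log(s)
    alpha = [a / s for a in alpha]          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
        s = sum(alpha)
        ll += math.log(s)
        alpha = [a / s for a in alpha]
    return ll

def classify(obs, models):
    """Return the condition whose HMM gives the observation sequence
    the highest likelihood, as in the paper's selection rule."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

With one HMM per machine condition (each trained on that condition's reduced feature sequences), `classify` implements the arg-max likelihood decision the abstract describes.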

Power System Database Feature Selection Using a Relaxed Perceptron Paradigm

Manuel Mejía-Lavalle; Gustavo Arroyo-Figueroa

Feature selection has become a relevant and challenging problem in the area of knowledge discovery in databases. An effective feature selection strategy can significantly reduce data mining processing time, improve prediction accuracy, and help make the induced models easier to understand, as they tend to be smaller and make more sense to the user. In this paper, we investigate the Perceptron paradigm as a method for feature selection. The idea is to train a Perceptron and then use its interconnection weights as indicators of which attributes could be the most relevant. We assume that an interconnection weight close to zero indicates that the attribute associated with this weight can be eliminated, because it does not contribute relevant information to the construction of the class-separating hyperplane. The experiments were conducted on 4 real and 11 synthetic databases. The results show that the proposed algorithm is a good trade-off among performance (generalization accuracy), efficiency (processing time) and feature reduction. Specifically, we apply the algorithm to a Mexican electrical billing database with satisfactory accuracy, efficiency and feature-reduction results.

- Machine Learning and Feature Selection | Pp. 522-531
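The weight-based selection criterion can be sketched directly. A minimal illustration in which the label-carrying feature earns the largest weight and a constant-zero feature keeps a zero weight; the toy data set in the usage note is hypothetical:

```python
import random

def perceptron_feature_ranking(X, y, epochs=100, lr=0.1, seed=0):
    """Train a single perceptron on binary labels and rank features by
    |weight|: a weight near zero suggests the attribute contributes
    little to the class-separating hyperplane."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    w, bias = [0.0] * n_feat, 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            pred = 1 if sum(wj * xj for wj, xj in zip(w, X[i])) + bias > 0 else 0
            err = y[i] - pred               # -1, 0 or +1
            if err:
                w = [wj + lr * err * xj for wj, xj in zip(w, X[i])]
                bias += lr * err
    return sorted(range(n_feat), key=lambda j: -abs(w[j]))
```

On data where feature 0 equals the label and features 1 and 2 are uninformative (a constant 0.5 and a constant 0.0), the ranking places feature 0 first and the never-updated zero feature last, which is exactly the eliminate-near-zero-weights heuristic.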

Feature Elimination Approach Based on Random Forest for Cancer Diagnosis

Ha-Nam Nguyen; Trung-Nghia Vu; Syng-Yup Ohn; Young-Mee Park; Mi Young Han; Chul Woo Kim

The performance of learning tasks is very sensitive to the characteristics of the training data. There are several ways to increase learning performance, including standardization, normalization, signal enhancement, and linear or non-linear space embedding methods. Among these, determining the relevant and informative features is one of the key steps in the data analysis process; it helps to improve performance, reduce the amount of data, and understand its characteristics. Researchers have developed various methods to extract the set of relevant features, but no one method prevails. Random Forest, an ensemble classifier based on a set of tree classifiers, delivers good classification performance. Taking advantage of Random Forest and using the wrapper approach first introduced by Kohavi et al., we propose a new algorithm to find the optimal subset of features. Random Forest is used to obtain the feature ranking values, and these values are applied to decide which features are eliminated in each iteration of the algorithm. We conducted experiments with two public datasets: colon cancer and leukemia. The experimental results on real-world data showed that the proposed method achieves a higher prediction rate than a baseline method for certain data sets, and also shows comparable and sometimes better performance than widely used feature selection methods.

- Machine Learning and Feature Selection | Pp. 532-542
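The iterative elimination loop the abstract describes is independent of the ranker plugged into it. A minimal sketch in which a correlation-based ranker and a nearest-centroid evaluator stand in for the paper's Random Forest importances and classifier; all helper names and the toy data in the usage note are assumptions:

```python
import statistics

def corr_rank(X, y, feats):
    """Stand-in ranker: order features by |correlation| with the label
    (the paper uses Random Forest importance values here instead)."""
    def score(j):
        col = [row[j] for row in X]
        sd = statistics.pstdev(col)
        if sd == 0:
            return 0.0
        cov = sum((c - statistics.mean(col)) * (yi - statistics.mean(y))
                  for c, yi in zip(col, y))
        return abs(cov) / (sd * statistics.pstdev(y) * len(y))
    return sorted(feats, key=score, reverse=True)

def centroid_accuracy(X, y, feats):
    """Stand-in evaluator: nearest-class-centroid accuracy on a subset."""
    classes = sorted(set(y))
    cents = {c: [statistics.mean(row[j] for row, yi in zip(X, y) if yi == c)
                 for j in feats] for c in classes}
    hits = sum(min(classes, key=lambda c: sum((row[j] - m) ** 2
               for j, m in zip(feats, cents[c]))) == yi
               for row, yi in zip(X, y))
    return hits / len(y)

def backward_elimination(X, y, rank, evaluate, min_feat=1):
    """Wrapper-style loop: repeatedly drop the worst-ranked feature and
    keep the subset that scores best under `evaluate`."""
    feats = list(range(len(X[0])))
    best_feats, best_score = feats[:], evaluate(X, y, feats)
    while len(feats) > min_feat:
        feats = rank(X, y, feats)[:-1]      # drop the worst-ranked feature
        score = evaluate(X, y, feats)
        if score > best_score:
            best_feats, best_score = feats[:], score
    return sorted(best_feats), best_score
```

On a toy set where only feature 0 tracks the label, the loop prunes the noise features while retaining feature 0 and a perfect evaluation score; substituting Random Forest for both `rank` and `evaluate` recovers the structure of the proposed method.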