Publications catalog - books



Multi-Objective Machine Learning

Yaochu Jin (ed.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Publication year: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-30676-4

Electronic ISBN

978-3-540-33019-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer 2006

Table of contents

Multi-Objective Machine Learning

Yaochu Jin (ed.)

Pp. not available

Feature Selection Using Rough Sets

Mohua Banerjee; Sushmita Mitra; Ashish Anand

Feature selection refers to the selection of the input attributes that are most predictive of a given outcome. The problem is encountered in many areas, such as machine learning, signal processing, and, more recently, bioinformatics/computational biology. Feature selection is one of the most important and challenging tasks when it comes to dealing with large datasets with tens or hundreds of thousands of variables. Web mining and gene-expression array analysis provide examples where the selection of interesting and useful features determines the performance of the subsequent analysis. The intrinsic noise, uncertainty, and incompleteness of such data make the extraction of hidden and useful information very difficult. The capability of rough sets to handle imprecision, inexactness, and noise has attracted researchers to use them for feature selection. This article provides an overview of the recent literature in this direction.
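The rough-set route to feature selection can be illustrated with a toy reduct search over a tiny decision table. The data and attribute indices below are invented for illustration; this is a brute-force sketch of the discernibility idea, not the algorithms surveyed in the chapter.

```python
from itertools import combinations

# Toy decision table: each row is (attribute values, decision class).
rows = [
    ((0, 1, 0), "yes"),
    ((1, 1, 0), "no"),
    ((0, 0, 1), "yes"),
    ((1, 0, 1), "no"),
]

def discerns(attrs, a, b):
    """True if attribute subset `attrs` distinguishes objects a and b."""
    return any(rows[a][0][i] != rows[b][0][i] for i in attrs)

def is_reduct(attrs):
    """A subset preserves discernibility if it distinguishes every pair
    of objects that belong to different decision classes."""
    return all(
        discerns(attrs, a, b)
        for a, b in combinations(range(len(rows)), 2)
        if rows[a][1] != rows[b][1]
    )

# Enumerate attribute subsets by size; the smallest ones that preserve
# discernibility are the reducts.
reducts = [
    set(c)
    for k in range(1, 4)
    for c in combinations(range(3), k)
    if is_reduct(c)
]
print(reducts[0])  # smallest discernibility-preserving subset: {0}
```

On this toy table attribute 0 alone separates the two decision classes, so `{0}` is a reduct; real rough-set feature selectors avoid this exponential enumeration.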

Keywords: Feature Selection; Probabilistic Neural Network; Decision Table; Discernibility Matrix; True Reducts.

I - Multi-Objective Clustering, Feature Extraction and Feature Selection | Pp. 3-20

Multi-Objective Clustering and Cluster Validation

Julia Handl; Joshua Knowles

This chapter is concerned with unsupervised classification, that is, the analysis of data sets for which no (or very little) training data is available. The main goals in this data-driven type of analysis are the discovery of a data set’s underlying structure, and the identification of groups (or clusters) of homogeneous data items — a process commonly referred to as cluster analysis.
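Multi-objective clustering of the kind discussed here trades off complementary criteria, for instance cluster compactness against connectedness of neighbouring points. The sketch below computes two such objectives for a fixed toy labeling; the data, the neighbourhood size, and the exact objective forms are illustrative assumptions, not the chapter's algorithm.

```python
from math import dist  # Python 3.8+

# Toy 2-D data and a candidate cluster assignment (illustrative only).
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
labels = [0, 0, 0, 1, 1]

def compactness(points, labels):
    """Overall deviation: summed distance of each point to its cluster centroid."""
    total = 0.0
    for c in set(labels):
        members = [p for p, l in zip(points, labels) if l == c]
        cx = sum(p[0] for p in members) / len(members)
        cy = sum(p[1] for p in members) / len(members)
        total += sum(dist(p, (cx, cy)) for p in members)
    return total

def connectivity(points, labels, k=2):
    """Penalty whenever a point's near neighbours sit in a different cluster."""
    penalty = 0.0
    for i, p in enumerate(points):
        # k nearest neighbours of p, excluding p itself.
        neigh = sorted(range(len(points)), key=lambda j: dist(p, points[j]))[1:k + 1]
        for rank, j in enumerate(neigh, start=1):
            if labels[j] != labels[i]:
                penalty += 1.0 / rank
    return penalty

# A multi-objective clusterer searches for labelings that are
# Pareto-optimal with respect to both values simultaneously.
print(compactness(points, labels), connectivity(points, labels))
```

Optimizing either objective alone is degenerate (singleton clusters minimize compactness; one big cluster minimizes the connectivity penalty), which is precisely why a Pareto treatment is attractive.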

Keywords: Pareto Front; Data Item; Cluster Solution; Cluster Problem; Single Link.

I - Multi-Objective Clustering, Feature Extraction and Feature Selection | Pp. 21-47

Feature Selection for Ensembles Using the Multi-Objective Optimization Approach

Luiz S. Oliveira; Marisa Morita; Robert Sabourin

Feature selection for ensembles has been shown to be an effective strategy for ensemble creation, owing to its ability to produce good subsets of features that make the classifiers of the ensemble disagree on difficult cases. In this paper we present an ensemble feature selection approach based on a hierarchical multi-objective genetic algorithm. The underpinning paradigm is “overproduce and choose”. The algorithm operates at two levels: first it performs feature selection to generate a set of classifiers, and then it chooses the best team of classifiers. To show its robustness, the method is evaluated in two different contexts: supervised and unsupervised feature selection. In the former, we considered the problem of handwritten digit recognition and used three different feature sets and multi-layer perceptron neural networks as classifiers. In the latter, we addressed the problem of handwritten month-word recognition and used three different feature sets and hidden Markov models as classifiers. Experiments and comparisons with classical methods, such as bagging and boosting, demonstrated that the proposed methodology brings compelling improvements when classifiers have to work at very low error rates.

Keywords: Feature Selection; Recognition Rate; Feature Subset; Feature Selection Algorithm; Perform Feature Selection.

I - Multi-Objective Clustering, Feature Extraction and Feature Selection | Pp. 49-74

Feature Extraction Using Multi-Objective Genetic Programming

Yang Zhang; Peter I Rockett

A generic, optimal feature-extraction method using multi-objective genetic programming (MOGP) is presented. The methodology has been applied to the well-known edge-detection problem in image processing, and detailed comparisons are made with the Canny edge detector. We show that the superior performance of MOGP in terms of minimizing misclassification is due to its effective optimal feature extraction. Furthermore, to compare different evolutionary approaches, two popular techniques - PCGA and SPGA - have been extended to genetic programming as PCGP and SPGP and applied to five datasets from the UCI database. Both evolutionary approaches yield comparable misclassification errors within the present framework, but PCGP produces more compact transformations.

Keywords: Feature Extraction; Genetic Programming; Pareto Front; Multiobjective Optimization; Decision Space.

I - Multi-Objective Clustering, Feature Extraction and Feature Selection | Pp. 75-99

Regression Error Characteristic Optimisation of Non-Linear Models

Jonathan E. Fieldsend

In this chapter, recent research in the area of multi-objective optimisation of regression models is presented and combined. Evolutionary multi-objective optimisation techniques are described for training a population of regression models to optimise the recently defined Regression Error Characteristic (REC) curves, a method which meaningfully compares across regressors and against benchmark models (i.e. ‘random walk’ and maximum a posteriori approaches) for varying error rates. By bootstrapping the training data, degrees of confident out-performance are also highlighted.
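An REC curve plots, for each error tolerance, the fraction of samples whose residual falls within that tolerance, so regressors can be compared at every accuracy level rather than by a single aggregate error. A minimal sketch, using made-up residuals rather than any model from the chapter:

```python
# Regression Error Characteristic (REC) curve: for each tolerance t,
# the fraction of samples whose absolute residual is <= t.
residuals = [0.1, 0.4, 0.05, 0.8, 0.2, 0.3]  # hypothetical model errors

def rec_curve(residuals, tolerances):
    n = len(residuals)
    return [sum(abs(r) <= t for r in residuals) / n for t in tolerances]

tolerances = [0.0, 0.25, 0.5, 1.0]
accuracies = rec_curve(residuals, tolerances)
print(list(zip(tolerances, accuracies)))
# -> [(0.0, 0.0), (0.25, 0.5), (0.5, 0.8333...), (1.0, 1.0)]
```

The curve is non-decreasing by construction; a regressor whose curve lies above a benchmark's (such as the null 'random walk' model) at every tolerance dominates it.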

Keywords: Null Model; Pareto Front; Evolutionary Computation; Multiobjective Optimization; Hidden Unit.

II - Multi-Objective Learning for Accuracy Improvement | Pp. 103-123

Regularization for Parameter Identification Using Multi-Objective Optimization

Tomonari Furukawa; Chen Jian Ken Lee; John G. Michopoulos

Regularization is a technique used to find a stable solution when a parameter identification problem is exposed to considerable errors. A significant difficulty associated with it, however, is that the solution depends on the value assigned to the weighting regularization parameter in the corresponding formulation. This chapter first briefly describes the weighted regularization method. It then introduces a weightless regularization approach that reduces the parameter identification problem to multi-objective optimization. Subsequently, a gradient-based multi-objective optimization method with Lagrange multipliers is presented. Comparative numerical results with explicitly defined objective functions demonstrate that the technique can search for appropriate solutions more efficiently than other existing techniques. Finally, the technique is successfully applied to the parameter identification of a material model.
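The weight-dependence problem the chapter addresses is easy to demonstrate on a one-parameter least-squares identification: the Tikhonov-regularized estimate changes with the regularization weight. The data and the scalar closed form below are illustrative assumptions, not the chapter's material model.

```python
# Weighted Tikhonov regularization for a one-parameter identification:
# minimize sum((w*x - y)^2) + lam * w^2 over the scalar parameter w.
xs = [1.0, 2.0, 3.0]   # toy inputs
ys = [1.1, 1.9, 3.2]   # toy noisy observations

def identify(lam):
    # Closed-form minimizer of the weighted single-objective formulation.
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

for lam in (0.0, 1.0, 10.0):
    print(lam, identify(lam))
# As lam grows, the estimate is pulled toward zero: the answer depends on
# an arbitrary weight, which a weightless multi-objective treatment avoids
# by exposing the whole (residual, regularizer) trade-off curve instead.
```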

Keywords: Objective Function; Tikhonov Regularization; Search Point; Cyclic Plasticity; Viscoplastic Strain.

II - Multi-Objective Learning for Accuracy Improvement | Pp. 125-149

Multi-Objective Algorithms for Neural Networks Learning

Antônio Pádua Braga; Ricardo H. C. Takahashi; Marcelo Azevedo Costa; Roselito de Albuquerque Teixeira

Most supervised learning algorithms for artificial neural networks (ANNs) aim at minimizing the sum of the squared errors of the training data [12, 11, 5, 10]. It is well known that learning algorithms based only on error minimization do not guarantee models with good generalization performance. In addition to the training-set error, other network-related parameters should be adapted in the learning phase in order to control generalization performance. The need for more than a single objective function paves the way for treating the supervised learning problem with multi-objective optimization techniques. Although the learning problem is multi-objective by nature, only recently has it been given a formal multi-objective optimization treatment [16]. The problem has been treated from different points of view over the last two decades.

Keywords: Hidden Node; Validation Error; Pruning Method; Neural Network Learning; Weight Decay.

II - Multi-Objective Learning for Accuracy Improvement | Pp. 151-171

Generating Support Vector Machines Using Multi-Objective Optimization and Goal Programming

Hirotaka Nakayama; Yeboon Yun

The Support Vector Machine (SVM) has gained much popularity in recent years as one of the most effective methods for machine learning. In pattern classification problems with two class sets, it generalizes linear classifiers into high-dimensional feature spaces through nonlinear mappings defined implicitly by kernels in a Hilbert space, so that it may produce nonlinear classifiers in the original data space. The linear classifiers are then optimized to give the maximal margin of separation between the classes. This task is performed by solving some type of mathematical program, such as a quadratic program (QP) or a linear program (LP). From the viewpoint of mathematical programming for machine learning, on the other hand, the idea of maximal margin separation was employed in the multi-surface method (MSM) suggested by Mangasarian in the 1960s, and linear classifiers using goal programming were developed extensively in the 1980s. This chapter introduces a new family of SVMs using multi-objective programming and goal programming (MOP/GP) techniques, and discusses their effectiveness through several numerical experiments.
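The margin-maximization task that the chapter reformulates with MOP/GP can be sketched as regularized hinge-loss minimization solved by stochastic subgradient descent. The toy data, learning rate, and epoch count are assumptions for illustration; this is not the chapter's goal-programming formulation, and a real SVM would solve the QP/LP exactly.

```python
import random

# Linearly separable toy data in 2-D with labels in {-1, +1}.
data = [((2.0, 2.0), 1), ((3.0, 3.0), 1), ((-2.0, -2.0), -1), ((-3.0, -1.0), -1)]

def train_svm(data, lam=0.01, lr=0.05, epochs=200, seed=0):
    """Soft-margin linear SVM: subgradient descent on
    lam/2 * ||w||^2 + hinge loss, one random sample per step."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        x, y = rng.choice(data)
        margin = y * (w[0] * x[0] + w[1] * x[1] + b)
        # Shrink w (margin maximization) and, if this point violates the
        # unit margin, push the separating hyperplane toward it.
        w = [wi - lr * lam * wi for wi in w]
        if margin < 1:
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return w, b

w, b = train_svm(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1 for x, _ in data]
print(preds)
```

The `lam` term plays the margin-maximization role and the hinge term the separation role; treating them as two objectives instead of one weighted sum is exactly the MOP/GP move the chapter develops.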

Keywords: Support Vector Machine; Multiobjective Optimization; Maximal Margin; High Dimensional Feature Space; Support Vector Machine Algorithm.

II - Multi-Objective Learning for Accuracy Improvement | Pp. 173-198

Multi-Objective Optimization of Support Vector Machines

Thorsten Suttorp; Christian Igel

Designing supervised learning systems is, in general, a multi-objective optimization problem. It requires finding appropriate trade-offs between several objectives, for example between model complexity and accuracy, or between sensitivity and specificity. We consider the adaptation of the kernel and regularization parameters of support vector machines (SVMs) by means of multi-objective evolutionary optimization. Support vector machines are reviewed from the multi-objective perspective, and different encodings and model selection criteria are described. The optimization of split modified radius-margin model selection criteria is demonstrated on benchmark problems. The MOO approach to SVM design is then evaluated on a real-world pattern recognition task, namely the real-time detection of pedestrians in infrared images for driver assistance systems. Here the three objectives are the minimization of the false positive rate, the false negative rate, and the number of support vectors, the last of which reduces the computational complexity.
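With three objectives like these, evolutionary model selection keeps only the non-dominated SVM configurations. A minimal Pareto-filter sketch; the candidate names and their (FPR, FNR, #support vectors) scores are made-up placeholders, not results from the chapter:

```python
# Candidate SVM configurations, each scored by three objectives to
# minimize: (false positive rate, false negative rate, support vectors).
candidates = {
    "svm-a": (0.02, 0.30, 120),
    "svm-b": (0.05, 0.10, 200),
    "svm-c": (0.06, 0.12, 210),  # worse than svm-b in every objective
    "svm-d": (0.10, 0.05, 80),
}

def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

pareto = {
    name for name, score in candidates.items()
    if not any(dominates(other, score)
               for o_name, other in candidates.items() if o_name != name)
}
print(sorted(pareto))  # -> ['svm-a', 'svm-b', 'svm-d']
```

Only `svm-c` is discarded; the surviving set is the Pareto front from which a final trade-off (e.g. favouring few support vectors for real-time detection) is chosen.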

Keywords: Support Vector Machine; False Positive Rate; Pareto Front; False Negative Rate; Slack Variable.

II - Multi-Objective Learning for Accuracy Improvement | Pp. 199-220