Publications catalog - books

Process Optimization: A Statistical Approach

Enrique Del Castillo

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: none · Publication year: 2007 · Available at: SpringerLink

Information

Resource type:

books

Print ISBN

978-0-387-71434-9

Electronic ISBN

978-0-387-71435-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag US 2007

Subject coverage

Table of contents

Introduction to Bayesian Inference

Enrique Del Castillo

Reverend Thomas Bayes, a Presbyterian minister who lived in England in the 18th century, wrote a manuscript on “inverse probability” related to the binomial distribution; it was published posthumously in 1763. Bayes’ goal was to make probability inferences about the parameter of a binomial distribution. In 1774, Laplace, working independently, stated what is now known as Bayes’ theorem in general form.

Bayesian inference combines prior beliefs about model parameters with evidence from data using Bayes’ theorem. This approach adopts a subjective interpretation of probability, in contrast to the “frequentist” approach, in which the probability of an event is the limit of a ratio of frequencies of events. The main criticisms of Bayesian analysis have been (1) that it is not objective (a point debated for many years) and (2) that the required computations are difficult. The second criticism has been overcome to a large extent in the last 10-15 years thanks to advances in integration methods, particularly Markov chain Monte Carlo (MCMC) methods. The objective of this chapter is to present an introduction to statistical inference problems from a Bayesian point of view. This leads, in the next chapter, to Bayesian regression and its use in process optimization.

Part V - Bayesian Approaches in Process Optimization | Pp. 291-319
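The binomial inference problem Bayes studied has a closed-form conjugate solution, which makes a minimal illustration possible without MCMC. The sketch below is not from the book; the prior parameters and data are invented for illustration:

```python
import numpy as np

def beta_binomial_update(a, b, successes, trials):
    """Conjugate Bayesian update for a binomial proportion theta.

    Prior: theta ~ Beta(a, b); likelihood: successes ~ Binomial(trials, theta).
    The posterior is again a Beta distribution, updated with the counts.
    """
    a_post = a + successes
    b_post = b + trials - successes
    post_mean = a_post / (a_post + b_post)
    return a_post, b_post, post_mean

# Uniform (non-informative) Beta(1, 1) prior, 7 successes in 10 trials:
a_post, b_post, mean = beta_binomial_update(1.0, 1.0, 7, 10)
# Posterior is Beta(8, 4), with posterior mean 8/12.
```

For non-conjugate models this closed form is unavailable, which is exactly where the MCMC methods mentioned above come in.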

Bayesian Methods for Process Optimization

Enrique Del Castillo

The mainstream literature on Response Surface Optimization is classical or “frequentist,” in that it treats model parameters as unknown constants that must be estimated from data. The sampling variability, or experimental error, is reflected in the sampling distributions of the estimates, and this variability can (and should) be considered in optimization; see Chapter 7. In contrast, the Bayesian approach to statistical inference treats model parameters (and in fact, any unknowns) as random variables. This has considerable advantages over the classical approach when optimizing a process based on a fitted model, since different parameter estimates lead to different optimal conditions. In the Bayesian approach, the uncertainty in the model’s parameters is directly incorporated into the analysis. Prior knowledge (which can be considerable, in agreement with Wiener’s quote above) can be incorporated into the optimization, if desired; otherwise, non-informative priors can be used for optimization purposes.

This chapter presents Bayesian linear regression models and their use in process optimization, with a minimum of technical detail but without sacrificing the main ideas, for readers not familiar with Bayesian analysis.

Part V - Bayesian Approaches in Process Optimization | Pp. 321-364
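The key idea of the chapter, treating the fitted parameters as random and propagating that uncertainty to the optimum, can be sketched with a simulated one-factor quadratic process. This is an illustration, not the book's method: it uses a normal approximation to the coefficient posterior (as under a flat prior with known-variance plug-in), and all data are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-factor quadratic process; the true optimum is at x = 1.5.
x = np.linspace(0, 3, 30)
y = -(x - 1.5) ** 2 + rng.normal(scale=0.2, size=x.size)

# Ordinary least squares fit of y = b0 + b1*x + b2*x^2.
X = np.column_stack([np.ones_like(x), x, x ** 2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (x.size - 3)
cov = s2 * np.linalg.inv(X.T @ X)

# Draw coefficient vectors from the approximate posterior and propagate each
# draw to the stationary point x* = -b1 / (2*b2): the result is a whole
# distribution of optimal settings rather than a single point estimate.
draws = rng.multivariate_normal(beta_hat, cov, size=2000)
x_star = -draws[:, 1] / (2 * draws[:, 2])
lo, hi = np.percentile(x_star, [2.5, 97.5])
```

A frequentist analysis would report only the single plug-in optimum; the posterior draws make the uncertainty in the optimal conditions explicit.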

Simulation Optimization

Enrique Del Castillo

This book has thus far focused on the optimization of industrial processes where a physical system or process exists and needs to be improved. There is growing awareness in the Statistics and Engineering literature of the need for, and usefulness of, methods for the optimization of simulation models of such physical processes. The models are a surrogate of a process or product, used to study and improve it without active intervention and with faster experimentation. When optimizing a simulation model, the optimal solution obtained from the simulation is implemented in the real system; evidently, the model must be an accurate representation of the system under study. We will not delve into the deep subject of simulation modeling and validation, for which a very large body of literature exists (see, e.g., the books [83, 4, 143]) and which is outside the scope of the present book. Our purpose in this chapter is to provide an introduction to some of the techniques that are useful in the optimization of simulated systems.

Part VI - Introduction to Optimization of Simulation and Computer Models | Pp. 367-378
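One of the simplest techniques in this area is to replicate the stochastic simulation at each candidate setting and optimize the replication averages (a sample-average approximation). A toy sketch, with an invented noisy quadratic "simulation" standing in for a real model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(setting, n_reps):
    """Toy stochastic simulation: noisy quadratic cost, minimized at 2.0."""
    return (setting - 2.0) ** 2 + rng.normal(scale=0.5, size=n_reps)

# Replicate the simulation at each candidate setting and estimate the
# expected cost by the replication average.
candidates = np.linspace(0, 4, 21)
est = np.array([simulate(c, 200).mean() for c in candidates])
best = candidates[est.argmin()]  # setting to implement in the real system
```

With enough replications the averages separate the candidates despite the simulation noise; more sophisticated methods spend the replication budget adaptively.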

Kriging and Computer Experiments

Enrique Del Castillo

Engineering problems in which complex computer codes must be run in order to obtain a solution are increasingly common. An example is a finite-element program that evaluates certain physical properties of a mechanical design. The finite-element model is accurate and deterministic, in the sense that runs from the same settings produce exactly the same outputs, but it may take a long time to run. A faster approximation of the computer code output is therefore desirable, particularly for optimization purposes, since optimization requires running the code many times. The accuracy (bias) of such an approximation is of prime interest. This leads to a different problem, closer to the function approximation methods Chebyshev investigated in the 19th century. Kriging methods are a popular class of methods that interpolate complex, “expensive” functions which can only be observed by running such a computer code. This chapter provides an introduction to Kriging methods used for prediction. We also discuss experimental design issues for computer codes, in particular space-filling designs used to reduce bias in the fitted model.

Part VI - Introduction to Optimization of Simulation and Computer Models | Pp. 379-395
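The interpolation property that makes Kriging attractive for deterministic codes can be shown in a few lines. This is a bare simple-Kriging sketch with a fixed Gaussian covariance (the book covers the full methodology, including parameter estimation); the training function and length-scale here are invented:

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential (Gaussian) covariance between two point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# "Expensive" deterministic computer code observed at a few design points.
x_train = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y_train = np.sin(x_train)

# No noise term: a tiny jitter is added only for numerical stability.
K = rbf(x_train, x_train) + 1e-10 * np.eye(x_train.size)
alpha = np.linalg.solve(K, y_train)

def predict(x_new):
    """Kriging predictor: passes exactly through the training runs."""
    return rbf(x_new, x_train) @ alpha

pred = predict(x_train)  # reproduces the observed outputs
```

Because the code is deterministic there is no nugget/noise variance, so the predictor interpolates the runs exactly, unlike a regression fit.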

Basics of Linear Regression

Enrique Del Castillo

This chapter continues the discussion of methods for dealing with sampling variability in experimental optimization techniques, considering the effect of statistical sampling error in RSM techniques based on second-order (quadratic) polynomial models. We first discuss finding confidence intervals for the eigenvalues of the Hessian matrix, that is, the effect of sampling variability on the estimated curvature of the fitted surface. Later sections consider the related and important problem of finding a confidence region for the optimal operating conditions. The unconstrained case is discussed first, after which methods for the computation and display of confidence regions on constrained optima are presented. Any traditional (frequentist) RSM optimization analysis should probably include such regions.

Part VII - Appendices | Pp. 399-412
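The chapter derives interval procedures analytically; as a rough stand-in, the idea of attaching uncertainty to the Hessian eigenvalues can be sketched with a percentile bootstrap (a different technique than the book's). All data below are simulated; the true surface has a maximum, so both eigenvalues of B are negative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-factor quadratic response with a true maximum.
x1, x2 = rng.uniform(-1, 1, size=(2, 60))
y = 5 - 2 * x1 ** 2 - x2 ** 2 + 0.5 * x1 * x2 + rng.normal(scale=0.1, size=60)

def fit_eigs(x1, x2, y):
    """Fit the second-order model and return the eigenvalues of B,
    where B = [[b11, b12/2], [b12/2, b22]] collects the quadratic terms."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    B = np.array([[b[3], b[5] / 2], [b[5] / 2, b[4]]])
    return np.linalg.eigvalsh(B)

eigs = fit_eigs(x1, x2, y)  # both negative => estimated surface has a maximum

# Percentile-bootstrap confidence intervals for each eigenvalue.
idx = rng.integers(0, 60, size=(500, 60))
boot = np.array([fit_eigs(x1[i], x2[i], y[i]) for i in idx])
ci = np.percentile(boot, [2.5, 97.5], axis=0)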

Analysis of Variance

Enrique Del Castillo

Analysis of Variance, or ANOVA, is a technique developed mainly by R. Fisher in the 1920s to study statistical problems in genetics and crop improvement. It generalizes the basic two-sample t-test for the equality of the means of two populations, allowing a test for the equality of the means of two or more populations. In this appendix we outline the main ideas of ANOVA at an introductory level, sufficient to support the book sections where these ideas are used.

Part VII - Appendices | Pp. 413-427
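The one-way F statistic behind this generalization compares between-group to within-group mean squares. A from-scratch sketch (the three small groups below are invented):

```python
import numpy as np

def one_way_anova(groups):
    """One-way ANOVA F statistic for equality of k group means."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own mean.
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = one_way_anova([[3.1, 2.9, 3.0], [3.0, 3.2, 3.1], [4.0, 4.1, 3.9]])
# Third group's mean is far from the others, so F is large (F = 91 here).
```

With k = 2 groups this reduces to the square of the two-sample t statistic, which is the sense in which ANOVA generalizes the t-test.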

Matrix Algebra and Optimization Results

Enrique Del Castillo

Response Surface Methods suggest estimating a second-order polynomial when there is evidence that the response is curved in the current region of interest, or when lack-of-fit tests point to an inadequacy of the first-order model. The decision of when to change from first-order designs and models to second-order designs and models is therefore based on the single-degree-of-freedom test for curvature and the lack-of-fit (LOF) tests explained earlier. In this chapter we provide a description of designed experiments with which we can fit the second-order model.

Part VII - Appendices | Pp. 429-441
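Among the designed experiments for fitting second-order models, the central composite design (CCD) is the most common. A minimal construction sketch, assuming the standard rotatable choice of axial distance (not necessarily the book's convention):

```python
import numpy as np
from itertools import product

def central_composite(k, alpha=None):
    """Factorial, axial, and center points of a central composite design
    in k coded factors."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable axial distance
    # 2^k factorial (cube) points at coded levels +/- 1.
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
    # 2k axial (star) points at +/- alpha on each axis.
    axial = np.zeros((2 * k, k))
    for j in range(k):
        axial[2 * j, j] = -alpha
        axial[2 * j + 1, j] = alpha
    center = np.zeros((1, k))
    return np.vstack([factorial, axial, center])

D = central_composite(2)  # 4 factorial + 4 axial + 1 center = 9 runs
```

The axial points are what let the design estimate the pure quadratic terms that a two-level factorial alone cannot separate; in practice several center-point replicates are added to estimate pure error.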

Some Probability Results Used in Bayesian Inference

Enrique Del Castillo

Part VII - Appendices | Pp. 443-444