Publications catalogue – books



Stochastic Theory and Control: Proceedings of a Workshop held in Lawrence, Kansas

Bozenna Pasik-Duncan (ed.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability
No institution detected. Year of publication: 2002. Available via SpringerLink.

Information

Resource type:

books

Print ISBN

978-3-540-43777-2

Electronic ISBN

978-3-540-48022-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin Heidelberg 2002

Table of contents

Optimal Control of Linear Backward Stochastic Differential Equations with a Quadratic Cost Criterion

Andrew E. B. Lim; Xun Yu Zhou

Backward stochastic differential equations (BSDEs) are Itô SDEs with a random terminal condition. Although BSDEs have been the topic of extensive research for a number of years, little has been done on their optimal control. In this paper, we consider the problem of linear-quadratic control of a BSDE. A complete solution to this problem is obtained, in terms of a pair of Riccati-type equations and an uncontrolled BSDE, using an approach based on the completion-of-squares technique.
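As an illustrative sketch (the notation here is assumed, not taken from the paper), a linear-quadratic control problem for a linear BSDE can be written as

```latex
\begin{aligned}
-\,dy_t &= \bigl(A y_t + B u_t + C z_t\bigr)\,dt - z_t\,dW_t, \qquad y_T = \xi,\\
J(u)  &= \tfrac{1}{2}\,\mathbb{E}\Bigl[\langle Q_0\, y_0, y_0\rangle
         + \int_0^T \bigl(\langle Q\, y_t, y_t\rangle + \langle R\, u_t, u_t\rangle\bigr)\,dt\Bigr],
\end{aligned}
```

where a completion-of-squares argument expresses the optimal control as feedback determined by Riccati-type equations.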

Pp. 301-317

Hilbert Spaces Induced by Toeplitz Covariance Kernels

Mihaela T. Matache; Valentin Matache

We consider the reproducing kernel Hilbert space H induced by a kernel obtained using the Fourier–Stieltjes transform of a regular, positive, finite Borel measure on a locally compact abelian topological group G. Denote by Ĝ the dual of G. We determine H as a certain subspace of the space C₀(Ĝ) of all continuous functions on Ĝ vanishing at infinity. Our main application is calculating the reproducing kernel Hilbert spaces induced by the Toeplitz covariance kernels of some well-known stochastic processes.
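In standard notation for this setting (assumed here, not quoted from the abstract), the Fourier–Stieltjes transform of the measure μ on G and the induced Toeplitz-type kernel on the dual group take the form

```latex
\hat{\mu}(\gamma) \;=\; \int_G \overline{\gamma(x)}\,d\mu(x),
\qquad
K(\gamma_1,\gamma_2) \;=\; \hat{\mu}\bigl(\gamma_1 \gamma_2^{-1}\bigr),
\qquad \gamma_1,\gamma_2 \in \widehat{G},
```

so that the reproducing kernel Hilbert space induced by K embeds in C₀(Ĝ).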

Pp. 319-333

Error Analysis of a Max-Plus Algorithm for a First-Order HJB Equation

William M. McEneaney

The H∞ problem for a nonlinear system is considered. The corresponding dynamic programming equation is a fully nonlinear, first-order, steady-state partial differential equation (PDE). Computing the solution of such a nonlinear, steady-state, first-order PDE is typically quite difficult. We consider an entirely new class of methods for obtaining the solution of such PDEs. These methods are based on the linearity of the associated semigroup over the max-plus algebra. In particular, solution of the PDE is reduced to solution of a max-plus eigenvector problem for the known unique eigenvalue 0. We consider the error analysis for such an algorithm. The errors are due both to the truncation of the basis expansion and to the computation of the matrix whose eigenvector one computes.
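A minimal toy sketch of the max-plus linear algebra underlying such methods (an illustration of the semiring operations, not the paper's algorithm): in the max-plus semiring, "addition" is max and "multiplication" is +, and the discretized solution is sought as a max-plus eigenvector for eigenvalue 0, which repeated max-plus multiplication reaches as a fixed point.

```python
# Max-plus semiring toy example: "+" becomes max, "*" becomes ordinary +.
# For a matrix whose maximal cycle mean is 0, the max-plus eigenvalue is 0
# and power iteration converges to an eigenvector (a fixed point of A (x) v).

def maxplus_matvec(A, v):
    """Max-plus matrix-vector product: (A (x) v)_i = max_j (A[i][j] + v[j])."""
    return [max(a_ij + v_j for a_ij, v_j in zip(row, v)) for row in A]

def maxplus_eigenvector(A, v0, iters=50):
    """Power iteration; with maximal cycle mean 0 this reaches a fixed point
    A (x) v = v, i.e. a max-plus eigenvector for eigenvalue 0."""
    v = list(v0)
    for _ in range(iters):
        v = maxplus_matvec(A, v)
    return v

# The diagonal self-loops give maximal cycle mean 0, so the eigenvalue is 0.
A = [[0, -1],
     [-1, 0]]
v = maxplus_eigenvector(A, [0, -5])
print(v)                           # -> [0, -1]
print(maxplus_matvec(A, v) == v)   # -> True: eigenvector, eigenvalue 0
```

In the method described above, the matrix entries come from propagating max-plus basis functions through the dynamics, and the error analysis concerns truncating that basis and approximating those entries.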

Pp. 335-351

Optimal Strategies for Ergodic Control Problems Arising from Portfolio Optimization

Hideo Nagai

We consider constructing optimal strategies for risk-sensitive portfolio optimization problems on an infinite time horizon for general factor models, in which the mean returns and the volatilities of individual securities or asset categories are explicitly affected by economic factors. The factors are assumed to be general diffusion processes. In studying the ergodic-type Bellman equations of these risk-sensitive portfolio optimization problems, we introduce auxiliary classical stochastic control problems with the same Bellman equations as the original ones. We show that the optimal diffusion processes of the problem are ergodic and that, under an integrability condition with respect to the invariant measures of the diffusion processes, we can construct optimal strategies for the original problems by using the solutions of the Bellman equations.
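One common form of the infinite-horizon risk-sensitive criterion in this literature (notation assumed here, not quoted from the paper) is

```latex
J(h) \;=\; \liminf_{T\to\infty} \; -\frac{2}{\theta\,T}\,
\log \mathbb{E}\bigl[\,V_T(h)^{-\theta/2}\,\bigr],
```

where V_T(h) is the wealth at time T under strategy h and θ > 0 is the risk-sensitivity parameter; maximizing such a criterion leads to a Bellman equation of ergodic type.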

Pp. 353-368

Finite Horizon Full-State Feedback CC Control in Civil Structures Protection

Khanh D. Pham; Michael K. Sain; Stanley R. Liberty

The following study presents a finite-horizon full-state feedback Cost-Cumulant (CC) control problem, in which the objective function, a linear combination of cumulants of a finite-time integral quadratic performance measure associated with a linear stochastic system, is minimized. A dynamic programming approach is used to obtain the optimal control solution. This control algorithm is then applied to the First Generation Structure Benchmark for Seismically Excited Buildings. Simulation results indicate that CC control offers both performance and design-flexibility advantages when compared with popular control strategies in building protection.
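As a hedged sketch (symbols assumed, not taken from the paper), a cost-cumulant objective of this kind weights the first k cumulants of a quadratic cost:

```latex
J(u) \;=\; \sum_{i=1}^{k} \mu_i\, \kappa_i\!\left[\,
\int_0^{t_f} \bigl(x_t^{\top} Q\, x_t + u_t^{\top} R\, u_t\bigr)\,dt \right],
```

where κ_i denotes the i-th cumulant of the random cost and the weights μ_i provide the design flexibility mentioned above (the first cumulant recovers the classical LQG objective).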

Pp. 369-383

Robust Stochastic Maximum Principle: A Measured Space as Uncertainty Set

Alex S. Poznyak

This paper develops a version of the Robust Stochastic Maximum Principle (RSMP) applied to the Minimax Mayer Problem formulated for stochastic differential equations with a control-dependent diffusion term. Parametric families of first- and second-order adjoint stochastic processes are introduced to construct the corresponding Hamiltonian formalism. The Hamiltonian function used for the construction of the robust optimal control is shown to be equal to the Lebesgue integral, over a parametric set, of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The paper deals with a cost function given on a finite horizon and containing the mathematical expectation of a terminal term. A terminal condition, given by a vector function, is also considered. The optimal control strategies, adapted to the available information, are constructed for a wide class of uncertain systems governed by a stochastic differential equation with unknown parameters from a given compact set. This problem belongs to the class of minimax stochastic optimization problems. The proof is based on recent results obtained for the Minimax Mayer Problem with a finite uncertainty set [], [], [] and [], as well as on the variational results of [] derived for the Stochastic Maximum Principle for nonlinear stochastic systems under complete information. A discussion of the obtained results concludes this study.

Pp. 385-397

On Optimality of Stochastic N-Machine Flowshop with Long-Run Average Cost

Ernst Presman; Suresh P. Sethi; Hanqin Zhang; Qing Zhang

This paper is concerned with the problem of production planning in a stochastic manufacturing system with serial machines that are subject to breakdown and repair. The machine capacities are modeled by a Markov chain. The objective is to choose the input rates at the various machines over time so as to meet the demand for the system's production at the minimum long-run average cost of production and surplus, while ensuring that the inventories in the internal buffers between adjacent machines remain nonnegative. The problem is formulated as a stochastic dynamic program. We prove a verification theorem and derive the optimal feedback control policy in terms of the directional derivatives of the potential function.
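In generic notation (assumed here, not quoted from the paper), the long-run average-cost criterion with nonnegative internal buffers reads

```latex
J(u) \;=\; \limsup_{T\to\infty}\; \frac{1}{T}\,
\mathbb{E}\int_0^T g\bigl(x_t, u_t\bigr)\,dt,
\qquad x_{i,t} \,\ge\, 0 \ \text{ for each internal buffer } i,
```

where g collects the costs of production and surplus and the input rates u are chosen subject to the Markov-modulated machine capacities.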

Pp. 399-417

A Risk-Sensitive Generalization of Maximum A Posteriori Probability (MAP) Estimation

Vahid Reza Ramezani; Steven I. Marcus

A sequential filtering scheme for the risk-sensitive state estimation of partially observed Markov chains is presented. The previously introduced risk-sensitive filters are unified in the context of risk-sensitive maximum a posteriori probability (MAP) estimation. Structural results for the filter banks are given. The influence of the availability of information and of the transition probabilities on the decision regions and the behavior of risk-sensitive estimators is studied.

Pp. 419-433

Bayesian Adaptive Control of Discrete Time Partially Observed Markov Processes

L. Stettner

In this paper, adaptive control of partially observed discrete-time Markov processes whose transition probability depends on an unknown random variable is studied. Although the techniques and methods used in the paper can be extended in various directions, the author concentrates on the analysis of three models, called respectively the mixed observation model, the model with observed regeneration, and the rich observation model, for which the partially observed control problem with known transition probability and an average-cost-per-unit-time functional can be solved.

Pp. 435-446

Portfolio Optimization in Markets Having Stochastic Rates

Richard H. Stockbridge

The Merton problem of optimizing the expected utility of consumption for a portfolio consisting of a bond and stocks is considered when changes in the bond's interest rate, in the mean return rates, and in the volatilities of the stock price processes are modelled by a finite-state Markov chain. This paper establishes an equivalent linear programming formulation of the problem. Two cases are considered. The first model assumes that these coefficients are known to the investor, whereas the second investigates a partially observed setting in which the mean return rates and volatilities of the stocks are not directly observable.
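In generic notation (assumed here, not quoted from the paper), the Merton consumption problem with Markov-modulated coefficients is

```latex
\max_{(\pi,\,c)} \ \mathbb{E}\int_0^{\infty} e^{-\beta t}\, U(c_t)\,dt,
\qquad r = r(\varepsilon_t),\quad \mu = \mu(\varepsilon_t),\quad \sigma = \sigma(\varepsilon_t),
```

where ε is the finite-state Markov chain driving the interest rate, mean returns, and volatilities, π is the portfolio allocation, and c the consumption rate; linear programming formulations of such control problems are typically posed over expected occupation measures.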

Pp. 447-458