Publications catalog - books



Switching and Learning in Feedback Systems: European Summer School on Multi-Agent Control, Maynooth, Ireland, September 8-10, 2003, Revised Lectures and Selected Papers

Roderick Murray-Smith ; Robert Shorten (eds.)

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Computation by Abstract Devices; Simulation and Modeling; Artificial Intelligence (incl. Robotics); Special Purpose and Application-Based Systems; Probability and Statistics in Computer Science; Dynamical Systems and Ergodic Theory

Availability

Publication year: 2005. Available online via SpringerLink (no subscribing institution detected).

Information

Resource type:

books

Print ISBN

978-3-540-24457-8

Electronic ISBN

978-3-540-30560-6

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2005

Publication rights information

© Springer-Verlag Berlin Heidelberg 2005

Table of contents

From Feedback Control to Complexity Management: A Personal Perspective

Kumpati S. Narendra

Revolutionary advances in technology have generated numerous complex systems that have become integral parts of our socioeconomic environment. The study of such systems – those which contain many interacting parts – is currently attracting considerable attention.

In this paper, the author retraces his personal attempts, over a period of four decades, to develop simple models for adaptation, learning, identification and control using artificial neural networks and hybrid systems, and goes on to describe how these models are providing insights into dealing with complex interconnected systems.

- Switching and Control | Pp. 1-30

Convex Cones, Lyapunov Functions, and the Stability of Switched Linear Systems

Robert Shorten; Oliver Mason; Kai Wulff

Recent research on switched and hybrid systems has resulted in a renewed interest in determining conditions for the existence of a common quadratic Lyapunov function for a finite number of stable LTI systems. While efficient numerical solutions to this problem have existed for some time, compact analytical conditions for determining whether or not such a function exists for a finite number of systems have yet to be obtained. In this paper we present a geometric approach to this problem. By making a simplifying assumption we obtain a compact time-domain condition for the existence of such a function for a pair of LTI systems. We show that a number of new and classical Lyapunov results can be obtained using our framework. In particular, we demonstrate that our results can be used to obtain compact time-domain versions of the SISO Kalman-Yacubovich-Popov lemma, the Circle Criterion, and stability multiplier criteria. Finally, we conclude by posing a number of open questions that arise as a result of our approach.

- Switching and Control | Pp. 31-46
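
The existence question discussed in this abstract is commonly checked numerically as an LMI feasibility problem, which is the standard complement to the analytical conditions the chapter develops. A minimal sketch of that numerical check using CVXPY follows; the matrices A1, A2 and the margin eps are illustrative assumptions, not taken from the chapter.

```python
# Numerical check for a common quadratic Lyapunov function V(x) = x'Px for a
# pair of stable LTI systems dx/dt = A1 x and dx/dt = A2 x, via feasibility of
# the LMIs  P > 0,  Ai'P + P Ai < 0.
import numpy as np
import cvxpy as cp

A1 = np.array([[-1.0, 2.0], [0.0, -3.0]])   # illustrative stable matrices
A2 = np.array([[-2.0, 0.5], [1.0, -1.5]])

n = A1.shape[0]
eps = 1e-6                                   # strictness margin for the LMIs
P = cp.Variable((n, n), symmetric=True)

constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()

if prob.status == cp.OPTIMAL:
    print("Common quadratic Lyapunov function found:\n", P.value)
else:
    print("No CQLF found for this pair (solver status:", prob.status, ")")
```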

Survey of Explicit Approaches to Constrained Optimal Control

Alexandra Grancharova; Tor Arne Johansen

This chapter presents a review of the explicit approaches to optimal control. It is organized as follows. Section 1 gives a summary of the main results of optimal control theory. Section 2 briefly presents the methods for unconstrained optimal state feedback control of linear systems. Sections 3, 4 and 5 consider in detail the explicit methods for constrained linear quadratic regulation (LQR), together with several examples. The main motivation behind the explicit solution is that it avoids the need for real-time optimization, and thus allows implementation at high sampling frequencies in real-time systems with high reliability and low software complexity. These sections include the formulation of the constrained LQR problem, a summary of the implicit approaches, the basics of model predictive control (MPC), a description of the exact and approximate approaches to the explicit solution of MPC problems, and an experimental evaluation of explicit MPC controller performance on a laboratory gas-liquid separation plant.

- Switching and Control | Pp. 47-97
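
For orientation, a generic statement of the constrained LQR/MPC problem that the surveyed explicit approaches solve offline (textbook notation, not necessarily the chapter's):

```latex
\min_{u_0,\dots,u_{N-1}} \; x_N^{\top} P x_N + \sum_{k=0}^{N-1}\left( x_k^{\top} Q x_k + u_k^{\top} R u_k \right)
\quad \text{s.t.} \quad
x_{k+1} = A x_k + B u_k, \;\;
u_{\min} \le u_k \le u_{\max}, \;\;
x_{\min} \le x_k \le x_{\max}.
```

In this linear-quadratic setting the explicit solution is known to take the form of a piecewise-affine control law defined over polyhedral regions of the state space, so the online computation reduces to a lookup and an affine evaluation instead of a real-time optimization, which is the motivation stated in the abstract.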

Analysis of Some Methods for Reduced Rank Gaussian Process Regression

Joaquin Quiñonero-Candela; Carl Edward Rasmussen

While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated the recent proliferation of a number of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training RRGPs consists of learning both the covariance function hyperparameters and the support set. We propose a method for learning hyperparameters for a given support set. We also review the Sparse Greedy GP (SGGP) approximation (Smola and Bartlett, 2001), which is a way of learning the support set for given hyperparameters based on approximating the posterior. We propose an alternative method to the SGGP that has better generalization capabilities. Finally, we carry out experiments to compare the different ways of training an RRGP. We provide some Matlab code for learning RRGPs.

- Gaussian Processes | Pp. 98-127
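
The abstract's observation that RRGPs are equivalent to finite sparse linear models can be illustrated with a short sketch: regression with a linear-in-the-weights model whose basis functions are kernel functions centred on a small support set. The kernel, the random support-set choice and the hyperparameter values below are illustrative assumptions (the chapter, and its accompanying Matlab code, learn these rather than fix them); the weight formula is the standard subset-of-regressors form of the predictive mean.

```python
# Reduced-rank regression with m kernel basis functions centred on a support
# set S of size m << n: predictive mean  k_*(S)^T (K_nm^T K_nm + s^2 K_mm)^{-1} K_nm^T y.
import numpy as np

def rbf(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))                 # full training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)  # noisy targets
noise = 0.1 ** 2

S = X[rng.choice(len(X), size=20, replace=False)]     # support set (m = 20)
Phi = rbf(X, S)                                       # n x m design matrix
A = Phi.T @ Phi + noise * rbf(S, S)                   # m x m system matrix
w = np.linalg.solve(A, Phi.T @ y)                     # posterior mean weights

X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf(X_test, S) @ w)                             # reduced-rank predictive mean
```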

Filtered Gaussian Processes for Learning with Large Data-Sets

Jian Qing Shi; Roderick Murray-Smith; D. Mike Titterington; Barak A. Pearlmutter

Kernel-based non-parametric models have been applied widely over recent years. However, the associated computational complexity imposes limitations on the applicability of those methods to problems with large data-sets. In this paper we develop a filtering approach based on a Gaussian process regression model. The idea is to generate a small-dimensional set of filtered data that keeps a high proportion of the information contained in the original large data-set. Model learning and prediction are based on the filtered data, thereby decreasing the computational burden dramatically.

- Gaussian Processes | Pp. 128-139
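
The abstract does not spell out the filtering construction itself, but the computational payoff is that the standard GP predictive equations are then evaluated on the m filtered points rather than the n points of the original large data-set:

```latex
\bar{f}(x_*) = k_*^{\top} \left(K + \sigma_n^2 I\right)^{-1} y, \qquad
\operatorname{var}\!\left[f(x_*)\right] = k(x_*, x_*) - k_*^{\top} \left(K + \sigma_n^2 I\right)^{-1} k_* ,
```

where K is the m x m covariance matrix of the filtered data and k_* its cross-covariance with the test input, so the dominant cost involves an m x m rather than an n x n matrix.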

Self-tuning Control of Non-linear Systems Using Gaussian Process Prior Models

Daniel Sbarbaro; Roderick Murray-Smith

Gaussian Process prior models, as used in Bayesian non-parametric statistical modelling, are applied to implement a nonlinear adaptive control law. The expected value of a quadratic cost function is minimised, without ignoring the variance of the model predictions. This leads to implicit regularisation of the control signal (caution) in areas of high uncertainty. As a consequence, the controller has dual features, since it both tracks a reference signal and learns a model of the system from observed responses. The general method and its unique features are illustrated on simulation examples.

- Gaussian Processes | Pp. 140-157
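
The "caution" described in this abstract follows from the standard decomposition of an expected quadratic cost under a Gaussian predictive distribution (generic notation, assumed here rather than quoted from the chapter):

```latex
J(u_t) = \mathbb{E}\!\left[\left(y_{t+1} - r_{t+1}\right)^2\right]
       = \left(\mu_{t+1}(u_t) - r_{t+1}\right)^2 + \sigma_{t+1}^2(u_t),
```

so minimising J trades tracking accuracy against the model's predictive variance, penalising control actions that drive the system into regions of high uncertainty.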

Gaussian Processes: Prediction at a Noisy Input and Application to Iterative Multiple-Step Ahead Forecasting of Time-Series

Agathe Girard; Roderick Murray-Smith

With the Gaussian Process model, the predictive distribution of the output corresponding to a new given input is Gaussian. But if this input is uncertain or noisy, the predictive distribution becomes non-Gaussian. We present an analytical approach that consists of computing only the mean and variance of this new distribution (the Gaussian approximation). We show how, depending on the form of the covariance function of the process, we can evaluate these moments exactly or approximately (within a Taylor approximation of the covariance function). We apply our results to the iterative multiple-step-ahead prediction of non-linear dynamic systems, with propagation of the uncertainty as we predict ahead in time. Finally, using numerical examples, we compare the Gaussian approximation to the numerical approximation of the true predictive distribution by simple Monte-Carlo.

- Gaussian Processes | Pp. 158-184
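
The mean and variance referred to in the abstract follow from the laws of total expectation and total variance: for a noisy test input x ~ N(u, Σ_x) and GP predictive mean μ(·) and variance σ²(·) at a deterministic input,

```latex
m(u, \Sigma_x) = \mathbb{E}_{x}\!\left[\mu(x)\right], \qquad
v(u, \Sigma_x) = \mathbb{E}_{x}\!\left[\sigma^2(x)\right] + \operatorname{var}_{x}\!\left[\mu(x)\right].
```

The exact-versus-Taylor distinction drawn in the abstract concerns how these expectations over x are evaluated for a particular covariance function.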

Nonlinear Predictive Control with a Gaussian Process Model

Juš Kocijan; Roderick Murray-Smith

Gaussian process models provide a probabilistic non-parametric modelling approach for black-box identification of nonlinear dynamic systems. The Gaussian processes can highlight areas of the input space where prediction quality is poor, due to the lack of data or its complexity, by indicating the higher variance around the predicted mean. Gaussian process models contain noticeably less coefficients to be optimized. This chapter illustrates possible application of Gaussian process models within model predictive control. The extra information provided by the Gaussian process model is used in predictive control, where optimization of the control signal takes the variance information into account. The predictive control principle is demonstrated via the control of a pH process benchmark.

- Gaussian Processes | Pp. 185-200
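
One common way for the optimization of the control signal to take the variance information into account, as described above, is to constrain (or penalise) the predicted variance over the horizon; a generic formulation of this kind (an illustrative assumption, not necessarily the chapter's exact cost) is

```latex
\min_{u_t, \dots, u_{t+H-1}} \; \sum_{k=1}^{H} \left( r_{t+k} - \mathbb{E}\!\left[\hat{y}_{t+k}\right] \right)^2
\quad \text{s.t.} \quad \operatorname{var}\!\left[\hat{y}_{t+k}\right] \le \sigma_{\max}^2 ,
```

where both the predictive mean and the predictive variance of the output ŷ are supplied by the Gaussian process model.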

Control of Yaw Rate and Sideslip in 4-Wheel Steering Cars with Actuator Constraints

Miguel A. Vilaplana; Oliver Mason; Douglas J. Leith; William E. Leithead

In this paper we present a new steering controller for cars equipped with 4-wheel steer-by-wire. The controller commands the front and rear steering angles with the objective of tracking reference yaw rate and sideslip signals corresponding to the desired vehicle handling behaviour. The structure of the controller is based on a simplified model of the lateral dynamics of 4-wheel steering cars. We show that the proposed structure facilitates the design of a robust steering controller valid for varying vehicle speed. The controller, which has been designed using classical techniques according to the Individual Channel Design (ICD) methodology, incorporates an anti-windup scheme to mitigate the effects of the saturation of the rear steering actuators. We analyse the robust stability of the resulting non-linear control system and present simulation results illustrating the performance of the controller on a detailed non-linear vehicle model.

- Applications of Switching & Learning | Pp. 201-222
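
The chapter's anti-windup scheme is not detailed in the abstract. As a generic illustration of the idea of preventing integrator wind-up when an actuator such as the rear steering saturates, here is a minimal back-calculation scheme on a PI controller; all gains, limits and names are hypothetical and this is not the ICD-based design described above.

```python
# Generic back-calculation anti-windup for a PI controller: when the command
# saturates, the saturation excess is fed back to bleed off the integrator so
# it does not wind up. Purely illustrative.
def pi_with_antiwindup(error, integ, kp=1.0, ki=0.5, kaw=0.3,
                       u_min=-0.1, u_max=0.1, dt=0.01):
    """One controller update; `integ` holds the integrator state."""
    u_unsat = kp * error + ki * integ
    u = min(max(u_unsat, u_min), u_max)              # actuator saturation
    # back-calculation term is zero when unsaturated, corrective otherwise
    integ += dt * (error + kaw * (u - u_unsat) / ki)
    return u, integ

u, integ = 0.0, 0.0
for err in [0.5, 0.4, 0.3, 0.1, -0.05]:              # toy tracking-error sequence
    u, integ = pi_with_antiwindup(err, integ)
    print(f"u = {u:+.3f}, integrator = {integ:+.3f}")
```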

A Second-Order Cone Bounding Algorithm for Robust Minimum Variance Beamforming

Ngai Wong; Venkataramanan Balakrishnan; Tung-Sang Ng

We present a geometrical approach for designing robust minimum variance (RMV) beamformers against steering vector uncertainties. Conventional techniques enclose the uncertainties with a convex set; the antenna weights are then designed to minimize the maximum array output variance over this set. In contrast, we propose to cover the uncertainty by a second-order cone (SOC). The optimization problem, with optional robust interference rejection constraints, then reduces to the minimization of the array output variance over the intersection of the SOC and a hyperplane. This is cast into a standard second-order cone programming (SOCP) problem and solved efficiently. We study the computationally efficient case wherein the uncertainties are embedded in complex-plane trapezoids. The idea is then extended to arbitrary uncertainty geometries. The effectiveness of the proposed approach over other schemes, and its fast convergence in signal power estimation, are demonstrated with numerical examples.

- Applications of Switching & Learning | Pp. 223-247
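
For context, the classical (non-robust) minimum variance beamformer that such robust designs generalise constrains a single presumed steering vector a and has a closed-form solution:

```latex
\min_{w} \; w^{H} R\, w \quad \text{s.t.} \quad w^{H} a = 1
\qquad \Longrightarrow \qquad
w_{\mathrm{MV}} = \frac{R^{-1} a}{a^{H} R^{-1} a}.
```

The robust variants discussed in this chapter replace the single vector a with an uncertainty set and design the weights against the worst case over that set; with the uncertainty covered by a second-order cone intersected with a hyperplane, as in the abstract, the resulting problem is a standard SOCP.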