Publications catalog - books



Identification of Nonlinear Systems Using Neural Networks and Polynomial Models: A Block-Oriented Approach

Andrzej Janczak

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2005
Available at: SpringerLink

Information

Resource type:

books

Print ISBN

978-3-540-23185-1

Electronic ISBN

978-3-540-31596-4

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

Publication rights information

© Springer-Verlag Berlin/Heidelberg 2005

Table of contents

1 Introduction

Andrzej Janczak

The class of block-oriented nonlinear models comprises complex models composed of linear dynamic systems and nonlinear static elements. Wiener and Hammerstein models are the best known and most widely used members of this class. A model is called a Wiener model if the linear dynamic block precedes the nonlinear static one; in the Hammerstein model, the connection order is reversed.
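The difference in block ordering can be illustrated with a minimal simulation sketch (not taken from the book; the first-order filter and the tanh nonlinearity are illustrative assumptions):

```python
import numpy as np

def linear_dynamic(u, a=0.5, b=1.0):
    """First-order linear dynamic block: y[k] = a*y[k-1] + b*u[k-1]."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + b * u[k - 1]
    return y

def nonlinear_static(x):
    """Example static nonlinearity (tanh saturation)."""
    return np.tanh(x)

u = np.linspace(-1.0, 1.0, 50)                       # input sequence
y_wiener = nonlinear_static(linear_dynamic(u))       # linear block first
y_hammerstein = linear_dynamic(nonlinear_static(u))  # nonlinearity first
```

Because the nonlinearity and the linear filter do not commute, the two orderings produce different input-output behaviour for the same blocks.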

Pp. 1-30

2 Neural network Wiener models

Andrzej Janczak

This chapter introduces different structures of neural network Wiener models and shows how their weights can be adjusted, based on a set of system input-output data, with gradient learning algorithms. The term ‘neural network Wiener models’ refers to models composed of a linear dynamic model followed by a nonlinear multilayer perceptron model. Both SISO and MISO Wiener models in their two basic configurations, known as the series-parallel and the parallel model, are considered. In series-parallel Wiener models, another multilayer perceptron is used to model the inverse nonlinear element. For neural network Wiener models, four different rules for the calculation of the gradient or the approximate gradient are derived and presented in a unified framework. In series-parallel models, represented by feedforward neural networks, the calculation of the gradient can be carried out with the backpropagation method (BPS). Three other methods, i.e., backpropagation for parallel models (BPP), the sensitivity method (SM), and truncated backpropagation through time (BPTT), are used to calculate the gradient or the approximate gradient in parallel models. For the BPTT method, it is shown that the accuracy of the gradient approximation depends on both the number of unfolded time steps and the impulse response functions of the linear dynamic model and its sensitivity models. The computational complexity of the algorithms is also analyzed and expressed in terms of the orders of the polynomials describing the linear dynamic model, the number of nonlinear nodes, and the number of unfolded time steps. Having the gradient calculated, different gradient-based algorithms such as steepest descent, quasi-Newton (variable metric), and conjugate gradient methods can be applied easily.
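The series-parallel gradient calculation can be sketched as follows. This is a minimal illustration, not the book's exact formulation: the linear block is a fixed first-order filter, the nonlinear block is a one-hidden-layer tanh perceptron, and only the perceptron weights are trained, so the chain rule for the squared-error loss stops at the static part.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 0.5, 1.0                # linear block parameters (assumed known here)
W1 = rng.normal(size=3)        # input weights of the 3 hidden (nonlinear) nodes
b1 = np.zeros(3)               # hidden biases
W2 = rng.normal(size=3)        # output weights

u = rng.normal(size=40)        # input data
y_target = rng.normal(size=40) # placeholder desired output

def model(u, W1, b1, W2):
    s = np.zeros(len(u))                 # linear dynamic block output
    for k in range(1, len(u)):
        s[k] = a * s[k - 1] + b * u[k - 1]
    h = np.tanh(np.outer(s, W1) + b1)    # hidden layer, shape (N, 3)
    return h @ W2, s, h

def gradients(u, y_target, W1, b1, W2):
    """Backpropagation for the perceptron weights: s does not depend on
    them, so backprop through the static block suffices."""
    y, s, h = model(u, W1, b1, W2)
    e = y - y_target                     # output errors
    gW2 = h.T @ e                        # dL/dW2 for L = 0.5*sum(e**2)
    dh = np.outer(e, W2) * (1.0 - h**2)  # back through the tanh nodes
    gW1 = dh.T @ s                       # dL/dW1
    gb1 = dh.sum(axis=0)                 # dL/db1
    return gW2, gW1, gb1
```

With the gradient available, a steepest descent step is simply `W2 -= eta * gW2` (and likewise for `W1`, `b1`); quasi-Newton or conjugate gradient updates can consume the same gradient.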

Pp. 31-75

3 Neural network Hammerstein models

Andrzej Janczak

This chapter deals with gradient-based learning algorithms for training neural network Hammerstein models. As in the case of neural network Wiener models discussed in Chapter 2, four different gradient calculation algorithms, i.e., backpropagation for series-parallel models (BPS), backpropagation for parallel models (BPP), the sensitivity method (SM), and backpropagation through time (BPTT) for parallel models, are derived. Having the rules for gradient calculation derived, steepest descent or other gradient-based learning algorithms can be implemented easily. Besides steepest descent algorithms, four other learning algorithms, which combine steepest descent with the recursive least squares (RLS) algorithm or the recursive pseudolinear regression (RPLR) algorithm, are proposed. For the truncated BPTT algorithm, gradient calculation accuracy is analyzed. It is shown that this accuracy depends on the impulse responses of the sensitivity models and the linear dynamic model. Knowing these impulse responses, the errors in the calculation of the partial derivatives of the model output w.r.t. the model parameters can be evaluated. The computational complexity of the algorithms is analyzed and expressed in terms of the orders of the polynomials describing the linear dynamic model, the number of nonlinear nodes, and, for the BPTT algorithm, the number of unfolded time steps.
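The truncation-accuracy claim can be demonstrated on a deliberately simplified example (an assumption for illustration, not the book's Hammerstein network): for a scalar recursion y[k] = a*y[k-1] + u[k-1], the exact sensitivity dy[k]/da obeys the sensitivity-model recursion p[k] = y[k-1] + a*p[k-1], while BPTT truncated to K unfolded steps keeps only the first K terms of the equivalent impulse-response sum, so the error decays like a**K.

```python
import numpy as np

a = 0.7
rng = np.random.default_rng(3)
u = rng.normal(size=200)

# simulate the parallel model y[k] = a*y[k-1] + u[k-1]
y = np.zeros(len(u))
for k in range(1, len(u)):
    y[k] = a * y[k - 1] + u[k - 1]

# exact sensitivity dy[k]/da via the sensitivity model (SM)
p = np.zeros(len(u))
for k in range(1, len(u)):
    p[k] = y[k - 1] + a * p[k - 1]

def bptt_truncated(y, a, K):
    """dy[k]/da approximated by unfolding only K time steps:
    sum over i = 1..K of a**(i-1) * y[k-i]."""
    p_hat = np.zeros(len(y))
    for k in range(len(y)):
        for i in range(1, min(K, k) + 1):
            p_hat[k] += a ** (i - 1) * y[k - i]
    return p_hat

err2 = np.max(np.abs(bptt_truncated(y, a, 2) - p))  # coarse truncation
err8 = np.max(np.abs(bptt_truncated(y, a, 8) - p))  # finer truncation
```

Because |a| < 1, the neglected tail shrinks geometrically with K, which is the one-parameter analogue of the statement that the approximation error is governed by the impulse responses of the linear dynamic and sensitivity models.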

Pp. 77-116

4 Polynomial Wiener models

Andrzej Janczak

This chapter deals with polynomial Wiener models, i.e., models composed of a pulse transfer function model of the linear dynamic system and a polynomial model of the nonlinear element or of its inverse. A modified definition of the equation error and a modified series-parallel Wiener model are introduced. Assuming that the nonlinear element is invertible and its inverse can be described by a polynomial, the modified series-parallel Wiener model can be transformed into a linear-in-parameters form and its parameters can be calculated with the least squares method. Such an approach, however, yields inconsistent parameter estimates. As a remedy, an instrumental variables method is proposed, with the instrumental variables chosen as delayed system inputs together with delayed outputs, and powers of delayed outputs, of the model obtained with the least squares method.
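The linear-in-parameters idea can be sketched numerically. The first-order linear block, the cubic inverse nonlinearity, and the normalization gamma_1 = 1 are illustrative assumptions; the data here are noise-free, so plain least squares recovers the parameters exactly. With output noise, the regressors built from noisy outputs correlate with the equation error and the estimates become inconsistent, which is what motivates the instrumental variables method.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, b1, g2 = 0.4, 1.0, 0.2   # true parameters; gamma_1 normalized to 1

# Simulate a Wiener system: first-order linear block, then the invertible
# nonlinearity f whose inverse is f^{-1}(y) = y + g2*y**3.
N = 400
u = rng.uniform(-2.0, 2.0, N)
s = np.zeros(N)
for k in range(1, N):
    s[k] = -a1 * s[k - 1] + b1 * u[k - 1]

def f(sk):
    """Evaluate y = f(s) by solving y + g2*y**3 = s with Newton's method."""
    y = np.array(sk, dtype=float)
    for _ in range(50):
        y -= (y + g2 * y**3 - sk) / (1.0 + 3.0 * g2 * y**2)
    return y

y = f(s)

# Substituting f^{-1}(y[k]) = s[k] into s[k] = -a1*s[k-1] + b1*u[k-1]
# gives the (overparameterized) linear-in-parameters form
#   y[k] = -g2*y[k]**3 - a1*y[k-1] - a1*g2*y[k-1]**3 + b1*u[k-1]
Phi = np.column_stack([-y[1:] ** 3, -y[:-1], -y[:-1] ** 3, u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
```

Note the third parameter is the product a1*g2, i.e., the transformed model is overparameterized; this redundancy is a typical feature of such linearizing reparameterizations.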

Pp. 117-141

5 Polynomial Hammerstein models

Andrzej Janczak

In this chapter, we will review seven methods for the identification of Hammerstein systems that use Hammerstein models with a polynomial model of the nonlinear element. The reviewed methods use models of the linear dynamic system in the form of pulse transfer function models [21, 27, 34, 63, 120, 161] or a Laguerre expansion of the impulse response [156]. Hammerstein system parameters are estimated with different procedures, such as the ordinary least squares [27], iterative least squares [63, 120, 161], iterative correlation and steepest descent [156], prediction error [34], and pseudolinear regression [21] methods.
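For the Hammerstein structure, the polynomial substitution yields a linear-in-parameters form directly, since the polynomial acts on the measured input. The sketch below is a minimal illustration under assumed specifics (first-order linear block, quadratic polynomial, and the gain ambiguity between the two blocks resolved by fixing the linear block's gain at 1); it shows the ordinary least squares route in the noise-free case.

```python
import numpy as np

rng = np.random.default_rng(2)
a1, c1, c2 = 0.3, 1.0, 0.5   # true parameters; linear-block gain fixed at 1

N = 300
u = rng.uniform(-1.0, 1.0, N)
v = c1 * u + c2 * u**2       # polynomial static nonlinearity (first block)
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + v[k - 1]   # first-order linear dynamic block

# Substitution gives the linear-in-parameters form
#   y[k] = -a1*y[k-1] + c1*u[k-1] + c2*u[k-1]**2
Phi = np.column_stack([-y[:-1], u[:-1], u[:-1] ** 2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
```

Unlike the Wiener case, the powered regressors here involve only the noise-free input u, which is one reason least-squares-based Hammerstein identification is comparatively well behaved.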

Pp. 143-157

6 Applications

Andrzej Janczak

This chapter starts with a brief survey of reported applications of Wiener and Hammerstein models in both system modelling and control. Next, the estimation of parameter changes in the context of fault detection and isolation is considered in Section 6.2. Modelling of vapor pressure dynamics in a five-stage sugar evaporation station is studied in Section 6.3. Two nominal models of the process, i.e., a linear model and a neural network Wiener model, are developed based on real process data recorded at the Lublin Sugar Factory in Poland. Finally, Section 6.4 summarizes the results.

Pp. 159-185