Rechnergestützte Optimierung statischer und dynamischer Systeme

0005-1098/84 $3.00 + 0.00 Pergamon Press Ltd. © 1984 International Federation of Automatic Control

Automatica, Vol. 20, No. 1, pp. 137-139, 1984. Printed in Great Britain

Book Reviews

Rechnergestützte Optimierung statischer und dynamischer Systeme*, by H. G. Jacob

Reviewer: HEINZ UNBEHAUEN, Ruhr-University Bochum, F.R.G.

IN ORDER to improve the efficiency, economy or environmental compatibility of an industrial process, engineers often have to solve optimization problems, i.e. to obtain the best result (in some sense) under the given circumstances. The ultimate goal of every optimization problem is to find the maximum (or minimum) value of a function which expresses the benefit desired (or the effort required) in the actual operation of the process. The starting point for system optimization is usually a sufficiently accurate mathematical model of the given system. A performance criterion or objective function, including constraints, is then defined according to the nature of the problem, and the system design is optimized with respect to it. In stationary optimization the objective function is an algebraic quantity; in dynamic optimization it is a functional. There is no single method available for solving all optimization problems efficiently, and several methods have been proposed for different classes of problems. It is well known that the indirect classical methods of optimization, such as the variational techniques of Euler and Lagrange, permit analytical solutions in the presence of constraints only in special cases, and even iterative solutions are not without difficulties. In view of this, simple and direct optimization methods have attained considerable importance. These are all numerical methods in which, starting from given initial values, an optimal solution is sought iteratively.

The method presented in this book belongs to the group of direct optimization methods. The principle of the computer-aided approach proposed by Jacob consists in transforming different types of optimization problems into a pure parameter optimization problem, which can then be solved by any of the well-known numerical stationary optimization or search techniques. This transformation of an arbitrary optimization problem, e.g. a dynamic and/or distributed one, is accomplished by approximating the desired optimal, possibly distributed, control trajectories u(t, z) of the mathematical system model by appropriate functions, such as Fourier series or Legendre or Tschebyscheff polynomials. The optimization problem then consists only in determining the coefficients ci of the control function u(t, z) by means of a search technique: within this static optimization algorithm the coefficients ci are systematically altered until the scalar objective function becomes optimal. The iterative procedure is based on the assumption, using all a priori knowledge about the system, of an appropriate structure of the approximating function for a possible optimal control function, with unknown coefficients to be determined; appropriate initial values of the coefficients ci must also be chosen. The iterative search is started by exciting the mathematical system model with the control function. At each step the value of the objective function is calculated, and the numerical search algorithm then changes the coefficients ci until the selected objective function becomes optimal.

Using this optimization procedure the corresponding computer programming system can be built up in block form; e.g. separate blocks exist for the mathematical model, the objective function, the optimization algorithm, etc. Only the blocks for the mathematical model and the selected objective function must be provided by the user, whereas the control programming system containing the optimization algorithm, as well as the programme block connecting it to any arbitrarily given mathematical model, i.e. to the appropriate structure of the control function, can be taken from a set of FORTRAN programs which are described in detail in the book.

By way of introduction the author first demonstrates the proposed optimization procedure on some simple and rather academic examples. The book also contains three real-world examples representing quite complicated optimization problems: the design of an optimal control system for the guidance of an underwater-towed boat; an optimal open-up and repeat-landing procedure for the Airbus A300 airplane; and the optimal angle-of-swing setting of a helicopter rotor as a function of time and space. These examples lead to nonlinear mathematical models of high order, including constraints; nevertheless the optimization has been performed at relatively modest expense.

For a critical evaluation it must be mentioned that the advantages of the optimization procedure proposed in this book are its general validity and simplicity. Its application is flexible and includes free selection of the objective function. The disadvantages, however, are the greater computing effort required and the inability to distinguish between local and global optima. Furthermore, the procedure depends, in my opinion, strongly on the available a priori information. In the absence of any a priori information one might select, as the structure of the control function, orthogonal and uniformly convergent polynomial systems, but these allow only a rough approximation of the optimal control function and therefore lead only to a suboptimal solution; the increase in computing time in this case will probably become quite considerable. Moreover, the book does not tell the user which type of approximating function will give the best results; in this connection an error analysis for assessing the quality of the approximation would also have been expected.

Jacob has succeeded in providing a mathematically clear and easily understandable presentation of his optimization procedure. By means of rather complex examples he has shown the general applicability of the proposed method. The modular structure of the suggested programme package allows easy adaptation to changing boundary conditions. It is heartening that these investigations have been made available to a large number of interested engineers. This book will stimulate discussion on algorithmic optimization procedures and will give new impetus to future work in the field of optimization of technical systems. As its character is not that of a textbook, however, it can only be recommended to those who are especially interested in the numerical solution of optimization problems.
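The procedure described above reduces, in essence, to two user-supplied blocks (a model and an objective) plus a parameterized control structure and a generic search routine. The following minimal Python sketch illustrates that idea only; it is not Jacob's FORTRAN package, and the double-integrator plant, the Chebyshev parameterization, the cost weights and the Nelder-Mead search are all illustrative assumptions.

```python
# A minimal sketch of the direct procedure described above: parameterize the
# control trajectory u(t) by a truncated Chebyshev series with coefficients
# c_i, simulate the model, evaluate a scalar objective, and let a
# derivative-free search adjust the c_i. This is NOT Jacob's FORTRAN package;
# the double-integrator plant, the cost weights and the Nelder-Mead search
# are illustrative assumptions only.
import numpy as np
from numpy.polynomial.chebyshev import chebval
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

T = 1.0                               # optimization horizon
t_grid = np.linspace(0.0, T, 201)     # grid for cost evaluation


def u_of_t(t, c):
    """Control function: Chebyshev series on [0, T], mapped to [-1, 1]."""
    return chebval(2.0 * t / T - 1.0, c)


def objective(c):
    """User-supplied block: simulate the model and return a scalar cost."""
    # Illustrative plant (double integrator): x1' = x2, x2' = u(t).
    rhs = lambda t, x: [x[1], u_of_t(t, c)]
    sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0], t_eval=t_grid, rtol=1e-8)
    x1_T, x2_T = sol.y[0, -1], sol.y[1, -1]
    u_vals = u_of_t(t_grid, c)
    energy = np.sum(u_vals ** 2) * (t_grid[1] - t_grid[0])
    # Reach x1(T) = 1 with x2(T) = 0 at modest control energy.
    return 100.0 * (x1_T - 1.0) ** 2 + 100.0 * x2_T ** 2 + 0.1 * energy


# Generic block: parameter optimization by direct search, started from
# user-chosen initial coefficients (here simply zeros).
c0 = np.zeros(5)
result = minimize(objective, c0, method="Nelder-Mead",
                  options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-8})
print("coefficients:", np.round(result.x, 4))
print("cost:        ", result.fun)
```

As the review notes, such a direct search cannot distinguish local from global optima, and the quality of the result depends entirely on how well the chosen series can represent the true optimal control.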

* Rechnergestützte Optimierung statischer und dynamischer Systeme (Computer-aided Optimization of Stationary and Dynamical Systems), by H. G. Jacob. Published by Springer, Berlin (1982). 229 pp., DM 38. The books which are reviewed in the IFAC Journal Automatica are not necessarily endorsed by IFAC, the editors, or the publishers, nor are the reviewers' opinions or comments about the books.

About the reviewer. Heinz Unbehauen received his Dipl.-Ing. degree in 1961 and the Dr.-Ing. degree in 1964 in mechanical engineering from the University of Stuttgart. Since 1964 he has been a lecturer. In 1969 he received the 'venia legendi' and in 1972 was appointed as a professor of control engineering in the Department of Energy Systems at the University of Stuttgart. During 1974 he was a visiting professor at Hokkaido University, Sapporo (Japan) and in 1975 at the IIT Madras (India). Since 1975 he has been professor in the Department of Electrical Engineering at the Ruhr University Bochum, where he is head of the control laboratory. His main research interests are in the fields of optimization, system identification, adaptive control and process computer application.

Digital Control Systems*, by Rolf Isermann

Reviewer: DALE E. SEBORG, Department of Chemical and Nuclear Engineering, University of California, Santa Barbara, CA 93106, U.S.A.

DURING the past 20 years, digital control systems have progressed from being an expensive novelty to playing a critical role in the operation of modern industrial plants. In view of the voluminous literature on digital control techniques, a need exists for suitable textbooks at both introductory and advanced levels. In the early days of computer control, a number of pioneering books appeared on sampled-data systems (Ragazzini and Franklin, 1958; Jury, 1958). Recently, several textbooks have been published which provide an excellent introduction to digital control systems (Franklin and Powell, 1981; Kuo, 1980). However, there is still a widespread need for a book with a scope broad enough to cover the full spectrum of topics: estimation and control techniques, multivariable design methods, stochastic and deterministic approaches, and adaptive control strategies. The scope of Professor Isermann's book is remarkable since it contains a significant amount of material on each of these topics.

This book is an English translation of the 1977 edition, which was published in German (Isermann, 1977). Major changes from the first edition include an expanded treatment of adaptive control strategies (Chapter 25) and the addition of three case studies (Chapter 30). The current edition also contains additional material on the mathematical description of sampled-data systems (Chapter 3) and on multivariable control techniques (Chapters 20 and 21). Recent references have also been added to the revised chapters.

The broad scope of this book is indicated by the titles of its seven main sections: Processes and process computers, Control systems for deterministic disturbances, Control systems for stochastic disturbances, Interconnected control systems, Multivariable control systems, Adaptive control systems based on process identification, and Digital control with process computers and microcomputers. The titles of the first and last sections are somewhat misleading since process computers per se receive little attention in this book. For example, virtually no material is included on computer hardware, software, or interfaces. However, these omissions are not a serious shortcoming since this material is beyond the scope of this lengthy (566 pages) book, and these topics have been discussed in detail elsewhere (Mellichamp, 1983).

The book's contents will now be described in greater detail. The first chapter briefly describes hierarchical control but primarily serves as an introduction to the remaining 29 chapters. Chapters 2 and 3 provide an introduction to the mathematical representation of discrete-time systems, including both pulse-transfer function and state-space models. No previous knowledge of discrete-time systems is required; however, the author does assume that the reader is familiar with the basic concepts of automatic control for continuous-time systems. Chapter 3 includes a detailed discussion of a model reduction technique previously developed by the author. By contrast, the material on standard topics such as z-transform inversion, stability criteria, and block diagram manipulation for sampled-data systems is quite limited. Chapters 4-11 present a variety of control system design techniques for deterministic systems.

* Digital Control Systems, by R. Isermann. Published by Springer, Berlin (1981). 566 pp., U.S. $45.70.

Chapter 5 concerns controllers with specified structures whose parameters are selected via optimization techniques (i.e. 'parameter-optimized controllers'). This chapter describes modifications of standard digital PID algorithms, such as omitting the set point (reference variable) from the proportional and/or derivative terms. A detailed discussion of tuning rules and rules of thumb for selecting a sampling interval, plus extensive simulation results, is also included. Chapters 6 and 7 provide a lucid discussion of 'cancellation' and deadbeat controllers. A number of state-space design methods for multivariable systems are introduced in Chapter 8, including LQ optimal control, modal control, deadbeat controllers, and the use of observers. Time delay compensation techniques are the subject of Chapter 9. The sensitivity of closed-loop systems to parameter variations is considered in Chapter 10, while a comparison of pole-zero structures for various types of controllers is presented in Chapter 11.

Design methods for linear systems with stochastic disturbances are considered in Chapters 12 through 15. Both minimum variance controllers (for SISO systems) and state-space approaches with Kalman filters (for MIMO systems) are described. Cascade and feedforward control systems are described in Chapters 16 and 17, respectively, under the general heading 'Interconnected control systems'. Multivariable control systems based on transfer function or state-space representations are considered in Chapters 18 through 21. Emphasis is placed on structural aspects of multivariable processes plus design methods for multiloop PID control systems and decoupling controllers. Other design techniques such as deadbeat and minimum variance control are briefly described. (Optimal control was discussed previously in Chapter 8.)

A significant portion of the book (96 pages) is concerned with adaptive control and on-line identification. After a brief review of adaptive control systems (Chapter 22), on-line identification methods are reviewed in Chapter 23, and the special concerns and requirements for closed-loop identification are covered in Chapter 24. Convergence and identifiability conditions are presented for estimation schemes both with and without external perturbations. Adaptive control schemes based on recursive parameter estimation techniques are considered in Chapter 25. Most of the material in this lengthy chapter (55 pages) has appeared in review papers that Professor Isermann has presented at recent IFAC conferences (e.g. Kyoto, Düsseldorf).

The final section of the book is concerned with several aspects of digital control other than control and estimation algorithms. The effects of amplitude quantization and noise filtering are analyzed in Chapters 26 and 27, respectively. Actuator characteristics are described in Chapter 28, while computer-aided design is briefly considered in Chapter 29. The book concludes with three case studies which compare various identification and control techniques: experimental applications involving a heat exchanger and a rotary dryer, plus a simulation study for a steam generator.

If a book is written to cover a wide range of topics and still be of reasonable length, it must reflect the inevitable trade-offs between breadth and depth. This book is quite successful in this regard. The author's general philosophy is to survey available control and estimation techniques, describe them in a concise manner, and then compare and evaluate them via computer simulation.
The book is practically oriented and tends to emphasize design and tuning considerations rather than lengthy derivations and proofs. A strong point of the book is its comparison of controller design techniques using structural arguments and computer simulations.
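As an aside to the Chapter 5 material mentioned above, the set-point weighting modification (omitting the reference variable from the proportional and/or derivative terms) can be sketched in a few lines. The following Python fragment is a reviewer-style illustration only, not code from the book; the gains, sample time and toy first-order plant are assumed values.

```python
# A small sketch of the set-point-weighting modification discussed in
# Chapter 5: a positional digital PID in which the reference variable can be
# omitted from the proportional and/or derivative terms, so that set-point
# steps produce no proportional or derivative kick. Not code from the book;
# gains, sample time and the toy plant are assumed values for illustration.

class DigitalPID:
    def __init__(self, kp, ti, td, dt, p_on_error=True, d_on_error=False):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.p_on_error = p_on_error   # False: proportional part acts on -y only
        self.d_on_error = d_on_error   # False: derivative part acts on -y only
        self.integral = 0.0
        self.prev_d_input = None

    def update(self, setpoint, y):
        error = setpoint - y
        self.integral += error * self.dt          # integral always on the error
        p_input = error if self.p_on_error else -y
        d_input = error if self.d_on_error else -y
        if self.prev_d_input is None:
            d_term = 0.0                          # no derivative on first call
        else:
            d_term = self.td * (d_input - self.prev_d_input) / self.dt
        self.prev_d_input = d_input
        return self.kp * (p_input + self.integral / self.ti + d_term)


if __name__ == "__main__":
    # Toy first-order plant x' = (-x + u)/tau, simulated with Euler steps.
    dt, tau, x = 0.1, 1.0, 0.0
    pid = DigitalPID(kp=1.2, ti=2.0, td=0.5, dt=dt, d_on_error=False)
    for _ in range(100):
        u = pid.update(setpoint=1.0, y=x)
        x += dt * (-x + u) / tau
    print("plant output after 10 s:", round(x, 3))
```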