European Journal of Control (2009) 1:22–28 © 2009 EUCA

Discussion on: "Min-max Model Predictive Control of Nonlinear Systems: A Unifying Overview on Stability"

Jan Maciejowski
Cambridge University Engineering Department, Cambridge CB2 1PZ, England
E-mail: [email protected]

1. Introduction

Model Predictive Control (MPC) has been used in industrial applications for about 30 years. As far as I know from industrial practitioners, none of the existing process control applications uses any of the academic work that our community has done on stability and robustness of MPC. We should keep this in mind when discussing our research on stability and robustness of MPC. There is now a lot of interest in applying MPC in many applications other than process control (and to new application areas even within process control, such as paper-making). There is particular interest in control of various kinds of vehicles, including marine, air, space and road vehicles, as well as off-road land vehicles such as those used in construction and mining operations. It was remarked some years ago by Sandoz [9] (who is an experienced practitioner) that petrochemical applications of MPC were 'easy', in the sense that they involved only a small number of rather similar problems, most of them required control near steady-state conditions only, and the control performance specifications were not very challenging.¹ Sandoz anticipated that applying MPC in new problem areas would be very challenging. We have already seen this even in the chemical process industry, as MPC has been applied to tasks such as control of product grade changes, requiring nonlinear MPC because of the large transients involved.

The new non-process applications frequently involve tight performance specifications, model changes or adaptations because of changing operating points, and—perhaps most importantly—safety-criticality. MPC formulations which offer guarantees of stability and robustness can be expected to be of great importance for the deployment of MPC in these applications. The paper by Raimondo et al. is therefore to be welcomed, particularly as it aims to survey and unify some of the literature relating to these issues. But let us remember that our 'traditional' concerns with stability and robustness are not the only important ones, or perhaps even the most important ones, for deploying MPC in new applications. (I will come back to this point in section 3.) Many of these applications involve fast dynamics, so real-time solution capability becomes a pressing issue, and most of the proposals for guaranteeing stability and robustness lead to very high computational complexity. Also, implementers of real controllers in safety-critical systems are obsessed (quite correctly) with the issues of validation and verification ('V&V'): whatever our theorems may say, how do we know that our software will do what we claim it will do? At present we can say almost nothing about how to perform V&V for controllers which involve real-time optimization in the loop. And I don't believe that we can say 'not my problem; leave it to the software engineers'—it needs our expertise as well as theirs, and that of numerical analysts.

¹ It is interesting that even in these restricted conditions MPC delivered very large benefits.

2. Characterizing Robustness in MPC

Most proposals for obtaining guaranteed robustness with MPC start by assuming that all the uncertainty comes from state disturbances d, as in the model:

x(k+1) = A x(k) + B u(k) + d(k)    (1)

where it is usually assumed that d(k) ∈ D, and D is some bounded set. Robustness is then obtained by posing a min-max problem, thus ensuring that the optimization takes into account the worst possible disturbance trajectory that can occur. This follows the approach successfully adopted in 'mainstream' robust control theory [6, 11].² It does, however, have the limitation that it assumes that the dimension of the state space, and the underlying dynamics in that state space, are both correct. This precludes the representation of unmodelled dynamics, which are of course always present—for example due to neglecting structural flexibility in aircraft, or parasitic capacitances in electric circuits. 'Mainstream' robust control theory uses a very powerful paradigm to deal with this: it assumes that the input-output model error is an operator Δ, not necessarily linear or time-invariant, and that the only thing that is known about this error is a bound on its size, as measured by an operator norm, which is induced by some norm on the input and output signals. Most commonly the ∞-norm is used, which is induced by the ℓ₂ norm on signals:

\|\Delta\|_\infty = \sup_{w \neq 0} \|\Delta w\|_2 / \|w\|_2    (2)

² This discussion is not concerned with the technical details, but it is worth noting how useful Sontag's concept of input-to-state stability has been for MPC when representing uncertainty in this way.

It is of course difficult to see how uncertainty of this generality could be introduced into MPC formulations. The 'unique selling point' (USP in marketing jargon) of MPC is its ability to consider constraints explicitly, and any discussion of robustness which ignores constraints risks losing this USP. All the really interesting results on robust MPC, such as those presented in the paper under discussion, take constraints into account and have the form: 'If you have an initial feasible solution, you will continue to have a feasible solution, even if the worst-case disturbance occurs', and stability then follows, as shown convincingly by Scokaert, Mayne and Rawlings [10]. To get this kind of result it seems essential to have a well-defined space of known dimension in which to do the analysis. But there is no free lunch, and a difficulty with this kind of approach is that as conditions are added to ensure robustness under a wider range of conditions, so it becomes more likely that an initial feasible solution is not found, whereupon everything is lost. The price of excessive conservatism is uselessness.
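To make the min-max computation concrete, here is a minimal numerical sketch (my own illustration, not the algorithm of the paper under discussion): an open-loop min-max controller for model (1) that minimizes, over the input sequence, the worst cost over all vertex disturbance sequences drawn from a box D. The plant, disturbance bound, horizon and weights are illustrative assumptions; brute-force vertex enumeration with a derivative-free optimizer is used purely for clarity, constraints are omitted, and the feedback min-max formulations analysed in the paper are considerably more sophisticated.

```python
# Sketch of open-loop min-max MPC for model (1): min over an input sequence of
# the max cost over all vertex disturbance sequences d(k) in a box D.
# All numerical data below are illustrative assumptions.
import itertools
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed plant
B = np.array([[0.0], [0.1]])
dbar = np.array([0.02, 0.02])            # assumed bound: |d_i(k)| <= dbar_i
N = 4                                     # prediction horizon
Q, R = np.eye(2), 0.1 * np.eye(1)

# Vertex disturbance sequences: each step takes one of the 4 box vertices +/-dbar,
# giving 4^N candidate worst-case sequences.
vertices = [np.diag(s) @ dbar for s in itertools.product([-1.0, 1.0], repeat=2)]
d_seqs = list(itertools.product(vertices, repeat=N))

def worst_case_cost(u_seq, x_init):
    """Max over vertex disturbance sequences of the finite-horizon cost."""
    u = u_seq.reshape(N, 1)
    worst = -np.inf
    for d_seq in d_seqs:
        x, cost = x_init.copy(), 0.0
        for k in range(N):
            cost += x @ Q @ x + u[k] @ R @ u[k]
            x = A @ x + (B @ u[k]).ravel() + d_seq[k]
        cost += x @ Q @ x                 # simple terminal penalty (just Q here)
        worst = max(worst, cost)
    return worst

x_init = np.array([1.0, 0.0])
res = minimize(worst_case_cost, x0=np.zeros(N), args=(x_init,), method="Nelder-Mead")
print("min-max cost:", float(res.fun), "first input to apply:", res.x[0])
```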

In the paper under discussion, an uncertainty description of the form (1) is used, though with a nonlinear model. The disturbance is split into two parts, however, d₁ ∈ D₁(x) and d₂ ∈ D₂, with D₁(x) being allowed to depend on the state in a known way. In the example given in the paper this is applied to a problem in which there is a bilinear dependence on x and d₁; this is quite a common way in which uncertainties act, for example in mixing processes. The idea here is that by exploiting more problem structure than is captured by (1) it is possible to avoid unnecessary conservatism.

Another representation of uncertainty that has been used in MPC research is to allow (A, B) in (1) to lie in some bounded set, most frequently in a convex polytope [3]:

[A, B] = \sum_{i=1}^{p} \lambda_i [A_i, B_i], \qquad \sum_i \lambda_i = 1, \quad \lambda_i \ge 0    (3)

This is an improvement in that it allows the dynamics to be uncertain, or even varying, but still in the same state space, so this still cannot represent neglected dynamics. There are difficulties with exploiting this approach, the methods described in [3] having extremely high computational complexity, although others, such as those of Lee and Kouvaritakis [4], are much simpler, at the cost of being restricted to input constraints only.
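To fix ideas, the following small sketch (mine, not from the paper or from [3]) builds random members of a polytopic family of the form (3) and probes the closed-loop spectra obtained with a fixed feedback gain. The vertices and gain are illustrative assumptions, and such a Monte Carlo check is no substitute for the LMI-based synthesis of [3]: for a time-varying [A(k), B(k)] inside the polytope, stable samples alone do not guarantee robust stability.

```python
# Monte Carlo probe of a polytopic model (3): sample lambda on the simplex,
# form the convex combination [A, B], and record the closed-loop spectral
# radius under a fixed (assumed) gain u = -K x. Illustrative only.
import numpy as np

A_vertices = [np.array([[1.0, 0.1], [0.0, 0.9]]),
              np.array([[1.0, 0.1], [0.0, 1.1]])]   # assumed vertices A_i
B_vertices = [np.array([[0.0], [0.1]]),
              np.array([[0.0], [0.2]])]             # assumed vertices B_i
K = np.array([[2.0, 3.0]])                          # some fixed feedback gain

rng = np.random.default_rng(0)
worst_radius = 0.0
for _ in range(1000):
    lam = rng.dirichlet(np.ones(len(A_vertices)))   # lambda_i >= 0, sum lambda_i = 1
    A = sum(l * Ai for l, Ai in zip(lam, A_vertices))
    B = sum(l * Bi for l, Bi in zip(lam, B_vertices))
    radius = max(abs(np.linalg.eigvals(A - B @ K)))
    worst_radius = max(worst_radius, radius)
print("worst sampled closed-loop spectral radius:", worst_radius)
```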

A significant step towards more general representations of uncertainty in MPC has recently been taken by Løvaas et al. [5], who allow infinite-dimensional model uncertainty, and use ideas of dissipativity and passivity to establish robustness.

Does it matter how we represent uncertainty? Whatever we do, we will certainly not capture all possible realities. And it seems likely that ensuring robustness against any reasonable description of uncertainty should help us to avoid large sensitivity to most minor departures from the assumed nominal behavior. The counter-argument, however, is that we know from experience in various applications that an optimization algorithm solves exactly the problem that is posed, which is usually not exactly the real problem that we have. And it is often the case that the optimal solution exhibits large sensitivity to small perturbations in the parameters of the problem. This has given rise to the research field of robust optimization [1]. In the systems and control community it has been argued by Carlson and Doyle [2] that such sensitivity to some (rare, previously unencountered) perturbations is an inevitable consequence of optimizing against others (common, expected), even in the case that the optimization is performed by some 'natural' process such as evolution. This suggests that we should continue to seek better representations of uncertainty for use with MPC, even though the ones already in use are themselves valuable. We will only know that we have a sufficiently rich representation from experience with practical applications of the resulting MPC algorithms. (This was the case with mainstream robust control; the whole research activity was stimulated by the experience that LQG solutions sometimes—by no means always—gave very un-robust controllers. The H∞ theory [11, 6], and even the earlier 'LQG/LTR' theory [6], essentially eliminated this problem.)
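As a toy numerical illustration of the sensitivity of optimal solutions mentioned above (my example, not one from [1] or [2]): in the small linear program below, a change of one part in a million in a single cost coefficient moves the optimizer from one vertex of the feasible region to the opposite one.

```python
# Tiny illustration of how an optimizer can be hypersensitive to problem data:
# maximize x1 + (1 + eps) * x2 subject to x1 + x2 <= 1, x >= 0.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([1.0])
for eps in (-1e-6, 1e-6):
    c = np.array([-1.0, -(1.0 + eps)])   # linprog minimizes, so negate to maximize
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(f"eps = {eps:+.0e}  ->  optimizer x = {np.round(res.x, 6)}")
# The maximizer jumps between (1, 0) and (0, 1) although the cost data change
# by only 2e-6.
```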

3. The Role of Stability and Robustness Analysis for MPC

Do we really need stability and robustness results in order to apply MPC to the new problem areas mentioned in section 1? NO—they are not essential, but they will certainly help. Of course it is essential that all the controllers that we implement are closed-loop stable and sufficiently robust. But this can be achieved without having any theory which addresses these issues explicitly. The role of stability theory in control has always been to aid the process of designing control systems, that is, to speed up the design and development process and make it less expensive. In principle, we could generate candidate controllers by randomly generating parameters, then testing for closed-loop stability (analytically in the LTI case) and robustness (by Monte Carlo simulation). That is not so different from what the commercial MPC vendors actually do. The frequency-domain methods developed by Bode in the 1930s, based on Nyquist's theorem, are very powerful but quite difficult to use, requiring high levels of expertise. Part of the difficulty comes from the parameterization of controllers, essentially by means of their poles and zeros. Generating such parameters at random very rarely gives closed-loop stability. I believe that this is why we place so much emphasis on stability in the controller design process.³ If we had started our field with MPC (or with 'Internal Model Control', involving the Youla parameter [8, 6], or even with Ziegler-Nichols rules?), we would place much less emphasis on achieving stability, not because it would be less important, but because it would be much easier to achieve. I became convinced of this when generating examples of MPC giving closed-loop instability, to show that this is possible, for my book [7]. It was very hard to do so (assuming a SISO problem, no plant-model error and no constraints, of course); the only way I could do it was to start with an unstable closed loop and work backwards to the MPC weights and horizons. All those features which we usually consider to be difficult—unstable plant, right half-plane zeros, big time delays—cause no problems for MPC, at least as regards nominal closed-loop stability. Essentially one just has to choose a suitable horizon, which is a much easier parameterization to use than one which involves encircling the '−1' point the correct number of times.

³ Also, the original problem that motivated Nyquist's theorem was the instability of feedback amplifiers encountered by Bell Labs at that time.

The role of formulations which offer robustness guarantees is similar. They will improve the process of MPC controller design because they will make it faster and more systematic, but they are not essential in principle. However, I believe that they will offer more benefits than stability guarantees, as performance specifications become more ambitious. Robustness does not come as easily with MPC as nominal stability does. In the 'easy' world of petrochemicals with stable (or pre-stabilized) plants, steady-state operation, and relatively relaxed specifications, robustness can be increased easily by increasing the penalty weights on control moves (the weights on Δu). But this is not nearly so easy outside this comfort zone.

So far, this section has assumed that MPC controller design is done off-line. But one of the attractions of MPC for some of the new applications is the rather obvious route it offers to adaptive control—just adapt the model, whether it is a black-box or a first-principles model, and the rest of the controller adapts itself. In such scenarios the possibility of off-line post-design validation is of course not available, and a priori guarantees of stability and sufficient robustness become essential.
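As a minimal numerical sketch of the claim above that nominal closed-loop stability comes almost for free with MPC (my own illustration, not taken from [7]): the script below builds an unconstrained, condensed (batch) MPC controller for an assumed open-loop unstable plant and prints the closed-loop eigenvalues, which for this particular plant, horizon and weights lie inside the unit circle. Constraints and plant-model error are deliberately omitted, matching the 'easy' setting described in the text.

```python
# Nominal, unconstrained MPC on an unstable plant via the condensed (batch)
# formulation. With no constraints the receding-horizon controller reduces to
# a fixed linear gain, so stability can be read off the eigenvalues of A + B*K.
# Plant, horizon and weights below are illustrative assumptions.
import numpy as np

A = np.array([[1.2, 1.0], [0.0, 1.1]])   # open-loop unstable (eigenvalues 1.2, 1.1)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[0.1]])
N = 10                                    # prediction horizon

n, m = B.shape
# Prediction matrices: stacked predicted states X = F x0 + G U over the horizon.
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        G[i * n:(i + 1) * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
# Unconstrained minimizer of (F x0 + G U)' Qbar (F x0 + G U) + U' Rbar U
H = G.T @ Qbar @ G + Rbar
U_gain = -np.linalg.solve(H, G.T @ Qbar @ F)   # U = U_gain @ x0
K_mpc = U_gain[:m, :]                          # receding horizon: apply first move only
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K_mpc))
```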

4. Conclusion

If stability is less of a problem with MPC than with traditional control methods, then the emphasis shifts to performance and robustness. To be fair to the classical frequency-response theory, the power of Bode's methods is that they address all of these issues when fully exploited, for SISO systems; also they tell us about the limits of feedback which we cannot overcome, whatever clever control algorithms we may use. But here we come to a big challenge for MPC theory. MPC has been successful in practice partly because it has been easy to use and understand. Aspects of performance, including respect of constraints, are easy to represent and modify. As we complicate the formulations in order to guarantee robustness (of performance as well as stability), we make MPC less easy to use, less easy to understand, and less easy to implement. The game now is to find a suitable compromise which will give us an effective process for designing MPC controllers for many application areas, without destroying all of its attractive features. Whether this can be done remains to be seen. The paper under discussion is a nice stepping stone along the way, but many more such stones are required to complete the job.

References

1. Ben-Tal A, Goryashko A, Guslitzer E, Nemirovski A. Adjustable robust solutions of uncertain linear programs. Math Program 2004; 99: 351–376.
2. Carlson JM, Doyle J. Complexity and robustness. Proc Natl Acad Sci 2002; 99: 2538–2545.
3. Kothare MV, Balakrishnan V, Morari M. Robust constrained model predictive control using linear matrix inequalities. Automatica 1996; 32.
4. Lee YI, Kouvaritakis B. Robust receding horizon predictive control for systems with uncertain dynamics and input saturation. Automatica 2000; 36: 1497–1504.
5. Løvaas C, Seron MM, Goodwin GC. Robust output feedback model predictive control for systems with unstructured uncertainty. Automatica 2008; 44: 1933–1943.
6. Maciejowski JM. Multivariable Feedback Design. Addison-Wesley, 1989.
7. Maciejowski JM. Predictive Control with Constraints. Prentice-Hall, 2002.
8. Morari M, Zafiriou E. Robust Process Control. Prentice-Hall, 1989.
9. Sandoz D. Perspectives on the industrial exploitation of model predictive control. Meas Control 1998; 31: 113–117.
10. Scokaert POM, Mayne DQ, Rawlings JB. Suboptimal model predictive control (feasibility implies stability). IEEE Trans Autom Control 1999; 44: 648–654.
11. Zhou K, Doyle JC, Glover K. Robust and Optimal Control. Prentice-Hall, 1996.

Discussion on: "Min-max Model Predictive Control of Nonlinear Systems: A Unifying Overview on Stability"

J.A. Rossiter
Department of Automatic Systems and Control Engineering, University of Sheffield, Mappin Street, Sheffield S1 3JD, UK
E-mail: [email protected]

1. Introduction

There has been a lot of interest in input-to-state stability (ISS) and input-to-state practical stability (ISpS), and more recently with specific attention to nonlinear model predictive control (NMPC), so an overview paper is timely and useful. However, one main question a reader may ask is: who is this overview for? Is it for people with a general interest in NMPC, or for those who have been looking specifically at ISS methods? My conclusion is that it is aimed more at the latter group than the former, in that it is not written in a tutorial style which sets out how the concepts have developed and where they link in to related MPC work. Rather, it takes the premise that ISpS is accepted and understood by the readership, and seeks to summarize and link works solely within that area. Such a paper is indeed useful, but, as an aside, I think many readers would appreciate a separate tutorial survey. One illustration of this focus on experts is the long lists of references within the introduction, without any real attempt to discuss what contribution each paper made and hence why it was worth mentioning; it is almost as if the authors assumed the readership would know this, and thus many references were included for completeness only. Ironically perhaps, I found myself wanting to insert several more references from related MPC/NMPC work to give more background and explanation of some concepts, but I then decided that, given their focus, the authors were probably correct to omit these.