FLOWSHEETING

Computers & Chemical Engineering, Vol. 3, pp. 17-20, 1979. Printed in Great Britain.
0098-1354/79/040017-04$02.00/0 Pergamon Press Ltd.

Section 6 Rapporteurs' Review

R. W. H. SARGENT
Department of Chemical Engineering and Chemical Technology, Imperial College, London SW7 2BY, England

(Received for publication 2 October 1979)

The three sessions on flowsheeting cover papers describing new flowsheeting packages, underlying algorithms and strategies, and case studies on the practical use of flowsheeting systems.

PACKAGES

No less than thirteen different packages figure in the various papers, as summarized in the following table:

6A1  SSPS          Kaijaluoto
6A2  FLOWPACK-II   Berger & Perris
6A3  ASPEN         Evans, Boston, Britt, Gallier, Gupta & Joseph
6B4  FLOWTRAN      Mahalec, Ng, Seider & Yagi
6A4  PROCESS (SM)  Brannock, Verneuil & Wang
6A5  QUASILIN      Gorczynski, Hutchison & Wajih
6A7  DIS           Klemes, Lutcha & Vasek
6B1  CHESS         Perkins
6B2  SPEEDUP       Hernandez & Sargent
6B7  TPIF          Barrett & Walsh
6C1  SIMULA        Proposto, Vinci & Mason
6C2  TISFLO        van Cooten, Steeman & de Leeuw den Bouter
6C3  DISGULF       Barker & Fletcher

Two of these packages (TPIF and DISGULF) are not general flowsheeting packages, but there seems to be no sign of a decrease in creative activity in this field. True, there is some concern at the continued proliferation, and the ASPEN project (Evans et al., 6A3), with an advisory committee involving 50 companies, represents a serious attempt to produce a package which will have wide general acceptability. However new packages will no doubt continue to appear, for ideas in this field are still evolving and often cannot be easily incorporated in existing packages. Whether the new packages presented to us at this conference do indeed contain sufficient novelty to justify their creation is a different matter, but no doubt each of the authors will try to convince us that this is the case, and I hope they will highlight the novel aspects in their presentations.

It is interesting to note that all the packages are solely concerned with steady-state design and simulation, and none deal with the growing field of dynamic simulation for operability or control studies. Several of the authors (6A2, 6A7, 6C2) refer to use of their packages for studies of plant operation, and it is mentioned that new versions of DIS (6A7) and TISFLO (6C2) can deal with errors and uncertainties. It would be interesting to have more details of methods and experience from the authors concerned.

In describing the features of their packages, several of the authors, notably in papers 6A2-6A4, enumerate the desirable properties to be expected of a present-day flowsheeting package, and there seems to be general agreement on most of them. Those which concern the user may be summarized as follows:

(1) Reasonably easy for a non-expert to use, and difficult for him to misuse:
    - Simple input language.
    - Clear manual and on-line explanations (prompts).
    - Good diagnostics.
(2) Efficient, robust and reliable: either solve the problem, or give enough information for the user to overcome the difficulty.
(3) Good data bank of physical properties.
(4) Good range of standard unit-models.
(5) Provision of cost estimation and economic evaluation.
(6) Easy to expand both the properties data-bank and the unit-models library (i.e. modular).
(7) Flexible stream description.
(8) Ability to handle design specifications.
(9) File oriented, to allow:
    - Separation of data input, execution, and inspection of results.
    - Easy restart after modifications.
    - Easy transfer of data to and from other programmes.
(10) Portable and easy to access.

Although some of these points are Utopian in character, most are obvious enough. However the authors differ in their approaches to meeting them, and there are several issues which might be worth some discussion:

(a) Evans et al. (6A3) make a strong point about non-conventional stream attributes (e.g. solids size distribution, proximate analysis of coal) and in ASPEN provide three kinds of substreams. Berger & Perris (6A2) say that the problem of flexibility in stream description 'can only sensibly be overcome if the system can handle arbitrary information vectors', and presumably FLOWPACK-II is organized on this basis. However, this comes close to the ordinary FORTRAN structure of subroutines with arbitrary argument lists, and clearly a balance must be struck between flexibility in general and convenience of use in the normal situation. I venture to suggest the SPEEDUP solution of arbitrary stream information vectors with a facility for user definition of stream formats for each unit-type as required. Perhaps there are other ideas?

(b) Access through a terminal (VDU or teletype) has largely supplanted card input, and free-format input language seems to have become the rule, with optional prompts and an on-line help facility (as described by Berger & Perris, 6A2). However some authors (see 6A4 and 6A7) still advocate provision of a 'fill-in-the-blanks' data-sheet facility, even with VDU terminals.

(c) File-oriented systems are also part of the general evolution of computing, and the advantages listed above really amount to a change from the classical 'batch-job' approach of data preparation, execution, output to a data-base orientation, where a
structured data-base holds the current state of the design and the system provides a variety of commands for interrogating this or modifying it by user input or by processing and computation. Such a system can allow access to different types of user at different levels (e.g. read only, or access to limited portions of the data-base). Nevertheless most packages (e.g. FLOWPACK, ASPEN, PROCESS) still use the old batch sequence approach, while Klemes et al. (6A7) emphasise the advantage of integration through a sequencing program (VIM). Perhaps these latter authors could explain their point in more detail, and we could have some general discussion on this basic issue. Several authors (6A2, 6A3, 6A5) emphasise the use of a linked-list data structure for their packages, and Kaijaluoto (6A1) describes in some detail the implementation of a package (SSPS) using such a structure. The use of lists, hash tables and other data structures in flowsheeting packages is no new idea [1-5], and surely in 1979 one can assume that any competent programmer will use the data structure most appropriate to the problem in hand! I am also concerned about the use of the terms 'plex' and 'bead' to describe list-structures, for these terms were never adopted by the computing community and unfamiliar jargon can give the misleading impression of something new. And while on the subject of unnecessary jargon, may I give my support to the protest of Gorczynski et al. (6A5) against the over-restrictive use of the term 'modular'. As they point out, a flowsheeting package does not have to be based on linking subroutines (i.e. procedure-based) in order to be modular, and equation-based systems can be just as modular in their input language and data-structure; indeed they are more modular in the sense that the numerical solution procedures are separated from the plant description and hence can be changed at will.
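The distinction can be made concrete with a small sketch (purely illustrative, not code from any of the packages discussed): the same splitter unit written first as a procedure with fixed directionality, then as a set of residual equations handed to a generic solver, so that a design specification becomes just another equation.

```python
# Illustrative sketch (not from any package discussed here): the same
# splitter unit written procedure-based and equation-based.

def splitter_procedure(feed, split_fraction):
    """Procedure-based: directionality is fixed by the subroutine.
    Given the feed and split fraction, it computes the outlets."""
    top = split_fraction * feed
    return top, feed - top

def splitter_residuals(v):
    """Equation-based: the unit is just residual equations f(v) = 0 in
    v = (feed, s, top, bottom); a generic solver supplies directionality."""
    feed, s, top, bottom = v
    return [top - s * feed,       # split relation
            feed - top - bottom]  # mass balance

def with_specs(v):
    # Design specifications are simply extra equations: feed = 100, top = 30.
    return splitter_residuals(v) + [v[0] - 100.0, v[2] - 30.0]

def newton(f, v, tol=1e-10, h=1e-6, iters=50):
    """Generic Newton iteration with a finite-difference Jacobian and
    Gaussian elimination; entirely separate from the unit model."""
    n = len(v)
    for _ in range(iters):
        r = f(v)
        if max(abs(x) for x in r) < tol:
            return v
        # finite-difference Jacobian, one column per variable
        cols = []
        for j in range(n):
            vp = list(v)
            vp[j] += h
            rp = f(vp)
            cols.append([(rp[i] - r[i]) / h for i in range(n)])
        # augmented matrix for J dv = -r, with partial pivoting
        A = [[cols[j][i] for j in range(n)] + [-r[i]] for i in range(n)]
        for c in range(n):
            p = max(range(c, n), key=lambda i: abs(A[i][c]))
            A[c], A[p] = A[p], A[c]
            for i in range(c + 1, n):
                m = A[i][c] / A[c][c]
                for k in range(c, n + 1):
                    A[i][k] -= m * A[c][k]
        dv = [0.0] * n
        for i in range(n - 1, -1, -1):
            dv[i] = (A[i][n] - sum(A[i][k] * dv[k] for k in range(i + 1, n))) / A[i][i]
        v = [v[i] + dv[i] for i in range(n)]
    return v

feed, s, top, bottom = newton(with_specs, [50.0, 0.5, 25.0, 25.0])
print(round(s, 4), round(bottom, 4))   # -> 0.3 70.0
```

The point is that `newton` knows nothing about splitters and the residual functions know nothing about solution methods, so either can be changed at will; specifying `top` instead of `split_fraction` required no new subroutine version.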
The questions of the relative merits of procedure-based and equation-based systems, the ability to handle design specifications, and the problem of efficient, robust computational methods all take us beyond the user interface to the algorithms embedded in the packages.

ALGORITHMS

With the increasing use of packages for larger and more realistic problems there is at last a growing realization that simple successive resubstitution leaves much to be desired as an iterative technique, and several papers in the conference (6B1, 6B2, 6B4) examine more powerful techniques. The rigidity of the structure of procedure-based systems has also made itself felt, and there is increasing preoccupation with the problem of dealing with design specifications and with inefficiency due to multiple nested iteration loops. One way of removing unnecessary loops due to the rigid input-output structure of unit subroutines is through the use of subnetworks, as described by Berger & Perris (6A2). The essence of the idea is that each plant model is in reality a set of subroutines, each carrying out an elementary portion of the computation, and the flow-diagram linking these routines to the unit input-output data is combined by the executive with the plant flow-diagram to produce an overall information flow-diagram for the elementary subroutines. Of course the ultimate building-blocks are the individual equations, so pursuing this philosophy to its logical limit leads to the equation-based approach, where the numerical procedures, and hence the directionality of information flow, are completely divorced from the plant description.

The design specification problem is discussed by several authors (6B1, 6B4, 6B5). Vasek et al. (6B5) conclude that it is worth writing several versions of each unit routine with different input arguments to cover the different design cases, and have implemented this in DIS (see 6A7). They give a formalized treatment of the problem and attempt to classify the required subroutine types according to the different combinations of physical and information flows. I did not feel that this classification yields any further insight, but perhaps I have missed the point completely. Most of the authors who treat the design problem point out that for each design specification an extra adjustable variable must be specified, and that there are implicit restrictions on the permissible choices of such variables. This is basically a question of avoiding 'functional singularity' as described by Perkins (6B1), and this is not easily dealt with by ad hoc methods. Mahalec et al. (6B4) play safe by forbidding the specification of any unit input stream variable as adjustable, leaving the subroutine writer the responsibility of choosing each unit's parameter set to avoid functional singularity. However, as Perkins points out, this excludes the commonly occurring design situation of adjusting the feed to the plant to achieve a specified production rate.

Perkins (6B1) describes the reformulation of the torn-stream iteration problem as the solution of a set of nonlinear equations, and shows that design constraints and specifications are easily incorporated. He then discusses the use of Broyden's method for solving this system and compares it with the traditional 'control module' approach.
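Perkins' reformulation might be sketched roughly as follows (the two-variable recycle "flowsheet" and all numbers below are invented for illustration): the torn stream x is converged by solving g(x) = F(x) - x = 0, where F(x) is one pass round the flowsheet, using Broyden's quasi-Newton update in place of fresh Jacobian evaluations.

```python
# Hedged sketch of the torn-stream reformulation: one pass round an
# invented recycle loop gives F(x), and the recycle is converged by
# solving g(x) = F(x) - x = 0 with Broyden's rank-one update.

def flowsheet_pass(x):
    """One pass round an invented mixer -> reactor -> separator loop,
    returning the recomputed tear stream (flows of A and B)."""
    a, b = x
    feed_a, feed_b = 10.0, 0.0
    ma, mb = feed_a + a, feed_b + b            # mixer
    conv = 0.6                                  # reactor: A -> B, 60% conversion
    ra, rb = ma * (1.0 - conv), mb + ma * conv
    return [0.9 * ra, 0.05 * rb]                # separator recycles 90% A, 5% B

def g(x):
    fa, fb = flowsheet_pass(x)
    return [fa - x[0], fb - x[1]]

def broyden(g, x, tol=1e-10, iters=100):
    """Broyden's method; with B0 = -I the first step is exactly a
    successive-substitution pass."""
    n = len(x)
    B = [[-1.0 * (i == j) for j in range(n)] for i in range(n)]
    r = g(x)
    for _ in range(iters):
        if max(abs(v) for v in r) < tol:
            return x
        # solve B s = -r (2x2 Cramer's rule for brevity)
        det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
        s = [(-r[0] * B[1][1] + r[1] * B[0][1]) / det,
             (-r[1] * B[0][0] + r[0] * B[1][0]) / det]
        x = [x[0] + s[0], x[1] + s[1]]
        r_new = g(x)
        # Broyden rank-one update: B += (y - B s) s^T / (s^T s)
        y = [r_new[i] - r[i] for i in range(n)]
        Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
        ss = sum(v * v for v in s)
        for i in range(n):
            for j in range(n):
                B[i][j] += (y[i] - Bs[i]) * s[j] / ss
        r = r_new
    return x

x = broyden(g, [0.0, 0.0])
print([round(v, 4) for v in x])   # -> [5.625, 0.4934]
```

Later steps use only secant information, so no further Jacobian evaluations are needed, and extra design equations would simply enlarge the system g(x) = 0 rather than add nested loops.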
He pays particular attention to the problem of generating an initial approximation to the Jacobian and advocates the use of finite differencing, admitting the heavy computational cost but pointing out the possibility of detecting functional singularity and hence badly posed problems. The results in his Table 4 should arouse interest, since they show that it is sometimes easier to attain high accuracy than a lower accuracy, and this point is worth more elaboration.

Mahalec et al. (6B4) adopt a different approach, reverting to the early ideas of Rosen, who used simple linear 'split-fraction' models to represent each unit in solving the process balances, generating these models from the input-output data derived from rigorous unit subroutines. They demonstrate that split-fraction models are not adequate, but that models linearized about the current iteration point give good results. As they point out, this is a quasi-Newton procedure applied simultaneously to all connecting stream variables, and they compare a diagonal approximation to the Jacobian, based on finite differences, with the Broyden approximation (using Schubert's sparse update). As expected, the Broyden formula is superior. In contrast to Perkins, they conclude that it is impractical to perturb all variables to generate a full initial Jacobian by finite differences, and use diagonal initial approximations except for the submatrices corresponding to 'strongly interacting' units (in practice these are taken to be the chemical reactors). They are
of course dealing with a much larger system than Perkins, and further numerical comparisons between the two approaches would be of interest. In particular, it would be interesting to see if the results bear out the conclusion of Hernandez & Sargent (6B2), based on a rough analysis of computational effort, that decomposition is in general preferable to simultaneous solution.

I have several more detailed comments and questions on this paper by Mahalec et al.:

(a) It is unnecessary to treat output and input variables linked by a stream as separate variables. Using the identifications provided by the 'connectivity relations', the size of the system of equations is virtually halved.

(b) In Sec. 5 it is stated that the Jacobian itself is updated and Eqs. (5.1) solved from scratch at each iteration. The computational effort could be reduced by a factor of n by updating the triangular factors of B directly. It is true that this may be unsafe for ill-conditioned systems, since the pivot sequence is fixed from the beginning, and if an initial diagonal approximation to B is used this pivot sequence must be chosen arbitrarily. However a test could be included for restart in case of difficulty.

(c) The stream variables are initialized by making 'several' successive resubstitution iterations, but the authors do not say how they determine the tear set for this purpose, nor how they deal with the design specifications in this phase.

(d) It should be noted that the tear-set corrections obtained by the procedure described in Sec. 4.4 are not the same as those obtained by applying the quasi-Newton algorithm to the torn system (as described by Perkins). For the example given in the paper, with the link Y4 = X1 torn, the latter approach yields

    (I - A33 A22 A11) ΔX1 = r1

while the procedure in Sec. 4.4 yields

    (I - A11 A33 A22) ΔX1 = Y4 - Y4'.

Since this does not yield a Newton algorithm if true Jacobian matrices are used, the convergence characteristics are not likely to be as good as those of the Perkins algorithm. Note also that this procedure again involves redundant storage and computation; in the example, Y4 is computed from the separator module, so that ΔY4 and hence A33 are not required.

Mahalec et al. discuss the approach of Umeda et al., who used analytically linearized models, but rejected this in favour of numerical generation of linear models because of the onus put on the user of providing both full and linear models for each unit. However QUASILIN (6A5) dispenses with the full models and provides a special language to facilitate the writing of the linear models. In effect the user writes algebraic expressions for the coefficients occurring in the linear models, and a set of standard equation types is provided to further shorten the work. These coefficients are evaluated from initial variable estimates and the linear system for the whole process is then solved, using a sparse matrix method; this gives all the interconnecting stream variables, from which new values of the linear coefficients can be computed. QUASILIN is therefore an equation-based system, but is quite different in philosophy from that underlying SPEEDUP (6B2), which provides a language and facilities for users to write unit models consisting of general sets of algebraic equations (linear or nonlinear); the executive then assembles the equations for all the units, identifying variables common to a pair of units in a linking stream, and then solves the resulting complete set of (nonlinear) equations using partitioning and tearing techniques with a quasi-Newton method.

Of course the unit models required by QUASILIN or SPEEDUP do not involve solution of the equations, and hence are much easier to write than subroutines, removing most of the labour from creating a useful 'package' (as noted by Klemes et al., 6A7). However, as pointed out by Evans et al. (6A3), they cannot make use of existing unit routines, and the development of a useful library of models is still a substantial task. The use of the simultaneous solution approach of Mahalec et al. (6B4) or the approach of Perkins (6B1) to tearing therefore seems to offer an attractive way of combining the best of both worlds. However, as more design equations are added, the structure of these equations becomes increasingly important, and the above-mentioned problem of selecting a corresponding valid set of additional adjustable variables becomes steadily more difficult. It is natural to ask if this last operation cannot be automated, or more generally if an algorithm cannot be devised to deal with mixed systems of equations and procedures. This is the subject of our own contribution (6B2), where we describe such an algorithm, which also uses automatic algebraic manipulation to exploit the structure of the equations. The new version of SPEEDUP incorporating this algorithm can therefore make use of existing subroutines while retaining the advantages of an equation-based system.
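The successive-linearization strategy described above for QUASILIN might be sketched as follows (the unit, its coefficient expressions and all numbers are invented for illustration): the user supplies expressions for the coefficients of a linear model, the executive solves the assembled linear system for all variables, re-evaluates the coefficients at the new point, and repeats.

```python
# Schematic of a successive-linearization scheme of the kind described
# for QUASILIN; the coefficient expressions below are invented.

def coefficients(x):
    """User-written coefficient expressions, evaluated at the current
    estimate x = (x1, x2); here the 'unit' coefficient k depends on x1,
    so the model is only locally linear."""
    k = 0.5 + 0.05 * x[0]
    A = [[1.0, 1.0],    # overall balance: x1 + x2 = 10
         [-k, 1.0]]     # linearized unit model: x2 = k * x1
    b = [10.0, 0.0]
    return A, b

def solve2(A, b):
    """Solve the assembled 2x2 linear system (a sparse solver in a real
    system with many units and streams)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def successive_linearization(x, tol=1e-12, iters=100):
    for _ in range(iters):
        A, b = coefficients(x)     # re-evaluate linear coefficients
        x_new = solve2(A, b)       # solve the whole linear flowsheet
        if max(abs(x_new[i] - x[i]) for i in range(2)) < tol:
            return x_new
        x = x_new
    return x

x1, x2 = successive_linearization([5.0, 5.0])
# x1 satisfies 0.05*x1**2 + 1.5*x1 - 10 = 0, i.e. x1 approx 5.6155, x2 approx 4.3845
```

Note that the user never writes a solution procedure: only the coefficient expressions change from unit to unit, while the linear solve over all interconnecting variables is the executive's business.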
The use of automatic algebraic manipulation is again a controversial subject on which there will no doubt be differing views.

The paper by Barrett & Walsh (6B7) tackles a different aspect of the computational problem. They note that a large portion of the computation time in design or simulation is taken up by the calculation of physical properties, and set about reducing the load by the use of simple local models. These models are chosen to reflect the known structure of the relations for the different properties, and the parameters in them are fitted by using data obtained from the general physical property package available. The hope is that the fitting time is more than compensated by the reduction in the number of calls to the more sophisticated package, and this is amply realized in the examples cited in the paper. The basic idea is not new, but the authors make two innovations which enormously add to the effectiveness of their method. The first is a systematic method of storage and use of data-points already calculated from the general package, and the second is a systematic calculation of error-bounds which indicate when the current local model is sufficiently accurate and when it needs updating. A subsidiary benefit is that their package (TPIF) also yields reliable derivatives with almost no extra computation. So far they have concentrated on thermodynamic properties, and suggest that the method could easily be extended to other physical properties. But why stop at physical properties? The idea can be extended to locally valid simple models of all kinds, and opens the way to a significant leap forward in the economic computing capability of flowsheeting packages. In this sense, the paper is probably the most significant contribution to flowsheeting at this conference.

The remaining paper in the section on algorithms is a contribution to the problem of generating optimum computation sequences for procedure-based systems. Kuru & Hortacsu (6B6) have implemented algorithms for partitioning and tearing and a new method for generating the computational order. They use the reachability matrix, generated by Mah's method, to carry out the partitioning, and Pho and Lapidus' BTA algorithm for tearing. They point out that the 'Process Matrix Method' described by Crowe et al. for obtaining the computational order is not a valid partitioning algorithm, and their own method is used to order the partitions found from the reachability matrix; the Process Matrix Method is used to order nodes within each partition. It is evident that there is a certain overlap of functions between the various algorithms used, and elimination of the redundancy would improve the efficiency. In view of the work of Upadhye & Grens [6], some would disagree with the objective function used, and I would also disagree with the particular choice of algorithms as the most efficient now available for the purpose, as explained in a recent review [7].

CASE STUDIES

In common with many other fields of applied science, there is a dearth of good case studies of the industrial use of flowsheeting packages. The time is long past when the simple announcement of the fact that an industrial company has actually used a flowsheeting package to solve a real problem is sufficient justification for a paper at a conference. To be useful to those working in the field, a case study must give sufficient data on the problem to make clear just what is involved in the calculations and the possible sources of difficulties, if not enough to enable the calculations to be repeated. Of course there is the ever-present problem of commercial secrecy, but the companies concerned must beware of the risk that the paucity of information given may well be misinterpreted: the best kept secret in any company is just how little it really knows!

So I was somewhat disappointed by the first case study on the list, from Proposto et al. (6C1). The gas separation plant described is complex enough to be a genuine test for any package, and the range of compounds involved is such that no specialized physical property data would be required. The exercise was also obviously a success, for the improvements achieved were significant. The paper starts with a quick outline of the package used (in this case an in-house package), dwelling mainly on the physical property correlations incorporated. The original and modified plant flowsheets are given in sufficient detail, and some typical computation times are given for the various units, but since the number of components involved and the number of plates in the various columns are not given, these times mean very little. What one would hope for is some typical test-run conditions and the requisite plant parameters with corresponding times, even if these do not correspond to the actual conditions of the plant. Finally, although the resulting plant improvements are listed, there is no comment on difficulties encountered in carrying out the flowsheeting exercise. Perhaps there were none, in which case we have nothing to learn from this case study!

The second paper, by van Cooten et al. (6C2), is a model of what a case study should be. The problem is clearly defined, the flow-diagram and typical run data are given, various questions concerned with the use of the package are discussed, and a final summary of the benefits is given.

Gulf Oil must feel a little rueful about the revelations made in the last paper by Barker & Fletcher (6C3) on the operation of some parts of their refinery! This is not really a case study of the use of a flowsheeting package, but a study of the operation of several columns using a steady-state distillation program (DISGULF2) specially written for the purpose. Since the authors have available their own flowsheeting package (PEETPACK) they may have something to tell us on the relative merits of special and general programs for this type of study. The problem of the pentane-splitter reminded me of a comprehensive study of a similar system made a long time ago by Keating & Townend [8], and perhaps their results will be of interest to the authors.

REFERENCES

1. R. W. H. Sargent & A. W. Westerberg, SPEED UP (Simulation Programme for the Economic Evaluation and Design of Unsteady-state Processes) in chemical engineering design. Trans. Inst. Chem. Engrs. 42, 190 (1964).
2. R. S. H. Mah & M. Rafal, Automatic program generation in chemical engineering computation. Trans. Inst. Chem. Engrs. 49, 101 (1971).
3. M. J. Leigh, A computer flowsheeting programme incorporating algebraic analysis of the problem structure. Ph.D. Thesis, University of London (1973).
4. D. H. Cherry, Data organization in chemical plant design. Ph.D. Thesis, University of Cambridge (1975).
5. G. Stephanopoulos & A. W. Westerberg, Studies in process synthesis - II. Chem. Engng Sci. 31, 195 (1976).
6. R. S. Upadhye & E. A. Grens II, Selection of decompositions for chemical process simulation. A.I.Ch.E. J. 21(1), 136 (1975).
7. R. W. H. Sargent, The decomposition of systems of procedures and algebraic equations. Numerical Analysis - Proc. Biennial Conf., Dundee 1977 (Ed. G. A. Watson), Lecture Notes in Mathematics, Vol. 630, pp. 158-178. Springer-Verlag, Berlin (1978).
8. J. M. Keating & D. S. Townend, Superfractionator controllability data obtained by the use of a digital computer. In Proc. of the Joint Symposium on Instrumentation and Computation in Process Development and Plant Design (Ed. P. A. Rottenberg). Institution of Chemical Engineers, London (1959).