ANN based estimator for distillation—inferential control


Chemical Engineering and Processing 44 (2005) 785–795

ANN based estimator for distillation—inferential control Vijander Singh∗ , Indra Gupta, H.O. Gupta Electrical Engineering Department, Indian Institute of Technology Roorkee, Roorkee, Uttaranchal 247667, India Received 1 September 2003; received in revised form 11 February 2004; accepted 11 August 2004 Available online 19 November 2004

Abstract

Typical production objectives in a distillation process require the delivery of products whose compositions meet certain specifications. The distillation control system, therefore, must hold product compositions as near the set points as possible in the face of upsets. A distillation column is generally subjected to disturbances in the feed, and control of product quality is often achieved by maintaining a suitable tray temperature near its set point. Secondary measurements are used to adjust the values of the manipulated variables, as the controlled variables are not easily measured or are not economically viable to measure (inferential control). In the present paper, an artificial neural network (ANN) based estimator to estimate the composition of the distillate is proposed. Nowadays, with the advent of digital computers, the demand of the time is to amalgamate the control of various variables to achieve the best results in optimum time. It is therefore required to monitor all the desired variables and perform the control action (feed forward, feedback and inferential) as per the algorithm adopted. The developed estimator is tested and the results are compared. The comparison shows that the predictions made by the neural network are in good agreement with the results of simulation.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Inferential control; Distillation control system; Artificial neural network

1. Introduction

The distillation control system must hold product composition as near the set point(s) as possible in the face of upsets. The disturbances are generally in the feed. Control is difficult because the product quality cannot be measured economically on line: the instrumentation is either very expensive, and/or measurement lags and sampling delays make it impossible to design an effective control system. A solution to this problem is the use of secondary measurements in conjunction with a mathematical model of the process to estimate the product quality. An estimator predicts product quality from a linear combination of process input and output measurements. The control strategy is to use selected measurements of both process inputs and outputs to estimate the effect of measured and unmeasured disturbances on the product quality, and then

Corresponding author. Tel.: +91 1332 284294; fax: +91 1332 285231. E-mail address: [email protected] (V. Singh).

0255-2701/$ – see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.cep.2004.08.010

to use a standard control system to adjust the control effort so as to maintain the product quality at the desired level. This strategy reduces approximately to a feed forward control system when there are no measurements of process outputs. Application of the estimator to a simulated multicomponent distillation column shows that the composition control achieved with an estimator based on temperature, reflux and steam flow measurements is comparable to that achieved with instantaneous composition measurements. The estimated composition may be used in a control scheme to determine the valve position directly, or it may be used to manipulate the set point of a temperature controller, as in parallel cascade control. This is the notion behind inferential control, developed by Joseph and Brosilow [5] in 1978. The inferential control scheme uses measurements of secondary outputs, in this instance selected tray temperatures, and of manipulated variables to estimate the effect of unmeasured disturbances in the feed on product quality. The estimated product compositions are then used in a scheme to achieve improved composition control. Use of large digital
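As a concrete illustration of such a static estimator, the sketch below fits a linear map from secondary measurements (tray temperatures, reflux and steam flows) to product composition by ordinary least squares. All variable names, coefficients and the synthetic data are illustrative assumptions, not taken from the paper or from Brosilow's design procedure.

```python
import numpy as np

# Sketch of a static linear inferential estimator: product composition is
# regressed on secondary measurements. The data are synthetic: composition
# is generated (for illustration only) as an unknown linear map plus noise.
rng = np.random.default_rng(0)

n_samples = 200
tray_temps = 90 + 5 * rng.random((n_samples, 3))   # three tray temperatures
reflux = 40 + 2 * rng.random((n_samples, 1))       # reflux flow
steam = 50 + 2 * rng.random((n_samples, 1))        # steam flow
X = np.hstack([tray_temps, reflux, steam])

true_w = np.array([0.002, -0.003, 0.001, 0.005, -0.004])  # assumed "true" map
y = 0.5 + X @ true_w + 0.001 * rng.standard_normal(n_samples)

# Least-squares estimator: append a bias column and solve Xb w ~ y.
Xb = np.hstack([X, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Estimated composition for a new measurement vector (within training range).
x_new = np.array([92.0, 91.0, 93.0, 41.0, 51.0, 1.0])
x_est = x_new @ w
```

In Brosilow's framework the measurements would first be screened for sensitivity to modeling error and noise; the plain least-squares fit here only shows the "linear combination of measurements" idea.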


computers for distillation calculations was not investigated before 1958, although the high speed of computation seemed to offer economies and presented the opportunity of making calculations not otherwise possible. Amundson and Pontinen [1], in 1958, introduced the use of digital computers to solve the distillation column problem. For general multi-component mixtures the coefficients also depend in a highly non-linear fashion on the compositions; thus, the solution becomes difficult. The solution obtained should be available for comparison and should be accurate; this is made possible with the help of a large digital computer. Choe and Luyben [2], in 1987, took up a rigorous dynamic model of the distillation column. Most dynamic models make two simplifying assumptions, namely negligible vapor holdup and constant pressure, but that paper demonstrated that these assumptions lead to erroneous predictions of dynamic responses when the column pressure is high (i.e. greater than 10 atmospheres) or low (i.e. vacuum columns). In 1990, Rovaglio et al. [3] solved the distillation column problem with the help of a rigorous model, which is reliable for practical purposes. An industrial example was taken to show the practical implementation and real economic value of feed forward control. Feed forward control action reduces the inherent error when a feedback control structure is used to infer composition. When process dead times are large, load upsets are frequent, and high quality is required, feedback control alone cannot serve the purpose; feed forward control is then required to evaluate the proper values of the manipulated variables so as to cancel the effects of input variations. The control of many industrial processes is difficult because online measurement of product quality is complicated, owing to the non-existence of suitable measurement technology.
Weber and Brosilow [4], in 1972, cited one solution to this problem: using secondary measurements in conjunction with a mathematical model of the process to estimate product quality. The method includes procedures for selecting, from those available, the output measurements that yield an estimator relatively insensitive to modeling error and measurement noise. The estimator developed for control of a multi-component distillation column is based on temperature, reflux and steam flow measurements. The control achieved with the estimator is comparable to that achieved with instantaneous composition measurements and is far superior to composition control achieved by maintaining a constant temperature on any single stage of the column. Weber and Brosilow [4] designed the estimator in three steps:

(1) The selection of the appropriate measurements from those available.
(2) The inversion of the process model so as to obtain an estimate of the unmeasured process disturbances from the measurements.

(3) Application of the process model so as to map the estimated and measured process inputs into the estimate of product quality.

Finally, this model was tested for its validity on a 16-stage distillation column. More important is to develop algorithms for selecting the most appropriate subset of the available process output measurements. Joseph and Brosilow [5], in 1978, presented a method for designing an estimator to infer unmeasurable product qualities from secondary measurements. The secondary measurements are selected so as to minimize the number of such measurements required to obtain an accurate estimate. The application of the design procedures to a static inferential control system for product composition is described, the dynamic structure of the linear inferential control system is then discussed, and rigorous methods for the design of sub-optimal dynamic estimators are presented. In 1991 and 1992, Marmol and Luyben [6,7] presented inferential model-based control of multi-component batch distillation. Two approaches were explored to estimate the distillate composition: a rigorous steady-state estimator and a quasi-dynamic non-linear estimator. The models developed provide good estimation of the distillate composition using only one temperature measurement. Bhagat [8], in 1990, discussed neural networks briefly, with two practical examples involving CSTRs: the first studied the change in outlet stream concentration with changes in inlet stream concentration, and the second involved identifying the degree of mixing in a reactor or vessel. In 1994, Morris et al. [9] examined the contribution that various network methodologies can make to the process modeling and control toolbox.
Feed forward networks with sigmoidal activation functions, radial basis function networks and auto-associative networks were reviewed and studied using data from industrial processes. Finally, the concept of dynamic networks was introduced with an example of nonlinear predictive control. MacMurray and Himmelblau [13], in 1994, described the modeling of a packed distillation column with an artificial neural network (ANN) and provided an example of complex modeling; a change in the sign of the gain was observed under various operating conditions [13]. Ou and Rhinehart [14] demonstrated a parallel model structure for general non-linear model predictive control. The model comprises a group of sub-models, each providing a prediction of one process output at one selected future point in time. A neural network is used for each sub-model, and the prediction model is termed a grouped neural network (GNN). The work demonstrates the implementation of grouped neural network model predictive control (GNNMPC) on a non-linear, multivariable, constrained pilot-scale distillation unit [14]. Tamura and Tateishi [15] discussed the capabilities of a neural network with a finite number of hidden units and showed, with the support of mathematical proof, that a four-

layered feed forward network is superior to a three-layered feed forward network in terms of the number of parameters needed for the training data. Kung and Hwang [16] proposed algebraic projection analysis and provided an analytical solution for the optimal hidden-unit size and learning rate of back propagation neural networks. Murata et al. [17] investigated the problem of determining the optimal number of parameters in a neural network from a statistical point of view; the new information criterion (NIC) proposed therein measures the relative merits of two models having the same structure but different numbers of parameters, and concludes whether more neurons should be added to the network or not. Kano et al. [18] presented a scheme to control the product composition in a multi-component distillation column. The distillate and bottom compositions are estimated from online measured process variables. The inferential models for estimating the product compositions are constructed using dynamic partial least squares (PLS) regression on the basis of simulated time-series data. From detailed dynamic simulation results, it is found that the cascade control system based on the proposed dynamic PLS model works much better than the usual tray temperature control system. Kano et al. [19] proposed a new inferential control scheme termed "predictive inferential control", in which future compositions predicted from online measured process variables are controlled instead of estimates of the current compositions. The key concept is to realize feedback control with a feed forward effect by using the inherent nature of a distillation column. An approach to fault detection is described by Brydon et al. [20], which uses neural network pattern classifiers trained on data from a rigorous differential-equation-based simulation of a pilot plant column. Two case studies were presented, both considering only plant data. For two classes of process data, a neural network and a K-means classifier both produced excellent diagnoses; for an additional three classes of plant operation, the neural network again provided accurate classifications, while the K-means classifier failed to categorize the data [20]. Sbarbaro et al. [21] presented the traditional approach to including multi-dimensional information in conventional control systems and proposed a new structure based on pattern recognition, using artificial neural networks and finite state machines as a framework for designing the control system. Bakshi and Stephanopoulos [22] derived a methodology for pattern-based supervisory control and fault diagnosis, based on multi-scale extraction of trends from process data; an explicit mapping is learned between the features extracted at multiple scales and the corresponding process conditions, using the technique of induction by decision trees. Taking advantage of a technique developed by Kolmogorov, Kurkova [23] provided a direct proof of the universal approximation capabilities of perceptron-type networks with two hidden layers. Lippmann [24] demonstrated the computational power of different neural net models and the effectiveness of simple error-correction training procedures; single- and multi-layer perceptrons, which can be used for pattern classification, are described, as well as Kohonen's feature map algorithm, which can be used for clustering or as a vector quantizer.

2. Simulation algorithm

The realistic distillation column [12] consists of a non-ideal column with NC components, non-equimolal overflow, and inefficient trays. In the present paper, the following assumptions are made for developing the model:

(1) Liquid on the tray is perfectly mixed and incompressible.
(2) Tray vapor holdups are negligible.
(3) Dynamics of the condenser and the reboiler are neglected.
(4) Vapor and liquid are in thermal equilibrium but not in phase equilibrium; the departure from phase equilibrium is described by the Murphree vapor efficiency.

Under these assumptions, the steady state operation of each stage is described by the following equations, commonly referred to as the MESH equations [MESH = material balance equations, efficiency relations, summation equations, and heat (enthalpy) balance equations]. Here, the stage number i takes integer values from 1 to NT.

Li+1 + Vi−1 − Li − Vi = 0 (material balance equations)    (1)

yi − yi−1 = ηij [yi*(xi, Ti, pi) − yi−1] (stage efficiency relations)    (2)

where yi = vi/Vi and xi = li/Li.

Li = Σ(j=1 to NC) lij (summation equations)    (3)

Vi = Σ(j=1 to NC) vij    (4)

Li+1 hi+1 + Vi−1 Hi−1 − Li hi − Vi Hi = 0 (enthalpy balance equation)    (5)

Eqs. (1)–(5) are used to represent an equilibrium condenser and an equilibrium reboiler by the removal of variables corresponding to a liquid stream above the condenser and a vapor stream below a reboiler, and the inclusion of condenser and reboiler heat duties Qc and QB in the respective enthalpy balance equations. For the simulation of a distillation column the quantities [10], such as feed composition, flow rate, temperature and


pressure, column pressure, and stage efficiencies are assumed to be specified. The basic steps of the algorithm reflecting the above assumptions for the simplified multi-component distillation column are:

Step 1: Input data for column size, components, physical properties, feeds, and initial conditions (liquid compositions, liquid flow rates and temperatures on all trays).
Step 2: Calculate initial tray holdups and the pressure profile.
Step 3: Calculate the temperatures and vapor compositions from the vapor–liquid equilibrium data.
Step 4: Calculate liquid and vapor enthalpies.
Step 5: Calculate vapor flow rates on all trays, starting at the column base, using the algebraic form of the energy equations.
Step 6: Evaluate all derivatives of the component continuity equations for all components on all trays plus the reflux drum and the column base.
Step 7: Integrate all ODEs (using Euler's method).
Step 8: Calculate new total liquid holdups from the sum of the component holdups. Then calculate the new liquid mole fractions from the component holdups and the total holdups.
Step 9: Calculate new liquid flow rates from the new total holdups for all trays.
Step 10: Go to Step 3 for the next time step.

The case under study is a multi-component system (Fig. 1) of five components with constant relative volatility throughout the column and 100% efficient trays, i.e. the vapor leaving a tray is in equilibrium with the liquid on it. A single feed stream is fed as saturated liquid onto feed tray NF (NF = 5). The feed flow rate is F (kmol/h) and its composition is z (mole fraction). The overhead vapor is totally condensed in a condenser and flows into the reflux drum, whose liquid holdup is MD (kmol). The contents of the drum are assumed to be perfectly mixed, with composition xD (mole fraction). The liquid in the drum is at its bubble point. Reflux is pumped back to the top tray NT (NT = 15) of the column at a rate R (kmol/h). Overhead distillate product is removed at a rate D (kmol/h). At the base of the column, liquid bottoms product is removed at a rate B (kmol/h), with composition xB (mole fraction). The vapor boilup is generated in the reboiler at a rate V (kmol/h).

The algorithm presented is translated into a C-language program for the distillation column discussed. The main objective of the simulation program is to generate patterns. To vary the reboiler duty QB (kJ/h) for obtaining the various patterns, the following equation is used:

QB = QB + ran(i)    (6)

where ran(i) is a random number generated using a library function srand(), scaled to lie in the range 0.013–0.881. The change in the reboiler duty changes the temperature profile of the column, and with the changed temperature profile we get a changed distillate quality. In this way, 130 patterns of temperature profiles and the corresponding distillate compositions are generated. These are then used for training and testing the neural network model.
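The pattern-generation loop can be sketched as follows. The simulate_column routine is a hypothetical stand-in for Steps 3–9 of the algorithm (a toy linear response, not the rigorous tray-to-tray model), and the units and initial value of QB are illustrative assumptions.

```python
import random

random.seed(1)

NT = 15                 # trays; plus reboiler and reflux drum -> 17 temperatures
N_PATTERNS = 130

def simulate_column(qb):
    """Placeholder for Steps 3-9: given a reboiler duty, return a 17-element
    temperature profile and a 10-element distillate composition (5 liquid +
    5 vapor). A toy linear response is used here purely so the loop runs."""
    temps = [80.0 + 0.5 * i + 0.01 * qb for i in range(NT + 2)]
    comps = [max(0.0, 0.5 - 0.05 * j - 1e-5 * qb) for j in range(10)]
    return temps, comps

qb = 1000.0             # initial reboiler duty (illustrative)
patterns = []
for i in range(N_PATTERNS):
    qb += random.uniform(0.013, 0.881)   # Eq. (6): QB = QB + ran(i)
    temps, comps = simulate_column(qb)
    patterns.append((temps, comps))

# Split as in the paper: 110 patterns for training, 20 held out for testing.
train, test = patterns[:110], patterns[110:]
```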

3. Artificial neural network modeling

3.1. Neuron model

A neuron model consists of a processing element [11] with synaptic input connections and a single output. The signal flow of the neuron inputs xni is considered to be unidirectional, as is the neuron's output signal flow. A general neuron symbol is shown in Fig. 2. The neuron's output signal is given by the relationship

o = f(wT xn) = f( Σ(i=1 to n) wi xni )    (7)

where w is the weight vector, defined as

w ≜ [w1 w2 ... wn]T

and xn is the input vector

xn ≜ [xn1 xn2 ... xnn]T

The function f(wT xn) is often referred to as the activation function. The variable net is defined as the scalar product of the weight and input vectors:

net ≜ wT xn    (8)

Using Eq. (8) in Eq. (7), we get

o = f(net)    (9)

It is observed from Eq. (7) that the neuron, as a processing node, performs the summation of its weighted inputs and subsequently performs the non-linear operation f(net) through its activation function. Typical activation functions used are the bipolar continuous function

f(net) ≜ 2/[1 + exp(−λ net)] − 1    (10)

and the bipolar binary function

f(net) ≜ +1 if net > 0, −1 if net < 0    (11)

where λ > 0 in Eq. (10) is proportional to the neuron gain, determining the steepness of the continuous function f(net) near net = 0. By shifting and scaling the bipolar activation functions defined by Eqs. (10) and (11), the unipolar continuous activation function can be obtained as

f(net) ≜ 1/[1 + exp(−λ net)]    (12)
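A minimal numerical sketch of the neuron model of Eqs. (7)–(12); the weights and inputs are arbitrary values chosen for illustration.

```python
import math

def unipolar(net, lam=1.0):
    """Unipolar continuous activation, Eq. (12): f(net) = 1/(1+exp(-lambda*net))."""
    return 1.0 / (1.0 + math.exp(-lam * net))

def bipolar(net, lam=1.0):
    """Bipolar continuous activation, Eq. (10): f(net) = 2/(1+exp(-lambda*net)) - 1."""
    return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

def neuron_output(w, xn, f=unipolar):
    """Eqs. (7)-(9): net = w^T xn, then o = f(net)."""
    net = sum(wi * xi for wi, xi in zip(w, xn))
    return f(net)

w = [0.5, -0.3, 0.8]               # arbitrary weight vector
xn = [1.0, 2.0, 0.5]               # arbitrary input vector
o = neuron_output(w, xn)           # net = 0.5 - 0.6 + 0.4 = 0.3
```

With λ = 1 and net = 0.3, the unipolar output is 1/(1 + e⁻⁰·³) ≈ 0.574.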

Fig. 1. Schematic diagram of distillation column with instrumentation and control component.


and the unipolar binary function

f(net) ≜ +1 if net > 0, 0 if net < 0    (13)

Fig. 2. General symbol of neuron.

3.2. Delta learning rule for a multi-perceptron layer

The back propagation training algorithm allows experiential acquisition of input–output mapping knowledge within multilayer networks. Input patterns are submitted sequentially during back propagation training. If a pattern is submitted and its classification or association is determined to be erroneous, the synaptic weights as well as the thresholds are adjusted so that the current least mean square classification error is reduced. The comparison of target and actual values of the input–output mapping, and the adjustment if needed, continue until all mapping examples from the training set are learned within an acceptable overall error. During the association or classification phase, the trained neural network itself operates in a feed forward manner. However, the weight adjustment enforced by the learning rule propagates exactly backwards, from the output layer through the hidden layers towards the input layer. To formulate the learning algorithm, the simple continuous perceptron network involving K neurons shown in Fig. 3 will be considered:

o = Γ[W yn]    (14)

Fig. 3. Single-layer network with continuous perceptrons.

where the input and output vectors and the weight matrix are

yn = [yn1 yn2 ... ynJ]T,    o = [o1 o2 ... oK]T

W = [wkj],  k = 1, 2, ..., K; j = 1, 2, ..., J  (a K × J matrix)

and the non-linear diagonal operator Γ[·] is

Γ[·] = diag[f(·), f(·), ..., f(·)]

The desired output vector is d ≜ [d1 d2 ... dK]T, and

net = W yn    (15)

The generalized error expression includes all squared errors at the outputs k = 1, 2, ..., K:

Ep = (1/2) Σ(k=1 to K) (dpk − opk)² = (1/2) ||dp − op||²    (16)

for a specific pattern p, where p = 1, 2, ..., P. Let us assume that a gradient descent search is performed to reduce the error Ep through the adjustment of the weights. The individual weight adjustment is computed as

Δwkj = −η ∂E/∂wkj    (17)

where the error E is defined in Eq. (16). For each node in layer k, k = 1, 2, ..., K, we can write, using Eq. (15),

netk = Σ(j=1 to J) wkj ynj    (18)

and further, using Eq. (14), the neuron's output is

ok = f(netk)    (19)

The error signal term δ, called delta, produced by the kth neuron is defined for this layer as

δok ≜ −∂E/∂(netk)    (20)

It is obvious that the gradient component ∂E/∂wkj depends only on netk of a single neuron, since the error at the output of the kth neuron is contributed to only by the weights wkj,


for j = 1, 2, ..., J, for a fixed k. Thus, using the chain rule, we may write

∂E/∂wkj = ∂E/∂(netk) × ∂(netk)/∂wkj    (21)

The second term of the product in Eq. (21) is the derivative of the sum of products of weights and patterns as in Eq. (18). Since the values ynj, for j = 1, 2, ..., J, are constant for a fixed pattern at the input, we obtain

∂(netk)/∂wkj = ynj    (22)

Combining Eqs. (20) and (22) leads to the following form of Eq. (21):

∂E/∂wkj = −δok ynj    (23)

The weight adjustment formula, Eq. (17), can be rewritten using the error signal term δok as

Δwkj = η δok ynj    (24)

The expression Eq. (24) represents the general formula for delta training/learning weight adjustments for a single-layer network. Note that Δwkj in Eq. (24) does not depend on the form of the activation function. To adapt the weights, the error signal term δok introduced in Eq. (20) needs to be computed for the kth continuous perceptron. E is a composite function of netk; therefore, it can be expressed, for k = 1, 2, ..., K, as

E(netk) = E[ok(netk)]    (25)

Thus, from Eq. (20),

δok = −(∂E/∂ok) × (∂ok/∂(netk))    (26)

Denoting the second term in Eq. (26) as the derivative of the activation function,

f′k(netk) ≜ ∂ok/∂(netk)    (27)

and noting that

∂E/∂ok = −(dk − ok)    (28)

allows rewriting Eq. (26) as

δok = (dk − ok) f′k(netk), for k = 1, 2, ..., K    (29)

Eq. (29) shows that the error signal term δok is the local error (dk − ok) at the output of the kth neuron scaled by the multiplicative factor f′k(netk), which is the slope of the activation function computed at the excitation value

netk = f⁻¹(ok)    (30)

The final formula for the weight adjustment of the single-layer network can now be obtained from Eq. (24) as

Δwkj = η (dk − ok) f′k(netk) ynj, for k = 1, 2, ..., K and j = 1, 2, ..., J    (31)

Summarizing the above discussion, the updated individual weights under the delta learning rule can be expressed, for k = 1, 2, ..., K and j = 1, 2, ..., J, as

w′kj = wkj + η (dk − ok) f′k(netk) ynj    (32)

Eqs. (31) and (32) apply to any non-linear and differentiable activation function f(net) of the neuron. Let us examine the delta training rule for the unipolar continuous activation function defined in Eq. (12), for which f′(net) can be obtained (taking λ = 1) as

f′(net) = exp(−net)/[1 + exp(−net)]²    (33)

This can be rewritten as

f′(net) = [1/(1 + exp(−net))] × [(1 + exp(−net) − 1)/(1 + exp(−net))]    (34)

Again using Eq. (12) in Eq. (34), we get

f′(net) = o(1 − o)    (35)

The delta value of Eq. (29) for this activation function can therefore be rewritten as

δok = (dk − ok) ok (1 − ok)    (36)

The updated weight values become

wkj = wkj + η (dk − ok) ok (1 − ok) ynj    (37)

for ok = 1/(1 + exp(−netk)). The updated weights under the delta learning rule for the single-layer network can be expressed in vector notation as

W′ = W + η δo ynT    (38)

where the error signal vector δo is defined as the column vector of the individual error signal terms, δo = [δo1 δo2 ... δoK]T.
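The delta learning rule of Eqs. (36)–(38), for a single layer of unipolar-sigmoid neurons, can be sketched as below. The network size, learning parameter and training pairs are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

J, K = 4, 2                               # inputs and output neurons (arbitrary)
W = 0.1 * rng.standard_normal((K, J))     # K x J weight matrix, randomized
eta = 0.5                                 # learning parameter

def sigmoid(net):
    """Unipolar continuous activation, Eq. (12), with lambda = 1."""
    return 1.0 / (1.0 + np.exp(-net))

# Two illustrative training pairs (yn, d).
patterns = [
    (np.array([0.1, 0.9, 0.2, 0.8]), np.array([1.0, 0.0])),
    (np.array([0.9, 0.1, 0.8, 0.2]), np.array([0.0, 1.0])),
]

for epoch in range(2000):
    for yn, d in patterns:
        o = sigmoid(W @ yn)                   # Eq. (14): o = Gamma[W yn]
        delta_o = (d - o) * o * (1.0 - o)     # Eq. (36): (dk-ok) ok (1-ok)
        W = W + eta * np.outer(delta_o, yn)   # Eq. (38): W' = W + eta delta_o yn^T

# After training, the outputs should approach the desired vectors.
o0 = sigmoid(W @ patterns[0][0])
o1 = sigmoid(W @ patterns[1][0])
```

Pattern-by-pattern updating as shown here matches the sequential submission of patterns described in Section 3.2.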


4. Proposed ANN based estimator for distillation column

The ANN model has forward-flowing information in predictive mode and back-propagated error corrections in learning mode. Such nets are usually organized into layers of neurons, with connections made between neurons of adjacent layers; each neuron is connected so that it receives signals from every neuron in the immediately preceding layer. An input layer receives the input; one or more intermediate layers (also called hidden layers) lie between the input layer and the output layer, which communicates the results externally. The ANN based estimator developed for a distillation column assumes a mixture of NC components and NT trays, with the column reboiler at the bottom and a condenser at the top. The estimator is proposed to estimate the distillate quality from the temperature profile of the column. There are NT + 2 temperature inputs: the NT trays, the reflux drum, and the reboiler. The output consists of NC liquid compositions and NC vapor compositions, i.e. 2 × NC outputs. The estimator therefore contains NT + 2 input neurons and 2 × NC output neurons. An input vector of NT + 2 elements (the temperature profile of the column) is given to the input layer of the network. The weights are initially randomized; as the net undergoes training, the errors between the results of the output neurons and the desired corresponding target values are propagated backward through the net, and this backward propagation of error signals is used to update the connection weights. Finally, a network is achieved which can predict the output for any input vector. The input neurons transform the input signal and transmit the resulting value to the hidden layer. Each neuron in the hidden layers individually sums the signals it receives, together with the weighted signal from the bias neuron, and transmits the result to each of the neurons in the next layer. Ultimately, the neurons in the output layer receive weighted signals from the neurons in the penultimate layer, sum the signals, and emit the transformed sums as the output of the net. The output vector is composed of the 2 × NC composition outputs of the distillate.

The temperature profile of the trays in the distillation column is highly non-linear, as the system is made very complex by the five-component mixture. To incorporate the non-linearities of these patterns in the ANN model, three hidden layers are used in the proposed estimator. With three hidden layers acceptable accuracy is achieved; increasing the number of hidden layers beyond three yields no further improvement in accuracy, while with fewer than three hidden layers the accuracy is not acceptable. The trained network with three hidden layers is then used to estimate the distillate composition for any given temperature profile of the distillation column.
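A sketch of a forward pass through the estimator architecture described above, with 17 temperature inputs (15 trays plus reboiler and reflux drum), three 35-neuron hidden layers and 10 composition outputs as in the paper. The weights here are random placeholders (a trained estimator would obtain them by back propagation), and the bias handling and activation choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes in forward order: 17 inputs, three hidden layers of 35
# neurons, 10 outputs (5 liquid + 5 vapor distillate compositions).
sizes = [17, 35, 35, 35, 10]

# Random placeholder weights; each layer gets one extra column for the
# bias-neuron input mentioned in the text.
weights = [0.1 * rng.standard_normal((m, n + 1))
           for n, m in zip(sizes[:-1], sizes[1:])]

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def estimate(temps, weights):
    """Forward pass: temperature profile in, composition estimate out."""
    a = np.asarray(temps, dtype=float)
    for W in weights:
        a = sigmoid(W @ np.append(a, 1.0))   # append bias input of 1
    return a

profile = 90.0 + rng.random(17)     # an arbitrary temperature profile
comp = estimate(profile, weights)   # 10 values, each in (0, 1)
```

With sigmoid output neurons every estimated composition lies in (0, 1), which suits mole fractions; in practice the temperature inputs would also be scaled before training.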

5. Comparison of results Proposed artificial neural network based estimator is tested for 15-tray column with a reboiler and a reflux drum with five component mixture. The 20 temperature profiles and the corresponding distillate composition used for testing are the one not used in training. The results obtained with the help of ANN based estimator are compared with the re-

Fig. 4. Proposed neural network for the distillation column.

V. Singh et al. / Chemical Engineering and Processing 44 (2005) 785–795

793

The estimated compositions are compared with the results of simulation obtained using the semi-rigorous model. The results for the distillate compositions are shown in Figs. 5 and 6. As seen from these figures, the compositions estimated by the proposed ANN based estimator are close to those obtained from the semi-rigorous model. In Figs. 5 and 6, the liquid composition xd5 and the vapor compositions yd4 and yd5, respectively, are zero in the distillate product.

Fig. 5. Liquid composition of components with reboiler temperature.

Fig. 6. Vapor composition of components with reboiler temperature.

V. Singh et al. / Chemical Engineering and Processing 44 (2005) 785–795

6. Discussions and conclusions

The distillate product of a distillation control system must be held as near its set point(s) as possible in the face of upsets, which generally occur in the flow and composition of the feed. Control of the product composition is difficult because the product quality cannot be measured economically on line: the instrumentation is either infeasible, or measurement lags and sampling delays make it impossible to design an effective control system. This problem is solved by using secondary measurements in conjunction with a mathematical model of the process to estimate the product quality. The artificial neural network based estimator developed here can be used for the inferential control of a distillation column; its low computational burden and high speed make it well suited to the distillation control system, which is generally non-linear in nature.

For the simulation study, a 15-tray column with a reboiler and a reflux drum, separating a five-component mixture, is considered for testing the estimator. One hundred and thirty input-output patterns are generated using the simulation program and are used for training the estimator of Fig. 4; some of these patterns are set aside for testing. The temperature profile taken as the input vector consists of 17 entries: the temperatures of the 15 trays, the reboiler and the reflux drum. The output vector of the estimator comprises the five liquid and five vapor distillate compositions of the mixture considered. The estimator's input vector therefore has 17 elements and its output vector has 10 elements. A 5-layered network model is taken with a [17, 10, 35, 35, 35] configuration, i.e. 17 input neurons, 10 output neurons and 35 neurons in each of the three hidden layers. The network is trained using 110 patterns and tested on the remaining 20. Training the estimator took about 60,000 × 110 iterations and about 45 h. On a 1.2 GHz Intel Pentium-IV processor, the developed simulation program takes 0.16 s per execution whereas the developed ANN based estimator takes 0.05 s for the same task; thus a total time saving of 68.75% can be achieved using the ANN model, without sacrificing accuracy.
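The network configuration described above can be sketched as a plain feed-forward pass. This is an illustrative reconstruction only: the sigmoid activation, the small random weight initialization, the temperature values and their normalization are assumptions, not details taken from the paper, and the network is shown untrained.

```python
import numpy as np

# The reported configuration [17, 10, 35, 35, 35] lists the input and
# output widths first, so the forward order is 17 -> 35 -> 35 -> 35 -> 10:
# 17 temperatures in (15 trays + reboiler + reflux drum),
# 10 compositions out (5 liquid + 5 vapor distillate compositions).
LAYER_SIZES = [17, 35, 35, 35, 10]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_weights(sizes, rng):
    # One (fan-in x fan-out) weight matrix and one bias vector per layer pair.
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def estimate(temps, weights):
    # Forward pass: normalized temperature profile in, composition estimates out.
    a = np.asarray(temps, dtype=float)
    for W, b in weights:
        a = sigmoid(a @ W + b)
    return a

rng = np.random.default_rng(0)
weights = init_weights(LAYER_SIZES, rng)

# Hypothetical temperature profile, normalized to roughly [-0.5, 0.5]
# so the sigmoids do not saturate.
temps = (np.linspace(340.0, 380.0, 17) - 360.0) / 40.0
out = estimate(temps, weights)
print(out.shape)  # (10,)

# The reported 68.75% time saving follows from the quoted run times:
print(round((0.16 - 0.05) / 0.16, 4))  # 0.6875
```

Once trained on the 110 simulated patterns, such a network replaces the 0.16 s semi-rigorous simulation with a 0.05 s forward pass, which is where the 68.75% figure comes from.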

Appendix A. Nomenclature

B	bottom product rate (kmols/h)
d	desired output vector
dp	desired output vector for pth pattern
di	desired output from ith neuron
dpk	desired output from kth neuron for pth pattern
D	distillate product rate (kmols/h)
Ep	least squared error for pth pattern
f(net)	activation function
Fi	total feed flow rate into ith tray (kmols/h)
hF	total molar enthalpy of feed (kJ/kmol)
hfij	component feed enthalpy (kJ/kmol)
hi	total molar enthalpy of liquid mixture (kJ/kmol)
Hi	total molar enthalpy of vapor (kJ/kmol)
hlij	component liquid enthalpy (kJ/kmol)
HNi,j	hidden neuron for ith hidden layer and jth node
Hvij	component vapor enthalpy (kJ/kmol)
INB	input neuron for reboiler temperature
IND	input neuron for reflux drum temperature
INi	input neuron for ith tray temperature
K, L, M	number of neurons in the three hidden layers, respectively
Kij	equilibrium constant
Li	total liquid flow rate leaving the tray (kmols/h)
lij	component liquid flow rate leaving the ith tray (kmols/h)
MB	liquid molar holdup in reboiler (kmols)
MD	liquid molar holdup in reflux drum (kmols)
Mi	liquid molar holdup on ith tray (kmols)
NC	number of components
net	scalar product of weight vector and input vector
neti	scalar product of ith weight vector and input vector
NT	total number of trays in distillation column
O	output vector of neuron
Ok	kth output of neuron processing node
ONi	output neuron for ith output
QB	reboiler heat duty (kJ/h)
QC	condenser heat duty (kJ/h)
R	reflux rate (kmols/h)
v	weight vector of hidden layer
vn	updated weights of hidden layer
vnij	connection weights of ith node of one layer to jth node of preceding layer
V	weight matrix of hidden layer
Vi	total vapor flow rate from the tray (kmols/h)
vij	component vapor flow rate from the tray (kmols/h)
w	multiplicative weight vector
wi	multiplicative weight for ith input
wn	updated weights of input layer
wij	multiplicative weights for input to ith neuron from jth input element
W	weight matrix
x	liquid composition of more volatile component (mole fraction)
xFij	component liquid composition of jth component in feed (mole fraction)
xij	liquid composition of jth component on ith tray (mole fraction)
xn	input vector to neuron
xni	ith input to neuron
y	vapor composition of more volatile component (mole fraction); also column vector for hidden layers (ANN notation)
y*	equilibrium vapor composition of more volatile component (mole fraction)
yij	vapor composition of jth component on ith tray (mole fraction)
yij*	equilibrium vapor composition of jth component on ith tray (mole fraction)
yn	input vector to neuron layer

Greek symbols
Γ[•]	non-linear diagonal operator
δo	error signal vector
δok	error signal vector produced by kth neuron
δyj	error signal term produced by jth neuron of hidden layer having output y
Δv	weight increment for hidden layer of neurons
Δw	weight increment for input layer of neurons
∇E	error gradient vector
η	learning parameter (positive constant)
ηiv	vaporization efficiency
ηij	Murphree stage efficiency
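For orientation, the back-propagation quantities in the nomenclature are conventionally related by the generalized delta rule. The following is a standard textbook sketch (cf. Zurada [11]), not an equation set reproduced from the paper, written with $y = \Gamma[Vx]$ as the hidden-layer output and $o = \Gamma[Wy]$ as the network output:

$$E_p = \tfrac{1}{2}\sum_k (d_{pk} - o_{pk})^2$$

$$\delta_{ok} = (d_k - o_k)\, f'(net_k), \qquad \delta_{yj} = f'(net_j)\sum_k \delta_{ok}\, w_{kj}$$

$$\Delta W = \eta\, \delta_o\, y^{T}, \qquad \Delta V = \eta\, \delta_y\, x^{T}$$

Index and layer-naming conventions may differ slightly from those used in the paper.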

References

[1] N.R. Amundson, A.J. Pontinen, Multicomponent distillation calculations on a large digital computer, Ind. Eng. Chem. 50 (5) (1958) 730–736.
[2] Y.-S. Choe, W.L. Luyben, Rigorous dynamic models of distillation columns, Ind. Eng. Chem. Res. 26 (10) (1987) 2158–2161.
[3] M. Rovaglio, E. Ranzi, G. Biardi, M. Fontana, R. Domenichini, Rigorous dynamics and feedforward control design for distillation processes, AIChE J. 36 (4) (1990) 576–586.
[4] R. Weber, C. Brosilow, The use of secondary measurements to improve control, AIChE J. 18 (3) (1972) 614–627.
[5] B. Joseph, C.B. Brosilow, Inferential control of processes. Part I: Steady state analysis and design. Part II: The structure and dynamics of inferential control systems. Part III: Construction of suboptimal dynamic estimators, AIChE J. 24 (3) (1978) 485–509.
[6] E.Q. Marmol, W.L. Luyben, C. Georgakis, Application of an extended Luenberger observer to the control of multicomponent batch distillation, Ind. Eng. Chem. Res. 30 (8) (1991) 1870–1880.
[7] E.Q. Marmol, W.L. Luyben, Inferential model based control of multicomponent batch distillation, Chem. Eng. Sci. 47 (1992) 887–898.
[8] P. Bhagat, An introduction to neural nets, Chem. Eng. Prog. (1990) 55–60.
[9] A.J. Morris, G.A. Montague, M.J. Willis, Artificial neural networks: studies in process modeling and control, Trans. I Chem. E 72 (Part A) (1994) 3–19.
[10] W.L. Luyben, Process Modeling, Simulation and Control for Chemical Engineers, McGraw-Hill International Editions, Chemical Engineering Series.
[11] J.M. Zurada, Introduction to Artificial Neural Systems, Jaico Publishing House.
[12] P.B. Deshpande, Distillation Dynamics and Control, Instrument Society of America, Tata McGraw-Hill Publishing Co. Ltd.
[13] J.C. MacMurray, D.M. Himmelblau, Modeling and control of a packed distillation column using artificial neural networks, Comput. Chem. Eng. 19 (10) (1995) 1088.
[14] J. Ou, R.R. Rhinehart, Grouped neural network model predictive control, Control Eng. Pract. 11 (2003) 723–732.
[15] S. Tamura, M. Tateishi, Capabilities of a four-layered feedforward neural network: four layers versus three, IEEE Trans. Neural Networks 8 (2) (1997) 251–255.
[16] S.Y. Kung, J.N. Hwang, An Algebraic Projection Analysis for Optimal Hidden Units Size and Learning Rates in Back Propagation Learning, Princeton University, Department of Electrical Engineering, Princeton, NJ 08544, U.S.A.
[17] N. Murata, S. Yoshizawa, S. Amari, Network information criterion—determining the number of hidden units for an artificial neural network model, IEEE Trans. Neural Networks 5 (6) (1994) 865–872.
[18] M. Kano, N. Showchaiya, S. Hasebe, I. Hashimoto, Inferential control system of distillation composition using dynamic partial least squares regression, J. Process Control 10 (2000) 157–166.
[19] M. Kano, N. Showchaiya, S. Hasebe, I. Hashimoto, Inferential control of distillation composition: selection of model and control configuration, Control Eng. Pract. 11 (8) (2003) 927–933.
[20] D.A. Brydon, J.J. Cilliers, M.J. Willis, Classifying pilot-plant distillation column faults using neural networks, Control Eng. Pract. 5 (10) (1997) 1373–1384.
[21] D. Sbarbaro, P. Espinoza, J. Araneda, A pattern based strategy for using multidimensional sensors in process control, Comput. Chem. Eng. 27 (2003) 1943.
[22] B.R. Bakshi, G. Stephanopoulos, Representation of process trends. IV. Induction of real-time patterns from operating data for diagnosis and supervisory control, Comput. Chem. Eng. 18 (4) (1994) 303–332.
[23] V. Kurkova, Kolmogorov's theorem and multilayer neural networks, Neural Networks 5 (1992) 501–506.
[24] R.P. Lippmann, Neural Nets for Computing, Lincoln Laboratory, M.I.T., Lexington, MA 02173, U.S.A.