Neural networks in extrusion process identification and control

T. Eerikäinen, Y.-H. Zhu and P. Linko*

Laboratory of Biotechnology and Food Engineering, Helsinki University of Technology, SF-02150 Espoo, Finland. *To whom correspondence should be addressed. Presented at the third EFFoST conference 'Food Control: On-Line Control for Improved Quality', 26-29 September 1993, Porto, Portugal.

Although neural networks have become one of the key research objects within artificial intelligence, relatively little information is available on neural networks related to food process control. The interest in such areas as dynamic modelling of food processes has increased, not least due to dramatic improvement and availability of the calculation methods and hardware. In the present case, flat bread extrusion was used as an example food process. Dynamic changes of torque, specific mechanical energy (SME) and pressure were identified (modelled) and controlled using two independently taught feed-forward artificial neural networks (ANN). SME, torque and pressure are system parameters which can be controlled with process parameters, such as feed moisture, mass feed rate and screw speed. Target parameters, such as product expansion index, bulk density, etc. are normally difficult to measure on-line, but can be estimated as functions of the system parameters. For the modelling of the whole flat bread extrusion cooking process a MIMO (multi input and multi output) approach was necessary. The neural network topology for the process model was 21-9-3 and for the controller 18-20-2. The process model was taught with 629 real data samples and the controller with 115 synthetic samples created with the process model. When testing the MIMO controller, the SME and pressure set points were quite well reached. One of the clear advantages of neural networks in the controller design is the ease of constructing a complex MIMO controller.

Keywords: extrusion cooking; dynamic modelling; neural networks; control; MIMO

INTRODUCTION

The interest in dynamic modelling of the extrusion cooking process has increased, not least due to dramatic improvements in PC-based software and hardware. Dynamic models are needed for the identification and/or control of real-time, dynamic processes. Quite recently, Cayot et al. (1991) and Lu et al. (1993) reported on dynamic modelling of extrusion cooking. Both papers refer to the difficulties caused by the multivariable nature of the extrusion process. Further, it is well known that in extrusion cooking real-time measurement of target quality parameters is difficult. Meuser and van Lengerich (1984) have highlighted the value of the system parameters in the quality optimization of extruded flat bread. SME (specific mechanical energy), torque and pressure are system parameters which can be controlled with process parameters, such as feed moisture, mass feed rate and screw speed. Difficult-to-measure target parameters, such as product expansion index, bulk density etc., can be estimated as functions of the system parameters. The modelling of the dynamic behaviour of an extrusion cooker easily leads to the use of sophisticated experimental procedures (Eerikäinen and Linko, 1989). When performing a conventional identification procedure, one should know beforehand whether the system comprises one or several first, second or higher order processes and how they are located, i.e. in a series or in parallel.

A good example of the expertise needed when constructing a multi input and single output (MISO) controller in a conventional way has recently been presented by Hofer and Tan (1993). They designed an extrudate temperature controller using ARMAX (autoregressive moving-average models with auxiliary inputs) model parameters to build the process transfer functions. The control system comprised two different Smith-type temperature predictors coupled with plant functions and two internal model controllers (IMC). The control system apparently worked well, but needed good knowledge of control techniques and many calculations and tunings. On the other hand, neural network models do not require any a priori knowledge of the relationships of the process variables in question, thus offering a simple and straightforward approach to the identification problems. Neural networks are characterized by their ability to learn from exemplar input/output vector pairs through iterative training and to deal with highly non-linear problems. Consequently, neural networks have become one of the biggest research areas of artificial intelligence, but only little information is available on neural networks related to food process control (Eerikäinen, 1993; Eerikäinen et al., 1993a, 1993b; Linko et al., 1993).

Artificial neural networks were initially developed to mimic the function of the brain. The foundations for neural network computation, neuroengineering, were laid in the early 1940s. It took about 40 years until the development in computer and software technologies provided practical means for advanced applied research in neural networks (Linko and Zhu, 1992a). Today, neural networks are applied in, for example, speech recognition, language processing, character recognition (Obermeier and Barron, 1989), image feature classification (Moallemi, 1991) and in modelling and control (Bhat and McAvoy, 1990; Hoskins and Himmelblau, 1988; Levin et al., 1991; Linko and Zhu, 1992b, 1992c; Miller et al., 1990; Ungar et al., 1990). Neural network-based control, also called neuromorphic control (Fukuda and Shibata, 1992; Tanomaru and Omatu, 1992), can be used in many ways to control a process due to the learning ability and mapping capabilities of artificial neural networks (ANN), which provide a means of controlling non-linear processes too difficult for conventional linear controllers. In particular, model-based controllers have been shown to be useful when the process is non-linear or large time delays exist (Willis et al., 1992). As a feedback controller, a neural network can either be used as a direct process controller or to adjust the parameters of a conventional controller (Ichikawa and Sawa, 1992). The different control techniques investigated can be classified into a number of major methods, such as supervised control, inverse control, neural adaptive control, back-propagation through time (BTT) and adaptive critics (Linko and Zhu, 1992b; Tanomaru and Omatu, 1992; Werbos, 1990). Good overviews of control techniques exploiting neural networks have been given (Fukuda and Shibata, 1992; Hunt et al., 1992; Miller et al., 1990).

Ungar (1990) mentions some problems concerning the neural network control of chemical plants. Many chemical processes, such as extrusion cooking in the present case, have a spatial credit assignment problem. Such processes have many sensors and controllers typical of multi input and multi output (MIMO) systems, and it is not clear how to connect sensors and controllers or how changes to multiple controllers interact. Another problem that Ungar (1990) mentions is the process delay, as significant lag times between the time a response is observed and the time a control action is taken indicate that the system is not invertible (Eerikäinen, 1993). In the present work, dynamic changes of torque, SME and pressure in flat bread extrusion were identified (modelled) and controlled using the MIMO approach and two independently taught feed-forward ANNs.

NEURAL NETWORK PRINCIPLES

Neural network models are specified by the network topology, node characteristics and training or learning rules (Lippmann, 1987). An artificial neural network normally consists of several layers of nodes (neurons), which are organized groups of so-called processing units. A neural network has an input layer, one or more so-called hidden layers and an output layer. The connection to the outside world is handled by the input and output layers. The number of processing units in the input and output layers corresponds to the desired model inputs and outputs but, in the hidden layer(s), the optimal number of nodes is more or less determined by trial and error. More complex relationships between input and output data may require more hidden units and hidden layers, but too many hidden units can lead to unwanted effects such as learning process noise. Each neuron is usually, but not necessarily, connected to every neuron in the next forward layer. Every connection between two neurons has a weight factor, a real number usually within the interval of about [0, 1], describing the strength of the connection in analogy to the synaptic strength of a neural connection.

The coded (normalized) inputs to a neuron are multiplied with the weight factors and the weighted values are summed up. The output of a neuron is prepared from this sum by normalization, for example by evaluating it against some suitable transfer function. The most commonly used transfer functions are the sigmoid (S-shaped) functions, such as the logistic (min = 0, max = 1) or hyperbolic tangent (min = −1, max = +1) functions. Other transfer functions often mentioned are threshold logic (linearly increasing from 0 to 1) and the Gaussian function (Lippmann, 1987; Hunt and Sbarbaro, 1991). Different neurons may have different transfer functions.

The main function of a neural network can be summarized as an interpolation or mapping from known or so-called teacher values (which are often experimentally measured) to the requested and/or simulated values. It is said that traditional neural networks can essentially approximate any functional mapping, if given enough training data to learn the map (Werbos, 1991). In short, the network produces some outputs from given inputs depending on the weights between different connections. In the learning process, the weights are modified, starting from given small random values, so that the produced outputs correspond to the desired output values within a certain error margin. It should be noted that the inputs are given as normalized (coded) values, usually between about 0 and 1, and the teacher values correspond to the minimum and maximum values of the transfer function used in the output layer. The system compares the calculated and desired output values and uses a learning algorithm to change the weights, if needed.

The most widely used learning method is back-propagation, developed by Werbos (1974) and rediscovered and popularized by Rumelhart and McClelland (1986). In back-propagation the weights are modified via the propagation of an error signal backward from the outputs to the inputs. This method has also been used in our work. Back-propagation is a generalization of the least mean square procedure, where the target is to minimize the mean squared error between the desired and actual outputs. In the back-propagation method the local output errors of all k output units are first calculated as the difference between the desired and actual output, multiplied by the derivative of the transfer function of the local sum. From these values the so-called delta weights can be obtained by multiplying the local output error by the learning coefficient of the output layer and the output from the corresponding hidden unit. Local hidden errors are calculated in the same manner. When all of the delta weights have been calculated, the delta values are added to the previous weights to get the new weight values. The procedure is repeated by giving a new input to the network, and the iteration is continued, normally thousands of times, until proper performance (usually a certain given limit error) of the network is reached.
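As a concrete illustration of this layered sum-and-squash computation, the sketch below runs one forward pass through a small fully connected network. It is our illustration only, not the authors' program (which was written in Visual C++ and is not reproduced in the paper); the 3-4-2 layer sizes and the logistic transfer function are arbitrary example choices.

```python
import numpy as np

def logistic(x):
    # Logistic sigmoid transfer function (min = 0, max = 1)
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_output):
    """One forward pass through a fully connected three-layer network.
    x is the normalized (coded) input vector; the input layer only transmits it."""
    hidden_out = logistic(w_hidden @ x)      # weighted sums passed through the transfer function
    return logistic(w_output @ hidden_out)   # the output layer repeats the same operation

# A 3-4-2 example network with small random initial weights
rng = np.random.default_rng(1)
w_h = rng.uniform(0.0, 0.1, (4, 3))
w_o = rng.uniform(0.0, 0.1, (2, 4))
print(forward(np.array([0.2, 0.5, 0.8]), w_h, w_o))
```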

EXTRUSION EXPERIMENTS

Materials and equipment

The flat bread recipe was composed of 78.35% wheat grits, 7.84% wheat flour, 7.84% rye flour, 4.70% milled flat bread, 0.47% salt, 0.24% sugar and 0.56% non-fat dry milk. The experiments were carried out with a co-rotating twin-screw extruder (Continua 58, Werner & Pfleiderer GmbH, Stuttgart, Germany), l/d ratio 20.9, using a flat bread die with a slit 45 mm wide and 1.8 mm high. The extruder barrel consisted of five modules. The first module was cooled with cold tap water to ensure free flour flow in the feed section. The last two modules were connected to a two-channel oil heat exchanger (STO 2-12-24-D6, Single Temperiertechnik GmbH, Wernau, Germany). Measured on-line variables were screw speed (rev min⁻¹), flour (g min⁻¹) and water feed rates (ml min⁻¹), mass and heating oil temperatures (°C), both at four different positions, and pressure (bar) at the die. SME (W h kg⁻¹), mass feed rate (g min⁻¹) and feed moisture (%) were calculated. Water was fed with a peristaltic pump (Masterflex 7523-02, Cole-Parmer Instrument Co., Chicago, USA) having a rear jack provided for a remote 4-20 mA signal. Variable values were recorded every 10 s (i.e. at 0.1 Hz frequency) with a data-acquisition system (Keithley Series 500-522, 14-bit A/D converter and cards 500-AIM1, AIM3 and AIM7). Set points were given with a D/A converter card (PCL-726, 12-bit resolution, six freely configurable channels).
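The calculated variables follow directly from the measured ones. The paper does not spell out its formulas, so the sketch below uses one common definition of SME (mechanical power from screw speed and torque divided by mass flow) and a simple mass balance for feed moisture; treat both as our assumptions rather than the authors' exact expressions.

```python
import math

def sme_wh_per_kg(screw_speed_rpm, torque_nm, mass_feed_g_min):
    """Specific mechanical energy in W h kg-1, assuming SME = 2*pi*n*M / m_dot."""
    power_w = 2.0 * math.pi * (screw_speed_rpm / 60.0) * torque_nm  # n in rev s-1
    mass_flow_kg_h = mass_feed_g_min * 60.0 / 1000.0
    return power_w / mass_flow_kg_h   # W per (kg h-1) equals W h kg-1

def feed_moisture_pct(flour_g_min, flour_moisture_pct, water_ml_min):
    """Total feed moisture (%), taking water density as 1 g ml-1."""
    water_g_min = flour_g_min * flour_moisture_pct / 100.0 + water_ml_min
    return 100.0 * water_g_min / (flour_g_min + water_ml_min)
```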


Experimental arrangements

Extrusion experiments were carried out in order to find out the dynamic behaviour between the process inputs and responses (outputs). Some initial values, such as the moisture of the flour mixture, were given during the opening of the expert user interface, the fuzzy Extruder-Expert (Eerikäinen et al., 1993b). The user interface was also equipped with a semi-automatic calibration routine for the flour feeding device to avoid tedious and/or inaccurate calibrations. A warning was given if the coefficient of determination (R²) of the calibration line was below 0.995 (e.g. due to a weighing error of a calibration sample). The coefficient of determination can be calculated as follows:

R² = Σ(ŷᵢ − ȳ)² / [Σ(ŷᵢ − ȳ)² + Σ(yᵢ − ŷᵢ)²]   (1)

where ŷᵢ is a predicted, ȳ is the average and yᵢ is a measured variable value.

Input variables were in this case feed moisture, mass feed rate and screw speed. Feed moisture and mass feed rates were given directly on-line from the Extruder-Expert, and the program calculated the needed flour and water feed rates. The inaccuracies of the pump head (±2 ml min⁻¹) and the flour feed device (±10 g min⁻¹) caused a ±0.3% absolute error in feed moisture, and ±2% in mass feed rate. The heating system was shut off after start-up to enable the mass temperature inside the extruder to stabilize due to changes in the input variables.

The main idea was to change one input variable at a time stepwise upwards or downwards, let the responses stabilize, and then change the same or another input variable to another level, and so on. In some cases, such as in the start-up and shut-down procedures, every input variable was changed at the same time. Two experimental runs were performed during successive days. Both runs utilized the automatic start-up of Extruder-Expert, and the input values after the start-up were as follows: feed moisture 18 or 19%, screw speed 190 rev min⁻¹ and mass feed rate 1300 g min⁻¹. In the first experiment (SET 1) the input variables were changed as follows, first the feed moisture: 18 → 20 → 17 → 18 (%), then mass feed rate: 1300 → 1100 → 1350 → 1250 (g min⁻¹), then screw speed: 190 → 230 → 180 → 190 (rev min⁻¹). In the last experiments, all three variables were changed at the same time (feed moisture, screw speed, mass feed rate): 18, 190, 1250 → 19, 230, 1350 → 17, 190, 1100 (%; rev min⁻¹; g min⁻¹). The next extrusion pattern (SET 2) was as follows: feed moisture: 19 → 18 → 20 → 17 → 18 (%), then mass feed rate: 1300 → 1000 → 1400 → 1300 (g min⁻¹), then screw speed: 190 → 230 → 170 (rev min⁻¹), and again at last all three variables were changed at the same time (feed moisture, screw speed, mass feed rate): 18, 190, 1300 → 19, 230, 1400 → 17, 170, 1000 → 18, 180, 1300 (%; rev min⁻¹; g min⁻¹).
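Equation (1) expresses R² as the explained variation over the explained-plus-residual variation. A direct transcription (our sketch, not the authors' code):

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination as in Equation (1)."""
    explained = np.sum((predicted - np.mean(measured)) ** 2)
    residual = np.sum((measured - predicted) ** 2)
    return explained / (explained + residual)

# e.g. a calibration line with r_squared(...) < 0.995 would trigger the warning
```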

Programming environment

A data-acquisition driver for the Keithley system was written in assembly language (Turbo Assembler, Borland International). It was installed as a user-defined primitive into an expert system, the Extruder-Expert, constructed with the object-oriented programming system Smalltalk/V 286 (Digitalk, Inc., Los Angeles, USA) as described by Eerikäinen et al. (1993b). The digital-to-analogue driver was written directly in Smalltalk. Data were saved in Smalltalk as '*.csv' files, which were directly compatible with the Microsoft Excel spreadsheet program. Excel was used, for example, to convert raw data files into neural network teaching and testing files. The neural network program was written with Microsoft Visual C++ for Windows, basically as described by Zhu et al. (1993). The network training results were saved as weight files, which were used, in turn, for the initialization of the simulation and control networks of Extruder-Expert. Personal computers with Intel 486 DX 50 MHz (data acquisition) and DX2 66 MHz (all calculations) processors were used.

NEURAL NETWORK MODELLING AND CONTROL

A very usual combination, which was employed in the present work in all experiments, is a feed-forward neural network with the back-propagation learning algorithm. The externally recurrent feed-forward networks employed consisted of three layers: an input layer, one hidden layer and an output layer. A connection between two neurons holds a weight value, which changes during the learning procedure. The networks were fully connected, which means that each neuron distributes its output to each neuron in the next layer. The neurons in the input layer do not perform any computations but only transmit their inputs to each neuron in the (first) hidden layer. Each neuron in the hidden and output layers sums up its inputs, and the sum is passed through a non-linear transfer function to give the output of the neuron. Each neuron in the hidden and output layers also had a weighted connection to a bias set to unity, thus providing the means of internal offset. A hyperbolic tangent transfer function scaled to the monopolar area was used in every hidden and output neuron (Equation 2):

f(Sum) = 0.5·[1.0 + tanh(2·Sum)]   (2)

where Sum is the sum of the inputs to a neuron multiplied with the corresponding weights, and the f(Sum) values were limited to between 0.01 and 0.99. The superiority of the hyperbolic tangent over the often used logistic function, f(Sum) = 1/(1 + e⁻ˢᵘᵐ), is based on faster convergence due to the four times larger derivative values of the function. The derivatives were used for weight updating during the network training process. Almost the same performance can also be reached with a logistic function if about four times larger learning rate coefficients (see Equation 3) are used. Similar results have been shown (Harrington, 1993).

Back-propagation learning algorithm

The widely used back-propagation training algorithm is an iterative gradient algorithm designed to minimize the mean square error between the actual output of a multilayer feed-forward network and the desired output. It requires a continuous differentiable non-linearity (Lippmann, 1987). The following stages briefly describe the algorithm:

1. initialize all weights to small random values;
2. present normalized input values;
3. calculate actual outputs from the inputs used;
4. calculate global and local errors between desired and actual output values;
5. calculate delta weight values recursively from output local errors;
6. update all weights by adding delta weights to the corresponding previous weights;
7. repeat by going to step 2 until the global error reaches the desired level.

When calculating the delta weight values the speed of learning is controlled by the learning coefficient η. Delta weights are calculated as follows:

ΔWji[s] = η·ej[s]·xi[s−1]   (3)

where ΔWji[s] is the weight change in the connection between the jth neuron in layer s and the ith neuron in layer (s−1), ej[s] is the local error of the jth neuron in layer s and xi[s−1] is the output of the ith neuron in layer (s−1). In the program used in the present work, η1 was used for 'normal' and η2 for offset weights, respectively. The so-called momentum term, α, can be used to smooth the oscillations and also to speed up the learning rate. In that case, a portion of the previous delta weight, ΔWji[s](t−1), is included in the updating procedure:

ΔWji[s](t) = η·ej[s]·xi[s−1] + α·ΔWji[s](t−1)   (4)

Weight updating can be done either after every presentation of a sample (inputs and desired outputs), cumulatively after a certain number of samples (called the epoch size) or by presenting all the training samples to the network. To prevent the network from memorizing instead of generalizing, one should avoid repeated or too 'clean' noiseless data. Memorizing usually indicates that the network has only one neuron responding to a particular input activation (Chitra, 1993).
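The following sketch pulls Equations (2)-(4) together into a per-sample training loop: the monopolar hyperbolic tangent transfer function, separate learning coefficients η1 (normal weights) and η2 (offset weights), and the momentum term α. It is our reconstruction for illustration, not the authors' Visual C++ program; the learning-coefficient and iteration defaults are values mentioned in the text, while the momentum value is our assumption.

```python
import numpy as np

def transfer(s):
    # Equation (2): tanh scaled to the monopolar area, outputs limited to [0.01, 0.99]
    return np.clip(0.5 * (1.0 + np.tanh(2.0 * s)), 0.01, 0.99)

def d_transfer(s):
    # Derivative of the (unclipped) Equation (2): sech^2(2*Sum)
    return 1.0 / np.cosh(2.0 * s) ** 2

def train(samples, n_in, n_hid, n_out,
          eta1=0.005, eta2=0.005, alpha=0.9, iterations=2000, seed=0):
    rng = np.random.default_rng(seed)               # fixed seed = 'pseudo random' initial weights
    w1 = rng.uniform(-0.1, 0.1, (n_hid, n_in + 1))  # last column: offset weight (bias input = 1)
    w2 = rng.uniform(-0.1, 0.1, (n_out, n_hid + 1))
    dw1, dw2 = np.zeros_like(w1), np.zeros_like(w2)
    eta_h = np.full(w1.shape, eta1); eta_h[:, -1] = eta2   # eta2 for the offset weights
    eta_o = np.full(w2.shape, eta1); eta_o[:, -1] = eta2
    for _ in range(iterations):
        for x, target in samples:                   # weight update after every sample
            xb = np.append(x, 1.0)                  # append the unity bias input
            s1 = w1 @ xb
            hb = np.append(transfer(s1), 1.0)
            s2 = w2 @ hb
            out = transfer(s2)
            e2 = (target - out) * d_transfer(s2)          # local output errors
            e1 = (w2[:, :-1].T @ e2) * d_transfer(s1)     # local hidden errors
            # Equation (4): delta weights with momentum, added to the previous weights
            dw2 = eta_o * np.outer(e2, hb) + alpha * dw2
            dw1 = eta_h * np.outer(e1, xb) + alpha * dw1
            w2 += dw2
            w1 += dw1
    return w1, w2
```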

Neural network identification experiments

A neural network was first used to construct an extrusion simulation model. It had been noticed earlier, while creating an SME simulation model, that a neural network is a suitable tool for dynamic simulation of MISO problems (Linko et al., 1992). To model the whole process a MIMO approach is needed. Defining a neural network for dynamic simulation purposes involves a large number of 'degrees of freedom'. This denotes, for example, different learning coefficient, momentum term and topology combinations, the number of iterations needed and the range of initial weight values. Topology variations mean variation in the number of input, hidden and output neurons, including the number of time delays for different input and output variables. Sometimes it can be difficult to distinguish when a network starts to learn noise instead of general trends, thus causing unwanted behaviour as the network is tested. Frequently a great number of different topologies will do well, but the optimization of the parameters is difficult and time consuming. For example, the initial values for the weights, given prior to the back-propagation, affect the convergence (typical for multivariable optimization tasks), thus often masking the changes caused by other parameters. One way to avoid the problem caused by different initial weight values was the use of 'pseudo-random values'. This means that initial weight values were generated randomly once, and these values were used in every experiment having a similar topology.

The coefficient of determination (R²) was used to measure the goodness of a neural network model. In neural network training, 2000 iteration cycles with a selected interval of 100 samples from SET 2 (a total of 629 samples), chosen to cover both minimum and maximum output values, were employed, and the well-trained network was tested either with SET 2 or with SET 1 (see the explanations for SET 1 and SET 2 under 'Experimental arrangements'). Thus, the training data set was a small slice from continuous process data which included many different step changes. The fact that the measurements were made in two successive runs, leading to two data sets, actually worked as a repetition to see whether similar extrusion behaviour can be obtained regardless of the process start-up. It was known from previous experiments that the history of an extrusion variable may have a remarkable effect on the current values of all variables.

A series of delay experiments were carried out to find out a proper configuration of the input and output layers. Every variable was handled similarly. For example, when the number of delays was 2, each variable x had its x(t), x(t−1) and x(t−2) terms in the network. Figure 1 shows that four delays gave only slightly better results than three delays, and further experiments were carried out with three time delays, thus avoiding unnecessary complexity.

Figure 1 Effect of input and output delays (d) on the performance of a (6d+3)-(3d+3)-3 network. Torque; SME; pressure; R² average

Next, the effect of the number of hidden neurons in a network with three time delays was tested. Too many hidden neurons may result in excessive learning of noise; too few may hinder the learning of non-linearity. Figure 2 shows that the learning results oscillate as the number of hidden neurons is increased. This can be due to different initial values in the iterations, as the number of connections (and weights) was different in each experiment. The highest average R² value was reached in this case with nine hidden neurons, thus leading to a quite simple 21-9-3 topology (Figure 3). Further experiments were carried out with this kind of a network.

Figure 2 Effect of hidden neuron number (h) on learning results (R²) in 21-h-3 networks. Torque; SME; pressure; R² average

Figure 3 Dynamic neural network model for the extrusion cooking process. Topology 21-9-3. FM = feed moisture, MF = mass feed rate, SS = screw speed, TQ = torque, SME = specific mechanical energy, PR = pressure near the die element. One time step is 10 s

Learning coefficient tests were carried out to find suitable values for fast and accurate learning. Both η1 and η2 were at the same level in each test. Up to 4000 iterations were calculated to make sure that convergence was reached also with small coefficients. Figure 4 shows that, in this case, η1,2 = 0.005 was the best coefficient value. Although learning seemed to be fast and training results were accurate with large learning coefficients, such as 0.02 or 0.1, the testing results revealed that generalization was not at an acceptable level with such coefficients.

After these trials, the neural network could be used as a simulator of the extrusion process within the ranges of the input and teacher values. Figure 5 shows the fit of the learning data (100 samples) with the 21-9-3 network after 6000 iterations using learning coefficients of η1,2 = 0.005. The R² for torque was 0.983, for SME 0.947, for pressure 0.982 and for the average 0.971. It can be seen that the fit was quite good, although the learning of the torque peaks could have been better. When tested with the rest of the data, the generalization capability of the neural network model was found to be very satisfactory. Figure 6 shows the testing result with SET 2 (data from the same extrusion run) and Figure 7 with SET 1. R² in Figure 6 for torque was 0.969, for SME 0.944, for pressure 0.907 and for the average 0.940; in Figure 7, R² for torque was 0.946, for SME 0.939, for pressure 0.903 and for the average 0.929.

To enhance the generalization capability of the model, the neural network was trained with the whole SET 2 and then tested with SET 1. This teaching procedure was very time-consuming due to the large learning data file, and 4000 iterations were run overnight for 17 h with a fast 66 MHz 486 PC. In this case, the R² values after teaching were 0.981, 0.975 and 0.938 for torque, SME and pressure, respectively. The testing result with SET 1 (Figure 8) shows improved fitting of torque (R² = 0.967) and SME (R² = 0.964) and slightly inferior fitting of the pressure data (R² = 0.884). The reason for the inferior pressure result was the relatively low noise in the 100 sample teaching set.
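In code form, the recurrent input arrangement of the 21-9-3 model can be sketched as follows. The helper name and array layout are ours; the layout shown (three process inputs with delays 0-3, three outputs with delays 1-3, teacher values at time t) is one arrangement consistent with the 6d+3 input count of Figure 1 and the topology of Figure 3.

```python
import numpy as np

def lagged_patterns(u, y, d=3):
    """Training pairs for the (6d+3)-h-3 model network of Figure 3.

    u : array (T, 3) - process inputs (FM, MF, SS), sampled every 10 s
    y : array (T, 3) - system responses (TQ, SME, PR)
    Inputs: u(t) ... u(t-d) and y(t-1) ... y(t-d); teacher: y(t).
    """
    pairs = []
    for t in range(d, len(u)):
        x = np.concatenate([u[t - k] for k in range(d + 1)]
                           + [y[t - k] for k in range(1, d + 1)])
        pairs.append((x, y[t]))   # 21 inputs and 3 outputs when d = 3
    return pairs
```

In simulation use, the network's own predictions replace the measured y values in the lag terms, which is what makes the feed-forward network externally recurrent.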

Figure 4 Effect of learning coefficients (η1 = η2) on learning results (R²) when using the 21-9-3 network. Torque; SME; pressure; R² average

Figure 5 Output prediction results for the teacher data (100 samples) with the 21-9-3 network

Figure 6 Output prediction results for the test data (629 samples from the same set) with the 21-9-3 network

Figure 7 Output prediction results for the test data (532 samples from a different set) with the 21-9-3 network

Figure 8 Output prediction results with the 21-9-3 network for SET 1 (532 samples) when the teacher data comprised the whole SET 2 (629 samples)

Neural network control experiments

After a careful search of different neural network controller approaches, the extrusion controller learning arrangement shown in Figure 9 was selected. This resembles 'the general learning structure using a synthetic signal' presented by Hunt and Sbarbaro (1991), which was a modification of the structure used by Psaltis et al. (1988). The process model was the dynamic extrusion model network described above. Figure 9 also gives a slightly simplified flow sheet of the principle of the actual learning procedure, in which S is the set-point vector of the feed moisture, mass feed rate and screw speed set-points and Y is the output vector including the estimated SME, torque and pressure values. Properly delayed Y and S values were used as controller inputs (Iy and Is). The delay operators can be described as follows:

D1·Y = Iy = [ΔYm(t), ΔYm(t−1), ..., ΔYm(t−4), Ym(t−4)]   (5)

D2·S = Is = [S(t−d−1), S(t−d−2), S(t−d−3)]   (6)

D3·S = [S(t), S(t−1), S(t−2), S(t−3)]   (7)

where ΔY(t) = Y(t) − Y(t−d), d is the delay (defined to be 4, equal to 40 s in this case), Ym is the modified Y vector including only SME and pressure, and S is the set-point vector. The error signal E, which is used to calculate the controller network weight changes (ΔW), is the deviation between the delayed set-point vector S and the controller output vector U.

Figure 9 Learning structure for the dynamic extrusion controller. S is the set-point vector of feed moisture, mass feed and screw speed set-points, Y is the output vector including the estimated SME, torque and pressure values, Iy and Is are the controller inputs, D1, D2 and D3 are delay operators, ΔY(t) = Y(t) − Y(t−d), d is the delay, Ym is the modified Y vector, E is an error signal, ΔW includes the controller network weight changes and U is the controller output vector

Two different controller versions were developed. The first one had mass feed rate on both sides of the controller network, and the other had mass feed rate only as a controller input. This means that the control of SME and pressure was carried out by the former controller with feed moisture, screw speed and mass feed rate changes, and by the latter controller with feed moisture and screw speed changes only, but the histories of all three S variables were given as inputs to both controllers. The description from here on denotes the latter controller version. The topology for the controller network was 18-20-2 (Figure 10), and the network was taught with 115 samples in which the S values were changed stepwise. It should be noted that the process model itself (the 21-9-3 network in Figure 3) included the mechanism for feeding back previous Y values.

Figure 10 Dynamic neural network controller. Topology 18-20-2. FM = feed moisture, MF = mass feed rate, SS = screw speed, SME = specific mechanical energy, PR = pressure near the die element. One time step 10 s. ΔY(t) = Y(t) − Y(t−d), e.g. ΔSME(t+3) = SME(t+3) − SME(t−1) when d = 4

After teaching of the controller, the system worked as an extrusion control simulator according to Figure 11. The control network received 'should be' values from the trajectory builder, which actually transports the deviation between the measured and set values four steps ahead and defines how the other deviations should change in this timespace (t, t + 1, t + 2, ..., t + 4).

Figure 11 Neural network control scheme. Symbols as in Figure 9

Figure 12 describes the behaviour of the control variables, feed moisture and screw speed, when different SME and pressure set-points were given. It can be seen that the SME and pressure set-points were reached quite well, but one has to remember that the controller was taught in a rather narrow area to enable a good performance near the optimal process conditions. For example, this controller would not be expected to work well for the process start-up, for which a fuzzy expert controller has been designed (Eerikäinen et al., 1993b).

Figure 12 (a) Controller responses (feed moisture and screw speed values) and (b) process responses (SME and pressure values) to SME and pressure set-point variations. (a) screw speed; feed moisture; mass feed; (b) SME; pressure; SME set-point; pressure set-point

CONCLUSION

One of the clear advantages of using neural networks in extrusion cooker controller design is the ease of constructing even a complex MIMO controller. Good results can be obtained without detailed knowledge of the dynamics of the process to be controlled. In the present work, the applicability of neural networks to extrusion cooking control was clearly demonstrated. We are currently investigating hybrid fuzzy neural expert systems as a further improvement in providing a user friendly and flexible dynamic MIMO control environment for food processes.

NOMENCLATURE

d	delay (number of delays)
e	local error
E	error signal
f	non-linear transfer function
FM	feed moisture (%)
Is	set-point vector as controller input
Iy	delayed output vector as controller input
MF	mass feed rate (g min⁻¹)
PR	pressure near the die element (bar)
R²	coefficient of determination
s	layer
S	set-point vector of the feed moisture, mass feed and screw speed set-points
SME	specific mechanical energy (W h kg⁻¹)
SS	screw speed (rev min⁻¹)
Sum	sum of the inputs multiplied with the corresponding weights
U	controller output vector
W	weight
x	process variable
Y	output vector including the estimated SME, torque and pressure values
ȳ	average variable value
y	measured variable value
ŷ	predicted variable value
Ym	modified Y vector including only SME and pressure
ΔW	controller network weight changes
ΔY	change in the Y vector between time (t) and (t − d)

ACKNOWLEDGEMENTS

The authors are grateful to the Academy of Finland and to the British Food Research Foundation for financial support. The Federal Centre for Cereal, Potato and Lipid Research, Detmold, Germany, is gratefully acknowledged for the help in the extrusion experiments.

REFERENCES

Bhat, N. and McAvoy, T.J. (1990) Use of neural nets for dynamic modelling and control of chemical process systems. Computers Chem. Eng. 14, 573-583

Cayot, N., Bounie, D. et al. (1991) Modelling and identification in extrusion cooking ... pp. 140-145

Chitra, S.P. (1993) Use neural networks for problem solving. Chem. Eng. Prog. 89(4), 44-52

Eerikäinen, T. (1993) ... Doctoral Thesis, Helsinki University of Technology, Espoo

Eerikäinen, T. and Linko, P. (1989) Extrusion cooking modelling, control and optimization. In: Extrusion Cooking (Ed. Mercier, C., Linko, P. and Harper, J.M.) American Association of Cereal Chemists, St Paul, Minnesota, pp. 157-204

Eerikäinen, T., Linko, P., Linko, S., Siimes, T. and Zhu, Y.-H. (1993a) Fuzzy logic and neural network applications in food science and technology. Trends Food Sci. Technol. 4, 237-242

Eerikäinen, T., Zhu, Y.-H. and Linko, P. (1993b) An ... with fuzzy variables and neural network estimation. In: Proceedings of the First European Congress on Fuzzy and Intelligent Technologies, EUFIT '93, Sept 7-10, Aachen, Germany. ELITE Foundation, Vol. 1, pp. 202-207

Fukuda, T. and Shibata, T. (1992) Theory and applications of neural networks for industrial and intelligent control. IEEE Trans. Ind. Electron. 39, 472-489

Harrington, P. de B. (1993) Sigmoid transfer functions in back-propagation neural networks. Anal. Chem. 65, 2167-2168

Hofer, J.M. and Tan, J. (1993) Extrudate temperature control with disturbance prediction. Food Control 4, 17-24

Hoskins, J.C. and Himmelblau, D.M. (1988) Artificial neural network models of knowledge representation in chemical engineering. Computers Chem. Eng. 12, 881-890

Hunt, K.J. and Sbarbaro, D. (1991) Neural networks for non-linear internal model control. Proc. IEE Part D 138, 431-438

Hunt, K.J., Sbarbaro, D., Zbikowski, R. and Gawthrop, P.J. (1992) Neural networks for control systems - a survey. Automatica 28, 1083-1112

Ichikawa, Y. and Sawa, T. (1992) Neural network application for direct feedback controllers. IEEE Trans. Neural Networks 3, 224-231

Levin, E., Gewirtzman, R. and Inbar, G.F. (1991) Neural network architecture for adaptive system modelling and control. Neural Networks 4, 185-191

Linko, P. and Zhu, Y.-H. (1992a) Neural networks in bioengineering. Kemia-Kemi 19, 215-220

Linko, P. and Zhu, Y.-H. (1992b) Neural networks in enzyme engineering. Ann. N.Y. Acad. Sci. 542, 83-101

Linko, P. and Zhu, Y.-H. (1992c) Neural network modelling for real-time variable estimation and prediction in the control of glucoamylase fermentation. Process Biochem. 27, 275-283

Linko, P., Uemura, K. and Eerikäinen, T. (1992) Neural networks in fuzzy extrusion control. IChemE Symp. Ser. 126, 401-410

Linko, P., Eerikäinen, T., Linko, S. and Zhu, Y.-H. (1993) Artificial intelligence for the food industry. In: Proc. AIFA Conference 93, Artificial Intelligence for Agriculture and Food, October 26-28, Nimes, France. EC2, Paris, France, pp. 187-200

Lippmann, R.P. (1987) An introduction to computing with neural nets. IEEE ASSP Mag. 4(2), 4-22

Lu, Q., Mulvaney, S.J., Hsieh, F. and Huff, H.E. (1993) Model and strategies for computer control of a twin-screw extruder. Food Control 4, 25-33

Meuser, F. and van Lengerich, B. (1984) System analytical model for the extrusion of starches. In: Thermal Processing and Quality of Foods (Ed. Zeuthen, P. et al.) Elsevier Applied Science Publishers, London, pp. 175-179

Miller, W.T., Sutton, R.S. and Werbos, P.J. (Eds) (1990) Neural Networks for Control. MIT Press, Cambridge, MA

Moallemi, C. (1991) Classifying cells for cancer diagnosis using neural networks. IEEE Expert 6(6), 8-12

Obermeier, K.K. and Barron, J.J. (1989) Time to get fired up. Byte 14(8), 217-227

Psaltis, D., Sideris, A. and Yamamura, A.A. (1988) A multilayered neural network controller. IEEE Control Systems Mag. 8(2), 17-21

Rumelhart, D.E. and McClelland, J.L. (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge, MA

Tanomaru, J. and Omatu, S. (1992) Process control by on-line trained neural controllers. IEEE Trans. Ind. Elect. 39, 511-521

Ungar, L.H. (1990) A bioreactor benchmark for adaptive network-based process control. In: Neural Networks for Control (Ed. Miller, W.T., Sutton, R.S. and Werbos, P.J.) MIT Press, Cambridge, MA, pp. 387-402

Ungar, L.H., Powell, B.A. and Kamens, S.N. (1990) Adaptive networks for fault diagnosis and process control. Computers Chem. Eng. 14, 561-572

Werbos, P.J. (1974) Beyond Regression: New Tools for Prediction and Analysis in the Behaviour Sciences. PhD Thesis, Harvard University, Committee on Applied Mathematics

Werbos, P.J. (1990) Overview of designs and capabilities. In: Neural Networks for Control (Ed. Miller, W.T., Sutton, R.S. and Werbos, P.J.) MIT Press, Cambridge, MA, pp. 59-66

Werbos, P.J. (1991) An overview of neural networks for control. IEEE Control Systems 11(1), 40-41

Willis, M.J., Montague, G.A., Di Massimo, C., Tham, M.T. and Morris, A.J. (1992) Artificial neural networks in process estimation and control. Automatica 28, 1181-1187

Zhu, Y.-H., Linko, S. and Linko, P. (in press) Neural networks in enzymology. Adv. Mol. Cell. Biol.