Intelligent Control of Dynamic Systems

by T. I. LIU

Department of Mechanical Engineering, California State University, Sacramento, CA 95819, U.S.A.


E. J. KO

Department of Mechanical, Aeronautical and Materials Engineering, University of California, Davis, CA 95616, U.S.A.

and J. LEE

Industrial/University Cooperative Research Centers, National Science Foundation, Washington, DC 20550, U.S.A.

ABSTRACT: A neural network controller is developed to learn the inverse dynamics of unknown dynamic systems and to serve as a feedforward controller. Artificial neural networks, which consist of a set of processing units with interconnections between them, are used to obtain the desired output. The interconnections, known as weights, can be tuned on-line. Hence the controller is adaptive in nature. Neural networks can be used to represent the inverse dynamics of unknown dynamic systems. The error back propagation technique is used in the learning process. The controller is intelligent enough to learn from its experience. The performance of the intelligent controller for learning and control of dynamic systems is very successful.

I. Introduction

Automatic control systems have been used in various kinds of automated equipment, such as robotic manipulators, numerical control machines, automated material handling systems, automated warehouses, etc. It is essential to develop an adaptive and robust control system so as to achieve a highly automated or even unmanned process (1, 2). Highly sophisticated adaptive control systems have been developed by many researchers (3-7). Adaptive control is an advanced technique where the controller parameters can be tuned on a real-time basis, according to changes in the process parameters, to reach the desired performance. However, current adaptive control systems have the following drawbacks. (I) The control systems can only be used when the plant models are already known. These control systems are not intelligent enough to control unknown dynamic systems. (II) While the parameters of the control models can be estimated and tuned on-line, the model forms are fixed and known. Furthermore, most control models are based on linear system theory.



The control system must possess some kind of intelligence so as to learn and control an unknown dynamic plant (8-17). However, it is difficult to use expert system control if the dynamic system itself is unknown. Developing a rule-based system in advance for an unknown dynamic plant is too cumbersome. For the same reason, it is also difficult to implement fuzzy logic control in this situation. To control an unknown dynamic system, the controller must be capable of learning the system dynamics of the plant on-line and then taking adequate action to achieve the predetermined goal. In this work, artificial neural networks are used as controllers to control unknown systems in the continuous time domain. The artificial neural networks are used to learn the inverse dynamics of these systems, and the dynamic systems are controlled successfully by neural networks without prior knowledge of these systems. A brief background of artificial neural networks is given in Section II. The neural network controller is described in Section III. In Section IV, computer simulation of the intelligent control scheme and the results are presented. Three different kinds of dynamic systems controlled by artificial neural networks are simulated. Further discussions and conclusions are provided in Section V.

II. Background of Artificial Neural Networks

The artificial neural network, as indicated by its name, resembles the structure of the human brain. There are more than 100 billion neurons in the human brain. Neurons have a tree-like structure and thereby can receive incoming signals from other neurons through the connections known as synapses. There are more than 1000 synapses on the input and output of every neuron. However, most artificial neural networks are structured in a much simpler form. Research work on neural computing was initiated by W. S. McCulloch and W. Pitts in the early 1940s. Theoretical models were set up in the 1950s and early 1960s by Farley and Clark, Rosenblatt, Widrow and Hoff, etc. Many implementations of "neural computers" were realized in the 1960s. Since then, many researchers have done significant work in this field (18-21).

The artificial neural network is very powerful due to its robust processing and adaptive capability. It consists of a set of nodes, and is referred to as parallel distributed processing. The pattern of connectivity between nodes, known as weights, can be modified according to some preset learning rule. The knowledge of the networks is stored in their interconnections, and their functionality is determined by modifying the strengths of the connections during a learning process (21). Artificial neural networks can learn from experience. This technique has been used successfully in tool condition monitoring, process modeling and robotic applications (22-27). Recently, artificial neural networks have been used as intelligent controllers, and the results are quite encouraging (28-30). The neural network controller performs a new form of adaptive control. The controller has the form of a nonlinear multilayer network, and the controller parameters are the strengths of the connections between the nodes. In order to learn and control unknown dynamic systems adaptively, feedforward neural networks were employed in this work to learn the unknown system dynamics on a real-time basis and to control different kinds of dynamic systems.
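For readers who want a concrete picture of the parallel distributed processing described above, the brief sketch below shows a forward pass through a single-hidden-layer network of the kind used in this work (Fig. 1), with the proportional-gain activation later given as Eq. (1) in Section III. The layer size, weight initialization and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def forward_pass(x, W_hidden, W_output, K=0.3):
    # Each node computes O_j = K * sum_i(W_ji * O_i), i.e. a weighted sum
    # of the preceding layer's outputs passed through a proportional gain.
    hidden = K * (W_hidden * x)             # outputs of the hidden-layer nodes
    output = K * np.dot(W_output, hidden)   # the single output node
    return hidden, output

# Illustrative 20-neuron hidden layer with randomly assigned weights
rng = np.random.default_rng(0)
W_hidden = rng.uniform(-0.5, 0.5, size=20)   # input -> hidden weights
W_output = rng.uniform(-0.5, 0.5, size=20)   # hidden -> output weights

_, u = forward_pass(2.0, W_hidden, W_output)  # network response to an input of 2
print(u)
```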




FIG. 1. Neural network structure.

III. Neural Network Controller

3.1. Intelligent control scheme

The structure of the feedforward neural networks is shown in Fig. 1. It has an input layer, a hidden layer and an output layer. The input and the output layers are the two "visible" layers, located on the left and right of the network, respectively. The dynamics of all the systems being controlled are unknown. The systems have a single input and a single output. Therefore, there is only one node in the input layer and one node in the output layer. Every node in the feedforward neural networks must send its output to a higher layer and receive its input from a lower layer. The control architecture is shown in Fig. 2.

FIG. 2. Intelligent control architecture.


The desired output X_d is input into the neural network. As a result, the output U of the network is generated at its output layer. This network output U is used as the control input to drive the dynamic system and thereby obtain an actual output X. At first this actual output X is quite different from the desired output X_d since the weights of the neural network are randomly assigned. The actual output X is then fed into the same neural network again to get another network output U'. The two neural networks shown in Fig. 2 are actually identical. Based upon the difference between U and U', all the weights are adjusted. The error back propagation method was used for the adjustment. After the adjustment, the neural network is used for learning and control of the dynamic system again. This process is repeated until X reaches X_d.

3.2. Learning scheme

The artificial neural network shown in Fig. 2 was used as a feedforward controller for learning and control of the dynamic system. Every node represents an activation function which is of the form

O_j = f(I_j) = K I_j,     (1)

where I_j = Σ_i W_ji O_i. In the above equation, K is a proportional gain of the node, I_j and O_j are the input and output of the node in the current layer, respectively; W_ji are the weights between nodes in the current and preceding layers.

For the learning process, the error back propagation method is used for adjusting the weights between layers (21). The learning process starts by assigning random values to the weights. This prevents the network from being trapped at a local minimum at the beginning of the process. The desired output is fed to the network in order to calculate the input of the dynamic system. Each node calculates its output by using Eq. (1). The actual output of the dynamic system can be obtained from its input and its unknown transfer function. Because the values of the weights are randomly assigned, the output will be quite different from the desired output. Next, the actual output of the dynamic system is fed to the same neural network and the estimated input is obtained. The difference between the actual input and the estimated input of the dynamic system is used for the adjustment of the weights. The equation used is

e = U − U'.     (2)

The weights are adjusted according to the generalized delta rule. The error δ is propagated backwards as follows (21, 28):

ΔW_ji = N δ O_i.     (3)

In the above equation, N is the learning rate and δ represents the error. The error at the output layer is given by

δ = K(U − U').     (4)

The error for the hidden layer is given by

δ_j = K δ W_j,     (5)

where W_j is the weight between the jth node in the hidden layer and the node in the output layer. The learning process can be used to determine the error and then adjust the weights. This learning process can be repeated until the actual output is equal to the desired output. The feedforward neural networks described above can be employed as an intelligent controller for learning and control of unknown dynamic systems. Since the knowledge is stored in the interconnections of the neural network and can be adjusted, this controller is adaptive in nature. In addition, this controller has a high speed of response since the neural network has very strong computational power.
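As an illustration of how Eqs (1)-(5) fit together, the sketch below runs one learning-and-control cycle of the Fig. 2 architecture in a loop: the network maps X_d to U, a plant produces X, the same network maps X back to an estimated input U', and the weights are updated with the generalized delta rule. This is a minimal reconstruction under assumed layer sizes and initialization; the static stand-in plant and all variable names are placeholders rather than the authors' code.

```python
import numpy as np

K = 0.3   # proportional gain of each node (value used in Section IV)
N = 0.2   # learning rate (value used in Section IV)

rng = np.random.default_rng(1)
n_hidden = 20
W1 = rng.uniform(0.1, 0.5, n_hidden)   # input  -> hidden weights W_ji (illustrative init)
W2 = rng.uniform(0.1, 0.5, n_hidden)   # hidden -> output weights W_j

def forward(x):
    # Eq. (1): every node outputs O_j = K * I_j with I_j = sum_i W_ji O_i
    hidden = K * (W1 * x)
    return hidden, K * np.dot(W2, hidden)

def plant(u):
    # Hypothetical stand-in for the unknown plant (a static gain), for demonstration only
    return 0.8 * u

x_d = 2.0                                # desired output (step of magnitude two)
for _ in range(2000):
    _, u = forward(x_d)                  # network output U drives the plant
    x = plant(u)                         # actual output X
    hidden, u_est = forward(x)           # the same network gives the estimated input U'
    delta = K * (u - u_est)              # output-layer error, Eq. (4)
    delta_h = K * delta * W2             # hidden-layer errors, Eq. (5)
    W2 += N * delta * hidden             # generalized delta rule, Eq. (3)
    W1 += N * delta_h * x

print(x)   # close to the desired output of 2.0 once the weights have adapted
```

In this toy setting the network converges toward the inverse of the stand-in plant, so driving it with X_d yields an actual output near X_d, which is the behaviour the scheme above relies on.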

IV. Computer Simulation and Results

In this work, a step input with a magnitude of two, as shown in Fig. 3, is used as the desired output. The unknown dynamic systems are assumed to be second-order systems. Three different types of dynamic systems are used: an underdamped system, a critically damped system, and an overdamped system. The transfer function of the dynamic systems is assumed to be

X(s)/U(s) = b/(s^2 + a_1 s + a_2).     (6)

In the above equation, a_1, a_2 and b are constants. For the underdamped system, a_1 = 16, a_2 = 100 and b = 100. For the critically damped system, a_1 = 20, a_2 = 100 and b = 100. For the overdamped system, a_1 = 28, a_2 = 100 and b = 100. The learning rate used is 0.2 and the proportional gain is 0.3. Different numbers of nodes in the hidden layer are used to compare the performance of the intelligent controllers. Of course, the whole system behavior changes once the neural network controller is added to the dynamic systems.

FIG. 3. Step input used in the simulation.
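To make Eq. (6) concrete, the following sketch simulates the three assumed plants by integrating the equivalent differential equation x'' + a_1 x' + a_2 x = b u(t) with a forward-Euler step; the step size and the simulation horizon are illustrative choices, not values taken from the paper.

```python
import numpy as np

def simulate_second_order(a1, a2, b, u, dt=0.001, t_end=6.0):
    # Forward-Euler integration of x'' + a1*x' + a2*x = b*u(t), zero initial conditions
    n = int(t_end / dt)
    x, v = 0.0, 0.0                     # output and its derivative
    history = np.empty(n)
    for k in range(n):
        acc = b * u(k * dt) - a1 * v - a2 * x
        x += v * dt
        v += acc * dt
        history[k] = x
    return history

step = lambda t: 2.0                     # open-loop step input of magnitude two
underdamped = simulate_second_order(16, 100, 100, step)
critically_damped = simulate_second_order(20, 100, 100, step)
overdamped = simulate_second_order(28, 100, 100, step)
print(underdamped[-1], critically_damped[-1], overdamped[-1])   # all settle near 2.0
```

Since b = a_2 = 100 for all three parameter sets, the steady-state gain is unity and each open-loop plant settles at the input magnitude of two; the three cases differ only in damping.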



The simulation results for the underdamped system are shown in Fig. 4. The desired output was reached in all the simulations. As the number of nodes increases, the system response becomes faster. The simulation results for the critically damped and overdamped systems are shown in Figs 5 and 6, respectively. The desired output was reached in all the cases. Similarly, the system response becomes faster as the number of nodes increases.

In order to evaluate the influence of the learning rate and the proportional gain on the system response, simulations were performed using different combinations of the learning rate and the proportional gain in all the cases. The simulation results are shown in Fig. 7. The controller with a higher learning rate or higher gain yields a faster system response in all the cases. The effect of the gain appears to be stronger than that of the learning rate.

The settling time and the maximum overshoot of the whole system for different numbers of nodes in the hidden layer of the controller are compared in Fig. 8. The settling time is defined as the time required for the whole system to reach and stay within 1% of the desired output, while the maximum overshoot is defined as the percentage of the maximum peak value reached with respect to the desired output. For the underdamped and critically damped dynamic systems, the settling time decreases as the number of nodes increases. For the overdamped system, the settling time keeps decreasing, reaches its minimum, and then increases again. The number of nodes does affect the settling time in all the cases and its selection is important to the performance of the feedforward controller. The maximum overshoot also fluctuates when the number of nodes changes, as indicated in Fig. 8. There is an optimal number of nodes that minimizes the maximum overshoot. The choice of the number of nodes is critical since the dynamic system is unknown. Some compromise has to be made in this choice.
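For readers reproducing the comparison in Fig. 8, the short sketch below shows one way to extract the two measures defined above (1% settling time, and maximum overshoot as a percentage of the desired output) from a sampled response. The closed-form underdamped step response used here, the function names and the sampling interval are illustrative assumptions.

```python
import numpy as np

def settling_time(response, desired, dt, band=0.01):
    # Time after which the response stays within +/- band of the desired output
    outside = np.abs(response - desired) > band * desired
    if not outside.any():
        return 0.0
    return (np.flatnonzero(outside)[-1] + 1) * dt   # start of the final in-band stretch

def max_overshoot(response, desired):
    # Peak value expressed as a percentage of the desired output
    return 100.0 * response.max() / desired

# Synthetic example: analytic step response of the underdamped plant
# (a_1 = 16, a_2 = 100, b = 100, poles at -8 +/- 6j), sampled at 1 ms
t = np.arange(0.0, 6.0, 0.001)
response = 2.0 * (1 - np.exp(-8 * t) * (np.cos(6 * t) + (8 / 6) * np.sin(6 * t)))

print(settling_time(response, 2.0, 0.001), max_overshoot(response, 2.0))
```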

V. Discussions and Conclusions

The objective of this work is to develop intelligent controllers to control dynamic systems even when their transfer functions are unknown. To meet this objective, neural network controllers are designed for various kinds of dynamic systems. Neural network controllers with different numbers of neurons have been used and compared. The influence of the learning rate and the proportional gain on the system response has also been studied. Computer simulations show that their performance is exceedingly successful. The intelligent controllers are workable even if the dynamic systems being controlled are unknown. This is very beneficial since system identification remains a very difficult task in many situations. However, the performance of this approach for more involved systems, such as nonlinear systems, stochastic processes, and systems with multiple inputs and multiple outputs, requires further investigation. Also, the investigation should be extended to more complex neural network structures as well as different kinds of activation functions.

Based on the above discussions, the following conclusions can be drawn.



FIG. 4. Response of the underdamped system controlled by the intelligent controller.


FIG. 5. Response of the critically damped system controlled by the intelligent controller.

FIG. 6. Response of the overdamped system controlled by the intelligent controller.


FIG. 7. Influence of the learning rate and the proportional gain on the system response.

FIG. 8. Settling time and maximum overshoot of different systems controlled by the intelligent controller.


(1) Artificial neural networks can be used effectively to learn unknown second-order dynamic systems and to control these systems to reach the desired output.
(2) As the number of nodes increases, the speed of the system response increases in all cases.
(3) As the learning rate of artificial neural networks increases, the system response becomes faster in all cases.
(4) As the proportional gain increases, the speed of the system response increases in all cases.
(5) The influence of the number of nodes on the settling time and maximum overshoot varies for different dynamic systems.

Acknowledgement

Partial support of this work by the Hornet Foundation is highly appreciated.

References

(1) M. P. Groover, "Automation, Production Systems, and Computer-Integrated Manufacturing", Prentice-Hall, Englewood Cliffs, NJ, 1987.
(2) T. I. Liu, "Forecasting control and its applications to machine tools", Proc. 16th CIRP Int. Seminar on Manufacturing Systems, Tokyo, Japan, pp. 196-205, Japan Society of Precision Engineering, Tokyo, 1984.
(3) K. J. Astrom, "Adaptive feedback control", Proc. IEEE, Vol. 75, No. 2, pp. 185-209, 1987.
(4) J. J. Craig, "Adaptive Control of Mechanical Manipulators", Addison-Wesley, Reading, MA, 1988.
(5) R. Horowitz and M. Tomizuka, "An adaptive control scheme for mechanical manipulators: compensation of nonlinearity and decoupling control", ASME J. Dynamic Systems, Measurement and Control, Vol. 108, pp. 127-135, 1982.
(6) A. H. Levis et al., "Challenges to control: A collective view", IEEE Trans. Auto. Control, Vol. AC-32, No. 4, pp. 275-285, 1987.
(7) M. C. Mulder, J. Shaw and N. Wagner, "Adaptive control strategies for a biped", Proc. ASME Winter Annual Meeting Symp. on Robotics Research, San Francisco, CA, pp. 113-117, 1989.
(8) A. M. Agogino and S. Srinivas, "Multiple sensor expert system for diagnostic reasoning, monitoring and control of mechanical systems", ASME J. Mech. Syst. Signal Proc., Vol. 2, No. 2, pp. 165-185, 1988.
(9) D. Baechtel, C. Day and S. Chand, "A fuzzy-logic based tuner for proportional integral derivative controllers", Proc. USPS Advanced Technology Conference, Washington, DC, 5-7 Nov., Vol. 3, pp. 1317-1326, 1990.
(10) C. Batur and V. Kasparian, "Intelligent fuzzy expert control", Proc. ASME Winter Annual Meeting Symp. on Intelligent Control Systems, San Francisco, CA, 10-15 Dec., pp. 1-6, 1989.
(11) S. Chang and L. Zadeh, "On fuzzy mapping and control", IEEE Trans. Systems, Man Cybernetics, Vol. SMC-2, pp. 30-34, 1972.
(12) P. Ralston and T. L. Ward, "Fuzzy logic control of machining", ASME Manufacturing Review, Vol. 3, No. 3, pp. 147-154, 1990.


(13) K. Ramamurthi and A. M. Agogino, "Real time expert system for fault tolerant supervisory control", ASME Comput. Engng, Vol. 2, pp. 333-339, 1988.
(14) C. W. de Silva, "A knowledge-based fuzzy tuner for servo controllers", Proc. ASME Winter Annual Meeting Symp. on Intelligent Control Systems, San Francisco, CA, 10-15 Dec., pp. 17-23, 1989.
(15) R. Shoureshi, R. Evans and D. Swedes, "Learning control for autonomous machines", ASME Paper No. 88-WA/DSC-29, 1988.
(16) R. Shoureshi and K. Rahmani, "Intelligent control of building systems", Proc. ASME Winter Annual Meeting Symp. on Intelligent Control Systems, San Francisco, CA, 10-15 Dec., pp. 7-16, 1989.
(17) M. L. Wright, M. W. Green, G. Fiegl and P. F. Cross, "An expert system for real-time control", IEEE Software, pp. 16-24, March 1986.
(18) J. J. Hopfield and D. W. Tank, "Computing with neural circuits: a model", Science, Vol. 233, pp. 625-633, 1986.
(19) T. Kohonen, "An introduction to neural computing", Neural Networks, Vol. 1, pp. 3-16, 1988.
(20) R. Lippmann, "An introduction to computing with neural nets", IEEE ASSP Mag., pp. 4-22, 1987.
(21) D. Rumelhart and J. McClelland, "Parallel Distributed Processing", Vol. 1, MIT Press, Cambridge, MA, 1986.
(22) G. Chryssolouris and M. Guillot, "A comparison of statistical and A.I. approaches to the selection of process parameters in intelligent machining", ASME J. Engng Ind., Vol. 112, pp. 122-128, 1990.
(23) S. Y. Kung and J. N. Hwang, "Neural network architectures for robotic applications", IEEE Trans. Robotics Automation, Vol. 5, No. 5, pp. 641-656, 1989.
(24) T. I. Liu, E. J. Ko and S. L. Sha, "Intelligent monitoring of tapping tools", ASME J. Mater. Shaping Technol., Vol. 8, No. 4, pp. 249-254, 1990.
(25) T. I. Liu, E. J. Ko and S. L. Sha, "Diagnosis of tapping operations using an A.I. approach", ASME J. Mater. Shaping Technol., Vol. 9, No. 1, 1991.
(26) S. Rangwala and D. Dornfeld, "Sensor integration using neural networks for intelligent tool condition monitoring", ASME J. Engng Ind., Vol. 112, No. 3, pp. 219-228, 1990.
(27) S. Rangwala and D. Dornfeld, "Learning and optimization of machining operations using computing abilities of neural networks", IEEE Trans. Systems, Man Cybernetics, Vol. 19, No. 2, 1989.
(28) M. S. Lan, "Adaptive control of unknown dynamical systems via neural network approach", Proc. American Control Conf., Pittsburgh, PA, pp. 910-915, IEEE Press, New York, 1989.
(29) D. Psaltis, A. Sideris and A. A. Yamamura, "A multilayered neural network controller", IEEE Control Systems Mag., pp. 17-20, April 1988.
(30) R. Shoureshi and R. Chu, "Neural space representation of dynamical systems", Proc. 1989 ASME Winter Annual Meeting Symp. on Intelligent Control Systems, San Francisco, CA, pp. 63-68, 1989.

Received: 3 September 1992; Accepted: 10 November 1992
