Neurocomputing 9 (1995) 131-148
Adaptive neuro-control for spacecraft attitude control

K. KrishnaKumar *, S. Rickard, S. Bartholomew

Department of Aerospace Engineering, The University of Alabama, Tuscaloosa, AL 35487-0280, USA

Received 10 May 1993; accepted 30 August 1994

* Corresponding author. Email: [email protected]
Abstract
Spacecraft attitude control is approached as a nonlinear adaptive control problem, and neuro-control, which combines concepts from artificial neural networks and adaptive control, is investigated as an alternative to linear control approaches. Three capabilities of neuro-controllers are demonstrated using a nonlinear model of the Space Station Freedom. These capabilities are: (a) synthesis of robust nonlinear controllers using neural networks; (b) copying an existing control law using neural networks; and (c) adaptively modifying neuro-controller characteristics for varying inertia characteristics. The main components of the adaptive neuro-controllers are an identification network and a controller network. Both these networks are trained using the back-propagation of error learning paradigm. To ensure robustness of the neuro-controller, optimally connected neural networks are synthesized for the identification and the controller networks. For the on-line adaptive control problem, a backpropagation of error technique using a linear adaptive critic is introduced in place of the backpropagation through time technique. Performances of the nonlinear neuro-controllers for the three cases listed above are verified using a nonlinear simulation of the Space Station. Results presented substantiate the feasibility of using neural networks in robust nonlinear adaptive control of spacecraft.

Keywords: Neuro-control; Space station; Robust control; Adaptive control; Linear adaptive critic
1. Introduction

Spacecraft attitude control has been examined in the past using several approaches. In general, these approaches include classical control techniques [1,2] and modern state space techniques [3-5]. With the current interest in an evolutionary
approach for constructing a Space Station (SS) in space, there is a need to examine alternate control techniques that can accommodate such evolutionary changes. Techniques reported in references [1-5] use linear approaches for controller designs and make inertia assumptions that are not valid for most proposed configurations. Also, small perturbation assumptions made during linearization will be violated during nominal operations of the SS. Another substantial deviation from nominal parameters will occur during space shuttle docking and general relative mass motion. These contribute significantly to the changes in moments of inertia and will introduce transient torque moments. To address some of these problems, recent studies [6-8] have focussed on other alternatives. Reference [6] addresses the treatment of changing inertia as an adaptive control problem and uses a linear approach for the same. In [7] SS control is approached as a nonlinear control problem and Lyapunov's second method is used for synthesizing stable control laws. In [8] a feedback linearization technique is used for nonlinear control of the space station. This paper presents an approach using artificial neural networks (ANN) that will provide both nonlinear control and adaptive control capabilities.

In the recent past, revived interest in the working of neural networks has brought out many new approaches to solving engineering problems using mathematical networks that mimic the workings of neural connections in the brain. Neuro-control is an application domain of ANN in which ANN concepts are applied to system identification and control. Artificial neural networks have been used in many areas of control problems. In the domain of space station control, references [9-11] present mass identification and adaptive control approaches using neural networks. In [9], neural networks are used to estimate the inertia properties of the space station. In [10], a neuro-control approach using a linearized pitch degree-of-freedom model is presented. The application of radial basis function networks in direct adaptive control of the space station is discussed in [11].

There are several benefits in using neural networks in adaptive nonlinear control for space applications. These include:
(1) A neuro-controller learns to control a system based on the input-output couplings that exist. Also, neural networks have been shown to extract input-output mappings even out of noise-corrupted data. This implies that a decentralized controller can be implemented using neuro-controllers with direct output feedback. Decentralized control, with direct output feedback, is computationally less demanding and more suited when using space-qualified computers that are limited in their processing power.
(2) The necessity for a learning-type controller for space applications arises due to the conditions of uncertainty in the environment of operation and due to the good possibility of unmodeled dynamics being present in the system to be controlled.
(3) The possibility of component failure in space structures establishes the requirement for adaptive-type controllers.
(4) An important advantage of using neural networks for control is the fixed software/hardware architecture for the controller. This implies that changing
the control algorithm amounts to changing the weights of the neural connections and not the structure of the controller. This feature makes neuro-controllers attractive even if using other control schemes for controller design.

The objective of this study is to examine the use of back-propagation neural networks in providing robust adaptive control capabilities for spacecraft attitude control. The Space Station (SS) is chosen as an ideal example for showing these capabilities. In what follows, we first present the fundamentals of neuro-control relevant to this study and outline a procedure for design and implementation of robust, adaptive neuro-controllers. Next, we present the non-linear SS model used and show the three capabilities of neuro-control in controlling the pitch attitude of the SS. These capabilities are: (a) synthesis of robust non-linear control laws using neural networks; (b) copying an existing control law using neural networks; and (c) adaptively modifying neuro-controller characteristics using back-propagation of error in conjunction with a linear adaptive critic.

2. Adaptive neuro-control concepts
Neuro-control, which combines ANN concepts, optimal control concepts, and adaptive control concepts, is relatively recent. Narendra [12], Werbos [13,14], Barto et al. [15], and many others have made significant contributions to the current practices of neuro-control. The next few sections present the neural network and neuro-control concepts used in this study.

2.1. Artificial Neural Networks

The ANN structure used in this study is shown in Fig. 1. It is assumed that every neuron takes connections from any or all neurons to the left of it. The degree of connectivity depends on the number of connections within the structure. A fully forward-connected network takes all possible connections. This ANN configuration provides a network that can have up to 'h' (number of hidden neurons) hidden layers. It is known that as the number of layers increases, the accuracy of the mapping increases. One drawback of increasing the number of layers is that the number of parameters to be optimized increases and thus affects the generalization capabilities of the network (see Anshelevich et al. [18]). A technique is
Fig. 1. Artificial neural network structure.
outlined in [16] that optimizes the connectivity pattern as part of the supervised learning. This technique has the flexibility to maintain the desired accuracy as well as reduce the number of parameters in the network. Also, the network optimization can potentially have up to 'h' neurons in the first layer. It has been shown before that this is important for the approximation capabilities of the neural network. As shown later, the optimally connected network provides a foundation for synthesizing robust neuro-controllers.

Now, consider the single neuron in the network structure shown in Fig. 1. An individual neuron has many inputs depending on the number of connections. Each connection to the neuron has a weight associated with it. After the net input is calculated, it is converted into an activation value through a functional relationship. The power of ANN lies within this transfer function. Two types of transfer functions are used in this study. These are:
(a) Sigmoidal function (for all hidden neurons): $f(x) = (1 - e^{-\alpha x})/(1 + e^{-\alpha x})$
(b) Ramp function (for all input and output neurons): $f(x) = \alpha x$
In this study, $\alpha$ is chosen to be 1.0. After computing the network outputs, an error is calculated for all outputs by comparing the ANN output to a desired output. Reference [16] documents the equations necessary for the implementation and training of ANN.

2.2. Supervised learning using backpropagation (BP) of error

The most important concept that underlies the ANN design is the concept of learning from experience. There are several useful learning paradigms for ANN, but the most widely used paradigm is the backpropagation (BP) algorithm. In BP, the ANN learns by repeated exposure to a set of training examples. The learning takes place through a first-order reduction in the output error quantity. Backpropagation has been successfully used for mapping of non-linear functions.

2.3. Neuro-control techniques

Before the neuro-control techniques used in this study are outlined, we outline the steps needed to realize an adaptive neuro-controller for the spacecraft attitude control problem (throughout this paper, NNC stands for a neural network controller and NNM stands for a neural network model of the system to be controlled):
(1) Identify a mathematical model (even a crude one) of the system to be controlled.
(2) Copy the model using any technique that provides a robust mapping of the input-output relationship.
(3) Design a NNC using the supervised control technique (described below) and a fixed NNM, or copy an existing controller. Here we can either use batch processing or backpropagation through time (BTT) [12,13], if enough memory is available.
(4) On-line adaptation includes learning unmodeled system dynamics, persistent system disturbances, and any other anomalies that were not accounted for in
steps 2 and 3, and simultaneously adapting the controller network. For obvious reasons, batch processing cannot be used in this situation. On the other hand, BTT requires the error information to flow backwards from time step NT to time NT-1, to NT-2, and so on. This is inconsistent with any true real-time adaptation. In this paper, we introduce the concept of a linear adaptive critic that in effect produces an on-line prediction of total output error derivatives. This adaptive critic is a simplified form of the adaptive critic techniques outlined by Werbos [14].

2.3.1. Off-line supervised controller synthesis using batch learning

Supervised neuro-control is similar to the indirect control used in the adaptive control literature. In traditional adaptive control, a linear system model is identified and using this model the control is adapted. Similarly, in supervised neuro-control, first the system is copied using a neural network. This system is called the neuro-model. For robust control applications, it is important that the NNM is general enough to accommodate uncertainties in the system modeling. The technique used in this study to arrive at a robust NNM is outlined in [16]. (From now on, neural networks whose connections are pruned to provide robust mapping are referred to as optimally connected networks.) Next, the controller weights are tuned using the back-propagation of error through the neuro-model. Once again, it is desired to train a NNC to be general enough to accommodate modeling errors. The role of the neuro-model is to relate the output error to the controller error. It should be noted that the error definitions for the model and the controller are different, as shown in Fig. 2. The error for the controller is the error between the actual system trajectory and a desired trajectory generated by a reference model.
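To make the supervised scheme of Fig. 2 concrete, the sketch below (not the authors' code) tunes a controller network by back-propagating the tracking error through a frozen neuro-model at every time step. It uses plain layered networks with the sigmoidal and ramp transfer functions of Section 2.1 rather than the fully forward-connected structure of Fig. 1; the network sizes, learning rate, zero reference trajectory, and the assumption that the NNM has already been identified are all illustrative.

```python
# Minimal sketch (not the authors' code): tuning a neuro-controller (NNC) by
# back-propagating the tracking error through a frozen neuro-model (NNM), as in
# Fig. 2.  Sizes, learning rate and the zero reference trajectory are assumptions;
# the batch variant of [17] would accumulate the total error over a pass first.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):              # hidden-unit transfer function (Section 2.1, alpha = 1)
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

def dsigmoid(f):             # its derivative, expressed through the output value f
    return 0.5 * (1.0 - f * f)

class Net:
    """One-hidden-layer network: sigmoidal hidden units, ramp (linear) outputs."""
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = 0.1 * rng.standard_normal((n_hid, n_in + 1))    # +1 for the bias
        self.W2 = 0.1 * rng.standard_normal((n_out, n_hid + 1))
    def forward(self, x):
        self.xb = np.append(x, 1.0)
        self.h = sigmoid(self.W1 @ self.xb)
        self.hb = np.append(self.h, 1.0)
        return self.W2 @ self.hb
    def backward(self, dy):
        """Back-propagate dE/dy; return dE/dx and the weight gradients."""
        dW2 = np.outer(dy, self.hb)
        dh = (self.W2[:, :-1].T @ dy) * dsigmoid(self.h)
        dW1 = np.outer(dh, self.xb)
        dx = self.W1[:, :-1].T @ dh
        return dx, dW1, dW2

nnm = Net(n_in=3, n_hid=20, n_out=2)   # frozen model: (theta, rate, u) -> next (theta, rate)
nnc = Net(n_in=2, n_hid=12, n_out=1)   # controller: tracking error -> control signal
lr = 0.1                               # controller learning rate (assumed)

for episode in range(200):
    x = np.array([0.1, 0.0])                     # initial-condition disturbance (assumed)
    for k in range(50):
        e = x - np.zeros(2)                      # reference trajectory taken as zero here
        u = nnc.forward(e)
        x = nnm.forward(np.append(x, u))         # frozen NNM (assumed already trained)
        # relate the output error to the controller error through the NNM
        dx_model, _, _ = nnm.backward(x - np.zeros(2))
        du = dx_model[2:]                        # error derivative w.r.t. the control input
        _, dW1, dW2 = nnc.backward(du)
        nnc.W1 -= lr * dW1                       # only the NNC weights are updated
        nnc.W2 -= lr * dW2
```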
Fig. 2. Neuro-control using supervised learning and back propagation of error.
Fig. 3. Neuro-controller training with a linear critic.
For training the NNC, either a batch learning method or BTT can be used. In this study, batch learning as outlined in [17] is used. In this method, first the system is subjected to an arbitrary disturbance. A forward pass is performed for each time step without any backpropagation of error being allowed; however, the total error for a period of time is collected. During a second forward pass, the total error from the first pass is back-propagated for each time step through the NNM and the NNC. Note that the weighted connections of the NNC are updated during error backpropagation, whereas the NNM connection weights remain unchanged.

2.3.2. On-line neuro-controller adaptation using a linear adaptive critic

For on-line adaptive control, the NNM is updated at each time step and the NNC weights are tuned using the updated NNM. In general, the NNM is updated at a frequency higher than the NNC updates. For the NNC adaptation, as stated earlier, BTT is inadequate for real-time applications. A controller action at the present time influences the future response of the system. The time horizon of this response is directly related to the bandwidth of the system that is being controlled. BTT essentially correlates these time-dependent error characteristics with the past control actions. An alternative to using BTT is the use of a critic (Fig. 3) that predicts future errors. Since backpropagation of error entails backpropagating the derivative of errors and not the error itself, it is appropriate to have a critic that predicts the total error derivative instead of the total error. Adaptive critics proposed by Werbos [14], besides predicting the errors, also adapt themselves to improve upon their prediction capabilities. Adaptive critic designs, in general, attempt to approximate discrete dynamic programming. In discrete dynamic programming, the goal is to minimize a total error function $J_{t,n}(y(t))$, given the instantaneous error $E_t(y(t))$. In equation form,
$$J_{t,n} = \min_{u(t)}\left[E_t + J_{t+1,n}\right] \tag{2.1}$$
The function $E_t$ is the cost for one stage and is equivalent to $J_{t,t+1}$. The subscripts of $J$ indicate the starting and ending stages of the error measure $E_t$. For example, $J_{1,n} = \sum_{i=1}^{n} E_i$. Now, differentiating Eq. 2.1 with respect to $x$, we get
$$\lambda_{t,n} = \frac{\partial E_t}{\partial x} + \lambda_{t+1,n} \tag{2.2}$$
where
$$\lambda_{t,n} = \frac{\partial J_{t,n}}{\partial x}, \qquad E_t = \tfrac{1}{2}(x - x_d)^2, \qquad x = \text{system states}, \qquad x_d = \text{desired states.}$$
For the sake of notational simplicity, the above equations are written in scalar form. Similar equations in vector form can be easily written. Based on Eqs. 2.1 and 2.2, one can define an approximate linear predictor equation for $\lambda_{t,n}$ as
$$\lambda_{t,n} = W_a \lambda_{t-1,n} + W_b z, \qquad z = x - x_d \tag{2.3}$$
where $W_a$ and $W_b$ are parameters that can be adapted on-line, making the predictor an adaptive critic. To adapt the parameters $W_a$ and $W_b$, we first define the desired value for $\lambda_{t,n}$ using an approximate equation based on Eq. 2.2:
$$\lambda^{*}_{t,n} = \lambda_{t+1,n} + z \tag{2.4}$$
Defining the error between $\lambda_{t,n}$ and $\lambda^{*}_{t,n}$ and using Eqs. 2.3 and 2.4, we get
$$E_\lambda = \tfrac{1}{2}\left(\lambda_{t,n} - \lambda^{*}_{t,n}\right)^2 = \tfrac{1}{2}\left(W_a \lambda_{t-1,n} + W_b z - (\lambda_{t+1,n} + z)\right)^2 \tag{2.5}$$
Now we can write the necessary equations to adapt the parameters $W_a$ and $W_b$ as follows:
$$W_a \leftarrow W_a - \epsilon_\lambda\left(\lambda_{t,n} - \lambda^{*}_{t,n}\right)\lambda_{t-1,n} \tag{2.6}$$
$$W_b \leftarrow W_b - \epsilon_\lambda\left(\lambda_{t,n} - \lambda^{*}_{t,n}\right)z \tag{2.7}$$
In the above equations, $\epsilon_\lambda$ is the learning rate for updating the parameters $W_a$ and $W_b$. Also, to ensure that the predictor (Eq. 2.3) is stable, $W_a$ is limited to $\pm 0.95$.
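A minimal, delayed-update reading of Eqs. 2.3-2.7 is sketched below (an interpretation, not the authors' code): the critic predicts the total-error derivative from its previous value and the current state error, and once the next prediction is available it supplies the target of Eq. 2.4 for adapting $W_a$ and $W_b$. The learning rate and the test error sequence are assumptions.

```python
# Sketch of the linear adaptive critic of Eqs. 2.3-2.7 (an interpretation, not the
# authors' code).  The update is applied one step late, when the quantity needed
# for the target of Eq. 2.4 becomes available; W_a is clamped to +/-0.95.
import numpy as np

class LinearAdaptiveCritic:
    def __init__(self, lr=0.01, wa=-0.95, wb=1.0):   # initial values used in Section 3.5
        self.Wa, self.Wb, self.lr = wa, wb, lr
        self.pending = None            # previous (lambda, lambda_prev, z) awaiting a target

    def step(self, lam_prev, z):
        lam = self.Wa * lam_prev + self.Wb * z       # Eq. 2.3: predicted error derivative
        if self.pending is not None:
            lam_old, lam_prev_old, z_old = self.pending
            target = lam + z_old                     # Eq. 2.4: lambda* for the previous step
            err = lam_old - target
            self.Wa -= self.lr * err * lam_prev_old  # Eq. 2.6 (gradient of Eq. 2.5)
            self.Wb -= self.lr * err * z_old         # Eq. 2.7
            self.Wa = float(np.clip(self.Wa, -0.95, 0.95))
        self.pending = (lam, lam_prev, z)
        return lam                                   # fed to the BP-for-control path (Fig. 3)

# usage with an assumed, decaying state-error sequence z = x - x_d
critic, lam = LinearAdaptiveCritic(), 0.0
for z in [0.10, 0.08, 0.05, 0.02, 0.0]:
    lam = critic.step(lam, z)
```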
3. Neuro-control of SS

Attitude control of the Space Station is challenging due to the continuous change in the system characteristics caused by the varying mass properties. One of the proposed control architectures for the attitude control of the Space Station consists of an outer loop for the momentum management and an inner loop for the attitude control (see Fig. 4). The momentum management loop commands the Space Station attitude and the inner loop controls the Space Station to achieve the desired trajectory. A combination of model identification and adaptive control can be used for the inner attitude control loop, and the reference trajectory is generated by the momentum management system. The study presented in this paper, without any loss of generality, examines only the inner loop pitch attitude control.
Fig. 4. Inner and outer loops for Space Station control.
3.1 Space Station model

The non-linear equations of motion of the Space Station are presented below in terms of body-fixed control axes components.

Attitude kinematics
$$
\begin{bmatrix}\dot\theta_1\\ \dot\theta_2\\ \dot\theta_3\end{bmatrix}
=
\begin{bmatrix}
\cos\theta_3/\cos\theta_2 & -\sin\theta_3/\cos\theta_2 & 0\\
\sin\theta_3 & \cos\theta_3 & 0\\
-\cos\theta_3\tan\theta_2 & \sin\theta_3\tan\theta_2 & 1
\end{bmatrix}
\begin{bmatrix}\omega_1\\ \omega_2 + n\\ \omega_3\end{bmatrix}
\tag{3.1}
$$
Space Station dynamics
$$
\begin{bmatrix}\dot\omega_1\\ \dot\omega_2\\ \dot\omega_3\end{bmatrix}
= [I]^{-1}\left(-\boldsymbol{\omega}\times[I]\boldsymbol{\omega} + 3n^{2}\,\mathbf{c}\times[I]\mathbf{c} +
\begin{bmatrix}-U_1 + W_1\\ -U_2 + W_2\\ -U_3 + W_3\end{bmatrix}\right)
\tag{3.2}
$$
where $\mathbf{c} = (c_1, c_2, c_3)$ are the direction cosines of the local vertical in the body-fixed control axes (functions of the Euler angles $\theta_1$, $\theta_2$, $\theta_3$),
$$
[I] = \begin{bmatrix} I_{11} & I_{12} & I_{13}\\ I_{21} & I_{22} & I_{23}\\ I_{31} & I_{32} & I_{33}\end{bmatrix}
= \begin{bmatrix} 50.28 & -0.39 & 0.16\\ -0.39 & 10.80 & 0.16\\ 0.16 & 0.16 & 58.57\end{bmatrix}\times 10^{6}\ \text{slug-ft}^2
$$
is the moment of inertia matrix, and $n$ = orbital angular velocity = 0.0011 rad/sec.
CMG momentum
$$\dot{\mathbf h} = -\boldsymbol{\omega}\times\mathbf h + \mathbf u \tag{3.3}$$
Assumptions of small roll/yaw attitude errors and small products of inertia lead to a simplification of the complete non-linear model. These equations are useful when there is a need for large pitch ($\theta_2$) maneuvers with small roll ($\theta_1$) and yaw ($\theta_3$) maneuvers. With the above simplifications, the equation for the pitch axis reduces to:
$$I_{22}\ddot\theta_2 + 3n^{2}\left(I_{11} - I_{33}\right)\sin\theta_2\cos\theta_2 = -U_2 + W_2 \tag{3.4}$$
In the above equations, $(\theta_1, \theta_2, \theta_3)$ are the roll, pitch, and yaw Euler angles; $(\omega_1, \omega_2, \omega_3)$ are the body-axes components of the absolute angular velocity of the station; $(h_1, h_2, h_3)$ are the body-axes components of the CMG momentum; $(U_1, U_2, U_3)$ are the body-axes components of the control torque caused by the CMG momentum change; and $(W_1, W_2, W_3)$ are the body-axes components of the disturbance torque.
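For reference, a minimal simulation of the pitch-axis model of Eq. 3.4 is sketched below using the inertia and orbital-rate values of Eq. 3.2; the Euler integration step, the initial disturbance, and the zero-torque scenario are assumptions.

```python
# Minimal sketch: propagating the pitch-axis equation (3.4) with the inertia and
# orbital-rate values of Eq. 3.2.  The Euler step, the initial disturbance and the
# zero control/disturbance torques are assumptions.
import numpy as np

I11, I22, I33 = 50.28e6, 10.80e6, 58.57e6    # slug-ft^2 (Eq. 3.2)
n = 0.0011                                    # orbital angular velocity, rad/sec

def pitch_accel(theta2, U2=0.0, W2=0.0):
    """theta2_ddot from Eq. 3.4."""
    grav = 3.0 * n**2 * (I11 - I33) * np.sin(theta2) * np.cos(theta2)
    return (-U2 + W2 - grav) / I22

theta2, rate, dt = np.deg2rad(5.0), 0.0, 1.0  # initial-condition disturbance (assumed)
history = []
for _ in range(15000):                        # 15,000 s, roughly three orbits
    rate += pitch_accel(theta2) * dt
    theta2 += rate * dt
    history.append(theta2)
```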
3.2 Copying an existing control law

The linear pitch-axis control law used in [1] has the form
$$U_2 = K_{2p}\theta_2 + K_{2d}\dot\theta_2 + K_{2h}h_2 + K_{2i}\int h_2\,dt \tag{3.5}$$
where the pitch axis CMG momentum and its integral are included to prevent momentum build-up. The control gains obtained from [1] are:
$K_{2p}$ = 2.448E+2 ft-lb/rad
$K_{2d}$ = 1.465E+5 ft-lb/rad/s
$K_{2h}$ = 7.523E-3 ft-lb/ft-lb-s
$K_{2i}$ = 3.546E-6 ft-lb/ft-lb-s²
Fig. 5. Comparison between the performances of a copied neuro-controller and a linear controller.
The inputs to the neuro-controller included $\theta_2$, $\dot\theta_2$, $h_2$, and $\int h_2\,dt$, and the output was $U_2$. Twenty hidden units were used and the neuro-controller was trained using supervised pattern learning. The input-output training pairs were generated using random values for $\theta_2$, $\dot\theta_2$, $h_2$, and $\int h_2\,dt$, and the desired output was computed using Eq. 3.5. Performances of the linear and neuro-controller are compared in Fig. 5 for an initial condition disturbance. It is seen that the performances are essentially identical.

3.3 Off-line supervised neuro-controller synthesis

A neuro-controller synthesis using a trained neuro-model is demonstrated in this section. Referring to Fig. 2, a neuro-model is synthesized first using backpropagation of error. The structure of the neural network employed for the ANN modeling consisted of four inputs (bias, $\theta_2$ and its rate, and $U_2$ at time $t$), twenty hidden units, and two outputs ($\theta_2$ and its rate at time $t + dt$; $dt$ = 100 seconds). An optimally connected ANN representation using the technique presented in [16] was used for identifying the NNM. In this study, the inputs and outputs were linearly scaled so that the input/output hyperspace becomes a hyper-cube (equal magnitudes for all inputs and outputs). Appropriate ranges of inputs and outputs are chosen for the scaling. For all cases included in this study, the inputs and outputs were scaled to fall between -1 and +1. Reasonable deviations outside this range are acceptable due to the generalization capabilities of ANN. Tables 1 and 2 document the scaling parameters, the neural network structure parameters, and the learning parameters used in this study.
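As one way of realizing the identification step just described, the sketch below (an assumption-laden illustration, not the authors' code) generates one-step training pairs for the pitch-axis NNM: random inputs drawn from the Table 2 ranges, with the target being the pitch state 100 seconds later obtained from the non-linear model of Eq. 3.4.

```python
# Sketch (not the authors' code) of generating NNM training pairs for the pitch
# degree of freedom: random (theta2, theta2_dot, U2) inputs within the Table 2
# ranges, targets from integrating Eq. 3.4 over dt = 100 s.  The Euler step, the
# number of pairs and the zero disturbance torque are assumptions.
import numpy as np

I11, I22, I33 = 50.28e6, 10.80e6, 58.57e6       # slug-ft^2 (Eq. 3.2)
n, dt_pred, dt_int = 0.0011, 100.0, 1.0         # orbital rate, prediction horizon, step

def propagate(theta2, rate, U2):
    """Integrate Eq. 3.4 over the 100 s prediction horizon (W2 = 0 assumed)."""
    for _ in range(int(dt_pred / dt_int)):
        acc = (-U2 - 3*n**2*(I11 - I33)*np.sin(theta2)*np.cos(theta2)) / I22
        rate += acc * dt_int
        theta2 += rate * dt_int
    return theta2, rate

rng = np.random.default_rng(1)
inputs, targets = [], []
for _ in range(500):                            # number of training pairs (assumed)
    th = rng.uniform(-0.3, 0.3)                 # rad, Table 2 range
    thd = rng.uniform(-3.0e-4, 3.0e-4)          # rad/sec, Table 2 range
    u = rng.uniform(-50.0, 50.0)                # ft-lb, Table 2 range
    inputs.append([1.0, th, thd, u])            # bias, theta2(k), theta2_dot(k), U2(k)
    targets.append(propagate(th, thd, u))       # theta2(k+1), theta2_dot(k+1)
inputs, targets = np.array(inputs), np.array(targets)
# Before training, scale inputs and targets to [-1, 1] using the Table 2 ranges.
```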
Table 1
Neural network parameters

NN function | Inputs | No. of hidden neurons | Outputs | Learning rate | Number of connections (free weights)
NNM (fully connected) | 1.0 (bias), θ_2(k), θ̇_2(k), U_2(k) | 20 | θ_2(k+1), θ̇_2(k+1) | 0.2 | 325
NNM (optimally connected) | 1.0 (bias), θ_2(k), θ̇_2(k), U_2(k) | 20 | θ_2(k+1), θ̇_2(k+1) | 0.2 | 295
NNC (fully connected) | θ_2(k) - θ_2d(k), θ̇_2(k) - θ̇_2d(k) | 12 | U_2(k) | 0.1 | 91
NNC (optimally connected) | θ_2(k) - θ_2d(k), θ̇_2(k) - θ̇_2d(k) | 12 | U_2(k) | 0.1 | 55
The training data for the ANN consisted of random ANN inputs, and the desired response was derived from the non-linear SS model presented earlier. The error was defined as the quadratic error between the ANN outputs and the corresponding SS model outputs. This computed error for each time step was back-propagated to update the weights of the NNM. Fig. 6 compares the response of the SS to that of the trained NNM for a random disturbance introduced through the control torque, $U_2$.

Controller design for the SS involves the training of a neuro-controller (NNC) to minimize the pitch attitude response resulting from any disturbance. The desired output becomes not the system dynamic response, as in model training, but instead a desired output magnitude. In this study, the desired response was generated by a command generator [7] of the form
$$\dddot\theta_{2d} + a_2\ddot\theta_{2d} + a_1\dot\theta_{2d} + a_0\theta_{2d} = a_0\theta_c \tag{3.6}$$
where $\theta_c$ is the desired final pitch attitude. Constants $a_0$, $a_1$, and $a_2$ are chosen to give the system a characteristic equation associated with the command generator as
$$(s + \lambda)(s^2 + 2\zeta\omega_n s + \omega_n^2) = 0$$
with $\lambda = 0.0008$ and $\zeta = 0.707$. The values for $\lambda$, $\zeta$, and $\omega_n$ were obtained from [7]. It is noted here that the command generator is synonymous with the outer-loop momentum management presented earlier.
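A minimal sketch of the command generator of Eq. 3.6 follows; $\omega_n$, the commanded attitude, and the integration step are assumed values, and $a_0$, $a_1$, $a_2$ follow from expanding the characteristic polynomial quoted above.

```python
# Sketch of the third-order command generator, Eq. 3.6 (not the authors' code).
# a0, a1, a2 come from expanding (s + lam)(s^2 + 2*zeta*wn*s + wn^2); wn, theta_c
# and the integration step are assumptions.
import numpy as np

lam, zeta, wn = 0.0008, 0.707, 0.0011           # wn assumed close to the orbital rate
a2 = lam + 2.0*zeta*wn
a1 = wn**2 + 2.0*zeta*wn*lam
a0 = lam * wn**2

theta_c = np.deg2rad(2.0)                       # desired final pitch attitude (assumed)
theta_d = np.zeros(3)                           # [theta_2d, d(theta_2d)/dt, d2(theta_2d)/dt2]
dt, trajectory = 1.0, []
for _ in range(15000):                          # ~3 orbits of reference trajectory
    jerk = a0*theta_c - a2*theta_d[2] - a1*theta_d[1] - a0*theta_d[0]
    theta_d += dt * np.array([theta_d[1], theta_d[2], jerk])
    trajectory.append(theta_d[0])
```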
Table 2
Scaling values

Space Station variable | Maximum/minimum values
θ_2 | 0.3 / -0.3 (rad)
θ̇_2 | 3.0e-04 / -3.0e-04 (rad/sec)
U_2 | 50 / -50 (ft-lb)
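A minimal sketch of the linear scaling of Section 3.3, using the symmetric ranges of Table 2 (the helper names are illustrative):

```python
# Map each variable to [-1, 1] using the Table 2 ranges, so the input/output
# hyperspace becomes a hyper-cube; helper names are illustrative.
RANGES = {"theta2": 0.3, "theta2_dot": 3.0e-4, "U2": 50.0}   # symmetric maxima (Table 2)

def scale(value, name):
    return value / RANGES[name]      # physical units -> [-1, 1]

def unscale(value, name):
    return value * RANGES[name]      # [-1, 1] -> physical units

u_scaled = scale(25.0, "U2")         # e.g. 25 ft-lb maps to 0.5
```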
Fig. 6. Comparison between Space Station model and neuro-model responses to a random input at the control torque $U_2$.
The structure of the neural network employed for the NNC had two feedback inputs, twelve hidden units, and a single output. The two input units were the pitch angle error ($\theta_2 - \theta_{2d}$) and its rate, and the single output represented the control signal, $U_2$. Again, only the pitch degree-of-freedom was considered for the neuro-control design and verification. The NNC was trained using batch learning, and the total time used for each batch was 15,000 seconds (approximately three orbits). It was originally believed that it would be necessary to train the controller for many random excitations to generalize the controller characteristics.
Fig. 7. Performance of the neuro-controller for an initial condition disturbance.
Fig. 8. Robustness verification: squared tracking error summed over 15,000 seconds.
An interesting discovery was identified regarding the training of a NNC for control. It was only necessary to train the NNC for a single disturbance magnitude and duration. Other disturbance magnitudes and time durations could be controlled without having to train the NNC for many random combinations. Concisely, the NNC had 'learned' the dynamics of the model and how to control the response to any disturbance. Fig. 7 shows the response comparison between the neuro-controller and the commanded output for an initial condition disturbance.

3.4 Robustness of the neuro-controller

In this phase of the study, the claim that the robustness of the neuro-controller improves if optimally connected networks are used is verified.
Fig. 9. Robustness verification: squared control effort summed over 15,000 seconds.
Two systems, one with fully connected networks and the other with optimally connected networks, were tested for their robustness to uncertainties in the pitch moment of inertia of the SS. Figs. 8 and 9 summarize the tracking error performances and the control energy utilized for control, respectively. These are defined as:
$$\text{Tracking error} = \sum_{k=1}^{150}\left[\theta_2(k) - \theta_{2d}(k)\right]^2 \tag{3.7}$$
$$\text{Control energy} = \sum_{k=1}^{150} U_2(k)^2 \tag{3.8}$$
where $k$ is the integration step ($k$ = 150 implies 15,000 seconds of simulation time), $\theta_2$ is the pitch angle in degrees, and $U_2$ is the control torque in ft-lbs.
‘0
A
--I
a
F
72
04
0
/‘-
_-_ -
-
Commonded trajectory Non-odoptive neuro-control Adoptive neuro-control
No.
of
Orbits _
b
- -+--
Non-adoptive neuro-control Adaptive neuro-control
Fig. 10. Performance comparison between adaptive and non-adaptive neuro-control: (a) pitch angle response; and (b) control torque response.
It is interesting to note that the optimally connected network outperforms the fully connected network. Also, beyond a 15% inertia variation, the controller performances are unacceptable. This is where we need to adapt the NNM and the NNC to effectively realize an adaptive controller. Figs. 8 and 9 also present data related to the performance of the adaptive controller. The adaptive control problem is discussed next.

3.5 Adaptive neuro-control of Space Station

For showing the adapting capability of neuro-controllers, the pitch axis moment of inertia of the SS was reduced by 20%.
Fig. 11. Performance comparison between adaptive neuro-controllers with and without adaptive critics: (a) squared tracking error; and (b) squared control effort.
The adaptation of the NNM and the NNC were carried out simultaneously (i.e., the weights of both the NNM and the NNC were adapted simultaneously at each time step) for the same initial condition disturbance used in the previous case. For the BP of error, a linear adaptive critic was introduced between the NNM and the error derivative, as shown in Fig. 3. The linear critic was initialized with $W_a$ set to -0.95 and $W_b$ set to 1. As stated earlier, $W_a$ was restricted to lie within $\pm 0.95$ to ensure stability of the linear predictor. Figs. 10(a) and 10(b) present controlled responses using adaptive and non-adaptive neuro-control. Although the pitch responses look identical, the controller response ($U_2$) for the non-adaptive case is unacceptable. For adaptive neuro-control, the control response is initially oscillatory and, as adaptation progresses, the control response improves dramatically. Figs. 8 and 9 document the performance of the adaptive controller for a few more cases. These plots clearly show the capabilities of neuro-controllers in adapting and maintaining small errors. In Fig. 11, we compare the performance difference between a system with an adaptive critic and a system with a non-adaptive critic. For the non-adaptive critic, $W_a$ was set to -0.95 and $W_b$ was set to 1, and both were held constant. It is seen from this figure that adapting the critic helped in achieving better total performance.
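The interleaving of model identification, critic prediction, and controller adaptation described above can be summarized by the structural sketch below (not the authors' code); the single-parameter plant, model, and controller are deliberately crude stand-ins so that only the loop structure is illustrated, and the critic's own adaptation (Eqs. 2.4-2.7) is omitted for brevity.

```python
# Structural sketch of the on-line adaptation loop of Sections 2.3.2 and 3.5
# (not the authors' code).  Single-weight stand-ins replace the NNM, NNC and plant;
# the critic's parameter adaptation (Eqs. 2.4-2.7) is omitted here.
import numpy as np

wm = np.array([0.9, 0.01])      # stand-in NNM: x_hat(k+1) = wm[0]*x + wm[1]*u
wc = -1.0                       # stand-in NNC: u = wc * (x - x_d)
Wa, Wb = -0.95, 1.0             # linear critic initialization used in Section 3.5
lr_m, lr_c = 0.05, 0.05
x, x_d, lam_prev = 0.1, 0.0, 0.0

def plant(x, u):                # stand-in for the Space Station with perturbed inertia
    return 0.95*x + 0.02*u

for k in range(500):
    z = x - x_d
    u = wc * z                                   # controller output
    x_next = plant(x, u)

    # (1) adapt the model every step from its one-step prediction error
    x_hat = wm @ np.array([x, u])
    wm -= lr_m * (x_hat - x_next) * np.array([x, u])

    # (2) critic predicts the total-error derivative (Eq. 2.3)
    lam = Wa*lam_prev + Wb*z

    # (3) back-propagate lam through the model into the controller and update it
    dE_du = lam * wm[1]                          # model sensitivity of the error to u
    wc -= lr_c * dE_du * z                       # chain rule: dE/dwc = dE/du * du/dwc

    x, lam_prev = x_next, lam
```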
4. Conclusions

This study demonstrated the application of neuro-control to adaptive non-linear control of the Space Station. Three capabilities of neuro-controllers were demonstrated using a non-linear model of the Space Station Freedom. These capabilities are: (a) synthesis of robust non-linear control laws using neural networks; (b) copying an existing control law using neural networks; and (c) adaptively modifying neuro-controller characteristics for varying inertia characteristics using a linear adaptive critic. The neuro-controller techniques presented appear robust based on the successful results obtained using straightforward applications of these techniques. This study examined decentralized neuro-control and showed the learning and adaptation capabilities of ANN based on direct output feedback. For space applications, the fixed software/hardware environment provided by ANN synthesis makes it easy to modify the controller characteristics from a remote site, if there is a need. Any other controller design could easily be copied using ANN and can take advantage of having a fixed software/hardware structure for the neuro-controller.

To build upon the ideas presented, the three-axis adaptive neuro-controller synthesis is currently in progress. Also, adaptive critics using neural networks as critic elements are being investigated as an alternative to the linear critic used in this study. Further research in this area needs to examine the stability and performance robustness of these techniques. Specifically, it will be interesting to examine the theoretical aspects of the optimally connected networks in providing robust control capabilities. Also, the feasibility of using neural networks for estimating cyclic
disturbances encountered by the space station will be of interest in the future. There are many other ANN concepts and variations that might improve upon the ideas and results presented in this paper and these need to be further investigated.
Acknowledgement

This material is based upon work partly supported by the National Science Foundation under Grant No. ECS-9113283.
References

[1] B. Wie, K.W. Byun, W. Warren, D. Geller, D. Long and J.W. Sunkel, New approach to attitude/momentum control of the Space Station, J. Guidance Control Dynamics 12 (5) (1989) 714-722.
[2] J. Yeichner, J. Lee and O. Barrows, Overview of Space Station attitude control system with active momentum management, AAS paper, Feb. 1988.
[3] K.W. Byun, B. Wie, D. Geller and J.W. Sunkel, Robust H-infinity control design for the Space Station with structured parameter uncertainty, presented at AIAA Guidance, Navigation and Control Conference, Portland, OR, Aug. 20-22, 1990.
[4] G. Balas, A. Packard and J. Harduvel, Applications of mu-synthesis techniques to momentum management and attitude control of the Space Station, presented at AIAA Guidance, Navigation and Control Conference, Jan. 1991.
[5] A.G. Parlos and J.W. Sunkel, Adaptive attitude control and momentum management for large-angle spacecraft maneuvers, J. Guidance Control Dynamics 15 (4) (July-Aug. 1992).
[6] S.R. Vadali and H.S. Oh, Space Station attitude control and momentum management: A nonlinear look, J. Guidance Control Dynamics (May-June 1992).
[7] T.C. Bossart and S.N. Singh, Invertibility of map, zero dynamics and nonlinear control of Space Station, AIAA-91-2663-CP.
[8] A.G. Parlos, A.F. Atiya and J.W. Sunkel, Parameter estimation in space systems using recurrent neural networks, paper no. 91-2716, AIAA Guidance, Navigation, and Control Conference, Aug. 1991.
[9] R. Chipman, Q. Lam and J.W. Sunkel, Mass property identification: A comparison between extended Kalman filter and neurofilter approach, paper no. 91-2664, AIAA Guidance, Navigation, and Control Conference, Aug. 1991.
[10] R.R. Kumar, H. Seywald, S.M. Deshpande and Z. Rahman, Artificial neural networks in space station optimal attitude control, presented at the World Space Congress, Washington, DC, Aug. 28-Sep. 5, 1992.
[11] S.R. Vadali, S. Krishnan and T. Singh, Attitude control of spacecraft using neural networks, AAS/AIAA Spaceflight Mechanics Meeting, Paper AAS 93-192, Pasadena, CA, Feb. 22-24, 1993.
[12] K.S. Narendra, Adaptive control using neural networks, Neural Networks for Control, eds. W.T. Miller III, R.S. Sutton and P.J. Werbos (MIT Press, Cambridge, MA, 1990) 287.
[13] P.J. Werbos, Back-propagation through time: What it does and how to do it, Proc. IEEE (Aug. 1990).
[14] P.J. Werbos, A menu of designs for reinforcement learning over time, Neural Networks for Control, eds. W.T. Miller III, R.S. Sutton and P.J. Werbos (MIT Press, Cambridge, MA, 1990) 67.
[15] A.G. Barto, R.S. Sutton and C.W. Anderson, Neuron-like adaptive elements that can solve difficult control problems, IEEE Trans. Systems, Man, and Cybernetics SMC-13 (1983) 834-846.
[16] K. KrishnaKumar, Optimization of the neural net connectivity pattern using a back-propagation algorithm, Neurocomputing 5 (6) (1993) 273-286.
[17] Si-Zhao Qin, Hong-Te Su and T.J. McAvoy, Comparison of four neural net learning methods for dynamic system identification, IEEE Trans. Neural Networks 3 (1) (Jan. 1992) 122-130.
[18] V.V. Anshelevich et al., On the ability of neural networks to perform generalization by induction, Biol. Cybernet. 61 (1989) 125-128.

Kalmanje KrishnaKumar holds a B.Tech. in Aeronautical Engineering from the Indian Institute of Technology, Madras, India (1982), and M.S. and Ph.D. degrees in Aerospace Engineering from The University of Alabama, Tuscaloosa, Alabama (1985 and 1988, respectively). He is currently an Associate Professor of Aerospace Engineering at the University of Alabama, Tuscaloosa. His research interests include: immunized artificial neural systems; adaptive neuro-control and applications; genetic algorithm applications to control; fuzzy logic control; structural control, optimal control, and optimal estimation; and flight simulation, training, and applications to real world problems. He is a senior member of AIAA, a member of IEEE, and a member of INNS. He is also a member of the AIAA Artificial Intelligence Technical Committee and the SAE simulation technologies committee.
Susan Rickard obtained her Bachelor of Science degree in Aerospace Engineering from the University of Alabama, Tuscaloosa, AL in 1992. As a student, she participated in the cooperative education program with the NASA Langley Research Center and is currently employed in the Vehicle Performance Branch at NASA Langley, specializing in both flight and wind tunnel testing. Ms. Rickard is also pursuing an MBA through the College of William and Mary, Virginia.
Susan Bartholomew received her B.S. in Aerospace Engineering from the University of Alabama in Tuscaloosa in 1992. There she worked as an undergraduate research scholar applying artificial neural networks to the design of a controller for the Space Station Freedom. She received her M.S. in Aerospace Engineering from the University of Colorado in Boulder. At the University of Colorado she was a research assistant working with artificial intelligence, particularly expert systems, in the data quality control process for satellite imagery data. Currently she is working as an engineer for Martin Marietta in Denver, Colorado.