Adaptive Neural Star Tracker Calibration for Precision Spacecraft Pointing and Tracking

Copyright © 1996 IFAC 13th Triennial World Congress. San Francisco. USA

8a-032

David S. Bayard

Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109
[email protected]

Abstract. The star tracker is an essential sensor for precision pointing and tracking in most 3-axis stabilized spacecraft. In the interest of improving pointing performance by taking advantage of the dramatic increases in flight computer power and memory anticipated over the next decade, this paper investigates the use of a neural net for adaptive in-flight calibration of the star tracker. Estimation strategies are given for cases when the spacecraft attitude is both known and unknown.

Keywords. Neural networks, Calibration, Spacecraft pointing

1 INTRODUCTION

The star tracker unit (STU) is a key sensor for determining the overall pointing and tracking performance of most 3-axis stabilized spacecraft, Liebe (1993)(1995); Stanton et al. (1993); Wertz (1978). Any improvement in star tracker measurement quality translates directly into improved position and rate estimation, and hence improved pointing knowledge. Tracker sensing becomes even more important in the absence of accurate gyros, since the body rate must be reconstructed analytically (Kissel, 1995). Completely gyroless configurations are becoming popular in NASA and in the commercial space industry to save power, weight, and overall cost (Grewal and Shiva, 1995). This paper investigates the use of a neural net for in-flight calibration of the star tracker. Specifically, a nonlinear distortion profile will be estimated autonomously based on in-flight measurements. The star tracker distortion is then corrected using the estimated profile. Preliminary results indicate that low spatial frequency distortions on the order of .1 pixel can be reduced to .034 pixel using the proposed method.

2 STAR TRACKER MODEL

The star tracker, sometimes conveniently referred to as a Star Tracker Unit (STU), is an instrument which determines spacecraft attitude. The location of 2 or more star images on the CCD, along with their star locations in inertial coordinates, is sufficient to determine the attitude of the STU camera with respect to an inertial frame of reference. The spacecraft attitude can then be calculated from knowledge of how the STU camera is mounted on the spacecraft body. These relationships will be developed in more detail in this section. The standard vector cross-product operation v × (·), where v = [v_1, v_2, v_3]^T ∈ R^3, will often be represented as v^× using the equivalent matrix notation,

v^\times = \begin{bmatrix} 0 & -v_3 & v_2 \\ v_3 & 0 & -v_1 \\ -v_2 & v_1 & 0 \end{bmatrix}    (1)
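As a quick sanity check of (1), the matrix form of the cross product can be verified numerically. This is a minimal sketch in Python with NumPy; the helper name `skew` is illustrative, not from the paper:

```python
import numpy as np

def skew(v):
    """Cross-product matrix v^x of (1): skew(v) @ u equals np.cross(v, u)."""
    v1, v2, v3 = v
    return np.array([[0.0, -v3,  v2],
                     [ v3, 0.0, -v1],
                     [-v2,  v1, 0.0]])

v = np.array([1.0, 2.0, 3.0])
u = np.array([-0.5, 0.25, 4.0])
assert np.allclose(skew(v) @ u, np.cross(v, u))   # matrix form = cross product
assert np.allclose(skew(v).T, -skew(v))           # skew-symmetry
```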

Typically, the STU images several stars simultaneously in the same exposure. Let s_{kl}, l = 1, ..., n_k denote the collection of stars imaged on the CCD during the exposure at time t_k. Let the unit vector v_{kl} denote the location of star s_{kl} in inertial coordinates. The vector v_{kl} is rotated into the STU frame using the sequence of transformations,

z_{kl} = T A_k v_{kl}    (2)


Here, A_k is the attitude matrix at time t_k which acts to rotate a vector from inertial coordinates into the body frame; T is a matrix which rotates a vector from the body frame into the STU frame; and the unit vector z_{kl} is the location of star s_{kl} in STU coordinates. Because of uncertainty in the attitude, the matrix A_k is more conveniently decomposed as,

A_k = \delta A(\phi_k)\,\hat A_k    (3)

where \hat A_k is a known estimate of the attitude at time t_k, and δA(φ_k) is the error parametrized by three Euler angles φ_k = [φ^1, φ^2, φ^3]^T, nominally taken to be in a 1-2-3 rotation sequence. Since the error δA(φ_k) is typically a small rotation, it can be expanded to first order as,

\delta A(\phi_k) \approx I - \phi_k^\times    (4)

Similarly, only a nominal estimate \hat T of the transformation T is usually known, so that one can write,

T = \delta T(\theta)\,\hat T    (5)

where δT(θ) is the error parametrized by three Euler angles θ = [θ^1, θ^2, θ^3]^T. Since the error δT(θ) is typically a small rotation, it can be expanded to first order as,

\delta T(\theta) \approx I - \theta^\times    (6)
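The quality of the first-order small-rotation approximations (4) and (6) can be checked numerically by comparing an exact rotation against I − θ^×. The sketch below builds the exact rotation with the Rodrigues formula; the names are illustrative, not from the paper:

```python
import numpy as np

def skew(v):
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

def rotation(r):
    """Exact rotation for rotation vector r (Rodrigues formula)."""
    a = np.linalg.norm(r)
    K = skew(r / a)
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * K @ K

theta = np.array([1e-3, -2e-3, 0.5e-3])   # small angles, radians
exact = rotation(-theta)                  # matches I - theta^x to first order
approx = np.eye(3) - skew(theta)
# First-order error is O(|theta|^2), i.e. a few 1e-6 here.
assert np.max(np.abs(exact - approx)) < 1e-5
```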

Substituting (3)(4) and (5)(6) into (2) and eliminating terms of second order gives (see Appendix A),

z_{kl} = \hat z_{kl} + \hat z_{kl}^\times (\theta + \hat T \phi_k)    (7)

where,

\hat z_{kl} = \hat T \hat A_k v_{kl}    (8)

At this point, the vector z_{kl} denotes the unit vector in the STU frame pointing to star s_{kl}. Let the components of vector z_{kl} be denoted as,

z_{kl} = [z_{kl}^1, z_{kl}^2, z_{kl}^3]^T    (9)

Then the projection of z_{kl} onto the STU image plane places the image of star s_{kl} at location (x_{kl}, y_{kl}) on the CCD where,

x_{kl} = z_{kl}^1 / z_{kl}^3    (10)

y_{kl} = z_{kl}^2 / z_{kl}^3    (11)
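The projection (10)-(11) is a pinhole projection onto the unit-focal-length image plane, and can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def project_to_image_plane(z):
    """Pinhole projection (10)-(11): unit STU-frame star vector -> normalized
    (unit focal length) image-plane coordinates."""
    z1, z2, z3 = z
    return z1 / z3, z2 / z3

z = np.array([0.03, -0.05, 1.0])
z = z / np.linalg.norm(z)            # unit line-of-sight vector in STU frame
x, y = project_to_image_plane(z)
# Normalization leaves the component ratios unchanged.
assert abs(x - 0.03) < 1e-12 and abs(y + 0.05) < 1e-12
```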

Implicit in (10)(11) is the standard assumption that the star location has been normalized to unit focal length (i.e., given the true star location (X, Y) one defines x = X/f and y = Y/f where f is the camera focal length). The star image is recorded on a CCD and the star location is determined by a centroiding algorithm applied to a pixel readout. To improve centroiding accuracy, the star image is typically defocused to spread photons over several pixels of the CCD. Due to sensor errors, the star is observed at the perturbed location (\tilde x_{kl}, \tilde y_{kl}) where,

\tilde x_{kl} = x_{kl} + F_1(x_{kl}, y_{kl}, p_{kl}) + \eta^1_{kl}    (12)

\tilde y_{kl} = y_{kl} + F_2(x_{kl}, y_{kl}, p_{kl}) + \eta^2_{kl}    (13)

The quantities η^1_{kl}, η^2_{kl} are additive white measurement noise terms which model the contribution of random errors in the determination of star locations. These are typically described in terms of a noise equivalent angle (NEA) in units of radians, which is typically a function of the noise-to-signal (N/S) ratio at starlit pixel locations. The N/S ratio is dependent on star magnitude, quantum efficiency, optical efficiency, integration time and system temporal noise levels. Sources of temporal noise include CCD noise (on-chip), electronics noise (off-chip), quantization noise, background noise, and dark current.

The quantities F_1, F_2 in (12)(13) are distortion functions which model the contribution of bias errors associated with determining star locations. These bias errors can be roughly divided into two groups, depending upon whether the distortion is of low or high spatial frequency. Low spatial frequency error sources typically include: optical distortion, chromatic distortion, mechanical stability/shifts, thermal expansion of materials, thermal sensitivity of refractive index, and ground calibration accuracy. High spatial frequency error sources typically include: CCD photo response nonuniformity, CCD dark current nonuniformity, CCD charge transfer efficiency, and centroiding algorithm error. These dependencies have been captured in the distortion model (12)(13) by letting the functions F_1, F_2 also depend on a "covariate" vector p_{kl} = [p^1_{kl}, p^2_{kl}, p^3_{kl}]^T ∈ R^3 whose components are given as follows:

p^1_{kl} - temperature (or color) of star s_{kl}
p^2_{kl} - CCD temperature at time t_k
p^3_{kl} - optics temperature at time t_k

Star temperature (or color) is known from the star catalogue, while CCD and optics temperatures must be measured physically using temperature sensing devices.
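The measurement model (12)-(13) can be sketched in simulation. The distortion functions, covariate values, and noise level below are illustrative placeholders, not the paper's actual profiles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative low-spatial-frequency distortion functions (placeholders for F1, F2).
def F1(x, y, p):
    return 1e-4 * np.sin(3.0 * x) * np.cos(2.0 * y)

def F2(x, y, p):
    return 1e-4 * np.cos(2.0 * x) * np.sin(3.0 * y)

def observe(x, y, p, nea=1e-5):
    """Perturbed star location per (12)-(13): truth + distortion + white noise."""
    x_meas = x + F1(x, y, p) + nea * rng.standard_normal()
    y_meas = y + F2(x, y, p) + nea * rng.standard_normal()
    return x_meas, y_meas

# Covariates: star temperature, CCD temperature, optics temperature (illustrative values).
p = np.array([5800.0, 263.0, 258.0])
x_meas, y_meas = observe(0.02, -0.01, p)
# Distortion (~1e-4) and noise (~1e-5) perturb the truth only slightly.
assert abs(x_meas - 0.02) < 1e-3 and abs(y_meas + 0.01) < 1e-3
```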

3 AUTONOMOUS CALIBRATION

The specific approach to calibration depends on whether the spacecraft attitude is known or unknown.

3.1 Case 1: Attitude Known (i.e., φ_k ≈ 0)

In some cases, an estimate \hat A_k of the spacecraft attitude A_k is available from alternative sources which is sufficiently accurate for star tracker calibration purposes. In this case, errors due to attitude knowledge are removed and the STU alignment error is estimated as part of the overall distortion profile.

3.2 Case 2: Bootstrapped Attitude Estimate

Often the attitude matrix A_k must be estimated at each time t_k from the same data that is being used to calibrate the star tracker. This problem is much more difficult than Case 1, since the STU frame alignment error δT(θ)


• Equation (21) provides an analytic expression for the weights. Local minima are avoided since the neural net does not have to be tuned using gradient-type methods.

• As the size of the data set n becomes large and σ_n is chosen according to the rules (17)-(18), the neural net provides an asymptotically unbiased and consistent estimate of the conditional mean distortion surface.

A disadvantage of the GRNN is that the complexity of the expressions (20)(21) increases with n. However, methods to overcome this using clustering algorithms have been investigated in Specht (1991). Alternatively, special-purpose hardware architectures can be developed for efficient implementation.
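Equations (17)-(21) fall on pages not reproduced in this excerpt, but the GRNN of Specht (1991) has a standard closed form: the prediction is a Gaussian-kernel weighted average of the stored training outputs, E[y|x] ≈ Σ_i y_i exp(−‖x−x_i‖²/2σ²) / Σ_i exp(−‖x−x_i‖²/2σ²). A minimal sketch under that assumption:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma):
    """GRNN conditional-mean estimate (Specht, 1991): a Gaussian-kernel
    weighted average of the training outputs. The weights are given
    analytically by the data, so no gradient training is needed."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to query x
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # kernel weights
    return np.dot(w, y_train) / np.sum(w)        # conditional-mean estimate

# Toy check: recover a smooth 1-D target from noiseless samples.
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0])
est = grnn_predict(X, y, np.array([0.25]), sigma=0.02)
assert abs(est - 1.0) < 0.05    # sin(2*pi*0.25) = 1, up to kernel smoothing bias
```

A smaller σ gives less smoothing (a more "ragged" estimate), mirroring the sensitivity study in Section 5.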

4.2 Application to Tracker Calibration

Consider the GRNN applied to the star tracker calibration problem. Since it is desired to "invert" the distortion profile, the neural net must be set up to learn the appropriate inverse mapping. For this purpose, the GRNN input is chosen as the STU output,

X_{kl} = [\tilde x_{kl}, \tilde y_{kl}, p_{kl}^T]^T    (22)

In addition to (\tilde x_{kl}, \tilde y_{kl}), the covariate vector p_{kl} has been included in (22) to capture dependencies on star, CCD and optics temperatures.

While in principle the STU input z_{kl} can be taken as the neural net output, a better choice is to first project it into the STU image plane as follows,

x_{kl} = z_{kl}^1 / z_{kl}^3    (23)

y_{kl} = z_{kl}^2 / z_{kl}^3    (24)

The use of the vector [x_{kl}, y_{kl}]^T ∈ R^2 rather than z_{kl} ∈ R^3 reduces the dimension by one. The neural net outputs are defined in terms of the error profile,

y^1_{kl} = \tilde x_{kl} - x_{kl}    (25)

y^2_{kl} = \tilde y_{kl} - y_{kl}    (26)

Since there are two outputs, the GRNN methodology must be applied to each output y^1 and y^2 separately.

At any point in time the neural net learning can be frozen, and the STU distortion corrected by inverting the mapping as follows,

x_{kl} = \tilde x_{kl} - E[y^1_{kl} | X_{kl}]    (27)

y_{kl} = \tilde y_{kl} - E[y^2_{kl} | X_{kl}]    (28)

where the GRNN is used to produce the required conditional mean estimates.

5 NUMERICAL EXAMPLE

A numerical example is given to demonstrate the neural net approach. For this example, the star tracker is chosen with an 8 degree FOV, and a CCD with a 512x512 pixel array. It is assumed that the attitude is known perfectly at each time instant, that there are 15 stars per exposure frame (as would be consistent with a full frame device), and that there are 300 frames. Assuming a frame rate of 1 second, this would involve 300 seconds of data. This gives a total of 4500 stars for calibration purposes. The stars from each exposure are assumed to be Poisson distributed across the FOV. The STU distortion pattern (in the x direction, F_1) is shown in Figure 1. This distortion is spatially oscillatory and attains a maximum magnitude of .1 pixels. This shape is typical of optical distortion patterns remaining after initial ground calibration and captures the low spatial frequency behavior of the error. High frequency distortions are also important but are ignored in this study. Also, the dependencies on covariates p_{kl} (i.e., star temperature, CCD temperature and optics temperature) are ignored for simplicity.
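A simplified end-to-end sketch of the calibration loop (22), (25), (27), using the Gaussian-kernel GRNN form assumed earlier and an illustrative distortion profile (measurement noise, y-distortion, and covariates omitted; the paper's actual distortion surface and kernel rules (17)-(21) are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def grnn_predict(X, y, xq, sigma):
    """GRNN conditional-mean estimate (assumed Gaussian-kernel form)."""
    d2 = np.sum((X - xq) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.dot(w, y) / np.sum(w)

# Training data: stars spread over a normalized FOV with an illustrative
# low-spatial-frequency x-distortion (placeholder, not the paper's surface).
n = 2000
x_true = rng.uniform(-0.07, 0.07, n)
y_true = rng.uniform(-0.07, 0.07, n)
F1 = lambda x, y: 2e-4 * np.sin(30.0 * x) * np.cos(30.0 * y)
x_meas = x_true + F1(x_true, y_true)      # eq (12) with noise omitted
y_meas = y_true                           # y-distortion omitted for brevity

# GRNN input is the measured location as in (22); output is the x-error (25).
X_in = np.column_stack([x_meas, y_meas])
err_x = x_meas - x_true

# Correct a new measurement by subtracting the conditional-mean error, eq (27).
xt, yt = 0.01, -0.02                      # true location of a test star
xm = xt + F1(xt, yt)                      # its distorted measurement
x_corr = xm - grnn_predict(X_in, err_x, np.array([xm, yt]), sigma=0.01)
assert abs(x_corr - xt) < abs(xm - xt)    # correction shrinks the error
```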

Figure 1: STU distortion pattern of .1 pixels (worst case) for x-axis

Given the star location data, the neural net (20)(21) with input vector (22) and outputs (25)(26) is used to estimate the distortion surface. The result is shown in Figure 2, using a value of σ = .042 rad. The smoothness of the estimate is clearly seen from Figure 2, as is its resemblance to the true surface of Figure 1. The residual distortion after correction is shown in Figure 3, and has a worst-case error of .051 pixels. Hence, the effect of the neural net in this example is to reduce the worst-case error from .1 pixels to .051 pixels.

As mentioned in Section 4, the neural net performance is sensitive to the choice of σ. This sensitivity is briefly studied by changing the value to σ = .021 and repeating the previous run. The resulting neural net estimate is shown in Figure 4. As expected, there is less smoothing and the estimate is more "ragged" than the earlier estimate of Figure 2. The residual error distortion after correction is shown in Figure 5, and has a worst-case error of .034 pixels compared to the initial error of .1 pixels. Comparing the .034 pixel error with the previous run's .051 pixel error indicates that the estimation process has been improved by using less smoothing.

6 CONCLUSIONS

This paper demonstrates how a neural net can be used for in-flight star tracker calibration. Specifically, a nonlinear distortion mapping is estimated autonomously by a neural net using in-flight measurements. The star tracker distortion is then corrected by using the learned mapping. A numerical example indicates that the neural net can be used to reduce error from .1 pixel to between .051 and .034 pixels. This example assumes that the attitude is known at each time. The method of bootstrapping attitude while calibrating the star tracker, discussed in Section 3, remains to be tested by simulation. It is expected that comparable results will require larger data sets in order to "smooth" over additional attitude estimation errors.

Acknowledgements

The author would like to thank Dr. Jim Alexander and Dan Chang of the Automation and Control Section of JPL for several technical discussions. This research was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

References

Cacoullos, T. (1966). Estimation of a multivariate density. Ann. Inst. Statist. Math. (Tokyo), vol. 18, no. 2, pp. 179-189.

Grewal, M.S. and M. Shiva (1995). Application of Kalman filtering to gyroless attitude determination and control system for environmental satellites. Proc. IEEE Conference on Decision and Control, Buena Vista, Florida.

Kissel, G.J. (1995). Precision pointing for the Pluto mission spacecraft. 18th Annual AAS Guidance and Control Conference, Paper AAS 95-065, Keystone, Colorado.

Liebe, C.C. (1993). Pattern recognition of star constellations for spacecraft applications. IEEE Aerospace & Electronics Systems Magazine, pp. 31-39, Jan. 1993.

Liebe, C.C. (1995). Star trackers for attitude determination. IEEE Aerospace & Electronics Systems Magazine, pp. 10-16, June 1995.

Parzen, E. (1962). On estimation of a probability density function and mode. Ann. Math. Statist., vol. 33, pp. 1065-1076.

Shuster, M.D. and R.V.F. Lopes (1994). Parameter interference in distortion and alignment calibration. Proceedings AAS Flight Mechanics Meeting, Paper No. AAS-94-186, Cocoa Beach, Florida.

Specht, D.F. (1991). A general regression neural network. IEEE Trans. Neural Networks, vol. 2, no. 6, pp. 568-576, November 1991.

Stanton, R.J., J.W. Alexander and E.W. Dennison (1993). CCD star tracker experience: key results from ASTRO-1 flight. SPIE Vol. 1949, Space Guidance, Control and Tracking.

Thompson, J.R. and R.A. Tapia (1990). Nonparametric Function Estimation, Modeling, and Simulation. Philadelphia: SIAM Press.

Wertz, J.R. (Ed.) (1978). Spacecraft Attitude Determination and Control. Kluwer Academic Publishers, Boston.

APPENDIX A:

In this appendix, details are given behind the derivation of equation (7).

Substituting (3)(4) and (5)(6) into (2) gives,

z_{kl} = T A_k v_{kl} = \delta T(\theta)\,\hat T\,\delta A(\phi_k)\,\hat A_k v_{kl}    (29)

    = (I - \theta^\times)\,\hat T\,(I - \phi_k^\times)\,\hat A_k v_{kl}    (30)

Expanding (30) and neglecting second-order terms gives,

z_{kl} = \hat T \hat A_k v_{kl} - \theta^\times \hat T \hat A_k v_{kl} - \hat T \phi_k^\times \hat A_k v_{kl}    (31)

    = \hat z_{kl} - \theta^\times \hat z_{kl} - \hat T \phi_k^\times \hat T^T \hat z_{kl}    (32)

    = \hat z_{kl} - \theta^\times \hat z_{kl} - (\hat T \phi_k)^\times \hat z_{kl}    (33)

    = \hat z_{kl} + \hat z_{kl}^\times (\theta + \hat T \phi_k)    (34)

Here, equation (32) follows from (31) by using the definition of \hat z_{kl} from (8) and the relation \hat T^T \hat T = I; equation (33) follows by using the relation \hat T \phi_k^\times \hat T^T = (\hat T \phi_k)^\times, which is a consequence of Identity A.1 below; and equation (34) follows since a^\times b = -b^\times a for any a, b ∈ R^3. Equation (34) is the desired result (7).

Identity A.1 For any rotation matrix R (i.e., any matrix R ∈ R^{3×3} such that R^T R = R R^T = I and det R = 1), and any vector u ∈ R^3, the following equality holds,

(R u)^\times = R\,u^\times R^T    (35)

Proof: Given vectors u, v ∈ R^3, let

w = u \times v = u^\times v    (36)

For any rotation matrix R, one has,

R w = R(u \times v) = (Ru) \times (Rv) = (Ru)^\times R v    (37)

or solving for w gives,

w = R^T (Ru)^\times R v    (38)

Equating (36) and (38) gives,

R^T (Ru)^\times R v = u^\times v    (39)

Since v is arbitrary, this implies,

R^T (Ru)^\times R = u^\times    (40)

Rearranging (40) gives the desired result (35).
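Identity A.1 is easy to spot-check numerically; a minimal sketch using an arbitrary z-axis rotation:

```python
import numpy as np

def skew(v):
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

# Numerical spot-check of Identity A.1: (Ru)^x = R u^x R^T for a rotation R.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])            # rotation about the z-axis
u = np.array([0.3, -1.2, 2.0])
assert np.allclose(skew(R @ u), R @ skew(u) @ R.T)
```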

Figure 2: Neural net estimate of X-axis distortion; n = 4500, σ = .042

Figure 3: Residual X-axis distortion of .051 pixels (worst case) after neural net calibration; n = 4500, σ = .042

Figure 4: Neural net estimate of X-axis distortion; n = 4500, σ = .021

Figure 5: Residual X-axis distortion of .034 pixels (worst case) after neural net calibration; n = 4500, σ = .021