ON SYSTEMATIC ERRORS IN TRAJECTORY DETERMINATION PROBLEMS

by Dr. Alfons J. Claus
Member of the Technical Staff, Bell Telephone Laboratories, Inc., Whippany, New Jersey

ABSTRACT

The case that is considered in this paper consists of the determination of the orbit of a near-earth satellite from biased observations from a single tracking station. A brief theoretical discussion, confirmed by numerical computation, shows that the computed orbit becomes completely insensitive to observational bias errors if the variance of the arithmetic mean of the random errors approaches zero. Various curves are given illustrating effects of pass length (smoothing time) and magnitude of the relevant errors on the accuracy of the resulting orbital elements.
Formulas are presented for the determination of satellite trajectories from observations containing random as well as systematic errors. The case where the systematic errors consist of instrument biases is treated in some detail. It is shown that the accuracy of the computed trajectory becomes highly insensitive to the magnitude of bias errors as soon as the variance of the arithmetic mean of the random errors becomes negligible. The latter conclusion is verified on the basis of numerical computation. A short discussion is devoted to the question of so-called inbred bias errors.
The theoretical investigation leads naturally into the issue of "inbred" bias errors, which is treated in some detail. Briefly, it is shown that, if the optimum filter ceases to be valid due to singular behavior, an appropriate number of bias errors should be ignored. Although this will undoubtedly cause a loss of accuracy of the established orbit, it preserves the quality of predicted observations provided they are of the same type and pertain to the same station(s) as the ones used for the orbit determination process itself.
INTRODUCTION

The presence of systematic errors among observations to be used for establishing a satellite orbit invariably has a deteriorating influence on the accuracy of the computed orbital elements. The extent to which the orbit is affected depends obviously on the way the observations are handled. In the vast majority of present-day operational orbit determination methods, an orbit is found which matches the observations according to a weighted least squares fit. Such a procedure extracts the maximum amount of information from the available data only if the latter happen to be uncorrelated, which is hardly ever the case in practice. The reason for the application of a least squares filter is, among others, its inherent simplicity. Although this is undoubtedly a highly desirable characteristic, it seems worthwhile to investigate at what price it is obtained. In particular, the question pertaining to the loss of accuracy resulting from ignoring systematic errors tends to be a pertinent one indeed.
FORMULATION AND DISCUSSION

Before we engage in the detailed calculations, a few more general remarks may be appropriate. Strictly speaking, they apply only to situations in which a rather substantial amount of observations is processed simultaneously. Minimum variance estimators, as well as estimators based on the maximum likelihood principle, involve the inversion of the covariance matrix of the observational errors. In one of the simplest applications, namely the least squares method, the inversion is obviously immediate since the covariance matrices of all observational errors are assumed to be diagonal. Inversion problems may begin to appear as soon as the latter assumption is removed, thereby recognizing the presence of nonrandom errors.
It is clear that any acknowledgment of systematic observational errors requires the so-called weighting matrix of the observations to have nondiagonal elements different from zero. Apparently, only the treatment of special cases can be carried through with sufficient detail to be practically useful. However, these cases offer valuable insight and reveal interesting features connected with the use of filters designed for the removal of low frequency errors.
The reason for this is twofold. First, the size of the covariance matrix could be rather large, the matrix having a number of rows (and columns) equal to the number of observations to be processed. This disadvantage can be somewhat relieved by prefiltering, or reducing observations to so-called normal places. Second, it will be explained later that the more one wishes to enhance the presence of low frequency errors, the more the corresponding covariance matrix approaches the singular case. It seems, then, that a straightforward application of a maximum likelihood (or minimum variance) filter, when simultaneously applied to a large number of observations, should represent a compromise between two possible extremes: high correlation coefficients to account for systematic errors and zero correlation to make the matrix inversion trivial.

We mentioned earlier that prefiltering, perhaps using polynomial filters, can relieve the inversion problem to some extent. Actually, the degree of possible improvement is rather limited. The reason lies in the fact that polynomial filters of reasonably short smoothing time, as must be the case in the applications under present discussion, form no effective shield against systematic errors, but may reduce high frequency errors substantially. It follows that the output noise can have quite a narrow bandwidth, resulting in a high correlation between the prefiltered observations, thus causing the covariance matrix to be ill-conditioned. Hence, the advantage of having a smaller size matrix to invert is apparently offset by the fact that the new matrix contains substantially higher correlation coefficients than the original one.

The implication of the previous discussion is that any method of orbit determination which calls for the explicit numerical inversion of the covariance matrix of observational errors should be avoided. Apparently, a more fruitful approach to the treatment of systematic errors consists of assuming the errors to originate from random processes such that an analytic inversion of the covariance matrix becomes practically possible. Obviously, this may put quite serious constraints on the error models amenable to analysis. However, it turns out that one of the most important cases in practice, namely errors resulting from an imperfect knowledge of certain constants, belongs to the latter category. Cases of slowly varying errors, such as sometimes encountered in range and range rate systems, can also be handled. In the latter cases, the errors may be assumed to originate from Markov sequences of appropriate order and bandwidth.

Some of the arguments presented above may have appeared rather vague. We will attempt to illustrate them in the case of systematic errors expressible in a functional form involving only unknown parameters, which was earlier described as being of considerable practical importance. Assume the error vector δλ of the observations to consist of a vector ε whose components form a set of statistically independent random variables, and a vector Nδv, where δv is the error vector of the relevant parameters; N is the matrix of sensitivities of the observations with respect to these parameters. Thus, δλ = ε + Nδv. If Γ, Σ and Λ represent the covariance matrices of δλ, ε and δv, respectively, it follows that

Γ = Σ + NΛN'

in which the prime denotes transposition. To simplify the argument, let us consider the case of n observations where each observation is corrupted by a random error with standard deviation σ and an error with standard deviation γ, the latter error being the same for all observations. We then have

Γ = σ²I + γ²Q

in which I represents an n×n unit matrix and Q is an n×n matrix whose elements are all equal to unity. Let us examine the matrix Γ for small values of σ²/n. Clearly, when σ approaches zero, the matrix becomes singular since Q is a matrix of rank one (n > 1). The behavior of Γ for large n is investigated by studying the value Δₙ of the determinant of the correlation matrix,

Δₙ = σ^(2(n-1)) (σ² + nγ²) / (σ² + γ²)ⁿ.

It is easily shown that

lim (n → ∞) Δₙ = 0.

Thus, the smaller the value of σ²/n, the more troublesome the inversion of the covariance matrix of the observational errors. On the other hand, small values of σ²/n imply that the prevailing situation is such that the random errors can be "averaged out". Under these circumstances it is reasonable to expect that the effects of bias errors in the observations on the computed trajectory are likely to be small if the data are processed in an optimal fashion. Consequently, we are led to the conclusion that situations which look promising for bias removal are indeed the ones where computational difficulties are likely to appear in the numerical inversion of the covariance matrix of the observational errors.
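As a quick numerical check of the determinant expression above, the following sketch (not part of the original paper; NumPy and the arbitrary choice σ = γ = 1 are assumed) evaluates the determinant of the correlation matrix of σ²I + γ²Q directly and compares it with the closed form, showing how rapidly it collapses toward zero as n grows:

```python
import numpy as np

# Sketch: determinant of the correlation matrix of  sigma^2*I + gamma^2*Q,
# where Q is the n x n matrix of ones (a bias error common to all observations).
def correlation_determinant(n, sigma, gamma):
    cov = sigma**2 * np.eye(n) + gamma**2 * np.ones((n, n))
    d = np.sqrt(np.diag(cov))
    return np.linalg.det(cov / np.outer(d, d))

sigma, gamma = 1.0, 1.0        # assumed equal random and bias standard deviations
for n in (2, 5, 10, 20, 50):
    closed = sigma**(2*(n - 1)) * (sigma**2 + n*gamma**2) / (sigma**2 + gamma**2)**n
    print(f"n={n:3d}  det={correlation_determinant(n, sigma, gamma):.3e}  closed form={closed:.3e}")
```

The determinant, and with it the conditioning of the matrix to be inverted, deteriorates rapidly as the number of simultaneously processed observations increases.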
The previous example can also serve as a guide line to illustrate the effects of prefiltering. Suitable polynomial filtering applied to densely spaced data, for instance, should not noticeably affect possible bias errors but will reduce the random noise level considerably. Hence, although in the expression for Δₙ, n becomes a reasonably small value (the resulting number of "normal places"), σ² approaches zero with an increasing amount of raw data. It becomes quite evident that effective bias removal and ill-conditioned covariance matrices of the observational errors go hand in hand. This is exactly the reason why we insist on an analytic inversion of Γ. To be sure, any inversion of Γ is obviously impossible if σ = 0 but, even in the latter case, the formulation to be derived will maintain its consistency.

The actual derivation is given elsewhere(2) but is briefly repeated here for completeness. (Superior numbers refer to similarly-numbered references at the end of this paper.) Under appropriate assumptions, a set of orbital elements, represented by the column vector α, is adjusted to the set α̂ according to the formula(1),(4)

α̂ = α + (C⁻¹ + J'Γ⁻¹J)⁻¹ J'Γ⁻¹ δA    (1)

in which δA is the vector of residuals in the relevant observations, the residuals being computed on the basis of the elements α; C is the covariance matrix of α; Γ is the covariance matrix of the observational errors and J is the matrix of partial derivatives of the observations with respect to the orbital elements. The covariance matrix of the refined elements is

cov(α̂) = (C⁻¹ + J'Γ⁻¹J)⁻¹    (2)

The analytic inversion of Γ is accomplished by making use of the matrix identity

Γ⁻¹ = (Σ + NΛN')⁻¹ = Σ⁻¹ − Σ⁻¹N(Λ⁻¹ + N'Σ⁻¹N)⁻¹N'Σ⁻¹    (3)
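The refinement step (1)-(2), with Γ inverted analytically through the identity (3), can be sketched in a few lines of code. The sketch below is not from the paper: J, N, the residual vector and the covariance levels are random or arbitrary stand-ins, and only the algebra follows the formulas above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_el, n_bias = 30, 6, 3          # observations, orbital elements, bias sources

J   = rng.normal(size=(n, n_el))    # partials of observations w.r.t. elements (stand-in)
N   = rng.normal(size=(n, n_bias))  # sensitivities w.r.t. the bias parameters (stand-in)
Sig = 0.1**2 * np.eye(n)            # random-error covariance (Sigma)
Lam = 1.0**2 * np.eye(n_bias)       # bias covariance (Lambda)
C   = 1e6 * np.eye(n_el)            # weak a priori covariance of the elements
dA  = rng.normal(size=n)            # vector of residuals (stand-in)

# Identity (3): Gamma^-1 = Sig^-1 - Sig^-1 N (Lam^-1 + N' Sig^-1 N)^-1 N' Sig^-1
Si   = np.linalg.inv(Sig)
core = np.linalg.inv(np.linalg.inv(Lam) + N.T @ Si @ N)
Gi   = Si - Si @ N @ core @ N.T @ Si

# Formulas (1)-(2): covariance of the refined elements and the correction alpha_hat - alpha
cov_refined = np.linalg.inv(np.linalg.inv(C) + J.T @ Gi @ J)
correction  = cov_refined @ (J.T @ Gi @ dA)

# Consistency check against brute-force inversion of Gamma = Sigma + N Lam N'
assert np.allclose(Gi, np.linalg.inv(Sig + N @ Lam @ N.T))
print(correction)
```

Only the small matrix Λ⁻¹ + N'Σ⁻¹N (one row and column per bias source) has to be inverted numerically; the large matrix Γ never is.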
We wish to examine the case where the systematic error is predominant. To this effect, let σ² approach zero in Σ = σ²I. This is the case where a straightforward application of the formulas (1-2) is impossible. As mentioned earlier, any method of inverting Γ for σ = 0 will result in failure, but emphasis must be placed on the fact that analytic inversion of Γ for any nonzero value of σ, however small, leads to a numerical formalism for orbit refinement which is ultimately still valid for σ = 0. The necessary formulas are easily obtained by substituting σ²I for Σ in the expression (3). The matrices J'Γ⁻¹J and J'Γ⁻¹δA appearing in the formulas (1-2) become

J'Γ⁻¹J = (1/σ²) [J'J − J'N(σ²Λ⁻¹ + N'N)⁻¹N'J]    (4)

and

J'Γ⁻¹δA = (1/σ²) [J'δA − J'N(σ²Λ⁻¹ + N'N)⁻¹N'δA]    (5)

Formulas (4-5) are applicable for any value of σ. In particular, they are suitable for the study of cases in which σ = 0, provided the matrix J'J − J'N(N'N)⁻¹N'J is nonsingular. The case for which the latter condition is not satisfied is quite exceptional and is discussed in some detail in the next section. However, it is assumed throughout this paper that both matrices J'J and N'N are nonsingular.

If the matrices J and N are partitioned according to

J' = (J₁' J₂' ⋯ Jₙ'),   N' = (N₁' N₂' ⋯ Nₙ'),

where Jᵢ and Nᵢ correspond to the i-th observation, the covariance formula (2) may be put in the form

cov(α̂) = (σ²/n) S⁻¹(σ², n)    (6)
in which

S(σ², n) = (σ²/n)C⁻¹ + (1/n)[Σᵢ₌₁ⁿ Jᵢ'Jᵢ − (Σᵢ₌₁ⁿ Jᵢ'Nᵢ)(σ²Λ⁻¹ + Σᵢ₌₁ⁿ Nᵢ'Nᵢ)⁻¹(Σᵢ₌₁ⁿ Nᵢ'Jᵢ)]    (7)

It is readily shown that all elements of the matrix S(σ², n) remain bounded for either σ approaching zero, or n approaching infinity. Formula (6) then indicates that in the hypothetical case where the random errors are averaged out, that is in the case where σ²/n is zero, the trajectory will be determined with absolute precision in spite of the fact that bias errors may be present. Let us recall once more that this is exactly the case for which a straightforward application of the formulas (1-2) becomes impossible. It is of some importance to realize that, of course, the previous situation will hardly ever arise in practice, but the fact remains that σ²/n can become quite small compared to anticipated bias errors, in which case serious numerical difficulties may be encountered during the inversion of the matrix Γ.

In this paper, orbit determination computations are carried out using n slant range measurements and n measurements of azimuth angles and elevation angles from a single tracking station. Systematic errors are assumed to consist of a constant error for range R, a constant error for elevation angle E and an error for azimuth angle A which is inversely proportional to cos E. No prior information is assumed to be known, so that C⁻¹ is put equal to a null matrix. Application of the formulas (1-3) yields

cov(α̂) = [Σᵢ₌₁ⁿ Jᵢ'WJᵢ − (1/n)(Σᵢ₌₁ⁿ Jᵢ')WΛ(Λ + W⁻¹/n)⁻¹(Σᵢ₌₁ⁿ Jᵢ)]⁻¹    (8)

in which W⁻¹ is the covariance matrix of the random errors in a data triplet R, A, E.

If the elements of the matrix Λ are much larger than the ones of W⁻¹/n, that is if the bias errors are predominant, formula (8) becomes

cov(α̂) = [Σᵢ₌₁ⁿ Jᵢ'WJᵢ − (1/n)(Σᵢ₌₁ⁿ Jᵢ')W(Σᵢ₌₁ⁿ Jᵢ)]⁻¹    (9)

Obviously, for any matrix Λ ≠ 0, there exists a sufficiently low noise level such that the above expression for cov(α̂) becomes applicable. Thus, for such noise levels, formula (9) provides a measure of the attainable accuracy in the presence of bias errors and, to a first approximation, is seen to be independent of the actual value of these errors. On the other hand, if Λ = 0, indicating the fact that no bias errors are present, we obtain

cov(α̂) = [Σᵢ₌₁ⁿ Jᵢ'WJᵢ]⁻¹    (10)
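A small numerical sketch (with random stand-in partials Jᵢ rather than a real tracking geometry) illustrates formulas (8)-(10): for a low noise level, the covariance obtained with two very different bias covariances Λ is nearly the same, while the no-bias covariance (10) is noticeably smaller.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40                                    # number of data triplets
J = [rng.normal(size=(3, 6)) for _ in range(n)]   # stand-in 3x6 partials per triplet

def cov_eq8(W, Lam):
    """Formula (8): [sum J_i'WJ_i - (1/n)(sum J_i') W Lam (Lam + W^-1/n)^-1 (sum J_i)]^-1"""
    SJ  = sum(J)
    JWJ = sum(Ji.T @ W @ Ji for Ji in J)
    mid = W @ Lam @ np.linalg.inv(Lam + np.linalg.inv(W) / n)
    return np.linalg.inv(JWJ - SJ.T @ mid @ SJ / n)

sigma = 1e-3                              # small random-noise level (assumed)
W    = np.eye(3) / sigma**2               # weight of one triplet
Lam1 = np.diag([1.0, 2.0, 0.5])**2        # one bias covariance (arbitrary)
Lam2 = 100.0 * Lam1                       # much larger bias errors

print("trace, biases Lam1 :", np.trace(cov_eq8(W, Lam1)))
print("trace, biases Lam2 :", np.trace(cov_eq8(W, Lam2)))    # nearly identical to Lam1 case
print("trace, no biases   :", np.trace(cov_eq8(W, 0.0 * Lam1)))
```

For noise levels small compared with the biases, the attainable accuracy is seen to be essentially independent of the actual bias magnitude, as formula (9) predicts.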
The formulas (9) and (10) considered together deserve some further attention. Both formulas indicate that, at least for sufficiently small random errors in the case of (9), the accuracy of the established trajectory depends solely on W and n, a statement which is hardly surprising with regard to formula (10). In connection with formula (9), however, the feature to be emphasized is the fact that the mere presence of bias errors, independent of their magnitude, is sufficient to make the formula applicable. The reason for this perhaps somewhat puzzling conclusion must be found in the behavior of the matrix Λ(Λ + W⁻¹)⁻¹ in the vicinity of Λ = W⁻¹ = 0. Indeed, the latter matrix function is discontinuous at Λ = W⁻¹ = 0. If Λ = 0 (no bias errors), Λ(Λ + W⁻¹)⁻¹ = 0 for any W⁻¹ ≠ 0. On the other hand, if W⁻¹ = 0, Λ(Λ + W⁻¹)⁻¹ = I for any Λ ≠ 0 (bias errors present). This peculiarity is further illustrated in the last section by means of the numerical results.
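The discontinuity is easy to see in the scalar case; the following toy computation (not from the paper) evaluates λ/(λ + w⁻¹) near the origin:

```python
# Scalar version of Lam*(Lam + W^-1)^-1 near Lam = W^-1 = 0.
def f(lam, w_inv):
    return lam / (lam + w_inv)

print(f(0.0, 1e-9))    # no bias, any nonzero noise      -> 0
print(f(1e-9, 0.0))    # any nonzero bias, zero noise    -> 1
print(f(1e-9, 1e-9))   # comparable small bias and noise -> 1/2
```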
If C_b and C_r represent, respectively, the covariance matrices of the orbital elements in the events that bias errors are and are not present, suitable matrix manipulations on the formulas (9-10) yield

C_b = C_r + C_r(Σᵢ₌₁ⁿ Jᵢ')[nW⁻¹ − (Σᵢ₌₁ⁿ Jᵢ)C_r(Σᵢ₌₁ⁿ Jᵢ')]⁻¹(Σᵢ₌₁ⁿ Jᵢ)C_r    (11)

It can be shown that the matrix C_r(Σᵢ Jᵢ')[nW⁻¹ − (Σᵢ Jᵢ)C_r(Σᵢ Jᵢ')]⁻¹(Σᵢ Jᵢ)C_r is positive semi-definite. Thus, it follows from formula (11) that the diagonal elements of C_b, i.e. the variances of the orbital elements in the case bias errors are present, are larger than the diagonal elements of C_r. Hence, with the same noise level, bias errors will indeed deteriorate the trajectory determination but, again, the deterioration is independent of the magnitude of the bias errors provided the noise level is low enough.

A more detailed study would indicate that the filter (7) is more likely to cause some computational difficulties than the usual (weighted) least squares filter. This disadvantage becomes more troublesome either when attempting to account for more sources of bias errors, or when attempting to handle data from short tracking intervals in the event that no prior observations were processed. Of course, the nature of the observations themselves plays an important role.

INBRED BIAS ERRORS

This section is mainly concerned with the study of the accuracy of predicted acquisition quantities for sensors whose earlier data were used for the computation of the trajectory under consideration. It turns out that this topic is very closely related to the fact that the filter (4) may cease to be valid in case the matrix

M = J'J − J'N(N'N)⁻¹N'J    (12)

is singular. Indeed, if the latter condition prevails, it will be shown that the inability of removing all bias errors does not affect the acquisition accuracy in the hypothetical case of no random errors. It is perhaps worthwhile to remark that, theoretically, the filter (4) never breaks down as long as the random errors, however small, are different from zero, provided Λ⁻¹ ≠ 0 and enough observations are processed.

Before proceeding to the detailed discussion, it is found to be convenient to establish a necessary and sufficient condition for M to be singular, which will lend itself to a clear geometrical interpretation. If there exist two nonzero vectors δv and δα such that

Nδv = Jδα    (13)

it is easily shown that M is singular. Indeed, combination of equations (12) and (13) yields Mδα = 0 for nonzero δα. Conversely, if M is singular, there exists a nonzero vector δα for which δα'Mδα = 0. Or, with p = Jδα,

p'[I − N(N'N)⁻¹N']p = 0    (14)

Since it can be shown that the matrix I − N(N'N)⁻¹N' is positive semi-definite, it follows that equation (14) is equivalent with the linear system

[I − N(N'N)⁻¹N']p = 0.

Let n be the number of observations and r the number of bias sources. If the rank of N is r (as assumed earlier), indicated by ρ(N) = r, it follows from Sylvester's law(3) that ρ[I − N(N'N)⁻¹N'] = n − r. This means that p can be expressed in terms of the r components of a vector δv. It is immediately verified that p = Nδv represents the desired solution. Thus, the two nonzero vectors δv and δα satisfy the condition Nδv = Jδα.
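The singularity condition (12)-(13) can be checked numerically. In the sketch below (random stand-in matrices, not the paper's geometry), a bias column of the form N = Jk, i.e. one whose effect on the observations is exactly reproducible by an element change as in (13), drives the smallest singular value of M to zero, while a generic bias column does not:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_el = 25, 6
J = rng.normal(size=(n, n_el))           # stand-in partials

def M_of(N):
    """Formula (12):  M = J'J - J'N(N'N)^-1 N'J"""
    return J.T @ J - J.T @ N @ np.linalg.inv(N.T @ N) @ N.T @ J

k  = rng.normal(size=(n_el, 1))
N1 = J @ k                               # bias signature satisfying (13), e.g. a longitude-type error
N2 = rng.normal(size=(n, 1))             # generic bias signature

for name, N in (("equivalent bias", N1), ("generic bias", N2)):
    s = np.linalg.svd(M_of(N), compute_uv=False)
    print(f"{name}: smallest singular value of M = {s[-1]:.2e}")
```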
The conditions implied by the matrix relation (13) lend themselves to a simple geometric interpretation which has a strong intuitive appeal. Consider the hypothetical case in which there are no random errors. If also no bias errors are present, the actual set of orbital elements is obviously capable of producing simulated observations which match the observational data perfectly. Let us now introduce bias errors represented by the components of δv appearing in equation (13). The observed quantities in this case will differ from the previous ones by an increment Nδv, since N is the matrix of the relevant sensitivities. By virtue of equation (13), this increment can be put in the form Jδα. It follows that the actual set of orbital elements incremented by δα corresponds again to a trajectory which matches the (offset) observations perfectly. Consequently, there is no way of detecting whether bias errors were present or not. The above geometric interpretation of equation (13) leads immediately to the conclusion that the effect of an error in station longitude on the trajectory computed on the basis of observations from the station in question cannot be eliminated. Another example is found in the case of an equatorial synchronous orbit. Any attempt to account for all bias errors in observations from a sensor located anywhere on earth will result in failure.

Some interesting conclusions follow quite readily from conditions (13). Let us write the latter in the form

Nδv − Jδα = 0    (15)

The matrix (N J) has n rows and r + 6 columns (if we are dealing with six state variables), and thus has a rank less than or equal to min(n, r + 6). Now, the vectors δv and δα have, respectively, r and six components. Consequently, if n < r + 6, the linear system (15) admits a nontrivial solution for δv and δα. Stated differently, for nonzero δv and δα, equation (13) does not impose a restriction on N and J unless n ≥ r + 6. It follows that effective bias removal can only be expected in case the number of processed observations equals at least the number of state variables plus the number of bias sources. This conclusion may be somewhat academic since, in practice, one would ordinarily process more observations than the aforementioned minimum.

The situation in connection with the behavior of formulas (4-5) can now be summarized as follows. If conditions (13) cannot be satisfied for nonzero vectors δv and δα, the formulas are always applicable. They break down if conditions (13) can be satisfied and either Λ⁻¹ = 0 or no random errors are present, or both. If Λ⁻¹ ≠ 0, a nonzero noise level is sufficient to avoid the singular case. Of course, it is tacitly understood that enough observations are processed.

The question pertaining to what smoothing formula should be used if the matrix M turns out to be singular is one of primary importance. In particular, we wish to investigate the attainable accuracy of predicted observations for, say, acquisition purposes. For simplicity of explanation, no random errors are assumed. As far as the current discussion is concerned, the latter situation is known to be equivalent to the case where Λ⁻¹ = 0 with random errors present.

First, we briefly examine the case where the matrix M is nonsingular. The error vector δα̂ in the computed set of orbital elements becomes

δα̂ = M⁻¹[J' − J'N(N'N)⁻¹N']Nδv

in which Nδv represents the error vector in the observations. Thus, δα̂ = 0 and the orbit can be established without error. Incidentally, this is in agreement with the earlier result that cov(α̂) = 0 in the absence of random errors. The vector of residuals in the data is δA = Nδv. It follows that δv = (N'N)⁻¹N'δA. Similarly, the residual in a predicted observation A₀ is δA₀ = N₀δv. Or, δA₀ = N₀(N'N)⁻¹N'δA. Hence, the residual itself is predictable on the basis of the known vector δA.

It is clear that, in case the matrix M is singular, no attempt should be made to filter out all bias errors. Instead, we must concentrate on the maximum number of such errors which can be handled without causing the smoothing formula to become invalid. We therefore partition the matrix N in the form N = (U₁ U₂ ⋯ U_p V), in which the number of columns of V is taken as large as possible, but still leaving the matrix M* = J'J − J'V(V'V)⁻¹V'J nonsingular; all matrices Uᵢ (i = 1, 2, ..., p) consist of one column.
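One simple way to carry out the partition N = (U₁ ⋯ U_p V) just described is to test the bias columns one at a time and retain only those that leave M* safely nonsingular. The sketch below is only a rough illustration under stated assumptions (random stand-in matrices, an arbitrary conditioning tolerance), not the author's procedure:

```python
import numpy as np

def m_star(J, V):
    """M* = J'J - J'V(V'V)^-1 V'J  (reduces to J'J when V has no columns)."""
    if V.shape[1] == 0:
        return J.T @ J
    P = V @ np.linalg.inv(V.T @ V) @ V.T
    return J.T @ J - J.T @ P @ J

def partition_biases(J, N, tol=1e-8):
    keep, V = [], np.empty((N.shape[0], 0))
    for j in range(N.shape[1]):
        trial = np.column_stack([V, N[:, j]])
        s = np.linalg.svd(m_star(J, trial), compute_uv=False)
        if s[-1] / s[0] > tol:          # M* still well conditioned: accept this column into V
            V, keep = trial, keep + [j]
    ignored = [j for j in range(N.shape[1]) if j not in keep]
    return keep, ignored

rng = np.random.default_rng(3)
J = rng.normal(size=(30, 6))
k = rng.normal(size=(6, 1))
N = np.column_stack([J @ k, rng.normal(size=(30, 2))])   # first bias column is "inbred"
print(partition_biases(J, N))           # expect the first column to be ignored
```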
We now consider observations corrupted with errors represented by the components of Nδv, where δv is an arbitrary vector. Let us apply the smoothing formula

α̂ = α + M*⁻¹[J' − J'V(V'V)⁻¹V']δA.

The vector δv can be arranged in the form

δv = (δv_u', δv_v')'    (16)

in which δv_u and δv_v correspond to the matrices U = (U₁ U₂ ⋯ U_p) and V, respectively.

If Nᵢ = (Uᵢ V), the definition of V implies that all matrices J'J − J'Nᵢ(Nᵢ'Nᵢ)⁻¹Nᵢ'J (i = 1, 2, ..., p) are singular. It follows then from equation (13) that vectors (μᵢ, δvᵢ')' and δαᵢ can be found such that

Uᵢμᵢ + Vδvᵢ = Jδαᵢ    (17)

Summing the previous expression over all i and putting δv̄_v = Σᵢ₌₁ᵖ δvᵢ and δα = Σᵢ₌₁ᵖ δαᵢ, we obtain

Uδv_u + Vδv̄_v = Jδα    (18)

in which δv_u denotes the vector (μ₁, μ₂, ..., μ_p)'. At this point we wish to emphasize that, in equation (17), all constants μᵢ are arbitrary. Consequently, the vector δv_u appearing in equation (18) can be taken equal to the vector δv_u of (16). Of course, this leaves the vectors δv_v and δv̄_v in general different from each other.

The observational error vector Nδv may be put in the form Nδv = Uδv_u + Vδv_v, and the error in the orbital elements becomes

δα̂ = M*⁻¹[J' − J'V(V'V)⁻¹V'](Uδv_u + Vδv_v).

Or, using equation (18), δα̂ = δα. Thus, contrary to the case where M is nonsingular, the observations are not capable of establishing the orbit exactly. However, it is still possible to predict observations (for the same sensor as was used to obtain the observations represented by A) with absolute precision. Indeed, using equation (18), the vector δA of observational residuals is expressible as δA = V(δv_v − δv̄_v). Similarly, for a predicted observation A₀, we obtain δA₀ = V₀(δv_v − δv̄_v). Finally, combination of the latter two expressions yields δA₀ = V₀(V'V)⁻¹V'δA, and δA₀ can be computed from the known residual vector δA.
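The two conclusions just derived, an orbit error equal to δα together with exactly predictable residuals δA₀ = V₀(V'V)⁻¹V'δA, can be verified on a synthetic example. In the sketch below the geometry is a random stand-in, the "inbred" bias signature is constructed as Jk so that condition (13) holds for every observation of the sensor, and no random noise is added:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_el = 30, 6
J  = rng.normal(size=(n, n_el))          # partials for the processed pass (stand-in)
J0 = rng.normal(size=(1, n_el))          # partials for a predicted observation

k   = rng.normal(size=n_el)              # element change equivalent to the inbred bias
U   = (J @ k).reshape(-1, 1)             # inbred bias signature (satisfies (13))
U0  = (J0 @ k).reshape(-1, 1)
V   = rng.normal(size=(n, 2))            # two removable bias signatures
V0  = rng.normal(size=(1, 2))

b_u, b_v = 0.7, np.array([0.3, -1.1])    # bias values (arbitrary)
dA = U[:, 0] * b_u + V @ b_v             # observational errors = residuals before the fit

PV      = V @ np.linalg.inv(V.T @ V) @ V.T
M_star  = J.T @ J - J.T @ PV @ J
d_alpha = np.linalg.solve(M_star, J.T @ (dA - PV @ dA))   # reduced filter

resid   = dA - J @ d_alpha               # residuals after the fit
r0_true = (U0[:, 0] * b_u + V0 @ b_v) - J0 @ d_alpha
r0_pred = V0 @ np.linalg.inv(V.T @ V) @ V.T @ resid

print("element error   :", d_alpha)      # equals k * b_u, i.e. not zero
print("k * b_u         :", k * b_u)
print("predicted resid :", r0_pred, " true:", r0_true)
```

The established trajectory is in error by k·b_u, yet the residual of a prediction for the same sensor is reproduced exactly, which is the essence of the inbred-bias phenomenon.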
By way of an example we may again consider a single tracker yielding measurements of slant ranges, azimuth angles and elevation angles, of a satellite placed in an equatorial, circular, synchronous orbit. It has already been pointed out that an attempt to remove possible bias errors in all three types of measurements is not desirable. In the present case, a sound policy may consist of ignoring bias errors in ranges and constructing a filter which is optimal with regard to bias errors in azimuth angles and elevation angles.
The previous discussion may have appeared to be rather academic because random errors are always present in practice. Moreover, the developments seem perhaps somewhat theoretical. However, they yield useful guide lines in many practical cases. For instance, we may wish to construct an optimum filter for a certain number of bias sources and subsequently discover that the relevant matrix is highly ill-conditioned. This would indicate that perhaps a number of appropriate bias errors can be ignored without causing a great loss of accuracy in predicted observations for the same sensor, although the accuracy of the established trajectory itself may suffer substantially. Any such deterioration, however, could never be detected without taking observations from an independent source. This argument is in agreement with, for instance, the intuitive notion that a single tracking station may very well acquire on the basis of its own predictions whereas acquisition quantities predicted for other stations could be largely in error. The latter situation has, in fact, been observed in practice.

NUMERICAL RESULTS
The formulas (7-8) have been used to process, first, measurements of ranges, azimuth angles and elevation angles and, second, angular measurements only. All observations belong to the same satellite pass originating from a single tracking station. In order to reduce the number of parameters to a reasonable minimum, the matrices W and Λ appearing in formulas (7-8) are put in the form

W = W₀/σ²,   Λ = γ²Λ₀

where W₀ and Λ₀ are fixed matrices. The parameters to be varied in this numerical study are γ and σ/√n (n is the number of data points). They represent, respectively, a measure of the magnitude of the bias errors and of the arithmetic mean of the random errors. In the case of the range and angle measurements, and for σ = γ = 1 mile, the matrices W₀ and Λ₀ were selected to make the standard deviations of the random and bias errors in the inertial satellite coordinates equal to one mile for a point near the center of the pass. In the case of only the angular measurements, the ranges were ignored by assigning them a weight equal to zero. In the latter case the results are presented in terms of the standard deviations γ_A and σ_A/√n, respectively, of the bias errors and of the arithmetic mean of the random errors in the angular measurements.
[Fig. 1 - Angular and range data covering 1 hour pass: σ_a (miles) as a function of the noise and bias parameters σ/√n and γ (miles).]

[Fig. 2 - Angular and range data covering 1/2 hour pass: σ_a (miles) as a function of σ/√n and γ (miles).]

[Fig. 3 - Angular and range data covering 1/4 hour pass: σ_a (miles) as a function of σ/√n and γ (miles).]

[Fig. 4 - Angular data covering 1 hour pass: σ_a as a function of the angular parameters σ_A/√n and γ_A (radians).]

[Fig. 5 - Angular data covering 1/2 hour pass: σ_a as a function of σ_A/√n and γ_A (radians).]

[Fig. 6 - Angular data covering 1/4 hour pass: σ_a as a function of σ_A/√n and γ_A (radians).]

[Fig. 7 - Property of γ curves: geometric construction relating two γ curves through the ratio OP/OQ (see text).]

Fig. 8 - Comparison of least squares filter and optimum filter:

  Filter           Case                     Position error (miles)   Velocity error (miles/sec)   Error in semi-major axis (miles)
  Least squares    No observational bias    3.14                     0.000222                     1.41
  Least squares    Observational bias       162.12                   0.011269                     37.63
  Optimum          No observational bias    3.14                     0.000222                     1.41
  Optimum          Observational bias       5.38                     0.000347                     2.03

[Fig. 9 - Typical behavior of the residuals in the observations: elevation-angle residuals versus time (minutes) resulting from least squares filtering and from optimum filtering.]
In terms of the above defined parameters, formula (8) can be put in the form

cov(α̂) = σ² [Σᵢ₌₁ⁿ Jᵢ'W₀Jᵢ − (1/n)(Σᵢ₌₁ⁿ Jᵢ')W₀Λ₀(Λ₀ + (σ²/(nγ²))W₀⁻¹)⁻¹(Σᵢ₌₁ⁿ Jᵢ)]⁻¹    (19)

Clearly, the adequacy of the established orbit should be judged at least on the basis of the accuracy of all orbital elements. However, in the interest of simplicity we present the results pertaining to only one element, namely the semi-major axis "a". Indeed, the semi-major axis is the most critical element with regard to long range predictions and, as it turned out for the particular geometry of this study, its accuracy can be taken as a reliable measure to judge the accuracy of the computed orbit. If σ_a is the standard deviation of "a", it follows from formula (19) that

σ_a² = γ² F(σ²/(nγ²))    (20)

where F represents a scalar function of the argument σ²/(nγ²). (Strictly speaking, only for sufficiently large values of n can F be regarded as a function of the one argument σ²/(nγ²).) Knowledge of this function enables us to draw families of curves displaying σ_a either as a function of σ/√n for constant γ, or as a function of γ for constant σ/√n.

The aforementioned curves, shown in Fig. 1 through Fig. 6, enjoy a simple geometric property which is helpful in obtaining additional ones with virtually no supplementary calculations. Let us consider two typical curves giving σ_a as a function of, say, σ/√n for two values γ₁, γ₂ of γ (Fig. 7). If σ₁ and σ₂ are values of σ/√n such that σ₁/γ₁ = σ₂/γ₂, it follows from expression (20) that

σ_a1/σ_a2 = γ₁/γ₂ = OP/OQ.

Thus, if point P describes the γ₁ curve, a second point Q, lying on the radius OP such that the ratio OP/OQ is kept constant, describes another γ curve with a corresponding γ value of γ₁·OQ/OP.

As was demonstrated earlier in this paper, all γ curves corresponding to a nonzero value of γ have the same slope at the origin. This slope is equal to the one of the straight line representing the γ "curve" in the hypothetical case of γ approaching infinity. It is perhaps worth mentioning that the present results can be considered valid only if the observational errors are small enough for the usual linearization processes to be applicable. Hence, except for the feature mentioned above, the "curve" for γ approaching infinity is meaningless.

The graphs giving σ_a as a function of γ for various values of σ/√n show clearly the deteriorating influence of bias errors on the accuracy of the computed trajectory. It can be seen that this so-called bias sensitivity is highly dependent on the type of data, the length of the pass and, to some extent, on the magnitude of the errors themselves. In the case of only angular data covering an interval of 15 minutes, the bias sensitivity is exceedingly small, whereas, for example, it becomes quite pronounced with a 15 minute pass of range and angular data. Of course, even in the latter case the results are preferable to the ones obtained by using a least squares filter on the biased data, simply because the present method is optimal for the assumed error model.
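Curves of the kind shown in Figs. 1-7 can be generated directly from formula (8) once W₀, Λ₀ and the partials are fixed. The sketch below uses random stand-in partials rather than the paper's tracker geometry, so the numbers are only illustrative; it does, however, exhibit the scaling property expressed by (20):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60                                        # data triplets in the pass
J = [rng.normal(size=(3, 6)) for _ in range(n)]   # stand-in partials
W0, Lam0 = np.eye(3), np.eye(3)               # fixed matrices of the parametrization

def sigma_a(sigma, gamma):
    W, Lam = W0 / sigma**2, gamma**2 * Lam0   # W = W0/sigma^2, Lambda = gamma^2 * Lam0
    SJ  = sum(J)
    JWJ = sum(Ji.T @ W @ Ji for Ji in J)
    mid = W @ Lam @ np.linalg.inv(Lam + np.linalg.inv(W) / n)
    cov = np.linalg.inv(JWJ - SJ.T @ mid @ SJ / n)
    return np.sqrt(cov[0, 0])                 # standard deviation of one chosen element

for gamma in (0.0, 0.5, 1.0, 2.0):
    row = [sigma_a(s * np.sqrt(n), gamma) for s in (0.1, 0.2, 0.4, 0.8)]   # s = sigma/sqrt(n)
    print(f"gamma={gamma:4.1f} :", np.round(row, 4))
# Doubling gamma and sigma/sqrt(n) together doubles sigma_a, which is the scaling
# property used above to construct additional gamma curves from an existing one.
```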
As a final part of the evaluation of the present method, the case of the one hour pass with angular data, displaying a limited amount of bias sensitivity, is examined in some detail. In particular, it is interesting to compare the performance of both the optimum method and the method of least squares.
Preliminary numerical computations show that the derived orbit is most vulnerable to bias errors when the ratio of these errors in elevation angles and azimuth angles is approximately equal to -3. Therefore, a trajectory is obtained taking data which, aside from random errors with standard deviation 0.25×10⁻³ radians, contains a bias error of 0.003 radians in azimuth angles and -0.01 radians in elevation angles. The computations are carried out using a least squares filter and, subsequently, the optimum filter presented in this paper. Finally, and as a basis for comparison, the unbiased data are processed according to the least squares method (which happens to be optimum in this case) to yield a third trajectory. Typical results are listed in Fig. 8 and clearly indicate the superiority of the optimum method compared to the method of least squares if observational bias errors are present. For instance, use of the optimum filter reduces position and velocity errors resulting from least squares filtering by a factor of approximately 30 in the case at hand.

The behavior of the residuals in the observations as encountered during the trajectory determination process exhibits an interesting feature. As is expected, the residuals have an arithmetic mean nearly equal to zero when using the least squares method. For instance, this is shown for the elevation angles in Fig. 9. If the orbital elements resulting from application of the least squares method are readjusted by means of the optimum filter, the residuals settle to the ones also indicated in Fig. 9. Hence, the optimum method allows the predominant observational bias error to remain in the residuals rather than forcing a match between the actual and simulated observations. Adding the latter bias error to corresponding predicted observations will greatly improve prediction accuracy.
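A miniature version of the comparison summarized in Fig. 8 can be reproduced with a synthetic linear model (random stand-in matrices; the noise and bias levels below merely echo the ones quoted above and are otherwise arbitrary): a least squares fit that ignores the bias is contrasted with the optimum filter of formulas (1)-(3).

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_el, n_bias = 120, 6, 2
J = rng.normal(size=(n, n_el))           # stand-in partials
N = rng.normal(size=(n, n_bias))         # stand-in bias sensitivities

sigma = 0.25e-3                          # random-noise level (assumed)
Lam   = np.diag([3e-3, 1e-2])**2         # assumed bias covariance
truth = rng.normal(size=n_el)
bias  = np.array([3e-3, -1e-2])          # actual bias values
obs   = J @ truth + N @ bias + sigma * rng.normal(size=n)

# (i) least squares, bias ignored
ls_est = np.linalg.solve(J.T @ J, J.T @ obs)

# (ii) optimum filter: weight with Gamma^-1, Gamma = sigma^2 I + N Lam N'
#     (direct inversion here for brevity; identity (3) gives the same result)
Gi = np.linalg.inv(sigma**2 * np.eye(n) + N @ Lam @ N.T)
opt_est = np.linalg.solve(J.T @ Gi @ J, J.T @ Gi @ obs)

print("least squares error :", np.linalg.norm(ls_est - truth))
print("optimum filter error:", np.linalg.norm(opt_est - truth))
```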
ACKNOWLEDGMENT

I wish to thank F. T. Geyling for valuable comments on this paper.

REFERENCES
(1) Blackman, R. B., "Methods of Orbit Refinement," Bell System Tech. J., Vol. 43, pp. 885-909, 1964.

(2) Claus, A. J., "Orbit Determination in the Presence of Systematic Errors," Celestial Mechanics and Astrodynamics, Vol. 14, pp. 725-742, 1964.

(3) Gantmacher, F. R., The Theory of Matrices, Vol. I, Chelsea Publishing Company, New York, 1960, pp. 61-66.

(4) Swerling, P., "First Order Error Propagation in a Stage-Wise Smoothing Procedure for Satellite Observations," J. Astronaut. Sci., Vol. 6, pp. 46-52, Autumn 1959.