Pattern Recognition Letters 26 (2005) 1782–1791 www.elsevier.com/locate/patrec
A new coarse-to-fine rectification algorithm for airborne push-broom hyperspectral images

Hongya Tuo *, Yuncai Liu
Institute of Image Processing & Pattern Recognition, Shanghai Jiaotong University, 1954 Hua Shan Road, Shanghai 200030, PR China

Received 8 July 2004; received in revised form 21 January 2005
Available online 14 April 2005
Communicated by R. Davies
Abstract

Rectification of airborne push-broom hyperspectral images is an indispensable and important task in the registration procedure. In this paper, a coarse-to-fine rectification algorithm is proposed. In the coarse rectification process, the data provided by the Positioning and Orientation System (POS) are used in a model of direct georeference position to adjust the displacement of each line of the images. In the fine rectification step, a polynomial model is applied so that the geometric relationships of all pixels in the raw image space match those in the reference image. Experiments show that the fine rectified images are satisfactory and that our rectification algorithm is very effective.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Coarse-to-fine rectification; Airborne push-broom hyperspectral image; Direct georeference position; Polynomial model
1. Introduction

Image rectification is an indispensable and important task in many fields, such as remote sensing, computer vision and pattern recognition, and especially in remote sensing image registration and fusion. Many research studies have been carried out on this topic during the last several decades.
* Corresponding author. Fax: +86 21 62932035. E-mail address: [email protected] (H. Tuo).
In this paper, we are interested in the rectification of airborne push-broom hyperspectral images. Such data contain rich information, with over 100 spectral bands at each pixel location, which is very useful for image fusion and classification. Unfortunately, due to the non-uniformity of the velocity, the gradient, and the altitude of the airplane during the flight, there are severe distortions in the acquired raw images. These images are only useful if they are rectified, so rectification is a key step for post-processing such as the registration and fusion procedures.
0167-8655/$ - see front matter 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.patrec.2005.02.005
The rectification process requires knowledge of the camera interior and exterior orientation parameters (three coordinates of the perspective center, and three rotation angles known as the roll, pitch, and yaw angles). In an image of frame photography, where all pixels are exposed simultaneously, each point has the same orientation parameters. The interior orientation parameters are provided by camera calibration. The exterior orientation parameters are determined by using mathematical models between the object and image spaces (Ebner et al., 1992; Kratky, 1989). Rectification for this kind of image is achieved indirectly by adjusting a number of well-defined ground control points and their corresponding image coordinates. Unlike frame photography, each line of an airborne push-broom image is collected at a different time, and the perspective geometry varies with each push-broom line. Therefore, each line has a different set of six exterior orientation parameters, so the displacement of each line differs and the rectification process becomes much more difficult. Nowadays, a positioning and orientation system (POS), integrating a global positioning system (GPS) and an inertial measurement unit (IMU), is carried by the airplane to provide precise position and attitude parameters (Haala et al., 1998; Cramer et al.,
2000; Mostafa and Schwarz, 2000; Skaloud and Schwarz, 1998; Skaloud, 1999). Many studies have discussed sensor modeling to estimate precise exterior orientation parameters for each line from POS data (Daniela, 2002; Chen, 2001; Gruen and Zhang, 2002; Lee et al., 2000; Hinsken et al., 2002). The aerial triangulation method is the traditional rectification method. Its aim is to determine the exterior orientation parameters of a block of images and the object coordinates of the ground points. But it needs a sufficient number of well-distributed ground control points, which is a time-consuming requirement, and it is usually applied to full-frame imagery.

The source images to be rectified come from an airborne push-broom imager developed by the Shanghai Institute of Technical Physics, Chinese Academy of Sciences. The airborne images contain 652 columns and have 124 different spectral bands at each pixel location. At constant time intervals, one line of 652 elements with 124 spectral bands is acquired (see Fig. 1(a), Lee et al., 2000). The across-track IFOV (Instantaneous Field of View) is 0.6 mrad, and the along-track IFOV is 1.2 mrad. At each instant, the sensor is positioned along its flight trajectory at the instantaneous perspective center (see Fig.
Fig. 1. (a) Airborne push-broom hyperspectral image. (b) Instantaneous perspective center.
Table 1
The accuracy of POS/AV 510-DG

  Position (m)         0.05–0.3
  Roll angle (deg)     0.005
  Pitch angle (deg)    0.005
  Yaw angle (deg)      0.008
1(b)). The POS data are obtained by the GPS/IMU equipment POS/AV 510-DG, whose accuracy is listed in Table 1. From Table 1, we can see that the accuracy of the POS data is relatively high, so the data can be used directly to rectify raw images. Therefore, a coarse-to-fine rectification algorithm using POS data is proposed. A reference image is required. In the coarse rectification process, the POS data are used in a model of direct georeference position to adjust the displacement of each line quickly. In the fine rectification step, a few ground control points are selected, and a polynomial model is applied so that the geometric relationships of all pixels in the raw image space match those in the reference image. Since the geometric distortions differ negligibly across the 124 bands, rectification is performed on just one band, and the same transformation is then applied to the other bands.

This paper is organized as follows. Section 2 gives the coarse-to-fine rectification algorithm in detail: the model of direct georeference position is established, the resampling and interpolation techniques are discussed, the fine rectification method using the polynomial model is presented, and finally the algorithm is listed step by step. Experimental examples are given in Section 3. A check of the rectification accuracy is presented in Section 4. Conclusions appear in Section 5.

2. Coarse-to-fine rectification algorithm

In this section, we propose the coarse-to-fine rectification algorithm for airborne push-broom hyperspectral images. Images of this kind are acquired line by line, and each line has different exterior orientation parameters, which makes the rectification process much more difficult. In the coarse rectification step, we use the POS data to establish the direct georeference position model. In the fine rectification step, we apply the polynomial model so that the geometric relationships of all pixels in the image space correspond to those in the reference image.

2.1. Coarse rectification

The coarse rectification step involves several key problems: introduction of coordinate systems, establishment of the direct georeference position model for a push-broom line, and the resampling method.

2.1.1. Coordinate systems
In order to obtain the mathematical relationships between image points and ground reference points, coordinate systems must be established. As shown in Fig. 2, the major coordinate systems are: (1) the sensor coordinate system (S–UVW), with perspective center S, U-axis parallel to the flight trajectory, V-axis tangent to the flight trajectory, and W-axis parallel to the optical axis and pointing upwards, completing a right-handed coordinate system; (2) the ground coordinate system (O–XYZ); (3) the image coordinate system (o–xyf), in which x and y are the image-plane coordinates and f is the focal length of the sensor. The image coordinate system and the sensor coordinate system are parallel.
Fig. 2. The coordinate systems.
2.1.2. Georeferencing for a push-broom line
According to the theory of photography, each line of airborne data has a perspective center and its own exterior orientation elements. At a certain interval, the six exterior parameters for a line are recorded by the POS on board the aircraft. The center position (latitude, longitude and height) is measured by GPS in the ground coordinate system, with WGS84 as the reference ellipsoid. The aircraft orientation values (roll, pitch, and yaw) are supplied by the INS. Suppose that (X_G, Y_G, Z_G) are the coordinates of the perspective center G obtained by GPS, φ is the pitch angle of the aircraft, ω is the roll angle, and κ is the yaw angle measured by the INS. Plane P is the image space and the ground space is O–XYZ. The geometry among the perspective center G, the projective points in image space, and the corresponding points in the ground space for one line is shown in Fig. 3. Let line AB be one push-broom line in the image plane, and C be the image projective center on line AB with coordinates (x_C, y_C, f) in the image coordinate system. Q is the ground point corresponding to C. Assume point D is selected from line AB, with coordinates (x_C, y, f) in the image space, and let P be the ground point corresponding to D. We now deduce the mathematical relations between the projective points in image space and the corresponding points on the ground according to Fig. 3.

Firstly, we get the ground coordinates of the point corresponding to the image projective center on one push-broom line. In Fig. 3, line QR is perpendicular to the Y-axis, and point R is the intersection. The coordinates (X_Q, Y_Q) of point Q can be obtained by

  X_Q = X_G + |QR| = X_G + Z_G·tanω/cosφ    (1)
  Y_Q = Y_G + |OR| = Y_G + Z_G·tanφ    (2)

The next aim is to get the ground coordinates (X_P, Y_P) of P. Line PN is perpendicular to the Y-axis, and point N is the intersection. Line QT is perpendicular to line PN, and point T is the intersection. Let θ be the angle between lines GQ and GP proceeding from the same point G. Assume that the across-track IFOV (Instantaneous Field of View) is q mrad; then

  θ = (y − y_C)·q·(180/(1000π))  (in degrees)    (3)

Let |GQ| = l, |GP| = s and |QP| = d; we have

  X_P = X_Q + d·sinκ    (4)
  Y_P = Y_Q + d·cosκ    (5)

From the above, we obtain

  |GQ| = l = Z_G/(cosφ·cosω)    (6)

According to Fig. 3, there are

  |PN| = l·sinω + d·sinκ    (7)
  |ON| = l·cosω·sinφ + d·cosκ    (8)

In the right triangle GNP, we have

  GN² = GP² − PN²    (9)

and in the right triangle GON,

  GN² = OG² + ON²    (10)

Substituting (7) and (8) into (9) and (10), we obtain the following equation:

  s² − (l·sinω + d·sinκ)² = (l·cosω·cosφ)² + (l·cosω·sinφ + d·cosκ)²    (11)

Simplifying (11), we obtain

  l² + 2ld(sinω·sinκ + cosω·sinφ·cosκ) + d² − s² = 0    (12)

Fig. 3. Geometry among the perspective center G, projective points in image space, and corresponding points in the ground.
In triangle GQP, by the law of cosines we have

  l² + s² − 2ls·cosθ − d² = 0    (13)

Let

  b = sinω·sinκ + cosω·sinφ·cosκ    (14)

Combining (12) and (13), we get

  l² + b·l·d − l·s·cosθ = 0    (15)

From (15), we obtain

  s = (l + b·d)/cosθ    (16)

Substituting (16) into (12) and solving for d yields

  d = l·tanθ/(√(1 − b²) − b·tanθ)
    = (Z_G/(cosφ·cosω))·tanθ/(√(1 − b²) − b·tanθ)    (17)

Then the coordinates (X_P, Y_P) of point P can be obtained by

  X_P = X_G + Z_G·tanω/cosφ + d·sinκ    (18)
  Y_P = Y_G + Z_G·tanφ + d·cosκ    (19)

In the same way, for a point P to the left of Q, the coordinates (X_P, Y_P) of P are

  X_P = X_G + Z_G·tanω/cosφ − d·sinκ    (20)
  Y_P = Y_G + Z_G·tanφ − d·cosκ    (21)

2.1.3. Resampling and interpolation
Once all the pixels (i, j) in the image space are transformed to ground coordinates (X, Y), a new image F is obtained. Obviously the points (X, Y) in F do not yet lie on a regular grid. Resampling is used to bring the transformed image onto a regular grid. Denote the resampled image by F̂. Firstly, it is necessary to determine the size of F̂. Assume that the size of F̂ is M × N, the across-track IFOV is q₁ mrad, and the along-track IFOV is q₂ mrad. Let the actual width of a pixel in ground space be Δx, its length be Δy, and the average height of the aircraft during the flight be H̄; then we have:

  Δx = int(H̄·q₁/100 + 0.5)/10    (22)
  Δy = int(H̄·q₂/100 + 0.5)/10    (23)

Let

  min_X = min({X : (X, Y) ∈ F}),  max_X = max({X : (X, Y) ∈ F}),
  min_Y = min({Y : (X, Y) ∈ F}),  max_Y = max({Y : (X, Y) ∈ F})    (24)

Then we can get

  M = int((max_X − min_X)/Δx) + 1    (25)
  N = int((max_Y − min_Y)/Δy) + 1    (26)
  iX = int((X − min_X)/Δx),  0 ≤ iX < M    (27)
  jY = int((Y − min_Y)/Δy),  0 ≤ jY < N    (28)

where int is the truncation function that returns the largest integer less than or equal to its argument. Assume that the gray values at (i, j), (X, Y) and (iX, jY) are f(i, j), F(X, Y), and F̂(iX, jY) respectively; then

  F(X, Y) = f(i, j)    (29)
  F̂(iX, jY) = F(X, Y)    (30)
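To make the coarse step concrete, the per-pixel georeferencing of Eqs. (17)–(21) and the gridding of Eqs. (24)–(30) can be sketched in Python. The function names, the sign handling for pixels left and right of the line center, and the use of radians instead of the degree form of Eq. (3) are our own choices, not the paper's:

```python
import math

def georef_line(y_pixels, yC, XG, YG, ZG, phi, omega, kappa, q=0.6):
    """Ground coordinates (XP, YP) for each pixel of one push-broom line,
    following Eqs. (3), (6), (14), (17)-(21).
    Angles in radians; q is the across-track IFOV in mrad per pixel."""
    l = ZG / (math.cos(phi) * math.cos(omega))                     # Eq. (6)
    b = (math.sin(omega) * math.sin(kappa)
         + math.cos(omega) * math.sin(phi) * math.cos(kappa))      # Eq. (14)
    pts = []
    for y in y_pixels:
        theta = abs(y - yC) * q / 1000.0                           # Eq. (3), radians
        t = math.tan(theta)
        d = l * t / (math.sqrt(1.0 - b * b) - b * t)               # Eq. (17)
        sign = 1.0 if y >= yC else -1.0                            # (18)-(19) vs (20)-(21)
        XP = XG + ZG * math.tan(omega) / math.cos(phi) + sign * d * math.sin(kappa)
        YP = YG + ZG * math.tan(phi) + sign * d * math.cos(kappa)
        pts.append((XP, YP))
    return pts

def resample(points, values, dx, dy):
    """Map scattered ground points (X, Y) onto a regular M x N grid,
    following Eqs. (24)-(30); cells that receive no point stay 0."""
    min_x = min(p[0] for p in points); max_x = max(p[0] for p in points)
    min_y = min(p[1] for p in points); max_y = max(p[1] for p in points)
    M = int((max_x - min_x) / dx) + 1                              # Eq. (25)
    N = int((max_y - min_y) / dy) + 1                              # Eq. (26)
    grid = [[0.0] * N for _ in range(M)]
    for (X, Y), v in zip(points, values):
        iX = int((X - min_x) / dx)                                 # Eq. (27)
        jY = int((Y - min_y) / dy)                                 # Eq. (28)
        grid[iX][jY] = v                                           # Eqs. (29)-(30)
    return grid
```

Cells left at zero by `resample` are the gaps filled by the interpolation step of Section 2.1.3.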
Using the above method, we can quickly get the transformed image F̂ on a regular grid. On the other hand, some points of the rectified grid are not assigned gray values, so the next step is interpolation. The interpolation method is introduced as follows. For each point (iX, jY), iX = 1, . . ., (M − 2), jY = 1, . . ., (N − 2), select its 3 × 3 neighborhood as the search region Φ = {F̂(iX + t, jY + s) : t = −1, 0, 1; s = −1, 0, 1}, and define

  Num = the number of F̂ ≠ 0, F̂ ∈ Φ    (31)
  Sum = Σ_{s=−1}^{1} Σ_{t=−1}^{1} F̂(iX + t, jY + s)    (32)

If F̂(iX, jY) = 0 and Num ≠ 0, the gray value F̂(iX, jY) is replaced according to

  F̂(iX, jY) = Sum/Num    (33)
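The gap-filling rule of Eqs. (31)–(33) amounts to replacing each empty grid cell by the mean of its non-zero 3 × 3 neighbors. A minimal sketch (the function name is ours), working in place and scanning left to right, top to bottom as the text describes:

```python
def fill_gaps(grid):
    """Fill zero-valued cells from their 3 x 3 neighborhood, Eqs. (31)-(33).
    Works in place, so cells filled earlier in the scan can seed later ones,
    matching the paper's left-to-right, top-to-bottom systematic search."""
    M, N = len(grid), len(grid[0])
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            if grid[i][j] != 0:
                continue
            vals = [grid[i + t][j + s] for t in (-1, 0, 1) for s in (-1, 0, 1)]
            num = sum(1 for v in vals if v != 0)       # Num of Eq. (31)
            if num != 0:
                grid[i][j] = sum(vals) / num           # Sum/Num, Eqs. (32)-(33)
    return grid
```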
Using a systematic search, from left to right and top to bottom, to interpolate all pixels on the rectified grid, we obtain the coarse rectified image.
2.2. Fine rectification using a polynomial model

Because of the non-uniformity of the velocity, the gradient, and the altitude of the airplane, airborne images usually have great distortions. Though the above method can eliminate some of the errors, a precise rectification step is essential for further applications such as fusion and classification. The most widely used method for rectification is the polynomial model. In this step, a reference image is required and some ground control points (GCPs) are selected from both the reference image and the coarse rectified image. Let (X, Y) be a point in the above rectified image F̂, and (e, g) be the corresponding point in the reference image Ĝ. Also let (X̂, Ŷ) be the estimate of (X, Y) given by a polynomial transformation. The general form of the model is expressed as follows:

  X̂ = Σ_{i=0}^{n} Σ_{j=0}^{n−i} a_{i,j}·e^i·g^j    (34)
  Ŷ = Σ_{i=0}^{n} Σ_{j=0}^{n−i} b_{i,j}·e^i·g^j    (35)

where a_{i,j} and b_{i,j} are unknown coefficients, and n is the degree of the model. In most cases, low-order polynomials are used to rectify images. For example, geometric distortions like scale, translation, rotation, and skew effects can be modeled by an affine transformation (n = 1). If the degree of the polynomial model is n, at least M = (n + 1)(n + 2)/2 GCPs are needed to solve (34) and (35). Suppose that {(e_i, g_i) : i = 1, . . ., L} and {(X_i, Y_i) : i = 1, . . ., L} are GCPs selected from Ĝ and F̂ respectively. The least squares method can be used to estimate the coefficients a_{i,j} and b_{i,j}; then the transformation between F̂ and Ĝ is determined. Define the fine rectified image to be R̂. The bilinear interpolation technique is applied to get the gray values of the pixels in R̂.

For any point (e, g), its corresponding point (X, Y) can be obtained according to (34) and (35). Define

  I = int(X),  J = int(Y)
  U = X − I,  V = Y − J    (36)

The gray value R̂(e, g) is obtained as follows:

  R̂(e, g) = F̂(X, Y)
          = (1 − U)(1 − V)·F̂(I, J) + (1 − U)·V·F̂(I, J + 1)
          + U·(1 − V)·F̂(I + 1, J) + U·V·F̂(I + 1, J + 1)    (37)

2.3. Algorithm

In this section, we give the coarse-to-fine rectification algorithm for airborne push-broom hyperspectral images in detail. The procedure involves several steps:
Step 1. Input an airborne push-broom image and select one band, defined as F, with size I × J.
Step 2. Let M be the matrix of referenced POS data with size 6 × J, in which the jth column holds the six exterior orientation parameters of the jth line in F, j = 1, . . ., J.
Step 3. The center of the jth line in the image plane is (I/2, j). Calculate the perspective center coordinates (X_Gj, Y_Gj) in ground space by (1) and (2). Then for each point (i, j), i = 0, . . ., I − 1, compute its corresponding point coordinates (X_Pj, Y_Pj) in ground space according to (18) and (19) or (20) and (21).
Step 4. For each line of image F, repeat Step 3. Then get the coarse rectified image, defined as F̂, by using the resampling and interpolation techniques.
Step 5. Select GCPs {(e_i, g_i) : i = 1, . . ., L} and {(X_i, Y_i) : i = 1, . . ., L} from the reference image and F̂ respectively. Use the polynomial
distortion model and bilinear interpolation to obtain the fine rectified image R̂.

3. Experiments

In our experiments, the reference image is an airborne infrared image acquired by full-frame photography in 2002. It has been well rectified and its geometric resolution is 0.41 m. The images that need to be rectified were obtained by the airborne push-broom hyperspectral imager in 2003, with 652 columns, over 3000 rows and 124 bands. In the process of coarse rectification, the POS data are required. Because the whole images are very long, it is better to divide them into several pieces and then perform coarse rectification on each. Fig. 4(a) is a subimage of size 600 × 652 cut from band 20 of the raw airborne push-broom hyperspectral image. Table 2 lists the POS data of the first line of Fig. 4(a), including the six exterior orientation parameters (three coordinates of the perspective center, and the roll, pitch, and yaw angles). Fig. 4(b) shows the coarse rectified image of Fig. 4(a) and indicates that the coarse rectification is very effective.
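The fine rectification step used in these experiments (Section 2.2) fits the polynomial model of Eqs. (34) and (35) to GCP pairs by least squares and resamples with the bilinear rule of Eqs. (36) and (37). A minimal self-contained sketch, with our own function names and a plain normal-equations solver rather than any particular library routine:

```python
def _solve(A, y):
    """Solve the square system A x = y by Gauss-Jordan elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_poly(gcp_ref, gcp_img, n=2):
    """Least-squares estimate of the coefficients a_ij, b_ij of
    Eqs. (34)-(35): gcp_ref holds (e, g) points from the reference image,
    gcp_img the matching (X, Y) points from the coarse rectified image."""
    terms = [(i, j) for i in range(n + 1) for j in range(n - i + 1)]
    A = [[e ** i * g ** j for (i, j) in terms] for (e, g) in gcp_ref]
    k = len(terms)
    AtA = [[sum(r[i] * r[j] for r in A) for j in range(k)] for i in range(k)]
    def rhs(c):  # A^T times the X (c=0) or Y (c=1) coordinates
        return [sum(r[i] * p[c] for r, p in zip(A, gcp_img)) for i in range(k)]
    return terms, _solve(AtA, rhs(0)), _solve(AtA, rhs(1))

def bilinear(F, X, Y):
    """Bilinear gray value of Eqs. (36)-(37) at the real-valued (X, Y)."""
    I, J = int(X), int(Y)
    U, V = X - I, Y - J
    return ((1 - U) * (1 - V) * F[I][J] + (1 - U) * V * F[I][J + 1]
            + U * (1 - V) * F[I + 1][J] + U * V * F[I + 1][J + 1])
```

With n = 2 this needs at least M = (2 + 1)(2 + 2)/2 = 6 GCP pairs, consistent with the 12 pairs selected in the experiments.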
Table 2
The POS data of the first line of Fig. 4(a)

  Latitude (deg)      3.1204589182E+001
  Longitude (deg)     1.2150235801E+002
  Altitude (m)        2.1747928669E+003
  Roll angle (deg)    1.5322707188E+000
  Pitch angle (deg)   3.8967691065E−001
  Yaw angle (deg)     1.4760683542E+001
In the fine rectification process, a reference image is needed. Fig. 5 is the reference image, and Fig. 6 is the same as Fig. 4(b). We select 12 pairs of GCPs from Figs. 5 and 6 and label them in the two figures. Using a two-degree polynomial model, the result of the fine rectification of Fig. 6 is shown in Fig. 7. Many tests have been done with other subimages cut from the raw data. From the experiments, we can see that the final rectified images are satisfactory.

4. Check for the accuracy of rectification

We can check the accuracy of rectification from two aspects: the fusion effect and the residuals of the GCPs' geography coordinates. Since the geometric distortions existing in the 124 bands are negligible,
Fig. 4. (a) A 600 × 652 subimage cut from band 20 of the raw airborne push-broom hyperspectral image. (b) The coarse rectified image of (a).
Fig. 7. The fine rectified image of Fig. 4(a) obtained by the two-degree polynomial model.
Fig. 5. The reference image with the selected GCPs labeled.
Fig. 6. The coarse rectified image of Fig. 4(a) with the selected GCPs labeled.
we can apply the same coarse-to-fine parameters to rectify the other bands. In our experiments, three fine rectified images of the same view are extracted from bands 20, 80 and 100, and a false
color image can be composed from them, as shown in Fig. 8(a). Fig. 8(b) is the corresponding part cut from the reference image. Fig. 8(c) shows the fusion of Fig. 8(a) and (b). Through visual checking, it can be seen that the coarse-to-fine rectification result is very good.

The reference image is provided with local-plane geography coordinates (for a city, there is only one local-plane geography coordinate system). So the final rectified images also hold the local-plane geography coordinates, which makes a quantitative check of the rectification accuracy possible. Suppose that {(e_i, g_i) : i = 1, . . ., L} and {(X_i, Y_i) : i = 1, . . ., L} are GCPs selected from the reference image and the rectified image respectively. Assume that (Ge_i, Gg_i) are the geography coordinates of point (e_i, g_i) and (Gx_i, Gy_i) are the geography coordinates of (X_i, Y_i). Define the residuals as follows:

  X_resd(X_i) = |Gx_i − Ge_i|,  i = 1, . . ., L    (38)
  Y_resd(Y_i) = |Gy_i − Gg_i|,  i = 1, . . ., L    (39)

  mean_residual = (1/L)·Σ_{i=1}^{L} √(X_resd(X_i)² + Y_resd(Y_i)²)    (40)
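The accuracy measure of Eqs. (38)–(40) is straightforward to compute; a small sketch (the function name is ours):

```python
import math

def mean_residual(ref_geo, rect_geo):
    """Mean residual of Eq. (40) over GCP pairs given in geography
    coordinates: ref_geo holds (Ge, Gg) points from the reference image,
    rect_geo the matching (Gx, Gy) points from the rectified image."""
    total = 0.0
    for (Ge, Gg), (Gx, Gy) in zip(ref_geo, rect_geo):
        x_resd = abs(Gx - Ge)                  # Eq. (38)
        y_resd = abs(Gy - Gg)                  # Eq. (39)
        total += math.hypot(x_resd, y_resd)    # per-point Euclidean residual
    return total / len(ref_geo)                # Eq. (40)
```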
Fig. 8. (a) The false color image composed of the fine-rectified images of bands 20, 80 and 100. (b) The corresponding part of (a) cut from the reference image. (c) The fusion image of (a) and (b).
Table 3
The geography coordinates and the residuals of the selected GCPs

  GCPs   Fig. 8(b)             Fig. 8(a)             X_resd   Y_resd
         Ge         Gg         Gx         Gy
  1      4100.61E   3232.78S   4101.08E   3232.53S   0.47     0.25
  2      4115.91E   3313.21S   4116.08E   3312.84S   0.17     0.37
  3      3959.01E   3278.56S   3958.89E   3278.78S   0.45     0.22
  4      3942.03E   3470.13S   3942.64E   3470.34S   0.61     0.21
  5      3940.17E   3362.92S   3939.83E   3362.53S   0.34     0.39
  6      4131.76E   3448.21S   4131.39E   3448.78S   0.37     0.57

  The mean_residual: 0.6222
The mean_residual can be used to judge the accuracy of rectification. We select and label six point pairs from Fig. 8(a) and (b); their geography coordinates and residuals are listed in Table 3. Given that the geometric resolution of the reference image is 0.41 m and the evaluated mean_residual is less than 0.82 m, the accuracy of rectification is within 2 pixels.
5. Conclusion

A coarse-to-fine rectification algorithm for airborne push-broom hyperspectral images has been proposed in this paper. The coarse rectification method can quickly adjust the displacement of each line of an image based on the POS data. By selecting a few ground control points, a polynomial model can then be applied to realize the fine rectification. The experimental images are extracted from several bands of the 124-band raw images, and the results show the effectiveness of our rectification algorithm. The accuracy of rectification is judged from two aspects: the fusion of the reference image and the false color image indicates high accuracy through visual checking, and the residuals of the GCPs' geography coordinates give a quantitative analysis showing that the accuracy of rectification is within 2 pixels.
Acknowledgement This research is supported by Shanghai Science and Technology Development Funds of China (No. 02DZ15001).
References

Chen, T., 2001. High precision georeference for airborne Three-Line Scanner (TLS) imagery. In: Proc. 3rd Internat. Image Sensing Seminar on New Development in Digital Photogrammetry, Gifu, Japan, pp. 71–82.
Cramer, M., Stallmann, D., Haala, N., 2000. Direct georeferencing using GPS/INS exterior orientations for photogrammetric applications. Int. Arch. Photogramm. Remote Sensing 33 (Part B3), 198–205.
Daniela, P., 2002. General model for airborne and spaceborne linear array sensors. Int. Arch. Photogramm. Remote Sensing 34 (Part B1), 177–182.
Ebner, H., Kornus, W., Ohlhof, T., 1992. A simulation study on point determination for the MOMS-02/D2 space project using an extended functional model. Int. Arch. Photogramm. Remote Sensing 29 (Part B4), 458–464.
Gruen, A., Zhang, L., 2002. Sensor modeling for aerial mobile mapping with Three-Line-Scanner (TLS) imagery. Int. Arch. Photogramm. Remote Sensing 34 (Part 2), 139–146.
Haala, N., Stallmann, D., Cramer, M., 1998. Calibration of directly measured position and attitude by aerotriangulation of three-line airborne imagery. ISPRS, Ohio, pp. 28–30.
Hinsken, L., Miller, S., Tempelmann, U., Uebbing, R., Walker, S., 2002. Triangulation of LH Systems' ADS40 imagery using ORIMA GPS/IMU. Int. Arch. Photogramm. Remote Sensing 34 (Part 3A), 156–162.
Kratky, V., 1989. Rigorous photogrammetric processing of SPOT images at CCM Canada. ISPRS J. Photogramm. Remote Sensing 44, 53–71.
Lee, C., Theiss, H.J., Bethel, J.S., Mikhail, E.M., 2000. Rigorous mathematical modeling of airborne pushbroom imaging systems. Photogramm. Eng. Remote Sensing 66 (4), 385–392.
Mostafa, M.M.R., Schwarz, K., 2000. A multi-sensor system for airborne image capture and georeferencing. Photogramm. Eng. Remote Sensing 66 (12), 1417–1423.
Skaloud, J., 1999. Optimizing georeferencing of airborne survey systems by INS/DGPS. Ph.D. Thesis, UCGE Report 20216, University of Calgary, Alberta, Canada.
Skaloud, J., Schwarz, K.P., 1998. Accurate orientation for airborne mapping systems. Int. Arch. Photogramm. Remote Sensing 32 (Part 2), 283–290.