Correction of ZY-3 image distortion caused by satellite jitter via virtual steady reimaging using attitude data


ISPRS Journal of Photogrammetry and Remote Sensing 119 (2016) 108–123. http://dx.doi.org/10.1016/j.isprsjprs.2016.05.012


Mi Wang, Ying Zhu *, Shuying Jin, Jun Pan, Quansheng Zhu

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China; Collaborative Innovation Center of Geospatial Technology, Wuhan 430079, China

Article history: Received 20 March 2016; Received in revised form 27 May 2016; Accepted 27 May 2016

Keywords: Distortion correction; Satellite jitter; Virtual steady reimaging; Rigorous imaging model; RFM; ZY-3

Abstract

ZiYuan-3 (ZY-3), the first Chinese civilian stereo mapping satellite, suffers from 0.67 Hz satellite jitter that deteriorates its geometric performance in mapping, resource monitoring and other applications. This paper proposes a distortion correction method based on virtual steady reimaging (VSRI) using attitude data to eliminate the negative influence of satellite jitter during satellite data preprocessing. VSRI lets a linear array pushbroom camera rescan the ground with a uniform integration time and a smooth attitude. In this method, a VSRI model is proposed, and the geometric relationship between the original and corrected images is determined in terms of geolocation consistency based on a rigorous geometric model; the corrected image is then obtained by resampling from the original one. Three areas of ZY-3 three-line images suffering from satellite jitter were used to validate the accuracy and efficiency of the proposed method. First, different attitude interpolation methods were compared; the Lagrange polynomial model and the cubic piecewise polynomial model were found to have higher interpolation accuracy for the original imagery. Then, the replacement accuracy of the rational function model (RFM) for ZY-3 was analyzed under 0.67 Hz satellite jitter. The results indicate that attitude oscillation reduces the fitting precision of the RFM with respect to the rigorous imaging model. Finally, the relative orientation accuracy of the three-line images and the geo-positioning accuracy with ground control points (GCPs) before and after distortion correction were compared. The results show that the distortion caused by satellite jitter is corrected efficiently, and the accuracy of the three experimental datasets is improved in both image space and ground space.

© 2016 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

* Corresponding author. Tel.: +86 27 68779586; fax: +86 27 68778969. E-mail addresses: [email protected] (M. Wang), [email protected] (Y. Zhu), [email protected] (S. Jin), [email protected] (J. Pan), [email protected] (Q. Zhu).

1. Introduction

Satellite jitter, which is a microvibration of a satellite platform, is commonly induced by dynamic structures on board, changes in the body temperature, the operation of orbital control, etc. (Amberg et al., 2013; Iwasaki, 2011; Johnston and Thornton, 2012; Tong et al., 2014; Zhang and Xu, 2012). It is difficult to suppress jitter by updating hardware due to the complexity of the jitter sources. To some degree, satellite jitter is unavoidable, and the frequencies and amplitudes of jitter differ between satellites (Toyoshima and Araki, 2001). The linear array pushbroom camera is the main type of optical imaging payload on board high-resolution optical satellites

(HROSs). Each imaging line constructs a linear central projection independently, and an area scene is generated by continuous pushbroom imaging. Unstable satellite motion or microvibration causes different deviations between adjacent imaging lines. Therefore, satellite jitter is an increasingly important factor affecting the geometric performance of HROSs as resolution improves (Poli and Toutin, 2012; Schneider et al., 2008; Schwind et al., 2009; Takaku, 2011; Teshima and Iwasaki, 2008; Tong et al., 2015a,b, 2014; Toutin, 2004). Many studies show that satellite jitter can degrade the geometric quality of high-resolution satellite imagery. Remote sensing satellites such as Terra, ALOS, MOMS-2P and QuickBird all suffer from satellite jitter (Ayoub et al., 2008; Lehner and Müller, 2003; Schneider et al., 2008; Teshima and Iwasaki, 2008). The frequency and amplitude of the jitter of Terra are approximately 1.5 Hz and 0.2–0.3 pixels, the latter corresponding to 6–9 m on the ground (Teshima and Iwasaki, 2008). An oscillation with an amplitude of approximately one pixel was found in the imagery of ALOS,


which will adversely affect DEM generation from PRISM triplets (Schwind et al., 2009). The unmodeled jitter of QuickBird produces a distortion of approximately 5 pixels (2.5 m) at a frequency of 1 Hz and 0.2 pixels (0.1 m) at a frequency of 4.3 Hz (Ayoub et al., 2008). The rational function model (RFM), the alternative sensor model for space image exploitation, has been widely used due to its independence from sensors and platforms and its highly efficient computation of intersections and resections (Dial et al., 2003; Fraser et al., 2006; Fraser and Hanley, 2005; Grodecki and Dial, 2003; Tao and Hu, 2001). Images with the RFM have become a standard product provided by satellite data vendors. It is reported that satellite jitter also has an impact on the replacement accuracy of the RFM for the rigorous geometric model (Schneider et al., 2008; Schwind et al., 2009; Takaku and Tadono, 2010), which means that the RFM cannot accurately replace the rigorous geometric model when satellite jitter exists. ZiYuan-3 (ZY-3), launched on January 9, 2012, is the first civilian stereo mapping satellite of China. The main imaging payload of ZY-3 for stereo mapping is a three-line camera consisting of a nadir camera, a forward camera and a backward camera. The resolution of the nadir camera is 2.1 m; the forward and backward cameras have the same resolution, 3.8 m, and are arranged at inclinations of ±22° from the nadir camera to realize a base-to-height ratio (B/H) of 0.87. To cover the 51 km swath width, the nadir camera has three time delay integration (TDI) charge-coupled device (CCD) arrays, each containing 8192 pixels; the forward and backward cameras each have 4 × 4096 pixels (Fang et al., 2014; Tong et al., 2015a).
The key processing algorithms for ZY-3, including inflight calibration (Wang et al., 2014a; Zhang et al., 2014), sensor correction (Fang et al., 2014, 2015; Tang et al., 2014) and accuracy evaluation (Pan et al., 2013; Wang et al., 2014b), have been extensively investigated and widely reported since its launch. The root mean squared error (RMSE) of basic products reached 2.6 m in plane and 1.6 m in elevation when ground control points (GCPs) were deployed around the corners (Pan et al., 2013). ZY-3 is very well suited for three-dimensional mapping of large areas (d'Angelo, 2013). However, satellite jitter was also detected on ZY-3 using both multispectral images and attitude data: ZY-3 exhibited a jitter with a frequency of approximately 0.67 Hz in its early inflight period (Tong et al., 2015a, 2014; Wang et al., 2016; Zhu et al., 2014). Although this is a low-frequency jitter, its influence on image products cannot be ignored. An amplitude of approximately 2.63 pixels was detected in three-line array images acquired in the early period after launch (Tong et al., 2015a), which further degrades the quality of the orthophoto and the digital surface model (DSM). If the distortion caused by satellite jitter is not corrected in preprocessing, the accuracy of image products can hardly meet the requirements of high-accuracy applications. The attitude data of ZY-3 are obtained by a combination of star sensors and gyros at 4 Hz, which can describe the jitter well (Tong et al., 2015b). Although satellite jitter can be measured with highly sensitive attitude instruments, the fitting accuracy of the RFM remains a problem; this is why periodic distortion remains after sensor correction. To solve the image distortion and RFM replacement problems caused by satellite jitter, this paper proposes a virtual steady reimaging (VSRI) method that reconstructs an accurate rigorous imaging model in the satellite data preprocessing system.
The distortion caused by satellite jitter can be efficiently corrected, and high-accuracy rational polynomial coefficients (RPCs) can be generated after VSRI. The validation, including oscillating attitude interpolation, RPC fitting and accuracy evaluation of the relative orientation and the absolute orientation with ground control points (GCPs), was accomplished using three different sets of ZY-3 three-line array images and the corresponding auxiliary data,

including imaging time data, orbit data and attitude data. The results indicate that this method can achieve high accuracy and reliability. The proposed method has been applied to the ground data processing system of ZY-3 at the China Centre for Resources Satellite Data and Application (CRESDA).

2. Rigorous imaging model of ZY-3

The rigorous imaging model, which describes the physical properties of image acquisition through the relationship between the image and the ground coordinate system, is determined for one image point and its corresponding object point through an exterior orientation model and an interior orientation model based on the collinearity condition (Poli and Toutin, 2012; Toutin, 2004). The interior orientation model converts image coordinates to camera coordinates, determining the accurate line of sight (LOS) of each CCD detector in the camera coordinate system. The interior orientation elements of ZY-3 are described by the look angles $(\psi_x, \psi_y)$ of the LOS $V_{image}$ in camera coordinates. Fig. 1 provides a sketch of the interior orientation model of the nadir camera. The interior orientation elements can be determined by laboratory calibration before launch and updated by inflight calibration during a certain period after launch (Wang et al., 2014a; Xu et al., 2014). The LOS $V_{image}$ of any pixel (s, l) in the camera coordinate system can be calculated from the look angles. The exterior orientation model converts satellite body coordinates to object coordinates, determining the ray of light from the projection center to the ground point in the satellite body coordinate system. For ZY-3, the exterior orientation elements, including ephemeris and attitude, are obtained from the onboard GPS receiver and the attitude determination system (ADS), respectively. The ephemeris, including the satellite position and velocity, is measured in the Earth Centered Fixed (ECF) coordinate system, and the attitude is measured with respect to the Earth Centered Inertial (ECI) coordinate system. The ADS of ZY-3 combines star trackers and a gyro with a sampling rate of 4 Hz, which can capture attitude fluctuations below 2 Hz according to the Nyquist–Shannon theorem (Marks, 2012).
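As a minimal sketch of the interior orientation step described above (the function name is illustrative, not the paper's code), the calibrated look angles define a unit LOS vector in camera coordinates:

```python
import numpy as np

def los_from_look_angles(psi_x, psi_y):
    """Unit line-of-sight vector in camera coordinates from look angles (radians).

    Follows the convention used in Eq. (1): the LOS direction is
    (tan psi_x, tan psi_y, 1), here normalized to unit length.
    """
    v = np.array([np.tan(psi_x), np.tan(psi_y), 1.0])
    return v / np.linalg.norm(v)
```

For the boresight detector (both look angles zero) this returns the camera z-axis, as expected.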
Detectors in the same scan line have the same exterior orientation elements, which can be interpolated from the original ephemeris and attitude data by time. For the original sub-CCD image, the rigorous imaging model of each line can be established as follows:

$$\begin{bmatrix} \tan\psi_x \\ \tan\psi_y \\ 1 \end{bmatrix} = k \cdot R^{\mathrm{sensor}}_{\mathrm{body}} \cdot R^{\mathrm{body}}_{\mathrm{ECI}}\big(\varphi(t), \omega(t), \kappa(t)\big) \cdot R^{\mathrm{ECI}}_{\mathrm{ECR}}(t) \begin{bmatrix} X - X_S(t) \\ Y - Y_S(t) \\ Z - Z_S(t) \end{bmatrix} \qquad (1)$$

where t is the imaging time of the scan line recorded by the camera; k is a scale factor; $\psi_x, \psi_y$ are the look angles; $R^{\mathrm{sensor}}_{\mathrm{body}}$ is the installation matrix determined by inflight calibration, which can be treated as fixed over a long period; $R^{\mathrm{body}}_{\mathrm{ECI}}(\varphi(t), \omega(t), \kappa(t))$ is the rotation matrix from the ECI coordinate system to the satellite body coordinate system, composed from the rotation angles pitch $\varphi(t)$, roll $\omega(t)$ and yaw $\kappa(t)$, which are interpolated by time from the attitude observations in the ECI coordinate system; $R^{\mathrm{ECI}}_{\mathrm{ECR}}(t)$ is the transformation matrix from the ECF coordinate system to the ECI coordinate system, based on the IAU 2000 precession-nutation model of the International Earth Rotation and Reference Systems Service (IERS) conventions; and $[X_S(t)\ Y_S(t)\ Z_S(t)]^T$ is the position vector of the projection center in the ECF coordinate system, interpolated by time from the ephemeris observations.
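A hedged sketch of how Eq. (1) is used in practice: inverting the rotation chain (rotation inverses are transposes) maps the camera-frame LOS into the ECF frame, giving the imaging-ray direction for a pixel. Function and parameter names are illustrative:

```python
import numpy as np

def pixel_ray_ecf(psi_x, psi_y, R_sensor_body, R_body_eci, R_eci_ecf):
    """Imaging-ray direction in ECF for a detector with look angles (radians).

    R_sensor_body: installation matrix (body -> sensor), Eq. (1)'s R^sensor_body
    R_body_eci:    attitude rotation (ECI -> body) at the line's imaging time
    R_eci_ecf:     Earth rotation (ECF -> ECI) at the line's imaging time
    """
    # LOS in camera coordinates per Eq. (1): (tan psi_x, tan psi_y, 1)
    v_cam = np.array([np.tan(psi_x), np.tan(psi_y), 1.0])
    # Invert the chain camera -> body -> ECI -> ECF via transposes
    v_ecf = R_eci_ecf.T @ R_body_eci.T @ R_sensor_body.T @ v_cam
    return v_ecf / np.linalg.norm(v_ecf)
```

Intersecting this ray with a reference surface from the projection center $[X_S, Y_S, Z_S]^T$ yields the ground point; that step is omitted here.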


Fig. 1. Sketch of the interior orientation model of the nadir camera.

3. Methodology

3.1. Virtual steady reimaging (VSRI) model

If satellite jitter exists, the image is distorted, as shown in Fig. 2(a), and the attitude also oscillates. Suppose that the attitude sensor can observe the attitude oscillation, as shown in Fig. 3. The rigorous imaging model can be established exactly if the attitude of each imaging line can be interpolated exactly from the oscillating attitude data. To correct the distortion caused by satellite jitter, a VSRI model is proposed; Fig. 2(b) gives an illustration of VSRI. Steady reimaging implies a smooth attitude, a smooth orbit and an invariable integration time, so that the ideal image is obtained by VSRI. Considering that orbit control is always much more stable than attitude control over a short time, orbit oscillation is not discussed here. The virtual steady reimaging model can be established based on the original rigorous imaging model. Because integration-time jumps of the TDI CCD affect the replacement accuracy of the RFM, the integration time of each reimaging line should be identical so that every line has the same ground sampling distance (GSD). The new integration time is the average of the old integration times, and the imaging time of each reimaging line can then be determined by accumulating the new integration time. The exterior orientation elements can therefore be interpolated from the observed data by time. The orbit of the virtual steady reimaging mode is the same as that of the original image, interpolated by the polynomial model. The attitude angles (pitch, roll and yaw) of the virtual steady reimaging should be smooth, which means that the oscillating component of the observed attitude must be filtered out, as shown in Fig. 3. Another consideration is that the CCD/lens distortion will still exist after VSRI if the original interior orientation elements are used.
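The uniform reimaging timeline described above can be sketched as follows (a minimal illustration, not the operational code; the function name is an assumption):

```python
import numpy as np

def steady_line_times(line_times):
    """Reimaging times with a uniform integration time.

    line_times: recorded imaging time of each original scan line.
    The new integration time is the average of the old line intervals,
    and reimaging times are obtained by accumulating that interval.
    """
    line_times = np.asarray(line_times, dtype=float)
    dt = np.mean(np.diff(line_times))          # average of old integration times
    return line_times[0] + dt * np.arange(len(line_times))
```

For example, lines recorded at 0 s, 1 s and 3 s (an integration-time jump) are rescanned at 0 s, 1.5 s and 3 s with a constant 1.5 s interval.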
There are three and four sub-CCD arrays on the focal plane of the nadir and forward/backward camera, respectively. The misalignment between adjacent sub-CCD arrays might introduce a mismatching problem between sub-CCD images. Therefore, a virtual CCD, which is a unique CCD linear array covering the whole swath without any distortion or deformation, is used to replace

the sub-CCDs (Jacobsen, 2006). Fig. 1 gives a sketch of the virtual CCD. The virtual CCD shares the same detector size, focal length and principal point as the real sub-CCDs. The number of virtual CCD detectors equals the total number of sub-CCD detectors minus the overlapping parts. To avoid mismatching caused by height error, the virtual CCD should be arranged as closely as possible to the real CCDs to minimize the view-angle difference between original imaging and steady reimaging; the virtual CCD is therefore placed in the middle of the sub-CCDs in the across-track direction. The look angle of each virtual CCD detector can be calculated as follows:

$$\tilde{\psi}_y = \arctan\!\left(\frac{y}{f}\right), \qquad \tilde{\psi}_x = \arctan\!\left(\frac{x}{f}\right) \qquad (2)$$

where f is the focal length; (x, y) are the coordinates of a virtual CCD detector in the focal plane; x is the distance between the virtual CCD and the principal point; and $y = p(i - 0.5n)$, where p is the pixel size of the CCD detector, i is the index of the CCD detector, and n is the total number of virtual CCD detectors. The VSRI model is established as follows:

$$\begin{bmatrix} \tan\tilde{\psi}_x \\ \tan\tilde{\psi}_y \\ 1 \end{bmatrix} = k \cdot R^{\mathrm{sensor}}_{\mathrm{body}} \cdot R^{\mathrm{body}}_{\mathrm{ECI}}\big(\tilde{\varphi}(t), \tilde{\omega}(t), \tilde{\kappa}(t)\big) \cdot R^{\mathrm{ECI}}_{\mathrm{ECR}}(t) \begin{bmatrix} X - X_S(t) \\ Y - Y_S(t) \\ Z - Z_S(t) \end{bmatrix} \qquad (3)$$

where $\tilde{\psi}_x, \tilde{\psi}_y$ are the look angles of the virtual CCD, and $R^{\mathrm{body}}_{\mathrm{ECI}}(\tilde{\varphi}(t), \tilde{\omega}(t), \tilde{\kappa}(t))$ is the rotation matrix from the ECI coordinate system to the satellite body coordinate system, composed from the rotation angles $\tilde{\varphi}(t), \tilde{\omega}(t), \tilde{\kappa}(t)$, which are interpolated by time t from the filtered attitude observations in the ECI coordinate system. k, $R^{\mathrm{sensor}}_{\mathrm{body}}$, $R^{\mathrm{ECI}}_{\mathrm{ECR}}(t)$ and $[X_S(t)\ Y_S(t)\ Z_S(t)]^T$ have the same meaning as in the rigorous imaging model for the original image shown in Eq. (1). Note that the VSRI model differs from the rigorous imaging model of the original image in two respects: the attitude is smooth, without oscillation, and the look angles are determined by the virtual CCD. Under the VSRI model, the corrected image remains a central-projection image.
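Eq. (2) can be sketched directly in code (illustrative names; the pixel size and focal length used in the test are hypothetical values, not ZY-3 calibration data):

```python
import numpy as np

def virtual_look_angles(i, n, p, f, x_offset=0.0):
    """Look angles of virtual CCD detector i (of n total) per Eq. (2).

    p: pixel size in the focal plane (same units as f)
    f: focal length
    x_offset: distance between the virtual CCD line and the principal point
    """
    y = p * (i - 0.5 * n)          # across-track focal-plane coordinate
    psi_y = np.arctan(y / f)
    psi_x = np.arctan(x_offset / f)
    return psi_x, psi_y
```

The central detector (i = n/2) has a zero across-track look angle, consistent with the virtual CCD being centered on the principal point.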


Fig. 2. Illustration of VSRI (a) original imaging with satellite jitter and (b) virtual steady reimaging without satellite jitter.

Fig. 3. Conceptual illustration of oscillating attitude and steady attitude from observed attitude.
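The separation sketched in Fig. 3 can be illustrated with a simple low-pass filter; this moving-average stand-in is an assumption for illustration (the paper obtains the smooth attitude by polynomial fitting, Section 3.2.1):

```python
import numpy as np

def smooth_attitude(angles, window=9):
    """Split observed attitude samples into a smooth part and an
    oscillating (jitter) part via a moving-average low-pass filter.

    angles: 1-D array of observed attitude angles; window: odd filter length.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(np.asarray(angles, dtype=float), pad, mode="edge")
    smooth = np.convolve(padded, kernel, mode="valid")
    return smooth, angles - smooth
```

A jitter-free (constant) attitude passes through unchanged, with a zero oscillating component.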

3.2. Workflow of distortion correction via VSRI

The workflow of distortion correction based on VSRI has three main steps, as shown in Fig. 4. First, the rigorous imaging model for the original image is reconstructed based on the collinearity condition equation. The LOS $V_{image}$ is determined by in-flight geometric calibration (Wang et al., 2014a). The projection center of each imaging line is interpolated from GPS data by a cubic polynomial model, and the attitude is interpolated from the original attitude data. Then, the VSRI model for the corrected image is established. The LOS $\tilde{V}_{image}$ is determined by the virtual CCD. The projection center of each reimaging line is also interpolated from GPS data by a cubic polynomial model, while the attitude is interpolated from the filtered attitude data. Finally, the image coordinate mapping between the original image and the corrected image is determined by the two imaging models, and the corrected image is generated by image resampling. Three key steps in this workflow, namely attitude interpolation for the original and corrected images, RFM replacement and image coordinate mapping, are explained in the following.

3.2.1. Attitude interpolation

Considering attitude oscillation, high-accuracy attitude interpolation is the key to establishing an accurate rigorous imaging model for the original images. The attitude interpolation model for the original image must fit the observed attitude well so that the rigorous imaging model can describe the actual imaging status accurately. The attitude for VSRI, in contrast, must be smooth, without oscillation. That is the largest difference between the rigorous imaging model of the original image and the VSRI model for the corrected image. The polynomial model, the piecewise polynomial model and the Lagrange polynomial model are commonly used attitude interpolation models for pushbroom satellite sensors (Poli and Toutin, 2012; Toutin, 2004). The polynomial model is one of the most widely used attitude interpolation methods and can filter observation noise through the least squares method (LSM). Because the attitude is unstable when jitter exists, the polynomial model treats the oscillation of the attitude as observation noise, which decreases the accuracy of the rigorous imaging model for the original image. Thus, the polynomial model, which filters out the oscillating component of the attitude, is unsuitable for rigorous imaging modeling of the original image but suitable for VSRI modeling. The piecewise polynomial model and the Lagrange polynomial model, which stay closer to the observations than a single polynomial model, are recommended for attitude interpolation when the attitude fluctuates.

3.2.2. RFM replacement

The RFM is the most widely used replacement model for the rigorous imaging model in spaceborne image applications. The terrain-independent method (Fraser and Hanley, 2005) is adopted here for RFM replacement: a large number of evenly distributed virtual control points are calculated through the rigorous model, and the RPCs are solved by least squares. To speed up the coordinate calculations, the RPCs of the corrected image are computed to replace its rigorous imaging model. To guarantee the precision of the coordinate mapping, the RPCs of the original images are not computed, because satellite jitter degrades their fitting accuracy.

3.2.3. Coordinates mapping

To avoid divergence in the iteration of resection based on the rigorous imaging model, forward projection based on the rigorous model and backward projection based on the RFM are used. Fig. 5 gives the flowchart of coordinate mapping between the original image and the corrected image. The specific steps are as follows:

(1) First, for the image point (s, l) on the original image, the detector number s determines the interior orientation elements, and the imaging line l determines the imaging time, so that the exterior orientation elements can be interpolated by time from the attitude and orbit observations. The corresponding object coordinates (B, L, H) of image point (s, l) are then calculated by forward projection onto the elevation surface H using the rigorous imaging model. H is taken from a reference elevation surface (such as the SRTM DEM) or the average height.
(2) The image point (s′, l′) on the virtual image is calculated by backward projection from the object coordinates (B, L, H) using the RFM in steady mode.
(3) Thus, point (s′, l′) on the virtual image and point (s, l) on the original image correspond to the same object point. The digital number of (s′, l′) on the virtual image is determined from (s, l) by gray resampling.
(4) The above steps are repeated until the mapping of every pixel is completed.

Fig. 4. Workflow of virtual steady reimaging-based distortion correction.

Fig. 5. Flowchart of coordinate mapping between original image and corrected image.

3.3. Error analysis

Fig. 6 provides the geometric relationship between the real CCD imaging ray and the virtual CCD reimaging ray, which intersect at point P. A height error will directly cause a mismatch between the real CCD image and the virtual CCD image because they have different view angles, induced by different interior orientation elements and exterior orientation angle elements. The displacement on the ground caused by a height error is expressed as follows:

Fig. 6. Influence of elevation error with different view angles.

$$\Delta R = \Delta H \cdot (\tan\alpha - \tan\beta) \qquad (4)$$

where $\Delta R$ is the displacement on the ground, $\Delta H$ is the height error, $\alpha$ is the view angle of the real CCD, and $\beta$ is the view angle of the virtual CCD. The displacement is thus determined by both the height error and the difference between the view angles of the real CCD and the virtual CCD. This view-angle difference comes from two aspects: the LOS in the camera coordinate system and the oscillating component of the attitude. Therefore, the virtual CCD should be as close as possible to the real CCD to minimize the LOS difference between them. For ZY-3, the along-track offset between the virtual CCD and the real CCD is less than 0.5 pixel, such that even a height error of 1000 km would introduce a mismatch of only 1 pixel (Wang et al., 2014a; Zhang et al., 2014). Thus, the mismatch caused by the interior CCD offset between the virtual CCD and the real CCD is negligible.
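Eq. (4) translates directly into a one-line check (a sketch with an illustrative name; angles in degrees for convenience):

```python
import numpy as np

def ground_displacement(dh, alpha_deg, beta_deg):
    """Eq. (4): ground displacement caused by height error dh for a
    real-CCD view angle alpha and a virtual-CCD view angle beta."""
    a, b = np.radians(alpha_deg), np.radians(beta_deg)
    return dh * (np.tan(a) - np.tan(b))
```

When the two view angles coincide, the displacement vanishes regardless of the height error, which is why minimizing the view-angle difference is the design goal.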

Fig. 7 displays the quantitative relation between the amplitude of attitude oscillation and the height error when the mismatching error is 1 pixel: if the amplitude of attitude oscillation is 4 arc-sec, a 103 km height error would introduce a mismatch of 1 pixel. Utilizing the average height meets the accuracy requirement when the attitude fluctuation is small. A global DEM such as SRTM (Van Zyl, 2001) should be used when the attitude fluctuation is large.

Fig. 7. Quantity relation between the amplitude of attitude oscillation and height error (103.13 km at 4 arc-sec; 13.75 km at 30 arc-sec).

4. Experiments and discussion

The experiments have three purposes: (1) to compare attitude interpolation algorithms to determine the best interpolation method for the rigorous imaging models of the original image and the corrected image, (2) to compare the RFM fitting precision for the rigorous imaging models of the original image and the corrected image, and (3) to evaluate and compare the relative orientation accuracy and the geolocation accuracy with GCPs of the three-line images before and after distortion correction.

4.1. Study area and data source

Three sets of three-line images were used in the experiments. Table 1 gives the main information about the images. The three study areas, Dengfeng, Luoyang and Wuan, are located in Henan province, China. Wuan is a mountainous area, with a height difference of 1237 m and an average height of 727 m. Scenes A and C, covering Dengfeng and Wuan, were captured in the same orbit on February 3, 2012. Scene B, covering Luoyang, was obtained on January 24, 2012, in the first inflight month of ZY-3. The duration of image acquisition is approximately 7 s for one nadir image scene and 8 s for one forward/backward image scene. The attitude angles during the acquisition of the three image scenes, including pitch, roll and yaw, are illustrated in Fig. 8. The pitch and roll oscillate with time at almost the same frequency, approximately 0.67 Hz. The amplitude of the roll is larger than that of the pitch, which means that the distortion caused by attitude oscillation along the linear array (across track) is larger than that along the flight direction (along track). Moreover, the amplitudes of both the roll and the pitch differ between areas. For the Dengfeng dataset, the amplitude of the roll angle, the largest among the three datasets, is almost 2.5 arc-sec, whereas the amplitude of the pitch angle is approximately 1 arc-sec. For the Luoyang dataset, the amplitude of the roll angle is approximately 1.7 arc-sec and the amplitude of the pitch angle is 1 arc-sec. For the Wuan dataset, the amplitude of the roll angle is approximately 2 arc-sec and the amplitude of the pitch angle is approximately 0.5 arc-sec, the smallest among the three datasets. The orbit height of the ZY-3 satellite is approximately 503 km. Therefore, an attitude error of 1 arc-sec causes a geolocation error of 2.44 m for nadir imaging and 2.82 m for forward and backward imaging at 22 degrees, which equals 1.16 pixels in a nadir image and 0.8 pixels in the forward/backward images according to photogrammetric theory. In this way, the maximum geolocation error caused by attitude error can be estimated. The maximum geolocation error on the nadir image is approximately 2.9 pixels, 1.97 pixels and 2.32 pixels across the track and 1.16 pixels, 1.16 pixels and 0.58 pixels along the track for the Dengfeng, Luoyang and Wuan datasets, respectively. For the forward and backward images, the maximum geolocation error is approximately 2.01 pixels, 1.36 pixels and 1.60 pixels in the across-track direction and 0.80 pixels, 0.80 pixels and 0.40 pixels in the along-track direction for the Dengfeng, Luoyang and Wuan datasets, respectively. In addition, to evaluate the geometric accuracy of the corrected images, a total of 55, 24 and 23 ground control points (GCPs) were used in Dengfeng, Luoyang and Wuan, respectively. All of the GCPs were surveyed by a real-time kinematic global positioning system (GPS) with centimeter-level accuracy. The distribution of the GCPs on the nadir images is illustrated in Fig. 9. The points are all located at the centers of road intersections so that they can be recognized easily and accurately. The image coordinates of the GCPs were all measured manually by referring to the point descriptions.

4.2. Results and discussion

4.2.1. Attitude interpolation and RFM replacement

4.2.1.1. Attitude interpolation comparison. Three categories of interpolation methods (the polynomial model, the piecewise polynomial model and the Lagrange polynomial model) were compared. The polynomial model can have different degrees, such as the linear, quadratic and cubic polynomial models. A low degree may not fit the curve of the attitude changes, whereas a high degree may decrease the stability of the polynomial model; therefore, the cubic polynomial model was chosen as the representative of the polynomial model. Analogously, the piecewise polynomial model, which is a short-term polynomial model, also has different degrees, including the linear, quadratic and cubic piecewise polynomial models. Taking scene A as an example, five interpolation methods, the cubic polynomial model (CPM), the linear piecewise polynomial model (LPPM), the quadratic piecewise polynomial model (QPPM), the cubic piecewise polynomial model (CPPM) and the Lagrange polynomial model (LPM), were compared. The attitude interpolation results are shown in Fig. 10. In Fig. 10(a)–(e), the red line is the interpolated attitude of each imaging line, and the green circles are the original attitude observations, connected by light green lines. It is obvious that the CPM does not fit the original curve well because the oscillating part is filtered out by the single polynomial model. The interpolated curves of the LPPM, QPPM, CPPM and LPM all pass through the observations. However, the interpolated curve of the LPPM is not smooth enough, and the interpolated curve of the QPPM has poor continuity. The CPPM and the LPM achieve better smoothness than the QPPM and the LPPM as well as better continuity than the QPPM, with high similarity to the original observations. (For interpretation of the references to color in Fig. 10, the reader is referred to the web version of this article.)
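The contrast between the CPM and the LPM can be reproduced with a toy experiment; the sketch below (illustrative names, simulated 4 Hz samples of a 0.67 Hz oscillation, not ZY-3 telemetry) implements a single least-squares cubic and a local Lagrange interpolant via an exact cubic through the four nearest samples:

```python
import numpy as np

def cpm(t_obs, a_obs, t_new):
    """Cubic polynomial model: one least-squares cubic over the whole arc.
    Smooths out the jitter oscillation (suitable for VSRI, not for the
    original rigorous model)."""
    return np.polyval(np.polyfit(t_obs, a_obs, 3), t_new)

def lpm(t_obs, a_obs, t_new, k=4):
    """Lagrange polynomial model: exact degree-(k-1) polynomial through the
    k samples nearest each query time, so it follows the oscillation."""
    out = []
    for t in np.atleast_1d(t_new):
        idx = np.argsort(np.abs(t_obs - t))[:k]
        out.append(np.polyval(np.polyfit(t_obs[idx], a_obs[idx], k - 1), t))
    return np.array(out)
```

At an observation epoch the LPM reproduces the observation exactly, while the CPM leaves a large oscillatory residual, matching the behavior seen in Fig. 10.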


Table 1. Information about the experiment data.

Study area | Images  | Imaging date | Orbit ID | Haver/Hdiff (m) | Nadir image (pixels) | Forward image (pixels) | Backward image (pixels)
Dengfeng   | Scene A | 2012-02-03   | 381      | 431/1423        | 24,575 × 24,575      | 16,382 × 16,382        | 16,382 × 16,382
Luoyang    | Scene B | 2012-01-24   | 229      | 201/680         | 24,575 × 24,575      | 16,382 × 16,382        | 16,382 × 16,382
Wuan       | Scene C | 2012-02-03   | 381      | 727/1237        | 24,575 × 24,575      | 16,382 × 16,382        | 16,382 × 16,382

Haver denotes the average height, and Hdiff denotes the height difference.

Fig. 8. Attitude angle (pitch, roll, and yaw) observations in (a) Dengfeng, (b) Luoyang, and (c) Wuan.

Fig. 9. Distributions of GCPs in (a) Dengfeng, (b) Luoyang, and (c) Wuan.

better continuity than the QPPM, with high similarity to the original observations.

Fig. 10(f) shows the fitting differences of the CPM, LPPM, QPPM and CPPM compared with the LPM. The blue solid line is the difference curve between the CPPM and LPM, the green dashed line between the QPPM and LPM, the red dash-dotted line between the LPPM and LPM, and the magenta dotted line between the CPM and LPM. It is obvious that the difference between the CPPM and LPM is nearly zero for all three angles. Thus, the CPPM has the


Fig. 10. The interpolation results by (a) the CPM, (b) LPPM, (c) QPPM, (d) CPPM, and (e) LPM and (f) the differences compared to the LPM.

best consistency with the LPM. The difference curve between the CPM and LPM oscillates over time for all three angles because the CPM filters out the oscillation component of the attitude. Interestingly, the interpolation difference between the CPM and LPM in Fig. 10(f) shows that the yaw angle also oscillated by approximately 1 arc-sec, which means that attitude oscillation exists in all three directions; however, the yaw angle has a much smaller influence on geometric accuracy according to photogrammetric theory. In summary, both the LPM and CPPM are superior attitude interpolation methods for establishing a high-accuracy geometric model for the original images, and the CPM is the best attitude interpolation method for virtual images in steady imaging mode. Note that the largest attitude difference between the CPM and LPM is less

than 3 arc-sec. From the analysis in Section 3.3, a one-pixel mismatch in the image space corresponds to a height error of approximately 140 km in the object space, while the height differences of the three datasets are only 1423 m, 680 m and 1237 m; the largest resulting mismatch is as small as 0.005 pixels even if the average height is used. Thus, the height error can be ignored when VSRI is conducted.

4.2.1.2. RFM replacement comparison. The satellite jitter of ZY-3 is a low-frequency jitter at 0.67 Hz, so it is necessary to investigate whether this low-frequency jitter has an impact on the RFM replacement. Two experiments were conducted, using the oscillating attitude interpolated by the LPM and the smooth attitude interpolated by the CPM, respectively. Taking scene A as an example, first, the rigorous geometric model with the oscillating attitude was


Fig. 11. Error map of RPC fitting using (a) the oscillating attitude and (b) the smooth attitude.

Table 2
Statistical comparison of RPC fitting using the oscillating attitude and the smooth attitude (pixels).

Statistic | Oscillating attitude        | Smooth attitude
          | Sample | Line               | Sample    | Line
Mean      | 0.22   | 0.06               | 2.22E−06  | 1.50E−07
RMSE      | 2.08   | 0.76               | 3.26E−05  | 3.61E−06
Maximum   | 3.99   | 1.34               | 8.10E−05  | 1.00E−05
Minimum   | −3.99  | −1.13              | −8.00E−05 | −9.00E−06

established. Then, a terrain-independent solution was used for the RFM replacement: a total of 369,920 virtual control points, generated from the rigorous imaging model at five height levels, were used for RPC fitting. Finally, the fitting accuracy was verified using another set of virtual control points generated from the rigorous imaging model. The same workflow was applied to the RFM replacement for the rigorous imaging model with the smooth attitude. Fig. 11 shows the fitting error maps for the two attitudes, and Table 2 lists the corresponding statistics of the RPC fitting.
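The terrain-independent replacement can be sketched as follows. The toy "rigorous model", the reduced second-order basis and the grid sizes below are all illustrative assumptions; the operational RFM uses third-order, 20-term numerator and denominator polynomials and the real ZY-3 rigorous model.

```python
import numpy as np

# Terrain-independent RPC fitting sketch: virtual control points are generated
# from a stand-in "rigorous model" on a 3-D grid at several height levels, and
# a reduced rational polynomial is fitted by linear least squares.
def rigorous_model(X, Y, Z):
    # toy stand-in for the rigorous imaging model (normalized coordinates)
    line = 0.9 * X + 0.05 * Y * Z + 0.01 * X ** 2
    samp = 0.9 * Y + 0.04 * X * Z + 0.01 * Y ** 2
    return line, samp

def basis(X, Y, Z):
    # second-order terms; the real RFM uses third-order, 20-term polynomials
    return np.column_stack([np.ones_like(X), X, Y, Z,
                            X * Y, X * Z, Y * Z, X ** 2, Y ** 2, Z ** 2])

g = np.linspace(-1.0, 1.0, 10)
X, Y, Z = [a.ravel() for a in np.meshgrid(g, g, np.linspace(-1.0, 1.0, 5))]
line, samp = rigorous_model(X, Y, Z)          # virtual control points

def fit_rational(target, X, Y, Z):
    # Solve target * (1 + B m) = A n for (n, m), with the denominator's
    # constant term fixed to 1 to exclude the trivial zero solution.
    A = basis(X, Y, Z)
    B = A[:, 1:]
    M = np.hstack([A, -target[:, None] * B])
    c, *_ = np.linalg.lstsq(M, target, rcond=None)
    num, den = c[:A.shape[1]], np.concatenate([[1.0], c[A.shape[1]:]])
    return (A @ num) / (A @ den)

rmse = np.sqrt(np.mean((fit_rational(line, X, Y, Z) - line) ** 2))
```

When the underlying model is smooth, as with the CPM-interpolated attitude, the rational polynomial fits it essentially exactly; a jittered model adds high-order terms the rational basis cannot represent.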

Fig. 12. Residuals of tie points in the across- and along-track directions after relative orientation before distortion correction on (a) the nadir image, (b) the forward image, and (c) the backward image of scene A.


Fig. 13. Residuals of tie points in the across- and along-track directions after relative orientation after distortion correction on (a) the nadir image, (b) the forward image, and (c) the backward image of scene A.

Fig. 14. Residuals of tie points in the across- and along-track directions after relative orientation before distortion correction on (a) the nadir image, (b) the forward image, and (c) the backward image of scene B.

According to Fig. 11, it is clear that the RPC fitting error using the oscillating attitude shows strong periodicity, whereas the fitting error using the smooth attitude is much smaller and shows no periodicity. As shown in Table 2, the RMSE of the RPC fitting using the oscillating attitude is 2.08 pixels and 0.76 pixels in the sample and line directions, respectively. In contrast, the RMSE of the RPC fitting using the smooth attitude is 3.26 × 10⁻⁵ pixels and 3.61 × 10⁻⁶ pixels in the sample and line directions, which can


Fig. 15. Residuals of tie points in the across- and along-track directions after relative orientation after distortion correction on (a) the nadir image, (b) the forward image, and (c) the backward image of scene B.

Fig. 16. Residuals of tie points in the across- and along-track directions after relative orientation before distortion correction on the (a) nadir image, (b) forward image, and (c) backward image of scene C.

almost be ignored and fully meets the accuracy requirement. The absolute values of the maximum and minimum errors reach almost 4 pixels in the sample direction and exceed 1 pixel in the line direction when the oscillating attitude is used. The error in the sample direction is larger than that in the line direction because the amplitude of the roll angle is larger than that of the pitch angle.


Fig. 17. Residuals of tie points in the across- and along-track directions after relative orientation after distortion correction on (a) the nadir image, (b) the forward image, and (c) the backward image of scene C.

Table 3
Amplitude of the relative orientation residuals of tie points on the nadir, forward and backward images before distortion correction (pixels, across track / along track).

Scene | Nadir     | Forward   | Backward
A     | 2.0 / 0.9 | 2.5 / 0.8 | 1.2 / 0.8
B     | 0.9 / 0.9 | 0.8 / 0.8 | 0.8 / 0.8
C     | 0.9 / 0.8 | 1.2 / 0.6 | 1.9 / 0.6

The results indicate that the RFM cannot directly replace the rigorous geometric model for ZY-3 images when satellite jitter exists. The RFM uses third-order polynomials, so it cannot fit distortions of higher order: although ZY-3 suffers a very low-frequency attitude oscillation of approximately 0.67 Hz, the imaging time for one nadir scene is approximately 7 s, which corresponds to approximately 4.7 periods of attitude oscillation. One way to solve this is to add higher-order polynomial terms to the RFM so that it can fit the high-order distortion; the disadvantage is that the RFM would then lose its universality. The other way is to correct the image distortion caused by satellite jitter through steady reimaging, so that high-accuracy RPCs can be obtained without affecting subsequent applications. It is important to note that the correction accuracy depends strongly on the attitude accuracy, including the measurement accuracy and the interpolation method.

4.2.2. Accuracy evaluation via the RFM

The conventional method, which ignores attitude oscillation, uses the cubic polynomial model for attitude interpolation to reconstruct the rigorous imaging model of the original image, such that the RFM can replace the rigorous imaging model with high fitting accuracy, as discussed above; however, the distortion remains in the image. In the proposed method, the distortion caused by satellite jitter is corrected through steady reimaging, and a corresponding RFM is obtained. To verify the correction accuracy, the relative orientation accuracy and the geopositioning accuracy with GCPs were evaluated and compared for the three datasets before and after distortion correction.
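The core of the steady-reimaging correction is a coordinate mapping driven by geolocation consistency: each pixel of the virtual steady image is traced to the ground and back-projected into the jittered original image, where it is resampled. The 1-D sketch below illustrates this along the sample direction only; the jitter amplitude, line rate and "ground" signal are purely illustrative, and the actual method uses the full rigorous imaging model with the measured attitude.

```python
import numpy as np

# 1-D virtual steady reimaging sketch: the jitter displaces each image line
# across track by d(t) = A*sin(2*pi*f*t); the virtual camera sees d = 0.
f_jit, amp = 0.67, 1.5            # jitter frequency (Hz) and amplitude (pixels)
line_rate = 5000.0                # lines per second (illustrative)
n_lines, n_samp = 2000, 512

t = np.arange(n_lines) / line_rate
shift = amp * np.sin(2 * np.pi * f_jit * t)       # per-line displacement
cols = np.arange(n_samp, dtype=float)
ground = np.sin(2 * np.pi * cols / 64.0)          # a smooth 1-D "ground" signal

# original image: pixel c of line l images the ground at column c + shift[l]
original = np.array([np.interp(cols + s, cols, ground) for s in shift])

# VSRI: by geolocation consistency, the ground point seen by virtual pixel c
# appears at column c - shift[l] of the original line; resample there
corrected = np.array([np.interp(cols - s, cols, row)
                      for s, row in zip(shift, original)])

jitter_err = np.max(np.abs(original[:, 8:-8] - ground[8:-8]))   # distorted
resid_err = np.max(np.abs(corrected[:, 8:-8] - ground[8:-8]))   # after VSRI
```

After resampling, the corrected image matches the steady view to interpolation accuracy, while the original deviates by up to the jitter-induced displacement times the local image gradient.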

Table 4
RMSE of the relative orientation residuals of tie points on the nadir, forward and backward images (pixels, across track / along track).

Scene | Correction | Nadir       | Forward     | Backward
A     | Before     | 1.35 / 0.50 | 1.83 / 0.42 | 0.79 / 0.42
A     | After      | 0.14 / 0.12 | 0.19 / 0.10 | 0.15 / 0.05
B     | Before     | 0.32 / 0.49 | 0.38 / 0.42 | 0.32 / 0.41
B     | After      | 0.09 / 0.08 | 0.13 / 0.08 | 0.14 / 0.08
C     | Before     | 0.48 / 0.41 | 0.89 / 0.35 | 1.22 / 0.35
C     | After      | 0.09 / 0.07 | 0.15 / 0.07 | 0.17 / 0.07

4.2.2.1. Relative orientation accuracy evaluation. For ZY-3, satellite jitter influences not only image distortion but also the relative geometric accuracy between the nadir, forward and backward images, which affects stereo mapping and DSM generation (Tong et al., 2015a). Analyzing the residuals of the tie points of the three-line images after relative orientation is a convenient way to detect satellite jitter. First, we used a coarse-to-fine matching strategy with the RFM as an initial value to generate dense conjugate points with sub-pixel accuracy (Tong et al., 2015a), obtaining 6899, 14,019 and 11,354 conjugate points for scenes A, B and C, respectively. Using the RPCs and tie points, relative orientation with the affine model was conducted in the image space to remove the relative geometric inconsistencies among the triplet images (Fraser and Hanley, 2005), and the residuals of the tie points were calculated with the refined RPCs. Figs. 12–17 show the residual curves in the across- and along-track directions before and after distortion correction.

According to Figs. 12, 14 and 16, the residuals on the nadir, forward and backward images before distortion correction show obvious periodicity with almost the same frequency. The amplitudes of the residuals in the across- and along-track directions for the three scenes are listed in Table 3. In the across-track direction, scene A has the largest amplitude, scene C the second largest, and scene B the smallest; in the along-track direction, scenes A and B have the largest amplitudes and scene C the smallest. These results are very consistent with the ranking of the attitude angles described in Section 4.1, which means the distortion caused by satellite jitter cannot be removed by the affine model.

As discussed in Section 4.2.1, the RFM with third-order polynomials does not fit the satellite jitter well, so it is almost impossible to correct the distortion through a normal polynomial model. Wang et al. (2015) proposed a jitter compensation method for spaceborne line-array imagery using compressive sampling, which can model the distortion on a Fourier basis; however, its correction accuracy is related to the number and distribution of GCPs. In Figs. 13, 15 and 17, the residuals in the across- and along-track directions on the nadir, forward and backward images of the three scenes are randomly distributed, which means the distortion caused by satellite jitter is efficiently corrected by the proposed method. The residuals before and after distortion correction are all distributed around zero because the relative linear bias is removed by the affine model. Table 4 gives the RMSE of the residuals of the tie points on the nadir, forward and backward images before and after distortion correction; the RMSEs in both directions are greatly improved after correction.

4.2.2.2. Geopositioning accuracy evaluation with ground control points. Geopositioning accuracy validation and comparison with GCPs were also conducted on the three sets of triplet images. The bias-compensated RFM based on the affine model (Fraser and Hanley, 2005) was used to refine the RPCs with a number of GCPs. Three GCP/check point (CKP) configurations were used: 0 GCPs/all CKPs, 5 GCPs/all minus 5 CKPs, and all GCPs/all CKPs. The linear bias was first removed using the affine model under the three GCP configurations, and the discrepancies of the CKPs before and after compensation were compared. Figs. 18–20 show the discrepancies of the CKPs before and after distortion correction with the three GCP configurations for scenes A, B and C, respectively, and Table 5 gives the statistical results of the geopositioning accuracy before and after distortion correction.

For the three scenes, all ground points were first used as CKPs. The discrepancies of the CKPs were calculated as the difference between the image location back-projected from the measured ground location and the measured image location of the check points. From the results shown in Figs. 18(a), 19(a) and 20(a), it can be seen that the discrepancies (red squares) before distortion correction indicate the existence of obvious periodic
Fig. 18. Discrepancies of the CKPs in the across-track direction and along-track direction before (red squares) and after (blue triangles) the distortion correction in scene A based on the compensation of the linear bias (a) without any linear bias compensation, (b) with 5 GCPs, and (c) with 55 GCPs. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

distortions, which are consistent with the sine function curves in both directions. The amplitudes of the sine function are calculated based on the attitude oscillation amplitudes discussed in Section 4.1. However, there are no periodic distortions in the discrepancies (blue triangles) after the distortion correction. Furthermore, the linear bias can be observed from the distribution of the discrepancies in Figs. 18(a), 19(a) and 20(a). Because scene A, covering the Songshan calibration field, was used to conduct the

Fig. 19. Discrepancies of the CKPs in the across-track and along-track directions before (red squares) and after (blue triangles) the distortion correction in scene B, based on the compensation of the linear bias (a) without any linear bias compensation, (b) with 5 GCPs, and (c) with 24 GCPs. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 20. Discrepancies of the CKPs in the across-track and along-track directions before (red squares) and after (blue triangles) the distortion correction in scene C, based on the compensation of the linear bias (a) without any linear bias compensation, (b) with 5 GCPs, and (c) with 23 GCPs. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
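The distortion fitting curves shown in Figs. 18–20 reduce to a linear least-squares problem once the 0.67 Hz jitter frequency is fixed, since a sine of known frequency is linear in a sin/cos basis. A minimal sketch on synthetic per-line discrepancies follows; the line rate and amplitude are illustrative assumptions.

```python
import numpy as np

# Fit d(t) = a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c to per-line CKP
# discrepancies; with f fixed at the known jitter frequency the fit is linear.
f_jit, line_rate = 0.67, 5000.0   # line rate is an illustrative value
lines = np.arange(0, 24000, 10)
t = lines / line_rate
rng = np.random.default_rng(0)
resid = 1.2 * np.sin(2 * np.pi * f_jit * t + 0.4) + 0.05 * rng.normal(size=t.size)

G = np.column_stack([np.sin(2 * np.pi * f_jit * t),
                     np.cos(2 * np.pi * f_jit * t),
                     np.ones_like(t)])
(a, b, c), *_ = np.linalg.lstsq(G, resid, rcond=None)
amplitude = np.hypot(a, b)        # recovered oscillation amplitude (pixels)
```

The recovered amplitude and phase characterize the periodic distortion, which is how the fitted sine curves in the figures relate back to the attitude oscillation amplitudes of Section 4.1.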


Table 5
Geopositioning accuracy of scenes A, B and C before → after distortion correction, based on compensation of the linear bias with different GCP/CKP configurations. x/y denote the across-/along-track discrepancies of the CKPs in the nadir, forward and backward images (pixels); planimetry and height are in meters.

Scene A
GCPs/CKPs | Stat | Nadir x   | Nadir y   | Forward x | Forward y | Backward x | Backward y | Planimetry  | Height
0/50      | Mean | 0.32→0.25 | 0.36→0.84 | 0.12→0.17 | 0.46→0.44 | 0.04→0.09  | 0.60→0.61  | 1.29→0.54   | 0.57→0.68
0/50      | RMSE | 2.00→0.50 | 1.29→0.98 | 1.54→0.44 | 0.74→0.55 | 0.74→0.30  | 0.97→0.71  | 3.04→1.57   | 3.26→1.30
5/50      | Mean | 0.03→0.00 | 0.02→0.00 | 0.04→0.00 | 0.01→0.00 | 0.01→0.00  | 0.01→0.00  | 0.00→0.00   | 0.00→0.00
5/50      | RMSE | 1.93→0.44 | 1.21→0.51 | 1.54→0.40 | 0.57→0.34 | 0.87→0.28  | 0.75→0.36  | 3.21→1.38   | 3.41→1.13
55/55     | Mean | 0.00→0.00 | 0.00→0.00 | 0.00→0.00 | 0.00→0.00 | 0.00→0.00  | 0.00→0.00  | 0.00→0.00   | 0.00→0.00
55/55     | RMSE | 1.87→0.42 | 1.15→0.47 | 1.51→0.35 | 0.53→0.32 | 0.72→0.26  | 0.43→0.29  | 2.55→1.15   | 3.20→0.91

Scene B
0/24      | Mean | 4.02→3.86 | 3.81→4.16 | 3.88→3.89 | 3.73→−3.78 | 1.40→1.51 | 0.03→0.07  | 11.40→11.62 | 14.95→15.29
0/24      | RMSE | 4.17→3.89 | 3.92→4.18 | 3.99→3.91 | 3.76→3.81  | 1.58→1.54 | 0.66→0.46  | 11.73→11.72 | 15.17→15.36
5/19      | Mean | 0.01→0.01 | 0.02→0.01 | 0.02→0.02 | 0.02→0.00  | 0.00→0.01 | 0.02→0.00  | 0.05→0.04   | 0.17→0.08
5/19      | RMSE | 1.12→0.52 | 1.00→0.50 | 0.93→0.27 | 0.46→0.45  | 0.77→0.32 | 0.70→0.43  | 3.17→1.35   | 3.08→1.85
24/24     | Mean | 0.00→0.00 | 0.00→0.00 | 0.00→0.00 | 0.00→0.00  | 0.00→0.00 | 0.00→0.00  | 0.00→0.00   | 0.00→0.00
24/24     | RMSE | 1.11→0.36 | 0.93→0.41 | 0.92→0.22 | 0.37→0.39  | 0.74→0.30 | 0.66→0.38  | 2.73→1.13   | 2.62→1.38

Scene C
0/23      | Mean | 0.36→0.11 | 0.88→0.93 | 0.54→0.00 | 0.14→0.31 | 0.28→0.64 | 1.29→1.33  | 1.96→1.95   | 6.13→6.66
0/23      | RMSE | 2.02→0.49 | 1.06→1.03 | 1.57→0.40 | 0.53→0.47 | 0.93→0.78 | 1.52→1.42  | 3.69→2.38   | 7.67→6.75
5/18      | Mean | 0.08→0.00 | 0.02→0.00 | 0.06→0.00 | 0.01→0.00 | 0.03→0.00 | 0.01→0.00  | 0.05→0.07   | 0.06→0.03
5/18      | RMSE | 1.98→0.50 | 0.61→0.41 | 1.56→0.41 | 0.51→0.33 | 0.92→0.43 | 0.77→0.41  | 3.12→1.94   | 3.52→1.16
23/23     | Mean | 0.00→0.00 | 0.00→0.00 | 0.00→0.00 | 0.00→0.00 | 0.00→0.00 | 0.00→0.00  | 0.00→0.00   | 0.00→0.00
23/23     | RMSE | 1.93→0.45 | 0.58→0.43 | 1.41→0.38 | 0.48→0.33 | 0.88→0.38 | 0.50→0.37  | 3.01→1.34   | 3.44→1.00

in-flight geometric calibration, the linear bias in scene A is much smaller. From the results shown in Table 5, after the distortion correction the accuracies of scene A are improved from 2.00, 1.54 and 0.74 pixels to 0.50, 0.44 and 0.30 pixels in the across-track direction and from 1.29, 0.74 and 0.97 pixels to 0.98, 0.55 and 0.71 pixels in the along-track direction for the nadir, forward and backward images, respectively. In the ground space, the geo-positioning accuracy is improved from 3.04 m to 1.57 m in planimetry and from 3.26 m to 1.30 m in height. For scenes B and C, the improvement is not as obvious as for scene A because of their larger linear bias.

Five GCPs were then used to correct the linear bias with the affine bias-compensation model, and the remaining ground points were used as CKPs. From the results shown in Figs. 18(b), 19(b) and 20(b), the discrepancies before distortion correction still show periodicity, whereas after distortion correction the discrepancies of the CKPs are randomly distributed around zero. As shown in Table 5, the accuracies of scene A are further improved to 0.44, 0.40 and 0.28 pixels in the across-track direction and 0.51, 0.34 and 0.36 pixels in the along-track direction for the nadir, forward and backward images, respectively, in the image space, and to 1.38 m in planimetry and 1.13 m in height in the ground space. The accuracies of scene B are improved from 1.12, 0.93 and 0.77 pixels to 0.52, 0.27 and 0.32 pixels in the across-track direction and from 1.00, 0.46 and 0.70 pixels to 0.50, 0.45 and 0.43 pixels in the along-track direction for the nadir, forward and backward images, respectively; in the ground space, the accuracy is improved from 3.17 m and 3.08 m to 1.35 m and 1.85 m in planimetry and height. The accuracies of scene C are improved from 1.98, 1.56 and 0.92 pixels to 0.50, 0.41 and 0.43 pixels in the across-track direction and from 0.61, 0.51 and 0.77 pixels to 0.41, 0.33 and 0.41 pixels in the along-track direction; in the ground space, the accuracy is improved from 3.12 m and 3.52 m to 1.94 m and 1.16 m in planimetry and height. Note that the improvements in the along-track direction are not as pronounced as those in the across-track direction because the amplitude of the pitch angle is smaller than that of the roll angle.

Finally, all of the ground points were employed as GCPs in the affine model. The results are shown in Figs. 18(c), 19(c) and 20(c) and in Table 5; slightly higher accuracies are achieved in both the image space and the ground space.
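The affine bias compensation used throughout this section (after Fraser and Hanley, 2005) reduces to two small least-squares problems in image space: the offsets between measured and RPC-projected GCP positions are modeled as affine functions of the image coordinates. A sketch on synthetic data, with made-up bias coefficients:

```python
import numpy as np

# Affine bias compensation of RPC-projected image coordinates: the offsets
# between measured and RPC-projected GCP positions are modeled as an affine
# function of the image coordinates and removed.
rng = np.random.default_rng(1)
s_rpc = rng.uniform(0.0, 24575.0, 20)     # RPC-projected sample coordinates
l_rpc = rng.uniform(0.0, 24575.0, 20)     # RPC-projected line coordinates
# simulate a linear bias between RPC projection and measured image positions
s_meas = s_rpc + 3.0 + 1e-5 * s_rpc - 2e-5 * l_rpc
l_meas = l_rpc - 1.5 + 2e-5 * s_rpc + 1e-5 * l_rpc

A = np.column_stack([np.ones_like(s_rpc), s_rpc, l_rpc])
coef_s, *_ = np.linalg.lstsq(A, s_meas - s_rpc, rcond=None)
coef_l, *_ = np.linalg.lstsq(A, l_meas - l_rpc, rcond=None)

s_fix = s_rpc + A @ coef_s                # bias-compensated coordinates
l_fix = l_rpc + A @ coef_l
rmse = np.sqrt(np.mean((s_fix - s_meas) ** 2 + (l_fix - l_meas) ** 2))
```

An affine model removes linear bias exactly, which is why the periodic, higher-order jitter distortion survives this compensation and must be handled by VSRI instead.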

5. Conclusions and future work

In this paper, we proposed a virtual steady reimaging (VSRI) method to correct the distortion caused by satellite jitter. Combined with a virtual CCD, a single full-field CCD free of distortion, this method can correct distortions from the camera and from attitude oscillation simultaneously. Our work focused on three aspects.

(1) Different attitude interpolation methods for the rigorous imaging model of the original image and the VSRI model of the corrected image were compared. The LPM and CPPM are recommended for interpolating the attitude of the original imagery when attitude oscillation exists, whereas the CPM, which filters out the attitude oscillation component, is recommended for the corrected images.

(2) The RFM replacement accuracy for ZY-3 was analyzed under 0.67 Hz satellite jitter. Attitude oscillation reduces the RFM fitting precision to the point that the rigorous imaging model of the original image cannot be replaced by the RFM, which underlines the significance of steady reimaging.

(3) The accuracy of the corrected images was evaluated and compared with that of the uncorrected images to validate the reliability and precision of the proposed method. The results show that the distortion caused by satellite jitter is corrected efficiently, and the accuracies of all three experimental datasets are improved in both the image space and the ground space.

The proposed distortion correction method, which requires no GCPs, is meaningful for mapping and other applications of ZY-3 that demand high accuracy, and it presents a data processing scheme to eliminate the negative influence of satellite jitter on geo-positioning. The satellite jitter of ZY-3 is a low-frequency jitter (approximately 0.67 Hz), equivalent to a period of approximately 5500 lines in the nadir image, so the distortion is not easily noticed. The VSRI-based distortion correction method is also applicable to long-strip images if the rigorous imaging model for the original long-strip images and the VSRI model for the corrected images are accurately established; however, the RPC fitting accuracy will decrease when the strip is too long.
One feasible solution is to perform coordinate mapping for the long strip directly with the VSRI model for the corrected image, without RFM replacement. It is worth noting that attitude data with high relative accuracy, which can accurately describe the oscillation component, make this method possible. The proposed method can also be used to correct jitter-induced distortion in other pushbroom spaceborne imagery when the attitude oscillation can be accurately observed. For higher-resolution images, the relative accuracy of the attitude data must be higher to achieve sub-pixel correction, and the observation frequency of the attitude data should also be increased if high-frequency satellite jitter exists. The proposed method is effective as long as the satellite jitter can be observed in the attitude data. Future work will focus on estimating high-accuracy, high-frequency attitude data from onboard attitude sensors or image sensors.

References

Amberg, V. et al., 2013. In-flight attitude perturbances estimation: application to PLEIADES-HR satellites. In: SPIE Optical Engineering and Applications. International Society for Optics and Photonics, pp. 206–209.
Ayoub, F., Leprince, S., Binet, R., Lewis, K.W., 2008. Influence of camera distortions on satellite image registration and change detection applications. In: IGARSS 2008 – IEEE International Geoscience and Remote Sensing Symposium, 2: II-1072–II-1075.
d'Angelo, P., 2013. Evaluation of ZY-3 for DSM and ortho image generation. ISPRS Int. Arch. Photogramm. Rem. Sens. Spatial Inform. Sci. 1 (1), 57–61.
Dial, G., Bowen, H., Gerlach, F., Grodecki, J., Oleszczuk, R., 2003. IKONOS satellite, imagery, and products. Rem. Sens. Environ. 88 (1), 23–36.
Fang, L., Wang, M., Li, D., Pan, J., 2014. CPU/GPU near real-time preprocessing for ZY-3 satellite images: relative radiometric correction, MTF compensation, and geocorrection. ISPRS J. Photogramm. Rem. Sens. 87, 229–240.
Fang, L., Wang, M., Li, D., Pan, J., 2015. MOC-based parallel preprocessing of ZY-3 satellite images. Geosci. Rem. Sens. Lett., IEEE 12 (2), 419–423.
Fraser, C.S., Dial, G., Grodecki, J., 2006. Sensor orientation via RPCs. ISPRS J. Photogramm. Rem. Sens. 60 (3), 182–194.
Fraser, C.S., Hanley, H.B., 2005. Bias-compensated RPCs for sensor orientation of high-resolution satellite imagery. Photogramm. Eng. Rem. Sens. 71 (8), 909–915.
Grodecki, J., Dial, G., 2003. Block adjustment of high-resolution satellite images described by rational polynomials. Photogramm. Eng. Rem. Sens. 69 (1), 59–68.
Iwasaki, A., 2011. Detection and estimation satellite attitude jitter using remote sensing imagery. In: Advances in Spacecraft Technologies. InTech, Rijeka, Croatia.
Jacobsen, K., 2006. Calibration of imaging satellite sensors. Int. Arch. Photogramm. Remote Sens. 36, 1.
Johnston, J.D., Thornton, E.A., 2012. Thermally induced dynamics of satellite solar panels. J. Spacecraft Rockets 37 (5), 604–613.
Lehner, M., Müller, R., 2003. Quality check of MOMS-2P ortho-images of semi-arid landscapes. In: Proceedings of the ISPRS Workshop "High Resolution Mapping from Space 2003".
Marks, R., 2012. Introduction to Shannon Sampling and Interpolation Theory. Springer Science & Business Media.
Pan, H. et al., 2013. Basic products of the ZiYuan-3 satellite and accuracy evaluation. Photogramm. Eng. Remote Sens. 79, 1131–1145.
Poli, D., Toutin, T., 2012. Review of developments in geometric modelling for high resolution satellite pushbroom sensors. Photogramm. Rec. 27 (137), 58–73.
Schneider, M., Lehner, M., Müller, R., Reinartz, P., 2008. Stereo evaluation of ALOS/PRISM data on ESA-AO test sites – first DLR results. In: Proceedings ALOS PI Symposium.
Schwind, P. et al., 2009. Processors for ALOS optical data: deconvolution, DEM generation, orthorectification, and atmospheric correction. IEEE Trans. Geosci. Rem. Sens. 47 (12), 4074–4082.
Takaku, J., 2011. RPC generations on ALOS PRISM and AVNIR-2. In: Geoscience and Remote Sensing Symposium (IGARSS), 2011 IEEE International. IEEE, pp. 539–542.
Takaku, J., Tadono, T., 2010. High resolution DSM generation from ALOS PRISM – processing status and influence of attitude fluctuation. In: Geoscience and Remote Sensing Symposium (IGARSS), 2010 IEEE International. IEEE, pp. 4228–4231.
Tang, X.M. et al., 2014. Inner FoV stitching of spaceborne TDI CCD images based on sensor geometry and projection plane in object space. Rem. Sens. 6 (7), 6386–6406.
Tao, C.V., Hu, Y., 2001. A comprehensive study of the rational function model for photogrammetric processing. Photogramm. Eng. Rem. Sens. 67 (12), 1347–1358.
Teshima, Y., Iwasaki, A., 2008. Correction of attitude fluctuation of Terra spacecraft using ASTER/SWIR imagery with parallax observation. IEEE Trans. Geosci. Rem. Sens. 46 (1), 222–227.
Tong, X. et al., 2015a. Detection and estimation of ZY-3 three-line array image distortions caused by attitude oscillation. ISPRS J. Photogramm. Rem. Sens. 101, 291–309.
Tong, X. et al., 2015b. Attitude oscillation detection of the ZY-3 satellite by using multispectral parallax images. IEEE Trans. Geosci. Rem. Sens. 53 (6), 3522–3534.
Tong, X.H. et al., 2014. Framework of jitter detection and compensation for high resolution satellites. Rem. Sens. 6 (5), 3944–3964.
Toutin, T., 2004. Review article: geometric processing of remote sensing images: models, algorithms and methods. Int. J. Rem. Sens. 25 (10), 1893–1924.
Toyoshima, M., Araki, K., 2001. In-orbit measurements of short term attitude and vibrational environment on the Engineering Test Satellite VI using laser communication equipment. Opt. Eng. 40 (5), 827–832.
Van Zyl, J.J., 2001. The Shuttle Radar Topography Mission (SRTM): a breakthrough in remote sensing of topography. Acta Astronaut. 48 (5), 559–565.
Wang, M., Yang, B., Hu, F., Zang, X., 2014a. On-orbit geometric calibration model and its applications for high-resolution optical satellite imagery. Rem. Sens. 6 (5), 4391–4408.
Wang, M., Zhu, Y., Pan, J., Yang, B., Zhu, Q., 2016. Satellite jitter detection and compensation using multispectral imagery. Rem. Sens. Lett. 7 (6), 513–522.
Wang, P., An, W., Deng, X., Yang, J., Sheng, W., 2015. A jitter compensation method for spaceborne line-array imagery using compressive sampling. Rem. Sens. Lett. 6 (7), 558–567.
Wang, T. et al., 2014b. Geometric accuracy validation for ZY-3 satellite imagery. Geosci. Rem. Sens. Lett., IEEE 11 (6), 1168–1171.
Xu, W., Gong, J., Wang, M., 2014. Development, application, and prospects for Chinese land observation satellites. Geo-spatial Inform. Sci. 17 (2), 102–109.
Zhang, G. et al., 2014. In-orbit geometric calibration and validation of ZY-3 linear array sensors. Photogramm. Rec. 29 (145), 68–88.
Zhang, Y., Xu, S.J., 2012. Vibration isolation platform for control moment gyroscopes on satellites. J. Aerospace Eng. 25 (4), 641–652.
Zhu, Y., Wang, M., Zhu, Q., Pan, J., 2014. Detection and compensation of band-to-band registration error for multi-spectral imagery caused by satellite jitter. ISPRS Ann. Photogramm. Rem. Sens. Spatial Inform. Sci. 1, 69–76.