
Auto-calibration of GF-1 WFV images using flat terrain

Guo Zhang a,b,*, Kai Xu a, Wenchao Huang a

a State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, PR China
b Collaborative Innovation Center of Geospatial Technology, Wuhan University, PR China

Article history: Received 26 September 2016; Received in revised form 10 September 2017; Accepted 10 October 2017

Keywords: Distortion detection; Elevation residual; Flat terrain; Gaofen-1; Wide-field-view camera

Abstract

Four wide-field-view (WFV) cameras with 16-m multispectral medium resolution and a combined swath of 800 km are onboard the Gaofen-1 (GF-1) satellite, which can increase the revisit frequency to less than 4 days and enable large-scale land monitoring. The detection and elimination of WFV camera distortions is key for subsequent applications. Due to the wide swath of WFV images, geometric calibration using either conventional methods based on the ground control field (GCF) or GCF-independent methods is problematic. This is predominantly because current GCFs in China fail to cover the whole WFV image, and most GCF-independent methods are used in close-range photogrammetry or computer vision. This study proposes an auto-calibration method using flat terrain to detect nonlinear distortions of GF-1 WFV images. First, a classic geometric calibration model is built for the GF-1 WFV camera, and at least two images with an overlap area that cover flat terrain are collected; then the elevation residuals between the real elevation and that calculated by forward intersection are used to solve the nonlinear distortion parameters of the WFV images. Experiments demonstrate that the orientation accuracy of the proposed method evaluated by GCF check points (CPs) is within 0.6 pixel, and the residual errors manifest as random errors. Validation using Google Earth CPs further proves the effectiveness of auto-calibration, and the whole scene is undistorted compared with the case where no calibration parameters are used. The orientation accuracy of the proposed method and the GCF method is compared; the maximum difference is approximately 0.3 pixel, and the factors behind this discrepancy are analyzed. Generally, this method can effectively compensate for distortions in the GF-1 WFV camera.

© 2017 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

1. Introduction

Gaofen-1 (GF-1) is the first of a series of high-resolution optical Earth observation satellites of the Chinese high-resolution Earth observation system (CHEOS). It was launched successfully on 26 April 2013 on the CZ-2D rocket from the Jiuquan Satellite Launch Center (JSLC) (Bai, 2013; Lu et al., 2014; XinHuaNet, 2013). In addition to two 2-m barrel-mounted panchromatic/8-m multispectral (MS) cameras, the GF-1 satellite is configured with a set of four wide-field-view (WFV) cameras with 16-m MS medium resolution and a combined swath of 800 km, as shown in Fig. 1. Consequently, it achieves integration of imaging capacity at medium and high spatial resolutions, and has an improved self-sufficiency rate of high-resolution data. Detailed information about the GF-1 WFV camera is given in Table 1.

* Corresponding author at: 129 Luoyu Road, Wuhan, Hubei Province 430079, PR China. E-mail address: [email protected] (G. Zhang).

The WFV cameras can increase the revisit frequency to less than 4 days and facilitate large-scale monitoring of land dynamics. However, as a result of the wide view field angle of the WFV camera (16.44°), serious distortions arise in WFV images due to camera lens aberrations, which strongly influence image geometric quality and the performance of subsequent cartographic applications. Consequently, it is crucial to detect and eliminate distortions of the GF-1 satellite WFV cameras. Conventional methods that compensate for the distortion of optical cameras onboard satellites can generally be divided into (1) the classic method based on the geometric calibration field (GCF), and (2) GCF-independent methods. The method based on the GCF is usually exploited (CNES, 2004; Greslou and Delussy, 2006; Leprince et al., 2008) and has been fully validated by SPOT, ALOS, WorldView, Pleiades, ZY-3 (Bouillon et al., 2003; Kubik et al., 2013; Takaku and Tadono, 2009; Xu et al., 2017) and other high-resolution satellites; it solves calibration parameters using high-precision ground control points (GCPs) obtained from the GCF image by a high-accuracy matching method (Bouillon, 2003; Leprince et al., 2007, 2008).


Fig. 1. Schematic diagram of WFV cameras onboard the GF-1 satellite.

Table 1. Detailed information about the GF-1 WFV camera.

Items                               Values
Swath                               200 km
Resolution                          16 m
Charge-coupled device (CCD) size    0.0065 mm
Principal distance                  270 mm
Field of view (FOV)                 16.44 deg
Image size                          12,000 × 13,400 pixels
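The focal length in pixels (≈41,538) and the ±8.22° view-angle range used later in Section 2.3 follow directly from the specifications in Table 1. The short check below is not part of the paper's processing chain; it simply reproduces those derived quantities from the tabulated values.

```python
import math

# Camera geometry taken from Table 1 (GF-1 WFV).
pixel_size_mm = 0.0065         # CCD detector size
principal_distance_mm = 270.0  # principal distance
samples_per_line = 12000       # across-track image size

# Focal length expressed in pixels, as used in Section 2.3 (f ~ 41,538 pixels).
f_pixels = principal_distance_mm / pixel_size_mm
print(f"f = {f_pixels:.0f} pixels")

# Half field of view: the +/-8.22 deg range of the view angle alpha in Section 2.3.
half_width_mm = 0.5 * samples_per_line * pixel_size_mm
half_fov_deg = math.degrees(math.atan(half_width_mm / principal_distance_mm))
print(f"half FOV = {half_fov_deg:.2f} deg, full FOV = {2.0 * half_fov_deg:.2f} deg")
```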

Although the GCF method is easy and stable, the following problems arise during its practical application:

(1) The GCF method requires costly updating of the GCF (Delevit et al., 2012). The presence of temporary objects such as new construction sites, as well as other changes in the environment between satellite and GCF images caused by long acquisition intervals, as shown in Fig. 2, lead to a reduced accuracy of GCPs during image registration. This, in turn, results in low accuracy of the final geometric calibration.

(2) Moreover, it is difficult to obtain sufficient GCPs from current GCFs built in China (Table 2), because current GCFs fail to cover all rows in only one image due to the wide swath of GF-1 WFV.

Table 2. Specifications of the geometric calibration field in China.

Area        GSD of DOM (m)   Plane accuracy of DOM, RMS (m)   Height accuracy of DEM, RMS (m)   Range (km², across track × along track)   Center latitude and longitude
Shanxi      0.5              1                                1.5                               50 × 95                                   38.00°N, 112.52°E
Songshan    0.5              1                                1.5                               50 × 41                                   34.65°N, 113.55°E
Dengfeng    0.2              0.4                              0.7                               54 × 84                                   34.45°N, 113.07°E
Tianjin     0.2              0.4                              0.7                               72 × 54                                   39.17°N, 117.35°E
Northeast   0.5              1                                1.5                               100 × 600                                 45.50°N, 125.63°E

As shown in Fig. 3, the yellow box represents the swath of the GF-1 WFV image, which spans 200 km, while the red box indicates the swath of the GCF of Taiyuan, which spans only 50 km. To solve this problem, a multi-calibration-image strategy and a modified calibration model were proposed (Huang et al., 2016), whereby calibration images are collected at different times and their different rows are covered by the GCF. GCPs covering all rows can then be obtained, and the modified calibration model can be applied to detect distortions. Nevertheless, this method depends heavily on the distribution of images and the GCF, and the cohesion between images must be handled carefully in the modified calibration model. Generally, the detection of GF-1 WFV distortions is difficult using this method.

Fig. 2. Dramatic changes in images caused by long acquisition intervals. (a) Satellite image of Songshan control field acquired in 2016 and (b) the corresponding area acquired in 2011.

Fig. 3. Spatial coverage of WFV image and GCF (yellow box: image coverage, red box: GCF coverage). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

GCF-independent methods typically include self-calibration block adjustment (Di et al., 2014; Habib et al., 2010; Zheng et al., 2015) and the geometric auto-calibration method of Pleiades-HR (Kubik et al., 2012), which detect distortions and partly make up for the disadvantages of GCF-based methods. However, the process of self-calibration block adjustment requires the collection of a large number of images, and focuses on the calculation of block adjustment parameters. Furthermore, the geometric auto-calibration method can only be implemented with satellites that exhibit strong agility, such as Pleiades-HR, and only if corresponding digital elevation models (DEM) exist. Regarding the defects associated with conventional methods, further study is required on the elimination of distortions in WFV cameras.

In the computer vision (CV) field, Faugeras et al. (1992), Maybank and Faugeras (1992), and Hartley (1997) proposed an auto-calibration technique and proved that it is possible to directly detect distortion using multiple images. Following from this research, Malis and Cipolla (2000a,b, 2002) achieved auto-calibration using unknown planar structures of multiple images, which produced favorable experimental results. However, the above methods were only applied to close-range photogrammetry or computer vision, and have not yet been applied to satellite photogrammetry. There are three main reasons for this. Firstly, it is impossible to acquire planar structures for satellite imagery in a purely geometric sense because of the curvature of the Earth, especially for wide-swath imagery. Secondly, these methods were initially designed to solve the distortion of array cameras, so they are not directly applicable to linear sensor cameras such as GF-1 WFV. Lastly, these methods solve nonlinear equations that involve massive computational loads, and the process is often unstable.

In this study, a novel auto-calibration method is proposed for detecting nonlinear distortions of the GF-1 WFV camera. First, a classic geometric calibration model is built for GF-1 WFV. Then, at least two images with an overlap area that cover flat terrain are collected. The elevation residuals between the real elevation of the flat terrain and the elevation calculated by forward intersection are used to solve the parameters of the nonlinear distortions in the WFV image. Its superiority over traditional methods that are implemented using a GCF or that depend on satellite agility is that it can compensate for the distortion in GF-1 WFV images and improve GF-1 geometric performance while being free from these constraints.

2. Methodology

2.1. Principle of auto-calibration using flat terrain

The principle of auto-calibration using flat terrain is shown in Fig. 4. Point A is the ground point in the object space, and the irregular curve is the real surface. S1 and S2 are the projection centers of the two images, and the thick solid lines are their focal planes. On complex terrain, when there is no error, object point A will be imaged at point a in image S1 and point c in image S2, respectively. If there is camera distortion, however, image point a will move to b and image point c will move to d. Lines S1b and S2d are the new rays, intersecting at object point B in this case. However, with the position of object point A unknown during the calibration process, the change in the imaged point can also be attributed to an error of the object point, as shown in Fig. 4(a). Thus, in the absence of any constraints, two images are unable to discern whether the change in the image coordinate is induced by image distortion or by object point error. However, flat terrain where the elevation is known can be introduced as a constraint to solve this problem (Fig. 4(b)). With the elevation constraint, the height value of B calculated by forward intersection will differ from the elevation of the supposed surface if there are any image distortions. This discrepancy in the height value is called the elevation residual, which is used to construct the error equation in this study. Note that the ellipsoidal elevation of the supposed surface, whose elevation value is constant, is used in place of that of the real surface when calculating the elevation residual. The error induced by the difference in elevation between the real surface and the supposed surface, i.e. the elevation error, is analyzed in Section 2.3.

Fig. 4. Schematic diagram of auto-calibration using flat terrain. (a) Two images covering complex terrain cannot detect distortion, whereas (b) two images covering flat terrain can be used to detect distortion.
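To make the principle concrete, the following toy sketch reduces Fig. 4 to two dimensions: two projection centers at a common height observe a point on flat terrain, a small angular distortion is applied to each viewing ray, and the forward-intersected point is compared with the known terrain height. The resulting non-zero height is the elevation residual exploited by the method. The geometry and the distortion angles are purely illustrative and are not taken from the paper.

```python
import numpy as np

def intersect_rays_2d(p1, d1, p2, d2):
    """Intersect two 2-D rays p + t*d by solving [d1, -d2] [t1, t2]^T = p2 - p1."""
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

def rotate(v, angle_rad):
    """Rotate a 2-D vector by a small angle (models a pointing/distortion error)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

# Two projection centres S1, S2 at orbit height and a ground point A on flat terrain (h = 0).
S1 = np.array([0.0, 644500.0])       # (x, height) in metres
S2 = np.array([200000.0, 644500.0])
A  = np.array([120000.0, 0.0])

d1, d2 = A - S1, A - S2              # error-free viewing rays

# Small angular distortions of the two rays (purely illustrative values).
d1_dist = rotate(d1, 30e-6)          # +30 microradians
d2_dist = rotate(d2, -20e-6)         # -20 microradians

B = intersect_rays_2d(S1, d1_dist, S2, d2_dist)
print(f"intersected height      : {B[1]:8.2f} m")
print(f"elevation residual vs 0 : {B[1]:8.2f} m")  # non-zero -> evidence of distortion
```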

2.2. Workflow of auto-calibration using flat terrain

2.2.1. Calibration model for the GF-1 WFV camera

The calibration model for the linear sensor is established based on the work of Tang et al. (2012) and Xu et al. (2017) as Eq. (1):

$$\begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix} = \begin{bmatrix} X(t) \\ Y(t) \\ Z(t) \end{bmatrix} + m \cdot R(t) \cdot R_U \cdot \begin{bmatrix} x + \Delta x \\ y + \Delta y \\ 1 \end{bmatrix} \quad (1)$$

where $[X(t)\ Y(t)\ Z(t)]^T$ is the satellite position, $R(t)$ is the rotation matrix, $[x+\Delta x\ \ y+\Delta y\ \ 1]^T$ represents the ray direction, $m$ is the unknown scaling factor, $[X_S\ Y_S\ Z_S]^T$ is the unknown ground position, $R_U$ is the offset matrix that compensates for the exterior errors, and $(\Delta x, \Delta y)$ denotes the image distortion. $R_U$ can be expanded to Eq. (2) (Radhadevi and Solanki, 2010; Xu, 2004; Yuan, 2012):

$$R_U(t) = \begin{bmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{bmatrix} \begin{bmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (2)$$

where $\varphi$, $\omega$, and $\kappa$ are rotation parameters in the satellite body-fixed coordinate system. It should be noted that different images have different $R_U$ values.
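A direct transcription of Eq. (2) is given below as a sketch. The rotation order (a rotation by φ about the Y axis, by ω about the X axis, and by κ about the Z axis) follows the matrix product in Eq. (2); the function name, the axis naming, and the angle values are placeholders for illustration only.

```python
import numpy as np

def offset_matrix(phi, omega, kappa):
    """Offset matrix R_U of Eq. (2): product of rotations about the Y, X and Z axes
    of the satellite body-fixed frame (angles in radians)."""
    r_phi = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                      [ 0.0,         1.0, 0.0        ],
                      [-np.sin(phi), 0.0, np.cos(phi)]])
    r_omega = np.array([[1.0, 0.0,            0.0           ],
                        [0.0, np.cos(omega), -np.sin(omega)],
                        [0.0, np.sin(omega),  np.cos(omega)]])
    r_kappa = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                        [np.sin(kappa),  np.cos(kappa), 0.0],
                        [0.0,            0.0,           1.0]])
    return r_phi @ r_omega @ r_kappa

# Placeholder angles; each image receives its own set during the adjustment.
R_U = offset_matrix(phi=1e-4, omega=-2e-4, kappa=5e-5)
print(np.allclose(R_U @ R_U.T, np.eye(3)))   # True: R_U is orthonormal
```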

$(\Delta x, \Delta y)$ can be expanded as in Eq. (3) (Bouillon, 2003; Jiang et al., 2014; Mulawa, 2008; Zhang et al., 2014):

$$\begin{cases} \Delta x = a_0 + a_1 s + a_2 s^2 + \cdots + a_i s^i \\ \Delta y = b_0 + b_1 s + b_2 s^2 + \cdots + b_i s^i \end{cases}, \quad i \le 5 \quad (3)$$

where the variables $a_0, a_1, a_2, \ldots, a_i$ and $b_0, b_1, b_2, \ldots, b_i$ are parameters describing the distortion, and $s$ is the image coordinate across track. It should be noted that images collected at different times have the same distortion. Based on Eqs. (1)–(3), the calibration model can be written as in Eq. (4) for a specified image:

$$\begin{cases} x = x(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, X_S, Y_S, Z_S) \\ y = y(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, X_S, Y_S, Z_S) \end{cases}, \quad 0 \le i \le 5 \quad (4)$$

where $\omega$, $\varphi$, $\kappa$, $a_0, a_1, \ldots, a_i$ and $b_0, b_1, \ldots, b_i$ are the calibration parameters to be solved, and $[X_S\ Y_S\ Z_S]^T$ is the unknown ground position. The ground position in the geocentric Cartesian coordinate system can be represented in the geographic coordinate system by longitude $L$, latitude $B$, and elevation $H$:

$$\begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix} = \begin{bmatrix} (N+H)\cos B \cos L \\ (N+H)\cos B \sin L \\ [N(1-e^2)+H]\sin B \end{bmatrix} \quad (5)$$

where $N$ represents the radius of curvature in the prime vertical related to latitude $B$, and $e$ represents the first eccentricity of the Earth.
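Eq. (5) is the standard geodetic-to-geocentric conversion. A minimal sketch follows; the WGS84 ellipsoid constants are an assumption, since the paper does not state which reference ellipsoid is used.

```python
import numpy as np

# WGS84 ellipsoid constants (assumed; the paper does not name the ellipsoid).
WGS84_A  = 6378137.0            # semi-major axis (m)
WGS84_E2 = 6.69437999014e-3     # first eccentricity squared, e^2

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Eq. (5): (B, L, H) -> (Xs, Ys, Zs) in the geocentric Cartesian frame."""
    B = np.radians(lat_deg)
    L = np.radians(lon_deg)
    N = WGS84_A / np.sqrt(1.0 - WGS84_E2 * np.sin(B) ** 2)   # prime-vertical radius
    X = (N + h) * np.cos(B) * np.cos(L)
    Y = (N + h) * np.cos(B) * np.sin(L)
    Z = (N * (1.0 - WGS84_E2) + h) * np.sin(B)
    return np.array([X, Y, Z])

# Example: a point on the North China Plain with the 15 m ellipsoidal height used in Section 3.1.
print(geodetic_to_ecef(38.0, 115.5, 15.0))
```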

Then, the calibration model (4) can be rewritten as Eq. (6):

$$\begin{cases} x = x(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, B, L, H) \\ y = y(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, B, L, H) \end{cases}, \quad 0 \le i \le 5 \quad (6)$$

As the elevation $H$ is known, the calibration model (6) can be rewritten as Eq. (7) using the proposed method. Eq. (7) implicitly applies the elevation residual constraint by assigning the real elevation to the corresponding points of the different images.

$$\begin{cases} x = x(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, B, L) \\ y = y(a_0, \ldots, a_i, b_0, \ldots, b_i, \omega, \varphi, \kappa, B, L) \end{cases}, \quad 0 \le i \le 5 \quad (7)$$

Eq. (7) is the basic calibration model of the proposed method. It should be noted that a plain whose ellipsoidal elevation is assumed constant is introduced to overcome the impossibility of acquiring a strictly planar structure spanning 200 km for satellite imagery.

2.2.2. Error equation construction

Taking the partial derivatives of Eq. (7) with respect to $\omega$, $\varphi$, $\kappa$, $a_0, a_1, \ldots, a_i$, $b_0, b_1, \ldots, b_i$ and $B$, $L$, the linearized form of Eq. (7) is Eq. (8) for object point $k$ projected in image $j$:

$$\begin{cases} v_x^{jk} = \dfrac{\partial x}{\partial a_0}da_0 + \cdots + \dfrac{\partial x}{\partial a_i}da_i + \dfrac{\partial x}{\partial b_0}db_0 + \cdots + \dfrac{\partial x}{\partial b_i}db_i + \dfrac{\partial x}{\partial \omega^j}d\omega^j + \dfrac{\partial x}{\partial \varphi^j}d\varphi^j + \dfrac{\partial x}{\partial \kappa^j}d\kappa^j + \dfrac{\partial x}{\partial B^k}dB^k + \dfrac{\partial x}{\partial L^k}dL^k - l_x^{jk} \\ v_y^{jk} = \dfrac{\partial y}{\partial a_0}da_0 + \cdots + \dfrac{\partial y}{\partial a_i}da_i + \dfrac{\partial y}{\partial b_0}db_0 + \cdots + \dfrac{\partial y}{\partial b_i}db_i + \dfrac{\partial y}{\partial \omega^j}d\omega^j + \dfrac{\partial y}{\partial \varphi^j}d\varphi^j + \dfrac{\partial y}{\partial \kappa^j}d\kappa^j + \dfrac{\partial y}{\partial B^k}dB^k + \dfrac{\partial y}{\partial L^k}dL^k - l_y^{jk} \end{cases}, \quad 0 \le i \le 5 \quad (8)$$

For simplicity, Eq. (8) can be written as Eq. (9):

$$\begin{cases} v_x^{jk} = \dfrac{\partial x}{\partial A}dA + \dfrac{\partial x}{\partial C^j}dC^j + \dfrac{\partial x}{\partial D^k}dD^k - l_x^{jk} \\ v_y^{jk} = \dfrac{\partial y}{\partial A}dA + \dfrac{\partial y}{\partial C^j}dC^j + \dfrac{\partial y}{\partial D^k}dD^k - l_y^{jk} \end{cases}, \quad 0 \le i \le 5 \quad (9)$$

where $dA = [da_0 \ \cdots \ da_i \ \ db_0 \ \cdots \ db_i]$ represents the correction of the distortion calibration parameters, $dC^j = [d\omega^j \ d\varphi^j \ d\kappa^j]$ is the correction of the offset-matrix calibration parameters of image $j$, and $dD^k = [dB^k \ dL^k]$ represents the correction of the longitude and latitude of object point $k$. Considering the different images and different object points, the error equation can be written as:


$$\begin{cases} v_x^{1,1} = \dfrac{\partial x}{\partial A}dA + \dfrac{\partial x}{\partial C^1}dC^1 + \dfrac{\partial x}{\partial D^1}dD^1 - l_x^{1,1} \\ v_y^{1,1} = \dfrac{\partial y}{\partial A}dA + \dfrac{\partial y}{\partial C^1}dC^1 + \dfrac{\partial y}{\partial D^1}dD^1 - l_y^{1,1} \\ \qquad \vdots \\ v_x^{m,n} = \dfrac{\partial x}{\partial A}dA + \dfrac{\partial x}{\partial C^m}dC^m + \dfrac{\partial x}{\partial D^n}dD^n - l_x^{m,n} \\ v_y^{m,n} = \dfrac{\partial y}{\partial A}dA + \dfrac{\partial y}{\partial C^m}dC^m + \dfrac{\partial y}{\partial D^n}dD^n - l_y^{m,n} \end{cases}, \quad 0 \le i \le 5 \quad (10)$$

where $m$ represents the number of images ($m > 1$) and $n$ represents the number of ground points, which are intersected from the corresponding points between images. For convenience, Eq. (10) can be simplified into Eq. (11):

$$V = A\,t + D\,X - K \quad (11)$$

where $t = [dA \ \ dC^1 \ \cdots \ dC^m] = [da_0 \ \cdots \ da_i \ \ db_0 \ \cdots \ db_i \ \ d\omega^1 \ d\varphi^1 \ d\kappa^1 \ \cdots \ d\omega^m \ d\varphi^m \ d\kappa^m]$ is the correction of the image calibration parameters, $X = [dB^1 \ dL^1 \ \cdots \ dB^n \ dL^n]$ is the correction of the object points' longitude and latitude, $A$ and $D$ are the coefficient matrices of the error equation, and $K$ is the constant vector. Eq. (11) is the basic error equation to be solved.

2.2.3. Fast solution for calibration parameters

The corresponding normal equation of (11) is Eq. (12):

$$\begin{bmatrix} A^T A & A^T D \\ D^T A & D^T D \end{bmatrix} \begin{bmatrix} t \\ X \end{bmatrix} = \begin{bmatrix} A^T K \\ D^T K \end{bmatrix} \quad (12)$$

which can be simplified to Eq. (13):

$$\begin{bmatrix} N_{11} & N_{12} \\ N_{21} & N_{22} \end{bmatrix} \begin{bmatrix} t \\ X \end{bmatrix} = \begin{bmatrix} K_1 \\ K_2 \end{bmatrix} \quad (13)$$

The reduced normal equation yielded by eliminating the corrections of the longitude and latitude of the object points is shown in Eq. (14):

$$t = \left(N_{11} - N_{12} N_{22}^{-1} N_{12}^{T}\right)^{-1}\left(K_1 - N_{12} N_{22}^{-1} K_2\right) \quad (14)$$

Considering that correlation between the parameters to be solved would make Eq. (14) ill-conditioned, we adopted the method of iteration by correcting characteristic values (ICCV) (Wang et al., 2001) to solve this problem. After the corrections of the calibration parameters are calculated, the calibration parameters can be updated. When the corrections of the calibration parameters are sufficiently small, the final calibration parameters are obtained. Using this fast solution, we also avoid the problem that nonlinear equations with massive computational loads are often unstable.
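The elimination behind Eq. (14) is a Schur complement on the block normal matrix of Eqs. (12)–(13). The sketch below shows one adjustment step with dense matrices for clarity; it is illustrative only, omits the ICCV regularization mentioned above, and a practical implementation would exploit the block-diagonal structure of $N_{22}$ over the object points.

```python
import numpy as np

def solve_reduced_normal_equations(A, D, K):
    """One Gauss-Newton step of Eqs. (12)-(14): eliminate the object-point
    corrections X and solve for the calibration-parameter corrections t.

    A : (2mn, p)  design matrix w.r.t. the calibration parameters
    D : (2mn, 2n) design matrix w.r.t. the object-point latitude/longitude
    K : (2mn,)    constant (observation) vector
    """
    N11 = A.T @ A
    N12 = A.T @ D
    N22 = D.T @ D
    K1, K2 = A.T @ K, D.T @ K

    N22_inv = np.linalg.inv(N22)                  # block-diagonal in practice
    reduced_N = N11 - N12 @ N22_inv @ N12.T       # Schur complement
    reduced_K = K1 - N12 @ N22_inv @ K2
    t = np.linalg.solve(reduced_N, reduced_K)     # Eq. (14)
    X = N22_inv @ (K2 - N12.T @ t)                # back-substitute object points
    return t, X

# Tiny random example just to exercise the shapes.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 6))
D = rng.normal(size=(40, 10))
K = rng.normal(size=40)
t, X = solve_reduced_normal_equations(A, D, K)
print(t.shape, X.shape)   # (6,) (10,)
```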

2.3. Analysis of elevation error

Some factors can influence the proposed method, such as the lack of an absolute reference, strong correlation between parameters, over-parameterization, the quality of the corresponding points, and the elevation error. Among them, the elevation error can be analyzed quantitatively. Fig. 5 shows the elevation error analysis schematic. S is the projection center of the sensor, the thick solid line represents the focal plane, SO is the focal length f in pixels, the swing angle is θ, the real elevation is at point B, and the elevation used in the calculation is at point C, so the elevation error BC is dh. Point B projects into image S at b, and point C projects into image S at c. The view angle ∠bSO of image point b in the image coordinate system is α. Thus, bc is the projection error resulting from the elevation error.

Fig. 5. Schematic showing analysis of the elevation error induced by simplification of the real surface.

To calculate bc, we draw a line through point C parallel to bc, which intersects SB at D, and extend SO to intersect DC at point O′. ∠SBC = ∠ASB = θ − α, and ∠CDB = ∠cbB = 90° + α. Then the projection error bc can be calculated by:

$$bc = \frac{f \cdot DC}{SO'} = \frac{\sin(\theta-\alpha)\left(\cos\theta\cos\alpha + \sin\theta\sin\alpha\right) f}{\left(\cos\alpha \cdot H - \cos\theta\cos(\theta-\alpha)\,dh\right)\cos\alpha}\, dh \quad (15)$$

Given that $dh \ll H$, and that $\sin\theta\sin\alpha$ and $\cos\theta\cos(\theta-\alpha)\,dh$ are comparatively small, Eq. (15) can be simplified as:

$$bc \approx \frac{\sin(\theta-\alpha)\cos\theta \cdot f}{\cos\alpha \cdot H}\, dh \quad (16)$$

Taking the derivative of Eq. (16) with respect to α gives:

$$bc' = \left(\frac{-\cos(\theta-\alpha)}{\cos\alpha} + \frac{\sin(\theta-\alpha)\sin\alpha}{\cos^2\alpha}\right)\frac{f\cos\theta}{H}\,dh = -\frac{\cos(\theta-\alpha)\cos\alpha - \sin(\theta-\alpha)\sin\alpha}{\cos^2\alpha}\cdot\frac{f\cos\theta}{H}\,dh = -\frac{\cos^2\theta}{\cos^2\alpha}\cdot\frac{f}{H}\,dh \le 0 \quad (17)$$

Because bc′ is never positive, bc is a monotonically decreasing function of α. Thus, the maximum projection error is reached at the minimum value of α, as shown in Eq. (18):

$$bc_{\max} = \frac{\sin(\theta-\alpha_{\min})\cos\theta \cdot f}{\cos\alpha_{\min}\cdot H}\, dh \quad (18)$$

For the GF-1 WFV-1 sensor, f is approximately 41,538 pixels and H is approximately 644.5 km. The swing angle θ is 24°, and α ranges from −8.22° to 8.22°. Under these conditions, the maximum projection error can be calculated by Eq. (19):

$$bc_{\max} = \frac{\sin(24° + 8.22°)\cos(24°)\times 41{,}538}{\cos(8.22°)\times 644{,}500}\, dh = 0.0317\, dh \ \text{(pixel)} \quad (19)$$

where the projection error bc is proportional to the elevation error. When dh is 30 m, the elevation error results in a projection error for the GF-1 WFV-1 camera of only approximately 1 pixel. For the GF-1 WFV-2 sensor, f is approximately 41,538 pixels and H is approximately 644.5 km. The swing angle θ is 8°, and α ranges from −8.22° to 8.22°. Thus, the maximum projection error can be calculated by Eq. (20):

$$bc_{\max} = \frac{\sin(8° + 8.22°)\cos(8°)\times 41{,}538}{\cos(8.22°)\times 644{,}500}\, dh = 0.0180\, dh \ \text{(pixel)} \quad (20)$$

When dh is 55 m, the elevation error results in a projection error for the GF-1 WFV-2 camera of only approximately 1 pixel.
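The coefficients 0.0317 and 0.0180 in Eqs. (19) and (20), and the ≈0.48-pixel bound quoted in Section 3.1 for a 15-m elevation error, can be checked directly from Eq. (18). The short script below is a verification sketch, not part of the paper's workflow.

```python
import math

def max_projection_error_per_metre(theta_deg, alpha_min_deg, f_pixels, H_m):
    """Eq. (18): maximum projection error bc per metre of elevation error dh."""
    theta = math.radians(theta_deg)
    alpha_min = math.radians(alpha_min_deg)
    return math.sin(theta - alpha_min) * math.cos(theta) * f_pixels / (math.cos(alpha_min) * H_m)

f, H = 41538.0, 644500.0
k_wfv1 = max_projection_error_per_metre(24.0, -8.22, f, H)   # swing angle 24 deg
k_wfv2 = max_projection_error_per_metre( 8.0, -8.22, f, H)   # swing angle  8 deg
print(f"WFV-1: {k_wfv1:.4f} pixel per metre -> {k_wfv1 * 30:.2f} px for dh = 30 m")
print(f"WFV-2: {k_wfv2:.4f} pixel per metre -> {k_wfv2 * 55:.2f} px for dh = 55 m")
print(f"Section 3.1 bound: {k_wfv1 * 15:.4f} px for dh = 15 m")
```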

3. Experiments

3.1. Study area and data sources

To sufficiently validate the proposed method, several GF-1 WFV-1 images were collected for the experiment. Table 3 shows detailed information about the experimental images covering the Shanxi, Henan, and Hebei areas in China. Scenes 129419 and 098850 were used to perform the auto-calibration. The two images cover the plains of North China in Hebei province. Their coverage and corresponding elevation ranges are shown in Fig. 6. Given that the elevation ranges from 3 m to 30 m, the real elevation of the overlap area is set to approximately 15 m. According to the elevation error analysis, the maximum error induced by this approximation will theoretically be less than 0.4755 pixels for GF-1 WFV-1. Therefore, these two scenes are ideal for the validation and implementation of the proposed method. The other scenes are used for validation.

Table 3. Detailed information of the experimental images.

Scene ID   Area     Acquisition date     Elevation range (m)
068316     Shanxi   August 10, 2013      (740, 2090)
079476     Henan    September 3, 2013    (60, 1010)
098850     Hebei    October 17, 2013     (3, 30)
108244     Shanxi   November 7, 2013     (740, 2090)
125565     Shanxi   November 27, 2013    (130, 2060)
125567     Henan    November 27, 2013    (130, 1530)
126740     Shanxi   December 5, 2013     (50, 1350)
129419     Hebei    December 9, 2013     (3, 30)
132279     Henan    December 13, 2013    (30, 430)

3.2. Distortion detection for the GF-1 WFV-1 camera

As shown in Fig. 7, 13,689 evenly distributed corresponding points were obtained from the overlap area between scenes 129419 and 098850 using high-accuracy matching methods (Leprince et al., 2007, 2008) so that the proposed method could be used to detect distortions. After obtaining sufficient corresponding points, the calibration parameters were estimated with the proposed method according to Eq. (14). The distortion results are shown in Fig. 8, in which the maximum distortion across track and along track is 8 pixels and 0.004 pixels, respectively. The distortion across track can be described by a quintic polynomial, whereas the distortion along track is not significant. In short, the real order of the distortion model along track is less than 5, and the calibration model and the error equation are over-parameterized (see Fig. 8).

3.3. Validation of calibration parameters

The positioning accuracies before and after applying the calibration parameters for the GF-1 WFV-1 camera's images were assessed with check points (CPs) that were obtained from the GCF image by high-precision matching methods (Huang et al., 2016; Leprince et al., 2007, 2008) or manually extracted from Google Earth. Because the validation mainly concerns the effect of the calibration parameters in compensating for the camera's distortion, the image affine model was adopted as the exterior orientation model (Fraser and Hanley, 2003; Fraser and Yamakawa, 2004) to remove other errors caused by the exterior elements (see the illustrative sketch below).

3.3.1. Accuracy validation using GCF CPs

Scenes 068316, 108244, 125565 and 126740 and the corresponding Shanxi GCF were used to conduct the validation. The coverage of these test scenes and the Shanxi GCF is shown in Fig. 9. Due to the wide swath of WFV images, the four scenes covering the GCF construct an entire image. The corresponding Shanxi GCF includes a 1:5000 digital orthophoto map (DOM) and a digital elevation model (DEM), whose specifications are given in Table 2.
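A minimal sketch of the orientation and accuracy-assessment step referenced above follows. It assumes the common image-space affine bias-compensation form (measured coordinates modeled as an affine function of the model-projected sample and line) fitted to four GCPs and evaluated on check points; the authors do not spell out the exact parameterization, and all numbers are placeholders.

```python
import numpy as np

def fit_affine(projected, measured):
    """Least-squares affine bias compensation per coordinate: measured ~ T @ [1, s, l].

    projected : (n, 2) image coordinates predicted by the geometric model (sample, line)
    measured  : (n, 2) image coordinates of the GCPs measured in the image
    Returns a (2, 3) matrix of affine coefficients.
    """
    n = projected.shape[0]
    X = np.column_stack((np.ones(n), projected))          # (n, 3)
    T, *_ = np.linalg.lstsq(X, measured, rcond=None)      # (3, 2)
    return T.T

def residual_stats(projected, measured, T):
    """Max / Min / RMS of the planimetric residual length at check points."""
    n = projected.shape[0]
    X = np.column_stack((np.ones(n), projected))
    res = measured - X @ T.T
    norm = np.linalg.norm(res, axis=1)
    rms = np.sqrt(np.mean(np.sum(res ** 2, axis=1)))
    return norm.max(), norm.min(), rms

# Illustrative numbers only: 4 GCPs near the scene corners, a handful of CPs.
gcp_proj = np.array([[10.0, 10.0], [11990.0, 15.0], [20.0, 13380.0], [11980.0, 13390.0]])
gcp_meas = gcp_proj + np.array([1.2, -0.8])              # pretend constant bias
cp_proj  = np.array([[6000.0, 6700.0], [3000.0, 10000.0], [9000.0, 2000.0]])
cp_meas  = cp_proj + np.array([1.2, -0.8]) + 0.3 * np.random.default_rng(1).normal(size=cp_proj.shape)

T = fit_affine(gcp_proj, gcp_meas)
print(residual_stats(cp_proj, cp_meas, T))
```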

Fig. 6. Coverage of calibration images and corresponding elevation range.


Fig. 7. Corresponding points for calibration between (a) scene 129419 and (b) scene 098850.

Fig. 8. Distortions of the GF-1 WFV-1 camera both (a) across and (b) along track.

Fig. 9. Coverage of validation scenes and the GCF for Shanxi.


Table 4. Orientation accuracy before and after calibration evaluated by GCF CPs (unit: pixel).

Scene ID   No. GCPs/CPs   Sample range (pixel)          Line    Sample   Max     Min     RMS
068316     4/15,800       6300–9000            Ori.     0.383   0.537    2.345   0.005   0.660
                                               Pro.     0.384   0.416    2.019   0.001   0.566
108244     4/18,057       10,200–12,000        Ori.     0.382   0.864    4.863   0.005   0.945
                                               Pro.     0.382   0.411    1.641   0.005   0.561
125565     4/19,459       3200–5700            Ori.     0.374   0.428    3.045   0.005   0.569
                                               Pro.     0.374   0.387    3.099   0.003   0.538
126740     4/14,551       500–2700             Ori.     0.432   0.813    3.973   0.009   0.920
                                               Pro.     0.432   0.439    3.091   0.002   0.616

Ori. indicates the orientation accuracy without using auto-calibration parameters; Pro. represents the orientation accuracy after applying auto-calibration parameters. The sample range represents the GCF coverage in terms of the start and end samples of the images across track.

Table 4 shows that the orientation accuracies are all within 1 pixel both before and after applying the auto-calibration parameters for the validation scenes, and the main distortion manifests across track. Because scenes 108244 and 126740 both lie at the ends of the sample direction, where the distortion is more serious, their accuracy (0.9 pixels) is lower than that of scenes 068316 and 125565 (0.6 pixels) when the calibration parameters are not applied. After applying the auto-calibration parameters to compensate for the distortion, the orientation accuracy is improved, especially for scenes 108244 and 126740 across track, with orientation accuracies within 0.6 pixels for all four scenes.

The residual plots before and after applying the auto-calibration parameters are shown in Fig. 10(a) and (b), respectively. The residual errors before calibration still contain a systematic error across track (a quintic polynomial), although the orientation accuracy before calibration is as high as 1 pixel. After employing the calibration parameters to compensate for the distortion, the residual errors are random and constrained within 0.6 pixels, which demonstrates that the calibration parameters are effective.

3.3.2. Accuracy validation using Google Earth CPs

Due to the coverage restriction of the GCF, validation using GCF CPs cannot reflect the inner accuracy of the entire scene. Considering the 16-m resolution of the GF-1 WFV camera, CPs manually extracted from Google Earth are used to validate the effectiveness of auto-calibration. As shown in Table 5, the maximum errors without using auto-calibration parameters reach 5.5 pixels, and the orientation accuracy is only about 2 pixels for every image.

Fig. 10. Residual error (a) before and (b) after calibration. The horizontal axis denotes the image row across the track and the vertical axis denotes residual errors after orientation.

Table 5. Orientation accuracy before and after applying auto-calibration parameters evaluated by Google Earth CPs (unit: pixel).

Scene ID   No. GCPs/CPs            Line    Sample   Max     Min     RMS
068316     4/16            Ori.    0.916   1.069    2.692   0.207   1.410
                           Pro.    0.678   0.467    1.800   0.122   0.824
079476     4/24            Ori.    0.840   1.921    5.538   0.512   2.097
                           Pro.    0.823   0.597    2.395   0.163   1.016
125567     4/22            Ori.    0.966   1.721    3.173   0.541   1.973
                           Pro.    0.730   0.438    1.566   0.265   0.852
132279     4/22            Ori.    0.790   1.991    4.922   0.249   2.142
                           Pro.    0.772   0.667    2.403   0.305   1.020

Ori. indicates the orientation accuracy without using auto-calibration parameters; Pro. represents the orientation accuracy after applying auto-calibration parameters.


After applying the auto-calibration parameters, the maximum errors are reduced to 2.4 pixels, and the orientation accuracy is about 1 pixel for every image. In addition, the orientation residual plots of scenes 068316 and 125567 before and after employing the auto-calibration parameters are shown in Fig. 11. Without the auto-calibration parameters, the four corners of the scenes are more accurate than the other regions (Fig. 11(a) and (b)) because the affine model with 4 GCPs is only effective at absorbing the influence of the higher-order distortions at the four corners. After using the auto-calibration parameters (Fig. 11(c) and (d)), the accuracy is consistently approximately 1 pixel and the residual errors are random. In short, the distortion has been eliminated by calibration and the images are undistorted, with the 4 GCPs absorbing the residual systematic errors.


3.4. Comparison with traditional methods

The conventional GCF method (Huang et al., 2016) was also used to acquire calibration parameters, so a comparison between the auto-calibration method and the GCF method could be performed. Two sets of calibration parameters, one from the proposed method and one from the GCF method, were applied to scenes 068316, 079476, 125567, and 132279, and the orientation accuracies of these scenes were compared (Table 6). Clearly, the accuracy of the proposed method is lower than that of the GCF method, both in line and in sample, although it remains consistently around 1 pixel.

Fig. 11. Orientation errors before and after calibration. (a) Scene 068316 before calibration, (b) Scene 125567 before calibration, (c) Scene 068316 after calibration, and (d) Scene 125567 after calibration.


Table 6. Orientation accuracy comparison between the proposed method and the GCF method (unit: pixel).

Scene ID           Line    Sample   Max     Min     RMS
068316     Pro.    0.678   0.467    1.800   0.122   0.824
           GCF     0.430   0.437    0.991   0.130   0.613
079476     Pro.    0.823   0.597    2.395   0.163   1.016
           GCF     0.646   0.635    1.788   0.088   0.906
125567     Pro.    0.730   0.438    1.566   0.265   0.852
           GCF     0.384   0.433    1.072   0.079   0.579
132279     Pro.    0.772   0.667    2.403   0.305   1.020
           GCF     0.525   0.505    1.198   0.054   0.728

Pro. denotes the orientation accuracy using calibration parameters from the proposed method; GCF denotes the orientation accuracy using calibration parameters obtained from the GCF method.

Reasons for the lower accuracy may include the lack of an absolute reference, the strong correlation between calibration parameters, over-parameterization of the calibration model, the quality of the corresponding points, and the elevation error. The influence of the elevation error has been analyzed above (maximum 0.4 pixels). In addition, due to the poor radiometric quality of GF-1 WFV, the registration accuracy of the corresponding points is approximately 0.5 pixels, as can be seen from Fig. 7(b). Thus, unlike in the GCF method, the low registration accuracy inevitably results in unstable solutions of the proposed method. If these two problems are solved, we believe that the accuracy of the proposed method will improve and approach that of the GCF method.

4. Conclusion

In this study, an auto-calibration method was proposed to calibrate GF-1 WFV camera distortions. The proposed method uses at least two images with an area of overlap that cover flat terrain, and takes advantage of the elevation residuals between the real and calculated elevations to detect nonlinear distortions. Images captured by the GF-1 WFV-1 camera were collected as experimental data. Several conclusions were drawn:

(1) Systematic errors manifest across track in GF-1 WFV-1 images when auto-calibration parameters are not applied, which reduces geometric performance, especially in cartographic applications.

(2) After applying the auto-calibration parameters to compensate for the GF-1 WFV-1 camera distortions, images with superior orientation accuracy are achieved. Experiments demonstrate that the orientation accuracy of the proposed method evaluated by GCF CPs is within 0.6 pixels, and the residual errors manifest as random errors. Validation using Google Earth CPs further proves the effectiveness of auto-calibration, and the entire scene is undistorted compared with the case without calibration parameters.

(3) It should be noted that the orientation accuracy of the proposed method is close to that of the conventional GCF method, with a maximum difference of 0.3 pixels induced by the lack of an absolute reference, strong correlation between the calibration parameters, over-parameterization of the calibration model, and the quality of the corresponding points.

Despite the promising results achieved for the WFV-1 camera, the proposed method cannot yet be applied to the other WFV cameras onboard the GF-1 satellite because of image collection restrictions and the fact that only one of the four WFV cameras onboard the satellite is considered. Thus, implementation of auto-calibration using flat terrain for the other WFV cameras is required. Furthermore, auto-calibration of the four WFV cameras onboard one satellite needs

further research. When compared with traditional methods that are implemented with a GCF or that depend on satellite agility, the proposed method is free from these constraints and can therefore compensate for the distortions in WFV images on the GF-1 satellite.

Acknowledgements

This work was supported by the Key Research and Development Program of the Ministry of Science and Technology (2016YFB0500801), the National Natural Science Foundation of China (Grant Nos. 91538106, 41501503, 41601490, 41501383), the China Postdoctoral Science Foundation (Grant No. 2015M582276), the Hubei Provincial Natural Science Foundation of China (Grant No. 2015CFB330), the Open Research Fund of the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (Grant No. 15E02), the Open Research Fund of the State Key Laboratory of Geo-information Engineering (Grant No. SKLGIE2015-Z-3-1), and the Fundamental Research Funds for the Central Universities (Grant No. 2042016kf0163). The authors also thank the anonymous reviewers for their constructive comments and suggestions.

References

Bai, Z.G., 2013. GF-1 satellite - the first satellite of CHEOS. Aerosp. Chin. 14, 11–16.
Bouillon, A., 2003. SPOT5 HRG and HRS first in-flight geometric quality results. Int. Symp. Rem. Sens., 212–223.
Bouillon, A., Breton, E., Lussy, F.D., Gachet, R., 2003. SPOT5 geometric image quality. In: Geoscience and Remote Sensing Symposium, 2003. IGARSS '03. Proceedings. 2003 IEEE International, vol. 301, pp. 303–305.
CNES, C.N.d.E.S., 2004. SPOT Image Quality Performances.
Delevit, J.M., Greslou, D., Amberg, V., Dechoz, C., De Lussy, F., Lebegue, L., Latry, C., Artigues, S., Bernard, L., 2012. Attitude assessment using Pleiades-HR capabilities. ISPRS - Int. Arch. Photogr. Rem. Sens. Spat. Inform. Sci. XXXIX-B1, 525–530.
Di, K.C., Liu, Y.L., Liu, B., Peng, M., Hu, W.M., 2014. A self-calibration bundle adjustment method for photogrammetric processing of Chang'E-2 stereo lunar imagery. IEEE Trans. Geosci. Remote Sens. 52, 5432–5442.
Faugeras, O.D., Luong, Q.T., Maybank, S.J., 1992. Camera self-calibration: theory and experiments. Springer, Berlin, Heidelberg.
Fraser, C.S., Hanley, H.B., 2003. Bias compensation in rational functions for IKONOS satellite imagery. Photogr. Eng. Rem. Sens. 69, 53–58.
Fraser, C.S., Yamakawa, T., 2004. Insights into the affine model for high-resolution satellite sensor orientation. ISPRS J. Photogr. Rem. Sens. 58, 275–288.
Greslou, D., Delussy, F., 2006. Geometric calibration of Pleiades location model. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Paris, France, pp. 18–23.
Habib, A.F., Michel, M., Young, R.L., 2010. Bundle adjustment with self-calibration using straight lines. Photogr. Rec. 17, 635–650.
Hartley, R.I., 1997. Self-calibration of stationary cameras. Int. J. Comput. Vision 22, 5–23.
Huang, W.C., Zhang, G., Tang, X.M., Li, D.R., 2016. Compensation for distortion of basic satellite images based on rational function model. IEEE J. Select. Top. Appl. Earth Observ. Rem. Sens. 9, 5767–5775.
Jiang, Y.H., Zhang, G., Tang, X.M., Li, D.R., 2014. Geometric calibration and accuracy assessment of ZiYuan-3 multispectral images. IEEE Trans. Geosci. Remote Sens. 52, 4161–4172.

Kubik, P., Latry, C., Lebegue, L., 2013. PLEIADES-HR 1A&1B image quality commissioning: innovative radiometric calibration methods and results. SPIE Opt. Eng. + Appl., 886610.
Kubik, P., Lebègue, L., Fourest, F., Delvit, J.M., Lussy, F.D., Greslou, D., Blanchet, G., 2012. First in-flight results of Pleiades 1A innovative methods for optical calibration.
Leprince, S., Barbot, S., Ayoub, F., Avouac, J.P., 2007. Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements. IEEE Trans. Geosci. Remote Sens. 45, 1529–1558.
Leprince, S., Musé, P., Avouac, J.P., 2008. In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation. IEEE Trans. Geosci. Remote Sens. 46, 2675–2683.
Lu, C.L., Wang, R., Yin, H., 2014. GF-1 satellite remote sensing characters. Spacecr. Recov. Rem. Sens. 35, 67–73.
Malis, E., Cipolla, R., 2000a. Multi-view constraints between collineations: application to self-calibration from unknown planar structures. Springer, Berlin, Heidelberg.
Malis, E., Cipolla, R., 2000b. Self-calibration of zooming cameras observing an unknown planar structure. Vol. 1, pp. 85–88.
Malis, E., Cipolla, R., 2002. Camera self-calibration from unknown planar structures enforcing the multiview constraints between collineations. IEEE Trans. Pattern Anal. Mach. Intell. 24, 1268–1272.
Maybank, S.J., Faugeras, O.D., 1992. A theory of self-calibration of a moving camera. Int. J. Comput. Vision 8, 123–151.


Mulawa, D., 2008. On-orbit geometric calibration of the OrbView-3 high resolution imaging satellite. 35.
Radhadevi, P.V., Solanki, S.S., 2010. In-flight geometric calibration of different cameras of IRS-P6 using a physical sensor model. Photogr. Rec. 23, 69–89.
Takaku, J., Tadono, T., 2009. PRISM on-orbit geometric calibration and DSM performance. IEEE Trans. Geosci. Remote Sens. 47, 4060–4073.
Tang, X.M., Zhang, G., Zhu, X.Y., Pan, H.B., Jiang, Y.H., Zhou, P., Wang, X., 2012. Triple linear-array image geometry model of ZiYuan-3 surveying satellite and its validation. Int. J. Image Data Fusion 4, 33–51.
Wang, X.Z., Liu, D.Y., Zhang, Q.Y., Huang, H.L., 2001. The iteration by correcting characteristic value and its application in surveying data processing. J. Heilongjiang Inst. Technol.
XinHuaNet, 2013. China launches Gaofen-1 satellite. (accessed 26 April, 2013).
Xu, J.Y., 2004. Study of CBERS CCD camera bias matrix calculation and its application. Spacecr. Recov. Rem. Sens.
Xu, K., Jiang, Y.H., Zhang, G., Zhang, Q.J., Wang, X., 2017. Geometric potential assessment for ZY3-02 triple linear array imagery. Rem. Sens. 9, 658.
Yuan, X.X., 2012. Calibration of angular systematic errors for high resolution satellite imagery. Acta Geodaet. Cartogr. Sin. 41, 385–392.
Zhang, G., Jiang, Y.H., Li, D.R., Huang, W.C., Pan, H.B., Tang, X.M., Zhu, X.Y., 2014. In-orbit geometric calibration and validation of ZY-3 linear array sensors. Photogram. Rec. 29, 68–88.
Zheng, M.T., Zhang, Y.J., Zhu, J.F., Xiong, X.D., 2015. Self-calibration adjustment of CBERS-02B long-strip imagery. IEEE Trans. Geosci. Remote Sens. 53, 3847–3854.