Signal Processing 171 (2020) 107506
Inverse-mapping filtering polar formation algorithm for high-maneuverability SAR with time-variant acceleration
Yachao Li∗, Xuan Song, Liang Guo, Haiwen Mei, Yinghui Quan
National Laboratory of Radar Signal Processing, Xidian University, Xi'an 710071, China
Article history: Received 28 September 2019; Revised 31 December 2019; Accepted 25 January 2020; Available online 27 January 2020
Keywords: High-maneuverability synthetic aperture radar; Polar formation algorithm; Time-variant acceleration; Geometric distortion correction
Abstract: This paper investigates a new imaging algorithm for high-maneuverability synthetic aperture radar (HM-SAR) that travels with high velocity and time-variant acceleration. To be specific, a more precise acceleration-separation-based equivalent range model (ASERM) is established, which can separate time-variant acceleration errors from the ideal linear range model. After 2-D spectral interpolation, a joint compensation method based on inverse mapping and local filtering is constructed to compensate the space-variant motion error (ME) and the image distortions caused by the time-variant acceleration.
1. Introduction

High-resolution, undistorted SAR ground images are very important for Earth observation [1–7]. A radar mounted on a high-maneuverability platform emits beams that continuously illuminate the target scene in order to obtain 2-D high-resolution SAR ground images. These images can be matched with referential space-borne images to obtain accurate location information for Earth observation and attitude correction of the platform. However, the time-variant acceleration brought by the high dynamics of the HM-SAR platform causes serious 2-D spectral distortion and makes imaging algorithms based on a uniform linear trajectory no longer applicable. In addition, the special imaging geometry of HM-SAR poses many new problems. In particular, the large grazing angle increases the image resolution span during image geometric distortion correction (IGDC), which makes the image gray-scale distortion more serious, and the descending flight exacerbates the 2-D coupling distortion and increases the difficulty of IGDC. Therefore, to meet the requirements of HM-SAR imaging, it is crucial to design a deeply fused algorithm with high real-time performance that integrates imaging, motion compensation (MoCo) and IGDC.
∗ Corresponding author.
E-mail addresses: [email protected] (Y. Li), [email protected] (X. Song), [email protected] (L. Guo), [email protected] (H. Mei), [email protected] (Y. Quan).
https://doi.org/10.1016/j.sigpro.2020.107506
0165-1684/© 2020 Elsevier B.V. All rights reserved.
For spotlight SAR imaging with a curved track, existing imaging algorithms include the Back Projection Algorithm (BPA) [5] and its improved forms [8–10], the Range Migration Algorithm (RMA) [11–13], the Polar Formation Algorithm (PFA) [14–17], etc. From the application point of view, the traditional BPA and its improved forms have high precision and can meet the accuracy requirements of SAR imaging with an arbitrary trajectory. Besides, they also have such advantages as flexible imaging plane selection, undistorted SAR ground images and easy multi-core parallel processing. However, these algorithms suffer from high complexity and low real-time performance, and are difficult to combine with MoCo algorithms [18–20]. Therefore, they are not suitable for HM-SAR systems with strict real-time requirements. With the development of hardware and the proposal of efficient algorithms such as the NuFFT, Stolt interpolation and non-interpolation methods [21–23], it has become possible to apply high-resolution imaging algorithms such as RMA and PFA on high-maneuverability platforms. RMA needs 1-D interpolation and imposes strict requirements on the flight trajectory; consequently, it cannot be directly applied to HM-SAR imaging with a curved trajectory. PFA adopts 2-D linear interpolation, which can prune the signal spectrum and resample the pixel grid in the 2-D wavenumber domain. The algorithm selects an appropriate support-domain width and adjusts the pixel resolution, which is beneficial to the subsequent IGDC operation. However, due to the three-axis time-variant acceleration and velocity of the HM-SAR platform, the uniform linear trajectory model on which the traditional RMA and PFA imaging algorithms are based is no longer valid. As a result,
the established non-approximate hyperbolic slant range model fails, and imaging algorithms based on the linear hyperbolic range model cannot meet the requirements of HM-SAR imaging with a curved trajectory. For the instantaneous range model of curvilinear SAR imaging, many models have been proposed to simplify the complex slant range, such as the modified equivalent slant range model (MESRM) [24], the modified advanced hyperbolic range equation model (MAHRE) [25], the improved fourth-order equivalent slant range model (IERM4) [26] and the error-projection-based equivalent range model (EPERM) [27]. These models use Taylor series expansion, and the curvilinear SAR range history is expressed as a combination of a uniform linear slant range component and an error projection component, which separates the motion error (ME) from the ideal linear trajectory. However, for three-axis time-variant acceleration errors, the accuracy of these models is relatively low and unsuitable for high-resolution SAR imaging. Given the above problems of HM-SAR imaging with time-variant acceleration, this paper proposes a new HM-SAR imaging algorithm named IMF-PFA. Compared with the existing algorithms, the proposed algorithm has the following innovations:
• The improved ASERM is more accurate and can separate time-variant acceleration errors from the linear range model, which provides the basis for the subsequent PFA interpolation operation.
• Three-axis velocity and time-variant acceleration are considered, which facilitates HM-SAR imaging with a curved trajectory.
• Pixel-point inverse mapping and local filtering are adopted in the proposed local joint compensation method of the ME. This makes the method more efficient than BPAs in terms of time complexity while inheriting such advantages of PFA as flexibility in choosing the imaging plane, obtaining undistorted SAR ground images and the capability of multi-core parallel processing.
• For IGDC, traditional imaging algorithms interpolate images from the slant plane to the ground plane, with a large pixel resolution span. In addition, the images may suffer gray-scale distortions due to the size limitation of the interpolation template. In this paper, PFA interpolation is used to project the signal spectrum directly onto the ground plane, which enables spectrum pruning and pixel grid resampling in the frequency domain. In the process of IGDC, the smaller the resolution span of the pixel grid is, the higher the image quality.
This paper is organized into six parts as follows. In Section 2, the improved ASERM [28] is introduced and compared with the traditional EPERM. Section 3 describes the proposed IMF-PFA in detail, and an analysis of practical application follows in Section 4. Section 5 further tests the proposed algorithm through simulation experiments and measured data processing. Conclusions are provided in Section 6.

2. Imaging geometry and ASERM

As illustrated in Fig. 1, the geometric configuration of the HM-SAR with time-variant acceleration is established. Here, the scene central point O is the coordinate origin, and the straight line AB is the ideal linear motion track of the radar platform. The radar works in the spotlight mode, and its beam center is always aligned with the scene center O. The angle θ1 is the inclination angle between line AB and the plane XOY, the angle θ2 is the instantaneous azimuth angle, and the angle β is the instantaneous grazing angle. The coordinate of the aperture center point is (x0, y0, H), and the coordinate of an arbitrary target point Pm is
Fig. 1. Geometric configuration of HM-SAR with time-variant acceleration.
(xp, yp, 0), and the velocity vector v of the platform is (vx, 0, vz). The error vector of the actual position of the platform deviating from the ideal position is d = (dx(tm), dy(tm), dz(tm)), and the corresponding acceleration vector is a(tm) = (ax(tm), ay(tm), az(tm)), of which the three-axis accelerations are functions of the azimuth slow time tm. Compared with a conventional airborne SAR platform, the synthetic aperture time of the HM-SAR is relatively short. Assume that the accelerations are third-order polynomial functions of tm:
a_x(t_m) = a_{x0} + a_{x1} t_m + a_{x2} t_m^2 + a_{x3} t_m^3
a_y(t_m) = a_{y0} + a_{y1} t_m + a_{y2} t_m^2 + a_{y3} t_m^3
a_z(t_m) = a_{z0} + a_{z1} t_m + a_{z2} t_m^2 + a_{z3} t_m^3    (1)
Here, a_{mn} (m = x, y, z; n = 0, 1, 2, 3) is the acceleration expansion coefficient. The instantaneous slant range from the radar to the target point Pm is
R_1(t_m) = \sqrt{(x_0 + v_x t_m + d_x(t_m) - x_p)^2 + (y_0 + d_y(t_m) - y_p)^2 + (v_z t_m + d_z(t_m) + H)^2}    (2)
where

d_x(t_m) = \iint a_x(t_m)\, dt_m\, dt_m
d_y(t_m) = \iint a_y(t_m)\, dt_m\, dt_m
d_z(t_m) = \iint a_z(t_m)\, dt_m\, dt_m    (3)
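The displacement errors in Eq. (3) follow directly from double integration of the cubic acceleration law in Eq. (1). The short Python sketch below evaluates them in closed form; the zero initial conditions at t_m = 0, the coefficient values (taken from Table 1) and the aperture timing are assumptions of the sketch, not prescriptions of the paper.

```python
import numpy as np

def displacement_error(coeffs, t):
    """Twice-integrated displacement d(t) for a cubic acceleration
    a(t) = a0 + a1*t + a2*t**2 + a3*t**3 (Eqs. (1) and (3)),
    assuming zero initial position/velocity error at t = 0."""
    a0, a1, a2, a3 = coeffs
    return a0 * t**2 / 2 + a1 * t**3 / 6 + a2 * t**4 / 12 + a3 * t**5 / 20

# Illustrative use with the X-direction coefficients listed in Table 1,
# over a 1.5 s aperture assumed to be centred on t_m = 0.
tm = np.linspace(-0.75, 0.75, 1001)
dx = displacement_error([-15, -25, 19, -8], tm)
```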
Based on the idea of slant range equivalence [26], the instantaneous slant range R1(tm) in Eq. (2) is equivalent to the combination of an ideal uniform linear slant range model and the ME. Such an equivalence relationship can guarantee spectrum accuracy while separating the impact of the accelerations on the slant range. Since the accelerations only introduce second- and higher-order ME terms instead of linear ME terms, and considering both imaging accuracy and computation, the acceleration error component is expanded from the second order to the sixth order. The corresponding improved ASERM is
R_{ASERM}(x_p, y_p; t_m) = R_{IR}(x_p, y_p; t_m) + R_{ACC}(x_p, y_p; t_m)
= \sqrt{(x_0 + v_x t_m - x_p)^2 + (y_0 - y_p)^2 + (v_z t_m + H)^2} + \sum_{i=2}^{6} A_i(x_p, y_p)\, t_m^i    (4)

A_i(x_p, y_p) = \left.\left( \frac{1}{i!}\frac{\partial^i R_1}{\partial t_m^i} - \frac{1}{i!}\frac{\partial^i R_{IR}}{\partial t_m^i} \right)\right|_{t_m = 0},\quad i = 2, \cdots, 6    (5)
where R_IR(x_p, y_p; t_m) is the uniform motion term, R_ACC(x_p, y_p; t_m) is the ME term, and A_i (i = 2, ..., 6) are the disturbance coefficients. All of them are introduced by the time-variant acceleration, as shown in Appendix A.
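Eq. (5) can also be evaluated numerically: sample the actual range R_1 of Eq. (2) and the ideal range R_IR of Eq. (4) around t_m = 0 and fit a polynomial to their difference. The sketch below does this with numpy; the geometry, the target location and the fitting window are placeholder assumptions chosen for illustration.

```python
import numpy as np

x0, y0, H = 29.3e3, 7.9e3, 17.5e3       # aperture-center position [m] (Table 1)
vx, vz = 1449.0, -388.0                 # platform velocity [m/s]
xp, yp = 1.0e3, 1.0e3                   # assumed target position [m]
ax, ay, az = [-15, -25, 19, -8], [20, -20, -5, 8], [-10, -30, 3, -5]

def disp(c, t):                         # double-integrated cubic acceleration
    a0, a1, a2, a3 = c
    return a0 * t**2 / 2 + a1 * t**3 / 6 + a2 * t**4 / 12 + a3 * t**5 / 20

def R1(t):                              # actual slant range, Eq. (2)
    return np.sqrt((x0 + vx * t + disp(ax, t) - xp)**2
                   + (y0 + disp(ay, t) - yp)**2
                   + (vz * t + disp(az, t) + H)**2)

def RIR(t):                             # ideal linear slant range in Eq. (4)
    return np.sqrt((x0 + vx * t - xp)**2 + (y0 - yp)**2 + (vz * t + H)**2)

t = np.linspace(-0.05, 0.05, 201)       # small window around t_m = 0
c = np.polyfit(t, R1(t) - RIR(t), 6)    # coefficients, highest power first
A = {i: c[6 - i] for i in range(2, 7)}  # numerical A_2 ... A_6 of Eq. (4)
```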
Table 1
Simulation parameters of equivalent slant range accuracy.
Carrier Frequency: 18 GHz
Radar Location: (29.3, 7.9, 17.5) km
Radar Velocity: (1449, 0, −388) m/s
Scene Center: (0, 0, 0) m
Squint Angle: 69.6°
Pulse Width: 5 us
Pulse Repeat Frequency: 8000 Hz
Range Bandwidth: 250 MHz
Sampling Rate: 300 MHz
Acceleration Coefficients, X Direction: [−15, −25, 19, −8]
Acceleration Coefficients, Y Direction: [20, −20, −5, 8]
Acceleration Coefficients, Z Direction: [−10, −30, 3, −5]
Scene Size: 4 km × 4 km
Azimuth Time: 1.5 s
Fig. 2. Range errors and phase errors of ASERM and EPERM. (a) Range errors. (b) Phase errors.
Compared with the equivalent slant range methods in [24–28], the model proposed in this paper is more accurate and can separate acceleration effects, which facilitates the subsequent parameter estimation and MoCo. In order to verify the effectiveness of the proposed equivalent range model, simulation experiments are conducted with the parameters in Table 1, and the EPERM proposed in [27] is selected for comparison. The maximum errors of the edge points are calculated for analysis. Fig. 2(a) shows the edge-point range error curves of the two range models. By comparison, it can be observed that the accuracy of the proposed ASERM is higher than that of EPERM. Fig. 2(b) illustrates the phase errors corresponding to the two models. The results show that the phase errors introduced by EPERM approach π/4, and consequently the edge points of the image will be defocused. On the contrary, the phase errors of the proposed ASERM are far less than π/4, which further demonstrates that ASERM can better meet the high-resolution imaging requirements of HM-SAR with time-variant acceleration.

3. Improved IMF-PFA

Assume that the radar transmits linear frequency modulated signals. Then the wavenumber-domain expression of the echo signal after range compression can be expressed as follows
S(k_r, t_m) = W_r(k_r)\, w_a(t_m) \exp\left( -j k_r R_{ASERM}(x_p, y_p; t_m) \right)    (6)

where W_r(·) is the range window function in the wavenumber domain, w_a(·) is the azimuth window function in the time domain, k_r is the range wavenumber, k_r = 4π(f_r + f_c)/c, c is the speed of light, f_r is the range frequency, and f_c is the radar carrier frequency.

3.1. Analysis of time-variant acceleration and rough compensation

From Eq. (4), it can be seen that through the improved ASERM, the ME term caused by the time-variant acceleration has been separated from the ideal uniform linear range model. To further reduce the impact of the time-variant acceleration on the spectrum and recover the 2-D spectrum corresponding to the ideal uniform linear trajectory, it is necessary to compensate the space-invariant component of the ME. In the spotlight mode, the azimuth bandwidth of the signal containing the ME term can be expressed as

B_a = \gamma_a(t_m) T_a = \frac{1}{\lambda} \cdot \int_0^{T_a} \sum_{i=2}^{6} i\,(A_i + C_i)\, t_m^{i-1}\, dt_m    (7)

C_i = \frac{1}{i!}\frac{\partial^i R_{IR}(t_m; x_p, y_p)}{\partial t_m^i},\quad i = 2, \cdots, 6    (8)
where γ_a(t_m) is the time-variant azimuth chirp rate and T_a is the azimuth synthetic aperture time. According to the expression of the azimuth bandwidth in Eq. (7), for an HM-SAR imaging system with a short wavelength, the time-variant accelerations cause a drastic change in the Doppler parameters of the echo signal, resulting in serious distortions of the azimuth spectrum [29,30]. Fig. 3(a) presents the 2-D spectrum without ME. When the ME term is positive, the spectrum of the echo signal becomes narrower along the azimuth, as shown in Fig. 3(b); when the ME term is negative, the azimuth spectrum is broadened and will be aliased, which affects the subsequent image focusing, as shown in Fig. 3(c) [29,30]. In addition, it can be observed from Fig. 3 that the second- and third-order terms of the time-variant acceleration also cause spectral distortions. To recover the distorted 2-D spectrum, the center of the scene is taken as a reference point to compensate for the space-invariant ME. The correction factor of the space-invariant ME is
H_{TACC}(k_r, t_m) = \exp\left( j k_r \sum_{i=2}^{6} \hat{A}_{i0}(0, 0)\, t_m^i \right)    (9)
where \hat{A}_{i0}(0, 0) is the ME coefficient A_i(x_p, y_p) evaluated at the scene center (0, 0).

3.2. Deramp process of azimuth spectrum

Based on the ASERM of the previous section, the spectral distortion caused by the time-variant accelerations is eliminated by compensating the space-invariant ME. The signal spectrum after this rough compensation is approximately the spectrum under an ideal linear trajectory. Similar to the traditional PFA, the azimuth Deramp processing removes invariant phase terms and compresses the azimuth spectrum. The generalized azimuth Deramp factor is
H_{Drmp}(k_r, t_m) = \exp\left( j k_r R_{IR0}(0, 0, t_m) \right)    (10)
where RIR0 (0, 0, tm ) is the value of RIR at the scene center (0, 0). The Deramp factor in [27] can only be applied to flat-flight configurations, while the Deramp factor in Eq. (10) is applicable to any configuration like the flat-flight and the descending segment. The expression of the signal after the azimuth Deramp process is
S(k_r, t_m) = W_r(k_r) W_a(t_m) \times \exp\left( j k_r \left( R_{IR0}(0, 0; t_m) - R_{IR}(x_p, y_p; t_m) \right) \right) \times \exp\left( j k_r \hat{R}_{FO}(x_p, y_p; t_m) \right)    (11)

where

\hat{R}_{FO}(x_p, y_p; t_m) = \sum_{i=2}^{6} \left( \hat{A}_{i0}(x_p, y_p) - \hat{A}_{i0}(0, 0) \right) t_m^i    (12)
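As a minimal illustration of the rough compensation in Eqs. (9) and (10), the sketch below multiplies the range-compressed wavenumber-domain data by the two factors. The array layout, the estimated scene-center coefficients Â_i0(0,0) and the reference range history R_IR0 are assumptions of the sketch.

```python
import numpy as np

def rough_compensation(signal, kr, tm, A_hat0, R_IR0):
    """signal : complex array of shape (len(kr), len(tm)), i.e. S(kr, tm)
    A_hat0 : dict {i: coefficient} of the scene-center ME terms, i = 2..6
    R_IR0  : ideal range history of the scene center sampled on tm."""
    # space-invariant ME correction factor, Eq. (9)
    me_phase = sum(A_hat0[i] * tm**i for i in range(2, 7))
    H_tacc = np.exp(1j * kr[:, None] * me_phase[None, :])
    # generalized azimuth Deramp factor, Eq. (10)
    H_drmp = np.exp(1j * kr[:, None] * R_IR0[None, :])
    return signal * H_tacc * H_drmp
```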
Fig. 3. Impacts of time-variant acceleration ME on the 2-D spectrum. (a) Spectrum without ME. (b) Spectrum with positive ME. (c) Spectrum with negative ME.
3.3. Spectrum projection interpolation and pixel grid resampling

At the terminal guidance phase, the radar obtains its 3-D position coordinates relative to the target through image matching [31–33] and locating, which can provide accurate position information for Earth observation. Therefore, in image matching applications, undistorted or low-distortion SAR ground images acquired in real time are superior to slant-range-plane SAR images. For squint SAR imaging, the traditional PFA [17,27] usually projects the spectrum onto a 2-D equivalent slant range plane for focusing, which makes the obtained slant-range-plane SAR image contain geometric distortions. Then, by the traditional IGDC method, the image is interpolated from the slant range plane to the ground plane to yield an undistorted SAR ground image. However, there are a large grazing angle and a large inclination angle in HM-SAR imaging, which make the pixel resolution span between the slant range image and the ground image large. Due to the size limitation of the interpolation template, the interpolation accuracy decreases, which results in scene information loss, gray-scale distortion and other image quality problems. The advantage of the PFA is that spectral pruning and pixel grid resampling can be implemented in the 2-D decoupling process of the spectrum through 2-D interpolation. By setting the interpolation kernel on the ground plane, this paper proposes a new spectral interpolation imaging scheme. The key idea is to pre-calculate the support domain width from the pixel grid resolution of the matching SAR image first. Then polar formation interpolation is applied to make the pixel resolution of the focused image closer to that of the SAR matching image, while ensuring that the spectrum is projected onto the ground. This reduces the image quality degradation caused by the large resolution span in the IGDC process. Section 4.1 presents the calculation and the analysis of the support domain width in more detail. In this way, the high-resolution SAR ground image can be obtained directly through focusing processing such as ME compensation and IGDC. Such spectrum projection methods are suitable for configurations like flat flight and descending flight, and are thus more advantageous in precision guidance applications. The phase expression of the first term in Eq. (11) is
\varphi(k_r, t_m) = k_r \left( R_{IR0}(0, 0; t_m) - R_{IR}(x_p, y_p; t_m) \right)    (13)

The first-order Taylor expansion of the first phase in Eq. (13) relative to the target point (x_p, y_p) at (0, 0) is

\varphi(k_r, t_m) = k_r \left( \Phi_x(t_m)\, x_p + \Phi_y(t_m)\, y_p \right)    (14)

where

\Phi_x(t_m) = -\frac{x_0 + v_x t_m}{R_{IR0}},\quad \Phi_y(t_m) = -\frac{y_0}{R_{IR0}}    (15)
Let

k_x = k_r \Phi_x(t_m),\quad k_y = k_r \Phi_y(t_m)    (16)

where k_x is the new range wavenumber and k_y is the new azimuth wavenumber [17]. In order to improve spectrum utilization, the spectrum is rotated, and the corresponding rotation matrix is as follows [34–37]

A = \begin{bmatrix} \cos\varphi_s & \sin\varphi_s \\ -\sin\varphi_s & \cos\varphi_s \end{bmatrix}    (17)

where \varphi_s is the rotation angle,

\varphi_s = \tan^{-1}\left( \frac{\Phi_y(t_m)}{\Phi_x(t_m)} \right)    (18)

Based on Eq. (17), the rotated wavenumber vector in the case of squint imaging is

k'_x = k_x \cos\varphi_s + k_y \sin\varphi_s,\quad k'_y = -k_x \sin\varphi_s + k_y \cos\varphi_s    (19)
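The mapping of Eqs. (16)-(19) can be realized as a scattered-to-uniform resampling of the spectrum. The following sketch labels every raw sample (k_r, t_m) with its ground-plane wavenumbers and interpolates onto a uniform (k'_x, k'_y) grid; the use of scipy.interpolate.griddata, a scalar rotation angle (e.g., evaluated at the aperture center) and linear interpolation are simplifications of this sketch rather than the paper's exact implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def polar_format_resample(S, kr, tm, phi_x, phi_y, phi_s, kxp_grid, kyp_grid):
    """S         : complex spectrum of shape (len(kr), len(tm))
    phi_x/phi_y  : Eq. (15) sampled on tm;  phi_s : scalar rotation angle
    kxp_grid, kyp_grid : desired uniform wavenumber axes on the ground plane."""
    KR = np.meshgrid(kr, tm, indexing="ij")[0]
    kx = KR * phi_x[None, :]                          # Eq. (16)
    ky = KR * phi_y[None, :]
    kxp = kx * np.cos(phi_s) + ky * np.sin(phi_s)     # Eq. (19)
    kyp = -kx * np.sin(phi_s) + ky * np.cos(phi_s)
    pts = np.column_stack([kxp.ravel(), kyp.ravel()])
    KXP, KYP = np.meshgrid(kxp_grid, kyp_grid, indexing="ij")
    # interpolate real and imaginary parts separately onto the uniform grid
    re = griddata(pts, S.real.ravel(), (KXP, KYP), method="linear", fill_value=0.0)
    im = griddata(pts, S.imag.ravel(), (KXP, KYP), method="linear", fill_value=0.0)
    return re + 1j * im
```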
Eqs. (16) and (19) constitute the interpolation kernel function that projects the signal spectrum from the coordinate domain (k_r, t_m) on the slant range plane to the coordinate domain (k'_x, k'_y) on the ground plane. After the 2-D spectral interpolation, the spectrum of the echo signal (k*_r, t*_m) is uniformly sampled in the domain (k'_x, k'_y) and non-uniformly sampled in the domain (k_r, t_m) [34–37]. Substituting Eqs. (16) and (19) into Eq. (13) gives the signal spectrum with respect to (k'_x, k'_y). Performing the second-order Taylor expansion of the spectrum at k'_x = k'_{x0} and k'_y = 0, the expression of the signal in the interpolated domain (k'_x, k'_y) is obtained:
S(k'_x, k'_y) = W_r(k'_x) W_a(k'_y) \times \exp\left( j \left[ b_{10}(k'_x - k'_{x0}) + b_{11} k'_y + b_{20}(k'_x - k'_{x0})^2 + b_{21} k'^2_y + b_{22}(k'_x - k'_{x0}) k'_y \right] \right) \times \exp\left( j \frac{\cos\varphi_s k'_x - \sin\varphi_s k'_y}{\Phi_x(t^*_m)} \hat{R}_{FO}(x_p, y_p; t^*_m) \right)    (20)
where k'_{x0} = 4π f_c / c; b_{10} and b_{11} represent the focusing positions of the target along the X-axis and Y-axis, respectively; b_{21} is the wavefront bending coefficient and reflects the Y-axis defocusing level; b_{20} represents the X-axis defocusing level; and b_{22} represents the coupling term. The Taylor expansion coefficients in Eq. (20) are given in Appendix B. In order to improve the efficiency of the subsequent space-variant phase filtering, the signal is transformed to the position-wavenumber domain. Ignoring the constant phase terms that have no effect on the focusing, there exists
S(X_{img}, k'_y) = \mathrm{sinc}(X_{img} + b_{10})\, W_a(k'_y) \exp\left( j \left( b_{11} k'_y + b_{21} k'^2_y \right) \right) \times \exp\left( j \frac{\cos\varphi_s k'_{x0} - \sin\varphi_s k'_y}{\Phi_x(t^*_m)} \hat{R}_{FO}(x_p, y_p; t^*_m) \right)    (21)

where

t^*_m = \frac{y_0 \left( k'_{x0}\cos\varphi_s - k'_y \sin\varphi_s \right) - x_0 \left( k'_{x0}\sin\varphi_s + k'_y \cos\varphi_s \right)}{v_x \left( k'_{x0}\sin\varphi_s + k'_y \cos\varphi_s \right)}    (22)
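Eq. (22) is a closed-form mapping from the interpolated wavenumber coordinates to the azimuth time, so it can be evaluated directly; the function below is a small sketch of that evaluation (the argument names are ours, and vectorized numpy inputs are assumed).

```python
import numpy as np

def tm_star(kyp, kx0, phi_s, x0, y0, vx):
    """Azimuth time t_m* of Eq. (22) for azimuth wavenumber(s) kyp."""
    u = kx0 * np.sin(phi_s) + kyp * np.cos(phi_s)
    v = kx0 * np.cos(phi_s) - kyp * np.sin(phi_s)
    return (y0 * v - x0 * u) / (vx * u)
```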
Fig. 4. The schematic diagram of local joint compensation. (a) Lay a set of pixel grids in the ground plane. (b) Finding the corresponding coordinates of the pixel point in the imaging plane. (c) Local compensation for residual space-variant phase errors.
It should be noted that the second exponential term in Eq. (21) represents the residual ME [31]. This term contains first-, second- and higher-order terms of k'_x, and their effects are analyzed in more detail in Section 4.3. In order to improve the real-time performance, the azimuth FFT is further performed on the signal in Eq. (21) to make the image coarsely focused. The expression of the coarsely focused image is

S(X_{img}, Y_{img}) = \mathrm{sinc}(X_{img} + b_{10})\, \mathrm{sinc}(Y_{img} + b_{11}) \times \exp\left( j \varphi_{temp}(X_{img}, Y_{img}) \right)    (23)
where φ_temp(X_img, Y_img) is the phase error in the complex image domain.

3.4. Local joint compensation for image distortion and space-variant phase errors

Based on the idea of back projection in BPA [5], this paper proposes a local joint compensation method for image distortion and space-variant phase errors based on inverse-mapping interpolation. The advantage of this method is that it uses fewer interpolated pixels and is easier to process in parallel. By mapping the pixel grids on the ground plane back to the imaging plane one by one, the data are intercepted in the complex image domain and local filtering compensation is performed for the space-variant phase errors. Therefore, the amount of calculation is small and the undistorted SAR ground image can be obtained directly. In addition, since compensation and correction are performed on each pixel, the image discontinuity caused by existing image-block compensation methods is avoided, which makes the image more suitable for the subsequent matching operation. The schematic diagram is shown in Fig. 4, and a detailed description of the method is as follows [34].

Step 1: Lay a set of M × N pixel grids in the ground plane along the X-axis and Y-axis, as shown in Fig. 4(a).

Step 2: It can be observed from Eq. (23) that the echo signal after PFA interpolation has been transformed to the 2-D complex image domain. For the ground pixel point (x_p, y_p), find its corresponding coordinates (x_img, y_img) in the imaging plane by Eq. (23): x_img = −b_{10}, y_img = −b_{11}, as shown in Fig. 4(b).

Step 3: To improve the efficiency of the joint compensation operation, local joint compensation of the space-variant phase errors is carried out. Take the position (x_img, y_img) in the image plane as the center and window out N_m × N_m data units. Perform an azimuth IFFT to change to the position-wavenumber domain, multiply with the wavefront bending compensation factor H_WBC and the ME compensation factor H_MEC, and perform an azimuth FFT to obtain the compensated local focusing image, as shown in Fig. 4(c). This yields the pixel information of the ground pixel point (x_p, y_p). The value of N_m is larger than the size of the sinc interpolation template, but far less than the number of azimuth sampling points. The two compensation factors are

H_{WBC} = \exp\left( -j b_{21} k'^2_y \right)    (24)

H_{MEC} = \exp\left( -j \frac{\cos\varphi_s k'_{x0} - \sin\varphi_s k'_y}{\Phi_x(t^*_m)} \hat{R}_{FO}(x_p, y_p; t^*_m) \right)    (25)

Step 4: Construct a 2-D sinc interpolation template of size m × m. Take the local image of size m × m around the point (x_img, y_img), and the pixel value corresponding to the pixel point (x_p, y_p) is obtained by sinc interpolation. The size of the sinc interpolation template is generally set to 8 × 8 for the purpose of precision and efficiency.

Step 5: Repeat Step 1 to Step 4 until all ground pixel points are interpolated, and the SAR ground image is obtained.

The whole algorithm described above is illustrated in Fig. 5 [34].
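A compact sketch of Steps 1-5 is given below. It assumes that models of the mapping coefficients (b_10, b_11, b_21) and of the residual space-variant phase are available as callables, replaces the final sinc interpolation with a nearest-pixel read, and omits boundary handling, so it illustrates the control flow rather than the full implementation.

```python
import numpy as np

def local_joint_compensation(img, x_axis, y_axis, ground_x, ground_y,
                             coeffs, residual_phase, kyp, Nm=64):
    """img           : coarsely focused complex image of Eq. (23)
    x_axis, y_axis   : image-plane coordinates of its pixels
    coeffs(xp, yp) -> (b10, b11, b21);  residual_phase(xp, yp, kyp) -> phase
    kyp              : local azimuth wavenumber axis of length Nm."""
    out = np.zeros((ground_x.size, ground_y.size), dtype=complex)
    for ix, xp in enumerate(ground_x):                  # Step 1: ground grid
        for iy, yp in enumerate(ground_y):
            b10, b11, b21 = coeffs(xp, yp)
            ximg, yimg = -b10, -b11                     # Step 2: inverse mapping
            ci = int(np.argmin(np.abs(x_axis - ximg)))
            cj = int(np.argmin(np.abs(y_axis - yimg)))
            blk = img[ci - Nm//2:ci + Nm//2, cj - Nm//2:cj + Nm//2]   # Step 3
            spec = np.fft.ifft(blk, axis=1)             # position-wavenumber domain
            H = np.exp(-1j * (b21 * kyp**2 + residual_phase(xp, yp, kyp)))
            blk = np.fft.fft(spec * H[None, :], axis=1)
            out[ix, iy] = blk[Nm // 2, Nm // 2]         # Step 4, simplified
    return out                                          # Step 5: ground image
```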
4. Application analysis

4.1. Analysis of pixel grid resampling method

In the process of IGDC, the size of the sinc interpolation template is usually small considering precision and efficiency. To ensure the accuracy of the sinc interpolation, the image resolutions before and after interpolation should be made as close as possible. Therefore, the range and azimuth sampling units can be resampled by adjusting the support domain widths of k'_x and k'_y in the process of 2-D spectral interpolation. This further reduces the image resolution span in the geometric distortion correction and thus alleviates the image information distortion in the IGDC process. Assume that the range and azimuth pixel resolutions of the SAR ground image in Fig. 4(a) are ρ_x and ρ_y, respectively. Considering the interpolation accuracy, the range and azimuth pixel resolutions of the original image in Fig. 4(b) are also set to ρ_x and ρ_y. Let Δk'_x and Δk'_y be the support domain widths of the range and the azimuth, respectively. Then the range and azimuth support domain widths of the 2-D spectral interpolation are
\Delta k'_x = 2\pi / \rho_x,\quad \Delta k'_y = 2\pi / \rho_y    (26)
Assume that the number of the range and the azimuth sampling units is Na × Na before 2-D interpolation. Then the scene sizes Sx and Sy after the pixel grid resampling process are
S_x = N_a \rho_x,\quad S_y = N_a \rho_y    (27)
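As a quick worked example of Eqs. (26) and (27), with illustrative numbers rather than values from a specific experiment: choosing a 2.5 m ground pixel keeps the resampled grid close to a 3 m reference image, and the number of kept samples fixes the scene extent.

```python
import numpy as np

rho_x = rho_y = 2.5                   # desired ground pixel resolution [m]
dk_x = 2 * np.pi / rho_x              # range support-domain width, Eq. (26)
dk_y = 2 * np.pi / rho_y              # azimuth support-domain width, Eq. (26)
Na = 2048                             # samples kept after 2-D interpolation
Sx, Sy = Na * rho_x, Na * rho_y       # scene extent, Eq. (27): 5120 m x 5120 m
print(dk_x, dk_y, Sx, Sy)
```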
For the application of SAR image matching, the referential matching image is usually a space-borne SAR image of which the resolution is lower than that of the real-time SAR image. This makes the scene size of real-time SAR image expanded after IGDC.
Fig. 5. Algorithm flow-chart of the proposed algorithm.
4.2. Analysis of algorithm time complexity

In order to verify the efficiency of the proposed IMF-PFA, its time complexity needs to be evaluated and analyzed. Assume that the number of range and azimuth sampling units is N_r × N_a, the number of pixel points of the ground image is N_g × N_g, and the number of intercepted image bins is N_m × N_m. In HM-SAR imaging, there usually holds N_r = N_a > N_g >> N_m. For each FFT/IFFT, the corresponding floating-point operations and complex multiplication operations are 5 N_r N_a log2 N_a and 6 N_a N_r, respectively. Based on the flowchart in Fig. 5, the total computation of the IMF-PFA is
T(N_a) = 5 \times \left( 3 N_a N_r \log_2 N_r + 2 N_m^2 \log_2 N_m \cdot N_g^2 \right) + 6 \times \left( 17 N_r N_a + 64 N_g^2 + N_m^2 N_g^2 \right)    (28)
When N_r = N_a = N_g >> N_m, the time complexity of the IMF-PFA is O(N_a^2 log_2 N_a). This magnitude is lower than that of the BPA, O(N_a^3), and comparable with the conventional Fast Factorized BPA, O(N_a^2 log_2 N_a) [38]. Therefore, on one hand, the proposed algorithm inherits the advantages of PFA in spectral pruning and pixel grid resampling. On the other hand, similar to BPA, it can choose the imaging plane flexibly, obtain the SAR ground map without distortions, and be parallelized on multiple cores. In addition, through local joint compensation, the excessive computation caused by pixel-by-pixel compensation is avoided, so the real-time performance of the algorithm is better.

4.3. Analysis of residual space-variant phase errors effect

For HM-SAR imaging systems, imaging with sub-aperture data is often adopted to enhance real-time processing capabilities [39–41]. In order to improve the real-time performance of the proposed IMF-PFA, the signal is transformed to the complex image domain before the local joint compensation. Besides, the effect of the space-variant phase errors on azimuth defocus is eliminated by means of local filtering compensation. Therefore, in Eqs. (20) and (21), the residual space-variant phase error R_ME(k'_x, k'_y) is approximated, and the effect of range migration errors caused by the time-variant acceleration on imaging is neglected [34]. This section analyzes the error of the proposed algorithm and discusses its boundary constraint. The expression of R_ME(k'_x, k'_y) is
R_{ME}(k'_x, k'_y) = \exp\left( j \frac{\cos\varphi_s k'_x - \sin\varphi_s k'_y}{\Phi_x(t^*_m)} \sum_{i=2}^{6} \left( \hat{A}_{i0}(x_p, y_p) - \hat{A}_{i0}(0, 0) \right) t^{*\,i}_m \right)    (29)
For Eq. (29), its third-order Taylor expansion at k'_x = k'_{x0} is
R_{ME}(k'_x, k'_y) = \exp\left( j g_0(x_p, y_p; k'_y)\, k'_{x0} \right) \times \exp\left( j g_1(x_p, y_p; k'_y) (k'_x - k'_{x0}) \right) \times \exp\left( j g_2(x_p, y_p; k'_y) (k'_x - k'_{x0})^2 \right)    (30)
In Eq. (30), gi (xp , yp ; k y ) is the Taylor expansion coefficient, i = 0, 1, 2. Specifically, the first exponential term is the azimuth phase error caused by acceleration, and it can incur azimuth defocusing. The proposed algorithm has eliminated such an effect through local filtering compensation. The second exponential term denotes the space-variant range migration error introduced by time-variant acceleration. The third term represents the secondary range compression, and it can influence the focus depth in range dimension. First, the envelope error phase and the range migration in the wavenumber domain in Eq. (30) are as follows:
\varphi_1(k'_x, k'_y) = g_1(x_p, y_p; k'_y)(k'_x - k'_{x0})    (31)

RM_1 = g_1(x_p, y_p; k'_y)    (32)
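The constraint discussed next boils down to a quarter-range-cell test on Eq. (32); a trivial sketch of that screening rule, using the illustrative numbers quoted later in this section, is:

```python
def range_migration_ok(rm1_max, range_resolution):
    """True if the residual range migration stays below a quarter range cell."""
    return rm1_max < range_resolution / 4.0

print(range_migration_ok(0.16, 1.0))   # 0.16 m residual vs. 1 m cells -> True
```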
It can be seen from Eq. (32) that the range migration is a function of the target position and the azimuth wavenumber. As the azimuth resolution increases, the azimuth wavenumber k'_y becomes larger, resulting in increased range migration RM_1. When the size of the scene becomes larger, x_p, y_p and RM_1 also increase. However, when RM_1 is greater than a quarter of the range unit width, the focusing in the range dimension is affected. For an HM-SAR imaging system, considering real-time processing and algorithm performance, sub-aperture data imaging is often used. The corresponding image resolution is usually meter-level, which implies that the azimuth support domain width Δk'_y is relatively small. The imaging parameters in Table 1 are used to further analyze the influence of the space-variant envelope errors. For the case where the azimuth resolution is 1 m, the range migration and the secondary range compression phase of the scene edge point are shown in Fig. 6 for an imaging range of 4 km × 4 km.
Fig. 6. Range migration and range phase error. (a) Range migration. (b) Range phase error.
Fig. 7. Simulation geometry and targets distribution.
It can be observed that the residual range migration error caused by the approximation at the scene edge is about 0.16 m, as shown in Fig. 6(a). When the image resolution is coarser than 0.64 m, this residual migration stays below a quarter of a range cell. Therefore, the envelope error can be neglected for the imaging parameters in Table 1. In Fig. 6(b), the range phase error is less than π/4, which has no effect on range focusing and can be neglected [41,42].

5. Simulation experiment

In order to verify the performance of the proposed IMF-PFA in processing SAR data with time-variant acceleration, experiments with simulated data are carried out.

5.1. Simulations with point target data

To demonstrate the effectiveness of the proposed algorithm, the simulation experiment is divided into three parts. The first part conducts a point target simulation with the imaging parameters in Table 1. The results are compared with the imaging algorithm in [17] to verify the effectiveness of the proposed algorithm for space-variant ME compensation. The second part verifies the effectiveness of the IGDC in the proposed algorithm through a point array target simulation. The third part conducts a point array target simulation to demonstrate the effectiveness of pixel grid resampling against gray-scale distortion in IGDC.

Part 1: Residual space-variant phase error compensation

The layout of the point target simulation is shown in Fig. 7. To be specific, a 3 × 3 point array is spaced in the ground plane with a scene size of 2 km × 2 km, a range bandwidth of 250 MHz, a sampling frequency of 300 MHz, and an azimuth resolution of 0.75 m. Target point (TP) 2 at the scene center is selected as the reference point, which is compared with TP1 and TP3 that have the maximum space-variant phase errors.

Fig. 8 shows the azimuth profiles of the point targets obtained by the reference algorithm [17]. It can be observed that due to the
influence of the time-variant acceleration, the edge points are seriously defocused. Compared with the central point, namely TP2, many problems remain: the integrated side lobe ratio (ISLR), the peak side lobe ratio (PSLR) and the first side lobe are raised, the first null point is large, the main lobe is expanded, and the azimuth resolution is decreased. Fig. 9 presents the azimuth profiles obtained by the proposed IMF-PFA. With the help of the space-variant characteristics of the ME and the inverse local filtering compensation for each pixel, the phase errors of the edge points can still be well approximated and compensated. Compared with Fig. 8, it can be seen that the azimuth profiles of the edge points are both well focused, and the main lobes and the side lobes are separated. Fig. 10 further shows the contour plots of the results obtained by the proposed algorithm. According to the figure, the main lobes and the side lobes are clearly separated as a "cross", and the edge points are close to the central point. Therefore, the validity of the proposed algorithm is verified. To further demonstrate the effectiveness of the proposed IMF-PFA, Table 2 presents the values of PSLR, ISLR, and azimuth resolution (3 dB width) of the selected points. It can be seen that the results of the proposed algorithm are close to the corresponding theoretical values (PSLR: −13.26 dB, ISLR: −9.8 dB, azimuth resolution: 0.75 m), which demonstrates the effectiveness of the proposed algorithm.

Part 2: Image geometric distortion correction

In this part, the effectiveness of the local joint compensation method for the compensation of space-variant phase errors and image distortions is verified through a simulation of a point array target. An 11 × 11 point array scene is spaced along the X-axis and Y-axis, where the size of the scene is 1 km × 1 km and the resolution of the pixel grid is 1 m × 1 m. It can be observed that the focused image before IGDC has a large distortion and the whole image appears as a "trapezoid", as shown in Fig. 11(a). After IGDC, Fig. 11(b) shows that the image is recovered to a "square", consistent with the actual distribution of the simulated points. All of these results verify the effectiveness of the proposed local joint compensation method.

Part 3: Pixel grid resampling

This part verifies the effectiveness of the proposed pixel grid resampling method in solving the gray-scale distortion problem caused by a large pixel resolution span. For the simulation experiment, a 9 × 9 point array is set in the ground plane with a scene size of 1 km × 1 km and a ground pixel grid resolution of 3 m × 3 m. By adjusting the support domain width of the spectral interpolation in IMF-PFA, the pixel grid spacing before geometric correction is set to 0.5 m × 0.5 m and to 2.5 m × 2.5 m. Fig. 12(a) shows that, with an original pixel resolution of 0.5 m × 0.5 m, the resolution span is relatively large when the resolution of the ground pixel grid is 3 m × 3 m. After IGDC, information distortion occurs and some of the points are lost. Fig. 12(b) presents the imaging results corresponding to the pixel grid of 2.5 m × 2.5 m. After using the pixel grid resampling method proposed in this paper, the pixel grid of the original image is adjusted to 2.5 m × 2.5 m. Compared with the case of 0.5 m × 0.5 m, the resolution span becomes smaller. After IGDC, the point array of the SAR image is clearly visible, which indicates that the proposed pixel grid resampling method is effective.
5.2. Processing of SAR image data

In order to further validate the applicability of the proposed algorithm in complex imaging scenes with a high squint angle and large accelerations, the time-domain echo simulation method is applied in this part to obtain echo data under high-maneuverability conditions, which also makes up for the lack of high-maneuverability SAR measurement data.
Fig. 8. Azimuth profiles of three TPs obtained by reference algorithm [17]. (a) Azimuth profile of TP1. (b) Azimuth profile of TP2. (c) Azimuth profile of TP3.
Fig. 9. Azimuth profiles of three TPs obtained by proposed algorithm. (a) Azimuth profile of TP1. (b) Azimuth profile of TP2. (c) Azimuth profile of TP3.
Fig. 10. Contour plots of three TPs obtained by proposed IMF-PFA. (a) Contour plot of TP1. (b) Contour plot of TP2. (c) Contour plot of TP3.

Table 2
Imaging performance results of the selected TPs.
Target point   Method             Azimuth resolution (m)   PSLR (dB)   ISLR (dB)
TP1            Reference method   44.74                    −0.30       15.90
TP1            IMF-PFA            0.76                     −12.45      −10.84
TP2            Reference method   0.77                     −13.74      −11.00
TP2            IMF-PFA            0.76                     −13.78      −11.06
TP3            Reference method   43.78                    −0.03       15.66
TP3            IMF-PFA            0.77                     −12.81      −10.25
The effectiveness of the proposed IMF-PFA is validated in this part using the parameters in Table 1. The available scene size is about 1 km × 1 km, and the pixel resolution is about 1 m × 1 m. Fig. 13(a) shows the whole image processed by the proposed IMF-PFA before IGDC. It can be seen that the focusing quality of the whole image is low and there exists a large geometric distortion. Fig. 13(b) presents the final ground image of the IMF-PFA. It is clear that its focusing quality is improved compared with that in Fig. 13(a). Based on all of these results, it can be concluded that the proposed algorithm is effective.
Fig. 11. Results of geometric correction via inverse projection with the pixel size 1 m × 1 m on the ground. (a) Before geometric correction. (b) After geometric correction.

Fig. 12. Results of pixel grid resampling. (a) Pixel grid resolution of 0.5 m × 0.5 m. (b) Pixel grid resolution of 2.5 m × 2.5 m.

Fig. 13. Results of image data processed by IMF-PFA. (a) Whole image before IGDC. (b) The final ground image.

6. Conclusion

In this paper, a new imaging algorithm named IMF-PFA is proposed to solve the problems of space-variant phase error and geometric distortion in HM-SAR imaging. Established upon the ASERM, the proposed algorithm is more accurate and can separate time-variant acceleration errors from the ideal linear slant range model. Besides, the proposed pixel grid resampling method integrates imaging and IGDC, which helps reduce the image gray-scale distortions caused by a large pixel resolution span. Meanwhile, the proposed inverse-mapping local joint compensation method makes it easier to integrate imaging, MoCo and IGDC. The time complexity of the proposed algorithm is lower than that of the traditional BPA, while it inherits such advantages of BPA as flexibly selecting the imaging plane, directly obtaining the undistorted SAR ground image and conveniently conducting multi-core parallel processing. Numerous simulation experiments and measured data demonstrate that the proposed algorithm can meet the requirements of HM-SAR imaging very well.

Declaration of Competing Interest

The authors declared that they have no conflicts of interest to this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

CRediT authorship contribution statement

Yachao Li: Conceptualization, Methodology, Supervision, Project administration. Xuan Song: Writing - original draft, Software. Liang Guo: Data curation, Software. Haiwen Mei: Software, Validation. Yinghui Quan: Writing - review & editing.

Acknowledgments

This work was supported in part by the National Key R&D Program of China under Grant 2018YFB2202500, the National Natural Science Foundation of China under Grants 61471283, 61303035 and 61772379, and the Key R&D Program of Shaanxi Province under Grant 2017KW-ZD-12.

Appendix A

After the slant range equivalence operation, the ME term introduced by the time-variant acceleration is separated from the conventional linear slant range model. The expression of the ME can be derived from Eq. (4) as follows:

R_{ACC}(x_p, y_p; t_m) = \sum_{i=2}^{6} A_i(x_p, y_p)\, t_m^i    (A.1)

The disturbance coefficients can be derived from Eq. (5), and this yields

A_i(x_p, y_p) = k_i - c_i,\quad i = 2, \cdots, 6    (A.2)

where k_i and c_i are

k_2 = \frac{M_1}{R_1} - \frac{M_2^2}{R_1^3}
k_3 = \frac{M_3}{R_1} - \frac{3 M_1 M_2}{R_1^3} + \frac{3 M_2^3}{R_1^5}
k_4 = \frac{M_4}{R_1} - \frac{3 M_1^2 + 4 M_2 M_3}{R_1^3} + \frac{18 M_1 M_2^2}{R_1^5} - \frac{15 M_2^4}{R_1^7}
k_5 = \frac{2 M_5}{R_1} - \frac{5 M_2 M_4 + 10 M_1 M_3}{R_1^3} + \frac{45 M_1^2 M_2 + 30 M_2^2 M_3}{R_1^5} - \frac{150 M_1 M_2^3}{R_1^7} + \frac{105 M_2^5}{R_1^9}
k_6 = \frac{2 M_6}{R_1} - \frac{10 M_3^2 + 9 M_2 M_5 + 10 M_1 M_4}{R_1^3} + \frac{45 M_1^3 + 45 M_2^2 M_4 + 180 M_1 M_2 M_3}{R_1^5} - \frac{675 M_1^2 M_2^2 + 300 M_2^3 M_3}{R_1^7} + \frac{3150 M_1 M_2^4}{R_1^9} - \frac{945 M_2^6}{R_1^{11}}    (A.3)
c_2 = \frac{v_m^2}{R_1} - \frac{M_2^2}{R_1^3}
c_3 = -\frac{3 M_2 v_m^2}{R_1^3} + \frac{3 M_2^3}{R_1^5}
c_4 = -\frac{3 v_m^4}{R_1^3} + \frac{18 M_2^2 v_m^2}{R_1^5} - \frac{15 M_2^4}{R_1^7}
c_5 = \frac{45 M_2 v_m^4}{R_1^5} - \frac{150 M_2^3 v_m^2}{R_1^7} + \frac{105 M_2^5}{R_1^9}
c_6 = \frac{45 v_m^6}{R_1^5} - \frac{675 M_2^2 v_m^4}{R_1^7} + \frac{3150 M_2^4 v_m^2}{R_1^9} - \frac{945 M_2^6}{R_1^{11}}    (A.4)

Here, the corresponding variables can be expressed as follows:

R_1 = \sqrt{(x_0 - x_p)^2 + (y_0 - y_p)^2 + H^2}
M_1 = v_x^2 + v_z^2 + H a_{z0} + a_{x0}(x_0 - x_p) + a_{y0}(y_0 - y_p)
M_2 = H v_z + v_x (x_0 - x_p)
M_3 = H a_{z1} + 3 a_{x0} v_x + 3 a_{z0} v_z + a_{x1}(x_0 - x_p) + a_{y1}(y_0 - y_p)
M_4 = 3 a_{x0}^2 + 3 a_{y0}^2 + 3 a_{z0}^2 + 2 H a_{z2} + 4 a_{x1} v_x + 4 a_{z1} v_z + 2 a_{x2}(x_0 - x_p) + 2 a_{y2}(y_0 - y_p)
M_5 = 3 H a_{z3} + 5 a_{x0} a_{x1} + 5 a_{y0} a_{y1} + 5 a_{z0} a_{z1} + 5 a_{x2} v_x + 5 a_{z2} v_z + 3 a_{x3}(x_0 - x_p) + 3 a_{y3}(y_0 - y_p)
M_6 = 5 a_{x1}^2 + 5 a_{y1}^2 + 5 a_{z1}^2 + 15 a_{x0} a_{x2} + 15 a_{y0} a_{y2} + 15 a_{z0} a_{z2} + 18 a_{x3} v_x + 18 a_{z3} v_z
v_m^2 = v_x^2 + v_z^2    (A.5)
Appendix B

After the azimuth Deramp, the spectrum still has 2-D coupling and cannot be focused by a 2-D IFFT. Therefore, signal resampling is conducted through the 2-D spectral interpolation to obtain the orthogonal spectrum in the (k'_x, k'_y) domain. The second-order Taylor series expansion is conducted on the signal at k'_x = k'_{x0} and k'_y = 0, as shown in Eq. (20) [34–37]. The expansions of the coefficients are
b_{10} = \frac{G \Phi'_y - G' \Phi_y}{\Phi_x \Phi'_y - \Phi'_x \Phi_y}\cos\varphi_s + \frac{G \Phi'_x - G' \Phi_x}{\Phi_y \Phi'_x - \Phi'_y \Phi_x}\sin\varphi_s
b_{11} = -\frac{G \Phi'_y - G' \Phi_y}{\Phi_x \Phi'_y - \Phi'_x \Phi_y}\sin\varphi_s + \frac{G \Phi'_x - G' \Phi_x}{\Phi_y \Phi'_x - \Phi'_y \Phi_x}\cos\varphi_s
b_{21} = \frac{1}{2(\Phi_x \Phi'_y - \Phi'_x \Phi_y) k_{rc}}\left( \Phi'_y F_y \sin^2\varphi_s - \Phi_y F_y \sin\varphi_s\cos\varphi_s - \Phi_x F_y \cos^2\varphi_s + \Phi_x F_x \sin\varphi_s\cos\varphi_s \right)
b_{20} = 0,\quad b_{22} = 0    (B.1)

where

R'_1 = \frac{\partial (R_{IR0} - R_{IR})}{\partial t_m}
\Phi'_x = \frac{v_x}{R_{IR0}} - \frac{(x_0 + v_x t_m)\left( v_x (x_0 + v_x t_m) + v_z (H + v_z t_m) \right)}{R_{IR0}^3}
\Phi'_y = \frac{y_0 \left( v_x (x_0 + v_x t_m) + v_z (H + v_z t_m) \right)}{R_{IR0}^3}
F_x = \frac{\frac{\partial (R_{IR0} - R_{IR})}{\partial t_m}\, \Phi_y - R'_1 \Phi'_y}{\Phi_x \Phi'_y - \Phi'_x \Phi_y}
F_y = \frac{\frac{\partial (R_{IR0} - R_{IR})}{\partial t_m}\, \Phi_x - R'_1 \Phi'_x}{\Phi_y \Phi'_x - \Phi'_y \Phi_x}    (B.2)

In Eq. (B.1), b_{10} and b_{11} represent the focusing positions of the target along the X-axis and Y-axis, respectively; b_{21} is the wavefront bending coefficient and reflects the azimuth defocusing level.

References
[1] Y. Liang, Y. Huai, J. Ding, et al., A modified ω – k algorithm for HS-SAR small-aperture data imaging, IEEE Trans. Geosci. Remote Sens. 54 (6) (Feb. 2016) 3710–3721. [2] T. Zeng, Y. Li, Z. Ding, T. Long, Subaperture approach based on azimuth-dependent range cell migration correction and azimuth focusing parameter equalization for maneuvering high-squint-mode SAR, IEEE Trans. Geosci. Remote Sens. 53 (12) (Jul. 2015) 6718–6734. [3] Z. Li, M. Xing, W. Xing, et al., A modified equivalent range model and wavenumber-domain imaging approach for high-resolution-high-squint SAR with curved trajectory, IEEE Trans. Geosci. Remote Sens. 55 (7) (2017) 3721–3734 Mar.. [4] L. Yunl, L. Yanlei, Z. Liangj, L. Xingd, “A fast precise geometric calibration method for airborne sar, J, Radars 5 (4) (2016) 419–424, doi:10.120 0 0/JR16064. [5] M.D. Desai, W.K. Jenkins, Convolution backprojection image reconstruction for spotlight mode synthetic aperture radar, IEEE Trans. Image Process. 1 (4) (Oct. 1992) 505–517. [6] H. Penghui, X. Xiang-Gen, L. Xingzhao, et al., Refocusing and motion parameter estimation for ground moving targets based on improved axis rotation-time reversal transform, IEEE Trans. Comput. Imaging (2018) 1-1. [7] P. Huang, G. Xia X, Y. Gao, et al., Ground moving target refocusing in sar imagery based on RFRT-FRFT, IEEE Trans. Geosci. Remote Sens. (2019) 1–17. [8] L. Zhang, H.L. Li, Z.J. Qiao, et al., A fast BP algorithm with wavenumber spectrum fusion for high-resolution spotlight SAR imaging, IEEE Geosci. Remote Sens. Lett. 11 (9) (Jan. 2014) 1460–1464. [9] H.L. Li, L. Zhang, M.D. Xing, Z. Bao, Expediting back-projection for circular SAR imaging, J. Signal Process. 51 (10) (May 2015) 785–787. [10] Q. Dong, G. Sun, Z. Yang, et al. ”Cartesian factorized backprojection algorithm for high-resolution spotlight SAR imaging, IEEE Sens. J. 18 (3) (2018) 1160–1168. [11] J. Yang, G. Sun, M. Xing, et al., Squinted tops sar imaging based on modified range migration algorithm and spectral analysis, IEEE Geosci. Remote Sens. Lett. 11 (10) (2017) 1707–1711. [12] T. Xiong, M. Xing, X.G. Xia, et al., New applications of Omega-K algorithm for SAR data processing using effective wavelength at high squint, IEEE Trans. Geosci. Remote Sens. 51 (5) (2013) 3156–3169. [13] A. Reigber, E. Alivizatos, A. Potsis, et al., Extended wavenumber-domain synthetic aperture radar focusing with integrated motion compensation, IEE Proc. Radar Sonar Navig. 153 (3) (2006) 301–310. [14] G.C. Sun, M. Xing, X.G. Xia, et al., Beam steering sar data processing by a generalized PFA, IEEE Trans. Geosci. Remote Sens. 51 (8) (2013) 4366–4377. [15] M. Scherreik, L.R. Gorham, B. Rigling, New phase error corrections for PFA with squinted SAR, IEEE Trans. Aerosp. Electron. Syst. 53 (5) (2017) 2637–2641. [16] L.A. Gorham, B.D. Rigling, Scene size limits for polar format algorithm, IEEE Trans. Aerosp. Electron. Syst. 52 (1) (2016) 73–84. [17] X. Mao, D. Zhu, Z. Zhu, Polar format algorithm wavefront curvature compensation under arbitrary radar flight path, IEEE Geosci. Remote Sens. Lett. 9 (3) (2012) 526–530. [18] Z. Ding, T. Zeng, F. Dong, L. Liu, W. Yang, T. Long, An improved polsar image speckle reduction algorithm based on structural judgment and hybrid four– component polarimetric decomposition, IEEE Trans. Geosci. Remote Sens. 51 (Aug (8)) (2013) 4438–4449. [19] Z. Ding, L. Liu, T. Zeng, W. Yang, T. Long, Improved motion compensation approach for squint airborne SAR, IEEE Trans. Geosci. Remote Sens. 51 (8) (2013) 4378–4387 Aug.. [20] B. 
Fan, Z. Ding, W. Gao, T. Long, An improved motion compensation method for high resolution UAV SAR imaging, Sci. China Inf. Sci. 57 (12) (Dec. 2014) 1–13. [21] B. Subiza, E. Gimenonieves, J.M. Lopezsanchez, et al., Nonuniform FFTs (NUFFT) algorithms applied to SAR imaging, Proc. SPIE 5236 (2004) 72–79.
[22] Y. Wu, H. Song, X. Shang, et al., Improved RMA based on nonuniform fast fourier transforms(NUFFT’s), in: International Conference on Signal Processing. IEEE, Beijing, China, October 26–29, 2008, pp. 2489–2492. [23] Z. Li, J. Wang, Q.H. Liu, Interpolation-free stolt mapping for SAR imaging, IEEE Geosci. Remote Sens. Lett. 11 (5) (2014) 926–929. [24] P. Wang, W. Liu, J. Chen, et al., A high-order imaging algorithm for high-resolution spaceborne SAR based on a modified equivalent squint range model, IEEE Trans. Geosci. Remote Sens. 53 (3) (2015) 1225–1235. [25] M. Bao, M.D. Xing, Y. Wang, et al., Two-dimensional spectrum for meo sar processing using a modified advanced hyperbolic range equation, Electron. Lett. 47 (18) (2011) 1043–1045. [26] Z. Li, Y. Liang, M. Xing, et al., An improved range model and omega-k-based imaging algorithm for high-squint SAR with curved trajectory and constant acceleration, IEEE Geosci. Remote Sens. Lett. 13 (5) (2016) 656–660. [27] L.T. Zeng, Y. Liang, M.D. Xing, et al., Two-dimensional autofocus technique for high-resolution spotlight synthetic aperture radar, IET Signal Process. 10 (6) (2016) 699–707. [28] L. Yang, M. Xing, Y. Wang, et al., Compensation for the NSRCM and phase error after polar format resampling for airborne spotlight SAR raw data of high resolution, IEEE Geosci. Remote Sens. Lett. 10 (1) (2013) 165–169. [29] S. Tang, Study on Imaging Methods for SAR on Curved-Path Platforms, Dept. Elect. Eng., Xidian Univ., Xi’an, China, 2016. [30] Z. Li, M. Xing, Y. Liang, et al., A frequency-domain imaging algorithm for highly squinted sar mounted on maneuvering platforms with nonlinear trajectory, IEEE Trans. Geosci. Remote Sens. 54 (7) (2016) 4023–4038. [31] C. Qiu, M. Schmitt, X.X. Zhu, Comparative evaluation of signal-based and descriptor-based similarity measures for SAR-optical image matching, in: IGARSS, Fort Worth, Texas, USA, July 23–28, 2017, pp. 5462–5465. [32] T. Chen, L. Chen, A union matching method for SAR images based on SIFT and edge strength, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 7 (12) (2014) 4897–4906.
[33] S. Wang, H. You, K. Fu, BFSIFT: a novel method to find feature matches for SAR image registration, IEEE Geosci. Remote Sens. Lett. 9 (4) (2012) 649–653. [34] H. Deng, Y. Li, M. Liu, et al., A space-variant phase filtering imaging algorithm for high-maneuverability BISAR with arbitrary configuration and curved track, IEEE Sens. J. 18 (8) (2018) 3311–3326. [35] B.D. Rigling, R.L. Moses, Polar format algorithm for bistatic SAR, IEEE Trans. Aerosp. Electron. Syst. 40 (4) (2004) 1147–1159. [36] J. Sun, S. Mao, G. Wang, et al., Polar format algorithm for spotlight bistatic SAR with arbitrary geometry configuration, Prog. Electromagn. Res. 103 (4) (2010) 323–338. [37] X. Wang, D. Zhu, X. Mao, et al., Space-variant filtering for wavefront curvature correction in polar formatted bistatic SAR image, IEEE Trans. Aerosp. Electron. Syst. 48 (2) (2012) 940–950. [38] I.J. Ahn, K.Y. Jeong, W.H. Nam, et al., GPU-based fast projection-backprojection algorithm for 3-D PET image reconstruction, in: Nuclear Science Symposium and Medical Imaging Conference. IEEE, Valencia, Spain, October 23–29, 2011, pp. 2672–2674. [39] Y. Liang, Z. Li, L. Zeng, et al., A high-order phase correction approach for focusing HS-SAR small-aperture data of high-speed moving platforms, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 8 (9) (2015) 4551–4561. [40] S. Chen, Y. Yuan, S. Zhang, H. Zhao, Y. Chen, A new imaging algorithm for forward-looking high-maneuverability bistatic SAR, IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 9 (4) (2016) 1543–1552 Apr. [41] D. Li, W. Wang, H. Liu, H. Cao, H. Lin, Focusing highly squinted azimuth variant bistatic SAR, IEEE Trans. Aerosp. Electron. Syst. 52 (6) (2017) 2715–2730 Dec. [42] H. Zhu, M. Sun, Focusing high-squint and large-baseline one-stationary bistatic SAR data using keystone transform and enhanced nonlinear chirp scaling based on an ellipse model, EURASIP J. Adv. Signal Process. 2017 (1) (2017) 33–45 Dec.