CHAPTER 19

Introduction to Inverse Synthetic Aperture Radar

Marco Martorella
Dipartimento di Ingegneria dell’Informazione, University of Pisa, Pisa, Italy
CNIT—Radar and Surveillance Systems (RaSS) National Laboratory, Pisa, Italy

2.19.1 Introduction

Radar imaging refers to the ability to form images of natural or man-made objects using electromagnetic echo location. As will become clearer later, coherent radars may have suitable specifications that allow the implementation of special features through specific signal processing. We may argue that, given a suitable coherent radar, radar imaging can be provided by adding some “special” signal processing to the received signal. Conventional radar images are typically represented as two-dimensional (2D) images, where a mapping function transforms the three-dimensional (3D) world into a 2D image. An obvious comparison could be made with photographic images, as these are also the result of some mapping from the 3D world into a 2D photographic image. However, there are several differences that may be pointed out with regard to the type of mapping and the image features. Radar images, as well as other types of images (e.g., photographic, infra-red, X-ray images), are usually characterized by means of quality measurements, such as geometrical and radiometric resolution and signal-to-noise ratio (SNR). Pushed by the need to form high quality radar images that can be used in applications such as automatic target recognition and classification, researchers have designed a variety of radar imaging processors. In this tutorial we will introduce the fundamental concepts at the base of radar imaging and provide an overview of the most commonly used radar imaging techniques. Examples will also be used throughout this chapter to clarify concepts and to show some radar imaging results.

Academic Press Library in Signal Processing. http://dx.doi.org/10.1016/B978-0-12-396500-4.00019-3 © 2014 Elsevier Ltd. All rights reserved.

2.19.2 Historical overview

We have to distinguish two starting points when considering the origins of radar imaging: one for Synthetic Aperture Radar (SAR) and one for Inverse Synthetic Aperture Radar (ISAR). Although the two approaches to radar imaging have quite a lot in common, there are some significant differences that mark a line between them. As mentioned in [1], the SAR concept was conceived in 1951 by Carl Wiley, although the first operational system (classified) was built in 1957 by the Willow Run Laboratories of the University of Michigan for the US Department of Defense (DoD). Unclassified SAR systems were successfully built by NASA in the 1960s. The first spaceborne SAR system, SEASAT-A, was launched in 1978. Although this spaceborne system was specifically designed for oceanographic purposes, it also produced important results in other fields, such as in ice and land studies. The results obtained with


SEASAT-A demonstrated the importance of radar imaging for the observation of the Earth. Since then, several spaceborne SAR systems have been launched that provide improved resolution, wider coverage and faster revisit times. Several airborne SAR systems have also been developed to overcome limitations of spaceborne SAR systems, such as cost, revisit time and resolution. After the first experiments conducted by NASA in the 1960s, other important missions have been accomplished, such as the SIR-A, SIR-B and SIR-C missions, which flew in 1981, 1984, and 1994, respectively. The history of ISAR began later, when Walker and Ausherman, with their pioneering works [2,3], introduced the concept of radar imaging of rotating objects with fixed antennas. The main insight in their work was to exploit the Doppler information generated by the rotation of an object to separate echoes returning from different parts of the object along a cross-range axis. Such Doppler separation, together with the time-delay separation (along the radar range), produces a two-dimensional (2D) image, which can be mapped onto an image plane.

2.19.3 High resolution radar and radar imaging

The definition of an image as given by the IEEE is “a spatial distribution of a physical property such as radiation, electric charge, conductivity, or reflectivity, mapped from another distribution of either the same or another physical property.” Narrowing this definition to radar, we may define a radar image as “a spatial distribution of the electromagnetic (EM) reflectivity of an object mapped from a distribution of currents on the object’s surface.” The latter concept of mapping an object’s current distribution onto an image of the object’s reflectivity function derives from the fact that a radar is able to capture echoes of e.m. energy irradiated by the radar itself and backscattered from the object. The backscattering effect is produced by the object because the incident e.m. field induces a set of currents on the object’s surface, which in turn produces a scattered e.m. field that (in part) propagates back to the radar. If a radar image has fine resolution, the object’s reflectivity may be observed in fine detail, with the result that it would be possible to spatially separate reflectivity contributions from different parts of the object. It is quite intuitive that the finer the resolution, the more detail may be visible in the image. A desirable radar imaging system would provide finer resolution to allow characterizing smaller and smaller scale objects. Although 3D radar imaging is nowadays possible, we will consider the more usual concept of 2D radar images. Therefore, the mapping function that links the object of interest with its image is mathematically representable with a function f : C3 → C2, where the symbol C represents the set of complex numbers, as radar images are represented by complex numbers (I & Q, or magnitude and phase).
Radar images, as will become clearer later, are typically represented in Cartesian coordinates, where one axis is aligned with the radar range direction and the other with the cross-range direction (otherwise indicated as the azimuth direction). It is worth pointing out that the range direction is uniquely identified by the orientation of the radar antenna (typically coincident with the antenna maximum gain direction). In a 3D world, there are infinitely many directions orthogonal to the range direction; therefore, the concept of cross-range direction becomes highly ambiguous unless some constraints are applied that uniquely define such a direction. Nevertheless, a radar image will be identified as a mapping of some physical quantity defined in a 3D coordinate system onto a plane identified by the range direction and a cross-range direction, which, as will become clearer later, depends on the radar-target geometry and dynamics.


Nevertheless, in order to enable radar imaging capabilities in a radar, sufficiently fine resolution must be achievable in both the range and cross-range directions. A desirable radar image will show fine and possibly similar resolution in both axes.

2.19.3.1 Resolution

The resolution is the minimum distance between two quantities at which a measurement system is said to be able to distinguish two separate contributions. This general definition may be applied to all sorts of measurements, and the resolution may be expressed in terms of a specific measurement unit, e.g., meters for spatial resolution, Hertz for frequency resolution, and so on. By applying the general definition of resolution to radar, it can be defined a little more specifically as the minimum distance along a given direction between two point-like scatterers of equal magnitude such that the two scatterers can be distinguished by the radar. This concept is further explained in Figure 19.1. The two scatterers may be distinguished if the two received echoes are sufficiently separated that signal processing is able to detect both contributions separately. According to the Rayleigh criterion, the minimum distance that allows separating two echoes is equal to the echo half-power width. Traditionally, in radar, we talk about range resolution to indicate the ability of the radar to distinguish scatterers along the range direction. The same concept applies to azimuth and elevation resolution in the azimuth and elevation planes. We will now pay special attention to the Doppler resolution. The Doppler resolution is the ability of the radar to distinguish two point-like scatterers, where each scatterer produces a Doppler component due to its radial motion with respect to the radar. The principle behind the Doppler frequency generation can be explained in simple terms in the radar case. With reference to Figure 19.2, let a point-like target move with a given velocity v with respect to the radar. Then the radar-target distance can be approximated with the linear function R(t) = R0 + vR t.
Assuming that a radar transmits a pure tone sT(t) = A cos(2π f0 t), the received signal can be written by taking into account the round-trip delay and an amplitude attenuation:

sR(t) = B cos[2π f0 (t − τ(t))],  (19.1)

where

τ(t) = (2/c) R(t) = (2/c)(R0 + vR t),  (19.2)

FIGURE 19.1 Radar resolution.


FIGURE 19.2 Doppler frequency.

and B < A. By substituting (19.2) in (19.1) we obtain

sR(t) = B cos[2π(f0 + fD) t + ϕ0],

where ϕ0 = −4πR0/λ, with λ = c/f0, and the Doppler frequency can be defined as fD = −2vR/λ, where vR is the scatterer’s radial velocity and λ is the radar wavelength. The radar Doppler resolution, as will become clear later, is important because it is directly related to the cross-range resolution in Inverse Synthetic Aperture Radar (ISAR) systems. In radar imaging, the concept of resolution applies to range and cross-range, as they represent the two coordinates in a radar image. Therefore, we can define the concept of radar image resolution as a pair of values that indicate the range and cross-range resolution in the image. In the following subsections, the concepts of range and cross-range resolution will be detailed and methods for obtaining high range and cross-range resolution will be addressed.
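To make the relations above concrete, here is a small numeric sketch; the X-band carrier frequency and radial velocity are illustrative assumptions, not values from the text:

```python
# Illustrative check of the Doppler relations above:
# lambda = c/f0 and fD = -2*vR/lambda.
c = 3e8          # speed of light (m/s)
f0 = 10e9        # assumed X-band carrier frequency (Hz)
lam = c / f0     # radar wavelength (m)

v_r = 15.0       # assumed radial velocity, positive = receding (m/s)
f_D = -2 * v_r / lam   # Doppler frequency (Hz)

print(f"wavelength = {lam*100:.1f} cm, Doppler shift = {f_D:.0f} Hz")
```

With these numbers the wavelength is 3 cm and the Doppler shift is −1 kHz; the sign convention follows the derivation above (a receding scatterer produces a negative Doppler shift).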

2.19.3.2 High range resolution

In pulsed radar where the phase is not measured (or used), the range resolution is typically associated with the transmitted pulse duration. In fact, as a first approximation, the echo due to an ideal scatterer persists for a time interval equal to the length of the transmitted pulse. Therefore, we should expect to be able to detect a second ideal scatterer when the echo of the first scatterer vanishes, leading to an expression for the range resolution equal to Δr = cT/2, where T is the pulse duration [4]. Such a concept is roughly represented in Figure 19.1, where the echoes relative to two ideal scatterers are partly overlapped. It becomes evident that a minimum distance between two ideal scatterers exists, below


which the two echoes are largely overlapped and therefore the two contributions are no longer distinguishable. As finite bandwidth signals have infinite duration, we will use the definition of duration at the point where the peak power has reduced by 3 dB, which is consistent with the Rayleigh criterion. In practice, all radar systems employ a matched filter, which ensures that a maximum SNR is obtained at the filter output. It must be recalled that a matched filter produces the transmitted signal autocorrelation function at its output when an echo is present at its input. The autocorrelation function has the following properties:

1. it is the Fourier Transform of the Energy Spectral Density (ESD),
2. Bτ ≈ 1, where B is the transmitted signal bandwidth and τ is the resulting signal duration at the output of the matched filter.

The second property, which represents the uncertainty relationship of a Fourier Transform pair (in this case, given by the autocorrelation and the ESD), indicates that, to obtain short duration pulses at the output of the matched filter, wide bandwidth signals should be transmitted. Although short duration pulses provide a straightforward way to increase the bandwidth, this should be discouraged as it has the drawback of reducing the transmitted energy (and hence the system sensitivity), unless higher peak power pulses are used in transmission to compensate. Typically, phase modulations are used to increase the signal bandwidth rather than amplitude modulations, since they effectively increase the signal bandwidth without decreasing the pulse duration; e.g., linear frequency modulated signals (or chirp signals) are typical phase modulated signals.
If we agree that two echoes relative to two separate scatterers can be distinguished only if the echoes are separated by a delay equal to the pulse duration at the output of the matched filter (τ), then we can say that the range resolution is related to the transmitted signal bandwidth by means of the following expression:

ΔR = (c/2) τ = c/(2B).  (19.3)

The effect of using wide bandwidth transmitted signals in conjunction with a matched filter is called pulse compression, and it is a standard way to enable higher range resolution. The term pulse compression comes from the simple fact that the duration of the pulse at the matched filter output is shortened by a factor ρ = T/τ, namely the compression factor, with respect to the duration of the transmitted pulse (T). As an example, if we want to achieve a range resolution of 1 m, we have to generate a signal with a bandwidth of 150 MHz, independently of its duration; obtaining such a bandwidth with a non-modulated pulse would require a duration of only about 6.7 ns. A side effect of pulse compression is the presence of sidelobes in the compressed signal. Sidelobes are unwanted signal peaks that may mask other echoes, and they must be suppressed or attenuated as much as possible. A performance indicator in terms of sidelobes is the Side Lobe Level (SLL), which is defined as the ratio between the pulse peak and the highest sidelobe peak. For an unweighted signal, the SLL is typically around −13 dB. Both analog and digital modulations are typically used to obtain wide bandwidth waveforms. Chirp signals are the most commonly used among analog modulations, as they are easy to generate and they show desirable characteristics in terms of SLL and robustness to noise and Doppler effects. Alternatively,


FIGURE 19.3 Pulse compression—Chirp signal—Uncompressed (transmitted) pulse on the left and Compressed pulse (Matched filter output) on the right.

Barker and pseudo-random digital codes can be used to generate digital phase modulated signals, which have similar characteristics to those of chirp signals. The big advantage of pulse compression is that high resolution can be achieved by transmitting long pulses, therefore maintaining high sensitivity at long ranges. An example of pulse compression is provided in Figure 19.3, where a chirp signal is considered as the transmitted signal.
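The pulse compression chain described above can be sketched numerically. The chirp parameters below are illustrative assumptions; the compressed width is measured directly on the matched-filter output at the −3 dB points:

```python
import numpy as np

# Sketch of pulse compression: baseband chirp (LFM) waveform passed
# through its matched filter (time-reversed conjugate).
B = 150e6            # sweep bandwidth (Hz) -> ~1 m range resolution (c/2B)
T = 10e-6            # uncompressed pulse length (s)
fs = 2 * B           # complex sampling rate (Hz)
t = np.arange(0, T, 1 / fs)

tx = np.exp(1j * np.pi * (B / T) * t**2)   # baseband chirp
mf = np.conj(tx[::-1])                     # matched filter impulse response

out = np.convolve(tx, mf)                  # compressed pulse (autocorrelation)
peak = np.argmax(np.abs(out))

# -3 dB (half-power) width of the compressed mainlobe
mag = np.abs(out) / np.abs(out[peak])
above = np.where(mag >= 1 / np.sqrt(2))[0]
tau = (above.max() - above.min() + 1) / fs

# Compression factor rho = T/tau, on the order of the time-bandwidth product
print(f"compression factor T/tau ~ {T / tau:.0f}")
```

The peak of the compressed output equals the pulse energy, and the measured compression factor is on the order of the time-bandwidth product BT (here 1500), up to the sampling granularity of the −3 dB width measurement.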

2.19.3.3 High cross-range resolution

The ability to resolve scatterers in the cross-range direction is related to the angular separation in the same direction. Traditionally, azimuth and elevation resolution were achieved by building wide antennas since, approximately, the angular resolution is inversely proportional to the antenna size. For example, the angular resolution of a rectangular antenna can approximately be calculated as follows:

αx ≈ λ / Lx,  (19.4)

where λ is the radar wavelength, Lx is the size of the antenna along a given cross-range direction (usually named azimuth or elevation) and αx is the angular resolution along the same cross-range direction (expressed in radians). Although it is directly related to system parameters, specifically the antenna size and the radar wavelength, the angular resolution is not sufficient to provide the high resolution needed for radar imaging. The main reasons are that:

• it refers to an angular domain and not a spatial domain (images should be scaled in spatial coordinates and not angular coordinates),
• the cross-range resolution, which can be calculated from the angular resolution as shown in (19.5), becomes range (R) dependent:

δx = R αx = Rλ / Lx.  (19.5)

At long ranges, the cross-range resolution may become coarse, even in the case of fine angular resolution. To give an example, if we consider a distance R = 10^4 m, a wavelength λ = 3 cm and an antenna size Lx = 3 m, the cross-range resolution would be equal to δx = 100 m. Conversely, if we wanted to obtain a cross-range resolution equal to 1 m at a distance of 10 km, we would have to build a 300 m wide antenna. It is evident that this problem cannot be solved by building very wide antennas, as there are practical limits to that. Another solution could be to build an antenna array, which may relax the previous problem. Nevertheless, at very long ranges the problem would still be too hard to solve. Inverse Synthetic Aperture Radar provides a solution to this problem, enabling high cross-range resolution as if a wide aperture had been used. This concept will be detailed in the next section.
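Equation (19.5) and the numeric example above can be checked directly:

```python
# Numeric check of delta_x = R*lambda/Lx, using the values from the
# example in the text.
R = 1e4        # range (m)
lam = 0.03     # wavelength (m)
Lx = 3.0       # antenna size (m)

delta_x = R * lam / Lx          # cross-range resolution -> 100 m
print(f"delta_x = {delta_x:.0f} m")

# Antenna size needed for 1 m cross-range resolution at 10 km:
Lx_needed = R * lam / 1.0       # -> 300 m
print(f"Lx needed = {Lx_needed:.0f} m")
```

The 300 m figure makes the impracticality of a real aperture obvious, which is precisely the motivation for the synthetic aperture approach introduced next.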

2.19.4 Inverse synthetic aperture radar

The concept of Inverse Synthetic Aperture Radar will be introduced in a modern way. Rather than traditionally considering an ISAR system as a configuration where the radar is static and an object moves with respect to it, we will migrate from the SAR concept and geometry.

2.19.4.1 From SAR to ISAR

As pointed out in Section 2.19.3, real aperture antennas or antenna arrays do not provide a viable solution for radar imaging systems. Nevertheless, high cross-range resolution can only be enabled if an antenna aperture can be formed. The first idea of SAR was conceived by thinking of a single element that moves along a given trajectory, therefore providing the means for forming a virtual array in a given time interval. Such a concept is depicted in Figure 19.4, where a synthetic array formation is compared with a real array. As the formation of the synthetic aperture is not instantaneous, any equivalence between the synthetic aperture and a real array can be stated only if the illuminated scene is static during the synthetic aperture formation (from t1 to tN). If such an assumption can be made, there is no physical difference between the signal acquired by a synthetic aperture radar and a real aperture radar (which makes use of a real array). The condition under which the effect of the element motion can be neglected is known as the stop & go assumption, which implies that the transmission of the signal and the reception of its echo occur instantaneously at a particular position. Obviously, this assumption cannot be perfectly met unless the platform that carries the radar stops every time a pulse is transmitted and received before moving to the next position. Nevertheless, in practical scenarios, such an assumption can be considered satisfied since the round-trip delay (the time for the e.m. wave to propagate from the radar to the illuminated scene and back) is short enough to neglect the element offset created by the platform motion during such a time interval. Attention should now be paid to the “relative motion” that exists between the platform and the target, as such a motion is not necessarily produced by a moving platform that carries the radar.
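The stop & go argument above can be quantified with a quick back-of-the-envelope computation; the platform speed and range below are assumed values, not taken from the text:

```python
# Rough numeric justification of the stop & go assumption: the platform
# displacement during one round-trip delay is small.
c = 3e8             # speed of light (m/s)
R = 30e3            # assumed radar-scene distance (m)
v = 200.0           # assumed platform speed (m/s)

tau_rt = 2 * R / c          # round-trip delay (s)
offset = v * tau_rt         # platform motion during the round trip (m)
print(f"round trip = {tau_rt*1e6:.0f} us, element offset = {offset*100:.1f} cm")
```

For these assumed values the round-trip delay is 200 µs and the element offset is 4 cm, small compared with the synthetic element spacing in typical geometries, which is why the assumption is usually acceptable in practice.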
Relatively speaking, if the sensor is stationary and the target moves with respect to it inducing a relative motion,


FIGURE 19.4 Synthetic aperture—virtual array.

a synthetic aperture would be created in the same way. To strengthen this concept, one could argue that the case of a stationary target and moving platform and the case of a stationary platform and moving target can only be distinguished once the reference system has been chosen. In fact, by placing the reference system on the target, the first case is obtained whereas, by placing the reference system on the radar, the latter is obtained. According to this view, the differences between synthetic aperture and inverse synthetic aperture would only depend on where the reference system is positioned. Such a concept is depicted in Figure 19.5, where a Spot-light SAR configuration is transformed into an ISAR configuration by moving the reference system from the target to the radar. Conversely, the same concept may be argued by starting with a controlled ISAR configuration, such as that of a turntable experiment. In a turntable configuration, the antenna is fixed on the ground (typically mounted on a turret) and the target is positioned on a turntable, which rotates as the radar takes measurements of the target. By moving the reference system from the radar to the target, a circular SAR geometry can be obtained, as depicted in Figure 19.6. In truth, a subtle but significant detail exists that substantially defines the difference between SAR and ISAR. Such a detail is the cooperation of the illuminated target. To better explain this concept, one may place the reference system on the target. If such a target moves (with unknown motions) with


FIGURE 19.5 From Spot-light SAR to ISAR.

FIGURE 19.6 From ISAR to circular SAR.

respect to the radar, the synthetic aperture formed during the Coherent Processing Interval (CPI) differs from the expected one (which is formed by controlled platform motion). Any SAR image formation that follows would then be based on the erroneously predicted synthetic aperture and would therefore lead to defocused images. A pictorial example is shown in Figure 19.7.

2.19.4.2 Geometry

Figure 19.8 shows the ISAR geometry. The reference system Tξ is embedded in the radar with the axis ξ2 oriented along the line of sight (LOS). Without loss of generality, it is assumed that the target moves along a trajectory that intersects the axis ξ2 at the central instant t = 0. The target rotation due to the translational


FIGURE 19.7 Synthetic array formed by a non-cooperative target.

motion is denoted as the translational rotation vector Ω_tr(t). In practical conditions, external forces produce angular motions that are represented by the angular rotation vector Ω_a(t), which is applied to the center O of the target (from now on “point O”). The sum of these two rotation vectors yields the total angular rotation vector Ω_T(t). The projection of Ω_T(t) on the plane orthogonal to the LOS is called the effective rotation vector Ω_eff(t), which is the rotation vector component that contributes to the target aspect angle variation. The imaging plane (x1, x2) is orthogonal to the effective rotation vector and is represented in Figure 19.8 (this will be demonstrated in Section 2.19.5). The time-varying coordinate system Tx is chosen so as to have the x2 axis oriented along the LOS, the x3 axis along the effective rotation vector and the origin in the point “O” at time t = 0. With this choice, the x1 and x2 axes become the cross-range and range coordinates of the imaging plane, respectively. It is worth noting that, in general, the imaging plane (x1, x2) is time-varying because the effective rotation vector varies with time.
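The decomposition of the total rotation vector into a LoS-aligned component and an effective component can be sketched with a few lines of linear algebra; the vectors below are illustrative assumptions:

```python
import numpy as np

# Projection of the total rotation vector Omega_T onto the plane
# orthogonal to the LoS, yielding the effective rotation vector.
i_los = np.array([0.0, 1.0, 0.0])          # unit LoS vector (x2 axis)
omega_T = np.array([0.02, 0.05, 0.01])     # assumed total rotation vector (rad/s)

omega_los = np.dot(omega_T, i_los) * i_los # component aligned with the LoS
omega_eff = omega_T - omega_los            # effective rotation vector

print(omega_eff)   # the LoS-aligned component has been removed
assert np.allclose(omega_los + omega_eff, omega_T)
```

Only `omega_eff` contributes to the aspect angle variation; as shown later, the radar is blind to `omega_los`.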

2.19.4.3 Signal modeling

A convenient way to represent the received signal when dealing with ISAR processing is by using a time-frequency format. In this representation, the frequency coordinate is represented by the variable f, whereas the time coordinate is represented by the variable t. When written in these terms, the received signal can be seen as a time-varying signal spectrum, where the time variance is typically introduced by the target’s relative motion and not by the transmitter (unless time-varying modulations are used, e.g., in adaptive systems). Therefore, the complex base-band received signal, in free space conditions, can be written in a time-frequency format as follows:

SR(f, t) = W(f, t) ∫ ξ(x) exp[−j (4πf/c) R(t)] dx,  (19.6)


FIGURE 19.8 ISAR geometry.

where

W(f, t) = rect(t/Tobs) rect[(f − f0)/B],  (19.7)

and where f0 represents the carrier frequency, B is the transmitted signal bandwidth, Tobs is the observation time, c is the speed of light in vacuum, R(t) is the distance between the radar and the generic point x on the target and ξ(x) is the target reflectivity function. The function rect(·) yields 1 when |·| < 0.5 and 0 otherwise.


2.19.4.4 Image formation

ISAR image formation was initially introduced in two pioneering works [2,3], where a Range-Doppler approach was presented to describe how an e.m. image could be formed by exploiting a target’s rotation with respect to a radar. Since then, several ISAR image reconstruction techniques have been presented to form well-focused and high-resolution images. Most of the techniques presented in the literature are based on a two-step approach: a first step eliminates the target’s radial motion (often addressed with the term autofocus) and a second step, applied to the motion-compensated data, aims at forming the image. In the following subsections, the problem of radial motion compensation is addressed first and a few techniques that implement this step are presented; then, image formation techniques are presented that make use of the classical Fourier approach and of Time-Frequency Transforms (TFTs).

2.19.4.4.1 Radial motion compensation

The straight iso-range approximation can be applied when the target is much smaller than the radar-target distance. In practice, this is equivalent to approximating the radar-target distance as follows:

R(t) ≈ R0(t) + x^T · iLoS(t),  (19.8)

where R0(t) is the distance from point “O” to the radar at time t, x is the column vector that identifies a scatterer on the target and iLoS is the column unit vector that identifies the radar Line of Sight. To further clarify this concept, the reader may refer to Figure 19.9. By substituting (19.8) into (19.6) we obtain the expression in (19.9):

SR(f, t) = W(f, t) exp[−j (4πf/c) R0(t)] ∫ ξ(x) exp[−j (4πf/c) x^T · iLoS(t)] dx.  (19.9)

The radial motion compensation aims at eliminating the phase modulation produced by the phase term outside the integral, namely

φ0(t) = exp[−j (4πf/c) R0(t)].  (19.10)

FIGURE 19.9 Straight iso-range approximation.


In ISAR applications the target is typically non-cooperative; therefore, the term in (19.10) is not known a priori. The direct consequence is that such a motion compensation must be performed blindly. Techniques used in ISAR imaging to perform this task are usually referred to as autofocusing techniques, as they aim at focusing ISAR images by compensating for the phase modulation introduced by φ0(t), which typically provokes an image defocusing effect. After perfect radial motion compensation, the compensated signal can be written as follows:

SC(f, t) = W(f, t) ∫ ξ(x) exp[−j (4πf/c) x^T · iLoS(t)] dx.  (19.11)
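A toy sketch of the compensation step in (19.10) and (19.11): here R0(t) is assumed known, whereas a real autofocus algorithm would have to estimate it blindly; all parameters are illustrative assumptions:

```python
import numpy as np

# Radial motion compensation: multiply the received time-frequency
# signal by the conjugate of the phase term phi_0(t) in (19.10).
c = 3e8
f = np.linspace(9.9e9, 10.1e9, 64)     # frequency axis (Hz)
t = np.linspace(-0.5, 0.5, 128)        # slow-time axis (s)
F, T = np.meshgrid(f, t, indexing="ij")

R0 = 1e4 + 30 * T + 5 * T**2           # assumed radial motion of point "O" (m)
phi0 = np.exp(-1j * 4 * np.pi * F * R0 / c)

S_r = phi0 * np.ones_like(F)           # received signal, unit reflectivity
S_c = S_r * np.conj(phi0)              # compensated signal, as in (19.11)

# After perfect compensation the residual phase is (numerically) zero.
print(np.max(np.abs(np.angle(S_c))))
```

In practice the second line of the compensation is the easy part; the hard part, estimating R0(t), is precisely what the autofocusing techniques mentioned above address.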

2.19.4.4.2 Range-Doppler image formation

We will now concentrate on the phase term inside the integral in (19.11). It is worth noting that the scalar product in the phase term is actually the radial coordinate x2(t). After radial motion compensation, any scatterer’s radial coordinate variation x2(t) would be generated by the target’s rotation with respect to point “O.” It is worth pointing out that only a component of the target’s rotation vector produces an effective target aspect angle variation. To explain this concept a bit more clearly, let the target’s rotation vector be represented by the sum of two components, one aligned with the radar LoS and the other orthogonal to it. This can be expressed mathematically as follows:

Ω_T = Ω_LoS + Ω_eff.  (19.12)

We will now demonstrate that the component aligned with the radar LoS (Ω_LoS) does not produce any range variation for any of the target’s scatterers. To demonstrate this concept, we will refer to Figure 19.10, where a radar is depicted together with a target with an arbitrary scatterer that rotates with respect to a reference system defined by three Cartesian

FIGURE 19.10 Effective rotation vector.


coordinates (x1, x2, x3). By assuming that the radar-target distance is much greater than the size of the target (the same assumption made for the straight iso-range approximation), and that the radar is aligned with the x2 axis, we can write the differential equation system that rules the rotational motion of a rigid body given a rotation vector Ω_T(t):

ẋ(t) = Ω_T(t) × x(t),

with the initial condition x(0) = x0. Therefore, radar-target distance changes can be measured as changes along the LoS direction (x2 axis), which can be calculated as follows:

ẋ2(t) = Ω_T3 x1(t) − Ω_T1 x3(t),

which indicates that the rotation vector component along the LoS (x2 axis) does not produce any target-radar distance change. This means that the radar is actually blind to any motion induced by the rotation vector component aligned with the LoS. The effective target rotation vector is the orthogonal component, which produces the scatterers’ range variations. Therefore, variations will be observed in the image plane, which means that only the coordinates (x1, x2) will play a role. More specifically, the effective rotation vector causes radial variations that are the result of a rotation around the effective rotation axis. The effect on the compensated signal phase can mathematically be expressed as follows:

SC(f, t) = W(f, t) ∫ ξ(x) exp{−j (4πf/c) [x1 sin(Ω_eff t) + x2 cos(Ω_eff t)]} dx.  (19.13)

We can now demonstrate that, when the effective rotation vector can be considered constant within an observation time τ, the ISAR image can be interpreted as the image of the target’s projection onto a plane, namely the Image Projection Plane (IPP). This can be done by simply manipulating Eq. (19.13), as follows:

SC(f, t) = W(f, t) ∫∫ ξ′(x1, x2) exp{−j (4πf/c) [x1 sin(Ω_eff t) + x2 cos(Ω_eff t)]} dx1 dx2,  (19.14)

where

ξ′(x1, x2) = ∫ ξ(x) dx3  (19.15)

is the target’s reflectivity function projected onto the image plane. When small aspect angles are spanned, the signal in (19.14) can be approximated with the following:

SC(f, t) ≈ W(f, t) ∫∫ ξ′(x1, x2) exp[−j (4πf/c) (x1 Ω_eff t + x2)] dx1 dx2.  (19.16)

After making the following substitutions:

ν = (2f Ω_eff / c) x1,
η = (2/c) x2,  (19.17)

Equation (19.16) can be rewritten as follows:

SC(f, t) ≈ K W(f, t) ∫∫ ξ′(η, ν) exp[−j2π(νt + f η)] dη dν.  (19.18)

It should be pointed out that the compensated signal in (19.18) can be read as a windowed Fourier Transform (FT) of the target’s projected reflectivity function. Therefore, an image of the projected reflectivity function ξ′(η, ν) can be obtained by simply applying an Inverse Fourier Transform (IFT) to the compensated signal. The result can be written as follows:

IC(η, ν) = 2D-IFT[SC(f, t)] = K w(η, ν) ⊗⊗ ξ′(η, ν),  (19.19)

where w(η, ν) is the 2D-IFT of the window W(f, t) and ⊗⊗ denotes 2D convolution. A direct interpretation of the result in (19.19) suggests that the target’s ISAR image obtained by applying a 2D-IFT to the radial motion compensated signal is a filtered version of the target’s reflectivity function projected onto the ISAR image plane.
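The Range-Doppler processing expressed by (19.18) and (19.19) can be sketched end-to-end for a few ideal point scatterers; all radar and motion parameters below are illustrative assumptions:

```python
import numpy as np

# Minimal Range-Doppler sketch: simulate the motion-compensated signal
# of ideal point scatterers with the small-angle phase model of (19.16),
# then form the image with a 2D inverse FFT as in (19.19).
c = 3e8
f0, B = 10e9, 300e6                 # carrier and bandwidth (Hz)
Tobs, omega_eff = 1.0, 0.02         # observation time (s), rotation rate (rad/s)

Nf, Nt = 128, 128
f = f0 + np.linspace(-B / 2, B / 2, Nf)
t = np.linspace(-Tobs / 2, Tobs / 2, Nt)
F, T = np.meshgrid(f, t, indexing="ij")

scatterers = [(0.0, 0.0), (5.0, 3.0), (-4.0, -2.0)]   # (x1, x2) in metres
S_c = np.zeros((Nf, Nt), dtype=complex)
for x1, x2 in scatterers:
    S_c += np.exp(-1j * 4 * np.pi * F / c * (x1 * omega_eff * T + x2))

# 2D-IFT maps (f, t) into the (delay, Doppler) image domain
image = np.abs(np.fft.fftshift(np.fft.ifft2(S_c)))
print("bright pixels:", (image > 0.5 * image.max()).sum())
```

Each scatterer collapses into a sharp peak in the (delay, Doppler) plane; the peaks are then mapped to range and cross-range via the scaling relations in (19.17).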

2.19.5 ISAR image evaluation
The interpretation of an ISAR image is not as straightforward as that of a SAR image. This is a direct consequence of the target of interest being non-cooperative. Specifically, the following issues must be considered to give a correct interpretation of an ISAR image:

• the cross-range coordinate is represented in the Doppler frequency domain and not in spatial coordinates;
• the IPP, and therefore the target projection shown in the image, is not known a priori;
• the imaging system response, namely the Point Spread Function (PSF), is not known a priori and cannot be entirely controlled;
• the cross-range resolution is not known a priori and is not a controllable parameter.

Each of these aspects is analyzed in the following subsections.

2.19.5.1 ISAR image coordinates
The coordinates η and ν, which appear in (19.19), can be interpreted as the delay-time and Doppler coordinates. This is a direct consequence of the substitutions made in (19.17). In fact, the coordinate η is the exact calculation of the round-trip delay relative to a scatterer located at the range coordinate x2, whereas the coordinate ν can be interpreted as the Doppler frequency generated by a scatterer with a radial velocity equal to v_r = Ω_eff x1. As ISAR images are typically used for target classification and recognition, a desired output would be an ISAR image represented in terms of the spatial coordinates (x1, x2). Such a coordinate scaling operation involves an inversion of the equations in (19.17), and will be detailed in Section 2.19.9. Theoretically, we may rewrite the result in (19.19) in terms of spatial coordinates, as follows:

$$I_C(x_1,x_2) = K\, w(x_1,x_2) \otimes\otimes\, \xi'(x_1,x_2). \quad (19.20)$$
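The coordinate scaling implied by inverting (19.17) can be sketched as follows; the function name `scale_coordinates` is ours, and Ω_eff is assumed to be known (its estimation is the subject of Section 2.19.9):

```python
def scale_coordinates(eta, nu, f0, omega_eff, c=3e8):
    """Map delay-time eta (s) and Doppler nu (Hz) into spatial coordinates
    (x1, x2) in metres by inverting the substitutions in (19.17)."""
    x2 = c * eta / 2.0                     # slant-range coordinate
    x1 = c * nu / (2.0 * f0 * omega_eff)   # cross-range coordinate
    return x1, x2
```

For instance, with f0 = 9 GHz and Ω_eff = 0.02 rad/s, a Doppler of 3.6 Hz corresponds to a cross-range of 3 m.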


2.19.5.2 Additional considerations on the image projection plane and limitations of the use of the RD technique
As demonstrated in Eqs. (19.14) and (19.15), the function that maps the target's 3D complex reflectivity function onto the 2D image domain is a projection along the target's effective rotation vector. Nevertheless, it should be pointed out that such a result is valid under the following assumptions:

• far field (validity of the straight iso-range approximation);
• constant target rotation vector.

With reference to the geometry described in Section 2.19.4.2, it should be noted that the range position in the ISAR image only depends on the x2 coordinate, whereas the Doppler position only depends on the x1 coordinate. The dependence of the range position in the ISAR image on the x2 coordinate is the direct consequence of the choice of the reference system Tx, which has the x2 axis aligned with the radar range coordinate ξ2. The dependence of the Doppler coordinate on the x1 axis is demonstrated in (19.17), where a linear relationship holds between the two quantities. It is then evident that the x3 coordinate does not affect either the range or the Doppler position of a scatterer in the ISAR image. This can be interpreted in the following terms: two scatterers located in two positions that differ only in the third coordinate, namely x3, are mapped onto the same range and Doppler. Therefore, their contributions are coherently summed. By extending this concept, we can say that a range-Doppler bin in an ISAR image will show a complex intensity that is the result of the sum of all contributions from those scatterers that have the same coordinates in terms of (x1, x2) and differ in their x3 coordinate. This is equivalent to saying that the target is projected onto the (x1, x2)-plane, which can be interpreted as the IPP. It is worth pointing out that such a result is critical when interpreting ISAR images; in fact:

• ISAR images are images of the target's projection onto a plane (IPP);
• the IPP is not known a priori and, therefore, the interpretation of the ISAR image is consequently difficult.

2.19.5.3 Point spread function
ISAR image quality can be assessed by defining the PSF. The PSF is the image of an ideal point-like scatterer located in a generic position x. Therefore, we can easily calculate the ISAR image PSF when the RD technique is used to form the image. This can be done mathematically as follows:

$$\mathrm{PSF}(x_1,x_2) = K\, w(x_1,x_2) \otimes\otimes\, A\,\delta(x_1 - x_{10},\, x_2 - x_{20}) = K'\, w(x_1 - x_{10},\, x_2 - x_{20}). \quad (19.21)$$

It should be noted that:

• the PSF is space-invariant: an ideal scatterer is imaged in the same way independently of its position;
• the image geometrical resolution only depends on the characteristics of the window W(f, t).

2.19.5.4 ISAR image resolution
The ISAR image resolution can be obtained by analyzing the PSF. In order to analyze the PSF in terms of spatial coordinates, we will rewrite (19.14) by introducing the concept of spatial frequency coordinates.


Mathematically, this can be done as follows:

$$S_C(f,t) \simeq W(f,t)\iint \xi'(x_1,x_2)\exp\left[-j2\pi\left(x_1 X_1 + x_2 X_2\right)\right] dx_1\, dx_2, \quad (19.22)$$

where

$$X_1 = \frac{2f}{c}\sin\left(\Omega_{\mathrm{eff}}\, t\right), \qquad X_2 = \frac{2f}{c}\cos\left(\Omega_{\mathrm{eff}}\, t\right).$$

In particular, in the case of small aspect angle variations, i.e., when Ω_eff t ≪ 1, |t| < Δt/2, the polar domain described by the parametric functions in (19.22) can be approximated by a rectangular domain, as shown in Figure 19.11. Under the same assumption, the spatial frequency coordinates defined in (19.22) can be approximated with the following:

$$X_1 = \frac{2 f_0}{c}\,\Omega_{\mathrm{eff}}\, t, \qquad X_2 = \frac{2f}{c}. \quad (19.23)$$

It is worth noting that for X1 the frequency f has been substituted by the central frequency f0, as a result of the approximation of the polar domain by a rectangular window, which intersects the angular sector at the coordinate X2 = 2f0/c. It should also be noted that this approximation is the one that leads to the minimum error, as can be inferred by examining Figure 19.11, where the polar and rectangular windows are superposed.

FIGURE 19.11 Fourier domain—rectangular approximation.


By considering the rectangular approximation, we can rewrite the acquisition window W(f, t) in terms of spatial frequencies. This can be done mathematically by substituting (19.23) into (19.7), as follows:

$$W(X_1, X_2) = \mathrm{rect}\left[\frac{c\,X_1}{2 f_0\,\Omega_{\mathrm{eff}}\,\Delta t}\right]\mathrm{rect}\left[\frac{c\,(X_2 - X_{20})}{2B}\right], \quad (19.24)$$

where X20 = 2f0/c. The PSF can be obtained by calculating the IFT of (19.24), as follows:

$$w(x_1, x_2) = K\,\mathrm{sinc}\left(\frac{x_1}{\delta x_1}\right)\mathrm{sinc}\left(\frac{x_2}{\delta x_2}\right),$$

where

$$\delta x_1 = \frac{c}{2 f_0\,\Omega_{\mathrm{eff}}\,\Delta t}, \qquad \delta x_2 = \frac{c}{2B}$$

are the cross-range and range resolutions, respectively. The following remarks can be made.

• The range resolution depends on the transmitted signal bandwidth. The direct consequence of this is that the range resolution can be directly controlled (by setting the desired bandwidth B) or, at least, calculated. Therefore, it can be considered a known parameter.
• The cross-range resolution depends on the target's motions. This results in a parameter that is not controllable, which means that it cannot be defined by setting one or more radar parameters. For the same reason, the cross-range resolution is an unknown parameter.
• An ISAR imaging system's performance in terms of resolution cannot be predicted unless the target's motions are known. This uncertainty may translate into uncertainties in the performance of classifiers based on ISAR images.
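The two resolution formulas can be wrapped in a small helper (the name `isar_resolutions` is ours), which makes the asymmetry of the remarks explicit: B is a radar parameter, while Ω_eff is a property of the target's motion:

```python
def isar_resolutions(B, f0, omega_eff, dt, c=3e8):
    """delta_x1 = c / (2 f0 Omega_eff dt): cross-range resolution
    (target dependent); delta_x2 = c / (2B): range resolution
    (radar controlled, via the transmitted bandwidth B)."""
    delta_x1 = c / (2.0 * f0 * omega_eff * dt)
    delta_x2 = c / (2.0 * B)
    return delta_x1, delta_x2
```

For example, B = 300 MHz gives δx2 = 0.5 m, while f0 = 10 GHz, Ω_eff = 0.05 rad/s and Δt = 1 s give δx1 = 0.3 m.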

It must be pointed out that the uncertainty about the cross-range resolution is the same problem as that of cross-range scaling. A technique that aims to solve this problem by estimating the effective rotation vector magnitude will be presented in Section 2.19.9.

2.19.6 Examples of ISAR images
Depending on the type of targets to be imaged, ISAR systems and platforms may differ, so as to cover the areas of interest and improve performance. For instance, ISAR imaging of sea vessels is often carried out by means of coastal radars, airborne radars and also spaceborne radars, whereas images of aircraft are more likely to be obtained by means of ground-based radars. Some specific ISAR imaging systems are also employed to image spaceborne objects, such as the Fraunhofer TIRA system. An example of a sea vessel ISAR image obtained by processing data collected by an airborne radar is shown in Figure 19.12, together with an aerial picture of the same target. It is interesting to note some important features, such as the ship's cranes, which may be used by a classifier to recognize the type of target. Another example, an aircraft ISAR image, is shown in Figure 19.13. This ISAR image was obtained by processing data collected by a ground-based radar located near an airport. The target, a Boeing 737,


Aerial photo of the target

FIGURE 19.12 Airborne radar imaging of a moving sea vessel (bulk loader).


FIGURE 19.13 Ground-based radar imaging of a flying aircraft (Boeing 737).


was taking off during the acquisition. Also in this case, features such as length and wingspan may be used to classify this target. The last example shows a spaceborne SAR image of a non-cooperative sea vessel together with the image of the same target obtained by using an ISAR processor. It is worth noting that the SAR processor is unable to focus the target, as it is moving with respect to the rest of the illuminated scene (see Figure 19.14).

2.19.7 Image autofocus
Radial motion compensation is accomplished by means of a step in the ISAR processing chain that is typically termed image autofocus. This operation is accomplished by removing the phase term φ0(t) (see Eq. (19.10)). When no external data are available, the motion compensation must be performed by using only the radar received signal. For this specific reason, the image focusing process is called ISAR image autofocus. Over many years of research in the field of ISAR autofocus, several techniques have been developed, each with its own pros and cons. Autofocus algorithms can be classified as parametric or non-parametric [5]. Parametric methods need a parametric model of the radar received signal, whereas non-parametric techniques do not make use of any model. Two parametric and two non-parametric techniques are detailed in the following subsections.

2.19.7.1 ICBA
The autofocus technique adopted here is the Image Contrast Based Autofocus (ICBA) algorithm [6]. The ICBA is a parametric technique based on Image Contrast (IC) maximization. The idea behind this approach is the simple observation that an image is more focussed when the value of the IC is larger. The autofocus technique aims at removing the term R0(t) due to the target's residual translational motion. For a relatively short observation time Δt and relatively smooth target motions, the radar-target residual distance can be expressed by means of an Nth order polynomial, as defined in (19.25). It is worth pointing out that a smooth target motion does not imply that the target undergoes slow motions, but that the function representing the target's motion is continuous and differentiable:

$$R_0(t) = \sum_{n=0}^{N} \frac{1}{n!}\,\alpha_n t^n. \quad (19.25)$$

In Eq. (19.25), αn are the focusing parameters, which can be collected in a vector, namely α. The estimation of R0(t) resorts to the estimation of the target radial motion parameters. By denoting the radial motion model as R0(α, t), the radial motion compensation problem can be recast as an optimization problem where the Image Contrast (IC) is maximized with respect to the unknown vector α, as defined in (19.26):

$$\hat{\boldsymbol{\alpha}} = \arg\max_{\boldsymbol{\alpha}}\left\{IC(\boldsymbol{\alpha})\right\}, \quad (19.26)$$


FIGURE 19.14 Spaceborne SAR and ISAR images of a moving sea vessel—ISAR imaging is required to focus non-cooperative moving targets.

where

$$IC(\boldsymbol{\alpha}) = \frac{\sqrt{A\left\{\left[I(\eta,\nu,\boldsymbol{\alpha}) - A\left\{I(\eta,\nu,\boldsymbol{\alpha})\right\}\right]^2\right\}}}{A\left\{I(\eta,\nu,\boldsymbol{\alpha})\right\}}, \quad (19.27)$$

and where A{·} indicates an average operation over the variables η and ν, and I(η, ν, α) is the ISAR image magnitude or intensity (power) after compensating the target's translational motion by using α as focusing parameters. This can be expressed mathematically as follows:

$$I(\eta,\nu,\boldsymbol{\alpha}) = \left|\text{2D-IFT}\left[S_R(f,t)\exp\left(j\frac{4\pi f}{c}R_0(\boldsymbol{\alpha},t)\right)\right]\right|^p, \quad (19.28)$$

where p = 1 in the case of image magnitude and p = 2 in the case of image intensity. The optimization problem can be solved numerically by using classical methods, such as the Nelder-Mead algorithm [7], or more recent Genetic Algorithms (GA) [8].
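The ICBA principle can be illustrated with the following toy sketch, restricted to a single quadratic focusing parameter α2; an exhaustive grid search stands in for the Nelder-Mead or GA solvers mentioned above, and all parameter values and function names are our own assumptions:

```python
import numpy as np

def image_contrast(img):
    """IC as in (19.27): standard deviation of the image intensity
    normalized by its mean."""
    a = np.abs(img) ** 2
    return np.sqrt(np.mean((a - a.mean()) ** 2)) / a.mean()

def icba_quadratic(sr, f, t, alpha2_grid, c=3e8):
    """Pick the alpha2 maximizing the IC after compensating the residual
    radial motion R0(t) = 0.5 * alpha2 * t**2 (cf. (19.26)-(19.28))."""
    F, T = np.meshgrid(f, t, indexing="ij")
    def contrast(a2):
        comp = sr * np.exp(1j * 4 * np.pi * F / c * 0.5 * a2 * T ** 2)
        return image_contrast(np.fft.ifft2(comp))
    return max(alpha2_grid, key=contrast)

# Synthetic check: a single scatterer defocused by a known alpha2.
c = 3e8
f = 9e9 + np.linspace(-150e6, 150e6, 32)
t = np.linspace(-0.5, 0.5, 32)
F, T = np.meshgrid(f, t, indexing="ij")
alpha2_true = 40.0
sr = np.exp(-1j * 4 * np.pi * F / c * (2.0 + 0.5 * alpha2_true * T ** 2))
alpha2_hat = icba_quadratic(sr, f, t, np.linspace(0.0, 80.0, 41))
```

When the compensation exactly matches the residual motion, the image collapses to a sharp peak and the contrast is maximal.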

2.19.7.2 IEBA
An approach similar to the one proposed in Section 2.19.7.1 can be devised by substituting the IC with the Image Entropy (IE). As with the IC, the IE is a good indicator of the image focus. Unlike the IC, the IE takes small values when the image is well focussed, whereas it reaches large values when the image is not well focussed. The mathematical expression that may be used to calculate the IE follows in (19.29):

$$IE(\boldsymbol{\alpha}) = -\iint \bar{I}(\eta,\nu,\boldsymbol{\alpha})\ln\left[\bar{I}(\eta,\nu,\boldsymbol{\alpha})\right] d\eta\, d\nu, \quad (19.29)$$

where $\bar{I}(\eta,\nu,\boldsymbol{\alpha}) = \dfrac{I(\eta,\nu,\boldsymbol{\alpha})}{A\left\{I(\eta,\nu,\boldsymbol{\alpha})\right\}}$.
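A discrete counterpart of (19.29) can be sketched as follows; here the intensity is normalized by its sum rather than its mean, a common discrete variant, so this is an illustrative stand-in rather than the chapter's exact definition:

```python
import numpy as np

def image_entropy(img, eps=1e-12):
    """Entropy of the sum-normalized image intensity: low for well
    focussed images, high for defocused ones."""
    p = np.abs(img) ** 2
    p = p / (p.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))
```

A single-pixel image (perfect focus) has near-zero entropy, whereas a uniform image reaches the maximum entropy, ln(number of pixels).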

2.19.7.3 DSA
The Dominant Scatterer Autofocus (DSA), also known as Hot Spot (HS), is a two-stage technique: the first stage sets up a rough alignment of all the range profiles, before a form of phase compensation is applied in the second stage. The principles for this technique were obtained by delving into two other areas of research, namely time delay estimation [9] and adaptive beamforming [10]. A brief overview of the algorithm follows, whilst a more complete description may be found in [11]. After measuring and storing the complex envelopes of the echo samples, high resolution range profiles can be generated. Let s_R(η, t) be a range profile acquired at time t. A cross-correlation and shift operation can be performed between successive range profiles in order to obtain a rough range profile alignment. Let us refer to the roughly aligned profiles as

$$s_1(\eta,t) = A(\eta,t)\exp\left[j\varphi(\eta,t)\right], \quad (19.30)$$

where η indicates a range cell and t is the slow time. A search along the range coordinate is then performed in order to find a dominant and stable scatterer. The range cell where such a scatterer is found is called the reference range η0:

$$s_0(t) = A(\eta_0,t)\exp\left[j\varphi_0(t)\right], \qquad A(\eta_0,t) \simeq A. \quad (19.31)$$


The value of η0 is found by measuring the normalized echo variance in each range cell, and is determined as the range for which the variance is minimum. This approach relies on the assumption that a dominant scatterer with large radar cross-section exists and, therefore, that the measured phase can be attributed to the phase generated by a single point scatterer. The next step is to perform a phase conjugation using the phase history of s0(t). In particular, by applying it to the range cell data corresponding to η0, the following result is obtained:

$$s_2(t) = A(\eta_0,t) \simeq A. \quad (19.32)$$

By applying the same operation to all other range cells, motion compensation is achieved:

$$s_C(\eta,t) = A(\eta,t)\exp\left\{j\left[\varphi(\eta,t) - \varphi_0(t)\right]\right\}. \quad (19.33)$$

This algorithm is known as the Minimum Variance Algorithm (MVA) or the Dominant Scatterer Algorithm (DSA), due to the criterion used to choose the reference range cell. A more robust version of this algorithm, which combines the echoes from several reference range cells, will now be considered. Called the Multiple Scatterer Algorithm (MSA) [11], this modified algorithm essentially averages the phase differences of M reference scatterers (after unwrapping) to provide the phase correction. Typically three reference range bins (M = 3) are sufficient to produce focussed images. This concept is translated into mathematical details as follows. Let the mth reference cell be represented as

$$s_m(\eta_m,t) = A(\eta_m,t)\exp\left[j\varphi_m(t)\right], \qquad A(\eta_m,t) \simeq A, \quad (19.34)$$

and let M be the number of selected range cells. Therefore, the estimate of the phase history φ0(t) can be obtained by averaging the phase histories of the selected range cells, as shown in Eq. (19.35):

$$\hat{\varphi}_0(t) = \frac{1}{M}\sum_{m=1}^{M}\varphi_m(t). \quad (19.35)$$

Phase conjugation is then carried out as before (cf. Eqs. (19.32) and (19.33)). The DSA algorithm is summarized in Figure 19.15.
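A compact numerical sketch of the DSA/MSA chain on already-aligned range profiles follows; the array layout and the name `dsa_msa` are our assumptions (M = 1 gives the DSA, M > 1 the MSA):

```python
import numpy as np

def dsa_msa(profiles, M=1):
    """profiles: complex array (n_range_cells, n_slow_time) of roughly
    aligned range profiles. Selects the M range cells with minimum
    normalized amplitude variance, averages their unwrapped phase
    histories (19.35), and conjugates the result out of every cell (19.33)."""
    amp = np.abs(profiles)
    norm_var = amp.var(axis=1) / (amp.mean(axis=1) ** 2 + 1e-12)
    ref = np.argsort(norm_var)[:M]                         # reference cells
    phi0 = np.unwrap(np.angle(profiles[ref]), axis=1).mean(axis=0)
    return profiles * np.exp(-1j * phi0)[None, :]
```

On synthetic data with one constant-amplitude dominant scatterer, the common phase history is removed exactly from every range cell.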

2.19.7.4 PGA The extension of the original DSA proposed in [11] leads to a general question about how much information remains in those range cells that are discarded after selecting the MV range cell that could

FIGURE 19.15 DSA algorithm flow chart.


FIGURE 19.16 PGA flow chart.

still be used to improve the phase error estimation. An answer to this question may be found in the solution proposed by Jakowatz et al. [12], namely the Phase Gradient Algorithm (PGA). The PGA substitutes the phase conjugation approach used in the DSA with a solution based on a Maximum Likelihood (ML) estimator. The ML approach theoretically uses the information contained in all range cells, although, in practice, restricting the estimation to those range cells where the SNR is high enough improves the PGA performance. As shown in Figure 19.16, a Range-Doppler ISAR image is formed with the roughly aligned range profiles. The peak value in each range cell, which is assumed to be the return from a dominant scatterer, is first found along the Doppler coordinate and then center-shifted and windowed in the Doppler domain (low-pass filtering). Each range cell is then transformed back via an Inverse Fourier Transform (IFT) to obtain phase shifted and filtered time histories. This operation corresponds to isolating a scatterer's contribution and forcing it to zero Doppler, which can be interpreted as a way to remove the scatterer's radial motion. Let an arbitrary kth range cell time history be represented as in (19.36), where two consecutive samples are considered:

$$g(k, m-1) = a(k)\exp\left[j\varphi(m-1)\right] + n(k, m-1),$$
$$g(k, m) = a(k)\exp\left[j\varphi(m)\right] + n(k, m), \quad (19.36)$$

where k indicates the range cell number, m indicates the mth time sample, a(k) is the amplitude, which is assumed constant for two consecutive samples, φ(m) is the phase at time m and n(k, m) is an additive white Gaussian noise sample. The phase gradient between two consecutive samples can be defined as

$$\Delta\varphi(m) = \varphi(m) - \varphi(m-1). \quad (19.37)$$

The PGA aims to estimate the phase gradient expressed in (19.37) by using the ML approach. The theoretical derivation of the ML estimator and its performance are detailed in [12]. The solution is shown in (19.38):

$$\Delta\hat{\varphi}(m) = \angle\left[\sum_{k=1}^{N} g(k,m)\, g^*(k,m-1)\right], \quad (19.38)$$


FIGURE 19.17 ISAR images formed by applying autofocusing algorithms: (upper-left) DSA, (upper-right) PGA, (bottom-left) IEBA, (bottom-right) ICBA.

where N is the number of range cells used to estimate the phase error and the symbol ∠(·) indicates the phase of a complex number. The phase correction term can then be calculated by integrating the estimated phase gradient, as follows:

$$\hat{\varphi}(m) = \sum_{n=2}^{m}\Delta\hat{\varphi}(n), \qquad \hat{\varphi}(1) = 0. \quad (19.39)$$

Examples of the application of the DSA, PGA, IEBA and ICBA to a ship target are shown in Figure 19.17. Results show that all techniques are able to correctly focus the ISAR image although small differences can be noticed among the images.


2.19.8 Time-windowing
As already stated, the RD technique can be successfully applied when the effective rotation vector does not change significantly during the CPI. However, the target's own motion may induce a non-uniform target rotation vector. In order to minimize target rotation variations, the CPI can be controlled via a time-windowing approach. Typically, an operator would define a fixed time window length (CPI) and would process the entire dataset by sliding the window and forming ISAR images. The same operator would then select the ISAR images that are suitable for target classification or recognition. One of the most important requirements for such images is a good level of focus, as the target's details are then sharper than in defocused images. Short CPIs are more likely to provide well focussed images, although the resolution may be poor due to a small total aspect angle variation. On the other hand, long CPIs are more likely to provide wider aspect angle variations, although they increase the chance that the target's effective rotation vector is time-varying and therefore produce defocused images. It is quite evident that a trade-off must be identified to obtain an optimal result, based on obtaining both a high resolution and a well focussed image (see Video Files 1, 2, 3). The technique described here is an automatic time-windowing technique, originally proposed in [13]. Specifically, the time window position across the data and its length are automatically chosen in order to obtain one or more images with the highest focus. To better explain this concept, we refer to Figure 19.18, where the data is represented as distributed along the time axis and a temporal window is defined by two parameters, namely its position τ and its length Δt. The criterion used to define the highest focused image is based on the IC. Basically, the IC is used as an indicator of the image focus, which is assumed to be maximum when the position (τ) and length (Δt) identify a time window that selects a data subset where the conditions of constant rotation vector and resolution are optimal. It is worth highlighting that the criterion of optimality adopted here is based on best image focus.

FIGURE 19.18 Time-windowing concept.


Therefore, the optimal time window position and length are obtained by maximizing the IC with respect to the pair (τ, Δt). The following optimization problem can thus be formulated:

$$(\tau_{opt}, \Delta t_{opt}) = \arg\max_{(\tau,\Delta t)}\left\{IC(\tau, \Delta t)\right\}, \quad (19.40)$$

where IC(τ, Δt) is obtained by redefining (19.27) with respect to the two new time variables. It must be noticed that the variables (τ, Δt) are discrete, as the data in input to an ISAR processor is digitised. Therefore, the problem in (19.40) is a discrete optimization problem. Specifically, such a problem can be classified as a non-linear Knapsack Problem [14]. The solution proposed in [13] is based on a double linear search, which can be briefly defined as follows:

1. Maximization of the contrast with respect to τ for a given guess Δt^(in). Let τ_opt be the solution of such a maximization.
2. Optimization with respect to Δt with τ = τ_opt.

This procedure is depicted in Figure 19.19 for the sake of clarity. The justification of this procedure is heuristic. It can be observed, both in simulated and real ISAR data of several types of targets, that the position of the optimal time window is quite independent of the window length. This means that the IC peak position along the τ axis does not change when the window length changes. Physically, this can be explained by the fact that the target's own motion will be characterized by regular motions at given times and less regular motions at other times. The reader may think of a ship that undergoes pitch and roll due to the sea surface waves: a regular motion may be disturbed by an incoming wave, generating complex motions which cause the rotation vector to change rapidly. An example is provided in Figure 19.20, where the IC is calculated by moving four fixed-length windows along the time axis τ. It can be noticed that the position of the peaks is practically the same for all four windows. As an example, we apply the technique of optimal time-windowing to an ISAR dataset of an airplane, namely a Boeing 737. The results are shown in Figure 19.21, where three images are displayed. The first image (Figure 19.21a) is obtained by processing a short CPI dataset (Δt = 0.41 s). It is quite evident that, although the image is well focussed, the resolution is poor. The second image (Figure 19.21b) shows the case of a long CPI (Δt = 3.27 s). The result shows a potentially finer resolution, but at the same time a strong defocusing effect produced by the time-variance of the target's rotation vector. The third image (Figure 19.21c) shows the result obtained by automatically selecting the optimal time window, which resulted in a CPI equal to Δt = 0.87 s. The conclusion is that the image produced by means of the automatic optimal time window selection shows fine details due to a fine resolution, whilst retaining a good level of image focus.
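The double linear search can be sketched generically as follows; `ic_of_window` is a hypothetical callable that forms the ISAR image from the window (τ, Δt) and returns its IC, standing in for the full processing chain of [13]:

```python
def double_linear_search(ic_of_window, taus, dts, dt_guess):
    """Step 1: maximize the IC over the window position tau at a guessed
    length dt_guess; step 2: maximize over the length dt at that position."""
    tau_opt = max(taus, key=lambda tau: ic_of_window(tau, dt_guess))
    dt_opt = max(dts, key=lambda dt: ic_of_window(tau_opt, dt))
    return tau_opt, dt_opt
```

The heuristic relies on the IC peak position along τ being nearly independent of the window length, as observed above.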

FIGURE 19.19 Optimal time window estimator.


FIGURE 19.20 IC as a function of time by using four different time window lengths.

Often, a sequence of ISAR images is used for classifying or recognizing a target, as the information about the target increases as the number of images increases. A direct consequence of the use of multiple images, especially in the case of highly maneuvering or oscillating targets, is the change in the IPP, which leads to different target projections. This typically gives a better indication of the geometrical features of the target, which can then be used for classification purposes. To this purpose, it would be desirable to obtain a number of well focussed ISAR images out of a long target observation time. An extension of the time-windowing approach that provides multiple well-focussed images can be obtained by selecting multiple peaks from the maximum peak locator and iteratively finding the optimal time window length for each of the selected peaks. The implementation of this extension is straightforward and is not worth describing in detail.

2.19.9 Image scaling
Inverse Synthetic Aperture Radar generates two-dimensional, high resolution images of targets in the time delay-Doppler domain. In order to determine the size of the target, it is necessary to have a fully scaled image. The range scaling can be performed by using the well known relationship r = cη/2, where r is the slant range coordinate and η is the time delay. On the other hand, cross-range scaling requires the estimation of the modulus of the target's effective rotation vector.


FIGURE 19.21 ISAR image obtained by processing (a) a short CPI dataset (0.41 s), (b) a long CPI dataset (3.27 s), (c) the optimal CPI (0.87 s).


In this subsection, we illustrate an algorithm that is based on the assumption of quasi-constant target rotation [15]. When the target's rotation vector can be assumed constant within the CPI, the chirp rate produced by the scattering centers can be related to the modulus of the target's effective rotation vector by means of an analytical expression. Therefore, each scattering center carries information about the modulus of the target's rotation vector through its chirp rate. To demonstrate this, we consider the radial motion compensated signal in (19.14). The phase term inside the integral can be approximated with a second order Taylor polynomial, as follows:

$$\varphi(f,t;x_1,x_2) = -\frac{4\pi f}{c}\left(x_2 + x_1\Omega_{\mathrm{eff}}\, t - \frac{1}{2}x_2\Omega_{\mathrm{eff}}^2 t^2\right). \quad (19.41)$$

It should be noted that the phase in (19.41) represents the phase of a chirp signal, where the second order term coefficient is usually referred to as the chirp rate. The echo of an ideal scatterer located in (x̄1, x̄2), with reflectivity function ξ̄(x̄1, x̄2)δ(x1 − x̄1, x2 − x̄2), can be written by approximating its phase with the expression in (19.41), as follows:

$$S_C(f,t;\bar{x}_1,\bar{x}_2) = W(f,t)\,\bar{\xi}(\bar{x}_1,\bar{x}_2)\exp\left[-j\frac{4\pi f}{c}\left(\bar{x}_2 + \bar{x}_1\Omega_{\mathrm{eff}}\, t - \frac{1}{2}\bar{x}_2\Omega_{\mathrm{eff}}^2 t^2\right)\right]. \quad (19.42)$$

The range compressed profile can be obtained by applying a FT to the signal in (19.42) along the coordinate f. This can be mathematically expressed as follows:

$$R_p(\eta,t;\bar{x}_1,\bar{x}_2) = B\,\bar{\xi}(\bar{x}_1,\bar{x}_2)\,\mathrm{sinc}\left[B\left(\eta - \frac{2}{c}\bar{x}_2\right)\right]\mathrm{rect}\left(\frac{t}{\Delta t}\right)\exp\left[-j\frac{4\pi f_0}{c}\left(\bar{x}_2 + \bar{x}_1\Omega_{\mathrm{eff}}\, t - \frac{1}{2}\bar{x}_2\Omega_{\mathrm{eff}}^2 t^2\right)\right]. \quad (19.43)$$

If a method for perfectly estimating the chirp rate m of a given scatterer were available, the following equation could be written:

$$m = \frac{2 f_0}{c}\, x_2\, \Omega_{\mathrm{eff}}^2. \quad (19.44)$$

Therefore, as the scatterer's range coordinate x2 can be readily obtained by measuring the delay-time η, the effective rotation vector modulus can be obtained by inverting Eq. (19.44), as follows:

$$\Omega_{\mathrm{eff}} = \sqrt{\frac{c\, m}{2 f_0\, x_2}}. \quad (19.45)$$

In practice, a scatterer's chirp rate, as well as its range, must be estimated from the received data. Therefore, the estimation of the effective rotation vector magnitude will in general be affected by an error. Techniques for estimating the chirp rates of the target's scattering centers have been proposed that make use of atomic decomposition [16], the CLEAN technique [17,18], and the IC-based method proposed in [15]. To make the estimation more accurate and robust, the chirp rates of a number of the target's scatterers can be measured together with their ranges. The problem of estimating the effective rotation vector


magnitude is then transformed into a problem of estimating the slope of a straight line that fits the scatterplot generated by the set of range and chirp rate estimates. One way of solving this problem is to apply an LSE approach [19]. The mathematical problem can be set as follows:

$$m_k = a\, x_{2k} + \epsilon_k, \quad (19.46)$$

where $a = \frac{2 f_0}{c}\Omega_{\mathrm{eff}}^2$, and m_k, x_{2k}, and ε_k are the chirp rate estimate, the range estimate and the estimation error for the kth scatterer, respectively. The LSE problem and its solution for the estimation of a are given in (19.47):

$$\hat{a} = \arg\min_{a}\sum_{k=1}^{N}\epsilon_k^2 = \frac{N\sum_{k=1}^{N}\hat{m}_k x_{2k} - \sum_{k=1}^{N}\hat{m}_k\sum_{k=1}^{N} x_{2k}}{N\sum_{k=1}^{N} x_{2k}^2 - \left(\sum_{k=1}^{N} x_{2k}\right)^2}. \quad (19.47)$$

An example is provided in Figure 19.22, where the range vs. chirp rate scatterplot is shown for a large ship. The linear relationship between range and chirp rate, theoretically predicted by Eq. (19.46), is quite evident. The fully scaled ISAR image of the ship is shown in Figure 19.23. The full representation of the ISAR image in spatial coordinates allows important features, e.g., the ship's length, to be measured directly from the ISAR picture. To conclude this section, we provide a few remarks regarding the effective applicability of this cross-range scaling technique. First of all, as with any other cross-range scaling technique, its application is effective only if a constant effective rotation vector applies during the CPI. In fact, such a condition is necessary to establish a linear relationship between Doppler and cross-range coordinates. Moreover, in the specific case of the technique discussed in this section, a well focussed image must be produced in order to make sure that quadratic phases are associated with single scatterers' motions. The number of scattering centers also plays an important role, as the accuracy of the effective rotation vector estimation generally improves when the number of independent scattering centers increases.

FIGURE 19.22 Range vs. chirp rate scatterplot.


FIGURE 19.23 Fully scaled ISAR image.

2.19.10 Time-frequency image formation The RD technique is based on the assumption that the Doppler frequency of each scatterer, relative to point O, is constant during the observation time. This hypothesis is usually valid for low spatial resolution (of the order of one meter) and when the target does not undergo fast maneuvers and/or is affected by significant oscillating motions, such as pitch, roll and yaw (typical of sea vessels). When very high spatial resolutions are required (of the order of ten centimeters), typically, a longer integration time is needed and the Doppler frequency associated with each scatterer becomes time-varying. The situation is also aggravated when the target maneuvers or when it undergoes angular motions, as in the case of ships. In these cases, the target’s rotation vector is not constant and the RD technique fails because of the spreading effects due to the time-varying frequency of each scatterer’s contribution. To solve this problem, the Fourier approach employed by the RD technique is replaced by Time Frequency Transforms (TFTs), which are suitable for the analysis of non-stationary signals. Specifically, bilinear TFTs, such as those described by the Cohen’s class [20], prove effective when dealing with ISAR signals, which, up to some extent, can be approximated with chirp-like signals (second order phase terms). Pioneering work in this sense was delivered by Victor Chen, which is mainly collected in [21], although more work in this area followed [22,23]. In this subsection, we will recall the work done in [23], as it aims to analytically derive the ISAR image PSF in when using bilinear TFTs. The PSF is derived in the case of Wigner-Ville transform, which is the basic Cohen’s class TFT. The derivation in the case of all the other TFTs can be obtained by simply filtering the data in the adjunct Fourier domain, which is the Ambiguity Function domain, as demonstrated in [20]. 
In order to derive the PSF, we will consider the signal model in (19.42), where the motion-compensated received signal relative to a single ideal scatterer located at coordinates (x̄₁, x̄₂) is considered. It is


worth noting that a simple RD approach would lead to a smeared image reconstruction along the cross-range (Doppler) coordinate, due to the presence of a quadratic phase (chirp) term. The generic Cohen's class TFT can be analytically expressed as follows:

CTFT(t, ν) = ∭ K(θ, τ) s(u + τ/2) s*(u − τ/2) exp(−jθt) exp(−j2πντ) exp(jθu) du dτ dθ,  (19.48)

where s(t) is the signal to be transformed and K(θ, τ) is the transform kernel, which defines any specific bilinear TFT belonging to Cohen's class. The Wigner-Ville (WV) is the particular TFT obtained from Cohen's class by posing K(θ, τ) = 1. The WV is analytically expressed in (19.49):

WV(t, ν) = ∫ s(t + τ/2) s*(t − τ/2) exp(−j2πντ) dτ.  (19.49)

After applying (19.49) to the range-compressed signal in (19.43), we obtain a data cube where, for each range cell, a time-frequency representation of the data is available. With respect to the Fourier Transform, this approach has the advantage of capturing the time-varying signal spectrum. From an ISAR imaging perspective, it is important to note that for each time instant t, an ISAR image can be obtained in the time-delay (η) and Doppler (ν) domain. The ISAR image obtained for the time instant t = 0 is shown in (19.50):

I_TFT(η, ν) = I_TFT(η, t, ν)|_{t=0} = B² |ξ(x̄₁, x̄₂)|² sinc²[B(η − (2/c)x̄₂)] exp(−j2πf₀η) sinc[2T_obs(ν − (2f₀/c)Ω_eff x̄₁)].  (19.50)

By looking at the analytical expression in (19.50), it is worth pointing out that:

• the effect of chirp-like terms is completely canceled out by the bilinear transform, which has the ability to cancel all even-order phase terms, including the quadratic term;
• the bilinear transform produces a squared sinc function along the delay-time coordinate;
• the width of the Doppler resolution cell is halved with respect to the Fourier approach, as the sinc argument spans twice the observation time.

As a side effect, cross-terms are typically introduced by bilinear transforms, resulting in fake scatterer images. This effect can be very detrimental when ISAR images are used for target recognition, as false scatterers may appear in critical positions. Specific transform kernels K(θ, τ) have the property of removing or attenuating the cross-terms. Nevertheless, a trade-off between cross-term removal and Doppler resolution loss must be found when selecting the kernel function. A suitable kernel function can be designed as the product of two single-variable functions, as follows:

K(θ, τ) = F(θ)G(τ),  (19.51)

where F(θ) and G(τ) may be two weighting windows, such as Hamming, Kaiser, and so on. The shape parameters of such weighting functions can be determined based on the required level of cross-term suppression and Doppler resolution loss [23]. The choice of a kernel function such as that in (19.51) leads to the definition of the Smoothed Pseudo Wigner-Ville (SPWV), which is typically


FIGURE 19.24 ISAR image obtained by applying the Wigner-Ville TFT.

implemented in ISAR applications, as it provides a flexible solution to the problem of cross-terms and resolution. An example of the application of weighting windows is shown in Figures 19.24 and 19.25, where the results obtained by applying the WV and the SPWV are shown. In particular, the SPWV is applied by selecting a Kaiser window with shape parameter K = 3. The reader should note that the cross-terms present in Figure 19.24 tend to disappear in Figure 19.25, although the resolution tends to get worse. A trade-off between cross-term reduction and resolution loss must always be taken into account when applying TFTs to ISAR imaging.
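The cross-term suppression obtained with a separable kernel can be sketched as follows (Python/NumPy). The two windows stand in for F(θ) and G(τ) of (19.51); Hamming windows and all signal parameters are illustrative choices, not the chapter's Kaiser-window setup.

```python
import numpy as np

def spwv(s, g, h):
    """Smoothed Pseudo Wigner-Ville (a sketch).

    The separable kernel K(theta, tau) = F(theta)G(tau) acts here as two
    windows: h windows the lag tau (frequency smoothing), while g smooths
    over time and suppresses the oscillating cross-terms.
    """
    N = len(s)
    Lg, Lh = len(g) // 2, len(h) // 2
    tfr = np.zeros((N, N))
    for n in range(N):
        r = np.zeros(N, dtype=complex)
        for k in range(-Lh, Lh + 1):
            acc = 0.0
            for m in range(-Lg, Lg + 1):
                i1, i2 = n + m + k, n + m - k
                if 0 <= i1 < N and 0 <= i2 < N:
                    acc += g[m + Lg] * s[i1] * np.conj(s[i2])
            r[k % N] = h[k + Lh] * acc
        tfr[n] = np.fft.fft(r).real
    return tfr

# Two tones at f1 = 0.125 and f2 = 0.25: the bilinear transform creates a
# cross-term midway between them (bin 48 with the lag-doubling convention).
N = 128
t = np.arange(N)
s = np.exp(2j * np.pi * 0.125 * t) + np.exp(2j * np.pi * 0.25 * t)

pwv = spwv(s, np.ones(1), np.hamming(33))      # no time smoothing (PWV)
sp = spwv(s, np.hamming(31), np.hamming(33))   # time smoothing on (SPWV)

row_p, row_s = pwv[64], sp[64]
auto_p = min(row_p[32], row_p[64])             # auto-terms at bins 2*f1*N, 2*f2*N
auto_s = min(row_s[32], row_s[64])
```

In the unsmoothed output the cross-term at bin 48 is comparable to the auto-terms; the time-smoothing window attenuates it strongly, at the cost of some time resolution, mirroring the trade-off discussed above.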

2.19.11 Polarimetric ISAR (Pol-ISAR)

Firstly, a brief introduction to polarimetric SAR systems will be given to provide some motivation for using polarimetric radar imaging. As polarimetric radar imaging systems are necessarily more expensive to build than traditional single-polarization radar imaging systems, their reason to exist must be supported


FIGURE 19.25 ISAR image obtained by applying the SPWV.

by an increase in performance or in their ability to overcome physical limitations of single polarization sensors.

2.19.11.1 Polarimetric SAR imaging

Polarimetric Synthetic Aperture Radar (Pol-SAR) has been widely used for classifying imaged areas. More specifically, Pol-SAR systems are used in land, ice and ocean remote sensing to obtain extra information about scattering mechanisms that can be exploited for extracting physical parameters of interest [24–26], as well as in target classification applications [27–32]. Pol-SAR systems can be realized by exploiting fully polarimetric radars. A fully polarimetric radar is able to measure a scattering matrix rather than a reflectivity function, as in the case of a single-polarization radar. The canonical representation of the scattering matrix is as follows:

S = [ S_HH  S_HV ; S_VH  S_VV ],  (19.52)

where each term S_ij is the scattering coefficient obtained by transmitting polarization i and receiving polarization j, where i and j may be horizontal (H) or vertical (V) polarization. Since a generic target


responds differently depending on which polarization is used in transmission and reception, the information contained in a scattering matrix is more complete than that contained in a single-polarization scattering coefficient. The fully polarimetric information contained in the scattering matrix may be used to estimate physical parameters from Pol-SAR images that cannot be estimated with a single-polarization radar. A Pol-SAR image can be interpreted as the result of a multichannel SAR system where the number of available channels is equal to four, one for each element of the scattering matrix. The result is that, for each image pixel, a complex four-element vector is available that can be seen as a polarimetric signature of the imaged area represented by that pixel. Pol-SAR image formation does not differ from single-polarization SAR image formation, as all four images must be formed using the same processing. In this way, the geometrical co-registration among all channels is automatically sorted out. Nevertheless, some improvement in the image autofocus processing can be attained by optimally processing all polarimetric data, as will be shown in Section 2.19.11.2. As the image formation does not require extra attention with respect to single-polarization SAR, most of the effort has been spent on finding an optimal use of the polarimetric information contained in a Pol-SAR image. For this reason, several polarimetric decompositions have been introduced that aim to help extract useful information for estimating physical parameters of interest. One piece of information that can be directly inferred from the fully polarimetric signature relates to the target's shape. Therefore, having a polarimetric signature for each image scatterer allows identifying shapes with a resolution that can be equal to the SAR image resolution.
This allows separating signal components that are the result of different types of scattering originating from different scatterer shapes. Under the validity of the reciprocity theorem, which corresponds to the physical condition of reciprocal media, it can be demonstrated that a fully polarimetric signal can be represented by means of three orthogonal complex vectors (a three-dimensional complex basis). Generally, Pol-SAR images are represented by coding each decomposed polarimetric channel with a color. A typical color basis is Red-Green-Blue (RGB). The polarimetric decompositions that have been introduced can be categorized into two groups: Coherent Decompositions (CDs) and Incoherent Decompositions (IDs). CDs are employed in those cases where the imaged target is characterized by coherent scattering. This typically happens when a single dominant scatterer is present in a resolution cell. In this case, the scattering matrix S in (19.52) is able to completely characterize the scattering mechanism. On the other hand, IDs are employed to characterize incoherent scattering, which is typical of distributed scatterers, i.e., when a number of non-dominant scatterers are present in a resolution cell. The most used CDs are Pauli's decomposition [33], Krogager's decomposition [34] and Cameron's decomposition [27], whereas the most commonly used IDs are Huynen's Phenomenological decomposition [35], the Cloude-Pottier decomposition [36] and Freeman's decomposition [37]. A comprehensive tutorial on Pol-SAR imaging is available at http://earth.eo.esa.int/polsarpro/tutorial.html.
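Pauli's decomposition, the simplest of the CDs listed above, can be sketched in a few lines (Python/NumPy). The sample scattering matrices are hypothetical, and the R = |k₂|, G = |k₃|, B = |k₁| color coding shown is one common convention rather than the only one.

```python
import numpy as np

def pauli_components(S_hh, S_hv, S_vv):
    """Pauli coherent decomposition of the scattering matrix (19.52),
    assuming reciprocity (S_hv = S_vh)."""
    k1 = (S_hh + S_vv) / np.sqrt(2)   # odd-bounce (surface-like) scattering
    k2 = (S_hh - S_vv) / np.sqrt(2)   # even-bounce (dihedral-like) scattering
    k3 = np.sqrt(2) * S_hv            # 45-deg-oriented / cross-pol scattering
    return k1, k2, k3

# A trihedral corner reflector has S_hh = S_vv and S_hv = 0: only the
# odd-bounce channel k1 survives.
S_hh = np.full((4, 4), 1 + 0j)
S_vv = np.full((4, 4), 1 + 0j)
S_hv = np.zeros((4, 4), dtype=complex)
k1, k2, k3 = pauli_components(S_hh, S_hv, S_vv)

# RGB coding of the three channels (R = |k2|, G = |k3|, B = |k1|).
rgb = np.stack([np.abs(k2), np.abs(k3), np.abs(k1)], axis=-1)
```

The decomposition is unitary, so the total power (span) per pixel is preserved across the three channels.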

2.19.11.2 Polarimetric ISAR imaging

As in the case of SAR, a fully polarimetric radar can be exploited to obtain polarimetric ISAR imagery, with the aim of improving target classification and recognition performance. The first idea to exploit


fully polarimetric information in ISAR imaging was developed in [38], where the ISAR image autofocus is improved by introducing full polarization into the derivation of the autofocus technique. Two algorithms from the class of parametric autofocusing techniques, namely the Image Contrast Based Autofocusing (ICBA) and the Image Entropy Based Autofocusing (IEBA) [6,39–44], have been extended by introducing the full polarimetric information contained in the received data. The Image Contrast (IC) and the Image Entropy (IE) represent two ways of measuring the focus of an ISAR image. In the case of a single-polarization ISAR system, the success of maximizing the IC or minimizing the IE, which is used to achieve motion compensation, depends on the polarization used by the system. Evidently, specific scatterers may produce a more stable signal return in a given polarization. Often, in SAR applications, the HH and VV co-polarization channels offer a higher SNR with respect to the cross-polarization (cross-pol) channels HV and VH. Nevertheless, in ISAR applications, such a priori knowledge cannot be taken for granted. In any case, the availability of all polarizations provides the basis for optimizing the autofocusing algorithm with respect to the polarimetric space. Polarimetric radars may also maximize the Signal-to-Noise Ratio (SNR) with respect to the polarization space in order to improve detection performance [45]. The concept of increasing performance by finding an optimal polarization can be extended to the ISAR image autofocus problem. Similarly to the maximization of the SNR with respect to the radar polarization, one may argue that a polarization exists that maximizes the image focus. This insight can be justified by considering that the image focus strongly depends on the time invariance of the scatterer contributions. Moreover, the Doppler components of each scatterer are generally modulated due to several causes.
Among such causes are the target-radar dynamics, the modulation induced by scatterer scintillation and the effect of noise. These impairments can be mitigated by exploiting full polarization: by finding the optimal polarization, the SNR can be increased and the modulation effects induced by scatterers illuminated from different aspect angles can be jointly reduced. Such an optimality criterion will be defined in Section 2.19.11.2.3. To set the scene, we first introduce the signal model and calculate the polarimetric ISAR image PSF in the next subsection.

2.19.11.2.1 Signal model

The polarimetric matrix of the received signal, in free-space conditions, can be written in the time-frequency domain by extending the signal model defined in [6]:

S_R(f, t) = W(f, t) exp(−j(4πf/c)R₀(t)) ∫_Target ξ(x) exp(−j(4πf/c) x · i_LoS(t)) dx.  (19.53)

The polarimetric matrix of the received signal can be expressed as

S_R(f, t) = [ S_R^HH(f, t)  S_R^HV(f, t) ; S_R^VH(f, t)  S_R^VV(f, t) ]

and the scattering matrix as

ξ(x) = [ ξ^HH(x)  ξ^HV(x) ; ξ^VH(x)  ξ^VV(x) ].


Before proceeding, it is convenient to use a different notation, as detailed in [33], and exploit the characteristics of the isotropic media that are encountered in ISAR applications. Therefore, the polarimetric data that represent the received signal can be written according to Pauli's decomposition:

S_R = (1/√2) [S_R^VV + S_R^HH, S_R^VV − S_R^HH, 2S_R^HV]ᵀ,  (19.54)

where the dependence on (f, t) is omitted for notational simplicity. The same decomposition applies to the target scattering matrix. Therefore, the scattering vector obtained from the scattering matrix is

ξ(x) = [ξ^VV(x) + ξ^HH(x), ξ^VV(x) − ξ^HH(x), 2ξ^HV(x)].  (19.55)

Thus, the received signal can be seen as a vector in a complex three-dimensional polarimetric space. All possible projections can be obtained by means of an inner product between the received signal vector and a generic polarization vector p:

S_R^(p) = S_R · p,  (19.56)

where the vector p can be expressed according to the decomposition introduced by Cloude and Papathanassiou in [46]:

p = (1/√2) [p^VV + p^HH, p^VV − p^HH, 2p^HV]ᵀ = [cos α exp(jϕ), sin α cos β exp(jδ), sin α sin β exp(jγ)]ᵀ,  (19.57)

where:

• α is the scatterer's internal degree of freedom, which ranges in the interval [0, 90]°. The meaning of this angle is related to the scattering properties of the target, e.g., for an ideal dipole the value of α is equal to 45° (see Figure 19.26);
• β represents a physical rotation of the scatterer in the plane perpendicular to the e.m. wave propagation direction;
• ϕ, δ, γ are the scatterer phases of the three polarimetric components.
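A minimal sketch of the parameterization in (19.57) (Python/NumPy, with hypothetical angle values): for any choice of the angles, p has unit norm, so the projection (19.56) only selects, and never amplifies, polarimetric components.

```python
import numpy as np

def pol_vector(alpha, beta, phi=0.0, delta=0.0, gamma=0.0):
    """Unit polarization vector p in the Pauli basis, Eq. (19.57)."""
    return np.array([
        np.cos(alpha) * np.exp(1j * phi),
        np.sin(alpha) * np.cos(beta) * np.exp(1j * delta),
        np.sin(alpha) * np.sin(beta) * np.exp(1j * gamma),
    ])

# alpha = 0 selects the first Pauli component (surface-like, HH+VV);
# alpha = 45 deg describes a dipole-like scatterer.
p_surface = pol_vector(0.0, 0.0)
p_dipole = pol_vector(np.deg2rad(45), 0.0)

# Projection of a surface-like Pauli scattering vector on p (Eq. (19.56)).
s_surface = np.array([1.0, 0.0, 0.0], dtype=complex)
proj = s_surface @ p_surface
```

The unit-norm property holds regardless of the three phase terms, since cos²α + sin²α cos²β + sin²α sin²β = 1.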

It is worth noting that such a representation is meant to highlight the physical properties of the scattering mechanism induced by a given scatterer. Therefore, by defining the unit vector p, it is possible to select a specific polarization that resonates with a scatterer with given physical properties. Moreover, the decomposition proposed in [46] provides, among other polarimetric decompositions, a suitable domain for the search of the IC maximum and/or the IE minimum. Nevertheless, other types of decomposition could also be used for the same purpose.

2.19.11.2.2 Image formation

The steps that lead to the image formation will be defined without including the noise contribution, which will be added subsequently for performance analysis. It is worth pointing out that this is a standard procedure in SAR/ISAR processing.


FIGURE 19.26 Interpretation of the internal degree of freedom α.

Motion compensation consists of removing the phase term exp(−j(4πf/c)R₀(t)) due to the radial movement of the focusing point O. The noiseless received signal after perfect motion compensation can be written as follows:

S_C^(p)(f, t) = W(f, t) ∫_V ξ^(p)(x) exp(−j(4πf/c) x · i_LoS(t)) dx,  (19.58)

where ξ^(p)(x) = ξ(x) · p. It is worth pointing out that the signal in (19.58) is a scalar signal and, therefore, it has the same characteristics as the single-polarization signal expressed in (19.11). Therefore, ISAR image formation can be carried out in exactly the same way as for single-polarization ISAR. The Pol-ISAR image can be written as:

I_C^(p)(η, ν) = 2D-IFT[S_C^(p)(f, t)] = K w(η, ν) ⊗⊗ ξ^(p)(η, ν).
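The projection (19.56) followed by single-polarization-style image formation can be sketched as below (Python/NumPy). The data cube simulates a single ideal scatterer in the first Pauli channel, with hypothetical sizes; the 2D inverse FT maps (f, t) into the delay-Doppler plane.

```python
import numpy as np

Nf, Nt = 64, 64
fi, ti = np.meshgrid(np.arange(Nf), np.arange(Nt), indexing="ij")

# Motion-compensated data cube, 3 Pauli channels x (frequency, slow-time):
# one ideal scatterer, visible only in the first Pauli channel, whose
# phase ramps place it at delay bin 10 and Doppler bin 20.
ramp = np.exp(-2j * np.pi * (10 * fi / Nf + 20 * ti / Nt))
S = np.stack([ramp, np.zeros_like(ramp), np.zeros_like(ramp)])

p = np.array([1.0, 0.0, 0.0], dtype=complex)   # chosen unit polarization
S_p = np.tensordot(p, S, axes=(0, 0))          # scalar channel, Eq. (19.56)

I_p = np.fft.ifft2(S_p)                        # 2D-IFT: (f, t) -> (delay, Doppler)
peak = np.unravel_index(np.argmax(np.abs(I_p)), I_p.shape)
```

After the projection, the processing is identical to the single-polarization case, which is exactly why the polarimetric extension leaves the image formation chain untouched.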

2.19.11.2.3 Polarimetric autofocus

The idea of jointly processing all polarimetric channels to obtain highly focused ISAR images was first introduced in [47]. Such an insight relied on the concept of enhancing the image focus by maximizing the IC over the joint space of the focusing parameters α and of the polarization p. In formula:

(α̂_IC, p̂_IC) = arg max_{α,p} {IC(α, p)},  (19.59)

where

IC(α, p) = √( A{ [I^(p)(η, ν, α) − A{I^(p)(η, ν, α)}]² } ) / A{ I^(p)(η, ν, α) }  (19.60)

and where α = [α₁, …, α_N]ᵀ, with αᵢ the coefficients of the polynomial model of the target's radial motion.


Equation (19.60) represents the new image contrast function defined in the joint domain, where A{·} is the mean operator. It is worth noting that the IC can be interpreted as a normalized standard deviation. Therefore, higher values of the IC mean sharper images. In the same way, the image focus can be enhanced by minimizing the Image Entropy (IE), as follows:

(α̂_IE, p̂_IE) = arg min_{α,p} {IE(α, p)},  (19.61)

where

IE(α, p) = − ∬ Ī^(p)(η, ν, α) ln[Ī^(p)(η, ν, α)] dν dη  (19.62)

and

Ī^(p)(η, ν, α) = |I^(p)(η, ν, α)|² / A{|I^(p)(η, ν, α)|²}.  (19.63)
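The two focus indicators can be sketched as follows (Python/NumPy; the discrete mean stands in for the integrals, and the test images are synthetic). A well-focused image has high contrast and low entropy:

```python
import numpy as np

def image_contrast(I):
    """IC, in the spirit of Eq. (19.60): normalized standard deviation
    of the image intensity (higher = sharper)."""
    P = np.abs(I) ** 2
    return np.sqrt(np.mean((P - P.mean()) ** 2)) / P.mean()

def image_entropy(I):
    """IE, Eqs. (19.62)-(19.63), with the integral replaced by a mean
    over the pixels (lower = sharper)."""
    P = np.abs(I) ** 2
    Pn = P / P.mean()
    return -np.mean(Pn * np.log(Pn, where=Pn > 0, out=np.zeros_like(Pn)))

# A focused image (single bright scatterer) versus a defocused one
# (the same total energy smeared uniformly over all pixels).
focused = np.zeros((32, 32)); focused[5, 7] = 1.0
defocused = np.full((32, 32), 1.0 / 32)
```

A perfectly uniform (fully defocused) image is the degenerate case: its contrast is exactly zero, while any concentration of energy raises the IC and lowers the IE.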

2.19.11.2.4 Initialisation

In order to proceed with the application of the Pol-ICBA and Pol-IEBA to the fully polarimetric ISAR data, a solution for the initial polarization vector must also be provided. The problem can be solved by means of the following algorithm:

1. The polarization vector that provides the maximum SNR is obtained by solving the optimization problem stated in Eq. (19.64):

p̂_M = arg max_p [ ∬ |S_R^(p)(f, t)|² df dt / ∬ |N_R^(p)(f, t)|² df dt ].  (19.64)

The SNR can be assumed to be maximum when the signal energy reaches its maximum, provided that the noise level is the same in all polarization channels (basically, when the noise levels in the H and V receiving channels are the same). Therefore, Eq. (19.64) can be simplified as follows:

p̂_M = arg max_p ∬ |S_R^(p)(f, t)|² df dt.  (19.65)

2. An initial guess for the focusing parameter vector α can be obtained by applying the Radon Transform and by running a one-dimensional optimization problem (see [6,8]). Specifically, the scalar ICBA and IEBA must be applied to the received signal with the polarization p̂_M found at step 1:

α̂_IC^(p̂_M) = arg max_α {IC(α, p̂_M)},  (19.66)

α̂_IE^(p̂_M) = arg min_α {IE(α, p̂_M)},  (19.67)

where IC(α, p̂_M) and IE(α, p̂_M) can be obtained from (19.60) and (19.62). The initial guess can then be formed by adjoining the polarization vector p̂_M to the focusing parameter vector α̂_IE^(p̂_M), i.e., (α̂_IE^(p̂_M), p̂_M).
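Under the equal-noise assumption behind (19.65), maximizing the projected energy over a unit vector p is a Rayleigh-quotient problem, so a closed-form shortcut is the principal eigenvector of the 3×3 polarimetric covariance matrix. The sketch below (Python/NumPy, synthetic data, Hermitian inner product as the projection convention) is an implementation choice equivalent to the search stated in the text, not the chapter's prescribed procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Pauli-basis data: 3 channels x 500 (f, t) samples, with the
# first Pauli channel deliberately dominant.
S = rng.standard_normal((3, 500)) + 1j * rng.standard_normal((3, 500))
S[0] *= 3.0

# Maximize the projected energy over the unit vector p (Eq. (19.65)):
# with a Hermitian inner product this is max_p p^H C p, whose optimum is
# the principal eigenvector of the polarimetric covariance C.
C = (S @ S.conj().T) / S.shape[1]
eigvals, eigvecs = np.linalg.eigh(C)          # ascending eigenvalues
p_hat = eigvecs[:, -1]                        # largest-eigenvalue eigenvector

def energy(p):
    """Projected signal energy for a given polarization vector."""
    return np.sum(np.abs(p.conj() @ S) ** 2)
```

Because `eigh` returns an orthonormal basis, `p_hat` is already unit norm and no other unit vector can yield a larger projected energy on this data set.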


FIGURE 19.27 Polarimetric ICBA flow chart (Max SNR → Scalar ICBA → Fully polarimetric ICBA).

2.19.11.2.5 Optimization

Once the initial guess is estimated, the optimization problems stated in (19.59) and (19.61) can be solved iteratively by using numerical methods for maximum (or minimum) search. Several methods for solving optimization problems have been proposed in the literature. Such methods can be grouped into deterministic and statistical methods. The first type, which is adopted here, makes use of the characteristics of the cost function, such as the gradient, the Hessian, etc., to determine the next step in order to converge to a local maximum or minimum. In this case, convergence to the global maximum (minimum) must be ensured by a suitable choice of the initial guess. The method used here is the simplex method proposed by Nelder and Mead in [7]. Nevertheless, other solutions may be obtained by using statistical methods, such as Genetic Algorithms (GAs) [8]. For the sake of clarity, the algorithm flow chart is shown in Figure 19.27.
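A toy example of the deterministic search (Python with SciPy's Nelder-Mead simplex; the simulated quadratic phase error and all numbers are hypothetical). The IC of a one-dimensional "image" is maximized over a single focusing parameter, starting from an initial guess close to the optimum, as the text requires:

```python
import numpy as np
from scipy.optimize import minimize

N = 64
t = np.arange(N) / N - 0.5                           # slow-time axis (s)
alpha_true = 12.0                                    # residual chirp rate (Hz/s)
s = np.exp(2j * np.pi * (8 * t + alpha_true * t**2)) # defocused scatterer return

def contrast(alpha):
    """IC of the 1D image after compensating a trial chirp rate alpha."""
    P = np.abs(np.fft.fft(s * np.exp(-2j * np.pi * alpha * t**2))) ** 2
    return np.sqrt(np.mean((P - P.mean()) ** 2)) / P.mean()

# Nelder-Mead uses only cost-function evaluations (no gradient/Hessian);
# the initial guess must be close enough to the global optimum.
res = minimize(lambda a: -contrast(a[0]), x0=[10.0], method="Nelder-Mead")
alpha_hat = res.x[0]
```

When the trial chirp rate matches the true one, the compensated signal collapses to a single Doppler bin and the contrast peaks, which is the mechanism the Pol-ICBA exploits in the joint (α, p) space.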

2.19.11.2.6 Most focussed ISAR image

The proposed algorithm also provides the most focused ISAR image. The polarization p̂_IC, which maximizes (19.59), or equivalently the polarization p̂_IE, which minimizes (19.61), is obtained as part of the solution of the optimization problem. Therefore, the ISAR images obtained by processing the received data in polarizations p̂_IC and p̂_IE represent the best focused images according to the IC and IE focus indicators, respectively. Results relative to the use of polarimetric ISAR image autofocus are provided in Figure 19.28. Specifically, the VV-polarization ISAR image obtained by means of the Pol-ICBA (Figure 19.28a) is compared with the ISAR image obtained by applying the single-polarization ICBA algorithm (Figure 19.28b). It is quite evident that the image obtained by using the Pol-ICBA is sharper than that obtained with the single-polarization ICBA. For the sake of completeness, the ISAR image projected onto the polarization that maximizes the IC is also shown in Figure 19.28c. A section cut along the cross-range in

FIGURE 19.28 Pol-ISAR images obtained by applying (a) Pol-ICBA (VV-Channel), (b) single polarization ICBA (VV-Channel), (c) Pol-ICBA (polarimetric channel that maximizes the IC).

FIGURE 19.29 Cross-range sections obtained from Figure 19.28 (single pol, full pol and best pol).

correspondence of a scatterer's peak further demonstrates that there is an SNR increase due to the better image focus, as shown in Figure 19.29.

2.19.12 Bistatic radar imaging

A renewed interest in bistatic radar, with specific attention to bistatic radar imaging, has led to the development and implementation of Bistatic Synthetic Aperture Radar (BiSAR) and Bistatic Inverse Synthetic Aperture Radar (B-ISAR) systems. Bistatic SAR algorithms have been proposed in the literature to solve the problem of the bistatic radar geometry. The main idea behind those algorithms has been to extend monostatic SAR processing techniques to the bistatic case. Such extensions are practically obtained by rewriting the received signal phase model in order to account for the bistatic geometry. Some BiSAR algorithms can be found in [48–52]. More details concerning B-ISAR will be provided in Section 2.19.12.1, in order to demonstrate the usability of monostatic ISAR processors in bistatic geometries.


2.19.12.1 Bistatic ISAR

There are a number of reasons why bistatic radar imaging may be of interest in non-cooperative target imaging applications. The main ones are summarized below:

• Geometrical limitations of monostatic ISAR: In order to obtain ISAR images with a significant Doppler spread, it is necessary that the target changes its aspect angle with respect to the radar during the Coherent Processing Interval (CPI). This produces a set of geometrical cases where, even if the target is moving with respect to the radar, an ISAR image cannot be obtained. A simple case is given by a target moving along the radar's Line of Sight (LOS). In this case the target aspect angle does not change in time and hence an ISAR image cannot be produced. This case is important because a target moving directly towards a radar may be hostile.
• ISAR imaging of stealthy targets: Military targets may be constructed to minimize the energy backscattered towards the radar, making them almost invisible to it. One approach to achieve this is to reflect the electromagnetic energy towards directions other than that of the radar. Therefore, stealthiness usually refers only to monostatic radars. The use of a bistatic radar may enable the detection, and therefore the imaging, of stealthy targets.
• Exploitation of bistatic SAR systems: A number of bistatic SAR experiments have been conducted in recent years to prove the effectiveness of bistatic radar imaging. The data collected by bistatic SAR systems could be processed as bistatic ISAR data and, therefore, non-cooperative targets could be imaged by using a bistatic ISAR processor.
• Multistatic ISAR imaging: Multistatic ISAR imaging may be achieved by using one or more transmitters and a number of receivers. To maximize the gain from such configurations, each receiver would benefit from acquiring the signals transmitted by the other transmitters. This enables several bistatic configurations where the transmitters and the receivers are not co-located. In order to fully understand multistatic configurations, the bistatic configuration must be studied first.
• Passive ISAR imaging: There is an increasing interest in the passive radar field, as existing illuminators of opportunity can be used to detect and track targets. Although with limited bistatic range resolution, it has recently been demonstrated that Passive ISAR (P-ISAR) imaging can be enabled by exploiting modern digital broadcast communications [53].

Although several techniques for image reconstruction have been proposed for BiSAR that provide effective tools for radar imaging of static scenes, they do not apply to the ISAR case, where targets of interest are non-cooperative moving objects. Nevertheless, ISAR imaging usually aims to provide images of relatively small targets when compared to SAR images, where the imaged area can reach the size of hundreds of square kilometers. This advantage of ISAR with respect to SAR allows consideration of the use of monostatic ISAR processors even when the geometry is bistatic. In this subsection, the limits of applicability of a monostatic ISAR processor to a bistatic ISAR configuration are analyzed. The analysis here is limited to the case of no synchronization errors. The effect of synchronization errors on a bistatic ISAR (B-ISAR) system has been analyzed in [54]. A well-established method for analyzing radar imaging systems is the calculation of the Point Spread Function (PSF), which in our case will be addressed as the bistatic ISAR image PSF. It will be shown that the bistatic ISAR image PSF depends on the bistatic angle, which introduces distortions in the bistatic ISAR image. The PSF of an imaging system also depends on the image formation processing adopted. Radial motion compensation followed by a


FIGURE 19.30 Bistatic geometry.

Range-Doppler technique will be considered as the ISAR image formation method, as it represents the standard procedure for obtaining ISAR images. In this section the terminology and theory for bistatic ISAR imaging are introduced. The geometry of the problem is illustrated in Figure 19.30. The bistatic configuration produces an almost geometrically equivalent monostatic configuration, which can be seen as a virtual transmitting/receiving element that lies on the bisector between the transmitter and the receiver. Such an equivalence and its limitations will be the subject of the next subsections.

2.19.12.1.1 Signal modeling

After motion compensation, as for the monostatic case (19.6), the received signal can be written in a time-frequency format as follows:

S_R(f, t) = W(f, t) ∫_V ξ(x) exp(−jϕ(x, f, t)) dx,  (19.68)


where ξ(x) represents, in this case, a bistatic reflectivity function and where the phase term ϕ(x, f, t), which takes into account the bistatic configuration, can be written as follows:

ϕ(x, f, t) = (2πf/c)[R_A(t) + R_B(t) + x · i_A(t) + x · i_B(t)] = (4πf/c)[R₀(t) + K(t) x · i_BEM(t)],  (19.69)

where

R₀(t) = [R_A(t) + R_B(t)]/2,  (19.70)

i_BEM(t) = [i_A(t) + i_B(t)] / |i_A(t) + i_B(t)|,  (19.71)

K(t) = |i_A(t) + i_B(t)| / 2,  (19.72)

and where R A (t) and R B (t) are the distances between point O on the target and the transmitter and the receiver, respectively, i A (t) and i B (t) are the unit vectors that indicate the LOS for the transmitter and the receiver, and x is the vector that locates a generic point on the target. An analysis of the effects of the bistatic geometry on the ISAR image Point Spread Function (PSF) follows.
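Equations (19.70)-(19.72) can be checked numerically with the sketch below (Python/NumPy; positions are hypothetical, and the LOS unit vectors are taken as pointing from the focusing point toward the transmitter and receiver). Note that K equals the cosine of half the bistatic angle.

```python
import numpy as np

def bistatic_equivalent(tx, rx, o):
    """Equivalent-monostatic quantities of Eqs. (19.70)-(19.72) for
    transmitter tx, receiver rx and focusing point o (3D positions)."""
    R_A = np.linalg.norm(tx - o)
    R_B = np.linalg.norm(rx - o)
    i_A = (tx - o) / R_A                              # LOS unit vectors
    i_B = (rx - o) / R_B
    R0 = (R_A + R_B) / 2                              # Eq. (19.70)
    i_bem = (i_A + i_B) / np.linalg.norm(i_A + i_B)   # Eq. (19.71), bisector
    K = np.linalg.norm(i_A + i_B) / 2                 # Eq. (19.72) = cos(theta/2)
    return R0, i_bem, K

o = np.zeros(3)
tx = np.array([10.0, 0.0, 0.0])
rx = np.array([0.0, 10.0, 0.0])        # bistatic angle theta = 90 deg
R0, i_bem, K = bistatic_equivalent(tx, rx, o)
```

The virtual monostatic element lies along `i_bem` (the bisector), and in the monostatic limit (tx = rx) the scaling factor K collapses to 1.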

2.19.12.1.2 PSF of the bistatic ISAR image

The term K(t) carries information about the change in time of the bistatic geometry. However, what significantly affects the ISAR image PSF is the change of the bistatic angle during the coherent integration time. In this section the ISAR image PSF will be derived for the bistatic case, and the distortion introduced by the bistatic geometry will be related to the bistatic angle variation. In deriving the PSF, two assumptions are made that allow the application of the Range-Doppler technique when reconstructing the ISAR image (following motion compensation). These two assumptions are: (1) the far field condition and (2) a short integration time. These assumptions avoid the need to consider non-constant target rotation vectors and to use polar reformatting, and are generally satisfied in typical ISAR scenarios where the required resolutions are not exceptionally high. When the target rotation vector is constant, the received signal backscattered by a single ideal scatterer located at a generic point may be rewritten (after motion compensation) in the following way:

S_R(f, t) = W(f, t) ∬ ξ′(x₁, x₂) exp(−jϕ(x₁₀, x₂₀, f, t)) dx₁ dx₂,  (19.73)

where the phase ϕ(x₁₀, x₂₀, f, t) may be written as:

ϕ(x₁₀, x₂₀, f, t) = (4πf/c) K(t) (x₁₀ sin(Ω_eff t) + x₂₀ cos(Ω_eff t)).  (19.74)

In Eqs. (19.73) and (19.74), (x₁₀, x₂₀) are the coordinates of a generic scatterer on the target with respect to a reference system centered on the target itself (see Figure 19.30). Note that the third coordinate (x₃₀) does not appear in Eqs. (19.73) and (19.74), as in the monostatic case.


Under Assumptions (1) and (2) the bistatic angle changes are relatively small, even when the target covers relatively large distances within the integration time. As a result, the bistatic angle can be approximated by a first order Taylor (Maclaurin) polynomial:

θ(t) ≅ θ(0) + θ̇(0)t,  (19.75)

where −T_obs/2 ≤ t ≤ T_obs/2 and θ̇ = dθ/dt. Therefore, the term K(t) can also be approximated with a first order Taylor (Maclaurin) polynomial, and by using Eq. (19.72) the following equation may be obtained:

K(t) ≅ K(0) + K̇(0)t = cos(θ(0)/2) − (θ̇(0)/2) sin(θ(0)/2) t = K₀ + K₁t.  (19.76)

Therefore, Eq. (19.74) becomes:

ϕ(x₁₀, x₂₀, f, t) = (4πf/c)(K₀ + K₁t)(x₁₀ sin(Ω_eff t) + x₂₀ cos(Ω_eff t)).  (19.77)

For small integration angles (short integration time hypothesis) the sinusoids can be approximated by means of linear terms, as follows:

ϕ(x₁₀, x₂₀, f, t) ≈ (4πf/c)(K₀ + K₁t)Ω_eff t x₁₀ + (4πf/c)(K₀ + K₁t)x₂₀.  (19.78)
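The quality of the first-order approximation in (19.76) is easy to verify numerically (Python/NumPy; the bistatic angle, rate and CPI below are hypothetical values chosen for illustration):

```python
import numpy as np

# Numerical check of Eq. (19.76): over a short CPI, K(t) = cos(theta(t)/2)
# with theta(t) = theta(0) + theta_dot(0)*t is well approximated by K0 + K1*t.
theta0 = np.deg2rad(40.0)       # bistatic angle at t = 0
theta_dot = np.deg2rad(0.5)     # bistatic-angle rate (rad/s)
Tobs = 2.0                      # integration time (s)

t = np.linspace(-Tobs / 2, Tobs / 2, 201)
K_exact = np.cos((theta0 + theta_dot * t) / 2)
K0 = np.cos(theta0 / 2)                        # zeroth-order term
K1 = -(theta_dot / 2) * np.sin(theta0 / 2)     # first-order term
max_err = np.max(np.abs(K_exact - (K0 + K1 * t)))
```

With these numbers the residual (second-order) error is on the order of 10⁻⁵, which justifies treating K(t) as linear over the CPI.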

An ISAR image reconstruction consists of:

1. Radial motion compensation.
2. Image formation.

When performing the radial motion compensation, any of the available techniques may be used. In fact, the phase term (4πf/c)[R_A(t) + R_B(t)]/2 may be removed as in monostatic configurations. This is because the term K(t) does not affect the target radial motion compensation. After motion compensation, the image formation (by means of the RD technique) makes use of two Fourier Transforms (FTs): one along the frequency coordinate f (range compression) and one along the time coordinate t (cross-range image formation). In order to obtain the PSF of the bistatic ISAR system, we calculate the two FTs analytically.

2.19.12.1.3 Range compression

The range compression is obtained by Fourier transforming Eq. (19.73) along the variable f:

S_R(η, t) = ∭ W(f, t) δ(x₁ − x₁₀, x₂ − x₂₀) exp(−jϕ(x₁₀, x₂₀, f, t)) exp(−j2πfη) dx₁ dx₂ df
= exp(−j(4πf₀/c)(K₀ + K₁t)Ω_eff t x₁₀) δ(η − (2/c)(K₀ + K₁t)x₂₀) ⊗_η W′(η, t),  (19.79)

2.19.12 Bistatic Radar Imaging

where

1035



   t sinc Bη W (η, t) = FT W ( f , t) = B exp ( − j2π f 0 η)rect t 





and $\otimes_\eta$ is the convolution operator over the variable $\eta$. Two effects are induced by the bistatic geometry:

1. the range position $x_2$ is scaled by a factor $K_0$;
2. a range migration is induced by the bistatic angle variation within the integration time.

Whilst the first effect can be corrected a posteriori by rescaling the range coordinate, the second effect can be significantly detrimental. If range migration occurs, the position of one scatterer can move from one range cell to another during the integration time, thereby resulting in a distortion of the PSF. In order to avoid range migration, the following condition must be satisfied:

$$|K_1|\, T_{obs}\, x_{20}^{M} < \delta r. \tag{19.80}$$

In Eq. (19.80), $\delta r$ is the range resolution of the radar and $x_{20}^{M}$ is the range position of the scatterer with maximum distance from the target's zero range (focusing center). By substituting the expression of $K_1$ in (19.80) and by expressing the limitation with respect to the bistatic angle variation, the following relationship is obtained:

$$\left|\dot{\theta}(0)\right| < \frac{2\,\delta r}{T_{obs}\, x_{20}^{M}\, \sin\!\left(\frac{\theta(0)}{2}\right)}. \tag{19.81}$$
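As a numerical illustration of the bound in Eq. (19.81), the fragment below computes the maximum tolerable bistatic angle rate before range migration exceeds one resolution cell. The radar and geometry values are hypothetical, chosen only to make the check concrete:

```python
import numpy as np

c = 3e8                       # speed of light (m/s)
B = 300e6                     # hypothetical bandwidth (Hz)
delta_r = c / (2 * B)         # range resolution: 0.5 m
T_obs = 0.5                   # integration time (s)
x20_M = 10.0                  # furthest scatterer from the focusing centre (m)
theta0 = np.deg2rad(40.0)     # bistatic angle at t = 0

# Eq. (19.81): maximum bistatic angle rate that avoids range migration
theta_dot_max = 2 * delta_r / (T_obs * x20_M * np.sin(theta0 / 2))
print(theta_dot_max)          # ~0.585 rad/s for these values
```

In practice, a measured or predicted $\dot{\theta}(0)$ would be compared against this bound before applying a straight RD processor to bistatic data.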

When the constraint (19.81) is satisfied, Eq. (19.79) can be rewritten as follows:

$$S_R(\eta, t) = \exp\!\left(-j\frac{4\pi f_0}{c}(K_0 + K_1 t)\,\Omega_{eff}\,t\,x_{10}\right) \delta\!\left(\eta - \frac{2}{c}K_0\, x_{20}\right) \otimes_\eta W'(\eta, t). \tag{19.82}$$

2.19.12.1.4 Cross-range image formation
Cross-range image formation is achieved by Fourier transforming (19.82) along the coordinate $t$. The result is a complex image in the time-delay (range) and Doppler domains:

$$\begin{aligned} \mathrm{PSF}(\eta, \nu) &= \int \exp\!\left(-j\frac{4\pi f_0}{c}(K_0 + K_1 t)\,\Omega_{eff}\,t\,x_{10}\right) \delta\!\left(\eta - \frac{2}{c}K_0\, x_{20}\right) \otimes_\eta W'(\eta, t)\, \exp(-j2\pi t \nu)\, dt \\ &= \mathrm{CH}(\nu, \alpha_0, \alpha_1)\, \delta\!\left(\eta - \frac{2}{c}K_0\, x_{20}\right) \otimes_\eta \otimes_\nu w(\eta, \nu), \end{aligned} \tag{19.83}$$

where

$$w(\eta, \nu) = B\, T_{obs}\, \exp(-j2\pi f_0 \eta)\,\mathrm{sinc}(T_{obs}\,\nu)\,\mathrm{sinc}(B\eta), \tag{19.84}$$

$$\mathrm{CH}(\nu, \alpha_0, \alpha_1) = \mathrm{FT}_t\bigl[\mathrm{ch}(t, \alpha_0, \alpha_1)\bigr], \tag{19.85}$$

$$\mathrm{ch}(t, \alpha_0, \alpha_1) = \exp\bigl(-j2\pi(\alpha_0 t + \alpha_1 t^2)\bigr), \tag{19.86}$$

$$\alpha_0 = \frac{2 f_0\, \Omega_{eff}\, x_{10}}{c}\, \cos\!\left(\frac{\theta(0)}{2}\right), \tag{19.87}$$

$$\alpha_1 = -\frac{2 f_0\, \Omega_{eff}\, x_{10}}{c}\, \frac{\dot{\theta}(0)}{2}\, \sin\!\left(\frac{\theta(0)}{2}\right), \tag{19.88}$$

and $\otimes_\nu$ is the convolution operator over the variable $\nu$. Therefore the PSF of Eq. (19.83) can be rewritten as:

$$\mathrm{PSF}(\eta, \nu) = \mathrm{CH}(\nu, \alpha_0, \alpha_1) \otimes_\nu w\!\left(\eta - \frac{2}{c}K_0\, x_{20},\, \nu\right). \tag{19.89}$$

It is worth recalling that the convolution between an infinite-duration chirp and a sinc function is equivalent to the FT of a finite-duration chirp, where the parameter of the sinc function is equivalent to the duration of the chirp. As can be seen from Eqs. (19.86)–(19.88), the chirp rate depends on the position of the scatterer along the cross-range direction. By following [55], as a rule of thumb for neglecting the chirp effect (when a RD technique is used), the chirp rate must satisfy the following relationship:

$$|\alpha_1| < \frac{1}{T_{obs}^2}. \tag{19.90}$$
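The rule of thumb in Eq. (19.90) can be verified numerically: the spectral peak of a finite-duration chirp stays close to that of a pure tone while $|\alpha_1| T_{obs}^2 \lesssim 1$, and collapses (defocus) when the quadratic phase dominates. The sketch below uses illustrative chirp rates; the linear term $\alpha_0$ is omitted since it only shifts the peak in Doppler:

```python
import numpy as np

# Finite-duration chirp ch(t) = exp(-j 2 pi a1 t^2), Eq. (19.86) with a0 = 0
T_obs = 1.0
t = np.linspace(-T_obs / 2, T_obs / 2, 1024)

def peak_level(a1):
    """Normalized spectral peak: 1.0 for a pure tone, lower when defocused."""
    ch = np.exp(-1j * 2 * np.pi * a1 * t**2)
    spec = np.abs(np.fft.fft(ch, 8192))   # zero-padded for a fine frequency grid
    return spec.max() / len(t)

print(peak_level(0.5))    # |a1| T^2 = 0.5: nearly undistorted, close to 1
print(peak_level(50.0))   # |a1| T^2 = 50: strongly defocused, peak collapses
```

The compliant chirp rate leaves the Doppler response essentially sinc-like, whereas the non-compliant one spreads the scatterer's energy over many Doppler cells, which is exactly the PSF distortion the constraint guards against.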

By substituting Eq. (19.90) into (19.88) and by expressing it with respect to $\dot{\theta}(0)$, a rule for determining the maximum bistatic angle variation is obtained:

$$\left|\dot{\theta}(0)\right| < \frac{c}{f_0\, T_{obs}^2\, \Omega_{eff}\, x_{10}^{M}\, \sin\!\left(\frac{\theta(0)}{2}\right)}. \tag{19.91}$$
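The same kind of numerical check applies to Eq. (19.91). With the hypothetical values below (again, not taken from the chapter), the cross-range defocus bound can be evaluated and compared against an assumed actual angle rate:

```python
import numpy as np

c = 3e8
f0 = 10e9                      # hypothetical carrier frequency (Hz)
T_obs = 0.5                    # integration time (s)
omega_eff = 0.05               # effective rotation rate (rad/s)
x10_M = 10.0                   # furthest scatterer in cross-range (m)
theta0 = np.deg2rad(40.0)      # bistatic angle at t = 0

# Eq. (19.91): maximum bistatic angle rate that avoids PSF distortion
theta_dot_max = c / (f0 * T_obs**2 * omega_eff * x10_M * np.sin(theta0 / 2))
print(theta_dot_max)           # ~0.70 rad/s for these values

theta_dot = 0.1                # assumed actual angle rate (rad/s)
print(theta_dot < theta_dot_max)  # True: RD imaging is expected to stay focused
```

Note that both bounds, (19.81) and (19.91), tighten with longer integration times and larger target extents, so the tolerable bistatic angle dynamics shrink precisely in the high-resolution regimes where ISAR is most attractive.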

Bistatic ISAR scenarios that do not satisfy the constraint in (19.91) are likely to produce image distortion. Nevertheless, such scenarios are extreme cases in which the ISAR system is pushed to its limits. An example with strong bistatic angle variations is shown here to highlight the distortion effects that may be encountered when the constraint in (19.91) is not satisfied. The geometry considered in this simulation is depicted in Figure 19.31, where the target moves along a rectilinear trajectory that is almost aligned with the BEM LoS; in this case the bistatic angle changes are the most severe. The image displayed in Figure 19.32 is obtained by applying the monostatic ISAR processor to the data generated with the bistatic geometry depicted in Figure 19.31. A non-distorted image, produced by considering a monostatic radar located at the position of the BEM element, is displayed in Figure 19.33. A comparison between the two images makes the distortion effects apparent.


FIGURE 19.31 Simulated bistatic geometry.

FIGURE 19.32 Distorted B-ISAR image.




FIGURE 19.33 Non-distorted BEM ISAR image.

2.19.13 Conclusion
The main concepts and algorithms relating to ISAR imaging have been treated in this tutorial. Specifically, the concept of high resolution applied to radar has been used to introduce ISAR imaging, and a model-based approach has been proposed as a method to derive the ISAR processor; in particular, the ISAR geometry and the received signal model have been defined. Some physical and mathematical details have been included in this tutorial with the purpose of both helping the reader understand the concepts and providing the basis for implementing basic ISAR imaging algorithms. Polarimetric and bistatic ISAR imaging have also been discussed, as they represent more recent advances that are opening the door to the use of ISAR when polarimetric radars are employed or when the radar configuration is not monostatic (bistatic and multistatic). Examples have also been shown in a variety of scenarios.

Acronyms

2D        two-dimensional
3D        three-dimensional
BiSAR     bistatic SAR
B-ISAR    bistatic ISAR
CD        coherent decomposition
CPI       coherent processing interval
CTFT      Cohen's class time frequency transform
DSA       dominant scatterer autofocus
EM        electro-magnetic
ESD       energy spectral density
FT        Fourier transform
GA        genetic algorithm
HS        hot spot
IC        image contrast
ICBA      image contrast based autofocus
ID        incoherent decomposition
IE        image entropy
IEBA      image entropy based autofocus
IFT       inverse Fourier transform
IPP       image projection plane
ISAR      inverse synthetic aperture radar
LoS       radar line of sight
ML        maximum likelihood
MVA       minimum variance algorithm
PGA       phase gradient algorithm
P-ISAR    passive ISAR
Pol-ISAR  polarimetric ISAR
Pol-SAR   polarimetric SAR
PSF       point spread function
RD        range-Doppler
SAR       synthetic aperture radar
SLL       side lobe level
SNR       signal-to-noise ratio
TFT       time-frequency transform
WV        Wigner-Ville (time frequency transform)

Supplementary data
Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/B978-0-12-411597-2.00019-9.

Acknowledgment
Special thanks go to DSTO for releasing the data that have been processed to form the ISAR images displayed in this chapter.

Relevant Theory: Signal Processing Theory, Statistical Signal Processing, and Array Signal Processing
See Vol. 1, Chapter 2 Continuous-Time Signals and Systems
See Vol. 1, Chapter 3 Discrete-Time Signals and Systems
See Vol. 1, Chapter 9 Discrete Multi-Scale Transforms in Signal Processing



See Vol. 3, Chapter 3 Non-Stationary Signal Analysis Time-Frequency Approach
See Vol. 3, Chapter 19 Array Processing in the Face of Nonidealities

References

[1] G. Franceschetti, R. Lanari, Synthetic Aperture Radar Processing, CRC Press, 1999.
[2] J.L. Walker, Range-Doppler imaging of rotating objects, IEEE Trans. Aerosp. Electron. Syst. 16 (1980) 23–52.
[3] D.A. Ausherman, A. Kozma, J.L. Walker, H.M. Jones, E.C. Poggio, Developments in radar imaging, IEEE Trans. Aerosp. Electron. Syst. 20 (1984) 363–400.
[4] M.A. Richards, J.A. Scheer, W.A. Holm, Principles of Modern Radar, SciTech Publishing, 2010.
[5] F. Berizzi, M. Martorella, B. Haywood, E. Dalle Mese, S. Bruscoli, A survey on ISAR autofocusing techniques, in: Proceedings of the IEEE ICIP 2004, Singapore, 2004.
[6] M. Martorella, F. Berizzi, B. Haywood, A contrast maximization based technique for 2D ISAR autofocusing, IEE Proc. Radar Sonar Navig. 152 (4) (2005) 253–262.
[7] J. Lagarias, J.A. Reeds, M.H. Wright, P.E. Wright, Convergence properties of the Nelder-Mead simplex method in low dimensions, SIAM J. Optim. 9 (1998) 112–147.
[8] M. Martorella, F. Berizzi, S. Bruscoli, Use of genetic algorithms for contrast maximization and entropy minimization in ISAR autofocusing, J. Appl. Signal Process. 2006 (2006) 1–11 (special issue on Inverse Synthetic Aperture Radar).
[9] G.C. Carter, Time delay estimation for passive sonar signal processing, IEEE Trans. Acoust. Speech Signal Process. 29 (1981) 463–470.
[10] B.D. Steinberg, Radar imaging from a distorted array: the radio camera algorithm and experiments, IEEE Trans. Antennas Propag. 29 (1981) 740–748.
[11] B. Haywood, R.J. Evans, Motion compensation for ISAR imaging, in: Proceedings of ASSPA 89, Adelaide, Australia, 1989, pp. 113–117.
[12] C.V. Jakowatz, D.E. Wahl, P.H. Eichel, D.C. Ghiglia, Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach, Springer, 1996.
[13] M. Martorella, F. Berizzi, Time windowing for highly focused ISAR image reconstruction, IEEE Trans. Aerosp. Electron. Syst. 41 (2005) 992–1007.
[14] R.G. Parker, Discrete Optimisation, Academic Press, 1988.
[15] M. Martorella, A novel approach for ISAR image cross-range scaling, IEEE Trans. Aerosp. Electron. Syst. 44 (1) (2008) 281–294.
[16] O. Yeste-Ojeda, J. Grajal, G. Lopez-Risueno, Atomic decomposition for radar applications, IEEE Trans. Aerosp. Electron. Syst. 44 (2008) 187–200.
[17] J. Tsao, B. Steinberg, Reduction of sidelobe and speckle artifacts in microwave imaging: the CLEAN technique, IEEE Trans. Antennas Propag. 36 (1988) 543–556.
[18] M. Martorella, N. Acito, F. Berizzi, Statistical CLEAN technique for ISAR imaging, IEEE Trans. Geosci. Remote Sens. 45 (11) (2007) 3552–3560.
[19] S.M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, 1993.
[20] L. Cohen, Time-frequency distributions - a review, Proc. IEEE 77 (7) (1989) 941–980.
[21] V. Chen, Time-Frequency Transforms for Radar Imaging and Signal Analysis, Artech House, 2002.
[22] T. Thayaparan, L. Stankovic, C. Wernik, M. Dakovic, Real-time motion compensation, image formation and image enhancement of moving targets in ISAR and SAR using S-method based approach, IET Signal Process. 2 (2008) 247–264.



[23] F. Berizzi, E. Dalle Mese, M. Diani, M. Martorella, High-resolution ISAR imaging of maneuvering targets by means of the range instantaneous Doppler technique: modeling and performance analysis, IEEE Trans. Image Process. 10 (2001) 1880–1890.
[24] S. Cloude, E. Pottier, An entropy based classification scheme for land applications of polarimetric SAR, IEEE Trans. Geosci. Remote Sens. 35 (1997) 68–78.
[25] S. Jiancheng, J. Dozier, An entropy based classification scheme for land applications of polarimetric SAR, IEEE Trans. Geosci. Remote Sens. 33 (1995) 905–914.
[26] M. Migliaccio, A. Gambardella, M. Tranfaglia, SAR polarimetry to observe oil spills, IEEE Trans. Geosci. Remote Sens. 45 (2007) 506–511.
[27] W.L. Cameron, N.N. Youssef, L.K. Leung, Simulated polarimetric signatures of primitive geometrical shapes, IEEE Trans. Geosci. Remote Sens. 34 (3) (1996) 793–803.
[28] R. Touzi, F. Charbonneau, Characterization of target symmetric scattering using polarimetric SARs, IEEE Trans. Geosci. Remote Sens. 40 (11) (2002) 2507–2516.
[29] R. Touzi, F. Charbonneau, Characterization of target symmetric scattering using polarimetric SARs, IEEE Trans. Geosci. Remote Sens. 42 (10) (2004) 2039–2045.
[30] W.L. Cameron, H. Rais, Conservative polarimetric scatterers and their role in incorrect extensions of the Cameron decomposition, IEEE Trans. Geosci. Remote Sens. 44 (12) (2006) 3506–3516.
[31] M. Martorella, E. Giusti, A. Capria, F. Berizzi, B. Bates, Automatic target recognition by means of polarimetric ISAR images and neural networks, IEEE Trans. Geosci. Remote Sens. 47 (2009) 3786–3794.
[32] M. Martorella, E. Giusti, L. Demi, Z. Zhou, A. Cacciamano, F. Berizzi, B. Bates, Target recognition by means of polarimetric ISAR images, IEEE Trans. Aerosp. Electron. Syst. 47 (2011) 225–239.
[33] S.R. Cloude, E. Pottier, A review of target decomposition theorems in radar polarimetry, IEEE Trans. Geosci. Remote Sens. 34 (2) (1996) 498–518.
[34] E. Krogager, New decomposition of the radar target scattering matrix, Electron. Lett. 26 (18) (1990) 1525–1526.
[35] J.R. Huynen, Measurement of the target scattering matrix, Proc. IEEE 53 (2) (1965) 936–946.
[36] S.R. Cloude, The characterization of polarization effect in EM scattering, PhD dissertation, Univ. Birmingham, Fac. Eng., Birmingham, UK, 1986.
[37] A. Freeman, S.T. Durden, A three component scattering model for polarimetric SAR data, IEEE Trans. Geosci. Remote Sens. 36 (3) (1998) 963–973.
[38] M. Martorella, J. Palmer, F. Berizzi, B. Haywood, B. Bates, Polarimetric ISAR autofocusing, IET Signal Process. 2 (3) (2008) 312–324.
[39] L. Xi, L. Gousui, J. Ni, Autofocusing of ISAR images based on entropy minimization, IEEE Trans. Aerosp. Electron. Syst. 35 (1999) 1240–1252.
[40] F. Berizzi, G. Corsini, Autofocusing of inverse synthetic aperture radar images using contrast optimization, IEEE Trans. Aerosp. Electron. Syst. 32 (1996) 1185–1191.
[41] J.R. Fienup, J.J. Miller, Aberration correction by maximising generalised sharpness metrics, J. Opt. Soc. Am. A 20 (2003) 609–620.
[42] M. Iwamoto, T. Fujisaka, M. Kondoh, Autofocusing algorithm of inverse synthetic aperture radar using entropy, Electron. Commun. Jpn. 83 (1999) 97–106.
[43] R. Morrison, M.N. Do, D.C. Munson Jr., SAR image autofocus by sharpness optimisation: a theoretical study, IEEE Trans. Image Process. 16 (2007) 2309–2321.
[44] R. Morrison, D.C. Munson Jr., An experimental study of a new entropy-based SAR autofocusing technique, in: Proceedings of the 2002 International Conference on Image Processing, vol. 2, 2002, pp. 441–444.
[45] L.M. Novak, Studies of target detection algorithms that use polarimetric radar data, IEEE Trans. Aerosp. Electron. Syst. 25 (1989) 150–165.



[46] S.R. Cloude, K.P. Papathanassiou, Polarimetric SAR interferometry, IEEE Trans. Geosci. Remote Sens. 36 (5) (1998) 1551–1565.
[47] M. Martorella, L. Cantini, F. Berizzi, B. Haywood, E. Dalle Mese, Optimised image autofocusing for polarimetric ISAR, in: Proceedings of the EUSIPCO Conference 2006, Florence, Italy, 2006.
[48] H. Nies, O. Loffeld, K. Natroshvili, Analysis and focusing of bistatic airborne SAR data, IEEE Trans. Geosci. Remote Sens. 45 (2007) 3342–3349.
[49] M. Soumekh, Bistatic synthetic aperture radar inversion with application in dynamic object imaging, IEEE Trans. Signal Process. 39 (9) (1991) 2044–2055.
[50] M. Soumekh, Synthetic Aperture Radar Signal Processing with MATLAB Algorithms, Wiley, 1999.
[51] J.H.G. Ender, I. Walterscheid, A.R. Brenner, New aspects of bistatic SAR: processing and experiments, in: Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Anchorage, AK, United States, vol. 3, IEEE, 2004, pp. 1758–1762.
[52] D. D'Aria, A.M. Guarnieri, F. Rocca, Focusing bistatic synthetic aperture radar using dip move out, IEEE Trans. Geosci. Remote Sens. 42 (7) (2004) 1362–1376.
[53] D. Olivadese, M. Martorella, E. Giusti, D. Petri, F. Berizzi, Passive ISAR with DVB-T signal, in: Proceedings of the EUSAR 2012, 2012.
[54] M. Martorella, Bistatic ISAR image formation in presence of bistatic angle changes and phase synchronization errors, in: Proceedings of the EUSAR 2008 Conference, Friedrichshafen, Germany, 2008.
[55] F. Berizzi, E. Dalle Mese, Sea-wave fractal spectrum for SAR remote sensing, IEE Proc. Radar Sonar Navig. 148 (2) (2001) 56–66.