Ground tests for vision based determination and control of formation flying spacecraft trajectories


P. Gasbarri (a), M. Sabatini (b,*), G.B. Palmerini (b)

(a) DIMA, Università di Roma La Sapienza, Italy
(b) DIAEE, Università di Roma La Sapienza, Italy

* Corresponding author. Tel.: +39 064 458 5324. E-mail addresses: [email protected] (P. Gasbarri), [email protected] (M. Sabatini), [email protected] (G.B. Palmerini).

Article history: received 16 July 2013; received in revised form 12 November 2013; accepted 23 November 2013

Abstract

The advances in computational capabilities and in the robustness of the dedicated algorithms suggest vision based techniques as a fundamental asset in performing space operations such as rendezvous, docking and on-orbit servicing. This paper discusses a vision based technique in a scenario where the chaser satellite must identify a noncooperative target and use visual information to estimate the relative kinematic state. A hardware-in-the-loop experiment is performed to test the possibility of performing a space rendezvous using the camera as a standalone sensor. This is accomplished using a dedicated test bed constituted by a dark room hosting a robotic manipulator. The camera is mounted on the end effector, which moves replicating the satellite formation dynamics, including the control actions, which depend at each time step on the state estimate produced by the visual algorithm, thus realizing a closed GNC loop. © 2013 IAA. Published by Elsevier Ltd. All rights reserved.

Keywords: Vision based navigation; Formation flying; Orbital maneuvering; Kalman filter

1. Introduction

Visual based techniques have a long history in space missions. It is however in the last decade that these devices have been getting more and more attention as standalone tools for orbital maneuvers. A major application can be seen in close proximity operations between two satellites (typically, a chaser and a target spacecraft), for operations like docking, rendezvous, and on-orbit servicing. In particular, CNES (Centre National d'Études Spatiales) has implemented on-board software for vision based navigation using two cameras accommodated on the chaser satellite, thanks to its participation in the PRISMA mission [1]. In this experiment the target satellite must be considered a cooperative member of the formation: in fact, the Close Range Camera designed to estimate the relative attitude and position of the target satellite (Tango)


was based on the extraction of Light Emitting Diode patterns purposely mounted on the spacecraft. In this paper we consider instead the case of a completely noncooperative target, which has to be identified and characterized in terms of relative position and orientation. This problem presents different levels of difficulty, depending on the scenario considered. In fact, a spacecraft against the dark background of outer space is an easily detectable target, while a more complex situation occurs when the Earth or the Sun enters the chaser camera's field of view. Even more complicated situations arise when a particular zone of the target's external structure must be identified and approached for operations like grasping or docking. In this work an image identification algorithm, the Scale-Invariant Feature Transform [2,3], will be described and adopted to face these problems. Once the target spacecraft is precisely identified, the estimate of the complete relative state (position and velocity) is performed by means of a Kalman filter, based on the measurements coming from the visual identification process. The approach can be investigated by means of an experimental replica of the orbital scenario. Different strategies can be followed for the


purpose of testing space navigation and control systems on the ground. Free-floating, unwired platforms were presented for example in [4,5]. DLR (Deutsches Zentrum für Luft- und Raumfahrt), instead, has established a test facility [6] called the European Proximity Operations Simulator (EPOS), a highly accurate testbed for rendezvous and docking simulation under realistic conditions as they occur in space. The DLR mechanism is constituted by a robotic arm moving the chaser satellite, where a vision-based sensor (a monocular camera) is used for relative navigation. At the University of Würzburg [7], a similar manipulator is used to test a PMD (Photonic Mixer Device), a Time-of-Flight based 3D imaging sensor, for relative motion estimation of a spacecraft-like target. In this work the robotic manipulator approach is also used, but it refers to larger-scale space relative dynamics (i.e., a smaller scale factor exists between the experiment and the actual maneuver). A dark room (approximating the space conditions from a visual point of view) has been realized in the Guidance and Navigation Laboratory at Università di Roma La Sapienza as a testbed for spacecraft formation dynamics. A target satellite is positioned at a reference point (assumed as the origin of the orbiting reference frame), while a chaser satellite orbits in close proximity of the target thanks to a robotic arm (already described in [8]) that moves according to the orbital laws. The camera, representing the chaser, is accommodated on the end effector of the robotic arm. This testbed implements a hardware-in-the-loop configuration, where at each time step the relative state estimate, evaluated from the visual information, is used to compute the control actions that subsequently move the robotic arm. Experimental evidence will show the performance and the limits of the proposed approach for the use of visual based techniques as a standalone tool (or as a back-up solution when other sensors fail) for close proximity operations such as rendezvous.

2. Mission scenario

The modeled scenario envisages that the chaser satellite has completed a preliminary approach phase and is now in proximity of the target satellite, along a closed relative trajectory [9] (a 30 × 15 m in-plane elliptic relative orbit, centered 30 m away from the target satellite). This can be considered an intermediate configuration before completing the rendezvous operations. In Fig. 1 the relative trajectory is plotted in the Local Vertical Local Horizontal (LVLH) frame, centered at the target satellite position. In the proposed scenario, the chaser satellite determines its relative state by means of the camera, used as a standalone instrument, and completes the rendezvous relying on the estimates of the relative state coming from the visual device. To this aim, it is supposed that the attitude of the chaser spacecraft is kept constant with respect to the y direction of the LVLH frame, so that the target satellite is always in the chaser's field of view (Fig. 1). In other words, the chaser's attitude is fixed in the orbital frame. The target's attitude is also considered stationary in the orbital frame; therefore only the relative position actually has to be estimated. Once the estimates of the relative state converge to the real value, the rendezvous maneuver starts, so that the chaser satellite tracks the desired relative state with respect to the target satellite. A minimal sketch of this parking ellipse is given after Fig. 1.

Fig. 1. Parking orbit of the chaser in an orbital reference frame attached to the target.
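For concreteness, the closed relative ellipse of Fig. 1 can be generated from the bounded solution of the Hill–Clohessy–Wiltshire equations introduced in Section 5. The following minimal Python sketch assumes a circular LEO reference orbit with a 90 min period, a value not specified in the paper:

```python
import numpy as np

# Assumed orbital rate for a ~90-minute LEO reference orbit [rad/s].
n = 2.0 * np.pi / 5400.0

rho = 7.5    # radial semi-amplitude [m]: 15 m radial by 30 m along-track (2:1 HCW ellipse)
y_c = 30.0   # along-track offset of the ellipse center from the target [m]

# One period of the bounded (drift-free) relative motion in the LVLH frame.
t = np.linspace(0.0, 2.0 * np.pi / n, 500)
x = rho * np.sin(n * t)               # radial coordinate (LVLH x)
y = y_c + 2.0 * rho * np.cos(n * t)   # along-track coordinate (LVLH y)
```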


3. The experimental testbed

The testbed used for the experiments is based on a dark room where an articulated arm replicates the formation dynamics (Fig. 2). Currently, only a two-dimensional motion is realized, thanks to a planar, three-link (lengths 58, 24, 6 cm) manipulator with a camera mounted on the end-effector.

Fig. 2. Picture of the manipulator moving the chaser (camera) with respect to the target according to the Hill–Clohessy–Wiltshire dynamics.

Fig. 3. Scheme of the hardware-in-the-loop experiment. The green blocks refer to the actual hardware implementation, while the white blocks refer to the in-line numerical integration of the simulated spacecraft formation dynamics. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

During the tests, the formation orbital dynamics is numerically integrated, and at each time step the relative coordinates in the LVLH reference frame are transformed into coordinates of the working plane, where the manipulator moves. Due to the testbed size, a scale factor of 50 between the simulated LVLH world and the real testbed is adopted, so that one centimeter in the working plane corresponds to half a meter in the LVLH plane. To give an idea, the target mockup's SAR antenna is 15 cm wide, corresponding to 7.5 m for a real satellite. These new coordinates are then processed in order to find the orientation of the links. In this way the manipulator moves so that its end-effector follows the chaser spacecraft relative dynamics (position and attitude) with respect to the target satellite. At each time step an image is acquired and an estimate of the relative position is computed. If a control action is needed, it is evaluated on the basis of these estimates; the computed value is then provided as input to the formation orbital dynamics, to be integrated up to the following time step, closing the loop schematically reported in Fig. 3. Fig. 4 explains the relation between the experimental working plane (with the manipulator's workspace reported as a dotted red line) and the simulated orbital dynamics (blue elliptic trajectory), reproduced by means of the motion of the robotic manipulator, whose configuration at given times is depicted in black. The camera used is a commercial webcam, with a field of view of 73° and an autofocus range from 15 cm to infinity. This means that, due to the scale of the testbed, the target will be out of focus at distances corresponding to less than about 10 m in the LVLH plane.
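The paper does not report the testbed software; as an illustration, the following sketch shows the LVLH-to-workspace scaling and a textbook inverse-kinematics solution for a planar three-link arm with the stated link lengths (the function names and elbow convention are illustrative assumptions, not the authors' code):

```python
import numpy as np

L1, L2, L3 = 0.58, 0.24, 0.06   # link lengths [m] (58, 24, 6 cm)
SCALE = 50.0                     # 1 cm in the working plane = 0.5 m in the LVLH plane

def lvlh_to_workspace(x_lvlh, y_lvlh):
    """Scale simulated LVLH coordinates [m] to workspace coordinates [m] (base offset ignored)."""
    return x_lvlh / SCALE, y_lvlh / SCALE

def three_link_ik(px, py, phi, elbow=1.0):
    """Joint angles (q1, q2, q3) placing the end effector at (px, py) with fixed orientation phi."""
    # Wrist position: step back from the end effector along the fixed camera axis.
    wx = px - L3 * np.cos(phi)
    wy = py - L3 * np.sin(phi)
    c2 = (wx**2 + wy**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    if abs(c2) > 1.0:
        raise ValueError("target outside the manipulator workspace")
    q2 = elbow * np.arccos(c2)                      # elbow-up / elbow-down choice
    q1 = np.arctan2(wy, wx) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    q3 = phi - q1 - q2                              # third joint absorbs the residual rotation
    return q1, q2, q3
```

Since the camera attitude is kept fixed in the orbital frame, the end-effector orientation phi is constant and the three joint angles are fully determined by the commanded position.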

4. Scale invariant feature transform applied to formation flying

The problem of recognizing a specific, already known object in a noisy background scene including other possible targets has been tackled by means of several algorithms. Some of them are based on the presence in the target object of lines, circles or other primitive shapes [10,11]; others are related to the target's color, or to a combination of colors that are likely to appear in a target of a given class. The most appealing techniques are the ones that provide a certain degree of robustness to a transformation in scale, rotation or perspective of the target's image, caused by a different pose with respect to a reference configuration. Furthermore, different (even extremely different) lighting conditions, with possible shadowing of a portion of the target, as well as the noise due to low resolution, should be considered. One of the most successful algorithms from these points of view is the Scale-Invariant Feature Transform (SIFT), which transforms image data into scale-invariant coordinates relative to local features. According to Ref. [3], the major stages of computation used to generate the set of image features are as follows:

(a) Scale-space extrema detection: the first stage of computation searches over all scales and image locations. The scale space of an image is defined as a function $L(y,z,\sigma)$ (where $\sigma$ is the scale factor), produced by the convolution of the variable-scale Gaussian

$$G(y,z,\sigma) = \frac{1}{2\pi\sigma^2}\, e^{-(y^2+z^2)/2\sigma^2}$$

with the input image $I(y,z)$:

$$L(y,z,\sigma) = G(y,z,\sigma) * I(y,z)$$

where $y$ and $z$ are the coordinates of a point in the image plane.

(b) Keypoint localization: in order to detect stable keypoint locations in scale space, the Difference-of-Gaussian (DoG) function is introduced, which can be computed from the difference of two nearby scales separated by a constant multiplicative factor $k$:

$$D(y,z,\sigma) = L(y,z,k\sigma) - L(y,z,\sigma)$$

In order to detect the local maxima and minima of $D(y,z,\sigma)$, each sample point is compared to its eight neighbors in the current image and nine neighbors in the scales above and below.

(c) Orientation assignment: one or more orientations are assigned to each keypoint location, based on local image gradient directions. The gradient magnitude $m(y,z)$ and orientation $\theta(y,z)$ are computed using pixel differences:

$$m(y,z) = \sqrt{[L(y+1,z) - L(y-1,z)]^2 + [L(y,z+1) - L(y,z-1)]^2}$$

$$\theta(y,z) = \tan^{-1}\frac{L(y,z+1) - L(y,z-1)}{L(y+1,z) - L(y-1,z)}$$

(d) Keypoint descriptor: The local image gradients are measured in the region around each keypoint. These


Fig. 4. Manipulator's workspace and robotic arm, translated in the LVLH frame. (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)

Fig. 5. A keypoint descriptor is a vector containing the information about the gradient orientation and magnitude in the area around the keypoint (from Ref. [3]).

are transformed into a representation that allows for significant levels of local shape distortion and change in illumination. Fig. 5 (left) shows the gradient magnitude and orientation computed in a region around the keypoint location. These samples are then accumulated into orientation histograms summarizing the contents over N × N subregions (four 4 × 4 regions in Fig. 5, right). The descriptor is formed from a vector containing the values of all the orientation histogram entries (relative to the keypoint gradient orientation), corresponding to the lengths of the arrows on the right side of Fig. 5.
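As an illustration of stages (a)–(d), keypoint extraction and minimum-Euclidean-distance descriptor association can be sketched with OpenCV's SIFT implementation (a standard library, not necessarily the one used by the authors; the image paths are placeholders):

```python
import cv2

# Placeholder paths: a stored reference image of the target and the currently acquired frame.
ref = cv2.imread("reference_target.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

# The contrast threshold rejects unstable, low-contrast extrema of the DoG pyramid.
sift = cv2.SIFT_create(contrastThreshold=0.04)
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_cur, des_cur = sift.detectAndCompute(cur, None)

# Associate keypoints by minimum Euclidean (L2) distance between descriptors;
# crossCheck keeps only mutually nearest pairs, a simple one-to-one association.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)
```

The false matches that inevitably survive this nearest-descriptor association are removed by the affine-consistency test described later in this section.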

In this paper, SIFT is used to determine whether the specific target spacecraft is present or not in the image acquired by the chaser spacecraft at a given time. To this aim, a reference image of the target is first acquired and a set of relevant SIFT descriptors is identified. Then, the same algorithm is applied to the currently acquired image. If a match between a certain number of reference


descriptors and currently acquired descriptors is assessed, the object is considered recognized. Images generally allow the identification of a large number of keypoints, belonging both to the intended target and to other, non-interesting parts (background or other bodies). Such a large number of keypoints (and associated descriptors) helps increase the chances of identifying the target object. As a drawback, the computation time, which is a critical aspect in real-time operations, grows with the number of keypoints. Usually, a threshold is introduced for rejecting unstable extrema of the filtered image because of their low contrast. Fig. 6 shows a picture of the target satellite in a given reference configuration (i.e., 30 cm away from the camera, corresponding to 15 m in the LVLH frame). This picture is processed in order to obtain the keypoints and associated descriptors; Fig. 7 reports only a limited number of them for better readability. In this example the reference image is identified by a large number (1433) of descriptors, while the images that will be compared to the reference one are described by a reduced number (around 300, depending on the distance from the target satellite) to speed up the identification process. Alternatively, the SURF [12] algorithm has also been tested. This algorithm is


conceptually similar to SIFT, with modified methods making the feature extraction faster. In any case, these general-purpose object detection algorithms are quite time consuming, and their performance of course depends on the processor used. For a desktop PC (Pentium, 3.2 GHz, 4 GB of RAM) it can take about 0.1–0.5 s, according to the image dimensions and contents. On a space processor we can predict a computation time up to 5 s. A keypoint belonging to a new image is associated with a keypoint of the reference image through a selection process based on the minimum Euclidean distance between the descriptors. However, a number of false matches can be present, a number that can correspond to the whole set of keypoints if the target is not present in the current image. The algorithm adopted to identify possible matches is based on the hypothesis of an affine transformation between the reference and the current image. Let us call $y$ and $z$ the coordinates of a keypoint in the reference image, and $u$, $v$ the coordinates of the same keypoint in the current image. The relation between the current and the reference keypoint, including scale, rotation and translation effects, reads as

$$\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} s\cos\theta & s\sin\theta \\ -s\sin\theta & s\cos\theta \end{bmatrix} \begin{bmatrix} y \\ z \end{bmatrix} + \begin{bmatrix} t_y \\ t_z \end{bmatrix} \quad (1)$$

where $s$ is the scale factor, $\theta$ the rotation angle, and $t_y$, $t_z$ the translation between the current and the reference image. Considering all the $N$ keypoints of the reference image that have been associated to keypoints of the current image, we can write

$$\begin{bmatrix} u_1 \\ v_1 \\ \vdots \\ u_N \\ v_N \end{bmatrix} = \begin{bmatrix} y_1 & z_1 & sf & 0 \\ z_1 & -y_1 & 0 & sf \\ \vdots & \vdots & \vdots & \vdots \\ y_N & z_N & sf & 0 \\ z_N & -y_N & 0 & sf \end{bmatrix} \begin{bmatrix} s\cos\theta \\ s\sin\theta \\ t_y/sf \\ t_z/sf \end{bmatrix} \quad (2)$$

Fig. 6. Example of a target satellite image.

where a scaling factor on the translations ($sf = 100$) has been introduced for numerical stability, so that all the components of the matrix and of the vectors have the same order of magnitude. Relation (2) can be considered exact only if all the associations are correct, while it is affected by an error if there are some (or many, or all) false matches. Let us rewrite Eq. (2) as

$$\mathbf{u} = A\mathbf{c} \quad (3)$$

with $\mathbf{u}$ being the vector of keypoints of the current image, $A$ the matrix of reference keypoints, and $\mathbf{c}$ the vector of the affine transform parameters. A least squares solution $\tilde{\mathbf{c}}$ can be found through the pseudo-inverse (indicated by the superscript $+$) of the matrix $A$,

$$\tilde{\mathbf{c}} = A^{+}\mathbf{u} \quad (4)$$

and an error vector can be defined as

$$\mathbf{e} = \mathbf{u} - A\tilde{\mathbf{c}} \quad (5)$$

Fig. 7. Keypoints extracted from the reference image.


with a large value of the norm $\|\mathbf{e}\|$ if many false matches are present. By sequentially erasing from $\mathbf{u}$ and $A$ the pairs of keypoints that give the highest contribution to the error, a new $\tilde{\mathbf{c}}$ and the corresponding new norm $\|\mathbf{e}\|$ are obtained. The process is iterated until a limited number of residual keypoints remains (a value of 10 residual keypoints has been used in the current experiments).
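A minimal Python sketch of this iterative pruning, assuming the matched keypoint coordinates have already been collected into arrays (the 10-keypoint stopping value is the one used in the experiments):

```python
import numpy as np

def fit_similarity(ref_pts, cur_pts, sf=100.0, min_keypoints=10):
    """Fit the transform of Eq. (2) by least squares, iteratively discarding outliers.

    ref_pts, cur_pts: (N, 2) arrays of matched (y, z) and (u, v) keypoint coordinates.
    Returns c = [s cos(theta), s sin(theta), ty/sf, tz/sf] and the final residual norm.
    """
    ref = np.asarray(ref_pts, float).copy()
    cur = np.asarray(cur_pts, float).copy()
    while True:
        N = len(ref)
        # Two rows per keypoint: u_i = c1*y_i + c2*z_i + sf*c3 and v_i = c1*z_i - c2*y_i + sf*c4.
        A = np.zeros((2 * N, 4))
        A[0::2] = np.column_stack([ref[:, 0], ref[:, 1], sf * np.ones(N), np.zeros(N)])
        A[1::2] = np.column_stack([ref[:, 1], -ref[:, 0], np.zeros(N), sf * np.ones(N)])
        u = cur.reshape(-1)                # [u1, v1, ..., uN, vN]
        c = np.linalg.pinv(A) @ u          # pseudo-inverse solution, Eq. (4)
        e = u - A @ c                      # residual vector, Eq. (5)
        if N <= min_keypoints:
            return c, np.linalg.norm(e)
        # Discard the matched pair contributing most to the residual norm.
        worst = np.argmax(np.sum(e.reshape(-1, 2)**2, axis=1))
        ref = np.delete(ref, worst, axis=0)
        cur = np.delete(cur, worst, axis=0)
```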


Fig. 8. The norm of the residuals (e in Eq. (5)) decreases as the false matches are discarded from the database.

Fig. 8 shows the behavior of the norm $\|\mathbf{e}\|$ as the pairs of associated keypoints with the maximum contribution to the error norm are iteratively discarded. The norm of the error decreases down to a final value (ideally zero) when, hopefully, no outliers are present anymore in the keypoint set. The rotation, the scale and the translation of the current image can be evaluated with sufficient accuracy from the $\tilde{\mathbf{c}}$ obtained from the final set of keypoints, provided that the output error is small. In fact, the rotation about the focal axis (normal to the image plane) is given by

$$\theta = \tan^{-1}\frac{\tilde{c}(2)}{\tilde{c}(1)} \quad (6)$$

while the scale factor can be evaluated as

$$s = \frac{\tilde{c}(1)}{\cos\theta} \quad (7)$$

The scale factor plays a key role in this problem since, $X_r$ being the projection along the focal axis of the distance of the target satellite from the camera in the reference configuration, it can be easily demonstrated that the current projection of the distance along the focal axis, $X_c$, is given by

$$X_c = \frac{X_r}{s} \quad (8)$$

In previous works the possibility of using the range measurement for estimating the overall relative state has been analyzed [13]; in the present case the measurement is, instead of the range, the projection $X_c$ of the distance along the camera focal axis (parallel to the y axis of the LVLH frame), evaluated by solving the visual matching process.
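Continuing the sketch above, the pose parameters of Eqs. (6)–(8) follow directly from the fitted vector (here atan2 is used as a numerically robust form of the arctangent in Eq. (6)):

```python
import numpy as np

def pose_from_fit(c, X_ref):
    """Rotation, scale and focal-axis distance from the fitted parameters (Eqs. (6)-(8))."""
    theta = np.arctan2(c[1], c[0])   # rotation about the focal axis, Eq. (6)
    s = c[0] / np.cos(theta)         # scale factor, Eq. (7)
    X_c = X_ref / s                  # current focal-axis projection of the distance, Eq. (8)
    return theta, s, X_c
```

With the two-reference strategy described in Section 4.1 below, this extraction is simply run once per reference image and the result with the smaller residual norm is retained.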

If the camera is calibrated, and the focal length is therefore known with good accuracy, the measurement $X_c$, combined with the coordinates in the image plane, could also be used to determine the lateral displacement. However, in the section dedicated to navigation it will be shown that only $X_c$ is necessary for estimating the relative position and velocity in the orbital plane (i.e., $X_c$, $Y_c$, $\dot{X}_c$, $\dot{Y}_c$). The following tests will show that this visual process, even though subject to errors, is indeed quite robust and reliable in the proposed field of application.

4.1. Visual algorithm tests

An extensive campaign has been performed in order to fully understand the possibilities and the limits of the proposed approach. In the first experiment the reference image is taken 30 cm away from the camera; the camera is then moved backwards along the focal axis direction, up to a distance of 90 cm. The distance evaluated following the proposed approach suffers from an error whose behavior is reported in Fig. 9. As expected, the error increases as the difference between the actual and the reference configuration increases. The algorithm has already been proved [14] to be robust for scale factors in the range 0.5–2. A suitable solution is therefore to take a couple of reference images (a far reference image, 90 cm away, and a near reference image, 30 cm away) and then run the algorithm twice for each newly captured image. Two results (i.e., two scale factors) are obtained for each configuration, the first with respect to the far reference and the second with respect to the near reference, and the relevant residuals


Fig. 9. Error between true and evaluated distance, when a single reference image (acquired 30 cm away from the target) is used.

Fig. 10. Norm of the residuals at different chaser-target distances when two reference images are used.

are computed. As shown in Fig. 10, the norm of the residual is high at large distances when considering the near reference image, and high at close distances when considering the far reference image. The scale factor having the minimum residual norm at the end of the process is selected and, as a result, the error with respect to the actual distance, plotted in Fig. 11, can be considered acceptable in the whole range of interest. Furthermore, this approach could be further improved by adding more reference images. A second test has been performed to verify the algorithm's capability to overcome not only a different distance from the reference image, but also a different view angle. A new picture is taken at a distance of 45 cm from the target, with a lateral displacement of 15 cm. When the process of discarding the outliers is accomplished and only the final 10 pairs of matched keypoints are left (reported

in Fig. 12 for the current and reference image), the norm of the residual $\mathbf{e}$ is about 1 pixel, and the scale factor is 0.61 with respect to the near reference. According to Eq. (8) we can evaluate the relative distance as $X_r/s \approx 48.4$ cm, a result that suffers from an error of 7.4% (3.4 cm) with respect to the actual value. This result is definitely worse than in the previous experiment, where the target was always along the focal axis direction. In fact, the image is now not just a scaled, roto-translated version of the reference image, since the three-dimensional object shows new characteristics when observed under different viewpoints (in Fig. 12 one of the target's sides is hidden in the current image). The consequences of these inaccuracies will be further investigated when the algorithm is applied to the formation experiment. Here it is possible to observe that, similarly to what was done for the far and near references, the


Fig. 11. Error of the visual algorithm for different chaser-target distances resulting from the adoption of two reference images.

Fig. 12. Matched final keypoints in the current (left) and reference (right) image.

problem of the large error when a lateral displacement is present could be solved by adding more reference images (for example, far-left and far-right images). This has not been implemented, since the results prove that, even though the accuracy is degraded, the algorithm can be considered robust to this kind of problem, which is one of the traditional obstacles in image identification processes.

5. Navigation and estimation

The formation dynamics, under the hypotheses of a circular reference orbit, absence of perturbations, and close proximity between the satellites, can be written following the Hill–Clohessy–Wiltshire equations [15]

$$\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u} \quad (9)$$

where

$$\mathbf{x} = \begin{bmatrix} x \\ y \\ \dot{x} \\ \dot{y} \end{bmatrix}, \quad A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 3n^2 & 0 & 0 & 2n \\ 0 & 0 & -2n & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \quad (10)$$

$n$ being the orbital angular velocity. It must be noted that, following the proposed visual approach, we are only able to measure the projection of the relative distance along the focal axis. The measurement can therefore be defined as

$$z = H\mathbf{x} \quad \text{with} \quad H = \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} \quad (11)$$

By analyzing the observability matrix, defined as

$$O = \begin{bmatrix} H \\ HA \\ HA^2 \\ HA^3 \end{bmatrix} \quad (12)$$

it is possible to observe that it assumes rank 4 (i.e., full rank) for the given matrix $H$ (while the rank would be equal to 3 in the case $H = [\,1\ 0\ 0\ 0\,]$, that is, in the case of radial-only measurements). Since in our case the state is fully observable with the available measurement, it is possible to implement a linear Kalman filter [16] where the estimate vector $\tilde{\mathbf{x}}$ is composed of four components (radial position $\tilde{x}$, along-track position $\tilde{y}$, and the relevant time derivatives $\dot{\tilde{x}}$, $\dot{\tilde{y}}$), while the measurement is the projection of the distance along the focal axis. The implementation of the Kalman filter, given the covariance matrices of the measurement error, $R$ (indeed a scalar in this case), and of the process error, $Q$, is straightforward and will not be reported for the sake of brevity.
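This rank argument is easy to verify numerically; a short sketch (the orbital rate value is an assumption):

```python
import numpy as np

n = 2.0 * np.pi / 5400.0   # assumed orbital rate [rad/s]
A = np.array([[0., 0., 1., 0.], [0., 0., 0., 1.],
              [3 * n**2, 0., 0., 2 * n], [0., 0., -2 * n, 0.]])

def obs_rank(H):
    """Rank of the observability matrix O = [H; HA; HA^2; HA^3] of Eq. (12)."""
    O = np.vstack([H @ np.linalg.matrix_power(A, k) for k in range(4)])
    return np.linalg.matrix_rank(O)

print(obs_rank(np.array([[0., 1., 0., 0.]])))   # along-track measurement: rank 4, fully observable
print(obs_rank(np.array([[1., 0., 0., 0.]])))   # radial-only measurement: rank 3
```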


In the following experiments, the numerical values used for the covariance matrices are

$$R_f = \sigma_X^2 = 10^{-6}\ \mathrm{km}^2, \qquad Q_f = 10^{-12}\, I_{4\times 4}$$

6. Control

The adopted control law for the rendezvous maneuver is the classic Linear Quadratic Regulator (LQR), based on the solution of the Riccati equation for the computation of the control gain matrix $K$ (as, for example, in [17]), where the state matrix $A$ is the same as in the Kalman filter (Eq. (10)). The control action, which can be written as

$$\mathbf{u} = -K(\tilde{X} - X_{ref}) \quad (13)$$

is applied after one orbital period, so that the Kalman filter has sufficient time to converge from the initial estimate (which is assumed to be the origin). The gain matrix $K$ depends on the weight matrices $R_{lqr}$ and $Q_{lqr}$, relevant to the control cost and the control precision, which in the experiments are selected as

$$R_{lqr} = \begin{bmatrix} 10^{-11} & 0 \\ 0 & 10^{-11} \end{bmatrix}, \qquad Q_{lqr} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1/n^2 & 0 \\ 0 & 0 & 0 & 1/n^2 \end{bmatrix}$$

The performance of the maneuver depends, of course, on the accuracy of the estimate $\tilde{X}$, needed to compute the error at each time step with respect to the reference state $X_{ref}$. As stated before, the computing time required by the visual algorithm can be as high as 5 s (as an order of magnitude) for low-performance processors. This time delay has little effect on the control performance, considering that the GNC loop sample time is set to a much larger value, i.e., 60 s.
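A sketch of the gain computation through the continuous-time algebraic Riccati equation, using SciPy's standard solver (not necessarily the authors' tool; the weights are the ones quoted above, in the paper's internal units, and the orbital rate is an assumption):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n = 2.0 * np.pi / 5400.0   # assumed orbital rate [rad/s]
A = np.array([[0., 0., 1., 0.], [0., 0., 0., 1.],
              [3 * n**2, 0., 0., 2 * n], [0., 0., -2 * n, 0.]])
B = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])

Q_lqr = np.diag([1.0, 1.0, 1.0 / n**2, 1.0 / n**2])
R_lqr = 1e-11 * np.eye(2)

P = solve_continuous_are(A, B, Q_lqr, R_lqr)   # Riccati equation solution
K = np.linalg.inv(R_lqr) @ B.T @ P             # LQR gain matrix

def control(x_est, x_ref):
    """Control action of Eq. (13)."""
    return -K @ (x_est - x_ref)
```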

7. Experiments

7.1. Navigation for an uncontrolled formation

In the first experiment, an uncontrolled relative orbit is considered, such as the one plotted in Fig. 1. Fig. 13 reports the behavior of the along-track coordinate, together with the measurements coming from the visual algorithm. It is possible to appreciate that these measurements are quite accurate. For the radial component and for the relative velocities we can only rely on the state estimates, whose error with respect to the true state is reported in Fig. 14. A periodic behavior of the error is detected, especially on the along-track component. This is due to the fact that the measurement accuracy decreases in particular when the lateral displacement of the camera with respect to the target is high. As already mentioned in Section 4, the lateral displacement leads to a current image that is not just an affine transform of the reference image, as some parts of the target are hidden and some new parts are included. This is confirmed by Fig. 15, where the residuals of the visual algorithm (Eq. (5)) are reported according to the relevant position on the LVLH plane. Residuals are small when the target is along the camera focal axis (i.e., when the chaser is on the y axis), and become larger if lateral displacements (i.e., in the x direction) are considered. The estimation process is however not seriously affected, leading to a standard deviation of the error of

Fig. 13. True and measured projection of the relative distance along the camera focal axis (coinciding with the y axis of LVLH frame).


Fig. 14. Error of the estimate state with respect to the true state.

Fig. 15. Norm of the residuals (e in Eq. (5)) of the visual algorithm, plotted at the points of the trajectory where the relevant images have been acquired.

the estimate equal to 1 m (for the x component), 0.5 m (for the y component), and 1 mm/s (for $\dot{x}$ and $\dot{y}$). This good performance is also due to the fact that the number of accurate measurements is larger than the number of noisy ones: in fact, the relative velocity of the chaser is higher at the points with maximum lateral distance, and therefore few (time-fixed) measurements, with low influence on the filter estimation performance, are relevant to this

condition. A qualitative representation of the distribution of the measurements along the orbit can be found in Fig. 16.

7.2. Orbital rendezvous

A second experiment has been performed to simulate the rendezvous maneuver. Four images acquired by


Fig. 16. Measurement error plotted at the points of the trajectory where the relevant images have been acquired; the measurement density is higher close to the y axis.

Fig. 17. Four images acquired during the motion of the chaser in proximity of the target.

the camera during the maneuver are reported in Fig. 17, showing the different view angles under which the algorithm is able to correctly identify the target satellite, up to the final configuration (bottom-right picture), where the image is not even complete. One orbital period is devoted to the convergence of the filter estimate; after that, the control law for reaching the desired state ($X_{ref} = [\,0\ \mathrm{m}\ \ 10\ \mathrm{m}\ \ 0\ \mathrm{m/s}\ \ 0\ \mathrm{m/s}\,]$) is activated, following Eq. (13). The time histories of the continuous controls along the x and y directions are reported in Fig. 18. They are of course null for the first orbital period; then, based on the difference between the state estimate and the reference state, they accelerate the

chaser towards the target (a motion that, at each time step, is translated into the corresponding movement of the robotic arm). At the end of the maneuver the control action is nearly zero, since the reference state has been reached. This can be appreciated by looking at the resulting true (black line) and estimated (red circles) trajectories, reported in Figs. 19 and 20. It is possible to see that in the initial part of the experiment the estimate of the radial component is not very accurate, but it soon converges, and it is sufficient to complete the maneuver with good accuracy; the final error of the true position with respect to the reference state $X_{ref}$ has a mean value (evaluated over the last 3600 s) of about 0.5 m, almost entirely


Fig. 18. Time history of the control actions during the rendezvous.

Fig. 19. True and estimated trajectory during the parking phase and rendezvous. (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)

due to measurement inaccuracy. The accuracy of the control cannot, of course, be better than the accuracy of the estimate. On the other hand, as the chaser gets closer to the target, the resolution of the acquired images increases, allowing for more accurate measurements and consequently more precise control. Unfortunately, this cannot be verified by means of the testbed, since the images are out of focus below a certain distance. It is nevertheless reasonable to expect that the performance of

the proposed approach could only increase in a real, full-scale application. In fact, the visual algorithm greatly benefits from a target that is rich in details (as a real spacecraft would be), so that peculiar keypoints can be easily recognized. This observation, together with the good results obtained in the experiments, suggests that the use of an optical instrument as a standalone sensor for rendezvous with a non-cooperative target can be considered a real option.
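To summarize the GNC loop of Fig. 3 in code, the following sketch chains dynamics propagation, a synthetic focal-axis measurement (standing in for the visual algorithm), Kalman estimation, and LQR control in a single simulation. The filter covariances follow the values quoted in Section 5 (converted to meters), while the LQR weights and initial conditions are illustrative assumptions rather than the paper's exact values:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_are

n = 2.0 * np.pi / 5400.0                      # assumed orbital rate [rad/s]
A = np.array([[0., 0., 1., 0.], [0., 0., 0., 1.],
              [3 * n**2, 0., 0., 2 * n], [0., 0., -2 * n, 0.]])
B = np.array([[0., 0.], [0., 0.], [1., 0.], [0., 1.]])
H = np.array([[0., 1., 0., 0.]])              # only the focal-axis projection is measured

dt = 60.0                                     # GNC loop sample time [s]
F = expm(A * dt)                              # discrete state-transition matrix

# LQR gain; weights rescaled for the meter-based units of this sketch (assumption).
Qc = np.diag([1.0, 1.0, 1.0 / n**2, 1.0 / n**2])
Rc = 1e12 * np.eye(2)
K = np.linalg.inv(Rc) @ B.T @ solve_continuous_are(A, B, Qc, Rc)

Qf = 1e-6 * np.eye(4)                         # process noise: 1e-12 km^2 = 1e-6 m^2
Rf = np.array([[1.0]])                        # measurement noise: 1e-6 km^2 = 1 m^2

x_true = np.array([0.0, 45.0, 7.5 * n, 0.0])  # a point on the parking ellipse of Fig. 1
x_est, P = np.zeros(4), 10.0 * np.eye(4)
x_ref = np.array([0.0, 10.0, 0.0, 0.0])       # rendezvous reference state

T = 2.0 * np.pi / n                           # orbital period [s]
for k in range(int(2 * T / dt)):
    # Control (Eq. (13)) is activated after one orbital period, as in the experiments.
    u = -K @ (x_est - x_ref) if k * dt > T else np.zeros(2)
    x_true = F @ x_true + B @ u * dt          # truth propagation (zero-order hold, sketch-level)
    z = H @ x_true + np.random.normal(0.0, 1.0, 1)   # synthetic visual measurement [m]
    # Linear Kalman filter predict/update on the focal-axis projection.
    x_pred = F @ x_est + B @ u * dt
    P = F @ P @ F.T + Qf
    Kk = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rf)
    x_est = x_pred + Kk @ (z - H @ x_pred)
    P = (np.eye(4) - Kk @ H) @ P
```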


Fig. 20. True and estimated position, projected along x and y directions. (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)

8. Conclusion

The paper deals with tests, carried out in a purposely built laboratory testbed, of a formation flying navigation and control technique based on the images acquired by an onboard camera. The results of this experimental activity prove that the proposed vision based technique is able to identify the target spacecraft under different view angles, at different distances, and under varying lighting conditions. The correct identification of the target allows quite accurate measurements, which can be processed to estimate the complete relative kinematic state of the target with respect to the chaser satellite, and makes it possible to perform a rendezvous maneuver using the camera as a standalone tool. In addition, the method has a major field of application in the possibility of identifying and tracking specific parts of the target satellite, a characteristic that could be very useful when a given docking mechanism, or handle, or part to be repaired, must be recognized and reached. The algorithm in the current version takes a few seconds to run, far less than the measurement period of one minute, and it is likely that a large margin for further improvements in the computation time still exists, since optimality in this sense was out of the scope of this work. The good results obtained with the current testbed, where a scaled model of the formation dynamics is realized, encourage further developments of the study. Some important characteristics are still missing in this first release of the testbed, such as the possibility of a three-dimensional relative motion, or the possibility of simulating a tumbling target satellite. The proposed method, however, is not deeply affected by the current limits. In fact, it could be extended, without any significant changes, to the case of a rotating target satellite by enriching the database of reference images with images of the target

satellite under different viewpoints. Of course the algorithm computing time grows with the number of images, and the trade-off between pose estimation error and required time is to be performed with respect to the needed control frequency and accuracy.

References

[1] M. Delpech, J.C. Berges, S. Djalal, P.Y. Guidotti, J. Christy, Preliminary results of the vision based rendezvous and formation flying experiments performed during the PRISMA extended mission, in: Proceedings of the 1st DyCoSS Conference, IAA-AAS-DyCoSS1-12-07, Porto, Portugal, 19–21 March 2012.
[2] D.G. Lowe, Object recognition from local scale-invariant features, in: Proceedings of the International Conference on Computer Vision, Corfu, Greece, September 1999, pp. 1150–1157.
[3] D.G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[4] D. Miller, A. Saenz-Otero, J. Wertz, A. Chen, G. Berkowski, C. Brodel, S. Carlson, D. Carpenter, S. Chen, S. Cheng, D. Feller, S. Jackson, B. Pitts, F. Perez, J. Szuminski, S. Sell, SPHERES: a testbed for long duration satellite formation flying in micro-gravity conditions, American Astronautical Society, Paper 00-110, January 2000.
[5] M. Sabatini, G.B. Palmerini, R. Monti, P. Gasbarri, Image based control of the "PINOCCHIO" experimental free flying platform, Acta Astronaut. 94 (1) (2014) 480–492, http://dx.doi.org/10.1016/j.actaastro.2012.10.037.
[6] H. Benninghoff, T. Tzschichholz, T. Boge, T. Rupp, Hardware-in-the-loop simulation of rendezvous and docking maneuvers in on-orbit servicing missions, in: Proceedings of the 28th International Symposium on Space Technology and Science, Okinawa, 5–12 June 2011.
[7] K. Ravandoor, S. Busch, L. Regoli, Evaluation and performance optimization of PMD camera for RvD application, in: Proceedings of the 19th IFAC Symposium on Automatic Control in Aerospace, Würzburg, Germany, 2–6 September 2013.
[8] M. Sabatini, R. Monti, P. Gasbarri, G.B. Palmerini, Adaptive and robust algorithms and tests for visual-based navigation of a space robotic manipulator, Acta Astronaut. 83 (2013) 65–84, http://dx.doi.org/10.1016/j.actaastro.2012.10.016.
[9] K. Alfriend, S.R. Vadali, P. Gurfil, J. How, L. Breger, Spacecraft Formation Flying: Dynamics, Control, and Navigation, first ed., Elsevier Astrodynamics Series, 2010.


[10] R.O. Duda, P.E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15 (1972) 11–15.
[11] G. Casonato, G.B. Palmerini, Visual techniques applied to the ATV/ISS rendez-vous monitoring, in: Proceedings of the IEEE Aerospace Conference, vol. 1, 2004, pp. 613–624.
[12] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool, SURF: speeded up robust features, Comput. Vis. Image Underst. 110 (3) (2008) 346–359.
[13] M. Sabatini, G.B. Palmerini, Deployment strategies for a formation of pico-satellites, in: Proceedings of the 60th International Astronautical Congress, IAC 2009, vol. 9, 2009, pp. 7059–7071.
[14] G. Palmerini, M. Sabatini, P. Gasbarri, Analysis and tests of visual based techniques for orbital rendezvous operations, in: Proceedings of the IEEE Aerospace Conference, AERO 2013, Big Sky, Montana, USA, art. no. 6497417, http://dx.doi.org/10.1109/AERO.2013.6497417.
[15] W.H. Clohessy, R.S. Wiltshire, Terminal guidance system for satellite rendezvous, J. Aerosp. Sci. 27 (9) (1960) 653–658.
[16] P. Zarchan, H. Musoff, Fundamentals of Kalman Filtering: A Practical Approach, Progress in Astronautics and Aeronautics Series, vol. 190, AIAA, 2000.
[17] M. Sabatini, D.R. Izzo, G.B. Palmerini, Minimum control for spacecraft formations in a J2 perturbed environment, Celest. Mech. Dyn. Astron. 105 (1) (2009) 141–157, http://dx.doi.org/10.1007/s10569-009-9214-5.
