Acta Astronautica 165 (2019) 298–311
journal homepage: www.elsevier.com/locate/actaastro
Model-free pose estimation using point cloud data

Tae W. Lim*, Charles E. Oestreich¹

United States Naval Academy, Annapolis, MD 21402, USA

* Corresponding author. Aerospace Engineering Department, Mail Stop 11B, 590 Holloway Road. E-mail address: [email protected] (T.W. Lim).
¹ Midshipman 1/C, Aerospace Engineering Department, Mail Stop 11B, 590 Holloway Road; currently Ensign, US Navy.

https://doi.org/10.1016/j.actaastro.2019.09.007
Received 12 April 2019; Received in revised form 8 July 2019; Accepted 5 September 2019; Available online 19 September 2019
0094-5765/ Published by Elsevier Ltd on behalf of IAA.

ARTICLE INFO

Keywords: Model-free pose estimation; Point cloud data; Edge detection; Hough transform; Plane fitting

ABSTRACT

A model-free pose (relative attitude and position) estimation process using point cloud data is developed to support envisioned autonomous satellite servicing missions. Unlike model-based pose estimation, which typically requires a three-dimensional geometric model of the satellite to be serviced, the model-free pose estimation presented in this paper employs object features detected in situ using point cloud data. Its overall process is similar to existing model-free pose estimation algorithms in that it requires edge detection, object boundary identification, edge vector determination to form an orthogonal triad for attitude estimation, and object area and centroid calculation for position estimation with respect to the sensor. However, significant improvements have been made toward fully autonomous, model-free pose estimation by employing the Hough transform, which allows robust line fitting of the jagged edges resulting from object rotations and the discrete nature of point cloud data. In addition, a plane fitting algorithm using the homogeneous transformation was developed to compute plane normal vectors and thus improve attitude estimation accuracy. The performance of the developed model-free pose estimation process was examined using both measured and simulated point cloud data of test objects, and it was able to produce accurate position and attitude estimates of the objects. Future effort will be needed to include more general geometric features such as circles, ellipses, and arcs in order to provide the capabilities needed for a fully autonomous model-free pose estimation process.

1. Introduction

Recent efforts on autonomous satellite servicing can be categorized into two different rendezvous and capture approaches: cooperative and non-cooperative [1–3]. In the cooperative approach, the target spacecraft (i.e., the one to be serviced) provides assistance to the servicing satellite during rendezvous and capture. This assistance could be in the form of docking adapters, fiducial markings, retro-reflectors, or telemetry data. There have been several successful missions in the realm of cooperative servicing. The Air Force Research Laboratory's Experimental Spacecraft System Number 11 (XSS-11) [2,3] and the Defense Advanced Research Projects Agency's (DARPA) Orbital Express [4,5] have demonstrated fully automated rendezvous, capture, and servicing capabilities with cooperative spacecraft. However, the majority of existing satellites in orbit are non-cooperative in that they were not intended to be captured and serviced. To expand on-orbit servicing capabilities to these legacy satellites, NASA and DARPA have both undertaken satellite servicing missions. NASA's Restore-L mission looks to rendezvous with, refuel, and relocate the Landsat 7 satellite, an Earth imaging satellite located in low Earth orbit (LEO) [6,7]. The U.S. Naval Research Laboratory (NRL) is spearheading DARPA's Robotic Servicing of Geosynchronous Satellites (RSGS) program, which aims at developing a robotic spacecraft capable of servicing non-cooperative spacecraft in geosynchronous orbit (GEO) [8]. Among the many challenges in non-cooperative servicing approaches, advancements in autonomous rendezvous and capture, especially advanced imaging and pose (relative orientation and range) estimation capabilities, have been identified as a key technology development area [2]. Non-cooperative pose estimation can be further categorized into model-based and model-free (or non-model-based) approaches, although there are also hybrid approaches that employ model-free algorithms for acquisition and then model-based processes for tracking.

1.1. Model-based pose estimation

Both the Restore-L and RSGS programs will utilize sensors with model-based computer vision algorithms to detect capture points and calculate their pose relative to the servicing satellite. Restore-L's potential computer vision and tracking systems are currently being tested onboard the Raven module, operating on the International Space Station (ISS) [9]. Raven utilizes visual, infrared (IR), and LiDAR (Light Detection and Ranging) sensors to track visiting spacecraft to the ISS and to estimate their relative pose.





Raven utilizes two main computer vision algorithms: the Goddard Natural Feature Image Recognition (GNFIR) process for visual/IR images and the FPose algorithm for LiDAR images [10,11]. In the GNFIR process, the algorithm first detects edges of the target spacecraft in the camera sensor image. It then searches for corresponding edges in a 2-D projected model, which is obtained from an internally stored 3-D model of the target spacecraft. Errors between the projected model edges and sensor image edges are minimized using a least-squares method, producing a final, corrected pose measurement. The process then repeats for the next frame, using the previous frame's pose measurement. GNFIR has been proven to maintain high accuracy while running in real time (3 Hz). However, due to the use of visual and IR images, GNFIR performance is susceptible to lighting conditions, and the algorithm at times has difficulty acquiring an initial pose. The FPose process utilizes the LiDAR images obtained by the Raven ISS module and calculates pose based on the widely used Iterative Closest Point (ICP) algorithm [12]. The underlying process is very similar to GNFIR: the point cloud measured by the LiDAR sensor is compared to a point cloud model of the target spacecraft, and the ICP algorithm minimizes the least-squares error between them to produce a pose estimate. The RSGS program will utilize a computer vision system known as TriDAR (Triangulation and LiDAR Automated Rendezvous and Docking), which has been successfully tested onboard the Space Shuttle [13]. TriDAR also uses a model-based algorithm with 3-D LiDAR sensor data. The algorithm steps generally follow a pattern similar to the FPose algorithm: 3-D point cloud data are obtained and compared to an internally stored point cloud model of the target spacecraft, and the ICP algorithm is used to search for matching points between the sensor data and the model point cloud data. The unique aspect of TriDAR is its method of acquiring sparse 3-D point cloud data from its sensor using distinct scanning patterns. These patterns are optimized for the target spacecraft, thereby maintaining accuracy while making the entire algorithm more efficient and implementable in real time. Aiming at building non-cooperative pose estimation capability, an on-orbit experiment was conducted recently to collect point cloud data using the Laser and Infra-Red Imaging Sensors (LIRIS) demonstrator while the Automated Transfer Vehicle 5 (ATV5) was approaching the ISS [14]. Subsequently, pose estimation results using the data were reported [15]. Although the results represent a step forward for non-cooperative pose estimation, since they did not rely on conventional navigation aids such as fiducial markings or retro-reflectors, they still relied on an ICP-based pose estimation algorithm and employed a 3-D model of the ISS. There have also been other recent studies conducted to develop algorithms for pose estimation using point cloud data, based primarily on the ICP algorithm [16,17].

1.2. Model-free pose estimation

One of the common drawbacks of the model-based algorithms mentioned above is that they require the use of an internally stored model of the target spacecraft generated a priori. New three-dimensional models can be uplinked to the servicing satellite, and models can also be created on orbit by taking a series of images at different orientations relative to the target spacecraft, but both of these processes are time-consuming and greatly limit the rate of deployment of the servicing satellite. Therefore, it is advantageous to develop a method of pose estimation that does not require an internally stored model and instead relies on features extracted in situ from the sensor image. The lack of dependence on a target spacecraft model potentially generalizes the capabilities of servicing satellites to rendezvous with and capture many non-cooperative, legacy satellites as well as orbital debris.

Several model-free pose estimation studies to support autonomous rendezvous and capture were conducted recently. Du et al. [18] performed model-free pose estimation using simulated dual 2-D camera images. For pose estimation, they employed corner points of an object feature such as a rectangle. Using stereo vision images, Oumer and Panin [19] conducted feature-based, model-free pose estimation, employing an iterated extended Kalman filter to estimate object velocity and thus predict pose at a future sample time. The errors between predicted and measured poses of object features were then used to refine the pose estimates. Regoli et al. [20] employed a photonic mixer device (PMD) based camera, which provides 3-D range images using the principle of time-of-flight measurement. Pose estimation experiments were conducted using the PMD camera. In order to improve accuracy and resolve rotation ambiguity about the camera boresight, a plane fitting algorithm using principal component analysis and a Hough transform that fits a straight line to one of the object edges were employed. He et al. [21] conducted pose estimation using 3-D point cloud data, features detected using surface curvature data of neighboring points, and a particle filter, which tends to be computationally intensive. Initial experimental results were limited to estimating translational motions. Suqi and Lim [22] demonstrated that, using simulated LiDAR data, the orientation of object edges can be employed directly to define an object coordinate frame and thus calculate its pose. Their results provided good pose estimation accuracy when object edges were aligned with the sensor axes, but did not work as well when they were inclined. This was due to the discrete nature of 3-D LiDAR point cloud data: as the object was rotated, edges became jagged and the algorithm mistakenly identified many false corners. These studies indicate that the 3-D nature of LiDAR point cloud data is advantageous for model-free pose estimation, as the range between the sensor and the target object is inherently available using a LiDAR sensor.

The model-free pose estimation algorithm in this paper presents several improvements over existing algorithms. Instead of using the Moore-neighbor edge boundary tracing algorithm [23] to detect lines, it employs the Hough transform [24], which is capable of identifying lines despite sensor artifacts like jagged edges. When measured point cloud data are employed for pose estimation, reflections of sensor light, especially around edges, cause the point cloud data near the edges to be scattered and relatively sparse, leading to inaccuracies in defining edge vectors, which in turn are employed to define object orientation. A process of removing these spurious points along the edges is introduced to improve the accuracy of edge vectors. The new algorithm presented in this paper further enhances model-free pose estimation techniques by supplementing edge definition with the orientation of the object's surface plane. As more point cloud data are contained within the surface plane, a plane normal vector provides a more robust and accurate estimate of object orientation. Studies using both experimental and simulated point cloud data sets were conducted to examine the performance of the model-free pose estimation algorithm.

1.3. Organization of the paper

This paper is organized as follows. Section 2 describes the overview of the model-free pose estimation process developed in this paper. Section 3 presents the experimental setup employed for point cloud data collection and the test data that will be used to examine the performance of the pose estimation algorithm. Sections 4 through 6 describe the model-free pose estimation process in detail. Improvements made to remove spurious point cloud data that do not belong to edges are presented in detail in Section 4. Section 5 then describes the Hough transform, which is robust to artifacts like jagged edges, to estimate edge unit vectors. A plane normal unit vector estimation algorithm using the homogeneous transformation is also presented. Section 6 describes estimation algorithms for relative attitude and position and shows the results obtained using the point cloud test data. Section 7 provides an assessment of the pose estimation accuracy of the model-free algorithm using simulated point cloud data, absent measurement errors. Finally, Section 8 provides conclusions that are drawn from the investigation.


Fig. 1. Model-free pose estimation process.
Fig. 2. Experimental setup for point cloud data collection.

2. Overview of model-free pose estimation

There are multiple steps involved in the model-free pose estimation process presented in this paper, as depicted in Fig. 1. The process begins by collecting point cloud data within the field-of-view (FOV) illuminated by a LiDAR sensor. The process continues by conducting edge detection on the point cloud data. Detected edges are processed to remove undesirable artifacts of edge detection and to identify the object boundary. Then, the Hough transform [24] is applied to the points on the edges to fit lines that pass through them and thus obtain the corresponding unit vectors in the sensor frame. Additionally, a plane-fitting process is employed using the points interior to the object boundary to estimate a plane normal unit vector to help improve the pose estimation accuracy. Although there could be many objects of various shapes and sizes in the sensor field of view, in this paper the geometry of the test object is limited to straight edges and flat surfaces. The next step is to choose two non-parallel edge unit vectors to form a direction cosine matrix (DCM), which represents the relative attitude of the object with respect to the sensor frame. Alternatively, one edge unit vector and the plane normal unit vector can be employed to define the DCM to help improve pose estimation accuracy. Finally, in order to provide estimates of the object position relative to the sensor frame, the area inside the object boundary and its centroid are computed. The object centroid is used to define the position of the object.

Fig. 3. Sample point cloud test data displayed as a false color image. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

3. Experimental setup for point cloud data collection

The experimental setup employed to collect point cloud data to study model-free pose estimation is depicted in Fig. 2. The flash LiDAR sensor [25] is attached to the three-axis linear stage via structural tubing to provide three translational degrees of freedom (DOFs) of motion for the sensor. The linear stage is capable of providing a range of motion of 0.75 m, 0.45 m, and 0.45 m along the sensor frame axes X, Y, and Z, respectively. The sensor has a 0.39 × 0.39 deg instantaneous field of view with a 176 × 144 pixel detector array. Further details can be found in Ref. [25]. It is placed above the test object, which in turn is attached to an air bearing system. The hemispherical air bearing allows continuous spin of the object, which is remotely controlled by a reaction wheel installed inside the air bearing. A limited amount of tilt of the object, up to 30 deg in directions perpendicular to the spin axis, is allowed. The definition of the sensor frame (X-Y-Z) is also depicted in the figure.

Various test conditions were used to collect point cloud test data, including different locations of the sensor with respect to the test object, different spin rates, and various tilt angles of the test object. A data acquisition system was employed to collect and display the data. A sample of the point cloud test data, collected during the experiment, is displayed as a false color image in Fig. 3. For the study presented in this paper, the test object was tilted by 5 deg about the sensor Y axis and then by −5 deg about the sensor X axis. The three-axis linear stage, and thus the LiDAR sensor, was moved to locate the test object at X = 0 m, Y = 0.1 m, and Z = 0.8 m with respect to the sensor frame. Then the reaction wheel inside the air bearing was activated to provide a spin rate of 8 rpm (revolutions per minute) in the counter-clockwise direction (or −Z rotation). The point cloud test data cases employed for the pose estimation study presented later are displayed in Fig. 4, captured at about 45 deg intervals as the test object spins.
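Since the remainder of the process operates on Cartesian point cloud data rather than raw range images, the sketch below illustrates how a range image from a detector of the size quoted above might be converted into per-pixel XYZ points in the sensor frame. The equal-angle pixel model, the function name, and the default IFOV value are illustrative assumptions, not the calibration of the actual sensor.

```python
import numpy as np

def range_image_to_point_cloud(range_img, ifov_deg=0.39):
    """Convert a (rows, cols) range image into a (rows, cols, 3) XYZ point cloud.

    Assumes each pixel subtends ifov_deg in both directions and that the
    boresight (+Z) passes through the detector center -- a simplified model
    for illustration only.
    """
    rows, cols = range_img.shape
    ifov = np.deg2rad(ifov_deg)
    # Angular offset of each pixel from the boresight.
    az = (np.arange(cols) - (cols - 1) / 2.0) * ifov   # about the sensor Y axis
    el = (np.arange(rows) - (rows - 1) / 2.0) * ifov   # about the sensor X axis
    az, el = np.meshgrid(az, el)
    # Unit line-of-sight vector per pixel, scaled by the measured range.
    los = np.stack([np.tan(az), np.tan(el), np.ones_like(az)], axis=-1)
    los /= np.linalg.norm(los, axis=-1, keepdims=True)
    return range_img[..., None] * los

# Example: a flat wall 0.8 m away seen by the 144 x 176 detector.
cloud = range_image_to_point_cloud(np.full((144, 176), 0.8))
print(cloud.shape)   # (144, 176, 3)
```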

4. Detection and processing of edges and object boundary

The model-free pose estimation process begins by detecting edges, and thus the object boundary, using point cloud data such as those shown in Fig. 4. Among the many available edge detection algorithms, Canny edge detection [26] was employed for its built-in noise filtering properties. To illustrate the edge and object boundary detection process, the figures on the left side of Fig. 5 show the results of Canny edge detection applied to the point cloud data for Case 0 deg and Case −45 deg. These two-dimensional binary images (white pixels or 1 for edges and black pixels or 0 for background) represent the locations of edge points in the sensor detector pixel rows and columns. As sensor measurement artifacts, additional edge lines that do not belong to the object boundary are detected. These are corrected typically by counting the number of points (or pixels) belonging to individual connected lines and removing those that have less than a specified number of points [27]. Images after removing these artifacts are shown on the right side of the figure, representing the object boundaries.

Fig. 4. Point cloud test data cases employed for pose estimation.


Fig. 5. Examples of edge and object boundary detection.

Note that the lines are not perfectly straight due to measurement errors and the tilt and spin of the object with respect to the sensor. Jagged edges are clearly visible for Case −45 deg. As discussed in detail in Ref. [28], they are artifacts of spatial aliasing caused by the discrete sensor detector pixel resolution. A previously reported non-cooperative pose estimation algorithm [22] failed to accommodate jagged edges since it needed to identify lines and corners in order to specify edge unit vectors; jagged edges, although they represent straight lines, were mistaken for many small line segments and corners. The Hough transform [24], as discussed later, resolves this issue and provides a robust way of accommodating jagged edges.

Pixel points of the detected object boundary are mapped onto the corresponding three-dimensional point cloud data to examine how well the pixel points correspond to physical object edges. Fig. 6 shows an example using the test data of Case 0 deg. The mapped edge points on the object boundary are marked with red circles. They are scattered on the upper (L-shaped object) and lower (square plate) objects as well as on the side walls, showing errors in object boundary detection due to sensor noise and scattering of light. As will be discussed later, since edge unit vectors are formed by fitting straight lines using these edge points, line-fitting accuracy suffers and so does pose estimation accuracy. To improve pose estimation accuracy, it is desirable to have most edge points lie on the raised L-shaped object. The process of achieving that is illustrated in Fig. 7 using the test data of Case 0 deg. It starts with the original boundary shown in the upper right plot of Fig. 5. The interior of the object boundary is filled with 1's (left plot of Fig. 7), where the original boundary points are marked with blue squares. The original boundary points are then removed (middle plot) by setting their values to 0's. Finally, a new boundary, marked with red squares, is defined from the remaining interior points (right plot) using the boundary detection algorithm [27]. Fig. 8 depicts the corresponding three-dimensional point cloud data for the new object boundary, providing a much cleaner definition of the boundary of the L-shaped object.

Fig. 6. Point cloud data corresponding to edge points on the object boundary.
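The interior-filling and re-tracing steps of Fig. 7 can be sketched with standard binary-image morphology, as below. The paper uses the boundary detection algorithm of Ref. [27]; the scipy-based operations, the function name, and the toy example here are stand-ins for illustration. The small-connected-component filtering mentioned in the previous section could be handled similarly with connected-component labeling (e.g., scipy.ndimage.label).

```python
import numpy as np
from scipy import ndimage

def clean_object_boundary(boundary_mask):
    """Shrink a detected object boundary inward by one layer of pixels.

    boundary_mask: 2-D bool array, True on the originally detected (noisy)
    boundary. Mirrors the steps of Fig. 7: fill the interior, discard the
    original boundary pixels, then re-trace the boundary of what remains.
    """
    filled = ndimage.binary_fill_holes(boundary_mask)        # boundary + interior
    interior = filled & ~boundary_mask                       # drop the original boundary pixels
    # New boundary = interior pixels that touch the removed outer layer.
    return interior & ~ndimage.binary_erosion(interior)

# Toy example: a closed 1-pixel-thick rectangular boundary.
mask = np.zeros((12, 12), dtype=bool)
mask[2, 2:10] = mask[9, 2:10] = mask[2:10, 2] = mask[2:10, 9] = True
print(clean_object_boundary(mask).sum(), "pixels on the cleaned boundary")
```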

5. Estimation of edge unit vectors and plane normal vector

Using the edge points on the newly identified object boundary, the next step is to estimate edge unit vectors that will be employed to form a right-handed triad, which in turn defines the orientation of the object. Although it was assumed that the object has straight edges, we still do not know how many edges the object boundary contains, or their lengths and orientations. It is also unknown what the shape of the object is, although we assumed that the object contains flat surfaces. Depending on the orientation of the object, the lines may run parallel to the sensor X or Y axis (e.g., Case 0 deg in Fig. 5) or may be inclined (e.g., Case −45 deg in Fig. 5), exhibiting jagged lines.

Fig. 7. Boundary cleaning to improve the definition of object boundary.
Fig. 8. Point cloud data corresponding to the new object boundary.

5.1. Hough transform for edge unit vector estimation

The Hough transform provides a general framework to perform line fitting tasks [27]. The process begins with the points located on the object boundary. For example, the 145 object boundary points (or pixels) are shown in Fig. 9 (also the right plot in Fig. 7) for Case 0 deg. The Hough transform is applied to each of the 145 boundary points to compute the ρ values, as θ is varied between −90 deg and +90 deg, that satisfy the line equation

ρ = x cos θ + y sin θ    (1)

where the x and y pair represents the pixel location (column and row indices) of each boundary point. The origin of the x and y coordinates is located at the upper left corner of the image shown in Fig. 9. The definitions of the θ and ρ pair and the corresponding line are depicted in Fig. 10. If points with different x and y values produce the same θ and ρ pair, those points belong to the same line. Considering the diagonal distance of the rectangle shown in Fig. 9, the range of ρ values can be anywhere between −84 and +84 pixels. The computed θ and ρ pairs are accumulated into corresponding bins, consisting of 1 deg increments for θ and 2 pixel increments for ρ, totaling 180 × 84 bins. The number of accumulations in a bin indicates the number of pixels belonging to the corresponding line: the larger the number of accumulations, the longer the corresponding line. Fig. 11 shows the results of the Hough transform, and the bins with the six largest accumulations are marked with square boxes. Table 1 summarizes the six bins with the largest accumulations and their locations in Fig. 11. The pixel locations (column and row indices) for the largest bins, and thus the longest edges, are now known and can be plotted as depicted in Fig. 12 for Case 0 deg and Case −45 deg. For plotting, a circle (o) and a cross (x) were used to mark the starting and ending points of the edges. Note that the edges are properly defined even for the jagged edges using the Hough transform. The two longest edges that are not parallel are chosen to define the edge unit vectors for attitude definition of the object, as marked with red boundaries in Fig. 12.

Fig. 9. Object boundary points for Case 0 deg.
Fig. 10. Definition of the θ and ρ pair and the corresponding line in the Hough transform.
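As a concrete illustration of the accumulation just described, the sketch below builds a (θ, ρ) accumulator with 1 deg and 2 pixel bins and returns the strongest bins, i.e., the longest straight edges. The function name, the exact binning bookkeeping, and the toy data are assumptions for illustration rather than the implementation used for the results in this paper.

```python
import numpy as np

def hough_longest_lines(boundary_pixels, rho_bin=2.0, n_lines=6):
    """Vote (theta, rho) bins for boundary pixels per Eq. (1) and return the
    n_lines strongest bins. boundary_pixels: (N, 2) array of (x, y) =
    (column, row) pixel indices."""
    x, y = boundary_pixels[:, 0], boundary_pixels[:, 1]
    thetas = np.deg2rad(np.arange(-90, 90))                              # 1-deg bins
    rho = np.outer(x, np.cos(thetas)) + np.outer(y, np.sin(thetas))      # Eq. (1), shape (N, 180)
    rho_max = np.abs(rho).max() + rho_bin
    n_rho = int(np.ceil(2.0 * rho_max / rho_bin))
    acc = np.zeros((len(thetas), n_rho), dtype=int)
    rho_idx = ((rho + rho_max) / rho_bin).astype(int)
    for ti in range(len(thetas)):
        np.add.at(acc[ti], rho_idx[:, ti], 1)                            # accumulate votes
    top = np.argsort(acc, axis=None)[::-1][:n_lines]                     # strongest bins = longest edges
    t_i, r_i = np.unravel_index(top, acc.shape)
    return [(np.rad2deg(thetas[t]), (r + 0.5) * rho_bin - rho_max, int(acc[t, r]))
            for t, r in zip(t_i, r_i)]

# Toy example: a 30-pixel horizontal edge at row y = 14 plus a short vertical edge.
pts = np.array([[c, 14] for c in range(30)] + [[5, r] for r in range(10)])
for theta_deg, rho_val, votes in hough_longest_lines(pts, n_lines=2):
    print(f"theta = {theta_deg:6.1f} deg, rho = {rho_val:6.1f} px, votes = {votes}")
```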

Fig. 11. Results of the Hough transform for Case 0 deg.

Table 1. Summary of the six largest accumulations of the Hough transform for Case 0 deg.

Total accumulations   Bin location θ (deg)   Bin location ρ (pixels)
41                    89                     42
36                    −88                    −40
35                    −1                     14
29                    −85                    −38
28                    −4                     12
28                    4                      16

Fig. 12. Six longest edges identified using the Hough transform for Case 0 deg and Case −45 deg.
Fig. 13. Edge unit vector estimation process [29].

In order to estimate the edge unit vectors, the procedure in Ref. [29] is employed. First, the points (or pixels) that belong to the edges chosen for pose estimation in Fig. 12 (2-D data) are mapped into the corresponding point cloud data (3-D data). The point cloud data are then used to fit a 3-D line and thus define an edge unit vector. As shown in Fig. 13, for edge j that has n_j points, the mean range vector is first defined as

r_j^m = (1/n_j) Σ_{k=1}^{n_j} r_jk    (2)

where r_jk is the range vector corresponding to the kth point of edge j. Then the relative range vectors with respect to the mean range vector are computed as

p_jk = r_jk − r_j^m,  for k = 1, 2, …, n_j    (3)

The mean relative position vector and thus the edge unit vector are computed respectively as

p_j^m = (1/n_j) Σ_{k=1}^{n_j} p_jk    (4)

ê_j = p_j^m / ||p_j^m||    (5)

Further details of the edge unit vector estimation are available in Ref. [29]. Examples of estimated edge vectors are depicted in Fig. 14 for Case 0 deg and Case −45 deg. The edge points are shown in red and the edge vectors are depicted with green lines.

Fig. 14. Estimated edge vectors for Case 0 deg and Case −45 deg.
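A minimal numerical sketch of the 3-D line fit follows. It centers the edge points as in Eqs. (2)–(3) and takes the dominant singular vector of the centered points as the edge direction, a standard 3-D line-fitting choice used here as a stand-in for the detailed procedure of Ref. [29]; the function name and the synthetic data are assumptions.

```python
import numpy as np

def edge_unit_vector(edge_points):
    """Fit a 3-D line direction to the point cloud samples of one edge.

    edge_points: (n_j, 3) array of range vectors r_jk mapped from the edge
    pixels. The direction is the dominant right singular vector of the
    mean-centered points (sign is arbitrary)."""
    r_mean = edge_points.mean(axis=0)                  # Eq. (2): mean range vector
    p = edge_points - r_mean                           # Eq. (3): relative range vectors
    _, _, vt = np.linalg.svd(p, full_matrices=False)   # principal direction of the edge
    e_hat = vt[0]
    return e_hat / np.linalg.norm(e_hat)

# Toy example: noisy samples along the direction (1, 1, 0)/sqrt(2).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.3, 40)
pts = np.outer(t, [1.0, 1.0, 0.0]) + 0.001 * rng.standard_normal((40, 3))
print(edge_unit_vector(pts))   # approximately +/- [0.707, 0.707, 0.0]
```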

5.2. Plane normal vector estimation

Alternatively, a plane-fitting process can be employed using the interior points of the object boundary to estimate a plane normal unit vector to help improve the pose estimation accuracy. As depicted in Fig. 15, the plane P that lies on a set of point cloud data can be defined by a unit vector p̂ which is perpendicular to the plane. In order to define the unit vector, the homogeneous coordinate approach [28] is employed. In the homogeneous coordinates, the plane P is defined as

P = {a  b  c  d}    (6)

where the first three elements define the outward normal vector of the plane and the last element d is related to its distance from the origin of the coordinate frame. A range vector of a point cloud datum in the sensor frame is defined as

v = R_X i + R_Y j + R_Z k    (7)

and can also be written in the homogeneous coordinates as

v = {R_X  R_Y  R_Z  1}^T    (8)

Since the end point of the range vector v lies on the plane P as shown in Fig. 15, the product Pv satisfies

P v = (R_X a + R_Y b + R_Z c) + d = 0    (9)

Dividing the above equation by −d and rearranging yields

R_X (a/−d) + R_Y (b/−d) + R_Z (c/−d) = 1    (10)

or

R_X â + R_Y b̂ + R_Z ĉ = 1    (11)

Define n point cloud data that reside on the plane and the corresponding range vectors as

v_k = R_Xk i + R_Yk j + R_Zk k,  k = 1, 2, …, n    (12)

The three unknowns â, b̂, and ĉ in (11) can be solved using a least-squares formulation as

p = A^+ y    (13)

where

A = [R_X1  R_Y1  R_Z1; R_X2  R_Y2  R_Z2; …; R_Xn  R_Yn  R_Zn],  p = {â  b̂  ĉ}^T,  y = {1  1  …  1}^T

and the superscript + in (13) represents a pseudo-inverse. The vector is then normalized to obtain a unit vector p̂ that defines the surface normal vector of the plane as

p̂ = (â/m) i + (b̂/m) j + (ĉ/m) k,  where m = (â^2 + b̂^2 + ĉ^2)^(1/2)    (14)

Examples of plane fitting results are depicted in Fig. 16 for Case 0 deg and Case −45 deg. The fitted planes using the boundary interior points are shown with red meshes.

Fig. 15. Geometric descriptions of a range vector, plane, and surface normal vector.
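A compact numerical sketch of the least-squares plane fit of Eqs. (11)–(14) is given below; the function name and the synthetic data are illustrative assumptions. Like the formulation above, it assumes the plane does not pass through the sensor origin (d ≠ 0).

```python
import numpy as np

def plane_normal(interior_points):
    """Surface normal unit vector from a least-squares plane fit, Eqs. (11)-(14).

    interior_points: (n, 3) array of range vectors [R_X, R_Y, R_Z] for the
    points interior to the object boundary."""
    A = np.asarray(interior_points, dtype=float)   # n x 3 matrix of Eq. (13)
    y = np.ones(A.shape[0])                        # right-hand side of Eq. (11)
    p = np.linalg.pinv(A) @ y                      # p = A+ y, Eq. (13)
    return p / np.linalg.norm(p)                   # Eq. (14): normalize to a unit vector

# Toy example: points near the plane Z = 0.8 m (normal along +Z).
rng = np.random.default_rng(1)
xy = rng.uniform(-0.1, 0.1, size=(200, 2))
pts = np.column_stack([xy, 0.8 + 0.0005 * rng.standard_normal(200)])
print(plane_normal(pts))   # approximately [0, 0, 1]
```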


Fig. 16. Plane estimation results for Case 0 deg (left) and Case −45 deg (right).

6. Pose estimation

The final step of the model-free pose estimation process is to conduct relative attitude and position determination of the object with respect to the sensor frame, as depicted in Fig. 17.

Fig. 17. Object and sensor frames for pose estimation.

6.1. Relative attitude estimation

Using the estimated edge vectors and plane normal vector, a triad representing an object frame can be constructed. Two non-parallel unit vectors are required to define the triad. Two edge vectors or, alternatively, one edge vector and the plane normal vector can be employed to form a direction cosine matrix (DCM) that represents the orientation of the object frame (X_O − Y_O − Z_O) with respect to the sensor frame (X − Y − Z) as

{X_O  Y_O  Z_O}^T = [x̂1  x̂2  x̂3; ŷ1  ŷ2  ŷ3; ẑ1  ẑ2  ẑ3] {X  Y  Z}^T = C_SO {X  Y  Z}^T    (15)

If two edge unit vectors (ê_1 and ê_2) are employed, the elements in the DCM (C_SO) are defined, using one of the two unit vectors (ê_1) as the baseline vector, as

x̂_O ≜ ê_1 ≜ {x̂1  x̂2  x̂3}
ẑ_O ≜ (ê_1 × ê_2) / ||ê_1 × ê_2|| ≜ {ẑ1  ẑ2  ẑ3}
ŷ_O ≜ ẑ_O × x̂_O ≜ {ŷ1  ŷ2  ŷ3}    (16)

If one edge unit vector (ê_1) and the plane normal vector (p̂) are employed, the elements in the DCM are defined instead using the plane normal vector as the baseline vector, which tends to be more accurate, as

ẑ_O ≜ p̂ ≜ {ẑ1  ẑ2  ẑ3}
ŷ_O ≜ (p̂ × ê_1) / ||p̂ × ê_1|| ≜ {ŷ1  ŷ2  ŷ3}
x̂_O ≜ ŷ_O × ẑ_O ≜ {x̂1  x̂2  x̂3}    (17)

This approach allows direct estimation of the initial pose of the object without a model of the object, enabling model-free attitude estimation. However, since the estimated attitude depends on the edge vectors employed, the same edges will have to be identified and tracked for subsequent estimates in order to maintain the consistency of attitude determination. To help maintain consistency, the longest edge identified in the Hough transform is selected as the baseline edge vector and the next longest edge, which is not parallel to the baseline vector, is selected as the second edge vector. As depicted in Fig. 12 for Case 0 deg and Case −45 deg, this selection strategy works well for this particular measurement data set. However, a more general capability of identifying and tracking the same edges will need to be developed in the future to make the process more robust. Once the DCM is defined, it can be easily converted into Euler angles or quaternions as needed. For example, using the 3-2-1 (ψ − θ − φ) Euler angle sequence, the DCM and thus the Euler angles can be defined as

C = [  cos θ cos ψ                          cos θ sin ψ                          −sin θ
      −cos φ sin ψ + sin φ sin θ cos ψ      cos φ cos ψ + sin φ sin θ sin ψ      sin φ cos θ
       sin φ sin ψ + cos φ sin θ cos ψ     −sin φ cos ψ + cos φ sin θ sin ψ      cos φ cos θ ]    (18)

ψ = tan⁻¹[C(1,2) / C(1,1)]    (19)

θ = −sin⁻¹ C(1,3)    (20)

φ = tan⁻¹[C(2,3) / C(3,3)]    (21)

Results of the relative attitude estimation for the 8 test cases shown in Fig. 4 are summarized in Tables 2 and 3. For the test cases, the plate holding the object was tilted by 5 deg about the sensor Y axis (θ) and −5 deg about X (φ), prior to spinning about the Z axis (ψ). Results were obtained using two different sets of unit vectors: Table 2 shows the estimated attitude using two edge unit vectors and Table 3 using the plane normal unit vector and one edge unit vector. The φ and θ angles are more accurate when the plane normal vector is employed; since the entire set of points interior to the object boundary is used for plane fitting, a more reliable and accurate estimate is provided for attitude determination. The accuracy of the ψ angle is comparable when the plane normal vector is employed since it is closely aligned with the spin axis.

Table 2. Object relative attitude estimation results using two edge unit vectors.

Nominal yaw angle setting (deg)   Estimated attitude φ, θ, ψ (deg)   Estimation errors φ, θ, ψ (deg)
0        −8.65,  8.50,    0.38       −3.65,  3.50,  0.38
+45      −8.43,  5.17,   46.6        −3.43,  0.17,  1.60
−45      −5.29,  7.24,  −44.8        −0.29,  2.24,  0.20
+90      −7.55,  7.01,   90.6        −2.55,  2.01,  0.60
−90      −8.76, 10.35,  −89.1        −3.76,  5.35,  0.90
+135     −7.89,  6.38,  135.6        −2.89,  2.38,  0.60
−135    −10.29,  8.31, −134.3        −5.29,  3.31,  0.70
180      −9.32,  7.62,  179.6        −4.32,  2.62, −0.40
Mean of errors                       −3.27,  2.70,  0.57
Std (1σ) of errors                    1.47,  1.48,  0.57

Table 3. Object relative attitude estimation results using plane normal and edge unit vectors.

Nominal yaw angle setting (deg)   Estimated attitude φ, θ, ψ (deg)   Estimation errors φ, θ, ψ (deg)
0        −4.54,  5.15,    0.65        0.46,  0.15,  0.65
+45      −4.67,  4.83,   46.6         0.33, −0.17,  1.60
−45      −4.90,  5.66,  −44.7         0.10,  0.66,  0.30
+90      −6.43,  3.74,   91.0        −1.43, −1.26,  1.00
−90      −6.50,  5.99,  −88.6        −1.50,  0.99,  1.40
+135     −6.85,  4.30,  135.8        −1.85, −0.70,  0.80
−135     −5.61,  5.32, −134.0        −0.61,  0.32,  1.00
180      −5.51,  4.77,  179.9        −0.51, −0.23, −0.10
Mean of errors                       −0.63, −0.03,  0.83
Std (1σ) of errors                    0.89,  0.73,  0.55

6.2. Relative position estimation

In order to provide estimates of the object location, the area enclosed by the object boundary and its centroid are computed. The area is computed by forming triangular areas using point cloud range vectors inside the object boundary, as depicted in Fig. 18. The triangular area formed by three adjacent range vectors (r_1, r_2, r_3) is given as

A = (1/2) ||r_12 × r_13|| = (1/2) ||(r_2 − r_1) × (r_3 − r_1)||    (22)

or

A = (1/2) (A_X^2 + A_Y^2 + A_Z^2)^(1/2)    (23)

where

A_X = Y_1(Z_2 − Z_3) + Y_2(Z_3 − Z_1) + Y_3(Z_1 − Z_2)
A_Y = Z_1(X_2 − X_3) + Z_2(X_3 − X_1) + Z_3(X_1 − X_2)
A_Z = X_1(Y_2 − Y_3) + X_2(Y_3 − Y_1) + X_3(Y_1 − Y_2)

Once the area is estimated, the centroid of the object (X_C, Y_C, Z_C) can be computed using

X_C = Σ_{i=1}^{n_A} A_i X_i / Σ_{i=1}^{n_A} A_i,   Y_C = Σ_{i=1}^{n_A} A_i Y_i / Σ_{i=1}^{n_A} A_i,   Z_C = Σ_{i=1}^{n_A} A_i Z_i / Σ_{i=1}^{n_A} A_i    (24)

where A_i is the area of the ith triangular element, n_A is the number of elements, and (X_i, Y_i, Z_i) is the centroid of the ith triangular element. Fig. 19 shows the centroid of the object (green dot) for Case 0 deg. The centroid serves as the position of the object in the sensor frame.

Results of the relative position estimation for the 8 test cases shown in Fig. 4 are summarized in Table 4, along with the object area estimates. The data in Table 4 are plotted in Fig. 20 to show the estimated centroid locations as the object spins. The center of the centroids is located at X = 0.011 m and Y = 0.093 m, representing the center of rotation of the object. Based on the experimental setting, the center of rotation is supposed to be located at X = 0 m and Y = 0.1 m, producing estimation errors of X = 0.011 m and Y = −0.007 m.
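A short sketch of Eqs. (22)–(24) over a set of triangular elements follows; the function name and the two-triangle plate used as an example are illustrative assumptions.

```python
import numpy as np

def area_and_centroid(triangles):
    """Object surface area and centroid from Eqs. (22)-(24).

    triangles: (n_A, 3, 3) array; triangles[i] holds the three adjacent range
    vectors r1, r2, r3 (sensor frame) forming the i-th triangular element."""
    r1, r2, r3 = triangles[:, 0], triangles[:, 1], triangles[:, 2]
    cross = np.cross(r2 - r1, r3 - r1)                 # Eq. (22)
    areas = 0.5 * np.linalg.norm(cross, axis=1)        # Eq. (23)
    centroids = (r1 + r2 + r3) / 3.0                   # per-element centroid (X_i, Y_i, Z_i)
    total_area = areas.sum()
    centroid = (areas[:, None] * centroids).sum(axis=0) / total_area   # Eq. (24)
    return total_area, centroid

# Toy example: a 0.2 m x 0.2 m square plate at Z = 0.8 m split into two triangles.
plate = np.array([[[0.0, 0.0, 0.8], [0.2, 0.0, 0.8], [0.2, 0.2, 0.8]],
                  [[0.0, 0.0, 0.8], [0.2, 0.2, 0.8], [0.0, 0.2, 0.8]]])
area, cen = area_and_centroid(plate)
print(area, cen)   # 0.04 m^2, centroid near [0.1, 0.1, 0.8]
```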

Fig. 18. Area computation [22].
Fig. 19. Estimated centroid (green dot) of the object for Case 0 deg. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Fig. 20. Estimated locations of centroids.

Table 4. Object centroid position and surface area estimation test results.

Nominal yaw angle setting (deg)   X (m)    Y (m)    Z (m)    Area (m²)
0        0.012    0.080    0.802    0.037
+45      0.020    0.087    0.800    0.033
−45      0.007    0.084    0.800    0.032
+90      0.022    0.095    0.801    0.036
−90      0.000    0.094    0.802    0.036
+135     0.017    0.103    0.803    0.032
−135     0.004    0.099    0.803    0.033
180      0.008    0.101    0.802    0.035


7. Pose estimation accuracy assessment using simulated point cloud data

Although the pose estimation errors presented in the previous section seem reasonably small, due to errors inherent to the experimental setup and point cloud measurements, the accuracy of the pose estimation algorithm itself is still not well characterized. Simulated point cloud data [22], which are free from measurement errors, are therefore employed here to assess the performance of the pose estimation algorithm. Fig. 21 depicts the setup and coordinate frames employed for the point cloud simulation and Table 5 summarizes the test cases for the simulated point cloud images shown in Fig. 22. As depicted in Fig. 21, translations ΔXL, ΔYL, and ΔZL are achieved by moving the 3-axis linear stage and rotations are provided by the elbow (ΔθE) and shoulder (ΔθS) joints with respect to the XE and XL axes, respectively. Further details of the simulation settings and processes can be found in Ref. [22].

The simulated images in Fig. 22 were processed using the pose estimation algorithm developed in this paper to estimate the relative attitude and position of the object with respect to the sensor. Results are summarized in Table 6 for relative attitude and in Table 7 for relative position. For attitude estimation, two different sets of unit vectors (the plane normal vector and an edge vector, or two edge vectors) were employed. Unlike the results obtained using the test data in the previous section, the simulated data produce almost identical results, indicating that the choice of unit vectors for attitude determination is not significant when the point cloud data are error free. The results show that, for the simulated test cases employed, less than 0.08 deg of attitude estimation error can be achieved using the model-free pose estimation algorithm presented in this paper. Considering that the instantaneous FOV of the sensor detector is 0.39 deg × 0.39 deg and the object is located about 1 m from the sensor, the corresponding resolution of the point cloud data is 0.007 m. The position estimation errors in Table 7 mostly stay below the sensor resolution. Comparing the simulated pose estimation results against those obtained using the point cloud test data in Tables 2 and 3, a significant difference is noticed: attitude estimation errors are dominated by measurement errors while position estimation errors are controlled by the measurement resolution.

Fig. 21. Set-up and coordinate frames employed for the point cloud simulation [22].
Fig. 22. Simulated point cloud images.

Table 5. Summary of simulated pose estimation test cases (movement of the test object from the initial position).

Test case ID   Description                           ΔXL (m)   ΔYL (m)   ΔZL (m)   ΔθE (deg)   ΔθS (deg)
A              Initial configuration                 0         0         0         0           0
B              Lateral translation                   0.1       −0.1      0         0           0
C              Axial translation                     0         0         0.2       0           0
D              Shoulder joint rotation               0         0         0         0           8
E              Elbow joint rotation                  0         0         0         12          0
F              Combined translation and rotation     0.1       −0.1      0         0           8

Table 6. Attitude estimation results using simulated point cloud data.

Using the plane normal and one edge vector:
Test case   Truth θX, θY, θZ (deg)   Estimates θX, θY, θZ (deg)   Errors ΔθX, ΔθY, ΔθZ (deg)
A      0, 0, 0       0.00, 0.00, 0.00        0.00, 0.00, 0.00
B      0, 0, 0       0.00, 0.00, 0.00        0.00, 0.00, 0.00
C      0, 0, 0       0.00, 0.00, 0.00        0.00, 0.00, 0.00
D      0, 0, −8      0.00, 0.00, −8.05       0.00, 0.00, −0.05
E      −12, 0, 0     −12.00, 0.00, 0.00      0.00, 0.00, 0.00
F      0, 0, −8      −0.01, 0.01, −7.92      −0.01, 0.01, 0.08

Using two edge vectors:
Test case   Truth θX, θY, θZ (deg)   Estimates θX, θY, θZ (deg)   Errors ΔθX, ΔθY, ΔθZ (deg)
A      0, 0, 0       0.00, 0.00, 0.00        0.00, 0.00, 0.00
B      0, 0, 0       0.00, 0.00, 0.00        0.00, 0.00, 0.00
C      0, 0, 0       0.00, 0.00, 0.00        0.00, 0.00, 0.00
D      0, 0, −8      0.00, 0.00, −8.05       0.00, 0.00, −0.05
E      −12, 0, 0     −12.00, 0.00, 0.00      0.00, 0.00, 0.00
F      0, 0, −8      0.00, 0.00, −7.92       0.00, 0.00, 0.08

Table 7. Position vector estimation results using simulated point cloud data.

Test case   Truth position X, Y, Z (m)   Estimates X, Y, Z (m)      Errors ΔX, ΔY, ΔZ (m)
A      −0.056, −0.040, 1.022    −0.067, −0.037, 1.023    −0.011, 0.003, 0.001
B      −0.156, −0.140, 1.022    −0.158, −0.143, 1.023    −0.002, −0.003, 0.001
C      −0.056, −0.040, 0.822    −0.054, −0.045, 0.823    0.002, −0.005, 0.001
D      −0.030, −0.032, 1.022    −0.032, −0.034, 1.023    −0.002, −0.002, 0.001
E      −0.056, −0.063, 0.986    −0.063, −0.060, 0.986    −0.007, 0.003, 0.000
F      −0.130, −0.132, 1.022    −0.133, −0.134, 1.023    −0.003, −0.002, 0.001

8. Conclusions

A process for conducting model-free pose (relative attitude and position) estimation using point cloud data was presented to support envisioned autonomous satellite servicing missions. Instead of relying on a three-dimensional geometric model of the satellite to be serviced, the model-free pose estimation presented in this paper employs object features detected in situ using point cloud data. Although the process shares existing pose estimation techniques such as edge detection and object boundary estimation, some significant improvements were reported toward developing a fully autonomous, model-free pose estimation approach.

Due to the discrete nature of point cloud data, when object edges are not aligned with the sensor frame, straight edges appear jagged, making it difficult to differentiate edges and corners, especially when a line tracing algorithm is employed to define the object boundary. The Hough transform for straight lines, which allows a robust line fitting process for jagged edges, was employed to help resolve this issue. Sensor light tends to reflect multiple times near edges, so the measured point cloud data there appear scattered and can contain spurious points, leading to inaccuracies in defining edge vectors. A process of removing these spurious points along the edges was introduced to improve the accuracy of the edge vectors. The new algorithm further enhances model-free pose estimation accuracy by supplementing edge definition with the orientation of the object's surface plane. As more point cloud data typically reside within a surface plane, a plane normal vector provides a more robust and accurate estimate of object orientation. Studies using both experimental and simulated point cloud data sets were conducted to examine the performance of the model-free pose estimation algorithm. They indicate that position estimation accuracy comparable to the point cloud data resolution is achievable. The attitude estimation accuracy is more sensitive to point cloud measurement errors and also depends on how the unit vectors are selected to define the triad for the object frame; employing a plane normal vector is definitely advantageous in improving attitude estimation accuracy.

There could be many objects of various shapes and sizes in the sensor field of view. In order to perform pose estimation without any a priori knowledge of the scene, the algorithm has to be able to detect and classify objects composed of various geometries such as lines, curves, flat surfaces, and curved surfaces. Although that will be the ultimate goal to enable non-cooperative, model-free pose estimation, in this paper we assumed that there was a single object composed of straight lines and flat surfaces within the sensor field of view, although a specific geometric model of the object was not used for pose estimation. Further integration of object detection capability and extension of the Hough transform to include curves as well as straight lines would be desired to develop a more capable model-free pose estimation process.



Acknowledgments

The authors would like to thank ENS Samuel Lacinski, USN, for collecting the LiDAR measurement data and Mr. John Van Eepoel at NASA Goddard Space Flight Center for providing a summer internship for Midshipman Charles Oestreich in support of this research.


References

[1] D. Zimpfer, P. Kachmar, T. Seamus, Autonomous rendezvous, capture and in-space assembly: past, present, and future, 1st Space Exploration Conference, Orlando, FL, January 2005.
[2] On-orbit Satellite Servicing Study Project Report, NASA Goddard Space Flight Center, October 2010.
[3] A. Flores-Abad, O. Ma, K. Pham, S. Ulrich, A review of space robotics technologies for on-orbit servicing, Prog. Aerosp. Sci. 68 (2014) 1–26.
[4] T. Weismuller, M. Leinz, GN&C technology demonstrated by the Orbital Express autonomous rendezvous and capture sensor system, 29th Annual AAS Guidance and Control Conference, Breckenridge, CO, February 2006, AAS Paper 06-016.
[5] A. Ogilvie, J. Allport, M. Hannah, J. Lymer, Autonomous satellite servicing using the Orbital Express demonstration manipulator system, Proc. of the 9th International Symposium on Artificial Intelligence, Robotics and Automation in Space, i-SAIRAS'08, 2008, pp. 25–29.
[6] B. Barbee, J. Carpenter, S. Heatwole, F. Markley, M. Moreau, B. Naasz, J. Van Eepoel, Guidance and Navigation for Rendezvous and Proximity Operations with a Non-cooperative Spacecraft at Geosynchronous Orbit, George H. Born Symposium, Boulder, CO, May 2010.
[7] B. Reed, The Restore-L Servicing Mission, NASA Technical Reports Server, GFSC-EDAA-TN31128, 2016.
[8] G. Roesler, Robotic Servicing of Geosynchronous Satellites (RSGS) Proposers Day, Defense Advanced Research Programs Agency (DARPA) Tactical Technology Office, May 2016.
[9] M. Strube, R. Henry, E. Skeleton, J. Van Eepoel, N. Gill, R. McKenna, Raven: an on-orbit relative navigation demonstration using International Space Station visiting vehicles, American Astronautical Society Guidance and Control Conference, Breckenridge, CO, Feb. 2015.
[10] B. Naasz, J. Van Eepoel, S. Queen, C. Southward, J. Hannah, Flight results of the HST SM4 relative navigation sensor system, American Astronautical Society Guidance and Control Conference, Breckenridge, CO, Feb. 2010.
[11] J. Galante, J. Van Eepoel, M. Strube, N. Gill, Pose measurement performance of the argon relative navigation sensor suite in simulated flight conditions, AIAA Guidance, Navigation, and Control Conference, Minneapolis, MN, Aug. 2012.
[12] P. Besl, N. McKay, A method for registration of 3-d shapes, IEEE Trans. Pattern Anal. Mach. Intell. 14 (2) (1992) 239–256.
[13] S. Ruel, T. Luu, A. Berube, Space shuttle testing of the TriDAR 3d rendezvous and docking sensor, J. Field Robot. 29 (4) (2012) 535–553.
[14] B. Cavrois, A. Vergnol, A. Donnard, et al., LIRIS demonstrator on ATV5: a step beyond for European non cooperative navigation system, AIAA SciTech Forum, Kissimmee, FL, Jan. 2015.
[15] A. Masson, C. Haskamp, I. Ahrns, et al., Airbus DS vision based navigation solutions tested on LIRIS experiment data, Proc. 7th European Conference on Space Debris, Darmstadt, Germany, April 2017.
[16] R. Opromolla, G. Fasano, G. Rufino, M. Grassi, Pose estimation for spacecraft relative navigation using model-based algorithms, IEEE Trans. Aerosp. Electron. Syst. 53 (1) (2017) 431–447.
[17] H. Martinez, G. Giorgi, B. Eissfeller, Pose estimation and tracking of non-cooperative rocket bodies using time-of-flight cameras, Acta Astronaut. 139 (2017) 165–175.
[18] X. Du, B. Liang, W. Xu, Y. Qiu, Pose measurement of large non-cooperative satellite based on collaborative cameras, Acta Astronaut. 68 (2011) 2047–2065.
[19] N. Oumer, G. Panin, 3D point tracking and pose estimation of a space object using stereo images, 21st International Conference on Pattern Recognition, Tsukuba, Japan, Nov. 2012.
[20] L. Regoli, K. Ravandoor, M. Schmidt, K. Schilling, On-line robust pose estimation for rendezvous and docking in space using photonic mixer devices, Acta Astronaut. 96 (2014) 159–165.
[21] Y. He, B. Liang, J. He, S. Li, Non-cooperative spacecraft pose tracking based on point cloud feature, Acta Astronaut. 139 (2017) 213–221.
[22] A. Suqi, T. Lim, Detection and identification of objects using point cloud data for pose estimation, AIAA Guidance, Navigation and Control Conference, SciTech, San Diego, CA, Jan. 2016.
[23] R. Pradhan, et al., Contour line tracing algorithm for digital topographic map, Int. J. Image Process. 4 (2) (2010) 156–163.
[24] A. Hassanein, M. Sherien, M. Sameer, M. Ragab, A survey on Hough transform, theory, techniques, and applications, Int. J. Comput. Sci. Issues 12 (2) (2015) 139–156.
[25] T. Lim, A. Toombs, Pose estimation using a flash lidar, AIAA Guidance, Navigation, and Control Conference, SciTech, National Harbor, MD, Jan. 2014, AIAA Paper 2014-0089.
[26] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (6) (1986) 679–698.
[27] R. Gonzalez, R. Woods, S. Eddins, Digital Image Processing Using Matlab, second ed., Gatesmark Publishing, 2009, chapters 11 and 12.
[28] T. Lim, Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation, Acta Astronaut. 111 (2015) 61–76.
[29] T. Lim, P. Ramos, M. O'Dowd, Edge detection using point cloud data for non-cooperative pose estimation, J. Spacecr. Rocket. 54 (2) (2017) 500–505.

Dr. Tae W. Lim is an Associate Professor of Aerospace Engineering at the U.S. Naval Academy (USNA) in Annapolis, MD. Prior to joining the USNA faculty, he worked for the Naval Center for Space Technology at the U.S. Naval Research Laboratory in Washington, DC for 12 years and as an Associate Professor of Aerospace Engineering at the University of Kansas in Lawrence, KS for seven years. He conducts spacecraft attitude determination and control system design and analysis as well as related research and development. He holds a Ph.D. in Mechanical and Aerospace Engineering from the University of Virginia, Charlottesville, VA.

Charles E. Oestreich is currently a 1/C midshipman studying aerospace engineering at the U.S. Naval Academy (USNA) in Annapolis, MD. He had summer internship experiences at NASA Goddard Space Flight Center in Greenbelt, MD within the Attitude Control Systems Engineering Branch and at the Space Dynamics Laboratory in Logan, UT as part of the Small Satellites Division. His research interests lie in spacecraft guidance, navigation, and control advancements, especially with regard to vision-based relative navigation. He will continue his education as an MS student within MIT's Department of Aeronautics and Astronautics in the fall of 2019.
