Proceedings, 18th IFAC Symposium on System Identification (SYSID), July 9-11, 2018, Stockholm, Sweden
Available online at www.sciencedirect.com
ScienceDirect
IFAC PapersOnLine 51-15 (2018) 43–48
Experimentally Validated Extended Kalman Filter for UAV State Estimation Using Low-Cost Sensors

S.P.H. Driessen ∗, N.H.J. Janssen ∗, L. Wang ∗∗, J.L. Palmer ∗∗∗, H. Nijmeijer ∗
∗ Eindhoven University of Technology, Den Dolech 2, 5600 MB Eindhoven, The Netherlands (e-mail: [email protected])
∗∗ RMIT University, 124 La Trobe Street, Melbourne VIC 3000, Australia (e-mail: [email protected])
∗∗∗ DST Group, 506 Lorimer Street, Port Melbourne VIC 3207, Australia (e-mail: [email protected])
[email protected]) Abstract: Visually Visually based based velocity velocity and and position position estimations estimations are are often often used used to to reduce reduce or or remove remove Abstract: Abstract: Visually based velocity and position estimations are often used to reduce or remove the dependency of an unmanned aerial vehicle (UAV) on global navigation satellite system Abstract: Visually based velocity and position estimations are often used to reduce or remove the dependency of an unmanned vehicle (UAV) on global navigation satellite system Abstract: Visually based velocityinaerial and position estimations are often used to reduce or remove the dependency of be an unmanned aerial vehicle (UAV) on unavailable global navigation satellite system signals, which may unreliable urban canyons and are indoors. In this paper, the dependency of an unmanned aerial vehicle (UAV) on global navigation satellite system signals, which in urban canyons and unavailable indoors. In this paper, the dependency of be an unreliable unmanned vehicle (UAV) onfilter global navigation satellite system signals, which may may be unreliable inaerial urban canyons and are are unavailable indoors. Inthe thisvelocity, paper, a sensor-fusion algorithm based on an extended Kalman is developed for signals, which may be unreliable in urban canyons and are unavailable indoors. In this paper, a sensor-fusion algorithm based on an extended Kalman is developed for the velocity, signals, which may be unreliable inofurban canyons and arefilter unavailable this paper, a sensor-fusion algorithm based an extended filter is for position, and attitude attitude estimation a UAV UAV usingKalman low-cost sensors. Inindoors. particular anvelocity, inertial a sensor-fusion algorithm based on on an extended Kalman filter is developed developed forInthe the velocity, position, and estimation of a using low-cost sensors. In particular an inertial a sensor-fusion algorithm based on an extended Kalman filter is developed for the velocity, position, and attitude estimation of a UAV using low-cost sensors. In particular an inertial measurement unit (IMU) and an optical-flow sensor that includes a sonar module and an position, and unit attitude estimation of a UAV using low-cost sensors. aIn sonar particular an and inertial measurement (IMU) and optical-flow sensor that includes module an position, and attitude estimation of a UAV using low-cost sensors. aaIn sonar particular an inertial measurement unit (IMU) and an an optical-flow sensor that includes sonar module and an additional gyroscope are used. The algorithm is shown experimentally to be able to handle measurement unit (IMU) and an optical-flow sensor that includes module and an additional gyroscope are The algorithm is shown experimentally to be able handle measurement unit and an includes module and an additional gyroscope are used. used. Theoptical-flow algorithm issensor shownthat experimentally to beindoor, able to tolow-light handle measurements with (IMU) different sampling rates and andis missing data, caused aby bysonar thebe additional gyroscope are used. The algorithm shown experimentally to able to handle measurements with different sampling rates missing data, caused the indoor, low-light additional gyroscope are used. The algorithm is shown experimentally to be able to handle measurements with different sampling rates and missing data, caused by the indoor, low-light conditions. 
State State estimations are compared compared to aamissing ground-truth pose history history obtainedlow-light with aa measurements with different sampling rates and data, caused by the indoor, conditions. estimations are to ground-truth pose obtained with measurements with different sampling rates and data, caused by the indoor, low-light conditions. State estimations are compared to aamissing ground-truth pose history obtained with motion-capture system to show the influence of the optical-flow and sonar measurements on itsaa conditions. State estimations are compared to ground-truth pose history obtained with motion-capture system to the influence of optical-flow and sonar measurements on conditions. State estimations are to the a ground-truth pose history obtained with motion-capture system to show show thecompared influence results of the optical-flow and sonar measurements on its itsa performance. Additionally, the experimental demonstrate that the velocity and attitude motion-capture system to show the influence of the optical-flow and sonar measurements on its performance. Additionally, the experimental results demonstrate that the velocity and attitude motion-capture system to show the influence of the optical-flow and sonar measurements on its performance. Additionally, the experimental experimental results demonstrate demonstrate that the the velocityenvironments. and attitude attitude can be be estimated estimated without drift, drift, despite the the magnetic magnetic distortions typical of indoor performance. Additionally, the results that velocity and typical of indoor environments. can without despite distortions performance. Additionally, the experimental results demonstrate that the velocity and attitude can be estimated without drift, despite the magnetic distortions typical of indoor environments. can be estimated without drift, despite the magnetic distortions typical of indoor environments. © 2018, IFAC (International Federation of Automatic Control) Hosting by Elsevier All environments. rights reserved. can be estimated without drift, despite the magnetic distortions typical of Ltd. indoor Keywords: Unmanned Unmanned aerial aerial vehicle, vehicle, Sensor Sensor fusion, fusion, Extended Extended Kalman Kalman filter, filter, Optical Optical flow, flow, Keywords: Keywords: Unmanned Unmanned aerial vehicle, vehicle, Sensor fusion, Extended Kalman filter, Optical Optical flow, flow, Visual-inertial state estimation, Missing data, Multi-rate sampled data. Keywords: aerial Sensor fusion, Extended Kalman filter, Visual-inertial state Missing data, Multi-rate sampled data. Keywords: Unmanned aerial vehicle, Sensor fusion, Extended Kalman Visual-inertial state estimation, estimation, Missing data, Multi-rate sampled data.filter, Optical flow, Visual-inertial state estimation, Missing data, Multi-rate sampled data. Visual-inertial state estimation, Missing data, Multi-rate sampled data. 1. INTRODUCTION INTRODUCTION wart, 2011). 2011). Within Within these these sensors, sensors, sequential sequential images images are are 1. wart, 1. wart, 2011). Within these sensors, sequential images are used to calculate the optical flow. To relate optical-flow 1. INTRODUCTION INTRODUCTION wart, 2011). Within these sensors, sequential images are used to calculate the optical flow. To relate optical-flow 1. INTRODUCTION wart, 2011). Within sensors, sequential images the are used to calculate the optical flow. 
To relate optical-flow measurements to the the camera’s translational velocity, used to calculate thethese optical flow. To relate velocity, optical-flow Unmanned aerial vehicles (UAVs) can be used for a wide measurements to camera’s translational the used to calculate the optical flow. To relate optical-flow Unmanned aerial vehicles (UAVs) can be used for a wide measurements to the camera’s translational velocity, the scene depth must be known. It may be measured directly measurements to the camera’s translational velocity, the Unmanned aerial can be used wide range of of applications, applications, such(UAVs) as rescue rescue (P´ ka et for al.,aa2017) 2017) Unmanned aerial vehicles vehicles (UAVs) can(P´ beoollka used for wide scene depth must be known. It may be measured directly measurements to the camera’s translational velocity, the range such as et al., must be known. It be measured directly Unmanned aerial vehicles (UAVs) can(P´ beoollka used for a2017) wide scene using, depth for example, example, sonar or LiDAR LiDAR sensors or estimated scene depth must besonar known. It may may be measured directly range of applications, such as rescue et al., (Waharte and Trigoni, 2010), visual inspection (Omari range of applications, such as rescue (P´ ka et al., 2017) using, for or sensors or estimated scene depth must be known. It may be measured directly (Waharte and Trigoni, 2010), visual inspection (Omari for sonar or sensors or range applications, such as rescue (P´ olka and et al., 2017) using, using sensor sensor fusion. The The latter approach is taken taken in Bleser Bleser using, for example, example, sonarlatter or LiDAR LiDAR sensors or estimated estimated (Waharte and 2010), visual inspection (Omari et al., al., of 2015) andTrigoni, manufacturing (Khosiawan Nielsen, (Waharte and Trigoni, 2010), visual inspection (Omari using fusion. approach is in using, for example, sonar oranLiDAR sensors or estimated et 2015) and manufacturing (Khosiawan and Nielsen, using sensor fusion. The latter approach is taken in Bleser (Waharte and Trigoni, 2010), visual inspection (Omari and Hendeby (2010), where optical-flow-based extended using sensor fusion. The latter approach is taken in Bleser et al., 2015) and manufacturing (Khosiawan and Nielsen, 2016). Knowledge of the UAV’s velocity, position, and atet al., 2015) and manufacturing (Khosiawan and Nielsen, and Hendeby (2010), where an optical-flow-based using sensor latter approach is taken extended inthrough Bleser 2016). Knowledge of the UAV’s velocity, position, and atHendeby (2010), where an extended et al., 2015) and manufacturing (Khosiawan and Nielsen, Kalman filterfusion. (EKF)The is developed developed and validated validated and Hendeby (2010), where an optical-flow-based optical-flow-based extended 2016). Knowledge of the velocity, position, and attitude is essential in these applications. On-board sensors 2016). Knowledge of these the UAV’s UAV’s velocity, position, and at- and Kalman filter (EKF) is and through and Hendeby (2010), where an optical-flow-based extended titude is essential in applications. On-board sensors Kalman filter (EKF) is developed and validated through 2016). Knowledge of the UAV’s velocity, position, and atsimulation. A real-time implementation of an optical-flowKalman filter (EKF) is developed and validated through titude is in applications. On-board generally are required required to be be inexpensive, compact,sensors light, simulation. titude is essential essential in these these applications. 
On-board sensors A real-time of an optical-flowKalman filter (EKF) is implementation developed through generally are to inexpensive, compact, light, simulation. A real-time implementation of optical-flowtitude is essential inAnthese applications. On-board sensors based EKF EKF with a similar similar approachand can validated bean found in Weiss simulation. A real-time implementation of an optical-flowgenerally are required to be inexpensive, compact, light, and low-powered. inertial measurement unit (IMU) generally are required to be inexpensive, compact, light, based with a approach can be found in Weiss simulation. A real-time implementation of an optical-flowand low-powered. An inertial measurement unit (IMU) based EKF with a similar approach can be found in Weiss generally are required to be inexpensive, compact, light, et al. (2012). LiDAR measurements are used in, for exambased EKF with a similar approach can be found in Weiss and low-powered. An inertial measurement unit (IMU) satisfies these requirements and allows for tri-axial meaand low-powered. An inertial measurement unit (IMU) et al. (2012). LiDAR measurements are used in, for based EKF a similar can be(2017). found in examWeiss satisfies these requirements and allows for tri-axial mea(2012). measurements are in, examand low-powered. An inertial measurement unit (IMU) ple,al. Yun et with al.LiDAR (2016) and approach Goppert et et al.used et al.Yun (2012). LiDAR measurements areal. used in, for for examsatisfies these requirements and allows for tri-axial measurement of the the acceleration and angular velocity. Often satisfies these requirements and allows forvelocity. tri-axial mea- et ple, et al. (2016) and Goppert (2017). et al. (2012). LiDAR measurements are used in, for examsurement of acceleration and angular Often Yun et and et (2017). satisfies requirements and forAs tri-axial meaple, Yunpaper, et al. al. (2016) (2016) and Goppert Goppert et al. al. (2017). sensor surement of the acceleration and angular velocity. Often the IMU IMUthese also features a magnetometer. magnetometer. a result result of ple, surement of the acceleration andallows angular velocity. Often In this this the low-cost low-cost PX4Flow optical-flow ple, Yunpaper, et al. (2016) and Goppert et al. (2017). sensor the also features a As a of surement of the acceleration and angular velocity. Often In the PX4Flow optical-flow the IMU also features a magnetometer. As a result of sensor noise and a slowly slowly varying sensor sensor bias, bias, combined the IMU alsoand features a magnetometer. As a combined result of In this paper, the low-cost PX4Flow optical-flow sensor (OFS) is used. This is an open-source based software software In this paper, the low-cost PX4Flow optical-flow sensor sensor noise a varying the IMU also features a magnetometer. As a result of is used. This is an open-source based sensor noise aa slowly combined In this paper, the low-cost PX4Flow optical-flow sensor with vibrations vibrations generated by the the sensor UAV’s bias, motors, direct (OFS) sensor noise and andgenerated slowly varying varying sensor bias, combined (OFS) is used. This is an open-source based software and hardware platform that uses a CMOS vision sensor, (OFS) is used. This is an open-source based software with by UAV’s motors, direct sensor noise and a slowly varying sensor bias, combined and hardware platform that uses a CMOS vision sensor, with vibrations generated by the UAV’s motors, direct (OFS) is used. 
This is an open-source based software integration of acceleration and angular velocity leads to with vibrations generated by the UAV’s motors, direct and hardware platform that uses aa CMOS vision a gyroscope, gyroscope, and sonar range finder (Honegger et al., al., and hardwareand platform that uses finder CMOS vision sensor, sensor, integration of acceleration and angular velocity leads to with vibrations generated by the UAV’s motors, aaa sonar range (Honegger et integration of acceleration and angular velocity to hardware platform that uses a CMOS vision sensor, poor velocity velocity and attitude estimations estimations that tend leads todirect drift integration of and acceleration and angular velocity leads to aaand gyroscope, and sonar range finder (Honegger et al., 2013). The sonar module can be replaced with a LiDAR a gyroscope, and a sonar range finder (Honegger et al., poor attitude that tend to drift integration of and acceleration and greatly angularincrease velocity leads to 2013). The sonar can be replaced with a poor velocity attitude estimations that tend to a2013). gyroscope, and module athesonar range (Honegger et al., over time. time. Sensor fusion can can estipoor velocity and attitude estimations that tendthe to drift drift 2013). The sonar module be replaced with aa LiDAR LiDAR sensor to increase datacan output rate and and measurement The sonar module can be finder replaced with LiDAR over Sensor fusion greatly increase estipoor velocity and attitude estimations that tendthe to drift sensor to increase the data output rate measurement over time. Sensor fusion can greatly increase the esti2013). The sonar module can be replaced with a LiDAR mation performance. In this context accelerometer and over time. Sensor fusion can greatly increase the estisensor increase data output rate and quality.to However, LiDAR sensors are considerably more sensor toHowever, increase the the datasensors output are rateconsiderably and measurement measurement mation performance. In this context accelerometer and over time. Sensor fusion cancan greatly increasetothe estiLiDAR more mation performance. In context accelerometer and sensor toHowever, increase the data output rateconsiderably andis measurement magnetometer measurements be combined combined retrieve mation performance. In this this context accelerometer and quality. quality. LiDAR sensors are more expensive. Therefore a different approach taken in this quality. However, LiDAR sensors are considerably more magnetometer measurements can be to retrieve mation In this context accelerometer and expensive. Therefore aa different approach is taken in this magnetometer measurements can be retrieve quality. However, LiDAR sensors are considerably more attitude performance. information. For translational velocityto and posimagnetometer measurements can be combined combined toand retrieve expensive. Therefore different approach is taken in this paper. Sonar measurements are not used directly to scale expensive. Therefore a different approach is taken in this attitude information. For translational velocity posimagnetometer measurements can be combined toand retrieve Sonar measurements are not used directly to scale attitude information. For translational velocity posiexpensive. Therefore different approach is takenvelocity, this tion, global global navigation satellite system (GNSS) signals can paper. attitude information. For translational velocitysignals and posipaper. 
Sonar measurements are used to the flow flow velocity to athe the camera’s translational paper. Sonar measurements are not not used directly directly toinscale scale tion, navigation satellite system (GNSS) can attitude information. For translational velocity andalways posithe velocity to camera’s translational velocity, tion, global navigation satellite system (GNSS) signals can paper. Sonar measurements are not used directly to scale provide a reference. However, GNSS signals are not tion, global navigation satellite system (GNSS) signals can the flow to translational velocity, instead state representing representing the UAV UAV height, which which is the flowaavelocity velocity to the the camera’s camera’s translational velocity, provide a reference. However, GNSS signals are not always tion, global navigation satellite system (GNSS) can instead state the height, is provide aaand reference. However, GNSS signals are not the flowaawith velocity toand theIMU camera’s translational velocity, reliable are typically typically unavailable indoors. provide reference. However, GNSS signals are signals not always always instead state representing the UAV height, which is updated sonar measurements, is used. This instead state representing the UAV height, which is reliable and are unavailable indoors. provide aand reference. However, GNSS signals are not always updated with sonar and IMU measurements, is used. This reliable are unavailable indoors. instead representing the UAV height, which is reliable are typically typically unavailable indoors. sonar and IMU is This results in inawith a state more robust translational-velocity estimate. An updated with sonar andtranslational-velocity IMU measurements, measurements, is used. used. This Recentlyand developed camera-based vision sensors are are light light updated reliable and are typically unavailable indoors. results a more robust estimate. An updated with sonar and IMU measurements, is used. This Recently developed camera-based vision sensors results in a more robust translational-velocity estimate. An EKF-based framework, modified to handle multi-rate samresults in a more robust translational-velocity estimate. An Recently developed vision are light and inexpensive inexpensive andcamera-based can also also be be used used tosensors estimate velocRecently developed camera-based visionto sensors areveloclight EKF-based framework, modified to handle multi-rate samresults in a more robust translational-velocity estimate. 
An and and can estimate framework, handle multi-rate samRecently developed camera-based visionto sensors areveloclight EKF-based pled measurements measurements andmodified missing to data, is used used to perform perform EKF-based framework, modified to handle multi-rate samand inexpensive and can also be used estimate ity (Mebarki et al., 2013) and position (Weiss and Siegand inexpensive and can also be used to estimate velocpled and missing data, is to EKF-based framework, modified to handle multi-rate samity (Mebarki et al., 2013) and position (Weiss and Siegpled measurements and missing data, is used to perform and inexpensive and can also be used to estimate velocmeasurements and missing data, is used to perform ity ity (Mebarki (Mebarki et et al., al., 2013) 2013) and and position position (Weiss (Weiss and and SiegSieg- pled pled measurements and missing data, is used to perform ity (Mebarki et IFAC al., 2013) and position (Weiss and Sieg2405-8963 © 2018, (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Copyright © 2018 IFAC. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2018.09.088
2. SENSOR MODELS

2.1 Notation

For simplicity, the IMU and OFS are assumed to be aligned and to produce measurements with respect to the same right-handed body frame, which is attached to the UAV. The right-handed world frame is fixed to the Earth, with the x-axis aligned with the local magnetic north and the z-axis pointing opposite to gravity. Superscripts b and w are used to indicate that a variable is expressed in the body or the world frame, respectively; while subscripts x, y, and z refer to the component of a variable along each axis. The rotation matrix, representing the rotation of the world frame with respect to the body frame, is denoted as R(Φ^w)^b_w, with attitude Φ^w = [φ θ ψ]^T, where φ, θ, and ψ are the roll, pitch, and yaw of the UAV, respectively, measured with respect to the x-, y-, and z-axes of the world frame. To transform the angular velocity from the world to the body frame, the matrix T(Φ^w)^b_w is used. However, if the IMU and OFS measure in different frames, due to, for example, misalignment, the EKF can be adapted to account for the difference in their reference frames. Even if the relative position and rotation are not known, the EKF can be used to estimate those quantities (Weiss and Siegwart, 2011).

2.2 Sensor equations

The accelerometer, magnetometer, and gyroscopes measure in three perpendicular directions. These measurements are modelled with bias and noise terms, b and µ, respectively, which yields (Sabatini, 2011):

\tilde{a}^b = R^b_w(\Phi^w)(a^w + g^w) + b_a + \mu_a,
\tilde{\omega}^b_I = T^b_w(\Phi^w)\,\omega^w + b_{\omega_I} + \mu_{\omega_I},
\tilde{m}^b = R^b_w(\Phi^w)\,m^w + b_m + \mu_m,
\tilde{\omega}^b_O = T^b_w(\Phi^w)\,\omega^w + b_{\omega_O} + \mu_{\omega_O},    (1)

where a tilde (~) indicates the measurement under consideration. Variables a = [a_x a_y a_z]^T, ω = [ω_x ω_y ω_z]^T, and m = [m_x m_y m_z]^T represent the acceleration (m/s²), angular velocity (rad/s), and magnetic field (µT), respectively. Subscripts I and O distinguish between the angular velocity measured by the IMU and that from the OFS, respectively. The world's gravitational vector is given by g^w = [0 0 9.81]^T (m/s²). Scale factors are omitted (i.e., chosen to be equal to 1), as they have a limited influence compared to the bias drifts in these sensors (Sabatini, 2011).

The OFS measures the optical flow, ρ (pixel), in the x- and y-directions of the body frame, between two consecutive images. This flow is induced by the translational and angular velocities of the UAV, v = [v_x v_y v_z]^T (m/s) and ω, respectively (Honegger et al., 2013). To relate the optical flow to the UAV's velocity, the time between two consecutive images, τ (s), and the focal length of the sensor's optics, f (pixel), are needed. Furthermore, the scene depth is needed. It is related to the UAV's position, p = [p_x p_y p_z]^T (m), in the z-direction, along with the roll and pitch angles. The sonar measures the scene depth, s^b_z (m). The optical flow and scene depth can be expressed as

\tilde{\rho}_x = \left(-\frac{\cos(\phi)\cos(\theta)\,v^b_x}{p^w_z} + \omega^b_y\right)\tau f + \mu_{\rho_x},
\tilde{\rho}_y = \left(-\frac{\cos(\phi)\cos(\theta)\,v^b_y}{p^w_z} + \omega^b_x\right)\tau f + \mu_{\rho_y},    (2)
\tilde{s}^b_z = \frac{p^w_z}{\cos(\phi)\cos(\theta)} + \mu_{s_z}.
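To make the flow model concrete, the following is a minimal Python sketch that evaluates the noise-free flow and scene-depth predictions of (2) from the body-frame velocity, the angular rates, and the height. The function and variable names are illustrative only; they are not taken from the paper or the PX4Flow firmware.

```python
import math

def predict_flow_and_depth(phi, theta, v_x_b, v_y_b, omega_x_b, omega_y_b,
                           p_z_w, tau, focal_px):
    """Noise-free evaluation of the optical-flow/sonar model in (2).

    phi, theta   : roll and pitch (rad)
    v_x_b, v_y_b : body-frame translational velocity (m/s)
    omega_*_b    : body-frame angular rates (rad/s)
    p_z_w        : height state, z-position in the world frame (m)
    tau          : time between the two images used for the flow (s)
    focal_px     : focal length of the sensor optics (pixel)
    """
    ct = math.cos(phi) * math.cos(theta)
    rho_x = (-ct * v_x_b / p_z_w + omega_y_b) * tau * focal_px   # flow in x (pixel)
    rho_y = (-ct * v_y_b / p_z_w + omega_x_b) * tau * focal_px   # flow in y (pixel)
    s_z_b = p_z_w / ct                                           # sonar scene depth (m)
    return rho_x, rho_y, s_z_b

# Example: hovering 1 m above the ground, moving at 0.5 m/s along body x.
print(predict_flow_and_depth(0.0, 0.0, 0.5, 0.0, 0.0, 0.0, 1.0, 0.02, 500.0))
```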
Noise terms are modelled as µ_i ∼ N(0, Σ_i), with i ∈ {a, ω_I, m, ρ_x, ρ_y, s_z, ω_O} and Σ_i diagonal, where N(0, Σ_i) denotes a normal (Gaussian) distribution with a zero mean and a covariance of Σ_i. Bias terms are often modelled independently, following a random-walk process (Sabatini, 2011):

\dot{b}_j = \mu_{b_j}, \quad j \in \{a, \omega_I, m, \omega_O\},    (3)

with µ_{b_j} ∼ N(0, Σ_{b_j}) and Σ_{b_j} diagonal. Because magnetometer bias drift is usually negligible, b_m is primarily used to compensate for magnetic distortions. Note that the variables v and ω in the model for the pixel flow are expressed in the body frame. Transformation to the world frame can be done using the rotation matrices.

3. SENSOR-FUSION ALGORITHM

An EKF is used to estimate the UAV's position, attitude, and velocity. Therefore, the state vector is defined as

\zeta = \left[\, (p^w)^T\ (v^w)^T\ (\Phi^w)^T\ (\omega^w)^T\ b_a^T\ b_{\omega_I}^T\ b_{\omega_O}^T\ b_m^T \,\right]^T.    (4)

The biases in the sensors are included in the state space, so they can be estimated by the EKF algorithm, which consists of multiple steps.

3.1 Prediction step

An a-priori state prediction is made using a discrete-time process model with time step t (s). The accelerometer and gyroscope measurements from the IMU are used to define the input as u^T = [\tilde{a}^T\ \tilde{\omega}_I^T]. This allows for a state vector without acceleration states and therefore reduces its size. Fewer noise covariances must be estimated, which simplifies the tuning of the EKF. Furthermore, it saves one measurement update, which reduces the size of the matrix inversions (Bleser and Stricker, 2009). Rewriting (1) and (3), the discrete equations governing the state are obtained:

p^w_{k+1} = p^w_k + t\,v^w_k,
v^w_{k+1} = v^w_k + t\left[ R^w_b(\Phi^w_k)(\tilde{a}^b - b_{a,k} - \mu_a) - g^w \right],
\Phi^w_{k+1} = \Phi^w_k + t\,\omega^w_k,
\omega^w_{k+1} = T^w_b(\Phi^w_k)(\tilde{\omega}^b_I - b_{\omega_I,k} - \mu_{\omega_I}),
b_{a,k+1} = b_{a,k} + t\,\mu_{b_a,k},    (5)
b_{\omega_I,k+1} = b_{\omega_I,k} + t\,\mu_{b_{\omega_I},k},
b_{\omega_O,k+1} = b_{\omega_O,k} + t\,\mu_{b_{\omega_O},k},
b_{m,k+1} = b_{m,k} + t\,\mu_{b_m,k};

or, in short,

\zeta_{k+1} = f(\zeta_k, u_k, w_k).    (6)
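As a concrete illustration of (5)–(6), the Python sketch below propagates the state one step with the noise terms set to zero, as is done in the prediction step. The state layout, the helper names, and the ZYX (roll-pitch-yaw) Euler convention are assumptions made for the sketch, not details taken from the paper.

```python
import numpy as np

# Assumed state layout for zeta (24 states): p(3), v(3), Phi(3), omega(3),
# b_a(3), b_wI(3), b_wO(3), b_m(3); the index slices are illustrative.
P_, V_, PHI, OM, BA, BWI, BWO, BM = [slice(3 * i, 3 * i + 3) for i in range(8)]
G_W = np.array([0.0, 0.0, 9.81])

def R_body_to_world(phi, theta, psi):
    """Rotation of body-frame vectors into the world frame (ZYX convention)."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(psi), np.sin(psi)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def T_body_to_world(phi, theta):
    """Maps body angular rates to Euler-angle rates (inverse of T^b_w)."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, ty = np.cos(theta), np.tan(theta)
    return np.array([[1, sx * ty, cx * ty],
                     [0, cx,     -sx],
                     [0, sx / cy, cx / cy]])

def f(zeta, u, dt):
    """Discrete process model (5) with the noise terms set to zero."""
    a_meas, w_meas = u[:3], u[3:]              # u = [a_tilde; omega_tilde_I]
    phi, theta, psi = zeta[PHI]
    nxt = zeta.copy()
    nxt[P_] = zeta[P_] + dt * zeta[V_]
    nxt[V_] = zeta[V_] + dt * (R_body_to_world(phi, theta, psi) @ (a_meas - zeta[BA]) - G_W)
    nxt[PHI] = zeta[PHI] + dt * zeta[OM]
    nxt[OM] = T_body_to_world(phi, theta) @ (w_meas - zeta[BWI])
    # Bias states follow a random walk; with zero noise they stay constant.
    return nxt

# Example: hover, with the accelerometer reading +g along body z.
zeta0 = np.zeros(24)
u0 = np.concatenate([[0.0, 0.0, 9.81], np.zeros(3)])
print(f(zeta0, u0, 0.005)[V_])   # velocity stays ~zero
```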
The process noise, w_k ∼ N(0, Q), where Q is the covariance matrix, given by w_k^T = [µ_a^T µ_{ω_I}^T µ_{b_a}^T µ_{b_{ω_I}}^T µ_{b_{ω_O}}^T µ_{b_m}^T], is set to zero in the prediction because it is unknown. This results in the a-priori state prediction:

\hat{\zeta}^-_{k+1} = f(\hat{\zeta}_k, u_k, 0),    (7)

where the caret (ˆ) denotes that the variable is estimated and the superscript "−" denotes a prediction. The a-priori covariance matrix estimation is defined as

P^-_{k+1} = F_k P_k F_k^T + W_k Q W_k^T,    (8)

where the matrices F_k and W_k are given by

F_k = \frac{\partial f}{\partial \zeta}(\hat{\zeta}_k, u_k, 0),    (9)

W_k = \frac{\partial f}{\partial w}(\hat{\zeta}_k, u_k, 0).    (10)
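The Jacobians in (9)–(10) can be derived analytically; as a hedged illustration, the sketch below approximates them numerically with central differences and propagates the covariance as in (7)–(8). It assumes a process function f(zeta, u, w) with an explicit noise argument, analogous to the one sketched above; all names are illustrative.

```python
import numpy as np

def numerical_jacobian(func, x0, eps=1e-6):
    """Central-difference Jacobian of func at x0 (stand-in for the analytic (9)-(10))."""
    x0 = np.asarray(x0, dtype=float)
    y0 = np.asarray(func(x0), dtype=float)
    J = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (np.asarray(func(x0 + dx)) - np.asarray(func(x0 - dx))) / (2.0 * eps)
    return J

def predict(zeta, P, u, Q, f):
    """A-priori prediction (7)-(8) for a process model f(zeta, u, w)."""
    w0 = np.zeros(Q.shape[0])                                 # noise set to zero, as in (7)
    zeta_pred = f(zeta, u, w0)
    F = numerical_jacobian(lambda z: f(z, u, w0), zeta)       # (9)
    W = numerical_jacobian(lambda w: f(zeta, u, w), w0)       # (10)
    P_pred = F @ P @ F.T + W @ Q @ W.T                        # (8)
    return zeta_pred, P_pred

# Toy usage with a linear stand-in model: x_{k+1} = x_k + dt*(u - w).
dt = 0.005
f_toy = lambda z, u, w: z + dt * (u - w)
z1, P1 = predict(np.zeros(3), np.eye(3) * 1e-3, np.ones(3), np.eye(3) * 1e-4, f_toy)
print(z1, np.diag(P1))
```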
3.2 Measurement model

Data from the IMU and the gyroscope included in the OFS are assumed to have the same sampling rate and to be available at each time step. The corresponding measurement model is taken as:

\eta_{\mathrm{IMU},k} = \begin{bmatrix} R^b_w(\hat{\Phi}^w_k)\, g^w + \hat{b}_{a,k} + \mu_{a,k} \\ R^b_w(\hat{\Phi}^w_k)\, m^w + \hat{b}_{m,k} + \mu_{m,k} \\ T^b_w(\hat{\Phi}^w_k)\, \hat{\omega}^w_k + \hat{b}_{\omega_I,k} + \mu_{\omega_I,k} \\ T^b_w(\hat{\Phi}^w_k)\, \hat{\omega}^w_k + \hat{b}_{\omega_O,k} + \mu_{\omega_O,k} \end{bmatrix}.    (11)
Note that the first line in (11) assumes that the linear accelerations are small compared to the gravitational acceleration (Strohmeier and Montenegro, 2017). The measurement models for the sonar and optical-flow measurements are, in correspondence with (2), given by

\eta_{\mathrm{sonar},k} = \frac{\hat{p}^w_z}{\cos(\hat{\phi})\cos(\hat{\theta})} + \mu_{s_z},    (12)

\eta_{\mathrm{OFS},k} = \begin{bmatrix} \left(-\dfrac{\cos(\hat{\phi})\cos(\hat{\theta})\,\hat{v}^b_{x,k}}{\hat{p}^w_{z,k}} + \hat{\omega}^b_{y,k}\right)\tau_k f + \mu_{\rho_x} \\[2ex] \left(-\dfrac{\cos(\hat{\phi})\cos(\hat{\theta})\,\hat{v}^b_{y,k}}{\hat{p}^w_{z,k}} + \hat{\omega}^b_{x,k}\right)\tau_k f + \mu_{\rho_y} \end{bmatrix}.    (13)

The OFS does not output new flow data at a constant rate due to its automatic exposure control (AEC). The flow quality, α ∈ [0, 255], is an output of the PX4Flow that depends on the number of pixels that match in two consecutive images. Sometimes this flow-data quality is equal to α = 0, due to, e.g., poor lighting conditions, from which no flow could be calculated. Because not all sensors generate a usable output at every time step, the variables β ∈ {0, 1} and γ ∈ {0, 1} are introduced. If new flow values have become available since the last time step and the corresponding quality is larger than zero, β equals 1. If new sonar information has become available, γ equals 1. Otherwise, both variables equal zero.
Four measurement models can be distinguished:

\eta_k = \begin{cases} \eta_{\mathrm{IMU},k}, & \text{if } \beta = 0 \wedge \gamma = 0, \\[1ex] \begin{bmatrix} \eta_{\mathrm{IMU},k} \\ \eta_{\mathrm{OFS},k} \end{bmatrix}, & \text{if } \beta = 1 \wedge \gamma = 0, \\[2ex] \begin{bmatrix} \eta_{\mathrm{IMU},k} \\ \eta_{\mathrm{OFS},k} \\ \eta_{\mathrm{sonar},k} \end{bmatrix}, & \text{if } \beta = 1 \wedge \gamma = 1, \\[2ex] \begin{bmatrix} \eta_{\mathrm{IMU},k} \\ \eta_{\mathrm{sonar},k} \end{bmatrix}, & \text{if } \beta = 0 \wedge \gamma = 1; \end{cases}    (14)

or, in short,

\eta_k = h(\zeta^-_k, \nu_k),    (15)

where ν ∼ N(0, R) defines the zero-mean Gaussian measurement noise and R is the noise covariance matrix.
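The case distinction in (14) can be implemented by stacking only the blocks that are valid at the current step, together with the matching rows of R. The sketch below shows this bookkeeping; the helper names and the way the per-sensor predictions are passed in are illustrative, not from the paper.

```python
import numpy as np

def assemble_measurement(eta_imu, eta_ofs, eta_sonar, R_imu, R_ofs, R_sonar,
                         flow_quality, new_flow, new_sonar):
    """Builds the stacked measurement model (14) and its noise covariance.

    beta = 1 only if a new flow value arrived and its quality alpha > 0;
    gamma = 1 only if a new sonar value arrived.
    """
    beta = 1 if (new_flow and flow_quality > 0) else 0
    gamma = 1 if new_sonar else 0

    blocks = [eta_imu]                 # IMU block is available at every time step
    covs = [R_imu]
    if beta:
        blocks.append(eta_ofs)
        covs.append(R_ofs)
    if gamma:
        blocks.append(eta_sonar)
        covs.append(R_sonar)

    eta = np.concatenate(blocks)
    R = np.diag(np.concatenate([np.diag(c) for c in covs]))   # diagonal noise assumed
    return eta, R, beta, gamma

# Example: flow arrived with quality 120, no new sonar sample this step.
eta, R, beta, gamma = assemble_measurement(
    np.zeros(12), np.zeros(2), np.zeros(1),
    np.eye(12) * 1e-2, np.eye(2) * 1e-1, np.eye(1) * 1e-2,
    flow_quality=120, new_flow=True, new_sonar=False)
print(eta.shape, R.shape, beta, gamma)   # (14,) (14, 14) 1 0
```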
3.3 Kalman gain

The Kalman gain, K, is computed using

K_{k+1} = P^-_{k+1} H^T_{k+1} \left( H_{k+1} P^-_{k+1} H^T_{k+1} + V_{k+1} R V^T_{k+1} \right)^{-1},    (16)

where the matrix H_{k+1} is defined as

H_{k+1} = \frac{\partial h}{\partial \zeta}(\hat{\zeta}^-_{k+1}, 0)    (17)

and the matrix V_{k+1} is defined as

V_{k+1} = \frac{\partial h}{\partial \nu}(\hat{\zeta}^-_{k+1}, 0) = I,    (18)

where I is the identity matrix.

3.4 Correction step

Similar to the measurement model, the measurement vector also varies with time. The IMU and the gyroscope of the OFS have new measurements at each time step, with the corresponding vector:

\tilde{\eta}_{\mathrm{IMU},k} = \begin{bmatrix} \tilde{a}^b_k \\ \tilde{m}^b_k \\ \tilde{\omega}^b_{I,k} \\ \tilde{\omega}^b_{O,k} \end{bmatrix}.    (19)

A new OFS measurement results in two flow values:

\tilde{\eta}_{\mathrm{OFS},k} = \begin{bmatrix} \tilde{\rho}^b_{x,k} \\ \tilde{\rho}^b_{y,k} \end{bmatrix},    (20)

while the measurement vector of the sonar is given by

\tilde{\eta}_{\mathrm{sonar},k} = \tilde{s}^b_{z,k}.    (21)

Because there are multiple possibilities at each time step, the total measurement vector is defined as
\tilde{\eta}_k = \begin{cases} \tilde{\eta}_{\mathrm{IMU},k}, & \text{if } \beta = 0 \wedge \gamma = 0, \\[1ex] \begin{bmatrix} \tilde{\eta}_{\mathrm{IMU},k} \\ \tilde{\eta}_{\mathrm{OFS},k} \end{bmatrix}, & \text{if } \beta = 1 \wedge \gamma = 0, \\[2ex] \begin{bmatrix} \tilde{\eta}_{\mathrm{IMU},k} \\ \tilde{\eta}_{\mathrm{OFS},k} \\ \tilde{\eta}_{\mathrm{sonar},k} \end{bmatrix}, & \text{if } \beta = 1 \wedge \gamma = 1, \\[2ex] \begin{bmatrix} \tilde{\eta}_{\mathrm{IMU},k} \\ \tilde{\eta}_{\mathrm{sonar},k} \end{bmatrix}, & \text{if } \beta = 0 \wedge \gamma = 1. \end{cases}    (22)

The corrected state vector and covariance matrix can now be calculated as

\hat{\zeta}_{k+1} = \hat{\zeta}^-_{k+1} + K_{k+1}\left( \tilde{\eta}_{k+1} - \eta_{k+1}(\hat{\zeta}^-_{k+1}, 0) \right)    (23)

and

P_{k+1} = (I - K_{k+1} H_{k+1}) P^-_{k+1}.    (24)
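Putting (16)–(18) and (22)–(24) together, a single correction step with a measurement vector whose size changes per step can be sketched as follows; numerical Jacobians are used again as a stand-in for the analytic H_{k+1}, and the names are illustrative.

```python
import numpy as np

def numerical_jacobian(func, x0, eps=1e-6):
    x0 = np.asarray(x0, dtype=float)
    y0 = np.asarray(func(x0), dtype=float)
    J = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (np.asarray(func(x0 + dx)) - np.asarray(func(x0 - dx))) / (2.0 * eps)
    return J

def correct(zeta_pred, P_pred, eta_meas, R, h):
    """EKF correction (16), (23), (24) for the currently valid measurement stack.

    h(zeta) must return the noise-free prediction of exactly the stacked
    measurement eta_meas that is valid at this time step, cf. (14) and (22).
    """
    H = numerical_jacobian(h, zeta_pred)                       # (17); V = I per (18)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)                        # (16)
    zeta = zeta_pred + K @ (eta_meas - h(zeta_pred))           # (23)
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred             # (24)
    return zeta, P

# Toy usage: 2 states, only the first one measured this step.
h_toy = lambda z: np.array([z[0]])
zeta, P = correct(np.array([1.0, 0.5]), np.eye(2) * 0.1,
                  np.array([1.2]), np.eye(1) * 0.01, h_toy)
print(zeta, np.diag(P))
```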
4. EXPERIMENTAL VALIDATION

Experiments are carried out to validate and tune the EKF.

4.1 Experimental setup

A stand-alone sensor module featuring a PX4Flow, an Adafruit Precision NXP 9-DOF Breakout Board, and a Teensy 3.6 development board is attached to a hexacopter with a 3DR Pixhawk flight controller. All sensors are calibrated prior to the experiments. A photograph of the hexacopter is shown in Fig. 1.

The UAV is flown under manual remote control in an indoor flight laboratory, while sensor outputs are written to a micro-SD card by the Teensy board. The flight time, and therefore the measurement duration, is limited by battery capacity to approximately 150 s. This dataset is post-processed by the EKF. The position and attitude used as ground truth are measured by an OptiTrack motion-capture system, which yields positional and attitudinal data with uncertainties of <1 mm and <0.02 rad, respectively. The angular and translational velocities are obtained by differentiation.

Fig. 1. UAV with sensor module attached (the photograph labels indicate the sensor module and the active markers (LEDs) used with the motion-capture system)
As described in Section 3.2, flow data is recorded with a variable frequency due to the AEC. Thus, the firmware of the PX4Flow is altered so that the time between the two images used for the optical-flow calculation is also recorded, for use in (13). Furthermore, the built-in bias correction of the gyroscope is turned off, as the correction has been taken into account in the EKF. The sampling rate of the gyroscope of the PX4Flow and the IMU is chosen to be 200 Hz. The sonar sensor has a range of 0.3–5 m, and its measurements are limited to a sampling rate of ∼10 Hz.
The initial states are estimated by leaving the UAV on the ground for a period of time. This automatically implies the initial states v_0 = 0 and ω_0 = 0. The positions, p_{x,0} and p_{y,0}, are chosen to be zero; and the initial vertical position, p_{z,0}, is the measured distance from the sonar to the ground. The initial attitude of the UAV is calculated using (Munguia and Grau, 2011)

\phi_0 = \arctan\left( \frac{\bar{a}^b_y}{\bar{a}^b_z} \right),
\theta_0 = \arctan\left( -\frac{\bar{a}^b_x}{\bar{a}^b_y\, s(\phi_0) + \bar{a}^b_z\, c(\phi_0)} \right),    (25)
\psi_0 = \arctan\left( \frac{-\bar{m}^b_y\, c(\phi_0) + \bar{m}^b_z\, s(\phi_0)}{\bar{m}^b_x\, c(\theta_0) + \bar{m}^b_y\, s(\phi_0) s(\theta_0) + \bar{m}^b_z\, c(\phi_0) s(\theta_0)} \right),

where \bar{a} and \bar{m} are the average acceleration and magnetic field measured during the initialisation period, and c(·) = cos(·) and s(·) = sin(·).

To derive estimates of the covariance matrices Q and R, the UAV's motors are turned on while the UAV remains on a level surface. During this experiment the UAV does not move, yet vibrations and disturbances also present during flight are introduced. All accelerations other than that due to gravity are zero, because the UAV is not moving. The standard deviations of the values obtained from these measurements are used to estimate Q and R.
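The initialisation in (25) can be evaluated directly from the averaged accelerometer and magnetometer samples; a minimal sketch is given below, with atan2 used in place of arctan to keep the correct quadrant (an implementation choice, not from the paper).

```python
import math

def initial_attitude(a_bar, m_bar):
    """Initial roll, pitch, and yaw from averaged body-frame accel/mag samples, cf. (25)."""
    ax, ay, az = a_bar
    mx, my, mz = m_bar
    phi0 = math.atan2(ay, az)
    theta0 = math.atan2(-ax, ay * math.sin(phi0) + az * math.cos(phi0))
    num = -my * math.cos(phi0) + mz * math.sin(phi0)
    den = (mx * math.cos(theta0)
           + my * math.sin(phi0) * math.sin(theta0)
           + mz * math.cos(phi0) * math.sin(theta0))
    psi0 = math.atan2(num, den)
    return phi0, theta0, psi0

# Example: level UAV, gravity along body z, field pointing towards magnetic north.
print(initial_attitude((0.0, 0.0, 9.81), (20.0, 0.0, -45.0)))   # ~ (0, 0, 0)
```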
The magnetic field in the experimental environment is not constant during flight, due to the metallic structure of the laboratory, which causes poor magnetometer measurements. Therefore, the covariance of the magnetometer bias is set to a large value to compensate.

4.2 Results

The gathered dataset is fused in post-processing using the described EKF, and the state estimates are compared to the motion-capture results. The first 40 s of the measurement are used for sensor calibration and are not taken into account otherwise. The EKF and motion-capture datasets are synchronised by hand using data collected during the calibration period. The attitude estimation is shown in Fig. 2, where it can be seen to be estimated without drift. The magnetometer bias (not shown) has the same trends as the position of the UAV and is able to compensate for position-dependent magnetic distortions.

Fig. 2. Estimated attitude compared to ground truth

In Fig. 3, the translational-velocity estimates are shown, along with the flow quality. Between 40 and 53 s, reliable flow values are unavailable, which means that the velocity is estimated by integrating the accelerometer data. This clearly leads to errors in the horizontal translational-velocity estimates. When the OFS supplies reliable data, this error is corrected. The OFS improves the velocity estimate and compensates for the drift that is normally present as a result of integrating the IMU data. As long as the OFS outputs data with a sufficiently high rate and quality, the velocity error remains relatively low.

Fig. 3. Estimated velocity compared to ground truth
The results for the position estimate are shown in Fig. 4. Drift as a result of the bad flow quality at 40–50 s is clearly visible in the y-position. When the flow data are available, the drift becomes significantly smaller. Drift in the z-direction also occurs during the first few seconds, when the actual height of the UAV is less than 0.3 m and thus not within the range of the sonar. As soon as the UAV's height is within the sonar range, the position estimate in the z-direction is corrected and drift does not occur. The other two components of position do not have absolute measurements, and drift can therefore not be entirely prevented.

Fig. 4. Estimated position compared to ground truth

Table 1. RMSE values for different cases

                  Ref.      No flow   No sonar   No flow,    2nd
                                                 no sonar    dataset
e_px (m)          0.4281    13.5987   1.3174     13.3775     0.7826
e_py (m)          0.8573    3.7590    1.6040     3.6392      1.0779
e_pz (m)          0.0782    0.0783    1.3594     3.9486      0.0810
e_vx (m/s)        0.1150    0.4850    0.3075     0.4846      0.1054
e_vy (m/s)        0.1666    0.3166    0.3407     0.3161      0.1224
e_vz (m/s)        0.0461    0.0461    0.1341     0.1361      0.0486
e_φ (rad)         0.0118    0.0125    0.0138     0.0125      0.0115
e_θ (rad)         0.0116    0.0131    0.0139     0.0131      0.0135
e_ψ (rad)         0.0051    0.0074    0.0283     0.0074      0.0067
e_φ̇ (rad/s)       0.0431    0.0450    0.0426     0.0450      0.0399
e_θ̇ (rad/s)       0.0266    0.0264    0.0267     0.0264      0.0306
e_ψ̇ (rad/s)       0.0176    0.0176    0.0176     0.0176      0.0186

Table 1 summarises the estimation results in terms of root-mean-squared error (RMSE) values. The second column represents the RMSE of the state estimates using all sensor measurements in the EKF. These values correspond to the estimates shown in Figs. 2, 3, and 4. The third column represents a case when the optical-flow measurements are not used. The main difference is the RMSE of v_x and v_y, which in turn leads to poor estimates for p_x and p_y. The vertical velocity estimate (v_z) has RMSE values similar to those in column two, because the estimate is independent of the flow data. Furthermore, the influence on the attitude estimation is negligible. The fourth column represents the RMSE values when the sonar measurements are neglected and shows that the error in p_z has increased significantly. Because p_z is also used to scale flow data, all velocity estimates have higher RMSE values. Also, the yaw error increases significantly, while other estimates do not show large differences. When neither the flow nor the sonar data are used, the state estimates have the RMSE values shown in column five. The effects described earlier are visible, but the yaw error does not increase as much as when the sonar sensor data alone are omitted. This is caused by the bad scaling of the flow data, due to a lack of an adequate range measurement, which corrupts the yaw estimate. To validate that the EKF also performs well with different datasets and that the matrices Q and R do not have to be adapted to obtain good estimates, the unaltered fusion algorithm is used on measurement data from another flight. The RMSE of the resulting estimates are given in the last column of Table 1. For this estimate, all available measurements were used. It can be seen that, for this dataset, the EKF provides results similar to those of the reference case in the second column.

4.3 Discussion

During the experiments, the PX4Flow sensor is found to be very sensitive to the lighting conditions and ground surface. Measurements taken outside, on an asphalt surface, result in high-quality flow data without gaps. The indoor measurements are performed on a smooth, concrete surface, resulting in the flow quality shown in Fig. 3, in which gaps are clearly visible. Fast movements in low-lighting conditions also resulted in low-quality flow data. Sonar measurements are greatly affected by the ground surface: a smooth, hard surface gives good-quality height measurements, compared to a softer and rougher surface.
5. CONCLUSIONS

In this paper, an EKF is proposed to fuse sensor data from an IMU, an OFS, and a sonar sensor. Bias compensation is used to correct for drift in these sensors. The EKF is validated by comparing the state estimates to the ground truth provided by a motion-capture system. The experimental results indicate that the translational velocity of the UAV, as well as its attitude and angular velocity, may be estimated without drift. The availability of flow data greatly reduces the error in velocity and position in the x-y plane. Sonar measurements result in better estimation of the velocity and position in the z-direction. The sonar and optical-flow measurements do not significantly influence the attitude estimates. Future extensions can be made by implementing the algorithm on-board the UAV and inputting estimates from the EKF output to a flight controller.

REFERENCES

Bleser, G. and Hendeby, G. (2010). Using optical flow for filling the gaps in visual-inertial tracking. European Signal Processing Conference, 1836–1840.
Bleser, G. and Stricker, D. (2009). Advanced tracking through efficient image processing and visual-inertial sensor fusion. Computers and Graphics, 33(1), 59–72.
Goppert, J., Yantek, S., and Hwang, I. (2017). Invariant Kalman filter application to optical flow based visual odometry for UAVs. Ninth International Conference on Ubiquitous and Future Networks, 99–104.
Honegger, D., Meier, L., Tanskanen, P., and Pollefeys, M. (2013). An open source and open hardware embedded metric optical flow CMOS camera for indoor and outdoor applications. IEEE International Conference on Robotics and Automation, 1736–1741.
Khosiawan, Y. and Nielsen, I. (2016). A system of UAV application in indoor environment. Production and Manufacturing Research, 4(1), 2–22.
Mebarki, R., Cacace, J., and Lippiello, V. (2013). Velocity estimation of an UAV using visual and IMU data in a GPS-denied environment. IEEE International Symposium on Safety, Security, and Rescue Robotics.
Munguia, R. and Grau, A. (2011). Attitude and heading system based on EKF total state configuration. Proceedings of the 2011 IEEE International Symposium on Industrial Electronics, 2147–2152.
Omari, S., Gohl, P., Burri, M., Achtelik, M., and Siegwart, R. (2015). Visual industrial inspection using aerial robots. Proceedings of the 3rd International Conference on Applied Robotics for the Power Industry, (1).
Półka, M., Ptak, S., and Kuziora, Ł. (2017). The use of UAV's for search and rescue operations. Procedia Engineering, 192, 748–752.
Sabatini, A.M. (2011). Kalman-filter-based orientation determination using inertial/magnetic sensors: Observability analysis and performance evaluation. Sensors, 11(10), 9182–9206.
Strohmeier, M. and Montenegro, S. (2017). Coupled GPS/MEMS IMU attitude determination of small UAVs with COTS. Electronics, 6(1), 15.
Waharte, S. and Trigoni, N. (2010). Supporting search and rescue operations with UAVs. International Conference on Emerging Security Technologies (ETS), 142–147.
Weiss, S., Achtelik, M.W., Lynen, S., Chli, M., and Siegwart, R. (2012). Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments. IEEE International Conference on Robotics and Automation, 957–964.
Weiss, S. and Siegwart, R. (2011). Real-time metric state estimation for modular vision-inertial systems. Proceedings of the IEEE International Conference on Robotics and Automation, 231855, 4531–4537.
Yun, S., Lee, Y.J., and Sung, S. (2016). Range/optical flow-aided integrated navigation system in a strapdown sensor configuration. International Journal of Control, Automation and Systems, 14(1), 229–241.