
Precise localization of the mobile wheeled robot using sensor fusion of odometry, visual artificial landmarks and inertial sensors

Dušan Nemec, Vojtech Šimák, Aleš Janota, Marián Hruboš, Emília Bubeníková
Department of Control and Information Systems, Faculty of Electrical Engineering, University of Zilina, Zilina, Slovak Republic

Highlights

• The article proposes a new sensor fusion method combining a variety of sensors' readings.
• The method utilizes inertial sensors, landmarks, odometers and a magnetometer.
• The method is suitable for estimating the position of a wheeled vehicle.
• The simplicity of the method allows its implementation on low-cost hardware.

Article info

Article history: Available online 3 December 2018

Keywords: Localization; Sensor fusion; Odometry; Landmarks; Inertial sensors; Mean square error

Abstract

This article proposes a method for sensor fusion between odometers, a gyroscope, an accelerometer, a magnetometer and a visual landmark localization system. The method is designed for the estimation of all 6 degrees of freedom (both translation and attitude) of a wheeled robot moving in uneven terrain. The fusion method is based on continuous estimation of the mean square error of each estimated value and allows different sampling rates of each sensor. Due to its simple implementation, it is suitable for real-time processing on low-cost hardware. In order to evaluate the precision of the estimated position, stochastic models of the sensors (with parameters matching real hardware sensors) were used and random trajectories were simulated. The virtual experiments showed that the method is resistant to the failure of any sensor except the odometers; however, each sensor provides an improvement in the resulting precision.

1. Introduction

Localization is a critical task for achieving autonomy in mobile robotics. Without a precise estimate of the position in real time, the robot is not capable of controlling its movement and navigating along a trajectory. In the general case any rigid object has 6 degrees of freedom (DoF): 3 translational and 3 rotational. In this article we express the rotation of the robot using Euler angles in the Z-Y-X convention (longitudinal inclination called roll α, lateral inclination called pitch β, and heading γ). Translation is expressed in Cartesian coordinates x, y, z. The axes are oriented by the NED convention (x: north or forward, y: east or right, z: down).

There are several approaches to localizing a wheeled robot. Goel et al. proposed a localization method using sensor fusion of an odometer and GPS by the Kalman filter [1]. Jetto et al. used the extended Kalman filter (EKF) to combine readings from an odometer and a sonar; fuzzy logic was used to estimate the input errors of the sensors [2]. Li et al. combined the odometer with a single-axis gyroscope and a laser radar; their readings were combined using the EKF, and the laser radar readings formed an environmental map which was compared with the current observations [3]. Zikos and Petridis and many others used the SLAM method (Simultaneous Localization and Mapping), which processes readings from a laser radar or a sonar and forms a probabilistic map of points (obstacles) [4]. In order to avoid the need to create a global map of the environment, Teslic et al. used local mapping and paired local line segments of points in subsequent observations; the displacement of the robot can then be obtained [5]. Another approach (Smaili et al.) uses a Dynamic Bayesian Network to combine readings from ABS sensors (the odometer in vehicles), a Differential GPS receiver and a digital roadmap [6]. B.-S. Cho et al. proposed a method using inertial sensors (gyroscope, accelerometer) combined with odometry in order to achieve a more precise result. Both sensor systems provide a relative position and have an error which grows with time, but after applying the Kalman filter the output drift is smaller [7]. Hoang et al. used two encoders (odometers), a compass sensor, a laser radar and an omnidirectional camera. The image from the camera was used to estimate the heading of the robot by comparing lines in the image detected by the Hough transform. Again, the EKF was used to fuse all sensors together [8].


Another approach localizes the mobile robot using landmarks (Betke and Gurvits). The system identifies surrounding landmarks in the captured environment and then searches for them in the stored map [9]. Another way to improve the precision of the localization is to estimate the error of the odometry (Martinelli and Siegwart). Both systematic and non-systematic errors were estimated from the readings of an external sensor (a laser range finder); an augmented Kalman filter was then used [10]. Data measured by the sensors may be processed on board, or (in the case of very low available computational power) they may be sent to an external device (PC, smartphone) with greater computational power [11]. Image processing algorithms for visual localization (e.g. visual odometry) are especially computationally demanding. Bluetooth Low Energy (BLE) or other energy-efficient wireless communication may be used for this purpose.

A single sensor is not capable of providing information precise enough to be used in navigation, especially indoors where satellite localization cannot be used. In this article we combine an odometer, gyroscope, accelerometer, magnetometer and visual localization by landmarks. These sensors are affordable, easy to integrate into existing robots and may have very small dimensions, therefore they can be used in miniature low-cost robots. The sensors' readings are combined by our own fusion method. The system is designed to be flexible: it can easily be extended by another sensor.

In the case of a wheeled robot it may look like there are only 3 DoF: 2 translational coordinates x, y and 1 rotational (heading γ). These coordinates are relevant for the navigation of the robot. Changes of the remaining 3 DoF (α, β and depth z) are caused by uneven terrain and may be needed for other services such as camera stabilization or 3D mapping of the environment. Except under ideal conditions (a flat floor), the localization problem cannot be solved only in the horizontal plane. In order to express all 6 DoF of the robot in one form we use the spatial matrix. It defines the transformation from the coordinate system of the robot to the coordinate system of the environment [12]:



$$
T = \begin{bmatrix}
T_{11} & T_{12} & T_{13} & T_{14}\\
T_{21} & T_{22} & T_{23} & T_{24}\\
T_{31} & T_{32} & T_{33} & T_{34}\\
0 & 0 & 0 & 1
\end{bmatrix}
= \begin{bmatrix}
c_\beta c_\gamma & s_\alpha s_\beta c_\gamma - c_\alpha s_\gamma & c_\alpha s_\beta c_\gamma + s_\alpha s_\gamma & x\\
c_\beta s_\gamma & s_\alpha s_\beta s_\gamma + c_\alpha c_\gamma & c_\alpha s_\beta s_\gamma - s_\alpha c_\gamma & y\\
-s_\beta & s_\alpha c_\beta & c_\alpha c_\beta & z\\
0 & 0 & 0 & 1
\end{bmatrix}, \tag{1}
$$

where $x, y, z$ are the coordinates of the robot's origin w.r.t. the environment and $s_\alpha = \sin\alpha$, $c_\alpha = \cos\alpha$, etc. The matrix $T$ will be referred to as the spatial matrix. The coordinates and the above-mentioned Euler angles $\alpha$, $\beta$, $\gamma$ can be extracted from the matrix $T$ as follows [13]:

$$x = T_{14}, \tag{2}$$
$$y = T_{24}, \tag{3}$$
$$z = T_{34}, \tag{4}$$
$$\alpha = \operatorname{atan2}(T_{32}, T_{33}), \tag{5}$$
$$\beta = \arcsin(-T_{31}), \tag{6}$$
$$\gamma = \operatorname{atan2}(T_{21}, T_{11}). \tag{7}$$
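The extraction (2)–(7) is straightforward to code. The following C++ fragment is an illustrative sketch (not taken from the authors' implementation); the struct and function names are ours.

```cpp
#include <cmath>

// Illustrative pose extraction from a 4x4 spatial matrix, Eqs. (2)-(7).
// T is stored row-major: T[r][c] corresponds to T_{(r+1)(c+1)} in the text.
struct Pose {
    double x, y, z;            // translation, Eqs. (2)-(4)
    double alpha, beta, gamma; // roll, pitch, heading, Eqs. (5)-(7)
};

Pose extractPose(const double T[4][4]) {
    Pose p;
    p.x = T[0][3];                              // x = T14
    p.y = T[1][3];                              // y = T24
    p.z = T[2][3];                              // z = T34
    p.alpha = std::atan2(T[2][1], T[2][2]);     // alpha = atan2(T32, T33)
    p.beta  = std::asin(-T[2][0]);              // beta  = arcsin(-T31)
    p.gamma = std::atan2(T[1][0], T[0][0]);     // gamma = atan2(T21, T11)
    return p;
}
```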

In the next sections we discuss existing approaches to localization and propose a new method fusing multiple sources together, providing a complete and more precise estimation of the position. Each input is inaccurate, which can be expressed by its mean square error (MSE).


Fig. 1. Principle of the odometry.

The MSE of a function $f(x_1, x_2, \ldots, x_n)$ is approximately

$$\mathrm{MSE}(f) = \sum_{i=1}^{n} \left(\frac{\partial f}{\partial x_i}\right)^2 \mathrm{MSE}(x_i). \tag{8}$$

For example, the MSEs of the Euler angles (5)–(7) can be computed from the MSE of the spatial matrix using (8):

$$\mathrm{MSE}(\alpha) = \frac{T_{33}^2\,\mathrm{MSE}(T_{32}) + T_{32}^2\,\mathrm{MSE}(T_{33})}{\left(T_{32}^2 + T_{33}^2\right)^2}, \tag{9}$$

$$\mathrm{MSE}(\beta) = \frac{\mathrm{MSE}(T_{31})}{1 - T_{31}^2}, \tag{10}$$

$$\mathrm{MSE}(\gamma) = \frac{T_{11}^2\,\mathrm{MSE}(T_{21}) + T_{21}^2\,\mathrm{MSE}(T_{11})}{\left(T_{11}^2 + T_{21}^2\right)^2}. \tag{11}$$
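Analogously to the pose extraction above, the first-order propagation (9)–(11) can be evaluated directly from the spatial matrix and its element-wise MSE. The snippet below is a hedged sketch under the same row-major convention; all names are illustrative.

```cpp
#include <cmath>

// Illustrative first-order MSE propagation for the Euler angles, Eqs. (9)-(11).
// T and mseT are row-major 4x4 arrays; mseT[r][c] is the MSE of element T_{(r+1)(c+1)}.
struct EulerMse { double alpha, beta, gamma; };

EulerMse propagateEulerMse(const double T[4][4], const double mseT[4][4]) {
    EulerMse m;
    const double d1 = T[2][1]*T[2][1] + T[2][2]*T[2][2];   // T32^2 + T33^2
    m.alpha = (T[2][2]*T[2][2]*mseT[2][1] + T[2][1]*T[2][1]*mseT[2][2]) / (d1*d1); // Eq. (9)
    m.beta  = mseT[2][0] / (1.0 - T[2][0]*T[2][0]);        // Eq. (10)
    const double d2 = T[0][0]*T[0][0] + T[1][0]*T[1][0];   // T11^2 + T21^2
    m.gamma = (T[0][0]*T[0][0]*mseT[1][0] + T[1][0]*T[1][0]*mseT[0][0]) / (d2*d2); // Eq. (11)
    return m;
}
```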

1.1. Odometry

Odometry is a localization method usable for wheeled non-sliding robots. Such a robot must have at least one independently steerable wheel (a car-like chassis has two steerable wheels, but they steer dependently) or Mecanum wheels providing omnidirectional drive, which are much more complex and expensive than "standard" wheels. The position of the robot is incrementally calculated from the distance rolled by the wheels of the robot. The distance rolled by a wheel $\Delta s_{wheel}$ is proportional to the radius of the wheel $R_{wheel}$ and the angle of its rotation $\Delta\varphi$:

$$\Delta s_{wheel} = R_{wheel}\,\Delta\varphi. \tag{12}$$

In order to provide enough information for an independent estimation of the robot's position, the wheels on both sides of one of the robot's axles have to be equipped with rotation sensors (we will call it the odometric axle). Higher precision is achieved when the sensors are mounted on a non-powered axle, due to the possible skid of the powered wheels during acceleration. The calculation is simplified if the origin of the robot is placed in the middle of the odometric axle (see Fig. 1). The arc angle $\Delta\gamma$ of the turn is

$$\Delta\gamma = \frac{\Delta s_L - \Delta s_R}{b}, \tag{13}$$

where $b$ is the width of the sensor axle. The radius $R_{turn}$ of the turn is

$$R_{turn} = \frac{\Delta s_L + \Delta s_R}{2\,\Delta\gamma} = \frac{b}{2}\left(\frac{\Delta s_L + \Delta s_R}{\Delta s_L - \Delta s_R}\right). \tag{14}$$


The change of the position in the robot's local coordinate system, defined by the unit vectors $e_{x'}$ and $e_{y'}$ according to Fig. 1, is

$$\Delta x' = \begin{cases} R_{turn}\sin\Delta\gamma & \text{if } |\Delta s_L - \Delta s_R| \geq \varepsilon,\\ \dfrac{\Delta s_L + \Delta s_R}{2} & \text{if } |\Delta s_L - \Delta s_R| < \varepsilon, \end{cases} \tag{15}$$

$$\Delta y' = \begin{cases} R_{turn}\,(1 - \cos\Delta\gamma) & \text{if } |\Delta s_L - \Delta s_R| \geq \varepsilon,\\ \dfrac{\Delta s_L^2 - \Delta s_R^2}{4b} & \text{if } |\Delta s_L - \Delta s_R| < \varepsilon, \end{cases} \tag{16}$$

where $\varepsilon$ is the smallest positive real number which can be represented by the used data type without loss of precision. The condition had to be introduced because of the singularity in (14).

Fig. 2. Navigation by artificial visual landmarks.

The spatial matrix of the robot is computed incrementally from the last known spatial matrix $T$:

$$T \leftarrow T \cdot T_{odo}, \tag{17}$$

where

$$T_{odo} = \begin{bmatrix}
\cos\Delta\gamma & -\sin\Delta\gamma & 0 & \Delta x'\\
\sin\Delta\gamma & \cos\Delta\gamma & 0 & \Delta y'\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}. \tag{18}$$
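As an illustration of the update (13)–(18), the following C++ sketch (our own, not the paper's code) advances the spatial matrix by one odometric sample; the threshold used for the near-straight case is an assumption standing in for the machine-precision limit ε.

```cpp
#include <cmath>

// Illustrative differential-drive odometry update, Eqs. (13)-(18).
// dsL, dsR: distances rolled by the left/right wheel; b: axle width.
// T is the 4x4 spatial matrix of the robot (row-major), updated in place.
void odometryUpdate(double T[4][4], double dsL, double dsR, double b) {
    const double eps = 1e-12;                  // assumed threshold for the near-straight case
    const double dGamma = (dsL - dsR) / b;     // Eq. (13)
    double dx, dy;
    if (std::fabs(dsL - dsR) >= eps) {
        const double Rturn = 0.5 * b * (dsL + dsR) / (dsL - dsR); // Eq. (14)
        dx = Rturn * std::sin(dGamma);                            // Eq. (15)
        dy = Rturn * (1.0 - std::cos(dGamma));                    // Eq. (16)
    } else {                                   // nearly straight motion, avoid the singularity
        dx = 0.5 * (dsL + dsR);
        dy = (dsL * dsL - dsR * dsR) / (4.0 * b);
    }
    // T_odo from Eq. (18): rotation by dGamma about z plus the local translation (dx, dy).
    const double c = std::cos(dGamma), s = std::sin(dGamma);
    const double Todo[4][4] = {{c, -s, 0, dx}, {s, c, 0, dy}, {0, 0, 1, 0}, {0, 0, 0, 1}};
    double R[4][4];
    for (int i = 0; i < 4; ++i)                // R = T * Todo, Eq. (17)
        for (int j = 0; j < 4; ++j) {
            R[i][j] = 0.0;
            for (int k = 0; k < 4; ++k) R[i][j] += T[i][k] * Todo[k][j];
        }
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) T[i][j] = R[i][j];
}
```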

Note that without additional information it is not possible to calculate the vertical coordinate, nor the roll and pitch angles. The main disadvantage of odometry is the increasing error of the integrated position caused by the iterative character of (17). The error model of the odometry is very complex. In order to simplify it, we may assume that the relative error $\delta$ of each odometric distance $\Delta s$ is constant. Then the MSE of the local increments is approximately

$$\mathrm{MSE}(\Delta x') \leq \frac{\mathrm{MSE}(\Delta s_L) + \mathrm{MSE}(\Delta s_R)}{4} = \frac{\delta^2\left(\Delta s_L^2 + \Delta s_R^2\right)}{4}, \tag{19}$$

$$\mathrm{MSE}(\Delta y') \leq \frac{4\Delta s_L^2\,\mathrm{MSE}(\Delta s_L) + 4\Delta s_R^2\,\mathrm{MSE}(\Delta s_R)}{16\,b^2} = \frac{\delta^2\left(\Delta s_L^4 + \Delta s_R^4\right)}{4\,b^2}, \tag{20}$$

$$\mathrm{MSE}(\Delta\gamma) = \frac{\mathrm{MSE}(\Delta s_L) + \mathrm{MSE}(\Delta s_R)}{b^2} = \frac{\delta^2\left(\Delta s_L^2 + \Delta s_R^2\right)}{b^2}. \tag{21}$$

The MSE of the trigonometric functions is approximately (according to (8)):

$$\mathrm{MSE}(\sin\Delta\gamma) = (\cos\Delta\gamma)^2\,\mathrm{MSE}(\Delta\gamma) \leq \mathrm{MSE}(\Delta\gamma), \tag{22}$$

$$\mathrm{MSE}(\cos\Delta\gamma) = (-\sin\Delta\gamma)^2\,\mathrm{MSE}(\Delta\gamma) \leq \Delta\gamma^2\,\mathrm{MSE}(\Delta\gamma). \tag{23}$$

Using a similar approach, we obtain the MSE of the whole spatial matrix:

$$\mathrm{MSE}(T) \leftarrow \mathrm{MSE}(T)\cdot T_{odo}^{\circ 2} + T^{\circ 2}\cdot
\begin{bmatrix}
\Delta\gamma^2\,\mathrm{MSE}(\Delta\gamma) & \mathrm{MSE}(\Delta\gamma) & 0 & \mathrm{MSE}(\Delta x')\\
\mathrm{MSE}(\Delta\gamma) & \Delta\gamma^2\,\mathrm{MSE}(\Delta\gamma) & 0 & \mathrm{MSE}(\Delta y')\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{bmatrix}, \tag{24}$$

where $^{\circ 2}$ denotes the Hadamard (element-wise) power of two of a matrix.

Fig. 3. Computing the position of the robot from the position of the landmark (bottom view).

1.2. Landmark localization

The global position of the robot can be obtained from the relative distance between the robot and one or multiple landmarks with known positions. There are several approaches depending on the type of the landmark. In this article we discuss artificial directional landmarks (the orientation of the landmark can also be measured). Even if only one landmark is visible to the robot, it is capable of computing its full position, including direction, in global coordinates. To keep the system low-cost and simple, the landmarks can be visual (a 2D code such as a QR code) which can be captured by a standard camera. Since the proposed localization system is intended for indoor usage, the landmarks can be mounted on the ceiling (see Fig. 2). The landmark should encode its own coordinates and orientation so that the robot is capable of reading them. Another approach uses a system of ID numbers encoded by the landmarks and a lookup table which assigns a position to the obtained ID. This requires fewer pixels to be shown in the landmark (a smaller camera resolution is required while covering the same area). The length covered by the camera is

$$L_{cam} = 2 h_{cam}\tan\left(\frac{\theta}{2}\right), \tag{25}$$

where $h_{cam}$ is the height of the landmark with respect to the camera (see Fig. 2) and $\theta$ is the angle of view of the camera. $L_{cam}$ is the minimal distance between landmarks which ensures continuous information about the position. The mathematical principle of computing the position from the position of the landmark in the image captured by the camera is shown in Fig. 3. The spatial matrix of the robot obtained from landmark localization is (see Fig. 3)

$$T = T_{robot(env)} = T_{mark(env)}\cdot T_{mark(cam)}^{-1}\cdot T_{cam(robot)}^{-1}. \tag{26}$$

The spatial matrix of the camera with respect to the robot $T_{cam(robot)}$ is usually constant, since the camera should be fixed on the robot in the vertical direction (which avoids deformation of the captured landmark). The spatial matrix of the landmark with respect to the environment $T_{mark(env)}$ is also constant and might be encoded directly in the landmark code (e.g. a QR code):

$$T_{mark(env)} = \begin{bmatrix}
\cos\gamma_{mark} & -\sin\gamma_{mark} & 0 & x_{mark}\\
\sin\gamma_{mark} & \cos\gamma_{mark} & 0 & y_{mark}\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix} = \mathrm{const}, \tag{27}$$

where $x_{mark}$, $y_{mark}$ are the coordinates and $\gamma_{mark}$ is the horizontal orientation of the landmark with respect to the environment. $T_{mark(cam)}$ is computed from the position $X_m$, $Y_m$ of the landmark in pixels and its orientation $\psi_m$ in the captured image:

$$T_{mark(cam)} = \begin{bmatrix}
\cos\psi_m & -\sin\psi_m & 0 & C_{cam}\cdot X_m\\
\sin\psi_m & \cos\psi_m & 0 & C_{cam}\cdot Y_m\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}, \tag{28}$$

where $C_{cam}$ is the scale factor between distance units (meters) and pixels in the image:

$$C_{cam} = \frac{h_{cam}}{h_{cam} - f_{cam}} \approx \mathrm{const}, \tag{29}$$

where $f_{cam}$ is the focal length of the camera. Note that the vertical coordinate of the landmark is not taken into account (the height of the camera is also omitted). Since $T_{mark(env)}$ and $T_{cam(robot)}$ are constant, the only source of errors is $T_{mark(cam)}$. If we assume that the landmark has a square shape with side length $a_{mark}$ and that the MSE of the coordinates $X_m$, $Y_m$ corresponds to the size of one pixel ($\mathrm{MSE}(X_m) = \mathrm{MSE}(Y_m) = 1$), the MSE of the coordinates and of the direction $\psi_m$ (in radians) is

$$\mathrm{MSE}(x_c) = \mathrm{MSE}(y_c) = C_{cam}^2\,\mathrm{MSE}(X_m) = C_{cam}^2, \tag{30}$$

$$\mathrm{MSE}(\gamma_c) \approx \frac{\mathrm{MSE}(x_c) + \mathrm{MSE}(y_c)}{a_{mark}^2} = \frac{2\,C_{cam}^2}{a_{mark}^2}. \tag{31}$$
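For the planar case, the composition (26)–(28) can be sketched with 2D rigid transforms as below. This is an illustrative reduction of the full 4 × 4 formulation, with the camera-to-robot transform passed in as a constant; all identifiers are ours, not the authors'.

```cpp
#include <cmath>

// Illustrative planar rigid transform (x, y, heading gamma) used to sketch Eq. (26):
// T_robot(env) = T_mark(env) * T_mark(cam)^-1 * T_cam(robot)^-1.
struct Planar { double x, y, gamma; };

Planar compose(const Planar& a, const Planar& b) {             // a * b
    const double c = std::cos(a.gamma), s = std::sin(a.gamma);
    return { a.x + c * b.x - s * b.y, a.y + s * b.x + c * b.y, a.gamma + b.gamma };
}

Planar inverse(const Planar& t) {                              // rigid-transform inverse
    const double c = std::cos(t.gamma), s = std::sin(t.gamma);
    return { -(c * t.x + s * t.y), s * t.x - c * t.y, -t.gamma };
}

// Xm, Ym: landmark position in the image [pixels]; psiM: its orientation in the image [rad];
// Ccam: meters-per-pixel scale from Eq. (29); markEnv: landmark pose, Eq. (27);
// camRobot: pose of the camera on the robot (constant, supplied by the caller).
Planar robotPoseFromLandmark(double Xm, double Ym, double psiM, double Ccam,
                             const Planar& markEnv, const Planar& camRobot) {
    const Planar markCam = { Ccam * Xm, Ccam * Ym, psiM };     // Eq. (28), planar part
    return compose(compose(markEnv, inverse(markCam)), inverse(camRobot)); // Eq. (26)
}
```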

1.3. Gyroscope

The gyroscope is a basic part of AHRS (Attitude and Heading Reference Systems). These systems provide real-time measurements of the inclination in all three axes (expressed as Euler angles α, β, γ or as a 3D rotation matrix). Current MEMS sensors have a high precision-to-cost ratio compared to traditional rotary gyroscopes. However, the angular velocity measured by the MEMS gyroscope has to be converted to an update matrix. The spatial matrix of the robot is then updated sample by sample using the following formula [13]:

$$T \leftarrow T \cdot T_{gyro}, \tag{32}$$

where the update matrix $T_{gyro}$ is

$$T_{gyro} = \begin{bmatrix}
1 & -\omega_z'\Delta t & \omega_y'\Delta t & 0\\
\omega_z'\Delta t & 1 & -\omega_x'\Delta t & 0\\
-\omega_y'\Delta t & \omega_x'\Delta t & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}, \tag{33}$$

where $\omega_x'$, $\omega_y'$, $\omega_z'$ are the components of the angular velocity measured by the gyroscope and $\Delta t$ is the sampling period of the gyroscope. If we assume that the MSE of each axis of the gyroscope is the same and the sampling period is constant, the MSE of the spatial matrix (according to (8)) is

$$\mathrm{MSE}(T) \leftarrow \mathrm{MSE}(T)\cdot T_{gyro}^{\circ 2} + T^{\circ 2}\cdot
\begin{bmatrix}
0 & 1 & 1 & 0\\
1 & 0 & 1 & 0\\
1 & 1 & 0 & 0\\
0 & 0 & 0 & 0
\end{bmatrix}\cdot \omega_{RMS}^2\,\Delta t^2, \tag{34}$$

where $\omega_{RMS}$ is the RMS error of each gyroscope axis, including the bias. The Euler angles of the robot are computed from $T$ using (5)–(7); their MSEs are computed from (34) using (9)–(11). Note that the gyroscope is only capable of providing an estimate of the Euler angles, and the error of this estimate increases in time.

1.4. Accelerometer

Due to the non-zero bias of the gyroscope, the error of the estimated spatial matrix (32) increases in time. An accelerometer is used to compensate for this error: it measures the reaction to the gravity acceleration $-\mathbf{g}'$ (directed upwards) and the own acceleration $\mathbf{a}'_{robot}$ of the robot in its (primed) coordinate system:

$$\mathbf{a}' = -\mathbf{g}' + \mathbf{a}'_{robot} + \mathbf{a}'_{noise}, \tag{35}$$

where $\mathbf{a}'$ is the acceleration vector measured by the accelerometer and $\mathbf{a}'_{noise}$ is the noise vector of the accelerometer, including vibrations of the robot. The accelerometer should be mounted at the origin of the robot (the center of the non-steered axle) and oriented along the vehicle's axes. Two of the Euler angles can be computed from the gravity vector (see [13] for an explanation):

$$\alpha = \operatorname{atan2}\left(g'_y,\, g'_z\right), \tag{36}$$

$$\beta = \operatorname{atan2}\left(-g'_x,\, \sqrt{g'^2_y + g'^2_z}\right). \tag{37}$$

The widely used approach neglects the robot's own acceleration and the noise:

$$\mathbf{g}'_a = -\mathbf{a}'. \tag{38}$$

If we apply a low-pass filter to the accelerometer's readings, the noise vector can be attenuated but never eliminated. If the robot accelerates (or turns) for longer than the time constant of the low-pass filter, the induced error becomes significant. The wheeled robot cannot constantly accelerate along its local z-axis, so we neglect the vertical component of the robot's own acceleration:

$$\mathbf{a}'_{robot} = [a_{fwd}, a_{rad}, 0], \tag{39}$$

where $a_{fwd}$ is the forward acceleration and $a_{rad}$ is the radial acceleration. The MSE of the measured gravity is

$$\mathrm{MSE}(g'_{a,x}) = a_{fwd}^2 + a_{RMS}^2, \tag{40}$$

$$\mathrm{MSE}(g'_{a,y}) = a_{rad}^2 + a_{RMS}^2, \tag{41}$$

$$\mathrm{MSE}(g'_{a,z}) = a_{RMS}^2, \tag{42}$$

where $a_{RMS}$ is the RMS of the measured acceleration noise after filtration. The MSEs of the Euler angles (36)–(37) are

$$\mathrm{MSE}(\alpha_a) = \frac{g'^2_{a,z}\,\mathrm{MSE}(g'_{a,y}) + g'^2_{a,y}\,\mathrm{MSE}(g'_{a,z})}{\left(g'^2_{a,y} + g'^2_{a,z}\right)^2} = \frac{a_{RMS}^2}{g'^2_y + g'^2_z} + \frac{g'^2_z\,a_{rad}^2}{\left(g'^2_y + g'^2_z\right)^2}, \tag{43}$$

$$\mathrm{MSE}(\beta_a) = \frac{\left(g'^2_{a,y}+g'^2_{a,z}\right)^2\mathrm{MSE}(g'_{a,x}) + g'^2_{a,x}\,g'^2_{a,y}\,\mathrm{MSE}(g'_{a,y}) + g'^2_{a,x}\,g'^2_{a,z}\,\mathrm{MSE}(g'_{a,z})}{\left(g'^2_{a,y}+g'^2_{a,z}\right)\left(g'^2_{a,x} + g'^2_{a,y} + g'^2_{a,z}\right)^2} \approx \frac{a_{RMS}^2}{g'^2_x + g'^2_y + g'^2_z} + \frac{\left(g'^2_y + g'^2_z\right)a_{fwd}^2}{\left(g'^2_x + g'^2_y + g'^2_z\right)^2} + \frac{g'^2_x\,g'^2_y\,a_{rad}^2}{\left(g'^2_y + g'^2_z\right)\left(g'^2_x + g'^2_y + g'^2_z\right)^2}. \tag{44}$$

1.5. Magnetometer

The third Euler angle (heading) can be obtained from the magnetometer. It is computed from the horizontal components $B_x$, $B_y$ of the magnetic field in the global coordinates:

$$\gamma_m = \operatorname{atan2}\left(-B_y, B_x\right). \tag{45}$$

The horizontal components $B_x$, $B_y$ can be computed from the readings of the magnetometer $B'_x$, $B'_y$, $B'_z$ and the Euler angles $\alpha$, $\beta$:

$$B_x = B'_x\cos\beta + B'_y\sin\alpha\sin\beta + B'_z\cos\alpha\sin\beta, \tag{46}$$

$$B_y = B'_y\cos\alpha - B'_z\sin\alpha. \tag{47}$$

For an explanation of (46)–(47) please refer to [13]. The MSEs of the horizontal components are

$$\mathrm{MSE}(B_x) = B_{RMS}^2 + \mathrm{MSE}(\alpha)\left(B'^2_y\cos 2\alpha + B'^2_z\sin 2\alpha\right)\sin 2\beta + \mathrm{MSE}(\beta)\left(B'^2_y\sin 2\alpha + B'^2_z\cos 2\alpha\right)\cos 2\beta, \tag{48}$$

$$\mathrm{MSE}(B_y) = B_{RMS}^2 + \mathrm{MSE}(\alpha)\left(B'^2_y\sin 2\alpha + B'^2_z\cos 2\alpha\right), \tag{49}$$

where $B_{RMS}$ is the RMS error of each axis of the magnetometer. The MSE of the magnetic heading is

$$\mathrm{MSE}(\gamma_m) = \frac{B_y^2\,\mathrm{MSE}(B_x) + B_x^2\,\mathrm{MSE}(B_y)}{\left(B_x^2 + B_y^2\right)^2}. \tag{50}$$

2. Proposed system

The above-mentioned subsystems estimate some of the six degrees of freedom of the robot. All available estimates of the same variable should be merged together into one result. If a subsystem works incrementally (odometry, gyroscope), the merged result overrides the internal state of the iterative algorithm. The result of the sensor fusion should be more precise while maintaining the high sampling rate without increasing the delay.

2.1. Sensor fusion method

Previously we have approximated the output error of each subsystem using the mean square error (MSE). Each subsystem can have a different sampling rate. The formula (8) is valid only if the input variables are independent. Since we want to combine two values $x_1$, $x_2$ representing the same variable, the difference between the values $\Delta x = x_2 - x_1$ has to be taken into account. We may compensate the MSE:

$$\mathrm{MSE}(x(t)) \approx \mathrm{MSE}(x(t_0)) + \left[\frac{\mathrm{d}x(t_0)}{\mathrm{d}t}\right]^2 (t - t_0)^2. \tag{51}$$

The goal of the sensor fusion is to minimize the MSE of the result. The straightforward solution is to use the adaptive complementary filter

$$x_{result} = \frac{\sum_k w_k x_k}{\sum_k w_k}, \tag{52}$$

where $w_k$ is the weight of the $k$-th sensor or subsystem and $x_k$ is its output. The MSE of the result is

$$\mathrm{MSE}(x_{result}) = \left(\frac{1}{\sum_k w_k}\right)^2 \sum_k w_k^2\,\mathrm{MSE}(x_k). \tag{53}$$

The MSE is minimal when

$$\forall i:\quad \frac{\partial\,\mathrm{MSE}(x_{result})}{\partial w_i} = 0 \;\Rightarrow\; -2\left(\frac{1}{\sum_k w_k}\right)^3 \sum_k w_k^2\,\mathrm{MSE}(x_k) + 2\left(\frac{1}{\sum_k w_k}\right)^2 w_i\,\mathrm{MSE}(x_i) = 0. \tag{54}$$

The above is valid only if

$$w_i = \frac{C}{\mathrm{MSE}(x_i)}, \tag{55}$$

where $C$ is any nonzero constant (e.g. $C = 1$). Applying (55) to (53) we obtain

$$\mathrm{MSE}(x_{result}) = \left(\sum_k \frac{1}{\mathrm{MSE}(x_k)}\right)^{-1}. \tag{56}$$

Note that the proposed fusion operation is associative and commutative (as expected). For the special case when the fused variable is an angle, the proposed method has to be modified, because two angles around 180°, such as 179.9° and −179.9° with the same MSE, would fuse to 0° instead of 180° or −180°. Eqs. (52) and (55) for the case of two variables can be rewritten in the form of an adaptive complementary filter:

$$x_{result} = \frac{\mathrm{MSE}(x_2)\cdot x_1 + \mathrm{MSE}(x_1)\cdot x_2}{\mathrm{MSE}(x_1) + \mathrm{MSE}(x_2)} = x_1 + \frac{\mathrm{MSE}(x_1)}{\mathrm{MSE}(x_1) + \mathrm{MSE}(x_2)}\cdot(x_2 - x_1) = x_1 + K\cdot\Delta x, \tag{57}$$

where $K$ represents the variable gain of the fusion. The difference between two angles is given by

$$\Delta x = (x_2 - x_1) - 2\pi\left\| \frac{x_2 - x_1}{2\pi} \right\|, \tag{58}$$

where $\|x\|$ denotes the nearest integer to the given value $x$. The summation of two angles has to use a similar formula, which results in

$$x_{result} \leftarrow x_{result} - 2\pi\left\| \frac{x_{result}}{2\pi} \right\|. \tag{59}$$

For formal simplification of the following sections we will assume that the operation (59) is implicitly applied to each angle after each numeric operation.
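The two-estimate fusion (55)–(59) reduces to a few lines of code. The following C++ sketch is our own illustration of the principle (it is not the paper's Algorithm 1); the struct and function names are illustrative.

```cpp
#include <cmath>

// Illustrative MSE-weighted fusion of two estimates of the same variable, Eqs. (55)-(59).
struct Estimate { double value; double mse; };

// Fuse two scalar estimates, Eqs. (57) and (56).
Estimate fuse(const Estimate& a, const Estimate& b) {
    const double K = a.mse / (a.mse + b.mse);            // variable gain, Eq. (57)
    return { a.value + K * (b.value - a.value),
             (a.mse * b.mse) / (a.mse + b.mse) };        // Eq. (56)
}

// Wrap an angle into (-pi, pi], i.e. subtract 2*pi times the nearest integer, Eqs. (58)-(59).
double wrapAngle(double angle) {
    const double twoPi = 2.0 * std::acos(-1.0);
    return angle - twoPi * std::round(angle / twoPi);
}

// Angular variant: the difference is wrapped before fusing and the result is wrapped again,
// so that e.g. 179.9 deg and -179.9 deg fuse to about 180 deg instead of 0 deg.
Estimate fuseAngle(const Estimate& a, const Estimate& b) {
    const double K = a.mse / (a.mse + b.mse);
    const double dx = wrapAngle(b.value - a.value);      // Eq. (58)
    return { wrapAngle(a.value + K * dx),                // Eq. (59) applied to the result
             (a.mse * b.mse) / (a.mse + b.mse) };
}
```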

2.2. Fusion of the odometry with the accelerometer

According to (43) and (44), the error of the roll angle $\alpha_a$ and pitch angle $\beta_a$ obtained from the accelerometer increases with the robot's own acceleration (39). Using the odometry, the forward speed is

$$v_{fwd} = \frac{\Delta s_L + \Delta s_R}{2\,\Delta t_{odo}}, \tag{60}$$

where $\Delta t_{odo}$ is the sampling period of the odometry. The quantization noise of the odometer (1/2 of the odometer's increment) may cause significant spikes in the computed speed. Due to the robot's inertia, the velocity cannot change too rapidly; therefore the velocity may be computed as an average over multiple samples:

$$\bar{v}_{fwd}[k] = \frac{s_L[k] + s_R[k] - s_L[k-N] - s_R[k-N]}{2N\,\Delta t_{odo}}, \tag{61}$$

where $[k]$ denotes the $k$-th sample and $N$ is the number of averaged samples. The forward acceleration is then

$$a_{fwd}[k] = \frac{\bar{v}_{fwd}[k] - \bar{v}_{fwd}[k-1]}{\Delta t_{odo}}. \tag{62}$$

Using a similar approach, the averaged radial acceleration is

$$a_{rad}[k] = \omega[k]\cdot\bar{v}_{fwd}[k] = \frac{(s_L[k]-s_L[k-N]) - (s_R[k]-s_R[k-N])}{bN\,\Delta t_{odo}}\cdot\frac{(s_L[k]-s_L[k-N]) + (s_R[k]-s_R[k-N])}{2N\,\Delta t_{odo}} = \frac{(s_L[k]-s_L[k-N])^2 - (s_R[k]-s_R[k-N])^2}{2bN^2\,\Delta t_{odo}^2}, \tag{63}$$

where $\omega[k]$ is the averaged heading rate. The own acceleration vector (39) with averaged components is then subtracted from the measured acceleration $\mathbf{a}'$ to obtain the gravity vector (the $k$-indices are omitted):

$$\mathbf{g}'_{ao} = -\left(\mathbf{a}' - [a_{fwd}, a_{rad}, 0]\right). \tag{64}$$

The Euler angles $\alpha_{ao}$ and $\beta_{ao}$ are then computed from the gravity vector $\mathbf{g}'_{ao}$ using (36) and (37), respectively, but their MSEs do not contain the components of the robot's own acceleration (compare with (43) and (44)):

$$\mathrm{MSE}(\alpha_{ao}) = \frac{a_{RMS}^2}{g'^2_{ao,y} + g'^2_{ao,z}}, \tag{65}$$

$$\mathrm{MSE}(\beta_{ao}) = \frac{a_{RMS}^2}{g'^2_{ao,x} + g'^2_{ao,y} + g'^2_{ao,z}}. \tag{66}$$

2.3. Fusion with the gyroscope

The above Euler angles $\alpha_{ao}$, $\beta_{ao}$ can be fused with the Euler angles $\alpha_g$, $\beta_g$, $\gamma_g$ obtained from the spatial matrix (32) by the formulas (5)–(7). We use our proposed fusion principle described in Section 2.1:

$$\alpha_{aog} = \alpha_{ao} + \frac{\mathrm{MSE}(\alpha_{ao})}{\mathrm{MSE}(\alpha_g) + \mathrm{MSE}(\alpha_{ao})}\left(\alpha_g - \alpha_{ao}\right), \tag{67}$$

$$\mathrm{MSE}(\alpha_{aog}) = \left(\frac{1}{\mathrm{MSE}(\alpha_{ao})} + \frac{1}{\mathrm{MSE}(\alpha_g)}\right)^{-1} = \frac{\mathrm{MSE}(\alpha_{ao})\cdot\mathrm{MSE}(\alpha_g)}{\mathrm{MSE}(\alpha_{ao}) + \mathrm{MSE}(\alpha_g)}. \tag{68}$$

Using the same principle, we compute:

• the pitch angle $\beta_{aog}$ from $\beta_{ao}$ (accelerometer + odometer) and $\beta_g$ (gyroscope),
• the heading angle $\gamma_{og}$ from $\gamma_o$ (odometer) and $\gamma_g$ (gyroscope).

All three sensors (accelerometer, odometer and gyroscope) should have the same sampling period $\Delta t$. Due to the high sampling rate of available MEMS gyroscopes and accelerometers (up to 1 kHz), synchronization is not necessary. This sampling rate determines the sampling rate of the whole localization system. Low-cost magnetometers usually have a much lower sampling rate (up to 50 Hz). When a new vector of the magnetic field is measured, the heading $\gamma_{og}$ is combined with the magnetic heading $\gamma_m$ into $\gamma_{mog}$. If there is no sample from the magnetometer available, then $\gamma_{mog} = \gamma_{og}$. The coordinates $x_o$, $y_o$, $z_o$ are computed from the spatial matrix updated by the odometry. If there is a new sample from the visual landmark localization system, the obtained coordinates $x_v$, $y_v$, $z_v$ and heading $\gamma_v$ are combined with $x_o$, $y_o$, $z_o$ and $\gamma_{mog}$ into $x_{vo}$, $y_{vo}$, $z_{vo}$ and the heading $\gamma_{vmog}$, respectively. If there is no position from landmark localization available, then $x_{vo} = x_o$, $y_{vo} = y_o$, $z_{vo} = z_o$ and $\gamma_{vmog} = \gamma_{mog}$. The final spatial matrix $T$ is formed from the fused Euler angles $\alpha_{aog}$, $\beta_{aog}$, $\gamma_{vmog}$ and the coordinates $x_{vo}$, $y_{vo}$, $z_{vo}$ using (1). This new spatial matrix will be used by the odometer and gyroscope in the next iteration (see Algorithm 1).

3. Evaluation and experiments

The performance of the proposed fusion algorithm has been evaluated using a simulation based on real error characteristics. If a real wheeled robot were used, its true position would be unknown, therefore the error of the fusion result could not be evaluated. The error characteristics of the real sensors are described in the following sections.

3.1. Used sensors and hardware

For the evaluation, we have selected the integrated 3-axial MEMS gyroscope ITG3205 (see Table 1), the 3-axial MEMS accelerometer ADXL345 (see Table 2) and the 3-axial magnetometer HMC5883L (see Table 3). The initial bias and sensitivity errors are high, but using static calibration it is possible to reduce them to much lower values.

Table 1
Gyroscope ITG3205 [14].

| Parameter | Value |
|---|---|
| Full-scale range | ±2000 degree/s |
| Resolution | 16-bit signed |
| Initial sensitivity error | < 6% |
| Initial bias error | < 60 degree/s |
| Calibrated bias error | < 0.1 degree/s |
| Nonlinearity | 0.2% |
| RMS noise | 0.7 degree/s |
| Sample rate | 200 Hz |

Table 2
Accelerometer ADXL345 [15].

| Parameter | Value |
|---|---|
| Full-scale range | ±40 m/s² (±4 g) |
| Resolution | 11-bit signed |
| Initial sensitivity error | < 10% |
| Initial bias error | < 2.5 m/s² |
| Calibrated bias error | < 0.01 m/s² |
| Nonlinearity | 0.5% |
| RMS noise | 0.11 m/s² |
| Sample rate | 200 Hz |

Table 3
Magnetometer HMC5883L [16].

| Parameter | Value |
|---|---|
| Full-scale range | ±120 µT |
| Resolution | 12-bit signed |
| Initial sensitivity error | < 5% |
| Nonlinearity | 0.1% |
| RMS noise | 2 µT (measured) |
| Sample rate | 15 Hz |

In order to test the odometers, we have used the e-puck robot [17], which is differentially driven by two stepper motors (see Fig. 4). The odometry is then implemented by counting the steps of the motors. The main source of errors is the sliding of the wheels on the floor. Using simple measurements we have estimated the relative error of each wheel to be 2.7% in the given laboratory conditions (a plain rubber floor). Simulations have shown that the true error of the odometry is not Gaussian; it contains only low-frequency components, which is caused mainly by the changing traction of the wheels. The output of the odometer is also quantized with the step

$$\mathrm{step} = \frac{2\pi R_{wheel}}{N_{odo}}, \tag{69}$$

where $N_{odo} = 1000$ is the number of ticks per one turn. The simulated noisy odometric data were generated using the function in Algorithm 2. Visual landmark localization has been implemented inversely: a square landmark formed by an RGB pixel turret (70 × 70 mm) was mounted on top of the robot and the camera was fixed in the environment. The absolute error of the estimated horizontal position has been measured to be approx. RMSE(x) = RMSE(y) = 5 mm and the error of the estimated direction has been RMSE(γ) = 5.8° = 0.10 rad. Since the camera is fixed, the error is independent of the attitude of the robot.

Fig. 4. E-puck robot.

Fig. 5. Example of one simulated random trajectory, the trajectory estimated only by the odometer, and the trajectory estimated by all sensors fused together.

3.2. Results

The performance of the algorithm can be compared by the RMSE of the estimated position and the Euler angles. Each RMSE value represents the average difference between the estimated coordinate (or angle) and its true value over 20 simulated random trajectories (see Fig. 5). Each simulated trajectory lasts 100 s. The simulated characteristics of the sensors are set equal to the above-mentioned characteristics of the real hardware sensors. The performance of our sensor fusion method is compared with an extended Kalman filter (EKF) fusing the mentioned sensors (odometer, gyroscope, accelerometer, magnetometer and visual landmark navigation system) together. The Kalman filter is defined by the following state vector:

$$\mathbf{x} = [x, y, z, \varphi, \theta, \psi]^T, \tag{70}$$

prediction (action) input:

$$\mathbf{u} = [\Delta s_L, \Delta s_R, \omega_x, \omega_y, \omega_z]^T, \tag{71}$$

state prediction function:

$$\mathbf{x}_n = f(\mathbf{x}_{n-1}, \mathbf{u}_n) =
\begin{bmatrix} x_{n-1}\\ y_{n-1}\\ z_{n-1}\\ \varphi_{n-1}\\ \theta_{n-1}\\ \psi_{n-1} \end{bmatrix} +
\begin{bmatrix}
T_{3\times3}\cdot\left[\dfrac{\Delta s_L+\Delta s_R}{2},\; \dfrac{\Delta s_L^2-\Delta s_R^2}{4b},\; 0\right]^T\\[6pt]
\begin{bmatrix} 1 & \sin\varphi\tan\theta & \cos\varphi\tan\theta\\ 0 & \cos\varphi & -\sin\varphi\\ 0 & \dfrac{\sin\varphi}{\cos\theta} & \dfrac{\cos\varphi}{\cos\theta} \end{bmatrix}\cdot
\begin{bmatrix}\omega_x\\ \omega_y\\ \omega_z\end{bmatrix}\Delta t
\end{bmatrix}, \tag{72}$$

normalized accelerometer output function:

$$\mathbf{y}_a = \frac{[a_x, a_y, a_z]^T}{\sqrt{a_x^2 + a_y^2 + a_z^2}}, \qquad \mathbf{h}_a(\mathbf{x}) = T_{3\times3}^{-1}\cdot[0, 0, 1]^T, \tag{73}$$

normalized magnetometer output function:

$$\mathbf{y}_m = \frac{[B_x, B_y, B_z]^T}{\sqrt{B_x^2 + B_y^2 + B_z^2}}, \qquad \mathbf{h}_m(\mathbf{x}) = T_{3\times3}^{-1}\cdot[\cos\delta, 0, \sin\delta]^T, \tag{74}$$

and visual landmark localization output function:

$$\mathbf{y}_v = [x_v, y_v, \psi_v]^T, \qquad \mathbf{h}_v(\mathbf{x}) = [x, y, \psi]^T, \tag{75}$$

where $T_{3\times3}$ is the rotational part of the spatial matrix and $\delta$ is the magnetic inclination (considered to be constant). The state covariance matrix $P$ after the prediction is

$$P \leftarrow F\cdot P\cdot F^T + G\cdot U\cdot G^T, \tag{76}$$

where $F$ is the Jacobian matrix of the function $f$ w.r.t. the state $\mathbf{x}$, $G$ is the Jacobian matrix of $f$ w.r.t. the input $\mathbf{u}$, and $U$ is the covariance matrix of the input vector $\mathbf{u}$. The residual error and its covariance matrix for each output are

$$\mathbf{e}_i = \mathbf{y}_i - \mathbf{h}_i(\mathbf{x}), \tag{77}$$

$$S_i = H_i\cdot P\cdot H_i^T + R_i, \tag{78}$$

where $H_i$ is the Jacobian matrix of the output function $h_i \in \{h_a, h_m, h_v\}$. The Kalman gain is then

$$K_i = P\cdot H_i^T\cdot S_i^{-1}. \tag{79}$$

The state vector is then updated by the $i$-th sensor:

$$\mathbf{x} \leftarrow \mathbf{x} + K_i\cdot\mathbf{e}_i, \tag{80}$$

$$P \leftarrow (I - K_i\cdot H_i)\cdot P. \tag{81}$$


Table 4
RMSE of the estimation at zero slope (β = 0°). Position values in 10⁻³ m, angle values in 10⁻³ rad.

| Used sensors | RMSE(xyz) EKF | RMSE(xyz) Our method | RMSE(α) EKF | RMSE(α) Our method | RMSE(β) EKF | RMSE(β) Our method | RMSE(γ) EKF | RMSE(γ) Our method |
|---|---|---|---|---|---|---|---|---|
| Odometer only | 14.8 | 14.8 | 0 | 0 | 0 | 0 | 68.5 | 68.5 |
| Odometer + gyro | 10.0 | 9.2 | 13.6 | 3.8 | 13.3 | 3.7 | 50.6 | 47.7 |
| Odometer + gyro + accel. | 9.1 | 9.2 | 1.9 | 2.3 | 1.8 | 2.4 | 40.6 | 47.7 |
| Odometer + gyro + accel. + mag. | 2.8 | 8.2 | 1.9 | 2.3 | 1.8 | 2.4 | 15.8 | 41.7 |
| Odometer + gyro + accel. + mag. + landmarks | 2.3 | 4.4 | 1.9 | 2.3 | 1.8 | 2.4 | 15.8 | 41.3 |

Table 5
RMSE of the estimation at high slope (β = 30°). Position values in 10⁻³ m, angle values in 10⁻³ rad.

| Used sensors | RMSE(xyz) EKF | RMSE(xyz) Our method | RMSE(α) EKF | RMSE(α) Our method | RMSE(β) EKF | RMSE(β) Our method | RMSE(γ) EKF | RMSE(γ) Our method |
|---|---|---|---|---|---|---|---|---|
| Odometer only | 82.9 | 82.9 | 375.7 | 375.7 | 368.6 | 368.6 | 90.2 | 90.2 |
| Odometer + gyro | 81.6 | 81.9 | 373.0 | 347.8 | 366.6 | 367.8 | 76.4 | 74.0 |
| Odometer + gyro + accel. | 7.3 | 9.5 | 2.6 | 3.4 | 4.3 | 4.3 | 43.6 | 48.5 |
| Odometer + gyro + accel. + mag. | 2.6 | 8.3 | 2.6 | 3.4 | 4.3 | 4.3 | 14.6 | 41.9 |
| Odometer + gyro + accel. + mag. + landmarks | 2.5 | 4.3 | 2.6 | 3.4 | 4.3 | 4.3 | 14.5 | 41.5 |

Table 6
Computational time.

| Method | Computational time [s] |
|---|---|
| EKF | 17.33 |
| Our method | 4.01 |

As can be seen in Table 4, the gyroscope significantly improves the estimation of the heading, which results in a lower RMSE of the position, calculated by the following formula:

$$\mathrm{RMSE}(xyz) = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left[(x[n]-x_{true}[n])^2 + (y[n]-y_{true}[n])^2 + (z[n]-z_{true}[n])^2\right]}, \tag{82}$$

where $x_{true}$, $y_{true}$, $z_{true}$ are the true coordinates of the robot and $x$, $y$, $z$ are the estimated ones. The notation $[n]$ means the $n$-th sample. Since the odometry does not consider the slope, at zero slope it is correct, but at higher slopes the estimation of the roll α and pitch β is erroneous (see Table 5). This is significantly improved by the accelerometer. Another small improvement is achieved by the magnetic compass, which lowers only the error of the estimated heading γ. Localization by landmarks significantly improves only the precision of the estimated position; the precision of the estimated Euler angles is maintained. If the heading estimated by landmark localization were more precise, it would also affect the precision of the fusion result.
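Eq. (82) can be evaluated directly from the logged trajectories; the following C++ sketch is an illustrative implementation with assumed container types.

```cpp
#include <cmath>
#include <vector>

// Illustrative computation of the positional RMSE from Eq. (82); est and truth hold
// N samples of the estimated and true coordinates (x, y, z).
struct Point3 { double x, y, z; };

double rmseXyz(const std::vector<Point3>& est, const std::vector<Point3>& truth) {
    double sum = 0.0;
    const std::size_t N = est.size();          // assumes est.size() == truth.size()
    for (std::size_t n = 0; n < N; ++n) {
        const double dx = est[n].x - truth[n].x;
        const double dy = est[n].y - truth[n].y;
        const double dz = est[n].z - truth[n].z;
        sum += dx * dx + dy * dy + dz * dz;
    }
    return std::sqrt(sum / static_cast<double>(N));
}
```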

In some cases our method provides worse precision than the EKF, but its main advantage is its computational effectiveness. The comparison of the computational time required by our fusion method and by the EKF is shown in Table 6. Both algorithms were implemented in C++ with optimization. Each algorithm processed 10 000 s of recorded data on an Intel Core i5-8400 2.8 GHz CPU.

4. Conclusion

The proposed method of sensor fusion between the odometer, gyroscope, accelerometer, magnetometer and landmark navigation appears to be especially suitable for applications where a precise estimate of the position of a mobile robot is required. It significantly (more than 6 times) decreases the error of the position estimated only by the odometer under the experimental conditions (100 s of random movement); the achieved RMS error


is below 5 mm. It can be assumed that the error of the estimated position is proportional to the maximal speed of the robot. The method fully supports different sampling rates of each sensor; the output rate of the method is equal to the highest sampling rate (usually the sampling rate of the inertial measurement unit integrating the gyroscope and accelerometer). The system is also resistant to the failure of any sensor except for the odometer: if a sensor is unavailable, it is simply not considered. According to the results obtained by simulation, using the full set of sensors is important when the terrain is uneven (non-zero, changing slope). Our method provides a computationally more effective alternative to the extended Kalman filter: it consumes approx. 4 times less computational time than the EKF.

Acknowledgment

This work was supported by the Slovak Research and Development Agency, Slovak Republic, under project no. APVV-16-0006: 'Automated robotic assembly cell as an instrument of concept Industry 4.0'.

References

[1] P. Goel, S.I. Roumeliotis, G.S. Sukhatme, Robust localization using relative and absolute position estimates, in: Proc. 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea, Oct. 17–21, 1999, pp. 1134–1140.
[2] L. Jetto, S. Longhi, D. Vitali, Localization of a wheeled mobile robot by sensor data fusion based on a fuzzy logic adapted Kalman filter, Control Eng. Pract. 7 (6) (1999) 763–771.
[3] G. Li, D. Quin, H. Ju, Localization of wheeled mobile robot based on extended Kalman filtering, in: International Conference on Engineering Technology and Application, ICETA 2015, MATEC Web of Conferences, vol. 22, 2015.
[4] N. Zikos, V. Petridis, Generative simultaneous localization and mapping (G-SLAM), CoRR (2016).
[5] L. Teslic, I. Skrjanc, G. Klancar, EKF-based localization of a wheeled mobile robot in structured environments, J. Intell. Robot. Syst. (2010), http://dx.doi.org/10.1007/s10846-010-9441-8.
[6] Ch. Smaili, M.E. El Najjar, F. Charpillet, Multi-sensor fusion method using dynamic Bayesian network for precise vehicle localization and road matching, CoRR (2007) 0709.1099, https://doi.org/10.1109/ICTAI.2007.70.
[7] B.S. Cho, W. Moon, W.J. Seo, K.R. Baek, A study on localization of the mobile robot using inertial sensors and wheel revolutions, in: S. Jeschke, H. Liu, D. Schilberg (Eds.), Intelligent Robotics and Applications, ICIRA 2011, Lecture Notes in Computer Science, vol. 7101, Springer, Berlin, Heidelberg, http://dx.doi.org/10.1007/978-3-642-25486-4_57.
[8] T.T. Hoang, P.M. Duong, N.T.T. Van, D.A. Viet, T.Q. Vinh, Multi-sensor perceptual system for mobile robot and sensor fusion-based localization, in: 2012 International Conference on Control, Automation and Information Sciences, ICCAIS, http://dx.doi.org/10.1109/ICCAIS.2012.6466599.
[9] M. Betke, L. Gurvits, Mobile robot localization using landmarks, IEEE Trans. Robot. Autom. 13 (2) (1997) 251–263, http://dx.doi.org/10.1109/70.563647.
[10] A. Martinelli, R. Siegwart, Estimating the odometry error of a mobile robot during navigation, in: European Conference on Mobile Robots, ECMR, 2003.
[11] P. Papcun, et al., Control and teleoperation of robot Khepera via Android mobile device through Bluetooth and WiFi, in: IFAC Conference on Programmable Devices and Embedded Systems PDES 2016, Brno, Czech Republic, 5–7 October 2016, IFAC-PapersOnLine 49 (25) (2016) 188–193, http://dx.doi.org/10.1016/j.ifacol.2016.12.032.
[12] D. Nemec, A. Janota, M. Hruboš, V. Šimák, Intelligent real-time MEMS sensor fusion and calibration, IEEE Sens. J. 16 (19) (2016) 7150–7160, http://dx.doi.org/10.1109/JSEN.2016.2597292.
[13] A. Janota, V. Šimák, D. Nemec, J. Hrbček, Improving the precision and speed of Euler angles computation from low-cost rotation sensor data, Sensors 15 (3) (2015) 7016–7039.
[14] InvenSense, ITG-3205 Product Specification, datasheet, document PS-ITG-3205A-00, revision 1.0, released 16.8.2010.
[15] Analog Devices, Digital Accelerometer ADXL345, datasheet, revision 0, released 2009.
[16] Honeywell, 3-Axis Digital Compass IC HMC5883L, datasheet, revision A, released 2010.
[17] D. Floreano, S. Mitri, J. Hubert, E-puck: A robotic platform for studying the evolution of communication, in: S. Nolfi, M. Mirolli (Eds.), Evolution of Communication and Language in Embodied Agents, Springer, 2009.


Dušan Nemec was born in Žilina, Slovakia, in 1991. He received the MSc. degree in automation from the University of Žilina in 2015. He is currently pursuing the Ph.D. degree in automation with the Department of Control and Information Systems, Faculty of Electrical Engineering, University of Žilina. His research area is focused on mobile robotics and sensor systems, especially inertial navigation and flying mobile robots. In 2015, he received the Laureate award: ‘‘Student personality of Slovakia for the academic year 2014/2015 in the area of electrical engineering and industrial technologies’’.

Vojtech Šimák was born in Žilina, Czechoslovakia, in 1980. He received the M.S. degree in information and safety-related systems and the Ph.D. degree in control engineering from the Faculty of Electrical Engineering, University of Žilina (UNIZA), Slovakia, in 2004 and 2008, respectively. During his Ph.D. study, he stayed for five months with the Control Engineering Laboratory, Helsinki University of Technology, in 2007. From 2007 to 2009, he was a Researcher with the Department of Industrial Engineering, Faculty of Mechanical Engineering, UNIZA. Since 2009, he has been with the Department of Control and Information Systems, Faculty of Electrical Engineering, UNIZA, Slovakia. His research interests include industrial and mobile robotics and positioning systems, particularly navigation, inertial systems and computer vision.

Aleš Janota was born in 1963. He received the M.Sc. degree from the Technical University of Transport and Communications, Žilina, Czechoslovakia, in 1981 and the degree in telecommunications from the University of Žilina (UNIZA), Slovakia, in 1998. From 2003 to 2009, he was an Assistant Professor of Information and Safety-Related Systems. Since 2010, he has been a Professor of Control Engineering with the Department of Control and Information Systems, UNIZA. His research interests include sensors, artificial intelligence, and intelligent transportation systems. From 2010 to 2014, he was a National Delegate with the Domain Committee for Transport and Urban Development of the COST program.

Marián Hruboš was born in 1987 in Martin, Slovakia. He is currently a Post-Doctoral Researcher with the Department of Control and Information Systems, Faculty of Electrical Engineering, University of Žilina (UNIZA). His research is focused on industrial automation, electrical engineering, and programming, particularly on data fusion from multiple sensors for creation of 3-D macrotextured models of the real spaces and their use in both virtual and real applications. During the B.Sc. and MSc. studies, he was with the Institute of Competitiveness and Innovations, UNIZA. He was awarded a Certificate of Merit from the Association of Electrotechnical Industry of the Slovak Republic as a Designer of the Year 2014.

Ing. Emília Bubeníková, PhD., works with the Department of Control and Information Systems, Faculty of Electrical Engineering, University of Žilina. Her research and educational activities are related to information and signal theory, the theory of automatic control, and digital signal processing methods. She deals with image processing, with special attention paid to the design of new algorithms for line detection usable in transport applications.