Copyright © IFAC Robot Control, Vienna, Austria, 2000
USING REDUNDANCY TO PERFORM A VISION-BASED TASK AMIDST OBSTACLES

Viviane Cadenat, Philippe Soueres, Michel Courdesses

LAAS-CNRS, 7 Av. du Colonel Roche, 31077 Toulouse Cedex 04, FRANCE
email: {cadenat, soueres, courdess}@laas.fr

Abstract: The problem of controlling a nonholonomic mobile robot amidst obstacles is addressed in this paper. Two control laws, defined on the base of visual and telemetric data, are presented. They rely on the redundancy of the robot with respect to the vision-based task to perform simultaneously the tracking and the avoidance movement. Both techniques are compared and their feasibility is discussed. Finally, simulation results for each method are given at the end of the paper. Copyright ©2000 IFAC

Keywords: Robot control, Robot navigation, Robot vision, Mobile robots, Visual motion.
1. INTRODUCTION
Visual servoing techniques developed in (Espiau et al. 1992) rely on the regulation to zero of the error between what the robot really sees and what it wants to see when the desired task is perfectly performed. Initially developed for manipulators (Hager and Hutchinson 1996), they have been extended to control nonholonomic mobile robots by adding degrees of freedom allowing the camera to move independently from the base (Pissard-Gibollet and Rives 1995). Nonetheless, a navigation task amidst obstacles requires to consider not only the visual features but also proximetric data. A first way to deal with robot navigation is to plan a realizable path, and to use a path following technique. In this case, the control laws are expressed in the configuration space, and a localization step is required (Kosaka and Kak 1992) (Ohya et al. 1998). Other authors propose to treat this problem using potential field techniques (Khatib 1986) (Khatib and Chatila 1995). These works demonstrate the interest of taking into account different kinds of data at the same level when designing control strategies. Following this idea, several strategies allowing to perform sensor-based navigation tasks in cluttered environments have been proposed (Cadenat et al. 1999a) (Cadenat et al. 1999b) (Swain et al. 1998). In these works, the robot was not redundant with respect to the tracking task, and it was necessary to switch to another control law for driving the base near the obstacles.

Here, two control strategies are presented. They take advantage of the system redundancy with respect to the nominal vision task to perform simultaneously obstacle avoidance and target tracking. The vision task consists of positioning the camera in front of a squared fixed target of four points. Introducing redundancy to the system can be done either by defining a low-dimensioned vision-based task, or by increasing the mobility of the mechanical structure of the robot.
2. MODELING AND PROBLEM STATEMENT
In this work, the model of a mobile base equipped with a camera mounted on a 6 dof-arm is considered (figure 1). The objective is to elaborate different control strategies allowing the robot to position its camera in front of a fixed target and to avoid stationary obstacles. To avoid target occlusions, the obstacles are supposed to be smaller than the target. A configuration of the mobile base with respect to the world frame F is denoted by [x y θ]^T. Dx represents the distance between M and O0, while F_M(M, x_M, y_M, z_M) and F_C(C, x_C, y_C, z_C) are two frames respectively linked to the robot and to the camera. As the target height is supposed to be equal to the camera optical center height, planar movements suffice to perform our visual task. That is why the 6 dof-arm is fixed in a configuration which simulates the behaviour of a planar manipulator, as shown in figure 2.

Fig. 1. Mobile base with a manipulator arm

Fig. 2. A configuration of the arm

Only the first and fifth joints q1 and q5 of the arm will be controlled. The frames attached to each link of the arm (see figure 2) are denoted by F_j(O_j, x_j, y_j, z_j). q1 and q5 are respectively defined by the oriented angles (x0, x1) and (x4, x5) about z0 and z4. The control input of the whole mechanical system is defined by q̇ = [v ω q̇1 q̇5]^T, where v and ω are the linear and angular velocities of the cart. The state vector q is then given by [s θ q1 q5]^T (s representing the curvilinear abscissa of the reference point M of the robot). Let T_C = [V_{C/F}, Ω_{C/F}]^T be the kinematic screw representing the translational and rotational velocity of F_C with respect to F, expressed in frame F_C. T_C is related to the joint velocity vector by the robot jacobian matrix J. As the camera is constrained to move horizontally, it is sufficient to consider a reduced kinematic screw T_red = [V_{xC} V_{zC} Ω_{yC}]^T and a reduced jacobian matrix J_arm. T_red is then given by:

T_red = J_arm q̇    (1)

where the coefficients a4, ..., a9 appearing in J_arm are defined as follows:

a4 = (Dx + d2) cos q5 + a1 cos(q1 − q5) + a2 sin(q1 − q5),
a5 = (d1 + d2) cos q5 + a1 cos(q1 − q5) + a2 sin(q1 − q5),
a6 = −a2 sin(q1 − q5) − a1 cos(q1 − q5),
a7 = −(Dx + d2) sin q5 + a1 sin(q1 − q5) − a2 cos(q1 − q5),
a8 = −(d1 + d2) sin q5 + a1 sin(q1 − q5) − a2 cos(q1 − q5),
a9 = a2 cos(q1 − q5) − a1 sin(q1 − q5),

where a1 and a2 represent the coordinates of O6C in frame F6, while the distances d_i are defined as in figure 2. As J_arm is a 3×4 matrix, the three degrees of freedom of the camera are controlled by four actuators. The robot is therefore redundant with respect to the positioning of F_C. As previously mentioned, in addition to the CCD camera, the robot is equipped with a 2D laser rangefinder. On the base of the laser signal, a functional module computes a pair of data (d, α) characterizing the closest obstacle: d is the signed distance between the robot reference point M and the closest point Q on the obstacle, and α is the angle between the tangent T to the obstacle at Q and the robot direction z_M (see figure 3). For the problem to be well stated, the distance between two obstacles is supposed to be greater than 2d+. This allows to consider each obstacle independently.

3. VISUAL SERVOING AND REDUNDANCY

The visual servoing technique considered here has been introduced in (Espiau et al. 1992) and relies on the task function formalism (Samson et al. 1991). A task is defined by the regulation of a task function e to zero. A sufficient condition for the control problem to be well-conditioned is that e be ρ-admissible, insuring the existence of a diffeomorphism between the task space and the state space around the unique ideal trajectory q_r corresponding to e = 0. The target is made of four points whose coordinates (X_i, Y_i) (i = 1...4) define an 8-dimensional vector of visual signals s in the camera plane. At each configuration of the robot, the variation of the signals is related to the kinematic screw T_red by means of the 8×3 interaction matrix L_red as follows:
ṡ = L_red T_red    (2)

where L_red is given by the optic flow equations (Espiau et al. 1992).
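Equation (2) can be made concrete with the classical point-feature optic-flow equations. The sketch below stacks, for each target point, the rows of the interaction matrix restricted to the three components of the reduced screw. It is a generic reconstruction from the standard visual-servoing literature, not the authors' code: the screw ordering [V_x, V_z, Ω_y] and a common depth z for the four points are assumptions.

```python
import numpy as np

def reduced_interaction_matrix(points, z):
    """8x3 interaction matrix L_red for four normalized image points.

    Rows follow the classical point-feature optic-flow equations, keeping
    only the columns corresponding to the planar reduced screw, assumed
    here to be ordered [V_x, V_z, Omega_y]. `z` is the common point depth.
    """
    rows = []
    for X, Y in points:
        rows.append([-1.0 / z, X / z, -(1.0 + X * X)])  # Xdot equation
        rows.append([0.0,      Y / z, -X * Y])          # Ydot equation
    return np.array(rows)

# A square target of four points facing the camera (illustrative values):
s_star = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
L_red = reduced_interaction_matrix(s_star, z=1.0)  # 8x3, rank 3
```

As in the paper, L_red can be frozen at its value for s = s*, so this matrix would be computed once at the desired configuration and reused during the whole task.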
Consider the following task function e(q(t)):
e(q(t)) = C(s(q(t)) − s*)    (3)
where s* is the constant value of the visual signals corresponding to the perfect realization of the task. As the target is fixed, s depends only on q(t). C is a constant, full rank, 3×8 matrix. As it depends on the 3D information, which is difficult to estimate on line, L_red will be taken constant and equal to its value when s = s* during the whole task execution, as in (Espiau et al. 1992) (Swain et al. 1998). In this case, L_red can be proven to be of full rank 3. Therefore, a positioning task constrains the three degrees of freedom of the camera defined by T_red. As the jacobian ∂e/∂q^T = C L_red J_arm is a 3×4 matrix, e is not a ρ-admissible task function. The robot is then redundant with respect to the positioning task and an infinity of ideal trajectories q_r correspond to the regulation of e to zero. The problem is under-constrained, and further constraints have to be introduced to define the motion in a unique way. See (Samson et al. 1991) for more details.
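The dimension count behind this redundancy argument can be checked numerically. The snippet below uses random stand-in matrices with the dimensions stated above (the actual entries depend on the robot and the target and are not reproduced here): the 3×4 task jacobian C L_red J_arm leaves a one-dimensional null space, i.e. one internal motion unconstrained by the positioning task.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins with the paper's dimensions (random values, illustration only):
C = rng.standard_normal((3, 8))       # constant full-rank combination matrix
L_red = rng.standard_normal((8, 3))   # interaction matrix
J_arm = rng.standard_normal((3, 4))   # reduced robot jacobian

J_task = C @ L_red @ J_arm            # de/dq^T, a 3x4 matrix
rank = np.linalg.matrix_rank(J_task)
dof_left = J_task.shape[1] - rank     # dimension of the null space
# rank == 3 and dof_left == 1: e cannot be rho-admissible on its own,
# one degree of freedom remains to be fixed by an extra constraint.
```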
4. GETTING REDUNDANCY FROM THE VISION-BASED TASK

This strategy is based on the definition of a low-dimensioned vision-based task allowing to increase the redundancy of the robot, so as to execute simultaneously the tracking and the avoidance movement in the vicinity of an obstacle. The vision-based task will simply be defined as orientating the camera towards the target. To this end, the previous mechanical system is reduced to a simple pan-platform: the fifth joint q5 is fixed to zero and only q1 is controlled. q and q̇ are then reduced to [s θ q1]^T and [v ω q̇1]^T. The corresponding jacobian J_pl is obtained from equation (1) by suppressing the fourth column of J_arm and setting q5 to zero in the expressions of the a_i. J_pl is then a 3×3 matrix. The vision-based task is defined by the following one-dimensional task function:

e2(q(t)) = I (s(q(t)) − s*)    (4)

where I = 1/4 [0 1 0 1 0 1 0 1]. This task function represents the ordinate of the error between the projection of the center of gravity of the target when the camera is at the desired position and when it is far from it. Its regulation makes the camera face the target, guaranteeing that the visual features will not be lost during the movement of the robot. Classically, the control input is deduced from the exponential decay of a ρ-admissible task function (Espiau et al. 1992) (Swain et al. 1998) (Cadenat et al. 1999a) (Cadenat et al. 1999b). Here, only ω and q̇1 are deduced from the regulation of sensory data, while v is determined at another level of control, allowing to avoid actuator saturation during obstacle avoidance. A reduced control vector q̇' = [ω q̇1]^T is defined, while the linear velocity is specified as follows: in the free space, v is regulated to its maximal value after an acceleration phase; in the vicinity of the obstacles, v is reduced to a security value v_sec to avoid saturation around the obstacle. With the external definition of v and the choice of q̇1, two degrees of freedom out of three are fixed. The remaining degree of freedom will be fixed depending on the distance between the robot and the closest obstacle.

Phase 1: Far from the obstacles: v is assumed to be fixed to v_max. The strategy consists of imposing the linear velocity v1 of the first joint center of rotation O1 to be directed along the x1-axis, to make the base move towards the target. Let φ = q1 − π/2 be the angle between x_M and x1. As v = v1 cos(φ), ω is given by:

ω = −(v/Dx) tan(φ)    (5)

This definition of ω requires |φ| < π/2. A simple way to avoid this singularity is to constrain the motion of O1 progressively: at the beginning of the task, v1 is orientated along the bisector of φ, then its direction is continuously modified to become rapidly tangent to the x-axis. Introducing this transition, together with the exponential decay ė2 = −λ2 e2 of the task function, the two constraints on q̇' can be written as a linear system (6)-(7):

A q̇' = b

Assuming that A is invertible¹, the reduced control vector q̇' = [ω q̇1]^T is expressed by:

q̇' = A⁻¹ b    (8)

Phase 2: Close to the obstacles: Three envelopes ξ+, ξ0 and ξ− are defined. They are respectively located at d+, d0 and d− from the obstacles (figure 3).
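The convex combination of the two reduced controllers used in the control strategy (equation (10)) needs the blending coefficient μ(d). The paper states only that μ is a third-degree polynomial in d, equal to 0 beyond one envelope and to 1 inside the other; the cubic "smoothstep" below is one polynomial satisfying these boundary conditions (with zero slope at both ends), given as an illustrative choice rather than the authors' exact profile.

```python
import numpy as np

def mu(d, d_minus, d0):
    """Blending coefficient: 0 for d >= d0, 1 for d <= d_minus, and a
    third-degree polynomial (C^1 smoothstep) in between -- one possible
    cubic matching the boundary conditions stated in the paper."""
    t = np.clip((d0 - d) / (d0 - d_minus), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def blended_control(qdot_vs, qdot_av, d, d_minus, d0):
    """Convex combination of the two reduced controllers, as in eq. (10)."""
    m = mu(d, d_minus, d0)
    return (1.0 - m) * np.asarray(qdot_vs) + m * np.asarray(qdot_av)
```

Because μ and its derivative vanish at the boundaries, the resulting control input commutes smoothly from the tracking controller to the avoidance controller.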
¹ A direct computation of
Remark 1. The direction to pass round the obstacle is determined by the orientation of the robot with respect to the obstacle when it is detected (d ≤ d+). The direction of the unit vector T is chosen such that the angle α between z_M and T belongs to [−π/2, π/2]. As soon as the robot enters the zone delimited by ξ+, v is continuously reduced by the external loop so as to reach the security value v_sec on ξ0. On the other hand, as soon as d ≤ d−, q̇' is calculated such that v1 be directed along T. As before, a short transition phase is introduced to smooth the motion and to avoid the singularity α = π/2, by constraining the point O1 to move along the direction defined by η = α(1 − e^{−(t−t0)/2}). Defining the corresponding linear system and inverting it as in phase 1 yields the avoidance controller:

q̇'_AV    (9)

Control strategy: To design a continuous global control law, a convex combination of controllers (8) and (9) is used to commute smoothly from one phase to the other, as in (Cadenat et al. 1999a) (Cadenat et al. 1999b):

q̇' = (1 − μ(d)) q̇'_vs + μ(d) q̇'_AV    (10)

where q̇'_vs and q̇'_AV are the visual servoing and avoidance control laws, and μ(d) is a third degree polynomial in d: μ(d) = 0 so long as d > d0, μ(d) varies continuously from 0 to 1 between ξ0 and ξ−, and μ(d) = 1 when d < d−. The avoidance phase ends when the camera direction becomes tangent to the trajectory of the mobile base (q1 = π/2).

Fig. 3. Mobile frame and envelops

Fig. 4. The control strategy

5. GETTING REDUNDANCY FROM THE MECHANICAL STRUCTURE

The constraint on q5 is now relaxed. The vision-based tracking task is defined by (3), and the robot becomes redundant. Two primary objectives are defined: the first one realizes the sole vision-based task in the free space; the second one performs both the vision-based task and the obstacle avoidance when necessary. As previously, the control strategy requires to switch from one objective to the other depending on the distance between the robot and the obstacle. However, the control law is designed by mixing these objectives at the task level, and not at the control level as in section 4.

Phase 1: Far from the obstacles: In this case, the following task function e_vs is considered:

e_vs = (e^T, q5 − q5*)^T    (11)

where q5* represents the desired configuration of the fifth joint of the arm. Its jacobian matrix J_vs = ∂e_vs/∂q^T is given by:

J_vs = ((C L_red J_arm)^T, J_q5^T)^T, where J_q5 = [0 0 0 1]    (12)

As J_vs is always regular (its determinant is equal to −Dx), this task function is always ρ-admissible. It allows² to realize both the positioning of the camera and the regulation of q5 to q5*.

Phase 2: Close to the obstacles: The robot will pass round the obstacle by following a security envelope ξ0 located at a constant distance d0. The following task function is defined:

e_AV = (e^T, d − d0)^T    (13)

Nonetheless, the distance d cannot be described as the distance between M and Q as in figure 3. Indeed, with such a definition, as ḋ = v sin α, the task jacobian J_AV is not of full rank if α = 0. Therefore, the task is no more ρ-admissible. This problem is mainly related to the nonholonomic constraint of the mobile base. To overcome it, it suffices to define d as the distance between any holonomic point of the mobile base (for example, O0) and the closest point on the obstacle. J_AV is then given as follows:

J_AV = ((C L_red J_arm)^T, J_d^T)^T, where J_d = [sin α  −D cos α  0  0]

Remark that J_AV is singular when sin(q1 + α) = 0, that is when the arm becomes perpendicular to the followed envelope during the avoidance phase. In this case, it is physically impossible for the robot to perform simultaneously the vision-based task and the passing-round movement. It is then necessary to execute sequentially these two tasks, as in (Cadenat et al. 1999a) (Cadenat et al. 1999b) or (Tsakiris et al. 1997).

Control strategy: The control design relies on the mixing of e_vs and e_AV at the task level. This approach consists of defining a global task e' before designing a controller allowing to make it decrease to zero. The global task function e' is then defined by:

e' = (1 − μ(d)) e_vs + μ(d) e_AV    (14)

where e_vs and e_AV are respectively given by equations (11) and (13), and μ ∈ [0, 1] is a scalar function depending on d (see figure 4). When the robot is far from the obstacles (d > d0), μ equals zero and only e_vs is performed. When it is close to it (d < d0), μ jumps to 1, and e_AV is realized. The passing-round sense is then the one which allows to satisfy the decrease of both e and d − d0. To avoid discontinuities, a second envelope ξ+ surrounding the obstacle at a constant distance d+ = d0 + ε is introduced. Between ξ0 and ξ+, μ varies continuously from 0 to 1 as a third degree polynomial in d. To regulate e' to zero, an exponential decrease is imposed:

ė' = −λ e', with λ > 0    (15)

which leads to the control law:

q̇ = −λ J1⁻¹ e'    (16)

where J1 = (1 − μ)J_vs + μJ_AV + (e_AV − e_vs)J_μ, with J_μ = ∂μ/∂q^T. Note that the final control law involves the derivative of μ, which must itself be continuous. The control law (16) allows to guarantee the feasibility of the task. Indeed, actuator saturation can be avoided by choosing λ sufficiently small. Moreover, the exponential decrease of d towards d0 insures non-collision, as d will never exactly reach d0 with such a dynamic. Finally, as the positioning task is performed even during the obstacle avoidance, the target cannot be lost. However, the main limitation of this approach comes from the necessary compatibility of the vision-based task and the obstacle avoidance. The positioning task fixes three degrees of freedom out of the four available, and constrains the camera to move on a "virtual rail" perpendicular to the target and passing through its center. The passing-round motions are then extremely reduced and limited by the length of the manipulator arm. Therefore, only small obstacles will be avoided using this method.

² The objective in this paper is not to use at best the redundancy of the robot in the free space, but to define two global ρ-admissible tasks which can be mixed consistently at the task level.

Fig. 5. Robot trajectory: method 1
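The control law (16) is straightforward to assemble once the two elementary task functions and their jacobians are available. The sketch below follows equations (14) and (16) directly; all quantities are assumed to be already evaluated at the current state, and the function is an illustrative reconstruction rather than the authors' implementation.

```python
import numpy as np

def global_task_control(e_vs, e_av, J_vs, J_av, J_mu, m, lam):
    """qdot = -lam * J1^{-1} e'  (eq. 16), with
    e' = (1 - m) e_vs + m e_av                            (eq. 14)
    J1 = (1 - m) J_vs + m J_av + outer(e_av - e_vs, J_mu)
    where J_mu is the 1x4 gradient of mu with respect to the state."""
    e_prime = (1.0 - m) * e_vs + m * e_av
    J1 = (1.0 - m) * J_vs + m * J_av + np.outer(e_av - e_vs, J_mu)
    return -lam * np.linalg.solve(J1, e_prime)
```

Monitoring det(J1) along the trajectory, as done in figure 13, signals when the positioning task and the passing-round movement stop being compatible (J1 close to singular).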
Fig. 6. Image features: method 1
Fig. 7. v with method 1
Fig. 8. ω with method 1
Fig. 9. q̇1 with method 1
6. SIMULATION RESULTS

1- Simulations for the first method: The initial configuration of the robot is q^T = [−15 0 …]. The obstacle is a vertical cylinder of radius 1 m centered at the origin (0, 0). The reference distances are d+ = 5 m, d0 = 3 m and d− = 2 m. The maximal velocities are respectively: v_max = 0.9 m/s, ω_max = 1 rad/s, and q̇1_max = 2 rad/s. The security velocity is v_sec = 0.5 m/s. Figures 5 and 6 represent respectively the trajectory of the robot and the evolution of the image features. So long as the robot is outside the zone delimited by ξ0, the visual servoing control steers the robot. As soon as d ≤ d+, the linear velocity decreases for the robot to reach ξ− with the specified security velocity v_sec. At that time, as μ equals 1, the avoidance starts and the robot follows the envelop. Figures 7, 8 and 9 represent the velocity profiles, showing that no saturation occurs during the execution of the task. The avoidance phase stops when the camera axis becomes tangent to the envelop. Finally, when the robot enters a vicinity of the target, defined by |e2| < ε and free of obstacles, the full rank task (3) is executed to position the camera perpendicularly to the target.

2- Simulations for the second method: The initial configuration of the robot is q^T = [0.77 0 90 0]. The reference distances are d0 = 0.6 m, d+ = 0.7 m, and d− = 0.5 m. The maximal bounds on the velocities are: v < 0.9 m/s, ω < 1 rad/s, q̇1 < 1.4 rad/s, and q̇5 < 2 rad/s. Finally, λ = 0.1. The robot trajectory and the image features are respectively represented on figures 10 and 11. When the robot is outside the dangerous zone (d > d+), μ is fixed to 0 (see figure 12), and the vehicle executes the vision-based task while maintaining q5 at q5*. When the vehicle enters a neighbourhood of the obstacle (d < d+), μ increases and tends to 1 as the robot approaches ξ0. During this phase, thanks to the redundancy, the arm compensates for the avoidance movement so as to keep performing the positioning task perfectly while the robot passes round the obstacle. The evolution of det(J1) represented on figure 13 shows that the visual servoing task and the passing-round movement remain compatible during the whole execution of the mission. Finally, when the obstacle is no more dangerous for the vehicle, μ decreases progressively and vanishes when the robot leaves the zone surrounded by ξ+. Note that once again the control inputs do not saturate during the mission (see figures 14, 15, 16 and 17).
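The external velocity loop of Remark 1 can be sketched with the first method's reference values. The reduction profile between the outer envelope and the security envelope is not given in the paper, so a linear interpolation is assumed here.

```python
def linear_velocity(d, d_plus, d0, v_max, v_sec):
    """External loop on v: v_max outside xi_+, v_sec on and inside xi_0,
    and a continuous reduction in between (profile assumed linear)."""
    if d >= d_plus:
        return v_max
    if d <= d0:
        return v_sec
    return v_sec + (d - d0) / (d_plus - d0) * (v_max - v_sec)

# Method-1 reference values: d+ = 5 m, d0 = 3 m, v_max = 0.9, v_sec = 0.5
profile = [linear_velocity(d, 5.0, 3.0, 0.9, 0.5) for d in (6.0, 5.0, 4.0, 3.0, 2.0)]
```

Any continuous monotone profile would do; what matters for the feasibility argument is that v reaches v_sec before the avoidance controller takes over, so that the bounded inputs do not saturate.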
Fig. 10. Robot trajectory: method 2
Fig. 11. Image features: method 2
Fig. 12. Parameter μ with method 2
Fig. 13. det(J1) with method 2
Fig. 14. v with method 2
Fig. 15. ω with method 2
Fig. 16. q̇1 with method 2
Fig. 17. q̇5 with method 2

CONCLUSION

Two sensor-based control laws have been proposed in this paper. They rely on the robot redundancy and allow to position the camera in front of a target while avoiding obstacles. In the first method, the redundancy comes from the definition of the vision-based task, while in the second one it is obtained from the mechanical structure of the robot. Both techniques provide satisfactory results, although the second one has proven to be limited by the compatibility condition, which prevents the robot from avoiding large obstacles. In this case, better results could have been obtained with a less stringent vision-based task.

REFERENCES

Cadenat, V., P. Soueres and M. Courdesses (1999a). An hybrid control for avoiding obstacles during a vision-based tracking task. In: Proceedings of the European Control Conference (ECC'99). Karlsruhe, Germany.
Cadenat, V., R. Swain, P. Soueres and M. Devy (1999b). A controller to perform a visually guided tracking task in a cluttered environment. In: Proceedings of the International Conference on Intelligent Robots and Systems (IROS'99). Kyongju, Korea.
Espiau, B., F. Chaumette and P. Rives (1992). A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation.
Hager, G.D. and S. Hutchinson (1996). Vision-based control of robotic manipulators. Special section of IEEE Transactions on Robotics and Automation (12), 649-774.
Khatib, M. and R. Chatila (1995). An extended potential field approach for mobile robot sensor-based motions. In: Proceedings of the 4th International Conference on Intelligent Autonomous Systems (IAS'95). Karlsruhe, Germany.
Khatib, O. (1986). Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research.
Kosaka, A. and A. Kak (1992). Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties. CVGIP - Image Understanding.
Ohya, A., A. Kosaka and A. Kak (1998). Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing. IEEE Transactions on Robotics and Automation.
Pissard-Gibollet, R. and P. Rives (1995). Applying visual servoing techniques to control a mobile hand-eye system. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'95). Nagoya, Japan. pp. 166-171.
Samson, C., M. Le Borgne and B. Espiau (1991). Robot Control: The Task Function Approach. Clarendon Press. Oxford, England.
Swain, R., M. Devy and V. Cadenat (1998). Controlling the execution of a visual servoing task. In: Proceedings of the Symposium on Intelligent Robotic Systems (SIRS'98). Edinburgh, Scotland, United Kingdom.
Tsakiris, D.P., P. Rives and C. Samson (1997). Applying visual servoing techniques to control non-holonomic mobile robots. In: Proceedings of the workshop on new trends in image-based robot servoing (IROS'97). Grenoble, France.