Journal of Bionic Engineering 6 (2009) 46–54

A Novel Vision Localization Method of Automated Micro-Polishing Robot
Zhao-jun Yang, Fei Chen, Ji Zhao, Xiao-jie Wu
College of Mechanical Science and Engineering, Jilin University, Changchun 130022, P. R. China
Corresponding author: Ji Zhao, E-mail: [email protected]

Abstract
Based on photogrammetry technology, a novel localization method for a micro-polishing robot restricted within a certain working space is presented in this paper. On the basis of the pinhole camera model, a new mathematical model of vision localization for the automated polishing robot is established. The vision localization is based on distance constraints among feature points, and the method for solving the mathematical model is discussed. According to the characteristics of the gray image, an adaptive method of automatic threshold selection based on connected components is presented. The center coordinate of each feature image point is resolved by a bilinear interpolation gray-square-weighted algorithm. Finally, the mathematical model of the testing system is verified by a global localization test. The experimental results show that the vision localization system has high precision within the working space.
Keywords: micro-polishing robot, vision localization, threshold segmentation, distance constraint
Copyright © 2009, Jilin University. Published by Elsevier Limited and Science Press. All rights reserved. doi: 10.1016/S1672-6529(08)60104-3

1 Introduction
The importance of polishing robots has drawn the attention of many researchers to robotic polishing technologies. The major goals are to improve time efficiency and the quality of the polished surface[1–4]. Machine vision can provide the same functions as the human eye: distinguishing among thousands of hues, providing acuity over a great range of ambient lighting, perceiving objects in three dimensions, and more. An effective and accurate localization system plays an important role in the realization of robotic surface polishing. Robot localization has been considered the most fundamental requirement for providing robots with truly autonomous capabilities[5]. Since localization is fundamental to mobile robotics, many methods have been developed and discussed in the literature. These methods can be broadly classified into two major types: relative positioning and absolute positioning[5–8]. Relative positioning locates a place relative to other landmarks, while absolute positioning locates a place using a coordinate system. The relative positioning method relies on inertial navigation

or odometry. The error of this method accumulates with time, so it is not suitable for precise positioning despite its low cost and ease of implementation. Relative position estimation often refers to dead reckoning, which uses an inertial sensor system to keep tracking the pose with no delay. Because the method requires knowing the initial or starting position of a navigating robot, it cannot recover by itself from failures such as slipping motions[2–4,8,9]. Absolute position estimation refers to global localization. In many situations, it is impossible for a robot to know its initial position. The method of localizing a mobile robot in a global map with no or limited initial pose information is called absolute position estimation. Active beacons, artificial and natural landmark recognition, model matching, GPS and behavior-based approaches are the basic techniques of absolute position measurement that have been used in recent years[3,10]. The active beacon approach relies on artificial active beacons to localize the robot. Artificial and natural landmark recognition relies on recognizing homogeneous landmarks to keep the robot geometrically localizable. While landmark methods can achieve geometric localization[3], they require either engineering the


environment to provide a set of adequate landmarks or efficient recognition of existing features to use as landmarks. Model matching attempts to match whatever the sensor scans against a surface map of the environment, without extracting landmark features. GPS is based on a constellation of 21 orbiting satellites[1,3,11]. The system allows an appropriately equipped receiver to compute its absolute position, velocity, and time to substantial accuracy. However, GPS does not work well in indoor environments. The behavior-based approach relies on the interaction of robot actions with the environment to navigate[3,10,12]. Considering the advantages and disadvantages of the methods mentioned above, we propose a novel vision localization method for a micro-polishing robot with high precision and low cost. The proposed method can meet the requirement of polishing robot localization. The remainder of this paper is organized as follows. Section 2 presents the hardware structure of the system and the mathematical model in detail. Section 3 describes the image segmentation method and how the center coordinates of the feature points are obtained. The specific formulas for the mobile robot position and posture are presented in Section 4. Section 5 presents the experimental results. We conclude and give possible future research directions in Section 6.

2 System structure and mathematical model
2.1 System structure
Fig. 1 shows the components of the vision positioning system, which includes an industrial camera, a focusing lens, an infrared filter, an image grabber, the mobile robot and a computer with positioning measurement software. Six infrared light-emitting diodes are fixed on the mobile robot as auxiliary positioning feature points.


The 3D coordinates of the feature points in the robot coordinate system are calibrated carefully. First, the camera intrinsic parameters are calibrated accurately. Second, the mathematical model is constructed based on the central projection model of photogrammetry, and the coordinates of the six feature points in the camera coordinate system are calculated according to this model. Finally, the robot position and orientation in the working space are obtained.

2.2 Mathematical model analysis
The image pixel coordinate system (o, u, v), the image physical coordinate system (O_I, X_I, Y_I), the camera coordinate system (O_C, X_C, Y_C, Z_C), the robot coordinate system (O_R, X_R, Y_R, Z_R) and the world coordinate system (O_W, X_W, Y_W, Z_W) are established, and the geometry of the camera model is shown in Fig. 2.

2.2.1 Transform relationship between the coordinate systems
The transformation between the image pixel coordinate [u, v]^T and the image physical coordinate [X_I, Y_I]^T can be written as[13]

$$\begin{cases} X_I = (u - u_0)\, d_x \\ Y_I = (v - v_0)\, d_y \end{cases} \qquad (1)$$

where (u_0, v_0) is the image pixel coordinate of the origin of the image physical coordinate system; d_x and d_y are the physical sizes of a pixel in the X_I and Y_I directions respectively; (u, v) is the pixel coordinate of the feature point's center.


Fig. 1 The structure of localization system.


Fig. 2 Coordinate system.




In Fig. 2, the coordinates of the image point p translated from the image physical coordinate system to the camera coordinate system are deduced as

$$\begin{cases} X_C = X_I \\ Y_C = Y_I \\ Z_C = f \end{cases} \qquad (2)$$

where f is the effective focal length of the camera. Lastly, the transformation from the 3D camera coordinates (X_C, Y_C, Z_C) to the world coordinates (X_W, Y_W, Z_W) can be written as

$$\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} = R_{CW} \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} + T_{CW}, \qquad (3)$$

where R_CW is a 3 × 3 rotation matrix from the camera coordinate system to the world coordinate system and T_CW is a translation vector.

2.2.2 System mathematical model
According to the pinhole camera model, the principal point O_C, the feature image point p_i (i = 1, 2, …, 6) and the feature point P_i lie on a straight line. The unit vector q_i denotes the direction from the point O_C along $\overrightarrow{O_C P_i}$, P_Ci is the vector of the feature point P_i in the camera coordinate system, and p_Ci is the vector of the feature image point p_i in the camera coordinate system. r_i is the distance between O_C and P_i (Fig. 2). The relationship of these variables can be written as

$$\boldsymbol{P}_{Ci} = r_i \boldsymbol{q}_i = r_i \frac{\boldsymbol{p}_{Ci}}{\|\boldsymbol{p}_{Ci}\|}. \qquad (4)$$
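To make Eqs. (1), (2) and (4) concrete, the following minimal Python sketch back-projects a detected pixel center into the unit viewing ray q_i. The function name and the example principal point (u0, v0) are illustrative assumptions rather than calibration results from the paper.

```python
import numpy as np

def pixel_to_unit_ray(u, v, u0, v0, dx, dy, f):
    """Back-project a pixel center (u, v) into the unit viewing ray q_i."""
    X_I = (u - u0) * dx                      # Eq. (1): pixel -> image physical coordinates
    Y_I = (v - v0) * dy
    p_Ci = np.array([X_I, Y_I, f])           # Eq. (2): image point in the camera frame
    return p_Ci / np.linalg.norm(p_Ci)       # Eq. (4): unit direction q_i

# Example with an assumed principal point; dx, dy and f follow the camera specs of Section 5.1.
q1 = pixel_to_unit_ray(616.406, 440.193, u0=500.0, v0=502.0,
                       dx=7.0e-3, dy=7.0e-3, f=8.0)   # mm per pixel, mm
```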

We introduce d_ij, which denotes the distance between the feature points P_i and P_j, and θ_ij, which denotes the angle between the unit vectors q_i and q_j (i, j = 1, 2, …, 6). An equation based on the spatial geometrical properties and Eq. (4) can be written as

$$r_i^2 - 2 r_i r_j \cos\theta_{ij} + r_j^2 - d_{ij}^2 = 0, \qquad (5)$$

where

$$\cos\theta_{ij} = \frac{(\boldsymbol{p}_{Ci}, \boldsymbol{p}_{Cj})}{\|\boldsymbol{p}_{Ci}\|\,\|\boldsymbol{p}_{Cj}\|}.$$

According to Eq. (5), fifteen binary quadratic constraint equations can be obtained for the six feature points. They have the form[14–16]

$$\begin{cases} f_{12}(r_1, r_2) = r_1^2 - 2 r_1 r_2 \cos\theta_{12} + r_2^2 - d_{12}^2 = 0 \\ f_{13}(r_1, r_3) = r_1^2 - 2 r_1 r_3 \cos\theta_{13} + r_3^2 - d_{13}^2 = 0 \\ f_{14}(r_1, r_4) = r_1^2 - 2 r_1 r_4 \cos\theta_{14} + r_4^2 - d_{14}^2 = 0 \\ \qquad \vdots \\ f_{45}(r_4, r_5) = r_4^2 - 2 r_4 r_5 \cos\theta_{45} + r_5^2 - d_{45}^2 = 0 \\ f_{46}(r_4, r_6) = r_4^2 - 2 r_4 r_6 \cos\theta_{46} + r_6^2 - d_{46}^2 = 0 \\ f_{56}(r_5, r_6) = r_5^2 - 2 r_5 r_6 \cos\theta_{56} + r_6^2 - d_{56}^2 = 0 \end{cases} \qquad (6)$$

The single-valued solution of Eq. (6) for the unknown scale factors r_i can be obtained by the singular value decomposition method. The coordinates P_Ci of the six points in the camera coordinate system are then obtained from Eq. (4), and the coordinates P_Wi in the world coordinate system are calculated from Eq. (3). R_RW is a 3 × 3 rotation matrix from the robot coordinate system to the world coordinate system, which has the form

$$R_{RW} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} \cos\kappa\cos\varphi & \sin\kappa\cos\omega + \cos\kappa\sin\varphi\sin\omega & \sin\kappa\sin\omega - \cos\kappa\sin\varphi\cos\omega \\ -\sin\kappa\cos\varphi & \cos\kappa\cos\omega - \sin\kappa\sin\varphi\sin\omega & \cos\kappa\sin\omega + \sin\kappa\sin\varphi\cos\omega \\ \sin\varphi & -\cos\varphi\sin\omega & \cos\varphi\cos\omega \end{bmatrix},$$

where ω, φ and κ are the three Euler angles. T_RW is a translation vector of the form

$$T_{RW} = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}.$$

R_RW and T_RW can be solved from the six point vectors P_Wi and P_Ri, where P_Wi denotes the feature points in the world coordinate system and P_Ri denotes the feature points in the robot coordinate system. Details of obtaining the elements of the matrix R_RW and the vector T_RW are given in subsection 4.1.
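The paper solves Eq. (6) with a singular-value-decomposition-based method. Purely as an illustration, the sketch below recovers the scale factors r_i from the same fifteen distance constraints with a generic nonlinear least-squares solver; the helper names and the use of scipy are assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import least_squares

def solve_ranges(q, d):
    """Recover r_i such that P_Ci = r_i * q_i satisfies the constraints of Eq. (6).

    q : (6, 3) array of unit viewing rays q_i (Eq. (4)).
    d : dict {(i, j): d_ij} of calibrated distances between feature points.
    """
    pairs = list(combinations(range(6), 2))                 # the 15 index pairs (i < j)
    cos_theta = {(i, j): float(q[i] @ q[j]) for i, j in pairs}

    def residuals(r):
        # One residual per binary quadratic constraint of Eq. (6)
        return [r[i]**2 - 2.0*r[i]*r[j]*cos_theta[(i, j)] + r[j]**2 - d[(i, j)]**2
                for i, j in pairs]

    sol = least_squares(residuals, x0=np.full(6, 1000.0))   # rough initial guess in mm
    r = sol.x
    P_C = r[:, None] * q                                    # Eq. (4): points in the camera frame
    return r, P_C
```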

3 Digital image processing
3.1 Acquisition of gray image
Gray images of the optical feature points with high signal-to-noise ratio and high contrast are obtained by adjusting the camera focal length, aperture, frame frequency and exposure time, as well as the working current of the optical feature points on the polishing robot. The experimental setup and a gray image are shown in Fig. 3.




Fig. 3 The experimental setup and the gray-scale image.

3.2 Image segmentation
According to the image characteristics, the gray value of a feature point center is higher than that of the background, so a self-adaptive threshold image segmentation method based on the connected components of the labeled image is introduced[17]. The main steps of the algorithm are as follows (a minimal code sketch is given after Fig. 5).
(1) Threshold_i is defined as the ith segmentation threshold; the corresponding maximum connected-component label of the image is Label_i = max{Label_i1, Label_i2, …, Label_in}, where i = 0, 1, 2, … and Label_i1, Label_i2, …, Label_in are the n labels obtained by connected-component labeling at this step. Threshold_0 is the initial threshold, δ is the search step length, and Label_i = End is the stopping condition of the threshold search, where End is a preset value.
(2) Establish the labeling image from the gray image.
(3) Binarize the gray image with Threshold_i, save the result to the labeling image, and label it with connected components to return Label_i.
(4) If Label_i > End, set Threshold_{i+1} = Threshold_i + δ and return to steps (2) and (3); if Label_i = End, reset the initial value Threshold_0 = Threshold_{i−1}, halve the search step length to δ/2, and return to steps (2) and (3).
(5) If Label_{i−1} > End, Label_i = End and δ = 1, then Threshold = Threshold_i; output the labeling image corresponding to this threshold.
(6) Use this threshold to segment the object from the background, and distinguish the six feature image points with the labeling image corresponding to the threshold.
In order to verify the reliability of the segmentation method, a gray image with high noise was collected, and the 3D graphics of the image is shown in Fig. 4. Setting the initial value Threshold_0 = 10, the initial search step δ = 10 and End = 6, the image segmentation threshold Threshold = 78 is obtained by the self-adaptive threshold method. The concrete procedure is shown in Fig. 5.

Fig. 4 The 3D graphics of gray image under high noise.

Fig. 5 The process of image segmentation.
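A minimal Python sketch of the adaptive threshold search described above is given here under stated assumptions: it uses scipy.ndimage.label for the connected-component labeling and simplifies the step-halving bookkeeping, so it illustrates the idea rather than reproducing the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold(gray, end=6, threshold0=10, delta0=10, max_gray=255):
    """Search for a threshold whose binarized image contains exactly `end`
    connected components (the six feature image points), following Section 3.2."""
    threshold, delta, prev = threshold0, delta0, threshold0
    while threshold <= max_gray:
        labeled, count = ndimage.label(gray > threshold)    # steps (2)-(3): binarize and label
        if count == end and delta == 1:
            return threshold, labeled                       # step (5): final threshold found
        if count == end:
            threshold, delta = prev, max(delta // 2, 1)     # step (4): restart with a finer step
        else:
            prev = threshold
            threshold += delta                              # step (4): coarse search upward
    raise ValueError("no threshold yields the expected number of components")
```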

3.3 Resolution method of feature point center coordinate
The feature image points of the optical feature points differ markedly from the image background. On the one hand, the 2D image of a feature image point is approximately circular or elliptical in shape. On



the other hand, in terms of gray level, the gray value of a feature point is higher than that of the background. There are many sub-pixel edge detection algorithms for circular or elliptic objects, such as the moment method, elliptical curve fitting, digital correlation and pattern recognition[1,17]. After comprehensive consideration of these algorithms, the bilinear interpolation gray-square-weighted algorithm is chosen to compute the feature point center coordinate in this paper. Because it is difficult to obtain the true center of a feature point from an actual image, the accuracy of the algorithm cannot be verified with real images. Instead, a simulated image of a feature image point is generated with a Gaussian function according to the image characteristics. The theoretical center coordinate of the image is (25.00, 25.00), the Gaussian amplitude is 230, σ is five pixels and the gray value of the background is 10. Fig. 6 shows the 3D graphics of the simulated image with noise of μ = 0, σ = 0 and μ = 0, σ = 25 respectively. After adding noise of different intensities, the calculated center coordinates are listed in Table 1. The standard deviation (SD) of u is 0.058 pixel, the SD of v is 0.042 pixel, and the SD of the center coordinate is 0.069 pixel. Table 1 shows that the bilinear interpolation gray-square-weighted algorithm has high location precision and strong noise immunity.
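The weighting idea behind the chosen algorithm can be sketched as follows; this minimal Python version omits the bilinear interpolation refinement and assumes the feature point has already been cropped into a small window, so it is only an illustration.

```python
import numpy as np

def gray_square_weighted_center(window, background=10.0):
    """Sub-pixel center of a bright feature spot in a 2D gray-level `window`.

    Each pixel is weighted by the square of its background-subtracted gray value,
    which emphasizes the bright core of the spot when computing the centroid.
    """
    w = np.clip(window.astype(float) - background, 0.0, None) ** 2
    v_idx, u_idx = np.indices(window.shape)     # row (v) and column (u) indices
    total = w.sum()
    u_c = (w * u_idx).sum() / total
    v_c = (w * v_idx).sum() / total
    return u_c, v_c
```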

4 Robot position and orientation
The unit vectors Γ_Rk and Λ_Rl can be obtained from the feature point coordinates that have been calibrated accurately in the robot coordinate system. The unit vectors Γ_Wk and Λ_Wl can be calculated from the feature point coordinates in the world coordinate system. Γ_Rk and Λ_Rl constitute a matrix M_R, and Γ_Wk and Λ_Wl constitute a matrix M_W. Using the matrices M_R and M_W we can obtain the rotation matrix R_RW and the translation vector T_RW from the robot coordinate system to the world coordinate system. The robot position and orientation in the world coordinate system can then be obtained by the following method.
4.1 Robot position calculation
The vector Γ_Rk is defined as

$$\Gamma_{Rk} = \frac{\boldsymbol{P}_{Ri} - \boldsymbol{P}_{Rj}}{\|\boldsymbol{P}_{Ri} - \boldsymbol{P}_{Rj}\|}, \qquad (7)$$

where P_Ri = [X_Ri, Y_Ri, Z_Ri]^T, P_Rj = [X_Rj, Y_Rj, Z_Rj]^T, i = 1, 2, …, 5, j = 2, 3, …, 6, i < j, k = 1, 2, …, 15. Λ_Rl is derived from Γ_Rk and has the form

$$\Lambda_{Rl} = \begin{cases} \dfrac{\Gamma_{Ri} \times \Gamma_{Rj}}{\|\Gamma_{Ri} \times \Gamma_{Rj}\|}, & \Gamma_{Ri} \times \Gamma_{Rj} \neq 0 \\[1ex] 0, & \Gamma_{Ri} \times \Gamma_{Rj} = 0 \end{cases} \qquad (8)$$

where i = 1, 2, …, 14, j = 2, 3, …, 15, i < j, l = 1, 2, …, 105.
The matrix M_R constituted by the unit vectors Γ_Rk and Λ_Rl has the form

$$M_R = [\Gamma_{R1}, \ldots, \Gamma_{R15}, \Lambda_{R1}, \ldots, \Lambda_{R105}]_{3 \times 120}. \qquad (9)$$

In the same way, the matrix M_W can be written as

$$M_W = [\Gamma_{W1}, \ldots, \Gamma_{W15}, \Lambda_{W1}, \ldots, \Lambda_{W105}]_{3 \times 120}, \qquad (10)$$

Fig. 6 The 3D graphics of simulated image.

Table 1 Coordinates of the center point under different noise levels σ
σ     u (pixel)   v (pixel)
5     24.981      24.993
10    24.962      24.951
15    24.993      25.023
20    24.995      25.058
25    24.901      24.987
30    25.011      25.038
35    24.962      25.055
40    24.954      24.990
45    25.097      24.922
50    25.076      25.006

where Γ_Wk is obtained from the coordinates of two optional feature points in the world coordinate system in the form of Eq. (7), and Λ_Wl has the form of Eq. (8). The following relationship holds between the matrices M_R and M_W:

$$M_W = R_{RW} M_R, \qquad (11)$$

where R_RW is the orthogonal rotation matrix from the robot coordinate system to the world coordinate system, which has the form

$$R_{RW} = M_W M_R^{\mathrm{T}} (M_R M_R^{\mathrm{T}})^{-1}. \qquad (12)$$
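As a minimal illustration of Eqs. (11) and (12), together with the translation of Eq. (13) given below, the following sketch builds M_R and M_W from the six points and solves for R_RW and T_RW. The function names are assumptions; an SVD-based orthogonal Procrustes solution could be substituted for the plain least-squares step.

```python
import numpy as np
from itertools import combinations

def direction_matrix(P):
    """Build the 3 x 120 matrix of Eq. (9)/(10) from six 3D points P, shape (6, 3)."""
    gammas = []
    for i, j in combinations(range(6), 2):          # 15 difference directions, Eq. (7)
        diff = P[i] - P[j]
        gammas.append(diff / np.linalg.norm(diff))
    lambdas = []
    for a, b in combinations(range(15), 2):         # 105 cross products, Eq. (8)
        c = np.cross(gammas[a], gammas[b])
        n = np.linalg.norm(c)
        lambdas.append(c / n if n > 1e-12 else np.zeros(3))
    return np.column_stack(gammas + lambdas)        # shape (3, 120)

def robot_pose(P_R, P_W):
    """Rotation R_RW and translation T_RW from the robot frame to the world frame."""
    M_R, M_W = direction_matrix(P_R), direction_matrix(P_W)
    R_RW = M_W @ M_R.T @ np.linalg.inv(M_R @ M_R.T)   # Eq. (12), least-squares solution
    T_RW = np.mean(P_W - P_R @ R_RW.T, axis=0)        # Eq. (13): mean translation residual
    return R_RW, T_RW
```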



The translation vector T_RW has the form

$$T_{RW} = \frac{1}{6}\sum_{i=1}^{6} \left(\boldsymbol{P}_{Wi} - R_{RW}\boldsymbol{P}_{Ri}\right). \qquad (13)$$

4.2 Robot posture
As shown in Fig. 7, an assistant coordinate system O_W X'_R Y'_R Z'_R is obtained by translating the robot coordinate system O_R X_R Y_R Z_R so that its origin coincides with the origin of the world coordinate system O_W. The yaw angle ω is defined as the rotation angle about the X_R-axis from the robot coordinate system to the world coordinate system, the pitch angle φ is defined as the rotation angle about the Y_R-axis, and the tilt angle κ is defined as the rotation angle about the Z_R-axis[13]. After obtaining the values of the elements r_ij of R_RW in Eq. (12), the three Euler angles can be obtained as

$$\begin{cases} \omega = \arctan\!\left(-\dfrac{r_{32}}{r_{33}}\right) \\[1.5ex] \varphi = \arcsin(r_{31}) \\[1ex] \kappa = \arctan\!\left(-\dfrac{r_{21}}{r_{11}}\right) \end{cases} \qquad (14)$$

where r_ij represents the element of the ith row and the jth column of the rotation matrix R_RW.

Fig. 7 Euler angle definition.

5 Experiments and analysis
To ensure the precision and reliability of the measuring results, the center coordinates of the feature points are measured accurately with an advanced microscope and an NC machine tool. Without loss of generality, we assume Z_P = 0. The measuring results are listed in Table 2.

Table 2 Coordinate measuring results of feature points
Point   Theoretic coordinate   Actual coordinate
P1      (-90, -80)             (-90, -80)
P2      (-90, -30)             (-90.047, -29.991)
P3      (-90, 80)              (-90.0, 79.898)
P4      (120, 0)               (118.948, 0.0051)
P5      (70, 100)              (70.402, 100.662)
P6      (70, -100)             (67.391, -99.262)

5.1 Experimental equipment
As shown in Fig. 8, the experimental system consists of an Adimec-1000m CCD camera, a computer with an OC-64E0-IPRO image grabber, a 760 nm infrared filter and a set of measuring software. The camera has a focal length of 8 mm, an image size of 1000 pixel × 1004 pixel and a pixel size of 7.0 μm × 7.0 μm.

Fig. 8 Experimental equipment.

5.2 Experimental analysis
In order to verify the accuracy of the measuring system, several measurements were performed.

5.2.1 Verification experiment of measuring system model
From the 3D coordinates of a point, its 2D image coordinates can be calculated by the following collinearity equations

$$\begin{cases} u = f_x\, \dfrac{r_{11} x_W + r_{12} y_W + r_{13} z_W + t_x}{r_{31} x_W + r_{32} y_W + r_{33} z_W + t_z} + u_0 \\[2ex] v = f_y\, \dfrac{r_{21} x_W + r_{22} y_W + r_{23} z_W + t_y}{r_{31} x_W + r_{32} y_W + r_{33} z_W + t_z} + v_0 \end{cases} \qquad (15)$$

where f_x and f_y are the focal lengths of the camera in the x and y directions. Comparing the calculated values with the actual measured values proves the correctness of the measuring model and the feasibility of the solution, as shown in Table 3.
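Eq. (15) can be transcribed directly; in the sketch below R and t denote the world-to-camera rotation and translation, and the intrinsic parameters passed in are placeholders rather than calibration results from the paper.

```python
import numpy as np

def project_point(Pw, R, t, fx, fy, u0, v0):
    """Project a 3D world point Pw into pixel coordinates using Eq. (15)."""
    x = R @ np.asarray(Pw) + np.asarray(t)    # numerators and denominator of Eq. (15)
    u = fx * x[0] / x[2] + u0
    v = fy * x[1] / x[2] + v0
    return u, v
```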



Table 3 Coordinate measuring results of feature point centers
Comparative point   3D coordinate (mm)     Measuring result (pixel)   Calculated result (pixel)   Error (pixel)
1                   (-100, -50, -100)      (414.0906, 467.5796)       (413.6154, 467.8827)        (0.4752, -0.3031)
2                   (-100, -110, -100)     (442.5501, 467.2143)       (441.9656, 467.3964)        (0.5845, -0.1821)
3                   (-100, -140, -100)     (456.7605, 466.9323)       (455.1393, 467.1532)        (1.6212, -0.2209)
4                   (-100, -170, -100)     (471.0088, 466.6301)       (469.3119, 466.9100)        (1.6969, -0.2799)
5                   (-100, -200, -100)     (485.2183, 466.4731)       (484.4835, 466.6668)        (0.7348, -0.1937)
6                   (-100, -230, -100)     (499.4175, 466.2559)       (497.6541, 466.4237)        (1.7634, -0.1678)
7                   (-100, -260, -100)     (513.5459, 465.9704)       (511.8235, 466.1805)        (1.7224, -0.2101)
8                   (-100, -290, -130)     (526.9798, 479.8684)       (525.7885, 480.0519)        (1.1913, -0.1835)
9                   (-200, -50, -130)      (411.1666, 480.4891)       (410.1660, 480.7660)        (1.0006, -0.2769)
10                  (-200, -140, -160)     (456.0155, 494.5373)       (454.3152, 494.7568)        (1.7003, -0.2195)
11                  (-200, -230, -250)     (499.3774, 538.2777)       (498.0518, 538.3735)        (1.3256, -0.0958)
12                  (-300, -80, -220)      (423.4173, 525.4300)       (422.3840, 525.5076)        (1.0333, -0.0776)
13                  (-300, -170, -280)     (469.4934, 555.5411)       (468.3949, 555.7172)        (1.0985, -0.1761)
14                  (-400, -110, -130)     (439.0951, 477.0186)       (437.6513, 477.3794)        (1.4438, -0.3608)
15                  (-400, -260, -160)     (519.5380, 491.7527)       (518.5014, 492.1482)        (1.0366, -0.3955)

5.2.2 Accuracy analysis of localization system
We measured the coordinates of the six feature points and calculated the distance between each pair of points. The results are shown in Table 4 and Table 5. The distance between any two points can also be calculated from the point coordinates in the image coordinate system. The ratios of the distances between any two given points in the robot coordinate system, which are calibrated using the NC machine tool, to the distances of the corresponding points in the image physical coordinate system are shown in Fig. 9. The experimental results show that the center coordinates of the feature points have high accuracy. On the other hand, comparing the distance between any two points calculated from the feature point coordinates in the camera coordinate system with the calibrated distance measured accurately by the NC machine tool determines the accuracy of the localization system; the maximum errors of the localization system were less than 2.0 mm.

Table 4 Coordinate values of feature points in the image pixel and camera coordinate systems
Feature point   U (pixel)   V (pixel)   X (mm)     Y (mm)      Z (mm)
1               616.406     440.193     204.981    -107.3800   1899.50
2               628.151     467.867     224.276    -61.6797    1898.74
3               654.348     528.896     265.992    38.7878     1888.06
4               518.696     534.350     43.6055    47.9083     1894.87
5               570.166     577.700     127.939    118.901     1888.81
6               522.659     467.261     50.4874    -62.9989    1908.37

Table 5 Measurement repeatability errors
No.   Distance   Measuring value (mm)   Calculated value (mm)   Error (mm)
1     d12        49.61240               50.009                  -0.3966
2     d13        158.8025               159.898                 -1.0955
3     d14        224.0043               223.758                 0.2463
4     d15        240.2757               241.594                 -1.3183
5     d16        159.9864               158.565                 1.4214
6     d23        109.3069               109.889                 -0.5821
7     d24        211.3441               211.143                 0.2011
8     d25        204.9117               206.916                 -2.0043
9     d26        174.0602               172.003                 2.0572
10    d34        222.6776               223.685                 -1.0074
11    d35        160.6162               161.740                 -1.1238
12    d36        239.1971               238.475                 0.7221
13    d45        110.4030               111.711                 -1.3080
14    d46        111.9376               111.898                 0.0396
15    d56        198.6679               199.947                 -1.2791


Fig. 9 Distance ratio of robotic coordinate system to image physical coordinate system.
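The accuracy check of Section 5.2.2 amounts to comparing the pairwise distances computed from the recovered camera-frame coordinates with the distances calibrated on the NC machine tool; a minimal sketch with assumed variable names is:

```python
import numpy as np
from itertools import combinations

def distance_errors(P_C, d_calibrated):
    """Errors between measured pairwise distances of the recovered points P_C (6, 3)
    and the calibrated distances d_calibrated, a dict {(i, j): mm}."""
    errors = {}
    for i, j in combinations(range(6), 2):
        d_meas = np.linalg.norm(P_C[i] - P_C[j])
        errors[(i, j)] = d_meas - d_calibrated[(i, j)]
    return errors
```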



Table 6 The results of the measurement repeatability experiment (world coordinate values)
No.    Xw (mm)    Yw (mm)    Zw (mm)    ω (°)     φ (°)     κ (°)
1      334.18     13.54      1904.81    -4.53     -4.53     -37.93
2      334.15     13.55      1904.75    -4.47     -4.50     -37.94
3      334.12     13.55      1904.54    -4.47     -4.53     -37.94
4      334.16     13.56      1904.72    -4.54     -4.61     -37.93
5      334.11     13.57      1904.57    -4.51     -4.54     -37.93
6      334.16     13.56      1904.71    -4.49     -4.55     -37.93
7      334.17     13.57      1904.61    -4.62     -4.65     -37.92
8      334.14     13.57      1904.61    -4.59     -4.71     -37.92
9      334.07     13.58      1904.32    -4.49     -4.67     -37.92
10     334.08     13.58      1904.38    -4.53     -4.71     -37.92
11     334.11     13.58      1904.56    -4.59     -4.69     -37.92
12     334.04     13.58      1904.21    -4.46     -4.62     -37.93
13     334.08     13.59      1904.39    -4.62     -4.78     -37.91
14     334.09     13.59      1904.43    -4.61     -4.70     -37.92
15     334.04     13.60      1904.28    -4.64     -4.75     -37.91
16     334.08     13.60      1904.37    -4.60     -4.73     -37.91
17     334.05     13.60      1904.30    -4.62     -4.73     -37.91
18     334.07     13.61      1904.38    -4.70     -4.83     -37.90
19     334.09     13.60      1904.51    -4.68     -4.76     -37.91
20     334.10     13.61      1904.52    -4.67     -4.74     -37.91
Mean   334.1037   13.5795    1904.50    -4.5767   -4.6667   -37.9216
SD     0.0450     0.0206     0.1707     0.0770    0.0934    0.0101

5.2.3 Single-point repeatable measurement
Repeatedly measuring the same fixed point is defined as single-point repeatable measurement, and this approach is adopted in our experiment. Twenty measurements were performed for the same position of the mobile robot in the world coordinate system; the resulting position errors and the repeatability precision of the system are shown in Table 6. The results indicate that the method is feasible, and that the repeatability deviation and the measurement uncertainty are less than 0.2 mm and 0.1° respectively.

6 Conclusions
A novel localization method for a micro-polishing robot restricted within a certain working space is presented. Combined with photogrammetry technology, a novel mathematical model of mobile polishing robot localization is established, and the position and posture of the mobile micro-polishing robot are measured with a precisely calibrated camera. The measuring principle of the vision localization of the mobile polishing robot is analyzed. For the image processing stage, a self-adaptive threshold image segmentation method based on the connected components of the labeled image is presented; the proposed method not only realizes image segmentation but also effectively distinguishes the six feature image points. The experimental results show that the presented model and solution method are correct and have high precision. The method is also suitable for the localization of general mobile robots. Many error sources exist in the measuring system, so establishing a rational error model to improve the localization precision is a target of future research.

Acknowledgement
This research is supported by the National High Technology Research and Development Program of China (Grant No. 2006AA04Z214) and the National Natural Science Foundation of China (Grant No. 50575092).

References
[1] Chen F, Brown G B, Song M M. Overview of three-dimensional shape measurement using optical methods. Optical Engineering, 2000, 39, 10–22.
[2] Zhao J, Zhan J, Jin R, Tao M. An oblique ultrasonic polishing method by robot for free-form surfaces. International Journal of Machine Tools and Manufacture, 2000, 40, 795–808.
[3] Ishii M, Sakane S, Kakikura M, Mikami Y. A 3-D sensor system for teaching robot paths and environments. International Journal of Robotics Research, 1987, 6, 45–59.
[4] Betke M, Gurvits L. Mobile robot localization using landmarks. IEEE Transactions on Robotics and Automation, 1997, 13, 251–264.
[5] Cox L J. Blanche: Position estimation for an autonomous robot vehicle. IEEE/RSJ International Workshop on Intelligent Robots and Systems, Tsukuba, Japan, 1989, 432–439.
[6] Tam H Y, Lui C H, Mok C K. Robotic polishing of free-form surfaces using scanning paths. Journal of Materials Processing Technology, 1999, 95, 191–200.
[7] Se S, Lowe D, Little J. Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks. The International Journal of Robotics Research, 2002, 21, 735–758.
[8] Unnes U, Faia P, de Almeida A T. Sensor-based 3-D autonomous surface-following control. Mathematics and Computers in Simulation, 1996, 41, 429–444.
[9] Lee M C, Go S J, Lee M H, Jun C S, Kim D S, Cha K D, Ahn J H. A robust trajectory tracking control of a polishing robot system based on CAM data. Robotics and Computer-Integrated Manufacturing, 2001, 17, 177–183.
[10] Pfister S T. Algorithms for Mobile Robot Localization and Mapping Incorporating Detailed Noise Modeling and Multi-Scale Feature Extraction. California Institute of Technology, Pasadena, California, 2006.
[11] Marquez J J, Perez J M, Rios J, Vizan A. Process modeling for robotic polishing. Journal of Materials Processing Technology, 2005, 159, 69–82.
[12] Ens J, Lawrence P. An investigation of methods for determining depth from focus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15, 97–108.
[13] Lenz R K, Tsai R Y. Techniques for calibration of the scale factor and image center for high accuracy 3D machine vision metrology. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1988, 10, 713–720.
[14] Fischler M A, Bolles R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Graphics and Image Processing, 1981, 24, 381–395.
[15] Tang J L. Some necessary conditions on the number of solutions for the P4P problem. In: Lecture Notes in Computer Science: Computer Algebra and Geometric Algebra with Applications, Springer, Berlin, 2005, 3519, 56–64.
[16] Wu Y, Hu Z. PnP problem revisited. Journal of Mathematical Imaging and Vision, 2006, 24, 131–141.
[17] Harris C. Geometry from visual motion. In: Blake A, Yuille A (eds). Active Vision, MIT Press, Cambridge, 1993.

with Applications, Springer, Berlin, 2005, 3519, 56–64. [16] Wu Y, Hu Z. PnP problem revisited. Journal of Mathematical Imaging and Vision, 2006, 24, 131–141. [17] Harris C. Geometry from visual motion. In: Blake A, Yuille A (eds). Active Motion, MIT Press, Cambridge, 1993.