Robotics and Computer-Integrated Manufacturing 20 (2004) 237–246

Haptic rendering based on spatial run-length encoding

Yonghua Chen*, Zhengyi Yang

Department of Mechanical Engineering, University of Hong Kong, Pokfulam Road, Hong Kong

Received 21 February 2003; received in revised form 14 September 2003; accepted 29 September 2003

Abstract

In this paper, an extendable volumetric representation based on run-lengths, called spatial run-length encoding (S-RLE), is presented. The S-RLE representation is developed for a haptic shape modeling system that is based on simulated machining processes. In the system, shape modeling is simulated as virtual material removal processes similar to machining processes, with volume-based haptic rendering. The object and the tools are represented by S-RLE. The data structure of S-RLE consists of two cross-referenced databases: one is a stack of lists in the geometrical domain, recording the runs describing the space occupation of the object; the other is a table in the physical domain, describing the physical properties of each element. The latter is extendable to include more diverse physical properties, such as parts composed of heterogeneous materials. Algorithms for geometric operations and haptic rendering based on S-RLE are developed. The proposed S-RLE data structure features efficient memory usage, quick collision detection, an inherent representation for heterogeneous objects, and fast visual rendering.
© 2003 Elsevier Ltd. All rights reserved.

Keywords: Haptic rendering; Run-length encoding; Haptic shape modeling; Force model; Collision detection; Machining simulation

1. Introduction

Haptic shape modeling is an emerging technology [1]. Owing to its demanding requirements for computational power and data storage, representation schemes for haptic shape modeling must provide efficient memory usage, quick collision detection, and fast visualization. Run-length encoding (RLE) has long been known as a simple and efficient binary image compression method in image processing, which naturally suggests that its application to three-dimensional (3D) volumetric representation will simplify the operations required by haptic rendering. This research reports the development of haptic rendering based on a spatial run-length encoding (S-RLE) method.

S-RLE is a volumetric representation technique. Volumetric representation has seen several decades of very active research and development [2–4], most of it concentrated on graphic visualization. Shen et al. proposed a general-purpose 3D region representation based on run-lengths [5]. They detailed the data structure of 3D run-length encoding and developed three classes of basic operations on 3D regions: algebraic, geometric, and topological operations. Their work, however, is limited to visualization. Gibson reported a data structure called distance maps to represent the surfaces in a sampled volume [6,7]; her method uses a distance-to-closest-surface metric to encode object surfaces, an encoding scheme suitable for visual rendering based on the Phong illumination model. Lee et al. presented a set of algorithms called template-based rendering of run-length-encoded volumes [8]. More recently, a data structure called the slice-based binary shell was devised for manipulating and rendering binary volume data [9]; only surface voxels with selected attributes are stored in a slice-based structure. Pflesser et al. used a multi-volume representation to handle 3D models acquired from medical CT scans [10]. The anatomical objects were represented in a 3D rectilinear grid of volume elements, where each voxel was associated with a value (density) and a set of attributes, such as membership in anatomical regions or color. Their representation, however, is not storage efficient, since no compression method was adopted.

*Corresponding author. Tel.: +852-2859-7910; fax: +852-2858-5415. E-mail addresses: [email protected] (Y. Chen), [email protected] (Z. Yang).
0736-5845/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/j.rcim.2003.09.002


Compared to the vast amount of research on visual volume rendering, volume-based haptic rendering has not been thoroughly studied yet. One reason is the late introduction of haptic devices to the market; another is that most haptic research has focused on simulating the contact response between a given tool and a specific environment without considering changes to the physical properties of the environment, for which surface representation is sufficient and therefore dominant. Nevertheless, several examples of haptic rendering based on volume data structures exist. A dexel model was used to describe the virtual object in dynamic deforming algorithms for 3D shape modeling [11]. McNeely et al. used a three-level voxel tree to represent the objects in their haptic rendering system, for the sake of memory efficiency and scalability [12]. Haptic devices used as an interface for modeling free-form surfaces and NC programming were reported in [13]; in that research, two types of haptic devices were designed, a passive force feedback device for surface modeling and an active force feedback device for NC programming, and a combination of z-map and parametric surfaces was used for 3D object representation. Another haptic-based tool path generation method was developed by Balasubramaniam et al. [14]. They used haptically rendered CAD model surfaces in a virtual machining environment and defined a set of force constraint planes to generate the desired tool path pattern; their haptic rendering method is based on surface representation.

In this paper, S-RLE is used to define a data structure for the storage of 3D object data in a haptic shaping system based on simulated machining processes. By machining, we mean removing material volume from an object using processes similar to the traditional machining processes widely used in the manufacturing industries. It is intuitive to use a volume representation to describe the objects in such a shaping system. The representation scheme of a 3D object consists of two parts: a data structure describing its space occupation and a group of algorithms operating on the object.

2. Spatial run-length encoding

Most existing volumetric representation methods have difficulty describing objects with varying mechanical properties, because only spatial occupation information is emphasized and the properties of spatial cells are ignored or simplified. In contrast, S-RLE stores information about both the geometrical domain and the physical domain clearly, in two separate databases.

2.1. Data structure

As shown in Fig. 1, the data structure of S-RLE consists of two cross-referenced databases: one is a stack of lists in the geometrical domain, recording the runs describing the spatial occupation of the object; the other is a table in the physical domain, describing the physical properties of each element. The former is called the position array and the latter the property list. The property list is extendable to include more physical properties. An element of the position array is called a voxel. Each voxel has a property index pointing into the property list. If the properties of two voxels are identical, they are called homogeneous voxels. Each run in the position array contains homogeneous voxels only.

2.2. Geometrical domain

In the geometrical domain, S-RLE is analogous to 2D RLE. An example of 3D RLE is pictured in Fig. 2. A run is denoted as z = z_k, y = y_j : (x_start, x_end), where (x_start, y_j, z_k) and (x_end, y_j, z_k) are the start voxel and end voxel, respectively, or in a concise format: (z_k, y_j : x_start, x_end). Assuming the black voxels are homogeneous voxels, the topmost slice is represented by RLE as follows:

z = 3, y = 2 : (2, 3); z = 3, y = 3 : (2, 3); z = 3, y = 4 : (2, 3); z = 3, y = 5 : (2, 5); z = 3, y = 6 : (2, 4); z = 3, y = 7 : (3, 4); z = 3, y = 8 : (3, 4).


Fig. 1. Structure of S-RLE.
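The two cross-referenced databases can be pictured with a small sketch (our own illustrative Python, not the authors' VC++ implementation; all names here are hypothetical). Runs live in a position array keyed by scanline, and each run carries one index into the property list, since all of its voxels are homogeneous:

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    x_start: int   # x-coordinate of the start voxel
    x_end: int     # x-coordinate of the end voxel (inclusive)
    prop: int      # index into the property list; one index per run
                   # suffices because runs contain homogeneous voxels only

@dataclass
class SRLE:
    # geometrical domain: scanline (z, y) -> list of disjoint runs
    position: dict = field(default_factory=dict)
    # physical domain: extendable table of per-material properties
    properties: list = field(default_factory=list)

    def add_run(self, z, y, x_start, x_end, prop):
        self.position.setdefault((z, y), []).append(Run(x_start, x_end, prop))

# The topmost slice of Fig. 2, with a single "black" material (index 0):
obj = SRLE(properties=[{"name": "black"}])
for y, (xs, xe) in zip(range(2, 9),
                       [(2, 3), (2, 3), (2, 3), (2, 5), (2, 4), (3, 4), (3, 4)]):
    obj.add_run(3, y, xs, xe, 0)

# Seven runs encode seventeen voxels of that slice.
n_voxels = sum(r.x_end - r.x_start + 1
               for runs in obj.position.values() for r in runs)
```

Extending the physical domain (e.g. for heterogeneous parts) then only means appending richer entries to `properties`; the geometrical domain is untouched.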


Fig. 2. Example of S-RLE in geometrical domain.

Fig. 3. Milling tool for haptic rendering.

A volumetric 3D object denoted by V can be represented by a 3D array of point voxels p(x, y, z) with integer coordinates as follows:

V = {p(x, y, z) | 1 ≤ x ≤ X, 1 ≤ y ≤ Y, 1 ≤ z ≤ Z},  (1)

where X, Y, Z are the maximum coordinates along the three axes. In RLE, a 3D object is considered as a stack of 2D slices parallel to a coordinate plane, and the object is represented by enumerating its space occupation in each slice. Given an object V represented by an array of 3D voxels in a 3D space W, its S-RLE position array is constructed by the following steps:

1. Retrieve the first slice at z.
2. Retrieve the first line at y.
3. Retrieve the first voxel at x; store its coordinates in the position array as pa(x_start, y, z) and its property in the property list as pl[1].
4. Retrieve the next voxel at x = x + 1 and its property; if the voxels at x and x + 1 are homogeneous, repeat step 4 until a heterogeneous voxel appears, then store the coordinates of the last homogeneous voxel scanned into the position array as pa(x_end, y, z).
5. Repeat step 3 until the end of the current line is reached; then set y = y + 1.
6. Repeat step 2 until the end of the current slice is reached; then set z = z + 1.
7. Repeat step 1 until the end of the 3D voxel array is reached.

In the geometrical domain, S-RLE describes the 3D solid object by decomposing it into a set of rasterized runs using three clusters of parallel coordinate planes. S-RLE is an approximation of the 3D solid object, and its resolution is determined by the distance between the decomposing planes. There is a one-to-one mapping between voxels in runs and voxels in the object; thus S-RLE is a distortion-free compression of the 3D data array.

2.3. Physical domain

In the physical domain, S-RLE employs a list to store the properties of interest for each voxel, so heterogeneous material properties can be represented easily. In the haptic rendering of machining, material properties are considered. For modeling based on real machining processes, a more detailed description of the cutting tool is needed. For example, a milling cutter is divided into two parts, as shown in Fig. 3: one is the part with the cutting edge, called the effective cutting part; the other is the cutter shank, which is modeled as a cylinder. The reason for this partition is that collision between the effective cutting part and the workpiece-in-process is permitted, whereas collision between the cutter shank and the workpiece is considered an abnormal operation. This is more elaborate than the existing practice of modeling the haptic probe as a point or a line segment.
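The slice/line/voxel scan described above can be sketched in a few lines (our own code, not the authors'; `encode_srle` is a hypothetical name). A dense 3D array of property indices (with `None` marking empty space) is compressed into runs of consecutive homogeneous voxels:

```python
def encode_srle(volume):
    """volume[z][y][x] -> property index or None (empty).
    Returns the position array {(z, y): [(x_start, x_end, prop), ...]}."""
    position = {}
    for z, slice_ in enumerate(volume):        # steps 1/7: each slice
        for y, line in enumerate(slice_):      # steps 2/6: each line
            runs, start, cur = [], None, None
            for x, prop in enumerate(line):    # steps 3-5: scan the voxels
                if start is None:
                    if prop is not None:       # open a new run
                        start, cur = x, prop
                elif prop != cur:              # heterogeneous voxel ends the run
                    runs.append((start, x - 1, cur))
                    start, cur = (x, prop) if prop is not None else (None, None)
            if start is not None:              # close the run at end of line
                runs.append((start, len(line) - 1, cur))
            if runs:
                position[(z, y)] = runs
    return position

# One scanline holding two materials separated by a gap:
vol = [[[0, 0, None, 1, 1, 1]]]
pa = encode_srle(vol)   # {(0, 0): [(0, 1, 0), (3, 5, 1)]}
```

Note that a run is also closed when the material changes, not only at empty space, which preserves the invariant that every run is homogeneous.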

3. Basic operations on S-RLE

In this section, several basic operations in the geometrical domain are discussed. These operations are the foundation of other operations and can be categorized as algebraic operations. In the following discussion, a run is denoted as R(u′, u″; v, w), where u′ and u″ are the x-coordinates of the start and end of the run, v is the index of the scanline within slice w, and w is the index of the slice. A voxel is denoted as p(x, y, z).

3.1. Belonging and locating

Given a voxel p(x, y, z), we often need to check whether it belongs to the object V. This operation is called belonging. If p is inside or on the surface of V (as voxel B illustrated in Fig. 4), it is denoted as p ∈ V; otherwise, it is denoted as p ∉ V (as voxel A in Fig. 4).

Fig. 4. Check the belonging of a voxel to an object.

The belonging check traverses, run by run, all the runs in the scanline indexed by y = v and z = w, until a containing run is found or the end of the scanline is reached. To determine which run contains the given voxel, i.e., to locate a voxel in the volume, we simply modify the belonging algorithm to return the last checked run if it contains the voxel. This operation is important for collision detection.

3.2. Inserting

Inserting a voxel p(x, y, z) results in three cases, as shown in Fig. 5:

1. The voxel is outside the existing object (voxel A). The inserted voxel forms a new run and a discrete object is generated.
2. The voxel is outside the existing object but adjacent to it (voxel B). The insertion extends an existing run.
3. The voxel already belongs to the existing object (voxel C). The insertion makes no change to the object.

For example, the inserted voxel A forms a new run (5, 5; 2, 3); the insertion of voxel B extends the existing run (2, 3; 4, 3) to (2, 4; 4, 3); the adding of voxel C makes no change to the object.

Fig. 5. Inserting a voxel to an object.

3.3. Removing

Assuming a given voxel p(x, y, z) ∈ V with containing run R, removing p results in two cases:

1. p is the head or tail voxel of run R (as voxel B shown in Fig. 6). The deletion shortens the run by one voxel.
2. p is neither the head nor the tail voxel of run R (as voxel C shown in Fig. 6). The deletion breaks R into two runs.

For example, in Fig. 6, the run (2, 3; 2, 3) is shortened to (3, 3; 2, 3) by removing voxel B; after removing voxel C, the run (2, 5; 5, 3) is broken into two runs: (2, 2; 5, 3) and (4, 5; 5, 3). Removing voxel A returns a failure flag, since it does not belong to the object.

Fig. 6. Removing a voxel from an object.
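The three scanline-level operations can be sketched as follows (our own illustrative code, not the paper's implementation), with the runs of one scanline stored as sorted, disjoint, inclusive (x_start, x_end) pairs:

```python
def locate(runs, x):
    """Belonging/locating: return the run containing voxel x, else None."""
    for run in runs:                       # run-by-run traversal of the scanline
        if run[0] <= x <= run[1]:
            return run
    return None

def insert(runs, x):
    """Inserting: new run, extension of an adjacent run, or no-op."""
    if locate(runs, x) is not None:        # case 3: voxel already in the object
        return list(runs)
    merged = []
    for a, b in sorted(runs + [(x, x)]):
        if merged and a <= merged[-1][1] + 1:    # case 2: adjacent -> extend
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:                                    # case 1: separate -> new run
            merged.append((a, b))
    return merged

def remove(runs, x):
    """Removing: shorten or split the containing run; None flags a failure."""
    run = locate(runs, x)
    if run is None:
        return None                        # voxel not in the object (voxel A)
    out = [r for r in runs if r != run]
    if run[0] < x:
        out.append((run[0], x - 1))        # left fragment survives
    if x < run[1]:
        out.append((x + 1, run[1]))        # right fragment survives
    return sorted(out)

# Mirroring the Fig. 6 example: removing an interior voxel splits the run.
split = remove([(2, 5)], 3)   # [(2, 2), (4, 5)]
```

A linear traversal is shown for clarity; since runs are sorted, `locate` could use a binary search without changing the interface.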

4. Haptic rendering

4.1. Haptic shape modeling based on simulated machining processes

The presented S-RLE data structure is developed for a haptic shape modeling system based on simulated machining processes. In the authors' research, shape modeling is simulated as a virtual material removal process similar to the milling process. When interactively removing material using a virtual tool such as a ball-end milling cutter, a user can feel the physically realistic presence of the material, with force feedback throughout the process. For rough shape modeling, tool paths are unconstrained, like free-hand sculpting; for more accurate modeling, a tool path is constrained by a structured set of machining parameters.


4.2. Haptic modeling of contact

In machining processes, collisions among the tool, tool holder, workpiece, and fixtures must be avoided to prevent damage. In a haptic shape modeling system, the mechanics model changes when a collision occurs; therefore, collision detection is an important function in the haptic rendering of machining. Ho et al. proposed a real-time collision detection method [15]; their method works on a point cloud representation of the object and an implicit representation of the tool.

Collision detection is an issue in the geometric domain. Two objects collide once their surface voxels collide; hence, using only surface voxels to perform collision detection is computationally efficient. Fig. 7 is a 2D illustration of a collision occurring between the tool and the workpiece. To check collisions, the boundary of a volumetric representation must be easily retrieved. In the position array of an object V, the voxels with at least one empty voxel e in their 26-neighborhood define the surface S of the object:

S = {p ∈ V | ∃ e ∈ 26nh(p), e ∉ V},  (2)

where 26nh(p) denotes the 26-neighborhood of the voxel p. Inherently, S is closed in 3D space. A straightforward approach to boundary detection is to check the voxels one by one, but this is very time-consuming and impractical, especially in the haptic rendering of a machining process, where the surfaces of the object are constantly changing. Studying the structure of S-RLE further, we observe that both the start and end voxels of each run are boundary voxels. For the internal voxels of each run, we have to check the neighboring scanlines in the same slice and in adjacent slices. Let R(v, w) denote a run in the scanline indexed by y = v of the slice indexed by z = w. Assuming the object V is 26-connected, a voxel p is a boundary voxel if and only if it is a start or end voxel of a run, or it is an internal voxel of a run and satisfies one of the following conditions for some i, j ∈ {−1, 0, 1} with |i| + |j| ≠ 0 [5]:

1. R(v + i, w + j) does not exist;
2. ∃ R_k(x′_k, x″_k; v + i, w + j) such that x″_k < x and R_{k+1} does not exist;
3. ∃ R_k(x′_k, x″_k; v + i, w + j) such that x′_k > x and R_{k−1} does not exist;
4. ∃ R_k(x′_k, x″_k; v + i, w + j) and R_{k+1}(x′_{k+1}, x″_{k+1}; v + i, w + j) such that x″_k < x < x′_{k+1}.

Because of the relatively small number of boundary voxels and the requirement of fast access to them, the detected boundary voxels are stored in a voxel list rather than an RLE structure. Given an empty voxel list vl, the 26-connective boundary of object V can be obtained and stored in vl using the following algorithm:

Algorithm 1. BoundaryDetecting(V)
for (all voxels p) /* traverse the object */
{ if (p is a boundary voxel) /* judged by the criteria above */
    vl.add(p); /* append it to the boundary voxel list */ }

In the haptic rendering of machining, the cutting tool and the workpiece are represented in S-RLE in the 3D space W as V_t and V_w, respectively. The surface S_t of the cutting tool is

S_t = S_te ∪ S_ts,  (3)

where S_te and S_ts are the surfaces of the effective cutting part and the cutter shank, respectively. The surface of the workpiece is S_w. A collision occurs when S_ts and S_w have at least one common voxel p in W, and the boundary of the collision area C is

C = {p ∈ W | p ∈ S_ts ∧ p ∈ S_w}.  (4)

Fig. 7. Collision in 2D: (a) at time t_0 and (b) at time t_1.
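For illustration, here is a direct, unoptimised reading of Eqs. (2) and (4) (our own sketch with hypothetical names; the paper instead walks the runs using the four boundary conditions above, which avoids expanding voxels): surface voxels are occupied voxels with at least one empty 26-neighbour, and the collision set is the overlap of two surfaces.

```python
from itertools import product

def voxels(position):
    """Expand a position array {(z, y): [(x0, x1), ...]} into a voxel set."""
    return {(x, y, z)
            for (z, y), runs in position.items()
            for x0, x1 in runs
            for x in range(x0, x1 + 1)}

def surface(vox):
    """S = {p in V | some e in 26nh(p) is not in V} -- Eq. (2)."""
    offsets = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
    return {p for p in vox
            if any((p[0] + dx, p[1] + dy, p[2] + dz) not in vox
                   for dx, dy, dz in offsets)}

def collision(vox_a, vox_b):
    """Common surface voxels of two objects -- cf. Eq. (4)."""
    return surface(vox_a) & surface(vox_b)

# A 3 x 3 x 3 solid block: every voxel except the centre lies on the surface.
block = voxels({(z, y): [(0, 2)] for z in range(3) for y in range(3)})
shell = surface(block)
```

This brute-force version makes the definition concrete; the run-based criteria above reach the same set of voxels while touching only run endpoints and neighbouring scanlines.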


S_w is updated in each haptic rendering cycle, because some material may be removed and a new in-process surface generated. The update is quite simple once the collision area boundary is acquired. When a collision is detected, a braking force is fed back to the user via the haptic device and a visual alarm is raised.

4.3. Haptic modeling of material removing processes

4.3.1. Union and intersection operations on S-RLE

There are three types of spatial relationships between two runs in space: intersecting, adjacent, or separate. The same relationships may exist between two objects, and they can be determined by checking the spatial relationships between the runs of the respective objects. Let R_1(x′_1, x″_1; y_1, z_1) and R_2(x′_2, x″_2; y_2, z_2) denote runs in objects V_1 and V_2, respectively. The relationship between V_1 and V_2 can be determined as follows [5]:

1. V_1 and V_2 are intersecting if and only if there exists a pair R_1, R_2 satisfying y_1 = y_2, z_1 = z_2, x′_1 ≤ x′_2 ≤ x″_1, or y_1 = y_2, z_1 = z_2, x′_2 ≤ x′_1 ≤ x″_2;
2. V_1 and V_2 are adjacent if and only if they are not intersecting and there exists a pair R_1, R_2 satisfying, for i, j ∈ {−1, 0, 1} and |i| + |j| ≠ 0: y_1 = y_2, z_1 = z_2, x′_1 = x″_2 + 1; or y_1 = y_2, z_1 = z_2, x′_2 = x″_1 + 1; or y_1 = y_2 + i, z_1 = z_2 + j, x′_1 ≤ x″_2 ≤ x″_1 + 1; or y_1 = y_2 + i, z_1 = z_2 + j, x′_2 ≤ x″_1 ≤ x″_2 + 1;
3. V_1 and V_2 are separate if and only if they are neither intersecting nor adjacent.

Given two runs R_1(x′_1, x″_1; y_1, z_1) and R_2(x′_2, x″_2; y_2, z_2) in objects V_1 and V_2, respectively, their relationship can be determined as above. The union of V_1 and V_2 (denoted V_1 ∪ V_2) is implemented by adding the separate runs of V_1 to V_2 and merging the intersecting runs together. The outline of the union operation is as follows:

Algorithm 2. Union(V_1, V_2)
V = empty; /* an empty RLE to store the result of the union */
V = V_1;
/* add the runs of V_2 to V */
for (each run R_2 in V_2)
{ if (∃ R ∈ V satisfying R ∩ R_2 ≠ ∅) /* intersecting runs */
    merge(R, R_2) and store the result into R;
  else if (R is adjacent to R_2) /* adjacent runs */
    R = (x′_1, x″_2; y_1, z_1); /* connect the two runs; without loss of generality, suppose x′_1 ≤ x″_2 */
  else /* two separate runs */
    add R_2 into V; }

The intersection of two objects (denoted V_1 ∩ V_2) is the union of the intersecting parts of runs from the two objects. For example, assuming x′_1 ≤ x′_2 and x″_1 ≤ x″_2, the intersecting part of R_1(x′_1, x″_1; y, z) and R_2(x′_2, x″_2; y, z) forms a new run R(x′_2, x″_1; y, z). The other cases are analogous and not presented here.

4.3.2. Removed volume calculation

In machining processes, the volume removed in one cutting cycle is the intersection of the workpiece and the tool swept volume. In the end milling process illustrated in Fig. 8, the volume removed VR in one haptic cycle can be calculated by

VR = d_c w_e V_f τ,  (5)

where d_c is the depth of cut (m), w_e is the effective cutting width (m), V_f is the feedrate (m/s), and τ is the period of the haptic cycle (s).

Fig. 8. Volume removed in end milling.

The calculation of the tool swept volume in a multi-axis machining process is much more complicated. The swept volume of a ball-end milling cutter over a time interval is illustrated in Fig. 9. Jang et al. developed a machining simulation method based on voxel representation [16]. In their method, NC machining simulation is carried out in three steps: incrementally calculating the swept volume of the cutting tool for each tool path element; voxelizing the swept volume and subtracting it from the in-process workpiece, which is also described as a voxel model; and visually rendering the in-process workpiece. Because the swept volume has to be calculated analytically first, which is time-consuming, their method is not applicable to haptic rendering, which demands a high update rate. Moreover, their tool is not represented in voxels, so a complicated tool of arbitrary shape, which is useful for free-form surface modeling, cannot be described.

In haptic shape modeling, the virtual tool movement is much more complex and the tool swept volume cannot be calculated from simple equations.
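Since the removed volume is the intersection of the workpiece and the tool swept volume, the scanline-level union and intersection of Section 4.3.1 can be sketched as follows (our own illustrative code; runs are inclusive (x_start, x_end) pairs on a common scanline):

```python
def intersect_runs(runs1, runs2):
    """Intersecting parts of two run lists, e.g. (2,6) and (4,9) -> (4,6)."""
    out = []
    for a0, a1 in runs1:
        for b0, b1 in runs2:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo <= hi:                     # overlapping segment survives
                out.append((lo, hi))
    return sorted(out)

def union_runs(runs1, runs2):
    """Union in the spirit of Algorithm 2: merge intersecting or adjacent
    runs, keep separate runs as they are."""
    out = []
    for a, b in sorted(runs1 + runs2):
        if out and a <= out[-1][1] + 1:      # intersecting or adjacent
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:                                # separate
            out.append((a, b))
    return out

def run_volume(runs):
    """Voxel count of a run list (one scanline's contribution to VR)."""
    return sum(b - a + 1 for a, b in runs)

# Tool runs overlap the workpiece runs on this scanline in (4, 6): 3 voxels.
removed = intersect_runs([(2, 6)], [(4, 9)])
```

Summing `run_volume` over all scanlines of the tool/workpiece intersection yields the removed voxel count, i.e. VR up to the voxel volume.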

In our proposed system, both the in-process workpiece and the tool are represented in S-RLE. The volume removed is the intersection of the workpiece and the tool swept volume. The tool swept volume is calculated directly from the tool voxel model by tool position interpolation. The position of the tool is sampled by the haptic device at discrete times, where the time interval is the period of the haptic cycle. If the time interval is 1 ms and the feedrate is 10 mm/s, the tool moves 0.01 mm in one haptic cycle. Such a distance might be larger than the voxel dimension, so the swept volume V_s of the cutting tool cannot be calculated directly from the union of the cutting tool volumes at times t and t + 1. In this paper, a tool position interpolation method is employed to compute the tool swept volume. The linear interpolation of the tool position is calculated from the tool positions at times t and t + 1. The beginning position is recorded as a vector P_t = (x_t, y_t, z_t, ω_t, θ_t, φ_t) and the ending position as P_{t+1} = (x_{t+1}, y_{t+1}, z_{t+1}, ω_{t+1}, θ_{t+1}, φ_{t+1}). The tool position P_{t+δt} at time t + δt is then calculated as

P_{t+δt} = P_t + (P_{t+1} − P_t) M,  (6)

where M is the transform matrix calculated from P_t and P_{t+1}:

M = (k_x, k_y, k_z, k_ω, k_θ, k_φ)^T.  (7)

According to the feedrate, the number of intermediate tool positions n can be chosen to satisfy the following inequality:

max(translation of a voxel) < voxel dimension.  (8)

When all the intermediate tool positions are obtained and represented as V_si, the tool swept volume V_s is their union:

V_s = ∪_{i=1}^{n} V_si.  (9)

Fig. 9. Cutter swept volume.

The following calculation of VR is then trivial:

VR = V_s ∩ V_w.  (10)

4.3.3. Machining force modeling

Many efforts have been devoted to modeling the cutting force in various machining processes, such as turning, drilling, and milling, but not every force model is feasible for haptic rendering, which features a relatively high update rate. The simplest model relates the cutting power P to the material removal rate (MRR) by the following equation [17]:

P = K(MRR),  (11)

where K is the unit power consumption. The spindle motor power P is equal to the tangential cutting force times the tooth velocity; therefore, the tangential cutting force is easily found from

F_t = K(MRR)/v,  (12)

where v is the tooth velocity. The radial force is calculated by multiplying the tangential force by a constant K_r:

F_r = K_r F_t.  (13)

This volumetric model is simple to implement and can be calculated efficiently enough to meet the high update rate needed in haptic rendering. The values of K and K_r depend on the workpiece material, the cutting tool geometry, and the cutting conditions. The estimation of MRR is also simple:

MRR = VR/τ,  (14)

where VR is the material removed in one haptic cycle and τ is the period of the haptic cycle (generally smaller than 1 ms). The haptic rendering of the milling process is divided into two parts: the haptic rendering of contact and the haptic rendering of material removal. The former is related to collisions between the tool shank and the in-process workpiece; the latter is related to the force generated by the movements of the effective cutting part.

4.3.4. Milling force measuring

In order to determine the coefficients K_r and K in Eqs. (12) and (13), milling force experiments are conducted over combinations of materials, cutting depths, spindle speeds, feedrates, and tool orientations. All current CNC milling machine configurations are significantly different from the haptic device PHANToM® used in our haptic shape modeling system. The articulated robot machining system developed by the authors, shown in Fig. 10(a), is very similar in physical configuration to the PHANToM® shown in Fig. 10(b) [18]; thus force measurements from this robotic milling system can be realistically mapped to the haptic devices.
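The volumetric force model can be made concrete with a small numerical sketch (our own code; the coefficient values K and K_r below are made-up placeholders, not the measured values of Section 4.3.4):

```python
def milling_forces(VR, tau, v, K, Kr):
    """VR: volume removed per haptic cycle (m^3); tau: cycle period (s);
    v: tooth velocity (m/s); K: unit power consumption; Kr: radial ratio.
    Returns (MRR, Ft, Fr)."""
    MRR = VR / tau        # Eq. (14): material removal rate
    Ft = K * MRR / v      # Eq. (12): tangential force, from P = K*MRR (Eq. (11))
    Fr = Kr * Ft          # Eq. (13): radial force
    return MRR, Ft, Fr

# Eq. (5) example: 1 mm depth of cut, 2 mm effective width, 10 mm/s feedrate,
# over one 1 ms haptic cycle.
dc, we, Vf, tau = 1e-3, 2e-3, 10e-3, 1e-3
VR = dc * we * Vf * tau   # Eq. (5): volume removed in one haptic cycle
MRR, Ft, Fr = milling_forces(VR, tau, v=1.0, K=2.0e9, Kr=0.3)
```

Because the update only needs the removed voxel count from the S-RLE intersection, the whole force computation is a handful of multiplications per haptic cycle, which is what makes the model compatible with kHz-range update rates.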


Fig. 10. Similar configuration between an articulated robot and the PHANToM®: (a) robot machining system in the real world; (b) haptic device PHANToM® in the virtual world.

Fig. 11. Milling experiments on different materials: (a) aluminum Al-6061 and (b) wax.

The main part of the robotic system is an ABB IRB1400 articulated robot with six degrees of freedom, which has a configuration similar to the 6-DOF PHANToM®. Therefore, dynamics data from the robotic machining system may be used directly for haptic force feedback in the PHANToM device through simple scaling operations. A tool assembly consisting of a ball-end milling cutter, a tool holder, and a Kistler® 3-DOF force sensor is fixed to the robot end-effector. The force sensor contains three pairs of quartz rings mounted between two steel plates in the sensor housing. Two quartz pairs are sensitive to shear and measure the force components F_x and F_y, while one quartz pair sensitive to pressure measures the component F_z of the force acting on the sensor. Force measuring is conducted by milling both metal and wax at different rates of volume removal. Fig. 11 shows the sample parts milled by a ball-end milling cutter ((a) aluminum and (b) model wax). As shown in Fig. 12, given a tool orientation, the vector from the tool center point along the tool axis is denoted as d⃗. Then F_t and F_r are calculated as the sums of the projections of the three axis forces onto d⃗_⊥ and d⃗, where d⃗_⊥ is a vector perpendicular to d⃗:

F_t = Proj(F_x, d⃗_⊥) + Proj(F_y, d⃗_⊥) + Proj(F_z, d⃗_⊥)

and

F_r = Proj(F_x, d⃗) + Proj(F_y, d⃗) + Proj(F_z, d⃗).  (15)

Fig. 12. Coordinate system transform.

Now the two coefficients are calculated:

K = F_t v / (VR/τ),  (16)

K_r = F_r / F_t.  (17)
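The calibration of Eqs. (15)–(17) can be sketched as follows (our own code with made-up numbers; the measured forces and directions here are hypothetical, not experimental data from the paper):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(a):
    """Normalise a 3-vector."""
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def calibrate(F, d, d_perp, v, VR, tau):
    """F = (Fx, Fy, Fz) measured forces; d = tool axis; d_perp a direction
    perpendicular to d; v = tooth velocity; VR, tau as in Eq. (14).
    Returns (Ft, Fr, K, Kr)."""
    d, d_perp = unit(d), unit(d_perp)
    Ft = dot(F, d_perp)       # Eq. (15): summed projections onto d_perp
    Fr = dot(F, d)            # Eq. (15): summed projections onto d
    K = Ft * v / (VR / tau)   # Eq. (16): from Ft = K*MRR/v with MRR = VR/tau
    Kr = Fr / Ft              # Eq. (17)
    return Ft, Fr, K, Kr

# Hypothetical sample: tool axis along z, perpendicular direction along x.
Ft, Fr, K, Kr = calibrate(F=(8.0, 0.0, 2.0), d=(0, 0, 1), d_perp=(1, 0, 0),
                          v=1.0, VR=2e-9, tau=1e-3)
```

Summing the per-component projections is equivalent to projecting the total force vector, which is what the dot products above compute.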

4.3.5. Experiment

A prototype system based on the proposed algorithms has been implemented in VC++. The GHOST API is used to interface with the PHANToM® for force rendering, and the open-source package VTK (Visualization Toolkit) is employed for the graphics-related work. Fig. 13(a) shows the configuration of the virtual haptic milling station. Fig. 13(b) shows a material block being haptically milled; the block is a 3D dataset with dimensions 40 × 32 × 40, and the radius of the ball-end milling cutter is 0.5. Fig. 13(c) demonstrates the application of haptic milling to haptic-based milling tool path generation: the CAD model of a mouse is placed in a virtual material block, the haptic milling process is performed by the user, and the tool path is generated from the recorded milling cutter movements.

As analyzed in [5], the RLE representation has better worst-case memory performance than an octree representation. In our experiments, the haptic update rate is sustained at 900 Hz or more. Such an update rate is achieved by a haptic-loop and graphic-loop decoupling technique. A VTK wrapper around the gstForceFeedback class in the GHOST API is developed; in its callback function, our own class implementing the volume-based haptic rendering is called.

Fig. 13. Experiments: (a) screen shot of the haptic modeling application, (b) haptic milling for shape modeling and (c) haptic tool path planning.

5. Conclusion

This paper has presented a data structure called S-RLE for a haptic shape modeling system based on simulated machining processes. In the system, shape modeling is simulated as virtual material removal processes similar to actual machining processes, and the object and the tools are both represented by the volumetric S-RLE data structure. Essentially, S-RLE is a compressed data storage method. Its extendable structure consists of two cross-referenced databases: a stack of lists in the geometrical domain, recording the runs describing the space occupation of the object, and a table in the physical domain, describing the physical properties of each element. The latter is extendable to include more physical properties to meet the needs of haptic rendering of machining processes. Basic geometric operations on the S-RLE data structure are described, and algorithms for volume-based haptic rendering, namely collision detection and removed volume calculation, are developed. The proposed S-RLE data structure is suitable for haptic rendering because of its efficient memory usage, quick collision detection, inherent representation for heterogeneous objects, and fast visual rendering.

Acknowledgements

This research is supported by a grant from the Hong Kong Research Grants Council under the code HKU 7073/02E.

References

[1] Dachille IX F, Qin H, Kaufman A. A novel haptics-based interface and sculpting system for physics-based geometric design. Comput Aided Des 2001;33:403–20.
[2] Galyean TA, Hughes JF. Sculpting: an interactive volumetric modeling technique. Comput Graphics 1991;25(4):267–74.
[3] Avila RS, Sobierajski LM. A haptic interaction method for volume visualization. In: Proceedings of IEEE Visualization 1996, San Francisco, CA, USA. p. 197–204.
[4] Ferley E, Cani MP, Gascuel JD. Resolution adaptive volume sculpting. Graphical Models 2001;63:459–78.
[5] Shen XQ, Spann M. 3D region representation based on run-lengths: operations and efficiency. Pattern Recognition 1997;31(5):575–85.
[6] Gibson S, Fyock C, Grimson E, Kanade T, Kikinis R, Lauer H, McKenzie N, Mor A, Nakajima S, Ohkami H, Osborne R, Samosky J, Sawada A. Simulating surgery using volumetric object representations, real-time volume rendering and haptic feedback. MERL Technical Report TR96-16, 1996.
[7] Gibson S. Using distance maps for accurate surface representation in sampled volumes. In: Proceedings of the 1998 IEEE Symposium on Volume Visualization, Research Triangle Park, NC, USA. p. 23–30.
[8] Lee CH, Koo YM, Shin YG. Template-based rendering of run-length-encoded volumes. J Visualization Comput Animation 1998;9:145–61.
[9] Kim BH, Seo JW, Shin YG. Binary volume rendering using the slice-based binary shell. The Visual Computer 2001;17(4):243–57.
[10] Pflesser B, Petersik A, Tiede U, Höhne KH, Leuwer R. Volume cutting for virtual petrous bone surgery. Comput Aided Surg 2002;7(2):74–83.
[11] Yamamoto K, Ishiguro A, Uchikawa Y. A development of dynamic deforming algorithms for 3D shape modeling with generation of interactive force sensation. In: Proceedings of the IEEE Virtual Reality Annual International Symposium, 1993, Seattle, WA, USA. p. 505–11.
[12] McNeely WA, Puterbaugh KD, Troy JJ. Six degree-of-freedom haptic rendering using voxel sampling. In: Proceedings of SIGGRAPH 99, Los Angeles, CA, USA, 1999. p. 401–8.
[13] Kanai S, Takahashi H. Modeling and NC programming for free-form surfaces by haptic interfaces. In: Proceedings of DETC/Design for Manufacturing, ASME 1996, Irvine, CA, USA. p. 509–18.
[14] Balasubramaniam M, Ho S, Sarma S, Adachi Y. Generation of collision-free 5-axis tool paths using a haptic surface. Comput Aided Des 2002;34(4):267–79.
[15] Ho S, Sarma S, Adachi Y. Real-time interference analysis between a tool and an environment. Comput Aided Des 2000;33:935–47.
[16] Jang DG, Kim KS, Jung JM. Voxel-based virtual multi-axis machining. Int J Adv Manuf Technol 2000;16:709–13.
[17] Choi BK, Jerard RB. Sculptured surface machining: theory and applications. Dordrecht: Kluwer Academic Publishers; 1998.
[18] Yang ZY, Chen YH, Sze WS. Layer-based machining: recent development and support structure design. Proc Inst Mech Eng Part B: J Eng Manuf 2002;216:979–91.