Robotics and Autonomous Systems 56 (2008) 615–624 www.elsevier.com/locate/robot
Path planning for laser scanning with an industrial robot
Sören Larsson ∗, J.A.P. Kjellander
Örebro University, Department of Technology, SE-701 82, Sweden
Received 30 May 2006; received in revised form 2 October 2007; accepted 11 October 2007. Available online 25 October 2007
Abstract Reverse Engineering is concerned with the problem of creating CAD models of real objects by measuring point data from their surfaces. Current solutions either require manual interaction or expect the nature of the objects to be known. We believe that in order to create a fully automatic system for RE of unknown objects the software that creates the CAD-model should be able to control the operation of the measuring system. This paper is based on a real implementation of a measuring system controlled by CAD software, capable of measuring along curved paths. Some details of the system have been described in earlier publications. This paper is concerned with the problem of automatic path planning for a system that can move along curved paths. © 2007 Elsevier B.V. All rights reserved.
Keywords: Reverse engineering; 3D measurement systems; Laser scanner; Path planning; Cross-sections; Industrial robot
1. Introduction

In the development of new products CAD systems are often used to model the geometry of the objects to be manufactured. Reverse Engineering (RE) of geometry is the reverse process, where the objects already exist and CAD models are created by interpreting geometrical data measured directly from the surfaces of the objects. An introduction to RE which is often referred to is the paper by Varady, Martin and Cox from 1997 [1]. In that paper the RE process is divided into the following four steps:
(1) Data capture
(2) Preprocessing
(3) Segmentation and surface fitting
(4) CAD-model creation.
Step 1 is closely related to measurement technology. Optical systems such as laser scanners in combination with manually-controlled mechanisms for orientation are often used to measure the 3D coordinates of large numbers of points from the surface of the object. In the context of automatic RE, ∗ Corresponding author. Tel.: +46 (0) 19 301048.
E-mail addresses:
[email protected] (S. Larsson),
[email protected] (J.A.P. Kjellander). URL: http://www.oru.se/tech/cad (S. Larsson, J.A.P. Kjellander). © 2007 Elsevier B.V. All rights reserved. 0921-8890/$ - see front matter. doi:10.1016/j.robot.2007.10.006
however, we are only interested in systems where the scanner orientation is controlled by the RE software itself. A simple system of that kind is achieved by combining a fixed scanner with a turntable. See [2] for a description of such a system. A more flexible solution is described in [3], where a coordinate measuring machine is used in combination with a laser scanner. A recent development by the same author is presented in [4]. Another autonomous system is described in [5], where the authors use a range camera, a turntable and an industrial robot. This setup may appear similar to what we will present here, but their robot does not move during scanning, so their path planning is only a NBV (Next Best View) problem. We have developed a measuring system based on a laser profile scanner mounted on the arm of an industrial robot. Both are connected to our system through TCP/IP. This makes it possible to scan an object from any direction, even along curved paths. The hardware and basic system configuration are described in [6]. We have also developed the software needed for robot motion and data capture, see [7]. A key ability of our system is the possibility to scan along curved paths. In some situations this may not be needed to get a good result; there will, however, always exist situations where it gives better results in terms of fewer scans needed and/or avoided occlusions. For an automatic system, a path planning approach that supports scanning along curved paths is therefore desirable. This issue is addressed in this paper.
In order to reach all surfaces of the object many individual scans may be needed. Each scan then produces a pointcloud that needs to be merged with pointclouds from earlier scans. The problem of determining the number of scans needed, and how to orient the measuring system relative to the object in each individual scan, is usually referred to as path or view planning. A survey of planning techniques for automated data capture and preprocessing is given in [8]. The sequential RE process as described by Varady, Martin and Cox in [1] was not meant to be fully automatic. We believe that an automatic procedure should be iterative. In [6] we proposed an automatic procedure for RE of unknown objects based on three steps. The first step, Size scan, is concerned with the problem of determining the overall size of the object. The next step, Shape scan, should perform an automatic scanning covering the object's surface. The result of the shape scan, represented by, for example, a pointcloud or a triangle mesh, is directly usable for some purposes and can be regarded as a measurement result. We also proposed the possibility to add a third step, RE scan, in which the system can use the intermediate facet/point model to plan the more accurate scans required to create the final CAD model. This paper is concerned with the first two steps. Algorithms and implementations of Size scan and Shape scan are presented with test results. Our work will now continue with the last step of the process and we hope to present a working implementation of it in papers to come.

2. Size scan

The maximum working volume of our system is defined by a vertical cylinder 650 mm high and 250 mm in radius. These values are determined from the reachability limits of our hardware setup. The purpose of Size scan is to reduce this volume to the size of the object in terms of its bounding box.
The object coordinate system is located with its origin in the centre of the turntable, and the bounding box is simply given by the max and min coordinates of the object along the three axes of the coordinate system. Size scan is made by scanning with vertical strokes from four orthogonal directions. Due to the limited size of the scan window, scanning is performed by stepwise approaching the centre of the coordinate system until material is found. When all four directions are processed, the distances from the centre in each direction together with the highest detected point define the bounding box of the object. The lowest point is by default set to a value slightly above the turntable. If a fixture is used to raise the object above the turntable, the height of the fixture is used instead; see, for example, Fig. 9(a). Since accuracy is not a critical issue, Size scan can be performed at relatively high speed. We have used 30 mm/s which, with our equipment, yields profiles captured at 10 mm spacing.

3. Shape scan

The Shape scan module is more complex. It starts with the assumption that the object's bounding box is known and
ends with an approximate model accurate enough for planning step 3 in the RE process, segmentation and surface fitting. The algorithm is iterative and each new scan path is planned individually based on the information available at that moment. Each scan path creates a pointcloud that represents some part of the object, and we are thus faced with the problem of merging local pointclouds into a global model. We also have to ensure that all parts of the object are scanned and establish the end criterion that lets us know when the process is finished. The survey by Scott and Roth [8] describes several planning techniques that could be used for this purpose. We decided to develop a method influenced by the OCS model published by Milroy, Bradley and Vickers in [9]. The reason for this choice is mainly that the OCS model facilitates the merging of individual scans into a global model; it also reduces the amount of data compared to a triangulated pointcloud but still captures sufficient information about the surface of the object to support path planning.

3.1. The OCS model

The OCS model (Orthogonal Cross-Sections) is created by first triangulating each local pointcloud and then intersecting it with a number of equispaced x, y and z cutting planes, thus creating three curve sets referred to as a local section. Each local section is then merged with the global model (the sections from previous scan paths) by discarding portions that overlap, adding the new sections and making appropriate connections to form continuous curve segments. Free curve ends indicate portions of the object not yet scanned and are thus used to plan the next scanning path.

3.2. The local section

As described in [7], pointclouds generated by scan paths are registered by the Varkon CAD system [10] and stored as MESH objects. A local section is created by intersecting a MESH with planes parallel to the global yz-, xz- and xy-plane respectively. The same planes are used for all local sections.
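As a concrete illustration of how a local section could be built, the following is a minimal Python sketch. All names are our own, and triangles are assumed to be plain triples of (x, y, z) points rather than Varkon MESH objects: each triangle is intersected with a cutting plane, and the resulting segments are chained into polylines.

```python
# Sketch of local-section creation: intersect a triangle mesh with a
# cutting plane z = z0 and chain the resulting segments into polylines.

def tri_plane_section(tri, z0, eps=1e-9):
    """Return the segment where a triangle crosses the plane z = z0, or None."""
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = a[2] - z0, b[2] - z0
        if abs(da) < eps and abs(db) < eps:      # edge lies in the plane
            return (a, b)
        if da * db < 0:                          # edge crosses the plane
            t = da / (da - db)
            pts.append(tuple(a[j] + t * (b[j] - a[j]) for j in range(3)))
    return tuple(pts) if len(pts) == 2 else None

def chain_segments(segs, eps=1e-6):
    """Connect segments that share an end point into ordered polylines."""
    same = lambda p, q: all(abs(p[i] - q[i]) < eps for i in range(3))
    segs = list(segs)
    polylines = []
    while segs:
        line = list(segs.pop())
        grown = True
        while grown:                              # grow at both ends
            grown = False
            for k, (p, q) in enumerate(segs):
                if same(line[-1], p):   line.append(q)
                elif same(line[-1], q): line.append(p)
                elif same(line[0], p):  line.insert(0, q)
                elif same(line[0], q):  line.insert(0, p)
                else:
                    continue
                segs.pop(k)
                grown = True
                break
        polylines.append(line)
    return polylines
```

Running all three plane families (yz, xz, xy) over a mesh with this kind of routine yields the three curve sets of a local section.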
In [9] the authors use equispaced planes with a distance of 3 mm. In order to get a reasonable reduction of data, the distance between two parallel planes, the OCS distance, should be large compared to the size of the MESH triangles. Our equipment is designed to scan objects about 5–10 times larger than those Milroy et al. [9] used in their experiment. The cross-section distance used in the test examples presented here is 5 or 10 mm. For each plane we find all triangles that it intersects. The result is a set of intersection lines, one set for each plane. Next, we connect the lines into continuous sets. This is done by connecting each line in a plane with a line in the same plane that shares one of its end points. The result is one or more ordered linesets representing continuous portions of the surface of the object. The next step in creating the local section is to reduce the number of lines in each lineset using an algorithm that removes lines in regions with low curvature. This is done by a simple chord height calculation; two lines that do not contribute enough are discarded if they can be replaced by a new line
which is not too long. The minimum chord height in our tests was set to 0.01 mm. By using a value which is low compared to the system accuracy we do not add approximation errors here. Finally, we calculate approximate surface normal directions for all points in each lineset. This is done by interpolating the averaged normals of the MESH triangles connected to the vertices of the MESH edge that was used to create each point.

3.3. Merging of local sections

The first local section created is saved as the initial global model. Additional sections are then merged with the global section one at a time until all portions of the object are scanned. A local section is merged with the global model by checking for overlap in each plane of the local section. If the global model contains a lineset in the plane of a local section and the distance between a global end point and the local lineset is small, we may have an overlap. This threshold distance, OCSNEARLIMIT, should be set to a value guaranteed to be higher than the worst case system position error. The threshold value should also be compensated for the effect of the angle between the crossing-plane normal and the surface normal at the end point. In our experiments we used NEARLIMIT = 1 mm (which is bigger than the system position accuracy) and OCSNEARLIMIT = NEARLIMIT/(1 − cos(α)), where α is the angle between the crossing-plane normal and the surface normal at the end point in question. If some part of the object is very thin and we merge sections from different sides of the object, we could find points that are close to linesets but belong to opposite sides. To ensure that overlap really exists, we therefore also check that the normals of points involved in the overlap do not have opposite directions. End points that do not overlap indicate portions of the object not yet scanned. We call such points air points.

3.4. Air point classification

Air points are classified as follows:
(1) BOUNDINGBOX — The point is on the border of the bounding box. Such points are not used when planning future scans.
(2) VOLUME — The point is on the border of the volume scanned so far.
(3) INTERNAL — The point lies inside the scanned volume. In this case the following subcategories apply:
(a) STEEP — The surface of the object gradually turns away from the line of sight until it becomes too steep to be detected.
(b) CORNER — The point is at the corner of a sharp drop-off.
(c) OCCLUSION — The surface beyond the point was occluded relative to the line of sight from either the laser or the camera.
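The angle-compensated overlap test of Section 3.3 could be sketched as follows. The function names are our own, and the cap that keeps the threshold finite as α approaches zero is our addition, not taken from the paper; the 1 mm NEARLIMIT is the value used in our experiments.

```python
from math import cos, radians

NEARLIMIT = 1.0  # mm, chosen above the worst-case system position error

def ocs_near_limit(alpha_deg, near_limit=NEARLIMIT, cap=50.0):
    """OCSNEARLIMIT = NEARLIMIT / (1 - cos(alpha)).

    alpha is the angle between the crossing-plane normal and the surface
    normal at the end point.  The cap (an assumption added here) keeps the
    threshold finite as alpha approaches zero."""
    denom = 1.0 - cos(radians(alpha_deg))
    return cap if denom <= near_limit / cap else near_limit / denom

def overlaps(dist, alpha_deg, n_global, n_local):
    """An end point overlaps a lineset if it lies within the compensated
    threshold and the surface normals do not point in opposite directions
    (the guard against merging opposite sides of a thin object)."""
    dot = sum(a * b for a, b in zip(n_global, n_local))
    return dist <= ocs_near_limit(alpha_deg) and dot > 0.0
```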
Fig. 1. Schematic 2D case of border classification and estimation of expected view direction. Black arrows show surface normals, tangents and view directions at the air points. Grey arrows show expected view directions for unseen parts of the object. Border classes: V = Volume, C = Corner, O = Occlusion, S = Steep.
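The purely geometric subclassification of INTERNAL air points could be sketched as below. The ray-test predicate is an assumed helper standing in for the global-mesh intersection test, while the 50◦ limit and the order of the tests follow the rules stated in the text.

```python
from math import acos, degrees

STEEP_LIMIT = 50.0  # degrees; margin below the ~60-70 degree visibility limit

def classify_internal_air_point(normal, sight_laser, sight_camera, ray_hits_mesh):
    """Subclassify an INTERNAL air point as STEEP, OCCLUSION or CORNER.

    normal        -- unit surface normal at the air point
    sight_*       -- unit vectors from the air point towards laser and camera
    ray_hits_mesh -- assumed helper: does a line of sight hit the global mesh?
    """
    def angle(u, v):
        d = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
        return degrees(acos(d))
    # (1) surface too oblique to either line of sight -> STEEP
    if angle(normal, sight_laser) > STEEP_LIMIT or \
       angle(normal, sight_camera) > STEEP_LIMIT:
        return "STEEP"
    # (2) something blocks the laser or the camera -> OCCLUSION
    if ray_hits_mesh(sight_laser) or ray_hits_mesh(sight_camera):
        return "OCCLUSION"
    # (3) otherwise the point sits at a sharp drop-off -> CORNER
    return "CORNER"
```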
These categories are similar to what Milroy et al. [9] described. Their method uses the variation of intensity in the reflected laser beam to detect situation (3)(a). They showed that this works in good light conditions and with the surface of the object evenly coated with diffusive paint. We used an alternative method to distinguish between the subcases (a)–(c) which is purely geometrical:
(1) If the angle between the surface normal and the line of sight to the laser or the camera is more than 50◦ the air point class is STEEP.
(2) If not, follow the 3D lines of sight from the air point to the laser source and the camera respectively. If any part of the global mesh is detected along these lines, the air point class is OCCLUSION.
(3) Remaining air points are assigned class CORNER.
The value 50◦ is selected so that we still have a margin to the angle where the scanner’s ability to detect a profile vanishes (at approx. 60◦–70◦).

3.5. The collision model

In order to avoid collision during scanning or repositioning, a collision model is automatically maintained. Collision checks are done between scanner and object, but also between scanner and robot. The scanner and robot are modelled coarsely using a limited number of bounded planes. The object is modelled using space cells, see Fig. 2. Due to the stand-off between the scanner head and the scan window, a coarse model is sufficient also for the object. The cell size in our experiments is set to 30 mm (compare to the 150 mm scan window stand-off). Before Shape scan begins, cells are created to represent the volume of the bounding box as provided by Size scan. Initially
Fig. 2. Space cell model used for collision test (the car body example).
all cells are labelled UNKNOWN. During Shape scan, cells are relabelled as EMPTY or FULL. The collision model is updated after each scan path by letting the registered scan windows perform space carving on the model. Each cell hit by a laser beam more than twice is labelled EMPTY. When the local OCS sections are created we also compare all new OCS curves with the collision model. Each cell hit by a curve is labelled FULL and is no longer subject to change. See Fig. 2, where the dark (red) squares are confirmed FULL and the grey (green) squares are UNKNOWN. Both UNKNOWN and FULL cells will be passed to the motion control procedure and used for collision checks.

3.6. Initial and final scanning

Shape scan is done in two steps. First the sides of the object’s bounding box are scanned. If the height of the bounding box is less than twice the depth of view of the scanner, initial scanning is performed from the top surface; otherwise it is done from the four vertical sides. This initial step of the scanning process creates an OCS model which is used as a starting point for the final part of Shape scan. During the final part of Shape scan each path is planned individually on the basis of the current global OCS model. The planning of each new path is then based on the following:
(1) Information associated with each air point makes it possible to guess a good viewing direction for unmapped regions.
(2) The ability of our equipment to move and reorient freely in 3D space makes it possible to plan scan paths that follow 3D curves.
(3) The freedom that the size of the scan window gives us can be used to relax the scan path so that crossing scan windows are avoided.
(4) The iterative scanning of the surface is mainly performed from the outside, approaching the centre of the bounding box. This enhances the reachability of unmapped regions due to the space carving of the collision model that is performed in parallel.
(5) A visibility check is done and the path is adjusted to decrease the amount of expected occlusions.
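A minimal sketch of the space-cell collision model of Section 3.5 follows. The grid indexing and class layout are our assumptions; the 30 mm cell size, the more-than-two-hits carving rule, and the UNKNOWN/EMPTY/FULL labels are from the text.

```python
# Space-cell collision model: cells start UNKNOWN, laser rays carve cells
# EMPTY, and OCS curve points lock cells as FULL.

UNKNOWN, EMPTY, FULL = 0, 1, 2

class CellModel:
    def __init__(self, nx, ny, nz, cell=30.0):
        self.cell = cell
        self.state = {(i, j, k): UNKNOWN
                      for i in range(nx) for j in range(ny) for k in range(nz)}
        self.hits = {key: 0 for key in self.state}   # laser-beam hit counter

    def index(self, p):
        return tuple(int(c // self.cell) for c in p)

    def carve(self, p):
        """A laser beam passed through point p: after more than two hits the
        cell is labelled EMPTY (FULL cells are never changed back)."""
        key = self.index(p)
        if key in self.state and self.state[key] != FULL:
            self.hits[key] += 1
            if self.hits[key] > 2:
                self.state[key] = EMPTY

    def confirm(self, p):
        """An OCS curve point fell in the cell: label it FULL for good."""
        key = self.index(p)
        if key in self.state:
            self.state[key] = FULL

    def blocking_cells(self):
        """Cells passed to motion control: UNKNOWN and FULL both block."""
        return [k for k, s in self.state.items() if s != EMPTY]
```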
3.6.1. Finding a scan path tube

Path planning as described by Milroy et al. [9] is restricted to linear scan paths parallel to the axes of the basic coordinate system and with constant viewing direction. In our case we are free to scan along curved paths in 3D space with continuously changing viewing direction, as long as we do not let scan windows cross. A scan path is generated by finding a “tube” that includes a set of air points that we would like to eliminate in the next scan. Only neighbouring air points with associated surface normals and surface tangents within specified tolerances are considered. This scan path tube will later serve as a target for the actual scan path generation. Finding a scan path tube is done as follows:
(1) Find the air point furthest away from the centre of the bounding box. This point is denoted the pivot point. For upcoming iterations of the path planning procedure, air points that have been covered by a scan volume will have a penalty added to prevent them from being used again until all air points are processed.
(2) Find all neighbours to the pivot point within a radius of three times the OCS distance. Only air points whose surface normal does not differ by more than 60◦ from the surface normal of the pivot point are considered. This is to avoid influence from air points on the opposite side of a thin object.
(3) Calculate average values for position, surface normal and surface tangent for this initial group of points. The average position will be the start for a curve growing procedure in two directions, A and B. The average surface normal and tangent are used as initial search values for A and B.
(4) Select initial search directions for A and B by finding the two points in the group at the largest distance from each other.
An iterative growing procedure is now performed. Initially both directions A and B are labelled GROWING.
(1) If direction A is GROWING then continue with step (2), otherwise jump to step (7).
(2) Use the current search position and search vector for direction A to find air points inside a cone. The cone starts at the search position and points in the search vector direction. The cone length is five times the OCS distance, and the radii at its ends are one and three times the OCS distance respectively. The search is restricted to air points that have not yet been involved in the creation of this tube and whose surface tangent and surface normal do not differ by more than 90◦ and 60◦ from the current target values for surface tangent and surface normal respectively. These coarse values prevent the tube-growing procedure from following, or being influenced by, any “meeting” edge that may fall within the cone.
(3) If the search returns no points, then label direction A as FINISHED and continue to step (5).
(4) Calculate average values for position, surface tangent and normal for this new point group. A new search vector is defined by the direction from the last search position to the
Fig. 3. A telephone handset with scan path tube (dashed), pivot point, initial search circle and cones.
Fig. 4. Example of generated scan path tube.
average position of this point group. These values will be used in the next search loop in direction A.
(5) Fit a tube to the search positions calculated so far using the FitTube algorithm, see Fig. 5. The radius limitation of 12 mm is selected to give a tube diameter of approx. half the scan window width. The curvature limit of 0.012 mm−1 is selected to avoid crossing scan windows. A smaller scan window would allow a more curved path.
(6) If FitTube returns FALSE, then label direction A as FINISHED.
(7) If direction B is labelled GROWING then repeat steps (2)–(6) also for direction B.
(8) When directions A and B are both FINISHED, exit the procedure and keep the last successful tube. Otherwise repeat from step (1).
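The cone search of step (2) in the growing procedure might look as follows. The vector helpers and argument layout are our assumptions; the cone dimensions (five OCS distances long, radii of one and three OCS distances) and the 60◦/90◦ angular tolerances are the values given above.

```python
from math import acos, degrees, sqrt

def in_search_cone(p, apex, direction, ocs_dist,
                   normal, tangent, target_normal, target_tangent):
    """Candidate test for one growth step of the scan path tube.

    The cone starts at the search position (apex), points along the unit
    search vector, is five OCS distances long, and its radius widens
    linearly from one to three OCS distances.  Candidates must also keep
    their surface normal within 60 degrees and their surface tangent
    within 90 degrees of the current target values."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def ang(a, b):
        c = dot(a, b) / (sqrt(dot(a, a)) * sqrt(dot(b, b)))
        return degrees(acos(max(-1.0, min(1.0, c))))

    v = sub(p, apex)
    along = dot(v, direction)                 # direction assumed unit length
    if not 0.0 <= along <= 5.0 * ocs_dist:
        return False
    radial = sqrt(max(0.0, dot(v, v) - along * along))
    max_r = ocs_dist * (1.0 + 2.0 * along / (5.0 * ocs_dist))
    return (radial <= max_r
            and ang(normal, target_normal) <= 60.0
            and ang(tangent, target_tangent) <= 90.0)
```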
The ends of the tube are finally extended twice the OCS distance in both directions. Fig. 3 shows a 2D top view of the tube-growing procedure. An example of a generated scan path tube is shown in Fig. 4. For the remainder of the path planning procedure only air points found during this iterative tube-growing procedure are considered.

3.6.2. Selecting scan window planes

The scan path tube represents a volume that shall be covered by the next scan, hopefully eliminating as many air points as possible. What remains to determine is from which directions to view the tube and how to position the scan volume around the tube to cover as much new surface as possible. We then sample discrete points on the tube centre curve (simply denoted curve below). The sampling distance used is five times the OCS distance. At each point we determine a scan window plane as follows, see also Fig. 6:
(1) Create a local coordinate system C1 with the sampled point as its origin, y axis near the average surface normal
Fig. 5. The FitTube algorithm.
direction of neighbouring air points and z axis in the curve tangent direction.
(2) For the first sampled point, compare the x axis of C1 with the average surface tangent associated with the neighbouring air points. If they point in opposite directions, then flip C1, including all upcoming instances of C1, so that the z axis points in the negative tangent direction of the tube curve.
(3) As the laser source and the projected laser line by definition will lie in the scan window plane, we choose to rotate the plane +15◦ around the x axis to retrieve coordinate system C2. This is to get the expected surface normal to point approximately between the laser source and the camera (for our equipment this relative angle is approximately 30◦).

3.6.3. Adjusting the scan window position

We have now determined the scan window planes, represented as the xy planes of coordinate systems C2. The y axis points near to the initial averaged surface normal direction and can be seen as an initial estimate of a suitable view direction for the not yet seen surface. The scan window plane will not change from here, but we will make 2D adjustments to the viewing direction and the position of the scan window in C2’s xy-plane. Due to the curvature restriction applied in the FitTube algorithm, and the limited size of the scan window, we are now
Table 1
Adjustment angles, α

Border type    Adj. angle, α
Volume         0
Corner         β − 60◦
Occlusion      β + 60◦
Steep          0
Fig. 7. Relative placement in the 2D scan window plane of coordinate systems C2–C5.
Fig. 6. Coordinate system C1 and C2.
sure that scan windows will never cross. This is a requirement for the input to the scan procedure described in [7]. Let us now recall the border classification according to Section 3.4. The border class tells us the reason for an air point to appear, and may give a hint in which directions we would expect the unseen surface to continue, see Fig. 1. For each air point we retrieve the border class and apply an adjustment angle according to Table 1. These adjustment angles are the same as proposed by Milroy et al. [9]. This makes sense, since this angle is close to the visibility limit at steep surfaces. The adjustments will, after averaging, be performed as a rotation around the z axis of coordinate system C2. β is the signed angle between the projected surface normal and the projected view direction associated with the air point. The positive direction is defined from the surface tangent towards the surface normal, see Fig. 1. For each scan window plane an average angle is then calculated from the air points within a distance of 10 times the OCS distance. After applying the averaged adjustment angle as a rotation around the z axis of coordinate system C2, we retrieve a new adjusted coordinate system C3. The next step is to position the scan window. Along the view direction the position is fixed at the used tool centre point, which for the Shape scan procedure is set to a distance with optimal performance (laser focus) for our equipment. Along the x axis the scan window is moved to the side so that we cover as much new space as possible while still including the scan path tube. That is, for a ragged edge on the previous scan, which yields a large tube radius, we will lose some space for detecting new surface. A smooth edge, however, yields a small tube radius and hence positions the scan window to detect as much new surface as possible. The new position retrieved after this translation along the x axis is denoted coordinate system C4. See Fig. 7.
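The Table 1 adjustment and its averaging over nearby air points could be sketched as below. The data layout (air points as position/class/β tuples) is our assumption; the per-class angles and the 10-OCS-distance averaging radius are from the text.

```python
# Sketch of the view-direction adjustment: each air point proposes an angle
# from its border class (Table 1), proposals within ten OCS distances of a
# sample point are averaged, and the average is applied as a rotation about
# the z axis of coordinate system C2.

def adjustment_angle(border_class, beta):
    """beta: signed angle between projected surface normal and view direction."""
    table = {
        "VOLUME":    0.0,
        "CORNER":    beta - 60.0,
        "OCCLUSION": beta + 60.0,
        "STEEP":     0.0,
    }
    return table[border_class]

def averaged_adjustment(air_points, sample_pos, ocs_dist):
    """air_points: iterable of (position, border_class, beta) tuples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    angles = [adjustment_angle(c, b) for p, c, b in air_points
              if dist(p, sample_pos) <= 10.0 * ocs_dist]
    return sum(angles) / len(angles) if angles else 0.0
```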
3.6.4. Visibility check

Coordinate system C4 defines a viewing direction as near as possible to the estimated surface normal of the unseen surface, but the surface would be reasonably well detected also from other directions. This allows us to make further adjustments to avoid foreseeable occlusions. So, for each coordinate system C4, we test a range of ±20◦ rotation around its z axis for intersection with the points scanned so far (the OCS curves). The check is done by following the two lines of sight from C3’s origin to the optical centres of the camera and the laser respectively. Hence, for each sample point along the planned scan path we end up with a possible adjustment range of −20◦ to +20◦ if no occlusions were detected, or a narrower range if some occlusions were detected. Final adjustments are calculated within the derived ranges in a common operation for all sample points, so that the adjustment changes smoothly along the scan path. After applying these visibility adjustments to all instances of coordinate system C4 we retrieve instances of coordinate system C5, see Fig. 7.

3.7. Stop criteria

Theoretically, Shape scan should end when no more air points exist. However, our tests have shown that noise from the measuring, self-occluding geometry or areas out of reach will in most cases make this impossible. In [9] the authors restrict each air point to be used as pivot point only once; in this way the procedure terminates when no more unused air points are available. In our case, with equipment that allows larger objects but also introduces more uncertainty, this has proven unsatisfactory, since it leads to far more scan attempts than are actually needed. We have chosen to stop when all air points have been covered by a scan volume once. This may in some cases leave regions unscanned, see, for example, Fig.
9(d), where the limitation in the equipment’s working range caused a part of the surface facing downwards to remain unscanned.
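The per-sample visibility range of Section 3.6.4 could be sketched as below. The occlusion predicate is an assumed helper standing in for the two line-of-sight tests against the OCS curves, and the smoothing of the final adjustments across sample points is omitted here for brevity.

```python
# Sketch of the visibility check: each sample point starts with a +/-20
# degree adjustment range about C4's z axis; trial angles whose lines of
# sight hit already-scanned geometry are removed, narrowing the range.

def visible_range(occluded, lo=-20.0, hi=20.0, step=5.0):
    """Return the widest contiguous occlusion-free sub-range of [lo, hi].

    occluded(angle) -- assumed predicate: does either line of sight (to
    the laser or to the camera) hit the OCS curves at this trial rotation?
    Returns (0.0, 0.0) if every trial angle is occluded."""
    best = (0.0, 0.0)
    run = []
    a = lo
    while a <= hi + 1e-9:
        if occluded(a):
            run = []                      # occlusion breaks the free run
        else:
            run.append(a)
            if run[-1] - run[0] >= best[1] - best[0]:
                best = (run[0], run[-1])  # keep the widest free run
        a += step
    return best
```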
Table 2
Scanner data

Camera resolution        768 × 576 pix
Scan window stand-off    150 mm
Depth of view            110 mm
Width at min dist.       40 mm
Width at max dist.       60 mm
4. Examples and results

4.1. Experimental setup

The hardware of our system, presented in [6], is based on a laser profile scanner mounted on an industrial robot, an ABB IRB140 with a turntable and S4C controller. Compared to the original setup, the tests presented here were made using a new scanner head with a field of view better suited to the robot size and the focal depth of the laser line. See Fig. 8 and Table 2.

Fig. 8. Robot, turntable and new scanner head with laser and camera.

4.2. User input

Before scanning the user supplies the following data:
(1) The OCS distance.
(2) Optional restriction of the working volume:
(a) z-upper limit
(b) z-lower limit
(c) Radius
(3) Optional bounding box of a support structure.
Specifying the lower z limit is useful to avoid the support structure from becoming a part of the object's bounding box. The z-upper limit and the radius may be specified to reduce the number of scans needed for Size scan, but they do not influence the final result. If a support structure is specified, the program considers it a part of the collision model, and air points appearing inside the support structure are not considered during path planning.

4.3. Test examples

We have tested the system on three objects with different geometrical characteristics. See Figs. 9–11 and Table 3 for results.

4.3.1. The handset

The first example is a standard telephone handset in white plastic material. Before scanning we prepared the handset by filling the holes for cable connection, microphone etc. Geometrical characteristics of the handset:
• The object size is in the same range as the scanner's depth of view.
• There are no difficult self-occluding parts.
• The object has a vertical pin as support structure.

4.3.2. The car body

The second object is a scaled clay model of a car body made by a design student. Only the left half is modelled, so we used the possibility to define the bounding box of a support structure to replace the right half of the car body. In this way no scans were made from that direction. Geometrical characteristics of the car body:
• The object size is 2–4 times the scanner's depth of view.
• The object is smoothly shaped.
• There are no difficult self-occluding parts.
• The bottom side of the model is omitted by setting the z-lower limit.
Table 3
Statistics for the example objects

                                   Handset          Clay model         Watering can
User specified:
Working volume z-min               250              150                320
Working volume z-max               500              400                650
Working volume radius              150              200                250
SHAPE scan OCS distance            5                10                 10
Defined support structure          Vertical pin     Not modeled half   Vertical column

Retrieved data:
Number of scans, SIZE              12               48                 80
Retrieved bounding box size        215 × 78 × 77    420 × 155 × 150    458 × 179 × 282
Number of scans, SHAPE, initial    2                4                  28
Number of scans, SHAPE, final      11               5                  43
Fig. 9. Shape scan, the handset example: (a) Photo, (b) OCS model after initial two top scans, (c) OCS model after seven scans, (d) OCS model completed in 13 scans. Figure (b) and (c) also show the planned path for the next scan.
Fig. 10. Shape scan, the car example: (a) Photo, (b) OCS model after initial four top scans, (c) OCS model after six scans, (d) Completed in nine scans (shaded). Figure (b) and (c) also show the planned path for the next scan.
4.3.3. The watering can

The last example is a watering can in glossy plastic material. To avoid unwanted reflections of the laser beam we coated the watering can with grey paint before scanning. Geometrical characteristics of the watering can:
(1) The object size is 2–4 times the scanner’s depth of view.
(2) The object has self-occluding parts (the handle and the water outlet).
(3) The height of the object requires the initial Shape scan to be performed from the sides.
(4) The bottom side of the can is omitted by setting the z-lower limit.
4.3.4. Discussion

It seems that large objects with complex shapes benefit more from the ability to scan along curved paths than smaller objects do. The handset in the first example could probably just as well have been scanned with paths along straight lines. The handset was raised from the turntable with a vertical pin in order to make it possible to scan from all sides. As shown in Fig. 9(d), this was not completely successful. The reason is the stand-off between the imaging window and the robot wrist centre. We could have reached the side facing down by elevating the object further from the turntable, but then we would not have reached the side facing up. It seems that we have to accept the limited working range of our robot. If an
Fig. 11. Shape scan, the watering can example: (a) Photo, (b) Completed in 71 scans.
object needs to be seen from all sides, a practical solution could be to perform two scan operations with different setups and then merge the models using a method such as ICP (Iterative Closest Point).

The car body is larger than the handset, and its relatively simple shape makes it possible to scan at a good angle. The automatically generated path plan for the car model thus seems similar to what a human being would plan using a manually operated scanner of the same size.

The shape of the watering can is more complex and requires more scans for full coverage. In this case we believe that the automatically generated path plan differs from what a human being would produce, mainly because a human can see the entire object from the start. The automatic planning procedure starts with a bounding box and then only sees what it has scanned so far. Like a human, though, it always tries to maintain a good angle between the scanner and the surface. A possible future development could be to use information from camera pictures of the object to give the path-planning procedure better a priori knowledge of the overall shape of the object.

The watering can needed far more scans than the car body, some of them also shorter. This is partly because the surface area of the watering can is larger, but also because of self-occlusions and large curvatures in some places, which make it impossible to generate long scan paths without scan windows crossing.

5. Conclusions and future work

In this paper we have presented a method for automatic path planning capable of planning paths along 3D curves with continuously changing viewing direction. The advantage of this method is that it automatically adapts to the shape of the object and thus, if possible, will scan the object using an orientation of the scanner that gives the most accurate result. It will also try to avoid occlusions, which is important if objects with complex shape are to be fully covered.
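The merging step suggested above, aligning two separately acquired scan models with ICP, can be sketched as follows. This is not part of the system described in this paper; it is a minimal point-to-point ICP in Python with brute-force nearest-neighbour matching, and all function names are illustrative.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q
    (Kabsch/SVD method); this is the inner step of each ICP iteration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

def icp(P, Q, iters=30):
    """Align point set P to Q by repeatedly matching each point of P to its
    nearest neighbour in Q and solving for the best rigid transform."""
    P = P.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in Q for every point of P
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        matched = Q[d.argmin(axis=1)]
        R, t = best_rigid_transform(P, matched)
        P = P @ R.T + t
    return P
```

In practice one would replace the brute-force matching with a k-d tree and add an outlier threshold, since points seen in only one of the two scans otherwise pull the alignment away from the correct pose.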
In Table 1 of [8], Scott et al. list a number of requirements for view planning. We believe that our method fulfils the following requirements:

• Generalizable algorithm. Our method is general in the sense that it is adaptable to a robot of a different size or a scanner with different specifications.
• Generalizable viewpoints. Our scanner moves along 3D curves with continuously changing viewing direction.
• View overlap. Successive scans in our method always overlap.
• Self-terminating. Our method is self-terminating.
• Limited a priori knowledge. Our method does not need any a priori knowledge. The bounding box and centroid are established automatically.
• Shape constraints. Our method does not impose shape constraints.
• Frustum. The sensor frustum is modelled, in our text referred to as the scanning window or scanning volume.
• Shadow effect. We model shadow effects (occlusions) as part of the classification of air points, but also to decrease the number of occlusions.
• 6D pose. Our method uses six degrees of freedom.
• Strategies. Scott et al. also argue for the possibility to divide the view planning problem into stages using different techniques at each stage. We use several stages: Size scan, initial Shape scan and final Shape scan.

Those of Scott et al.'s requirements that are not fulfilled are either not applicable or remain to be addressed. For example, we would like to develop an error model, which would make it possible to implement support for a Model Quality Specification. To make the system easy to use, practical enhancements are needed. One example is automatic calculation of parameters such as the OCS distance or scan speed after the Size scan. Also, a set of standard support fixtures could be modelled and made easily available to the user.
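The frustum requirement above, deciding whether a surface point lies inside the scanning window, can be illustrated with a short sketch. The box-shaped volume, the parameter names and the coordinate conventions below are our illustrative assumptions, not the actual sensor model of the system described here.

```python
import numpy as np

def in_scan_window(p, origin, view_dir, up, depth_range, width, height):
    """Rough test of whether point p falls inside a box-shaped scanning
    volume in front of the scanner: within [near, far] along the viewing
    axis and within a width x height window across it."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    right = np.cross(view_dir, up)          # build a scanner-local frame
    right = right / np.linalg.norm(right)
    up2 = np.cross(right, view_dir)
    rel = p - origin
    d = rel @ view_dir                      # stand-off along the view axis
    x = rel @ right                         # lateral offset
    y = rel @ up2                           # vertical offset
    return (depth_range[0] <= d <= depth_range[1]
            and abs(x) <= width / 2
            and abs(y) <= height / 2)
```

A path planner can run this test for the measured points near a candidate scan path to verify that the path keeps the surface inside the depth of view, and an analogous ray test against already-classified points gives a first approximation of the shadow (occlusion) check.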
References

[1] T. Várady, R.R. Martin, J. Cox, Reverse engineering of geometric models — an introduction, Computer-Aided Design 29 (1997) 255–268.
[2] R. Pito, R. Bajcsy, A solution to the next best view problem for automated CAD model acquisition of free-form objects using range cameras, Tech. Rep., GRASP Laboratory, Department of Computer and Information Science, University of Pennsylvania. URL: ftp://ftp.cis.upenn.edu/pub/pito/papers/nbv.ps.gz.
[3] V. Chan, C. Bradley, G. Vickers, A multi-sensor approach for rapid digitization and data segmentation in reverse engineering, Transactions of the ASME, Journal of Manufacturing Science and Engineering 122 (2000) 725–733.
[4] V.H. Chan, M. Samaan, Spherical/cylindrical laser scanner for geometric reverse engineering, in: Proceedings of SPIE — The International Society for Optical Engineering, vol. 5302, Three-Dimensional Image Capture and Applications VI, 2004, pp. 33–40.
[5] M. Callieri, A. Fasano, G. Impoco, P. Cignoni, R. Scopigno, G. Parrini, G. Biagini, Roboscan: An automatic system for accurate and unattended 3D scanning, in: Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization, and Transmission, 2004.
[6] S. Larsson, J. Kjellander, An industrial robot and a laser scanner as a flexible solution towards an automatic system for reverse engineering of unknown objects, in: Proceedings of ESDA04 — 2004 7th Biennial Conference on Engineering Systems Design and Analysis, 2004.
[7] S. Larsson, J. Kjellander, Motion control and data capturing for laser scanning with an industrial robot, Robotics and Autonomous Systems 54 (2006) 419–512.
[8] W.R. Scott, G. Roth, J.-F. Rivest, View planning for automated three-dimensional object reconstruction and inspection, ACM Computing Surveys 35 (1) (2003) 64–96.
[9] M. Milroy, C. Bradley, G. Vickers, Automated laser scanning based on orthogonal cross sections, Machine Vision and Applications 9 (1996) 106–118.
[10] Varkon homepage. URL: www.tech.oru.se/cad/varkon.

Sören Larsson received his M.Sc. in mechanical engineering from Örebro University, Sweden, in 2001. He is currently a Ph.D. student at the Department of Technology, working on the development of new iterative algorithms for automated reverse engineering of unknown objects.
J.A.P. Kjellander received his M.Sc. in mechanical engineering at Linköping University, Sweden, in 1979. He received his Ph.D. from the same university in 1984, working with smoothing algorithms for curves and surfaces in Computer Aided Design. After this he started Microform AB and led the development of Varkon. He left Microform AB in 1997 to return to the academic world, this time at Örebro University in Sweden, where he became an assistant professor in mechanical engineering and is currently the head of the Department of Technology. He now shares his time between teaching, administration and research related to geometrical modelling.