Robotics and Autonomous Systems 32 (2000) 145–152

Planning strategy for task of unfolding clothes

Kyoko Hamajima∗, Masayoshi Kakikura 1

Department of Electronic Engineering, Tokyo Denki University, 2-2, Kanda-Nishiki-cho, Chiyoda-ku, Tokyo, 101-8457 Japan

Received 15 February 1999; received in revised form 20 August 1999

Abstract

Research on saving labor in household work and on automation by housekeeping robots has gained importance due to the growing population and the fast advancement of electronics. Improvements are made in such a way that a robot can deal with a variety of "nonsolid" household objects. We have studied the technology by which a robot handles nonsolid objects, for example putting clothes in order at a specified site. Concrete subtasks involve removing clothes from the washing machine, spreading them out, classifying, folding, and putting them in a specified place. This paper proposes an "unfolding" planning strategy. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Nonsolid-form objects; Planning; Clothes; Housekeeping robot

1. Introduction

In our laboratory, we are developing several kinds of housekeeping robots. The problems specific to housekeeping robots are huge in number; above all, such robots have to handle the variety of objects that exist in a house, including various kinds of "nonsolid" household objects. However, with the current state of the art in robotics, a robot cannot manage nonsolid objects sufficiently. The problem is that it is very difficult for robots to recognize and grasp nonsolid objects such as clothes: since nonsolid objects cannot be represented by a geometric model, there is no stable posture for grasping. We are trying to solve this problem through the development of a robot which can tidy up clothes, i.e. a robot that takes out one piece of cloth as the first step, then unfolds it, and finally folds it neatly.

∗ Corresponding author. Present address: Physical Engineering Safety Research Division, National Institute of Industrial Safety, 1-4-6, Umezono, Kiyose, Tokyo, 204-0024 Japan. Tel.: +81-424-91-4512; fax: +81-424-91-7846. E-mail addresses: [email protected] (K. Hamajima), [email protected] (M. Kakikura).
1 Tel.: +81-3-5280-3348; fax: +81-3-5280-3565.

In this paper, we especially describe how to isolate one cloth from a washed mass and how to classify it. The experimental environment is the following. OS: Solaris 2.4; machine: Sun SS20; language: EusLisp ver. 8.05 [1]; image processing library: SPIDER; image input device: FUJITSU Color Tracking Vision.

1.1. Goal of task and class of clothes

The flow of the robot's total task is shown in Fig. 1. The task consists of three parts: isolating, unfolding, and folding; the unfolding task itself consists of three subtasks: rehandling, classifying, and shaping. The robot performs these tasks until no washed mass is left on the working table.
• Isolating task. In this task, one cloth is isolated from the washed mass. For this purpose, a top view image of the washed mass is segmented into regions of each cloth using the color of the image. The most appropriate region is selected as the grasping region, and the grasping point is determined. The cloth is hung up by holding the grasping point.

• Unfolding task. We plan to use two robot arms in the unfolding task. Therefore, at first, two grasping points on the clothes to be unfolded should be decided, and after that the robot holds these grasping points. This rehandling is done not only right after the isolating task but also during the unfolding task: if the cloth is not properly spread after some unfolding steps, the form of the clothes is made simpler by the rehandling process, and the robot tries the unfolding task again. The class of the clothes hung up at the two grasping points is inferred from the connection of regions, since the form of hung-up clothes is simpler than that of clothes on the working table; this process is called the classifying task. After these steps, the robot puts the clothes on the working table and unfolds them; this process is called the shaping task.
• Folding task. The robot folds the clothes. Usually when we fold clothes, the clothes are first roughly unfolded, next shaped with local deformation, and finally folded.
There may exist many methods for unfolding clothes, but we propose a rehandling subtask that uses two hemlines on the clothes for grasping. In this subtask, two hemlines on the clothes are detected and grasped by the two robot arms, and the clothes are unfolded by extending the two arms. The form of the clothes can be extended further than in the hung-up state right after the isolating task. In this situation, the classifying and shaping tasks become simpler, since characteristic features are detected easily.
We have a lot of clothes to wash at home. In our research, we restrict the class of clothes managed by the robot to only four, namely towel, socks, shirt, and pants. For tidying up the clothes, the robot has to infer the class of each cloth.

Fig. 1. Flow of task.

2. Execution of isolating task

In this task, it is attempted to isolate a single cloth from the washed mass, since the class of an isolated cloth is inferred only in later tasks. The process of the isolating task is shown in Fig. 2.

2.1. Hand for grasping and way of handling [2]

The hand for grasping clothes is shown in Fig. 3, left. This hand has a parallel gripper mounted on a rotating system, and its hollow-body rubber tires serve as shock absorbers. The tires can be turned in both directions. The way of grasping clothes is shown in Fig. 3(a)–(c). The hand used in this experiment can grasp clothes of many sizes stably if the grasping point is set properly. The limit on the size of clothes depends on the size of the hand [2].

Fig. 2. Isolating task. (a) A top view image of the washed mass is obtained. (b) The image of the washed mass is segmented into independent regions of each cloth using the color image. (c) The area of each region is measured; the largest region is determined as the region including the grasping point, and then the grasping point is determined. (d) The robot isolates one cloth from the washed mass.
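As a rough illustration only, the loop of Fig. 2 could be sketched as follows in Python; get_top_view, segment_regions, pick_grasping_region, and grasp_at are hypothetical stand-ins for the camera, the segmentation of Section 2.3, the selection rule of Section 2.2, and the arm controller (none of these names come from the paper):

def isolating_task(get_top_view, segment_regions, pick_grasping_region, grasp_at):
    # Repeat steps (a)-(d) of Fig. 2 until no washed mass remains.
    while True:
        image = get_top_view()                    # (a) top view image
        regions = segment_regions(image)          # (b) color-based segmentation
        if not regions:                           # table is empty: done
            break
        point = pick_grasping_region(regions)     # (c) region -> grasping point
        grasp_at(point)                           # (d) isolate one cloth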


Fig. 3. Hand for grasping clothes. (a) The robot presses the hand against the clothes with the parallel gripper opened. (b) The hand takes the cloth up into the rotating tires. (c) When enough cloth is taken up, the parallel gripper is closed.

2.2. Condition of grasping point

We have to take the ability of the hand into account when deciding the grasping point, because the grasping point should be set in the middle of the grasping region. In most cases, if the grasping point is set near the boundaries of the grasping region, the hand may hold the grasping region and another cloth's region together, which means the robot cannot achieve a stable grasp. Therefore the grasping region must be larger than a minimum region, defined as the critical area of cloth that the hand can grasp. A priori knowledge about this minimum grasping region has to be given to the robot; its size is decided from the radius of the minimum circle that includes the grasping hand's area projected on the working plane [3]. The largest of the segmented regions is decided as the grasping region. If the largest region is smaller than the minimum grasping region, the grasping point is set in the middle of the region of the whole washed mass.
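A minimal sketch of this selection rule (our own Python with NumPy; the boolean-mask representation and the centroid as the "middle" of a region are our assumptions):

import numpy as np

def min_grasp_area(hand_radius_px):
    # A priori knowledge given to the robot: area of the minimum circle
    # enclosing the hand's projection on the working plane.
    return np.pi * hand_radius_px ** 2

def choose_grasping_point(regions, whole_mass_mask, hand_radius_px):
    # regions: list of boolean masks, one per segmented cloth region.
    largest = max(regions, key=lambda m: m.sum()) if regions else whole_mass_mask
    if largest.sum() < min_grasp_area(hand_radius_px):
        largest = whole_mass_mask          # fall back to the whole washed mass
    ys, xs = np.nonzero(largest)
    return int(ys.mean()), int(xs.mean())  # centroid as the "middle"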

2.3. Consideration on color [3]

In order to segment the image of the washed mass into regions of each cloth, we use the color information of the image: the three color components (red, green, and blue) are used for segmentation. Since this region segmentation method is based on color information, detection of the regions of each cloth becomes difficult if the color of the washed mass is solid. Thus the color of the washed mass is examined first, and the procedure of region segmentation is changed, if necessary, according to this color check.

For this purpose, the robot examines whether the color of the washed mass is solid or not. Three histograms (R, G, B) of the region of the washed mass are formed from the color image, and the outstanding peak is detected in these histograms. The fraction of pixels in the washed-mass region that correspond to the outstanding peak is then measured; if this fraction is larger than a threshold, the color of the washed mass is regarded as solid. The threshold value was set to 60% through several repeated experiments. The flow of the region segmentation method for deciding the grasping region is shown below (see Fig. 4).
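Before that, as a rough sketch of the solidity check just described (Python with NumPy; the 32-bin quantization and the per-channel reading of "the outstanding peak" are our assumptions, since the paper does not specify them):

import numpy as np

def is_solid_color(rgb_image, mass_mask, threshold=0.60):
    # If the pixels in the most outstanding histogram peak occupy more
    # than ~60% of the washed-mass region in any channel, regard the
    # color of the washed mass as solid.
    best = 0.0
    for c in range(3):                       # R, G, B histograms
        values = rgb_image[..., c][mass_mask]
        hist, _ = np.histogram(values, bins=32, range=(0, 256))
        best = max(best, hist.max() / values.size)
    return best >= threshold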

Fig. 4. Outline of isolating task.


Process of region segmentation method.
1. Three histograms of the components (red, green, blue) of a template region are formed from the color image. The template region is made by image binarization: the contour which forms the boundary between the region of the washed mass and the background is detected by the binarization process, and the internal region of the contour is regarded as the initial template region.
2. Extraction of a region from the image is also based on the binarization technique. The threshold value and the color component are decided by the following process. The outstanding peak (Red_peak, Green_peak, Blue_peak) is detected in each histogram. Then the most outstanding peak among Red_peak, Green_peak, and Blue_peak is chosen as the threshold for extracting a region by binarization. For example, if Red_peak is more outstanding than Green_peak and Blue_peak, then Red_peak is chosen as the threshold and the red component image is used as the input image for binarization.
3. The pixels corresponding to the most outstanding peak are extracted from that component image by binarization, and the regions consisting of connected sets of those pixels are detected. In general, plural connected regions are detected.
4. Processes 1–3 are applied repeatedly to each connected region obtained in process 3, to find smaller connected regions, and also to the other remaining connected regions. A connected region extracted in process 3 may thus be segmented into smaller regions. In this segmentation, a component which was used for detecting a connected region is not used again. For example, if a connected region S1 is extracted from the red component image by binarization, then in the succeeding process the histograms of the blue and green components are formed using S1 as the template region, and the threshold and component for extracting regions are chosen from these two histograms. The pixels corresponding to the most outstanding peak are extracted from that component image by binarization, and the region consisting of those connected pixels is detected. This connected region, say S2, is used as the template region in the following repetition of the segmentation process.
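A compact reconstruction of processes 1–4 in Python follows (not the authors' code); treating "the pixels corresponding to the peak" as a small band around the peak bin, and the minimum-area value, are our assumptions:

import numpy as np
from scipy import ndimage

def outstanding_peak(image, mask, channels):
    # Return (channel, gray value, height) of the highest histogram peak
    # among the channels still allowed for this template region.
    best = None
    for c in channels:
        hist, _ = np.histogram(image[..., c][mask], bins=256, range=(0, 256))
        b = int(np.argmax(hist))
        if best is None or hist[b] > best[2]:
            best = (c, b, hist[b])
    return best

def segment(image, template, channels=(0, 1, 2), min_area=500, out=None):
    # Recursive peak-and-binarize segmentation over a template region.
    out = [] if out is None else out
    if not channels or template.sum() < min_area:
        out.append(template)               # treated as unsegmentable; saved
        return out
    c, peak, _ = outstanding_peak(image, template, channels)
    near = template & (np.abs(image[..., c].astype(int) - peak) < 8)
    labels, n = ndimage.label(near)        # connected regions of peak pixels
    rest = tuple(ch for ch in channels if ch != c)   # channel not reused
    remaining = template & ~near
    for i in range(1, n + 1):
        segment(image, labels == i, rest, min_area, out)
    if remaining.any():
        segment(image, remaining, channels, min_area, out)
    return out

The grasping region would then be, for example, max(segment(img, initial_template), key=lambda m: m.sum()), with the whole-mass fallback applied when even this is smaller than the minimum grasping region.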

For example, if the connected region S2 is extracted from the green component image, then in the following process the histogram of the blue component is formed using S2 as the template region, and the most outstanding peak Blue_peak is detected and selected as the threshold for binarization. The pixels corresponding to Blue_peak are extracted from the blue component image by binarization, and the region consisting of those connected pixels is detected. The robot treats this extracted region as one that does not include any further segmentable regions, and it is saved as a result of the region segmentation. The region segmentation processes 1–3 are repeated for the remaining connected regions, where the remaining regions are obtained by subtracting the extracted regions from the template region.

The largest of the extracted regions is decided as the grasping region. However, if all extracted regions are smaller than the minimum grasping region, the region of the whole washed mass is regarded as the grasping region.

2.3.1. Case of solid color

If the color of the washed mass is solid, the variation of pixel values in color space is mild, and therefore the region of each cloth is not detected clearly. In this case, we use the shadows that appear on the clothes. Two kinds of shadows appear on the surface of clothes: shadows representing discontinuities of the surface and shadows representing its roughness. These shadows are produced on the clothes using multidirectional lighting. The grasping region is determined using edge information detected by binarization; here we assume that the edges detected by binarization are the boundaries of clothes. The region enclosed by these edges is regarded as the grasping region. If the detected region is smaller than the minimum grasping region, the region of the whole washed mass is regarded as the grasping region, as in the case mentioned before. The grasping point is set in the middle of the grasping region. In Fig. 5(a), a washed mass of solid color under lighting is shown. Fig. 5(b) shows the connected region detected by binarization of Fig. 5(a); the connected middle part of Fig. 5(b) is detected as the grasping region.
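Under the stated assumption that dark pixels produced by the multidirectional lighting mark cloth boundaries, the solid-color case might be sketched like this (the gray-level threshold and minimum area are placeholders of ours):

import numpy as np
from scipy import ndimage

def grasping_region_solid(gray, mass_mask, shadow_level=60, min_area=500):
    dark = mass_mask & (gray < shadow_level)   # binarized shadow edges
    bright = mass_mask & ~dark                 # candidate regions between edges
    labels, n = ndimage.label(bright)
    if n == 0:
        return mass_mask                       # no edges found: whole mass
    sizes = ndimage.sum(bright, labels, index=range(1, n + 1))
    region = labels == (int(np.argmax(sizes)) + 1)
    return region if region.sum() >= min_area else mass_mask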


Fig. 5. Solid color washed mass and grasping region: (a) washed mass with lighting; (b) connected region for grasping.

Fig. 6. Washed mass of nonsolid color (case 1): (a) washed mass; (b) candidate regions resulting from binarization; (c) detected grasping region.

2.3.2. Case of nonsolid color

If the washed mass consists of clothes of many colors, the grasping region is detected by region segmentation using color information. If some textured clothes exist in the washed mass, their regions are not extracted clearly, and the region segmentation process takes a longer time due to the difficulty of detecting minor regions. Therefore, texture regions are excluded from the template regions for segmentation at the initial stage, using the density of edges in a texture region as the index for exclusion. As a first step, the gray image of the washed mass is binarized and connected regions are detected. The connected regions corresponding to texture regions do not have enough area for grasping, because they consist of sets of fine edges. If the largest connected region is larger than the minimum grasping region, this region is decided as a candidate for the grasping region.

Fig. 6(a) is an example of a washed mass; in the middle of Fig. 6(a), there are some clothes which have texture. The result of binarization is shown in Fig. 6(b): many fine edges appear in the parts of the texture region. Since the connected regions corresponding to the texture parts are smaller than the minimum grasping region, these regions are discarded in the region segmentation. Fig. 6(c) shows the detected grasping region. In this way, clothes which have texture are left to be isolated from the washed mass in a later round of the task.

If the major part of the washed mass is occupied by textured clothes, a candidate for the grasping region cannot be detected by binarization (Fig. 7(a)). In this case, the region of the whole washed mass is regarded as one template region, and the region segmentation is done using this template region. Fig. 7(a) shows an intermediate state of cloth isolation starting from Fig. 6(a); the middle of Fig. 7(a) shows texture regions which were hidden by other clothes in Fig. 6(a). In the region segmentation of Fig. 7(a), the two textured clothes are merged into one region, since the regions of textured clothes are not segmented clearly due to the overlapping of fine edge regions. In this process, the region of Fig. 7(c) is regarded as one grasping region. The isolating task is executed using the grasping point shown as a circle in the middle part of Fig. 7(d).
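A sketch of this exclusion step (our Python; the binarization threshold and minimum area are placeholders):

from scipy import ndimage

def non_texture_regions(gray, mass_mask, threshold=128, min_area=500):
    # Binarize the gray image of the washed mass; textured cloth shatters
    # into many fine-edge fragments, each below the minimum grasping area,
    # so only regions large enough to grasp survive as template candidates.
    binary = mass_mask & (gray > threshold)
    labels, n = ndimage.label(binary)
    keep = [labels == i for i in range(1, n + 1)
            if (labels == i).sum() >= min_area]
    return keep    # empty list -> use the whole washed mass as the template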

Fig. 7. Washed mass of nonsolid color (case 2): (a) washed mass; (b) binarization; (c) grasping region; (d) decided grasping point.

3. Execution of unfolding task

The unfolding task is the preparation process for folding clothes. Since it is difficult to unfold clothes wholly in one try, the clothes are first unfolded roughly; then the cloth is unfolded by shaping it with local deformation. Thus, the cloth is unfolded step by step. We assume the above process comprises the three subtasks of the unfolding task.


Fig. 8. Three subtasks of unfolding task. Initial state (0) is the hung-up state; subtasks: (1) rehandling, (2) putting on the working table, and (3) shaping.

The flow of the unfolding task is shown in Fig. 8, with the three subtasks shown in Fig. 8(1)–(3). The initial state of the unfolding task is the hung-up state of the cloth (Fig. 8(0)). Two points on hemlines of the cloth are found as grasping points for rehandling, and the change of grasp is performed (Fig. 8(1)). The cloth is unfolded to a large extent by this change of grasping points. Next, the cloth is put on the working table so that it is not crumpled (Fig. 8(2)), and it is unfolded by shaping the parts of the cloth with local deformation (Fig. 8(3)).

3.1. Rehandling subtask with two arms

The rehandling subtask is performed by iterating the grasping operation twice: two points on hemlines of the clothes are found in the hung-up state and grasped with two robot arms, so that finally the cloth hangs from both arms. The hemlines of the clothes are numbered as in Fig. 9. To prevent the cloth from crumpling when it is put on the working table, two grasping points are selected which are not set close together on the same hemline. Equivalently, the two grasping points must not be set on hemlines of the same number in Fig. 9, except when they are set at the two end points of the same hemline; the reason is that close grasping points make the cloth sag. A simple check of this condition is sketched below.
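The condition itself is easy to state in code (a hypothetical helper of ours; hemline numbering follows Fig. 9, and the point representation is assumed):

def valid_grasp_pair(p1, p2, hemline_endpoints):
    # p1, p2: (hemline_number, position) candidate grasping points.
    # hemline_endpoints(k): the two end positions of hemline number k.
    if p1[0] != p2[0]:
        return True                    # different hemline numbers: allowed
    ends = set(hemline_endpoints(p1[0]))
    return {p1[1], p2[1]} == ends      # same hemline: only at its two ends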

Fig. 9. Hemline of clothes.

Two properties of the clothes are available for detecting hemlines: (1) the shadows that appear on the clothes, and (2) the approximate shape of the outline of the clothes in the hung-up state. The reason for using these properties is that there are certain relationships between shadows and outlines, shown in Fig. 10. The edge of a clothes' hemline frequently forms a convex shape under the effect of gravity (the edge line made by the grasping point, point A, and point B in Fig. 10, right). Therefore, if a region of shadow is detected at a convex part of the clothes' outline, this indicates the existence of a hemline. The detection of the shadow area is based on smoothing of the original images. The value "sigma" of the Gaussian function in the smoothing process is decided by cut-and-try in each experiment; but if we could strictly restrict and control the environmental conditions, for example the lighting of the working table and the lighting from the outside world, the value could obviously be decided uniquely.

Fig. 10. Relationship between hemline and shadow on clothes.


3.1.1. Grasping hemline: rehandling subtask 1

In rehandling subtask 1, the first step of the unfolding task, the goal is to grasp one hemline on the clothes. Since several regions of shadow may exist on the clothes, the appropriate shadow regions must be selected for detecting hemlines. The process is the following:
1. The image of the clothes in the hung-up state is taken.
2. The image is blurred using a Gaussian function, and the gray level is converted into 4 levels. The value of sigma of the Gaussian function is adjusted in advance for the 4 gray levels.
3. Histograms of the regions of the 4 levels are formed, and the region which has the least outstanding histogram peak is decided as the shadow region.
4. The approximate shape of the outline of the clothes is detected, and the convex point corresponding to that of Fig. 10 is detected.
5. Among the detected shadow regions, the region which is close to the detected convex point is determined as the shadow of a hemline.
6. The above steps 1–5 are repeated for all images taken while rotating the hand's wrist.
7. If plural regions are detected in the entire detecting process, the shadow region which has the widest distribution along the horizontal direction is determined as the candidate for the location of the hemline.
The detecting process is shown in Fig. 11; the white line in Fig. 11(d) is the presumed position of the hemline. A sketch of steps 2 and 3 follows.
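A minimal Python sketch of steps 2 and 3, assuming 8-bit gray images and equal-width quantization into 4 levels (the paper leaves both unspecified):

import numpy as np
from scipy.ndimage import gaussian_filter

def shadow_mask(gray, sigma=3.0):
    # Step 2: blur with a Gaussian (sigma is found by cut-and-try, as in
    # the paper) and quantize the gray level into 4 levels.
    blurred = gaussian_filter(gray.astype(float), sigma=sigma)
    levels = np.clip((blurred // 64).astype(int), 0, 3)
    # Step 3: the level whose histogram peak is least outstanding is taken
    # as the shadow region (one reading of the paper's criterion).
    counts = np.bincount(levels.ravel(), minlength=4)
    return levels == int(np.argmin(counts))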

Fig. 11. State of shadows on clothes: (a) towel in the hung-up state; (b) 4-level gray image; (c) shadow region and approximated shape; (d) result of detected hemline.

3.1.2. Grasping hemline: rehandling subtask 2

The second rehandling subtask is carried out in the same way as the first rehandling process. In this task, the hemline which has the same number as the one grasped in the first rehandling process seldom forms a convex shape, since the cloth sags down under its own weight. Therefore, the possibility that a shadow region exists close to a hemline of the same number is greatly reduced, and thus the condition that the two grasping positions must not be set on a hemline of the same number is satisfied easily. Fig. 12(a) shows the clothes in the hung-up state right after regrasping the hemline detected in Fig. 11. Fig. 12(c) shows the long-sleeve shirt in the hung-up state after the isolating task; there exists a large shadow region in the middle of the clothes. The result of rehandling subtask 1 using this shadow region is shown in Fig. 12(d).

Fig. 12. Hung-up state grasping one hemline on towel and long-sleeve shirt: (a) grasping hemline detected in Fig. 11; (b) shadow region of (a); (c) hung-up state after isolating task; (d) grasping hemline detected in (c).

3.1.3. Unfound hemline: rehandling subtask 3

In the case of a large cloth, a shadow region may not exist, because the cloth sags down under its own weight and there is no gap between the hemline and the cloth. Also, the edges of the hemline cannot be detected if the hemline is hidden in the folds of the cloth. For these reasons, if the edge of a hemline cannot be detected, a rehandling subtask in which the lowermost part of the cloth is grasped with a hand is performed as rehandling subtask 3. By combining these three rehandling subtasks, the cloth is brought to the state of being grasped at two hemlines. The case of undetected hemlines is shown in Fig. 13. A sketch of this fallback combination follows.
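Hypothetical glue code for the combination (detect_hemline, grasp, and grasp_lowest stand in for the procedures of subtasks 1–3; none of these names come from the paper):

def rehandle(detect_hemline, grasp, grasp_lowest):
    # Try to grasp a detected hemline; when no hemline edge can be found,
    # fall back to grasping the lowermost part of the hanging cloth.
    hemline = detect_hemline()
    if hemline is not None:
        grasp(hemline)        # rehandling subtasks 1 and 2
    else:
        grasp_lowest()        # rehandling subtask 3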

Fig. 13. Determined grasping point: white circle — grasping point.


Fig. 14. Result of rehandling subtask (towel and long-sleeve shirt): (a) hung-up state after rehandling subtask (towel); (b) approximation shape of (a); (c) hung-up state after rehandling subtask (long-sleeve shirt); (d) approximation shape of (c).

3.2. Result of rehandling subtask

The result of the rehandling subtask using the hemlines detected in Fig. 12(b) is shown in Fig. 14(a), and the result for Fig. 12(d) is shown in Fig. 14(c).

4. Conclusion

In this report, we have described the planning strategy of a robot which can tidy up clothes. Some of the concrete subtasks of this robot are: taking out one cloth, expanding, classifying, folding, and putting it in a specified place. We first described the processes involved in the unfolding task for washed clothes. Among these processes, we examined the isolation of a cloth from a washed mass, and obtained reasonable results on the region segmentation of images of the washed mass and on the determination of grasping points; the robot performed the isolating task well. We have also described the planning strategy of the unfolding task that follows the isolating task. In particular, we have succeeded in basic experiments on the rehandling subtask for unfolding clothes to the full extent, and the detection of hemlines proved successful. The algorithms proposed in this paper are first versions: they are not very robust against changes of environmental conditions and have various parts to be improved. Our next step is therefore to overcome these problems.

The future goals, where we try to realize the final unfolding process, may be listed as follows:
• After the rehandling subtask, the clothes are put on the working table in such a way that the cloth is not crumpled. Characteristic points are tracked during the putting process, and if the form of the clothes gets crumpled, the process is repeated.
• The cloth on the working table is unfolded to the full extent. At first, the folded parts are unfolded by the two grasping hands; next, the locally deformed parts are unfolded step by step.

Acknowledgements

This research is partly supported by a fund of the Japanese Ministry of Education under the title "Research on Emergent Mechanism for Machine Intelligence — A Tightly Coupled Perception Motion Behavior Approach".

References

[1] T. Matsui, EusLisp Reference Manual, ver. 8.02, 1995.
[2] T. Kabaya, M. Kakikura, Service robot for housekeeping — clothing handling, Journal of Robotics and Mechatronics 10 (3) (1998) 252–257.
[3] K. Hamajima, M. Kakikura, Planning strategy for task untangling laundry — isolating clothes from a washed mass, Journal of Robotics and Mechatronics 10 (3) (1998) 244–251.

Kyoko Hamajima obtained the Master's and Doctoral degrees in 1996 and 1999, respectively, from the Graduate School of Engineering, Tokyo Denki University. In 1999 she joined the National Institute of Industrial Safety.

Masayoshi Kakikura joined the Electrotechnical Laboratory in 1965. He was with the University of Edinburgh between 1974 and 1975, and joined the Tokyo Denki University in 1990.