Three-dimensional object representation and recognition based on surface normal images


Pattern Recognition, Vol. 26, No. 6, pp. 913-921, 1993. Printed in Great Britain. 0031-3203/93 $6.00+.00. Pergamon Press Ltd. © 1993 Pattern Recognition Society.


THREE-DIMENSIONAL OBJECT REPRESENTATION AND RECOGNITION BASED ON SURFACE NORMAL IMAGES

JONG HOON PARK, TAE GYU CHANG and JONG SOO CHOI

Department of Electronic Engineering, Chung Ang University, Hukseok-Dong, Dongjak-Ku, Seoul 156-756, Korea

(Received 30 April 1992; in revised form 19 November 1992; received for publication 27 November 1992)

Abstract--A new idea of using surface normal images (SNIs) for object description and recognition in range images is presented. The surface normal images of an object are defined as the projected images obtained from view angles facing normal to each surface of the object. The proposed approach can significantly alleviate the difficulty involved in obtaining a correct correspondence between a scene object and a model by explicitly providing a transform for the matching. The SNI-based object description is applied to the construction of a three-dimensional (3D) object recognition system targeted at 26 simulated objects. The operation of the overall system is demonstrated through experiments on various synthetic and real range images.

Key words: Model-based object recognition; Model base construction; Surface normal image; Object description; Spatial correspondence.


1. INTRODUCTION

Model-based three-dimensional (3D) object recognition has been studied by many researchers as one of the most important research issues in computer vision. In general, the model-based approach is characterized by the fact that the overall recognition is based on matching features extracted from scene objects against those of preconstructed models. Thus, the structure and performance of a recognition system depend heavily on the choice of the object description method. The exact representation and the multi-view feature representation can be regarded as the two major categories of 3D object description methods.(1-3) In the exact representation, generally a single generic model is constructed for an object. Examples of the approach include the volume representation, the surface boundary representation, and the generalized cylinder representation methods.(1-6) In the multi-view feature representation, an object is described in the form of an aspect graph constructed from object views at multiple view points. The discrete view-sphere and the characteristic view representations are among the typical examples of the multi-view feature representation methods.(1,2,7-9) Since a scene object is described in a different coordinate system from that of a model, one of the coordinate systems must be transformed into the other to obtain a correspondence for the matching. However, it is well understood that the correspondence is generally difficult to obtain, since there is no straightforward reference from which to derive a proper transformation. Instead, most approaches have to rely on some type of time-consuming search based on hypothesis generation and verification.(4-12)



It is one of the objectives of this research to provide an easier way of obtaining correspondence by introducing a new approach, surface normal image (SNI) based object description. The surface normal images of an object are defined as the orthogonally projected images obtained from view angles facing normal to each surface of the object. The recognition is based on the matching of the preconstructed model, which consists of SNIs, against the rotated input images (RIIs). The RIIs are obtained by rotating the input image into several directions so that the surface normal vector of each scene object surface points along the view direction, i.e. the z-axis direction in the model coordinate system. Therefore, in the proposed approach, the surface normal view direction readily provides a straightforward reference for the correct transform between model and scene objects, saving much of the time required to obtain the correspondence between model features and scene features. An example of an SNI-based object recognition system is also constructed in this research. A hierarchical model base is constructed around the SNI structure of each object. The recognition is basically performed by matching the SNIs of the model against the RIIs obtained from the scene object. The orientation information of each surface of the scene object is used to obtain the rotated input images. It is obvious that this approach requires 3D range information. Therefore, this approach assumes range images as the application tasks for the object recognition. A formal definition of the surface normal image is presented in Section 2. Section 3 describes the SNI-based object description method. Section 4 presents the construction of an example recognition system and its operation, and Section 5 includes the discussion and conclusion.



2. SURFACE NORMAL IMAGE (SNI)

The generic shape of a planar surface is best described by its frontal view. The viewer direction which gives the frontal view is termed the surface normal view (SNV) of the surface. Surface normal images (SNIs) of an object are defined as the set of frontal view images obtained from all the SNVs of the object. Thus, the number of SNIs of an object is equal to the number of surfaces of which the object is composed. Figure 1 shows an example of an SNV and the corresponding SNI for an object composed of planar surfaces. As is shown in the figure, the SNV is just the same as the surface normal vector Ns = (nx, ny, nz). The number of surface normal viewer directions is confined to the number of surfaces of the object, while there are infinitely many general viewer directions. One SNI is generated from the shape of the object projected to the viewer at the SNV. In this case, the corresponding surface associated with the SNV is termed the base surface of the SNI. Therefore, the general pattern of an SNI is composed of several neighbor surfaces around one base surface. In this way, the delineation of an object tends to be simpler compared to general views of the object. Figure 2 shows the SNIs of a sample object. So far, base surfaces have been assumed to be planes for the definition of the SNI. The same definition cannot be applied to curved surfaces, since the normal vectors are not uniform on a curved surface. The concept of the SNI can be readily extended to the case of a curved surface by partitioning the surface into several patches. Therefore, for a curved surface, several SNIs can be generated according to the partition, where the average normal vector of each patch is regarded as the SNV for the corresponding SNI. In this way, the total view range of a curved surface becomes approximately represented by a finite number of SNVs. Therefore, the number of partitioning patches is directly related to the orientation error. More patches are required as the tolerable error bound gets smaller. The shape and the size of a surface are the most important factors for the determination of the partition. In general, it is very hard to provide a systematic partitioning method for an arbitrarily shaped surface. However, for some typical types of


Fig. 1. Surface normal view and the corresponding surface normal image: (a) general viewer direction; (b) surface normal viewer direction.
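The mapping from an SNV to its SNI can be sketched in a few lines: rotate the object so that the surface normal points along the viewing (z) axis, then project orthogonally onto the image plane. The sketch below is illustrative and not the paper's implementation; the Rodrigues formula is one standard way to realize the alignment, and the function names are assumed.

```python
import numpy as np

def rotation_to_z(n):
    """Return a 3x3 rotation matrix mapping unit vector n onto +z.

    Uses the Rodrigues formula about the axis n x z; a diagonal matrix is
    returned when n is already (anti)parallel to z."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    s = np.linalg.norm(v)          # sine of the angle between n and z
    c = np.dot(n, z)               # cosine of the angle
    if s < 1e-12:                  # already aligned, or exactly opposite
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

def surface_normal_image(points, normal):
    """Orthogonal projection of object points seen from the SNV: rotate so
    that `normal` points along +z, then keep the (x, y) coordinates."""
    R = rotation_to_z(normal)
    rotated = points @ R.T
    return rotated[:, :2]
```

Applying `rotation_to_z` to each surface normal of the model yields one SNI per surface, which is the counting argument made above.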


Fig. 2. Surface normal images ([] denotes a base surface).

curved surfaces, we can obtain the analytic relation between the partitioning and the error bound. The cylindrical surface and the spherical surface are among the most typical examples of such a case. Figure 3 shows partitioning examples where a half-cylinder is partitioned into five equiangled patches (Fig. 3(a)) and nine equiangled patches (Fig. 3(b)) with error bounds of 18° and 10°, respectively. Because of the uniform curvature and symmetry characteristics of these surfaces, the partitioning is relatively easy and a smaller number of patches is required. The same error-bound partitioning concept can be applied to the partitioning of arbitrarily shaped surfaces.
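For an equiangled partition, a patch spanning w degrees has a worst-case deviation of w/2 between its average normal and the normals at its edges, so the patch count follows directly from the tolerable error bound. A minimal sketch (the function name is assumed) reproducing the half-cylinder figures quoted above:

```python
import math

def cylinder_patches(span_deg, error_bound_deg):
    """Number of equiangled patches needed so that no surface normal on a
    patch deviates from the patch's average normal by more than the bound.

    A patch spanning w degrees deviates at most w/2 from its average
    normal, hence n = ceil(span / (2 * error_bound))."""
    return math.ceil(span_deg / (2.0 * error_bound_deg))

# The half-cylinder of Fig. 3 spans 180 degrees: an 18 degree bound gives
# five patches, a 10 degree bound gives nine, matching the text.
```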


Fig. 3. An example of the partitioning for the cylindrical surface.


An individual surface is described by a set of attributes of the surface, such as curvature, area, compactness, etc., as is used in other general surface boundary representation methods.(7-9) The attributes used in this research to describe the surface and line features are summarized in Table 1. The attributes of a feature can be categorized on the basis of shape (S), relations (R), and position/orientation (L).(10) A shape attribute describes the local geometric shape of a surface. A relation attribute indicates how a given feature is topologically related to other features. A position/orientation attribute specifies the position and orientation of the feature with respect to some coordinate system. The position/orientation attributes are the information used in rotating the object for the matching. They are also used in estimating the pose of a scene object after the matching. The relation attributes are used to determine the relative positions of neighboring surfaces. The shape attributes determine whether

3. SNI-BASED 3D OBJECT DESCRIPTION

3.1. Construction of a model base

The set of SNIs of an object provides the basic structure for the construction of a model base. The overall structure of a model base is in the form of a layered tree, as is shown in Fig. 4. An object node at the top layer consists of a set of subnodes each representing one surface normal image of the object. Each SNI node consists of a set of subnodes representing the surfaces belonging to the SNI. An object node is associated with a certain base coordinate system, which can be used as the reference for the determination of the location and orientation of the scene object. A set of SNIs and the geometrical relationships among them fully describe an object at the top level. Similarly, an SNI node is described by a set of surfaces which make up the SNI, and by their spatial relationships.
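The layered tree described above can be sketched as three node types. This is a minimal illustration under assumed names, not the paper's data structures; the attribute keys are placeholders drawn from Table 1.

```python
from dataclasses import dataclass, field

@dataclass
class SurfaceNode:
    """Bottom layer: one segmented surface with its S/R/L attributes."""
    attributes: dict

@dataclass
class SNINode:
    """Middle layer: one surface normal image of the object."""
    base_surface: SurfaceNode
    surfaces: list                                   # surfaces visible in this SNI
    relations: list = field(default_factory=list)    # spatial relations among them

@dataclass
class ObjectNode:
    """Top layer: the full object as a set of SNIs."""
    name: str
    snis: list
    sni_relations: list = field(default_factory=list)  # geometry among SNIs
```

A model base is then simply a collection of `ObjectNode` instances, one per model object.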

[Fig. 4 structure: Object layer - an object node holds a set of SNIs and the geometrical relations among the SNIs; SNI layer - each SNI node holds a set of surfaces and the geometrical relations among the segmented surfaces; Surface layer - each surface node holds the attributes of the surface features.]

Fig. 4. Structure of the model base.

Table 1. Summary of the attributes of features used for the construction of the model base


Attributes                            Content of attributes

Surface features:
  surface orientation (L)             orientation of the surface
  relative area (S)                   relative area of the surface
  surface type (S)                    plane, cylindrical, or other curved surface
  eccentricity (S)                    eccentricity of the surface
  compactness (S)                     compactness of the surface
  adjacent_to (R)                     neighbor surfaces around the surface
  length of conjunction edge (R)      length of conjunction edge between two surfaces
  conjunction type (R)                conjunction type (convex or concave)
  conjunction angle (R)               conjunction angle of two surfaces
  number of boundary (S)              number of lines in the surface

Line features:
  line type (S)                       straight line or curved line
  relative length of line (R)         relative length of line
  conjunction angle (S)               conjunction angle between lines
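Table 1 does not give formulas for the shape attributes; the sketch below assumes the standard definitions (compactness = 4*pi*area/perimeter^2, which is 1 for a disc, and eccentricity from the second central moments of the region). The function name and the boundary estimate are illustrative.

```python
import numpy as np

def shape_attributes(mask):
    """Compute compactness and eccentricity for a binary region mask,
    under the standard definitions assumed above."""
    area = mask.sum()
    # 4-connected boundary estimate of the perimeter
    pad = np.pad(mask, 1)
    boundary = mask & ~(pad[:-2, 1:-1] & pad[2:, 1:-1] &
                        pad[1:-1, :-2] & pad[1:-1, 2:])
    perimeter = boundary.sum()
    # second central moments of the pixel coordinates
    ys, xs = np.nonzero(mask)
    mu20, mu02 = np.var(xs), np.var(ys)
    mu11 = np.mean((xs - xs.mean()) * (ys - ys.mean()))
    common = np.sqrt(4 * mu11**2 + (mu20 - mu02)**2)
    lam1 = (mu20 + mu02 + common) / 2      # major-axis variance
    lam2 = (mu20 + mu02 - common) / 2      # minor-axis variance
    eccentricity = np.sqrt(1 - lam2 / lam1) if lam1 > 0 else 0.0
    compactness = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return compactness, eccentricity
```

A square region, for instance, has eccentricity near zero and compactness just below that of a disc.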


a surface in an SNI is matched with some surface of the RII or not. A noticeable difference of the layered structure of the model base described above, compared to that of the conventional surface boundary representation method, is that there exists an intermediate SNI layer between the surface nodes and the object nodes.


3.2. Object matching

The object description presented in the previous section can also be regarded as a particular example of the surface boundary-based object description methods. Therefore, a basically similar approach, as in references (7, 9, 10), forms the core structure of the object recognition. The problem of finding a correct correspondence between the scene object coordinate system and the model object coordinate system is one of the major burdens in conventional surface feature based object recognition. The proposed approach, on the other hand, provides a straightforward reference, i.e. the surface normal viewer direction, for obtaining the correct transform between the model and scene objects. Therefore, the complexity of obtaining correspondence is significantly reduced. From an input image, several RIIs are generated by rotating the scene object so that the orientation of each visible surface becomes normal to the viewer. The RIIs are then tried for matching against the SNIs of the model base. Therefore, the problem of obtaining a proper transform is simplified to a single rotation about the z-axis and a linear translation. Consequently, the 3D matching between the model and scene objects is simplified into 2D matching between the SNIs and the RIIs, as is shown in Fig. 5. It is obvious that the generic shape of a surface observed from the frontal view direction offers better clarity and tolerance for the purpose of object descrip-


Fig. 5. Matching between an SNI and an RII which have the same base surface.

Fig. 6. Pairs of views with the same orientation difference of 5°: better tolerance (less sensitivity) to the same amount of orientation error is observed in (b), which is closer to the SNIs; the generic shape of the surface, i.e. a circle in this case, is also better observed with pair (b) since it is closer to the frontal view.

tion and recognition, compared to the case where oblique views are used for the same purpose. As illustrated in Fig. 6, pair (b) is much clearer than pair (a) for recognizing the shape of the surface, a circle in this case, since view (b) is closer to the frontal view. In Fig. 6, each pair of surfaces has the same absolute amount of orientation difference, i.e. 5°. It is observed that the effect of the orientation difference is more distinctive in case (a), compared to case (b) where the two surfaces are posed closer to the frontal view direction. This indicates that the frontal view offers better tolerance (less sensitivity) to measurement errors. The proposed approach aims for such advantages of better transparency and tolerance by explicitly introducing the rotated images in the procedure of object description and recognition. This can be regarded as a major difference compared to the multi-view feature representation, where the surface shape obtained from the viewer direction forms the basis for the recognition. However, since the matching is tested with rotated input images, there exist a couple of problems which do not occur in the other conventional schemes. The major difference comes from the fact that partially occluded surfaces do not preserve the generic shape of the surface in the rotated image, while the surface normal images in the model base are constructed from the generic shape of each surface. Therefore, when the input image contains a partially occluded surface, special considerations need to be given to the matching. In addition, since a large number of rotated input images are generated from an input image, special consideration is needed for the proper treatment of the RIIs in the matching. The object matching procedure is briefly described in the following, with detailed discussions on the above-mentioned considerations. Matching of the RIIs against the corresponding SNIs in the model base forms the basic structure of the object recognition, as depicted in Fig. 7.
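Once an RII and an SNI share a base surface, aligning them only requires a rotation about z and a 2D translation. The paper does not specify how this residual 2D transform is estimated; the sketch below uses the standard least-squares (Procrustes/Kabsch) solution on corresponding 2D points, with assumed names, purely to illustrate how small the remaining problem is.

```python
import numpy as np

def align_2d(model_pts, scene_pts):
    """Estimate the in-plane rotation R and translation t mapping
    scene_pts onto model_pts (both N x 2, in corresponding order),
    via the least-squares Procrustes/Kabsch solution in 2D."""
    mc = model_pts.mean(axis=0)
    sc = scene_pts.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (scene_pts - sc).T @ (model_pts - mc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return R, t
```

This replaces the full 3D pose search of conventional schemes with a closed-form 2D fit, which is the simplification claimed above.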


Fig. 7. Flow of the object recognition.

One of the preconstructed model objects is expected to contain the SNIs which are matched with the RIIs. Therefore, the RIIs have to be compared with the SNIs of each model object in a predetermined sequence to test for the matching. The sequence can be determined according to a couple of attributes of each base surface, such as the size of the area and the type of the base surface. In the experiment, a planar surface takes priority over a curved surface, and a larger surface over smaller ones. At the beginning of the matching procedure, all model objects are candidates for the identification of the scene object. The matching procedure is continued until a unique candidate is found. As is usual in surface boundary description methods, internal surface features and the relations among the surfaces are the main data used for the matching.(7,9) Finally, the pose of the scene object can be obtained by using the position/orientation attributes of the surfaces of the object, after the determination of the matched model object.
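The candidate-pruning loop just described can be sketched as follows. The surface records and the per-SNI match test are placeholders; only the control flow (planar-before-curved, larger-before-smaller ordering, and narrowing the candidate set until it is unique) follows the text.

```python
def matching_order(surfaces):
    """Planar base surfaces take priority over curved ones; within each
    group, larger surfaces are tried first."""
    return sorted(surfaces,
                  key=lambda s: (s["type"] != "plane", -s["area"]))

def recognize(riis, model_base, matches_sni):
    """Narrow the candidate model set by matching RIIs in order until a
    unique candidate remains; model_base maps object name -> list of SNIs."""
    candidates = set(model_base)
    for rii in riis:
        hits = {m for m in candidates
                if any(matches_sni(rii, sni) for sni in model_base[m])}
        if hits:
            candidates = hits
        if len(candidates) == 1:
            return candidates.pop()
    return None   # ambiguous or no match
```

With the output listing of Table 2(a) in mind, each RII either prunes the candidate set or leaves it unchanged, and the loop stops as soon as a single model object survives.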


As was mentioned before, if a partially occluded surface is included in the scene object, the occluded surface needs to be treated in a special way, because some of the attributes of the surface are affected by the occlusion. The attributes of the surface features are categorized into an occlusion-variant (OV) group and an occlusion-invariant (OI) group. For the OI attributes (for example, surface type, line type, conjunction angle between surfaces, conjunction edge type, etc.), the attribute values of the RII are required to be equal to those of the SNI for the matching. For some of the OV attributes, we can find specific rules which reflect the nature of the attribute changes resulting from the occlusion. Such OV attributes can form a part of the useful information for the matching. Examples of such attributes are the number of neighboring surfaces, the number of holes, and the length of the conjunction edge. The values of these attributes always become smaller than the original ones as a result of the occlusion.
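The occlusion rule above reduces to a simple comparison: OI attributes must agree exactly, while OV attributes in the (possibly occluded) RII may only be smaller than or equal to the model value. A minimal sketch, with the group membership taken from the text and the attribute keys assumed:

```python
OI_ATTRS = {"surface_type", "line_type", "conjunction_angle", "conjunction_type"}
OV_ATTRS = {"num_neighbors", "num_holes", "conjunction_edge_length"}

def attributes_compatible(sni_attrs, rii_attrs):
    """OI attributes must match exactly; OV attributes of the scene surface
    may only shrink under occlusion, never grow."""
    for a in OI_ATTRS & sni_attrs.keys() & rii_attrs.keys():
        if sni_attrs[a] != rii_attrs[a]:
            return False
    for a in OV_ATTRS & sni_attrs.keys() & rii_attrs.keys():
        if rii_attrs[a] > sni_attrs[a]:
            return False
    return True
```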

4. EXPERIMENTS

In this paper, an experimental procedure is presented to show the application of the concept of SNI to 3D object recognition. The approach described in Section 3 is applied to the construction of a 3D object recognition system. A model base is constructed for a total of 26 objects, consisting of 6 blocks, 10 workpieces, and 10 toys, using the CAD information of each object. Various synthetic and real images are used in the test of the

Fig. 8. Models used in experiments: (a) workpiece object models; (b) toy object models; (c) block object models.


Fig. 8. (Continued.)

Fig. 9. Input range images: (a) scene # RD3 (synthetic image); (b) scene # RD4 (synthetic image); (c) scene # Poly 2 (real image).


Fig. 10. Segmentation results of the range images: (a) scene # RD3; (b) scene # RD4; (c) scene # Poly 2.

Table 2. Results of matching. (a) Output listing of the matching procedure of scene # RD3

SCENE # : RD3
SCENE NAME : WORKPIECE-1
NUMBER OF RIIs OF IOB.1 PREPARED FOR MATCHING : 6
MATCHING ORDER OF IOB.1 : RII.1 RII.4 RII.6 RII.5 RII.2 RII.3
RII.1 IS MATCHED TO MOB.1 SNI.1
RII.1 IS MATCHED TO MOB.3 SNI.1
RII.1 IS MATCHED TO MOB.5 SNI.1
RII.1 IS MATCHED TO MOB.14 SNI.6
RII.1 IS MATCHED TO MOB.15 SNI.2
MATCHING CANDIDATE IS MOB.1 MOB.3 MOB.5 MOB.14 MOB.15
RII.4 IS MATCHED TO MOB.1 SNI.2
RII.4 IS MATCHED TO MOB.3 SNI.5
RII.4 IS MATCHED TO MOB.15 SNI.2
MATCHING CANDIDATE IS MOB.1 MOB.3 MOB.15
RII.6 IS MATCHED TO MOB.3 SNI.7
MATCHING IS COMPLETED !!! IOB.1 IS MATCHED TO MOB.3
NUMBER OF RIIs OF IOB.1 USED FOR MATCHING : 3
NUMBER OF RIIs OF IOB.2 PREPARED FOR MATCHING : 8
MATCHING ORDER OF IOB.2 : RII.7 RII.3 RII.4 RII.6 RII.2 RII.8 RII.1 RII.5
RII.7 IS MATCHED TO MOB.8 SNI.9
RII.7 IS MATCHED TO MOB.11 SNI.3
RII.7 IS MATCHED TO MOB.13 SNI.2
MATCHING CANDIDATE IS MOB.8 MOB.11 MOB.13
RII.3 IS MATCHED TO MOB.8 SNI.5
RII.3 IS MATCHED TO MOB.11 SNI.2
RII.3 IS MATCHED TO MOB.13 SNI.1
MATCHING CANDIDATE IS MOB.8 MOB.11 MOB.13
RII.4 IS MATCHED TO MOB.8 SNI.6
MATCHING IS COMPLETED !!! IOB.2 IS MATCHED TO MOB.8
NUMBER OF RIIs OF IOB.2 USED FOR MATCHING : 3
NUMBER OF RIIs OF IOB.3 PREPARED FOR MATCHING : 6
MATCHING ORDER OF IOB.3 : RII.1 RII.3 RII.2 RII.6 RII.5 RII.4
RII.1 IS MATCHED TO MOB.6 SNI.1
RII.1 IS MATCHED TO MOB.7 SNI.1
MATCHING CANDIDATE IS MOB.6 MOB.7
RII.3 IS MATCHED TO MOB.6 SNI.4
MATCHING IS COMPLETED !!! IOB.3 IS MATCHED TO MOB.6
NUMBER OF RIIs OF IOB.3 USED FOR MATCHING : 2


(b) Results of the matching of scenes # RD3, RD4, and Poly 2

                                              Scene # RD3          Scene # RD4               Scene # Poly 2
                                           IOB.1 IOB.2 IOB.3  IOB.1 IOB.2 IOB.3 IOB.4    IOB.1 IOB.2
Number of surfaces after segmentation          9    10     8     18    23    19    18       13     5
Number of surfaces after surface merging       6    10     6     11    21    12    15        7     3
Number of all RIIs prepared for matching       6     8     6      7    10     8    10        6     3
Number of RIIs used for successful matching    3     3     2      3     4     3     2        2     3
Matched model object                        MOB#3 MOB#8 MOB#6 MOB#11 MOB#14 MOB#12 MOB#17 MOB#26 MOB#25

system operation. This paper illustrates the operation examples for the cases of two selected synthetic images and one real range image. The experimental system is implemented in the C language on a SUN 4 workstation. The resolution of the input images is 256 x 256, with 256 depth levels. Figure 8 shows the model objects used in the experiment. The segmentation of the image is the first step needed for the recognition. The segmentation is achieved through a hybrid approach of edge detection and region growing, as presented in references (13-15). Discontinuities of depth and orientation are detected for the edge detection. Then, for the segmentation of internal regions surrounded by edges, the mean curvature H and Gaussian curvature K are used. Merging of small regions into neighboring ones is needed, because the curvatures H and K are very sensitive to noise. The final segmentation results are shown in Fig. 10. The matching sequence of the RIIs is determined based on the surfaces in the segmented image. The matching procedure is continued until the unique candidate model object is found. The pose of the scene object can be computed by using the position/orientation attributes of the matched scene object. Table 2 shows the final result of the matching. In the case of relatively small surfaces and surfaces which have relatively large eccentricity, the attribute values of the surfaces are not reliable because of the segmentation error. Therefore, such surfaces are merged into the neighboring ones that have the most similar orientation. Surfaces which are not merged but still have a small area and large eccentricity need to be treated in a special way. It needs to be mentioned that the performance of the matching may deteriorate depending on the relative amount of range sensing or segmentation errors in this approach.
Since the scene object is rotated before matching, the orientation error caused by range sensing or segmentation errors, when it is relatively large, will affect various other features. This is in contrast to the other approaches where the scene object is not rotated for the matching; there, the error of an individual


feature is generally confined to itself without affecting other features. Therefore, the previously mentioned advantages, such as better transparency of matching and tolerance to measurement errors, can be expected only under the condition that the orientation error is sufficiently small. Consequently, in SNI-based object recognition, it is natural that the overall recognition strategy, including the environmental setup, preprocessing, and feature extraction algorithms, needs to be designed to enhance the measurement accuracy of the orientation in order to capitalize best on these advantages. The input range images used in the experiment are good enough to yield accurate orientations without any significant effect of occlusion or fatal segmentation error. Therefore, the experiment shows the correct matching results as expected.

5. DISCUSSION AND CONCLUSION

A new approach of SNI-based 3D object representation is proposed. It is shown that an object model can be effectively constructed based on the representation method. An experimental recognition system is constructed for the selected 26 objects by applying the proposed SNI-based representation. In the experiment, the detailed procedure of how the new representation method is applied to 3D object recognition is presented. The difficulty involved in obtaining a correct correspondence is significantly reduced compared to other conventional model-based approaches. This stems from the fact that the correspondence becomes a much simpler problem of 2D image matching because of the straightforward reference. This approach also gives the advantages of better transparency of matching and greater tolerance to measurement error, resulting from the use of frontal view images in the recognition procedure. It is concluded from this research that the proposed SNI-based object description provides an easier way to obtain correspondence between models and scene objects for matching in 3D object recognition.
The SNI-based approach will be effective especially


for the recognition of objects which contain mainly planar surfaces. However, this approach is difficult to apply to curved or concave surfaces, because objects possessing such surfaces yield self-occluding surfaces, consequently resulting in surface shape deformation. The partially occluded surfaces need to be treated in a special way by using the attributes which are not affected by occlusion, as was mentioned before.


REFERENCES

1. P. J. Besl and R. C. Jain, Three-dimensional object recognition, Computing Surveys 17, 75-145 (1985).
2. R. T. Chin and C. R. Dyer, Model-based recognition in robot vision, Computing Surveys 18, 68-108 (1986).
3. T. O. Binford, Survey of model-based image analysis systems, Int. J. Robotics Res. 1, 18-64 (1982).
4. W. E. L. Grimson and T. Lozano-Perez, Model-based recognition and localization from sparse range or tactile data, Int. J. Robotics Res. 3, 3-35 (1984).
5. R. C. Bolles, 3DPO: A three-dimensional part orientation system, Int. J. Robotics Res. 5, 3-26 (1986).
6. O. D. Faugeras and M. Hebert, The representation, recognition, and locating of 3-D objects, Int. J. Robotics Res. 5, 27-52 (1986).
7. M. Oshima and Y. Shirai, Object recognition using three-dimensional information, IEEE Trans. Pattern Analysis Mach. Intell. 5, 353-361 (1983).
8. K. Ikeuchi, Generating an interpretation tree from a CAD model for 3D-object recognition in bin-picking tasks, Int. J. Comput. Vision 1, 145-165 (1987).
9. T. J. Fan, G. Medioni and R. Nevatia, Recognizing 3-D objects using surface descriptions, IEEE Trans. Pattern Analysis Mach. Intell. 11, 1140-1157 (1989).
10. C. H. Chen and A. C. Kak, A robot vision system for recognizing 3-D objects in low-order polynomial time, IEEE Trans. Syst. Man Cybern. 19, 1535-1563 (1989).
11. G. Stockman, Object recognition and localization via pose clustering, Comput. Vision Graphics Image Process. 40, 361-387 (1987).
12. D. G. Lowe, Three-dimensional object recognition from single two-dimensional images, Artif. Intell. 31, 355-395 (1987).
13. P. J. Besl and R. C. Jain, Invariant surface characteristics for 3D object recognition in range images, Comput. Vision Graphics Image Process. 33, 33-80 (1986).
14. T. J. Fan, G. Medioni and R. Nevatia, Segmented descriptions of 3-D surfaces, IEEE Trans. Robotics Automn 3, 527-538 (1987).
15. R. Hoffman and A. K. Jain, Segmentation and classification of range images, IEEE Trans. Pattern Analysis Mach. Intell. 9, 608-620 (1987).

About the Author--JONG HOON PARK was born in Seoul, Korea, on 15 August 1961. He received the B.S., M.S., and Ph.D. degrees from Chung Ang University, Seoul, Korea, all in electronics engineering, in 1984, 1986, and 1992, respectively. He is now a senior researcher at the Broadband Communications Department of the Electronics and Telecommunications Research Institute, Taejon, Korea. His current research interests include computer vision and artificial intelligence.

About the Author--TAE GYU CHANG received the B.S. degree in 1979 from Seoul National University, Seoul, Korea, the M.S. degree in 1981 from the Korea Advanced Institute of Science and Technology, Seoul, Korea, and the Ph.D. degree in 1987 from the University of Florida, Gainesville, Florida, all in electrical engineering. From 1981 to 1984, he was with Hyundai Engineering Company and Hyundai Electronics Inc., both in Seoul, Korea, as a Systems Engineer. From 1987 to 1990, he was an Assistant Professor of Information Systems Engineering at Tennessee State University, Nashville, Tennessee. In 1990, he joined the faculty of Chung Ang University, Seoul, Korea, where he is currently an Associate Professor in the Department of Control and Instrumentation Engineering. His research interests include digital signal processing and pattern recognition. Dr Chang is a member of Eta Kappa Nu.

About the Author--JONG SOO CHOI received the B.S. degree from Inha University, Incheon, Korea, the M.S. degree from Seoul National University, Seoul, Korea, and the Ph.D. from Keio University, Yokohama, Japan, all in electrical engineering, in 1975, 1977 and 1981, respectively. He joined the faculty at Chung Ang University in 1981, where he is now a Professor in the Department of Electronics Engineering. His current research interests are in computer vision and image coding.