Pattern Recognition 41 (2008) 130–138
www.elsevier.com/locate/pr
doi:10.1016/j.patcog.2007.05.015

Hierarchical contour matching for dental X-ray radiographs

Omaima Nomir (a), Mohamed Abdel-Mottaleb (b,*)

(a) CS Department, University of Mansoura, Egypt
(b) ECE Department, University of Miami, USA
* Corresponding author. Tel.: +305 284 3825; fax: +305 284 4044. E-mail addresses: [email protected] (O. Nomir), [email protected] (M. Abdel-Mottaleb).

This research is supported in part by the U.S. National Science Foundation under Award number EIA-0131079.
Received 6 October 2006; received in revised form 9 April 2007; accepted 18 May 2007
Abstract

The goal of forensic dentistry is to identify individuals based on their dental characteristics. In this paper we present a new algorithm for human identification from dental X-ray images. The algorithm matches teeth contours using the hierarchical chamfer distance, applied to a multi-resolution representation of the teeth. Given a dental record, usually a postmortem (PM) radiograph, the radiograph is first segmented and a multi-resolution representation is created for each PM tooth. Each tooth is then matched, starting from the lowest resolution level, with the archived antemortem (AM) teeth in the database that have the same tooth number. At each resolution level, the AM teeth are arranged in ascending order of a matching distance; the 50% of the AM teeth with the largest distances are discarded, the remaining AM teeth are marked as possible candidates, and the matching process proceeds to the following (higher) resolution level. After matching all the teeth in the PM image, voting is used to obtain a list of best matches for the PM query image based upon the matching results of the individual teeth. Analysis of the time complexity of the proposed algorithm proves that the hierarchical matching significantly reduces the search space and consequently the retrieval time. The experimental results on a database of 187 AM images show that the algorithm is robust for identifying individuals based on their dental radiographs.

© 2007 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.

Keywords: Dental biometrics; Dental radiograph; Postmortem; Antemortem; Segmentation; Identification; Hierarchical; Multi-resolution; Chamfer distance
1. Introduction

Human identification based on dental features has always played a very important role in forensics. The main goal of dental biometrics is to identify deceased individuals when conventional biometric features, i.e., iris, fingerprint, and face, may not be applicable [1,2]. Dental radiographs provide information about the teeth, such as the shapes of the crowns and the roots, and about dental work such as fillings and bridges. The radiographs acquired after death are called postmortem (PM) radiographs, and the radiographs acquired while the person is alive are called antemortem (AM) radiographs. In dental biometrics, the identification is carried out by analyzing and
comparing PM dental records of a decedent against a database of AM records to find the best matches. Sometimes the decedent's teeth are compared with written AM records, although the most accurate and reliable method is the comparison of AM and PM radiographs [3,4]. Dental features survive most PM events that may disrupt or change other body tissues; for example, victims of motor vehicle accidents, violent crimes, and workplace accidents can be disfigured to such an extent that identification by a family member is neither reliable nor desirable [5–7]. As a result, dental features are regarded as the best candidates for PM biometric identification because of their survivability and diversity. Although a number of effective biometric identification solutions are currently available, new approaches and techniques are necessary to overcome some of the limitations of current systems [1,8]. Currently we are building an automated dental identification system (ADIS) [9–11] for identifying individuals using their dental X-ray records. The
system can be used by law enforcement agencies to locate missing individuals using databases of dental X-ray radiographs.

There are only a few published works on dental image matching. In Refs. [12–14], the authors proposed a method for dental X-ray image segmentation and contour matching; they measure the distance between the PM and AM radiographs by combining the distance between the contours of the teeth and the distance between the shapes of the dental work. In Ref. [15], a method was proposed for dental radiograph registration based on genetic algorithms; this method is a preprocessing step of the image comparison component of ADIS. Two multi-resolution techniques, image subsampling and wavelet decomposition, were used to reduce the search space, and a genetic algorithm was adopted to search for the transformation parameters that give the maximum similarity score. In Ref. [16], a method was proposed for dental radiograph registration based on the geometric properties of the teeth contours for subject and reference regions, obtained from the spatial derivatives of the contour points. It uses a dual stage based on the hypotheses generation–verification paradigm, where the registration of the dental regions of interest is the basis for hypotheses generation; a statistical method is used to generate hypotheses for registration in order to reduce the computational time. In Ref. [17], a method was proposed for dental radiograph registration based upon analytical projective geometry; the algorithm describes each image pixel using a coordinate system called a reference triangle. In Ref. [18], a fuzzy reasoning strategy was proposed for the registration of periapical and panoramic radiographs; the registration technique outputs a set of transformations that can be used to correctly map the input image to the reference image. In Ref. [19], we presented a fully automated system for archiving and retrieval of dental images; during retrieval, the system retrieves from the AM database the images with teeth most similar to the PM image, based on the Hausdorff distance between the teeth contours. In Refs. [20,21], we presented a fully automated system for archiving AM dental images into a database and for searching the database for the best matches to a given PM image. The AM images are archived by segmenting and separating the individual teeth in bite-wing images and then extracting a set of signature vectors for each tooth; during searching, matching scores are generated based on the distance between the signature vectors of the AM and PM teeth.

In this paper, we present a new algorithm for matching dental X-ray radiographs. It uses a hierarchical contour matching algorithm at multiple resolutions. During searching, matching scores are generated based on a distance criterion between the features extracted from the AM and PM teeth at the different resolution levels. The algorithm has two main stages: feature extraction and teeth matching. At the feature extraction stage, the algorithm constructs the multi-resolution representation: the AM teeth contours are extracted and distance transformation (DT) images are built for each AM tooth in the database at the different resolution levels. At the matching stage, given a PM query image, the teeth are segmented and numbered. Then, the
algorithm constructs the multi-resolution representation for the teeth of the PM image. At any resolution level, the contour of each PM tooth is constructed from the PM tooth contour at the higher resolution level. For each resolution level, the matching scores are generated based on the distance between the contours of the PM and the AM teeth that have the same tooth number. This is achieved by using the contour of the PM tooth and the DT obtained from the contour of the AM tooth.

The goal of our research is to reduce the retrieval time. Using the multi-resolution representation, the search space, and consequently the computational cost, is significantly reduced. Accordingly, the retrieval time is improved, as will be shown by calculating the time complexity of the proposed algorithm. Preliminary experiments show that the algorithm is feasible for human identification using dental X-ray radiographs.

Dental X-ray images are classified according to the view they are captured from and their coverage. The most commonly used images are panoramic, periapical, and bite-wing. Panoramic views provide a complete view of the upper and lower jaws, but they do not show as fine details as bite-wing and periapical views. Periapical views are captured to obtain a view of the entire tooth, including the tip of the root and the surrounding tissues. Bite-wing views are captured to view the back teeth; only the crowns and the roots of two to four adjacent teeth in both the upper and the lower jaws are captured. Bite-wing images hold more information about the curvature and the roots, and they are the most common views made by dentists; therefore, we use them in our system.

The paper is organized as follows: Section 2 gives a brief description of the radiograph segmentation, Section 3 introduces the proposed matching algorithm, Section 4 presents the time complexity analysis of the proposed algorithm, Section 5 presents the experimental results, and finally, Section 6 concludes the paper.
2. Radiograph segmentation

The goal of radiograph segmentation is to localize the region of each tooth in a dental X-ray image. Dental radiographs may suffer from poor quality, low contrast, and uneven exposure, which complicate the task of segmentation. Dental X-ray images have three different regions: soft tissue regions and background with the lowest intensity values, bone regions with average intensity values, and teeth regions with the highest intensity values. In some cases the intensity of the bone areas is close to the intensity of the teeth, which makes it difficult to use a single threshold for segmenting the entire image. In this paper we use our segmentation technique introduced in Refs. [20,21], which starts by applying iterative thresholding followed by adaptive thresholding to segment the teeth from both the background and the bone areas. After thresholding, horizontal and vertical integral projections are applied to separate each individual tooth. The contour pixels of each tooth are then extracted and sampled to represent each tooth by an equal number of contour pixels. Fig. 1 shows the segmentation results on a few bite-wing images.
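To make the pipeline concrete, the following is a minimal sketch of the thresholding-plus-projection idea, assuming the radiograph is available as a grayscale NumPy array; the function names, the block size, and the 0.2 valley threshold are illustrative choices, not the exact implementation of Refs. [20,21].

    import numpy as np

    def iterative_threshold(img, eps=0.5):
        # Iterative (isodata-style) global threshold: start from the image mean
        # and repeatedly average the means of the two classes until it settles.
        t = img.mean()
        while True:
            low, high = img[img <= t], img[img > t]
            if low.size == 0 or high.size == 0:
                return t
            t_new = 0.5 * (low.mean() + high.mean())
            if abs(t_new - t) < eps:
                return t_new
            t = t_new

    def segment_teeth(img, win=32):
        # Global iterative threshold, refined by a simple block-wise adaptive
        # threshold, followed by integral projections to locate the separating
        # valleys between the jaws and between adjacent teeth.
        mask = img > iterative_threshold(img)
        h, w = img.shape
        for r in range(0, h, win):
            for c in range(0, w, win):
                block = img[r:r + win, c:c + win]
                mask[r:r + win, c:c + win] &= block > block.mean()
        h_proj = mask.sum(axis=1)        # horizontal integral projection (rows)
        v_proj = mask.sum(axis=0)        # vertical integral projection (columns)
        jaw_gap = int(np.argmin(h_proj))                         # candidate upper/lower jaw separator
        tooth_gaps = np.where(v_proj < 0.2 * v_proj.max())[0]    # candidate gaps between teeth
        return mask, jaw_gap, tooth_gaps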
Fig. 1. Teeth segmentation and separation results: (a) the original images; (b) the images after applying adaptive thresholding; (c) the detected separating lines overlaid over the original image.
3. Hierarchical contour matching of dental X-ray radiographs

We developed a contour matching algorithm for matching dental X-ray radiographs based upon the hierarchical chamfer matching method [22–25]. The algorithm finds the best match for a given image by minimizing a predefined matching criterion in terms of the distance between the contour points of the two images. The matching is performed in a hierarchical fashion using a multi-resolution representation.

The algorithm has two main stages: feature extraction and teeth matching. At the feature extraction stage, the contour pixels are extracted and a DT image [22] is built for each AM tooth in the database. Then, a multi-resolution representation, which contains the DT images at the different resolution levels, is constructed for each tooth from the DT information at the higher resolution level, as will be explained in the feature extraction section. The DTs at all resolution levels for each AM tooth are archived in the AM database. Fig. 2 shows a block diagram of the proposed algorithm.

At the teeth matching stage, given a PM query image, the teeth are first segmented and numbered. Matching is performed in a hierarchical fashion from the lowest resolution to the highest resolution. At any resolution level, the matching scores are generated based on the distance between the contour of the PM tooth and the contour of each AM tooth that has the same tooth number. This is achieved by superimposing the contour pixels of the PM tooth on the DT of each AM tooth and then calculating the distance between the PM and the AM contours. The contour of a PM tooth at a given resolution level is constructed from the contour of the same tooth at the higher resolution level, as will be explained in the feature extraction section. The AM teeth are ranked according to the matching distance in ascending order, i.e., the first-ranked AM tooth is the one with the minimum matching distance. Then, majority voting
Fig. 2. Block diagram of the hierarchical teeth contour matching.
is used for the whole image to obtain the ranked list of best matching AM images.

The DT can be computed by two methods. In the first method, the contour of the AM tooth at a certain resolution level is constructed from its contour information at the higher resolution level, and then the DT [27] is computed from the contour information at that resolution level [28]. In the second method, given the contour of the AM tooth, the DT is computed for the highest resolution level; then, for the following lower resolution levels, the DT is computed using the DT information from the higher resolution level, without creating the pyramid of the AM contours (as will be explained later). We use the second method because it requires less computation than the first.

To increase the accuracy of our matching and reduce the search space, we only consider matching corresponding teeth, i.e., teeth that have the same number. For all the images in the database, the segmented teeth are automatically classified and numbered according to the universal teeth numbering system using our algorithm in Ref. [26]. This eliminates the possibility of matching teeth that have different numbers.
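As a minimal sketch of the second method described above, the following assumes the full-resolution DT of an AM tooth is already available as a NumPy array; the helper name dt_pyramid and the use of six levels follow Section 3.1, but the code is an illustration rather than the authors' implementation.

    import numpy as np

    def dt_pyramid(dt, levels=6):
        # Multi-resolution DT representation for an AM tooth ("second method"):
        # the DT is computed once at full resolution, and every coarser level
        # replaces each 2x2 block by the average of its four parent values,
        # so the pyramid of AM contours is never built.
        pyramid = [dt]
        for _ in range(levels - 1):
            d = pyramid[-1]
            h, w = d.shape[0] // 2 * 2, d.shape[1] // 2 * 2   # crop to even size
            d = d[:h, :w]
            pyramid.append(0.25 * (d[0::2, 0::2] + d[0::2, 1::2] +
                                   d[1::2, 0::2] + d[1::2, 1::2]))
        return pyramid    # pyramid[0] is the finest level, pyramid[-1] the coarsest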
Fig. 3. Example of the distance transformation: the zero entries represent the pixel positions of an AM tooth contour. The dark-edge entries represent the pixel positions of a given PM tooth contour.

The details of the feature extraction and the hierarchical teeth matching are given in Sections 3.1 and 3.2, respectively.

3.1. Feature extraction

The DT [22] for the AM tooth at the highest resolution level is computed iteratively by setting each contour point to zero and each non-contour point to infinity; each pixel is then assigned a new value v_{i,j}^{k},

v_{i,j}^{k} = \min\left\{ v_{i,j}^{k-1},\;\; v_{i,p}^{k-1} + 3 \ (p = j-1,\, j+1),\;\; v_{m,j}^{k-1} + 3 \ (m = i-1,\, i+1),\;\; v_{m,p}^{k-1} + 4 \ (m = i-1,\, i+1;\ p = j-1,\, j+1) \right\},   (1)

where v_{i,j}^{k} is the value of the pixel at position (i, j) at iteration k. This iterative procedure continues until no changes occur in the values. We can notice that the global distances in the DT image are approximated by propagating local distances, i.e., distances between neighboring pixels, over the image. This computation is only applied to an area around the contour points rather than the whole image, which further reduces the number of computations. From our experimental results, 10–15 iterations were sufficient for convergence. Fig. 3 shows an example of a DT.

The DT at a given resolution level in the hierarchical pyramid is constructed from the DT at the higher resolution level by replacing each block of four pixel values by one pixel, whose value is the average of the four parent pixel values. The DT images at all resolution levels for each AM tooth are then archived in the AM database. This accelerates the calculation of the matching scores between a PM tooth and an AM tooth at the matching stage.

For a PM tooth, we need to construct the contour images at the different resolutions to perform matching. The contour image at a given resolution level in the multi-resolution pyramid is constructed from the contour image at the higher resolution level by replacing each block of four pixels by one pixel, which is the "OR" of the four parent pixels. In Ref. [22] this process is repeated till only one pixel is left, but from our experimental results we found it sufficient to use six levels, because there are no details left in the lower resolution levels.

For a given resolution level l, if the coordinates of a point in the original image are x and y, then the corresponding pixel coordinates at resolution level l are

x_l = 2^{-l}(x + 2^l - 1)   and   y_l = 2^{-l}(y + 2^l - 1).   (2)
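A minimal sketch of the iterative DT of Eq. (1), assuming the AM tooth contour is given as a boolean NumPy image; for clarity it sweeps the whole image rather than only a band around the contour, and the function name chamfer_dt is illustrative.

    import numpy as np

    def chamfer_dt(contour, max_iters=15):
        # Iterative 3-4 chamfer distance transform (Eq. (1)): contour pixels start
        # at 0, all other pixels at "infinity", and each pass assigns every pixel
        # the minimum of its previous value, its 4-neighbours + 3, and its
        # diagonal neighbours + 4, until the values stop changing.
        big = 1e9
        v = np.where(contour, 0.0, big)
        h, w = v.shape
        offsets = ((-1, 0, 3), (1, 0, 3), (0, -1, 3), (0, 1, 3),
                   (-1, -1, 4), (-1, 1, 4), (1, -1, 4), (1, 1, 4))
        for _ in range(max_iters):
            prev = v.copy()
            for i in range(h):
                for j in range(w):
                    best = prev[i, j]
                    for di, dj, cost in offsets:
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            best = min(best, prev[ni, nj] + cost)
                    v[i, j] = best
            if np.array_equal(v, prev):
                break
        return v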
3.2. Teeth matching

The idea of our matching algorithm is to perform matching iteratively at the different resolution levels. At the lowest archived resolution level, the search space is large, i.e., it contains all the images, but the matching between two teeth is fast. At each resolution level, the distance between the AM and the PM teeth contours is calculated, and the AM teeth are arranged in ascending order of this distance. Half of the AM teeth, those with the largest distances, are removed from the search space, and the remaining AM teeth are marked as possible candidates for further matching. As a result, the search space is decreased while moving to higher resolution levels.

Before calculating the matching distance between a PM image and an AM image, the contour of each tooth in the PM image has to be aligned with the contour of the corresponding tooth (i.e., the tooth that has the same number) in the AM image. There may be variations in scale, rotation, and translation between the AM and the PM teeth. To solve this problem, we apply a transformation T [13,29] to the query tooth in order to align both teeth and obtain the minimum matching distance between them. T is of the form

T(q) = A q + t,   (3)

where q is the query contour, T(q) is the result of applying the transformation to the query contour, A is a transformation matrix that represents both rotation and scaling, and t is a translation vector. A and t can be written as

A = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} S_x & 0 \\ 0 & S_y \end{pmatrix},   (4)

t = \begin{pmatrix} t_x \\ t_y \end{pmatrix},   (5)

where \theta is the rotation angle, S_x and S_y are the vertical and horizontal scale factors, and t_x and t_y are the vertical and horizontal translations. The five parameters (\theta, S_x, S_y, t_x, t_y) are optimized to obtain the minimum matching distance between the transformed contour of the query tooth and the contour of the AM tooth. The matching distance is

D(q_j, k_j) = \frac{1}{M} \sum_{i=1}^{M} v_i^2,   (6)

where q_j is tooth j in the query image Q, k_j is tooth j in the AM image K, v_i is the value of the DT of the AM tooth at position i after superimposing the contour of the PM tooth, where position i lies on the contour of the PM tooth, and M is the number of contour points. D will be zero if we have a perfect match between the contours of the AM and the PM teeth.
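A minimal sketch of Eqs. (3)–(6), assuming the PM contour is an array of (x, y) points and am_dt is the DT of the corresponding AM tooth at the current resolution level; the coarse grid search over the five parameters is only a stand-in, since the paper does not prescribe a particular optimizer, and all names and ranges here are illustrative.

    import numpy as np

    def transform(points, theta, sx, sy, tx, ty):
        # Eqs. (3)-(5): rotate, scale, and translate the query (PM) contour points.
        c, s = np.cos(theta), np.sin(theta)
        A = np.array([[c, s], [-s, c]]) @ np.diag([sx, sy])
        return points @ A.T + np.array([tx, ty])

    def matching_distance(pm_points, am_dt):
        # Eq. (6): superimpose the transformed PM contour on the AM DT and
        # average the squared DT values read off at the contour positions.
        def score(params):
            p = np.rint(transform(pm_points, *params)).astype(int)
            p[:, 0] = np.clip(p[:, 0], 0, am_dt.shape[1] - 1)   # x -> column index
            p[:, 1] = np.clip(p[:, 1], 0, am_dt.shape[0] - 1)   # y -> row index
            return float(np.mean(am_dt[p[:, 1], p[:, 0]] ** 2))
        # Coarse search over rotation, a common scale, and translation.
        best = np.inf
        for theta in np.linspace(-0.2, 0.2, 5):
            for s in (0.9, 1.0, 1.1):
                for tx in range(-4, 5, 2):
                    for ty in range(-4, 5, 2):
                        best = min(best, score((theta, s, s, tx, ty)))
        return best

In practice one would search S_x and S_y independently and refine the parameters around the best coarse estimate; the sketch only shows how the DT turns Eq. (6) into simple array lookups.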
Our goal is to search for the best match between a given PM tooth and the AM teeth in the database. The matching procedure starts at the lowest resolution level and proceeds to the higher levels, where the results at the low level guide the matching at the higher level. The AM teeth are arranged in ascending order according to the calculated distance D, and we discard the 50% of the teeth with the largest distances. The remaining AM teeth are marked as possible candidates, and the matching process proceeds to the following, higher, resolution level with more feature points (contour points) and a smaller search space (fewer teeth). This is one of the important advantages of the hierarchical algorithm: it speeds up the search.

To obtain the best matching image, majority voting is used, so that the best matching AM image is the image with the maximum number of teeth ranked first. For a given PM image, we order the matched AM images according to the number of teeth that rank first, then the number of teeth that rank second, and so on. The best AM match is the first image in the list. If there is a tie, the image with the minimum average matching distance over the whole AM image is chosen.

4. Performance evaluation

Most of the processing time of the retrieval step is spent in matching. The idea of using the hierarchical algorithm is to speed up the computations and accordingly decrease the retrieval time. In this section we study the performance of the proposed algorithm; the matching performance is evaluated with and without the hierarchical search. Fig. 4 shows the flowchart of the hierarchical matching scenario.

The time complexity of the archiving step, i.e., computing the AM DT images, is O(I*J), where I and J are the dimensions of the contour image of the tooth. Suppose we have H AM teeth. The time required for calculating the matching scores between a PM tooth and the AM teeth, without using the multi-resolution representation and the hierarchical search, is the sum of the times required to align and to calculate the matching score between the PM tooth and each AM tooth, which is O(HM^2) plus O(HM), where M is the number of contour points. Accordingly, the time complexity of the algorithm is O(HM^2).

Using the hierarchical search, suppose we have n resolution levels and the comparison between a PM tooth and each AM tooth starts at the lowest resolution level, n. The computational cost is then the sum of the time required for aligning the PM tooth with each of the retained AM tooth contours at all resolution levels, the time required to construct the multi-resolution representation of the contour of the PM tooth, and the time required for calculating the matching scores at all resolution levels. At any resolution level l, H/2^{n-l} represents the number of AM teeth, because we discard 50% of the teeth with the largest distances when moving to a higher resolution level, and M/4^{l-1} represents the number of contour points, since each block of four pixels is replaced by one pixel.
Fig. 4. Flowchart of the hierarchical step.
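To make the flowchart concrete, here is a minimal sketch of the hierarchical search for one PM tooth, assuming a matching_distance function like the sketch shown after Eq. (6), a PM contour pyramid pm_pyramid indexed from finest (0) to coarsest, and a dictionary am_candidates mapping each same-numbered AM tooth to its DT pyramid; all of these names are illustrative.

    def hierarchical_match(pm_pyramid, am_candidates, levels=6):
        # One hierarchical pass (Fig. 4): score every surviving AM tooth at the
        # current level, sort by matching distance, keep the best 50%, and move
        # to the next finer level; the final sort is the ranking for this tooth.
        survivors = list(am_candidates.keys())
        scored = []
        for level in reversed(range(levels)):            # coarsest -> finest
            scored = [(matching_distance(pm_pyramid[level],
                                         am_candidates[tid][level]), tid)
                      for tid in survivors]
            scored.sort(key=lambda item: item[0])        # ascending matching distance
            if level > 0:                                # reject the worst half
                survivors = [tid for _, tid in scored[:max(1, len(scored) // 2)]]
        return scored                                    # (distance, AM tooth id) pairs

The per-image decision then combines these per-tooth rankings with the majority voting described at the end of Section 3.2.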
Using this terminology, the time complexity of each of the three terms is as follows:

C_0 = W_0 \left[ \frac{H}{2^0}\left(\frac{M}{4^{n-1}}\right)^2 + \frac{H}{2^1}\left(\frac{M}{4^{n-2}}\right)^2 + \cdots + \frac{H}{2^{n-1}}\left(\frac{M}{4^0}\right)^2 \right] = W_0 H M^2 \sum_{y=1}^{n} \frac{1}{4^{2(n-y)}} \cdot \frac{1}{2^{y-1}},   (7)

where C_0 is the total time required to align a PM tooth with the retained AM teeth at all resolution levels, and W_0 is a constant.

C_1 = W_1 \left( \frac{M}{4^{n-1}} + \frac{M}{4^{n-2}} + \cdots + \frac{M}{4^0} \right) = W_1 M \sum_{y=1}^{n} \frac{1}{4^{n-y}},   (8)

where C_1 is the total time required to construct the multi-resolution representation of the contour of the PM tooth, and W_1 is a constant.

C_2 = W_2 \left( \frac{H}{2^0} \cdot \frac{M}{4^{n-1}} + \frac{H}{2^1} \cdot \frac{M}{4^{n-2}} + \cdots + \frac{H}{2^{n-1}} \cdot \frac{M}{4^0} \right) = W_2 H M \sum_{y=1}^{n} \frac{1}{4^{n-y}} \cdot \frac{1}{2^{y-1}},   (9)

where C_2 is the total time required to compare a PM tooth to the retained AM teeth at all resolution levels, and W_2 is a constant.
Therefore the computational complexity C of calculating the matching scores for a PM image is

C = C_0 + C_1 + C_2 = W \left( H M^2 \sum_{y=1}^{n} \frac{1}{4^{2(n-y)}} \cdot \frac{1}{2^{y-1}} + M \sum_{y=1}^{n} \frac{1}{4^{n-y}} \left( H \frac{1}{2^{y-1}} + 1 \right) \right) = W \left( \frac{H M^2}{4^{2n-1}} \sum_{y=1}^{n} 2^{3y-1} + M \sum_{y=1}^{n} \frac{1}{4^{n-y}} \left( H \frac{1}{2^{y-1}} + 1 \right) \right),   (10)

where W is a constant. The expression for C is equivalent to O(H M^2 / 2^{n-1}). Comparing the computational complexity of retrieval with and without the hierarchical search, it is clear that the hierarchical method reduces the retrieval time. Even though the computational time decreases as the number of levels n increases, we should limit the number of levels so that we do not end up with useless contour representations at the lowest resolution levels.
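For concreteness, substituting the six resolution levels found sufficient in Section 3.1 into this bound shows the scale of the saving in the dominant matching term; this is a statement about the asymptotic bound only, since the constants W_0, W_1, W_2 and the parts of retrieval outside matching are ignored:

C = O\left(\frac{H M^2}{2^{\,n-1}}\right), \qquad n = 6 \;\Rightarrow\; C = O\left(\frac{H M^2}{32}\right),

compared with O(H M^2) for matching without the hierarchy.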
5. Experimental results

We tested the hierarchical teeth matching algorithm on a database of 187 AM bite-wing dental images. Fig. 5 shows a sample of the X-ray AM images in the database. During archiving, the teeth were segmented, using the segmentation algorithm presented in Ref. [20], to obtain their contours. Then, the teeth were classified and numbered using our algorithm in Ref. [21]. DTs were calculated for the contour of each individual tooth in the AM database at the different resolution levels. During retrieval, the matching distance D is calculated between each PM tooth and the AM teeth in the database that have the same number, using the hierarchical matching described in Section 3.2. The AM teeth are ranked in ascending order according to the matching distance. Then, majority voting is used to order the matched AM images according to the number of teeth that rank first, the number of teeth that rank second, and so on. The best AM match is the first image in the list.

Fig. 5. Sample of X-ray AM images from the database.

The matching algorithm was evaluated using 50 PM query images. The correct matches were always retrieved for the 50 PM query images: 42 were ranked first, three were ranked second, two were ranked fourth, and three were ranked fifth. The matching performance curve is shown in Fig. 6.

Fig. 6. Hierarchical matching performance curve.

Fig. 7 shows the retrieval results for one query PM image. Each row contains the query PM tooth and the best matched AM tooth from the database, in columns a and b, respectively. Column c shows the corresponding AM tooth for the same person when it is not ranked first. The matching distance D is shown under each retrieved tooth. The PM image in Fig. 7 contains six teeth; four of the six teeth were correctly matched to an AM image of the same person, while for the other two incorrectly matched teeth, the correct matches were ranked fourth and third.

By studying the mismatched cases, we found that there are different reasons for mismatching. In some cases the contour of the tooth is not correctly extracted during the segmentation step because of the poor quality of the image.
Fig. 7. Retrieval results of the hierarchical matching algorithm: (a) query PM tooth marked with black; (b) the AM tooth for the correct person marked with black; (c) the ranked first tooth, when matched with a wrong person, marked with black (rows 3 and 6).

Fig. 8. Retrieval results for a misclassified subject: (a) query PM tooth marked with black; (b) the AM tooth for the correct person marked with black; (c) the ranked first tooth, when matched with a wrong person, marked with black (rows 2, 3, and 4).

In other cases, the teeth
at the corners of the image have poor quality, and sometimes parts of the teeth do not appear in the image. In other cases, because the X-ray image is a 2D projection of a 3D object, the 2D shapes of the contours were similar, which led to wrong matches. It is also important to note that if the PM images are captured long after the AM images, the shapes of the teeth can change because of artificial prostheses, teeth growth, and teeth extraction.

Fig. 8 shows an example of one mismatched result. The PM image in Fig. 8a contains four teeth. Fig. 8b shows that two of the four teeth were matched to the same wrong person (rows 2 and 3); based on voting, this wrong match was ranked first. It also shows that one of the four teeth was correctly matched with the corresponding AM tooth of the same person (row 1). A tie appeared between two AM images for the second place (rows 1 and 4). Therefore, we calculated the total matching distance for the two AM images. The AM image that has the
minimum matching distance was chosen, which happened to be the correct match (row 1).

We also applied the same matching algorithm to our test set using only the original images, i.e., without the multi-resolution representation and the hierarchical matching. Comparing the retrieval times of both methods, the hierarchical matching method reduces the retrieval time by 31%, on average.

Also, using the same test set, we applied our matching algorithm based on signature vectors, introduced in Ref. [20]. This algorithm relies on selecting a set of salient points on the contours of the AM and PM teeth and then generating a signature vector for each salient point. The signature vectors capture the curvature information of each salient point; each element in the vector is the distance between the salient point and a point on the contour. During searching, matching scores are generated based on the distance between the signature vectors of the AM and PM teeth. For the 50 PM query images, the correct matches were always retrieved, but 40 correct matches were ranked first, three were ranked second, two were ranked third, four were ranked fifth, and one was ranked seventh. The matching performance curve using the signature vectors is shown in Fig. 9. We also compared the retrieval time of this method and the hierarchical matching method presented in this paper; the hierarchical matching method is 26% faster than the method in Ref. [20], on average.

We can see from the results that the new algorithm outperforms our previous algorithm in accuracy, and at the same time
Fig. 9. Performance curve of the signature vectors matching technique.
Fig. 10. Performance curve of our proposed technique on the data set used in Ref. [15].
is faster, because of the use of the multi-resolution representation and the hierarchical matching.

We also compared our technique with the technique introduced in Ref. [15] using the same data set. In Ref. [15], regions are selected from the subject and reference dental images as input for registration, and a genetic algorithm is adopted to search for the transformation parameters that give the maximum similarity score. The initial registration is performed at the lowest resolution to reduce the search space, and the relative percentage errors between the known and estimated transformation parameters were less than 20%. In our proposed technique, the algorithm starts at the lowest resolution level; at each resolution level, we calculate the distance between the AM and the PM teeth contours and then remove from the search space the half of the AM teeth with the largest distances. We apply this rejection criterion at each resolution level to decrease the search space. For the comparison, we used the data set of Ref. [15], which has 132 bite-wing AM images that contain 851 teeth and 40 bite-wing PM images that contain 262 teeth. With our proposed technique, among the 262 tooth queries, 221 correct matches were ranked first, a recognition rate of 84%. The correct matches were always retrieved for the 40 PM query images: 34 were ranked first, three were ranked second, one was ranked third, and two were ranked fifth. The matching performance curve is shown in Fig. 10.

6. Conclusion and future work

In this paper, we introduced a new algorithm, based upon the hierarchical chamfer distance, for dental X-ray matching. The AM images are first segmented, the teeth are numbered, and the distance transformations (DTs) of the individual teeth are archived in an AM database. Given a PM image,
the teeth are segmented and numbered, and the matching scores are generated based on the distance between the contours of the PM and the AM teeth. The hierarchical algorithm applies matching at different resolution levels, and the DT is used to further speed up the matching at each resolution level. Using this algorithm, the search space as well as the computational load are significantly reduced. We also studied the time complexity of the technique with and without the multi-resolution hierarchy to formally prove that the retrieval time is reduced when the multi-resolution representation is applied, and we showed that the new technique outperforms our previous technique in both accuracy and speed. The experimental results show that the matching technique is robust.

Currently we are working on enlarging the dental radiograph database by including panoramic and periapical radiographs. This requires modifying our segmentation technique to handle the different characteristics of these images.
References

[1] D. Maltoni, A.K. Jain, Biometric authentication, in: The Eighth European Conference on Computer Vision, International Workshop, BioAW 2004, Vol. 3087, Prague, Czech Republic, May 2004, pp. 11–14.
[2] L. Hong, A. Jain, S. Pankanti, Biometric identification, Commun. ACM 43 (2) (2000) 91–98.
[3] B.G. Brogdon, Forensic Radiology, CRC Press, Boca Raton, 1998.
[4] I.A. Pretty, D. Sweet, A look at forensic dentistry—part 1: the role of teeth in the determination of human identity, Br. Dent. J. 190 (7) (2001) 359–366.
[5] F.S. Malkowski, Forensic dentistry: a study of personal identification, Dent. Stud. 51 (3) (1972) 42–44.
[6] V.W. Weedn, Postmortem identifications of remains, Clin. Lab. Med. 18 (1998) 115–137.
[7] R.B. Dorion, Disasters big and small, J. Can. Dent. Assoc. 56 (1990) 593–598.
[8] A.K. Jain, A. Ross, S. Prabhakar, An introduction to biometric recognition, IEEE Trans. Circuits Syst. Video Technol., Special Issue on Image- and Video-Based Biometrics 14 (1) (2004) 4–20.
[9] G. Fahmy, D.M. Nassar, E. Haj-Said, H. Chen, O. Nomir, J.D. Zhou, R. Howell, H.H. Ammar, M. Abdel-Mottaleb, A.K. Jain, Towards an automated dental identification system (ADIS), in: Proceedings of the National Conference on Digital Government Research, DGO, Seattle, WA, USA, 2004, pp. 789–796.
[10] G. Fahmy, D. Nassar, E. Haj-Said, H. Chen, O. Nomir, J. Zhou, R. Howell, H. Ammar, M. Abdel-Mottaleb, A. Jain, Toward an automated dental identification system (ADIS), J. Electron. Imaging 14 (4) (2005).
[11] D. Nassar, E. Haj-Said, A. Abaza, S. Chekuri, J. Zhou, M. Mahoor, O. Nomir, H. Chen, G. Fahmy, H. Ammar, M. Abdel-Mottaleb, A. Jain, Automated dental identification system (ADIS), in: Proceedings of the National Conference on Digital Government Research, DGO, Atlanta, GA, USA, May 2005.
[12] A.K. Jain, H. Chen, S. Minut, Dental biometrics: human identification using dental radiographs, in: 4th International Conference on Audio- and Video-Based Biometric Person Authentication, AVBPA 2003, Guildford, UK, 2003, pp. 429–437.
[13] A.K. Jain, H. Chen, Matching of dental X-ray images for human identification, Pattern Recognition 37 (7) (2004) 1519–1532.
[14] A.K. Jain, H. Chen, Alignment and matching of dental radiographs, IEEE Trans. Pattern Anal. Mach. Intell. 27 (8) (2005) 1319–1326.
[15] O. Mythili, Multi-resolution dental image registration based on genetic algorithm, Master's Thesis, Department of Electrical and Computer Engineering, WVU, USA, 2005.
[16] Z. Millwala, A dual stage approach to dental image registration, Master's Thesis, Department of Electrical and Computer Engineering, WVU, USA, December 2004.
[17] J. Ostuni, E. Fisher, P.V. Stelt, S. Dunn, Registration of dental radiographs using projective geometry, J. Dentomaxillofacial Radiol. 22 (4) (1993) 199–203.
[18] F. Samadzadegan, F. Bashizadeh, M. Hahn, P. Ramzi, Automatic registration of dental radiograms, Geo-Imagery Bridging Continents, Istanbul, Turkey, 12–23 July 2004.
[19] J.D. Zhou, M. Abdel-Mottaleb, A content-based system for human identification based on bitewing dental X-ray images, Pattern Recognition 38 (2005) 2132–2142.
[20] O. Nomir, M. Abdel-Mottaleb, A system for human identification from X-ray dental radiographs, Pattern Recognition 38 (2005) 1295–1305.
[21] M. Abdel-Mottaleb, O. Nomir, D. Nassar, G. Fahmy, H. Ammar, Challenges of developing an automated dental identification system, in: IEEE Mid-West Symposium for Circuits and Systems, Cairo, Egypt, 27–29 December 2003.
[22] G. Borgefors, Hierarchical chamfer matching: a parametric edge matching algorithm, IEEE Trans. Pattern Anal. Mach. Intell. 10 (6) (1988) 849–865.
[23] J. You, W. Zhu, E. Pissaloux, H. Cohen, Hierarchical image matching: a chamfer matching algorithm using interesting points, Int. J. Real-Time Imaging 1 (1995) 245–259.
[24] M. Akmal, P. Maragos, Optimum design of chamfer distance transforms, IEEE Trans. Image Process. 7 (10) (1998) 1477–1484.
[25] J. Arlandis, J.C. Perez-Cortes, The continuous distance transformation: a generalization of the distance transformation for continuous-valued images, Pattern Recognition Appl. 56 (2000) 89–98.
[26] M. Mahoor, M. Abdel-Mottaleb, Classification and numbering of teeth in bitewing dental images, Pattern Recognition 38 (2005) 577–586.
[27] H.G. Barrow, J.M. Tenenbaum, R.C. Bolles, H.C. Wolf, Parametric correspondence and chamfer matching: two new techniques for image matching, in: 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, USA, May 1977, pp. 659–663.
[28] O. Nomir, M. Abdel-Mottaleb, Hierarchical dental X-ray radiographs matching, in: Proceedings of the 2006 IEEE International Conference on Image Processing (ICIP 2006), Atlanta, GA, USA, 8–11 October 2006, pp. 2677–2680.
[29] T. Cootes, C. Taylor, Statistical models of appearance for computer vision, Technical Report, Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering, University of Manchester, UK, 2000.
About the Author—OMAIMA NOMIR received the Ph.D. degree in electrical and computer engineering from the University of Miami, Coral Gables, FL, in 2006. Currently, she is a lecturer in the Department of Computer Science, School of Computer and Information Sciences, University of Mansoura, Egypt. Her research interests are human identification, pattern recognition, medical image processing, and neural networks. She has published 10 research papers.

About the Author—MOHAMED ABDEL-MOTTALEB received his Ph.D. in computer science from the University of Maryland, College Park, in 1993. He is an associate professor in the Department of Electrical & Computer Engineering, University of Miami, where his research focuses on 3D face recognition, dental biometrics, visual tracking, and human activity recognition. Prior to joining the University of Miami, from 1993 to 2000, he worked at Philips Research, Briarcliff Manor, NY, where he was a Principal Member of Research Staff and a Project Leader, leading several projects in image processing and content-based multimedia retrieval. He holds 20 US patents and has published over 75 papers in the areas of image processing, computer vision, and content-based retrieval. He is an associate editor of the Pattern Recognition journal.