Knowledge-based analysis and understanding of medical images


Computer Methods and Programs in Biomedicine, 33 (1990) 221-239 Elsevier


COMMET 01163

Section II. Systems and programs

Knowledge-based analysis and understanding of medical images

Atam P. Dhawan 1 and Sridhar Juvvadi 2

1 Department of Electrical and Computer Engineering, University of Cincinnati, Cincinnati, OH 45221, U.S.A., and 2 Knowledge-Based Systems Inc., Houston, TX, U.S.A.

Knowledge-based image analysis and interpretation of radiological images is of significant interest for several reasons, including as a means to identify and label each part of the image for further automated diagnostic analysis. There is also a need to develop a knowledge-based biomedical image analysis system which can analyze and interpret anatomical images (such as those obtained from X-ray computed tomography (CT) scanning) in order to help the analysis of functional images (such as those obtained from positron emission tomography (PET) scanning) of the same patient's organ. This paper deals with the design and implementation of a knowledge-based system to analyze and interpret CT anatomical images of the human chest. In the approach presented here, the emphasis has been on the development of a strong low-level analysis system with the capability of analyzing in both bottom-up and top-down modes, and on the use of hierarchical relational, spatial, and structural knowledge of human anatomy in the process of high-level analysis and recognition.

Keywords: Medical image analysis; Computed tomography image analysis; Medical-vision system; Knowledge-based analysis; Image analysis

Correspondence: Dr. Atam P. Dhawan, Department of Electrical and Computer Engineering, 826 Rhodes Hall, University of Cincinnati, Cincinnati, OH 45221-0030, U.S.A.

1. Introduction

The computerized analysis and interpretation of three-dimensional medical images is vital for diagnosis as well as for studying the pathology of disease. The potential of anatomical imaging modalities such as X-ray computed tomography (CT) and magnetic resonance imaging (MRI) in diagnostic radiology has been well recognized for several years. Recent developments in nuclear medicine imaging modalities such as positron emission tomography (PET) have made functional imaging feasible and important in studying diseased organs as well as in the diagnosis of certain diseases. Present PET technology claims the in vivo assessment of biochemical activity in an organ, but the images are difficult to interpret because of the lack of anatomical information and poor image quality. In many cases, correlation studies among the various modalities may lead to valuable diagnoses which are otherwise difficult to arrive at. The objective of the proposed research work is to develop an automated computerized image analysis system which can analyze anatomical three-dimensional (3D) images and label each organ utilizing a knowledge-base of human anatomy, and then correlate the components of the 3D anatomical image to the functional image of the same organ. The knowledge-based system may also be used to predict and display the normal 3D functional image for a patient whose anatomical images have been analyzed and labeled. These images can be of significant value in differential diagnosis as well as in teaching. In the proposed system, the labeled normal anatomical regional information, along with unlabeled regions (which could not be labeled as normal anatomy, such as tumors), will be projected onto the functional image to analyze and interpret the biochemical activity of the functional image in the context of the specific anatomical information of the patient. Our efforts are directed towards achieving this goal in the hope that such analysis will bring out the regions where anatomical and/or functional abnormalities are suspected. Thus, the fundamental issue is to develop a knowledge-based analysis system for identifying, labeling, and interpreting each part of the human organ imaged in the anatomical (such as CT) image.

Some of the earlier work in understanding 3D medical images of X-ray CT scans of the human abdomen was done by Shani [1] and others [2,3]. In the approach presented by Shani [1], the generalized-cylinder representation was used in modelling without providing any powerful strategy for adequate reasoning in the high-level analysis and interpretation of objects. Also, no effective feedback strategy was used to improve the model-based matching process or to improve the low-level segmentation analysis. The low-level and intermediate-level image analysis stages were not sufficiently powerful to handle the problems of poor segmentation causing failures in the recognition process.

We have been developing a knowledge-based biomedical image analysis and understanding system for interpreting anatomical images (CT images) using the anatomical knowledge-base of the respective organ. Our approach utilizes a strong low-level image analysis system with the capability of analyzing the data in both bottom-up (or data-driven) and top-down feedback (or model-driven) modes to improve the high-level recognition process. In the approach presented here, we incorporate the relational, spatial, and structural knowledge of regions modeled in the anatomical knowledge-base of the respective organ. The process of recognizing an object is realized as hierarchical labeling of a region or a group of regions in the image. The control strategies are capable of postponing the recognition of an object if the confidence in the matching process is not high enough to label the object according to the model.

The final labeling of such an object is delayed until more information is derived by analyzing other parts of the image in which the high-level analysis has more confidence in recognizing the objects. This additional information affects the relational description of the object when it is re-analyzed by the system. The relative emphasis among the knowledge sources can be changed according to the type of knowledge and the belief in the knowledge used. The knowledge-base has been created simply by stacking the 2D slices in the form of relational, spatial, and structural models in 3D space. The structural features are normalized (and thus independent of size changes in the image data) and stored in the frame representation. Thus, we do not use any specific generalized-cylinder representation in modelling; instead, we make use of the knowledge in the inference mechanism in the form of rules and frames. This provides an efficient and flexible method of analysis. In this paper, we first describe the design and implementation of the complete system and then discuss the results of the knowledge-based analysis and understanding of X-ray CT human chest scans.

2. Knowledge-based image analysis

A knowledge-based biomedical image analysis and understanding system can be implemented using four major component blocks: (1) the entry-level preprocessing block to enhance features and remove noise; (2) the low-level segmentation block, where the image is segmented into regions and a global set of features is extracted to aid the segmentation analysis in obtaining a reasonable number of meaningful segmented regions; (3) the intermediate-level feature extraction block, where specific features are extracted and transformed into the appropriate form (symbolic or quantitative, as required); and (4) the high-level interpretation block with a knowledge-base for labeling and interpreting the segmented regions according to their specific features. A schematic block diagram of the complete system is shown in Fig. 1, and a control-flow sketch is given below.
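Concretely, the four blocks form a sequential pipeline with feedback. The following is a minimal Python sketch of that control flow only; every stage body is a simplified placeholder (the actual methods are described in Sections 2.1-2.6), and all function names are illustrative rather than taken from the system.

import numpy as np

def preprocess(image):
    # Stage 1 (Section 2.1): placeholder 3 x 3 mean smoothing standing in
    # for the feature adaptive neighborhood contrast enhancement.
    p = np.pad(image.astype(float), 1, mode="edge")
    return sum(p[dy:dy + image.shape[0], dx:dx + image.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def segment(image, n_regions=4):
    # Stage 2 (Section 2.2): placeholder intensity quantization standing in
    # for pyramid linking followed by rule-based merging and splitting.
    edges = np.linspace(image.min(), image.max() + 1e-6, n_regions + 1)
    return np.digitize(image, edges[1:-1])

def extract_features(labels):
    # Stage 3 (Section 2.3): a few of the listed features per region.
    feats = {}
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        feats[int(lab)] = {"area": int(ys.size),
                           "centroid": (float(ys.mean()), float(xs.mean()))}
    return feats

def interpret(feats, knowledge_base):
    # Stage 4 (Sections 2.4-2.6): placeholder lookup standing in for
    # frame creation and rule-based model matching.
    return {lab: knowledge_base.get(lab, "UNKNOWN") for lab in feats}

image = np.random.default_rng(0).integers(0, 256, (64, 64))
labels = segment(preprocess(image))
print(interpret(extract_features(labels), {0: "BACKGROUND"}))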

Fig. 1. The schematic block diagram of the complete system: the input image passes through pre-processing, preliminary segmentation, rule-based segmentation, and intermediate-level feature extraction; the anatomical knowledge-base drives mask creation and model instantiation, with top-down feedback and selected-window processing closing the loop.

2.1. The entry-level preprocessing

The entry-level preprocessing for removing image noise and enhancing the features of interest is an important aspect of successful image analysis. For medical images, the conventional point-processing techniques such as histogram modification (see [4] for various methods) often do not provide good results, because they may enhance the noise instead of the features. Also, because of their inability to discriminate features from background noise, they do not allow the desired enhancement of useful features. On the other hand, region-based neighborhood processing techniques [4,5] provide better results, since they analyze not only the pixel itself but also the neighborhood region around it. We have developed feature adaptive neighborhood processing techniques to enhance the contrast of desirable features in the image [6,7]. The algorithm [7] computes the contrast of a feature against its background and then enhances it using a contrast enhancement function (CEF), which can be designed and tuned intelligently using some knowledge derived from the neighborhood processing. In this method, an adaptive neighborhood structure is defined as a set of two neighborhoods: inner and outer. Three types of adaptive neighborhoods have been defined: constant ratio, constant difference, and feature adaptive [6,7]. A constant ratio adaptive neighborhood criterion maintains the ratio of the inner to outer neighborhood size at 1:3, i.e. each adaptive neighborhood about a pixel has an inner neighborhood of size c × c and an outer neighborhood of size 3c × 3c, where c is an odd number. A constant difference neighborhood criterion allows the size of the outer neighborhood to be (c + n) × (c + n), where n is a positive even integer (see Fig. 2). Note that both of the above-mentioned neighborhoods are of fixed shape, i.e. square; thus they can only provide the closest possible approximation of local features by square regions. A variable-shaped feature adaptive neighborhood criterion, which adapts to the arbitrary shape and size of the local features to obtain the 'Center' region (consisting of pixels forming the feature) and the 'Surround' region (consisting of pixels forming the background for the feature), is defined using predefined similarity and distance criteria. These regions are used to compute the local contrast for the centered pixel. The procedure to obtain the Center and the Surround regions is as follows. First, the inner and outer neighborhoods around a pixel are grown using the constant difference adaptive neighborhood criterion, i.e. the inner and outer regions are grown around the pixel as 1 × 1 and 3 × 3; 3 × 3 and 5 × 5; 5 × 5 and 7 × 7; and so on, respectively. The incoming pixels in each step of region growing are labeled as feature or background based on the similarity criterion. To define the similarity criterion, a gray-level threshold (say three gray levels) and a percentage threshold (say 50%) are defined.

Fig. 2. Adaptive neighborhood region growing process using the constant difference method (panels a-c show successive steps). The dark region shows the inner neighborhood at a step of processing, while the hatched region shows the outer neighborhood at that step. The inner neighborhood grows from 1 × 1 to 3 × 3 to 5 × 5 and so on. The outer neighborhood contains all pixels contiguous to the inner neighborhood.

If the incoming pixel is within the gray-level threshold of the centered pixel value, it is assigned to the feature; otherwise it is assigned to the background. The process is continued until the ratio of the number of incoming pixels assigned to the feature to the number of incoming pixels assigned to the background falls below the percentage threshold. Thus, the gray-level threshold takes care of small variations of the feature in the region growing process, while the percentage threshold checks the outgrowing of the feature. At the point of violation of the percentage threshold, the region formed by all pixels labeled as feature is designated as the Center region. The Surround region is then computed using the distance criterion, which may be a unit distance in all directions (see Fig. 3). Thus the Surround region comprises all pixels contiguous to the Center region. The local contrast C(i, j) for the centered pixel is then computed as

C(i, j) = |P_c(i, j) − P_s(i, j)| / max{P_c(i, j), P_s(i, j)}    (1)

where P_c(i, j) and P_s(i, j) are the average gray-level values of the pixels of the Center and the Surround regions, respectively, centered on the pixel. The contrast enhancement function (CEF) is used to modify the contrast distribution in the contrast domain of the image. A contrast histogram is computed and is used in designing the best CEF for the specific image (see [7] for details). Piecewise exponential curves are used along with the knowledge derived from the contrast histogram analysis to design the CEF. After computing the new contrast value C′(i, j) (using the CEF), a new pixel value for the enhanced image e(i, j) is assigned to the pixel (i, j) from the following equations:

e(i, j) = P_s(i, j) / (1 − C′(i, j))    if P_c(i, j) > P_s(i, j)
e(i, j) = P_s(i, j) (1 − C′(i, j))    if P_c(i, j) < P_s(i, j)    (2)

More details about the method can be found in Dhawan and Le Royer [7]. By applying constraints on the size of the desirable features, we can easily recognize 'salt and pepper' noise and provide almost noise-free feature enhancement. The feature adaptive neighborhood process also provides the contrast data for defining the link strategy used in the pyramid-based low-level preliminary segmentation (explained in the next section).
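To make Eqns. 1 and 2 concrete, the following sketch computes the local contrast and the enhanced value for a single pixel. It uses the fixed-shape constant-ratio neighborhood (c × c inner, 3c × 3c outer) rather than the grown feature adaptive regions, and a simple power-law stand-in for the histogram-designed CEF; both simplifications, and all parameter values, are assumptions for illustration only.

import numpy as np

def enhance_pixel(image, i, j, c=3, gamma=0.8):
    # Constant-ratio neighborhood: c x c inner, 3c x 3c outer (Sec. 2.1).
    h = c // 2
    inner = image[i - h:i + h + 1, j - h:j + h + 1]
    outer = image[i - 3*h - 1:i + 3*h + 2, j - 3*h - 1:j + 3*h + 2]
    p_c = inner.mean()                                  # Center average
    p_s = (outer.sum() - inner.sum()) / (outer.size - inner.size)  # Surround
    contrast = abs(p_c - p_s) / max(p_c, p_s, 1e-6)     # Eqn. (1)
    new_contrast = contrast ** gamma                    # stand-in CEF (boosts low contrast)
    if p_c > p_s:                                       # Eqn. (2)
        return p_s / max(1.0 - new_contrast, 1e-6)
    return p_s * (1.0 - new_contrast)

img = np.random.default_rng(1).integers(0, 256, (32, 32)).astype(float)
print(enhance_pixel(img, 16, 16))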

Fig. 3. Adaptive neighborhood region growing process using the variable-shaped feature adaptive method. The dark hatched region has been grown by the constant difference method using the similarity and percentage thresholds. Once the 'Center' region representing the feature is found, the 'Surround' region representing the background contains all pixels contiguous to the 'Center' region.

2.2. Low-level segmentation


After the preprocessing, the image is segmented. Segmentation is one of the most important steps of a computer vision system. In this step, neighboring points of an image are grouped into regions using a similarity criterion on some characteristic feature such as intensity. In order to perform a successful knowledge-based analysis, we need a flexible region extraction algorithm which can easily be tuned to extract a reasonable number of regions. These regions are then passed on to the intermediate-level processing of feature extraction and description forming.

This description is to be used in the high-level analysis to instantiate a model (or to create a hypothesis) from the knowledge-base. When a model is instantiated using such data-driven bottom-up analysis, the detailed region extraction and its associated analysis can then be performed in the model-driven mode. Thus, the preliminary segmentation algorithm must be very flexible and efficient for region extraction. We perform preliminary segmentation by using a multi-resolution pyramid-based processing with the modified contrast function as described above. In the multi-resolution pyramid, each level is a lower resolution version of its predecessor. Each level is formed by summarizing a 4 × 4 neighborhood in the preceding level. The neighborhoods are overlapped 50% vertically and horizontally, so that each pixel has four parents at the next level and 16 children at the previous level. The average of the 16 children is used as the summarizing value. The entire pyramid is constructed up to the level at which there are only four pixels. The basic idea in the approach is to define link strategies between the neighboring pixels on adjacent levels of a pyramid, based on the similarity and proximity of each parent/child pair. Hong and Rosenfeld [8] suggested the use of weighted linking in a pyramid to extract the regions. In this scheme, the pixel values (at the levels above the base) are computed as the weighted average of their children's values, where the weights depend on the link strengths. These new values define new link strengths, and the process is iterated. After a few iterations, when the link strengths stabilize, the strong links, represented by a subtree, define a compact homogeneous region. The leaves of this tree are the pixels belonging to the region, and the height corresponds to the region size. In the method proposed in [8], only vertical (between-level) link strengths are used. Also, the vertical link strength is defined by fitting a Gaussian distribution function based on the weighted geometric distance between the parent and the child. We have recently proposed a new method for defining link strengths incorporating the local properties of the image vertically as well as horizontally (i.e., inter-level and intra-level) [9].

We define a local contrast function for each pixel at a given level by the feature adaptive neighborhood processing method as given by Eqn. 1. The contrast function, as defined by Eqn. 1, serves as the basis of the intra-level link property. For example, if a region in the image is uniform, the contrast values in that region will be quite low. The contrast value will be much higher in the presence of an edge or at the boundaries of a sharp feature. Textured regions will have contrast values accordingly. The contrast function is also utilized in computing the vertical link strengths w(s, f) as follows:

w(s, f) = [1 / (√(2π) σ)] (1 + C_w + D) exp{−(s − f)² / 2σ²}    (3)

where D is a weighting inversely proportional to the geometric separation between the son (with pixel value s) and the parent (denoted as father, with pixel value f); σ is the standard deviation of the 16 sons of the father pixel; and C_w is the weighted contrast function defined as

C_w = g / (C + b)    (4)

where C is the contrast function, b is the bias, and g is the gain function. Once the link strengths have stabilized for all levels, the segmentation tree is built by linking a parent pixel to the child pixel having the greatest link strength satisfying a threshold derived from the contrast histogram. The setting of the acceptable link strength threshold is thus made adaptive to the contents and the quality of the image (for details, see [9]).

The preliminary segmentation, as described above, is quite effective in region extraction. The threshold on the link strengths decides the number of extracted regions. Once the pyramid is stabilized, different region segmentation maps can be created by just relaxing or increasing the threshold. The CPU time consumed by relaxing the threshold and extracting a new set of regions is very small (a few seconds for a 128 × 128 pixel image on a Microvax-II microcomputer). A suitable threshold can be selected in the data-driven bottom-up analysis to restrict the number of regions extracted, and it can then be relaxed in the model-driven mode to extract more regions from the already stabilized pyramid.
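As a rough illustration of the pyramid construction and of Eqn. 3, the sketch below builds overlapped 4 × 4 averaging levels down to a four-pixel top and evaluates the link strength for one son/father pair. The contrast term C_w and the distance weighting D are passed in as plain numbers; how they are derived (Eqn. 4 and the geometric weighting) is not reproduced here, and all numeric inputs are assumed example values.

import numpy as np

def build_level(prev):
    # One pyramid level: each parent summarizes a 4 x 4 child block with
    # 50% overlap, so the level halves in size. Blocks at the far edge are
    # clipped by the slice; a padded implementation would avoid this.
    n = prev.shape[0] // 2
    level = np.empty((n, n))
    for y in range(n):
        for x in range(n):
            level[y, x] = prev[2*y:2*y + 4, 2*x:2*x + 4].mean()  # 16 children
    return level

def link_strength(son, father, sigma, c_w, d):
    # Eqn. (3): Gaussian similarity scaled by contrast and distance terms.
    return (1.0 / (np.sqrt(2 * np.pi) * sigma)) * (1 + c_w + d) * \
           np.exp(-(son - father) ** 2 / (2 * sigma ** 2))

base = np.random.default_rng(2).random((16, 16))
pyramid = [base]
while pyramid[-1].shape[0] > 2:      # stop at the 2 x 2 (four-pixel) level
    pyramid.append(build_level(pyramid[-1]))
print([lvl.shape for lvl in pyramid])
print(link_strength(son=0.4, father=0.5, sigma=0.2, c_w=0.1, d=0.3))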

It should be noted that the pyramid is built mainly on link strengths based on pixel intensity; it is not effective in incorporating any a priori spatial knowledge. We want to incorporate some a priori spatial knowledge into the segmentation process such that the segmented regions can be used efficiently in the model search. These regions should therefore relate directly to major anatomical regions. Also, since very small regions do not contribute significantly to the model search, we merge them into the adjacent large regions. For these reasons, the preliminary segmentation, as obtained by the pyramid-based region extraction algorithm, is further analyzed by a rule-based region analysis system. The rule-based analysis of the regions is performed with the knowledge of a mask which represents the major anatomical areas. The rules are focused on two major actions: first, if there are small insignificant regions, they should be merged into the adjacent large regions; and, second, the segmented region map should produce a grouping as close as possible to the spatial mask designed for use in the model search (Fig. 4 shows the mask). A global set of features is required in the rule-based analysis. It includes area, centroid, adjacency, mean gray-level value in the region, variance of the gray levels in the region, variance in the mean gray-level values of regions in a given area, and the gray-level profile along the boundary of the area. The set of global features is given in Table 1. The features are computed and changed into symbolic form such as very high, high, average, low, and very low, as required.

Fig. 4. A mask used in model instantiation.

TABLE 1
The list of features and symbols used in the low-level rule-based segmentation analysis. Note that for some features both the symbols and the actual values are used.

Feature                                                          Symbol class used
Region area                                                      1, 4
Region average gray-level value                                  2, 4
Region variance of gray-level value                              2, 4
Region adjacency: neighbor                                       3
Region adjacency: common edge                                    4
Region circumference                                             4
Region adjacency (common edge length divided by circumference)   2
Edge length                                                      1, 4
Edge-gradient average magnitude                                  2, 4
Edge-gradient direction                                          4
Edge-region interaction                                          5
Difference: feature 1 − feature 2                                2, 4

Symbol class 1: Very Large, Large, Average, Small, Very Small
Symbol class 2: Very High, High, Average, Low, Very Low
Symbol class 3: Touching, Not Touching (binary)
Symbol class 4: Actual value is used
Symbol class 5: Bisecting, Not Bisecting (binary)

The fuzziness of these symbolic features is based on the distribution of the quantitative data. For example, for the area feature, the mapping to symbolic form is performed as follows. First, the minimum, maximum, average, and variance values of the area feature of the segmented regions are computed. Then, the low and high ranges of the symbolic form of the area feature are computed as the ranges within one standard deviation of the average value on the lower and higher sides of the feature scale, respectively. The preliminary segmentation is then analyzed and processed by a rule-based system to obtain the candidate regions which can be merged in order to obtain a meaningful or sub-optimal segmentation (Fig. 5a). The edge-based preliminary segmentation creates the edge map of the image data by computing the Sobel gradient operators [4,10] in the horizontal and vertical directions to provide the gradient magnitude and directional information. Fig. 5b shows the schematic block diagram of the low-level rule-based analysis system. The design of the rule-based segmentation analysis system is based on the Nazif and Levine approach [11] but is much more effective and far less complex, simply because (1) it uses the preliminary segmented data instead of raw image data, and (2) the rules, set in the context of a priori spatial knowledge of the chest cavity, are designed to be very specific and are few in number. This provides an efficient way of analyzing the preliminary segmented regions using a small number of knowledge rules in order to obtain the meaningful segmentation.
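The one-standard-deviation mapping described above can be sketched as follows; the cut-offs for the VERY categories (two standard deviations) are an assumption, since the text specifies only the low and high ranges.

import numpy as np

def to_symbol(value, values):
    # Map a quantitative feature to a five-level symbol using the
    # mean +/- one standard deviation rule; +/- two standard deviations
    # for the VERY levels is an assumed extension.
    mean, std = np.mean(values), np.std(values)
    if value < mean - 2 * std:
        return "VERY LOW"
    if value < mean - std:
        return "LOW"
    if value <= mean + std:
        return "AVERAGE"
    if value <= mean + 2 * std:
        return "HIGH"
    return "VERY HIGH"

areas = [899, 554, 17, 44, 25, 10, 1369, 18, 94, 14]   # region areas (cf. Table 2)
print([to_symbol(a, areas) for a in areas])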

Fig. 5. (a) The schematic block diagram of the low-level analysis system: the input image undergoes region-based and edge-based preliminary segmentation and low-level feature extraction, feeding the rule-based segmentation. (b) The schematic block diagram of the rule-based segmentation analysis system: the input (preliminary segmented) image database is processed through the focus of attention and activity center under strategy rules, knowledge rules, a priori masking knowledge, and top-down feedback (if available), producing the output (suboptimally segmented) image database.

The mask, as a representation of spatial a priori knowledge about the chest cavity, is used for defining the strategy rules. The mask, as shown in Fig. 4, has four distinct areas within the chest-wall boundary. These four areas are related to the SPINAL_CORD (R4), LEFT_LUNG (R3), HEART (R2), and RIGHT_LUNG (R1). The remaining area under the chest wall is called here the LEFT_OUT (R5) area. The strategy rules decide how the input image database is to be scanned and analyzed in the context of a priori knowledge. We implemented the strategy rules to analyze the image in the following order of spatial encoding: first the spinal cord, then the left lung and right lung areas, followed by the heart area, with the chest-wall boundary LEFT_OUT area last. The reason for this particular sequence is basically the consistency of these areas; the spinal cord region is supposed to be the most prominent and consistent. The focus of attention rules decide where in the specific area the analysis should be performed. We have implemented them so as to find the region with the largest area that has not been analyzed yet and then start using the knowledge rules focused on it. After creating the focus of attention, the required data is sent to the activity center, where the knowledge rules are executed. The activity center holds the current active area or region under analysis. The activity center also holds the labeled status information about the regions: ACTIVE if the region is currently being analyzed, and ANALYZED if it has been analyzed before. In the beginning of the analysis, all regions have NIL status in their status fields. Whenever a region is activated by the FOCUS_OF_ATTENTION rules, the status label is changed to ACTIVE and it is analyzed by the KNOWLEDGE_MERGE or KNOWLEDGE_SPLIT rules. The action part of the KNOWLEDGE rules results in the change of the status to ANALYZED.

The actions are stored in the output database, which at the end stores the final suboptimal meaningful segmentation. In case of a top-down feedback, the strategy rules direct the focus of attention and activate appropriate knowledge rules for execution. Based on the above discussion, the strategy rules are defined as follows:

Strategy rule SR1
IF   NONE REGION is ACTIVE
     NONE REGION is ANALYZED
THEN ACTIVATE FOCUS in SPINAL_CORD area

Strategy rule SR2
IF   ANALYZED REGION is in SPINAL_CORD area
     ALL REGIONS in SPINAL_CORD area are NOT ANALYZED
THEN ACTIVATE FOCUS in SPINAL_CORD area

Strategy rule SR3
IF   ALL REGIONS in SPINAL_CORD area are ANALYZED
     ALL REGIONS in LEFT_LUNG area are NOT ANALYZED
THEN ACTIVATE FOCUS in LEFT_LUNG area

Strategy rule SR4
IF   ALL REGIONS in SPINAL_CORD area are ANALYZED
     ALL REGIONS in LEFT_LUNG area are ANALYZED
     ALL REGIONS in RIGHT_LUNG area are NOT ANALYZED
THEN ACTIVATE FOCUS in RIGHT_LUNG area

Strategy rule SR5
IF   ALL REGIONS in SPINAL_CORD area are ANALYZED
     ALL REGIONS in LEFT_LUNG area are ANALYZED
     ALL REGIONS in RIGHT_LUNG area are ANALYZED
     ALL REGIONS in HEART area are NOT ANALYZED
THEN ACTIVATE FOCUS in HEART area

Strategy rule SR6
IF   ALL REGIONS in SPINAL_CORD area are ANALYZED
     ALL REGIONS in LEFT_LUNG area are ANALYZED
     ALL REGIONS in RIGHT_LUNG area are ANALYZED
     ALL REGIONS in HEART area are ANALYZED
THEN ACTIVATE FOCUS in LEFT_OUT area

Strategy rule SR7
IF   ALL REGIONS are ANALYZED
THEN STOP

The FOCUS_OF_ATTENTION rules imply simply the search for the largest region which has not been analyzed yet and is within the current FOCUS area. The region is then activated for analysis using the merging knowledge rules if it is not the feedback analysis. If there is a top-down feedback, the FEEDBACK_WINDOW is activated and all the regions in the feedback window are given NIL status. The FOCUS_OF_ATTENTION area is now the feedback window area. The FOCUS_OF_ATTENTION rules find the largest region with NIL status and activate the splitting knowledge rules. The feedback analysis is over when all the regions have the ANALYZED status. The FOCUS_OF_ATTENTION rules are as follows:

Focus rule FR1
IF   NONE REGION is ACTIVE in FOCUS area
     FEEDBACK_WINDOW is NOT ACTIVE
THEN ACTIVATE KNOWLEDGE_MERGE rules
(The REGION-X is then activated in the FOCUS area by focus rule FR2.)

Focus rule FR2
IF   REGION-X is in FOCUS area
     REGION-X is NOT ANALYZED
     REGION-X is LARGEST
THEN ACTIVATE REGION-X

Focus rule FR3
IF   FEEDBACK_WINDOW is ACTIVE
     REGIONS in FEEDBACK_WINDOW are ANALYZED
THEN put NIL in (status of) REGIONS in FEEDBACK_WINDOW

Focus rule FR4
IF   REGION-X is ACTIVE
     FEEDBACK_WINDOW is ACTIVE
THEN ACTIVATE KNOWLEDGE_SPLIT rules

Focus rule FR5
IF   ALL REGIONS are ANALYZED
     FEEDBACK_WINDOW is ACTIVE
THEN FEEDBACK_WINDOW is ANALYZED

The knowledge rules for merging insignificant regions (for example, a region having a very small area compared to its enclosing region of large area) are stated as follows:

Knowledge merge-regions rule KMR1
IF   REGION-1 is SMALL
     REGION-1 has HIGH ADJACENCY with REGION-2
     DIFFERENCE between AVERAGE VALUES of REGION-1 and REGION-2 is LOW or VERY LOW
     REGION-2 is LARGE or VERY LARGE
THEN MERGE REGION-1 in REGION-2
     put status ANALYZED in REGION-1 and REGION-2

Knowledge merge-regions rule KMR2
IF   REGION-1 is VERY SMALL
     REGION-1 TOUCHES other REGIONS
THEN MERGE REGION-1 in the REGION-X with LOWEST DIFFERENCE in AVERAGE GRAY-LEVEL VALUES
     put status ANALYZED in REGION-1 and REGION-X

Another rule, for merging two regions which are quite consistent, have similar average values, and do not have a strong edge as their common edge, is as follows:

Knowledge merge-regions rule KMR3
IF   REGION-1 is NOT (SMALL or VERY SMALL)
     REGION-1 is TOUCHING REGION-2
     VARIANCES of REGION-1 and REGION-2 are LOW or VERY LOW
     DIFFERENCE in AVERAGE GRAY-LEVEL VALUE of REGION-1 and REGION-2 is LOW or VERY LOW
     GRADIENT along COMMON EDGE of REGION-1 and REGION-2 is LOW or VERY LOW
THEN MERGE REGION-1 in REGION-2
     put status ANALYZED in REGION-1 and REGION-2

After the regions obtained by the preliminary segmentation are analyzed by the rule-based analysis system for merging, as described above, the regions are grouped together according to the a priori knowledge of the chest cavity, and the spinal cord, left lung, right lung, and heart areas are realized for the specific image being analyzed. Thus, a masked image is created which shows this grouping as realized for the specific image. This masked image is now used by the high-level analysis system to instantiate the model slice in the knowledge-base. Once the model is instantiated, a detailed matching (as described in the next sections) is performed. In situations of partial match, a feedback window is activated by the top-down feedback mechanism of the system, which activates the low-level rule-based system to re-analyze the regions of the feedback window. As described earlier, only region-splitting rules are activated this time, and the system goes back to the edge map of the raw data to find the possibilities of splitting the regions. At present, we have only one region-splitting rule, which is stated as follows:

Knowledge split-regions rule KSR1
IF   REGION-1 is LARGE
     REGION-1 is BISECTED by EDGE
     EDGE-LENGTH is LARGE or VERY LARGE
     AVERAGE-GRADIENT along EDGE is HIGH
THEN SPLIT REGION-1 at EDGE
     put status ANALYZED in REGION-1
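As an illustration of how such a merging rule can be executed procedurally, the sketch below applies a KMR1-like test over a region adjacency list. The numeric thresholds stand in for the fuzzy symbol classes of Table 1 and are assumed values, not the system's.

def apply_kmr1(regions, adjacency):
    # KMR1 sketch: merge a SMALL region into a LARGE neighbor when their
    # average gray values are close. Symbolic tests (SMALL, LARGE, LOW)
    # are replaced by assumed numeric thresholds.
    merges = []
    for r1, info1 in regions.items():
        for r2 in adjacency.get(r1, []):
            info2 = regions[r2]
            if (info1["area"] < 30                          # REGION-1 is SMALL
                    and info2["area"] > 500                 # REGION-2 is LARGE
                    and abs(info1["mean"] - info2["mean"]) < 10):  # LOW difference
                merges.append((r1, r2))                     # MERGE REGION-1 in REGION-2
                break
    return merges

regions = {1: {"area": 17, "mean": 114}, 2: {"area": 899, "mean": 110},
           3: {"area": 554, "mean": 25}}
adjacency = {1: [2], 2: [1, 3], 3: [2]}
print(apply_kmr1(regions, adjacency))   # [(1, 2)]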

2.3. Intermediate-level processing

The purpose of the intermediate-level processing is to provide adequate information to the high-level interpretation system about the final segmented regions and areas. The following features are extracted at the intermediate-level feature extraction: (1) area of the region; (2) average gray-level value of the region; (3) centroid of the region; (4) horizontal and vertical bounds of the region; (5) orientation of the region; (6) elongatedness of the region; (7) adjacencies: neighbors of the region. The area of the region is computed as the number of connected pixels within the region, and the average gray-level value is the sum of the gray-level values of all pixels within the region divided by the area of the region. The centroid is calculated as the mean of the abscissae and ordinates of all the pixels belonging to the region. The horizontal and vertical bounds effectively determine the rectangle bounding the region. The orientation of the region is determined from the orientation of the principal axis of the region, computed from the moment analysis. The orientation of the principal axis is given by the following equation:

tan 2φ = 2μ_11 / (μ_20 − μ_02)

where φ is the angle at which the principal axis is inclined to the original axes, and μ_pq is the central two-dimensional moment, defined as:

μ_pq = Σ (x − x̄)^p (y − ȳ)^q    (5)

where x̄ = m_10/m_00 and ȳ = m_01/m_00, with m_pq = Σ x^p y^q. The m_pq is called the two-dimensional moment of order (p + q). The elongatedness is defined as the ratio of the major to the minor axis of the minimum-area bounding ellipse (MBE) of the region, which is the minimum-area ellipse required to enclose the region completely. Adjacencies of the region are all connecting neighboring regions. The features are transferred to the high-level system, where they may be normalized and/or transformed to the appropriate symbolic form as required by the matching process and high-level interpretation system. Some quantitative features, such as the area of any part of the organ, are normalized by dividing by the area under the chest-wall boundary. For normalizing other features such as elongatedness, the elliptical boundedness of the chest-wall region is used: when the chest-wall region is established and its MBE is computed, it provides the major and minor axes, and the elongatedness of other regions is normalized using these two axis measurements.
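A compact sketch of the moment computations follows: the centroid, the principal-axis orientation from the central moments of Eqn. 5, and an elongation measure. Note that the eigenvalue-ratio elongation used here is a common approximation; the paper's elongatedness is defined via the minimum-area bounding ellipse, which is not reproduced.

import numpy as np

def region_orientation_elongation(mask):
    ys, xs = np.nonzero(mask)
    x_bar, y_bar = xs.mean(), ys.mean()                  # centroid
    mu11 = np.mean((xs - x_bar) * (ys - y_bar))          # central moments (Eqn. 5)
    mu20 = np.mean((xs - x_bar) ** 2)
    mu02 = np.mean((ys - y_bar) ** 2)
    phi = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)        # tan 2phi = 2mu11/(mu20 - mu02)
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    lam = np.sort(np.linalg.eigvalsh(cov))               # minor, major axis variances
    elongation = np.sqrt(lam[1] / max(lam[0], 1e-9))     # approx. major/minor ratio
    return (x_bar, y_bar), phi, elongation

mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 2:18] = True          # a horizontal 4 x 16 bar
print(region_orientation_elongation(mask))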

2.4. High-level knowledge-base

The high-level system accepts input from the intermediate stage and performs the matching of image data to the model stored in the knowledge-base to label and identify each object in the image. The knowledge-base contains several models corresponding to the anatomy in the form of cross-sectional slices of a normal human chest. Each model is stored in two resolutions: a coarse-resolution masking model providing information about the four major areas (as shown in the mask in Fig. 4) enclosed in the chest-wall area, and a fine-resolution model providing information about each object visible in the cross-sectional slice. Each model contains a structural, relational, and spatial description of its contents. The structural description contains the normalized size, geometrical shape, orientation, etc., while the spatial description contains the information about the location of each object in the area enclosed by the chest wall. The relational description covers the adjacencies of objects and their relative location with respect to some established object (for example, the spinal cord). Each model is stored under a separate rule class consisting of a number of rules. These rules contain the structural, relational, and spatial knowledge embedded in their premises, while the objects visible in the section for which the model is created are represented in the frame structure. Each object can create a frame, and each feature of the object is stored as a slot of this frame but can also be a frame in itself. Such feature frames have been created from the measured feature values of a number of prelabeled images used as a training set.
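A minimal sketch of such an object frame is shown below, with feature slots holding training-set statistics (average, standard deviation). The slot names and numeric values are illustrative assumptions, not the system's actual model contents.

from dataclasses import dataclass, field

@dataclass
class ObjectFrame:
    # One anatomical object in a model slice: structural knowledge in the
    # feature slots, relational knowledge in the neighbor list.
    name: str
    slots: dict = field(default_factory=dict)       # feature -> (average, sd)
    neighbors: list = field(default_factory=list)   # adjacent objects

left_ventricle = ObjectFrame(
    name="LEFT-VENTRICLE",
    slots={"area": (0.08, 0.02),        # normalized by chest-wall area
           "intensity": (120.0, 15.0),
           "elongation": (0.3, 0.1)},
    neighbors=["SPINAL-CORD", "AORTA"])
print(left_ventricle.name, left_ventricle.slots["area"])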

2.5. High-level model instantiation

For the interpretation of the image data, first a candidate model is selected by matching the coarse-resolution (masked) image with the coarse-resolution model. The high-level processing module accepts masked image data from the intermediate stage and creates object frames for each of the four major regions (according to the mask). Objects in the masked image are analyzed by matching against all coarse-resolution models of different cross-sectional slices of the chest stored in the knowledge-base, in order to instantiate the most likely model for the fine-resolution matching. For this matching, the normalized area, elongatedness, orientation, and centroid locations of the four major regions are considered. Based on the measurements of these regions, the model slice that provides the closest match is selected. Once the model slice is selected, the original fine-resolution image data is processed and analyzed for matching with the fine-resolution model. Now, the object frames are created for each region (according to the segmentation obtained after the rule-based analysis at the low-level analysis stage) and used for further analysis based on feature values obtained from the intermediate-level analysis.
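The coarse-resolution instantiation step can be sketched as a nearest-model search over the masked regions. The sum-of-squared-differences distance and all feature values below are assumptions for illustration; the paper states only that the closest match over the normalized features is selected.

def instantiate_model(masked_features, models):
    # Pick the model slice whose coarse regions best match the masked image.
    def distance(model):
        return sum((masked_features[r][k] - model[r][k]) ** 2
                   for r in masked_features for k in masked_features[r])
    return min(models, key=lambda name: distance(models[name]))

# Hypothetical coarse features (normalized area, elongation) per region.
masked = {"R1": {"area": 0.30, "elong": 0.29}, "R4": {"area": 0.02, "elong": 0.16}}
models = {"slice_33": {"R1": {"area": 0.28, "elong": 0.30},
                       "R4": {"area": 0.02, "elong": 0.15}},
          "slice_34": {"R1": {"area": 0.45, "elong": 0.10},
                       "R4": {"area": 0.05, "elong": 0.40}}}
print(instantiate_model(masked, models))   # slice_33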

2.6. High-level matching

The initial step in the high-level matching is to create objects in the form of frames for each segmented region. The intermediate-level feature extractor now computes the feature vector. Corresponding to each feature in the feature vector, a slot is created in the object frame and the feature value is assigned to the corresponding slot. Thus, the slots now contain the structural and relational knowledge. The inference mechanism is applied as a forward chaining strategy to perform matching of features based on the spatial, relational, and structural properties of each object of the instantiated model stored in the knowledge-base. In this process, rules associated with each object identification are checked and executed. Each rule representing the spatial, relational, and structural knowledge of the object is written as a multiple condition-action pair. The condition part of the rule may have several premises incorporating all the knowledge of the object. It is important to note that these premises do not contribute equally to the process of matching and recognition of objects. In other words, the knowledge in each of the three categories (spatial, relational, and structural) is not considered to be equally important for recognition. For example, if the spinal cord has been identified as a reference object, the recognition of the left ventricle should be influenced more by the relational and spatial features with respect to the spinal cord than by its own structural features. Thus, each premise in the rule is assigned a certain contribution factor according to the importance of, and our faith in, the feature used (in the premise) for the recognition of the specific object. When a rule is checked and executed, a match-confidence value is computed. This match-confidence value is based on the fuzzy truth value of the feature (used in the premise) and the contribution factor of the premise. If there are N premises in a rule with corresponding contribution factors CF_i, i = 1, ..., N, the match-confidence value MCV is computed as:

MCV = Σ_{i=1}^{N} (CF_i × TV_i)    (6)

where TV_i, i = 1, ..., N, are the fuzzy truth values (which lie between 0.0 and 1.0) of the features used in the corresponding premises; and

Σ_{i=1}^{N} CF_i = 1    (7)

The fuzzy truth value is computed for each feature used in the high-level matching using the analysis on a training set of CT chest images with the pre-identified objects. The average value and variance of each feature are computed to model the fuzzy truth value of the feature, as shown in Fig. 6. For example, Fig. 7a shows a rule for identifying the left ventricle for the cross-sectional slice model No. 33 (the listing is reconstructed below); Fig. 7b shows the values assigned to the contribution factor parameter CF_i used in the left-ventricle rule.

Fig. 6. A model used in obtaining the fuzzy truth value of a feature: the truth value rises from 0.0 at the minimum feature value to 1.0 at the average, and falls back to 0.0 at the maximum.

LEFT-VENTRICLE-RULE-MOD33:
(IF (AND
      (LOCATION_X OF ?OBJECT IS (LOC_X_LV_M33))   (CF1)
      (LOCATION_Y OF ?OBJECT IS (LOC_Y_LV_M33))   (CF2)
      (AREA OF ?OBJECT IS (AREA_LV_M33))          (CF3)
      (INTENSITY OF ?OBJECT IS (INT_LV_M33))      (CF4)
      (NEIGHBOR OF ?OBJECT IS ?OBJECT-Z1)
      (REGION ?OBJECT-Z1 IS IN SPINAL-CORD)       (CF5)
      (NEIGHBOR OF ?OBJECT IS ?OBJECT-Z2)
      (REGION ?OBJECT-Z2 IS IN AR_AOR_M33)        (CF6))
 THEN (?OBJECT IS IN LEFT-VENTRICLE)
      (EVALUATE_MATCH_CONFIDENCE))

The symbolic matching of the image data to the model, as discussed above, starts initially in a bottom-up or data-driven mode. A match is said to be found, and the corresponding object is labeled accordingly, if the match-confidence value exceeds a preset threshold. Since the imaging procedure may corrupt the data with noise, and low-level processing may fail to provide a unique optimal segmentation, a perfect match between the input image and the model may not occur in the first matching process. To overcome this problem, the high-level system provides a top-down feedback (model-driven) with the selected window area where the match could not be found because of missing or corrupted information. The window area is carried over to the low-level processing. The rules are verified and executed again for the selected area only, in the knowledge of a model-driven goal (the expected action). For example, if some region in that area was merged because of weak features or the mask-directed split-merge processing, it can be restored again if that region is required to increase the belief in matching the data to the current model. If this does not happen and the required information is not found in the data, the top-down feedback comes back with a negative score, which decreases the belief in the current model. Thus all ambiguous areas are inspected at least once through the top-down feedback before the current model or hypothesis is rejected (see Fig. 1).
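Eqns. 6 and 7 translate directly into code. In the sketch below, the contribution factors follow Fig. 7b; the fuzzy truth values and the acceptance threshold are made-up example numbers.

def match_confidence(truth_values, contribution_factors):
    # Eqn. (6): MCV = sum_i CF_i * TV_i, subject to sum_i CF_i = 1 (Eqn. 7).
    assert abs(sum(contribution_factors) - 1.0) < 1e-9   # Eqn. (7)
    return sum(cf * tv for cf, tv in zip(contribution_factors, truth_values))

cf = [0.2, 0.2, 0.1, 0.1, 0.2, 0.2]      # CF1..CF6 from Fig. 7b
tv = [0.9, 0.8, 0.6, 0.7, 0.95, 0.5]     # fuzzy truth values in [0.0, 1.0]
mcv = match_confidence(tv, cf)
print(mcv, mcv > 0.7)                    # 0.7 is an assumed preset threshold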

Fig. 7. (a) A rule used for the recognition of the left ventricle in the knowledge-base model No. 33 (the listing reconstructed above). (b) The values of the contribution factors CF_i assigned for the rule shown in (a): relational knowledge (0.4): CF5 (0.2) and CF6 (0.2); spatial knowledge (0.4): CF1 (0.2) and CF2 (0.2); structural knowledge (0.2): CF3 (0.1) and CF4 (0.1).

Fig. 8. A digitized original image of the CT chest scan of a normal human patient.

3. Results and discussions

Fig. 8 shows the original digitized image of the CT chest scan of a normal human subject. The image as obtained after preprocessing is shown in Fig. 9. The image was first segmented using the pyramid-based multi-resolution method, as described above, to obtain the preliminary segmentation (shown in Fig. 10). The preliminary segmentation was then analyzed by the rule-based analysis system to obtain the meaningful segmentation. The preliminary segmentation gave us 70 regions for the 64 × 64 pixel preprocessed image shown in Fig. 9. These 70 regions were reduced to 46 regions after the rule-based segmentation analysis if we allow regions as small as 5 pixels in area. If we force regions of less than 10 pixels in area to be merged into one of the adjacent regions (see rule KMR2), only 32 regions are obtained after the rule-based segmentation analysis. These segmented regions, labeled by the gray-level values representing the region number, are shown in Fig. 11. Table 2 shows these 32 regions with the set of features computed by the intermediate-level processing module. The segmented regions are now grouped to create a masked image as described in the previous section.

Fig. 9. The pre-processed image of the CT chest scan shown in Fig. 8.

TABLE 2
The segmented regions after the rule-based analysis with their intermediate-level features

Region  Area  Average     Centroid-x  Centroid-y  Elongation  Orientation
label         gray value
1        899   35         39          43          0.24         0.57
2        554   25         53          22          0.4          0.26
3         17  114         62          17          0.06         0
4         44  102         61          29          0.24         0.01
5         25  139         59          34          0.16         0.03
6         10  157         58          16          0.5         −0.46
7       1369   32         18          28          0.29         0.22
8         18   39         54          15          0.03         0
9         94  116         58          44          0.11         0.04
10        14   52         51          20          0.06         0.01
11        39   89         48          17          0.04         0
12        15  150         46          56          0.5         −0.32
13        34  121         43          31          0.56        −0.32
14        21   66         44          18          0.17        −0.01
15       102  178         36          37          0.32        −0.13
16        94  251         35          28          0.24         0
17        24   55         39           4          0.33         0.21
18        16   86         38          41          0.04         0.03
19        17   68         38           5          0.5          0.09
20       144  160         39          20          0.23         0
21        40  104         28          17          0.04        −0.01
22        31   95         27          36          0.46         0.57
23       157  120         32          14          0.05         0.01
24        46   52         30          10          0.05         0.01
25        18  115         28          28          0.11        −0.14
26        59  181         27          53          0.67         0.58
27        33   91         28           4          0.38         0.04
28        19  177         25          33          0.06         0.02
29       106  162         29          43          0.16        −0.17
30        15   91         18          30          0.08         0.02
31        12  126         14          56          0.56         0.59
32        10  228         10          52          0.38         0


Fig. 10. The preliminary segmentation of the image (shown in Fig. 9) as obtained after the multi-resolution pyramid-based processing.

Fig. 11. The segmented region map for the image (shown in Fig. 9) as obtained after the rule-based analysis.

The coarse-resolution (masked) image is shown in Fig. 12. Once again, the image is represented by gray-level values based on the region number. Table 3 shows the features extracted from the masked image. The high-level interpretation system was implemented on a Symbolics 3640 computer using the KEE 3 (an expert system building tool) environment.

Fig. 12. The masked coarse-resolution image after the grouping of segmented regions.

Fig. 13 shows an object frame created from an input region, with the features stored in the slots. Fig. 14a shows a part of the knowledge-base before matching, while Fig. 14b shows the knowledge-base after the matching; one can see that the aorta is identified as object 22, and so on. Fig. 15 shows the output of the high-level interpretation system. All major objects of the human chest anatomy stored in the knowledge-base were identified and interpreted successfully. Table 4 shows the computing time for each processing stage.

We used a number of CT images of the human chest for analysis by the system. The a priori masking knowledge used in the low-level rule-based analysis was obtained by averaging 58 CT chest-scan slices with four hand-drawn classified areas in the chest cavity. We analyzed about 20 images using the knowledge-base, which has 36 selected model slices in both coarse-resolution and fine-resolution modes.

TABLE 3
The features of the four major regions of the masked coarse-resolution image

Region  Area  Average     Centroid-x  Centroid-y  Elongation  Orientation
label         gray value
1        106  228         29          43          0.16        −0.17
2       1369   32         18          28          0.29         0.22
3       1547   36         45          36          0.34         0.2
4        556  169         34          36          0.27         0.25

TABLE 4
The computer processing time of each stage for the examples shown in Figs. 8-15. Processing stages 1-4 were implemented on a Microvax-II workstation and stage 5 was processed on a Symbolics-3640 with KEE-3 software.

Processing stage              Execution time
1. Preprocessing              25 s
2. Preliminary segmentation   3 min and 30 s
3. Feature extraction         20 s
4. Rule-based analysis        1 min and 30 s
5. High-level matching        3 min and 40 s

Some images were analyzed successfully, with recognition of each significant part of the heart, but in a number of cases the system could not identify some regions which were merged with other neighboring regions by the rule-based analysis.


Fig. 13. This figure shows an object frame created from an input region with the features stored in the slots.

This problem was handled by providing feedback from the instantiated model to the rule-based analysis in the form of a selected window. The rule-based analysis is re-evaluated in the window as described in the previous sections. Splitting rules are activated in the feedback analysis. If the regions were merged in the first bottom-up approach, they are restored (we keep a tabulated file of the region map before and after the rule-based analysis). The edge map information is used to execute the knowledge split-regions rule. The rest of the processing of the new window data is the same as discussed above for the first bottom-up approach. An example is shown in Figs. 16-20, in which the first bottom-up approach caused the merging of two major regions corresponding to the left-ventricle and right-ventricle areas of the heart. As a result, the high-level matching failed in recognizing the left ventricle and right ventricle of the heart. All other parts, such as the left atrium, were successfully interpreted by the high-level system in the first bottom-up analysis. Figs. 16-20 show the original digitized image, the preprocessed image, the preliminary segmented image, the segmented image after the rule-based analysis, and the result of the high-level interpretation after the first bottom-up analysis, respectively. With the help of the top-down feedback, a window for the ventricle regions was created. The rule-based analysis over the selected window was then re-evaluated in the context of the coarse version of the instantiated model. The two ventricle regions were restored in the rule-based re-evaluation analysis and then interpreted successfully by the high-level sub-system. We found from our experiments that the feedback window and re-evaluation were activated in about 42% of the cases in which objects were analyzed by the high-level analysis. The feedback analysis did not always provide the solution to increase the confidence in matching with the current model: in about 30% of all cases, the feedback re-evaluation did not improve the matching score, because of very low gray-level values in the raw image and a lack of good edge details in the feedback window. With the present state of the analysis, the system does not label an object if it has not been defined and matched by the knowledge-base.


Fig. 14. (left panel) A part of the knowledge-base before the high-level matching of any region. (right panel) The same part of the knowledge-base after the high-level matching; object 22 has been identified as the aorta.

Thus, in case of abnormalities, the system first provides a feedback to redo the low-level analysis, but cannot find the right label for the object. After the feedback re-evaluation analysis, the abnormal region is flagged as an unknown region, which is an indication of the presence of a possible lesion. We are currently in the process of developing the anatomical-lesion evaluation knowledge-base.

Fig. 15. The output of the high-level interpretation system for the image shown in Fig. 9. The output shows the successful interpretation of the regions as bright areas.

Fig. 16. Another digitized original image of the CT chest scan of a normal human patient.


Fig. 17. The preprocessed image of the CT chest scan shown in Fig. 16.

Fig. 19. The segmented region map for the image (shown in Fig. 17) as obtained after the rule-based analysis.

4. Conclusion

We have developed and implemented a knowledge-based medical image analysis and interpretation system for analyzing human CT chest scans. The remaining work in this direction includes the analysis of abnormal CT chest scans using the developed system. The knowledge-base of commonly found anatomical abnormalities is being developed to help this analysis and interpretation. The subject of the three-dimensional analysis of medical images is being dealt with by projecting the analysis of a 2D slice onto the next consecutive parallel slice. It is reasonable to assume that the slice thickness is small enough that any abnormal 3D lesion, such as a tumor, should be seen in a few slices and not in one slice only. First, we analyze each slice with the knowledge of normal anatomy and label all the regions which are found consistent and matched with the knowledge-base. The remaining regions after the feedback re-evaluation are not forced to be merged. They remain unlabeled, with a coordinated description of their centroid, area, elongatedness, and moment features. In the next consecutive slices, if other regions are found with almost similar features and unlabeled markers, the regions are placed in the file of candidate abnormal regions; otherwise they are deleted. It should be noted that there are a number of abnormalities that can be expected in this application.
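The slice-to-slice screening of unlabeled regions described above can be sketched as a feature-similarity check between consecutive slices; the feature set and the relative tolerance below are assumptions for illustration.

def track_unlabeled(prev_unlabeled, curr_unlabeled, tol=0.15):
    # Keep an unlabeled region as a candidate abnormal region only if a
    # region with almost similar features appears in the next slice.
    def similar(a, b):
        return all(abs(a[k] - b[k]) <= tol * max(abs(a[k]), abs(b[k]), 1e-9)
                   for k in ("area", "cx", "cy"))
    return [r for r in prev_unlabeled
            if any(similar(r, s) for s in curr_unlabeled)]

slice_n  = [{"area": 40, "cx": 30.0, "cy": 22.0}]
slice_n1 = [{"area": 43, "cx": 31.0, "cy": 21.0}]
print(track_unlabeled(slice_n, slice_n1))   # region persists -> candidate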

Fig. 18. The preliminary segmentation of the image (shown in Fig. 17) as obtained after the multi-resolution pyramid-based processing.

Fig. 20. The output of the high-level interpretation system for the image shown in Fig. 17. The output shows the successful interpretation of the regions as bright areas. Note that the leftand the right-ventricle regions were not interpreted because these regions were merged in the rule-based segmentation (Fig. 19).

The recognition of the abnormalities has not been addressed in this paper, because that requires the development of the knowledge-base for expected abnormalities and its evaluation using a large training set. We are currently involved with this work, and a complete report on it will be presented in the future. The issue of correlating the objects from multiple-modality images is under current investigation using the reconstructed surfaces from the labeled slices, and will be reported later.

Acknowledgements

This work was supported, in part, by grants from the Texas Advanced Technology and Research Program (TATRP) and the NASA Johnson Space Center, Houston. This work was started at the University of Houston and is continued at the Knowledge-Based Image Analysis Laboratory at the University of Cincinnati. We thank Nizar Mullani from the Texas Health Sciences Center at Houston for arranging the CT chest scans for this study.

References

[1] U. Shani, Understanding Three-Dimensional Images: Recognition of Abdominal Anatomy from CAT Scans (UMI Research Press, 1984).
[2] S.A. Stansfield, ANGY: a rule-based expert system for automatic segmentation of coronary vessels from digital subtracted angiograms, IEEE Trans. Pattern Anal. Mach. Intell., PAMI-8(2) (1986) 188-199.
[3] E. Sokolowska and J.A. Newell, Recognition of the anatomy using a symbolic structural model of a CT image of the brain, in: Proc. Second International Conference on Image Processing and Applications, 24-26 June, London, pp. 233-237 (1986).
[4] A. Rosenfeld and A.C. Kak, Digital Picture Processing, Vol. 2, 2nd edn. (Academic Press, Orlando, FL, 1982).
[5] R.M. Haralick and L.G. Shapiro, Survey: image segmentation techniques, Computer Vision, Graphics, and Image Processing, Vol. 29, pp. 100-132 (1985).
[6] A.P. Dhawan, G. Buelloni and R. Gordon, Enhancement of mammographic features by optimal neighborhood image processing, IEEE Trans. Med. Imaging, MI-5(1) (1986) 8-15; corrections in MI-5(2) (1986) 128.
[7] A.P. Dhawan and E. Le Royer, Mammographic feature enhancement by computerized image processing, Comput. Methods Programs Biomed. 27(1) (1988) 13-35.
[8] T.H. Hong and A. Rosenfeld, Compact region extraction using weighted pixel linking in a pyramid, IEEE Trans. Pattern Anal. Mach. Intell., PAMI-6(2) (1984) 222-229.
[9] A.P. Dhawan, H. Baxi and M.V. Ranganath, A hybrid low-level image analysis for computer vision systems, in: Proceedings SPIE, Vol. 937, Applications of Artificial Intelligence VI, pp. 2-9 (1988).
[10] R.C. Gonzalez and P. Wintz, Digital Image Processing, 2nd edn. (Addison-Wesley, New York, 1987).
[11] A.M. Nazif and M.D. Levine, Low-level image segmentation: an expert system, IEEE Trans. Pattern Anal. Mach. Intell., PAMI-6(5) (1984) 555-577.
[12] M. Nagao and T. Matsuyama, A Structural Analysis of Complex Aerial Photographs (Plenum Press, New York, 1980).