Computer Methods and Programs in Biomedicine 47 (1995) 51-71

CBA, an atlas-based software tool used to facilitate the interpretation of neuroimaging data

Lennart Thurfjell a,b,*, Christian Bohm c, Ewert Bengtsson a

a Centre for Image Analysis, Uppsala University, Lägerhyddvägen 17, S-752 37 Uppsala, Sweden
b Departments of Neuroradiology and Clinical Neurophysiology, Karolinska Institute/Hospital, Stockholm, Sweden
c Department of Physics, University of Stockholm, Stockholm, Sweden

Received 31 January 1994; revision received 14 March 1995; accepted 29 March 1995
Abstract

CBA, a software tool used to improve quantification and evaluation of neuroimaging data, has been developed. It uses a detailed 3-dimensional brain atlas that can be adapted to fit the brain of an individual patient represented by a series of displayed images. Anatomical information from the atlas can then be introduced into the images. If the patient has been imaged in different modalities, adaptation of the atlas to the different images will provide the transformation that brings the images into registration. CBA can thus be used as a tool for fusing multimodality information from the same patient. Furthermore, by applying the inverse atlas transformation, images from a patient can be transformed to conform to the anatomy of the atlas brain. This anatomical standardization, where the atlas brain itself serves as the anatomy standard, brings data from different individuals into a compatible form, providing possibilities to perform individual-by-group and group-by-group comparisons between patients and normal controls.

Keywords: Brain atlas; Multimodal registration; Intersubject standardization
*Corresponding author. Tel.: +46 18 183471; Fax: +46 18 553447; email: lennart@cb.uu.se.

0169-2607/95/$09.50 © 1995 Elsevier Science Ireland Ltd. All rights reserved.
SSDI 0169-2607(95)01629-8

1. Introduction

The development of tomographic imaging during the past two decades, allowing for information about the state of the human body to be gathered non-invasively, has revolutionized medical imaging. Tomographic imaging devices (scanners) typically acquire 2-dimensional (2D) data sets
(slices) consisting of point-by-point estimates of a certain physical parameter from corresponding ‘slices’ through the body. Most scanners produce several slices simultaneously. This stack of slices can be regarded as a 3-dimensional (3D) data set representing the examined volume of the body. Traditionally, 2D methods, applied slice by slice, have been used for visualization and analysis, but the use of 3D methods is rapidly growing. When used to examine the human brain, these
tomographic imaging techniques are often described by the common term neuroimaging. Many different techniques exist, based on different physical principles. These include Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT). Each of these imaging modalities provides different information. CT and standard MRI depict different aspects of anatomy but give little functional information. PET and SPECT, on the other hand, provide functional information but delineate anatomy poorly. Due to the complementary nature of these techniques, clinical diagnosis can often be improved by combining information obtained from several examinations of the same patient. Therefore, there is a need for methods that relate one image set to another in a meaningful way. This process is often referred to as image registration. When the images are obtained from the same individual, the term intrasubject registration is often used. The need for combining images from different modalities has been widely recognized and numerous methods for registration of multimodal images have been reported. A recent review of existing methods is given in [1]. Registration of images from different examinations of the same patient obtained with the same modality is also important, for example, when comparing patient data before and after treatment. In this case, registration is normally accomplished by the use of a reproducible head fixation system [2]. However, since a perfect repositioning is never possible, there is a need for a retrospective registration scheme also for intramodality images [3]. Merging of anatomical and functional information is important for the evaluation of functional data, which is often based on the analysis of regions of interest (ROIs). These are normally designed to contain some specific anatomy, making an anatomy-related approach desirable.
For example, anatomical information can be given by MRI images from the same patient brought into registration with the functional images [4,5]. An alternative approach is to provide anatomical information from an atlas adapted to the patient [6-9]. The main advantage of using MRI data is
that since the data originate from the same patient, an intrasubject registration is sufficient. Atlas-based methods, on the other hand, require a much more complex registration scheme. Because the atlas is based on a different individual (or on a mean image derived from several individuals), a registration method relating images from one individual to another, also known as an intersubject registration, is required. It is also in the registration process that we find the main disadvantage of atlas-based methods: a perfect adaptation of the atlas to the patient's anatomy is never possible [10]. This is particularly true in pathological cases with expanding lesions. However, atlas-based methods have several advantages over other methods for multimodal registration of images from the same patient. Using anatomical information from the atlas to specify ROIs leads to a more objective evaluation of the data than using ROIs that are subjectively drawn into the images by the user, and time is saved if the manual and tedious interactive specification of ROIs can be avoided. Moreover, MRI imaging is not always available and, even if it is, it is clear that a method for evaluating functional data that does not require additional examinations of the patient is important. There are also non-atlas-based methods for intersubject registration. Woods et al. have described a volume-based technique for co-registration of intra- and inter-modality images that has been extended for intersubject registration [11]. In atlas-based methods, a transformation that adapts the atlas to an individual is determined. Anatomical information from the atlas data base can then be introduced into the images. Another aspect of using an atlas is to, by applying the inverse atlas transformation, transform the individual data into a common anatomical space, i.e.
the anatomy of the atlas brain. Images from different individuals can, after such an anatomical standardization, be directly compared on a pixel-by-pixel basis. This is important in many applications, for example, in brain activation studies with PET, where intersubject averaging is used to merge data for accurate delineation of activated regions from surrounding regions. In the clinical situation, these techniques can be used to create a mean image
from a number of 'normal patients'. By subtracting the mean image of the normal group from that of an individual patient, significant deviations from the 'normal' can be observed. In the future, a data base library containing mean images of normal controls and of different pathological cases, as described by Greitz et al. [12], might become available, giving diagnostic help to the radiologist. Above, we have described three different areas which are becoming increasingly important in functional neuroimaging, namely (1) intrasubject registration used to fuse information obtained in different examinations of the same patient; (2) the use of a brain atlas to provide detailed anatomical information in functional images; and (3) the importance of anatomical standardization in research and in the clinical situation. The motivation when designing the Computerized Brain Atlas (CBA) described here was to provide these functions in an effective integrated environment. CBA uses the Greitz atlas data base as a source of anatomical information. This atlas contains 3D descriptions of anatomical structures derived from a set of digitized photos obtained from a single cryosectioned brain [8]. The data base at present includes descriptions of the brain surface, the ventricular system, the cortical gyri and sulci as well as the Brodmann cytoarchitectonic areas. The major basal ganglia including the thalamic nuclei, the brain stem nuclei, the lobuli of the vermis and the cerebellar hemispheres are also contained in the data base. When using the CBA approach, the atlas is, as a first step, transformed to fit the patient under study. Anatomical structures can then be selected and drawn on the displayed images. In this way CBA can provide the anatomical information necessary for correct interpretation of functional images. Furthermore, once the transformation adapting the atlas to the individual is specified, the image data can be transformed into a standardized anatomy.
This will bring images from the same individual obtained by different imaging modalities, or images from different individuals, into a compatible form. For example, CBA tools are available for processing sets of such images to create average and subtraction images, and for performing statistical comparisons.
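Per pixel, the averaging and subtraction operations described above reduce to a mean over the normal group and a difference for the patient, often scaled by the group standard deviation. A minimal sketch in C, assuming standardized images stored as float arrays of equal size (function and variable names are ours, not CBA's):

```c
#include <stddef.h>

/* Pixelwise mean over a group of anatomically standardized images. */
void group_mean(const float *const *imgs, int n_imgs, size_t n_pix, float *mean)
{
    for (size_t p = 0; p < n_pix; p++) {
        double s = 0.0;
        for (int i = 0; i < n_imgs; i++)
            s += imgs[i][p];
        mean[p] = (float)(s / n_imgs);
    }
}

/* Pixelwise deviation map: (patient - group mean) / group standard deviation. */
void z_map(const float *patient, const float *mean, const float *sd,
           size_t n_pix, float *z)
{
    for (size_t p = 0; p < n_pix; p++)
        z[p] = (sd[p] > 0.0f) ? (patient[p] - mean[p]) / sd[p] : 0.0f;
}
```

The scaling by the standard deviation is one common choice for the statistical comparison mentioned in the text; a plain subtraction image is obtained by setting sd to a constant 1.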
The system described in this paper is a completely new design which shares basic concepts with a previous command-driven Fortran program. Several application papers where this older system was used have been published. Different aspects of the development of the new system on an algorithmic level have also been published previously. In this paper, which is the first comprehensive description of this system, we will not repeat such implementation details, but will refer to the relevant publications. Instead, we will focus on the overall description of CBA and will describe some important, previously unpublished aspects of the system. In section 2 we present some considerations that influenced the design of the program. In section 3, we present a functional view of CBA. An application example is given in section 4, and in section 5 experience from the use of the system as well as future improvements are discussed.

2. Design considerations

2.1. Portability
Modern neuroimaging equipment runs on a wide variety of computer systems, often linked together in a local area network. It is not unusual to have a PET unit running on a VAX/VMS system together with an MRI system running under UNIX, or vice versa. Portability has therefore been one of the major design considerations. CBA runs on several platforms, including most Unix workstations such as DEC Ultrix, Alpha running OSF, SUN, HP and Silicon Graphics, as well as VAX stations and Alpha running VMS. All software is written in ANSI C and is based on the X11 windowing system. The user interface has been created using the OSF/Motif toolkit. A problem when designing software portable between different operating systems is the use of system calls. These should of course be kept to a minimum but cannot be avoided completely, for example, when listing directories. In CBA, standard UNIX system calls are used throughout the code. On VAX/VMS systems, the translation of these calls to the corresponding VMS system calls is hidden in a subroutine library which is added when linking the software.
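One way such differences can be hidden, consistent with the subroutine-library approach described above, is to route every directory access through a single wrapper function whose VMS counterpart is supplied at link time. The sketch below illustrates the idea; the function name and interface are our assumptions, not CBA's published API.

```c
#include <stdio.h>
#include <string.h>

/* The application always calls cba_list_dir(). On Unix it maps to
   opendir/readdir as below; on VMS, a replacement object file linked
   in instead provides the same symbol using VMS system services. */
#ifndef VMS
#include <dirent.h>

/* Fill names[] with entries of path; return the count, or -1 on error. */
int cba_list_dir(const char *path, char names[][256], int max_names)
{
    DIR *d = opendir(path);
    struct dirent *e;
    int n = 0;

    if (!d)
        return -1;
    while (n < max_names && (e = readdir(d)) != NULL) {
        strncpy(names[n], e->d_name, 255);
        names[n][255] = '\0';
        n++;
    }
    closedir(d);
    return n;
}
#endif
```

With this layout, the bulk of the code contains no conditional compilation at all; only the wrapper (or the linked library that replaces it) is system-specific.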
2.2. Separation of application code

Another design goal was to separate the application code from the user interface as well as from the low-level interface to the windowing system. The only path between the user interface and the application is through a main function in CBA. In this function the commands given are decoded and other functions are called. This resembles a traditional command-driven program, having a main loop first prompting the user to enter a command, then parsing the command and, finally, a switch statement where the appropriate actions are taken. In fact, it is possible to type in commands in a command widget. Most commands activated from the menu-driven user interface also have a corresponding interactive command. When a certain command is given from the menu-driven user interface, a command structure is filled in and is then passed on to the CBA main function. If the same command is given as an interactive command, the command string is first sent to the command interpreter. During the parsing phase, the command structure is filled in the same way as would be done by the menu-driven interface. Interactive and menu-driven commands are, in other words, handled in exactly the same way by the application. Although the user can type in interactive commands using the command widget, CBA is, in practice, never run this way. The major use of interactive commands is to support command files. Interactive commands are also useful during development; if a new function has been added to the application but not to the user interface, it can be called using the command widget. The separation between the application and the windowing system is performed in a similar way. A display interface is stored in one file and all calls to the low-level functions that finally call the windowing system are made through this interface.
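The shared path between the menu-driven interface and the command interpreter can be sketched as follows: both fill in the same command structure, and a single dispatch function switches on the command code. All names, fields and command verbs here are illustrative assumptions, not CBA's actual definitions.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical command codes and structure. Both the Motif callbacks
   and the interactive command parser fill in CBA_COMMAND the same way. */
typedef enum { CMD_NONE, CMD_LOAD_CASE, CMD_DISPLAY } CBA_CMD;

typedef struct {
    CBA_CMD code;
    char    arg[256];   /* e.g. a case file name or an image ID */
} CBA_COMMAND;

/* Parser used for interactive commands and command files. */
int parse_command(const char *line, CBA_COMMAND *cmd)
{
    char verb[32] = "";

    cmd->code = CMD_NONE;
    cmd->arg[0] = '\0';
    if (sscanf(line, "%31s %255s", verb, cmd->arg) < 1)
        return -1;
    if (strcmp(verb, "load_case") == 0) cmd->code = CMD_LOAD_CASE;
    else if (strcmp(verb, "display") == 0) cmd->code = CMD_DISPLAY;
    else return -1;
    return 0;
}

/* The single entry point: decode the structure and act, as in the
   main loop of a command-driven program. */
const char *cba_main(const CBA_COMMAND *cmd)
{
    switch (cmd->code) {
    case CMD_LOAD_CASE: return "loading case";
    case CMD_DISPLAY:   return "displaying image";
    default:            return "unknown command";
    }
}
```

Because the application sees only the filled-in structure, it cannot tell whether a command came from a menu, the command widget, or a command file.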
2.3. Image formats

Flexibility in reading and displaying images originating from various sources was also an important design goal, and CBA recognizes a number of different image formats. These include several of those used by various scanner manufacturers as well as the ACR-NEMA standard. There is also a way for the user to add new formats, by entering
information such as image size, number of bytes per pixel, pixel size, header size, etc. for the new format in a special file, named cbafiledefs, that is read during start-up. Another aspect of flexible image handling is that CBA should be able to read and display images created on different computer systems. There are two major difficulties: there are systems with different byte ordering (for example SUN as compared to DEC), and floating point representations are different on VAX/VMS (VMS float) compared to Unix systems (IEEE float). Byte ordering and floating point standards are defined for all file formats, and byte swaps and floating point conversions will take place whenever necessary.

3. How to use CBA

3.1. The user's interface

All user interactions are made through a graphical user interface. Most commands are located in pull-down menus, each of which contains a certain group of commands that appear when an associated button in the main menu is pressed. Some activations of commands in pull-down menus will directly cause an action in the application, but in many cases a dialog window will pop up. Here, optional or necessary additional selections are made before the final command is activated. In addition to the main menu there is a permanently displayed dialog window from where some frequently used commands are activated. These can thus be issued with a minimum of user interaction.

Windows in CBA. When starting CBA, two windows will be displayed, the Main Window and the Display Window. The Main Window is divided into three parts (Fig. 1). The main menu bar is located along the top of the window, above the display area, while a status area is placed at the bottom of the window. The display area is used for display of images and graphics. It is divided into a matrix of subwindows, each of which can display one image. When an image is displayed it will appear in the so-called Next Window.
The Next Window is outlined by a yellow border and is normally the window immediately following the last image displayed, but
Fig. 1. The figure shows the CBA main window containing the main menu, a display area and a status area. The brain atlas has been adapted to a series of MRI images. Shown structures include the brain surface, the ventricular system, the striatum and the thalamus.
any window in the display matrix can be selected as the Next Window by a mouse click. Information to the user is displayed in the status area. Such information includes guidance about actions to undertake as well as the current activity during certain computation-intensive tasks. The main window can be resized using the mouse. The Display Window includes the user interface for displaying images and for color scale manipulations.

Main menu commands. The following commands are available in the main menu in CBA. The name of the command group as it appears in the main menu is printed in bold, followed by the commands included in the corresponding pull-down menu.
File: Load Case, Create Case, Show Case, Read Image, Save Image, Save Standard, Load Command File.
Transformation: Save Parameters, Set Local, Set Global, Set Dynamic.
Region: Set Structure, Clear Structure, Structure from Image, Draw ROI.
Processing: Image Algebra, Threshold, Label, Mean, Normalize, Setup.
Customize: Default Display, Graphic, Interactive Command.
Screen: Rebuild, Clear, Display Matrix, Graphic, Save Screen, Restore Screen.
Measure: Inspect, Distance.
Help: A list of all available commands.

The mouse. A standard mouse with three buttons is used. The left, middle and right mouse buttons will in the following be referred to as MB1, MB2 and MB3, respectively. MB1 is used to control the user interface (e.g. pressing a push button) and to interact with the display area, while MB2 and MB3 are used to control some specific functions.

3.2. Interface to image data
As mentioned in the introduction, one of the primary aspects of CBA is to introduce anatomical information from the atlas data base into the displayed images. With a given set of images there is an associated set of transformation parameters used to adapt the atlas brain to the images. As will be described in more detail in section 3.5, there is one set of general parameters that includes both rigid and non-rigid transformations and, for each image set, there is an additional set of examination-specific, rigid transformation parameters. The general transformation parameters affect all examinations and are determined once for each patient, while the examination-specific parameters are used to compensate for differences in alignment of the patient during the different examinations. This concept implies that, at a given time, it is only relevant to work with images that refer to the same underlying anatomy. In other words, the images should either be from the same individual or they should be transformed to correspond to the same anatomical standard. At a given time, CBA only has access to images included in the current case definition. Such definitions are normally stored as files and contain descriptions of all sets of images relevant to each case. Associated with a case definition is normally also a parameter file where transformation parameters are stored. The case definition also provides compact aliases which are used as identifiers for the different images during the CBA session. In virtually all operations performed on images, the image is referred to by its alias; the complete filename is almost never used. We will use the term ID in this sense throughout this paper, meaning an alias for an image.
Creating a case definition. A case definition must be created before applying CBA to a new patient, and for each new investigation made on the patient, the case definition must be updated. Besides the IDs by which the images are referred to during the session, the case definition contains information about the image data sets such as the number of slices, the level of the first image and the slice thickness. A sample case definition is shown in Fig. 2. Case definitions are normally created with the Create Case command. From a dialog, the user can select image files and assign an ID to each file. Information such as the number of slices, slice spacing, etc. is filled in automatically when a file is selected. A new case definition can be created, or images from subsequent examinations can be added to an existing case definition. Case definition files can also be edited with an ordinary text editor.

Loading a case definition. A case definition is selected with the Load Case command. A file selection box will then appear from which the user selects the desired definition file. When the case definition is loaded, information about each examination in the case is added to an internal data structure called the case list. Each entry in the case list stores information corresponding to one examination.
M: 1-15 POSITIONS: 0, 6.5
FILE:/usr/home/atlas/dat/mrdata*.img
P: 1-15 POSITIONS: 32.0, 6.5
FILE:/usr/home/atlas/dat/petdata.img

Fig. 2. An example of a case definition file describing data from different examinations of the same individual. In this example two examinations have been added to the case, each of which is described by two lines in the case definition. In the first line, the first string contains the image identifier (ID), in this case M and P; then follow the first and last slice in the data set. Immediately following the keyword POSITIONS are numbers giving the position of the first image (the distance from the lowest slice to the OM-line) and the distance between adjacent slices, respectively. The second line for each entry contains the name of the image file including the path. Images may be stored in one file for all slices, or each slice can be stored in a separate file. In the latter case the file specification must be generic, i.e. it must specify the name for all image files (as for the MRI data in this example).
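Parsing one two-line entry of the format shown in Fig. 2 could look like the sketch below. The real CBA parser is not published; the structure and field names here are our assumptions.

```c
#include <stdio.h>
#include <string.h>

typedef struct {
    char  id[16];                 /* image identifier (alias) */
    int   first_slice, last_slice;
    float first_pos, spacing;     /* mm from the OM-line, inter-slice distance */
    char  file[256];              /* image file name, possibly generic */
} CASE_ENTRY;

/* Parse e.g.  "M: 1-15 POSITIONS: 0, 6.5"  and  "FILE:/usr/.../mrdata*.img" */
int parse_case_entry(const char *line1, const char *line2, CASE_ENTRY *e)
{
    if (sscanf(line1, "%15[^:]: %d-%d POSITIONS: %f, %f",
               e->id, &e->first_slice, &e->last_slice,
               &e->first_pos, &e->spacing) != 5)
        return -1;
    if (sscanf(line2, "FILE:%255s", e->file) != 1)
        return -1;
    return 0;
}
```

A loop over pairs of lines in the definition file would then append one CASE_ENTRY per examination to the case list.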
typedef struct case_struct {
    short *data;              /* Pointer to image data if held in memory. Null if data is on disk */
    char filename[MAX];       /* Name of image file including path */
    char id[MAXID];           /* Image ID (alias) */
    char type[MAXEXT];        /* Image format */
    int orient;               /* Orientation of slices in data set (e.g. transaxial) */
    float first_level;        /* Level in CBA coordinates (z) for the first slice */
    int normalized;           /* True if the image has been normalized */
    FILEDEF_STRUCT filedef;   /* File definitions */
    LTPAR_STRUCT ltpar;       /* Local parameters */
} CASE_STRUCT;

typedef struct filedef {
    int x_size;               /* Size of image array in X direction */
    int y_size;               /* Size of image array in Y direction */
    int data_type;            /* Pixel data type (byte, integer*2, integer*4 or real) */
    int fileheader_size;      /* Header size in bytes */
    int byte_order;           /* Byte order in image data (0 - DEC, 1 - SUN, HP...) */
    float pixel_size;         /* Pixel size in mm */
    int first_slice;          /* First slice in data set */
    int last_slice;           /* Last slice in data set */
    int *slice_order;         /* Points to slice ordering array if slices are not stored consecutively in input data */
    float slice_spacing;      /* Spacing between slices */
    float pix_max;            /* Max pixel value in whole data set */
    float *spix_max;          /* Pointer to where max pixel value for each slice is stored */
    int single_slice_file;    /* 0 - all slices are stored in one file, 1 - slices are stored in different files */
    int swap;                 /* Set true if image must be byte swapped */
} FILEDEF_STRUCT;

typedef struct ltpar_struct { /* Data structure for local parameters */
    float ul;                 /* Upper limit */
    float ll;                 /* Lower limit */
    float dr;                 /* Display range */
    float tx;                 /* Translation in x direction */
    float ty;                 /* Translation in y direction */
    float tz;                 /* Translation in z direction */
    float a;                  /* Rotation angle */
    float b;                  /* Rotation angle */
    float c;                  /* Rotation angle */
} LTPAR_STRUCT;

Fig. 3. An entry in the case list where information regarding image data for one examination is stored. This includes some general information such as the ID used to refer to this data, together with information about file format necessary to read and display the data. Examination-specific, rigid parameters used to account for differences in positioning of the patient in different examinations are also included.
As shown in Fig. 3, this includes information about file format, the rigid transformation parameter values specific for the examination, as well as some general information used internally by CBA. When loading a case, each image specified in the case definition is examined. The image file is opened and is compared to the file formats known by CBA. If a match is found, the file header is read and information used to fill in the file format part of the case list is extracted. If no match is found, the extension of the image file is used to look up the information about the file format in the CBA file definition file (see section 2.3). The transformation parameter values associated with the case are also loaded. If the atlas has not yet been adapted to the case, i.e. there is no associated parameter file, default values corresponding to the identity transformation are loaded. The general parameter values are stored in a special data structure while the examination-specific rigid parameters are stored in the case list. Information about the loaded case can be obtained with the command Show Case. Information similar to what is shown in Fig. 2 will then appear in a pop-up window.
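The two-stage lookup described above, first trying known header formats and then falling back to the extension table from the file definition file, can be sketched like this. The format table, magic bytes and names are illustrative assumptions only:

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *name;       /* format name */
    const char *extension;  /* fallback: match by file extension */
    const char *magic;      /* match by header bytes, NULL if none */
    int magic_len;
} IMAGE_FORMAT;

/* Hypothetical built-in table; entries from a cbafiledefs-style
   file would be appended to it at start-up. */
static const IMAGE_FORMAT formats[] = {
    { "acr-nema", ".acr", "\x08\x00", 2 },
    { "rawshort", ".img", NULL, 0 },
};

const IMAGE_FORMAT *identify_format(const char *fname,
                                    const unsigned char *header, int hlen)
{
    const char *dot = strrchr(fname, '.');
    size_t i, n = sizeof formats / sizeof formats[0];

    /* First pass: compare the file header against known formats. */
    for (i = 0; i < n; i++)
        if (formats[i].magic && hlen >= formats[i].magic_len &&
            memcmp(header, formats[i].magic, formats[i].magic_len) == 0)
            return &formats[i];

    /* Second pass: fall back to the extension table. */
    if (dot)
        for (i = 0; i < n; i++)
            if (strcmp(dot, formats[i].extension) == 0)
                return &formats[i];
    return NULL;
}
```

Once a format entry is identified, its fields can be copied into the FILEDEF_STRUCT part of the corresponding case list entry.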
3.3. Operations on images
Neuroimaging data is normally stored as a volume created by a stack of transaxial slices. CBA can display these slices, as well as intermediate slices and orthogonal slices through the volume, created from the input data through interpolation. Displayed images are automatically zoomed in order to get the same resolution in screen pixels/mm independently of the underlying resolution of the data. The brain in images from the same patient, obtained with different imaging modalities and having different resolution, will thus be zoomed to the same size when displayed. The display area is organized as a matrix of windows. The number of windows in the display matrix can be chosen in the range from 1 to 8 x 8. As mentioned above, an image will appear in the Next Window when displayed. The position of the Next Window is then incremented.

Image display. All controls for image display and color scale manipulations are located in the Display Window (Fig. 4). In all display operations, the input image is referred to by its ID as given in
Fig. 4. The Display Window containing the user interface for display operations and for selecting and adjusting color scales. The IDs of all images CBA has access to are included in the list in the dialog. In this case data from one MRI examination (mr1) and two PET examinations (pet1, pet2) are available.

the case definition. When a case is loaded, the ID for each image set included is written into a list in the Display Window. Before any display operations are performed, an image must be specified by selecting its ID as active. This is normally done using the mouse to select an ID from the list in the Display Window, but an ID can also be entered into the Active ID text field from the keyboard. Then a slice must be specified. This can be done by pressing the numerical push buttons in the window using the mouse, or by entering numbers from the keyboard. A single slice, or a range of slices, can be specified. In the latter case a string (e.g. '1-9' or '1, 2, 7, 9') is used to select several slices. When the button Display is pressed, the given string is parsed and the corresponding slices are displayed. As mentioned above, an input image volume is normally represented as a sequence of transaxial slices. Sagittal and coronal views can then be created through interpolation (such views are
shown in Fig. 1). It is also possible to calculate intermediate transaxial slices. These operations are selected from a dialog appearing if the button Cross in the Display Window is pressed. Here, the orientation of the interpolated slice, i.e. transaxial, sagittal or coronal, is first selected; then the location for the new slice is given by the user, either explicitly in coordinates, or by specifying the location using the mouse.

Manipulation of color scales. CBA allows images to be displayed in gray scale as well as in a number of predefined pseudo colors. Two of the color scales can be loaded simultaneously, allowing, for example, MRI images to be displayed in gray scale next to PET images displayed in pseudo color. The default color table of the system is used. Presently, an 8-bit screen is always assumed or, equivalently, a default color table with 256 entries is assumed. It is also assumed that the window manager uses 16 colors. During initialization, CBA tries to allocate as many entries as possible from the remaining 240 positions. It then uses 16 of the given color cells for graphic overlay, while the remaining cells are split into two equally-sized parts, each of which is used to store the color table entries for one of the available color scales used for displaying images. We refer to these two partitions as Color Map 1 and Color Map 2, respectively. Two color bars located at the top of the display window show the two color scales loaded into these partitions. An image is displayed using the scale loaded into either Color Map 1 or Color Map 2, depending on which is selected as active, as given by the state of a pair of radio buttons located immediately to the left of the color bars. The gray scale is computed by CBA while the pseudo color scales are defined in ASCII files stored in a special directory. The names of the files stored in this directory are read during start-up and can then be selected.
This allows the user to define a new color scale by editing a file and have it available the next time CBA is started. The actual color scale associated with Color Map 1 and Color Map 2 can be changed at any time. If the button Set Color is pressed, a list of the available color scales compiled at start-up will appear in a pop-up window. If a new color scale is
selected, new color table entries are computed and are then loaded into the active color map. The corresponding color bar and all images displayed using this map are immediately visualized in the new color scale. Below the color bars are two sliders, by which the upper and lower cut-off levels for the two color scales can be changed, allowing for contrast enhancement of the displayed images.

Different images combined into one image.
Another important display feature in CBA, which is particularly useful when merging functional and anatomical information, is the ability to combine two input images into one displayed image. This is of course only meaningful if the two data sets have been brought into registration so that the slices correspond to the same anatomical entity (as performed by the Save Standard function described below). This display mode is initiated if the button Combine in the Display Window is pressed. A dialog window will appear from where the user can specify the two input images to combine by giving their respective IDs. Let us call these ID1 and ID2. There are two different methods available, Superimpose and Edge. After selecting input images and mode of operation, any display operation can be carried out in the same way as described above. When Superimpose is selected, a combined image is created by displaying the two images in a chess-board pattern where the 'black' pixels belong to the first image and the 'white' pixels belong to the second image. The image selected as ID1 is displayed using Color Map 1 while the image selected as ID2 uses Color Map 2. This allows the contrast to be individually adjusted for the two input images in order to enhance the combined image. In the edge mode the image selected as ID1 is displayed as usual. The image selected as ID2 is preprocessed using an edge-finding algorithm and is then thresholded. The edges from the second image are then drawn as an overlay on the first image. These two modes are quite different. Superimpose is mostly used for adding functional information to morphological data, for example, for displaying activation foci from a PET brain activa-
60
L. Thurfiell
et al. /Computer
Methods
and Programs
in Biomedicine
47 (1995)
SI-71
tion study superimposed on data from MRI. Such an image is shown in Fig. 10. Edge mode, on the other hand, is intended for adding morphological information to functional data such as displaying edges from A4RI as an overlay on a PET image. 3.4. Anatomical
structures
Anatomical information from the atlas data base can be added to displayed images in two different ways: structures can be selected by name or by location. An anatomical structure is represented in the data base by a 3D surface description. When a structure is selected, this 3D geometry is traversed and the contours of the cut through the structure made by the displayed images are calculated and are drawn as an overlay on the images. This process, which we refer to as a trace of a structure, also involves the transformation of the selected anatomical structures from atlas space to patient space and is described in detail in [13]. Selecting structures from the data base. Anatomical structures are selected from the Set Structure dialog shown in Fig. 5. There are two scrolled lists in the dialog; a list containing all the structures in the data base and a list displaying all the structures in the current selection. In addition, there is a row of radio buttons for selecting graphics color, an input text field and push buttons to activate or cancel the command. Structures can be selected from the data base list using the mouse or by entering the name of the requested structure into the text field. In the latter case, all structures whose first characters match the name specified are selected. It is also possible to use wild cards, allowing for a certain range of structures to be selected. The string 'brod*si' will thus select all Brodmann areas in the left hemisphere while 'brod' (or 'brod*', which is equivalent) will select all Brodmann areas. A new color can be specified in-between selections and, for each selection, the name of the structure with a color identifier appended is added to the Selected Structures list. If OK is pressed, the dialog will disappear and each of the selected structures is traced and its contours are drawn on the displayed images. Selecting structures by pointing in images. The purpose of the Structure From Image software
Fig. 5. The dialog for selecting anatomical structures from the data base. All structures in the data base are included in the left list. The user can pick items from this list in various ways, and select graphics color from the row of push buttons in-between selections. Selected structures will appear in the list to the right.
mode is to allow the user to get anatomical information about a certain area in a displayed image by simply using the mouse to point and click at the desired location in the image. It is possible to get information about three different groups of structures, namely Brodmann areas, gyri and central structures, respectively. When one of these groups is selected, a volumetric model containing spatial information about the selected group of structures is loaded. Separate models for the three different groups are necessary since structures in different groups are partly overlapping. By moving the pointer to the desired image and clicking MB 1, information about the given point is obtained. When a point is given, its coordinates are first transformed from patient coordinate space to CBA coordinate space. The transformation used to achieve this is the inverse of the transformation fitting the atlas to the patient. The CBA-coordinates are then used to access the volumetric model. If any structure is found to contain the point, it is drawn in the image and its name is displayed in a message dialog.
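As a sketch of the lookup just described, the clicked point is mapped into atlas space with the inverse transformation and used to index a volumetric label model. The function and data-structure names here are illustrative assumptions, not the actual CBA implementation:

```python
import numpy as np

def find_structure(point_xyz, inv_transform, label_volume, names):
    """Map a clicked patient-space point into atlas space and look up
    the structure label at that voxel, if any."""
    # Transform patient coordinates to CBA (atlas) coordinates.
    atlas_xyz = inv_transform(point_xyz)
    # Round to the nearest voxel in the volumetric model.
    i, j, k = (int(round(c)) for c in atlas_xyz)
    if not all(0 <= c < s for c, s in zip((i, j, k), label_volume.shape)):
        return None                      # point falls outside the model
    label = label_volume[i, j, k]
    return names.get(label)              # None when no structure contains it

# Toy model: a 4x4x4 volume where label 1 marks one structure, and a
# simple halving map standing in for the inverse atlas transformation.
vol = np.zeros((4, 4, 4), dtype=int)
vol[1, 2, 3] = 1
name = find_structure((2.0, 4.0, 6.0),
                      inv_transform=lambda p: tuple(c / 2 for c in p),
                      label_volume=vol,
                      names={1: "gyrus (toy)"})
```

The same pattern also covers the three separate models (Brodmann areas, gyri, central structures): each group would simply use its own label volume and name table.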
The creation of the volumetric model and the algorithms used in this process are described in [14].

3.5. Transformations
Since the brain atlas describes only the anatomy of the brain from which it was derived, it must first be transformed to fit the patient under study before it can be used. Available transformations include translations, rotations and linear scalings as well as second-order non-linear deformations. The adaptation can be performed in two quite different ways (or as a combination of both):
- By explicitly giving new parameter values.
- By interacting with the transformations using the mouse in a way that implicitly modifies parameter values. Translations, for example, are performed by moving the structures on the screen using the mouse.
Two categories of transformations are used: a general case-specific transformation, determined once for each patient, and an examination-specific transformation that compensates for variations in positioning of the patient during different examinations. There is one set of general parameters for each case and one set of examination-specific parameters for each entry in the case. Parameters are stored in a case-specific parameter file read when a case is loaded. The first time a case is loaded there is no associated parameter file and the internal parameter structure will be set to the identity transformation, i.e. all parameter values are zero except the values for linear scalings which are set to 1.0. The transformation parameters form a compound transformation which is mathematically equivalent to a general second-degree polynomial in x, y and z [13]. The adaptation of the atlas (i.e. the definition of the correct set of transformation parameters) is performed interactively by matching the selected atlas structures (normally the brain surface and some of the central structures) to the corresponding structures in a series of images of the patient's brain, for example, as presented in Fig. 1. Subsequent adaptations to images from other examinations are then obtained by determining the specific
rigid (translations and rotations) parameters only. The general adaptation is preferably made to a set of high resolution CT or MRI images, if available. However, with suitable tracers, the adaptation can be made directly to functional images with acceptable precision. Transformation parameters. The compound transformation adapting the atlas to a patient includes both rigid and non-rigid transformation parameters. Rigid transformation parameters include translations (TX, TY, TZ) and in-plane (A) and out-of-plane (B, C) rotations. Rotations are described by Euler angles. However, in some cases it has proved to be cumbersome to specify a desired rotation using Euler angles, and tilt and flex operations have therefore been added. These operations modify the Euler angles to new values corresponding to a specified tilt, a right-left rotation, or flex, a forward-backward rotation of the brain. Then there are the non-rigid transformation parameters. These include linear scalings (SCX, SCY, SCZ) as well as other linear and non-linear transformations such as variable scalings (SCXY, SCXZ, SCYX, SCYZ, SCZY, SCXX, SCZX, SCYY, SCZZ), skews (SWYX, SWYZ, SWZX) and scoliosis (SKXY, SKXZ, SKYX, SKYZ, SKZY, SKZX). These transformations can account for most types of normal variability in the shape of the brain such as size, different types of pear shapes and asymmetries, but cannot correct for large local deformations due to pathology. Apart from the linear scalings, which are defined so that a value of one corresponds to the identity transformation, the non-rigid transformations are specified by arbitrary units, whose normal range is between -100 and 100. Explicit setting of parameters. As previously mentioned, transformation parameter values can be explicitly modified. The general parameters can be changed in the Set Global dialog, which is a form where each parameter has an entry where its name is displayed next to a field containing the current value of the parameter.
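The equivalence of the compound transformation to a general second-degree polynomial in x, y and z can be illustrated as follows. The 3x10 coefficient layout is our own assumption for this sketch, not the parameter file format CBA uses:

```python
import numpy as np

# The ten monomials of a full quadratic polynomial in three variables.
def monomials(x, y, z):
    return np.array([1.0, x, y, z, x*x, y*y, z*z, x*y, x*z, y*z])

def apply_poly(coeffs, point):
    """coeffs is a 3x10 matrix; each row produces one output coordinate
    as a second-degree polynomial in (x, y, z)."""
    return coeffs @ monomials(*point)

# Identity transform: all coefficients zero except unit linear scalings,
# matching the paper's description of the initial parameter state.
identity = np.zeros((3, 10))
identity[0, 1] = identity[1, 2] = identity[2, 3] = 1.0

p = apply_poly(identity, (4.0, -2.0, 7.0))
```

Any combination of the translations, rotations, scalings, skews and scoliosis parameters described above can be folded into one such coefficient matrix, which is why a single polynomial evaluation suffices per point.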
Values can be modified and a trace of the selected structures can be initiated. As mentioned above, trace is the process of traversing a 3D structure, transforming it according to the compound 3D transformation and
computing its cross-section with the displayed images. The examination-specific parameters can, in the same way, be changed from the Set Local dialog. Dynamic transform. If the atlas is adapted using the Set Global or Set Local functions, a trace of the displayed structures is performed after each modification of parameter values. The user then examines the result and new adjustments are given in an iterative fashion until a good fit is obtained. However, this trial-and-error approach has proved to be quite cumbersome and time consuming since it is difficult to know, for example, if a translation is performed, how many units to specify. To speed up the adaptation the Dynamic transform was developed [13]. There are two new ideas. The first is to use a mouse-driven procedure for implicitly modifying parameter values. The second is to speed up computations by performing 2D transformations on the displayed contours. In the dynamic mode, polygons describing the cross-section of the selected structures with each displayed image are saved as lists of coordinates. There is one list associated with each displayed image. Dynamic transform mode is entered if the Set Dynamic command under Transformation in the main menu is selected and the dialog shown in Fig. 6 is displayed. There are two modes of operation: 2D and 3D. If the user modifies one of the parameter values in the 2D mode, a 2D transformation with the new parameter values will be applied to the polygons in the list associated with a window selected by the user. New polygons approximating the 2D slice through the transformed structure are then displayed and the list is updated. Only the selected window will be affected. A trace of the structures as described above will take place if the button Trace is pressed, and not until then will the modified transformation affect the structures in all displayed images.
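The 2D shortcut can be sketched as follows: a hypothetical in-plane transform is applied to one saved contour list, while the full 3D trace is deferred until Trace is pressed. The function and parameter names are illustrative, not taken from CBA:

```python
def transform_polygon_2d(polygon, sx=1.0, sy=1.0, tx=0.0, ty=0.0):
    """Apply an in-plane scale and translation to one saved contour.
    This stands in for the 2D transformation the dynamic mode applies
    to the polygon list of the selected window only."""
    return [(x * sx + tx, y * sy + ty) for x, y in polygon]

# One saved contour for the selected window; only this window is
# redrawn until the user presses Trace to update all windows in 3D.
contour = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]
moved = transform_polygon_2d(contour, sx=2.0, tx=2.0)
```

Because only a short coordinate list is touched, the update is cheap enough to follow the slider interactively, which is the point of the 2D mode.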
To save time, it is perfectly possible to make several modifications in-between each trace as long as the modifications made are independent. A typical example is to adjust linear scalings SCX and SCY working in a transaxial slice and then move to a sagittal slice and adjust SCZ. The 2D mode is fast because only an in-plane 2D transformation is performed on the current
Fig. 6. This figure shows the Dynamic Transform dialog used in the adaptation process. The user can dynamically warp the atlas brain by picking transformation parameters and changing their values using the sliders. The selection of parameters for general deformations is guided by a figure appearing when the parameter is selected. The geometrical effects of parameters SCXY and SWZX are illustrated here.
window. However, the user is not guaranteed that ‘what you see is what you get’ until the Trace button has been pressed. This is especially true if an attempt is made to adjust a parameter whose
plane of operation is orthogonal to the orientation of the image in the current window. In the 3D mode, on the other hand, a full 3D transformation is performed immediately following any change of parameter values, guaranteeing that the visualized result reflects the actual setting of the transformation parameters. A translation is initiated by pressing the button Translate. The user can then use the mouse to drag structures. Only the structures in the window where the translation took place will immediately be affected and Trace must be pressed to update all windows. A translation made in a transaxial slice will affect parameters TX and TY, respectively. In the same way, a translation made in a sagittal slice will affect TY and TZ, while a translation performed in a coronal slice will affect TX and TZ. Rotations in the dynamic mode are restricted to flex and tilt operations and to in-plane rotation (A). After selection of rotation parameter (flex, tilt or A), the window to operate upon must be selected (by clicking with the mouse). The orientation of the image should be transaxial if parameter A is to be changed, sagittal if flex is to be changed and coronal if tilt is to be changed. The modification of the selected parameter is performed by using the mouse to move the handle of the associated scale widget. When the scale is set to a new value, the screen will be updated. If the operation was performed in the 2D mode, only the structures in the selected window are affected and Trace must be pressed to propagate the changes to all windows. Linear scalings and non-linear transformations are normally adjusted only the first time the atlas is adapted to a new patient and will then affect all examinations added to the case. When modifying the value of any of these parameters in the Dynamic mode, the desired image is selected, again by clicking with the mouse, and a transformation parameter is selected from the transformation palette. The selection of a parameter for the more complex deformations is guided by a figure which is displayed above its associated scale, showing the geometrical effect of the current selection (see Fig. 6). It is important that the selected transformation is defined in the plane corresponding to the orientation of the image in the selected window (e.g. it makes no sense to change SCZ on a transaxial slice). The selected parameter is then
Fig. 7. An illustration of how the atlas brain is deformed when parameters SCXY and SWZX are deliberately adjusted to their maximum limits. The parameter SCXY operates in the xy-plane and the effect is shown in the transaxial slice (left image), while the effect of SWZX is shown in the coronal slice (right image).
changed by moving the handle of the scale. And again, in the 2D mode, a modification will immediately affect the structures in the selected window and Trace must be pressed before the structures in all windows are updated. An example of two possible deformations of the atlas brain is given in Fig. 7. The parameters SCXY and SWZX have deliberately been adjusted to their maximum limits. Note the correspondence between the atlas contours in this image and the figures in the dynamic dialog shown in Fig. 6.

3.6. Image registration
In previous sections we have described how the atlas brain can be adapted to fit the images of an individual. The obtained transformation describes a point-by-point mapping of the atlas brain to the patient’s brain. This also means that the inverse of
the transformation can be used to map the individual brain to the atlas brain. Furthermore, if the patient is examined in several imaging modalities, let's say PET and MRI, the difference between the transformation adapting the atlas to the MRI images and the transformation adapting the atlas to the PET images describes the rigid transformation bringing the two data sets into registration. It is thus possible to use CBA both for (1) intrasubject registration, that is, registration of images from the same individual, and for (2) transforming images from different individuals into an anatomy-independent coordinate space, also referred to as intersubject registration (Fig. 8). In the latter case, an anatomical standardization is achieved by transforming all images to fit the shape of the atlas brain. An arbitrary coordinate from one image will then correspond to the same anatomical entity
Fig. 8. When the atlas is adapted to a patient for the first time, a non-linear function of three variables (x,y,z) is defined. In this case, the function f(x,y,z) must be defined when the atlas is adapted to the first examination (MRI) for patient A. When adapting to the PET examination for the same patient, it is sufficient to determine the rigid transformation describing the difference in positioning of the patient as compared to examination 1. In this case we have to determine f'(x,y,z). We also have a function h(x,y,z) adapting the atlas to patient B. The function f' can be used to reslice the PET data to match the MRI data for patient A, and all data can be resliced to conform with the atlas brain through the application of the inverse transformations.
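The composition behind Fig. 8 can be sketched with homogeneous 4x4 matrices. Treating the adaptations as invertible matrices is a simplification for illustration: in CBA the general adaptation f is a non-linear second-degree transformation, and only the examination-to-examination difference is rigid.

```python
import numpy as np

def translation(tx, ty, tz):
    """A 4x4 homogeneous translation matrix (toy stand-in for an adaptation)."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

f = translation(5.0, 0.0, 0.0)        # atlas -> MRI of patient A (toy)
f2 = translation(5.0, 0.0, 10.0)      # atlas -> PET of patient A (shifted table position)

# PET coordinates -> atlas -> MRI: the registration bringing PET onto MRI.
pet_to_mri = f @ np.linalg.inv(f2)

# MRI coordinates -> atlas: the inverse adaptation used for
# intersubject (anatomical) standardization.
mri_to_atlas = np.linalg.inv(f)

p_pet = np.array([1.0, 2.0, 3.0, 1.0])   # a PET-space point (homogeneous)
p_mri = pet_to_mri @ p_pet
```

In the real system the reslicing loop of section 3.6 applies such a source-to-destination mapping to every voxel and reads the value back with tri-linear interpolation.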
as the corresponding coordinate from another image. Transformation of one image volume to match another, or to match the atlas brain, is performed with the Save Standard function. Here, the ID for the source image, i.e. the image data to reslice, is specified. If an ID for a destination image is also specified, the rigid transformation matching the two data sets is computed, and the source image is resliced to match the destination image. We have thus obtained an intrasubject registration of the two data sets. If, on the other hand, the destination ID is omitted, the atlas brain is considered the destination and a non-linear reslicing of the source image to conform with the atlas brain, i.e. an intersubject registration, is performed. The reslicing is achieved through an interpolation and resampling process [13]. Basically, it consists of looping through all the voxels in the source volume and applying the source-to-destination transformation to the voxel coordinates. This will give a point in the destination image volume corresponding to the voxel. The coordinates of the given point will be given as floating point numbers and the actual voxel value is obtained using tri-linear interpolation.

3.7. Regions of interest
A regions of interest (ROI) tool kit has recently been implemented. It allows the user to create different types of ROIs. In the case of interactively drawn ROIs, the specification of the ROI in a functional image can be guided by anatomical information from the atlas or by MRI data from the same patient brought into registration with the functional data. It is also possible to use anatomical structures selected from the data base as 2D ROIs or as 3D VOIs (volumes of interest). In this case the ROI is coupled to anatomy instead of to spatial coordinates, which has several advantages. A given collection of ROIs can, for example, be directly moved to another patient after a preceding adaptation of the atlas. The use of anatomical structures as ROIs reduces subjectivity and is also time saving. The development and usage of the ROI tool kit is described in detail in [15].

3.8. Image processing
Anatomical standardization, as available in CBA, allows for statistical operations on data from different individuals to be performed on a pixel-by-pixel level. A number of different functions for processing such images are available in CBA. These functions are found in the pull-down menu under Processing in the main menu. Normalization. The Normalize function allows for the activity in data from different subjects to be normalized. This is, in many cases, necessary to allow for statistical comparisons between data sets from different studies. The model for normalization presently implemented sets the global whole brain average equal for all images in the case list [16]. In this process all voxels inside the brain are averaged and are compared to a predefined value. A correction factor, defined as the ratio between the two values, is computed and is used to correct voxel values in all subsequent processing. The predefined value can be changed using the Setup command. In this type of normalization, it is important to avoid errors caused by missing data. A spatially standardized image volume, created through a reslicing process as described in section 3.6, might very well contain slices with no data depending on the orientation of the brain in combination with the axial field of view in the original image volume. If missing data is not accounted for, it will lead to an overestimation of the normalization factor. In our approach, we circumvent this problem by checking voxels against a mask during the normalization. All the image volumes to be normalized are first averaged but, if any of the input voxels is zero, the corresponding voxel in the resulting volume is also set to zero. The maximum voxel value in the volume is recorded, and the mask is created by thresholding the volume at a level defined as 20% of the maximum. This guarantees that only relevant voxels are taken into account during the normalization process. Image algebra.
Image Algebra is a general purpose tool for combining images in a flexible way. The user can combine images with other images and with constants by specifying strings containing mathematical expressions. The expressions are then parsed and the specified operations are performed on a pixel-by-pixel level on the input images. All input images must match in terms of slice size, resolution, number of slices, etc. Standard arithmetic operations (+, -, *, /) are allowed as
well as a number of predefined functions. These include exponent, logarithm and square root, and are specified by prefixing an expression enclosed within parentheses with the keywords exp, ln and sqrt, respectively. Images can also be filtered with a Gaussian by prefixing a single image ID or a complete expression with the keyword F. New functions can easily be defined and added to the system. The Image Algebra dialog used for processing images is shown in Fig. 9. Expressions are edited in the Expression text field. When return is pressed, the string will be moved to the Command list. This allows for several expressions to be entered, each of which will produce one output image. The syntax is the same as used in standard mathematical expressions. To the left of the equal sign, the ID of the output image is specified and to the right the operations to be performed on the input images are specified. If OK is pressed, all expressions entered will be evaluated and processed. When the processing of an expression is ready, the case list is updated to contain the new data. The new image can then be displayed and processed in the same way as any other image in the case list. All expressions entered during the session are stored in a history list and can be reused at any time. Expressions can also be saved to a file and
previously saved commands can be read back into the Command List. To illustrate how Image Algebra can be used, let us consider an example where we have four images named n1, n2, n3 and n4. The expression

c1 = (n1 + n2 + n3 + n4)/4    (1)

will create a new image, given ID c1, that is the average of the four input images. If the expression

c1 = F((n1 + n2 + n3 + n4)/4)    (2)

is given, the average of the input images is first computed, then it is filtered. If, on the other hand, the expression

c1 = (F(n1) + F(n2) + F(n3) + F(n4))/4    (3)

is given, filtering of the input images is performed before the average is computed. Image Algebra uses a LALR parser created with the UNIX tool YACC [17]. The lexical analyzer, which takes the stream of characters from the input string and divides them into meaningful tokens for syntax analysis, is created by the UNIX tool LEX [18]. Processing is always performed on all slices in the input images. For each slice, the expression is parsed and the resulting slice for the output image is computed. During the processing, a stack is maintained where intermediate results are stored. If an intermediate result is a numerical value, it is stored on the stack. If, on the other hand, the intermediate result is an image, the image is temporarily stored in the next free position in the case list and its index in the case list is stored on the stack. Temporary images are, in other words, stored in the same way as all other images in CBA. The parsing method and the way of storing intermediate results allow for arbitrarily complex expressions to be calculated. Expressions such as

res = sqrt(n1 + sqrt(n1*sqrt(ln(n1 + 1))))    (4)

involving recursive use of the same input image are allowed, although the practical use of this particular example might seem less obvious.

Fig. 9. The Image Algebra dialog. Expressions are edited into the Expression text field and are then moved to the Command list. If OK is pressed, all expressions in the command list are processed, each of which will create one output image. In this case, two filtered average images and one subtraction image are created.

Filtering. As mentioned above, images can be smoothed using a Gaussian filter. This is performed using Image Algebra by prefixing an expression with the keyword F. The size of the filter is specified as the diameter at FWHM (full width at half maximum) and can be preset by the user. When calculating the convolution kernel, the pixel size of the processed images is taken into account. The actual size of the filter kernel will thus depend on the resolution of the data. Since the Gaussian is a function separable in x and y, the filtering is performed in two steps, first in the x direction, then in the y direction. For a filter kernel with size n pixels, the processing time will be O(n) instead of O(n^2). This approach reduces the time spent on this task significantly because typical filter sizes used are around 30 x 30. Filtering with such a kernel will thus require 60 floating point calculations for each pixel instead of 900, reducing the processing time by approximately a factor of 15. Mean and variance images. Variance images are important in many applications, for example for testing significance. A variance image is produced if the Mean function is used to calculate an average image. The user has to specify the IDs of the input images and the ID of the output average image. A variance image given the output ID with the string '_var' appended is then also created. A t-test image can then be created using Image Algebra to divide the mean image by the standard error image (see equation 9 below). Cluster analysis. Analysis of binary data is important in many applications. For example, in brain activation studies with PET, a binary volume can be obtained by thresholding a t-test map at a certain level of significance. The different clusters of connected voxels in this volume correspond to different regions in the brain where a significant increase in brain activation has occurred. Different characteristics of these regions, such as size, location, etc., can be calculated with the cluster analysis tools available in CBA.
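The separable filtering scheme described above can be sketched as below. The FWHM-to-sigma relation is the standard one; the kernel truncation and pixel-size handling are simplified assumptions rather than the exact CBA code:

```python
import numpy as np

def gaussian_kernel(fwhm_pixels, radius):
    """Build a normalized 1D Gaussian kernel from a FWHM given in pixels,
    using the standard relation sigma = FWHM / (2*sqrt(2*ln 2))."""
    sigma = fwhm_pixels / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()                      # normalize to preserve the mean

def smooth_separable(image, fwhm_pixels, radius=15):
    """Two 1D passes (x then y) instead of one n x n 2D convolution."""
    k = gaussian_kernel(fwhm_pixels, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

# An impulse image: its smoothed version is the 2D kernel itself.
img = np.zeros((33, 33))
img[16, 16] = 1.0
out = smooth_separable(img, fwhm_pixels=6.0)
```

Each output pixel costs 2n multiplications instead of n*n, which is the factor-of-15 saving the text cites for a 30 x 30 kernel.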
The Threshold functions in CBA allow the user to set two threshold levels, either explicitly or using a graphical approach. A binary volume, where all voxels in-between the set threshold levels belong to the foreground while the remaining voxels belong to the background, can then be created. The resulting binary volume can then be examined and statistics of the different voxel clusters can be obtained through the application of the Labeling function. A 3D connected components labeling algorithm [19] is used to find and label connected voxels. The algorithm implemented allows for a volume threshold to be set. All clusters with a volume smaller than the set threshold will then be discarded during the labeling process. As a result of the labeling, a list describing features of the different connected components is presented. This includes the size of each component and the location of its gravity center. The coordinate of the gravity center can also be used to determine the name of the anatomical structure containing the component using the Structure From Image function. In the analysis of data from activation studies, the labeling function allows for a threshold to be set, not only on the value of an activation, but also on the size, thereby allowing for small spurious activations to be discarded.

4. An application example: functional brain mapping with PET
Blood flow in the brain is coupled to its need for energy and oxygen. Based on this assumption, brain function may be studied by examining the blood flow patterns produced by different types of cerebral activation. Positron emission tomographic measurements of regional cerebral blood flow (rCBF) are often used in this type of study. Each subject is typically examined in two or more different activation states. One state is normally a neutral, reference state, while the other state is during some sort of provoked stimulation. Changes in cerebral blood flow may then be detected by subtracting rCBF images obtained for the different states. To overcome statistical limitations, intersubject averaging and data smoothing are normally used [20]. In the application example described here, cerebral response to frightening visual stimulation was studied [21]. Six female volunteers with symptomatic snake phobia were examined while exposed to visual phobogenic and neutral stimulation. A phobic reaction was provoked by showing a videotape of snakes. A neutral videotape, including scenes of people walking in a park, was used as the reference condition. Regional blood flow images for each individual were estimated
from the PET data. After a preceding adaptation of the atlas to the images, they were resliced to conform to the anatomy of the CBA brain. After this standardization, the images were read into CBA using the Read Image function. Images were named o1-o6 and n1-n6, where 'o' corresponds to frightening and 'n' to neutral stimulation, while the number is individual specific. The Normalize function was used to set the global blood flow (i.e. the average of all brain pixels) to 50 ml/100 g/min in all images. Smoothed average images, one for each of the two activation states, were computed. Smoothing was performed using a Gaussian filter with a FWHM of 15 mm. These computations were performed using Image Algebra and the expressions entered were:

nm = F((n1 + n2 + n3 + n4 + n5 + n6)/6)    (5)

om = F((o1 + o2 + o3 + o4 + o5 + o6)/6)    (6)

dif = om - nm    (7)

The new images created from these expressions include an average image for the neutral stimulation (nm), an average image for the provoked phobic reaction (om), and a subtraction image (dif) showing areas where an increase in brain activation during the phobic reaction has occurred as compared to the neutral state. In these calculations, the average for each stimulation is first created, then the difference image is computed. An alternative approach that would produce the same average image but with a lower variance is to calculate the difference image for each individual first, and then produce the resulting image by averaging the differences. This was performed by entering the expressions

dif1 = F(o1 - n1), dif2 = F(o2 - n2), ..., dif6 = F(o6 - n6)    (8)

producing six new images given the IDs dif1, dif2, ..., dif6. An average image was then calculated using the Mean function. Dif1, dif2, ..., dif6 are now input IDs and an output ID named mdif was given, resulting in an averaged difference image named mdif and a variance image named
mdif_var. The image mdif is identical to the image produced by (7). However, the presence of a variance image allows for a t-test image based on the conventional Student's t-test to be produced. This was performed using Image Algebra by entering the expression:

t-test = mdif/sqrt(mdif_var/6)    (9)
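The per-subject differencing and t-map computation of expressions (8) and (9) can be sketched with synthetic data. The array sizes, noise levels and the variance floor are assumptions of this toy example, and the Gaussian smoothing step F is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                           # six subjects, as in the study
# Toy stand-ins for normalized rCBF slices (8x8 arrays instead of volumes).
neutral = [rng.normal(50.0, 1.0, (8, 8)) for _ in range(n)]
# A uniform 2-unit "activation" plus measurement noise.
phobic = [img + 2.0 + rng.normal(0.0, 0.5, (8, 8)) for img in neutral]

# Expression (8): one difference image per subject (smoothing omitted here).
diffs = np.stack([o - m for o, m in zip(phobic, neutral)])
mdif = diffs.mean(axis=0)                       # averaged difference image
mdif_var = diffs.var(axis=0, ddof=1)            # sample variance image

# Expression (9): t = mean / standard error; the small floor guards
# against zero variance in degenerate toy data.
t_map = mdif / np.sqrt(np.maximum(mdif_var, 1e-12) / n)
```

The resulting t_map plays the role of the t-test image that is thresholded and passed to the cluster analysis described in section 3.8.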
A t-test map can be used to delineate regions where a change in brain activation has occurred. Hypothesis testing based on the t-test map can be problematic, however, due to the inherent correlation between adjacent pixels, and several approaches have been suggested to overcome this difficulty [22,23]. Some results from this application example are presented in Fig. 10. This figure shows a slice from two of the input images (n1 and o1), the smoothed average image computed in (5), the averaged difference image created by the mean function, the t-test image computed in (9) as well as a combined image obtained by superimposing the t-test image on a corresponding slice from MRI. When relating significant foci as shown in the t-test image to the underlying anatomy, the Structure From Image function is very useful. The user simply has to point and click in the highlighted areas, and the structure containing the given point is drawn in the image and its name is displayed. In the given example, significant activations were found in Brodmann area 19.

5. Discussion

5.1. Experiences of use
The CBA system, as presented, has been used at the Karolinska Hospital, Stockholm, Sweden, on a daily basis for about 2 years. During this period the atlas has been adapted to about 1000 examinations involving more than 100 different individuals. The Dynamic transform mode has proved to be very useful and speeds up the adaptation process considerably. Most adaptations of the atlas are performed directly on the functional PET images and are normally done in approximately 5 min. Image Algebra has proved to be a powerful tool.
Fig. 10. Images from a brain activation study with PET. The upper row shows an rCBF image from one individual obtained during neutral (left) and phobic (middle) stimulation, and a smoothed average image for the neutral condition based on six individuals (right). The left image in the lower row is an average of six subtraction images showing areas with increased activity during phobic stimulation. The middle image is a t-test map obtained by dividing the averaged subtraction image by the corresponding standard error image. Significant activations are found in Brodmann areas 19. Displayed structures also include the brain surface, the striatum and the thalamus. Lower right shows a combined image where the t-test map is superimposed on data from MRI.
As we showed in the application example, it is very useful for processing images from brain activation studies. It can also be used for simple tests, for example to see if two PET scans of a patient are misaligned. By subtracting the image volumes and examining the resulting image, a misalignment will be clearly visible. Image Algebra can also be used to create weighted sums of time-sequence images and to test different hypotheses.
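Such a subtraction-based misalignment check can be sketched as follows, assuming the two scans are available as numpy arrays. The function name and the use of the mean absolute difference as a summary score are illustrative assumptions, not CBA functionality; in practice one would also inspect the difference image itself.

```python
import numpy as np

def misalignment_score(scan_a, scan_b):
    """Crude check for misalignment between two scans of the same patient.

    A well-aligned pair subtracts to near-noise everywhere; a misaligned
    pair leaves strong positive/negative rims at tissue edges, which
    inflates the mean absolute difference. Interpreting the score (or a
    threshold on it) is left to the user.
    """
    dif = scan_a.astype(float) - scan_b.astype(float)
    return np.abs(dif).mean()
```

For two identical volumes the score is zero; shifting one volume by even a couple of voxels produces a clearly positive score.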
The interactive contour matching method used in CBA has several advantages over other methods for intersubject registration. Adaptation can be made directly to functional images, which is difficult for methods such as those based on the identification of the AC-PC line [24,25] or other more general landmark-based methods [26]. A disadvantage of the adaptation method in CBA, however, is its subjectivity; it relies on the user's verification of the matching between the selected structures and the corresponding structures in the images. There is no goodness-of-fit measure. It is clear that some sort of automatic adaptation procedure is desirable, and this is probably necessary before the described methods will be accepted in the clinical situation. As a first step, intrasubject registration will be made automatic, and we are implementing volume-based methods for co-registration of functional images [3] and functional-anatomical images [27]. Once implemented, it will be sufficient to interactively adapt the atlas once for each individual. All subsequent adaptations to data from other examinations will be made automatically. The goal is, of course, to also make the general adaptation of the atlas automatic, and this is an area we will focus on in the near future. The transformations used in the adaptation process are another area for future development. More complex transformations are needed to more accurately account for pathological cases.

5.3. Availability
CBA is being commercialized by a company and is available as a research tool, mainly intended for the field of functional neuroimaging. Interested readers may contact the first author for more information.

Acknowledgments

The authors are grateful to Torgny Greitz, Martin Ingvar and Lars Eriksson and the staff at the Departments of Neuroradiology and Clinical Neurophysiology at the Karolinska Institute/Hospital, Stockholm, Sweden, for their support and constructive criticism during the development of the software system described, and to Mats Fredriksson at the Departments of Psychiatry and Psychology, Karolinska Institute/Hospital, for supplying the PET data used in the application example. Valuable discussions with our colleagues at the Centre for Image Analysis and with Jesper Andersson at the Uppsala University PET Centre are gratefully acknowledged. This work was supported by the Swedish Medical Research Council under grant number B9339X-08705-05C.

References

[1] P. Van den Elsen, E. Pol and M.A. Viergever, Medical image matching - a review with classification, IEEE Eng. Med. Biol. 12(1) (1993) 26-39.
[2] M. Bergstrom, J. Boethius, L. Eriksson, T. Greitz, T. Ribbe and L. Widen, Head fixation system for reproducible position alignment in transmission CT and positron emission tomography, J. Comput. Assist. Tomogr. 5 (1981) 136-141.
[3] J.L.R. Andersson, A rapid and accurate method to realign PET scans utilizing image edge information, J. Nucl. Med. 36 (1995) 657-669.
[4] U. Pietrzyk, K. Herholz and W.D. Heiss, Three-dimensional alignment of functional and morphological tomograms, J. Comput. Assist. Tomogr. 14(1) (1990) 51-59.
[5] C. Pelizzari, G. Chen, D. Spelbring, R. Weichselbaum and C.T. Chen, Accurate three-dimensional registration of CT, PET, and/or MR images of the brain, J. Comput. Assist. Tomogr. 13(1) (1989) 20-26.
[6] C. Bohm, T. Greitz, D. Kingsley, B.M. Berggren and L. Olsson, Adjustable computerized stereotaxic brain atlas for transmission and emission tomography, Am. J. Neuroradiol. 4 (1983) 731-733.
[7] C. Bohm, T. Greitz, R. Seitz and L. Eriksson, Specification and selection of regions of interest (ROIs) in a computerized brain atlas, J. Cerebral Blood Flow Metab. 11 (1991) A64-A66.
[8] T. Greitz, C. Bohm, S. Holte and L. Eriksson, A computerized brain atlas: construction, anatomical content and some applications, J. Comput. Assist. Tomogr. 15(1) (1991) 26-38.
[9] A.C. Evans, C. Beil, S. Marett, C. Thompson and A. Hakim, Anatomical-functional correlation using an adjustable MRI-based region of interest atlas with positron emission tomography, J. Cerebral Blood Flow Metab. 8 (1988) 513-530.
[10] L. Thurfjell, C. Bohm, T. Greitz, L. Eriksson and M. Ingvar, Accuracy and precision in image standardization in intra- and intersubject comparisons, in: Functional Neuroimaging: Technical Foundations, ed. Thatcher, pp. 121-130 (Academic Press, 1994).
[11] R.P. Woods, J.C. Mazziotta and S.R. Cherry, Automated image registration, in: Quantification of Brain Function, Tracer Kinetics and Image Analysis in Brain PET, eds. K. Uemura, N.A. Lassen, T. Jones and I. Kanno, pp. 391-398, International Congress Series no. 1030, 1993.
[12] T. Greitz, S. Holte, C. Bohm, L. Eriksson, R. Seitz, K. Ericson, H. Nyback and S. Stone-Elander, A data base library as a diagnostic aid in neuroimaging, Neuroradiology 33 Suppl. (1991) 2-4.
[13] L. Thurfjell, C. Bohm, T. Greitz and L. Eriksson, Transformations and algorithms in a computerized brain atlas, IEEE Trans. Nucl. Sci. 40 (1993) 1187-1191.
[14] L. Thurfjell, E. Bengtsson and C. Bohm, A volumetric model for identifying anatomical structures in tomographic brain images, Proc. 8th Scand. Conf. Image Anal. (1993) 473-478.
[15] L. Thurfjell and C. Bohm, Atlas generated generalized ROIs for use in functional neuroimaging, IEEE Trans. Nucl. Sci. 41 (1994) 1670-1677.
[16] P.T. Fox and M.E. Raichle, Enhanced detection of focal brain responses using intersubject averaging and change-distribution analysis of subtracted PET images, J. Cerebral Blood Flow Metab. 8 (1988) 642-653.
[17] S.C. Johnson, Yacc - yet another compiler-compiler, Computing
Science Technical Report 32, AT&T Bell Laboratories, Murray Hill, NJ, 1975.
[18] M.E. Lesk, Lex - a lexical analyzer generator, Computing Science Technical Report 39, AT&T Bell Laboratories, Murray Hill, NJ, 1975.
[19] L. Thurfjell, E. Bengtsson and B. Nordin, A new three-dimensional connected components labeling algorithm with simultaneous object feature extraction capability, CVGIP: Graphical Models Image Process. 54(4) (1992) 357-364.
[20] M. Ingvar, L. Eriksson, T. Greitz, S. Stone-Elander, M. Dahlbom, G. Rosenqvist, P. af Trampe and C. van Euler, Methodological aspects of brain activation studies: cerebral blood flow determined with [15O]butanol and positron emission tomography, J. Cerebral Blood Flow Metab. 14 (1994) 628-638.
[21] M. Fredriksson, G. Wik, T. Greitz, L. Eriksson, S. Stone-Elander, K. Ericson and G. Sedvall, Regional cerebral blood flow during experimental phobic fear, Psychophysiology 30 (1993) 126-130.
[22] K.J. Friston, C.D. Frith, P.F. Liddle and R.S.J. Frackowiak, Comparing functional (PET) images: The
assessment of significant change, J. Cerebral Blood Flow Metab. 11 (1991) 690-699.
[23] K.J. Worsley, A.C. Evans, S. Marett and P. Neelin, A three-dimensional statistical analysis for CBF activation studies in human brain, J. Cerebral Blood Flow Metab. 12 (1992) 900-918.
[24] P.T. Fox, J.S. Perlmutter and M.E. Raichle, A stereotactic method of anatomical localization for positron emission tomography, J. Comput. Assist. Tomogr. 9(1) (1985) 141-153.
[25] J. Talairach and P. Tournoux, Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System - An Approach to Cerebral Imaging (Thieme Verlag, Stuttgart/New York, 1988).
[26] F.L. Bookstein, Thin-plate splines and the atlas problem for biomedical images, in: Information Processing in Medical Imaging, eds. A. Colchester and D. Hawkes, Lecture Notes in Computer Science, vol. 511 (Springer-Verlag, 1991).
[27] J.L.R. Andersson, A. Sundin and S. Valind, A robust and automatic method for coregistration of PET and MR brain images, J. Nucl. Med. (in press).