Fast fully automatic heart fat segmentation in computed tomography datasets



Victor Hugo C. de Albuquerque a,b, Douglas de A. Rodrigues c,d, Roberto F. Ivo c,d, Solon A. Peixoto c,d, Tao Han a, Wanqing Wu e,∗, Pedro P. Rebouças Filho c,d

a DGUT-CNAM Institute, Dongguan University of Technology, Dongguan 523106, China
b Universidade de Fortaleza, Fortaleza-CE, Brazil
c Programa de Pós-Graduação em Engenharia de Teleinformática, Universidade Federal do Ceará, Fortaleza-CE, Brazil
d Laboratório de Processamento de Imagens, Sinais e Computação Aplicada (LAPISCO), Instituto Federal do Ceará, Fortaleza-CE, Brazil
e School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510275, PR China

Abstract

Heart diseases affect a large part of the world's population, and studies have shown that these diseases are related to cardiac fat. Various medical diagnostic aid systems have been developed to help reduce them. In this context, this paper presents a new approach to the segmentation of cardiac fat in Computed Tomography (CT) images. The study employs a clustering algorithm called Floor of Log (FoL), whose main advantage is a significant drop in segmentation time. A Support Vector Machine was used to learn the best parameter of the FoL algorithm, and mathematical morphology techniques were applied for noise removal. The time to segment the cardiac fat in a CT exam is only 2.01 seconds on average, whereas works in the literature require more than one hour to perform the segmentation, making this approach one of the fastest to segment an exam completely. The Accuracy obtained was 93.45% and the Specificity 95.52%. The proposed approach is automatic and requires little computational effort. With these results, the use of this approach for the segmentation of cardiac fat proves to be efficient, besides having good application times; therefore, it has the potential to be a medical diagnostic aid tool, helping experts achieve faster and more accurate results.

Keywords: Cardiac Fat Segmentation, Digital Image Processing, Floor of Log, Heart

∗ Corresponding author. Email addresses: [email protected] (Victor Hugo C. de Albuquerque), [email protected] (Douglas de A. Rodrigues), [email protected] (Roberto F. Ivo), [email protected] (Solon A. Peixoto), [email protected] (Tao Han), [email protected] (Wanqing Wu), [email protected] (Pedro P. Rebouças Filho)

1. Introduction

Many of the diseases that affect the world's population are linked to the heart. Estimates predict that, by 2030, cardiovascular diseases will account for 23.3 million deaths worldwide [1, 2]. In addition, a 16-year study involving 384,597 patients reported that 38.4% of deaths occurred within 28 days of a major coronary event [3]. Another study identified cardiovascular accidents as the most common cause of natural death in Jamaica [4]. However, there has been significant growth in cardiovascular research to improve the early diagnosis of heart diseases in recent years.

The heart is the most important component of the cardiovascular system, since it is responsible for pumping blood to all organs and tissues of the body. It is located between the lungs, in front of the esophagus and the aorta [5, 6]. The fatty layer around the outside of the heart is called the Epicardial Adipose Tissue (EAT). EAT is an unusual visceral fat deposit that has both anatomic and functional contact with the myocardium and coronary arteries [7, 8]. Regarding the physiological aspects, this tissue has cardioprotective properties; when associated with a pathological condition, however, this fat can affect the structures with which it is in contact [9]. Fat deposits in the vicinity of the heart are related to various health risks [10], such as coronary artery calcification [11, 12, 13], coronary artery disease [14, 15], atrial fibrillation [16], atherosclerosis [17, 18, 19, 20], and carotid stiffness [21], among others [22].

Echocardiography [23], Magnetic Resonance Imaging (MRI) [24, 25], and Computed Tomography (CT) [26] are non-invasive methods used by physicians to perform a clinical evaluation of this fat [27, 28, 29]. Figure 1 shows images of the three exams used for the quantification of cardiac fat: echocardiography, magnetic resonance imaging, and computed tomography. Although the images resulting from these exams present high resolution, differences and peculiarities among them lead to a preference for one type of examination over another.

Figure 1: Images of non-invasive examinations for clinical analysis of heart fat: (a) Echocardiography; (b) Magnetic Resonance Imaging; (c) Computed Tomography [30].

Echocardiograms are widely available in the medical field and have a low cost. However, it is not possible to determine the volume of fat with this examination [31]. Magnetic resonance imaging is the gold standard for fat quantification because it presents high spatial resolution and low interobserver variability [32]. However, this exam is expensive, resulting in low availability. Compared to echocardiography and magnetic resonance imaging, computed tomography provides a more accurate assessment of epicardial adipose tissue [26]. With a CT examination, it is possible to evaluate the thickness, area, and volume of the fat, besides allowing the evaluation of the coronary arteries in the same exam.

Computed Tomography maps the attenuation coefficients of the X-rays that cross the body under analysis and, from these data, reconstructs a model of this body that represents its anatomical form. The CT scanner was developed by Godfrey Hounsfield; it provides high-quality cross-sectional images, is able to process a very large number of measurements with very complex mathematical operations, and provides results with great accuracy [33]. Figure 2 shows a non-contrasted computed tomography image. In this case, even without contrast, the specialist was able to make an appropriate diagnosis. The pericardium (a) is displayed as a thin line in the anterior region of the heart, (b) represents the epicardial fat, (c) the adipose tissue, and (d) the muscle and cardiac chambers.

Figure 2: (a) indicates the pericardium; (b) points to the epicardial fat; (c) the adipose tissue; and (d) muscle and cardiac cavities [34].

The segmentation of cardiac anatomical structures and their fat is a challenge for the diagnosis of cardiovascular diseases [35]. The manual procedure can be laborious. Among the main challenges encountered by experts is the fact that an accurate diagnosis requires clinical knowledge and training in radiology and pathology. Therefore, there is a need for multidisciplinary work, introducing complexity into care. In manual procedures, the specialist inserts several points to detect the fats and separate them from other structures, such as the lungs, aorta, and bones. The specialist then sets the fat intensity values in Hounsfield units [36] and subsequently quantifies the fat volume.

Manual and subjective quantifications often involve significant intra- or interobserver variances. Human limitations, such as physical tiredness, fatigue, and the repetitiveness of actions, are contributing factors. Errors in the fat quantification step can affect the analysis of the specialist during clinical routine and result in an inaccurate diagnosis. Therefore, semi-automatic and automatic segmentation methods have been proposed to overcome inaccurate hand segmentation.

Methods based on the manual delimitation of the pericardial contour and adaptive thresholds initiated the research on fat segmentation. The need for an expert to operate the system and review the results of each exam makes these solutions unviable for clinical applications. Subsequently, atlas-based methods, registration and classification algorithms, and deep learning approaches were developed. They showed satisfactory cardiac fat segmentation results, but the segmentation time and the computational effort required make them challenging to integrate into daily use by specialists.

The atlas-based methods of Ding et al. [37] and Shahzad et al. [38] assume that the heart shape data of the trained exams and of a new exam are analogous to each other. However, this idea may be invalidated if there are anatomical variations between individuals. Cardiac fat segmentation with registration and classification algorithms was developed by Rodrigues et al. [39] and Rodrigues et al. [40]. However, the time to segment an exam does not meet the needs of medical routines: on average, the similarity index was 97.6%, but the method required 1.8 hours to analyze an exam. The authors themselves stated that it still needed adaptation to reduce this time. One year later, the same authors reached a time of 0.9 hours [40]. The segmentation time of a full exam with Commandeur et al.'s work [41] was 26 seconds. However, the method used was deep learning, and the authors themselves state that the work needs more samples to ascertain its segmentation more accurately.

When it comes to image processing of the heart, one of the main challenges is that the pixel-level difference between cardiac fat, heart, and background is not easily distinguished. Over the years, works have turned to other methods due to the difficulty of overcoming this challenge; however, they ran into the restrictions already mentioned. Moreover, another challenge that the approach aims to overcome is to perform the segmentation in a time short enough to integrate it into medical routines.

Given all the limitations presented above, this work proposes a fast, accurate, and fully automated method for the segmentation of cardiac fat in CT imaging. First, the Floor of Log (FoL) clustering algorithm, along with machine learning, segments almost all the heart fat within seconds. Then, mathematical morphology and a hole filling technique are added to the approach to remove the remaining human body components (noise). As far as we know, this is the fastest method for segmenting heart fat, with efficient segmentation and low computational cost.

Among the main contributions delivered by this work, it is possible to mention:

1. Optimization of cardiac fat segmentation with the Floor of Log (FoL) clustering algorithm, which, combined with machine learning and morphological image processing, is able to segment heart fat in seconds.
2. Beyond segmentation accuracy, the proposed approach focuses on the segmentation time of the exam. Most works in the literature focus only on precision, making their application unfeasible due to the long time required.
3. A new application area for the clustering algorithm Floor of Log (FoL).
4. The approach is automatic, without human interaction or any form of manual initialization.

The remainder of this article is organized as follows: Section 2 provides the details of the methodology and describes the proposed technique; in Section 3 the results obtained are displayed and compared with the results of previous works; Section 4 presents the conclusions, as well as suggestions for future works.

2. Methodology

This section presents a detailed description of the segmentation of heart fat in a set of Computed Tomography images. Figure 3 shows the flowchart of the method. The algorithm has four main blocks: the application of the Floor of Log to the image (Subsection 2.2), the separation of the regions present in the CT by thresholding (Subsection 2.3), the processing of the images for noise removal using mathematical morphology, and the selection of the heart fat (Subsection 2.4).

Figure 3: The methodology adopted in this work for the segmentation of cardiac fat. Flowchart blocks: Open DICOM Image → Median Filter → Floor of Log → Normalization → Thresholding → (Lung; Region of Fat and its Internal Area) → Filling Hole Technique → Union of Regions → Mathematical Morphology → Thresholding → Maximum Point → ROI (based on maximum point) → Visual Comparison → Segmented Fat.

2.1. Database description

The set of medical images used is from the database provided by Visual Lab (http://visual.ic.uff.br/en/cardio/ctfat/). The data correspond to cardiac Computed Tomography examinations of ten patients. All patients were informed about the objectives of this study and signed a consent form [39]. The amount of fat in these images varies from slice to slice. Figure 4 shows an illustration of a heart, the stacked slices of the heart, and the processed Digital Imaging and Communications in Medicine (DICOM) images. Figure 5 shows samples of the DICOM images and their respective Ground Truth (GT) images. The resolution of the images corresponds to 512 pixels in width and 512 pixels in height. A physicist and a computer scientist performed the manual segmentation of the images in the axial plane [39]. Each color labeled by the specialists corresponds to one type of fat: red, green, and blue represent, respectively, the epicardial fat, the mediastinal fat, and the gap between these two fats, called the pericardium. In this work, as a first analysis of our proposed method, we focused on the segmentation of the fat of the heart as a whole, without differentiating the type of fat. Therefore, Figure 5 presents the DICOM image, the GT image with the differentiation of fat types, and the GT considering the fat as a whole, in blue.

Figure 4: 1) A three-dimensional representation of a heart; 2) the stacked slices, in slice order; 3) the DICOM images (slice 1 to slice n), processed one by one.

2.2. Floor of Log and Training Algorithm

Segmentation accuracy has improved considerably in recent years, at the expense of segmentation time. For example, in the work of Rodrigues et al. [39], the time required for the analysis of a complete examination was 1.8 hours. Therefore, it is essential to overcome this limitation and make segmentation faster and more effective.

A cardiac Computed Tomography image represents a chest wall, in which there are several structures, such as vessels, arteries, the airway, the pleura, and the pulmonary parenchyma. Each region possesses specific information, for example, the relative value of the pixels of that region. This value is the same for the set of images of each exam. This information was useful in a later stage of the segmentation.

This work proposes an important innovation to segment cardiac fat in computed tomography images: data compression by clustering.

Figure 5: Samples from the medical imaging data set (a, d), Ground Truth images with fat type classification (b, e), and Ground Truth images considering the unification of the heart fat types (c, f).

This technique highlights the object of interest compared to the other regions, thus allowing a much faster and more efficient segmentation. Solon et al. [42] used the FoL method to perform automatic segmentation of the lungs. A linear transformation with a logarithmic function is performed on the data set; the purpose of this transformation is to bring the data values closer to each other. In this work, the data correspond to the pixels of the image. Subsequently, a floor function is applied, so that only the lower bound of that function is preserved.

Equation 1 represents the principal function of the clustering method. The parameter b is the base of the logarithmic function. The linear transformation depends on an adjustment of its slope, in this case, the logarithmic base. Depending on the value assigned to b, it is possible to separate the data efficiently, as well as to avoid clustering the data of different classes.

f(x) = log_b(x + 1)    (1)
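For concreteness, the sketch below applies the transform of Equation 1 followed by the floor step described above. This is a minimal reading of the method, not the authors' implementation; the function name and the synthetic input are ours, and the base b = 1.562 anticipates the value learned in Section 3.

```python
import numpy as np

def floor_of_log(image, b):
    """Floor of Log: floor(log_b(x + 1)) applied pixel-wise."""
    x = image.astype(np.float64)
    log_b = np.log(x + 1.0) / np.log(b)  # Equation 1, via change of base
    return np.floor(log_b)               # keep only the lower bound

# A 12-bit CT slice collapses into a handful of clusters:
ct_slice = np.random.randint(0, 4096, size=(512, 512))
clusters = floor_of_log(ct_slice, b=1.562)
print(np.unique(clusters))  # roughly 19 discrete levels instead of 4096
```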

To analyze different logarithmic bases and define the most appropriate value, supervised machine learning techniques can be used. Consequently, a cost function is required to simulate gradient descent. The cost function numerically evaluates each solution; creating it involves defining the labels used for the pixel intensities of the object of interest. The problem can be interpreted as a binary separation problem, as the cost function works to highlight the difference between the heart fat region and the non-heart-fat region. The training step aims to iteratively minimize the cost function by assigning a value to b. If b is being updated and the value of the cost function does not change, it is preferable to use the largest log base [42], as this will further compress the data and avoid unnecessary clusters. Algorithm 1 defines the training step. Based on Statistical Learning Theory, the Support Vector Machine (SVM) is used for classification and regression analysis [43]. Based on the principle of structural risk minimization, the SVM was used in this work to create a hyperplane that separates the essential features of each class.

After applying the logarithmic transformation, some noise in the image can disturb the clustering process. The use of a filter can aid in clustering the pixels of the image in a local way, favoring the homogenization of the region of interest as well as of the external region. The works of Omer et al. [44] and Paranjape et al. [45] evidenced the noise reduction efficiency of a Median Filter in Computed Tomography images. Therefore, in this work, the filter responsible for this step was the median.

Median filtering is a nonlinear filtering method. It is efficient in the presence of impulse noise, also called salt-and-pepper noise. This filter is the primary choice in cases where the image will later be used for high-level operations such as segmentation [46]. One of the justifications for its use is the preservation of edges during filtering [44, 47]. The median filter consists of replacing the intensity of each pixel of a noisy image u by the median of the pixels present in a two-dimensional mask of size N × N. In Equation 2, ũ_ij is a pixel retrieved by the filter, and ũ is the image restored from the noisy image u.

ũ_ij = median{u_st : u_st ∈ W_ij^u(h)}    (2)
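A sketch of the median filtering step of Equation 2 follows; scipy.ndimage.median_filter (or, equivalently, OpenCV's cv2.medianBlur for 8-bit images) implements the sliding-window median directly. The window size N is not stated in the paper, so 3 is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_slice(image, n=3):
    """Replace each pixel by the median of its N x N neighborhood (Eq. 2)."""
    return median_filter(image, size=n)

noisy = np.random.randint(0, 256, size=(512, 512)).astype(np.uint8)
restored = denoise_slice(noisy, n=3)  # edge-preserving impulse-noise removal
```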

Figure 6 shows the input image and the image resulting from the FoL method with the median filter, accompanied by their respective histograms. The histograms in Figure 6 show the local grouping of the pixel modes, with the attenuation and homogenization of the region of interest and its cluster.
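The paper does not spell out the cost function used to train b (Section 3 only reports 130 iterations with a step of 0.01 and a learned base of 1.562), so the sweep below is a naive stand-in: it scores each candidate base by pixel disagreement against ground-truth masks and, among ties, prefers the largest base, as recommended in [42]. The helper names, the search range, and the fat-cluster assumption are ours.

```python
import numpy as np

def fol(image, b):
    return np.floor(np.log(image.astype(np.float64) + 1.0) / np.log(b))

def cost(b, images, gt_masks, fat_cluster=1):
    """Hypothetical cost: fraction of pixels where the FoL-derived fat
    mask disagrees with the ground truth, averaged over the images."""
    errs = [np.mean((fol(img, b) == fat_cluster) != gt)
            for img, gt in zip(images, gt_masks)]
    return float(np.mean(errs))

def learn_base(images, gt_masks, b0=1.01, step=0.01, iters=130):
    candidates = b0 + step * np.arange(iters)      # 130 steps of 0.01
    costs = np.array([cost(b, images, gt_masks) for b in candidates])
    ties = np.flatnonzero(costs == costs.min())
    return float(candidates[ties.max()])           # largest base among ties
```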

Figure 6: Input image (a) and the resulting image (c) of the Floor of Log method, accompanied by their respective histograms (b) and (d).

2.3. Regions on computed tomography

Image thresholding is a straightforward and computationally fast segmentation tool. It is suitable for applications that require the identification and extraction of an image object and in which the computational time spent is a relevant factor. Thresholding is a form of grouping homogeneous regions. The second column of Figure 6 presents the histogram, with intensities ranging from 0 (black) to 255 (white). There are two peaks (a bimodal histogram): the first represents the object (dark), and the second represents the background (light). They are separated by a valley, which in this example would be the most appropriate place to set the threshold.

An algorithm for automatically determining the threshold between regions was applied to the normalized image resulting from the FoL method. The image normalization was performed in order to work on a grayscale ranging from 0 to 255. Five arbitrarily selected images were used to find the threshold value T between the regions. The histogram of the image intensities was divided into two parts based on an initial guess [42]. The mean intensities of the two regions were calculated, and then the mean of these two values was used for the next image. Thus, a thresholded image A(x, y) can be defined by Equation 3.

A(x, y) = 255, if I(x, y) > T;  A(x, y) = 0, otherwise.    (3)
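A sketch of the iterative threshold selection and of the binarization of Equation 3 is given below; the loop is the classic intermeans iteration the text describes, and the initial guess of the mid-gray value 128 is an assumption.

```python
import numpy as np

def find_threshold(image, t0=128.0, eps=0.5):
    """Split the histogram at T, average the two region means, repeat."""
    t = t0
    while True:
        low, high = image[image <= t], image[image > t]
        if low.size == 0 or high.size == 0:
            return t                      # degenerate split: keep T
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def binarize(image, t):
    """Equation 3: 255 above the threshold, 0 otherwise."""
    return np.where(image > t, 255, 0).astype(np.uint8)

normalized = np.random.randint(0, 256, size=(512, 512)).astype(np.float64)
mask = binarize(normalized, find_threshold(normalized))
```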

Since the distribution is bimodal, the algorithm converges to the midpoint between the two peaks, which represent the lung region and the fat. Once the threshold parameters have been found, the entire set of images from the same patient is standardized based on the parameters for the fat and lung intervals. A large binary component is obtained from the previous step, and then a region close to the region of interest is found. Thus, the segmentation process of the proposed method offers a faster convergence. Figure 7.1 shows the result of this process.

2.4. Processing of images for the removal of noise

Figure 7 shows three images from different exams. In all the images displayed in step 1 of Figure 7, there are some regions in the lung area with holes. Consequently, there would be errors in identifying the aggregate particles. The Hole Filling method was applied to avoid such problems. The hole filling procedure is fully automated. The method consists of morphological reconstruction, which involves dilation and geodesic erosion operations [48]. This process can extract all the holes and connect them to the original grayscale image, providing an image with fewer holes.

Figure 7: From top to bottom, three examples (A, B, C) of the stages of work with the lung region: 1) image of the binarized lung input; 2) Hole Filling Technique; 3) Mathematical morphology (closing); and 4) acquisition of the maximum lung point.

Initially, an array X0 is created with the same size as the array W. The exception is the coordinates of X0 corresponding to the points given in each hole: at these positions, the pixel value is set to 255. In the later steps, the procedure given by Equation 4 fills all the gaps with 255.

X_j = (X_{j−1} ⊕ E) ∩ W^c,  j = 1, 2, 3, ...    (4)

E is the structuring element. The termination criterion of the algorithm is the iteration j at which X_j = X_{j−1}. The set X_j then contains all the filled holes. The intersection of each iteration with W^c constrains the result to the region of interest. In this way, the union of X_j and W includes all the filled holes and their borders.
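Below is a sketch of the reconstruction of Equation 4 using boolean masks (True instead of 255); the 3 × 3 structuring element is an assumption, since E is not specified in the text. In practice, scipy.ndimage.binary_fill_holes performs the same reconstruction without explicit seed points.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def fill_holes(w, seeds):
    """Equation 4: X_j = (X_{j-1} dilated by E) intersected with W^c,
    iterated until X_j stops changing; w is the binary lung mask."""
    e = np.ones((3, 3), dtype=bool)    # structuring element E (assumed)
    x = np.zeros_like(w, dtype=bool)
    for r, c in seeds:
        x[r, c] = True                 # one starting point inside each hole
    w_c = ~w                           # complement of W bounds the growth
    while True:
        x_next = binary_dilation(x, structure=e) & w_c
        if np.array_equal(x_next, x):  # termination: X_j == X_{j-1}
            return w | x               # union of W and the filled holes
        x = x_next
```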

Figure 8 presents the procedure of Equation 4 by way of illustration.

Figure 8: (a) Input image highlighting the first hole to be filled by the method; (b) the structuring element; (c) set W with shading; (d) complement of W; (e) starting point within the boundary; (f)-(h) several steps of Equation 4; (i) final result (the union of (c) and (h)); (j) output image after repeating operations (b)-(h) for the various holes in image (a).

The Hole Filling method [49, 50] was applied to eliminate and close the noise, resulting in a homogeneous region. The result of this technique is shown in Figure 7.2. Subsequently, the mathematical morphology (closing) technique is applied to the binarized image. Mathematical morphology refers to the study and analysis of images using non-linear operators [51]. The language of mathematical morphology is set theory [52]. It includes the analysis of the interaction between two sets: the pixels of the image and the structuring element.

Dilation and erosion are the main operators of mathematical morphology. Suppose X and Y are two non-empty sets, where X corresponds to the image and Y is the structuring element. The binary erosion of X by Y in W^2 is defined by X ⊖ Y = {w | (Y)_w ⊆ X}; after this operation, the white pixels of the image are reduced. The binary dilation of X by Y in W^2 is X ⊕ Y = {w | [(Ŷ)_w ∩ X] ⊆ X}, resulting in the expansion of the white pixels present in the input image.

The binary closing of a set X by Y is defined as the dilation operation followed by the erosion operation, and may be expressed as X • Y = (X ⊕ Y) ⊖ Y. Consequently, the voids in the shapes present in the image are filled.

The multiplication of the binarized DICOM input image B (Figure 9.b) with the result of the mathematical morphology method (Figure 9.a) corresponds to the final Region Of Interest (ROI). Equation 5 describes the mathematical formulation of this operation.

I_ROI(i, j) = I(i, j) ⊗ B_w(i, j)    (5)

Since the common region of these two images corresponds to noise, this operation removes it. Figure 9.c displays the result.

Figure 9: Images of the ROI extraction step: (a) image after the mathematical morphology; (b) binarized DICOM input image; (c) common region removed; (d) region of interest, plus a region at the top; (e) region of interest; (f) segmented heart fat.
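A sketch of the closing and masking steps using OpenCV, which the authors report using (Section 3), is shown below; the 5 × 5 structuring element is an assumption, as its size is not given. The masks are 0/255 images, so one operand is normalized before the element-wise product of Equation 5.

```python
import cv2
import numpy as np

def extract_roi(morph_input, binarized_dicom):
    """Closing (dilation then erosion), then the product of Equation 5."""
    kernel = np.ones((5, 5), dtype=np.uint8)          # size assumed
    closed = cv2.morphologyEx(morph_input, cv2.MORPH_CLOSE, kernel)
    return closed * (binarized_dicom // 255)          # element-wise mask
```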

In addition to the heart fat, there are still other components of the human body in the image. These regions are not classified as fat, but rather as part of another thoracic area. This region is at the top of the images, as shown in Figure 9.d, and in some cases it may confuse the method. The criterion for removing these regions is the coordinate of the maximum point of the lung in slice 1 of the examination. Figure 7.4 illustrates this discarded upper region. On discarding that upper region, only the segmented heart fat remains in the resulting image, as shown in Figure 9.e.
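Interpreting the "maximum point of the lung" as the topmost lung coordinate in slice 1 (our reading; the paper does not define it formally), the discard step can be sketched as:

```python
import numpy as np

def discard_upper_region(fat_mask, lung_mask_slice1):
    """Zero out all rows above the lung's topmost point (row 0 is the top)."""
    rows = np.flatnonzero(lung_mask_slice1.any(axis=1))
    if rows.size == 0:
        return fat_mask               # no lung found: leave the mask as-is
    cleaned = fat_mask.copy()
    cleaned[:rows[0], :] = 0          # drop the non-fat thoracic region
    return cleaned
```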

2.5. Criteria to evaluate

This work used the segmentation time and evaluative metrics to provide a quantitative evaluation of the results of the cardiac fat segmentation carried out here. Figure 10 shows a diagram of the acquisition of these evaluation parameters.

Figure 10: Methodology to acquire the exam segmentation time, the image segmentation time, and the evaluation metrics.

The time evaluation considered both the time to segment a slice (an image) and the time to segment the whole exam. The metrics were based on the number of pixels correctly and/or incorrectly assigned to a (specific) region of the image [50]. The final image has two regions, so the segmentation process is interpreted as a classification [53, 54]. Each pixel is labeled 0 or 1, where 0 means that the pixel belongs to the Region of Interest (ROI), the fat, and 1 means it belongs to the Background Region (BR). The target region represents the area of interest, called the positive class, while the background region is called the negative class. A precise segmentation should present a Segmented Region (SR) as similar as possible to the Region of Interest. TP, TN, FP, and FN correspond, respectively, to True Positives, True Negatives, False Positives, and False Negatives.

Figure 11 illustrates the graphical representation of TP, FP, TN, and FN. The background of the image is composed of TN + FP, and the region of interest is the sum TP + FN, where:

• True Positives (TP) are the pixels of cardiac fat correctly segmented;
• True Negatives (TN) are background pixels correctly classified as background;
• False Positives (FP) are background pixels classified as heart fat;
• False Negatives (FN) are pixels of cardiac fat classified as background.

The parameters of the Confusion Matrix are used to calculate the evaluative metrics.

Figure 11: Representation of the TP, TN, FP, and FN regions. The yellow circle is the Region of Interest (ROI); this circle corresponds to the reference region, considered the gold standard. The blue circle represents the segmented region, which is precisely the region obtained by a computational method (adapted from Garcia, Dominguez and Mederos [55]).

Accuracy (Acc) is the metric that represents the proportion of pixels correctly identified relative to all the pixels in the image. Following the confusion matrix parameters, Acc is defined in Equation 6.

Acc = (TP + TN) / (TP + TN + FP + FN)    (6)

Specificity (Sp) is the ratio of true negatives to the sum of true negatives and false positives (Equation 7). This metric shows how effective the proposed segmentation method is at not labeling background pixels of the image as cardiac fat.

Sp = TN / (TN + FP)    (7)
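Equations 6 and 7 reduce to counting pixels of the binary masks; a minimal sketch, assuming True marks cardiac-fat pixels in both the predicted and ground-truth masks:

```python
import numpy as np

def evaluate(pred, gt):
    """Accuracy (Eq. 6) and Specificity (Eq. 7) from binary masks."""
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    acc = (tp + tn) / (tp + tn + fp + fn)
    spec = tn / (tn + fp)
    return acc, spec
```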

3. Results and Discussion

In this section, the results obtained by the proposed approach are presented and compared with those of other studies from the literature. The discussion takes into account the time needed to obtain the segmentation, as well as the accuracy and specificity metrics. All the methodologies were implemented in the Python language using the PyCharm IDE, along with the free OpenCV 3.0 library. The entire computational process was performed on a computer with the Windows 10 operating system, an Intel Core i7 processor at 2.4 GHz, and 8 GB of RAM.

The SVM classifier learned the base of the logarithm function used in the FoL method, as presented in Algorithm 1. In the training step, there were ten cross-validations with 130 iterations, using 0.01 as the step. This work used 1.562 as the logarithm base. The time taken by the proposed approach to learn the parameter is two minutes on average. After the training process, the FoL method and the subsequent digital image processing methods were applied, as shown in Figure 3.

For each slice of the exam presented in Figure 12, the CT scan image is shown, as well as the segmentation performed by a professional and the segmentation from the proposed approach. The results obtained with the proposed approach are considered satisfactory. The segmentation obtained by a professional is subjective, and often there are regions with fat that are missed by the expert. Consequently, there is an erroneous quantification of the fat and a change in the analysis of the specialist physician, resulting in an inaccurate diagnosis. Unlike conventional segmentation, which depends on the skill and experience of the professional, the proposed approach makes use of a fat grouping pattern; consequently, there is a reduction of fat loss in the segmentation. As noted in Figure 12, the segmented fat goes beyond the reference region and covers all of the heart fat.

Figure 12: Comparison of the cardiac fat segmentation performed by the specialist (middle column) and the automatic segmentation (right column), for the inferior, median, and superior slices; the figures in the left column are the exams.

Figure 13 presents an example of the qualitative analysis of a fat segmentation in a Computed Tomography image, where green marks the true positive pixels, white the true negative pixels, blue the false positives, and red the false negatives. Figure 13 has a large amount of green pixels compared to the small regions with the other colors, demonstrating the precision of the fat segmentation. The other colors are present due to super-segmentation or sub-segmentation. Super-segmentation, characterized by the blue color, occurs when the vicinity of the fat is segmented as a component of the fat; this is the case in which, for example, for some subjective reason the professional did not mark fat that is present and the proposed approach was able to detect it. Methods that present sub-segmentation detect only part of the fat, which is characterized by the presence of the red color. This analysis showed that the proposed method had few regions with faults, and the green color predominated.

Figure 13: Regions of the segmentation of the heart fat obtained by the proposed method, adopting green for true positive pixels, blue for false positives, and red for false negatives.

Table 1 shows the segmentation time per exam and the mean segmentation time per slice, with its respective standard deviation, for all ten exams. The proposed approach is fast, as noted in the time results shown in Table 1. The variation in the time required for segmentation is due to the number of images in each exam. The average time to segment an exam is 2.01 seconds. This technique does not require a high computational cost like the techniques discussed in previous works, which justifies the reduced total segmentation time obtained.

Exam     Total time (s)   Time per slice (s)   Acc %     Spec %
1        2.3033           0.0606 ± 0.001       95.3670   97.2789
2        1.8871           0.0524 ± 0.003       94.0768   97.6629
3        2.1883           0.0521 ± 0.001       92.9879   94.6785
4        1.8679           0.0534 ± 0.002       93.3473   93.0361
5        1.8312           0.0509 ± 0.000       95.2778   97.5384
6        2.0200           0.0518 ± 0.001       92.6731   94.2864
7        1.7959           0.0499 ± 0.000       91.7109   95.4651
8        2.2366           0.0532 ± 0.002       93.3946   96.9827
9        1.9336           0.0509 ± 0.001       90.7798   94.6130
10       2.0832           0.0521 ± 0.000       94.8576   93.6180
Average  2.01471          -                    93.4472   95.5160

Table 1: Values of the evaluative metrics, Accuracy (Acc) and Specificity (Spec), for each of the ten exams, together with the exam segmentation time and the average time per slice.

The results of the segmentation of the entire database using the proposed method are presented in Table 1. The metrics with their respective results shown there are Accuracy and Specificity. The results in Table 1 show that the proposed method obtained an accuracy greater than 90.0% in all the exams, reaching 95.3670% in exam 1. The specificity values were higher than 93.0%, reaching 97.5384% in exam 5. The imprecision of the marking specialist influenced the accuracy values: for example, for the same exam, some regions were considered fat in one slice, while in the posterior slice those same regions were classified as non-fat by the expert. The specificity metric validates the process and the results of the accuracy metric by gauging the reliability of the segmentation. In addition to the efficiency in time, as discussed above, the proposed method is satisfactory concerning the segmentation accuracy.

Finally, it is necessary to evaluate the proposed approach against the other works of the literature; Table 2 presents this analysis. Regarding the time spent in the segmentation, considering from the beginning of the method until the exam is fully segmented, the proposed method presented a decrease in the required time. To perform a comparison in the same units, the times were transformed from hours to seconds; for example, in the work of Rodrigues et al. [39] the time required was 6840 seconds (about 1.9 hours). The dataset used in this work is the same as that adopted in the studies of Rodrigues et al. [40] and Rodrigues et al. [39]. In a direct comparison on the same dataset, the segmentation of all the cardiac fat of a CT scan was about 1608 times faster than that of Rodrigues et al. [40] and 3216 times faster than that of Rodrigues et al. [39]. Therefore, the segmentation time of this study is the lowest among all the studies that made use of the same dataset.

The accuracy value achieved with this methodology is lower than the accuracy values reported by studies in the literature. However, those studies obtained high accuracy values at the expense of segmentation time, or through methods with a high computational cost, where numerous data are required to prove their accuracy. Therefore, the values achieved in this work are as satisfactory as those obtained in previous works. This research combines a reduced segmentation time of the cardiac fat with satisfactory results concerning the segmentation evaluation metrics.

Work                     Exam Segmentation Time   Accuracy %
Proposed                 2.01 seconds             93.45
Commandeur et al. [41]   26 seconds               -
Ding et al. [37]         60 seconds               -
Rodrigues et al. [40]    3240 seconds             98.72
Rodrigues et al. [39]    6840 seconds             97.76

Table 2: Comparison between the segmentation times per exam of the proposed approach and the other works in the literature.

4. Conclusion and Future Works

In this work, we proposed a fast method to segment the heart fat in non-contrasted Computed Tomography images automatically and effectively. The proposed method achieved promising results regarding the objectives of this work, besides constituting a competitive method in relation to the works in the literature, considering the time, the evaluative metrics, and the quality of the segmentation. The analysis of a patient exam takes an average time of 2.01 seconds, with an accuracy of 93.45% and a specificity of 95.52%. Thus, the results show that the proposed method can integrate medical diagnostic aid systems for the segmentation of cardiac fat in Computed Tomography images of the heart.

The comparison of the results strongly confirms that the suggested work outperforms all the other existing techniques in terms of speed, while having metric values as high as the others. However, there is the need for further analysis to segment the types of fat in the regions adjacent to the heart, with the aim of reducing cardiovascular diseases. Therefore, it is fair to say that the suggested work presents results that open a new field of research for future developments.

Acknowledgement

VHCA received support from the Brazilian National Council for Research and Development (CNPq, Grants 304315/2017-6 and 430274/2018-1).

References

[1] C. D. Mathers, D. Loncar, Projections of global mortality and burden of disease from 2002 to 2030, PLoS Medicine 3 (11) (2006) e442. doi:10.1371/journal.pmed.0030442.

[2] P. Peng, K. Lekadir, A. Gooya, L. Shao, S. E. Petersen, A. F. Frangi, A review of heart chamber segmentation for structural and functional analysis using cardiac magnetic resonance imaging, Magnetic Resonance Materials in Physics, Biology and Medicine 29 (2) (2016) 155–195. doi:10.1007/s10334-015-0521-4.

[3] K. Dudas, G. Lappas, S. Stewart, A. Rosengren, Trends in out-of-hospital deaths due to coronary heart disease in Sweden (1991 to 2006), Circulation 123 (1) (2011) 46–52. doi:10.1161/CIRCULATIONAHA.110.964999.

[4] Causes of sudden natural death in Jamaica: a medicolegal (coroner's) autopsy study from the University Hospital of the West Indies, Forensic Science International 129 (2) (2002) 116–121. doi:10.1016/S0379-0738(02)00268-2.

[5] D. K. Ravish, K. J. Shanthi, N. R. Shenoy, S. Nisargh, Heart function monitoring, prediction and prevention of heart attacks: Using artificial neural networks, in: 2014 International Conference on Contemporary Computing and Informatics (IC3I), 2014, pp. 1–6. doi:10.1109/IC3I.2014.7019580.

[6] B. Gaborit, C. Sengenes, P. Ancel, A. Jacquier, A. Dutour, Role of epicardial adipose tissue in health and disease: a matter of fat?, Comprehensive Physiology 7 (3) (2011) 1051–1082. doi:10.1002/cphy.c160034.

[7] Human epicardial adipose tissue: A review, American Heart Journal 153 (6) (2007) 907–917. doi:10.1016/j.ahj.2007.03.019.

[8] D. Corradi, R. Maestri, S. Callegari, P. Pastori, M. Goldoni, T. V. Luong, C. Bordi, The ventricular epicardial fat is related to the myocardial mass in normal, ischemic and hypertrophic hearts, Cardiovascular Pathology 13 (6) (2004) 313–316. doi:10.1016/j.carpath.2004.08.005.

[9] Epicardial adipose tissue: emerging physiological, pathophysiological and clinical features, Trends in Endocrinology & Metabolism 22 (11) (2011) 450–457. doi:10.1016/j.tem.2011.07.003.

[10] P. Peng, K. Lekadir, A. Gooya, L. Shao, S. E. Petersen, A. F. Frangi, Pericardial fat, visceral abdominal fat, cardiovascular disease risk factors, and vascular calcification in a community-based sample: the Framingham Heart Study.

[11] O. Hartiala, C. G. Magnussen, M. Bucci, S. Kajander, J. Knuuti, H. Ukkonen, A. Saraste, I. Rinta-Kiikka, S. Kainulainen, M. Kähönen, N. Hutri-Kähönen, T. Laitinen, T. Lehtimäki, J. S. Viikari, J. Hartiala, M. Juonala, O. T. Raitakari, Coronary heart disease risk factors, coronary artery calcification and epicardial fat volume in the Young Finns Study, European Heart Journal – Cardiovascular Imaging 16 (11) (2015) 1256–1263. doi:10.1093/ehjci/jev085.

[12] P. M. Gorter, A. M. de Vos, Y. van der Graaf, P. R. Stella, P. A. Doevendans, M. F. Meijs, M. Prokop, F. L. Visseren, Relation of epicardial and pericoronary fat to coronary atherosclerosis and coronary artery calcium in patients undergoing coronary angiography, The American Journal of Cardiology 102 (4) (2008) 380–385.

[13] C. L. Schlett, M. Ferencik, M. F. Kriegel, F. Bamberg, B. B. Ghoshhajra, S. B. Joshi, J. T. Nagurney, C. S. Fox, Q. A. Truong, U. Hoffmann, Association of pericardial fat and coronary high-risk lesions as determined by cardiac CT, Atherosclerosis 222 (1) (2012) 129–134.

[14] T. Lossau (née Elss), H. Nickisch, T. Wissel, R. Bippus, H. Schmitt, M. Morlock, M. Grass, Motion estimation and correction in cardiac CT angiography images using convolutional neural networks, Computerized Medical Imaging and Graphics 76 (2019) 101640. doi:10.1016/j.compmedimag.2019.06.001. URL http://www.sciencedirect.com/science/article/pii/S0895611119300515

[15] T. Lossau, H. Nickisch, T. Wissel, R. Bippus, H. Schmitt, M. Morlock, M. Grass, Motion artifact recognition and quantification in coronary CT angiography using convolutional neural networks, Medical Image Analysis 52 (2019) 68–79. doi:10.1016/j.media.2018.11.003. URL http://www.sciencedirect.com/science/article/pii/S1361841518308624

[16] A. A. Mahabadi, N. Lehmann, H. Kälsch, T. Robens, M. Bauer, I. Dykun, T. Budde, S. Moebus, K.-H. Jöckel, R. Erbel, S. Möhlenkamp, Association of epicardial adipose tissue with progression of coronary artery calcification is more pronounced in the early phase of atherosclerosis: Results from the Heinz Nixdorf Recall study, JACC: Cardiovascular Imaging 7 (9) (2014) 909–916. doi:10.1016/j.jcmg.2014.07.002.

[17] R. Djaberi, J. D. Schuijf, J. M. van Werkhoven, G. Nucifora, J. W. Jukema, J. J. Bax, Relation of epicardial adipose tissue to coronary atherosclerosis, The American Journal of Cardiology 102 (12) (2008) 1602–1607. doi:10.1016/j.amjcard.2008.08.010.

[18] A. Yerramasu, D. Dey, S. Venuraju, D. V. Anand, S. Atwal, R. Corder, D. S. Berman, A. Lahiri, Increased volume of epicardial fat is an independent risk factor for accelerated progression of sub-clinical coronary atherosclerosis, Atherosclerosis 220 (1) (2012) 223–230. doi:10.1016/j.atherosclerosis.2011.09.041.

[19] T.-Y. Choi, N. Ahmadi, S. Sourayanezhad, I. Zeb, M. J. Budoff, Relation of vascular stiffness with epicardial and pericardial adipose tissues, and coronary atherosclerosis, Atherosclerosis 229 (1) (2013) 118–123. doi:10.1016/j.atherosclerosis.2013.03.003.

[20] S. Zhao, Z. Gao, H. Zhang, Y. Xie, J. Luo, D. Ghista, Z. Wei, X. Bi, H. Xiong, C. Xu, S. Li, Robust segmentation of intima–media borders with different morphologies and dynamics during the cardiac cycle, IEEE Journal of Biomedical and Health Informatics 22 (5) (2018) 1571–1582. doi:10.1109/JBHI.2017.2776246.

[21] T. E. Brinkley, F.-C. Hsu, J. J. Carr, W. G. Hundley, D. A. Bluemke, J. F. Polak, J. Ding, Pericardial fat is associated with carotid stiffness in the Multi-Ethnic Study of Atherosclerosis, Nutrition, Metabolism and Cardiovascular Diseases 21 (5) (2011) 332–338.

[22] P. Raggi, Epicardial adipose tissue as a marker of coronary artery disease risk, Journal of the American College of Cardiology 61 (13) (2013) 1396–1397. doi:10.1016/j.jacc.2012.12.028.

[23] G. Iacobellis, F. Assael, M. C. Ribaudo, A. Zappaterreno, G. Alessi, U. Di Mario, F. Leonetti, Epicardial fat from echocardiography: a new method for visceral adipose tissue prediction, Obesity Research 11 (2) (2003) 304–310. doi:10.1038/oby.2003.45.

[24] K. Kessels, M. M. Cramer, B. Velthuis, Epicardial adipose tissue imaged by magnetic resonance imaging: an important risk marker of cardiovascular disease, Heart 92 (7) (2006) 962. doi:10.1136/hrt.2005.074872.

[25] C. Xu, L. Xu, Z. Gao, S. Zhao, H. Zhang, Y. Zhang, X. Du, S. Zhao, D. Ghista, H. Liu, S. Li, Direct delineation of myocardial infarction without contrast agents using a joint motion feature learning architecture, Medical Image Analysis 50 (2018) 82–94. doi:10.1016/j.media.2018.09.001. URL http://www.sciencedirect.com/science/article/pii/S1361841518306960

[26] G. Coppini, R. Favilla, P. Marraccini, D. Moroni, G. Pieri, Quantification of epicardial fat by cardiac CT imaging, The Open Medical Informatics Journal 4 (2010) 126. doi:10.2174/1874431101004010126.

[27] R. Sicari, A. M. Sironi, R. Petz, F. Frassi, V. Chubuchny, D. D. Marchi, V. Positano, M. Lombardi, E. Picano, A. Gastaldelli, Pericardial rather than epicardial fat is a cardiometabolic risk marker: An MRI vs echo study, Journal of the American Society of Echocardiography 24 (10) (2011) 1156–1162. doi:10.1016/j.echo.2011.06.013.

[28] T. S. Polonsky, R. L. McClelland, N. W. Jorgensen, D. E. Bild, G. L. Burke, A. D. Guerci, P. Greenland, Coronary Artery Calcium Score and Risk Classification for Coronary Heart Disease Prediction, JAMA 303 (16) (2010) 1610–1616. doi:10.1001/jama.2010.461.

[29] A. Zappaterreno, C. Tiberti, E. Vecci, F. Assael, F. Leonetti, M. C. Ribaudo, U. Di Mario, G. Iacobellis, Echocardiographic Epicardial Adipose Tissue Is Related to Anthropometric and Clinical Parameters of Metabolic Syndrome: A New Indicator of Cardiovascular Risk, The Journal of Clinical Endocrinology & Metabolism 88 (11) (2003) 5163–5168. doi:10.1210/jc.2003-030698.

[30] A. G. Bertaso, D. Bertol, B. B. Duncan, M. Foppa, Gordura epicárdica: definição, medidas e revisão sistemática dos principais desfechos [Epicardial fat: definition, measurements and systematic review of the main outcomes], Arquivos Brasileiros de Cardiologia 101 (2013) e18–e28. doi:10.5935/abc.20130138. URL http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0066-782X2013002700020&nrm=iso

[31] A. Gastaldelli, D. Davidovich, R. Sicari, Imaging cardiac fat, European Heart Journal – Cardiovascular Imaging 14 (7) (2013) 625–630. doi:10.1093/ehjci/jet045.

[32] A. Cristobal-Huerta, A. Torrado-Carvajal, N. Malpica, M. Méndez, J. A. Hernández Tamames, Automated quantification of epicardial adipose tissue in cardiac magnetic resonance imaging, 2015, pp. 7308–7311. doi:10.1109/EMBC.2015.7320079.

[33] W. A. Kalender, X-ray computed tomography, Physics in Medicine & Biology 51 (13) (2006) R29. doi:10.1088/0031-9155/51/13/R03.

[34] J. G. Barbosa, B. Figueiredo, N. Bettencourt, J. M. R. Tavares, Towards automatic quantification of the epicardial fat in non-contrasted CT images, Computer Methods in Biomechanics and Biomedical Engineering 14 (10) (2011) 905–914. doi:10.1080/10255842.2010.499871.

[35] F. Zhao, H. Hu, Y. Chen, J. Liang, X. He, Y. Hou, Accurate segmentation of heart volume in CTA with landmark-based registration and fully convolutional network, IEEE Access 7 (2019) 57881–57893. doi:10.1109/ACCESS.2019.2912467.

[36] R. Molteni, Prospects and challenges of rendering tissue density in Hounsfield units for cone beam computed tomography, Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology 116 (1) (2013) 105–119. doi:10.1016/j.oooo.2013.04.013.

[37] X. Ding, D. Terzopoulos, M. Diaz-Zamudio, D. S. Berman, P. J. Slomka, D. Dey, Automated epicardial fat volume quantification from non-contrast CT, in: Medical Imaging 2014: Image Processing, Vol. 9034, International Society for Optics and Photonics, 2014, p. 90340I. doi:10.1117/1.JMI.3.1.014002.

[38] R. Shahzad, D. Bos, C. Metz, A. Rossi, H. Kirişli, A. van der Lugt, S. Klein, J. Witteman, P. de Feyter, W. Niessen, et al., Automatic quantification of epicardial fat volume on non-enhanced cardiac CT scans using a multi-atlas segmentation approach, Medical Physics 40 (9). doi:10.1118/1.4817577.

[39] E. Rodrigues, F. Morais, N. Morais, L. Conci, L. Neto, A. Conci, A novel approach for the automated segmentation and volume quantification of cardiac fats on computed tomography, Computer Methods and Programs in Biomedicine 123 (2016) 109–128. doi:10.1016/j.cmpb.2015.09.017.

[40] E. Rodrigues, V. Pinheiro, P. Liatsis, A. Conci, Machine learning in the prediction of cardiac epicardial and mediastinal fat volumes, Computers in Biology and Medicine 89 (2017) 520–529. doi:10.1016/j.compbiomed.2017.02.010.

[41] F. Commandeur, M. Goeller, J. Betancur, S. Cadet, M. Doris, X. Chen, D. S. Berman, P. J. Slomka, B. K. Tamarappoo, D. Dey, Deep learning for quantification of epicardial and thoracic adipose tissue from non-contrast CT, IEEE Transactions on Medical Imaging 37 (8) (2018) 1835–1846. doi:10.1109/TMI.2018.2804799.

[42] P. Solon, M. Aldísio, R. F. Pedro, A high performance approach for 3D lung segmentation in computer tomography images based on a new supervised clustering algorithm, Journal 0 (0) (2019) 0–0.

[43] V. N. Vapnik, An overview of statistical learning theory, IEEE Transactions on Neural Networks 10 (5) (1999) 988–999. doi:10.1109/72.788640.

[44] A. A. Omer, O. I. Hassan, A. I. Ahmed, A. Abdelrahman, Denoising CT images using median based filters: a review, in: 2018 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE), 2018, pp. 1–6. doi:10.1109/ICCCEEE.2018.8515829.

[45] R. B. Paranjape, Chapter 1 - Fundamental enhancement techniques, in: I. N. Bankman (Ed.), Handbook of Medical Image Processing and Analysis (Second Edition), Academic Press, Burlington, 2009, pp. 3–18. doi:10.1016/B978-012373904-9.50008-8.

[46] T. Jain, P. Bansod, C. B. S. Kushwah, M. Mewara, Reconfigurable hardware for median filtering for image processing applications, in: 2010 3rd International Conference on Emerging Trends in Engineering and Technology, 2010, pp. 172–175. doi:10.1109/ICETET.2010.172.

[47] J. D. Broesch, Chapter 7 - Applications of DSP, in: J. D. Broesch (Ed.), Digital Signal Processing, Instant Access, Newnes, Burlington, 2009, pp. 125–134. doi:10.1016/B978-0-7506-8976-2.00007-9.

[48] P. Maji, A. Mandal, M. Ganguly, S. Saha, An automated method for counting and characterizing red blood cells using mathematical morphology, in: 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR), 2015, pp. 1–6. doi:10.1109/ICAPR.2015.7050674.

[49] C. K. Chui, M.-J. Lai, Filling polygonal holes using C1 cubic triangular spline patches, Computer Aided Geometric Design 17 (4) (2000) 297–307. doi:10.1016/S0167-8396(00)00005-4.

[50] Y. Fan, T. Chi, The novel non-hole-filling approach of depth image based rendering, in: 2008 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2008, pp. 325–328. doi:10.1109/3DTV.2008.4547874.

[51] J. Serra, Image Analysis and Mathematical Morphology, Academic Press, Inc., Orlando, FL, USA, 1983.

[52] P. Soille, Morphological Image Analysis: Principles and Applications, Springer Science & Business Media, 2013.

[53] J. Pont-Tuset, F. Marques, Supervised evaluation of image segmentation and object proposal techniques, IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (7) (2016) 1465–1478. doi:10.1109/TPAMI.2015.2481406.

[54] J. Pont-Tuset, F. Marques, Measures and meta-measures for the supervised evaluation of image segmentation, in: 2013 IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 2131–2138. doi:10.1109/CVPR.2013.277.

[55] V. Garcia, H. de Jesus Ochoa Dominguez, B. Mederos, Analysis of discrepancy metrics used in medical image segmentation, IEEE Latin America Transactions 13 (1) (2015) 235–240. doi:10.1109/TLA.2015.7040653.
