NASAL-Geom, a free upper respiratory tract 3D model reconstruction software




Accepted Manuscript

PII: S0010-4655(17)30348-X
DOI: https://doi.org/10.1016/j.cpc.2017.10.008
Reference: COMPHY 6355

To appear in: Computer Physics Communications

Received date: 23 January 2017
Revised date: 25 April 2017
Accepted date: 18 September 2017

Please cite this article as: J.L. Cercos-Pita, I.R. Cal, D. Duque, G.S. de Moreta, NASAL-Geom, a free upper respiratory tract 3D model reconstruction software, Computer Physics Communications (2017), https://doi.org/10.1016/j.cpc.2017.10.008

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

NASAL-Geom, a free upper respiratory tract 3D model reconstruction software

J.L. Cercos-Pita (a,*), I.R. Cal (b), D. Duque (d), G. Sanjuán de Moreta (a,c)

(a) Research department, NASAL Systems SL, 28020 Madrid, Spain
(b) Computational Engineering Department, Tecnologías Avanzadas Inspiralia SL, 28034 Madrid, Spain
(c) Servicio de Otorrinolaringología, Hospital General Universitario Gregorio Marañón, 28007 Madrid, Spain
(d) Hydrodynamics Research Basin (CEHINAV), ETS Ingenieros Navales (ETSIN), Technical University of Madrid (UPM), 28040 Madrid, Spain

Abstract

The tool NASAL-Geom, a free upper respiratory tract 3D model reconstruction software, is described here. As free software, researchers and professionals are welcome to obtain, analyze, improve and redistribute it, potentially increasing the rate of development and, at the same time, reducing the ethical conflicts raised by medical applications which cannot be analyzed. Additionally, the tool has been optimized for the specific task of reading upper respiratory tract Computerized Tomography scans and producing 3D geometries. The reconstruction process is divided into three stages: preprocessing (including Metal Artifact Reduction, noise removal, and feature enhancement), segmentation (where the nasal cavity is identified), and 3D geometry reconstruction. The tool has been automated (i.e. no human intervention is required), a critical feature to avoid bias in the reconstructed geometries. The applied methodology is discussed, as well as the program robustness and precision.

Keywords: Computerized Tomography, free software, 3D geometry, Nasal cavity

PROGRAM SUMMARY
Program Title: NASAL-Geom
Program Files doi: http://dx.doi.org/10.17632/d23m5ykyw2.1
Licensing provisions: GPLv3
Programming language: Python, Cython, C, OpenCL
Nature of problem: Upper respiratory tract 3D model reconstruction from CT images
Solution method: 3D geometry reconstruction is divided into three stages: imagery preprocessing (including Metal Artifact Reduction, noise removal, and feature enhancement), segmentation (where the nasal cavity is identified) and 3D geometry reconstruction
Additional comments: At least 8 GB of RAM are recommended

* Corresponding author. Email address: [email protected] (J.L. Cercos-Pita)


Contents

1 Introduction
2 Methodology
  2.1 DICOM data load
  2.2 Metal Artifacts Reduction
  2.3 Edge preserving image smoothing
    2.3.1 Gaussian filters
    2.3.2 Anisotropic diffusion filters
  2.4 Image enhancing
  2.5 Segmentation
    2.5.1 Rough voxelization computation
    2.5.2 Nostril refinement
  2.6 3D model reconstruction
3 Quality analysis
  3.1 Robustness
  3.2 Precision
    3.2.1 Patient 1
    3.2.2 Patient 2
    3.2.3 Patient 3
  3.3 Convergence
    3.3.1 Consistency
    3.3.2 Stability
  3.4 Comparison with manual reconstructions
    3.4.1 Osman et al. [62]
    3.4.2 Patients 1-3
4 Conclusions and future work

1. Introduction

There is no doubt that computerized tomography (CT) is one of the most significant medical advances of the last century. Proof of that is the Nobel prize awarded to G.N. Hounsfield in 1979 [65]. However, physicians, engineers, and researchers have only recently taken the first steps to maximize the benefits of this revolutionary technology, which is in constant development. For instance, in Kalpathy-Cramer et al. [37] the radiotherapy target is determined from a CT, while Palomar et al. [63] developed a tool to reconstruct liver 3D geometries in order to aid surgical planning and execution. In a similar context, we find the works of Qi et al. [71], where cancer pathologies are automatically extracted from imaged tissue microarrays, and of Lee et al. [44], carrying out segmentation and measurement from echocardiography.

Regarding the upper respiratory tract, the segmentation and reconstruction of the nasal cavity and the paranasal sinuses have traditionally been very challenging, due to their variability and complex anatomy, among other reasons [69]. More specifically, the segmentation of the ethmoidal sinuses is a hard task, due to the fine bony structures that separate the ethmoidal cells, which cause great variability and some difficulties in closing the different cell morphologies.

These facts have led the free software and open source communities to invest a considerable amount of effort in developing libraries and tools designed to aid physicians and researchers in reconstructing 3D geometries from diagnostic imagery. In the specific context of CT, the utilities can be split into two main categories: reconstruction and processing tools. In the former group we can consider both the utilities designed to reconstruct the images from the raw data (and vice versa), like OpenRECON [79] and RTK [75, 76], and the utilities to simulate the CT physical problem, like CTSim [28] or GIMIAS [13]. Regarding the latter group, probably the most popular tools are 3D Slicer [82], ITK-SNAP [33], DeVIDE [7] and the Nifty utilities package [57-60]. Moreover, ITK-SNAP has already been applied by Yushkevich et al. [95] to the segmentation of human upper airway imagery, while 3D Slicer has been widely used for the segmentation and 3D geometry reconstruction of the upper respiratory tract for numerical simulation studies (see Butaric et al. [10], Flint et al. [24], Nardelli et al. [55], Afiza et al. [1] and Quadrio et al. [72]) and for 3D printing and experimentation tasks (see Buchajczyk et al. [9] and Sander et al. [77]). A review of the available free/open source software was carried out in Nagy [54]. Almost all the programs use the well-known libraries ITK [89] and VTK [41].

At this point, it should be remarked that the list of software introduced above comprises either libraries, which should be compiled within a tool to carry out the reconstruction, or tools which invariably require human intervention, most of them controlled by a Graphical User Interface (GUI). Even though human-controlled tools may be really powerful, they have some limitations. The most remarkable one is the eventual bias that can be introduced by the operator: Eklund et al. [21] recently carried out a blind experiment, demonstrating that bias may yield a dramatically large percentage of false positive results. In the context of the upper respiratory tract, Tingelhoff et al.
[86, 87] have demonstrated that human-guided reconstruction is affected by several biases, which may invalidate the whole process. In addition to this bias, the time required to carry out the process should also be addressed. Hence, the rhinology community would be very interested in automatic methods and software to reconstruct the nasal cavity and paranasal sinuses. For instance, they may improve the diagnosis of nasal airway obstruction (NAO), related to allergic rhinitis, which is the main cause of absenteeism and decrease in labor productivity, affecting up to 20% of the human population [43].

Moreover, the literature demonstrates that the surgical procedure most often performed for NAO (septoplasty) has very high failure rates (25-50%), as reported by Illum [31], Dinis and Haider [18], and Singh et al. [81]. In more than three decades, very little progress has been made in the treatment of nasal obstruction syndrome and other related diseases, in contrast to the large number of technological advances experienced in similar areas.

The main objective of the present work is the description of NASAL-Geom, a new autonomous software for the 3D geometry reconstruction of upper respiratory tracts, a tool which otorhinolaryngologists, plastic surgeons and maxillofacial surgeons may use for clinical purposes, and from which researchers and professionals may develop more complex tools (see among others Quinlan et al. [73] and Inthavong et al. [32]). Even though this software is not expected to fill the gap of tools to automatically reconstruct 3D geometries from medical imagery in the most general case, the methodology applied herein can be extended to other practical applications.

The paper is organized as follows: First, the methodology as well as the software (including its main features and the libraries used) is described in Section 2. Its quality is assessed in Section 3, separately addressing its robustness, precision, and convergence, as suitably defined in Subsections 3.1-3.3. Finally, conclusions and future work are discussed in Section 4.

2. Methodology

Tomography images are to be provided as an input to the software. The reconstruction process from X-ray raw data is therefore not considered herein (see Miqueles and Pierro [52] and Yang et al. [94] for recent developments in this area). The global 3D geometry reconstruction process can be split into several stages:

1. Image preprocessing: In this first stage, input images are read, as described in Section 2.1, and preprocessed in order to improve their quality. The latter comprises metal artifacts reduction, described in Section 2.2, and data smoothing, described in Section 2.3.
2. Segmentation: The main features of the preprocessed images are enhanced, as described in Section 2.4, in order to improve the precision of the subsequent body extraction (segmentation), as documented in Section 2.5.
3. 3D geometry generation: Finally, the 3D geometry is built. This process comprises the smoothing, the sanitation, and the nostrils and pharynx treatment.

A number of algorithms have already been described for each of the tasks above (see among others Goyal et al. [26], Patil and Deore [64] and Jalalian et al. [34]). In the present work, only the algorithms implemented in NASAL-Geom, selected for their robustness and quality, are discussed.

2.1. DICOM data load

As Erickson et al. [23] pointed out, DICOM is the de facto IT standard for medical imaging. Unfortunately, it is also a huge specification. To work around this problem, NASAL-Geom uses the VTK-GDCM plugin [16] to generate a VTK [41] 3D image object. DICOM files are 2D slice images, in principle with arbitrary, but well defined, locations, orientations, and spacings. By means of the VTK-GDCM plugin, such a collection of 2D slices is converted into a regular lattice of non-isotropic voxels, i.e. different voxel cell lengths can be considered in each direction (a voxel is the natural 3D extension of the pixel). Regarding VTK [41], it was initially designed as a visualization toolkit; however, constant development has turned it into a very useful library to load, manipulate, and save 3D geometries. It is important to remark that, in order to read the DICOM metadata, PyDICOM [48] is applied. This is a Python package for working with DICOM files, which is a natural choice since NASAL-Geom has also been developed in this language. Python [70] is an open-source portable programming language which has become very popular in recent years, due to its simplicity and active community. The resulting VTK 3D image is conveniently cast as NumPy [61] data for later processing (NumPy is a fundamental library for scientific computing within Python). The Python libraries SciPy [22] and scikit-image [78] are also applied.
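As a rough illustration of this loading stage, the following sketch reads a directory of single-frame CT slices with PyDICOM and stacks them into a NumPy voxel lattice, together with the non-isotropic voxel lengths. It is an assumption for illustration only (the actual NASAL-Geom pipeline goes through VTK-GDCM), the directory path is hypothetical, and standard CT metadata tags are assumed to be present.

# Minimal sketch: load a DICOM series into a NumPy voxel volume with PyDICOM.
# Assumes a directory of single-frame axial slices sharing the same orientation.
import glob
import numpy as np
import pydicom

def load_ct_volume(dicom_dir):
    # Read every file and sort slices by their position along the scan axis.
    slices = [pydicom.dcmread(f) for f in glob.glob(dicom_dir + "/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    # Stack the pixel data and rescale it using the DICOM metadata.
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    volume = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)

    # Non-isotropic voxel lengths: (slice spacing, row spacing, column spacing).
    dz = abs(float(slices[1].ImagePositionPatient[2]) -
             float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    return volume, (dz, dy, dx)

volume, spacing = load_ct_volume("path/to/ct_series")  # hypothetical path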

2.2. Metal Artifacts Reduction

Glover and Pelc [25] and Kalender et al. [36] were the first to describe a methodology to reduce the artifacts produced in tomography images by high density bodies, like metal implants. Based on those approaches, many methodologies have been suggested in the past, like iterative processes [93] or wavelet-based inpainting [96]. However, the most popular technique is probably the Normalized Metal Artifact Reduction (NMAR), introduced by Meyer et al. [50]. NMAR is a generalization to more complex structures of the method presented by Müller and Buzug [53], which may achieve an exact solution in situations where only air, metal and one other material are considered. Unfortunately, the methodology proposed by Meyer et al. [50] requires a previous segmentation stage to produce the so-called prior image, which is used to renormalize the entire image data. Since such a segmentation process increases the complexity and computational cost, and since significant benefits can be expected only close to bones or metal bodies, in this work the more traditional inpainting-based technique is applied. The filter workflow is schematically depicted in Fig. 1, and can be summarized as follows:

1. A threshold filter is applied to extract the projection of the metal body, hereinafter "the mask".
2. A Radon transform [74] is applied to both the original image and the mask. In this context, the Radon transform is an integral operator which projects an image over a line by integration:

R f(s, α) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) δ(x cos(α) + y sin(α) − s) dx dy,    (1)

where s is the distance of the integration line to the center of the image, and α is the angle of the integration line. Indeed, applying the projection for several angles α, a 2D image made of a stack of line projections, the so-called sinogram, is produced.

Figure 1: Schematic representation of the Metal Artifact Reduction algorithm

3. The pixels marked by the sinogram of the mask are inpainted upon the sinogram of the original image. In the most general case, image inpainting can be defined as a 2D interpolation, where a piece of an image is reconstructed using the surrounding available information.
4. The Radon transform is inverted to conveniently get back the corrected image.

In Fig. 2 the original and corrected images are compared, for different inpainting techniques. As can be appreciated, undesirable artifacts close to the metal bodies arise from the application of the Metal Artifacts Reduction technique, regardless of the inpainting algorithm used. However, in the area of interest, the pharynx, the errors have been significantly reduced. The three considered inpainting algorithms have similar features, the fastest one being the Fast Marching Method proposed by Telea [85]. This method is actually implemented in Cython [8], an optimising static compiler. Due to their higher computational cost, the biharmonic and the total variation inpainting algorithms are not included in NASAL-Geom.
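The following sketch reproduces the four MAR steps on a single axial slice. It is an illustration under stated assumptions, not the NASAL-Geom implementation: scikit-image's biharmonic inpainting is used here for brevity, whereas the tool relies on Telea's Fast Marching Method in Cython, and the metal threshold value is hypothetical.

# Sketch of the inpaint-based MAR flow: threshold -> Radon -> inpaint -> inverse Radon.
import numpy as np
from skimage.transform import radon, iradon
from skimage.restoration import inpaint

def reduce_metal_artifacts(slice_2d, metal_threshold=3000.0):
    # 1. Mask of the metal body (threshold in CT units; the value is illustrative).
    mask = slice_2d > metal_threshold

    # 2. Radon transform of both the image and the mask.
    theta = np.linspace(0.0, 180.0, max(slice_2d.shape), endpoint=False)
    sinogram = radon(slice_2d, theta=theta, circle=False)
    sino_mask = radon(mask.astype(float), theta=theta, circle=False) > 0.0

    # 3. Inpaint the metal trace in the sinogram from the surrounding projections
    #    (biharmonic inpainting stands in for Telea's Fast Marching Method).
    sinogram = inpaint.inpaint_biharmonic(sinogram, sino_mask)

    # 4. Invert the Radon transform to recover the corrected slice.
    return iradon(sinogram, theta=theta, circle=False, filter_name="ramp")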

Figure 2: Comparison between an image affected by a Metal Artifact, and its corrected versions. The pixels obtained by a watershed algorithm of the pharynx are highlighted in red. Upper-left: Original image. Upper-right: Biharmonic inpaint based filtered image [5]. Bottom-left: Total variational inpaint based filtered image [14]. Bottom-right: Fast Marching Method inpaint based filtered image [85].


Figure 3: Effects of the smoothing filters on a coronal slice of a helicoidal CT. Upper-left: Original image, after application of the Metal Artifact Reduction (see section 2.2). Upper-right: Median filter [30], aided by a Gaussian filter far away from the image features. Bottom-left: Bilateral filter [88]. Bottom-right: Maiseli filter [46].

2.3. Edge preserving image smoothing

Feature-preserving noise reduction is still an open question. Fortunately, a number of very promising approaches have been proposed in the past. These can be grouped into two categories, Gaussian filters and anisotropic diffusion filters, which are discussed in separate sections.

2.3.1. Gaussian filters

The simplest approach to effectively suppress noise in an image is the Gaussian filter, consisting of computing each pixel's value as the integral of a Gaussian function weighting data from neighboring pixels, i.e. by Gaussian convolution. The traditional Gaussian filter effectively suppresses noise, but also the main features of the image, like edges. To partially mitigate this problem, Huang et al. [30] introduced the median filter, where the pixel value is replaced by the median of the neighboring ones, which is able to reduce low level noise while preserving edges [2, 3]. This filter can be aided by a conventional Gaussian filter, where the areas close to the features are excluded using a Laplacian-based edge detector. Along the same lines, the bilateral filter [88] can be considered for larger levels of noise. The bilateral filter is in fact built on top of the same convolution process as the Gaussian filter, conveniently modifying the convolution kernel to reduce the weight of the pixels with large radiosity differences. In Fig. 3 the effect of both filters can be appreciated. The results from the median and bilateral filters are quite similar, even though the bilateral filter has a better performance close to the edges. The latter filter is much more costly in computational terms, hence it has been accelerated using PyOpenCL [42]. The resulting program is nevertheless still much slower, therefore the median filter should be the one considered in general, except in the case of very noisy images.
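A minimal sketch of the two smoothers discussed above, applied slice by slice, is given below. SciPy and scikit-image stand in for the NASAL-Geom implementation (which accelerates the bilateral filter with PyOpenCL); the filter size and the sigma values are illustrative assumptions.

# Sketch of the Gaussian-family smoothers: median filter for moderate noise,
# bilateral filter for very noisy images.
import numpy as np
from scipy.ndimage import median_filter
from skimage.restoration import denoise_bilateral

def smooth_slice(slice_2d, noisy=False):
    # Normalize to [0, 1] so the same parameters apply to any CT value range.
    lo, hi = float(slice_2d.min()), float(slice_2d.max())
    img = (slice_2d - lo) / (hi - lo + 1e-12)

    if noisy:
        # Bilateral filter: the convolution kernel down-weights pixels with
        # large radiosity differences, so edges are preserved.
        img = denoise_bilateral(img, sigma_color=0.05, sigma_spatial=2.0)
    else:
        # Median filter: cheaper, removes low-level noise while keeping edges.
        img = median_filter(img, size=3)

    return img * (hi - lo) + lo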

2.3.2. Anisotropic diffusion filters

A different approach to the one described in Section 2.3.1 was introduced for the first time by Perona and Malik [66], based on the iterative solution of a diffusion equation. In order to prevent the diffusion equation from distorting the features of the image, an anisotropic diffusion coefficient is used, which depends on the gradient. This filter may in fact return good results. However, the slight image quality improvements that can be achieved with respect to the median filter described in Section 2.3.1 hardly compensate for its large computational cost. In relation to this, Maiseli et al. [46] recently reviewed existing similar models, proposing a slightly modified version of the diffusion equation, which may significantly improve noise suppression and edge preservation. Both the traditional Perona-Malik and the Maiseli filters have good capabilities to deal with larger noise levels. Unfortunately, as pointed out by Maiseli et al. [46], the result depends on a shape-defining tuning constant, which should take into account the magnitude of the gradients themselves. That makes it extremely hard to achieve good edge preservation and noise suppression on the whole image. In Fig. 3 the filter proposed by Maiseli et al. [46] is compared with the more traditional Gaussian-based filters described in Section 2.3.1. As can be appreciated, a good balance between noise removal and edge preservation is achieved. However, some artifacts are seen close to the edges with the largest gradient magnitudes, while some other features are undesirably blurred. The Maiseli iterative image smoothing filter has also been accelerated with PyOpenCL, due to its high computational cost.

2.4. Image enhancing

There are currently two main alternatives to carry out a segmentation of a voxelization: active contours, proposed by Kass et al. [38], and the watershed growing algorithm by Vincent and Soille [92]. The former consists of the generation of a surrounding surface, which is progressively adapted to the segmented body. It is a really powerful tool, but unfortunately applying it to very complex geometries, like in this case, which features many concave and convex bodies, is not trivial. On the other hand, the latter is very simple to implement and apply, requiring just a seed (a point known to be inside the body). The main drawback of watershed algorithms arises from the fact that they essentially consist of a thresholding algorithm, where only the bodies with one or more pixels/voxels matching the seeds are kept. As a result, the algorithm is very sensitive to noise and blur. In Sections 2.2 and 2.3 several techniques to deal with the noise have been documented. However, the image itself inherently has a level of blur, which will eventually be increased by the noise removal tools. Along these lines, several authors have introduced the requirement of image deblurring filters (see for instance Dobson and Santosa [19], Sheppard et al. [80] and Pesicek et al. [67]). Nevertheless, in those works the authors almost invariably suggest border sharpening algorithms, which may be applied in the most general case. An alternative is proposed here, which takes advantage of the fact that only the air cavity is intended to be reconstructed. Hence, in order to improve the precision, the image is roughly thresholded as follows:

I_i* = { 0, if I_i ≤ I_0;  1, if I_i ≥ I_1;  (I_i − I_0) / (I_1 − I_0), otherwise },    (2)

such that all the pixels which can be asserted as belonging either to the air or to the tissue, bounded by the values I_0 and I_1, take the values 0 and 1 respectively. The blurred area, where I_0 < I_i < I_1, is rescaled accordingly. In Fig. 4 an arbitrary coronal slice and its associated thresholded image are presented.
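A minimal sketch of the rough thresholding of Eq. (2) is shown below: pixels at or below I_0 are set to 0 (air), pixels at or above I_1 are set to 1 (tissue), and the blurred band in between is rescaled linearly. The bounds are illustrative assumptions, not the values used by NASAL-Geom.

# Sketch of Eq. (2): clamp-and-rescale thresholding of a CT image.
import numpy as np

def threshold_rescale(image, i0=-800.0, i1=-300.0):
    # Values <= i0 map to 0, values >= i1 map to 1, the rest are rescaled linearly.
    return np.clip((image - i0) / (i1 - i0), 0.0, 1.0)

enhanced = threshold_rescale(volume)  # 'volume' as loaded in the earlier sketch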

Figure 4: Image enhancing process. Upper-left: Original image. Upper-right: Image after thresholding. Bottom-left: Image after Canny edge detection. Bottom-right: Image after Laplacian-guided digging process.

At this point, the algorithm should try to dig, i.e. darken, all the blurred pixels which should be identified as air. To carry out this task, two different filters are applied, one designed to dig the subpixel-precision cavities, i.e. the thin cavities which are completely blurred, and the other one to dig the larger blurred cavity areas.

The first filter is based on the edge detection algorithm proposed by Canny [12], which has demonstrated its great capabilities to extract a wide variety of edges, with a remarkably low error rate. Such an algorithm is based on the detection of local maxima of the pixels' intensity gradient, conveniently taking into account the gradient direction. In NASAL-Geom the filter application can be summarized as follows:

1. Edges are detected and stored in a mask image using the Canny edge detector [12].
2. Three morphological operations are applied to the mask: binary dilation, binary holes filling, and binary erosion (see Haralick et al. [27]). Binary dilation is an operation where all the pixels in contact with a selected one, i.e. detected as being in an edge, are selected as well. Binary holes filling means adding to the selection all the pixels completely surrounded by already selected ones. Finally, binary erosion is an operation which deselects the pixels in contact with a non-selected pixel. Hence, all the pixels surrounded by two edges are selected by means of such morphological operations.
3. The radiosity of the pixels belonging to the mask is raised to the third power, such that the tissue values are almost not affected (since they are close to 1).

In the same Fig. 4, the effect of the Canny edge enhancer can be noticed. Indeed, some blurred lines, with a width of just a few pixels, are well detected and recovered along the septum and the left turbinate roof.
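The three steps above can be sketched as follows; scikit-image and SciPy stand in for the NASAL-Geom implementation, and the Canny sigma is an illustrative assumption.

# Sketch of the Canny-based digging filter for thin, completely blurred cavities.
import numpy as np
from scipy import ndimage
from skimage.feature import canny

def dig_thin_cavities(slice_2d, sigma=1.0):
    # 1. Detect edges on the thresholded slice (values already rescaled to [0, 1]).
    edges = canny(slice_2d, sigma=sigma)

    # 2. Morphological operations: dilate, fill holes, erode, so that the pixels
    #    enclosed between two close edges end up selected.
    mask = ndimage.binary_dilation(edges)
    mask = ndimage.binary_fill_holes(mask)
    mask = ndimage.binary_erosion(mask)

    # 3. Raise the selected pixels to the third power: tissue (values close to 1)
    #    is barely affected, while blurred thin cavities are darkened towards air.
    out = slice_2d.copy()
    out[mask] = out[mask] ** 3
    return out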

Figure 5: Comparison of the coronal slice shown in Fig. 4 with the previous and next coronal slices. Highlighted in red the contour of the air cavity from Fig. 4 (bottom-right). Left: Previous coronal slice. Mid: Coronal slice considered in Fig. 4. Right: Next coronal slice.

Regarding the filter required to dig larger cavities, it is based on the intensity Laplacian edge detector. Even though the Laplacian edge detector has been widely used in the past to detect edges (see for instance Van Dokkum [90]), the Canny edge detector actually has interesting features which make it more convenient in the most general case; among others, its better capabilities to deal with noisy images (see related works like Berzins [6], or van Vliet et al. [91]). However, the Laplacian detector has two key features which make it perfectly suited for this task: First, it is positively valued on the darker side of the edge, such that by simply subtracting the Laplacian from the original image large holes are eventually recovered. Second, the Laplacian filter is better suited to detect 3D features, like thin flat surfaces. In Fig. 4, the effect of the Laplacian hole digging is presented, showing that some relevant features, like the turbinates floor, are much better captured. It can be noticed as well that two apparently independent holes are joined at the top left of the image. This is in fact a 3D effect of the Laplacian, as shown in Fig. 5. In such figure, the same coronal slice considered in Fig. 4 is compared with its previous and next coronal slices. The contour of the air cavity obtained for the slice depicted in Fig. 4 is overlapped in red. Indeed, the Laplacian filter is able to capture 3D features.

2.5. Segmentation

At this stage, the images are ready for extracting the air voxels by means of a watershed algorithm. Unfortunately, the nasal cavity is connected by the nostrils with the air surrounding the human head, hereinafter "background". We are therefore interested in providing two different voxelizations, one for the nasal cavity and another for the background. Such a goal may be achieved in two stages: a rough approximation for the division, followed by a refinement process.

2.5.1. Rough voxelization computation

As previously discussed, a segmentation watershed algorithm [92] is applied, split into several stages, to get the nasal cavity and the background in different pieces. In Fig. 6 the proposed system is schematically depicted. The process begins by applying a conventional 3D watershed algorithm, which is seeded by an arbitrary point of the background. Of course, the voxelization obtained contains both the nasal cavity and the background. In order to get a rough background approximation, a 2D watershed algorithm is recursively applied to the axial and coronal slices, such that all the voxels selected in both are considered background. These are therefore subtracted from the global air body obtained by the 3D watershed algorithm. However, the nasal cavity body obtained can be contaminated by some other inner bodies which should be neglected, as can be appreciated in Fig. 6. In order to do that, first all the bodies that either are too small (smaller than 15 cm³) or are not in contact with the bottom axial slice are discarded.

Figure 6: Schematic representation of the rough voxelization computation

If several bodies still remain, which may be the case in noisy tomographies where wrong bodies arise from the head surface, the one with the minimum standard deviation of the voxels' x coordinates will be chosen, i.e. the most centered one.

2.5.2. Nostril refinement

The rough approximation described above would probably provide a poor quality geometry description close to the nostrils. Fortunately, these are convex bodies which can be computed from the axial, coronal and sagittal slices. Such a process is recursively carried out, starting from the contact between the rough approximations of the nasal cavity and the background. Incidentally, it should be remarked that only the voxels found in at least 2 of the 3 nostril voxelizations (based on axial, coronal and sagittal slices) are considered to extend the nostrils. In Fig. 7 the improved segmented area achieved by the nostril refinement is depicted. Indeed, nostril refinement can significantly improve the quality of the reconstructed geometry, depending on the specific anatomy, as well as on the patient's position.
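The body-selection criteria of Section 2.5.1 can be sketched as follows for a boolean voxelization of candidate air bodies (after the background has been subtracted). This is an illustration only: scipy.ndimage.label stands in for the watershed machinery, the voxel spacing is assumed to be in millimeters, and the bottom axial slice is assumed to be the first index.

# Sketch of the candidate-body filtering: drop bodies smaller than 15 cm^3 or not
# touching the bottom axial slice, then keep the most centered remaining body.
import numpy as np
from scipy import ndimage

def select_nasal_cavity(air_voxels, spacing, min_volume_cm3=15.0):
    labels, n = ndimage.label(air_voxels)
    voxel_cm3 = np.prod(spacing) / 1000.0  # mm^3 -> cm^3
    candidates = []
    for body in range(1, n + 1):
        zz, yy, xx = np.nonzero(labels == body)
        too_small = len(xx) * voxel_cm3 < min_volume_cm3
        touches_bottom = 0 in zz  # bottom axial slice assumed at index 0
        if not too_small and touches_bottom:
            candidates.append((np.std(xx), body))
    if not candidates:
        return None
    # Most centered body: minimum standard deviation of the x coordinates.
    _, best = min(candidates)
    return labels == best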

Figure 7: Axial slice with the rough segmented area highlighted in red, and the nostril refined area highlighted in green

Figure 8: Schematic representation of the surface generation


Figure 9: Meshes generated at the nostrils. Top: Mesh generated by an Advance Front Triangulation-like methodology. Some intersections with the nasal cavity have been highlighted with red boxes. Bottom: Radial mesh with extrusion.

2.6. 3D model reconstruction

The voxelizations obtained after the segmentation, documented in the previous section, are in fact a poor description of the surfaces. Our proposal to get much better quality surfaces is schematically depicted in Fig. 8. In the first place, a Marching Cubes algorithm [45, 47, 56] is applied to the joined voxelizations of the nasal cavity and the background. Marching Cubes can be considered the de facto standard for this kind of procedure, even in the context of CT reconstruction (see Peters et al. [68], among others). In such an algorithm, the centers of the voxels of the original 3D image are used as vertices of a set of virtual cubes. Depending on whether the voxels correspond to air or tissue, the cubes are classified into 16 subcategories, with associated precomputed surface polygons. The procedure is conveniently carried out within the VTK library [41]. In order to improve the quality of the produced surface, which is so far very faceted, polygons are decimated, and later smoothed (both procedures are also included in the VTK library). At this point, the surface should be split into the one associated with the cavity, and the complementary one corresponding to the background, which should correspond to the outside of the head. With this aim, the nasal cavity voxelization is sampled on the surface, so that the vertices with a value equal to 1 are assigned to the nasal cavity surface. Thus the surface can be easily split by a clip based on the sampled voxelization. This process is aided by NumPy in the code.

The goal for several applications is to obtain a nasal cavity that is a watertight surface, which implies the generation of patch meshes for the pharynx and both nostrils. Generating arbitrary surface meshes on top of a polygonal description of the outline is a very complex problem, even in 2D (see Tarjan and Van Wyk [83], Chazelle [15] and Aronov et al. [4] among others). Fortunately, the pharynx has quite simple features, which can be tackled by Advance Front Triangulation algorithms [49]. However, for the nostrils such algorithms may result in conflicting surfaces, which eventually intersect the nasal cavity surface, as depicted in Fig. 9. A radial mesh may therefore become more convenient for the nostrils, due to the possibility of extruding internal vertices in a sphere-like shape. In the same Fig. 9, the radial meshes generated at each nostril are shown. As can be appreciated, no intersections between the generated meshes and the nasal cavity surface remain. In order to generate the patches for the pharynx and the nostrils, the Python library trimesh [51] is used, as well as the OpenGL [39] utilities to mesh 3D polygons.
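The surface extraction step can be sketched as follows. scikit-image's marching cubes stands in here for the VTK pipeline described above (which additionally decimates and smooths the polygons); the iso-level of 0.5 simply separates the 0/1 voxel values.

# Sketch of the Marching Cubes surface extraction on the joined voxelization.
import numpy as np
from skimage import measure

def extract_surface(voxelization, spacing):
    # voxelization: 3D array, 1 inside the air bodies, 0 in the tissue.
    verts, faces, normals, values = measure.marching_cubes(
        voxelization.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces

The resulting vertices and faces could then be wrapped, for instance, with trimesh.Trimesh(vertices=verts, faces=faces) for further processing; this is an assumption for illustration, not a description of the NASAL-Geom pipeline.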

3. Quality analysis

In this section the quality of the reconstructed geometries is discussed. In order to avoid bias errors, all the geometries have been automatically reconstructed with the same options and parameters. The section is divided into: robustness assessment (Section 3.1), precision (Section 3.2), and convergence (Section 3.3).

3.1. Robustness

In the context of robustness, a reconstruction is considered satisfactory if the 3D geometry of the nasal cavity and the face are successfully obtained, as well as the pharynx and nostril patches discussed in Section 2.6. Of course, all the reconstructions have been manually checked to avoid false positives, like misconsidered nasal cavities or wrong nostrils. In order to check the robustness of the tool, 95 cases have been parsed. The only requirement to accept a DICOM file as valid for this test is that a full geometric description of the upper respiratory tract is available, bounded at the bottom by a slice arbitrarily placed between the oropharynx and the esophagus. In other words: the upper respiratory tract intersects the DICOM bounds only at the bottom slice. The resulting set of cases therefore includes a large variability, both in CT protocol and in specific patient geometry. Among others, we may consider:

1. Removed ethmoidal cells.
2. Partial and total nasal obstructions.
3. Septal perforations or deviations.
4. Chronic rhinosinusitis, with or without polyps.
5. Turbinate hypertrophy.

Unfortunately, the authors do not possess the legal rights to release all the CTs considered, but only the subset discussed below in the section devoted to precision, Section 3.2. A satisfactory reconstruction has been achieved for 100% of the considered cases. It is worth mentioning that all the reconstructions have been carried out with the same NASAL-Geom bash command:

nasal-geom $DICOM_PATH --mar \
    --smooth-factor=0.5 --smooth-median \
    --smooth-gaussian --patch-wall \
    --patch-improve-nostrils \
    --quality-segmentation

Thus the robustness analysis is not biased by parameter tuning. Incidentally, the performance can be discussed. All the reconstructions were carried out on the same mid-range computer, equipped with an Intel i7-3820 CPU, each taking 5-15 minutes. The computational time required depends linearly on the resolution of the input DICOM data, i.e. on the number of voxels to be processed. On the other hand, the presence of metal bodies

would play a significant role in the tool performance. In fact, the Metal Artifact Reduction (MAR) algorithm described in Section 2.2 is quite expensive in computational terms, and is applied only on the axial slices where metal bodies are detected. On top of that, the MAR execution time does not depend on the metal body axial footprint, but on the resolution of the axial slice images and on the number of axial slices affected by the metal body.

3.2. Precision

In this section, the precision of the segmentation and of the 3D geometry is evaluated for three patients:

1. An asymptomatic patient affected by several alterations of the anatomic condition that do not represent any pathology. More specifically, the patient was diagnosed with agenesis of the left paranasal frontal sinus and hypoplasia of the right one, having all the ostia totally blocked. It should be remarked that the ethmoidal cells are partially blocked, as well as the left maxillary sinus. Incidentally, we also remark that the patient has dental implants generating significant artifacts in the CT.
2. A patient diagnosed with NAO, presenting a deviated septum with a complete obstruction of the left nostril. An exploration of the CT reveals that all the paranasal sinuses, ethmoidal cells, and ostia are clear.
3. A patient who underwent ethmoidectomy surgery to remove polyps inside the nasal cavity and the ethmoidal cells, in order to improve the symptoms due to Chronic Rhinosinusitis, including NAO, as well as to partially restore the sense of smell. After surgery, no ethmoidal cells can be found by endoscopic exploration. The exploration also reveals that all the paranasal sinuses and ostia are clear except the left frontal one.

For the sake of reproducibility, the DICOM files are conveniently included in the NASAL-Geom package. For each patient the precision of the reconstruction has been evaluated both quantitatively and qualitatively. Quantitative evaluation is carried out by unsupervised methods, which have the main benefit of minimizing bias. The segmentation can be quantitatively evaluated by the Jaccard [35] and Dice [17] similarity indicators, JSC and DSC. Both are normalized quality indicators extensively used in medical image segmentation, measuring the ratio of misconsidered voxels, such that lower values are associated with better segmentations. The JSC index of data sets A and B is defined as the size of their intersection divided by the size of their union:

JSC(A, B) = |A ∩ B| / |A ∪ B|,

while the DSC is closely related to JSC: DSC = 2 JSC / (1 + JSC). Unfortunately, while several methods exist to quantitatively evaluate the quality of the segmentation, there is a lack of literature regarding the evaluation of the reconstructed 3D geometry. The first attempts to quantitatively evaluate 3D reconstructed surfaces are based on the comparison with an already known body, usually known as a "phantom" (see Kim et al. [40] among others). Unfortunately, such a methodology cannot be easily applied in our context due to the large variability.
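A minimal sketch of the two segmentation indicators defined above, computed on a pair of boolean voxelizations (e.g. a reference segmentation and the one produced by the tool), is the following; it is an illustration, not the evaluation code used in the paper.

# Sketch of the Jaccard (JSC) and Dice (DSC) similarity indicators.
import numpy as np

def jaccard_dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    jsc = np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
    dsc = 2.0 * jsc / (1.0 + jsc)
    return jsc, dsc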

Herein, the main features of the 3D geometry are checked to be preserved with a qualitative supervised method. Later, in Section 3.3, a further analysis is carried out to quantitatively evaluate the quality of the generated geometries.

3.2.1. Patient 1

The segmentation of patient 1 has similarity indicator scores JSC = 5.9 × 10^-3 and DSC = 1.2 × 10^-2. Hence, around 1% of the voxels may potentially be misconsidered. The 3D geometry reconstructed from patient 1's DICOM files is depicted in Fig. 10. Since all the ostia are blocked, no ethmoidal or paranasal sinus structures should be reconstructed, as is indeed the case. Many complex structures are nevertheless preserved along the turbinates, as is noticeable in the detail of a partial blockage at the right turbinate depicted in Fig. 10, overlapped with the corresponding coronal slice of the CT. The reconstructed geometry can therefore be qualitatively considered a close representation of the data coming from the DICOM imagery.

3.2.2. Patient 2

In this case the segmentation has scored JSC = 3.4 × 10^-3 and DSC = 6.7 × 10^-3. In Fig. 11 the 3D geometry associated with patient 2 is depicted. As expected, the left nostril is missing due to a full obstruction, while all the ethmoidal cells and paranasal sinuses are successfully preserved. It is worth mentioning that, due to the nostril blockage, the endoscopic exploration of the left side is not possible, hence the only way to study the full nasal cavity for clinical purposes is the reconstruction of the 3D model from the DICOM imagery.

3.2.3. Patient 3

In this case the segmentation has scored JSC = 7.3 × 10^-3 and DSC = 1.5 × 10^-2. These values are quite similar to the previous cases, even though the input CT quality is significantly worse. In Fig. 12 the 3D geometry associated with patient 3 is shown. The removed ethmoidal cells are successfully missing in the resulting geometry, while the left frontal sinus is present, since the ostium had not been successfully cleared during surgery. The geometry lacks the right inferior turbinate, which is indeed missing in the CT.

3.3. Convergence

As mentioned in Section 3.2, reports on the quality evaluation of 3D geometries generated from medical images are currently scarce, a fact which is particularly acute in the context of nasal cavities. Developing a quality evaluation methodology is out of the scope of this work. However, in order to demonstrate the benefits of NASAL-Geom, a convergence analysis is carried out in this section. It consists of three steps: consistency analysis, stability analysis, and comparison with manually reconstructed geometries.

3.3.1. Consistency

Consistency may be demonstrated by the application of the tool to an image whose resulting 3D geometry is known. It has already been mentioned that no such information is available in the literature. Hence, a classical point sampling algorithm (see for instance Hohne and Bernstein [29]) has been applied to synthesize binary CT images from the 3D reconstructed geometries described in Sections 3.2.1-3.2.3.

Figure 10: Top: Snapshot of the nasal cavity 3D geometry automatically generated from Patient 1’s DICOM files. Bottom: Detail of the 3D geometry, superposed with a coronal slice of the CT


Figure 11: Snapshot of the nasal cavity 3D geometry automatically generated from Patient 2’s DICOM files.

Figure 12: Snapshot of the nasal cavity 3D geometry automatically generated from Patient 3’s DICOM files.


The consistency analysis process can be summarized as follows:

1. A CT image is generated from each of the 3D geometries described in Sections 3.2.1-3.2.3.
2. NASAL-Geom is applied to each synthesized CT image to produce a new 3D geometry.
3. The new 3D geometry is compared with the original one.

The similarity of both 3D geometries may be evaluated by the Hausdorff distance (HD) [20]:

HD(A, B) = H(A, B) / max(L(A), L(B)),    (3)

with

H(A, B) = max_{a ∈ A} min_{b ∈ B} ||a − b||,    (4)

L(A) = max_{e ∈ A} ||e||,    (5)

where a and b are points on surface A or B, and e are edges. That is, HD(A, B) measures the maximum distance between both shapes, A and B. This distance has been conveniently nondimensionalized by the maximum edge length, such that both shapes can be considered consistent if HD(A, B) < 1. Traditionally, the Hausdorff distance has been justly criticized given its large sensitivity to outliers, it being comparable to an L∞ distance in linear algebra (defined as the largest difference between two vectors). Along these lines, Dubuisson and Jain [20] proposed a new algorithm based on the mean Hausdorff distance (MHD), akin to an L1 distance, which, conveniently nondimensionalized, is defined as:

MHD(A, B) = (h(A, B) + h(B, A)) / (l(A) + l(B)),    (6)

with

h(A, B) = (1 / n_a) Σ_{a ∈ A} min_{b ∈ B} ||a − b||,    (7)

l(A) = (1 / n_e) Σ_{e ∈ A} ||e||,    (8)

where n_a and n_e are the number of points and edges of the mesh, respectively. Again, consistency can be asserted for MHD(A, B) < 1 values. In Table 1 the consistency parameters for the three considered practical applications are presented. Their values are much lower than the limit value of 1, and therefore the tool can be considered consistent, in the absence of a more formal analysis.
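A sketch of the non-dimensional distances of Eqs. (3)-(8) for two triangulated surfaces, given as (vertices, faces) arrays and using the mesh vertices as the point sets, is shown below. SciPy's KD-tree is used for the nearest-neighbour queries; this is an illustration, not the NASAL-Geom evaluation code.

# Sketch of the HD and MHD metrics between two triangle meshes.
import numpy as np
from scipy.spatial import cKDTree

def edge_lengths(verts, faces):
    # Lengths of the three edges of every triangle.
    e = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    return np.linalg.norm(verts[e[:, 0]] - verts[e[:, 1]], axis=1)

def hausdorff_distances(verts_a, faces_a, verts_b, faces_b):
    d_ab = cKDTree(verts_b).query(verts_a)[0]  # min_b ||a - b|| for every a
    d_ba = cKDTree(verts_a).query(verts_b)[0]  # min_a ||b - a|| for every b
    len_a = edge_lengths(verts_a, faces_a)
    len_b = edge_lengths(verts_b, faces_b)

    hd = d_ab.max() / max(len_a.max(), len_b.max())            # Eqs. (3)-(5)
    mhd = (d_ab.mean() + d_ba.mean()) / (len_a.mean() + len_b.mean())  # Eqs. (6)-(8)
    return hd, mhd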


Table 1: Consistency parameters for the 3 considered practical applications

  Patient    HD     MHD
  1          0.31   0.24
  2          0.36   0.27
  3          0.36   0.25

Figure 13: Hausdorff distances (MHD and HD) as a function of the resampling factor. (a) Patient 1. (b) Patient 2. (c) Patient 3.

3.3.2. Stability

The stability of the tool may be assessed by a systematic perturbation of the CTs used in Sections 3.2.1-3.2.3, and a comparison of the resulting geometries. More specifically, the original CT information in each case is progressively downsampled by decimation. The resulting 3D geometries are then compared with the original one using the Hausdorff distances defined in Eqs. (3) and (6). These distances are plotted in Figs. 13a-13c for each patient, as a function of the resampling factor (this factor is defined as the ratio of the original CT voxel length to the resampled one). As can be noticed, the HD and MHD distances have a similar behavior in Figs. 13b and 13c. However, in the case of Patient 1, the traditional Hausdorff distance value, HD, grows when the resampling factor tends to 1. This error is caused by the single pixel responsible for the ostium blockage, which upon downsampling is artificially unblocked. Fig. 14 shows snapshots of Patient 1's 3D reconstructed geometries, for all considered resampling factors. Indeed, the Hausdorff distance, H, between the original surface and the resampled one will be almost constant in this case, resulting in a non-dimensional Hausdorff distance, HD, roughly proportional to the resampling factor. As predicted by Dubuisson and Jain [20], the traditional Hausdorff distance is indeed seen to be excessively sensitive to outliers. The modified Hausdorff distance, MHD, can therefore be considered a more convenient metric to evaluate the stability of the tool: in all three cases this parameter has large values for the coarsest resolution, decreasing to a low constant value for the finest resolutions. Recalling Eq. (6), a constant value implies that the mean distance between the surfaces decreases proportionally to the mean edge length.
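The perturbation step can be sketched as follows; scipy.ndimage.zoom with linear interpolation stands in here for the decimation used in the paper, so this is an illustration under that assumption only.

# Sketch of the stability perturbation: downsample the CT volume so that the
# resampled voxel length is 1/factor times the original one, then reconstruct and
# compare the geometries through the distances defined above.
from scipy import ndimage

def resample_volume(volume, factor):
    # factor in (0, 1]: ratio between the original and the resampled voxel lengths.
    return ndimage.zoom(volume, zoom=factor, order=1)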

Figure 14: Patient 1's reconstructed geometries for different resampling factors: (a) 0.2, (b) 0.3, (c) 0.4, (d) 0.5, (e) 0.6, (f) 0.7, (g) 0.8, (h) 0.9, (i) 1.0.

3.4. Comparison with manual reconstructions

To complement the convergence analysis of Section 3.3, here the 3D geometries generated by NASAL-Geom are compared with computer-aided manual segmentations. For completeness, this comparison is carried out in two stages: First, NASAL-Geom is applied to the CT kindly supplied by Osman et al. [62], and compared with the corresponding geometry in the same work. Second, the three subjects in Sections 3.2.1-3.2.3 are reconstructed by the medical doctors Ricardo San Román Manso (Radiology department of Hospital 12 Octubre, Madrid, Spain) and Jorge Jiménez Antolín (Rhinology department of Hospital Virgen de la Salud, Toledo, Spain). While the former comparison has obvious benefits regarding eventual bias, therefore increasing the reliability of the analysis, the latter presents reproducibility advantages, since the considered CTs are shipped with the software.

3.4.1. Osman et al. [62]

Osman et al. [62] addressed nasal airflow CFD simulations considering three different subjects, and compared the computed nasal resistance with the values measured by anterior rhinomanometry. Here, NASAL-Geom is applied to the third subject's CT to assess the quality of its generated 3D geometries.

In Fig. 15 snapshots of both nasal cavity surfaces are shown, the one generated with NASAL-Geom and the one already published by Osman et al. [62]. At gross scale, the main differences are the absence of the sinuses and ethmoidal cells in the surface provided by Osman et al. [62], as well as the sliced nasopharynx, right above the metal artifacts caused by dental implants. There are also noticeable differences at the nostrils, where Osman et al. [62] added some mesh in order to crop by a planar surface, as can be noticed in Fig. 16, where the surface supplied by Osman et al. [62] is overlapped with the nasal cavity and face surfaces generated by NASAL-Geom. Depending on the specific practical application, the differences along the nostrils may have a critical impact. In the context of CFD simulations, the potential negative consequences derived from a bad boundary condition are discussed by Taylor et al. [84]. Along the same lines, Cal et al. [11] already applied NASAL-Geom to demonstrate the critical impact of the inflow boundary conditions on the temperature field, adding an outer spherical air region around the nose.

The non-dimensional Hausdorff distances, defined in Eqs. (3) and (6), are HD = 5.8 and MHD = 0.78. As happened in Section 3.3.2, the traditional Hausdorff distance is excessively sensitive to the absence of sinuses, again becoming an inconvenient metric. On the other hand, the modified Hausdorff distance, MHD, has a value low enough to suggest that both surfaces are similar, but large enough to suggest that there are small differences.

Figure 15: The generated 3D surfaces. Left: The 3D geometry generated by NASAL-Geom. Right: The 3D geometry generated by Osman et al. [62].

Figure 16: Detail of the differences at the right nostril. Red: The nasal cavity generated by Osman et al. [62]. Gray: The nasal cavity generated by NASAL-Geom. Blue: The facial surface generated by NASAL-Geom.


Figure 17: Arbitrary axial slice of the surfaces and the CT. Red: Slice of the geometry generated by Osman et al. [62]. Blue: Slice of the surface generated by NASAL-Geom.

Figure 18: The generated 3D surfaces for the patient 1. Left: The 3D geometry generated by NASAL-Geom. Right: The 3D geometry manually generated.

In Fig. 17 an axial slice of both surfaces, overlapped with the CT, is displayed. It is noteworthy that NASAL-Geom captures the features of the air cavity better, discriminating more accurately between the air and tissue voxels.

3.4.2. Patients 1-3

For the sake of reproducibility, a comparison similar to the one described in the previous section, Section 3.4.1, is carried out considering the three subjects discussed in Sections 3.2.1-3.2.3. To this end, nasal cavity surfaces have been manually generated on top of the three patients' CTs by the medical doctors Ricardo San Román Manso and Jorge Jiménez Antolín. In Figs. 18-20 snapshots of such surfaces, alongside the corresponding NASAL-Geom ones, are presented. At gross scale, both methodologies keep the same sinus structures, except in the first patient, where a sphenoid sinus has been included in the manual segmentation. Such a discrepancy led to the large Hausdorff distances reported in Table 2 for the first subject. In contrast with the results presented in Section 3.4.1, this time NASAL-Geom cannot be claimed to have outperformed the manual reconstructions, as can be appreciated in the comparative axial slices shown in Fig. 21. Nevertheless, NASAL-Geom seems to be less sensitive to noisy and blurred areas in general.

Figure 19: The generated 3D surfaces for the patient 2. Left: The 3D geometry generated by NASAL-Geom. Right: The 3D geometry manually generated.

Figure 20: The generated 3D surfaces for the patient 3. Left: The 3D geometry generated by NASAL-Geom. Right: The 3D geometry manually generated.

Table 2: Discrepancy parameters between the manually generated surfaces and the NASAL-Geom generated ones

  Patient    HD     MHD
  1          3.07   1.91
  2          0.90   0.46
  3          0.75   0.50

Figure 21: Arbitrary axial slices of the surfaces and the CTs for (a) Patient 1, (b) Patient 2, and (c) Patient 3. Red: Slice of the manually generated geometry. Blue: Slice of the surface generated by NASAL-Geom.

4. Conclusions and future work

NASAL-Geom, an automated free software to generate nasal cavity 3D geometries, including the surrounding head surface, and eventually the pharynx and nostril patches, has been described. The main benefit of an automated software is the dramatic reduction of the bias which may result from other computer-aided manual segmentation and reconstruction tools.

In order to check the robustness of the tool, a set of 95 cases has been parsed with NASAL-Geom, with the 3D surfaces of 100% of them having been successfully reconstructed. Regarding precision, 3 of the cases from the aforementioned set have been conveniently anonymized and provided with the software package, for the sake of reproducibility. The reconstruction of these 3 cases has been discussed, and its quality evaluated. Along these lines, very promising results have been obtained from the quantitative evaluation of the segmentations, where the similarity indicators yielded errors below 1.5% of the voxels. The 3D reconstructed geometries have also been qualitatively evaluated, to demonstrate that the main features of the actual nasal cavity are preserved, including the ones coming from both the patient diagnosis and the DICOM imaging. Convergence has also been analyzed, in terms of consistency, stability, and comparison with manually reconstructed geometries.

In computational terms, the tool is able to generate 3D geometries in 5-15 minutes (on an Intel i7-3820 CPU), depending on the resolution of the input DICOM and the volume of the metal artifacts. Hence, the tool is not well suited to carry out fast visualization tasks, but is rather designed to generate 3D geometries for more complex applications, where precision may have a critical impact (e.g. CFD simulations).

Regarding future work, the set of test cases should be enlarged, as well as the fraction of them which may be publicly released. In fact, the huge DICOM specification and the large physiological variability imply a lot of "fringe" cases. This would also make it possible to eventually apply machine learning techniques to the segmentation and feature recognition. As has already been remarked, even though the segmentation has been quantitatively evaluated, the resulting 3D geometry has been evaluated just qualitatively, therefore being more prone to bias defects. Thus, the development of an unsupervised methodology to evaluate the quality of 3D geometries, similar to the one applied to the segmentation, would be a great advance in future work. As an intermediate step, the deployment of a set of 3D nasal cavity phantom models, with the associated CT imagery (which can be numerically generated), may be considered for future work. This would permit an easy assessment of results without introducing a significant bias.

Acknowledgments

The authors would like to express their gratitude to the medical doctors Ricardo San Román Manso and Jorge Jiménez Antolín for the time they have kindly invested to manually generate 3D geometries of the three considered patients. The authors are thankful as well to J. Osman's research team for granting one of their CT data sets and the corresponding 3D geometry, which made it possible to address the comparison with already published results. Finally, the authors would like to acknowledge the two anonymous reviewers, who have significantly contributed to increasing the quality of this article.

Bibliography

[1] Afiza, E., Takakura, Y., Atsumi, T., Iida, M., 2016. Effects of ostia variation for airflow patterns within nasal cavity models with maxillary sinus. Journal of Medical and Bioengineering 5 (2).
[2] Arce, G. R., 2005. Nonlinear signal processing: a statistical approach. John Wiley & Sons.
[3] Arias-Castro, E., Donoho, D. L., 2009. Does median filtering truly preserve edges better than linear filtering? The Annals of Statistics, 1172–1206.
[4] Aronov, B., Seidel, R., Souvaine, D., 1993. On compatible triangulations of simple polygons. Computational Geometry 3 (1), 27–35.
[5] Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C., 2000. Image inpainting. In: Proceedings of the 27th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co., pp. 417–424.
[6] Berzins, V., 1984. Accuracy of Laplacian edge detectors. Computer Vision, Graphics, and Image Processing 27 (2), 195–210.
[7] Botha, C. P., 2002. DeVIDE. URL https://github.com/cpbotha/devide
[8] Bradshaw, R., Behnel, S., 2007. Cython. URL http://cython.org
[9] Buchajczyk, M., Geoghegan, P., Kabaliuk, N., Adams, C., Spence, C., Jermy, M., 2016. Experimental investigation of airway pressures in human airway models with compliant components. In: Proceedings of the 20th Australasian Fluid Mechanics Conference, Perth, Australia.


[10] Butaric, L. N., McCarthy, R. C., Broadfield, D. C., 2010. A preliminary 3D computed tomography study of the human maxillary sinus and nasal cavity. American Journal of Physical Anthropology 143 (3), 426–436.
[11] Cal, I. R., Cercos-Pita, J. L., Duque, D., 2017. The incompressibility assumption in computational simulations of nasal airflow. Computer Methods in Biomechanics and Biomedical Engineering 20 (8), 853–868.
[12] Canny, J., 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (6), 679–698.
[13] Center for Computational Image and Simulation Technologies in Biomedicine (CISTIB), University of Sheffield, UK, 2014. GIMIAS. URL http://www.gimias.org
[14] Chan, T. F., Shen, J., 2001. Nontexture inpainting by curvature-driven diffusions. Journal of Visual Communication and Image Representation 12 (4), 436–449.
[15] Chazelle, B., 1991. Triangulating a simple polygon in linear time. Discrete & Computational Geometry 6 (3), 485–524.
[16] CREATIS, Malaterre, M., 2007. VTK-GDCM. URL https://www.creatis.insa-lyon.fr/software/public/Gdcm/VtkGdcm.html
[17] Dice, L. R., 1945. Measures of the amount of ecologic association between species. Ecology 26 (3), 297–302.
[18] Dinis, P. B., Haider, H., 2002. Septoplasty: long-term evaluation of results. American Journal of Otolaryngology 23 (2), 85–90.
[19] Dobson, D. C., Santosa, F., 1994. An image-enhancement technique for electrical impedance tomography. Inverse Problems 10 (2), 317.
[20] Dubuisson, M.-P., Jain, A. K., 1994. A modified Hausdorff distance for object matching. In: Pattern Recognition, 1994. Vol. 1-Conference A: Computer Vision & Image Processing, Proceedings of the 12th IAPR International Conference on. Vol. 1. IEEE, pp. 566–568.
[21] Eklund, A., Nichols, T., Knutsson, H., 2016. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 201602413.
[22] Enthought Inc., SciPy Developers, 2001. SciPy. URL https://www.scipy.org
[23] Erickson, B., Langer, S., Nagy, P., 2005. The role of open-source software in innovation and standardization in radiology. Journal of the American College of Radiology 2 (11), 927–931.
[24] Flint, T., Esmaily-Moghadam, M., Thamboo, A., Velasquez, N., Moin, P., 2015. Computational investigation of empty nose syndrome. Center for Turbulence Research (Stanford), Annual Research Briefs.
[25] Glover, G., Pelc, N., 1981. An algorithm for the reduction of metal clip artifacts in CT reconstructions. Medical Physics 8 (6), 799–807.
[26] Goyal, A., Bijalwan, A., Chowdhury, M. K., 2012. A comprehensive review of image smoothing techniques. International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) 1 (4), pp–315.
[27] Haralick, R. M., Sternberg, S. R., Zhuang, X., 1987. Image analysis using mathematical morphology. IEEE Transactions on Pattern Analysis and Machine Intelligence (4), 532–550.
[28] Herman, G. T., Kay, I., Yee, C., Warner, A., Angeles, E. G., 2007. CTSim. URL http://www.ctsim.org
[29] Hohne, K. H., Bernstein, R., 1986. Shading 3D-images from CT using gray-level gradients. IEEE Transactions on Medical Imaging 5 (1), 45–47.
[30] Huang, T., Yang, G., Tang, G., 1979. A fast two-dimensional median filtering algorithm. IEEE Transactions on Acoustics, Speech, and Signal Processing 27 (1), 13–18.
[31] Illum, P., 1997. Septoplasty and compensatory inferior turbinate hypertrophy: long-term results after randomized turbinoplasty. European Archives of Oto-Rhino-Laryngology 254 (1), S89–S92.
[32] Inthavong, K., Wen, J., Tian, Z., Tu, J., 2008. Numerical study of fibre deposition in a human nasal cavity. Journal of Aerosol Science 39 (3), 253–265.
[33] ITK-SNAP Team, 1999. ITK-SNAP. URL http://www.itksnap.org
[34] Jalalian, A., Mashohor, S. B., Mahmud, H. R., Saripan, M. I. B., Ramli, A. R. B., Karasfi, B., 2013. Computer-aided detection/diagnosis of breast cancer in mammography and ultrasound: a review. Clinical Imaging 37 (3), 420–426.
