A computational approach to estimate postmortem interval using opacity development of eye for human subjects




Computers in Biology and Medicine 98 (2018) 93–99

Contents lists available at ScienceDirect

Computers in Biology and Medicine journal homepage: www.elsevier.com/locate/compbiomed

İsmail Cantürk*, Lale Özyılmaz
Department of Electronics and Communications Engineering, Yıldız Technical University, Istanbul, Turkey

ARTICLE INFO

Keywords: Postmortem interval estimation; Time after death; Time of death; Image processing; Forensic science; Forensic medicine

ABSTRACT

This paper presents an approach to postmortem interval (PMI) estimation, a much-debated and complicated problem in forensic science. Most PMI determination methods reported in the literature are impractical because they require skilled personnel and significant amounts of time, and they often give unsatisfactory results. Additionally, the error margin of PMI estimation increases with the elapsed time after death. It is therefore crucial to develop practical PMI estimation methods for forensic science. In this study, a computational system is developed to determine the PMI of human subjects by investigating postmortem opacity development of the eye. Relevant features were extracted from the eye images using image processing techniques to reflect gradual opacity development, and were then used to predict the time after death with machine learning methods. The experimental results show that the development of opacity can be utilized as a practical computational tool to determine PMI for human subjects.

1. Introduction

Postmortem interval (PMI) estimation is one of the most debated and complicated problems in forensic science. The aim is to determine the time of death correctly by revealing the elapsed time since death. PMI estimation is particularly crucial in criminal justice systems, where determining the time of death can help clarify criminal cases by identifying perpetrators, settling inheritance questions, and narrowing the pool of suspects [1]. Investigations of PMI estimation have focused on physical postmortem changes, such as cooling [2-5], stiffening [6-9], and decomposition [10-13], or on chemical changes, such as postmortem alterations of the electrolytes within the body fluids [14-21]. Forensic entomology has also been studied to predict PMI; in this approach, the presence, age, and incidence timing of insects on corpses are investigated [22, 23]. Additionally, different techniques based on signal and image processing methods have been proposed to estimate PMI. Cantürk et al. [1] explored the correlation between tissue conductivity changes and time of death. Postmortem opacity development of the eye has been indicated as a potential tool for PMI estimation [24-31]. Kumar et al. [25] reported opacity development of the corneal region from personal observations, without using image processing methods. Zhou et al. [26] and Liu

et al. [27] studied postmortem eye changes in rabbits: they photographed the eyes of rabbits periodically after death, extracted features from the images, and classified the features with a classification algorithm. Kawashima et al. [24] investigated human subjects, took eye images, used the RGB pixel values of the corneal regions, and developed a mathematical formula to calculate the PMI. Cantürk et al. [31] analyzed eye images of human subjects from ten cases, taken periodically over 15 h; that study also medically interpreted the reasons for postmortem alterations of the eye. They analyzed three regions of the eye by extracting image features: the corneal region, the non-corneal region (sclera), and the pupil. Although the opacity development in the corneal and non-corneal regions is correlated with the postmortem interval, the postmortem pupil alterations do not exhibit this relationship. Additionally, color features were observed to perform better for grading opacity than textural features. In this study, a novel computerized method based on the outputs of the previous study [31] is developed. Since postmortem pupil changes are not significant, only the corneal and non-corneal regions are included, and an increased number of color features and different texture features are utilized. This new computational approach, which includes image processing and machine learning methods, was developed to estimate the PMI of human

* Corresponding author. Yıldız Technical University, Faculty of Electrical and Electronics, Department of Electronics and Communications Engineering, 34220 Esenler, Istanbul, Turkey. E-mail address: [email protected] (İ. Cantürk).
https://doi.org/10.1016/j.compbiomed.2018.04.023
Received 22 January 2018; Received in revised form 20 April 2018; Accepted 24 April 2018
0010-4825/© 2018 Elsevier Ltd. All rights reserved.


For this purpose, the images were processed as follows. The eye region of each image was detected using the Viola–Jones algorithm presented in [32]. The Viola–Jones detector applies a machine learning based approach: different types of eye models are developed and then used to detect the eye in a picture. A sample image showing the detected eye region is given in Fig. 2a. After segmenting the eye region, Otsu's threshold method [33] was utilized to find the corneal part of the eye. Otsu's method calculates a threshold from an intensity image and converts it to a binary (black and white) image using that threshold. This method successfully separated the corneal region because of the high difference in intensity between the two regions. After thresholding, morphological closing and opening operations were successively performed to fill tiny gaps and remove small parts (see Fig. 2b); these operations handle small gaps and fragments while preserving the shape and size of larger structures. The maximum and minimum points of the corneal region were then calculated horizontally and vertically. Since the corneal region has a circular shape, the horizontal and vertical diameters are the differences between the maximum and minimum rows, and between the maximum

subjects based on opacity development of the eye.

2. Materials and methods

The flowchart of the proposed approach to predict PMI is given in Fig. 1. The approach has two main steps: (1) image processing, and (2) computer vision and machine learning. In the image processing step, corneal and non-corneal regions of the eye images are detected, and normalized pixel values for selected regions of interest (ROIs) are obtained. Subsequently, feature selection methods are used to reveal the most significant of the relevant features in the ROIs. Finally, classification methods are utilized to predict PMI using the selected features. Details of the methods used in the proposed approach are given in the subsequent sections.

2.1. The dataset

In this study, the dataset collected by Cantürk et al. [31] was utilized. The dataset has the approval of the Clinical Researches Ethics Commission and was collected with permission. It contains eye images of one female and nine male subjects whose ages vary from 30 to 77 (mean: 48, standard deviation: 14.9). The researchers applied definite selection criteria for inclusion of cases in the study group: cases with prone death positions and certain causes of death were discarded, and edema as well as head and cervical trauma were also grounds for exclusion. There are records every 20 min across 15 h for each case, i.e., 45 pictures per subject; in total, the dataset includes 450 images.

2.2. Applied image processing methods

Image processing methods were utilized to automatically detect the corneal and non-corneal parts of the eye and to extract features. Color and texture based features were extracted to represent the postmortem opacity changes of the regions. Corneal ROIs were selected between the sclera and pupil, and the non-corneal ROIs were on the sclera. The images were processed and analyzed using MATLAB 2014a.

2.2.1. Detection of the corneal and non-corneal regions

The corneal and non-corneal parts of the eye are extracted automatically by image processing methods. However, the dataset also includes some facial parts of the cases; thus, the eye region must be segmented first.
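The thresholding stage described above (Otsu's method, used after eye detection to isolate the darker corneal region) can be sketched as follows. This is a minimal pure-NumPy illustration of Otsu's between-class-variance criterion on a synthetic "eye" image, not the paper's MATLAB implementation; the function name and the synthetic intensities are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    omega = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))         # cumulative mean up to t
    mu_t = mu[-1]                              # global mean
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)
    return int(np.argmax(sigma_b2))

# synthetic eye image: dark corneal disc (~40) on a bright sclera (~200)
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(yy - 50) ** 2 + (xx - 50) ** 2 <= 30 ** 2] = 40

t = otsu_threshold(img)
binary = img <= t          # True on the (darker) corneal region
```

On a bimodal image like this, the selected threshold falls between the two intensity modes, so the binary mask isolates the corneal disc; in practice morphological closing and opening would then clean up the mask, as the paper describes.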

Fig. 2. (a) The eye region found by the Viola–Jones detector; (b) binary image representing the corneal boundaries after Otsu's threshold method and morphological operations; (c) corneal and non-corneal ROIs; (d) joined image for the corneal region; (e) joined image for the non-corneal region.

Fig. 1. The proposed approach to predict PMI.


and minimum columns, respectively. The intersection point of the horizontal and vertical diameters is the center of the corneal region. Since the center coordinates and the radius of the corneal region are known, the non-corneal region can be determined. Four square regions each were chosen as ROIs from the corneal and non-corneal regions (see Fig. 2c), and these square regions were joined to extract features (see Fig. 2d and e). Each square region is 20 × 20 pixels. Image normalization was performed to remove the dependency of the images on light intensity before applying feature extraction methods. Assuming an independent light intensity α, acquired (R, G, B) pixels are scaled as (αR, αG, αB). To obtain a linear camera response, the scaling factor α must be removed; therefore, each pixel of an image is normalized as shown in Eq. (1) [34].
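The center-and-diameter computation above can be sketched in NumPy as follows; `mask` stands for a binary corneal segmentation and the function name is illustrative, not from the paper.

```python
import numpy as np

def corneal_center(mask):
    """Center and diameters of a roughly circular region from row/column extremes."""
    rows, cols = np.where(mask)
    v_d = rows.max() - rows.min()              # vertical diameter
    h_d = cols.max() - cols.min()              # horizontal diameter
    # midpoint of the extremes = intersection of the two diameters
    center = ((rows.max() + rows.min()) // 2, (cols.max() + cols.min()) // 2)
    return center, h_d, v_d

# synthetic circular mask of radius 25 centered at (50, 40)
mask = np.zeros((100, 100), dtype=bool)
yy, xx = np.ogrid[:100, :100]
mask[(yy - 50) ** 2 + (xx - 40) ** 2 <= 25 ** 2] = True

center, h_d, v_d = corneal_center(mask)
```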

As a result of the image processing and feature extraction steps, 61 features were extracted automatically (see Table 2). Since there are 10 subjects and 45 pictures for each subject, we generated a 450 × 61 data matrix for machine learning; there were no missing values.

(R, G, B) → (R / (R + G + B), G / (R + G + B), B / (R + G + B))   (1)
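The chromaticity normalization of Eq. (1) amounts to dividing each pixel by its channel sum, which cancels a multiplicative illumination factor α. A minimal NumPy sketch (array and function names are illustrative):

```python
import numpy as np

def normalize_chromaticity(rgb):
    """Map each pixel (R, G, B) to (R, G, B) / (R + G + B), as in Eq. (1)."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True)
    # guard against all-zero pixels
    return np.divide(rgb, s, out=np.zeros_like(rgb), where=s != 0)

roi = np.array([[[120, 60, 20]]], dtype=np.uint8)     # one ROI pixel
dimmer = (0.5 * roi).astype(np.uint8)                 # same scene, alpha = 0.5

n1 = normalize_chromaticity(roi)
n2 = normalize_chromaticity(dimmer)
# n1 == n2: the illumination factor cancels out
```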

2.3.1. Class labeling

In order to apply classification methods, every record must be labeled with a class. In this study, a new approach based on recording time is proposed for class labeling: the class label of each image is determined by the time after death. Since the pictures were taken at 20-min intervals over 15 h, the images are separated into 45 sections, so the number of classes for 20-min intervals is 45. In order to analyze the performance of the classifiers with different intervals, we extended the interval gaps and tested the system with 40-min intervals (i.e., 22 classes), 60-min intervals (i.e., 15 classes), 80-min intervals (i.e., 11 classes), and 100-min intervals (i.e., 9 classes). To clarify the class labeling approach, assume that 1-h intervals are used: the first three images, taken at 20, 40 and 60 min, are labeled as Class 1; the next three images, taken at 80, 100 and 120 min, are labeled as Class 2; and so on. As indicated in [31], the time after death was already 6 h when the experiments were started. Therefore, PMI estimates begin at 6 h (i.e., 6 h, 6 h 20 min, …) and range between 6 and 21 h.
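The labeling rule above reduces to an integer division over the capture index. A small illustrative helper (not from the paper; function and variable names are assumptions):

```python
def class_label(image_index, interval_min):
    """Class for the i-th postmortem capture (index 0 = the 20-min image).

    Images are taken every 20 min; an interval of `interval_min` groups
    interval_min / 20 consecutive images into one class (classes start at 1).
    """
    return image_index // (interval_min // 20) + 1

# 60-min intervals: captures at 20, 40, 60 min -> Class 1; 80, 100, 120 -> Class 2
labels_60 = [class_label(i, 60) for i in range(45)]
```

With 45 images per subject, 60-min intervals yield 15 classes, matching the paper's 60-min setting.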

2.3. Applied machine learning methods Machine learning methods were used for feature selection and classification. Feature selection algorithms revealed the most important features in the data matrix and eliminated redundant ones. Classification algorithms predicted the output of a given feature vector based on training data.


2.2.2. Feature extraction

Color and texture based features are extracted from the corneal and non-corneal regions of the normalized images. The color spaces of RGB (red, green, and blue), HSI (hue, saturation, and intensity), YCbCr (luminance and chrominance), Lab, and grayscale were investigated to reflect gradual opacity changes over time. The means and standard deviations of each channel of the color spaces were calculated for both regions. Since the pixel values of the captured images are expressed as a combination of RGB channels, the images must be converted to the other color spaces. Although the RGB color space carries only color information, color spaces such as HSI, YCbCr, and Lab expose different properties of pixels, including light intensity, color depth, purity of color, luminance, and chrominance [35]. Since properties such as light intensity and purity of color may reflect the opacity development better, expressing a pixel in ways other than pure color information is important for grading the opacity alteration. Details of the color space conversion methods and the related formulas can be found in [35]. In addition to color features, some texture features were also extracted: contrast, correlation, energy, and homogeneity were computed from the gray-level co-occurrence matrix (GLCM) [36] for both regions. The GLCM represents the pixels of an image as a function of a set of quantized gray levels. Definitions and formulas of the extracted texture features are summarized in Table 1 [36]. The last feature is the correlation between the corneal and non-corneal regions over time, calculated as shown in Eq. (2):

r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / √{ [Σ_m Σ_n (A_mn − Ā)²] [Σ_m Σ_n (B_mn − B̄)²] }   (2)
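The GLCM texture features of Table 1 and the inter-region correlation of Eq. (2) can be sketched in NumPy as follows. This is a minimal illustration with a single horizontal co-occurrence offset on a small quantized patch, not the paper's MATLAB implementation; function names and the toy patch are assumptions.

```python
import numpy as np

def glcm(img, levels):
    """Normalized gray-level co-occurrence matrix for a (0, 1) horizontal offset."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def texture_features(p):
    """Contrast, correlation, energy, homogeneity of a normalized GLCM p."""
    i, j = np.indices(p.shape)
    mu_x, mu_y = (i * p).sum(), (j * p).sum()
    sx = np.sqrt((((i - mu_x) ** 2) * p).sum())
    sy = np.sqrt((((j - mu_y) ** 2) * p).sum())
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": ((i * j * p).sum() - mu_x * mu_y) / (sx * sy),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1 + (i - j) ** 2)).sum(),
    }

def region_correlation(a, b):
    """Pearson correlation between two equally sized regions, as in Eq. (2)."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
feats = texture_features(glcm(patch, levels=4))
```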

2.3.2. Feature selection methods

Feature selection techniques are used to reduce the dimensions of the feature matrix. Eliminating irrelevant features that do not contribute to predicting the response improves the efficiency of the classifiers. In this study, we utilized two feature selection techniques, the Least Absolute Shrinkage and Selection Operator (LASSO) [37] and Relief [38], which are widely used and reported as effective in the literature [39]. LASSO [37] assigns coefficients to the features and alters these coefficients during the selection process: features that help predict the response retain nonzero coefficients, while the coefficients of uninformative features shrink to zero, and such features are removed from the feature subset. In addition to the LASSO method, another commonly used feature




where A and B are the images to be correlated, m and n index the pixels, and Ā and B̄ are the means of the image pixel values.

Table 1
Definitions and formulas of the texture features extracted from the GLCM.

Contrast: measures intensity differences between neighboring pixels; Σ_{n=0}^{Ng−1} n² [ Σ_{i=1}^{Ng} Σ_{j=1}^{Ng} p(i, j), |i − j| = n ]
Correlation: shows how neighboring pixels are correlated; [ Σ_i Σ_j (i·j) p(i, j) − μ_x μ_y ] / (σ_x σ_y), with μ_x = Σ_i Σ_j i·p(i, j), μ_y = Σ_i Σ_j j·p(i, j), σ_x² = Σ_i Σ_j (i − μ_x)² p(i, j), σ_y² = Σ_i Σ_j (j − μ_y)² p(i, j)
Energy: the sum of the squared GLCM elements; Σ_i Σ_j p(i, j)²
Homogeneity: the closeness of the distribution of elements to the matrix diagonal; Σ_i Σ_j p(i, j) / (1 + (i − j)²)

where Ng denotes the number of quantized gray levels and p(i, j) is the (i, j) entry of the GLCM.

Table 2
Relevant features extracted from the images (number of measures per region).

Feature Group   Feature Name                                               Corneal   Non-Corneal
Color based     Means and standard deviations of RGB values                   6          6
                Means and standard deviations of HSI values                   6          6
                Means and standard deviations of YCbCr values                 6          6
                Means and standard deviations of Lab values                   6          6
                Means and standard deviations of grayscale values             2          2
Texture based   Contrast                                                      1          1
                Correlation                                                   1          1
                Energy                                                        1          1
                Homogeneity                                                   1          1
                Correlation between the corneal and non-corneal regions       1 (single shared feature)


results, greater time intervals gave higher accuracies. LibSVM with a linear kernel and k-NN with k = 1 performed the best. Generally, the classification accuracy obtained with 10-fold CV was higher than that with LOSO. Feature selection algorithms contributed to an increase in accuracy, as shown in Table 4. Increasing the time gaps (i.e., reducing the number of classes) enhanced the accuracies, as depicted in Fig. 3a and b. Fig. 3a shows the classification results with 10-fold CV for different numbers of classes; the graph depicts results for k = 1 and LibSVM with the RBF kernel, as these were the most successful classifiers for 10-fold CV. Increasing the time gap significantly improved the accuracy. Fig. 3b exhibits the classification results with LOSO. Since the linear kernel for LibSVM performs better than the RBF kernel under LOSO, the linear kernel results are given in the LOSO graphics. Reducing the number of classes again increases accuracy for LOSO. The performance of the k-NN classifier was observed to be inversely proportional to the k parameter: higher k values reduced performance, and smaller ones increased accuracy. The best results were found for k = 1 in both 10-fold CV and LOSO. Fig. 4 demonstrates the accuracy of the k-NN classifier with different k parameters for both 10-fold CV and LOSO with 22 classes. The graph also shows that 10-fold CV results are more successful than LOSO; the best accuracies in this graph belong to the Relief feature selection algorithm.
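Relief's weighting scheme, which gave the best accuracies in the figure above, can be sketched as follows. This is a toy NumPy version of the original two-class Relief idea (reward features that separate the nearest miss, penalize those that separate the nearest hit); it is illustrative, not the exact variant used in the paper, and all names and the synthetic data are assumptions.

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Relief: weight features by nearest-miss minus nearest-hit separation."""
    rng = np.random.default_rng(seed)
    X = X.astype(float)
    span = X.max(axis=0) - X.min(axis=0)       # per-feature scale
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(axis=1)       # L1 distance to sample i
        d[i] = np.inf                          # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))   # nearest same class
        miss = np.argmin(np.where(y != y[i], d, np.inf))  # nearest other class
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

# toy data: feature 0 tracks the class, feature 1 is pure noise
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + rng.normal(0, 0.1, 100), rng.normal(0, 1.0, 100)])
w = relief_weights(X, y)
# the informative feature should receive a clearly larger weight
```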

selection method, Relief, was used to determine the most relevant features for prediction. Relief [38] applies a feature weighting methodology: features that are strongly related to the response receive high weights, while features with small weights are discarded from the subset. This method ranks the selected features by their significance for predicting the response.

2.3.3. Classification methods

The features extracted from the images were given to classifiers. We used two popular and effective classification methods, k-nearest neighbor (k-NN) and support vector machines (SVM). In terms of speed and accuracy, they are more successful than most other classification methods [39]. The k-NN technique classifies a new input with regard to its similarity to the training set: the distances between the new instance and the samples in the training set are computed, and the new instance is classified according to its nearest neighbors' classes [40]. LibSVM [41] is a library package that extends the abilities of classical SVM; it can perform multiclass as well as binary classification. The training set was utilized to build a model, which was then applied to the test set to predict the classes.

2.3.4. Classifier validation

Classifier validation techniques are utilized to observe the generalization performance of the classifiers. In this study, 10-fold cross validation (CV) and leave-one-subject-out (LOSO) methods were employed to validate the classifiers.
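The leave-one-subject-out protocol with the best-performing k-NN setting (k = 1) can be sketched as follows, in pure NumPy on synthetic data. The toy dataset (a feature drifting with the class, as opacity drifts with PMI) and all names are illustrative assumptions, not the paper's data or code.

```python
import numpy as np

def nn1_predict(train_X, train_y, test_X):
    """1-nearest-neighbor prediction with Euclidean distance."""
    d = np.linalg.norm(train_X[None, :, :] - test_X[:, None, :], axis=2)
    return train_y[np.argmin(d, axis=1)]

def loso_accuracy(X, y, subjects):
    """Leave-one-subject-out: each subject is held out as an unseen test case."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        pred = nn1_predict(X[~test], y[~test], X[test])
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(0)
n_subjects, n_classes = 10, 15        # as in the paper's 60-min setting
X, y, subj = [], [], []
for s in range(n_subjects):
    for c in range(n_classes):
        # features drift with the class index, plus per-subject noise
        X.append([c + rng.normal(0, 0.3), 2 * c + rng.normal(0, 0.3)])
        y.append(c)
        subj.append(s)
X, y, subj = np.array(X), np.array(y), np.array(subj)
acc = loso_accuracy(X, y, subj)
```

Because each fold trains only on the other nine subjects, LOSO accuracy approximates performance on a genuinely unseen case, which is why the paper regards it as the practically relevant protocol.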

4. Discussion

All features were scaled to lie in the range [−1, 1] before classification. We tested four k parameters for the k-NN classifier: 1, 3, 5, and 7. For the LibSVM classifier, we investigated linear and radial basis function (RBF) kernels. We determined the optimal values for the kernel width parameter (γ) and the cost parameter (c) via a grid search over candidate values; the combination (γ, c) = (0.8, 10) had the best performance. Table 3 shows the accuracies of the classifiers with different parameters using 10-fold CV. The rows labeled "Non" give the accuracies of the classifiers when fed with all features, while the LASSO and Relief rows give the accuracies when fed with the features selected by the respective feature selection algorithms. The performance of the classifiers increased as the number of classes decreased. LibSVM with an RBF kernel and k-NN with k = 1 had the best performance. The contribution of feature selection to the classification accuracy was not significant when using 10-fold CV. Table 4 presents the LOSO classification results. According to the

The experimental results indicate that computer-assisted image analysis of postmortem opacity development of the eye may solve the time of death estimation problem. Reducing the number of classes significantly increased the accuracies of the classifiers under both 10-fold CV and LOSO. LOSO is more suitable for practical purposes because each subject is tested as a previously unseen subject. Two popular feature selection algorithms were implemented to select the most relevant of the 61 extracted features. LASSO and Relief mostly eliminated the mean values of the color channels, which suggests that the standard deviations of the color channels may be more informative for grading the opacity level than the means. LASSO discarded some texture features (i.e., the energy of the corneal region, and the energy and homogeneity of the non-corneal region), while Relief excluded only the contrast value of the non-corneal region. Although reducing the feature dimensions with LASSO and Relief increased the speed of the classifiers, it did not contribute significantly to their accuracy. Due to the robustness of the classifiers, irrelevant features did not substantially affect the correct classification ratio. However, higher dimensional datasets with more features might erode this robustness, and elimination of irrelevant features within the dataset

3. Experimental results

Table 3
10-fold CV classification accuracy (%) of the classifiers with different numbers of classes.

                           LibSVM             k-NN
Intervals                  Linear   Radial    k=1    k=3    k=5    k=7
20-min          Non         66.6     71.5     69.7   67.1   64.8   64.2
                LASSO       65.1     71.7     71.1   64.4   63.7   62.4
                Relief      66.2     71.7     69.1   64.0   65.1   63.1
40-min          Non         74.0     75.3     74.2   68.6   65.1   61.7
                LASSO       73.7     75.7     73.5   65.7   63.5   60.8
                Relief      73.7     76.8     77.1   68.6   67.3   63.5
60-min          Non         82.4     84.0     83.3   79.3   69.5   65.7
                LASSO       83.1     83.3     83.5   78.0   70.2   64.0
                Relief      81.1     84.8     83.1   79.5   71.1   65.3
80-min          Non         80.9     85.2     85.2   78.4   74.3   72.0
                LASSO       81.8     83.4     84.7   77.0   73.4   72.2
                Relief      80.9     85.9     84.3   78.4   76.1   72.7
100-min         Non         87.8     89.0     87.8   85.4   85.4   83.3
                LASSO       89.0     87.5     88.5   85.0   84.2   82.3
                Relief      83.5     88.0     87.8   87.1   85.2   83.0

Table 4
LOSO classification accuracy (%) of the classifiers with different numbers of classes.

                           LibSVM             k-NN
Intervals                  Linear   Radial    k=1    k=3    k=5    k=7
20-min          Non         46.6     22.2     53.6   51.3   48.2   40.0
                LASSO       45.4     33.3     47.7   47.7   45.6   43.5
                Relief      42.7     33.3     50.7   48.8   57.7   57.7
40-min          Non         53.3     31.8     60.6   55.4   53.2   52.5
                LASSO       55.5     36.6     61.4   57.1   54.1   52.2
                Relief      53.3     36.3     59.0   58.6   55.8   53.0
60-min          Non         64.4     33.0     66.1   60.0   57.7   46.6
                LASSO       60.0     55.5     71.1   64.0   62.2   60.0
                Relief      57.7     46.6     64.4   64.4   55.5   48.8
80-min          Non         68.1     40.0     73.3   68.8   62.2   53.3
                LASSO       66.7     42.2     75.0   66.6   64.4   62.3
                Relief      62.0     51.1     66.7   62.2   60.0   60.0
100-min         Non         76.1     42.8     76.1   69.0   68.7   64.3
                LASSO       76.2     55.5     77.7   71.1   51.1   46.6
                Relief      65.0     52.3     73.8   71.4   69.0   66.6


Fig. 3. (a) 10-fold CV and (b) LOSO results for different numbers of classes (i.e., increased time gaps), with standard deviations.

Fig. 4. k-NN accuracy results with different k parameters for both 10-fold CV and LOSO for 22 classes with standard deviations.

might increase the accuracy significantly. k-NN computes the distance between a sample to be tested and all training samples in the dataset, and classifies the new input based on the nearest k neighbors. Since small k values increase the accuracy, it can be concluded that the samples within the dataset are distinct from each other. Extension of the dataset may overcome this problem, because extended datasets will include more similar results. The dataset may be extended to cover a greater number of cases and wider environmental conditions; classifiers trained with such a dataset could then determine the PMI of a corpse in real-life applications. This study was developed based on the previous study [31]. Corneal and non-corneal regions are automatically detected by the image processing



algorithm rather than manual selection. Whereas the previous research investigated the suitability of postmortem eye opacification for time of death estimation, this study focuses on time of death prediction using machine learning algorithms. This kind of computational approach to PMI estimation has previously been reported only for a limited number of animal subjects [26, 27]; we have improved the computational approach with computer vision and machine learning algorithms and tested it on human subjects. Kawashima et al. [24] utilized RGB color space values of the corneal region to generate a regression model for PMI estimation, which influenced our method for determining the ROIs. In this paper, additional color spaces besides RGB are investigated for both the corneal and non-corneal regions. Although the RGB color space carries only color information, other color spaces include information on light intensity, color depth, purity of color, etc. There are studies reporting morphological postmortem alterations of the cornea in animal and human subjects [42, 43]; in contrast, we use the anterior surface of the eye for PMI prediction with a practical approach. Some methods proposed in the literature for PMI estimation are not practical: they require significant amounts of time and skilled experts to elucidate the results. In contrast, the method proposed herein is very practical, does not require expertise, and could be made available to forensic experts even as a mobile application. An extended online database could be used to generate a prediction model from which the PMI of a subject can be determined.

[7] L. Varetto, O. Curto, Long persistence of rigor mortis at constant low temperature, Forensic Sci. Int. 147 (2005) 31–34.
[8] A. Nishida, H. Funaki, M. Kobayashi, Y. Tanaka, Y. Akasaka, T. Kubo, H. Ikegaya, Blood creatinine level in postmortem cases, Sci. Justice 55 (2015) 195–199.
[9] P. Martins, F. Ferreira, R.N. Jorge, M. Parente, A. Santos, Necromechanics: death-induced changes in the mechanical properties of human tissues, Proc. Inst. Mech. Eng. Part H J. Eng. Med. 229 (2015) 343–349.
[10] J.K. Suckling, M.K. Spradley, K. Godde, A longitudinal study on human outdoor decomposition in central Texas, J. Forensic Sci. 61 (2016) 19–25.
[11] D.L. Cockle, L.S. Bell, Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations, Forensic Sci. Int. 253 (2015).
[12] M.T. Ferreira, E. Cunha, Can we infer post mortem interval on the basis of decomposition rate? A case from a Portuguese cemetery, Forensic Sci. Int. 226 (2013) 298.e1–298.e6.
[13] A.A. Vass, S.A. Barshick, G. Sega, J. Caton, J.T. Skeen, J.C. Love, J.A. Synstelien, Decomposition chemistry of human remains: a new methodology for determining the postmortem interval, J. Forensic Sci. 47 (2002) 542–553.
[14] T.O. Rognum, S. Holmen, M.A. Musse, P.S. Dahlberg, A. Stray-Pedersen, O.D. Saugstad, S.H. Opdal, Estimation of time since death by vitreous humor hypoxanthine, potassium, and ambient temperature, Forensic Sci. Int. 262 (2016) 160–165.
[15] A.K. Parmar, S.K. Menon, Estimation of postmortem interval through albumin in CSF by simple dye binding method, Sci. Justice 55 (2015) 388–393.
[16] B. Zilg, S. Bernard, K. Alkass, S. Berg, H. Druid, A new model for the estimation of time of death from vitreous potassium levels corrected for age and temperature, Forensic Sci. Int. 254 (2015) 158–166.
[17] C. Cordeiro, R. Seoane, A. Camba, E. Lendoiro, M.S. Rodriguez-Calvo, D.N. Vieira, J.I. Munoz-Barus, The application of flow cytometry as a rapid and sensitive screening method to detect contamination of vitreous humor samples and avoid miscalculation of the postmortem interval, J. Forensic Sci. 60 (2015) 1346–1349.
[18] H.V. Chandrakanth, T. Kanchan, B.M. Balaraj, H.S. Virupaksha, T.N. Chandrashekar, Postmortem vitreous chemistry – an evaluation of sodium, potassium and chloride levels in estimation of time since death (during the first 36 h after death), J. Forensic Leg. Med. 20 (2013) 211–216.
[19] A. Yildirim, B. Demirel, T. Akar, E. Şenol, H. Erdamar, Usage of vitreous humour hypoxanthine and potassium values for the estimation of postmortem interval, HealthMED 5 (2011) 1129–1136.
[20] N.K. Tumram, R.V. Bardale, A.P. Dongre, Postmortem analysis of synovial fluid and vitreous humour for determination of death interval: a comparative study, Forensic Sci. Int. 204 (2011) 186–190.
[21] K.D. Jashnani, S.A. Kale, A.B. Rupani, Vitreous humor: biochemical constituents in estimation of postmortem interval, J. Forensic Sci. 55 (2010) 1523–1527.
[22] L. Iancu, T. Sahlean, C. Purcarea, Dynamics of necrophagous insect and tissue bacteria for postmortem interval estimation during the warm season in Romania, J. Med. Entomol. 53 (2016) 54–66.
[23] F. Tuccia, G. Giordani, S. Vanin, A combined protocol for identification of maggots of forensic interest, Sci. Justice 56 (2016) 264–268.
[24] W. Kawashima, K. Hatake, R. Kudo, M. Nakanishi, S. Tamaki, S. Kasuda, K. Yuui, A. Ishitani, Estimating the time after death on the basis of corneal opacity, J. Forensic Res. 6 (2015) 1.
[25] B. Kumar, V. Kumari, T. Mahto, A. Sharma, A. Kumar, Determination of Time Elapsed since Death from the Status of Transparency of Cornea in Ranchi in Different Weathers, 2012.
[26] L. Zhou, Y. Liu, L. Liu, L. Zhuo, M. Liang, F. Yang, L. Ren, S. Zhu, Image analysis on corneal opacity: a novel method to estimate postmortem interval in rabbits, J. Huazhong Univ. Sci. Technol. Med. Sci. 30 (2010) 235–239.
[27] F. Liu, S. Zhu, Y. Fu, F. Fan, T. Wang, S. Lu, Image analysis of the relationship between changes of cornea and postmortem interval, in: PRICAI 2008: Trends in Artificial Intelligence, 2008, pp. 998–1003.
[28] S. Tsunenari, M. Kanda, The post-mortem changes of corneal turbidity and its water content, Med. Sci. Law 17 (1977) 108–111.
[29] D. Fang, Y. Liang, H. Chen, The advance on the mechanism of corneal opacity and its application in forensic medicine, Forensic Sci. Technol. 2 (2007) 36–38.
[30] Y. Balci, H. Basmak, B.K. Kocaturk, A. Sahin, K. Ozdamar, The importance of measuring intraocular pressure using a tonometer in order to estimate the postmortem interval, Am. J. Forensic Med. Pathol. 31 (2010) 151–155.
[31] İ. Cantürk, S. Çelik, M.F. Şahin, F. Yağmur, S. Kara, F. Karabiber, Investigation of opacity development in the human eye for estimation of the postmortem interval, Biocybern. Biomed. Eng. 37 (2017) 559–565.
[32] P. Viola, M. Jones, Rapid object detection using a boosted cascade of simple features, in: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), IEEE, 2001, pp. I-511–I-518.
[33] N. Otsu, A threshold selection method from gray-level histograms, Automatica 11 (1975) 23–27.
[34] G.D. Finlayson, B. Schiele, J.L. Crowley, Comprehensive colour image normalization, in: European Conference on Computer Vision, Springer, Berlin, Heidelberg, 1998, pp. 475–490.
[35] H.R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, 1997.
[36] R.M. Haralick, K. Shanmugam, Textural features for image classification, IEEE Trans. Syst. Man Cybern. (1973) 610–621.
[37] R. Tibshirani, Regression shrinkage and selection via the Lasso, J. R. Stat. Soc. Ser. B Methodol. 58 (1996) 267–288.
[38] K. Kira, L.A. Rendell, A practical approach to feature selection, Morgan Kaufmann, San Mateo, 1992.

5. Conclusion

A computer-aided system comprising image processing and machine learning components was developed to estimate PMI. The system automatically detected the corneal and non-corneal regions of the eye and extracted a total of 61 features from those regions. The extracted features were fed into the machine learning system, which includes two feature selection algorithms, two classification algorithms, and two validation methods. The feature selection algorithms (LASSO and Relief) were utilized to reduce the dimensions of the feature matrix; the classification algorithms (k-NN and LibSVM) were used to predict the time of death; and the validation methods (10-fold CV and LOSO) measured the classifiers' generalization performance on the extracted features. One of the main contributions of this study is an image processing system that effectively identifies the corneal and non-corneal regions of the eye and extracts features from both. Secondly, a machine learning system was developed to estimate PMI from the extracted features. To the best of our knowledge, this is the first study to develop a computational system that determines the PMI of human cases based on features extracted from the anterior surface of the eye. Further studies are needed to verify postmortem eye opacification as a tool for time of death estimation.

Conflicts of interest

None declared.

References

[1] İ. Cantürk, F. Karabiber, S. Çelik, M.F. Şahin, F. Yağmur, S. Kara, An experimental evaluation of electrical skin conductivity changes in postmortem interval and its assessment for time of death estimation, Comput. Biol. Med. 69 (2016) 92–96.
[2] Y. Igari, Y. Hosokai, M. Funayama, Rectal temperature-based death time estimation in infants, Leg. Med. 19 (2016) 35–42.
[3] M.R. Rodrigo, A nonlinear least squares approach to time of death estimation via body cooling, J. Forensic Sci. 61 (2016) 230–233.
[4] M.R. Rodrigo, Time of death estimation from temperature readings only: a Laplace transform approach, Appl. Math. Lett. 39 (2015) 47–52.
[5] M. Kaliszan, R. Hauser, G. Kernbach-Wighton, Estimation of the time of death based on the assessment of post mortem processes with emphasis on body cooling, Leg. Med. 11 (2009) 111–117.
[6] M. Ozawa, K. Iwadate, S. Matsumoto, K. Asakura, E. Ochiai, K. Maebashi, The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model, Leg. Med. 15 (2013) 293–297.


[39] İ. Cantürk, F. Karabiber, A machine learning system for the diagnosis of Parkinson's disease from speech signals and its application to multiple speech signal types, Arabian J. Sci. Eng. 41 (2016) 5049–5059.
[40] X.D. Wu, V. Kumar, J.R. Quinlan, J. Ghosh, Q. Yang, H. Motoda, G.J. McLachlan, A. Ng, B. Liu, P.S. Yu, Z.H. Zhou, M. Steinbach, D.J. Hand, D. Steinberg, Top 10 algorithms in data mining, Knowl. Inf. Syst. 14 (2008) 1–37.
[41] C.C. Chang, C.J. Lin, LIBSVM: a library for support vector machines, ACM Trans. Intell. Syst. Technol. 2 (2011).

[42] M. Nioi, P.E. Napoli, R. Demontis, E. Locci, M. Fossarello, E. d'Aloja, Morphological analysis of corneal findings modifications after death: a preliminary OCT study on an animal model, Exp. Eye Res. 169 (2018) 20–27.
[43] G. Prieto-Bonete, M.D. Perez-Carceles, A. Luna, Morphological and histological changes in eye lens: possible application for estimating postmortem interval, Leg. Med. 17 (2015) 437–442.
