A computer-aided healthcare system for cataract classification and grading based on fundus image analysis

Computers in Industry xxx (2014) xxx–xxx

Contents lists available at ScienceDirect: Computers in Industry, journal homepage: www.elsevier.com/locate/compind
Liye Guo a, Ji-Jiang Yang b,e,*, Lihui Peng a, Jianqiang Li c, Qingfeng Liang d

a Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing, China
b Research Institute of Information and Technology, Tsinghua University, Beijing, China
c School of Software Engineering, Beijing University of Technology, Beijing, China
d Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
e Research Institute of Application Technology in Wuxi, Tsinghua University, Jiangsu, China

Article history: Received 29 January 2014; received in revised form 27 June 2014; accepted 26 September 2014; available online xxx.

Abstract

This paper presents a fundus-image-analysis-based computer-aided system for automatic cataract classification and grading. It offers great potential to reduce the burden on well-experienced ophthalmologists (a scarce resource) and to help cataract patients in under-developed areas learn their cataract condition in a timely manner and obtain treatment suggestions from doctors. The system is composed of fundus image pre-processing, image feature extraction, and automatic cataract classification and grading. Wavelet-transform-based and sketch-based methods are investigated to extract from the fundus image features suitable for cataract classification and grading. After feature extraction, a multi-class discriminant analysis algorithm is used for cataract classification, including two-class (cataract or non-cataract) classification and cataract grading into mild, moderate, and severe. A real-world dataset, including fundus image samples with mild, moderate, and severe cataract, is used for training and testing. The preliminary results show that, for the wavelet-transform-based method, the correct classification rates of two-class classification and cataract grading are 90.9% and 77.1%, respectively. For the sketch-based method, the corresponding rates are 86.1% and 74.0%, which are comparable. The pilot study demonstrates that fundus image analysis for cataract classification and grading is very helpful for improving the efficiency of fundus image review and the quality of ophthalmic healthcare. We believe this work can serve as a useful reference for the development of similar health information systems for other medical diagnosis problems. © 2014 Elsevier B.V. All rights reserved.

Keywords: fundus image classification; cataract detection; ophthalmic disease; healthcare improvement; healthcare system.

1. Introduction

With the development of information technology, computer-aided healthcare, which integrates medical devices and healthcare information systems to improve healthcare quality and productivity, is attracting increasing attention. In particular, computer-aided healthcare systems may provide a solution in developing areas where medical resources are scarce. One example is the diagnosis and treatment of cataract. The WHO's 2004 report shows that 53.8 million people globally suffer moderate to severe disability caused by cataract,

* Corresponding author at: Research Institute of Information and Technology, Tsinghua University, Beijing, China. Tel.: +86 1062788788 13. E-mail addresses: [email protected] (L. Guo), [email protected] (J.-J. Yang), [email protected] (L. Peng), [email protected] (J. Li), lqfl[email protected] (Q. Liang).

52.2 million of whom are in low- and middle-income countries [1]. Although early and correct diagnosis helps patients reduce their suffering, millions of them, especially in under-developed areas, can hardly get a chance to receive treatment in hospital because of limited healthcare resources, a lack of healthcare information, and economic constraints. It has long been desirable to develop a convenient and cost-effective computer-aided auxiliary diagnosis system able to help cataract patients in under-developed areas learn their cataract condition in a timely manner and obtain treatment suggestions from doctors. In this paper, we demonstrate a possible solution: a computer-aided healthcare system for cataract classification and grading based on fundus image analysis. The retina consists of several light-sensitive neuron layers lining the inner surface of the eye, in which many diseases manifest themselves, such as macular degeneration, glaucoma and diabetic retinopathy [2]. Ophthalmologists and scientists have been

http://dx.doi.org/10.1016/j.compind.2014.09.005; 0166-3615/© 2014 Elsevier B.V. All rights reserved.

Please cite this article in press as: L. Guo, et al., A computer-aided healthcare system for cataract classification and grading based on fundus image analysis, Comput. Industry (2014), http://dx.doi.org/10.1016/j.compind.2014.09.005


seeking ways to examine the retina for a long time. Jan Evangelista Purkinje invented the ophthalmoscope in 1823 and Charles Babbage improved it in 1845 [3,4]. In 1851, von Helmholtz reinvented the ophthalmoscope, which is regarded as a revolution in ophthalmology. In 1910, Allvar Gullstrand developed the first fundus camera [5] and received the Nobel Prize in Physiology or Medicine in 1911. Nowadays, fundus imaging is widely used for primary retinal imaging, routine physical examination and population screening programs because of its safety and cost-effectiveness.

A cataract is a clouding or dulling of the lens inside the eye, which leads to decreased vision and is considered the most common cause of blindness. Cataract brings many difficulties to patients' lives, such as appreciating colors and changes in contrast, driving, reading, recognizing faces and coping with glare from bright lights [6]. The longer a patient lives with cataract, the lower his or her vision becomes. Cataract can be diagnosed by ophthalmologists or optometrists with a slit lamp, and then classified with the lens opacities classification system (LOCS) [7]. Cataract removal surgery is the most effective treatment and is usually conducted when the cataract badly affects the patient's daily life and work.

Although the cataract is not in the retina, the clouding of the crystalline lens reduces the light focused on the retina, degrading the quality of the fundus image. By judging the difference between a non-cataract fundus image and a cataract one, well-experienced ophthalmologists can decide whether to conduct surgery. The present cataract examination equipment and methods, such as the lens opacities classification system (LOCS) [4], are intricate for most patients and can only be operated by well-experienced ophthalmologists.
To apply these methods and perform a diagnosis, the well-experienced ophthalmologist has to be physically close to the patient. This makes the ophthalmologist a scarce resource and a bottleneck that renders large-scale early-stage screening of cataract impossible. The fundus image, in contrast, can be obtained easily with only the help of nurses from community services, or even by the patients themselves. This paper focuses on fundus image analysis and fully automatic cataract classification. Its goal is to reduce the burden on scarce resources and improve the effectiveness and efficiency of fundus image review, thereby enabling active and enhanced healthcare services.

Studies on fundus image analysis have been conducted for years. Segmentation and location of retinal structures, such as retinal lesions [8–10], vessels [11–17], the optic disc [18–21] and the fovea [22,23], have been widely studied. Based on these techniques, researchers are also trying to develop diagnosis systems for specific retina-related conditions, including microaneurysms [10], diabetic retinopathy [24–29], age-related macular degeneration [30], glaucoma [31–34] and cardiovascular diseases [35]. Li et al. have made an effort to classify and diagnose specific cataracts automatically by

slit-lamp images and retro-illumination images, covering nuclear cataract [36–39], cortical cataract [40] and posterior sub-capsular cataract [41,42]. However, little work has been reported on cataract classification and grading using fundus images.

Fig. 1 shows fundus images of non-cataract and cataract eyes in different gradings. In image (a), without cataract, the blood vessels are shown very clearly, even the capillary ones. The more severe the cataract, the more clouded the lens, so that fewer vessels can be observed in the fundus image. There are fewer vessel details in mild cataract patients' fundus images, only the trunk vessels and few details in moderate ones, and hardly anything in severe ones. By selecting appropriate features, such as those detecting vessel details in fundus images, it is possible to recognize cataract and classify its grading automatically.

The motivation of this paper is to develop a fundus-image-analysis-based automatic classification and grading system for cataract so that patients can receive a preliminary classification and get suggestions from ophthalmologists in a timely, convenient and even remote manner, while hospitals can manage their insufficient medical resources more efficiently and devote more capacity to cataract treatment instead of preliminary cataract screening.

The contributions of this paper can be summarized as follows: (1) A computer-aided healthcare system for cataract classification and grading based on fundus image analysis is proposed. It provides great potential to reduce the burden on well-experienced ophthalmologists (the scarce resource) and to enable large-scale cataract screening, and similar approaches are worthwhile for other medical diagnosis problems.
(2) An experimental study on exploiting fundus images for cataract classification and grading is described, in which two feature extraction methods, based respectively on the wavelet transform and on the sketch with discrete cosine transform, are investigated. Empirical experiments on real-world datasets are reported, which show that, for the wavelet-transform-based method, the correct classification rates of two-class classification and cataract grading are 90.9% and 77.1%, respectively. For the sketch-based method, the rates are 86.1% and 74.0%, respectively, which are comparable to the wavelet-transform-based method. (3) A pilot study of the proposed approach in a real-world usage scenario is described. It demonstrates that fundus image analysis for cataract classification and grading is very helpful for improving the efficiency of fundus image review and ophthalmic healthcare quality. We believe that our pilot study can serve as a useful reference for the development of similar health information systems for enhanced medical care.

The paper is organized as follows. Section 2 introduces the framework of our automatic classification and grading system for

Fig. 1. Fundus images of non-cataract and cataract in different gradings. (a) Non-cataract; (b) mild; (c) moderate; (d) severe.



cataract. Section 3 discusses the feature extraction methods and the algorithms for cataract classification and grading. Section 4 shows the results. The case study on the application of the proposed solution to enable the large scale cataract screening is reported in Section 5. Finally, conclusions are drawn in Section 6.

2. The framework of the computer-aided cataract classification and grading system

The basic requirement of this research is to develop a convenient and cost-effective computer-aided auxiliary diagnosis system for automatic cataract classification and grading, one that couples multiple types of healthcare service providers loosely together so that they can collaboratively provide high-quality medical care to patients in remote regions. Accordingly, we propose the cataract classification and grading system shown in Fig. 2. The main component of the system consists of three parts, i.e., fundus image pre-processing, feature extraction, and automatic cataract classification and grading. These three parts run on the server of an ophthalmic hospital and can be integrated into its existing information systems. The existing information system already cooperates with many community clinics, remote rural hospitals and other hospitals, sharing healthcare resources through the internet. This cooperative mode has greatly improved medical service quality and clinic efficiency [43]. The proposed cataract classification and grading system retrieves a fundus image from the server, conducts image analysis and automatic cataract classification, and returns a report on the classification results. In this way, the automatic grading and classification system replaces manual screening and decouples the co-located relationship between patients and well-experienced ophthalmologists (the scarce resource). Since non-cataract images are not sent to the ophthalmologist for checking, a large amount of the work burden on this scarce resource is removed, enabling ophthalmologists to spend more time on the patients who really need their attention. Inside our system, each fundus image is converted into a set of features after pre-processing and feature extraction, from which results are obtained by the classification algorithm. The result gives the ophthalmologist a reference on the condition of cataract patients.

3. Cataract feature extraction and classification

Extracting appropriate features from the fundus image, features that can represent different cataract degrees, plays an important role in our automatic classification and grading system. As shown in Fig. 1, the non-cataract fundus image shows clear optic structure details, such as vessels and the macula. In contrast, cataract fundus images show fewer optic structure details. For this reason, an intuitive approach to selecting features sensitive to different gradings of cataract is to use localized features related to the high-frequency components, such as the details related to edges or sudden small peaks. These details are usually hard to recognize and quantitatively assess in the spatial domain but are obvious in the frequency domain. Based on these considerations, in this paper the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), both commonly used for the analysis of localized features in signals, are investigated for cataract feature extraction.

3.1. Feature extraction based on wavelet transform

The wavelet transform has been used in fundus image analysis to provide a high contrast between blood vessels and background [12,44–48]. Many functions can be used as the mother function for the wavelet transform. Since the Haar wavelet transform can be carried out with additions only (no multiplications are needed because most elements of the Haar wavelet transform matrix are zero), it achieves high computing efficiency. We therefore use the Haar wavelet transform in this paper to decompose the fundus image into 3 levels to distinguish the high

Fig. 2. The framework of the cataract classification and grading system.



frequency components of the fundus image, which are mainly related to the vessels, from the low-frequency component representing the background. For example, Fig. 3 depicts the distribution of the coefficients corresponding to the third-level horizontal details of the Haar wavelet transformation of the non-cataract image in Fig. 1(a) and the severe cataract image in Fig. 1(d). The coefficients can be calculated in the matrix form of Eq. (1):

S_n = H_n F_n H_n^T    (1)

where F_n is the image as an n × n matrix, H_n is the n-point Haar transform matrix, and S_n is the n × n matrix of Haar wavelet coefficients of F_n. It can be seen from Fig. 3 that the coefficients lie predominantly in the range [−1, 1]. In addition, there is a significant difference between the coefficient amplitudes of the non-cataract image and the severe cataract image.

To further quantify the difference between the coefficient distributions of fundus images with different gradings, the support of the wavelet coefficients, [−1, 1], is divided into 10 regions. The coefficients corresponding to the horizontal, vertical and diagonal details of the second and third levels of the Haar wavelet transformation of a fundus image are then counted by amplitude into these 10 regions, yielding a histogram of wavelet coefficients. Fig. 4 shows the histograms for 8 samples of fundus images with different cataract gradings. The coefficients of the Haar wavelet transformations of fundus images with different cataract gradings show different characteristics: the number of coefficients in each amplitude region differs significantly across gradings, which implies that these counts can be used as features for cataract classification and grading.

To validate this analysis, a number of samples consisting of non-cataract, mild cataract, moderate cataract and severe cataract images are selected randomly to construct the data matrix

D = [ I_{i,j} ],  i = 1, 2, ..., N;  j = 1, 2, ..., 10    (2)

where I_{i,j} is the number of coefficients of the third- or second-level details of the Haar wavelet transformation of the i-th fundus image falling in the j-th amplitude region, and N is the number of samples. By applying principal component analysis to the data matrix, the samples of different cataract gradings can be clearly distinguished, as shown in Fig. 5.
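The feature construction just described (second- and third-level Haar detail coefficients counted into 10 amplitude bins over [−1, 1]) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: a grayscale image normalized to [0, 1] with power-of-two dimensions. The paper does not give its exact normalization or decomposition code, so the details here are illustrative only.

```python
import numpy as np

def haar_step(img):
    """One level of the 2-D Haar transform by row/column averaging and
    differencing (equivalent to S = H F H^T with the Haar matrix H)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0        # row approximation
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0        # row detail
    t = np.hstack([a, d])
    a = (t[0::2, :] + t[1::2, :]) / 2.0            # column approximation
    d = (t[0::2, :] - t[1::2, :]) / 2.0            # column detail
    return np.vstack([a, d])

def haar_details(img, level):
    """Horizontal, vertical and diagonal detail sub-bands at `level`."""
    approx = np.asarray(img, dtype=float)
    for _ in range(level):
        h, w = approx.shape
        t = haar_step(approx)
        hl, lh, hh = t[:h // 2, w // 2:], t[h // 2:, :w // 2], t[h // 2:, w // 2:]
        approx = t[:h // 2, :w // 2]               # LL band feeds the next level
    return hl, lh, hh

def histogram_feature(img, levels=(2, 3), bins=10):
    """Count the level-2 and level-3 detail coefficients into 10 amplitude
    regions spanning [-1, 1], as in the data matrix D of Eq. (2)."""
    edges = np.linspace(-1.0, 1.0, bins + 1)
    counts = np.zeros(bins)
    for lv in levels:
        for band in haar_details(img, lv):
            hist, _ = np.histogram(np.clip(band, -1.0, 1.0), bins=edges)
            counts += hist
    return counts
```

For an 8 × 8 image this yields 12 detail coefficients at level 2 and 3 at level 3, so the 10 bin counts sum to 15; stacking one such row per image gives the N × 10 matrix D fed to PCA.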

3.2. Feature extraction based on the sketch method with discrete cosine transform

The wavelet-transform-based method presented above uses the number of coefficients in different amplitude regions as the feature for cataract classification and grading. The larger the counts in the high-amplitude regions, the more detailed high-frequency components the original fundus image contains, which also means the image shows less cataract. Another way to evaluate the high-frequency components of an image is the sketch-based approach, which has been used successfully in the recognition of volcanoes in images [49]. Following a similar approach, feature extraction based on the sketch method together with the discrete cosine transform is explored for cataract classification and grading.

Fig. 6 depicts the sketch-based feature extraction for cataract classification and grading. The intensity of the fundus image is sampled uniformly along 18 radial lines and 5 concentric circles of different radii, of which only 4 radial lines and 3 concentric circles are shown in Fig. 6. When a vessel crosses a sampling line or circle, it forms a corresponding local peak. The more details the image has, the more fluctuations appear in the sampled data. Fig. 7 shows an example of the signals sampled on a sketch line. Comparing the sampled signals in Fig. 7(a) and (b), which correspond to the non-cataract and severe cataract fundus images respectively, shows that signals sampled from fundus images with different cataract gradings fluctuate differently. This difference becomes even clearer after the signals are processed with the cosine transform. Accordingly, the variances of the cosine transform coefficients of the signals sampled on the 18 sketch lines and 5 sketch circles are calculated and taken as the features for classification.
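A sketch of this sampling-plus-DCT feature (18 radial lines, 5 concentric circles, variance of the cosine transform coefficients of each sampled signal) is given below. The number of samples per line, the circle radii and the exclusion of the DC coefficient are our own assumptions, since the paper does not specify them; the DCT-II is written out directly so the sketch needs nothing beyond NumPy.

```python
import numpy as np

def dct_ii(x):
    """Orthonormal DCT-II of a 1-D signal, written out explicitly."""
    n = len(x)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c = basis @ np.asarray(x, dtype=float)
    c[0] *= np.sqrt(1.0 / n)
    c[1:] *= np.sqrt(2.0 / n)
    return c

def _sample(img, xs, ys):
    """Nearest-neighbour intensity lookup with border clipping."""
    xs = np.clip(np.round(xs).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(ys).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

def sketch_features(img, n_lines=18, n_circles=5, n_pts=256):
    """Variance of the DCT coefficients (DC excluded, an assumption) of the
    signals sampled on 18 radial lines and 5 concentric circles: 23 features."""
    cy, cx = (img.shape[0] - 1) / 2.0, (img.shape[1] - 1) / 2.0
    rmax = min(cx, cy) - 1.0
    feats = []
    for i in range(n_lines):                       # radial lines
        ang = 2.0 * np.pi * i / n_lines
        r = np.linspace(0.0, rmax, n_pts)
        sig = _sample(img, cx + r * np.cos(ang), cy + r * np.sin(ang))
        feats.append(np.var(dct_ii(sig)[1:]))
    for j in range(1, n_circles + 1):              # concentric circles
        rad = rmax * j / (n_circles + 1)
        t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
        sig = _sample(img, cx + rad * np.cos(t), cy + rad * np.sin(t))
        feats.append(np.var(dct_ii(sig)[1:]))
    return np.array(feats)
```

A uniform image produces zero variances (no detail anywhere), while images with more vessel crossings produce larger fluctuations and hence larger variances, matching the qualitative behaviour shown in Fig. 7.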
Furthermore, by applying principal component analysis to the data matrix composed of the sketch-and-cosine-transform-based features of the sample images, the samples of different cataract gradings are clearly distinguished, as shown in Fig. 8.

4. Initial result evaluation

We collected 445 fundus images to construct the data set for training and testing the classifier, of which 199, 148, 71 and 27 images are non-cataract, mild cataract, moderate cataract and severe cataract, respectively. Following the feature extraction approaches described in Section 3, multi-class discriminant analysis is used for the classification and grading of cataract [50]. The multi-class Fisher classification algorithm is trained on 70% of the samples, selected randomly from the data set, and then tested on the remaining 30%. By repeating the training and testing procedure 100 times, the overall performance of the classification and grading methods is obtained.
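The evaluation protocol, a multi-class Fisher discriminant trained on a random 70% of the samples and tested on the remaining 30%, repeated many times, can be sketched as follows. The small ridge added to the within-class scatter and the nearest-projected-centroid decision rule are implementation choices of this sketch, not details taken from the paper.

```python
import numpy as np

def fit_fisher(X, y):
    """Multi-class Fisher discriminant: project onto the directions that
    maximise between-class over within-class scatter."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                          # within-class scatter
    Sb = np.zeros((d, d))                          # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # leading eigenvectors of Sw^{-1} Sb; the ridge keeps Sw invertible
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    W = evecs[:, np.argsort(-evals.real)[:len(classes) - 1]].real
    centroids = {c: (X[y == c] @ W).mean(axis=0) for c in classes}
    return W, centroids

def predict_fisher(model, X):
    """Assign each sample to the class with the nearest projected centroid."""
    W, centroids = model
    Z = X @ W
    labels = list(centroids)
    dist = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in labels])
    return np.array(labels)[np.argmin(dist, axis=0)]

def evaluate(X, y, repeats=100, train_frac=0.7, seed=0):
    """Random 70/30 split, train, test; report mean/min/max accuracy."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        n_tr = int(train_frac * len(y))
        model = fit_fisher(X[idx[:n_tr]], y[idx[:n_tr]])
        accs.append(np.mean(predict_fisher(model, X[idx[n_tr:]]) == y[idx[n_tr:]]))
    return np.mean(accs), np.min(accs), np.max(accs)
```

With the 445-image data set, X would hold one feature row per image (e.g., the 10 wavelet-histogram counts or the 23 sketch variances) and y the grade labels; repeats=100 mirrors the paper's protocol.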

Fig. 3. Distribution of the coefficients of the Haar wavelet transformation of fundus images. (a) Coefficients related to Fig. 1(a); (b) coefficients related to Fig. 1(d).



Fig. 4. Samples of fundus images with different cataract gradings and the related histograms of their Haar wavelet transformation coefficients. (a) Non-cataract fundus image sample 1; (b) non-cataract fundus image sample 2; (c) mild cataract fundus image sample 1; (d) mild cataract fundus image sample 2; (e) moderate cataract fundus image sample 1; (f) moderate cataract fundus image sample 2; (g) severe cataract fundus image sample 1; (h) severe cataract fundus image sample 2; (i) histogram of Haar wavelet transformation coefficients.

Fig. 5. Visualization of the PCA result of wavelet transformation coefficients data matrix for cataract images in different gradings.

Fig. 6. Sketch based feature extraction for cataract classification and grading.



Fig. 7. An example of the signals sampled by using the sketch approach and the related cosine transformed coefficients. (a) Sampled signal for non-cataract fundus image; (b) sampled signal for severe cataract fundus image; (c) cosine transform coefficients of signal in (a); (d) cosine transform coefficients of signal in (b).

Table 1 presents the classification and grading results using wavelet-transform-based feature extraction. The mean correct classification rate is 90.9% for two-class cataract classification and 77.1% for cataract grading. Table 2 presents the results using the sketch method with the discrete cosine transform. Compared with Table 1, the mean correct classification rates for both two-class classification and grading degrade slightly but remain comparable. Simply combining the features extracted by the wavelet transform with those from the sketch method with the cosine transform yields mean correct classification rates of 89.3% for two-class classification and 73.8% for grading, as shown in Table 3, which is not the improvement one might expect. How to fuse the features of the wavelet-transform-based method and

the sketch method with the cosine transform so as to improve the correct classification rate still needs to be investigated.

Thus far, the preliminary test results are based on the multi-class discriminant analysis algorithm. Many other classifiers and algorithms, such as ANN-based methods, SVM-based methods and the AdaBoost algorithm, have been used successfully in pattern recognition and classification. Future work will focus on applying other algorithms to improve the performance of fundus-image-based cataract classification and grading. From the point of view of practical application, it is also necessary to consider the robustness of our cataract classification and grading algorithm with respect to the scale of the data set.

Table 1. Wavelet transform based classification and grading results.

Correct classification rate    Two-class (non-cataract vs. cataract)    Grading of cataract
Maximum (%)                    97.1                                      86.5
Minimum (%)                    83.5                                      67.7
Mean (%)                       90.9                                      77.1

Fig. 8. Visualization of the PCA result of the sketch-method-and-cosine-transformation-based feature data matrix for cataract images of different gradings.

Table 2. The sketch method based classification and grading results.

Correct classification rate    Two-class (non-cataract vs. cataract)    Grading of cataract
Maximum (%)                    95.5                                      85.7
Minimum (%)                    77.4                                      63.2
Mean (%)                       86.1                                      74.0



Table 3. The hybrid of wavelet transform and sketch method based classification and grading results.

Correct classification rate    Two-class (non-cataract vs. cataract)    Grading of cataract
Maximum (%)                    95.5                                      84.2
Minimum (%)                    78.2                                      59.4
Mean (%)                       89.3                                      73.8

5. The real-world application

This section introduces the implementation and deployment of the proposed fundus-image-based cataract classification and grading solution on a real-world cloud, the Regional Healthcare Collaboration Platform (RHCP). As shown in Fig. 9, the usage scenario of the pilot study on the RHCP involves three types of parties: (1) community hospitals, optometry and glasses centers, or physical examination centers; (2) ophthalmic hospitals or departments of ophthalmology in class A hospitals; (3) a picture review center or ophthalmic laboratories with a picture archiving and communication system (PACS). In a real-world case, parties (2) and (3) might be located in one organization. The staff members in party (1) are nurses or general medical practitioners. They know how to operate the medical instruments or equipment for ophthalmologic examination, for instance fundus image capturing, but their ophthalmic knowledge is limited. The workers in party (2) are ophthalmic experts or eye doctors. They provide ophthalmic medical care to their patients, including fundus image checking, ophthalmic disease diagnosis and treatment recommendation. The employees in party (3) are skillful ophthalmic inspectors or technicians specialized in fundus image checking.

The implementation and deployment of the fundus image classification solution includes two phases, the build-time phase and the run-time phase. In the former, the component for fundus image classification is deployed in the RHCP for automatic cataract detection. A software interface is developed to connect the instruments or equipment for ophthalmologic examination to the storage service in the RHCP, through which the medical sensing data (e.g., the fundus images) produced by the examinations are stored automatically in the RHCP. In this pilot study, the collected fundus images are shared with and accessed by the other two parties.
In the run-time phase, whenever potential patients (e.g., people undergoing health checks) go to party (1) for an ophthalmic check, their fundus images are stored remotely in the RHCP. At the same time, the fundus image classification component is triggered. When the two-class classification result is cataract, a notification message is sent to the laboratory technician in party (3) for validation. If the classification result is confirmed, an email from party (1) is sent to the corresponding potential patient informing him or her of an early stage of cataract. The contact information of the eye doctor in party (2) is attached, from whom the potential patient can obtain instructions for early intervention. At the current trial stage, only two-class image classification is adopted in the practical application, the goal of which is to screen for cataract at an early stage.

The advantages of our proposed solution can be summarized in two aspects: (A) Improving the efficiency of fundus image review. Without the automatic cataract screening component, fundus images have to be checked by the ophthalmic inspector in party (3), which is the bottleneck for large-scale cataract screening. The incidence rate of cataract is about 5% [56], which means that about 95% of the ophthalmic inspectors' work burden is removed by adopting automatic fundus image classification. Along this line, the cost of cataract screening is reduced remarkably. This improvement offers great potential to make large-scale screening and early intervention for cataract a reality. (B) Enhancing healthcare quality. When a fundus image is classified as cataract, the result is validated by the ophthalmic inspector. Such a double-checking process improves the correctness of the cataract diagnosis. In addition, with the automatic fundus image classification component, the ophthalmic inspector only needs to review about 5% of the population undergoing ophthalmic checks in party (1). This significantly reduces the response time for each individual and improves the medical experience.
We also report some findings and lessons learned that can serve as references for the development of similar systems for enhanced healthcare service.

5.1. Enabling active healthcare service [53]

The application of fundus image classification for automatic cataract detection enables early intervention for potential cataract patients. The usage scenario in Fig. 9 shows that multiple types of healthcare service providers are loosely coupled together to bring high-quality healthcare to remote regions. The general medical practitioners in local community hospitals or physical examination centers collect the fundus images. The ophthalmic picture review center provides the fundus image checking service for the large-scale screening of early-stage cataract. The expert doctors in the ophthalmic hospital provide professional instructions and any necessary treatment. Such a flexible integration pattern of different medical care services offers great potential to improve healthcare in undeveloped, impoverished areas without sufficient healthcare resources.

5.2. Matching information technology to healthcare requirements

Fig. 9. Usage scenario of the real-world application of the proposed fundus image classification algorithm for automatic cataract detection.

To provide concrete incentives for adopting information technologies for enhanced healthcare, we need to work closely with the end users and find the matching points between their real requirements and the proposed technologies. The original task of this research was to build a classifier for four-class categorization. Its core is the highly accurate differentiation between mild and moderate cataract fundus images, the result of which would guide the eye doctor in party (2) on whether or not to conduct cataract surgery. However, our initial cataract grading results disappointed the eye doctors, since they cannot tolerate that an image of moderate cataract is

Please cite this article in press as: L. Guo, et al., A computer-aided healthcare system for cataract classification and grading based on fundus image analysis, Comput. Industry (2014), http://dx.doi.org/10.1016/j.compind.2014.09.005

classified as mild cataract. However, the close working relationship with the different ophthalmic parties let us learn that the bottleneck for large-scale cataract screening lies in the task of fundus image inspection, which had bothered them for a long time. When we recommended the two-class fundus image classification solution to replace the manual work of fundus image checking, the ophthalmic inspector in party (3) showed strong interest in this technology, since the cataract screening task can tolerate a certain level of misclassification (even though a small portion of cataract cases might be missed, the broad population will benefit from large-scale cataract screening). Our experience shows that identifying the matching point between the requirements and the available information technologies is very important for their adoption in the medical domain; it is also the key to convincing end users and realizing mutual motivation between the researchers and the end users.

5.3. Cross-boundary medical data utilization

Based on the initial evaluation results of the classification of fundus images, we find that one dominant reason for misclassification might lie in the fact that the fundus images from different sites have different quality and cover different groups of the population (e.g., young or senior people). In future work, we will collect multiple types of contextual information regarding the fundus images and use heterogeneous mixture learning [54,55] to build the cataract classifier. Under certain privacy protection mechanisms [51,52,57], the fusion of this contextual information from multiple sources has great potential for improving the accuracy of the screening and automatic detection of ophthalmic diseases.

6. Conclusions

A computer-aided auxiliary diagnosis system for cataract classification and grading from fundus images is presented.
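The wavelet-feature-plus-discriminant-analysis approach the system uses can be illustrated with a minimal, self-contained sketch. Everything below is an illustrative assumption rather than the paper's actual implementation: a one-level Haar decomposition with per-band energy features, a hand-rolled two-class Fisher discriminant, and synthetic arrays standing in for fundus images (a cataractous fundus exhibits less high-frequency retinal detail, which the low-variance class mimics).

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar decomposition: approximation + 3 detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def features(img):
    """Summarize each sub-band by its mean absolute coefficient (energy)."""
    return np.array([np.mean(np.abs(b)) for b in haar2(img)])

def fit_fisher(X0, X1):
    """Two-class Fisher discriminant: direction w and midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)                       # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    thr = (X0 @ w).mean() / 2 + (X1 @ w).mean() / 2
    return w, thr

rng = np.random.default_rng(0)
clear  = [rng.normal(0.0, 1.0, (64, 64)) for _ in range(20)]  # rich detail
cloudy = [rng.normal(0.0, 0.2, (64, 64)) for _ in range(20)]  # low detail
X0 = np.stack([features(i) for i in clear])    # class 0: non-cataract
X1 = np.stack([features(i) for i in cloudy])   # class 1: cataract
w, thr = fit_fisher(X0, X1)

def predict(img):
    return int(features(img) @ w > thr)

acc = np.mean([predict(i) == 0 for i in clear] + [predict(i) == 1 for i in cloudy])
print(acc)
```

In the real system, the features would be computed from pre-processed fundus photographs and the discriminant analysis extended to the multi-class grading setting; the toy classes here are deliberately well separated.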
The system is helpful for improving cataract screening in underdeveloped areas without sufficient healthcare resources, so that cataract patients in these areas can know their cataract conditions in a timely manner and obtain treatment suggestions from doctors. The wavelet transform based method and the sketch based method with cosine transformation are used to extract features from fundus images. A multi-class discriminant analysis algorithm is used for cataract classification, including two-class (cataract or non-cataract) classification and cataract grading into mild, moderate, and severe. The preliminary test results on a data set of 445 fundus image samples show that the performance of the wavelet transform based method is better than that of the sketch based method, with correct classification rates of 90.9% and 77.1% for two-class classification and cataract grading, respectively. The real-world pilot study on the implementation and deployment of the automatic fundus image classification system is reported. By retrieving fundus images from the Cloud server, conducting image analysis and automatic cataract classification, and returning reports on the classification results, the proposed solution provides great potential to reduce the work burden of well-experienced ophthalmologists (the scarce resources) and to improve ophthalmic healthcare quality. It is worth generalizing and adapting the solution to other medical diagnosis problems with similar characteristics.

Acknowledgements

This research was supported by the following grants: China 973 project no. 2011CB302505; China National Key Technology Research and Development Program project no. 2013BAH19F01.

References

[1] C.D. Mathers, D.M. Fat, J.T. Boerma, The Global Burden of Disease: 2004 Update, World Health Organization, 2008.
[2] M.D. Abramoff, M.K. Garvin, M. Sonka, Retinal imaging and image analysis, IEEE Reviews in Biomedical Engineering 3 (2010) 169–208.
[3] C.R. Keeler, 150 years since Babbage's ophthalmoscope, Archives of Ophthalmology 115 (11) (1997) 1456–1457.
[4] C.S. Flick, Centenary of Babbage's ophthalmoscope, The Optician 113 (2925) (1947) 246.
[5] A. Gullstrand, Neue Methoden der reflexlosen Ophthalmoskopie, Berichte Deutsche Ophthalmologische Gesellschaft 36 (1910).
[6] D. Allen, A. Vasavada, Cataract and surgery for cataract, British Medical Journal 333 (7559) (2006) 128.
[7] L.T. Chylack, et al., The lens opacities classification system III, Archives of Ophthalmology 111 (1993) 1506.
[8] M.D. Abramoff, B. van Ginneken, M. Niemeijer, Automatic detection of red lesions in digital color fundus photographs, IEEE Transactions on Medical Imaging 24 (5) (2005) 584–592.
[9] G.B. Kande, T.S. Savithri, P.V. Subbaiah, Automatic detection of microaneurysms and hemorrhages in digital fundus images, Journal of Digital Imaging 23 (4) (2010) 430–437.
[10] M. Niemeijer, et al., Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs, IEEE Transactions on Medical Imaging 29 (1) (2010) 185–195.
[11] S. Chaudhuri, et al., Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Transactions on Medical Imaging 8 (3) (1989) 263–269.
[12] J.V. Soares, et al., Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification, IEEE Transactions on Medical Imaging 25 (9) (2006) 1214–1222.
[13] L. Gang, O. Chutatape, S.M. Krishnan, Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter, IEEE Transactions on Medical Imaging 19 (2) (2002) 168–172.
[14] J. Lowell, et al., Measurement of retinal vessel widths from fundus images based on 2-D modeling, IEEE Transactions on Medical Imaging 23 (10) (2004) 1196–1204.
[15] M.D. Abramoff, P.J. Magalhaes, S.J. Ram, Image processing with ImageJ, Biophotonics International 11 (7) (2004) 36–42.
[16] J. Staal, et al., Ridge-based vessel segmentation in color images of the retina, IEEE Transactions on Medical Imaging 23 (4) (2004) 501–509.
[17] E. Ricci, R. Perfetti, Retinal blood vessel segmentation using line operators and support vector classification, IEEE Transactions on Medical Imaging 26 (10) (2007) 1357–1365.
[18] C. Sinthanayothin, et al., Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images, British Journal of Ophthalmology 83 (8) (1999) 902–910.
[19] A. Abdel-Razik Youssif, A.Z. Ghalwash, A. Abdel-Rahman Ghoneim, Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter, IEEE Transactions on Medical Imaging 27 (1) (2008) 11–18.
[20] T. Walter, J. Klein, Segmentation of color fundus images of the human retina: detection of the optic disc and the vascular tree using morphological techniques, in: Medical Data Analysis, Springer, 2001, pp. 282–287.
[21] C. Muramatsu, et al., Automated segmentation of optic disc region on retinal fundus photographs: comparison of contour modeling and pixel classification methods, Computer Methods and Programs in Biomedicine 101 (1) (2011) 23–32.
[22] M.V. Ibañez, A. Simó, Bayesian detection of the fovea in eye fundus angiographies, Pattern Recognition Letters 20 (2) (1999) 229–240.
[23] M. Niemeijer, M.D. Abramoff, B. van Ginneken, Fast detection of the optic disc and fovea in color fundus photographs, Medical Image Analysis 13 (6) (2009) 859–870.
[24] O. Faust, et al., Algorithms for the automated detection of diabetic retinopathy using digital fundus images: a review, Journal of Medical Systems 36 (1) (2012) 145–157.
[25] R.J. Winder, et al., Algorithms for digital image processing in diabetic retinopathy, Computerized Medical Imaging and Graphics 33 (8) (2009) 608–622.
[26] C. Sinthanayothin, et al., Automated detection of diabetic retinopathy on digital fundus images, Diabetic Medicine 19 (2) (2002) 105–112.
[27] D. Usher, et al., Automated detection of diabetic retinopathy in digital retinal images: a tool for diabetic retinopathy screening, Diabetic Medicine 21 (1) (2004) 84–90.
[28] G.G. Gardner, et al., Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool, British Journal of Ophthalmology 80 (11) (1996) 940–944.
[29] M.D. Abramoff, et al., Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes, Diabetes Care 31 (2) (2008) 193–198.
[30] S. Kankanahalli, et al., Automated classification of severity of age-related macular degeneration from fundus photographs, Investigative Ophthalmology & Visual Science 54 (3) (2013) 1789–1796.
[31] J. Nayak, et al., Automated diagnosis of glaucoma using digital fundus images, Journal of Medical Systems 33 (5) (2009) 337–346.
[32] R. Chrastek, et al., Automated segmentation of the optic nerve head for diagnosis of glaucoma, Medical Image Analysis 9 (4) (2005) 297–314.
[33] R. Bock, et al., Glaucoma risk index: automated glaucoma detection from color fundus images, Medical Image Analysis 14 (3) (2010) 471–481.
[34] U.R. Acharya, et al., Automated diagnosis of glaucoma using texture and higher order spectra features, IEEE Transactions on Information Technology in Biomedicine 15 (3) (2011) 449–455.
[35] S. Willikens, et al., Retinal arterio-venule-ratio (AVR) in the cardiovascular risk management of hypertension, European Heart Journal 34 (Suppl. 1) (2013) P5002.
[36] H. Li, et al., A computer-aided diagnosis system of nuclear cataract, IEEE Transactions on Biomedical Engineering 57 (7) (2010) 1690–1698.
[37] S. Fan, et al., An automatic system for classification of nuclear sclerosis from slit-lamp photographs, in: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2003, Springer, 2003, pp. 592–601.
[38] H. Li, et al., Image based grading of nuclear cataract by SVM regression, SPIE Proceedings of Medical Imaging 6915 (2008) 691536.
[39] H. Li, et al., Towards automatic grading of nuclear cataract, in: 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, pp. 4961–4964.
[40] H. Li, et al., Image based diagnosis of cortical cataract, in: 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008, pp. 3904–3907.
[41] H. Li, et al., Automatic detection of posterior subcapsular cataract opacity for cataract screening, in: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2010, pp. 5359–5362.
[42] X. Gao, et al., Automatic grading of cortical and PSC cataracts using retroillumination lens images, in: Computer Vision – ACCV 2012, 2012, pp. 256–267.
[43] J.P. Guo, D. Pan, Digital ophthalmology regional collaboration medical service mode, China Digital Medicine 8 (9) (2013) 68–71.
[44] F. Oloumi, et al., Detection of blood vessels in fundus images of the retina using Gabor wavelets, in: 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, pp. 6451–6454.
[45] D.J. Cornforth, et al., Development of retinal blood vessel segmentation methodology using wavelet transforms for assessment of diabetic retinopathy, in: Proceedings of the 8th Asia Pacific Symposium on Intelligent and Evolutionary Systems, 2004, pp. 50–60.
[46] J.J. Leandro, et al., Blood vessels segmentation in nonmydriatic images using wavelets and statistical classifiers, in: Proceedings of the 16th Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI), 2003, pp. 262–269.
[47] R.M. Cesar Jr., H.F. Jelinek, Segmentation of retinal fundus vasculature in nonmydriatic camera images using wavelets, in: Angiography and Plaque Imaging: Advanced Segmentation Techniques, 2003, pp. 193–224.
[48] J.J. Leandro, R.M. Cesar Jr., H.F. Jelinek, Blood vessels segmentation in retina: preliminary assessment of the mathematical morphology and of the wavelet transform techniques, in: Proceedings of the 14th Brazilian Symposium on Computer Graphics and Image Processing, 2001, pp. 84–90.
[49] K.J. Cherkauer, Human expert-level performance on a scientific image analysis task by a system using combined artificial neural networks, in: Working Notes of the AAAI Workshop on Integrating Multiple Learned Models, 1996, pp. 15–21.
[50] T. Li, S. Zhu, M. Ogihara, Using discriminant analysis for multi-class classification: an experimental investigation, Knowledge and Information Systems 10 (4) (2006) 453–472.
[51] J. Li, J.-J. Yang, Y. Zhao, B. Liu, A top-down approach for approximate data anonymisation, Enterprise Information Systems 7 (3) (2013) 272–302.
[52] B.C.M. Fung, K. Wang, R. Chen, P.S. Yu, Privacy-preserving data publishing: a survey of recent developments, ACM Computing Surveys 42 (4) (2010).
[53] A.K. Hochhalter, J. Song, J. Rush, L. Sklar, A. Stevens, Making the most of your healthcare intervention for older adults with multiple chronic illnesses, Patient Education and Counseling 81 (2) (2010) 207–213.
[54] J. Li, C. Liu, B. Liu, Large scale sequential learning from partially labeled data, in: Proceedings of the IEEE International Conference on Semantic Computing, 2013, pp. 176–183.
[55] R. Fujimaki, S. Morinaga, Factorized asymptotic Bayesian inference for mixture modeling, in: Proceedings of AISTATS, 2012, pp. 400–408.
[56] M. Yang, J.-J. Yang, Q. Zhang, Y. Niu, J. Li, Classification of retinal image for automatic cataract detection, in: 15th IEEE International Conference on e-Health Networking, Applications and Services (Healthcom 2013), 2013, pp. 674–679.
[57] J.-J. Yang, J. Li, Y. Niu, A hybrid solution for medical data sharing in the cloud environment, Future Generation Computer Systems (2014), http://www.sciencedirect.com/science/article/pii/S0167739X14001253#.

Liye Guo received his B.Eng. from the Department of Automation, Tsinghua University, Beijing, China, in 2012. He is currently a Ph.D. candidate. His main research interests include medical image analysis, pattern recognition, and machine learning.


Ji-Jiang Yang received his B.S. and M.S. degrees from Tsinghua University, and his Ph.D. from the National University of Ireland (Galway). His research areas include e-health, e-government/e-commerce, privacy preservation, information resource management, data mining, and cloud computing. He is now an associate professor at Tsinghua University. Dr. Yang worked for the CIMS/ERC (Computer Integrated Manufacturing System/Engineering Research Center) of Tsinghua University from 1995 to 1999. He has joined or been in charge of different projects funded by the State Hi-Tech program (863 program), NSF (China), and the European Union. Since 2009, Dr. Yang's main focus has been e-health and medical service. He has undertaken several projects in the National Science & Technology Supporting Program on digital medical service models and key technologies, and collaborates with many medical institutions and hospitals. He is a member of the Expert Committee of IoT (Internet of Things) in Health at the Chinese Electronic Association and the Expert Committee of Remote Medicine and Cloud Computing at the Chinese Medicine Informatics Association. He has published more than 60 papers in professional journals and conferences.

Lihui Peng received the B.Eng., M.Sc., and Ph.D. degrees in Measurement Science and Electronics from Tsinghua University, Beijing, China, in 1990, 1995, and 1998, respectively. He is currently a Professor with the Department of Automation, Tsinghua University, Beijing, China. He started his academic career in 1990 as an Assistant Lecturer at Tsinghua University. He has been a member of three Chinese national technical committees since 2001 and has published more than 50 research papers. His biography has been included in Who's Who in Science and Engineering since 2006. His current research interests include medical image analysis and measurement techniques for multiphase flow, process tomography, multi-sensor data fusion, flow measurement, and instrumentation.

Jianqiang Li received his B.S. degree in Mechatronics from Beijing Institute of Technology, Beijing, China, in 1996, and his M.S. and Ph.D. degrees in Control Science and Engineering from Tsinghua University, Beijing, China, in 2001 and 2004, respectively. He worked as a researcher at the Digital Enterprise Research Institute, National University of Ireland, Galway, in 2004–2005. From 2005 to 2013, he worked at NEC Labs China as a researcher, and at the Department of Computer Science, Stanford University, as a visiting scholar in 2009–2010. He joined Beijing University of Technology, Beijing, China, in 2013 as a Beijing Distinguished Professor. His research interests include Petri nets, enterprise information systems, business processes, data mining, information retrieval, the semantic web, privacy protection, and big data. He has over 40 publications, including 1 book, 10+ journal papers, and 37 international patent applications (19 of them granted in China, the US, or Japan). He has served as a PC member of multiple international conferences and organized the IEEE workshop on medical computing. He served as Guest Editor of a special issue on information technology for enhanced healthcare service in Computers in Industry.

Qingfeng Liang received his B.S. degree in Clinical Medicine from the Medical School of Dongnan University, Nanjing, China, in 1996, and his M.S. and Ph.D. degrees in Ophthalmology from Capital Medical University, Beijing, China, in 2005 and 2014, respectively. He works as an Assistant Professor at the Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China. His research interests include eye disease investigation and screening, clinical histopathologic characteristics, gene mutations, and NTM keratitis.
