Neurocomputing 195 (2016) 6–12
Cross domain mitotic cell recognition

Tong Hao a, Ai-Ling Yu a, Wei Peng a, Bin Wang a, Jin-Sheng Sun a,b,*

a Tianjin Key Laboratory of Animal and Plant Resistance/College of Life Science, Tianjin Normal University, Tianjin 300387, China
b Tianjin Aquatic Animal Infectious Disease Control and Prevention Center, Tianjin 300221, China

* Corresponding author at: Tianjin Key Laboratory of Animal and Plant Resistance/College of Life Science, Tianjin Normal University, Tianjin 300387, China and Tianjin Aquatic Animal Infectious Disease Control and Prevention Center, Tianjin 300221, China. E-mail address: [email protected] (J.-S. Sun).
Article info

Article history: Received 9 April 2015; Received in revised form 14 May 2015; Accepted 1 June 2015; Available online 12 February 2016

Keywords: Mitotic cell; Cross-domain learning; Target domain; Auxiliary domain; Max margin principle

Abstract

Accurate and automated identification of mitosis is essential and challenging for many biomedical applications. To address this challenge, we propose a novel mitotic cell recognition method that integrates heterogeneous data in a cross-domain learning framework. First, we extract a discriminative feature to represent the local structure and textural saliency of each individual cell sample. Second, cell type-dependent classifiers are respectively trained on the target domain and the auxiliary domain and then fused in the adaptive support vector machine framework for cross-domain learning. The resulting classifier can be applied to mitotic cell recognition in a cross-domain manner. Extensive experiments on two kinds of phase contrast microscopy image sequences (C3H10T1/2 & C2C12) show that the proposed method can leverage datasets from multiple domains to boost the performance by effectively transferring knowledge from the auxiliary domain to the target domain. It can therefore overcome the inconsistency of feature distributions across domains.
1. Introduction

Recently, researchers have been paying increasing attention to applying image analysis and machine learning to various routine clinical pathology tests [1–4]. Results output by these advanced techniques can be leveraged for further subjective analysis by scientists, which makes the test results more reliable and consistent across laboratories [4]. The related computer-aided methods have been widely used in many medical applications, especially breast cancer diagnosis [5]. Among measurable characteristics, the detection and counting of mitotic cells is currently considered the optimal predictor of long-term prognosis for breast carcinomas [6]. However, manual annotation of mitosis is a time-consuming task [7]. It is therefore essential to explore methods for automated mitotic cell recognition.

To our knowledge, most previous methods address the learning problem within a single domain [8,9]. Since both the training set and the test set are prepared under the same conditions, the classifier learned on the training set can be straightforwardly applied to the test set for prediction. However, state-of-the-art public cell image datasets for this classification problem usually contain a limited number of samples. Facing the difficulties caused by the variation of non-rigid cell appearance,
irregular motion, the existence of artifacts introduced by the imaging devices, etc., it is essential to take advantage of classifiers trained in different domains [10,11] and to augment the dataset so as to learn a robust classifier with high generalization ability. However, large-scale dataset preparation and the manual segmentation and annotation of specific cell regions are usually expensive. Consequently, it is reasonable to explore methods that can take advantage of different datasets for model learning [12,13].

In this paper, we propose a cross-domain mitosis modeling method for automated mitotic cell recognition. First, a discriminative feature is extracted for each individual sample to represent its local structure and textural saliency. Then, the adaptive support vector machine is adapted to this task for cross-domain learning. Two kinds of phase contrast microscopy image sequences (C3H10T1/2 & C2C12) are selected for the experiments. The extensive experiments show that the proposed method can leverage datasets from multiple domains to boost the performance by effectively transferring knowledge from the auxiliary domain to the target domain. The main contributions are two-fold: 1. The method can overcome the inconsistency of feature distributions in different domains. 2. It can reduce the requirement for large-scale manual annotation in the target domain by implicitly augmenting the training data with an existing annotated auxiliary dataset.

The rest of the paper is structured as follows. Section 2 introduces the related work. Section 3 details the method
for cross-domain mitotic cell modeling. The experimental method and results are detailed in Sections 4 and 5, respectively. Finally, the conclusion is presented.
2. Related work

Automated mitotic cell classification methods usually consist of two essential steps: visual feature representation and mitotic modeling.

From the viewpoint of feature representation, various image features have been used for this task. Liu et al. [14] extracted area features (area and convex area), shape features (eccentricity, major axis length, minor axis length and orientation), and intensity features (maximum intensity, mean intensity, and minimum intensity) to represent the candidate mitotic region. Perner et al. [15] and Hiemann et al. [4] employed textural descriptors for this task. Cordelli and Soda [16] and Strandmark et al. [17] developed specific morphological features to describe the saliency of the ROI. Li et al. [18] extracted volumetric Haar-like features to represent the spatiotemporal volume in the image sequence. Moreover, popular local saliency descriptors, such as GIST [19,20], the Scale-Invariant Feature Transform (SIFT) [21–23], and the Histogram of Oriented Gradients (HoG) [24], have also been applied to this task because of their high discriminative capability and robustness. To improve the discrimination of the feature representation, the strategy of fusing multiple image features has been widely evaluated [25,26]. More recently, sparse representation [27] has also been used to transform the aforementioned low-level visual features into a high-level formulation that directly represents the similarity between samples. Liu et al. [28,24] showed that sparse representation-based methods can benefit the discovery of discriminative features compared with classic hand-crafted low-level features. Furthermore, Liu et al. [20] extended traditional sparse representation to sequential sparse representation by imposing temporal context regularization to induce a sequential image feature transform.

From the viewpoint of model learning, increasingly powerful classifiers, including support vector machines, random forests, AdaBoost, etc. [29], have been applied to mitosis recognition. Liu et al. [30] proposed a clustered multi-task learning-based method to discover the latent relatedness among multiple cell types and boost the performance of cell classification. Recently, the popular deep learning principle [31] has also been applied to this task. Ciresan et al. [32,33] successfully leveraged deep neural networks for mitosis detection; this approach won the ICPR 2012 and MICCAI 2013 mitosis detection competitions. Another trend for this task is based on graphical models [34–37], which can learn the sequential dynamics within one mitosis event. Gallardo et al. [38] applied the hidden Markov model to train a classifier for mitosis recognition with cell shape and appearance saliency. Liu et al. [39] applied the hidden-state conditional random field to learn the sequential structure of mitosis progression. Recently, Liu et al. [23] designed a semi-Markov model for mitosis sequence segmentation. In particular, they proposed integrating hidden conditional random fields with the semi-Markov model for both mitosis identification and localization in time-lapse microscopy image sequences, and theoretically unified the learning of both models under the max-margin theory. Extensive experiments and comparisons with competing methods demonstrated that this method achieves state-of-the-art performance on mitosis detection.
This method has become one of the most representative frameworks for mitosis detection.
3. Cross-domain modeling

This step aims to learn a model that can be applied for automatic mitotic cell recognition on individual unlabeled images. Previous work simply learned and tested the classifier $f(x)$ on two non-overlapping parts of one dataset, which captures one cell type under the same environment (a single domain), so the feature representations of the mitotic candidate regions in the training and test sets follow an identical distribution. In contrast, our work takes advantage of datasets from both a target domain and an auxiliary domain to augment the generalization ability of the learned model.

Specifically, let $D_T$ denote the target domain (e.g. the C3H10T1/2 cells in Fig. 1(a)) and let $D_T = D_{T_l} \cup D_{T_u}$ denote its two parts, where $D_{T_l}$ is the manually labeled part and $D_{T_u}$ is the unlabeled part. Typically, the size of $D_{T_l}$ is small compared with the size of $D_T$. The labeled subset can be represented by $D_{T_l} = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the visual feature of the $i$th sample and $y_i \in \{-1, +1\}$ is its binary label. The auxiliary domain, capturing the other cell type under a different environment (e.g. the C2C12 cells in Fig. 1(b)), is denoted by $D_A$. Because $D_T$ and $D_A$ are captured under different conditions, various aspects, including cell type, image resolution, lighting and so on, will have a negative influence on direct cross-domain learning and testing. Simply concatenating the datasets from both domains might not improve the performance and could even degrade it. To deal with this problem, we adapt the adaptive support vector machine (ASVM) [40] for cross-domain mitotic learning.

The ultimate goal of our task is to learn a primary classifier $f(x)$ for automatic mitotic cell recognition on the target domain $D_T$. When only a limited dataset is available for primary classifier learning, we leverage the auxiliary dataset to enlarge the effective training data and consequently improve the performance. In our work, we first train the auxiliary classifier $f^A(x)$ by SVM on the auxiliary dataset. Then $f^A(x)$ is integrated with the primary classifier $f(x)$ for cross-domain learning and testing.

To integrate the auxiliary classifier $f^A(x)$ and the primary classifier $f(x)$, an incremental term of the form $\Delta f(x) = w^\top \phi(x)$ is designed and added to $f^A(x)$. The proposed cross-domain mitotic modeling method can be formulated as

$$f(x) = f^A(x) + \Delta f(x) = f^A(x) + w^\top \phi(x) \tag{1}$$
Motivated by the max-margin principle, the objective function for learning the parameter $w$ can be formulated as

$$\min_{w} \ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{N}\xi_i$$
$$\text{s.t.} \quad y_i f^A(x_i) + y_i w^\top \phi(x_i) \geq 1 - \xi_i, \quad \xi_i \geq 0 \tag{2}$$
The Lagrangian formulation of Eq. (2) is

$$L_P = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{N}\xi_i - \sum_{i=1}^{N}\mu_i\xi_i - \sum_{i=1}^{N}\alpha_i\big(y_i f^A(x_i) + y_i w^\top \phi(x_i) - (1-\xi_i)\big) \tag{3}$$

where $\alpha_i \geq 0$ and $\mu_i \geq 0$ are Lagrange multipliers. Eq. (3) can be optimized by setting its derivative with respect to $w$ and $\xi$ to zero. Consequently, we obtain

$$w = \sum_{i=1}^{N}\alpha_i y_i \phi(x_i), \qquad \alpha_i = C - \mu_i, \quad \forall i \tag{4}$$
The Karush–Kuhn–Tucker (KKT) conditions that constrain the optimal solution of Eq. (3) can be formulated as follows:

$$\alpha_i\big\{y_i f^A(x_i) + y_i w^\top \phi(x_i) - (1-\xi_i)\big\} = 0$$
$$y_i f^A(x_i) + y_i w^\top \phi(x_i) - (1-\xi_i) \geq 0$$
$$\xi_i\mu_i = 0, \quad \alpha_i \geq 0, \quad \mu_i \geq 0, \quad \xi_i \geq 0 \tag{5}$$

Fig. 1. C3H10T1/2 and C2C12 mitotic and non-mitotic samples.

By substituting Eq. (4) into Eq. (3), the following Lagrange dual objective function is obtained:

$$L_D = \sum_{i=1}^{N}(1-\varepsilon_i)\alpha_i - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j K(x_i, x_j) \tag{6}$$

where $\varepsilon_i$ is defined as $\varepsilon_i = y_i f^A(x_i)$. The multipliers $\alpha = \{\alpha_i\}_{i=1}^{N}$ can be optimized by maximizing $L_D$ under the constraint $0 \leq \alpha_i \leq C, \ \forall i$. Maximizing $L_D$ over $\alpha$ is a quadratic programming problem; we adopted sequential minimal optimization [41] to solve it. With the optimal $\hat{\alpha}$, the decision function becomes

$$f(x) = f^A(x) + \sum_{i=1}^{N}\hat{\alpha}_i y_i K(x, x_i) \tag{7}$$

where $(x_i, y_i) \in D_{T_l}$.
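To make the learning procedure concrete, the following Python sketch illustrates Eqs. (6) and (7) under stated assumptions; it is not the authors' implementation. The auxiliary classifier $f^A$ is trained with a standard RBF-kernel SVM, and the box-constrained dual $L_D$ is maximized with a generic bounded solver (L-BFGS-B) rather than the SMO solver [41] used in the paper. The names X_aux, y_aux, X_tgt, y_tgt, C and gamma are illustrative, and labels are assumed to be numpy arrays with values in {-1, +1}.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def train_asvm(X_aux, y_aux, X_tgt, y_tgt, C=1.0, gamma=0.5):
    """Sketch of adaptive-SVM training: f(x) = f^A(x) + sum_i alpha_i y_i K(x, x_i)."""
    # 1) auxiliary classifier f^A(x): a standard RBF-kernel SVM on the auxiliary domain
    f_aux = SVC(kernel="rbf", C=C, gamma=gamma).fit(X_aux, y_aux)

    # 2) dual of Eq. (6): maximize sum_i (1 - eps_i) alpha_i - 0.5 * alpha^T Q alpha
    #    under box constraints 0 <= alpha_i <= C, where eps_i = y_i f^A(x_i)
    K = rbf_kernel(X_tgt, X_tgt, gamma=gamma)
    Q = (y_tgt[:, None] * y_tgt[None, :]) * K
    eps = y_tgt * f_aux.decision_function(X_tgt)

    def neg_dual(alpha):
        return -np.sum((1.0 - eps) * alpha) + 0.5 * alpha @ Q @ alpha

    def neg_dual_grad(alpha):
        return -(1.0 - eps) + Q @ alpha

    res = minimize(neg_dual, np.zeros(len(y_tgt)), jac=neg_dual_grad,
                   bounds=[(0.0, C)] * len(y_tgt), method="L-BFGS-B")
    alpha_hat = res.x

    # 3) decision function of Eq. (7): the auxiliary score plus the learned perturbation
    def decision(X):
        return f_aux.decision_function(X) + rbf_kernel(X, X_tgt, gamma=gamma) @ (alpha_hat * y_tgt)

    return decision
```

Because the dual contains only box constraints (there is no bias term and hence no equality constraint), any bound-constrained quadratic solver can replace SMO without changing the solution.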
4. Experimental method

4.1. Data
We selected two challenging datasets from the CMU Cell Image Analysis Group [42] for cross-domain learning, which respectively captured C3H10T1/2 mesenchymal stem cells and C2C12 myoblastic stem cells under different living conditions. C3H10T1/2 cells were grown in Dulbecco's Modified Eagle's Medium (DMEM; Invitrogen, Carlsbad, CA), 10% fetal bovine serum (Invitrogen, Carlsbad,
CA) and 1% penicillin streptomycin (PS; Invitrogen, Carlsbad, CA). C2C12 cells were grown in DMEM, 10% bovine serum (Invitrogen, Carlsbad, CA) and 1% PS. All cells were kept at 37 °C, 5% CO2 in a humidified incubator. Both C3H10T1/2 and C2C12 cells were observed during growth under a Zeiss Axiovert 135TV inverted microscope with a 5×, 0.15 N.A. objective and phase contrast optics. Time-lapse image acquisition was performed every 5 min using a 12-bit Qimaging Retiga EXi Fast 1394 CCD camera at 500 ms exposure with a gain of 1.01. The image resolution is 1392 × 1040. We randomly selected two sequences for both C3H10T1/2 and C2C12 cells. Each C3H10T1/2 sequence contains 1436 images and each C2C12 sequence contains 1013 images. For both datasets, we manually cropped the mitotic regions and resized them to 50 × 50 pixels. The non-mitotic regions were selected automatically by randomly setting seed points as centers and then extracting the surrounding 50 × 50 pixel regions. There were 148 mitotic samples and 391 non-mitotic samples in the 1st sequence (Seq1) and 175 mitotic samples and 500 non-mitotic samples in the 2nd sequence (Seq2) of C3H10T1/2. Since there were many more mitotic samples in both C2C12 sequences, 300 mitotic samples and 300 non-mitotic regions were randomly selected for the experiment to avoid data imbalance between the two datasets.
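As an illustration of the patch preparation just described (a minimal sketch, not the code used in this work), the snippet below crops 50 × 50 mitotic patches around annotated centers and extracts non-mitotic patches around randomly seeded center points. The file name and the annotation coordinates are hypothetical.

```python
import numpy as np
from skimage import io

PATCH = 50  # patch size in pixels, matching the 50 x 50 regions described above

def crop_patch(image, center, size=PATCH):
    """Crop a size x size patch centered at (row, col), clipped to the image border."""
    r, c = center
    half = size // 2
    r0 = int(np.clip(r - half, 0, image.shape[0] - size))
    c0 = int(np.clip(c - half, 0, image.shape[1] - size))
    return image[r0:r0 + size, c0:c0 + size]

def sample_negatives(image, n, rng=np.random.default_rng(0)):
    """Randomly seed n center points and extract the surrounding 50 x 50 regions."""
    rows = rng.integers(0, image.shape[0], size=n)
    cols = rng.integers(0, image.shape[1], size=n)
    return [crop_patch(image, (r, c)) for r, c in zip(rows, cols)]

frame = io.imread("frame_0001.tif")       # one 1392 x 1040 phase contrast frame (hypothetical file)
mitotic = crop_patch(frame, (520, 730))   # manually annotated mitosis center (hypothetical coordinates)
non_mitotic = sample_negatives(frame, 5)
```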
4.2. Experiments

Given the visual characteristics shown in Fig. 1, GIST [19] was adopted for low-level feature extraction, since it can represent the local texture and structural information in the prepared C3H10T1/2 and C2C12 datasets. The image is first divided into dense grids, and the attributes of each grid are represented by a set of spatial envelope properties, including naturalness, openness, roughness, expansion, and ruggedness. The attributes of all grids are then concatenated to form the GIST descriptor.
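The snippet below gives a minimal sketch of a GIST-style descriptor: a small Gabor filter bank is applied to the patch and the filter energies are pooled over a coarse spatial grid. It is not the exact implementation of [19]; the grid size, filter frequencies and orientations are assumptions.

```python
import numpy as np
from skimage.filters import gabor
from skimage.transform import resize

def gist_like(patch, grid=4, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """GIST-style descriptor: Gabor energies pooled over a grid x grid spatial layout."""
    patch = resize(patch, (64, 64), anti_aliasing=True)
    features = []
    for f in frequencies:
        for k in range(n_orient):
            theta = np.pi * k / n_orient
            real, imag = gabor(patch, frequency=f, theta=theta)
            energy = np.hypot(real, imag)
            # pool the filter energy over a coarse grid of spatial cells
            cells = energy.reshape(grid, 64 // grid, grid, 64 // grid)
            features.append(cells.mean(axis=(1, 3)).ravel())
    return np.concatenate(features)  # length = grid*grid * len(frequencies) * n_orient
```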
To demonstrate the superiority of the proposed method, we designed two experiments:

- Experiment 1: demonstrating that the proposed method can significantly improve the performance through cross-domain learning. We respectively selected the C3H10T1/2/C2C12 dataset as the target domain and the C2C12/C3H10T1/2 dataset as the auxiliary domain to evaluate whether the proposed method can benefit mitotic cell modeling by implicitly boosting the training dataset with the auxiliary domain. Two representative methods were selected for comparison:
  - Prim: with the extracted GIST feature, an SVM classifier was trained on the training set and tested on the test set in the target domain, as Liu et al. did in [14]. No auxiliary dataset was utilized for this experiment.
  - Aux: an SVM classifier was trained on the ensemble of the training set in the target domain and the entire auxiliary dataset, and then tested on the test set in the target domain.
- Experiment 2: demonstrating that the proposed method can achieve satisfactory performance even with only a few labeled samples. Specifically, we fixed the auxiliary domain and decreased the size of the training set in the target domain to verify whether the proposed method can consistently outperform the competing methods.

The detailed information about the target domain and the auxiliary domain is shown in Table 1.

Table 1. Information on the target and auxiliary domains.

Setting1 (target domain: C3H10T1/2; auxiliary domain: C2C12, Seq1&2: 600 positive / 600 negative)

Case | Train (#positive / #negative) | Test (#positive / #negative)
S1-1 | Seq1: 148 / 391 | Seq2: 175 / 500
S1-2 | Seq1: 89 (148×0.6) / 235 (391×0.6) | Seq2: 175 / 500
S1-3 | Seq1: 45 (148×0.3) / 118 (391×0.3) | Seq2: 175 / 500

Setting2 (target domain: C3H10T1/2; auxiliary domain: C2C12, Seq1&2: 600 positive / 600 negative)

Case | Train (#positive / #negative) | Test (#positive / #negative)
S2-1 | Seq2: 175 / 500 | Seq1: 148 / 391
S2-2 | Seq2: 105 (175×0.6) / 300 (500×0.6) | Seq1: 148 / 391
S2-3 | Seq2: 53 (175×0.3) / 150 (500×0.3) | Seq1: 148 / 391

Setting3 (target domain: C2C12; auxiliary domain: C3H10T1/2, Seq1&2: 323 positive / 891 negative)

Case | Train (#positive / #negative) | Test (#positive / #negative)
S3-1 | Seq1: 300 / 300 | Seq2: 300 / 300
S3-2 | Seq1: 180 (300×0.6) / 180 (300×0.6) | Seq2: 300 / 300
S3-3 | Seq1: 90 (300×0.3) / 90 (300×0.3) | Seq2: 300 / 300

Setting4 (target domain: C2C12; auxiliary domain: C3H10T1/2, Seq1&2: 323 positive / 891 negative)

Case | Train (#positive / #negative) | Test (#positive / #negative)
S4-1 | Seq2: 300 / 300 | Seq1: 300 / 300
S4-2 | Seq2: 180 (300×0.6) / 180 (300×0.6) | Seq1: 300 / 300
S4-3 | Seq2: 90 (300×0.3) / 90 (300×0.3) | Seq1: 300 / 300
To evaluate the performance of mitotic recognition, the accuracy, computed as $\frac{\#\text{True positive}}{\#\text{Test set}}$, is calculated for comparison.
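For illustration, the snippet below shows how the three compared settings could be evaluated with this accuracy measure, reusing the hypothetical train_asvm() sketch from Section 3; the feature and label arrays are assumed to be prepared as in Table 1, with labels in {-1, +1}. This is a sketch of the protocol, not the authors' evaluation code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate_setting(X_tr, y_tr, X_te, y_te, X_aux, y_aux, C=1.0, gamma=0.5):
    """Accuracy of Prim, Aux and the proposed cross-domain classifier on one setting."""
    # Prim: SVM trained on the target-domain training set only
    prim = SVC(kernel="rbf", C=C, gamma=gamma).fit(X_tr, y_tr)
    acc_prim = accuracy_score(y_te, prim.predict(X_te))

    # Aux: SVM trained on the target training set pooled with the entire auxiliary set
    pooled = SVC(kernel="rbf", C=C, gamma=gamma).fit(np.vstack([X_tr, X_aux]),
                                                     np.concatenate([y_tr, y_aux]))
    acc_aux = accuracy_score(y_te, pooled.predict(X_te))

    # Proposed: adaptive SVM of Eq. (7), via the hypothetical train_asvm() sketch above
    f = train_asvm(X_aux, y_aux, X_tr, y_tr, C=C, gamma=gamma)
    acc_prop = accuracy_score(y_te, np.where(f(X_te) >= 0, 1, -1))

    return acc_prim, acc_aux, acc_prop
```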
Fig. 2. Comparison in Setting1.
5. Experimental results and discussion

In this section, we analyze the experimental results of the aforementioned cases.

5.1. Performance of cross-domain learning

As shown in Table 2, we alternated Seq1 and Seq2 in the target domain (either C3H10T1/2 or C2C12) as the training dataset and tested on the other, while keeping the auxiliary domain fixed. For the competing methods Prim and Aux, we selected the popular RBF kernel and used cross validation for parameter selection. The experimental results in Table 2 show that Prim works better than Aux. This implies that directly fusing datasets from different domains may have a negative influence on the performance, since the visual features from the two datasets may follow different distributions. In contrast, with the cross-domain learning framework, the proposed method consistently outperforms both Prim and Aux. On one hand, the proposed method can take advantage of both datasets for model learning, which implicitly augments the numbers of positive and negative samples in the training set. On the other hand, the proposed method treats the data from the two domains separately to learn the auxiliary classifier $f^A(x)$ and the primary classifier $f(x)$, which are then integrated in Eq. (1) to predict the final results.
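As a reference for the parameter selection mentioned above, a cross-validated grid search over the RBF-kernel SVM parameters could look as follows. The grids are assumptions, not values reported in the paper.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def select_rbf_svm(X_train, y_train):
    """Cross-validated selection of C and gamma for an RBF-kernel SVM baseline."""
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [2.0 ** p for p in range(-6, 2)]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)  # X_train, y_train: GIST features and labels of the training split
    return search.best_estimator_, search.best_params_
```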
Fig. 3. Comparison in Setting2.
5.2. Performance comparison by varying the number of training data

In this experiment, we alternated C3H10T1/2 and C2C12 as the target domain and the auxiliary domain. In each case, Seq1 and Seq2 were alternated for training and test. We further reduced the number of training samples in the target domain to 60% and 30% of the original, while keeping the test set and the auxiliary set fixed, as shown in Table 1.
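The subsampling of the target-domain training set can be sketched as below (class-stratified random selection; an illustration rather than the authors' code, and rounding may differ slightly from the counts in Table 1).

```python
import numpy as np

def subsample(X, y, fraction, rng=np.random.default_rng(0)):
    """Keep approximately `fraction` of the samples of each class (X, y are numpy arrays)."""
    keep = []
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        keep.append(rng.choice(idx, size=int(round(fraction * len(idx))), replace=False))
    keep = np.concatenate(keep)
    return X[keep], y[keep]
```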
Fig. 4. Comparison in Setting3.
Table 2. Performance of cross-domain mitotic recognition (%).

Target domain (Train) | Target domain (Test) | Auxiliary domain | Prim | Aux | Proposed
C3H10-Seq1 | C3H10-Seq2 | C2C12 Seq1&2 | 90.8 | 87.4 | 92.1
C3H10-Seq2 | C3H10-Seq1 | C2C12 Seq1&2 | 91.5 | 88.7 | 93.4
C2C12-Seq1 | C2C12-Seq2 | C3H10 Seq1&2 | 85.2 | 83.3 | 87.1
C2C12-Seq2 | C2C12-Seq1 | C3H10 Seq1&2 | 86.3 | 83.1 | 88.5

Fig. 5. Comparison in Setting4.
From Figs. 2–5, we can make the following observations:

- It is intuitive that the proposed method consistently outperforms Prim and Aux in all four settings, no matter how many training samples from the target domain were selected. This implies that the proposed method does not depend on a particular choice of target domain or auxiliary domain. Take Setting1 and Setting2 as examples. Although the proposed method achieves only around 2% absolute improvement over Prim with the full training set, it improves the performance by about 5% when the training set is reduced to 60% and 30%. Compared with Aux, the proposed method achieves significant improvements of 10% and 20% when the training set is reduced to 60% and 30%, respectively.
- The proposed method is superior to the competing methods especially when the amount of training data from the target domain is small. Take Setting1 as an example. In S1-3, there are only 45 positive samples and 118 negative samples in the target domain. The proposed method achieves 86.6% accuracy, which is even better than the performance of Prim with 89 positive samples and 235 negative samples in the target domain, and comparable to the performance of Aux with 148 positive samples and 391 negative samples in the target domain.
- Suppose we already have a labeled dataset of one cell type under one particular living condition. In a real application, it is then not necessary to fully label all samples of a new dataset with a different cell type and living condition for mitotic modeling. We can label only part of them and leverage the proposed cross-domain learning method, with the existing labeled dataset as the auxiliary domain, to obtain a satisfactory classifier.
6. Conclusion

In this paper, we propose a cross-domain mitosis modeling method for automated mitotic cell recognition. In particular, we adapt the adaptive support vector machine to the cross-domain learning problem, which can overcome the inconsistency of feature distributions in different domains. Consequently, the proposed method can reduce the requirement for large-scale manual annotation in the target domain by implicitly augmenting the training data with an existing annotated auxiliary dataset. Two kinds of phase contrast microscopy image sequences (C3H10T1/2 & C2C12) were prepared for the experiments. The extensive experiments show that the proposed method can effectively leverage datasets from multiple domains to boost the performance.
Acknowledgments This work was supported by National High-Tech Research and Development Program of China (863 programs, 2012AA10A401 and 2012AA092205), Grants of the Major State Basic Research Development Program of China (973 programs, 2012CB114405), National Natural Science Foundation of China (21106095), National Key Technology R&D Program (2011BAD13B07 and 2011BAD13B04), Tianjin Research Program of Application Foundation and Advanced Technology (15JCYBJC30700), Project of introducing one thousand high level talents in three years (5KQM110003), Foundation of Introducing Talents to Tianjin Normal University (5RL123), Academic innovation promotion project of Tianjin Normal University for young teachers (52XC1403), and ‘‘131’’ Innovative Talents cultivation of Tianjin (ZX110170).
References [1] R. Khutlang, S. Krishnan, et al., Classification of mycobacterium tuberculosis in images of znstained sputum smears, IEEE Trans. Inf. Technol. Biomed. 14 (4) (2010) 949–957. [2] T. Hsieh, Y. Huang, et al., Hep-2 cell classification in indirect immunofluorescence images, in: International Conference on Information, Communications and Signal Processing, 2009, pp. 1–4. [3] M.N. Gurcan, L.E. Boucheron, et al., Histopathological image analysis: a review, IEEE Rev. Biomed. Eng. 2 (2009) 147C171. [4] R. Hiemann, T. Büttner, et al., Challenges of automated screening and differentiation of non-organ specific autoantibodies on hep-2 cells, Autoimmun. Rev. 9 (1) (2009) 17–22. [5] P. Foggia, G. Percannella, et al., Benchmarking hep-2 cells classification methods, IEEE Trans. Med. Imaging 32 (10) (2013) 1878–1899. [6] F. Clayton, Pathologic correlates of survival in 378 lymph node-negative infiltrating ductal breast carcinomas. Mitotic count is the best single predictor, Cancer 68 (6) (1991) 1309–1317. [7] T. Fuchs, J. Buhmann, Computational pathology: challenges and promises for tissue analysis, in: Computerized Medical Imaging and Graphics, 2011. [8] Y. Su, J. Yu, A. Liu, Z. Gao, T. Hao, Z. Yang, Cell type-independent mitosis event detection via hidden-state conditional neural fields, in: IEEE International Symposium on Biomedical Imaging, 2014. [9] A. Liu, K. Li, T. Hao, A hierarchical framework for mitosis detection in timelapse phase contrast microscopy image sequences of stem cell populations, InTech, 2011. [10] A. Liu, Y. Su, P. Jia, Z. Gao, T. Hao, Z. Yang, Multipe/Single-View Human Action Recognition via Part-induced Multi-task Structural Learning, IEEE Transactions on Cybernetics 45 (6) (2015) 1194–1208. [11] A. Liu, N. Xu, Y. Su, H. Lin, T. Hao, Z. Yang, Single/Multi-view Human Action Recognition via Regularized Multi-Task Learning, Neurocomputing 151 (2) (2015) 544–553. [12] L. Duan, D. Xu, I.W. Tsang, J. Luo, Visual event recognition in videos by learning from web data, IEEE Trans.Pattern Anal. Mach. Intell. (2012) 1667–1680. [13] L. Duan, D. Xu, I.W. Tsang, Domain adaptation from multiple sources: a domain-dependent regularization approach, IEEE Trans. Neural Netw. Learn. Syst. (2012) 504–518. [14] A. Liu, K. Li, T. Kanade, Spatiotemporal mitosis event detection in time-lapse phase contrast microscopy image sequences, in: IEEE International Conference on Multimedia and Expo, 2010. [15] P. Perner, H. Perner, B. Mller, Mining knowledge for hep-2 cell image classification, Artif. Intell. Med. 26 (2002) 161–173. [16] E. Cordelli, P. Soda, Color to grayscale staining pattern representation in IIF, in: International Symposium on Computer-Based Medical Systems, 2011, pp. 1–6. [17] P. Strandmark, J. Ulen, F. Kahl, Hep-2 staining pattern classification, in: International Conference on Pattern Recognition, 2012. [18] K. Li, E. Miller, M. Chen, T. Kanade, L. Weiss, P. Campbell., Computer vision tracking of stemness, in: IEEE International Symposium on Biomedical Imaging, 2008. [19] A. Oliva, A. Torralba, Modeling the shape of the scene: a holistic representation of the spatial envelope, Int. J. Comput. Vis. 42 (3) (2001) 145–175. [20] A. Liu, T. Hao, Y. Su, Z. Yang, Sequential sparse representation for mitotic event recognition, Electron. Lett. 49 (14) (2013) 869–870. [21] D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110. [22] S. Huh, D. Ker, R. Bise, M. Chen, T. 
Kanade, Automated mitosis detection of stem cell populations in phase-contrast microscopy images, IEEE Trans. Med. Imaging 99 (2010) 1–1. [23] A. Liu, K. Li, T. Kanade, A semi-markov model for mitosis segmentation in time-lapse phase contrast microscopy image sequences of stem cell populations, IEEE Trans. Med. Imaging 31 (2) (2012) 359–369. [24] A. Liu, T. Hao, Y. Su, Z. Yang, Nonnegative mixed-norm convex optimization for mitotic cell detection in phase contrast microscopy images, Comput. Math. Methods Med., 2013 (2013), Article ID 176272, 10 pp http://dx.doi.org/10.1155/ 2013/176272. [25] P. Foggia, G. Percannella, et al., Early experiences in mitotic cells recognition on hep-2 slides, in: IEEE Symposium on Computer-Based Medical Systems, 2010. [26] V. Snell, W. Christmas, J. Kittler, Texture and shape in fluorescence pattern identification for auto-immune disease diagnosis, in: International Conference on Pattern Recognition, 2012. [27] A. Tropp, S.J. Wright, Computational methods for sparse solution of linear inverse problems, Proc. IEEE 98 (5) (2010) 948–958. [28] A. Liu, Z. Gao, T. Hao, Y. Su, Z. Yang, Sparse coding induced transfer learning for hep-2 cell classification, Bio-Med. Mater. Eng. 24 (2014) 237–243. [29] P. Foggia, G. Percannella, P. Soda, M. Vento, Benchmarking hep-2 cells classification methods, IEEE Trans. Med. Imaging 32 (10) (2013) 1878–1889. [30] A. Liu, Y. Lu, W. Nie, Y. Su, Z. Yang, Hep-2 cells classification via clustered multi-task learning, Neurocomputing, this issue. [31] K. Yu, W. Xu, Y. Gong, Deep learning with kernel regularization for visual recognition, in: Advances in Neural Information Processing Systems, 2008.
[32] A. Giusti, C. Caccia, D. Ciresan, J. Schmidhuber, L. Gambardella, A comparison of algorithms and humans for mitosis detection, in: ISBI, 2014. [33] D. Ciresan, A. Giusti, L. Gambardella, J. Schmidhuber, Mitosis detection in breast cancer histology images using deep neural networks, in: MICCAI, 2013. [34] A. Liu, Human action recognition with structured discriminative random fields, Electron. Lett. 47 (11) (2012) 651–653. [35] Y. Su, L. Ma, A. Liu, Max margin discriminative random fields for multimodal human action recognition, Electron. Lett. 50 (12) (2014) 870–872. [36] A. Liu, W. Nie, Y. Su, Li Ma, T. Hao, Z. Yang, Coupled Hidden Conditional Random Fields for RGB-D Human Action Recognition, Signal Processing 112 (2015) 74–82. [37] A. Liu, Y. Su, W. Nie, Z. Yang, Jointly learning multiple sequential dynamics for human action recognition, PLoS ONE 10 (7) (2015). [38] G. Gallardo, F. Yang, F. Ianzini, M. Mackey, M. Sonka, Mitotic cell recognition with hidden markov models, in: MICCAI, vol. 5367, 2004, pp. 661–668. [39] A. Liu, K. Li, T. Kanade, Mitosis sequence detection using hidden conditional random fields, in: IEEE International Symposium on Biomedical Imaging, 2010, pp. 580–583. [40] J. Yang, R. Yan, A. Hauptmann, Cross-domain video concept detection using adaptive svms, in: ACM Multimedia, 2007. [41] Y. Torii, S. Abe, Fast training of linear programming support vector machines using decomposition techniques, in: Artificial Neural Networks in Pattern Recognition, Lecture Notes in Computer Science, vol. 4087, 2006, pp. 165–176. [42] Cmu cell image analysis group, 〈http://www.celltracking.ri.cmu.edu/〉.
Tong Hao is currently an associate professor in the Tianjin Key Laboratory of Animal and Plant Resistance and the College of Life Science, Tianjin Normal University, Tianjin 300387, China. She was with the School of Informatics, the University of Edinburgh.
Ai-Ling Yu is currently a master student in the Tianjin Key Laboratory of Animal and Plant Resistance and the College of Life Science, Tianjin Normal University, Tianjin 300387, China.
Wei Peng is currently a master student in the Tianjin Key Laboratory of Animal and Plant Resistance and the College of Life Science, Tianjin Normal University, Tianjin 300387, China.
Bin Wang is currently a Ph.D. student in the Tianjin Key Laboratory of Animal and Plant Resistance and the College of Life Science, Tianjin Normal University, Tianjin 300387, China.
Jin-Sheng Sun is currently a professor in the Tianjin Key Laboratory of Animal and Plant Resistance and the College of Life Science, Tianjin Normal University, Tianjin 300387, China. He is also with the Tianjin Aquatic Animal Infectious Disease Control and Prevention Center, Tianjin, China.