Journal of Immunological Methods 374 (2011) 47–52
Research paper
Ensemble approaches for improving HLA Class I-peptide binding prediction

Xihao Hu a,b, Hiroshi Mamitsuka c,d, Shanfeng Zhu a,b,d,⁎

a School of Computer Science, Fudan University, Shanghai 200433, China
b Shanghai Key Lab of Intelligent Information Processing, Fudan University, Shanghai 200433, China
c Bioinformatics Center, Institute for Chemical Research, Kyoto University, Gokasho, Uji 611-0011, Japan
d Institute for Bioinformatics Research and Development (BIRD), Japan Science and Technology Agency (JST), Japan

⁎ Corresponding author: School of Computer Science, Fudan University, Shanghai 200433, China. E-mail address: [email protected] (S. Zhu).
Article history: Received 3 July 2010; Accepted 3 September 2010; Available online 16 September 2010.
Keywords: HLA; HLA-peptide binding; Prediction; Ensemble
Abstract

Accurately predicting peptides that bind to major histocompatibility complex (MHC) Class I molecules is of great importance to immunologists, both for elucidating the underlying mechanism of immune recognition and for facilitating the design of peptide-based vaccines. Various computational methods have been developed for MHC I-peptide binding prediction, and several of them have been reported to achieve high accuracy in recent evaluations on benchmark datasets. To attend the Machine Learning in Immunology Competition (MLIC) on the prediction of human leukocyte antigen (HLA)-binding peptides, we (FudanCS) used ensemble approaches to further improve prediction performance by integrating the outputs of several leading predictors. Two ensemble approaches, PM and AvgTanh, were implemented for MLIC. AvgTanh and PM ranked fourth and seventh, respectively, among all 20 submissions in MLIC in terms of average AUC. In addition, AvgTanh was the winner in the HLA-A*0101 9-mer category. Overall, the competition results validate the effectiveness of ensemble approaches.
1. Introduction

Major histocompatibility complex (MHC) molecules are essential for T-cell-mediated adaptive immunity in vertebrates, helping them to recognize, remember and eliminate specific pathogens they encounter (Janeway et al., 2001). Short peptides derived from the degradation of pathogens are first bound to MHC molecules in an allele-specific manner, and the MHC:peptide complex is then presented on the cell surface for recognition by T-cell receptors (TCRs), which can trigger a T-cell immune response leading to the elimination of the pathogens. Accurate prediction of peptides bound to MHC molecules is thus very helpful in elucidating the underlying mechanism of immune recognition. In addition, it can facilitate the design of potentially safer peptide-based vaccines for
[email protected] (S. Zhu). 0022-1759/$ – see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.jim.2010.09.007
many diseases, without the need for an attenuated form of the pathogen, which in many cases is more difficult to produce and may present a danger to a sub-population of vaccine recipients. More than 30 different peptide-based vaccines are under development for serious diseases such as hepatitis C virus infection and different types of cancer (Purcell et al., 2007). MHC molecules can be divided into two main groups: MHC Class I and MHC Class II molecules. Short endogenous peptides (around 8–11 amino acids) bound by MHC Class I molecules are recognized by cytotoxic T lymphocytes (CTL), while longer peptides (usually 15–25 amino acids) from exogenous sources bound by MHC Class II molecules are recognized by helper T cells (Th). Moreover, MHC Class I molecules also control the function of natural killer (NK) cells (Janeway et al., 2001). As the experimental testing of binding affinity is both expensive and time-consuming, a number of computational approaches have been developed for identifying MHC-binding peptides, and these have been widely used to pre-screen
a small number of promising candidate epitopes for experimental validation (Lund et al., 2005). To advance computational methods for MHC-peptide binding prediction, Professor Vladimir Brusic and his colleagues organized the Machine Learning in Immunology Competition at ICANN09 (MLIC: http://www.kios.org.cy/ICANN09/MLI.html). In this competition, participants were required to predict the binding affinity of a set of peptides (9-mer and 10-mer) to three HLA (human leukocyte antigen; the human counterpart of MHC) Class I molecules: HLA-A*0101, HLA-A*0201 and HLA-B*0702.

Here we focus on MHC Class I-peptide binding prediction. Considering their underlying principles, existing computational approaches can be roughly divided into three categories: motif-based methods (Parker et al., 1991; Rammensee et al., 1999), position-specific scoring matrix (PSSM)-based methods (Peters and Sette, 2005; Bui et al., 2005), and machine learning-based methods such as decision trees (Zhu et al., 2006), evolutionary algorithms (Brusic et al., 1998), artificial neural networks (Gulukota et al., 1997; Brusic et al., 1998; Nielsen et al., 2003), hidden Markov models (Udaka et al., 2002; Mamitsuka, 1998) and kernel-based methods (Dönnes and Kohlbacher, 2002). In an experiment on a benchmark dataset of more than 48,000 quantitative peptide binding affinity measurements for MHC Class I molecules, Peters et al. (2006) found that the best predictors, ANN (Nielsen et al., 2003) and SMM (Peters and Sette, 2005), achieved very good performance, with average AUCs of 0.874 to 0.900 over 34 different mouse, human, macaque and chimpanzee MHC Class I alleles. More recently, using two datasets derived from the tumor antigen survivin and the cytomegalovirus (CMV) internal matrix protein, Lin et al. (2008) compared the performance of 30 prediction servers and found that several of them performed very well. The best three prediction servers were IEDB_ANN, NETM_ANN (NETMHC) and IEDB_SMM (Zhang et al., 2008; Lundegaard et al., 2008), which achieved average AUCs of 0.90 to 0.92 over six different HLA Class I alleles.

Considering the good performance of state-of-the-art MHC Class I-peptide binding predictors, we resorted to ensemble approaches to further improve prediction performance for MLIC. Since many methods are based on different principles, their prediction results can be quite different. Ensemble methods can integrate the outputs of individual predictors for better prediction performance, and they have been widely deployed with great success in many different areas (Polikar, 2006). For MLIC, we submitted two sets of predictions obtained with two different ensemble strategies, AvgTanh (Jain et al., 2005) and PM (probabilistic meta-predictor) (Karpenko et al., 2008). AvgTanh and PM ranked fourth and seventh, respectively, among all 20 submissions in MLIC in terms of average AUC (Brusic et al., 2010). In addition, AvgTanh was the winner in the HLA-A*0101 9-mer category. These results demonstrate the effectiveness of ensemble strategies in improving the accuracy of MHC Class I-peptide binding prediction.

2. Materials and methods

There are two crucial issues in designing a good ensemble system: the selection of base predictors and the combination rule for integrating the outputs of the different base predictors (Polikar, 2006).
2.1. The selection of base predictors

To make the best use of ensemble approaches, the base predictors should be both accurate and diverse. The first criterion ensures that every base predictor makes correct predictions with high probability, whereas the second requires every base predictor to contribute some new prediction results. Two recent evaluation studies showed that IEDB_ANN, NETM_ANN and IEDB_SMM are the three best-performing predictors for MHC Class I-peptide binding. In addition, NETMHCPAN has recently been developed to predict peptides binding to MHC Class I alleles with very few or even no training data, by leveraging MHC-peptide binding affinity information from other MHC alleles (Nielsen et al., 2007a). Experimental results show that NETMHCPAN achieves performance similar to NETM_ANN, and we therefore also incorporated NETMHCPAN as a base predictor (Zhang et al., 2009). That is, we selected four outstanding predictors as our base predictors: NETM_ANN, IEDB_ANN, IEDB_SMM and NETMHCPAN. The executable programs of these predictors can be downloaded from the analysis tools website of the Immune Epitope Database (IEDB) and from the Center for Biological Sequence Analysis (CBS) of the Technical University of Denmark (DTU), which greatly facilitates the design of effective ensemble systems.

2.2. The combination rule

The second key issue in building an ensemble system is how to combine the output of each base predictor; designing an effective combination rule is crucial for the success of ensemble systems. Combination rules can be roughly divided into two categories, trainable and non-trainable. In trainable combination rules, a model such as linear regression or a support vector machine, with suitable parameters learned from training data, is employed to integrate the output of each base predictor. In this case, base predictors that perform well on the training data are weighted more heavily, whereas poorly performing base predictors are more or less ignored. In non-trainable combination rules, on the other hand, every base predictor is treated equally in the combination, and the final score is usually an average of all prediction results. In this study, since we chose four leading predictors as base predictors and acquiring additional high-quality training data is not a trivial task, we use non-trainable combination rules to integrate the outputs of the different base predictors. With respect to the output information used, non-trainable combination rules can be grouped into rank-based methods and score-based methods. Here we first describe a popular rank-based method, Consensus (Wang et al., 2008), and then present the two score-based methods, AvgTanh (Jain et al., 2005) and PM (Karpenko et al., 2008), which we used in MLIC.

2.2.1. Consensus

In Consensus, we first collect a set of random peptides as a reference list, and each base predictor then ranks the peptides in this reference list. For a test peptide, each predictor gives a score, from which a corresponding rank in the reference list can be obtained. The final score of the peptide is the median
of the ranks given by all base predictors. This method has been widely employed for MHC-peptide binding prediction, for example in the IEDB analysis tools (Zhang et al., 2008). In our work, we retrieved one million random peptides from the Swiss-Prot database to generate the reference list.
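As an illustration, a minimal sketch of this rank-based combination is given below; it assumes each base predictor is available as a Python callable that maps a peptide to a binding score (higher meaning stronger predicted binding), and all function names are ours, not part of any released tool.

    import statistics

    def consensus_score(test_peptide, predictors, reference_peptides):
        # Consensus: median rank of the test peptide within a common reference
        # list of random peptides; lower values indicate stronger binders.
        ranks = []
        for predict in predictors:
            # Score the reference list with this predictor, best scores first.
            ref_scores = sorted((predict(p) for p in reference_peptides), reverse=True)
            score = predict(test_peptide)
            # Rank = 1 + number of reference peptides scoring at least as high.
            rank = 1 + sum(1 for s in ref_scores if s >= score)
            ranks.append(rank)
        return statistics.median(ranks)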
2.2.2. AvgTanh

Despite the advantage of simplicity, rank-based methods ignore the specific prediction scores, which may carry rich information. Since the output scores of different base predictors vary greatly, the first task is to normalize the scores of the different predictors so that they are numerically comparable. One simple approach is min-max normalization: shift the minimum score to 0 and the maximum score to 1, and linearly transform all other scores to the corresponding values between 0 and 1. Another popular approach is Z-score normalization, which assumes that the scores given by a predictor follow a normal distribution and transforms that distribution to the standard normal distribution. That is,

Score_{z-score} = \frac{Score - \mu}{\sigma},    (1)

where \mu is the mean, \sigma is the standard deviation and Score is the original score given by the predictor. Unfortunately, both min-max and Z-score normalization are sensitive to outliers. To overcome this problem, the tanh normalization method has been proposed to produce a robust transformation by applying a tanh function to the Z-score (Jain et al., 2005):

Score_{tanh} = \frac{1}{2} \left[ \tanh\left( 0.1 \cdot \frac{Score - \mu}{\sigma} \right) + 1 \right].    (2)
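As a concrete illustration, these normalizations could be implemented as follows; this is only a sketch, with the mean and standard deviation assumed to have been estimated beforehand from a suitable score distribution (see Section 2.3), and the function names are ours.

    import math

    def min_max(score, s_min, s_max):
        # Min-max normalization: map scores linearly onto [0, 1].
        return (score - s_min) / (s_max - s_min)

    def z_score(score, mu, sigma):
        # Eq. (1): center by the mean and scale by the standard deviation.
        return (score - mu) / sigma

    def tanh_score(score, mu, sigma):
        # Eq. (2): squash the Z-score with tanh, mapping it into (0, 1);
        # the factor 0.1 keeps the transform robust to outliers.
        return 0.5 * (math.tanh(0.1 * (score - mu) / sigma) + 1.0)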
For a test peptide, we first collect the prediction score given by each base predictor and transform it with the tanh normalization method. Finally, the average of the tanh-normalized scores is assigned to the peptide as its final score. We call this method AvgTanh.

2.2.3. PM

In AvgTanh, for each predictor we assume that the prediction scores of all peptides come from a single normal distribution. However, the prediction scores of binders and non-binders may be very different and actually come from two distinct distributions. Karpenko et al. (2008) proposed PM to normalize the prediction score by considering the score distributions of both binders and non-binders. The formula is as follows:

Score_{PM} = \log \left\{ \frac{1}{2} \left[ \frac{cdf_{binders}(Score)}{1 - cdf_{binders}(Score)} + \frac{cdf_{non-binders}(Score)}{1 - cdf_{non-binders}(Score)} \right] \right\}.    (3)

Here cdf denotes the cumulative distribution function: cdf_{binders}(Score) is the probability that a peptide drawn from the score distribution of binders receives a score of no more than Score, and cdf_{non-binders}(Score) is the probability that a peptide drawn from the score distribution of non-binders receives a score of no more than Score. For a test peptide, we first collect the prediction score given by each base predictor and transform it with the PM normalization method. Finally, the average of the PM-normalized scores is assigned to the peptide as its final score.
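A hypothetical sketch of the PM transform and of the shared averaging step used by both AvgTanh and PM is shown below; it assumes, as described in Section 2.3, that the binder and non-binder score distributions of each predictor are modeled as normal distributions with known means and standard deviations, and all names are illustrative.

    import math

    def normal_cdf(x, mu, sigma):
        # Cumulative distribution function of a normal distribution.
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def pm_score(score, binder_params, nonbinder_params):
        # Eq. (3): log of the averaged odds under the binder and the
        # non-binder score distributions of this predictor.
        mu_b, sd_b = binder_params
        mu_n, sd_n = nonbinder_params
        eps = 1e-12  # keep the odds finite for extreme scores
        cdf_b = min(max(normal_cdf(score, mu_b, sd_b), eps), 1.0 - eps)
        cdf_n = min(max(normal_cdf(score, mu_n, sd_n), eps), 1.0 - eps)
        return math.log(0.5 * (cdf_b / (1.0 - cdf_b) + cdf_n / (1.0 - cdf_n)))

    def ensemble_score(scores, per_predictor_params, normalize):
        # Both AvgTanh and PM average the normalized scores of all predictors;
        # 'normalize' is tanh_score (previous sketch) or pm_score, with
        # matching per-predictor parameters.
        values = [normalize(s, *p) for s, p in zip(scores, per_predictor_params)]
        return sum(values) / len(values)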
2.3. The distribution of prediction scores

As shown in the previous section, both AvgTanh and PM need the distribution of prediction scores. PM considers two distinct score distributions, one for binders and one for non-binders. We use the largest database of MHC-peptide binding data, IEDB (Peters et al., 2005), to estimate these two distributions. An IC50 threshold of 500 nM is used to distinguish binders from non-binders. Following a related study (Nielsen et al., 2007b), IC50 values larger than 50,000 nM are set to 50,000 nM. The IC50 binding affinity is then log-transformed as follows:

Score = 1 - \log_{50000} IC_{50}.    (4)
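For illustration, this transform and the subsequent estimation of the two score distributions might be implemented as follows; this is a sketch that assumes binders are the peptides with IC50 below the 500 nM threshold, and the data-loading step is left abstract.

    import math
    import statistics

    def ic50_to_score(ic50_nM):
        # Eq. (4): cap IC50 at 50,000 nM and log-transform the affinity.
        ic50_nM = min(ic50_nM, 50000.0)
        return 1.0 - math.log(ic50_nM) / math.log(50000.0)

    def estimate_distributions(ic50_values, threshold_nM=500.0):
        # Split the IC50 measurements into binders and non-binders and return
        # the mean and standard deviation of each score distribution.
        binders = [ic50_to_score(v) for v in ic50_values if v < threshold_nM]
        non_binders = [ic50_to_score(v) for v in ic50_values if v >= threshold_nM]
        def stats(xs):
            return statistics.mean(xs), statistics.stdev(xs)
        return stats(binders), stats(non_binders)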
The mean and standard deviation of the binder and non-binder score distributions used in PM can then be computed. Without distinguishing between binders and non-binders, the prediction score distribution used in AvgTanh can be computed in a similar way from the binding data in IEDB. However, since IEDB contains very few or no binding data for many HLA alleles, the applicability of this approach is limited. To address this problem, we also use the prediction scores of randomly generated peptides to estimate the score distribution in AvgTanh. Note that PM needs to know the labels of the peptides (binder or non-binder), and thus cannot use randomly generated peptides to estimate the prediction score distributions.

3. Results

3.1. Overview

We report the performance of the ensemble approaches on two datasets. The first dataset was used by Lin et al. (2008) to compare the performance of 30 MHC-peptide binding prediction web servers. It consists of 176 9-mer peptides derived from the tumor antigen survivin (Swiss-Prot: O15392) and the cytomegalovirus (CMV) internal matrix protein pp65. The binding affinities of these peptides to different HLA-I molecules were produced by iTopia™. Following Lin et al. (2008), we compared the performance of the different ensemble approaches on peptide binding prediction for seven HLA-I molecules (HLA-A*0201, HLA-A*0301, HLA-A*1101, HLA-A*2402, HLA-B*0702, HLA-B*0801 and HLA-B*1501). The second dataset comes from MLIC, and we report the performance of our ensemble approaches on 9-mer and 10-mer peptide binding prediction for three HLA-I molecules (HLA-A*0101, HLA-A*0201 and HLA-B*0702). Hereafter we call the first dataset the Lin dataset and the second the MLIC dataset. The performance of each model was measured by the area under the ROC curve (AUC), which reflects the ability of the model to discriminate positive instances (binders) from negative instances (non-binders).
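For reference, the AUC values reported below can be computed with a standard library routine; the labels and scores here are placeholders, not data from either dataset.

    from sklearn.metrics import roc_auc_score

    # labels: 1 for experimentally determined binders, 0 for non-binders.
    # scores: prediction scores, with higher values meaning stronger binding.
    labels = [1, 0, 1, 1, 0, 0]
    scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1]
    print("AUC = %.4f" % roc_auc_score(labels, scores))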
Table 1
The performance comparison of the four base predictors and three ensemble approaches on the Lin dataset in terms of AUC.

Allele    IEDB_ANN  IEDB_SMM  NETM_ANN  NetMHCPAN  Consensus  PMIEDB   AvgTanhIEDB  AvgTanhRand
A*0201    0.9646    0.9633    0.9508*   0.9600     0.9682     0.9695   0.9693       0.9697
A*0301    0.9814    0.9747*   0.9847    0.9821     0.9792     0.9821   0.9821       0.9799
A*1101    0.9193    0.9189    0.8772*   0.9087     0.9118     0.9167   0.9167       0.9189
A*2402    0.8715    0.8667    0.8201*   0.8792     0.8710     0.8801   0.8792       0.8852
B*0702    0.9800*   0.9900    0.9807    0.9914     0.9874     0.9900   0.9894       0.9907
B*0801    0.9299    0.9141*   0.9345    0.9390     0.9400     0.9400   0.9405       0.9426
B*1501    0.9070*   0.9105    0.9264    0.9352     0.9262     0.9320   0.9336       0.9313
Mean      0.9362    0.9340    0.9249*   0.9422     0.9405     0.9444   0.9444       0.9455
3.2. The performance of ensemble approaches on the Lin dataset

Here we selected four state-of-the-art MHC I-peptide binding predictors as the base predictors: NETM_ANN, IEDB_ANN, IEDB_SMM and NETMHCPAN. The executable programs of IEDB_ANN and IEDB_SMM are provided by the IEDB analysis tools, and the executable programs of NETM_ANN and NETMHCPAN were downloaded from CBS at the Technical University of Denmark (DTU). We examined three ensemble approaches: Consensus, PM and AvgTanh. Depending on how the distribution of prediction scores is generated, AvgTanh is implemented in two versions, AvgTanhIEDB and AvgTanhRand, where the first uses IEDB data to generate the distribution and the second uses one million random peptides. Similarly, the implementation of PM is denoted PMIEDB. The performance of all base predictors and ensemble approaches on the Lin dataset, downloaded from the Dana-Farber Repository for Machine Learning in Immunology (http://bio.dfci.harvard.edu/DFRMLI/), is shown in Table 1. Each row corresponds to the performance of all predictors on one HLA-I allele, and the average performance of each predictor is shown in the last row. In each row, the model that achieved the lowest AUC is annotated with *. For example, for HLA-A*0201, AvgTanhRand achieved the highest AUC of 0.9697, and NETM_ANN achieved the lowest AUC of 0.9508. AvgTanhRand was the best-performing model overall, achieving the highest average AUC of 0.9455 across all seven
HLA-I alleles. Moreover, AvgTanhRand was the best-performing model on three of the seven HLA-I alleles (HLA-A*0201, HLA-A*2402 and HLA-B*0801). AvgTanhRand was followed by AvgTanhIEDB and PMIEDB, both of which achieved an average AUC of 0.9444. In addition, for no HLA allele was the worst-performing model one of the ensemble methods, which demonstrates the effectiveness of the ensemble approaches.

To examine the robustness of these findings, we generated 100 datasets from the Lin dataset by bootstrapping with replacement, keeping the ratio of binders to non-binders constant, and used a paired t-test to compare the performance of the different prediction models. This strategy was also used by Peters et al. (2006) for comparing prediction models. The results are shown in Table 2, with two lines per allele: the first line gives the AUC of each model over the 100 bootstrap datasets, and the second line gives the p-value of a paired t-test between that model and the best model on the allele (the best model is marked with -); the model with the lowest AUC is annotated with *. This table shows findings similar to those of Table 1. AvgTanhRand was the best-performing model for three (HLA-A*0201, HLA-A*2402 and HLA-B*0801) of the seven HLA-I alleles, achieving the highest average AUC of 0.9445. For example, for HLA-A*0201, AvgTanhRand achieved the highest AUC of 0.9698, outperforming all other models with statistical significance. Both AvgTanhIEDB and PMIEDB achieved an average AUC of 0.9434 and were the second-best models. Note also that no ensemble approach was the worst-performing model for any HLA-I allele, and that AvgTanh and PM outperformed Consensus on all HLA-I alleles.
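A sketch of this bootstrap comparison is given below; it assumes the per-peptide labels and per-model prediction scores for one allele are available as NumPy arrays, resamples binders and non-binders separately so that their ratio stays constant, and uses SciPy's paired t-test. The function names are ours.

    import numpy as np
    from scipy.stats import ttest_rel
    from sklearn.metrics import roc_auc_score

    def bootstrap_aucs(labels, model_scores, n_boot=100, seed=0):
        # AUC of each model on n_boot bootstrap resamples of the peptide set.
        # labels: array of 0/1 labels (1 = binder);
        # model_scores: dict mapping model name -> array of prediction scores.
        rng = np.random.default_rng(seed)
        pos = np.where(labels == 1)[0]
        neg = np.where(labels == 0)[0]
        aucs = {name: [] for name in model_scores}
        for _ in range(n_boot):
            # Resample binders and non-binders separately (constant ratio).
            idx = np.concatenate([rng.choice(pos, size=len(pos), replace=True),
                                  rng.choice(neg, size=len(neg), replace=True)])
            for name, scores in model_scores.items():
                aucs[name].append(roc_auc_score(labels[idx], scores[idx]))
        return {name: np.array(a) for name, a in aucs.items()}

    def compare_to_best(aucs):
        # Paired t-test of each model's bootstrap AUCs against the best model's.
        best = max(aucs, key=lambda name: aucs[name].mean())
        pvalues = {name: ttest_rel(aucs[best], aucs[name]).pvalue
                   for name in aucs if name != best}
        return best, pvalues

Applied per allele to the scores of the eight models on the Lin dataset, compare_to_best would yield the best model and per-model p-values of the kind reported in Table 2.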
Table 2
The performance comparison of the four base predictors and three ensemble approaches on 100 datasets generated by bootstrapping with replacement from the Lin dataset, in terms of AUC. For each allele, the first line gives the AUC of each model and the second line gives the p-value of a paired t-test against the best model on that allele (marked with -); the lowest AUC is marked with *.

Allele    IEDB_ANN   IEDB_SMM   NETM_ANN   NetMHCPAN  Consensus  PMIEDB     AvgTanhIEDB  AvgTanhRand
A*0201    0.9646     0.9627     0.9506*    0.9601     0.9683     0.9693     0.9690       0.9698
          (<4E-13)   (<3E-25)   (<8E-49)   (<1E-32)   (<4E-7)    (0.0002)   (0.0093)     (-)
A*0301    0.9809     0.9737*    0.9852     0.9813     0.9787     0.9818     0.9821       0.9791
          (<5E-15)   (<4E-29)   (-)        (<2E-6)    (<4E-24)   (<4E-13)   (<2E-14)     (<2E-20)
A*1101    0.9200     0.9209     0.8822*    0.9099     0.9122     0.9181     0.9181       0.9203
          (0.2854)   (-)        (<9E-44)   (<2E-11)   (<2E-14)   (<3E-11)   (<4E-12)     (0.2621)
A*2402    0.8656     0.8642     0.8137*    0.8748     0.8664     0.8761     0.8752       0.8808
          (<5E-29)   (<5E-21)   (<3E-52)   (<7E-6)    (<1E-34)   (<2E-14)   (<5E-16)     (-)
B*0702    0.9817     0.9891     0.9789*    0.9909     0.9863     0.9894     0.9886       0.9901
          (<5E-25)   (0.0003)   (<2E-30)   (-)        (<2E-14)   (<6E-6)    (<9E-13)     (0.0209)
B*0801    0.9289     0.9137*    0.9363     0.9405     0.9391     0.9399     0.9405       0.9430
          (<3E-22)   (<3E-30)   (0.0003)   (0.0025)   (<2E-7)    (<7E-18)   (<4E-9)      (-)
B*1501    0.9050*    0.9086     0.9230     0.9333     0.9228     0.9291     0.9306       0.9285
          (<2E-36)   (<4E-23)   (<2E-17)   (-)        (<2E-17)   (0.001)    (0.0130)     (<8E-6)
Mean      0.9352     0.9333     0.9243*    0.9416     0.9391     0.9434     0.9434       0.9445
Table 3
The performance of PMIEDB (FudanCS) and AvgTanhIEDB (H00001) in MLIC.

Allele         BIMAS  SYFPEITHI  PMIEDB  AvgTanhIEDB  Maximum  Minimum  Average
A*0101 9mer    0.91   0.92       0.95    0.97         0.97     0.46     0.90
A*0201 9mer    0.99   0.98       0.99    0.99         0.99     0.58     0.96
B*0702 9mer    0.92   0.72       0.95    0.96         0.96     0.52     0.91
A*0101 10mer   0.92   0.69       0.95    0.96         0.99     0.53     0.88
A*0201 10mer   0.99   0.96       0.99    0.99         1.00     0.41     0.92
B*0702 10mer   0.85   0.82       0.88    0.90         0.97     0.53     0.87
3.3. The performance of ensemble approaches on the MLIC dataset

To attend MLIC, we submitted two sets of predictions, FudanCS and H00001, corresponding to the two ensemble approaches PMIEDB and AvgTanhIEDB, respectively. At that time, the base predictors were IEDB_ANN, IEDB_SMM and IEDB_ARB. A summary of the performance of our submissions is shown in Table 3 (Brusic et al., 2010). Both PMIEDB and AvgTanhIEDB achieved good prediction results in the competition, outperforming two well-known predictors, BIMAS and SYFPEITHI, in all six categories. Specifically, AvgTanhIEDB and PMIEDB ranked fourth and seventh, respectively, among all 20 submissions in MLIC in terms of average AUC. In addition, AvgTanhIEDB was the winner in the HLA-A*0101 9-mer category. Since the top three submissions all came from the Center for Biological Sequence Analysis of the Technical University of Denmark (DTU), which developed the state-of-the-art predictors NETM_ANN and NETMHCPAN, we expect that incorporating NETM_ANN and NETMHCPAN into the base predictors would further improve the prediction performance.

4. Conclusion

We have described three ensemble approaches, Consensus, PM and AvgTanh, and used two of them, PM and AvgTanh, in MLIC. The experimental results on two benchmark datasets demonstrate the effectiveness of ensemble approaches: in most cases they not only avoid the weaknesses of individual base predictors, but also improve prediction performance by integrating the outputs of all base predictors. Based on the ensemble approaches described in this paper, we have recently developed a web server, MetaMHC, which integrates the results of several leading predictors to improve MHC-peptide binding prediction (Hu et al., 2010). We believe these ensemble approaches could also be applied in other areas of bioinformatics that require robust and accurate predictions.

Acknowledgements

The authors would like to thank the anonymous reviewers for their helpful comments and advice. Funding: National Natural Science Foundation of China (nos. 60903076 and 60773010) and Shanghai Committee of Science and Technology, China (grant nos. 08DZ2271800 and 09DZ2272800, in part).
References

Brusic, V., Rudy, G., Honeyman, G., Hammer, J., Harrison, L., 1998. Prediction of MHC class II-binding peptides using an evolutionary algorithm and artificial neural network. Bioinformatics 14 (2), 121.
Brusic, V., et al., 2010. MLI competition: prediction of HLA ligands. J. Immunol. Meth. 358, to appear.
Bui, H.-H., Sidney, J., Peters, B., Sathiamurthy, M., Sinichi, A., Purton, K.-A., Mothé, B.R., Chisari, F.V., Watkins, D.I., Sette, A., 2005. Automated generation and evaluation of specific MHC binding predictive tools: ARB matrix applications. Immunogenetics 57 (5), 304, Jun.
Dönnes, P., Kohlbacher, O., 2002. Prediction of MHC class I binding peptides, using SVMHC. BMC Bioinform. 3, 25, Sep.
Gulukota, K., Sidney, J., Sette, A., DeLisi, C., 1997. Two complementary methods for predicting peptides binding major histocompatibility complex molecules. J. Mol. Biol. 267 (5), 1258.
Hu, X., Zhou, W., Udaka, K., Mamitsuka, H., Zhu, S., 2010. MetaMHC: a meta approach to predict peptides binding to MHC molecules. Nucleic Acids Res. 38, W474 (Web Server issue).
Jain, A., Nandakumar, K., Ross, A., 2005. Score normalization in multimodal biometric systems. Pattern Recognit. 38 (12), 2270, Dec.
Janeway, C., Travers, P., Walport, M., Shlomchik, M., 2001. Immunobiology: The Immune System in Health and Disease. Garland Publishing, New York.
Karpenko, O., Huang, L., Dai, Y., 2008. A probabilistic meta-predictor for the MHC class II binding peptides. Immunogenetics 60 (1), 25, Jan.
Lin, H.H., Ray, S., Tongchusak, S., Reinherz, E.L., Brusic, V., 2008. Evaluation of MHC class I peptide binding prediction servers: applications for vaccine research. BMC Immunol. 9, 8.
Lund, O., Nielsen, M., Lundegaard, C., Kesmir, C., Brunak, S., 2005. Immunological Bioinformatics. The MIT Press, Cambridge, MA.
Lundegaard, C., Lund, O., Nielsen, M., 2008. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers. Bioinformatics 24 (11), 1397, Jun.
Mamitsuka, H., 1998. Predicting peptides that bind to MHC molecules using supervised learning of hidden Markov models. Proteins 33, 460.
Nielsen, M., Lundegaard, C., Blicher, T., Lamberth, K., Harndahl, M., Justesen, S., Røder, G., Peters, B., Sette, A., Lund, O., Buus, S., 2007a. NetMHCpan, a method for quantitative predictions of peptide binding to any HLA-A and -B locus protein of known sequence. PLoS ONE 2 (8), e796.
Nielsen, M., Lundegaard, C., Lund, O., 2007b. Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method. BMC Bioinform. 8, 238.
Nielsen, M., Lundegaard, C., Worning, P., Lauemoller, S., Lamberth, K., Buus, S., Brunak, S., Lund, O., 2003. Reliable prediction of T-cell epitopes using neural networks with novel sequence representations. Protein Sci. 12 (5), 1007, May.
Parker, K., Bednarek, M., Coligan, J., 1991. Scheme for ranking potential HLA-A2 binding peptides based on independent binding of individual peptide side-chains. J. Immunol. 152, 163.
Peters, B., Bui, H.-H., Frankild, S., Nielson, M., Lundegaard, C., Kostem, E., Basch, D., Lamberth, K., Harndahl, M., Fleri, W., Wilson, S.S., Sidney, J., Lund, O., Buus, S., Sette, A., 2006. A community resource benchmarking predictions of peptide binding to MHC-I molecules. PLoS Comput. Biol. 2 (6), e65, Jun.
Peters, B., Sette, A., 2005. Generating quantitative models describing the sequence specificity of biological processes with the stabilized matrix method. BMC Bioinform. 6, 132.
Peters, B., Sidney, J., Bourne, P., Bui, H., Buus, S., Doh, G., Fleri, W., Kronenberg, M., Kubo, R., Lund, O., et al., 2005. The immune epitope database and analysis resource: from vision to blueprint. PLoS Biol. 3, e91.
Polikar, R., 2006. Ensemble based systems in decision making. IEEE Circuits Syst. Mag. 6 (3), 21.
Purcell, A.W., McCluskey, J., Rossjohn, J., 2007. More than one reason to rethink the use of peptides in vaccine design. Nat. Rev. Drug Discov. 6 (5), 404, May.
Rammensee, H., Bachmann, J., Emmerich, N., Bachor, O., Stevanović, S., 1999. SYFPEITHI: database for MHC ligands and peptide motifs. Immunogenetics 50, 213.
Udaka, K., Mamitsuka, H., Nakaseko, Y., Abe, N., 2002. Empirical evaluation of a dynamic experiment design method for prediction of MHC class I-binding peptides. J. Immunol. 169, 5744.
Wang, P., Sidney, J., Dow, C., Mothé, B.R., Sette, A., Peters, B., 2008. A systematic assessment of MHC class II peptide binding predictions and evaluation of a consensus approach. PLoS Comput. Biol. 4 (4), e1000048, Apr.
Zhang, H., Lundegaard, C., Nielsen, M., 2009. Pan-specific MHC class I predictors: a benchmark of HLA class I pan-specific prediction methods. Bioinformatics 25 (1), 83.
Zhang, Q., Wang, P., Kim, Y., Haste-Andersen, P., Beaver, J., Bourne, P.E., Bui, H.-H., Buus, S., Frankild, S., Greenbaum, J., Lund, O., Lundegaard, C., Nielsen, M., Ponomarenko, J., Sette, A., Zhu, Z., Peters, B., 2008. Immune epitope database analysis resource (IEDB-AR). Nucleic Acids Res. 36, W513 (Web Server issue), Jul.
Zhu, S., Udaka, K., Sidney, J., Sette, A., Aoki-Kinoshita, K., Mamitsuka, H., 2006. Improving MHC binding peptide prediction by incorporating binding data of auxiliary MHC molecules. Bioinformatics 22, 1648.