Sample Reduction Strategies for Protein Secondary Structure Prediction


Atasever S., AYDIN Z. , Erbay H., Sabzekar M.

APPLIED SCIENCES-BASEL, vol.9, no.20, 2019 (Journal Indexed in SCI)

  • Publication Type: Article
  • Volume: 9 Issue: 20
  • Publication Date: 2019
  • Doi Number: 10.3390/app9204429
  • Title of Journal: APPLIED SCIENCES-BASEL
  • Keywords: protein secondary structure prediction, support vector machine, Bayesian network, stratified sampling, hierarchical clustering, sequences

Abstract

Predicting the secondary structure from a protein sequence plays a crucial role in estimating the 3D structure, which has applications in drug design and in understanding the function of proteins. As new genes and proteins are discovered, the protein databases and datasets available for training prediction models grow considerably. A two-stage hybrid classifier, which employs dynamic Bayesian networks and a support vector machine (SVM), has been shown to provide state-of-the-art prediction accuracy for protein secondary structure prediction. However, the SVM is not efficient for large datasets due to the quadratic optimization involved in model training. In this paper, two techniques for reducing the number of samples in the SVM training set are evaluated on the CB513 benchmark. The first method randomly selects a fraction of the data samples from the training set using a stratified selection strategy. This approach can remove approximately 50% of the data samples from the training set and reduce the model training time by 73.38% on average without significantly decreasing the prediction accuracy. The second method clusters the data samples with a hierarchical clustering algorithm and replaces the training set with the nearest neighbors of the cluster centers in order to improve the training time. The number of clusters and the number of nearest neighbors are optimized as hyper-parameters by computing the prediction accuracy on validation sets. Clustering is found to reduce the size of the training set by 26% without reducing the prediction accuracy. Among the clustering techniques, Ward's method provided the best accuracy on the test data.
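The two sample-reduction strategies described above can be sketched as follows. This is a minimal illustration on toy data, not the paper's implementation: the feature matrix, class labels, and parameter values (fraction, number of clusters, number of neighbors) are placeholders, and SciPy's Ward linkage stands in for the hierarchical clustering stage.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Toy stand-in for the SVM training vectors; in the paper these come from
# the dynamic Bayesian network stage of the hybrid classifier.
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)  # 3 secondary-structure classes (e.g., H, E, C)

def stratified_subsample(X, y, fraction=0.5, rng=rng):
    """Keep `fraction` of the samples per class, preserving class proportions."""
    keep = []
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        n_keep = max(1, int(round(fraction * idx.size)))
        keep.append(rng.choice(idx, size=n_keep, replace=False))
    keep = np.concatenate(keep)
    return X[keep], y[keep]

def cluster_reduce(X, y, n_clusters=60, n_neighbors=1):
    """Ward hierarchical clustering; keep only the `n_neighbors` samples
    closest to each cluster centroid."""
    Z = linkage(X, method="ward")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        centroid = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - centroid, axis=1)
        keep.append(idx[np.argsort(dist)[:n_neighbors]])
    keep = np.concatenate(keep)
    return X[keep], y[keep]

Xs, ys = stratified_subsample(X, y, fraction=0.5)   # ~50% of samples remain
Xc, yc = cluster_reduce(X, y, n_clusters=60)        # one sample per cluster
print(Xs.shape, Xc.shape)
```

In both cases the reduced set would replace the original training set when fitting the SVM; the hyper-parameters (`fraction`, `n_clusters`, `n_neighbors`) would be tuned on validation sets, as the abstract describes.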