Protein Structure Prediction Using a Maximum Likelihood Formulation of a Recurrent Geometric Network

Guowei Qi, Mallory R Tollefson, Rose A Gogal, Richard J.H. Smith, Mohammed AlQuraishi, Michael J Schnieders
2021, bioRxiv preprint
Only ~40% of the human proteome has structural coordinates available from experiment (i.e., X-ray crystallography, NMR spectroscopy, or cryo-EM) or from homology modeling with quality templates (i.e., 30% sequence identity or greater), leaving most of the proteome structurally unsolved. Deep learning (DL) methods for predicting protein structure can help close knowledge gaps where experimental and homology models are difficult to obtain. Recent advances in these DL methods have shown promising results in expanding structural coverage to the scale of the entire human proteome, providing researchers with more complete protein structural information. Here, we improve upon an existing DL algorithm for protein structure prediction, the Recurrent Geometric Network (RGN). We first expand the training dataset to include experimental uncertainty data in the form of atomic displacement parameters, then derive a maximum likelihood loss function that incorporates this uncertainty data into model training. Compared to the original RGN, our maximum likelihood model improves the rate of convergence of initial model training and ultimately yields more accurate structure prediction according to the root mean square deviation (RMSD) of backbone atoms, the Global Distance Test (GDT), the Global Distance Test High Accuracy (GDT-HA), and the Template-Modeling Score (TM-Score). Our model also predicts structures with more favorable backbone torsions, which provide more accurate starting coordinates for downstream physics-based simulations. Based on these results, our maximum likelihood reformulation provides a framework for improving existing or future machine learning algorithms for protein structure prediction. The augmented dataset, data collection scripts, reformulated RGN source code, and a series of trained models are publicly available at
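The core idea above can be sketched in a few lines: atomic displacement parameters (B-factors) relate to isotropic positional variance via B = 8π²⟨u²⟩, and a Gaussian negative log-likelihood then down-weights deviations at atoms the experiment itself resolved poorly. The snippet below is an illustrative sketch only, not the paper's exact loss or the RGN implementation; the function names and the NumPy formulation are my own.

```python
import numpy as np

def b_factor_to_variance(b):
    # Crystallographic relation B = 8 * pi^2 * <u^2>,
    # so the isotropic positional variance is sigma^2 = B / (8 * pi^2).
    return np.asarray(b, dtype=float) / (8.0 * np.pi ** 2)

def max_likelihood_loss(pred, target, b_factors):
    """Gaussian negative log-likelihood over backbone atoms (sketch).

    pred, target : (N, 3) predicted and experimental coordinates
    b_factors    : (N,) experimental atomic displacement parameters
    """
    var = b_factor_to_variance(b_factors)
    sq_dev = np.sum((np.asarray(pred, float) - np.asarray(target, float)) ** 2,
                    axis=-1)
    # Per-atom contribution for an isotropic 3-D Gaussian (constants dropped):
    # d^2 / (2*sigma^2) + (3/2) * log(sigma^2). Atoms with large B-factors
    # (high experimental uncertainty) contribute less to the data term,
    # whereas a plain RMSD loss would weight all atoms equally.
    return np.mean(sq_dev / (2.0 * var) + 1.5 * np.log(var))
```

For an atom with B = 8π² (unit variance), a 1 Å deviation contributes 0.5 to the loss; doubling the B-factor halves that data term, which is the mechanism by which uncertainty-aware training differs from a uniform RMSD objective.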
doi:10.1101/2021.09.03.458873