
Training compute-optimal protein language models

Cheng, Xingyi
Chen, Bo
Li, Pan
Gong, Jing
Tang, Jie
Song, Le
Department
Machine Learning
Type
Conference proceeding
Date
2024
Language
English
Abstract
We explore the optimal training of protein language models, an area of significant interest in biological research where guidance on best practices is limited. Most models are trained with extensive compute resources until performance gains plateau, focusing primarily on increasing model sizes rather than optimizing the efficient compute frontier that balances performance and compute budgets. Our investigation is grounded in a massive dataset consisting of 939 million protein sequences. We trained over 300 models ranging from 3.5 million to 10.7 billion parameters on 5 to 200 billion unique tokens to investigate the relations between model sizes, training token numbers, and objectives. First, we observed the effect of diminishing returns for the Causal Language Model (CLM) and that of overfitting for the Masked Language Model (MLM) when repeating the commonly used Uniref database. To address this, we included metagenomic protein sequences in the training set to increase diversity and avoid the plateau or overfitting effects. Second, we obtained the scaling laws of CLM and MLM on Transformer, tailored to the specific characteristics of protein sequence data. Third, we observed a transfer scaling phenomenon from CLM to MLM, further demonstrating the effectiveness of transfer through scaling behaviors based on estimated Effectively Transferred Tokens. Finally, to validate our scaling laws, we compare against the large-scale versions of ESM-2 and PROGEN2 on downstream tasks, encompassing evaluations of protein generation as well as structure- and function-related tasks, all within lower or equivalent pre-training compute budgets.
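For intuition on what "scaling laws" of the kind discussed in the abstract look like in practice, the following is a minimal, purely illustrative sketch: it fits a power-law relation between training compute and loss via linear regression in log-log space. The functional form, data points, and fitted exponent here are hypothetical assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical (compute, loss) measurements from a sweep of model sizes.
# These numbers are invented for illustration only.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
loss = np.array([3.2, 2.8, 2.45, 2.15, 1.9])

# Assume the simple power-law form loss = a * compute^(-b)
# (irreducible-loss term omitted for simplicity). Then
# log(loss) = log(a) - b * log(compute), which is linear in
# log(compute), so ordinary least squares recovers the exponent.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
b = -slope          # scaling exponent (positive: loss falls with compute)
a = np.exp(intercept)

print(f"fitted power law: loss ≈ {a:.3g} * C^(-{b:.3f})")
```

In a real study the fit would be over many trained models, often with an added irreducible-loss constant and a nonlinear optimizer rather than a plain log-log regression.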
Citation
X. Cheng, B. Chen, P. Li, J. Gong, J. Tang, and L. Song, “Training Compute-Optimal Protein Language Models,” Adv Neural Inf Process Syst, vol. 37, pp. 69386–69418, Dec. 2024.
Source
Advances in Neural Information Processing Systems (NeurIPS 2024)
Keywords
Protein language models, Compute-optimal training, Scaling laws, Causal Language Model (CLM), Masked Language Model (MLM)
Publisher
NeurIPS