NatureLM: Deciphering the Language of Nature for Scientific Discovery

Yingce Xia, Peiran Jin, Shufang Xie, Liang He, Chuan Cao, Renqian Luo, Guoqing Liu, Yue Wang, Zequn Liu, Yuan-Jyue Chen, Zekun Guo, Yeqi Bai, Pan Deng, Yaosen Min, Ziheng Lu, Hongxia Hao, Han Yang, Jielan Li, Chang Liu, Jia Zhang, Jianwei Zhu, Kehan Wu

2025-02-12

Summary

This paper introduces NatureLM, a new AI model that can understand and work with several scientific fields at once, including chemistry, biology, and materials science. It is designed to help scientists make discoveries by treating all of these areas as parts of one big 'language of nature'.

What's the problem?

Current AI models for science are usually trained to work in just one area, such as chemistry or biology. Because they can't easily combine knowledge from different fields, they are less useful for scientific discoveries that span multiple areas of science.

What's the solution?

The researchers created NatureLM, an AI model that learns from data across many scientific fields at once. It can understand and generate information about molecules, proteins, RNA, and materials using simple text instructions. NatureLM can even translate between different scientific 'languages', like turning a molecule's SMILES string into its IUPAC chemical name, or working backwards from a target molecule to the reactions that could produce it (retrosynthesis), as sketched below.
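
To make the 'text instruction' idea concrete, here is a minimal sketch of how such a sequence-based model might be prompted for SMILES-to-IUPAC translation. It assumes a standard Hugging Face causal language-model interface; the checkpoint path and prompt wording are hypothetical, since the summary does not specify NatureLM's public API.

```python
# Hypothetical sketch: prompting a sequence-based science model with a text
# instruction, in the spirit of NatureLM. The checkpoint path and the prompt
# format are assumptions, not taken from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/naturelm-checkpoint"  # hypothetical; no public name given here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A text instruction plus a domain sequence (the SMILES string for aspirin)
prompt = "Translate the following SMILES into its IUPAC name: CC(=O)OC1=CC=CC=C1C(=O)O"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern would cover the model's other tasks with only the instruction changing, for example 'Propose reactants that synthesize the following product: ...' for retrosynthesis.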

Why it matters?

This matters because it could speed up scientific discoveries in important areas like creating new medicines or designing new materials. By combining knowledge from different fields, NatureLM might help scientists come up with ideas they wouldn't have thought of otherwise. It could make it easier for researchers to work across scientific disciplines, potentially leading to breakthroughs in drug development, materials science, and other crucial areas of research.

Abstract

Foundation models have revolutionized natural language processing and artificial intelligence, significantly enhancing how machines comprehend and generate human languages. Inspired by the success of these foundation models, researchers have developed foundation models for individual scientific domains, including small molecules, materials, proteins, DNA, and RNA. However, these models are typically trained in isolation, lacking the ability to integrate across different scientific domains. Recognizing that entities within these domains can all be represented as sequences, which together form the "language of nature", we introduce Nature Language Model (briefly, NatureLM), a sequence-based science foundation model designed for scientific discovery. Pre-trained with data from multiple scientific domains, NatureLM offers a unified, versatile model that enables various applications including: (i) generating and optimizing small molecules, proteins, RNA, and materials using text instructions; (ii) cross-domain generation/design, such as protein-to-molecule and protein-to-RNA generation; and (iii) achieving state-of-the-art performance in tasks like SMILES-to-IUPAC translation and retrosynthesis on USPTO-50k. NatureLM offers a promising generalist approach for various scientific tasks, including drug discovery (hit generation/optimization, ADMET optimization, synthesis), novel material design, and the development of therapeutic proteins or nucleotides. We have developed NatureLM models in different sizes (1 billion, 8 billion, and 46.7 billion parameters) and observed a clear improvement in performance as the model size increases.
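
One way to read the "language of nature" framing is that entities from every domain are serialized into a shared token stream, so a single model can be pre-trained on all of them. The sketch below illustrates that idea with invented domain tags; NatureLM's actual tokenization and data formatting are not described in this summary, so treat the details as assumptions.

```python
# Illustrative sketch (not the paper's actual preprocessing): wrapping
# sequences from different scientific domains in domain tags so they can
# share one text corpus. The tag names are invented for illustration.
def wrap(domain: str, sequence: str) -> str:
    return f"<{domain}>{sequence}</{domain}>"

examples = [
    wrap("smiles", "CC(=O)OC1=CC=CC=C1C(=O)O"),  # small molecule (aspirin)
    wrap("protein", "MKTAYIAKQRQISFVKSHFSRQ"),   # amino-acid sequence
    wrap("rna", "AUGGCCAUUGUAAUGGGCCGCUG"),      # nucleotide sequence
    wrap("material", "Sr Ti O3"),                # a material's composition
]

# Cross-domain instructions interleave natural language with tagged
# sequences, e.g. a protein-to-RNA design prompt:
prompt = "Design an RNA that binds this protein: " + examples[1]
print(prompt)
for line in examples:
    print(line)
```

Serializing everything as text is what lets one sequence model handle generation, optimization, and cross-domain design through the same interface, and the paper reports that performance improves as model size grows across the 1B, 8B, and 46.7B variants.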