MolReFlect: Towards In-Context Fine-grained Alignments between Molecules and Texts
Jiatong Li, Yunqing Liu, Wei Liu, Jingdi Le, Di Zhang, Wenqi Fan, Dongzhan Zhou, Yuqiang Li, Qing Li
2024-11-27

Summary
This paper introduces MolReFlect, a new framework that aligns molecules with their textual descriptions at a fine-grained level, improving how language models understand and describe molecules.
What's the problem?
Molecule discovery is essential for many fields, including medicine and materials science. However, current methods using Large Language Models (LLMs) struggle to accurately connect molecular structures with their descriptive text. This lack of precise alignment can lead to misunderstandings and inaccuracies in predicting how molecules behave.
What's the solution?
To tackle this issue, the authors created MolReFlect, which uses a teacher-student model approach. A larger 'teacher' model identifies important phrases in molecule descriptions and links them to specific parts of the molecules. Then, a smaller 'student' model learns from these connections and improves its understanding of how to describe molecules accurately. The framework also includes techniques to enhance learning by reflecting on previous examples and integrating reasoning processes.
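The three stages above (teacher extraction, selective reflection with student selection, and Chain-of-Thought tuning data construction) can be sketched as a toy pipeline. This is a hypothetical illustration only: the function names, the string heuristics standing in for the LLM calls, and the selection criterion are all assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the MolReFlect teacher-student pipeline.
# Toy string heuristics stand in for real LLM calls.

def teacher_extract(smiles, caption):
    """Stage 1: a larger teacher model extracts fine-grained alignments,
    i.e. links between caption phrases and molecular sub-structures."""
    # Toy stand-in: pair each comma-separated phrase with the SMILES string.
    return [(phrase, smiles) for phrase in caption.split(", ")]

def teacher_reflect(smiles, caption, retrieved_examples):
    """Stage 2a: In-Context Selective Reflection - the teacher revisits its
    extraction, with similar previous extractions retrieved as context."""
    # In the real framework the retrieved examples shape the prompt;
    # here they are carried along purely for illustration.
    return teacher_extract(smiles, caption)

def student_select(candidate_alignments):
    """Stage 2b: a smaller student model selects the alignment it finds
    easier to learn from (here approximated by the shortest phrases)."""
    return min(candidate_alignments,
               key=lambda alignment: sum(len(p) for p, _ in alignment))

def cot_tuning_sample(smiles, caption, alignments):
    """Stage 3: Chain-of-Thought In-Context Molecule Tuning - wrap the
    alignments and reasoning into a CoT-style training sample."""
    reasoning = "; ".join(f"'{phrase}' -> {sub}" for phrase, sub in alignments)
    return f"Molecule: {smiles}\nReasoning: {reasoning}\nCaption: {caption}"

# Walk one toy molecule through the pipeline.
smiles = "CCO"
caption = "a primary alcohol, an ethyl group"
first_pass = teacher_extract(smiles, caption)
reflected = teacher_reflect(smiles, caption, retrieved_examples=[])
best = student_select([first_pass, reflected])
sample = cot_tuning_sample(smiles, caption, best)
print(sample)
```

The student would then be fine-tuned on samples like `sample`, so that the fine-grained alignments appear as explicit reasoning steps between the molecule and its caption.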
Why it matters?
This research is significant because it enhances the ability of LLMs to generate accurate descriptions of molecules, which is crucial for drug discovery and other scientific applications. By improving the alignment between molecules and their textual descriptions, MolReFlect helps scientists make better predictions about molecular behavior, ultimately advancing research in various fields.
Abstract
Molecule discovery is a pivotal research field, impacting everything from the medicines we take to the materials we use. Recently, Large Language Models (LLMs) have been widely adopted in molecule understanding and generation, yet the alignments between molecules and their corresponding captions remain a significant challenge. Previous endeavours often treat the molecule as a general SMILES string or molecular graph, neglecting the fine-grained alignments between the molecular sub-structures and the descriptive textual phrases, which are crucial for accurate and explainable predictions. To address this, we introduce MolReFlect, a novel teacher-student framework designed to contextually perform the molecule-caption alignments in a fine-grained way. Our approach initially leverages a larger teacher LLM to label the detailed alignments by directly extracting critical phrases from molecule captions or SMILES strings and mapping them to the corresponding sub-structures or characteristics. To refine these alignments, we propose In-Context Selective Reflection, which retrieves previous extraction results as context examples for the teacher LLM to reflect on, and lets a smaller student LLM select between the in-context reflection and the previous extraction results. Finally, we enhance the learning process of the student LLM through Chain-of-Thought In-Context Molecule Tuning, integrating the fine-grained alignments and the reasoning processes within the Chain-of-Thought format. Our experimental results demonstrate that MolReFlect enables LLMs like Mistral-7B to significantly outperform previous baselines, achieving SOTA performance on the ChEBI-20 dataset. This advancement not only enhances the generative capabilities of LLMs in the molecule-caption translation task, but also contributes to a more explainable framework.