Are Today's LLMs Ready to Explain Well-Being Concepts?
Bohan Jiang, Dawei Li, Zhen Tan, Chengshuai Zhao, Huan Liu
2025-08-08
Summary
This paper examines whether large language models can be fine-tuned to explain well-being concepts clearly for different audiences, using training methods that improve the quality of their explanations.
What's the problem?
Well-being is hard to explain because it spans mental, physical, and social dimensions, and readers differ widely in background knowledge. As a result, AI models struggle to produce explanations that are both accurate and tailored to each audience.
What's the solution?
The authors built a large dataset of explanations generated by many language models, then fine-tuned an open-source model with Supervised Fine-Tuning followed by Direct Preference Optimization, which helped it produce higher-quality, audience-appropriate explanations.
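To make the preference-optimization step concrete: Direct Preference Optimization (DPO) trains the model so that, relative to a frozen reference model, preferred (chosen) responses become more likely than dispreferred (rejected) ones. The sketch below is a minimal, stdlib-only illustration of the standard per-pair DPO loss, not the authors' implementation; the log-probability inputs and the function name are hypothetical placeholders.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss for a single preference pair (illustrative sketch).

    Each argument is the summed log-probability of a full response under
    the trainable policy or the frozen reference model; beta scales the
    implicit KL penalty toward the reference.
    """
    # Implicit rewards: how much the policy up-weights each response
    # relative to the reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # -log sigmoid(margin): small when the chosen response outranks
    # the rejected one, large otherwise.
    margin = chosen_reward - rejected_reward
    return math.log1p(math.exp(-margin))

# A policy that favors the chosen response incurs a lower loss than
# one that favors the rejected response.
loss_good = dpo_loss(-10.0, -20.0, -12.0, -18.0)
loss_bad = dpo_loss(-20.0, -10.0, -18.0, -12.0)
```

In practice this objective is applied over batches of (prompt, chosen, rejected) triples; here the numbers simply show that the loss rewards ranking the preferred explanation above the dispreferred one.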
Why does it matter?
Clearer, better-tailored explanations of well-being concepts can help people understand ideas that affect their health and happiness at a level that matches their needs and background knowledge.
Abstract
LLMs can be fine-tuned to generate high-quality, audience-tailored explanations of well-being concepts using Supervised Fine-Tuning and Direct Preference Optimization.