Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights

Zhiyuan Liang, Dongwen Tang, Yuhao Zhou, Xuanlei Zhao, Mingjia Shi, Wangbo Zhao, Zekai Li, Peihao Wang, Konstantin Schürholt, Damian Borth, Michael M. Bronstein, Yang You, Zhangyang Wang, Kai Wang

2025-06-23

Summary

This paper introduces Drag-and-Drop LLMs, a new way to quickly customize large language models by generating task-specific parameters directly from a few example prompts, without a lengthy training run for each task.

What's the problem?

The problem is that the usual way to make large language models good at specific tasks is costly in time and compute, because the model must be fine-tuned separately on every new task.

What's the solution?

The researchers built a learned generator that maps task descriptions or example prompts directly to the model's task-specific parameters. Given a handful of prompts for a new task, the generator produces the adapted weights in a single forward pass, so the model can adjust to new tasks in seconds without traditional fine-tuning.
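The core idea of prompt-to-weights generation can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's actual architecture: it assumes a simple linear generator and LoRA-style low-rank updates, and all shapes, names, and the averaging of prompt embeddings are assumptions for the sake of the example.

```python
import numpy as np

# Toy sketch of prompt-conditioned parameter generation.
# Assumption: the generator emits low-rank (LoRA-style) factors A and B
# for one frozen base weight matrix; real systems are far larger.
d_model, rank, d_prompt = 16, 4, 8
rng = np.random.default_rng(0)

# Frozen weight of one layer in the pretrained model.
W_base = rng.standard_normal((d_model, d_model))

# Hypothetical generator: one linear map from a prompt embedding to the
# flattened factors A (rank x d_model) and B (d_model x rank).
G = rng.standard_normal((d_prompt, 2 * rank * d_model)) * 0.01

def generate_adapter(prompt_embedding):
    """Map a task-prompt embedding to low-rank adapter factors (A, B)."""
    flat = prompt_embedding @ G
    A = flat[: rank * d_model].reshape(rank, d_model)
    B = flat[rank * d_model:].reshape(d_model, rank)
    return A, B

# "Drag and drop": embed a few example prompts for the new task,
# average them, and generate adapted weights in one forward pass.
prompt_embeddings = rng.standard_normal((3, d_prompt))
A, B = generate_adapter(prompt_embeddings.mean(axis=0))

# Adapted layer weight for the new task -- no gradient steps taken.
W_task = W_base + B @ A
print(W_task.shape)
```

In a trained system, `G` would be a neural network optimized so that the generated weights perform well across many tasks; here it is random, which is enough to show the data flow from prompts to weights.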

Why it matters?

This matters because it makes adapting large AI models to new tasks fast and cheap, saving compute and making it practical to apply AI to many different problems on short notice.

Abstract

Drag-and-Drop LLMs generate task-specific parameters through prompt-conditioned parameter generation, achieving significant efficiency gains and cross-domain generalization without per-task training.