MiniAppBench: Evaluating the Shift from Text to Interactive HTML Responses in LLM-Powered Assistants

Zuhao Zhang, Chengyue Yu, Yuante Li, Chenyi Zhuang, Linjian Mo, Shuai Li

2026-03-11

Summary

This paper introduces a new way to test how well large language models (LLMs) can create interactive web applications, which the authors call MiniApps. These aren't just simple text responses, but actual little programs with buttons, displays, and things you can click on.

What's the problem?

Currently, there aren't good ways to measure how well LLMs do at building these interactive apps. Existing tests mostly check whether the code *runs* or whether the layout looks right, but they don't check whether the app actually *behaves* sensibly and follows real-world logic. It's like testing whether a car starts, but not whether it drives safely.

What's the solution?

The researchers created MiniAppBench, a collection of 500 tasks spanning six domains, including games, science, and tools, designed to test these interactive apps. They also built MiniAppEval, a system that uses automated browser testing – basically, a computer program acting like a user – to explore each app and see if it does what it's supposed to. The evaluation covers three dimensions: Intention (does the app match the user's goal?), Static (does the interface look correct?), and Dynamic (does it behave correctly when you interact with it?).
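To make the "Static" dimension concrete, here is a minimal sketch of one kind of check such an evaluator could run: parse a generated MiniApp's HTML and count its interactive elements. This is a hypothetical illustration using Python's standard library, not the authors' actual MiniAppEval framework, which relies on full browser automation.

```python
from html.parser import HTMLParser

# Tags that typically indicate interactivity in a MiniApp's interface.
INTERACTIVE_TAGS = {"button", "input", "select", "textarea", "a"}

class InteractiveElementCounter(HTMLParser):
    """Counts interactive elements in an HTML document."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE_TAGS:
            self.counts[tag] = self.counts.get(tag, 0) + 1

def static_check(html: str) -> dict:
    """Return per-tag counts of interactive elements found in the HTML."""
    parser = InteractiveElementCounter()
    parser.feed(html)
    return parser.counts

# Example: a toy dice-rolling MiniApp with one button and an output area.
demo = '<html><body><button id="roll">Roll dice</button><p id="out"></p></body></html>'
print(static_check(demo))  # {'button': 1}
```

A real evaluator would go further on the Dynamic dimension, e.g. driving the page in a headless browser, clicking elements, and checking that the resulting state changes follow the task's logic.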

Why it matters?

This work is important because as LLMs get better at creating apps, we need reliable ways to evaluate them. MiniAppBench and MiniAppEval provide a standard for measuring the quality of these apps, helping researchers improve LLMs and build more useful and intuitive interactive experiences. It shows current models still have a long way to go, and gives a benchmark for future progress.

Abstract

With the rapid advancement of Large Language Models (LLMs) in code generation, human-AI interaction is evolving from static text responses to dynamic, interactive HTML-based applications, which we term MiniApps. These applications require models to not only render visual interfaces but also construct customized interaction logic that adheres to real-world principles. However, existing benchmarks primarily focus on algorithmic correctness or static layout reconstruction, failing to capture the capabilities required for this new paradigm. To address this gap, we introduce MiniAppBench, the first comprehensive benchmark designed to evaluate principle-driven, interactive application generation. Sourced from a real-world application with 10M+ generations, MiniAppBench distills 500 tasks across six domains (e.g., Games, Science, and Tools). Furthermore, to tackle the challenge of evaluating open-ended interactions where no single ground truth exists, we propose MiniAppEval, an agentic evaluation framework. Leveraging browser automation, it performs human-like exploratory testing to systematically assess applications across three dimensions: Intention, Static, and Dynamic. Our experiments reveal that current LLMs still face significant challenges in generating high-quality MiniApps, while MiniAppEval demonstrates high alignment with human judgment, establishing a reliable standard for future research. Our code is available at github.com/MiniAppBench.