JanusCoder: Towards a Foundational Visual-Programmatic Interface for Code Intelligence
Qiushi Sun, Jingyang Gong, Yang Liu, Qiaosheng Chen, Lei Li, Kai Chen, Qipeng Guo, Ben Kao, Fei Yuan
2025-10-30
Summary
This paper focuses on making computers better at understanding the connection between code and what that code *shows* visually, like charts or websites. It's about building AI that can create code based on what you want to see, or understand code by looking at the visuals it produces.
What's the problem?
Currently, it's hard to build these kinds of AI systems because there isn't much good training data that pairs code with its visual output. Creating this data is difficult – it's not easy to automatically generate code that makes good-looking and functional visuals, and it's also hard to automatically check if the visuals produced by code are correct. This lack of data is a major roadblock to progress.
What's the solution?
The researchers created a new toolkit to automatically generate a huge dataset called JanusCode-800K, which contains 800,000 examples pairing code with the visuals it produces. The toolkit exploits the relationship between code and its visual output to make generation and quality checking more efficient. They then used this dataset to train new AI models, called JanusCoder and JanusCoderV, that can generate code from text descriptions, visual examples, or a combination of both. Importantly, these models handle all of these tasks in a single unified model, unlike previous approaches that used separate models for each task.
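To make the idea of a "code paired with its visual output" training example concrete, here is a minimal, illustrative sketch. This is not the authors' actual toolkit; it assumes a toy renderer (`render_bar_chart_svg`) that turns data into an SVG bar chart, then stores the generating code and its rendered visual together as one example, the way a code-visual corpus might.

```python
# Illustrative sketch only -- not the JanusCode-800K pipeline.
# A toy renderer produces an SVG "visual" from data, and make_example
# pairs the generating code (text modality) with that rendered output.

def render_bar_chart_svg(values, width=200, height=100):
    """Render values as a simple SVG bar chart string (the visual modality)."""
    bar_w = width // len(values)
    peak = max(values)
    bars = []
    for i, v in enumerate(values):
        h = int(height * v / peak)  # scale each bar to the tallest value
        bars.append(
            f'<rect x="{i * bar_w}" y="{height - h}" '
            f'width="{bar_w - 2}" height="{h}" fill="steelblue"/>'
        )
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width}" height="{height}">' + "".join(bars) + "</svg>"
    )

def make_example(values):
    """Pair the generating code snippet with its rendered visual output."""
    code = f"render_bar_chart_svg({values!r})"
    return {"code": code, "visual": render_bar_chart_svg(values)}

example = make_example([3, 7, 5])
```

In a real synthesis pipeline, each example would also be filtered by executing the code and checking that the rendered output is well-formed, which is exactly the quality-assessment step the paper identifies as difficult to automate.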
Why does it matter?
This work is important because it significantly improves the ability of AI to bridge the gap between code and visuals. This opens up possibilities for more flexible content creation, easier editing of visualizations, and generally makes it simpler to use computers to create things we can *see* based on what we *tell* them to do. The models they created perform very well, even rivaling some commercial AI systems, and the dataset they built will be valuable for future research in this area.
Abstract
The scope of neural code intelligence is rapidly expanding beyond text-based source code to encompass the rich visual outputs that programs generate. This visual dimension is critical for advanced applications like flexible content generation and precise, program-driven editing of visualizations. However, progress has been impeded by the scarcity of high-quality multimodal code data, a bottleneck stemming from challenges in synthesis and quality assessment. To address these challenges, we make contributions from both a data and modeling perspective. We first introduce a complete synthesis toolkit that leverages reciprocal synergies between data modalities to efficiently produce a large-scale, high-quality corpus spanning from standard charts to complex interactive web UIs and code-driven animations. Leveraging this toolkit, we construct JanusCode-800K, the largest multimodal code corpus to date. This powers the training of our models, JanusCoder and JanusCoderV, which establish a visual-programmatic interface for generating code from textual instructions, visual inputs, or a combination of both. Our unified model is a departure from existing approaches that build specialized models for isolated tasks. Extensive experiments on both text-centric and vision-centric coding tasks demonstrate the superior performance of the JanusCoder series, with our 7B to 14B scale models approaching or even exceeding the performance of commercial models. Furthermore, extensive analysis provides key insights into harmonizing programmatic logic with its visual expression. Our code and checkpoints are available at https://github.com/InternLM/JanusCoder.