OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment

Ming Zhang, Kexin Tan, Yueyuan Huang, Yujiong Shen, Chunchun Ma, Li Ju, Xinran Zhang, Yuhui Wang, Wenqing Jing, Jingyi Deng, Huayu Sha, Binze Hu, Jingqi Tong, Changhao Jiang, Yage Geng, Yuankai Ying, Yue Zhang, Zhangyue Yin, Zhiheng Xi, Shihan Dou, Tao Gui, Qi Zhang

2026-01-06

Summary

This paper introduces OpenNovelty, a system designed to help with peer review by automatically assessing how original a research paper is and backing that assessment with evidence.

What's the problem?

Determining whether a research paper is truly new and innovative is really hard for peer reviewers. They have to read tons of existing research to check whether something similar has already been done, and because the literature is constantly growing, staying up-to-date and making fair judgments is a huge challenge.

What's the solution?

OpenNovelty uses a powerful language model, similar to the ones behind chatbots, but it doesn't just guess. It first identifies the paper's core task and what the paper claims to contribute. Then it searches for relevant existing papers using those claims as queries. After finding those papers, it carefully compares the new paper to each of them, building a sort of family tree of related work. Finally, it produces a report that clearly shows *why* it judges the paper novel (or not), backing up its conclusions with specific examples and citations from the papers it found.
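The four-phase pipeline described above can be sketched roughly as the code below. This is a minimal illustration only: every function and class name here is hypothetical, and the real system's LLM extraction, semantic retrieval, taxonomy construction, and report synthesis are stubbed with placeholder logic.

```python
from dataclasses import dataclass, field

# Illustrative sketch of OpenNovelty's four phases; none of these names
# are the system's real interfaces.

@dataclass
class Paper:
    title: str
    full_text: str

@dataclass
class NoveltyReport:
    core_task: str
    claims: list
    comparisons: dict = field(default_factory=dict)

def extract_claims(submission: Paper) -> tuple[str, list[str]]:
    # Phase 1: an LLM would extract the core task and contribution
    # claims; stubbed here as the title plus the first sentence.
    return submission.title, [submission.full_text.split(".")[0]]

def retrieve_prior_work(queries: list[str]) -> list[Paper]:
    # Phase 2: a semantic search engine would return relevant prior
    # papers for each query; stubbed with placeholder results.
    return [Paper(f"Prior work on: {q[:40]}", "...") for q in queries]

def compare_contributions(claims: list[str], prior: list[Paper]) -> dict:
    # Phase 3: the real system builds a hierarchical taxonomy and runs
    # contribution-level full-text comparisons; stubbed as a mapping
    # from each claim to the retrieved paper titles.
    return {claim: [p.title for p in prior] for claim in claims}

def synthesize_report(task: str, claims: list[str],
                      comparisons: dict) -> NoveltyReport:
    # Phase 4: assemble the evidence-grounded novelty report.
    return NoveltyReport(task, claims, comparisons)

def assess_novelty(submission: Paper) -> NoveltyReport:
    task, claims = extract_claims(submission)
    prior = retrieve_prior_work(claims)
    comparisons = compare_contributions(claims, prior)
    return synthesize_report(task, claims, comparisons)
```

For example, `assess_novelty(Paper("Toy title", "We propose X. More text."))` returns a report whose comparisons map each extracted claim to the prior work found for it. The point of the structure, as the paper describes it, is that every judgment in the final report is grounded in retrieved real papers rather than the model's unsupported opinion.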

Why it matters?

This tool could make peer review much more reliable and consistent. It helps reviewers avoid overlooking important related work and provides clear evidence for their decisions, leading to fairer evaluations and ultimately, better science. It's a scalable solution, meaning it can handle a large number of papers, and the reports are publicly available, promoting transparency in the research process.

Abstract

Evaluating novelty is critical yet challenging in peer review, as reviewers must assess submissions against a vast, rapidly evolving literature. This report presents OpenNovelty, an LLM-powered agentic system for transparent, evidence-based novelty analysis. The system operates in four phases: (1) extracting the core task and contribution claims to generate retrieval queries; (2) retrieving relevant prior work for those queries via a semantic search engine; (3) constructing a hierarchical taxonomy of work related to the core task and performing contribution-level full-text comparisons for each claimed contribution; and (4) synthesizing all analyses into a structured novelty report with explicit citations and evidence snippets. Unlike naive LLM-based approaches, OpenNovelty grounds all assessments in retrieved real papers, ensuring verifiable judgments. We deploy our system on 500+ ICLR 2026 submissions, with all reports publicly available on our website, and preliminary analysis suggests it can identify relevant prior work, including closely related papers that authors may overlook. OpenNovelty aims to empower the research community with a scalable tool that promotes fair, consistent, and evidence-backed peer review.