
MMKE-Bench: A Multimodal Editing Benchmark for Diverse Visual Knowledge

Yuntao Du, Kailin Jiang, Zhi Gao, Chenrui Shi, Zilong Zheng, Siyuan Qi, Qing Li

2025-02-27


Summary

This paper introduces MMKE-Bench, a new benchmark for testing how well AI systems that use both text and images can have their knowledge updated. It's designed to be more realistic and challenging than previous tests.

What's the problem?

Current AI models that work with both text and images (called multimodal models) are impressive, but they sometimes contain outdated or wrong information. We need ways to fix this without retraining the model from scratch, but the existing benchmarks for testing such fixes mostly cover simple facts about individual entities and don't reflect real-world complexity.

What's the solution?

The researchers created MMKE-Bench, which includes three types of editing tasks: changing facts about visual entities (visual entity editing), updating how the AI interprets visual concepts and actions (visual semantic editing), and adding personalized, user-specific knowledge (user-specific editing). The knowledge is expressed in free-form natural language instead of simple triplets, which better matches real-world situations. The benchmark contains 2,940 pieces of knowledge and 8,363 images across 33 broad categories, with evaluation questions that are automatically generated and then human-verified.
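To make the three task types concrete, here is a minimal Python sketch of what a single benchmark record could look like. The class and field names (EditExample, task_type, eval_questions, and so on) are illustrative assumptions for explanation only, not the benchmark's actual data format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EditExample:
    """Hypothetical MMKE-Bench-style record; field names are illustrative."""
    task_type: str           # "visual_entity" | "visual_semantic" | "user_specific"
    category: str            # one of the benchmark's 33 broad categories
    image_paths: List[str]   # images grounding the knowledge to be edited
    original_knowledge: str  # free-form natural-language description before the edit
    edited_knowledge: str    # free-form description the edited model should adopt
    eval_questions: List[str] = field(default_factory=list)  # auto-generated, human-verified probes

# Example of a visual entity edit (contents are made up for illustration).
example = EditExample(
    task_type="visual_entity",
    category="landmark",
    image_paths=["images/tower_01.jpg"],
    original_knowledge="The tower in the image is located in Paris, France.",
    edited_knowledge="The tower in the image is described as being located in Lyon, France.",
    eval_questions=["Where is the tower shown in the image located?"],
)
```

The key point this sketch illustrates is that each piece of knowledge is stated as free-form text tied to one or more images, rather than as a rigid (subject, relation, object) triplet.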

Why it matters?

This matters because as AI becomes more common in our lives, we need ways to keep it up-to-date and accurate. MMKE-Bench helps researchers create better methods for updating AI knowledge, which could lead to smarter, more reliable AI systems that can handle complex real-world information. This could improve things like virtual assistants, image recognition, and personalized AI services.

Abstract

Knowledge editing techniques have emerged as essential tools for updating the factual knowledge of large language models (LLMs) and multimodal models (LMMs), allowing them to correct outdated or inaccurate information without retraining from scratch. However, existing benchmarks for multimodal knowledge editing primarily focus on entity-level knowledge represented as simple triplets, which fail to capture the complexity of real-world multimodal information. To address this issue, we introduce MMKE-Bench, a comprehensive MultiModal Knowledge Editing Benchmark, designed to evaluate the ability of LMMs to edit diverse visual knowledge in real-world scenarios. MMKE-Bench addresses these limitations by incorporating three types of editing tasks: visual entity editing, visual semantic editing, and user-specific editing. In addition, MMKE-Bench uses free-form natural language to represent and edit knowledge, offering a more flexible and effective format. The benchmark consists of 2,940 pieces of knowledge and 8,363 images across 33 broad categories, with evaluation questions automatically generated and human-verified. We assess five state-of-the-art knowledge editing methods on three prominent LMMs, revealing that no method excels across all criteria, and that visual and user-specific edits are particularly challenging. MMKE-Bench sets a new standard for evaluating the robustness of multimodal knowledge editing techniques, driving progress in this rapidly evolving field.
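For intuition about how a benchmark like this is consumed, the sketch below shows a bare-bones evaluation loop that applies an editing method to a model and then asks the verification questions. It reuses the hypothetical EditExample sketch above; the apply_edit and answer callables, the containment-based scoring, and the single "reliability" score are stand-in assumptions, not the paper's actual evaluation protocol or metrics.

```python
from typing import Callable, Dict, List

def evaluate_editor(
    apply_edit: Callable,   # hypothetical: (model, example) -> edited model
    answer: Callable,       # hypothetical: (model, question, image_paths) -> text answer
    model,
    examples: List[EditExample],
) -> Dict[str, float]:
    """Toy scoring loop: apply each edit, then probe the edited model with its questions."""
    correct, total = 0, 0
    for ex in examples:
        edited_model = apply_edit(model, ex)  # apply one edit at a time
        for question in ex.eval_questions:
            prediction = answer(edited_model, question, ex.image_paths)
            # Naive substring check stands in for the benchmark's real answer matching.
            correct += int(ex.edited_knowledge.lower() in prediction.lower())
            total += 1
    # "Reliability" (did the edit take hold?) is one common criterion in knowledge-editing
    # evaluation; other criteria such as generality and locality are omitted here.
    return {"reliability": correct / max(total, 1)}
```

Running a loop like this for each editing method and each LMM is one way to produce the kind of per-criterion comparison the abstract describes, where no single method comes out ahead everywhere.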