Artificial Intelligence and Misinformation in Art: Can Vision Language Models Judge the Hand or the Machine Behind the Canvas?

Tarian Fu, Javier Conde, Gonzalo Martínez, Pedro Reviriego, Elena Merino-Gómez, Fernando Moral

2025-08-05

Summary

This paper looks at how well current vision language models can identify whether a piece of artwork was made by a human artist or generated by AI. It shows that these models often get the attribution wrong, mistaking AI-generated images for human work and vice versa.

What's the problem?

The problem is that as AI-generated art looks more realistic, it becomes harder to tell it apart from human-made art. Existing models have trouble accurately recognizing who or what made the art, which can cause misinformation and confusion.

What's the solution?

The paper evaluates how well state-of-the-art vision language models distinguish human-made art from AI-generated art and attribute works to their artists. It documents their limitations and points out the need for better approaches to prevent mistakes and misinformation (a rough sketch of the kind of query involved is shown below).
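To make the evaluation concrete, here is a minimal sketch of the kind of question such a study might put to a vision language model: show it a painting and ask whether it was made by a human or generated by AI. This is only an illustration, not the paper's actual protocol; the model name (gpt-4o), prompt wording, and image URL are assumptions.

```python
# Minimal sketch (assumed setup, not the paper's protocol): ask a vision
# language model to judge whether a painting is human-made or AI-generated.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def attribute_artwork(image_url: str) -> str:
    """Ask the model to classify the image as 'human' or 'AI'."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model could be substituted here
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Was this painting created by a human artist or "
                            "generated by an AI model? Answer with exactly "
                            "one word: 'human' or 'AI'."
                        ),
                    },
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # Hypothetical image URL used only for illustration.
    print(attribute_artwork("https://example.com/painting.jpg"))
```

A study like this one would repeat such queries over a labeled set of human-made and AI-generated images and compare the answers against the ground truth; the paper reports that current models make many errors on exactly this kind of judgment.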

Why it matters?

This matters because correctly identifying the source of artwork is important for artists, collectors, and platforms to avoid fake or misleading information, protect artists’ rights, and maintain trust in digital content.

Abstract

State-of-the-art vision language models struggle with accurately attributing artists and distinguishing AI-generated images, highlighting the need for improvement to prevent misinformation.