Sightation Counts: Leveraging Sighted User Feedback in Building a BLV-aligned Dataset of Diagram Descriptions

Wan Ju Kang, Eunki Kim, Na Min An, Sangryul Kim, Haemin Choi, Ki Hoon Kwak, James Thorne

2025-03-18

Summary

This paper focuses on improving diagram descriptions for blind and low-vision (BLV) users by using feedback from sighted individuals to evaluate and refine descriptions generated by AI.

What's the problem?

Creating useful diagram descriptions for BLV users is difficult because sighted people, who usually write the descriptions, may not know which information matters most to BLV readers or how best to present it. Descriptions written directly by sighted annotators are often costly to produce, prone to bias, and frequently fall short of what BLV users actually need.

What's the solution?

Instead of writing descriptions themselves, sighted annotators are asked to evaluate descriptions generated by AI (vision-language models). These assessments are then used to refine the AI-generated descriptions and to build a dataset of diagram descriptions better suited to BLV users, as validated by professional educators who are themselves BLV.
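As a rough illustration of this pipeline, sighted ratings of candidate descriptions could be turned into (chosen, rejected) preference pairs for tuning a model. The function below is a minimal sketch under assumed field names (`diagram_id`, `text`, `score`); it is not the paper's actual data schema.

```python
# Hypothetical sketch: convert sighted assessments of AI-generated diagram
# descriptions into preference pairs for model fine-tuning. The record
# fields below are illustrative assumptions, not the paper's schema.

def build_preference_pairs(ratings):
    """Group rated descriptions by diagram and pair the highest- against
    the lowest-rated description as (chosen, rejected)."""
    by_diagram = {}
    for r in ratings:
        by_diagram.setdefault(r["diagram_id"], []).append(r)

    pairs = []
    for diagram_id, descs in by_diagram.items():
        if len(descs) < 2:
            continue  # need at least two candidates to form a pair
        descs.sort(key=lambda d: d["score"], reverse=True)
        # Only emit a pair when the raters actually expressed a preference.
        if descs[0]["score"] > descs[-1]["score"]:
            pairs.append({
                "diagram_id": diagram_id,
                "chosen": descs[0]["text"],
                "rejected": descs[-1]["text"],
            })
    return pairs

ratings = [
    {"diagram_id": "d1", "text": "A bar chart comparing yearly rainfall.", "score": 4},
    {"diagram_id": "d1", "text": "An image.", "score": 1},
    {"diagram_id": "d2", "text": "A cycle diagram of photosynthesis.", "score": 5},
]
pairs = build_preference_pairs(ratings)
```

Here `d2` yields no pair because it has only one rated description, so the sketch produces a single pair for `d1`.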

Why it matters?

This work matters because it offers a more effective, user-centered way to produce accessible diagram descriptions for BLV users, helping them better understand visual information and participate more fully in education and other activities.

Abstract

Often, the needs and visual abilities differ between the annotator group and the end user group. Generating detailed diagram descriptions for blind and low-vision (BLV) users is one such challenging domain. Sighted annotators could describe visuals with ease, but existing studies have shown that direct generations by them are costly, bias-prone, and somewhat lacking by BLV standards. In this study, we ask sighted individuals to assess -- rather than produce -- diagram descriptions generated by vision-language models (VLM) that have been guided with latent supervision via a multi-pass inference. The sighted assessments prove effective and useful to professional educators who are themselves BLV and teach visually impaired learners. We release Sightation, a collection of diagram description datasets spanning 5k diagrams and 137k samples for completion, preference, retrieval, question answering, and reasoning training purposes and demonstrate their fine-tuning potential in various downstream tasks.