RewardSDS: Aligning Score Distillation via Reward-Weighted Sampling

Itay Chachy, Guy Yariv, Sagie Benaim

2025-03-13

Summary

This paper introduces RewardSDS, a method that helps AI-generated 3D models and images better match what users want by using a reward model to favor the best-scoring candidates during the creation process.

What's the problem?

Current AI tools for generating 3D models or images from text often miss fine details or fail to fully match the user's request, producing results that look off or miss the mark.

What's the solution?

RewardSDS adds a quality check: a reward model scores the different candidates the AI explores as it generates, and the method then weights its guidance toward the highest-scoring ones, making the final result more accurate.

Why it matters?

This improves AI tools for designers, game developers, or artists by creating 3D models and images that better match their vision, saving time and reducing frustration.

Abstract

Score Distillation Sampling (SDS) has emerged as an effective technique for leveraging 2D diffusion priors for tasks such as text-to-3D generation. While powerful, SDS struggles with achieving fine-grained alignment to user intent. To overcome this, we introduce RewardSDS, a novel approach that weights noise samples based on alignment scores from a reward model, producing a weighted SDS loss. This loss prioritizes gradients from noise samples that yield aligned high-reward output. Our approach is broadly applicable and can extend SDS-based methods. In particular, we demonstrate its applicability to Variational Score Distillation (VSD) by introducing RewardVSD. We evaluate RewardSDS and RewardVSD on text-to-image, 2D editing, and text-to-3D generation tasks, showing significant improvements over SDS and VSD on a diverse set of metrics measuring generation quality and alignment to desired reward models, enabling state-of-the-art performance. Project page is available at https://itaychachy.github.io/reward-sds/.
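The core idea in the abstract, weighting per-noise-sample gradients by reward-model scores, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the softmax weighting scheme, and the toy reward values are all assumptions for demonstration (the abstract says samples are weighted by alignment scores but does not specify the weighting function here).

```python
import numpy as np

def reward_weighted_sds_grad(per_sample_grads, rewards, temperature=1.0):
    """Combine per-noise-sample SDS gradients using reward-based weights.

    per_sample_grads: (K, D) array, one SDS gradient per noise sample.
    rewards: (K,) alignment scores from a reward model (higher = better).
    Returns a (D,) weighted gradient that prioritizes high-reward samples.
    """
    # Softmax over rewards (an assumed weighting scheme): high-reward
    # noise samples dominate the parameter update.
    logits = np.asarray(rewards, dtype=float) / temperature
    logits -= logits.max()                      # numerical stability
    w = np.exp(logits)
    w /= w.sum()
    # Weighted average of the per-sample gradients.
    return (w[:, None] * np.asarray(per_sample_grads, dtype=float)).sum(axis=0)

# Toy usage: 3 noise samples, 2-dimensional parameter gradient.
grads = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
rewards = np.array([2.0, 0.5, -3.0])  # hypothetical reward-model scores
g = reward_weighted_sds_grad(grads, rewards)
```

With uniform rewards this reduces to the ordinary SDS average over noise samples; as the temperature shrinks, the update concentrates on the single highest-reward sample.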