DiffNR: Diffusion-Enhanced Neural Representation Optimization for Sparse-View 3D Tomographic Reconstruction

Shiyan Su, Ruyi Zha, Danli Shi, Hongdong Li, Xuelian Cheng

2026-04-27

Summary

This paper introduces a new method called DiffNR that improves the quality of 3D images reconstructed from CT scans, especially when the scan uses only a limited number of projection angles and therefore provides little data.

What's the problem?

CT scans sometimes have to be taken with fewer angles to reduce radiation exposure or scan time. However, this results in 'sparse-view' settings, which create noticeable artifacts and blurriness in the reconstructed 3D image. Existing methods for fixing these issues are often slow and computationally expensive.

What's the solution?

DiffNR uses a diffusion model to 'repair' the degraded slices of a CT reconstruction. Its core component, SliceFixer, is a single-step diffusion model designed specifically to correct these artifacts. Rather than running costly diffusion computations at every step of reconstruction, SliceFixer is queried only periodically to generate 'reference' volumes that guide the overall 3D reconstruction, which keeps the process fast. The authors also carefully curated the data used to finetune the model so that it works even better.
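To make the repair-and-augment idea concrete, here is a minimal toy sketch of the loop described above: an optimization that fits a volume to sparse measurements, while a repair model is queried only every few iterations to refresh a pseudo-reference that supplies extra supervision. Everything here is an illustrative assumption, not the authors' implementation: `slicefixer` is a stand-in (simple smoothing) for the real single-step diffusion model, the projector is the identity, and the loss weights are arbitrary.

```python
import numpy as np

def slicefixer(slices):
    """Placeholder for the single-step diffusion repair model.
    Here, neighbor smoothing stands in for 'artifact correction'."""
    return 0.5 * slices + 0.25 * (np.roll(slices, 1, axis=-1)
                                  + np.roll(slices, -1, axis=-1))

def reconstruct(measured, n_iters=100, repair_every=25, lr=0.1, lam=0.5):
    """Fit a volume to sparse measurements (toy projector = identity),
    periodically refreshing a pseudo-reference with the repair model."""
    volume = np.zeros_like(measured)
    reference = slicefixer(volume)
    for it in range(n_iters):
        if it % repair_every == 0:
            # Periodic (not per-step) repair query: this is the key to
            # avoiding frequent, expensive diffusion-model evaluations.
            reference = slicefixer(volume)
        grad = (volume - measured)            # data-fidelity term
        grad += lam * (volume - reference)    # auxiliary supervision term
        volume -= lr * grad
    return volume
```

The point of the sketch is the schedule, not the arithmetic: the expensive model runs `n_iters / repair_every` times instead of `n_iters` times, while its output still constrains every gradient step in between.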

Why it matters?

This research matters because it produces clearer 3D reconstructions from CT scans even when fewer scan angles are used. That means patients can be exposed to less radiation and scans can finish more quickly, while image quality is maintained. The method also generalizes well across different types of CT data.

Abstract

Neural representations (NRs), such as neural fields and 3D Gaussians, effectively model volumetric data in computed tomography (CT) but suffer from severe artifacts under sparse-view settings. To address this, we propose DiffNR, a novel framework that enhances NR optimization with diffusion priors. At its core is SliceFixer, a single-step diffusion model designed to correct artifacts in degraded slices. We integrate specialized conditioning layers into the network and develop tailored data curation strategies to support model finetuning. During reconstruction, SliceFixer periodically generates pseudo-reference volumes, providing auxiliary 3D perceptual supervision to fix underconstrained regions. Compared to prior methods that embed CT solvers into time-consuming iterative denoising, our repair-and-augment strategy avoids frequent diffusion model queries, leading to better runtime performance. Extensive experiments show that DiffNR improves PSNR by 3.99 dB on average, generalizes well across domains, and maintains efficient optimization.