Generating Compositional Scenes via Text-to-image RGBA Instance Generation
Alessandro Fontanella, Petru-Daniel Tudosiu, Yongxin Yang, Shifeng Zhang, Sarah Parisot
2024-11-21

Summary
This paper presents a new method for generating complex images from text descriptions using RGBA instance generation, a technique that gives finer control over how objects appear and are arranged in the generated scene.
What's the problem?
While current text-to-image models can create high-quality images, they often require complicated prompts and lack the ability to edit layouts or control specific details of objects. This makes it hard for users to get exactly what they want in their generated images, especially when dealing with multiple objects or complex scenes.
What's the solution?
The authors propose a multi-stage generation process that first creates individual image components as RGBA images, which include transparency information. This allows fine-grained control over each object's appearance and position. These pre-generated components are then combined using a multi-layer approach that carefully assembles them into a complete scene. This method improves flexibility and interactivity, enabling users to manipulate the layout and attributes of objects more easily than with previous methods.
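To make the layering idea concrete, below is a minimal sketch (plain NumPy, with hypothetical names) of naive back-to-front alpha compositing of pre-generated RGBA instances onto a background. This is not the paper's diffusion-based composite generation, which blends instances into a realistic, coherent scene; it only illustrates why per-instance transparency makes each object's position and stacking order easy to manipulate after generation.

```python
# Minimal illustration (not the paper's method): naive back-to-front alpha
# compositing of pre-generated RGBA instance layers onto an RGB background.
import numpy as np

def composite_layers(background: np.ndarray,
                     layers: list[tuple[np.ndarray, tuple[int, int]]]) -> np.ndarray:
    """Paste RGBA layers (with top-left offsets) onto an RGB background, back to front."""
    canvas = background.astype(np.float32) / 255.0            # H x W x 3 in [0, 1]
    for rgba, (top, left) in layers:
        rgba = rgba.astype(np.float32) / 255.0                # h x w x 4 in [0, 1]
        h, w = rgba.shape[:2]
        rgb, alpha = rgba[..., :3], rgba[..., 3:4]
        region = canvas[top:top + h, left:left + w]
        # Standard "over" operator: foreground weighted by its alpha channel.
        canvas[top:top + h, left:left + w] = alpha * rgb + (1.0 - alpha) * region
    return (canvas * 255.0).astype(np.uint8)

# Usage: instances generated earlier (e.g. by an RGBA diffusion model) can be
# repositioned or reordered simply by changing their offsets or list order.
bg = np.full((512, 512, 3), 200, dtype=np.uint8)              # plain grey background
instance = np.zeros((128, 128, 4), dtype=np.uint8)            # placeholder RGBA instance
scene = composite_layers(bg, [(instance, (300, 100))])
```

In the actual approach, this simple pasting step is replaced by a learned multi-layer composite generation process, so the assembled scene stays photorealistic rather than looking like a collage.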
Why it matters?
This research is important because it enhances the capabilities of text-to-image generation, making it more user-friendly and precise. By allowing for detailed control over how objects are represented and arranged, this method can benefit artists, designers, and anyone looking to create customized images from text prompts.
Abstract
Text-to-image diffusion generative models can generate high-quality images at the cost of tedious prompt engineering. Controllability can be improved by introducing layout conditioning; however, existing methods lack layout editing ability and fine-grained control over object attributes. The concept of multi-layer generation holds great potential to address these limitations; however, generating image instances concurrently with scene composition limits control over fine-grained object attributes, relative positioning in 3D space, and scene manipulation abilities. In this work, we propose a novel multi-stage generation paradigm designed for fine-grained control, flexibility, and interactivity. To ensure control over instance attributes, we devise a novel training paradigm that adapts a diffusion model to generate isolated scene components as RGBA images with transparency information. To build complex images, we employ these pre-generated instances and introduce a multi-layer composite generation process that smoothly assembles components into realistic scenes. Our experiments show that our RGBA diffusion model is capable of generating diverse, high-quality instances with precise control over object attributes. Through multi-layer composition, we demonstrate that our approach allows images to be built and manipulated from highly complex prompts with fine-grained control over object appearance and location, granting a higher degree of control than competing methods.