MIDI: Multi-Instance Diffusion for Single Image to 3D Scene Generation

1Beihang University    2VAST    3Tsinghua University    4The University of Hong Kong

Create high-fidelity 3D scenes from a single image using multi-instance diffusion models.

Abstract

This paper introduces MIDI, a novel paradigm for compositional 3D scene generation from a single image. Unlike existing methods that rely on reconstruction or retrieval techniques, or recent approaches that employ multi-stage object-by-object generation, MIDI extends pre-trained image-to-3D object generation models to multi-instance diffusion models, enabling the simultaneous generation of multiple 3D instances with accurate spatial relationships and high generalizability. At its core, MIDI incorporates a novel multi-instance attention mechanism that effectively captures inter-object interactions and spatial coherence directly within the generation process, without the need for complex multi-step processes. The method takes partial object images and the global scene context as inputs, directly modeling object completion during 3D generation. During training, we effectively supervise the interactions between 3D instances using a limited amount of scene-level data, while incorporating single-object data for regularization, thereby maintaining the pre-trained generalization ability. MIDI demonstrates state-of-the-art performance in image-to-scene generation, validated through evaluations on synthetic data, real-world scene data, and stylized scene images generated by text-to-image diffusion models.
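For intuition, the mixed training described above might look roughly like the sketch below: scene-level multi-instance batches supervise cross-instance interaction, while single-object batches regularize toward the pre-trained object prior. The forward process, loss target, sampling ratio, and function names are illustrative assumptions, not the exact training recipe.

```python
import random
import torch
import torch.nn.functional as F

def training_step(denoiser, scene_batches, object_batches, p_scene=0.5):
    """One illustrative training step mixing scene-level and single-object data.

    `denoiser` stands in for the weight-shared DiT, taking noisy latents,
    timesteps, and image conditioning; shapes, the noising path, and the loss
    target are assumptions for illustration only.
    """
    if random.random() < p_scene:
        latents, cond = next(scene_batches)   # latents: (B, num_instances, T, D)
    else:
        latents, cond = next(object_batches)  # latents: (B, 1, T, D)

    noise = torch.randn_like(latents)
    t = torch.rand(latents.shape[0], device=latents.device)  # timesteps in [0, 1]
    tb = t.view(-1, 1, 1, 1)
    noisy = (1.0 - tb) * latents + tb * noise                 # simple interpolation path

    pred = denoiser(noisy, t, cond)                           # joint denoising over instances
    return F.mse_loss(pred, noise - latents)                  # velocity-style target (assumption)
```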

How it works


Given an input image of a scene, we segment it into multiple instance images and use a multi-instance diffusion model, conditioned on these images and the global scene context, to generate compositional 3D instances of the scene. These 3D instances can be directly composed into a complete scene. The whole process takes as little as 40 seconds.
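As a minimal sketch of that flow, assuming hypothetical `segmenter` and `midi_pipeline` components (the released code and its API may differ):

```python
from PIL import Image

def image_to_scene(scene_image: Image.Image, segmenter, midi_pipeline) -> list:
    """End-to-end flow described above, with hypothetical components:
    `segmenter` returns per-instance images and masks, and `midi_pipeline`
    wraps the multi-instance diffusion model (names are illustrative).
    """
    # 1. Segment the scene image into per-instance images and masks.
    instance_images, instance_masks = segmenter(scene_image)

    # 2. Jointly denoise all instance latents, conditioned on the instance
    #    images and the global scene image, so the spatial layout stays coherent.
    meshes = midi_pipeline(
        scene_image=scene_image,
        instance_images=instance_images,
        instance_masks=instance_masks,
        num_inference_steps=50,
    )

    # 3. The generated instances share one scene coordinate frame, so composing
    #    the scene amounts to collecting the meshes together.
    return meshes
```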

Interactive Results


Comparisons to Other Methods


Method Overview



MIDI is a multi-instance diffusion model that generates the compositional 3D instances of a scene from a single image. Built on pre-trained 3D object generation models, MIDI denoises the latent representations of multiple 3D instances simultaneously using a weight-shared DiT module. Multi-instance attention layers are introduced to learn cross-instance interactions and enable global awareness, while cross-attention layers integrate information from the object images and the global scene context.
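A minimal sketch of the multi-instance attention idea in PyTorch, with illustrative tensor shapes and module names (not the exact implementation): each instance carries its own latent token sequence, and attention is computed over the concatenation of all instances' tokens so every instance can attend to every other one.

```python
import torch
import torch.nn as nn

class MultiInstanceAttention(nn.Module):
    """Self-attention over the tokens of all instances in one scene.

    Queries come from each instance's own tokens, while keys and values are
    gathered from every instance, letting the denoiser reason about the
    cross-instance layout. Shapes are illustrative.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_instances, tokens_per_instance, dim)
        b, n, t, d = x.shape
        # Flatten the instance axis into the token axis so attention spans
        # all instances of the same scene at once.
        tokens = x.reshape(b, n * t, d)
        out, _ = self.attn(tokens, tokens, tokens)
        return out.reshape(b, n, t, d)


# Usage: 4 instances, 256 latent tokens each, 1024-dim features.
x = torch.randn(2, 4, 256, 1024)
layer = MultiInstanceAttention(dim=1024)
print(layer(x).shape)  # torch.Size([2, 4, 256, 1024])
```

One appeal of this shape, under the assumptions above, is that flattening the instance axis into the token axis reuses ordinary self-attention, which fits the idea of extending a pre-trained single-object DiT to multiple instances.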

BibTeX

@article{huang2024midi,
  title={MIDI: Multi-Instance Diffusion for Single Image to 3D Scene Generation},
  author={Huang, Zehuan and Guo, Yuanchen and An, Xingqiao and Yang, Yunhan and Li, Yangguang and Zou, Zixin and Liang, Ding and Liu, Xihui and Cao, Yanpei and Sheng, Lu},
  journal={arXiv preprint arXiv:2412.03558},
  year={2024}
}