This paper introduces MIDI, a novel paradigm for compositional 3D scene generation from a single image. Unlike existing methods that rely on reconstruction or retrieval techniques, or recent approaches that employ multi-stage object-by-object generation, MIDI extends pre-trained image-to-3D object generation models to multi-instance diffusion models, enabling the simultaneous generation of multiple 3D instances with accurate spatial relationships and high generalizability. At its core, MIDI incorporates a novel multi-instance attention mechanism that effectively captures inter-object interactions and spatial coherence directly within the generation process, without the need for complex multi-step pipelines. The method takes partial object images and global scene context as inputs, directly modeling object completion during 3D generation. During training, we effectively supervise the interactions between 3D instances using a limited amount of scene-level data, while incorporating single-object data for regularization, thereby preserving the pre-trained generalization ability. MIDI demonstrates state-of-the-art performance in image-to-scene generation, validated through evaluations on synthetic data, real-world scene data, and stylized scene images generated by text-to-image diffusion models.
MIDI is a multi-instance diffusion model that generates
compositional 3D instances of a scene from a single image. Based
on 3D object generation models, MIDI denoises the latent
representations of multiple 3D instances simultaneously using a
weight-shared DiT module. The multi-instance attention layers are
introduced to learn cross-instance interaction and enable global
awareness, while cross-attention layers integrate the information of
object images and global scene context.
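To make the idea of the multi-instance attention layers concrete, here is a minimal single-head sketch in NumPy. It is a hypothetical simplification, not MIDI's actual implementation: the latent tokens of all instances are concatenated into one sequence before attention, so every instance's tokens can attend to every other instance's tokens, which is the mechanism that provides cross-instance interaction and global awareness. The function name, shapes, and weight matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_instance_attention(tokens, Wq, Wk, Wv):
    """Hypothetical single-head sketch of multi-instance attention.

    tokens: (num_instances, seq_len, dim) latent tokens, one row of
    tokens per 3D instance being denoised. All instances are flattened
    into a single sequence so attention spans every instance.
    """
    n, t, d = tokens.shape
    x = tokens.reshape(n * t, d)            # concatenate instances into one sequence
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))    # each token attends across all instances
    out = attn @ v
    return out.reshape(n, t, d)             # scatter results back per instance

rng = np.random.default_rng(0)
n, t, d = 3, 4, 8                           # e.g. 3 instances, 4 latent tokens each
tokens = rng.standard_normal((n, t, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = multi_instance_attention(tokens, Wq, Wk, Wv)
print(out.shape)
```

In an actual DiT block this would sit alongside the cross-attention layers that inject the per-object image features and global scene context; the key design point captured here is that the attention sequence spans all instances rather than being computed independently per object.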
@inproceedings{huang2025midi,
title={{MIDI}: Multi-Instance Diffusion for Single Image to 3D Scene Generation},
author={Huang, Zehuan and Guo, Yuan-Chen and An, Xingqiao and Yang, Yunhan and Li, Yangguang and Zou, Zi-Xin and Liang, Ding and Liu, Xihui and Cao, Yan-Pei and Sheng, Lu},
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
pages={23646--23657},
year={2025}
}