VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space
VoxHammer enables editing of 3D models conditioned on images.
VoxHammer also supports editing of 3D models conditioned on text.
VoxHammer enables flexible editing of part-aware generated 3D assets.
VoxHammer further extends to compositional 3D scene editing.
VoxHammer also generalizes to NeRF and 3DGS editing.
Given an input 3D model, a user-specified editing region, and a text prompt, an off-the-shelf 2D inpainting model is used to inpaint a rendered view of the 3D model. Subsequently, our VoxHammer, a training-free framework based on structured 3D diffusion models, performs native 3D editing conditioned on the input 3D asset and the edited image.
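To make the pipeline concrete, the following minimal Python sketch walks through its three stages. The callables render_view, inpaint_2d, and voxhammer_edit are hypothetical stand-ins for the rendering utility, the off-the-shelf 2D inpainting model, and the native 3D editing step described below; they are not part of a released API.

# Hypothetical sketch of the editing pipeline described above; all three
# callables are illustrative stand-ins, not a released API.
def edit_3d_asset(asset, region_mask_3d, prompt,
                  render_view, inpaint_2d, voxhammer_edit):
    # 1. Render the input 3D model, projecting the user-specified
    #    editing region into a 2D mask for the rendered view.
    image, region_mask_2d = render_view(asset, region_mask_3d)
    # 2. Inpaint the masked area with an off-the-shelf 2D model,
    #    guided by the text prompt.
    edited_image = inpaint_2d(image, region_mask_2d, prompt)
    # 3. Perform training-free native 3D editing conditioned on the
    #    original asset and the edited view.
    return voxhammer_edit(asset, region_mask_3d, edited_image)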
Our framework adopts TRELLIS as the base model, which predicts sparse structures in the first structure (ST) stage and denoises fine-grained structured latents in the second sparse-latent (SLAT) stage. VoxHammer performs inversion in both the ST and SLAT stages, mapping the textured 3D asset to its terminal noise while caching the latents and key/value tensors at each timestep. It then denoises from the inverted noise, replacing the features of the preserved regions with the corresponding cached latents and key/value tokens, thereby achieving precise and coherent editing in native 3D space.
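As a hedged illustration of this cache-and-replace mechanism, the PyTorch sketch below applies it to a single rectified-flow stage. The names flow_model, ts, and keep_mask are assumptions rather than the released TRELLIS/VoxHammer API, and only latent replacement is shown; the cached key/value tokens are substituted analogously inside the attention layers.

import torch

@torch.no_grad()
def invert_and_cache(flow_model, x0, ts):
    # Integrate the rectified-flow ODE forward in time (clean latent at
    # t=0 -> terminal noise at t=1), caching the latent at every step.
    cache = [x0]
    x = x0
    for i in range(len(ts) - 1):
        v = flow_model(x, ts[i])             # predicted velocity field
        x = x + (ts[i + 1] - ts[i]) * v      # Euler step toward noise
        cache.append(x)
    return x, cache                          # cache[i] = latent at ts[i]

@torch.no_grad()
def denoise_with_replacement(flow_model, x_T, ts, cache, keep_mask):
    # Denoise from the inverted noise; after each step, overwrite the
    # latents of the preserved region with the cached trajectory so the
    # unedited geometry and appearance are reproduced consistently.
    x = x_T
    for i in range(len(ts) - 1, 0, -1):
        v = flow_model(x, ts[i])
        x = x + (ts[i - 1] - ts[i]) * v      # Euler step back toward data
        x = torch.where(keep_mask, cache[i - 1], x)
    return x

# Toy usage with a dummy velocity model, just to show the shapes.
ts = torch.linspace(0.0, 1.0, 26)
x0 = torch.randn(1, 8, 16, 16, 16)           # stand-in structured latent
keep_mask = torch.zeros_like(x0, dtype=torch.bool)
keep_mask[..., :8] = True                    # preserve half of the grid
toy_model = lambda x, t: -x                  # placeholder, not TRELLIS
noise, cache = invert_and_cache(toy_model, x0, ts)
edited = denoise_with_replacement(toy_model, noise, ts, cache, keep_mask)

Caching the full trajectory trades memory for faithful reconstruction of the preserved region; in VoxHammer the same loop runs once for the ST stage and once for the SLAT stage.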
Explore our Edit3D-Bench visualization for more details.
If you find our work useful, please consider citing:
@article{li2025voxhammer,
  title   = {VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space},
  author  = {Li, Lin and Huang, Zehuan and Feng, Haoran and Zhuang, Gengxiong and Chen, Rui and Guo, Chunchao and Sheng, Lu},
  journal = {arXiv preprint arXiv:2508.19247},
  year    = {2025}
}
The website template is borrowed from TRELLIS.