VoxHammer
Training-Free Precise and Coherent 3D Editing in Native 3D Space
Lin Li2*
Zehuan Huang1*†
Haoran Feng3
Gengxiong Zhuang1
Rui Chen1
Chunchao Guo4
Lu Sheng1✉
1Beihang University
2Renmin University of China
3Tsinghua University
4Tencent Hunyuan
* Equal contribution
† Project Lead
✉ Corresponding author
Teaser Image
TL;DR: A training-free 3D editing approach that performs precise and coherent editing in native 3D latent space instead of multi-view space.
Local editing of specified 3D regions is crucial for the game industry and robot interaction. Recent methods typically edit rendered multi-view images and then reconstruct 3D models, but they struggle to precisely preserve unedited regions and maintain overall coherence. Inspired by structured 3D generative models, we propose VoxHammer, a novel training-free approach that performs precise and coherent editing in 3D latent space. Given a 3D model, VoxHammer first predicts its inversion trajectory and obtains its inverted latents and key-value tokens at each timestep. Subsequently, in the denoising and editing phase, we replace the denoising features of the preserved regions with the corresponding inverted latents and cached key-value tokens. By retaining these contextual features, this approach ensures consistent reconstruction of preserved areas and coherent integration of edited parts. To evaluate the consistency of preserved regions, we construct Edit3D-Bench, a human-annotated dataset comprising hundreds of samples, each with carefully labeled 3D editing regions. Experiments demonstrate that VoxHammer significantly outperforms existing methods in both the 3D consistency of preserved regions and overall quality. Our method holds promise for synthesizing high-quality paired editing data, thereby laying the data foundation for in-context 3D generation.
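To make the replacement step concrete, here is a minimal sketch in our own notation (the mask m, latents, and inverted latents are illustrative symbols, not taken from the paper):

```latex
% Preserved-region feature replacement at each denoising timestep t
% (illustrative notation): m is a binary mask over latent tokens
% (1 = preserved region), z_t is the current denoising latent, and
% \hat{z}_t is the latent cached during inversion.
z_t \leftarrow m \odot \hat{z}_t + (1 - m) \odot z_t
```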
Preview
Editing | Image-condition 3D Editing

VoxHammer enables the editing of 3D models conditioned on images. Click on the cards to view extracted GLB files.

Editing | Text-condition 3D Editing

VoxHammer also supports editing 3D models conditioned on text. Click on the cards to view extracted GLB files.

Applications | Part-aware Object Editing

VoxHammer enables flexible editing of part-aware generated 3D assets.

Applications | Compositional 3D Scene Editing

VoxHammer further extends to compositional 3D scene editing.

Applications | NeRF or 3DGS Editing

VoxHammer also generalizes to NeRF and 3DGS editing.

Methodology

Pipeline of the method

Given an input 3D model, a user-specified editing region, and a text prompt, off-the-shelf 2D models are used to inpaint a rendered view of the 3D model. Our VoxHammer, a training-free framework built on structured 3D diffusion models, then performs native 3D editing conditioned on the input 3D model and the edited image, as sketched below.
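A minimal pseudocode sketch of this pipeline, assuming illustrative helper names (render_view, project_mask, inpaint_2d, voxhammer_edit) that are placeholders rather than the released API:

```python
# Hypothetical sketch of the VoxHammer editing pipeline.
# All helper names are illustrative; see the code release for the actual API.

def edit_3d(mesh, edit_mask_3d, prompt):
    """Edit `mesh` inside `edit_mask_3d` according to `prompt`."""
    # 1. Render a view of the input 3D model.
    image, camera = render_view(mesh)

    # 2. Project the 3D editing region into the rendered view and
    #    inpaint it with an off-the-shelf 2D inpainting model.
    mask_2d = project_mask(edit_mask_3d, camera)
    edited_image = inpaint_2d(image, mask_2d, prompt)

    # 3. Perform native 3D editing conditioned on the input model
    #    and the edited view (inversion + cached-feature denoising).
    return voxhammer_edit(mesh, edit_mask_3d, edited_image)
```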

Our framework adopts TRELLIS as the base model, which predicts sparse structures in the first, structure (ST) stage and denoises fine-grained structured latents in the second, sparse-latent (SLAT) stage. VoxHammer performs inversion prediction in both the ST and SLAT stages, mapping the textured 3D asset to its terminal noise while caching the latents and key/value tensors at each timestep. It then denoises from the inverted noise and replaces the features of the preserved regions with the corresponding cached latents and key-value tokens, thereby achieving precise and coherent editing in native 3D space.
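The core denoising-with-replacement loop can be sketched as follows. This is a simplified, single-stage view under our own assumptions; the tensor layout, scheduler details, and names such as denoise_step and kv_cache are illustrative, not TRELLIS internals:

```python
import torch

def voxhammer_denoise(model, z_T, cached, keep_mask, cond, timesteps):
    """Denoise from inverted noise z_T, restoring preserved-region features.

    cached[t]  -- dict with the latent and key/value tokens recorded
                  at timestep t during inversion
    keep_mask  -- boolean mask over latent tokens: True = preserved region
    """
    z = z_T
    for t in reversed(timesteps):
        # Attention layers reuse the key/value tokens cached at inversion
        # time, so edited tokens attend to the original context.
        z = model.denoise_step(z, t, cond, kv_cache=cached[t]["kv"])

        # Replace preserved-region latents with the inverted trajectory,
        # ensuring exact reconstruction outside the edit region.
        z = torch.where(keep_mask[:, None], cached[t]["latent"], z)
    return z
```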

Benchmark

Explore our Edit3D-Bench visualization for more details.

Citation

If you find our work useful, please consider citing:

@article{li2025voxhammer,
  title   = {VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space},
  author  = {Li, Lin and Huang, Zehuan and Feng, Haoran and Zhuang, Gengxiong and Chen, Rui and Guo, Chunchao and Sheng, Lu},
  journal = {arXiv preprint arXiv:2508.19247},
  year    = {2025}
}

The website template is borrowed from TRELLIS.