Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions

2023-03-22

Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions

Mar 2023

Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa

https://arxiv.org/abs/2303.12789

https://instruct-nerf2nerf.github.io/            ★★★★★

We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our proposed method is able to edit large-scale, real-world scenes, and is able to accomplish more realistic, targeted edits than prior work.
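The abstract's core idea — repeatedly replacing individual training images with diffusion-edited renders while continuing to optimize the scene — can be sketched as a simple loop. The sketch below is a toy illustration, not the paper's implementation: `edit_with_instructpix2pix`, `render_view`, and `train_nerf_step` are hypothetical stand-ins for the real diffusion edit, NeRF renderer, and NeRF optimizer, replaced here with scalar arithmetic so the control flow is runnable end to end:

```python
import random

def edit_with_instructpix2pix(rendered, original, instruction):
    # Hypothetical stand-in: the real model is conditioned on the current
    # render, the original capture, and the text instruction. Here we
    # simply average the two "images" pixel-wise.
    return [(r + o) / 2 for r, o in zip(rendered, original)]

def render_view(scene, n_pixels):
    # Toy "render": the scene is a single scalar repeated per pixel.
    return [scene] * n_pixels

def train_nerf_step(scene, dataset, lr=0.5):
    # Toy "NeRF update": move the scene scalar toward the mean of the
    # (partially edited) training images.
    target = sum(sum(img) / len(img) for img in dataset) / len(dataset)
    return scene + lr * (target - scene)

def instruct_nerf2nerf(originals, instruction, n_iters=50, seed=0):
    rng = random.Random(seed)
    n_pixels = len(originals[0])
    dataset = [list(img) for img in originals]  # training set, edited in place
    scene = 0.0                                 # toy scene parameter
    for _ in range(n_iters):
        # 1) pick a view and swap its training image for an edited render
        i = rng.randrange(len(dataset))
        rendered = render_view(scene, n_pixels)
        dataset[i] = edit_with_instructpix2pix(rendered, originals[i], instruction)
        # 2) keep optimizing the scene against the updated dataset
        scene = train_nerf_step(scene, dataset)
    return scene

scene = instruct_nerf2nerf([[1.0, 1.0], [1.0, 1.0]], "make it a painting")
print(round(scene, 3))
```

Because each edited image is conditioned on both the current render and the original capture, the training set and the 3D scene pull toward a consistent edited state rather than diverging, which is the point of the iterative dataset-update scheme.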
