Learning Naturally Aggregated Appearance for Efficient 3D Editing
CoRR (2023)
Abstract
Neural radiance fields, which represent a 3D scene as a color field and a
density field, have demonstrated great progress in novel view synthesis yet are
unfavorable for editing due to their implicitness. In view of such a deficiency,
we propose to replace the color field with an explicit 2D appearance
aggregation, also called a canonical image, with which users can easily customize
their 3D editing via 2D image processing. To avoid the distortion effect and
facilitate convenient editing, we complement the canonical image with a
projection field that maps 3D points onto 2D pixels for texture lookup. This
field is carefully initialized with a pseudo canonical camera model and
optimized with offset regularity to ensure the naturalness of the aggregated
appearance. Extensive experimental results on three datasets suggest that our
representation, dubbed AGAP, well supports various ways of 3D editing (e.g.,
stylization, interactive drawing, and content extraction) with no need for
re-optimization in each case, demonstrating its generalizability and
efficiency. The project page is available at https://felixcheng97.github.io/AGAP/.
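
To make the lookup mechanism concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a projection field maps 3D points to UV coordinates in a learnable canonical image, where color is fetched by texture lookup, and the predicted offsets are penalized ("offset regularity"). The module names, the zero-initialized offset MLP, and the stand-in pseudo-camera projection are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionField(nn.Module):
    """Maps 3D points to UV coordinates in the canonical image.

    Mirrors the abstract's pseudo canonical camera initialization: the
    network predicts a small offset added to the camera projection, and
    the offset is regularized to keep the aggregated appearance natural.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.offset_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )
        # Zero-init the last layer so early lookups match the pseudo camera.
        nn.init.zeros_(self.offset_mlp[-1].weight)
        nn.init.zeros_(self.offset_mlp[-1].bias)

    def forward(self, xyz, pseudo_uv):
        # pseudo_uv: projection of xyz by an assumed canonical camera, in [-1, 1]
        offset = self.offset_mlp(xyz)
        return pseudo_uv + offset, offset  # offset returned for regularization

def lookup_color(canonical_image, uv):
    # canonical_image: (1, 3, H, W) learnable texture; uv: (N, 2) in [-1, 1]
    grid = uv.view(1, -1, 1, 2)
    color = F.grid_sample(canonical_image, grid, align_corners=True)
    return color.view(3, -1).t()  # (N, 3) per-point RGB

# Usage: sample points along rays, look up color, penalize large offsets.
points = torch.randn(1024, 3)
pseudo_uv = torch.tanh(points[:, :2])           # stand-in for a real camera projection
field = ProjectionField()
canvas = nn.Parameter(torch.rand(1, 3, 512, 512))  # canonical image, edited in 2D
uv, offset = field(points, pseudo_uv)
rgb = lookup_color(canvas, uv)
reg_loss = offset.pow(2).mean()                 # "offset regularity" term
```

Under this reading, 2D edits applied to `canvas` propagate to every rendered view through the fixed lookup, which is why no per-edit re-optimization is needed.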