WonderWorld: Interactive 3D Scene Generation from a Single Image
CoRR (2024)
Abstract
We present WonderWorld, a novel framework for interactive 3D scene
extrapolation that enables users to explore and shape virtual environments
based on a single input image and user-specified text. While significant
improvements have been made to the visual quality of scene generation, existing
methods run offline, taking tens of minutes to hours to generate a scene.
By leveraging Fast Gaussian Surfels and a guided diffusion-based depth
estimation method, WonderWorld generates geometrically consistent extrapolation
while significantly reducing computational time. Our framework generates
connected and diverse 3D scenes in less than 10 seconds on a single A6000 GPU,
enabling real-time user interaction and exploration. We demonstrate the
potential of WonderWorld for applications in virtual reality, gaming, and
creative design, where users can quickly generate and navigate immersive,
potentially infinite virtual worlds from a single image. Our approach
represents a significant advancement in interactive 3D scene generation,
opening up new possibilities for user-driven content creation and exploration
in virtual environments. We will release full code and software for
reproducibility. Project website: https://WonderWorld-2024.github.io/