Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos
CoRR (2024)
Abstract
Modern 3D engines and graphics pipelines require meshes as a memory-efficient
representation, which allows efficient rendering, geometry processing, texture
editing, and many other downstream operations. However, it is still highly
difficult to obtain meshes with high-quality structure and detail from
monocular visual observations. The problem becomes even more challenging for
dynamic scenes and objects. To this end, we introduce Dynamic Gaussians Mesh
(DG-Mesh), a framework to reconstruct a high-fidelity and time-consistent mesh
given a single monocular video. Our work leverages the recent advancement in 3D
Gaussian Splatting to construct the mesh sequence with temporal consistency
from a video. Building on top of this representation, DG-Mesh recovers
high-quality meshes from the Gaussian points and can track the mesh vertices
over time, which enables applications such as texture editing on dynamic
objects. We introduce Gaussian-Mesh Anchoring, which encourages evenly
distributed Gaussians and results in better mesh reconstruction through
mesh-guided densification and pruning of the deformed Gaussians. By applying
cycle-consistent deformation between the canonical and the deformed space, we
can project the anchored Gaussians back to the canonical space and optimize the
Gaussians across all time frames. In evaluations on different datasets,
DG-Mesh provides significantly better mesh reconstruction and rendering than
baselines.
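
The cycle-consistent deformation described in the abstract can be illustrated with a small optimization sketch. The PyTorch snippet below is a minimal illustration under assumed design choices, not the authors' implementation: the networks `forward_deform` and `backward_deform`, their MLP architecture, and the round-trip loss are hypothetical stand-ins for mapping Gaussian centers between the canonical and deformed spaces.

```python
# Hypothetical sketch (not the DG-Mesh code): cycle-consistent deformation of
# Gaussian centers between canonical and deformed space.
import torch
import torch.nn as nn

class DeformNet(nn.Module):
    """Tiny MLP mapping a 3D point plus a scalar time to a 3D offset."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # x: (N, 3) Gaussian centers, t: scalar time in [0, 1]
        t_col = torch.full_like(x[:, :1], t)
        return x + self.net(torch.cat([x, t_col], dim=-1))

forward_deform = DeformNet()   # canonical -> deformed space at time t
backward_deform = DeformNet()  # deformed -> canonical space

def cycle_loss(canonical_xyz, t):
    """Deform canonical Gaussians to time t, map them back, and penalize the
    round-trip error so anchored Gaussians can be projected back to (and
    optimized in) the canonical space."""
    deformed = forward_deform(canonical_xyz, t)
    recovered = backward_deform(deformed, t)
    return (recovered - canonical_xyz).pow(2).sum(-1).mean()

# Example: 1000 canonical Gaussian centers evaluated at frame t = 0.3.
xyz = torch.randn(1000, 3)
loss = cycle_loss(xyz, 0.3)
loss.backward()
```

In this reading, the backward mapping is what allows Gaussians that were densified or pruned in the deformed space (by the mesh-guided anchoring) to contribute gradients to a single canonical set shared across all time frames.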