SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes

2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Abstract
We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans. Specifically, we devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies. Training such a model is challenging, as datasets of textured 3D meshes for humans are limited in size and accessibility. Our key observation is that there exist medium-sized 3D scan datasets like CAPE, as well as large-scale 2D image datasets of clothed humans, and that multiple appearances can be mapped to a single geometry. To effectively learn from the two data modalities, we propose an unpaired learning procedure for pose-dependent clothed and textured human meshes. Specifically, we learn a pose-dependent geometry space from 3D scan data, represented as per-vertex displacements w.r.t. the SMPL model. Next, we train a geometry-conditioned texture generator in an unsupervised way using the 2D image data, conditioning it on intermediate activations of the learned geometry model. To alleviate entanglement between pose and clothing type, and between pose and clothing appearance, we condition both generators with attribute labels: clothing types for the geometry generator and clothing colors for the texture generator. We automatically generate these conditioning labels for the 2D images using the visual question answering model BLIP and CLIP. We validate our method on the SCULPT dataset and compare it to state-of-the-art 3D generative models for clothed human bodies. Our code and data can be found at https://sculpt.is.tue.mpg.de.
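
For intuition, the following is a minimal sketch of how per-image conditioning labels (e.g., clothing color) could be derived from 2D images with zero-shot CLIP. The prompt list, checkpoint, and helper function name are illustrative assumptions, not the authors' exact BLIP/CLIP labeling pipeline.

```python
# Hedged sketch: zero-shot clothing-color labeling of 2D human images with CLIP.
# Prompt set, checkpoint, and function name are assumptions for illustration only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical attribute vocabulary used as a conditioning label for the texture generator.
colors = ["red", "blue", "green", "black", "white", "grey"]
prompts = [f"a photo of a person wearing {c} clothing" for c in colors]

def label_clothing_color(image_path: str) -> str:
    """Return the most likely clothing-color label for one image."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(prompts))
    probs = logits.softmax(dim=-1)[0]
    return colors[int(probs.argmax())]

# Example usage (the path is a placeholder):
# print(label_clothing_color("person_0001.jpg"))
```

The resulting label would then be fed to the generator as a discrete conditioning attribute, alongside analogous clothing-type labels for the geometry branch.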
Keywords
Digital avatars, Generative modelling of digital humans, GAN, Generative model for clothing geometry and appearance, Generative texture, Human images dataset