VICtoR: Learning Hierarchical Vision-Instruction Correlation Rewards for Long-horizon Manipulation
CoRR (2024)
Abstract
We study reward models for long-horizon manipulation tasks by learning from
action-free videos and language instructions, which we term the
visual-instruction correlation (VIC) problem. Recent advancements in
cross-modality modeling have highlighted the potential of reward modeling
through visual and language correlations. However, existing VIC methods face
challenges in learning rewards for long-horizon tasks due to their lack of
sub-stage awareness, difficulty in modeling task complexities, and inadequate
object state estimation. To address these challenges, we introduce VICtoR, a
novel hierarchical VIC reward model capable of providing effective reward
signals for long-horizon manipulation tasks. VICtoR precisely assesses task
progress at multiple levels through a novel stage detector and motion progress
evaluator, offering insightful guidance that helps agents learn the task
effectively. To validate the effectiveness of VICtoR, we conducted extensive
experiments in both simulated and real-world environments. The results suggest
that VICtoR outperforms the best existing VIC methods, achieving a 43%
improvement in success rates for long-horizon tasks.