Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts
ICML (2024)
Abstract
Conventional wisdom suggests parameter-efficient fine-tuning of foundation models as the state-of-the-art method for transfer learning in vision, replacing the rich literature of alternatives such as meta-learning. In trying to harness the best of both worlds, meta-tuning introduces a subsequent optimization stage of foundation models but has so far only shown limited success and crucially tends to underperform on out-of-domain (OOD) tasks. In this paper, we introduce Sparse MetA-Tuning (SMAT), a method inspired by sparse mixture-of-experts approaches and trained to isolate subsets of pre-trained parameters automatically for meta-tuning on each task. SMAT successfully overcomes OOD sensitivity and delivers on the promise of enhancing the transfer abilities of vision foundation models beyond parameter-efficient fine-tuning. We establish new state-of-the-art results on a challenging combination of Meta-Dataset augmented with additional OOD tasks in both zero-shot and gradient-based adaptation settings. In addition, we provide a thorough analysis of the superiority of learned over hand-designed sparsity patterns for sparse expert methods and the pivotal importance of the sparsity level in balancing between in-domain and out-of-domain generalization. Our code is publicly available.
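To make the core idea concrete, below is a minimal sketch of sparse interpolation between frozen pre-trained weights and a meta-tuned update, gated by a learned mask, in a PyTorch-style setting. The class name `SparseInterpolatedLinear`, the top-k masking rule, and the straight-through gradient trick are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseInterpolatedLinear(nn.Module):
    """Interpolates between frozen pre-trained weights and a meta-tuned copy,
    restricted to a learned sparse subset of parameters (a hypothetical sketch)."""

    def __init__(self, pretrained: nn.Linear, sparsity: float = 0.9):
        super().__init__()
        # Frozen pre-trained weights.
        self.register_buffer("w_pre", pretrained.weight.detach().clone())
        # Meta-tuned update, applied only where the learned mask is active.
        self.w_delta = nn.Parameter(torch.zeros_like(self.w_pre))
        # Learnable scores that determine which parameters are selected for meta-tuning.
        self.mask_logits = nn.Parameter(torch.zeros_like(self.w_pre))
        self.sparsity = sparsity
        self.bias = pretrained.bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Keep only the top (1 - sparsity) fraction of entries; a straight-through
        # estimator lets gradients reach the mask logits despite the hard threshold.
        k = max(1, int((1.0 - self.sparsity) * self.mask_logits.numel()))
        threshold = torch.topk(self.mask_logits.flatten(), k).values.min()
        hard_mask = (self.mask_logits >= threshold).float()
        soft_mask = torch.sigmoid(self.mask_logits)
        mask = hard_mask + (soft_mask - soft_mask.detach())
        # Sparse interpolation of pre-trained and meta-tuned parameters.
        weight = self.w_pre + mask * self.w_delta
        return F.linear(x, weight, self.bias)
```

In this sketch, the sparsity level is a fixed hyperparameter that trades off how much of the pre-trained model is preserved (helping OOD generalization) against how much capacity is meta-tuned (helping in-domain performance), mirroring the balance discussed in the abstract.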
Keywords
Model Reduction, Signal Processing