BERTs are Generative In-Context Learners
CoRR (2024)
Abstract
This paper explores the in-context learning capabilities of masked language
models, challenging the common view that this ability does not 'emerge' in
them. We present an embarrassingly simple inference technique that enables
DeBERTa to operate as a generative model without any additional training. Our
findings demonstrate that DeBERTa can match and even surpass GPT-3, its
contemporary that famously introduced the paradigm of in-context learning. The
comparative analysis reveals that the masked and causal language models behave
very differently, as they clearly outperform each other on different categories
of tasks. This suggests that there is great potential for a hybrid training
approach that takes advantage of the strengths of both training objectives.
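The following is a minimal sketch of one way a masked language model such as DeBERTa could be used generatively without extra training: append a [MASK] token after the prompt, predict it, keep the predicted token, and repeat. This is not necessarily the authors' exact inference technique; the checkpoint name, greedy decoding, and token-handling details are assumptions for illustration.

```python
# Hypothetical sketch: iterative next-token generation with a masked LM.
# The checkpoint and decoding choices below are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "microsoft/deberta-v3-large"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def generate(prompt: str, max_new_tokens: int = 20) -> str:
    # Tokenize the prompt and drop the trailing [SEP] so we can extend it.
    ids = tokenizer(prompt, return_tensors="pt")["input_ids"][0, :-1]
    for _ in range(max_new_tokens):
        # Append [MASK] and [SEP]; the model fills the mask with the "next" token.
        batch = torch.cat(
            [ids, torch.tensor([tokenizer.mask_token_id, tokenizer.sep_token_id])]
        ).unsqueeze(0)
        with torch.no_grad():
            logits = model(input_ids=batch).logits
        next_id = logits[0, -2].argmax()  # greedy pick at the [MASK] position
        ids = torch.cat([ids, next_id.unsqueeze(0)])
    return tokenizer.decode(ids, skip_special_tokens=True)

print(generate("The capital of France is"))
```

A sketch like this makes the abstract's claim concrete: generation reduces to repeated masked-token prediction, so no additional training of the masked model is needed.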