LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
CoRR (2024)
Abstract
This work elicits LLMs' inherent ability to handle long contexts without
fine-tuning. The limited length of training sequences may restrict the
application of Large Language Models (LLMs) to long input sequences at
inference time. In this work, we argue that existing LLMs themselves have
inherent capabilities for handling long contexts. Based on this argument, we
suggest that LLMs extend their own context window to fully exploit this
inherent ability. We propose Self-Extend to stimulate LLMs' long-context
handling potential. The basic idea is to construct bi-level attention
information: the group level and the neighbor level. Both levels are computed
with the original model's self-attention, which means the proposed method does
not require any training. With only four lines of code modification, the
proposed method can effortlessly extend existing LLMs' context window without
any fine-tuning. We conduct comprehensive experiments, and the results show
that the proposed method can effectively extend the length of existing LLMs'
context window.
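
The abstract only outlines the bi-level attention idea, so below is a minimal sketch of how a grouped/neighbor relative-position mapping could look. The function name `self_extend_rel_pos` and the particular group size and neighbor-window values are illustrative assumptions, not the paper's exact implementation.

```python
def self_extend_rel_pos(q_pos: int, k_pos: int,
                        group_size: int = 8,
                        neighbor_window: int = 512) -> int:
    """Bi-level relative position for attention, in the spirit of the
    group/neighbor scheme described in the abstract.

    Tokens inside the neighbor window keep their exact relative position
    (neighbor level); more distant tokens fall back to a coarse,
    floor-divided position (group level), shifted so the two levels meet
    at the window boundary. All constants here are hypothetical.
    """
    rel = q_pos - k_pos
    if rel <= neighbor_window:
        # Neighbor level: ordinary relative position, as in the base model.
        return rel
    # Group level: floor-divide absolute positions, then shift so the
    # grouped positions continue smoothly from the neighbor window.
    shift = neighbor_window - neighbor_window // group_size
    return q_pos // group_size - k_pos // group_size + shift


if __name__ == "__main__":
    # Nearby token: exact relative position is preserved.
    print(self_extend_rel_pos(q_pos=1000, k_pos=900))   # -> 100
    # Distant token: grouped position stays far below the raw distance of 4900.
    print(self_extend_rel_pos(q_pos=5000, k_pos=100))   # -> 1061
```

The intuition, under these assumptions, is that distant positions are compressed by the floor division, so the largest position the model ever sees stays within the range it was trained on, while nearby tokens still get exact positions.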