Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models

Jerry Yao-Chieh Hu, Maojiang Su, En-Jui Kuo, Zhao Song, Han Liu

CoRR (2024)

Abstract
We study the computational limits of Low-Rank Adaptation (LoRA) updates for fine-tuning transformer-based models using fine-grained complexity theory. Our key observation is that the existence of low-rank decompositions within the gradient computation of LoRA adaptation leads to possible algorithmic speedup. This allows us to (i) identify a phase transition behavior and (ii) prove the existence of nearly linear algorithms by controlling the LoRA update computation term by term, assuming the Strong Exponential Time Hypothesis (SETH). For the former, we identify a sharp transition in the efficiency of all possible rank-r LoRA update algorithms for transformers, based on specific norms resulting from the multiplications of the input sequence 𝐗, the pretrained weights 𝐖^⋆, and the adapter matrices α𝐁𝐀 / r. Specifically, we derive a shared upper bound threshold for such norms and show that efficient (sub-quadratic) approximation algorithms for LoRA exist only below this threshold. For the latter, we prove the existence of nearly linear approximation algorithms for LoRA adaptation by utilizing the hierarchical low-rank structures of LoRA gradients and approximating the gradients with a series of chained low-rank approximations. To showcase our theory, we consider two practical scenarios: partial adaptation (e.g., only 𝐖_V and 𝐖_Q) and full adaptation (𝐖_Q, 𝐖_V, and 𝐖_K) of the weights in attention heads.
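To make the objects in the abstract concrete, the following is a minimal sketch (in NumPy, with illustrative shapes n, d, r and scaling alpha chosen here, not taken from the paper) of a rank-r LoRA update 𝐖^⋆ + α𝐁𝐀 / r applied to an input sequence 𝐗, showing how the low-rank factorization yields the cheaper multiplication that the speedup argument relies on, and how one might probe a norm of the product 𝐗(𝐖^⋆ + α𝐁𝐀 / r) of the kind the threshold result is stated in terms of.

import numpy as np

# Illustrative sizes (assumptions, not the paper's setup):
# n = sequence length, d = model width, r = LoRA rank, alpha = LoRA scaling.
rng = np.random.default_rng(0)
n, d, r, alpha = 128, 64, 8, 16.0

X = rng.standard_normal((n, d))       # input sequence X
W_star = rng.standard_normal((d, d))  # frozen pretrained weight (e.g., W_Q or W_V)
B = rng.standard_normal((d, r))       # trainable LoRA factor B (d x r)
A = rng.standard_normal((r, d))       # trainable LoRA factor A (r x d)

# Adapted weight: W_star + (alpha / r) * B A; only B and A are updated.
W_adapted = W_star + (alpha / r) * B @ A

# Exploiting the low-rank factorization avoids forming B A explicitly:
# X @ (B @ A) costs O(n d r) instead of O(n d^2) when r << d.
out_naive = X @ W_adapted
out_factored = X @ W_star + (alpha / r) * (X @ B) @ A
assert np.allclose(out_naive, out_factored)

# A proxy for the norms the phase-transition threshold refers to:
# the entrywise magnitude of the product of X with the adapted weights.
print(np.abs(X @ W_adapted).max())

The assertion checks that the factored evaluation matches the naive one; the paper's results concern when such low-rank structure (propagated into the gradient computation) admits sub-quadratic or nearly linear approximation algorithms, depending on whether these norms fall below the derived threshold.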