Mechanism Design for LLM Fine-tuning with Multiple Reward Models
CoRR (2024)
Abstract
Recent research on fine-tuning large language models (LLMs) through the
aggregation of multiple preferences has attracted considerable attention.
However, the existing literature predominantly focuses on the empirical
performance of aggregation algorithms, while neglecting agents' underlying
incentives to misreport their preferences. In this paper, we
formalize this as a multi-parameter mechanism design problem, where an LLM
provider designs both training and payment rules to achieve specific objectives
and promote the truthful reporting of preferences. First, we establish the
necessity of a payment scheme by demonstrating that without payments,
truth-telling is a strictly dominated strategy under a wide range of training
rules. We then introduce the affine maximizer payment scheme for the
social-welfare-maximizing training rules widely used in practice, which
ensures both dominant-strategy incentive compatibility (DSIC) and individual
rationality (IR). Furthermore, we prove that, under mild conditions, any other
payment rule that implements these training rules in DSIC can be converted
to the affine maximizer payment by adding a term independent of the agents'
own reports. We also show that the mechanism satisfies approximate DSIC when
its input is a biased version of the reported preferences, demonstrating its
robustness in real-world applications.
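
To make the payment scheme concrete: an affine maximizer is a weighted generalization of the VCG mechanism. The sketch below is only an illustration of that general idea over a finite set of candidate outcomes, not the paper's actual construction (which concerns training rules for LLMs); the function name, the discretized outcome space, and the variable names (weights, boost) are all assumptions introduced here for exposition.

import numpy as np

def affine_maximizer(valuations, weights=None, boost=None):
    # valuations: (n_agents, n_outcomes) array of reported values v_i(x)
    # weights:    per-agent weights w_i (all ones recovers plain VCG)
    # boost:      per-outcome term b(x)  (all zeros recovers plain VCG)
    v = np.asarray(valuations, dtype=float)
    n, m = v.shape
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    b = np.zeros(m) if boost is None else np.asarray(boost, dtype=float)

    # Training rule: choose the outcome maximizing affine social welfare.
    welfare = w @ v + b
    x_star = int(np.argmax(welfare))

    # Payment rule: each agent pays a rescaled version of the externality
    # it imposes on the others, which makes truthful reporting a dominant
    # strategy (DSIC).
    payments = np.empty(n)
    for i in range(n):
        others = welfare - w[i] * v[i]  # affine welfare excluding agent i
        payments[i] = (others.max() - others[x_star]) / w[i]
    return x_star, payments

# Example: two agents, three candidate outcomes.
v = [[1.0, 0.2, 0.0],
     [0.0, 0.9, 0.3]]
x_star, payments = affine_maximizer(v)
print(x_star, payments)  # outcome 1 wins; agent 1 pays its externality of 0.8

With unit weights and zero boost this reduces to the classical VCG payment; the paper's contribution concerns the analogous construction when the "outcome" is a fine-tuned LLM and the reports are agents' preferences over model behavior.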