Distilling Text Style Transfer With Self-Explanation From LLMs
CoRR (2024)
Abstract
Text Style Transfer (TST) seeks to alter the style of text while retaining
its core content. Given the constraints of limited parallel datasets for TST,
we propose CoTeX, a framework that leverages large language models (LLMs)
alongside chain-of-thought (CoT) prompting to facilitate TST. CoTeX distills
the complex rewriting and reasoning capabilities of LLMs into more streamlined
models capable of working with both non-parallel and parallel data. Through
experimentation across four TST datasets, CoTeX is shown to surpass traditional
supervised fine-tuning and knowledge distillation methods, particularly in
low-resource settings. We conduct a comprehensive evaluation, comparing CoTeX
against current unsupervised and supervised methods, in-context learning (ICL) techniques,
and instruction-tuned LLMs. Furthermore, CoTeX distinguishes itself by offering
transparent explanations for its style transfer process.
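
The abstract only sketches the method at a high level, but the core distillation loop it describes can be illustrated concretely: a teacher LLM is prompted with chain-of-thought to explain and perform a style rewrite, and the resulting (rationale, rewrite) pairs become supervision for a smaller student model. The sketch below is a minimal, hypothetical rendering of that data-construction step based solely on the abstract; the prompt wording, the `teacher_generate` callable, and the `Rewritten:` delimiter are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of a CoTeX-style distillation data pipeline (assumptions noted above).
from dataclasses import dataclass
from typing import Callable

# Hypothetical CoT prompt: ask the teacher LLM to reason about the style
# markers before emitting the rewrite after a fixed delimiter.
COT_TEMPLATE = (
    "Rewrite the sentence below in a {style} style.\n"
    "First explain, step by step, which words or phrases carry the current "
    "style and how you will change them. Then give the rewritten sentence "
    "after the marker 'Rewritten:'.\n\n"
    "Sentence: {source}\n"
    "Reasoning:"
)

@dataclass
class DistillExample:
    source: str     # original sentence (non-parallel data suffices)
    rationale: str  # teacher's step-by-step style explanation
    target: str     # teacher's rewritten sentence

def build_distillation_set(
    sources: list[str],
    style: str,
    teacher_generate: Callable[[str], str],  # hypothetical wrapper around an LLM API
) -> list[DistillExample]:
    """Query the teacher with a CoT prompt and split each completion into
    a rationale and a final rewrite using the assumed 'Rewritten:' marker."""
    examples = []
    for source in sources:
        completion = teacher_generate(
            COT_TEMPLATE.format(style=style, source=source)
        )
        rationale, _, target = completion.partition("Rewritten:")
        if target.strip():  # keep only well-formed completions
            examples.append(
                DistillExample(source, rationale.strip(), target.strip())
            )
    return examples
```

A smaller student model would then be fine-tuned to map `source` to the concatenated `rationale` and `target`, which is one plausible way the streamlined model could learn to emit a transparent explanation alongside the style transfer, as the abstract claims.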