Discussion Paper: Exploiting LLMs for Scam Automation: A Looming Threat

Gilad Gressel, Rahul Pankajakshan, Yisroel Mirsky

Proceedings of the 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes (ACM WDC 2024), 2024

Abstract
Large Language Models (LLMs) have enabled powerful new AI capabilities, but their potential misuse for automating scams and fraud poses a serious emerging threat. In this paper, we investigate how LLMs combined with speech synthesis and speech recognition could be leveraged to build automated systems for executing phone scams at scale. Our research reveals that current publicly accessible language models can, through advanced prompt engineering, mimic authorities and seek personal financial information, bypassing existing safeguards. As these models become more widely available, they significantly lower the barriers to executing complex AI-driven scams, including potential future threats such as voice cloning for virtual kidnapping. Existing defences, such as passive detection, are not well suited to identifying synthetic voice over compressed channels. Therefore, we urgently call for multi-disciplinary research into user education, media forensics, regulatory measures, and AI safety enhancements to combat this growing risk. Without proactive measures, the rise in AI-enabled fraud could undermine consumer trust in the digital and economic landscape, emphasizing the need for a comprehensive strategy to prevent automated fraud.
Keywords
LLM, AI Security, Vishing, Deepfakes