Exploring the use of a Large Language Model for data extraction in systematic reviews: a rapid feasibility study
CoRR (2024)
Abstract
This paper describes a rapid feasibility study of using GPT-4, a large
language model (LLM), to (semi)automate data extraction in systematic reviews.
Despite the recent surge of interest in LLMs, there is still a lack of
understanding of how to design LLM-based automation tools and how to robustly
evaluate their performance. During the 2023 Evidence Synthesis Hackathon we
conducted two feasibility studies. In the first, we automatically extracted
study characteristics from human clinical, animal, and social science studies,
using two studies from each category for prompt development and ten for
evaluation. In the second, we used the LLM to predict the Participants, Interventions,
Controls and Outcomes (PICOs) labelled within 100 abstracts in the EBM-NLP
dataset. Overall, results indicated an accuracy of around 80%, with some
variability between domains (82% for human clinical studies, with lower accuracy
for studies of human social sciences). Causal inference methods and study
design were the data extraction items with the most errors. In the PICO study,
participants and intervention/control showed high accuracy (>80%); outcomes
were more challenging. Evaluation was done manually; scoring methods such as
BLEU and ROUGE showed limited value. We observed variability in the LLM's
predictions and changes in response quality. This paper presents a template for
future evaluations of LLMs in the context of data extraction for systematic
review automation. Our results show that there might be value in using LLMs,
for example as second or third reviewers. However, caution is advised when
integrating models such as GPT-4 into tools. Further research on stability and
reliability in practical settings is warranted for each type of data that is
processed by the LLM.
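
The prompting approach described in the abstract can be pictured with a short
sketch. The snippet below is a hypothetical illustration of asking GPT-4 for
PICO elements via the OpenAI Python client; the prompt wording, the JSON output
format, and the function names are illustrative assumptions, not the authors'
actual protocol.

```python
# Hypothetical sketch of LLM-based PICO extraction, in the spirit of the study
# above. Prompt wording and output schema are assumptions for illustration.
from openai import OpenAI  # assumes the `openai` v1 SDK and an API key

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = """You are assisting with data extraction for a systematic review.
From the abstract below, extract the following elements as JSON with the keys
"participants", "interventions", "controls", and "outcomes". If an element is
not reported, use null.

Abstract:
{abstract}
"""

def extract_picos(abstract: str, model: str = "gpt-4") -> str:
    """Ask the model for the PICO elements of a single abstract."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduces, but does not eliminate, run-to-run variability
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(abstract=abstract)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    demo = ("120 adults with type 2 diabetes were randomised to metformin "
            "or placebo; the primary outcome was change in HbA1c at 12 weeks.")
    print(extract_picos(demo))
```

Even with the temperature set to 0, repeated calls can return differently
worded extractions, which is the stability concern the abstract raises.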
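The abstract also notes that automatic overlap metrics such as BLEU and ROUGE
were of limited value compared with manual evaluation. A minimal sketch of that
kind of scoring, assuming the `nltk` and `rouge-score` packages and invented
reference/prediction strings:

```python
# Surface-overlap scoring of a model extraction against a human reference.
# The example strings are invented; only the metric APIs are real.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "adults with type 2 diabetes"           # human-annotated span
prediction = "120 adults who have type 2 diabetes"  # hypothetical LLM output

# BLEU over tokenised strings; smoothing avoids zero scores on short spans.
bleu = sentence_bleu(
    [reference.split()], prediction.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-L rewards the longest common subsequence rather than exact n-grams.
rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = rouge.score(reference, prediction)["rougeL"].fmeasure

print(f"BLEU: {bleu:.2f}  ROUGE-L F1: {rouge_l:.2f}")
# Both scores penalise the semantically harmless rewording, which is one
# reason surface-overlap metrics can understate extraction quality.
```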