New journal article on Screening Articles for Systematic Reviews with ChatGPT

Our article “Screening Articles for Systematic Reviews with ChatGPT”, co-authored with Eugene Syriani and Gauransh Kumar, has been accepted for publication in the Journal of Computer Languages (COLA).

This is the first thorough evaluation of ChatGPT’s ability to support one of the most labor-intensive and error-prone activities in empirical research. We compare ChatGPT’s performance with that of five other classifiers, across six prompting strategies, using six metrics and five real large-scale datasets from systematic literature reviews (SLRs) and systematic mapping studies (SMSs).

Takeaways:

  • 🎉 ChatGPT outperforms traditional classifiers (up to 82% accuracy) 🎉, and it does so
  • 🎉 without any prior training 🎉; however,
  • ☝️ human intelligence is still required ☝️ (expect this to be reflected in the next generation of SR tools).

Preprint: available.

Abstract.

Systematic reviews (SRs) provide valuable evidence for guiding new research directions. However, the manual effort involved in selecting articles for inclusion in an SR is error-prone and time-consuming. While screening articles has traditionally been considered challenging to automate, the advent of large language models offers new possibilities. In this paper, we discuss the effect of using ChatGPT on the SR process. In particular, we investigate the effectiveness of different prompt strategies for automating the article screening process using five real SR datasets.
Our results show that ChatGPT can reach up to 82% accuracy. The best-performing prompts specify exclusion criteria and avoid negative shots. However, prompts should be adapted to the characteristics of each corpus.
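To make the prompt finding concrete, here is a minimal zero-shot screening sketch in Python. The exclusion criteria, model name, and answer format below are illustrative assumptions, not taken from the paper; the sketch only shows the shape of a prompt that spells out exclusion criteria without providing negative examples.

```python
# Minimal sketch: zero-shot article screening with explicit exclusion criteria.
# The criteria, model name, and output format are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical exclusion criteria; a real SR would use its protocol's criteria.
EXCLUSION_CRITERIA = [
    "the article is not written in English",
    "the article does not report an empirical study",
    "the article is a poster, abstract, or short paper",
]

def screen_article(title: str, abstract: str) -> str:
    """Ask the model to include or exclude a single article, zero-shot."""
    criteria = "\n".join(f"- {c}" for c in EXCLUSION_CRITERIA)
    prompt = (
        "You are screening articles for a systematic review.\n"
        "Exclude the article if any of the following criteria apply:\n"
        f"{criteria}\n\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n\n"
        "Answer with exactly one word: INCLUDE or EXCLUDE."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the decision as deterministic as possible
    )
    return response.choices[0].message.content.strip()
```

Note that the prompt states only exclusion criteria and includes no labeled negative examples (“negative shots”); per the abstract, prompts of this shape tended to perform best, though they should still be tuned to the corpus at hand.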
