Poster Presentation Clinical Oncology Society of Australia Annual Scientific Meeting 2024

A Chat-tastrophe: Cross-sectional evaluation of customisable cancer GPTs accessible via OpenAI’s ChatGPT platform   (#376)

Bianca B. Chu 1 , Bradley D. Menz 1 , Natansh D. Modi 1 , Michael J. Sorich 1 , Ashley M. Hopkins 1
  1. Flinders University - College of Medicine and Public Health, Seaton, SA, Australia

Objectives 
In May 2024, OpenAI enabled public access to a novel feature of ChatGPT called “GPTs”: customisable, task-specific versions of ChatGPT. While these GPTs have the potential to support patient education, their ease of creation and public availability raise concerns about their accuracy and safety. Therefore, this study aimed to evaluate the GPTs currently available to patients and clinicians for cancer-specific applications, focusing on their intended use, usage frequency, and endorsement by regulatory or professional organisations.

Methods 
OpenAI’s GPT store was accessed within the ChatGPT platform, and a search for available cancer-specific GPTs was conducted using the terms “oncology”, “cancer”, “chemotherapy”, “lung cancer”, “breast cancer”, “bowel cancer”, and “colorectal cancer”. Two researchers reviewed the descriptions provided by the GPT creators to determine each GPT’s intended use, its number of conversations (i.e., individual uses), and whether it was endorsed by any regulatory or professional organisation. Identified GPTs were then classified into two categories, patient-specific tools or clinician-specific tools, and results were presented using descriptive statistics.

Results  
In August 2024, a total of 202 cancer-specific GPTs were identified, comprising 98 (49%) patient-specific and 104 (51%) clinician-specific tools. Collectively, these GPTs had recorded over 5000 conversations, with two GPTs each exceeding 500 conversations. Importantly, none of the identified cancer-specific GPTs had endorsements or verifications from regulatory or professional organisations regarding the accuracy of the medical information they provide.

Conclusions 
This study highlights the growing proliferation of unregulated, publicly accessible GPT models designed to provide cancer-related information to both patients and clinicians. While such tools may offer valuable supplementary support, their ease of creation and the risk of disseminating unverified health information underscore the need for further evaluation and regulatory intervention.