ChatGPT was correct on 26.7 percent of open-ended and 28.2 percent of multiple-choice questions
By Elana Gotkine HealthDay Reporter
TUESDAY, June 27, 2023 (HealthDay News) — ChatGPT performs poorly on the 2022 American Urological Association Self-Assessment Study Program, according to a study published online June 5 in Urology Practice.
Linda My Huynh, from the University of Nebraska Medical Center in Omaha, and colleagues assessed the use of ChatGPT as an educational adjunct for the American Urological Association Self-Assessment Study Program. A total of 135 questions from the 2022 Self-Assessment Study Program exam were screened. ChatGPT’s output was coded as correct, incorrect, or indeterminate; responses were regenerated up to two times if indeterminate.
The researchers found that ChatGPT was correct on 26.7 percent of open-ended and 28.2 percent of multiple-choice questions (36 and 38 questions, respectively). Indeterminate responses were generated for 29.6 and 3.0 percent of open-ended and multiple-choice questions, respectively. Of the correct responses, 66.7 and 94.7 percent came on the initial output; 22.2 and 2.6 percent on the second output; and 11.1 and 2.6 percent on the final output, respectively. Indeterminate responses decreased with regeneration, but the proportion of correct responses did not increase. ChatGPT provided consistent justifications for incorrect answers on both open-ended and multiple-choice questions, and its explanations remained concordant with the selected answer whether that answer was correct or incorrect.
“While ChatGPT provided explanations that were well written, easy to read, and concordant with the selected answer choice, these explanations were also lengthy and lacked mechanistic or pathophysiological justifications,” the authors write. “As is, utilization of ChatGPT in urology has a high likelihood of facilitating medical misinformation for the untrained user.”
Copyright © 2023 HealthDay. All rights reserved.