Comparative Analysis of Large Language Models’ Performance on a Practice Radiography Certification Examination
Description
This study evaluates the educational value of large language models (LLMs) in radiography training by measuring their accuracy on a practice certification examination. By identifying strengths and weaknesses in LLM-generated responses, educators and students can better judge these models' potential as study tools while recognizing the importance of critically evaluating their output. The findings underscore the need to verify AI-generated answers during exam preparation, reinforcing responsible AI integration in radiologic science education.
Publication Date
2025
Recommended Citation
Clark, Kevin R., "Comparative Analysis of Large Language Models’ Performance on a Practice Radiography Certification Examination" (2025). 2025 Education Week Poster Showcase. 7.
https://openworks.mdanderson.org/edwk25/7