%0 Journal Article
%T ChatGPT: Can You Prepare My Patients for [18F]FDG PET/CT and Explain My Reports?
%A Rogasch JMM
%A Metzger G
%A Preisler M
%A Galler M
%A Thiele F
%A Brenner W
%A Feldhaus F
%A Wetz C
%A Amthauer H
%A Furth C
%A Schatka I
%J J Nucl Med
%V 64
%N 12
%D 2023 12 1
%M 37709536
%F 11.082
%R 10.2967/jnumed.123.266114
%X We evaluated whether the artificial intelligence chatbot ChatGPT can adequately answer patient questions related to [18F]FDG PET/CT in common clinical indications before and after scanning. Methods: Thirteen questions regarding [18F]FDG PET/CT were submitted to ChatGPT. ChatGPT was also asked to explain 6 PET/CT reports (lung cancer, Hodgkin lymphoma) and answer 6 follow-up questions (e.g., on tumor stage or recommended treatment). To be rated "useful" or "appropriate," a response had to be adequate by the standards of the nuclear medicine staff. Inconsistency was assessed by regenerating responses. Results: Responses were rated "appropriate" for 92% of 25 tasks and "useful" for 96%. Considerable inconsistencies were found between regenerated responses for 16% of tasks. Responses to 83% of sensitive questions (e.g., staging/treatment options) were rated "empathetic." Conclusion: ChatGPT might adequately substitute for advice given to patients by nuclear medicine staff in the investigated settings. Improving the consistency of ChatGPT would further increase reliability.