{Reference Type}: Journal Article
{Title}: ChatGPT: Can You Prepare My Patients for [18F]FDG PET/CT and Explain My Reports?
{Authors}: Rogasch JMM; Metzger G; Preisler M; Galler M; Thiele F; Brenner W; Feldhaus F; Wetz C; Amthauer H; Furth C; Schatka I
{Journal}: J Nucl Med
{Volume}: 64
{Issue}: 12
{Year}: 2023 Dec 1
{Impact Factor}: 11.082
{DOI}: 10.2967/jnumed.123.266114
{Abstract}: We evaluated whether the artificial intelligence chatbot ChatGPT can adequately answer patient questions related to [18F]FDG PET/CT in common clinical indications before and after scanning. Methods: Thirteen questions regarding [18F]FDG PET/CT were submitted to ChatGPT. ChatGPT was also asked to explain 6 PET/CT reports (lung cancer, Hodgkin lymphoma) and answer 6 follow-up questions (e.g., on tumor stage or recommended treatment). To be rated "useful" or "appropriate," a response had to be adequate by the standards of the nuclear medicine staff. Inconsistency was assessed by regenerating responses. Results: Responses were rated "appropriate" for 92% of 25 tasks and "useful" for 96%. Considerable inconsistencies were found between regenerated responses for 16% of tasks. Responses to 83% of sensitive questions (e.g., on staging or treatment options) were rated "empathetic." Conclusion: ChatGPT might adequately substitute for advice given to patients by nuclear medicine staff in the investigated settings. Improving the consistency of ChatGPT's responses would further increase its reliability.