%0 Journal Article
%T CSAMDT: Conditional Self Attention Memory-Driven Transformers for Radiology Report Generation from Chest X-Ray
%A Shahzadi I
%A Madni TM
%A Janjua UI
%A Batool G
%A Naz B
%A Ali MQ
%J J Imaging Inform Med
%V 0
%N 0
%D 2024 Jun 3
%M 38831189
%R 10.1007/s10278-024-01126-6
%X A radiology report plays a crucial role in guiding patient treatment, but writing these reports is a time-consuming task that demands a radiologist's expertise. In response to this challenge, researchers in artificial intelligence for healthcare have explored techniques for automatically interpreting radiographic images and generating free-text reports; however, much of the work on medical report generation has focused on image captioning methods without adequately addressing report-specific aspects. This study introduces a Conditional Self Attention Memory-Driven Transformer model for generating radiology reports. The model operates in two phases: first, a multi-label classification model, using ResNet152 v2 as an encoder, performs feature extraction and multi-disease diagnosis; second, the Conditional Self Attention Memory-Driven Transformer serves as a decoder, using self-attention memory-driven transformers to generate the text report. Comprehensive experiments compared existing and proposed techniques using Bilingual Evaluation Understudy (BLEU) scores from BLEU-1 to BLEU-4. The proposed model outperforms other state-of-the-art techniques, achieving BLEU-1, BLEU-2, BLEU-3, and BLEU-4 scores of 0.475, 0.358, 0.229, and 0.165, respectively. These findings can alleviate radiologists' workloads and enhance clinical workflows by introducing an autonomous radiology report generation system.
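
The abstract describes a two-phase pipeline: a ResNet152 v2 encoder performing multi-label disease classification, followed by a memory-driven transformer decoder that generates the report text. The following is a minimal, hypothetical sketch of the phase-1 classifier in Keras; the number of disease labels, input size, and training configuration are assumptions, and it stands in for, rather than reproduces, the authors' implementation.

    # Hypothetical sketch (not the authors' code): phase-1 multi-label disease
    # classifier built on a ResNet152 v2 backbone, as described in the abstract.
    import tensorflow as tf

    NUM_DISEASES = 14  # assumption: number of disease labels in the target dataset

    def build_phase1_classifier(input_shape=(224, 224, 3)):
        # Pretrained ResNet152V2 used as the image encoder (feature extractor);
        # global average pooling yields a single feature vector per chest X-ray.
        backbone = tf.keras.applications.ResNet152V2(
            include_top=False, weights="imagenet",
            input_shape=input_shape, pooling="avg",
        )
        # Sigmoid head for multi-label diagnosis: one independent probability per finding.
        outputs = tf.keras.layers.Dense(NUM_DISEASES, activation="sigmoid")(backbone.output)
        model = tf.keras.Model(backbone.input, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
        return model

In phase 2, the pooled image features (and predicted disease labels) would condition a transformer decoder that emits the report token by token; that decoder is the paper's contribution and is not sketched here.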
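As a companion, a hypothetical sketch of the BLEU-1 through BLEU-4 evaluation reported in the abstract, using NLTK's corpus_bleu; the reference and hypothesis tokens below are illustrative only and are not taken from the paper's data.

    # Hypothetical sketch: computing BLEU-1..4 for generated reports with NLTK,
    # mirroring the evaluation protocol described in the abstract.
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    references = [[["no", "acute", "cardiopulmonary", "abnormality"]]]  # tokenized gold reports
    hypotheses = [["no", "acute", "cardiopulmonary", "disease"]]        # tokenized generated reports

    smooth = SmoothingFunction().method1
    for n in range(1, 5):
        # BLEU-n uses uniform weights over 1..n-gram precisions.
        weights = tuple(1.0 / n for _ in range(n)) + (0.0,) * (4 - n)
        score = corpus_bleu(references, hypotheses, weights=weights, smoothing_function=smooth)
        print(f"BLEU-{n}: {score:.3f}")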