Dynamic abdominal MRI image generation using cGANs: A generalized model for various breathing patterns with extensive evaluation


Cordón-Avila A., Ballı Ö. F., Damme K., Abayazid M.

Computers in Biology and Medicine, vol. 196, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 196
  • Publication Date: 2025
  • DOI: 10.1016/j.compbiomed.2025.110635
  • Journal Name: Computers in Biology and Medicine
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, BIOSIS, Biotechnology Research Abstracts, CINAHL, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, Library, Information Science & Technology Abstracts (LISTA)
  • Keywords: Conditional generative adversarial networks, External abdominal motion, Magnetic resonance imaging
  • Ankara University Affiliated: No

Abstract

Organ motion is a limiting factor in the treatment of abdominal tumors. During abdominal interventions, medical images are acquired to provide guidance; however, this increases operative time and radiation exposure. In this paper, conditional generative adversarial networks are implemented to generate dynamic magnetic resonance images using external abdominal motion as a surrogate signal. The generator was trained to account for breathing variability, and different models were investigated to improve motion quality. Additionally, objective and subjective studies were conducted to assess image and motion quality. The objective study included several metrics, such as the structural similarity index measure (SSIM) and mean absolute error (MAE). In the subjective study, 32 clinical experts evaluated the generated images by completing different tasks: identifying images and videos as real or fake via a questionnaire, allowing the experts to assess realism in both static images and dynamic sequences. The best-performing model achieved an SSIM of 0.73 ± 0.13, and the MAE was below 4.5 mm and 1.8 mm for the superior–inferior and anterior–posterior directions of motion, respectively. The proposed framework was compared to a related method that uses a set of convolutional neural networks combined with recurrent layers. In the subjective study, more than 50% of the generated images and dynamic sequences were classified as real, except in one task. Synthetic images have the potential to reduce the need for acquiring intraoperative images, decreasing time and radiation exposure. A video summary can be found in the supplementary material.
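The two objective metrics named in the abstract can be sketched in a few lines of NumPy. The snippet below is an illustrative, simplified version only: it computes MAE (optionally scaled to millimetres by a hypothetical pixel-spacing parameter) and a single-window "global" SSIM without the sliding Gaussian window used in the full SSIM definition, so its values are not directly comparable to the paper's reported scores.

```python
import numpy as np

def mae_mm(pred, target, pixel_spacing_mm=1.0):
    # Mean absolute error; pixel_spacing_mm is an assumed scale factor
    # to convert a pixel-wise error into millimetres.
    return float(np.mean(np.abs(pred - target)) * pixel_spacing_mm)

def global_ssim(x, y, data_range=1.0):
    # Simplified SSIM computed over the whole image at once
    # (no sliding window, no Gaussian weighting).
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return float(num / den)

rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = np.clip(real + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

print(global_ssim(real, real))        # identical images -> 1.0
print(global_ssim(real, fake) < 1.0)  # noisy copy scores lower -> True
```

In practice, a windowed implementation such as `skimage.metrics.structural_similarity` would be used instead of this global variant.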