Automated deep identification of radiopharmaceutical type and body region from PET images

Document Type : Original Article

Authors

1 Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran

2 Research Center for Evidence-Based Medicine, Tabriz University of Medical Sciences, Tabriz, Iran

3 Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, Tehran, Iran

4 Nursing Care Research Center, Iran University of Medical Sciences, Tehran, Iran

5 Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada

6 Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada

7 Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran

DOI: 10.22034/irjnm.2024.129363.1573

Abstract

Introduction: A deep learning pipeline consisting of two deep convolutional neural networks (DeepCNNs) was developed, and its capability to differentiate the uptake patterns of different radiopharmaceuticals and to further categorize PET images by body region was explored.
Methods: We trained two sets of DeepCNNs on 2D axial slices of PET images to determine (i) the type of radiopharmaceutical used in imaging, [18F]FDG or [68Ga]Ga-PSMA (i.e., a binary classification task), and (ii) the body region: head and neck, thorax, abdomen, or pelvis (i.e., a 4-class classification task). The models were trained and tested for five different scan durations, thereby covering different noise levels.
Results: Across the scan duration levels, the accuracy of the binary classification models ranged from 98.9% to 99.6%, and that of the 4-class classification models from 98.3% to 99.9% ([18F]FDG) and from 97.8% to 99.6% ([68Ga]Ga-PSMA).
Conclusion: We were able to reliably detect both the type of radiopharmaceutical used in PET imaging and the body region of the PET images at different scan duration levels. These deep learning (DL) models can be used together as a preliminary input stage ahead of models specific to a given radiopharmaceutical or body region, and for extracting appropriate data from unclassified images.
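The two-stage design described above (tracer identification first, then a tracer-specific body-region model) can be sketched as a cascade of classifiers. The following is a minimal, hedged illustration in PyTorch: the tiny CNN architecture, class orderings, and function names are assumptions for demonstration only, not the models actually trained in this work, and the networks here are untrained, so predictions are arbitrary.

```python
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    """Placeholder 2D CNN for single-channel axial PET slices.
    Illustrative only; the paper's actual architectures are not specified here."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse spatial dims -> (B, 16, 1, 1)
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

TRACERS = ("FDG", "PSMA")                                   # [18F]FDG vs [68Ga]Ga-PSMA
REGIONS = ("head_neck", "thorax", "abdomen", "pelvis")      # 4-class task

tracer_net = SliceCNN(n_classes=2)
# One region classifier per radiopharmaceutical, as in the two-model pipeline.
region_nets = {t: SliceCNN(n_classes=4) for t in TRACERS}

def classify_slice(axial_slice: torch.Tensor):
    """Cascade: predict the tracer, then route the slice to the
    tracer-specific body-region classifier."""
    with torch.no_grad():
        tracer = TRACERS[tracer_net(axial_slice).argmax(1).item()]
        region = REGIONS[region_nets[tracer](axial_slice).argmax(1).item()]
    return tracer, region

x = torch.rand(1, 1, 128, 128)   # dummy 2D axial slice, batch of 1
tracer, region = classify_slice(x)
```

In a deployed version of such a pipeline, the tracer prediction would typically be aggregated over many slices of a study before selecting the downstream region model, rather than decided per slice as in this sketch.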


