Deep Learning-based Detection of Intravenous Contrast Enhancement on CT Scans. 2022

Zezhong Ye, Jack M Qian, Ahmed Hosny, Roman Zeleznik, Deborah Plana, Jirapat Likitlersuang, Zhongyi Zhang, Raymond H Mak, Hugo J W L Aerts, Benjamin H Kann
Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Harvard Institutes of Medicine, Boston, Mass (Z.Y., J.M.Q., A.H., R.Z., D.P., J.L., Z.Z., R.H.M., H.J.W.L.A., B.H.K.); Departments of Radiation Oncology (Z.Y., J.M.Q., A.H., R.Z., J.L., Z.Z., R.H.M., H.J.W.L.A., B.H.K.) and Radiology (H.J.W.L.A.), Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115; Harvard-MIT Division of Health Sciences & Technology, Cambridge, Mass (D.P.); and Department of Radiology and Nuclear Medicine, School for Cardiovascular Diseases (CARIM) & School for Oncology and Reproduction (GROW), Maastricht University, Maastricht, the Netherlands (H.J.W.L.A.).

Identifying the presence of intravenous contrast material on CT scans is an important component of data curation for medical imaging-based artificial intelligence model development and deployment. Use of intravenous contrast material is often poorly documented in imaging metadata, necessitating impractical manual annotation by clinician experts. Authors developed a convolutional neural network (CNN)-based deep learning platform to identify intravenous contrast enhancement on CT scans. For model development and validation, authors used six independent datasets of head and neck (HN) and chest CT scans, totaling 133 480 axial two-dimensional sections from 1979 scans, which were manually annotated by clinical experts. Five CNN models were first trained on HN scans for contrast enhancement detection. Model performance was evaluated at the patient level on a holdout set and an external test set. Models were then fine-tuned on chest CT data and externally validated. This study found that Digital Imaging and Communications in Medicine metadata tags for intravenous contrast material were missing or erroneous for 1496 scans (75.6%). An EfficientNetB4-based model showed the best performance, with areas under the curve (AUCs) of 0.996 and 1.0 in the HN holdout (n = 216) and external (n = 595) sets, respectively, and AUCs of 1.0 and 0.980 in the chest holdout (n = 53) and external (n = 402) sets, respectively. This automated, scan-to-prediction platform is highly accurate at CT contrast enhancement detection and may be helpful for artificial intelligence model development and clinical application.

Keywords: CT, Head and Neck, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN), Machine Learning Algorithms, Contrast Material

Supplemental material is available for this article. © RSNA, 2022.
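The abstract does not include code, but the core approach it describes, an EfficientNetB4 backbone classifying individual axial CT sections as contrast-enhanced or not, with per-section probabilities aggregated into a scan-level prediction, can be sketched as follows. This is a minimal illustrative example in Keras; the layer choices, hyperparameters, and mean-aggregation rule are assumptions for demonstration, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): EfficientNetB4-based binary
# classifier for 2D axial CT sections, with scan-level aggregation of
# per-section probabilities.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def build_section_classifier(input_shape=(380, 380, 3)):
    """Binary classifier for one axial CT section: contrast vs. no contrast."""
    backbone = tf.keras.applications.EfficientNetB4(
        include_top=False,
        weights="imagenet",          # transfer learning from ImageNet (assumed)
        input_shape=input_shape,
        pooling="avg",
    )
    inputs = layers.Input(shape=input_shape)
    x = backbone(inputs)
    x = layers.Dropout(0.3)(x)       # illustrative regularization choice
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model


def scan_level_prediction(model, sections):
    """Aggregate per-section probabilities into a single scan-level score.

    `sections` is an array of preprocessed 2D sections from one CT scan;
    simple averaging is an assumed aggregation rule.
    """
    probs = model.predict(sections, verbose=0).ravel()
    return float(np.mean(probs))
```

Fine-tuning on chest CT, as described in the abstract, could in this sketch amount to continuing training of the HN-trained model on chest sections, typically with a reduced learning rate; the specifics of the authors' transfer-learning setup are given in the article and its supplemental material.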

