
Multi-Modality Information Fusion for Radiomics-based Neural Architecture Search

  • 20210506


Document length: 9 pages

Abstract: Radiomics is a method that extracts mineable quantitative features from radiographic images. These features can then be used to determine prognosis, for example, predicting the development of distant metastases (DM). Existing radiomics methods, however, require complex manual effort, including the design of hand-crafted radiomic features and their extraction and selection. Recent radiomics methods, based on convolutional neural networks (CNNs), also require manual input in network architecture design and hyper-parameter tuning. Radiomic complexity is further compounded when there are multiple imaging modalities, for example, combined positron emission tomography - computed tomography (PET-CT), where there is functional information from PET and complementary anatomical localization information from computed tomography (CT). Existing multi-modality radiomics methods manually fuse the data that are extracted separately. Reliance on manual fusion often results in sub-optimal fusion because it depends on an expert's understanding of medical images. In this study, we propose a multi-modality neural architecture search method (MM-NAS) to automatically derive optimal multi-modality image features for radiomics and thus negate the dependence on a manual process. We evaluated our MM-NAS on the ability to predict DM using a public PET-CT dataset of patients with soft-tissue sarcomas (STSs). Our results show that our MM-NAS had a higher prediction accuracy when compared to state-of-the-art radiomics methods.
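The core idea of the abstract — treating the fusion of modality-specific features as a searchable choice rather than a hand-crafted one — can be illustrated with a minimal sketch. This is not the paper's MM-NAS (which searches over neural architectures); it is a hypothetical toy search over three candidate fusion operations for two feature vectors (standing in for PET and CT features), scored by a placeholder metric where a real search would use validation performance such as DM prediction accuracy.

```python
# Hypothetical sketch of searching over candidate fusion operations for two
# modality feature vectors (e.g. PET and CT). Illustrative only; the actual
# MM-NAS method searches over network architectures, not this simple menu.

def fuse_concat(a, b):
    """Concatenate the two feature vectors."""
    return a + b

def fuse_sum(a, b):
    """Element-wise sum of the two feature vectors."""
    return [x + y for x, y in zip(a, b)]

def fuse_max(a, b):
    """Element-wise maximum of the two feature vectors."""
    return [max(x, y) for x, y in zip(a, b)]

CANDIDATE_FUSIONS = {"concat": fuse_concat, "sum": fuse_sum, "max": fuse_max}

def search_fusion(pet_feats, ct_feats, score_fn):
    """Pick the fusion op whose fused features score highest under score_fn.

    score_fn stands in for a validation metric (e.g. held-out prediction
    accuracy) that a real NAS would estimate by training each candidate.
    """
    best_name, best_score = None, float("-inf")
    for name, op in CANDIDATE_FUSIONS.items():
        score = score_fn(op(pet_feats, ct_feats))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

if __name__ == "__main__":
    pet = [0.2, 0.9, 0.1]  # toy PET feature vector
    ct = [0.5, 0.3, 0.4]   # toy CT feature vector
    # Toy score: mean of the fused features (a stand-in for real accuracy).
    name, score = search_fusion(pet, ct, lambda f: sum(f) / len(f))
    print(name, score)
```

The point of the sketch is the selection loop: the fusion strategy is chosen by measured outcome, not by an expert's prior judgment of the imaging modalities.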
