
Learning Semantics-enriched Representation via Self-discovery, Self-classification, and Self-restoration



Abstract: Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and yield semantically more powerful models for different medical applications. But how exactly such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We examine our Semantic Genesis against all publicly available pre-trained models, obtained by either self-supervision or full supervision, on six distinct target tasks, covering both classification and segmentation in various medical modalities (i.e., CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly exceeds all of its 3D counterparts as well as the de facto ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, which encourages deep models to learn compelling semantic representation from the abundant anatomical patterns arising from consistent anatomies embedded in medical images. Code and the pre-trained Semantic Genesis are available at this https URL .
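To make the joint objective described in the abstract concrete, below is a minimal sketch, not the authors' released code, of how the self-classification and self-restoration branches can be trained together: a shared 3D encoder feeds both a classification head that predicts the pseudo-label assigned to an anatomical-pattern crop during self-discovery, and a decoder that restores the unperturbed crop. The backbone layers, crop size, number of pseudo-classes, and equal loss weighting are illustrative assumptions; the paper's actual architecture and settings are in the linked repository.

    # Sketch of the joint self-classification + self-restoration training step.
    # Assumes self-discovery has already assigned a pseudo-label to each crop.
    import torch
    import torch.nn as nn

    class SemanticGenesisSketch(nn.Module):
        def __init__(self, num_pseudo_classes: int = 44):  # class count is an assumption
            super().__init__()
            # Toy 3D encoder; the paper uses a 3D U-Net-style backbone.
            self.encoder = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Self-classification head: predict the self-discovered pattern class.
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                nn.Linear(32, num_pseudo_classes),
            )
            # Self-restoration head: reconstruct the original (unperturbed) crop.
            self.decoder = nn.Sequential(
                nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
            )

        def forward(self, x):
            z = self.encoder(x)
            return self.classifier(z), self.decoder(z)

    model = SemanticGenesisSketch()
    ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

    # One training step on dummy tensors: `transformed` is a perturbed
    # anatomical-pattern crop, `original` its unperturbed version, and
    # `pseudo_label` its class from self-discovery.
    transformed = torch.randn(2, 1, 64, 64, 64)
    original = torch.randn(2, 1, 64, 64, 64)
    pseudo_label = torch.randint(0, 44, (2,))

    logits, restored = model(transformed)
    loss = ce(logits, pseudo_label) + mse(restored, original)  # joint objective
    loss.backward()

After self-supervised pre-training with this joint objective, the encoder (and, for segmentation, the decoder) would be fine-tuned on the target classification or segmentation task.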
