
Contrastive Visual-Linguistic Pretraining


Document length: 11 pages

Abstract: Several multi-modality representation learning approaches, such as LXMERT and ViLBERT, have been proposed recently. Such approaches can achieve superior performance due to the high-level semantic information captured during large-scale multimodal pretraining. However, as ViLBERT and LXMERT adopt visual region regression and classification losses, they often suffer from domain gap and noisy label problems, because their visual features are pretrained on the Visual Genome dataset. To overcome these issues, we propose unbiased Contrastive Visual-Linguistic Pretraining (CVLP), which constructs a visual self-supervised loss built upon contrastive learning. We evaluate CVLP on several downstream tasks, including VQA, GQA and NLVR2, to validate the superiority of contrastive learning for multi-modality representation learning. Our code is available at: this https URL.
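The visual self-supervised contrastive loss the abstract refers to is, in common formulations, an InfoNCE-style objective: a query feature should be more similar to its positive key than to a set of negative keys. The sketch below illustrates that general objective for a single query; the function names, shapes, and temperature value are illustrative assumptions, not CVLP's actual code.

```python
import math

def info_nce_loss(query, keys, positive_index, temperature=0.07):
    """InfoNCE-style contrastive loss for one query vector.

    `keys` holds one positive and several negative feature vectors;
    `positive_index` marks the positive. All names here are
    illustrative, not the paper's API.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        n = math.sqrt(dot(v, v)) or 1.0
        return [x / n for x in v]

    # Cosine similarity between the query and every key, scaled by temperature.
    q = normalize(query)
    logits = [dot(q, normalize(k)) / temperature for k in keys]

    # Cross-entropy against the positive key: -log softmax(logits)[positive].
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum_exp - logits[positive_index]
```

Minimizing this loss pulls the query toward its positive key and pushes it away from the negatives; in a vision-language setting the query and keys would be region or image features from the two encoder branches.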
