
AVLnet: Learning Audio-Visual Language Representations from Instructional Videos


Document pages: 17 pages

Abstract: Current methods for learning visually grounded language from videos often rely on time-consuming and expensive data collection, such as human-annotated textual summaries or machine-generated automatic speech recognition transcripts. In this work, we introduce the Audio-Video Language Network (AVLnet), a self-supervised network that learns a shared audio-visual embedding space directly from raw video inputs. We circumvent the need for annotation and instead learn audio-visual language representations directly from randomly segmented video clips and their raw audio waveforms. We train AVLnet on publicly available instructional videos and evaluate our model on video clip and language retrieval tasks on three video datasets. Our proposed model outperforms several state-of-the-art text-video baselines by up to 11.8 in a video clip retrieval task, despite operating on the raw audio instead of manually annotated text captions. Further, we show AVLnet is capable of integrating textual information, increasing its modularity and improving performance by up to 20.3 on the video clip retrieval task. Finally, we perform analysis of AVLnet's learned representations, showing our model has learned to relate visual objects with salient words and natural sounds.
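The video clip retrieval task described in the abstract amounts to nearest-neighbor search in the shared audio-visual embedding space: each audio query is ranked against all video clip embeddings by similarity. A minimal sketch of that evaluation step, using NumPy, randomly generated toy embeddings, and a hypothetical `retrieve` helper (not the authors' code):

```python
import numpy as np

def retrieve(audio_emb, video_emb, k=1):
    """Rank video clips for each audio query by cosine similarity
    in a shared embedding space (toy illustration)."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    sims = a @ v.T                     # pairwise cosine similarities
    ranks = np.argsort(-sims, axis=1)  # best-matching clip first
    return ranks[:, :k]

# Toy data: audio query i is a slightly perturbed copy of clip i,
# standing in for a well-trained joint embedding.
rng = np.random.default_rng(0)
video = rng.normal(size=(5, 8))
audio = video + 0.01 * rng.normal(size=(5, 8))
top1 = retrieve(audio, video)
```

In a real evaluation, `audio_emb` and `video_emb` would come from the trained audio and visual branches of the model, and recall@k would be computed over the returned rankings.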
