
Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data

Date: 2021-05-07

Document pages: 16

Abstract: With recent developments in smart technologies, there has been a growing focus on the use of artificial intelligence and machine learning for affective computing to further enhance the user experience through emotion recognition. Typically, machine learning models used for affective computing are trained using manually extracted features from biological signals. Such features may not generalize well for large datasets and may be sub-optimal in capturing the information from the raw input data. One approach to address this issue is to use fully supervised deep learning methods to learn latent representations of the biosignals. However, this method requires human supervision to label the data, which may be unavailable or difficult to obtain. In this work, we propose an unsupervised framework to reduce the reliance on human supervision. The proposed framework utilizes two stacked convolutional autoencoders to learn latent representations from wearable electrocardiogram (ECG) and electrodermal activity (EDA) signals. These representations are utilized within a random forest model for binary arousal classification. This approach reduces human supervision and enables the aggregation of datasets, allowing for higher generalizability. To validate this framework, an aggregated dataset comprising the AMIGOS, ASCERTAIN, CLEAS, and MAHNOB-HCI datasets is created. The results of our proposed method are compared with those of convolutional neural networks, as well as methods that rely on manually extracted hand-crafted features. The methodology used for fusing the two modalities is also investigated. Lastly, we show that our method outperforms current state-of-the-art results for arousal detection on the same datasets using ECG and EDA biosignals. The results demonstrate the wide applicability of stacked convolutional autoencoders combined with machine learning for affective computing.
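The pipeline the abstract describes can be sketched in a minimal form: a 1-D convolutional autoencoder is trained without labels to reconstruct raw biosignal windows, and its latent codes are then passed to a random forest for binary arousal classification. This is not the authors' implementation; the layer sizes, window length, training budget, and the synthetic stand-in data are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Synthetic stand-in for windowed ECG segments: 200 windows of 128 samples each.
X = rng.standard_normal((200, 1, 128)).astype(np.float32)
y = (X.mean(axis=(1, 2)) > 0).astype(int)  # toy binary "arousal" label

class ConvAutoencoder(nn.Module):
    """1-D convolutional autoencoder; the encoder output is the latent representation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # 128 samples -> 32 latent time steps x 4 channels
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 4, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # mirror the encoder back to 128 samples
            nn.ConvTranspose1d(4, 8, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 5, stride=2, padding=2, output_padding=1),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.from_numpy(X)
for _ in range(20):  # brief unsupervised training on the reconstruction objective
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xb), xb)
    loss.backward()
    opt.step()

# Extract latent features (no labels were used to learn them).
with torch.no_grad():
    Z = model.encoder(xb).flatten(1).numpy()

# Supervised stage: random forest on the latent codes, train/test split 150/50.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Z[:150], y[:150])
acc = clf.score(Z[150:], y[150:])
print(Z.shape, acc)
```

In the paper's setting there would be one such autoencoder per modality (ECG and EDA), with the two latent vectors fused, e.g. by concatenation, before the random forest; the sketch above shows a single modality for brevity.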
