Document Type
Article
Abstract
Multi-modal sentiment analysis plays an important role in providing better interactive experiences to users. Each modality in multi-modal data can provide a different viewpoint or reveal unique aspects of a user's emotional state. In this work, we use the text, audio, and visual modalities from the MOSI dataset and propose a novel fusion technique using a multi-head attention LSTM network. Finally, we perform a classification task and evaluate its performance.
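The fusion described in the abstract can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the per-modality feature dimensions, hidden size, head count, and two-class output are all assumptions, and the exact ordering and attention scheme of the paper's "sequential late fusion" may differ. The idea shown is late fusion: each modality (text, audio, visual) is encoded by its own LSTM, and multi-head attention is applied over the resulting per-modality features before classification.

```python
# Hedged sketch of late fusion with a multi-head attention LSTM network.
# All dimensions below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn


class LateFusionSentiment(nn.Module):
    def __init__(self, dims=None, hidden=64, heads=4, n_classes=2):
        super().__init__()
        # Assumed per-modality input feature sizes (placeholder values).
        if dims is None:
            dims = {"text": 300, "audio": 74, "visual": 47}
        # One LSTM encoder per modality (late fusion: modalities are
        # processed independently before being combined).
        self.encoders = nn.ModuleDict(
            {m: nn.LSTM(d, hidden, batch_first=True) for m, d in dims.items()}
        )
        # Multi-head attention over the stacked per-modality features.
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, inputs):
        feats = []
        for m, x in inputs.items():
            # Use the final hidden state of each modality's LSTM.
            _, (h, _) = self.encoders[m](x)
            feats.append(h[-1])                # (batch, hidden)
        seq = torch.stack(feats, dim=1)        # (batch, n_modalities, hidden)
        fused, _ = self.attn(seq, seq, seq)    # attend across modalities
        return self.head(fused.mean(dim=1))    # (batch, n_classes) logits


# Usage with random stand-in sequences (batch of 2, 20 timesteps each):
model = LateFusionSentiment()
batch = {
    "text": torch.randn(2, 20, 300),
    "audio": torch.randn(2, 20, 74),
    "visual": torch.randn(2, 20, 47),
}
logits = model(batch)  # shape: (2, 2)
```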
Publication Date
7-2-2021
Language
English
License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Recommended Citation
Banerjee, Debapriya; Lygerakis, Fotios; and Makedon, Fillia, "Sequential Late Fusion Technique for Multi-modal Sentiment Analysis" (2021). Association of Computing Machinery Open Access Agreement Publications. 22.
https://mavmatrix.uta.edu/utalibraries_acmoapubs/22