A Multimodal Sentiment Analysis Method Integrating Multi-Layer Attention Interaction and Multi-Feature Enhancement
Abstract
To address the insufficient representation of textual semantic information and the lack of deep fusion between intra-modal and inter-modal information in current multimodal sentiment analysis (MSA) methods, a new method integrating multi-layer attention interaction and multi-feature enhancement (AM-MF) is proposed. First, multimodal feature extraction (MFE) is performed on text, audio, and video using the RoBERTa, ResNet, and ViT models, and high-level features of the three modalities are obtained through self-attention mechanisms. Then, a cross-modal attention (CMA) interaction module built on the Transformer architecture fuses features across modalities. Finally, a soft attention mechanism deeply fuses intra-modal and inter-modal information to perform multimodal sentiment classification. Experimental results on the CH-SIMS and CMU-MOSEI datasets show that the classification performance of the proposed MSA method is significantly superior to that of other state-of-the-art comparison methods.
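The cross-modal attention step described above lets one modality's features (as queries) attend over another modality's features (as keys and values). The following is a minimal NumPy sketch of that idea only; the function name, shapes, and single-head formulation are illustrative assumptions, not the paper's AM-MF implementation, which additionally uses learned projections, multiple layers, and soft-attention fusion.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, key_value_feats, d_k):
    """Scaled dot-product attention where queries come from one
    modality (e.g., text) and keys/values from another (e.g., audio)."""
    scores = query_feats @ key_value_feats.T / np.sqrt(d_k)  # (n_q, n_kv)
    weights = softmax(scores, axis=-1)                       # rows sum to 1
    return weights @ key_value_feats                         # (n_q, d_k)

# Toy example: 4 text tokens attending over 6 audio frames, feature dim 8.
rng = np.random.default_rng(0)
text_feats = rng.standard_normal((4, 8))
audio_feats = rng.standard_normal((6, 8))
fused = cross_modal_attention(text_feats, audio_feats, d_k=8)
print(fused.shape)  # (4, 8): one audio-informed vector per text token
```

In a full model this output would be projected, passed through further attention layers, and combined with the symmetric audio-to-text and video-to-text directions before the final soft-attention fusion and classification head.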