Semantic-Guided Multimodal Sentiment Decoding with Adversarial Temporal-Invariant Learning


Code for Semantic-Guided Multimodal Sentiment Decoding with Adversarial Temporal-Invariant Learning (SATI).
Because of the double-blind review process, we are not providing the checkpoint link at this time.

Data Download

  • Install the CMU Multimodal SDK. Ensure you can run from mmsdk import mmdatasdk (see the import check after this list).
  • Option 1: Download the pre-computed splits provided by MOSI and place the contents inside the datasets folder.
  • Option 2: Re-create the splits by downloading the data through the MMSDK. For this, simply run the code as described in the next section.
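A minimal sketch for checking the SDK installation; it only assumes the SDK has been installed (cloned and added to PYTHONPATH, or pip-installed) so that the mmsdk package is importable.

```python
# Quick sanity check that the CMU Multimodal SDK is importable.
from mmsdk import mmdatasdk

# Print where the SDK was loaded from, to confirm the intended installation is on the path.
print(mmdatasdk.__file__)
```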

Running the code

  • cd src
  • Set word_emb_path in config.py to the GloVe embedding file provided by MOSI, and set the path to the RoBERTa model (see the config sketch after this list).
  • Set sdk_dir to the path of your CMU-MultimodalSDK checkout.
  • Run python train.py --data mosi. Replace mosi with mosei or ur_funny to train on the other datasets.
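A hedged sketch of the path settings in src/config.py: word_emb_path and sdk_dir come from the steps above, while the RoBERTa variable name and all concrete paths are placeholders for your local setup.

```python
# Illustrative excerpt of src/config.py (paths are placeholders for your machine).
word_emb_path = "/data/glove/glove.840B.300d.txt"  # GloVe file provided by MOSI
roberta_path = "/data/models/roberta-base"         # local RoBERTa weights (variable name assumed)
sdk_dir = "/home/user/CMU-MultimodalSDK"           # checkout of the CMU Multimodal SDK
```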
