
ELMRec (EMNLP'24)

  • Enhancing High-order Interaction Awareness in LLM-based Recommender Model, EMNLP 2024 Main.


Note

  • The checkpoints of ELMRec have been uploaded to Google Drive.
  • If you have any questions, please feel free to contact me at kaysenn@163.com.

Train and test ELMRec

(1) Install requirements

    >> pip install -r requirements.txt
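If you prefer an isolated environment, a standard virtualenv can be created first (the environment name below is arbitrary) and the requirements installed inside it:

    >> python -m venv elmrec-env
    >> source elmrec-env/bin/activate
    >> pip install -r requirements.txt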

(2) Pretraining ({dataset}: beauty, sports, or toys; run under ./ELMRec)

    >> python pretrain.py ./data/{dataset}/ --cuda --batch_size 64 --checkpoint ./checkpoint/{dataset}/
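To pretrain on all three datasets in one go, a minimal bash loop (assuming the data and checkpoint layout above) might look like:

    >> for dataset in beauty sports toys; do
    >>     python pretrain.py ./data/${dataset}/ --cuda --batch_size 64 --checkpoint ./checkpoint/${dataset}/
    >> done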

(3) Inference ({dataset}: beauty, sports, or toys; run under ./ELMRec)

    >> python seq_reranker.py ./data/{dataset}/ --cuda --batch_size 16 --checkpoint ./checkpoint/{dataset}/
    >> python topn.py ./data/{dataset}/ --cuda --batch_size 16 --checkpoint ./checkpoint/{dataset}/
    >> python exp.py ./data/{dataset}/ --cuda --batch_size 16 --checkpoint ./checkpoint/{dataset}/
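Similarly, the three inference scripts can be run back-to-back for a single dataset with a small bash loop (script names and flags are taken verbatim from the commands above):

    >> dataset=beauty   # or: sports, toys
    >> for task in seq_reranker topn exp; do
    >>     python ${task}.py ./data/${dataset}/ --cuda --batch_size 16 --checkpoint ./checkpoint/${dataset}/
    >> done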

Parameters (Table 9)

Dataset   Direct Recommendation (α / σ / N / L)   Sequential Recommendation (α / σ / N / L)
Sports    5 / 5 / 10 / 4                          1 / 5 / 10 / 4
Beauty    9 / 6 / 15 / 4                          6 / 6 / 15 / 4
Toys      11 / 5 / 10 / 4                         9 / 5 / 10 / 4

Citation

If this repository helps you, please cite:

@inproceedings{wang2024enhancing,
  title={Enhancing High-order Interaction Awareness in LLM-based Recommender Model},
  author={Wang, Xinfeng and Cui, Jin and Fukumoto, Fumiyo and Suzuki, Yoshimi},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
  pages={11696--11711},
  year={2024}
}

Acknowledgement

  • Code references: P5, POD
  • Thanks to linglingl635's careful check, Table 9 has been rechecked and corrected. If you have any questions about it, please refer to the latest version on arXiv.
  • Thanks to Wei-Hsiang Huang's careful review, we noticed that the widely used review data (e.g., in P5 and POD) could potentially leak information; follow-up work will avoid this.
