Paper - [ArXiv] [ACL Anthology]
- Enhancing High-order Interaction Awareness in LLM-based Recommender Model, EMNLP 2024 Main.
- The ELMRec checkpoints are available on Google Drive.
- If you have any questions, please feel free to contact me at kaysenn@163.com.
>> pip install -r requirements.txt
>> python pretrain.py ./data/{dataset}/ --cuda --batch_size 64 --checkpoint ./checkpoint/{dataset}/
>> python seq_reranker.py ./data/{dataset}/ --cuda --batch_size 16 --checkpoint ./checkpoint/{dataset}/
>> python topn.py ./data/{dataset}/ --cuda --batch_size 16 --checkpoint ./checkpoint/{dataset}/
>> python exp.py ./data/{dataset}/ --cuda --batch_size 16 --checkpoint ./checkpoint/{dataset}/
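- For example, to run the full pipeline on the Beauty dataset, substitute beauty for {dataset} as shown below (use sports or toys for the other datasets; this assumes the preprocessed data is placed under ./data/beauty/ and that ./checkpoint/beauty/ exists):
>> pip install -r requirements.txt
>> python pretrain.py ./data/beauty/ --cuda --batch_size 64 --checkpoint ./checkpoint/beauty/
>> python seq_reranker.py ./data/beauty/ --cuda --batch_size 16 --checkpoint ./checkpoint/beauty/
>> python topn.py ./data/beauty/ --cuda --batch_size 16 --checkpoint ./checkpoint/beauty/
>> python exp.py ./data/beauty/ --cuda --batch_size 16 --checkpoint ./checkpoint/beauty/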
| Dataset | α (Direct) | σ (Direct) | N (Direct) | L (Direct) | α (Sequential) | σ (Sequential) | N (Sequential) | L (Sequential) |
|---|---|---|---|---|---|---|---|---|
| Sports | 5 | 5 | 10 | 4 | 1 | 5 | 10 | 4 |
| Beauty | 9 | 6 | 15 | 4 | 6 | 6 | 15 | 4 |
| Toys | 11 | 5 | 10 | 4 | 9 | 5 | 10 | 4 |
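- For reference, the settings above can be kept as a small Python lookup. This is a minimal sketch for illustration only: the dictionary name and key names are mine, not part of the released scripts, which may expect these values via their own command-line flags.

```python
# (α, σ, N, L) settings from the table above, per dataset and task.
# Illustrative only: this dictionary is not part of the repository.
ELMREC_HPARAMS = {
    "sports": {"direct":     {"alpha": 5,  "sigma": 5, "N": 10, "L": 4},
               "sequential": {"alpha": 1,  "sigma": 5, "N": 10, "L": 4}},
    "beauty": {"direct":     {"alpha": 9,  "sigma": 6, "N": 15, "L": 4},
               "sequential": {"alpha": 6,  "sigma": 6, "N": 15, "L": 4}},
    "toys":   {"direct":     {"alpha": 11, "sigma": 5, "N": 10, "L": 4},
               "sequential": {"alpha": 9,  "sigma": 5, "N": 10, "L": 4}},
}

# Example: look up the sequential-recommendation settings for Beauty.
print(ELMREC_HPARAMS["beauty"]["sequential"])  # {'alpha': 6, 'sigma': 6, 'N': 15, 'L': 4}
```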
If this repository helps you, please cite:
@inproceedings{wang2024enhancing,
title={Enhancing High-order Interaction Awareness in LLM-based Recommender Model},
author={Wang, Xinfeng and Cui, Jin and Fukumoto, Fumiyo and Suzuki, Yoshimi},
booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
pages={11696--11711},
year={2024}
}
- Code references: P5, POD
- Thanks to linglingl635's careful check, I have corrected Table 9. If you have any questions about this, please refer to the latest version on arXiv.
- Thanks to Wei-Hsiang Huang's careful review, we noticed that the widely used review data (e.g., in P5 and POD) could potentially leak information. Follow-up work will avoid this.