Cannot find pre-trained feature_embedding files #2

I have run the main code with the parameters from the example given in the README, but I cannot run the code shown in the attached image; it reports that 'Electronics_paraphrase-distilroberta-base-v1.npy' cannot be found. It seems I need to use the pre-trained models to get feature_embedding?

Comments
Yes, you need to use the pre-trained sentence embedding models to get feature_embedding. We applied the pre-trained paraphrase-distilroberta-base-v1 sentence embedding model from a public transformer implementation: https://github.com/UKPLab/sentence-transformers
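For reference, a minimal sketch of loading that model with the linked library (assuming `pip install sentence-transformers`; the sample sentence is illustrative):

```python
# A minimal sketch, assuming the sentence-transformers package is installed
# (pip install sentence-transformers); the sample sentence is illustrative.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('paraphrase-distilroberta-base-v1')
embedding = model.encode('A sample product description.')
print(embedding.shape)  # (768,) for this model
```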
OK, thank you very much!
My apologies for disturbing you.
Mr. Xu, apologies again. Would it be possible for you to share the source code for computing feature_embedding used in your research?
The feature_embedding is the average of the embeddings of the 'title', 'description', and 'feature' fields. I used the pre-trained model to encode the sentences separately and manually computed the average.
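A hedged sketch of that averaging step: the three field names follow the comment above, while the toy item data is an illustrative assumption, and the output file name is taken from the error message in this issue.

```python
# Sketch of the averaging described above; the toy item data is an
# illustrative assumption, not taken from the repository.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('paraphrase-distilroberta-base-v1')

def feature_embedding(title: str, description: str, feature: str) -> np.ndarray:
    # Encode the three text fields separately, then average the vectors.
    vectors = model.encode([title, description, feature])
    return vectors.mean(axis=0)

items = [
    {'title': 'USB cable', 'description': '1m charging cable', 'feature': 'fast charging'},
]
embeddings = np.stack([
    feature_embedding(i['title'], i['description'], i['feature']) for i in items
])
# File name taken from the error message in this issue.
np.save('Electronics_paraphrase-distilroberta-base-v1.npy', embeddings)
```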
That's nice. I have implemented the function.
It does not need special code. We first train an IPSBiasedMF model and then save the predicted user-item matrix as the exposure probability in the IPS_EXPO_PROB file. I have updated the README to provide more details.
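Purely illustrative, since IPSBiasedMF's interface is not shown in this thread: random factors stand in for the trained model below, and only the save step mirrors the comment above.

```python
# Illustrative only: random factors stand in for a trained IPSBiasedMF;
# the point is saving the predicted user-item matrix as exposure probabilities.
import numpy as np

num_users, num_items, dim = 100, 50, 16
# Scaled so the toy predictions land in [0, 1]; a real model's factors
# would come from training.
user_factors = np.random.rand(num_users, dim) / np.sqrt(dim)
item_factors = np.random.rand(num_items, dim) / np.sqrt(dim)

expo_prob = user_factors @ item_factors.T  # predicted user-item matrix
np.save('IPS_EXPO_PROB.npy', expo_prob)    # file name from the comment above
```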
OK, thank you very much! |
The propensity score is calculated based on the method introduced in "Unbiased Recommender Learning from Missing-Not-at-Random Implicit Feedback", or you can refer to Equation 18 in our paper. After calculating the propensity score, save it in the propensity.npy file, then use the calculated score to train an IPSBiasedMF model.
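A hedged sketch of the relative-popularity estimate from that paper (propensity proportional to an item's share of interactions raised to a power, commonly 0.5); the interaction-matrix layout, variable names, and toy data are assumptions, not the repository's code.

```python
# Sketch under the assumption that propensities follow the relative-popularity
# estimate from the cited paper: theta_i = (n_i / max_j n_j) ** eta.
# The users-x-items 0/1 interaction matrix and eta = 0.5 are illustrative.
import numpy as np

def propensity_scores(interactions: np.ndarray, eta: float = 0.5) -> np.ndarray:
    item_counts = interactions.sum(axis=0)          # n_i: interactions per item
    return (item_counts / item_counts.max()) ** eta

interactions = np.random.binomial(1, 0.05, size=(1000, 200))  # toy implicit feedback
np.save('propensity.npy', propensity_scores(interactions))    # file name from the comment
```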
OK! Thank you for the warm and thorough instructions!