Description
The model learns more when it is punished more. In Collie, we often use adaptive implicit loss functions that sample multiple negative items at a time, giving us more chances to punish the model. Setting num_negative_samples too low means the model learns more slowly over time, but setting num_negative_samples too high might punish the model so much that it does not learn at all.
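For reference, here is roughly where this hyperparameter lives today. This is only a sketch, assuming the usual Interactions signature, with randomly generated placeholder data:

import numpy as np
from collie.interactions import Interactions

# Toy implicit-feedback data, only to show where the knob is set.
users = np.random.randint(0, 100, size=1_000)
items = np.random.randint(0, 500, size=1_000)

# num_negative_samples is a single fixed value for the entire training run.
interactions = Interactions(users=users, items=items, num_negative_samples=10)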
Desired solution
Ideally, we could have an adaptive negative sampling strategy that starts with a small num_negative_samples and increases it as training progresses, so the model is given a greater chance of messing up as it learns more and more.
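One possible shape for such a schedule, where every name and default below is hypothetical and only meant to illustrate the tunable-hyperparameter idea:

def num_negative_samples_for_epoch(epoch, start=2, step=5, every_n_epochs=3, max_samples=50):
    """Hypothetical schedule: start small and grow num_negative_samples
    as training progresses, capped at max_samples."""
    return min(start + (epoch // every_n_epochs) * step, max_samples)

# e.g. epochs 0-2 -> 2 negatives, epochs 3-5 -> 7, epochs 6-8 -> 12, ... up to 50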
Alternatives considered
We could just train for a few epochs, then do something like:
# train a few epochs
model.train_loader.num_negative_samples += 5
model.val_loader.num_negative_samples += 5
# continue training
and we would have the same outcome, but it would be nice if this were built into Collie somehow via a tunable hyperparameter!
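Since Collie trains via PyTorch Lightning, one way to automate the bump above is a Lightning callback. This is just a sketch: it assumes the trainer accepts standard Lightning callbacks and that mutating num_negative_samples on the loaders mid-training takes effect, exactly as the manual snippet above assumes:

from pytorch_lightning.callbacks import Callback

class AdaptiveNegativeSampling(Callback):
    """Sketch: bump num_negative_samples every few epochs.

    Assumes the model exposes mutable train_loader / val_loader
    attributes, as in the manual workaround above.
    """

    def __init__(self, step=5, every_n_epochs=3, max_samples=50):
        self.step = step
        self.every_n_epochs = every_n_epochs
        self.max_samples = max_samples

    def on_train_epoch_end(self, trainer, pl_module):
        if (trainer.current_epoch + 1) % self.every_n_epochs == 0:
            for loader in (pl_module.train_loader, pl_module.val_loader):
                loader.num_negative_samples = min(
                    loader.num_negative_samples + self.step, self.max_samples
                )

Passing an instance of this in the trainer's callbacks list would reproduce the manual += 5 bumps without pausing training by hand.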
Additional context
This would be cool :)