A comparison of Google's SlateQ algorithm with traditional Reinforcement Learning algorithms


Reinforcement Learning (RL) for Recommender Systems

Presentation: video and slides (ppt)

Summary

Most practical recommender systems focus on estimating immediate user engagement without considering the long-term effects of recommendations on user behaviour. Reinforcement learning (RL) methods offer the potential to optimize recommendations for long-term user engagement. However, since users are often presented with slates of multiple items—which may have interacting effects on user choice—methods are required to deal with the combinatorics of the RL action space.
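
To make the combinatorial challenge concrete, consider how quickly the joint action space grows with the corpus and slate size (the numbers below are illustrative, not taken from this repo):

```python
from math import factorial

def num_ordered_slates(num_candidates: int, slate_size: int) -> int:
    """Number of distinct ordered slates of size k drawn from N candidates: N! / (N - k)!."""
    return factorial(num_candidates) // factorial(num_candidates - slate_size)

# With only 100 candidate items and slates of 5, a naive agent would face
# roughly 9 billion joint actions per step.
print(num_ordered_slates(100, 5))  # 9034502400
```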

Google's SlateQ algorithm addresses this challenge by decomposing the long-term value (LTV) of a slate into a tractable function of its component item-wise LTVs. In this repo, we compare the efficiency of SlateQ to other RL methods like Q-learning that don't decompose the LTV of a slate into its component-wise LTVs.
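
At its core, the decomposition expresses the Q-value of a whole slate as a user-choice-weighted sum of per-item long-term values, which keeps learning tractable in the slate size. The sketch below is a minimal illustration of that idea, not code from this repo, and the numbers are made up:

```python
import numpy as np

def slate_q_value(item_ltvs, choice_scores):
    """Decomposed slate LTV: Q(s, A) = sum_i P(i | s, A) * Q_bar(s, i).

    item_ltvs:     per-item long-term values Q_bar(s, i) for the items on the slate.
    choice_scores: unnormalized user-choice scores (e.g. from a conditional logit
                   model); a "no click" option can be appended by the caller.
    """
    item_ltvs = np.asarray(item_ltvs, dtype=float)
    choice_scores = np.asarray(choice_scores, dtype=float)
    choice_probs = choice_scores / choice_scores.sum()  # P(i | s, A)
    return float(np.dot(choice_probs, item_ltvs))       # linear in the slate size

# Three items on a slate: the slate's LTV follows directly from the item-wise LTVs.
print(slate_q_value(item_ltvs=[4.0, 2.5, 1.0], choice_scores=[0.5, 0.3, 0.2]))  # 2.95
```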

Here, we use the interest evolution environment from the RecSim library (GitHub repo) to train our RL agents.
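
For reference, this is roughly how the interest evolution environment is instantiated with RecSim's standard API; the exact configuration used in our notebooks may differ:

```python
from recsim.environments import interest_evolution

# Illustrative configuration values.
env_config = {
    'num_candidates': 10,      # documents available to the recommender each step
    'slate_size': 2,           # number of items shown to the simulated user
    'resample_documents': True,
    'seed': 0,
}

env = interest_evolution.create_environment(env_config)
observation = env.reset()
# The observation contains 'user', 'doc', and 'response' fields describing the simulated state.
```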


Important Links

  1. Problem Formulation Document
  2. Exploratory Notebook on the interest evolution environment
  3. Notebook comparing RL techniques

Contributors

Collin Prather and Shishir Kumar are Master's students in Data Science at the University of San Francisco.

Thanks to Prof. Brian Spiering for introducing us to the wonderful world of RL.


As required by the recsim library, this repo uses Python 3.6.
