We offer a suite of prompt decoders, currently focused on RankVicuna. Some of the code in this repository is borrowed from RankGPT!
Current version: `0.0.7`
More instructions to be added soon!
The following is a table of our models hosted on Hugging Face:
| Model Name | Hugging Face Identifier/Link |
|------------|------------------------------|
| RankVicuna 7B - V1 | [castorini/rank_vicuna_7b_v1](https://huggingface.co/castorini/rank_vicuna_7b_v1) |
| RankVicuna 7B - V1 - No Data Augmentation | [castorini/rank_vicuna_7b_v1_noda](https://huggingface.co/castorini/rank_vicuna_7b_v1_noda) |
| RankVicuna 7B - V1 - FP16 | [castorini/rank_vicuna_7b_v1_fp16](https://huggingface.co/castorini/rank_vicuna_7b_v1_fp16) |
| RankVicuna 7B - V1 - No Data Augmentation - FP16 | [castorini/rank_vicuna_7b_v1_noda_fp16](https://huggingface.co/castorini/rank_vicuna_7b_v1_noda_fp16) |
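
As a quick illustration, here is a minimal sketch of loading one of the checkpoints above directly with Hugging Face `transformers`. This is not RankLLM's own reranking API; it only shows that the identifiers in the table are standard Hugging Face model IDs, and it assumes `transformers` (with a suitable PyTorch build) is installed.

```python
# Minimal sketch: load a RankVicuna checkpoint with Hugging Face transformers.
# Illustrative only; it does not use RankLLM's own wrapper classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "castorini/rank_vicuna_7b_v1"  # any identifier from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```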
If you use RankLLM, please cite the following paper: [RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models (arXiv:2309.15088)](https://arxiv.org/abs/2309.15088)
```
@ARTICLE{pradeep2023rankvicuna,
  title   = {{RankVicuna}: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models},
  author  = {Ronak Pradeep and Sahel Sharifymoghaddam and Jimmy Lin},
  year    = {2023},
  journal = {arXiv preprint arXiv:2309.15088}
}
```
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.