Stars
Code and data for "Measuring and Narrowing the Compositionality Gap in Language Models"
FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23)
Script for using LLaMA with OpenAI's retrieval plugin
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
A codebase that makes differentially private training of transformers easy.
Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization)
An Empirical Evaluation of Word Embedding Models for Subjectivity Analysis Tasks
Google Research
FaVIQ: Fact Verification from Information-seeking Questions
Adversarial Natural Language Inference Benchmark
Methods for training NLP models to ignore biased strategies
Resources for the MRQA 2019 Shared Task
A simple way to calibrate your neural network.
Code for the paper "SelectiveNet: A Deep Neural Network with an Integrated Reject Option"
Implementation of various SQuAD models