Anserini is an open-source information retrieval toolkit built on Lucene that aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. This effort grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016). See Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.
Anserini is currently based on Lucene 7.6, with a planned upgrade to Lucene 8 in the near future. Preliminary experiments suggest that query evaluation latency is much improved in Lucene 8.
Anserini requires Java 8 (note that there are known issues with Java 10 and Java 11) and Maven 3.3+. Oracle JVM is necessary to replicate our regression results; there are known issues with OpenJDK (see this and this). Build using Maven:
mvn clean package appassembler:assemble
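If only the executables are needed, the test suite can be skipped with the standard Maven flag (this is generic Maven behavior, not an Anserini-specific option):

mvn clean package appassembler:assemble -Dmaven.test.skip=true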
The eval/ directory contains evaluation tools and scripts, including trec_eval, gdeval.pl, and ndeval. Before using trec_eval, unpack and compile it as follows:
tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make
Before using ndeval, compile it as follows:
cd ndeval && make
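Once compiled, trec_eval takes a qrels file and a run file as positional arguments, with -m flags selecting the metrics to report; the paths below are placeholders:

eval/trec_eval.9.0.4/trec_eval -m map -m P.30 path/to/qrels.txt path/to/run.txt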
Anserini is designed to support experiments on various standard TREC collections out of the box. Each collection is associated with regression tests for replicability. Note that these regressions capture the "out of the box" experience, based on default parameter settings. A sketch of a typical end-to-end run appears after the list below.
- Experiments on Disks 1 & 2
- Experiments on Disks 4 & 5 (Robust04)
- Experiments on AQUAINT (Robust05)
- Experiments on the New York Times (Core17)
- Experiments on the Washington Post (Core18)
- Experiments on Wt10g
- Experiments on Gov2
- Experiments on ClueWeb09 (Category B)
- Experiments on ClueWeb12-B13
- Experiments on ClueWeb12
- Experiments on Tweets2011 (MB11 & MB12)
- Experiments on Tweets2013 (MB13 & MB14)
- Experiments on Complex Answer Retrieval v1.5 (CAR17)
- Experiments on Complex Answer Retrieval v2.0 (CAR17)
- Experiments on MS MARCO
- Experiments on AI2 Open Research
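As a rough sketch of what one of these regression experiments involves, the commands below index Disks 4 & 5 and then run BM25 over the Robust04 topics. The collection path is a placeholder, and the exact flags, index names, and topic files vary by collection and Anserini version, so defer to the individual pages above for the authoritative invocations:

# build a positional index with stored docvectors and raw documents
target/appassembler/bin/IndexCollection -collection TrecCollection \
 -generator JsoupGenerator -threads 16 -input /path/to/disk45 \
 -index lucene-index.robust04.pos+docvectors+rawdocs \
 -storePositions -storeDocvectors -storeRawDocs

# rank the Robust04 topics with BM25 and write a TREC run file
target/appassembler/bin/SearchCollection -index lucene-index.robust04.pos+docvectors+rawdocs \
 -topicreader Trec -topics src/main/resources/topics-and-qrels/topics.robust04.301-450.601-700.txt \
 -bm25 -output run.robust04.bm25.txt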
Additional regressions:
Runbooks:
- Runbooks for TREC 2018: [Anserini group] [h2oloo group]
- Runbook for ECIR 2019 paper on axiomatic semantic term matching
- Runbook for ECIR 2019 paper on cross-collection relevance feedback
- IndexUtils is a utility to interact with an index using the command line (e.g., print index statistics). Refer to target/appassembler/bin/IndexUtils -h for more details; a sample invocation appears after this list.
- MapCollections is a generic mapper framework for processing a document collection in parallel. Developers can write their own mappers for different tasks: one simple example is CountDocumentMapper, which counts the number of documents in a collection:

target/appassembler/bin/MapCollections -collection ClueWeb09Collection \
 -threads 16 -input ~/collections/web/ClueWeb09b/ClueWeb09_English_1/ \
 -mapper CountDocumentMapper -context CountDocumentMapperContext
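A sample IndexUtils invocation to print index statistics might look like the following; the -stats flag and the index name are assumptions here, so consult IndexUtils -h for the options actually available in your build:

target/appassembler/bin/IndexUtils -index lucene-index.robust04.pos+docvectors+rawdocs -stats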
Anserini was designed with Python integration in mind, for connecting with popular deep learning toolkits such as PyTorch. This is accomplished via pyjnius. The SimpleSearcher class provides a simple Python/Java bridge, shown below:
import jnius_config
jnius_config.set_classpath("target/anserini-0.5.1-SNAPSHOT-fatjar.jar")
from jnius import autoclass
JString = autoclass('java.lang.String')
JSearcher = autoclass('io.anserini.search.SimpleSearcher')
searcher = JSearcher(JString('lucene-index.robust04.pos+docvectors+rawdocs'))
hits = searcher.search(JString('hubble space telescope'))
# the docid of the 1st hit
hits[0].docid
# the internal Lucene docid of the 1st hit
hits[0].ldocid
# the score of the 1st hit
hits[0].score
# the full document of the 1st hit
hits[0].content
Anserini provides code for indexing into SolrCloud, thus providing interoperable support for test collections with both local Lucene indexes and Solr indexes. See this page for more details.
Anserini integration with Elasticsearch is coming soon! See Issue 633.
- v0.5.0: June 5, 2019 [Release Notes]
- v0.4.0: March 4, 2019 [Release Notes]
- v0.3.0: December 16, 2018 [Release Notes]
- v0.2.0: September 10, 2018 [Release Notes]
- v0.1.0: July 4, 2018 [Release Notes]
- Jimmy Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig Macdonald, Sebastiano Vigna. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the Use of Lucene for Information Retrieval Research. SIGIR 2017.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality, 10(4), Article 16, 2018.
- Wei Yang, Haotian Zhang, and Jimmy Lin. Simple Applications of BERT for Ad Hoc Document Retrieval. arXiv:1903.10972, March 2019.
- Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. Document Expansion by Query Prediction. arXiv:1904.08375, April 2019.
- Peilin Yang and Jimmy Lin. Reproducing and Generalizing Semantic Term Matching in Axiomatic Information Retrieval. ECIR 2019.
- Ruifan Yu, Yuhao Xie, and Jimmy Lin. Simple Techniques for Cross-Collection Relevance Transfer. ECIR 2019.
- Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. End-to-End Open-Domain Question Answering with BERTserini. NAACL-HLT 2019 Demos.
- Ryan Clancy, Toke Eskildsen, Nick Ruest, and Jimmy Lin. Solr Integration in the Anserini Information Retrieval Toolkit. SIGIR 2019.
- Ryan Clancy, Jaejun Lee, Zeynep Akkalyoncu Yilmaz, and Jimmy Lin. Information Retrieval Meets Scalable Text Analytics: Solr Integration with Spark. SIGIR 2019.
- Jimmy Lin and Peilin Yang. The Impact of Score Ties on Repeatability in Document Ranking. SIGIR 2019.
- Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. Critically Examining the "Neural Hype": Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models. SIGIR 2019.
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.