Improved documentation of MS MARCO passage regressions (castorini#1556)
+ BM25, doc2query, and docTTTTTquery
+ Scores didn't change - just better documentation connecting regressions to official leaderboard submissions.
lintool authored Jun 4, 2021
1 parent d81467b commit 2991421
Showing 10 changed files with 174 additions and 50 deletions.
6 changes: 5 additions & 1 deletion docs/experiments-msmarco-passage.md
@@ -230,7 +230,8 @@ That's the score of a query.
We take the average of the scores across all queries (6980 in this case), and we arrive at the score for the entire run; a short sketch of this computation follows this aside.
</details>
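
To make that averaging concrete, here is a hypothetical awk sketch of the whole MRR@10 computation. It is *not* the repository's `msmarco_passage_eval.py`; the run file name and the tab-separated field layouts (qrels as `qid 0 docid 1`, run as `qid docid rank`) are assumptions for illustration only.

```bash
# Hypothetical sketch, not the official eval script.
# Pass 1 (qrels): remember which (qid, docid) pairs are relevant.
# Pass 2 (run):   for each query, keep the best rank <= 10 of any relevant docid.
# MRR@10 is the mean reciprocal rank over all 6980 ranked queries
# (queries with no relevant hit in the top 10 contribute 0).
awk -F'\t' 'NR==FNR { rel[$1 FS $3] = 1; next }
            $3 <= 10 && rel[$1 FS $2] { if (!($1 in best) || $3 < best[$1]) best[$1] = $3 }
            END { for (q in best) sum += 1.0 / best[q];
                  printf "MRR @10: %.6f (QueriesRanked: %d)\n", sum / 6980, 6980 }' \
  collections/msmarco-passage/qrels.dev.small.tsv \
  runs/run.msmarco-passage.dev.small.tsv
```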

You can find this entry on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/) as entry "BM25 (Lucene8, tuned)", so you've just reproduced (part of) a leaderboard submission!
You can find this run on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/) as the entry named "BM25 (Lucene8, tuned)", dated 2019/06/26.
So you've just reproduced (part of) a leaderboard submission!

We can also use the official TREC evaluation tool, `trec_eval`, to compute metrics other than MRR@10.
For that we first need to convert runs and qrels files to the TREC format:
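
The exact conversion commands live in the repository's helper scripts (they fall outside the hunk shown here). Purely as an illustration of what the conversion does, the one-liner below turns a tab-separated MS MARCO run (`qid docid rank`) into the six-column TREC format, synthesizing a score from the rank since the MS MARCO format carries none (these are the "fake" scores mentioned further down). The run file name is an assumption; adjust it to whatever the retrieval step above produced.

```bash
# Illustration only -- the repository provides proper conversion scripts.
# TREC run format: qid Q0 docid rank score run_tag; the score here is made up (1/rank).
awk -F'\t' '{ printf "%s Q0 %s %s %.6f sketch\n", $1, $2, $3, 1.0/$3 }' \
  runs/run.msmarco-passage.dev.small.tsv > runs/run.msmarco-passage.dev.small.trec
```
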
@@ -297,6 +298,9 @@ Optimized for MRR@10/MAP (`k1=0.60`, `b=0.62`) | 0.1892 | 0.1972 | 0.8555

To reproduce these results, the `SearchMsmarco` class above takes `k1` and `b` parameters as command-line arguments, e.g., `-k1 0.60 -b 0.62` (note that the default setting is `k1=0.82` and `b=0.68`).
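
For example, the MRR@10/MAP-optimized setting above would be retrieved as follows; this is a sketch that reuses the index and query paths appearing later in this commit, and only the output file name is made up here:

```bash
# Sketch: same invocation pattern as the tuned run below, with the MRR@10/MAP-optimized parameters.
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
   -index indexes/lucene-index.msmarco-passage.pos+docvectors+raw \
   -queries collections/msmarco-passage/queries.dev.small.tsv \
   -k1 0.60 -b 0.62 \
   -output runs/run.msmarco-passage.bm25.mrr-tuned.tsv
```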

As mentioned above, the BM25 run with `k1=0.82`, `b=0.68` corresponds to the entry "BM25 (Lucene8, tuned)" dated 2019/06/26 on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).
The BM25 run with default parameters `k1=0.9`, `b=0.4` roughly corresponds to the entry "BM25 (Anserini)" dated 2019/04/10 (but Anserini was using Lucene 7.6 at the time).

## Reproduction Log[*](reproducibility.md)

+ Results reproduced by [@ronakice](https://github.com/ronakice) on 2019-08-12 (commit [`5b29d16`](https://github.com/castorini/anserini/commit/5b29d1654abc5e8a014c2230da990ab2f91fb340))
32 changes: 29 additions & 3 deletions docs/regressions-msmarco-passage-doc2query.md
@@ -74,13 +74,39 @@ With the above commands, you should be able to reproduce the following results:

MAP | BM25 (Default)| +RM3 | BM25 (Tuned)| +RM3 |
:---------------------------------------|-----------|-----------|-----------|-----------|
[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.2270 | 0.2028 | 0.2293 | 0.2077 |
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.2270 | 0.2028 | 0.2293 | 0.2077 |


R@1000 | BM25 (Default)| +RM3 | BM25 (Tuned)| +RM3 |
:---------------------------------------|-----------|-----------|-----------|-----------|
[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.8900 | 0.8916 | 0.8911 | 0.8957 |
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.8900 | 0.8916 | 0.8911 | 0.8957 |

The setting "default" refers to the default BM25 settings of `k1=0.9`, `b=0.4`, while "tuned" refers to the tuned setting of `k1=0.82`, `b=0.72` _on the original passages_.
See [this page](experiments-msmarco-passage.md) for more details.
Note that these results are slightly different from the above referenced page because those experiments make up "fake" scores when converting runs from MS MARCO format into TREC format for evaluation by `trec_eval`.

Note that the above runs are generated with `SearchCollection` in the TREC format, which, due to tie-breaking effects, gives slightly different results from `SearchMsmarco` in the MS MARCO format.
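
As a toy illustration of the tie-breaking effect (not an Anserini command): when two passages receive exactly the same BM25 score, the secondary sort key decides their order, and if that flip moves a relevant passage across the rank-10 boundary, MRR@10 shifts slightly.

```bash
# docA and docB tie on score; with equal scores, the docid breaks the tie here.
# A different tool (or output format) may break the same tie the other way.
printf 'docB\t7.125\ndocA\t7.125\ndocC\t6.990\n' | sort -t $'\t' -k2,2gr -k1,1
# docA    7.125
# docB    7.125
# docC    6.990
```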

The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Tuned)" above (`k1=0.82`, `b=0.68`):

```bash
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage-doc2query.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.82 -b 0.68 \
-output runs/run.msmarco-passage-doc2query

$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage-doc2query

#####################
MRR @10: 0.2213412471005586
QueriesRanked: 6980
#####################
```

Note that this run does _not_ correspond to the scores reported in the following paper (which introduced doc2query):

> Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. [Document Expansion by Query Prediction.](https://arxiv.org/abs/1904.08375) arXiv:1904.08375, 2019.

The scores reported in the above paper refer to the entry "BM25 (Anserini) + doc2query" dated 2019/04/10 on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).
The paper/leaderboard run reports 0.215 MRR@10, which is slightly lower than the "BM25 (Tuned)" regression run above, due to an earlier version of Lucene (7.6) and use of default BM25 parameters.
36 changes: 16 additions & 20 deletions docs/regressions-msmarco-passage-docTTTTTquery.md
Expand Up @@ -70,42 +70,38 @@ With the above commands, you should be able to reproduce the following results:

MAP | BM25 (Default)| +RM3 | BM25 (Tuned)| +RM3 |
:---------------------------------------|-----------|-----------|-----------|-----------|
[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.2805 | 0.2243 | 0.2850 | 0.2266 |
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.2805 | 0.2243 | 0.2850 | 0.2266 |


R@1000 | BM25 (Default)| +RM3 | BM25 (Tuned)| +RM3 |
:---------------------------------------|-----------|-----------|-----------|-----------|
[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.9470 | 0.9463 | 0.9471 | 0.9479 |
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.9470 | 0.9463 | 0.9471 | 0.9479 |

The setting "default" refers to the default BM25 settings of `k1=0.9`, `b=0.4`, while "tuned" refers to the tuned setting of `k1=0.82`, `b=0.72` _on the original passages_.
See [this page](experiments-msmarco-passage.md) for more details.

To reproduce the _exact_ conditions for a leaderboard submission, retrieve using the following command:
Note that the above runs are generated with `SearchCollection` in the TREC format, which, due to tie-breaking effects, gives slightly different results from `SearchMsmarco` in the MS MARCO format.

```bash
wget https://www.dropbox.com/s/hq6xjhswiz60siu/queries.dev.small.tsv

sh target/appassembler/bin/SearchMsmarco -threads 8 \
-index indexes/lucene-index.msmarco-passage-docTTTTTquery.pos+docvectors+raw \
-queries queries.dev.small.tsv \
-output runs/run.msmarco-passage-docTTTTTquery -hits 1000
```

Evaluate using the MS MARCO eval script:
The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Tuned)" above (`k1=0.82`, `b=0.68`), which corresponds to the entry "docTTTTTquery" dated 2019/11/27 on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/):

```bash
wget https://www.dropbox.com/s/khsplt2fhqwjs0v/qrels.dev.small.tsv
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage-docTTTTTquery.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.82 -b 0.68 \
-output runs/run.msmarco-passage-docTTTTTquery

python tools/scripts/msmarco/msmarco_passage_eval.py qrels.dev.small.tsv runs/run.msmarco-passage-docTTTTTquery
```

The results should be:
$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage-docTTTTTquery

```
#####################
MRR @10: 0.27680089370991834
QueriesRanked: 6980
#####################
```

Which matches the score described in [the docTTTTTquery repo](https://github.com/castorini/docTTTTTquery) and also on the official [MS MARCO leaderboard](http://www.msmarco.org/).
This corresponds to the score reported in the following paper:

> Rodrigo Nogueira and Jimmy Lin. [From doc2query to docTTTTTquery.](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf) December 2019.

It is also identical to the score reported in [the docTTTTTquery repo](https://github.com/castorini/docTTTTTquery).
44 changes: 41 additions & 3 deletions docs/regressions-msmarco-passage.md
@@ -99,13 +99,51 @@ With the above commands, you should be able to reproduce the following results:

MAP | BM25 (Default)| +RM3 | +Ax | +PRF | BM25 (Tuned)| +RM3 | +Ax | +PRF |
:---------------------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.1926 | 0.1661 | 0.1625 | 0.1520 | 0.1958 | 0.1762 | 0.1699 | 0.1582 |
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.1926 | 0.1661 | 0.1625 | 0.1520 | 0.1958 | 0.1762 | 0.1699 | 0.1582 |


R@1000 | BM25 (Default)| +RM3 | +Ax | +PRF | BM25 (Tuned)| +RM3 | +Ax | +PRF |
:---------------------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.8526 | 0.8606 | 0.8747 | 0.8537 | 0.8573 | 0.8687 | 0.8809 | 0.8561 |
[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)| 0.8526 | 0.8606 | 0.8747 | 0.8537 | 0.8573 | 0.8687 | 0.8809 | 0.8561 |

The setting "default" refers to the default BM25 settings of `k1=0.9`, `b=0.4`, while "tuned" refers to the tuned setting of `k1=0.82`, `b=0.68`.
See [this page](experiments-msmarco-passage.md) for more details.
Note that these results are slightly different from the above referenced page because those experiments make up "fake" scores when converting runs from MS MARCO format into TREC format for evaluation by `trec_eval`.
Note that these results are slightly different from the above referenced page because those experiments used `SearchMsmarco` to generate runs in the MS MARCO format, and then converted them into the TREC format, which is slightly lossy (due to tie-breaking effects).

To generate runs corresponding to the submissions on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/), follow the instructions below:

The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Tuned)" above (`k1=0.82`, `b=0.68`), which corresponds to the entry "BM25 (Lucene8, tuned)" dated 2019/06/26 on the leaderboard:

```bash
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.82 -b 0.68 \
-output runs/run.msmarco-passage.bm25.tuned.tsv

$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage.bm25.tuned.tsv

#####################
MRR @10: 0.18741227770955546
QueriesRanked: 6980
#####################
```

The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Default)" above (`k1=0.9`, `b=0.4`), which roughly corresponds to the entry "BM25 (Anserini)" dated 2019/04/10 on the leaderboard (but Anserini was using Lucene 7.6 at the time):

```bash
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.9 -b 0.4 \
-output runs/run.msmarco-passage.bm25.default.tsv

$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage.bm25.default.tsv

#####################
MRR @10: 0.18398616227770961
QueriesRanked: 6980
#####################
```
@@ -47,4 +47,30 @@ ${effectiveness}

The setting "default" refers to the default BM25 settings of `k1=0.9`, `b=0.4`, while "tuned" refers to the tuned setting of `k1=0.82`, `b=0.72` _on the original passages_.
See [this page](experiments-msmarco-passage.md) for more details.
Note that these results are slightly different from the above referenced page because those experiments make up "fake" scores when converting runs from MS MARCO format into TREC format for evaluation by `trec_eval`.

Note that the above runs are generated with `SearchCollection` in the TREC format, which, due to tie-breaking effects, gives slightly different results from `SearchMsmarco` in the MS MARCO format.

The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Tuned)" above (`k1=0.82`, `b=0.68`):

```bash
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage-doc2query.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.82 -b 0.68 \
-output runs/run.msmarco-passage-doc2query

$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage-doc2query

#####################
MRR @10: 0.2213412471005586
QueriesRanked: 6980
#####################
```

Note that this run does _not_ correspond to the scores reported in the following paper (which introduced doc2query):

> Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. [Document Expansion by Query Prediction.](https://arxiv.org/abs/1904.08375) arXiv:1904.08375, 2019.

The scores reported in the above paper refer to the entry "BM25 (Anserini) + doc2query" dated 2019/04/10 on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).
The paper/leaderboard run reports 0.215 MRR@10, which is slightly lower than the "BM25 (Tuned)" regression run above, due to an earlier version of Lucene (7.6) and use of default BM25 parameters.
@@ -44,32 +44,28 @@ ${effectiveness}
The setting "default" refers to the default BM25 settings of `k1=0.9`, `b=0.4`, while "tuned" refers to the tuned setting of `k1=0.82`, `b=0.72` _on the original passages_.
See [this page](experiments-msmarco-passage.md) for more details.

To reproduce the _exact_ conditions for a leaderboard submission, retrieve using the following command:
Note that the above runs are generated with `SearchCollection` in the TREC format, which, due to tie-breaking effects, gives slightly different results from `SearchMsmarco` in the MS MARCO format.

```bash
wget https://www.dropbox.com/s/hq6xjhswiz60siu/queries.dev.small.tsv

sh target/appassembler/bin/SearchMsmarco -threads 8 \
-index indexes/lucene-index.msmarco-passage-docTTTTTquery.pos+docvectors+raw \
-queries queries.dev.small.tsv \
-output runs/run.msmarco-passage-docTTTTTquery -hits 1000
```

Evaluate using the MS MARCO eval script:
The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Tuned)" above (`k1=0.82`, `b=0.68`), which corresponds to the entry "docTTTTTquery" dated 2019/11/27 on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/):

```bash
wget https://www.dropbox.com/s/khsplt2fhqwjs0v/qrels.dev.small.tsv
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage-docTTTTTquery.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.82 -b 0.68 \
-output runs/run.msmarco-passage-docTTTTTquery

python tools/scripts/msmarco/msmarco_passage_eval.py qrels.dev.small.tsv runs/run.msmarco-passage-docTTTTTquery
```
$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage-docTTTTTquery

The results should be:

```
#####################
MRR @10: 0.27680089370991834
QueriesRanked: 6980
#####################
```

Which matches the score described in [the docTTTTTquery repo](https://github.com/castorini/docTTTTTquery) and also on the official [MS MARCO leaderboard](http://www.msmarco.org/).
This corresponds to the score reported in the following paper:

> Rodrigo Nogueira and Jimmy Lin. [From doc2query to docTTTTTquery.](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf) December 2019.

It is also identical to the score reported in [the docTTTTTquery repo](https://github.com/castorini/docTTTTTquery).
40 changes: 39 additions & 1 deletion src/main/resources/docgen/templates/msmarco-passage.template
@@ -44,4 +44,42 @@ ${effectiveness}

The setting "default" refers to the default BM25 settings of `k1=0.9`, `b=0.4`, while "tuned" refers to the tuned setting of `k1=0.82`, `b=0.68`.
See [this page](experiments-msmarco-passage.md) for more details.
Note that these results are slightly different from the above referenced page because those experiments make up "fake" scores when converting runs from MS MARCO format into TREC format for evaluation by `trec_eval`.
Note that these results are slightly different from the above referenced page because those experiments used `SearchMsmarco` to generate runs in the MS MARCO format, and then converted them into the TREC format, which is slightly lossy (due to tie-breaking effects).

To generate runs corresponding to the submissions on the [MS MARCO Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/), follow the instructions below:

The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Tuned)" above (`k1=0.82`, `b=0.68`), which corresponds to the entry "BM25 (Lucene8, tuned)" dated 2019/06/26 on the leaderboard:

```bash
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.82 -b 0.68 \
-output runs/run.msmarco-passage.bm25.tuned.tsv

$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage.bm25.tuned.tsv

#####################
MRR @10: 0.18741227770955546
QueriesRanked: 6980
#####################
```

The following command uses `SearchMsmarco` to generate the run denoted "BM25 (Default)" above (`k1=0.9`, `b=0.4`), which roughly corresponds to the entry "BM25 (Anserini)" dated 2019/04/10 on the leaderboard (but Anserini was using Lucene 7.6 at the time):

```bash
$ sh target/appassembler/bin/SearchMsmarco -hits 1000 -threads 8 \
-index indexes/lucene-index.msmarco-passage.pos+docvectors+raw \
-queries collections/msmarco-passage/queries.dev.small.tsv \
-k1 0.9 -b 0.4 \
-output runs/run.msmarco-passage.bm25.default.tsv

$ python tools/scripts/msmarco/msmarco_passage_eval.py \
collections/msmarco-passage/qrels.dev.small.tsv runs/run.msmarco-passage.bm25.default.tsv

#####################
MRR @10: 0.18398616227770961
QueriesRanked: 6980
#####################
```
@@ -45,7 +45,7 @@ index_stats:
documents (non-empty): 8841823
total terms: 739691803
topics:
- name: "[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)"
- name: "[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)"
path: topics.msmarco-passage.dev-subset.txt
qrel: qrels.msmarco-passage.dev-subset.txt
models:
@@ -45,7 +45,7 @@ index_stats:
documents (non-empty): 8841823
total terms: 1986612263
topics:
- name: "[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)"
- name: "[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)"
path: topics.msmarco-passage.dev-subset.txt
qrel: qrels.msmarco-passage.dev-subset.txt
models:
2 changes: 1 addition & 1 deletion src/main/resources/regression/msmarco-passage.yaml
@@ -45,7 +45,7 @@ index_stats:
documents (non-empty): 8841823
total terms: 352316036
topics:
- name: "[MS MARCO Passage Ranking: Dev Queries](https://github.com/microsoft/MSMARCO-Passage-Ranking)"
- name: "[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)"
path: topics.msmarco-passage.dev-subset.txt
qrel: qrels.msmarco-passage.dev-subset.txt
models:
