@leonhardt

Efficient Neural Ranking Using Forward Indexes

Jurek Leonhardt, Koustav Rudra, Megha Khosla, Abhijit Anand, and Avishek Anand. Proceedings of the ACM Web Conference 2022, pages 266–276. New York, NY, USA, Association for Computing Machinery, (2022)
DOI: 10.1145/3485447.3511955

Abstract

Neural document ranking approaches, specifically transformer models, have achieved impressive gains in ranking performance. However, query processing using such over-parameterized models is both resource and time intensive. In this paper, we propose the Fast-Forward index – a simple vector forward index that facilitates ranking documents using interpolation of lexical and semantic scores – as a replacement for contextual re-rankers and dense indexes based on nearest neighbor search. Fast-Forward indexes rely on efficient sparse models for retrieval and merely look up pre-computed dense transformer-based vector representations of documents and passages in constant time for fast CPU-based semantic similarity computation during query processing. We propose index pruning and theoretically grounded early stopping techniques to improve the query processing throughput. We conduct extensive large-scale experiments on TREC-DL datasets and show improvements over hybrid indexes in performance and query processing efficiency using only CPUs. Fast-Forward indexes can provide superior ranking performance using interpolation due to the complementary benefits of lexical and semantic similarities.
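
A minimal sketch of the interpolation step described in the abstract, assuming a toy in-memory forward index; the names (forward_index, interpolate_scores), the interpolation weight alpha, and the toy vectors are illustrative assumptions, not the paper's implementation:

    import numpy as np

    # Hypothetical forward index: document IDs mapped to dense vectors that
    # were pre-computed offline by a transformer-based document encoder.
    forward_index = {
        "d1": np.array([0.12, 0.85, 0.33]),
        "d2": np.array([0.91, 0.05, 0.47]),
    }

    def interpolate_scores(query_vector, lexical_scores, alpha=0.5):
        """Re-score candidates retrieved by a sparse (lexical) model.

        lexical_scores maps doc_id -> lexical score (e.g. BM25). The semantic
        score is a dot product with the pre-computed document vector, looked
        up from the forward index in constant time; no nearest-neighbor
        search or GPU-based re-ranking is needed at query time.
        """
        scores = {}
        for doc_id, lex_score in lexical_scores.items():
            sem_score = float(np.dot(query_vector, forward_index[doc_id]))
            scores[doc_id] = alpha * lex_score + (1 - alpha) * sem_score
        return scores

    # Usage: lexical candidate scores and an encoded query vector (toy values).
    print(interpolate_scores(np.array([0.4, 0.6, 0.1]),
                             {"d1": 11.2, "d2": 9.7},
                             alpha=0.6))

The weight alpha balances the lexical and semantic contributions; the index pruning and early stopping techniques the paper proposes to improve query processing throughput are omitted from this sketch.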

Links and Resources

Tags

Community

  • @leonhardt
  • @dblp