
Other publications by persons with the same name

Pseudo Relevance Feedback with Deep Language Models and Dense Retrievers: Successes and Pitfalls., , , , and . ACM Trans. Inf. Syst., 41 (3): 62:1-62:40 (2023)
Dealing with Typos for BERT-based Passage Retrieval and Ranking., and . EMNLP (1), pp. 2836-2842. Association for Computational Linguistics, (2021)
Selecting which Dense Retriever to use for Zero-Shot Search., , , , and . SIGIR-AP, pp. 223-233. ACM, (2023)
Open-source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking., , , and . EMNLP (Findings), pp. 8807-8817. Association for Computational Linguistics, (2023)
The Impact of Auxiliary Patient Data on Automated Chest X-Ray Report Generation and How to Incorporate It., , , and . CoRR, (2024)
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models., , , and . CoRR, (2023)
Improving Query Representations for Dense Retrieval with Pseudo Relevance Feedback: A Reproducibility Study., , , , , and . ECIR (1), volume 13185 of Lecture Notes in Computer Science, pp. 599-612. Springer, (2022)
Reduce, Reuse, Recycle: Green Information Retrieval Research., , and . SIGIR, pp. 2825-2837. ACM, (2022)
Exploring the Representation Power of SPLADE Models., , and . ICTIR, pp. 143-147. ACM, (2023)
Large Language Models for Stemming: Promises, Pitfalls and Failures., , and . CoRR, (2024)