
In-Context Retrieval-Augmented Language Models

(2023). arXiv:2302.00083. Accepted for publication in Transactions of the Association for Computational Linguistics (TACL); pre-MIT Press publication version.


Other publications by persons with the same name

What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary. ACL (1), pp. 2481-2498. Association for Computational Linguistics, (2023)

SenseBERT: Driving Some Sense into BERT. ACL, pp. 4656-4667. Association for Computational Linguistics, (2020)

Transformer Language Models without Positional Encodings Still Learn Positional Information. EMNLP (Findings), pp. 1382-1390. Association for Computational Linguistics, (2022)

Making Retrieval-Augmented Language Models Robust to Irrelevant Context. CoRR, (2023)

What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary. CoRR, (2022)

Parallel Context Windows for Large Language Models. ACL (1), pp. 6383-6402. Association for Computational Linguistics, (2023)

Coreference Resolution without Span Representations. ACL/IJCNLP (2), pp. 14-19. Association for Computational Linguistics, (2021)

Standing on the Shoulders of Giant Frozen Language Models. CoRR, (2022)

How Optimal is Greedy Decoding for Extractive Question Answering? CoRR, (2021)

Parallel Context Windows Improve In-Context Learning of Large Language Models. CoRR, (2022)