
Please choose a person to relate this publication to.

To distinguish between persons with the same name, the academic degree and the title of an important publication will be displayed.


Other publications by persons with the same name

Multitask Prompted Training Enables Zero-Shot Task Generalization, , , , , , , , , и 30 other автор(ы). International Conference on Learning Representations, (2022)Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models, , , , , , , , , и 441 other автор(ы). (2022)cite arxiv:2206.04615Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench.Can Transformers Learn to Solve Problems Recursively?, , , , и . CoRR, (2023)Large language models are not zero-shot communicators., , , , , и . CoRR, (2022)Multitask Prompted Training Enables Zero-Shot Task Generalization., , , , , , , , , и 31 other автор(ы). CoRR, (2021)Crosslingual Generalization through Multitask Finetuning., , , , , , , , , и 9 other автор(ы). CoRR, (2022)KMMLU: Measuring Massive Multitask Language Understanding in Korean., , , , , , , , и . CoRR, (2024)BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting., , , , , , , , , и 4 other автор(ы). CoRR, (2022)Fooling MOSS Detection with Pretrained Language Models., и . CIKM, стр. 2933-2943. ACM, (2022)RWKV: Reinventing RNNs for the Transformer Era., , , , , , , , , и 22 other автор(ы). EMNLP (Findings), стр. 14048-14077. Association for Computational Linguistics, (2023)