
Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN.

BlackboxNLP@EMNLP, page 136-148. Association for Computational Linguistics, (2021)


Other publications of authors with the same name

Augmented Language Models: a Survey (and 3 other author(s)). Trans. Mach. Learn. Res., (2023)

Fine-tuning on Clean Data for End-to-End Speech Translation: FBK @ IWSLT 2018. IWSLT, page 147-152. International Conference on Spoken Language Translation, (2018)

Can discrete information extraction prompts generalize across language models? CoRR, (2023)

Cross-Domain Image Captioning with Discriminative Finetuning. CVPR, page 6935-6944. IEEE, (2023)

Enhancing Transformer for End-to-end Speech-to-Text Translation. MTSummit (1), page 21-31. European Association for Machine Translation, (2019)

CNNs found to jump around more skillfully than RNNs: Compositional Generalization in Seq2seq Convolutional Networks. ACL (1), page 3919-3923. Association for Computational Linguistics, (2019)

Toolformer: Language Models Can Teach Themselves to Use Tools. CoRR, (2023)

Interpretable agent communication from scratch (with a generic visual processor emerging on the side). NeurIPS, page 26937-26949. (2021)

Robustness of Named-Entity Replacements for In-Context Learning. EMNLP (Findings), page 10914-10931. Association for Computational Linguistics, (2023)