
Other publications by persons with the same name

- Deep Speech 2: End-to-End Speech Recognition in English and Mandarin, and 24 other authors. CoRR, (2015)
- Scale Efficiently: Insights from Pretraining and Finetuning Transformers. ICLR, OpenReview.net, (2022)
- Deep Speech 2: End-to-End Speech Recognition in English and Mandarin, and 29 other authors. ICML, volume 48 of JMLR Workshop and Conference Proceedings, pp. 173-182. JMLR.org, (2016)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. (October 2019). cite arxiv:1910.10683
- Character-Aware Models Improve Visual Text Rendering. ACL (1), pp. 16270-16297. Association for Computational Linguistics, (2023)
- Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling? EMNLP (Findings), pp. 12342-12364. Association for Computational Linguistics, (2023)
- UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining. ICLR, OpenReview.net, (2023)
- Do Transformer Modifications Transfer Across Implementations and Applications?, and 6 other authors. EMNLP (1), pp. 5758-5773. Association for Computational Linguistics, (2021)
- Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning. ICLR (Poster), OpenReview.net, (2018)
- Mixed Precision Training, and 1 other author. (2017). cite arxiv:1710.03740. Comment: Published as a conference paper at ICLR 2018.