Other publications by persons with the same name

Self-Training and Pre-Training are Complementary for Speech Recognition. ICASSP, pp. 3030-3034. IEEE, (2021)
Reservoir Transformer. CoRR, (2020)
Masked Autoencoders that Listen. NeurIPS, (2022)
Simple and Effective Unsupervised Speech Synthesis. INTERSPEECH, pp. 843-847. ISCA, (2022)
Unified Speech-Text Pre-training for Speech Translation and Recognition. ACL (1), pp. 1488-1499. Association for Computational Linguistics, (2022)
Reservoir Transformers. ACL/IJCNLP (1), pp. 4294-4309. Association for Computational Linguistics, (2021)
Pre-trained language model representations for language generation. NAACL-HLT (1), pp. 4052-4059. Association for Computational Linguistics, (2019)
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations. ICLR, OpenReview.net, (2020)
wav2vec: Unsupervised Pre-Training for Speech Recognition. INTERSPEECH, pp. 3465-3469. ISCA, (2019)
Large-Scale Self- and Semi-Supervised Learning for Speech Translation. Interspeech, pp. 2242-2246. ISCA, (2021)