
Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language

https://ai.facebook.com/blog/ai-self-supervised-learning-data2vec/ (2022). arXiv:2212.07525.


Other publications by persons with the same name

Reservoir Transformer. CoRR (2020)
Masked Autoencoders that Listen. NeurIPS (2022)
Self-Training and Pre-Training are Complementary for Speech Recognition. ICASSP, pp. 3030-3034. IEEE (2021)
Simple and Effective Unsupervised Speech Synthesis. INTERSPEECH, pp. 843-847. ISCA (2022)
Scaling Speech Technology to 1,000+ Languages. CoRR (2023)
Adaptive Input Representations for Neural Language Modeling. ICLR (Poster). OpenReview.net (2019)
Towards End-to-End Unsupervised Speech Recognition. SLT, pp. 221-228. IEEE (2022)
A Comparison of Discrete Latent Variable Models for Speech Representation Learning. ICASSP, pp. 3050-3054. IEEE (2021)
Improved Language Identification Through Cross-Lingual Self-Supervised Learning. ICASSP, pp. 6877-6881. IEEE (2022)
Measuring the Impact of Domain Factors in Self-Supervised Pre-Training. ICASSP Workshops, pp. 1-5. IEEE (2023)