Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed.

Other publications by persons with the same name

Token2vec: A Joint Self-Supervised Pre-Training Framework Using Unpaired Speech and Text., , , and . ICASSP, pp. 1-5. IEEE, (2023)
SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training., , , , , , and . EMNLP, pp. 1663-1676. Association for Computational Linguistics, (2022)
Improving Attention-based End-to-end ASR by Incorporating an N-gram Neural Network., and . ISCSLP, pp. 1-5. IEEE, (2021)
SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing., , , , , , , , , and 4 other author(s). ACL (1), pp. 5723-5738. Association for Computational Linguistics, (2022)
SD-Eval: A Benchmark Dataset for Spoken Dialogue Understanding Beyond Words., , , , , , , , and . CoRR, (2024)
Self-Supervised Acoustic Word Embedding Learning via Correspondence Transformer Encoder., , , and . INTERSPEECH, pp. 2988-2992. ISCA, (2023)
CoBERT: Self-Supervised Speech Representation Learning Through Code Representation Learning., , , , and . INTERSPEECH, pp. 2978-2982. ISCA, (2023)
The YiTrans Speech Translation System for IWSLT 2022 Offline Shared Task., and . IWSLT@ACL, pp. 158-168. Association for Computational Linguistics, (2022)
Multi-View Self-Attention Based Transformer for Speaker Recognition., , , , , , , and . ICASSP, pp. 6732-6736. IEEE, (2022)
Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data., , , , , , , , , and . INTERSPEECH, pp. 2658-2662. ISCA, (2022)