Author of the publication

Multi-View Self-Attention Based Transformer for Speaker Recognition.

ICASSP, pages 6732-6736. IEEE, (2022)


Other publications of authors with the same name

SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training. EMNLP, pages 1663-1676. Association for Computational Linguistics, (2022)

Token2vec: A Joint Self-Supervised Pre-Training Framework Using Unpaired Speech and Text. ICASSP, pages 1-5. IEEE, (2023)

SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing. CoRR, (2021)

SD-Eval: A Benchmark Dataset for Spoken Dialogue Understanding Beyond Words. CoRR, (2024)

SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing. ACL (1), pages 5723-5738. Association for Computational Linguistics, (2022)

Improving Attention-based End-to-end ASR by Incorporating an N-gram Neural Network. ISCSLP, pages 1-5. IEEE, (2021)

CoBERT: Self-Supervised Speech Representation Learning Through Code Representation Learning. INTERSPEECH, pages 2978-2982. ISCA, (2023)

Self-Supervised Acoustic Word Embedding Learning via Correspondence Transformer Encoder. INTERSPEECH, pages 2988-2992. ISCA, (2023)

Text-Guided HuBERT: Self-Supervised Speech Pre-Training via Generative Adversarial Networks. IEEE Signal Process. Lett., (2024)

Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data. INTERSPEECH, pages 2658-2662. ISCA, (2022)