
Self-Supervised Models of Audio Effectively Explain Human Cortical Responses to Speech.

, , and . ICML, volume 162 of Proceedings of Machine Learning Research, pp. 21927-21944. PMLR, (2022)


Other publications by persons with the same name

Incorporating Context into Language Encoding Models for fMRI., and . NeurIPS, pp. 6629-6638. (2018)

Multi-timescale Representation Learning in LSTM Language Models., , , and . ICLR, OpenReview.net, (2021)

How Many Bytes Can You Take Out Of Brain-To-Text Decoding?, , , , and . CoRR, (2024)

Self-Supervised Models of Audio Effectively Explain Human Cortical Responses to Speech., , and . ICML, volume 162 of Proceedings of Machine Learning Research, pp. 21927-21944. PMLR, (2022)

Low-dimensional Structure in the Space of Language Representations is Reflected in Brain Responses., , , and . NeurIPS, pp. 8332-8344. (2021)

Approximating Stacked and Bidirectional Recurrent Architectures with the Delayed Recurrent Neural Network., , , , , and . ICML, volume 119 of Proceedings of Machine Learning Research, pp. 9648-9658. PMLR, (2020)

Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech., , , , , and . NeurIPS, (2020)

Selecting Informative Contexts Improves Language Model Fine-tuning., , , and . ACL/IJCNLP (1), pp. 1072-1085. Association for Computational Linguistics, (2021)

Humans and language models diverge when predicting repeating text., , and . CoRR, (2023)

Efficient, Sparse Representation of Manifold Distance Matrices for Classical Scaling., and . CVPR, pp. 2850-2858. Computer Vision Foundation / IEEE Computer Society, (2018)