Author of the publication

Quand être absent de mBERT n'est que le commencement : Gérer de nouvelles langues à l'aide de modèles de langues multilingues (When Being Unseen from mBERT is just the Beginning : Handling New Languages With Multilingual Language Models).

, , , and . TALN-RECITAL, pages 450-451. ATALA, (2022)


Other publications of authors with the same name

Modélisation et analyse des coordinations elliptiques par l'exploitation dynamique des forêts de dérivation., and . TALN (Posters), pages 609-618. ATALA, (2006)

The French Social Media Bank: a Treebank of Noisy User Generated Content., , , , and . COLING, pages 2441-2458. Indian Institute of Technology Bombay, (2012)

Contextualized Diachronic Word Representations., and . LChange@ACL, pages 35-47. Association for Computational Linguistics, (2019)

Enhancing BERT for Lexical Normalization., , and . W-NUT@EMNLP, pages 297-306. Association for Computational Linguistics, (2019)

From Noisy Questions to Minecraft Texts: Annotation Challenges in Extreme Syntax Scenario., , and . NUT@COLING, pages 13-23. The COLING 2016 Organizing Committee, (2016)

Treebanking User-Generated Content: A Proposal for a Unified Representation in Universal Dependencies., , , , , , , , , and . LREC, pages 5240-5250. European Language Resources Association, (2020)

Cloaked Classifiers: Pseudonymization Strategies on Sensitive Classification Tasks., , , and . CoRR, (2024)

Phonetic Normalization for Machine Translation of User Generated Content., , and . W-NUT@EMNLP, pages 407-416. Association for Computational Linguistics, (2019)

Can Character-based Language Models Improve Downstream Task Performances In Low-Resource And Noisy Language Scenarios?, , and . W-NUT, pages 423-436. Association for Computational Linguistics, (2021)