Train No Evil: Selective Masking for Task-Guided Pre-Training.

EMNLP (1), pages 6966-6974. Association for Computational Linguistics, (2020).

Other publications of authors with the same name

Knowledge Inheritance for Pre-trained Language Models. CoRR, (2021)
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-level Backdoor Attacks. Mach. Intell. Res., 20 (2): 180-193 (April 2023)
SHUOWEN-JIEZI: Linguistically Informed Tokenizers For Chinese Language Model Pretraining. CoRR, (2021)
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises. ACL (1), pages 8272-8285. Association for Computational Linguistics, (2023)
TransNet: Translation-Based Network Representation Learning for Social Relation Extraction. IJCAI, pages 2864-2870. ijcai.org, (2017)
A Unified Framework for Community Detection and Network Representation Learning. IEEE Trans. Knowl. Data Eng., 31 (6): 1051-1065 (2019)
Finding Skill Neurons in Pre-trained Transformer-based Language Models. EMNLP, pages 11132-11152. Association for Computational Linguistics, (2022)
Adversarial Language Games for Advanced Natural Language Intelligence. AAAI, pages 14248-14256. AAAI Press, (2021)
Plug-and-Play Document Modules for Pre-trained Models. ACL (1), pages 15713-15729. Association for Computational Linguistics, (2023)
BMInf: An Efficient Toolkit for Big Model Inference and Tuning. ACL (demo), pages 224-230. Association for Computational Linguistics, (2022)