Author of the publication

A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss.

SIGDIAL, pages 469-476. Association for Computational Linguistics, 2021.


Other publications of authors with the same name

Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task? CoRR, 2021.

Practical Takes on Federated Learning with Pretrained Language Models. EACL (Findings), pages 454-471. Association for Computational Linguistics, 2023.

Learning an Unreferenced Metric for Online Dialogue Evaluation. ACL, pages 2430-2441. Association for Computational Linguistics, 2020.

The RLLChatbot: a solution to the ConvAI challenge. CoRR, 2018.

Local Structure Matters Most: Perturbation Study in NLU. ACL (Findings), pages 3712-3731. Association for Computational Linguistics, 2022.

Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness. AAAI, pages 14593-14601. AAAI Press, 2023.

Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes. EMNLP (Findings), pages 5375-5396. Association for Computational Linguistics, 2022.

Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models. EMNLP (Findings), pages 4305-4319. Association for Computational Linguistics, 2023.

On Task-Level Dialogue Composition of Generative Transformer Model. Insights, pages 41-47. Association for Computational Linguistics, 2020.

Memory Augmented Optimizers for Deep Learning. CoRR, 2021.