Other publications by persons with the same name

Collaborative Sequence Prediction for Sequential Recommender., and . CIKM, pp. 2239-2242. ACM, (2017)
Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning., , , , and . ACL (1), pp. 745-758. Association for Computational Linguistics, (2022)
Knowledge Distillation with Perturbed Loss: From a Vanilla Teacher to a Proxy Teacher., , , , , , and . KDD, pp. 4278-4289. ACM, (2024)
Improving Multi-turn Dialogue Modelling with Utterance ReWriter., , , , , , and . ACL (1), pp. 22-31. Association for Computational Linguistics, (2019)
Cold-Start Data Selection for Few-shot Language Model Fine-tuning: A Prompt-Based Uncertainty Propagation Approach., , , , , and . CoRR, (2022)
Aligning Large Language Models with Representation Editing: A Control Perspective., , , , , , , , , and . CoRR, (2024)
SeqMix: Augmenting Active Sequence Labeling via Sequence Mixup., , and . EMNLP (1), pp. 8566-8579. Association for Computational Linguistics, (2020)
PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs., , , , , , , , , and . ACL (Findings), pp. 15623-15636. Association for Computational Linguistics, (2024)
Local Boosting for Weakly-Supervised Learning., , , , and . KDD, pp. 3364-3375. ACM, (2023)
AcTune: Uncertainty-Based Active Self-Training for Active Fine-Tuning of Pretrained Language Models., , , , and . NAACL-HLT, pp. 1422-1436. Association for Computational Linguistics, (2022)