A Multi-Task Semantic Decomposition Framework with Task-specific Pre-training for Few-Shot NER.

CIKM, pages 430-440. ACM, 2023.

Other publications of authors with the same name

HFT: Half Fine-Tuning for Large Language Models. CoRR, 2024.

A Multi-Task Semantic Decomposition Framework with Task-specific Pre-training for Few-Shot NER. CIKM, pages 430-440. ACM, 2023.

DemoNSF: A Multi-task Demonstration-based Generative Framework for Noisy Slot Filling Task. EMNLP (Findings), pages 10506-10518. Association for Computational Linguistics, 2023.

A Prototypical Semantic Decoupling Method via Joint Contrastive Learning for Few-Shot Named Entity Recognition. ICASSP, pages 1-5. IEEE, 2023.

Smaller Language Models Are Better Instruction Evolvers. CoRR, 2024.

Revisit Input Perturbation Problems for LLMs: A Unified Robustness Evaluation Framework for Noisy Slot Filling Task. NLPCC (1), volume 14302 of Lecture Notes in Computer Science, pages 682-694. Springer, 2023.

Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment Pre-Training for Noisy Slot Filling Task. ICASSP, pages 6150-6154. IEEE, 2024.

Revisit Out-Of-Vocabulary Problem For Slot Filling: A Unified Contrastive Framework With Multi-Level Data Augmentations. ICASSP, pages 1-5. IEEE, 2023.

Revisit Out-of-Vocabulary Problem for Slot Filling: A Unified Contrastive Framework with Multi-level Data Augmentations. CoRR, 2023.

Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging. CoRR, 2024.