Inproceedings

Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach.

Yue Yu, Simiao Zuo, Haoming Jiang, Wendi Ren, Tuo Zhao, and Chao Zhang.
NAACL-HLT, pages 1063-1077. Association for Computational Linguistics, (2021).
