Author of the publication

KILM: Knowledge Injection into Encoder-Decoder Language Models.

ACL (1), pages 5013-5035. Association for Computational Linguistics, (2023)


Author name: Hazarika, Devamanyu

Other publications of authors with the same name

Conversational transfer learning for emotion recognition. Inf. Fusion, (2021)
Using In-Context Learning to Improve Dialogue Safety. EMNLP (Findings), pages 11882-11910. Association for Computational Linguistics, (2023)
"What do others think?": Task-Oriented Conversational Modeling with Subjective Knowledge. SIGDIAL, pages 309-323. Association for Computational Linguistics, (2023)
Role of Bias Terms in Dot-Product Attention. ICASSP, pages 1-5. IEEE, (2023)
Aspect-Sentiment Embeddings for Company Profiling and Employee Opinion Mining. CICLing (2), volume 13397 of Lecture Notes in Computer Science, pages 142-160. Springer, (2018)
Attention Biasing and Context Augmentation for Zero-Shot Control of Encoder-Decoder Transformers for Natural Language Generation. AAAI, pages 10738-10748. AAAI Press, (2022)
KILM: Knowledge Injection into Encoder-Decoder Language Models. ACL (1), pages 5013-5035. Association for Computational Linguistics, (2023)
Texture and Structure Incorporated ScatterNet Hybrid Deep Learning Network (TS-SHDL) for Brain Matter Segmentation. ICCV Workshops, pages 1181-1188. IEEE Computer Society, (2017)
CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs. EMNLP, pages 11709-11737. Association for Computational Linguistics, (2023)
Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language. CoRR, (2023)