
Please choose a person to relate this publication to.

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed.

No persons found for author name Sutawika, Lintang

Other publications by persons with the same name

What Language Model to Train if You Have One Million GPU Hours? (and 8 other authors). EMNLP (Findings), pp. 765-782. Association for Computational Linguistics, (2022)

Towards better structured and less noisy Web data: Oscar with Register annotations (and 3 other authors). W-NUT@COLING, pp. 215-221. Association for Computational Linguistics, (2022)

Crosslingual Generalization through Multitask Finetuning (and 9 other authors). ACL (1), pp. 15991-16111. Association for Computational Linguistics, (2023)

Multitask Prompted Training Enables Zero-Shot Task Generalization (and 30 other authors). International Conference on Learning Representations, (2022)

Lessons from the Trenches on Reproducible Evaluation of Language Models (and 20 other authors). CoRR, (2024)

Samsung Research Philippines - Datasaur AI's Submission for the WMT22 Large Scale Multilingual Translation Task. WMT, pp. 1034-1038. Association for Computational Linguistics, (2022)

BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting (and 5 other authors). ACL (1), pp. 11682-11703. Association for Computational Linguistics, (2023)

Multitask Prompted Training Enables Zero-Shot Task Generalization (and 30 other authors). ICLR, OpenReview.net, (2022)

Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling (and 3 other authors). ICML, volume 202 of Proceedings of Machine Learning Research, pp. 2397-2430. PMLR, (2023)

Utilizing Weak Supervision To Generate Indonesian Conservation Dataset. CoRR, (2023)