Author of the publication

Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers.

, , , , , , and . ACL (Findings), page 4005-4019. Association for Computational Linguistics, (2023)


Other publications of authors with the same name

Parallel Algorithms for Earth Observation Coverage of Satellites., , , , and . Int. J. Perform. Eng., 17 (8): 657 (2021)

An efficient CMOS DC offset cancellation circuit for PGA of low IF wireless receivers., , and . WCSP, page 1-5. IEEE, (2010)

Preserving Knowledge in Large Language Model with Model-Agnostic Self-Decompression., , , , , , and . CoRR, (2024)

Prototypical Calibration for Few-shot Learning of Language Models., , , , and . ICLR, OpenReview.net, (2023)

The finite volume local evolution Galerkin method for solving the hyperbolic conservation laws., and . J. Comput. Phys., 228 (13): 4945-4960 (2009)

Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers., , , , , , and . ACL (Findings), page 4005-4019. Association for Computational Linguistics, (2023)

Three Decades of Cross-Border Mergers and Acquisitions Research: Mapping the Field and Paths Forward., , , and . IEEE Trans. Engineering Management, (2024)

Retentive Network: A Successor to Transformer for Large Language Models, , , , , , , and . arXiv:2307.08621, (2023)

Machine Learning-based Mental Health Analysis and Early Warning for College Student., , , and . QRS Companion, page 569-578. IEEE, (2021)

Fine-Grained Legal Argument-Pair Extraction via Coarse-Grained Pre-training., , , , , , and . LREC/COLING, page 7324-7335. ELRA and ICCL, (2024)