
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models.

ICLR, OpenReview.net, (2023)


Other publications of authors with the same name

Cramming: Training a Language Model on a single GPU in one day. ICML, volume 202 of Proceedings of Machine Learning Research, pages 11117-11143. PMLR, (2023)

Measuring Style Similarity in Diffusion Models. CoRR, (2024)

Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models. CoRR, (2024)

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise. CoRR, (2022)

Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey. CoRR, (2023)

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models. ICLR, OpenReview.net, (2023)

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification. ICML, volume 162 of Proceedings of Machine Learning Research, pages 23668-23684. PMLR, (2022)

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models. ICLR, OpenReview.net, (2022)

A Simple Strategy to Provable Invariance via Orbit Mapping. ACCV (5), volume 13845 of Lecture Notes in Computer Science, pages 387-405. Springer, (2022)

AI Risk Management Should Incorporate Both Safety and Security (with 15 other authors). CoRR, (2024)