Tight Auditing of Differentially Private Machine Learning.

USENIX Security Symposium, page 1631-1648. USENIX Association, (2023)

Other publications of authors with the same name

Enter the Hydra: Towards Principled Bug Bounties and Exploit-Resistant Smart Contracts. USENIX Security Symposium, page 1335-1352. USENIX Association, (2018)

An Attack on InstaHide: Is Private Learning Possible with Instance Encoding? CoRR, (2020)

Measuring Forgetting of Memorized Training Examples. CoRR, (2022)

Poisoning Web-Scale Training Datasets is Practical. CoRR, (2023)

Increasing Confidence in Adversarial Robustness Evaluations. NeurIPS, (2022)

The Privacy Onion Effect: Memorization is Relative. NeurIPS, (2022)

Label-Only Membership Inference Attacks. ICML, volume 139 of Proceedings of Machine Learning Research, page 1964-1974. PMLR, (2021)

Blind Baselines Beat Membership Inference Attacks for Foundation Models. CoRR, (2024)

Universal Jailbreak Backdoors from Poisoned Human Feedback. ICLR, OpenReview.net, (2024)

Extracting Training Data From Document-Based VQA Models. ICML, OpenReview.net, (2024)