Why Is Public Pretraining Necessary for Private Model Training?

, , , , , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, pages 10611-10627. PMLR, (2023)

Other publications of authors with the same name

Measuring Forgetting of Memorized Training Examples., , , , , , , , , and 1 other author(s). CoRR, (2022)

Differentially Private Learning with Adaptive Clipping., , and . CoRR, (2019)

Evading the Curse of Dimensionality in Unconstrained Private GLMs., , , and . AISTATS, volume 130 of Proceedings of Machine Learning Research, pages 2638-2646. PMLR, (2021)

Max-Information, Differential Privacy, and Post-selection Hypothesis Testing., , , and . FOCS, pages 487-494. IEEE Computer Society, (2016)

Noise Masking Attacks and Defenses for Pretrained Speech Models., , and . CoRR, (2024)

On the Joint Impact of SU Mobility and PU Activity in Cognitive Vehicular Networks with Improved Energy Detection., , , , , and . VTC Spring, pages 1-6. IEEE, (2019)

A Method to Reveal Speaker Identity in Distributed ASR Training, and How to Counter It., , , , , and . ICASSP, pages 4338-4342. IEEE, (2022)

Why Is Public Pretraining Necessary for Private Model Training?, , , , , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, pages 10611-10627. PMLR, (2023)

Unintended Memorization in Large ASR Models, and How to Mitigate It., , and . CoRR, (2023)

Revealing and Protecting Labels in Distributed Training., , , , , and . NeurIPS, pages 1727-1738. (2021)