Other publications of authors with the same name

Data Isotopes for Data Provenance in DNNs. CoRR, (2022)

Fawkes: Protecting Privacy against Unauthorized Deep Learning Models. USENIX Security Symposium, pages 1589-1604. USENIX Association, (2020)

Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models. USENIX Security Symposium, pages 2187-2204. USENIX Association, (2023)

SalsaPicante: A Machine Learning Attack on LWE with Binary Secrets. CCS, pages 2606-2620. ACM, (2023)

"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World. CCS, pages 235-251. ACM, (2021)

Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models. CoRR, (2020)

Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks. CCS, pages 67-83. ACM, (2020)

Backdoor Attacks Against Deep Learning Systems in the Physical World. CVPR, pages 6206-6215. Computer Vision Foundation / IEEE, (2021)

The Cool and the Cruel: Separating Hard Parts of LWE Secrets. AFRICACRYPT, volume 14861 of Lecture Notes in Computer Science, pages 428-453. Springer, (2024)

Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models. CCS, pages 2611-2625. ACM, (2022)