
Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift.

, , , , , , , , , , and . AAAI, page 10847-10855. AAAI Press, (2024)


Other publications of authors with the same name

Hard-label Black-box Universal Adversarial Patch Attack., , , , and . USENIX Security Symposium, page 697-714. USENIX Association, (2023)

Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs., , , , and . CoRR, (2023)

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry., , , , , and . CoRR, (2021)

ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP., , , , , , and . CoRR, (2023)

DECK: Model Hardening for Defending Pervasive Backdoors., , , , , , , and . CoRR, (2022)

Code Search based on Context-aware Code Translation., , , , , and . ICSE, page 388-400. ACM, (2022)

Piccolo: Exposing Complex Backdoors in NLP Transformer Models., , , , , and . SP, page 2025-2042. IEEE, (2022)

D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack., , and . CoRR, (2020)

Fusion is Not Enough: Single-Modal Attacks to Compromise Fusion Models in Autonomous Driving., , , , , , , and . CoRR, (2023)

RULER: discriminative and iterative adversarial training for deep neural network fairness., , , , and . ESEC/SIGSOFT FSE, page 1173-1184. ACM, (2022)