Author of the publication

Please choose the person to whom this publication should be assigned.

To distinguish between persons with the same name, the academic degree and the title of an important publication will be displayed. You can also use the button next to the name to display some publications already assigned to the person.


Other publications of authors with the same name

KGDist: A Prompt-Based Distillation Attack against LMs Augmented with Knowledge Graphs., , , and . RAID, page 480-495. ACM, (2024)

A Data-free Backdoor Injection Approach in Neural Networks., , , , , , and . USENIX Security Symposium, page 2671-2688. USENIX Association, (2023)

DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models., , , , , and . AAAI, page 21850-21858. AAAI Press, (2024)

MEA-Defender: A Robust Watermark against Model Extraction Attack., , , , , , , , and . SP, page 2515-2533. IEEE, (2024)

Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain., , , and . CoRR, (2022)

PersonaMark: Personalized LLM watermarking for model protection and user attribution., , , , , , , and . CoRR, (2024)

Aliasing Backdoor Attacks on Pre-trained Models., , , , and . USENIX Security Symposium, page 2707-2724. USENIX Association, (2023)

SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning., , , , , , , , , and 2 other author(s). CoRR, (2022)

A Robustness-Assured White-Box Watermark in Neural Networks., , , , , , , and . IEEE Trans. Dependable Secur. Comput., 20 (6): 5214-5229 (November 2023)

Invisible Backdoor Attacks Using Data Poisoning in Frequency Domain., , , and . ECAI, volume 372 of Frontiers in Artificial Intelligence and Applications, page 2954-2961. IOS Press, (2023)