Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning.

, , , , , , and . AAAI, pages 5883-5891. AAAI Press, (2020)


Other publications of authors with the same name

MIP: CLIP-based Image Reconstruction from PEFT Gradients., , , , , and . CoRR, (2024)

Temporal Watermarks for Deep Reinforcement Learning Models., , , , and . AAMAS, pages 314-322. ACM, (2021)

BadEdit: Backdooring large language models by model editing., , , , , , , and . CoRR, (2024)

Multi-target Backdoor Attacks for Code Pre-trained Models., , , , , and . ACL (1), pages 7236-7254. Association for Computational Linguistics, (2023)

Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning., , , , , , and . AAAI, pages 5883-5891. AAAI Press, (2020)

Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels Only., , , , and . ICLR, OpenReview.net, (2023)

GuardHFL: Privacy Guardian for Heterogeneous Federated Learning., , , , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, pages 4566-4584. PMLR, (2023)

Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models., , , , , and . CoRR, (2023)

BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models., , , , , , and . CoRR, (2021)

Stealing Deep Reinforcement Learning Models for Fun and Profit., , , , and . AsiaCCS, pages 307-319. ACM, (2021)