Author of the publication

FriendNet Backdoor: Indentifying Backdoor Attack that is safe for Friendly Deep Neural Network.

, , and . ICSIM, pages 53-57. ACM, (2020)


Other publications of authors with the same name

Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example., , , and . MILCOM, pages 456-461. IEEE, (2018)

Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks., , and . ICAIIC, pages 399-404. IEEE, (2019)

Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error., , and . AIKE, pages 136-139. IEEE, (2019)

CAPTCHA Image Generation: Two-Step Style-Transfer Learning in Deep Neural Networks., , and . Sensors, 20 (5): 1495 (2020)

POSTER: Detecting Audio Adversarial Example through Audio Modification., , and . ACM Conference on Computer and Communications Security, pages 2521-2523. ACM, (2019)

MedicalGuard: U-Net Model Robust against Adversarially Perturbed Images.. Secur. Commun. Networks, (2021)

Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network., , and . ICISC, volume 10779 of Lecture Notes in Computer Science, pages 351-367. Springer, (2017)

FriendNet Backdoor: Indentifying Backdoor Attack that is safe for Friendly Deep Neural Network., , and . ICSIM, pages 53-57. ACM, (2020)

One-Pixel Adversarial Example that Is Safe for Friendly Deep Neural Networks., , , and . WISA, volume 11402 of Lecture Notes in Computer Science, pages 42-54. Springer, (2018)

Audio adversarial detection through classification score on speech recognition systems., and . Comput. Secur., (March 2023)