Article

An LLM can Fool Itself: A Prompt-Based Adversarial Attack.

CoRR (2023)
