Title: An LLM can Fool Itself: A Prompt-Based Adversarial Attack
Authors: Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan Kankanhalli
Venue: arXiv, 2023
Tag: LLM Security
Date: 2:30-3:30 p.m., 02/07/2024
Location: N09, EB