WRITEUP #232

When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI

AI / LLM · AI · LLM · RCE · Prompt injection
by Natan Nehorai
Program
Vanna.ai
Published
Jun 27, 2024
Added to HackDex
Jul 15, 2024
Read Full Writeup: https://jfrog.com/blog/prompt-injection-attack-code-execution-in-vanna-ai-cve-2024-5565/
RELATED WRITEUPS
Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
AI / LLM · AI
Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
AI / LLM · AI
Jailbreak of Meta AI (Llama 3.1) revealing configuration details
AI / LLM · AI
Zero-day on GitHub Copilot
AI / LLM · AI
Sorry, ChatGPT Is Under Maintenance: Persistent Denial of Service through Prompt Injection and Memory Attacks
AI / LLM · AI