WRITEUP #142

Jailbreak of Meta AI (Llama 3.1) revealing configuration details

Tags: AI / LLM, AI, LLM, Prompt injection, LLM Jailbreak
by Kiran Maraju
Program
Meta / Facebook (Llama)
Published
Jul 27, 2024
Added to HackDex
Aug 6, 2024
Read Full Writeup: https://medium.com/@kiranmaraju/jailbreak-of-meta-ai-llama-3-1-revealing-configuration-details-9f0759f5006a
RELATED WRITEUPS
- Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information (AI / LLM, AI)
- Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed. (AI / LLM, AI)
- Zeroday on GitHub Copilot (AI / LLM, AI)
- Sorry, ChatGPT Is Under Maintenance: Persistent Denial of Service through Prompt Injection and Memory Attacks (AI / LLM, AI)
- Unmasking Harmful Content in a Medical Chatbot: A Red Team Perspective (AI / LLM, AI)
