WRITEUP #399

New Google Gemini Vulnerability Enabling Profound Misuse

AI / LLM · LLM Jailbreak · Indirect Prompt Injection · Prompt Leaking
by Kenneth Yeung
Program
Google (Gemini)
Published
Mar 12, 2024
Added to HackDex
Aug 14, 2024
Read Full Writeup: https://hiddenlayer.com/research/new-google-gemini-content-manipulation-vulns-found/#Overview
RELATED WRITEUPS
Jailbreak of Meta AI (Llama 3.1) revealing configuration details
AI / LLM
Unmasking Harmful Content in a Medical Chatbot: A Red Team Perspective
AI / LLM
Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
AI / LLM
Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
AI / LLM
Zero-day on GitHub Copilot
AI / LLM

Built with ❤️ by Shubham Rawat