WRITEUP #522

Bypassing instructions to manipulate Google Bard AI (a conversational generative AI chatbot) into revealing a security vulnerability: exposure of configuration file details

AI / LLM · AI · LLM · LLM Jailbreak
by Kiran Maraju
Program: Google (Bard)
Published: Jan 23, 2024
Added to HackDex: Aug 6, 2024
Read Full Writeup: https://medium.com/@kiranmaraju/bypass-instructions-to-manipulate-google-bard-ai-conversational-generative-ai-chatbot-to-reveal-ac23156d5eee
RELATED WRITEUPS
Jailbreak of Meta AI (Llama 3.1) revealing configuration details (AI / LLM · AI)
Unmasking Harmful Content in a Medical Chatbot: A Red Team Perspective (AI / LLM · AI)
Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information (AI / LLM · AI)
Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed. (AI / LLM · AI)
Zeroday on Github Copilot (AI / LLM · AI)