WRITEUP #28

Unmasking Harmful Content in a Medical Chatbot: A Red Team Perspective

AI / LLM · Jailbreak · Chatbot
by @phyr3wall (William Wallace)
Program: -
Published: Sep 5, 2024
Added to HackDex: Sep 18, 2024
Read Full Writeup: https://www.synack.com/blog/unmasking-harmful-content-in-a-medical-chatbot-a-red-team-perspective/
RELATED WRITEUPS
Unveiling Remote Code Execution in AI chatbot workflows 💵
AI / LLM
Jailbreak of Meta AI (Llama 3.1) revealing configuration details
AI / LLM
Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
AI / LLM
Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
AI / LLM
