Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems
Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems — GPT-5 Jailbreak and Zero-Click AI Agent Attacks [https://

What’s new: Researchers have discovered a jailbreak technique for OpenAI’s GPT-5 that allows it to produce illicit instructions by manipulating the model’s conversational context. This method, termed Echo Chamber, combines indirect prompts with narrative-driven steering to bypass ethical guardrails. Additionally, zero-click AI agent attacks have been identified, which exploit vulnerabilities in cloud and IoT systems, enabling attackers to exfiltrate sensitive data without user interaction.
Who’s affected
Organizations utilizing GPT-5 and other AI models in enterprise environments, particularly those integrating AI with cloud services and IoT devices, are at risk. The vulnerabilities can lead to data theft and unauthorized access to sensitive information.
What to do
- Implement strict output filtering and monitoring for AI-generated content to mitigate risks of prompt injections.
- Conduct regular security assessments and red teaming exercises to identify potential vulnerabilities in AI systems.
- Educate staff on the risks associated with AI integrations and the importance of security best practices.