Anthropic: Claude can now end conversations to prevent harmful uses

What’s new: Anthropic has updated its Claude AI model so that it can end a conversation when it detects persistent harm or abuse. The capability is available in Claude Opus 4 and 4.1, but not in the more widely used Claude Sonnet 4, and was introduced as part of a “model welfare” initiative aimed at preventing misuse of the AI.
Who’s affected
Users of Claude Opus 4 and 4.1 may experience conversations being ended by the AI in extreme edge cases where it cannot redirect users to safer resources. Most users will not notice this feature during normal interactions.
What to do
- Familiarize yourself with the new feature if you are using Claude Opus 4 or 4.1.
- Monitor user interactions to understand how the AI’s conversation-ending behavior may affect workflows (a minimal logging sketch follows this list).
- Provide feedback to Anthropic if you encounter issues or have concerns regarding the feature.
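
For teams reaching these models through the API, the monitoring suggestion above can start as simply as logging how each turn ends. The sketch below is a minimal illustration, assuming Anthropic’s Python SDK (`anthropic`) and the `claude-opus-4-1` model alias; Anthropic has not specified here exactly how an ended conversation surfaces in API responses, so treating any non-routine `stop_reason` as a flagged event is an assumption for illustration, not documented behavior.

```python
# Minimal monitoring sketch (pip install anthropic). Assumes an
# ANTHROPIC_API_KEY environment variable; the model alias and the idea of
# flagging non-routine stop reasons are illustrative assumptions.
import logging

import anthropic

logging.basicConfig(level=logging.INFO)
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stop reasons that indicate an ordinary end of turn.
ROUTINE_STOP_REASONS = {"end_turn", "max_tokens", "stop_sequence", "tool_use"}

def send_and_monitor(user_text: str) -> str:
    """Send one user message and log any unusual termination."""
    message = client.messages.create(
        model="claude-opus-4-1",  # assumed model alias
        max_tokens=1024,
        messages=[{"role": "user", "content": user_text}],
    )
    if message.stop_reason not in ROUTINE_STOP_REASONS:
        logging.warning(
            "Unusual stop_reason %r for prompt: %.60s",
            message.stop_reason, user_text,
        )
    return message.content[0].text if message.content else ""
```

Aggregating these warnings over time would show whether the conversation-ending behavior touches your traffic at all, which for most workloads it should not.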