News

Testing has shown that the chatbot exhibits a “pattern of apparent distress” when asked to generate harmful content ...
A new feature in Claude Opus 4 and 4.1 lets the models end conversations with users who are "persistently harmful or abusive ...
Claude won't stick around for toxic convos. Anthropic says its AI can now end extreme chats when users push too far.
Anthropic's popular coding model just became a little more enticing for developers with a million-token context window.
Anthropic launches automated AI security tools for Claude Code that scan code for vulnerabilities and suggest fixes, ...