Anthropic has announced an update to its usage policy for Claude in response to growing safety concerns. The update tightens restrictions on weapons development, now explicitly prohibiting the use of Claude to develop biological, chemical, nuclear, or radiological (CBRN) weapons, as well as high-yield explosives. The company also implemented a new AI Safety Level 3 protection with the release of Claude Opus 4, making the model harder to jailbreak and preventing it from contributing to the development of such weapons.
Anthropic also added new restrictions on using Claude for computer control or vulnerability exploitation, including a ban on developing malware or cyber-attack tools. At the same time, the company relaxed its restrictions on political content, now prohibiting only uses that are deceptive or disruptive to democratic processes, or that involve voter and campaign targeting. Its requirements for high-risk use cases were also narrowed to apply only to consumer-facing scenarios, not business uses.