ANTHROPIC
The AI rulebook everyone’s fighting over
Anthropic has officially backed SB 53, a California bill that could set the first state-level safety rules for major AI developers like OpenAI, Google, xAI, and Anthropic itself.
If passed, SB 53 would require big AI labs to:
Publish public safety reports before launching powerful models
Set up internal safety frameworks
Protect whistleblowers who raise concerns about risks
The bill focuses on preventing “catastrophic risks”: situations where AI could be used to create biological weapons, launch major cyberattacks, or cause more than a billion dollars in damage.
It doesn’t cover narrower issues like deepfakes or AI-generated content.
Rules, risks, and receipts
Opposition has been strong.
Silicon Valley investors and the Trump administration argue that AI rules should come from the federal government to avoid slowing innovation and driving startups away from California.
OpenAI has raised similar concerns but hasn’t directly opposed SB 53.
On the other hand, Anthropic says waiting for Washington isn’t an option.
Co-founder Jack Clark called the bill a “solid blueprint” for managing AI risks while the federal government works on broader rules.
Experts view SB 53 as a more balanced approach than past proposals like SB 1047, which faced heavy backlash.
The bill is now heading for a final Senate vote before reaching Governor Gavin Newsom, who hasn’t said whether he’ll sign it.
Silicon Valley wants to run fast; California just threw up a speed bump.