Anthropic Endorses New AI Transparency Bill in California
Artificial intelligence developer Anthropic has become the first major tech company to back a new California bill aimed at regulating advanced AI systems. Known as S.B. 53 and introduced by state Senator Scott Wiener, the legislation would impose the first comprehensive legal obligations on large-scale AI developers in the United States.
The bill would require companies that offer AI services in California to develop, publish, and follow safety protocols to manage risks posed by their AI models. These companies would also need to submit summaries of their assessments of potential “catastrophic risks” to a designated state office and make them publicly available.
Focus on Transparency and Safety
“With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety,” Anthropic stated in a public announcement. The bill aims to transform voluntary safety commitments already made by major AI companies—including Anthropic, OpenAI, Google, and Meta—into mandatory legal requirements.
These commitments include evaluating how AI tools could be misused, such as in aiding cyberattacks or facilitating access to biological weapons, and devising strategies to mitigate these risks. Under S.B. 53, companies would also need to establish an emergency reporting system for critical safety incidents, accessible to both developers and the general public.
Scope of the Bill
The bill specifically targets companies developing frontier AI models that require significant computing power. Of these, only those with annual revenues exceeding $500 million would be subject to the strictest provisions. According to Sen. Wiener, “Anthropic is a leader on AI safety, and we’re really grateful for the company’s support.”
The legislation has strong momentum, having passed both the California Assembly and Senate with broad support. The final vote is scheduled to take place before the end of the legislative session on Friday night.
Mixed Industry Reactions
The bill has drawn praise from many safety experts. Dan Hendrycks, Executive Director of the Center for AI Safety, commented, “This legislation takes a small but important first step toward making AI safer by making many of these voluntary commitments mandatory.”
Not everyone is on board, however. Industry groups such as the Consumer Technology Association (CTA) and the Chamber of Progress have voiced strong opposition. The CTA argued on social media that “California SB 53 and similar bills will weaken California and U.S. leadership in AI by driving investment and jobs to states or countries with less burdensome and conflicting frameworks.”
Evolution from Previous Legislation
S.B. 53 is a more refined version of a similar bill, S.B. 1047, introduced last year by the same senator. Although S.B. 1047 passed the Legislature, it was vetoed by Governor Gavin Newsom due to concerns that it could hinder AI innovation. Critics of S.B. 1047 cited its broad language and stringent requirements, such as mandatory third-party audits and prohibitions on releasing models with “unreasonable risk.”
After the veto, Newsom commissioned a working group to revise the legislation. Their recommendations formed the basis of S.B. 53. “We modeled the bill on that report,” said Sen. Wiener. “Whereas S.B. 1047 was more of a liability-focused bill, S.B. 53 is more focused on transparency.”
Growing Consensus on AI Oversight
Helen Toner, interim director at Georgetown University’s Center for Security and Emerging Technology, emphasized the growing consensus around transparency. “S.B. 53 is primarily a transparency bill, and that’s no coincidence,” she said. Anthropic echoed this sentiment, saying that its support came after “careful consideration of the lessons learned from California’s previous attempt at AI regulation.”
Given that California is home to many of the world’s leading AI companies, any legislation passed in the state is expected to have national and global implications. “California is really at the beating heart of AI innovation, and we should also be at the heart of a creative AI safety approach,” Wiener added.
State vs. Federal Regulation Debate
The introduction of S.B. 53 has reignited the debate over whether AI regulation should be handled at the state or federal level. Many in the industry, including OpenAI, argue for a unified national approach. OpenAI’s Director of Global Affairs, Chris Lehane, stated, “America leads best with clear, nationwide rules, not a patchwork of state or local regulations.”
Despite acknowledging the benefits of federal regulation, Anthropic maintains that action at the state level is necessary in the absence of federal consensus. “While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” the company said.
Sen. Wiener concluded, “Ideally we would have comprehensive, strong pro-safety, pro-innovation federal law in this space. But that has not happened, so California has a responsibility to act.”
