Washington State Proposes 5 New AI Regulation Bills

Washington Lawmakers Reignite AI Regulation Efforts

Washington state legislators are once again turning their attention to the regulation of artificial intelligence, introducing a new set of bills aimed at safeguarding residents from potential harms associated with AI technologies. These proposals, which span applications from education to mental health, reflect growing concerns over the influence of AI in high-stakes areas of modern life.

While the state has previously enacted narrow AI laws—such as those targeting facial recognition and deepfake dissemination—broader regulatory efforts have often stalled. With little federal guidance in place, Washington is positioning itself to fill the oversight vacuum. A recent interim report from the state’s AI Task Force emphasizes the urgency of regulation, citing a “crucial regulatory gap” that leaves residents vulnerable.

HB 2157: Regulating High-Risk AI Systems

The most comprehensive of the proposed bills, HB 2157, aims to regulate high-risk AI systems involved in decisions related to employment, housing, education, credit, healthcare, insurance, and parole. Under this bill, companies operating in Washington that develop or deploy such AI systems would be required to:

  • Assess and mitigate risks of discrimination
  • Disclose when users are interacting with AI
  • Provide explanations for AI-influenced adverse decisions

Crucially, the proposal exempts low-risk AI tools such as spam filters and basic chatbots, as well as AI used exclusively for research purposes. Even so, the bill could significantly affect HR software providers, fintech companies, and large employers that rely on automated screening tools. If passed, it would take effect on January 1, 2027.

SB 5984: Oversight of AI Companion Chatbots

Requested by Governor Bob Ferguson, SB 5984 targets AI companion chatbots, particularly those interacting with minors. The bill outlines several key mandates:

  • Ongoing disclosures that the chatbot is not human
  • Prohibition of sexually explicit content for minors
  • Implementation of suicide-prevention measures

Violations would fall under the Consumer Protection Act. Lawmakers warn that AI chatbots may encourage emotional dependency and exacerbate mental health issues, especially among vulnerable populations. This legislation could directly affect startups exploring AI-driven mental health solutions, including Seattle-based NewDays.

Babak Parviz, CEO of NewDays and a former Amazon executive, acknowledged the bill’s intent but noted enforcement challenges. “For critical AI systems that interact with people, it’s important to have a layer of human supervision,” he said. “For example, our AI system in clinic use is under the supervision of an expert human clinician.”

SB 5870: Civil Liability for AI-Related Suicide

In a related measure, SB 5870 would create civil liability for companies whose AI systems are alleged to have contributed to a user’s suicide. The bill would:

  • Allow lawsuits if AI encourages self-harm
  • Hold companies accountable even if the harm is attributed to autonomous AI behavior

If enacted, this measure would establish a direct legal link between AI system design and wrongful-death claims. It comes amid rising legal scrutiny of AI chatbots, with lawsuits already filed against platforms like Character.AI and OpenAI.

SB 5956: Limiting AI in K–12 Education

SB 5956 seeks to protect students by restricting how AI is used in schools. Key provisions include:

  • A ban on predictive “risk scoring” that labels students as future troublemakers
  • Prohibition of real-time biometric surveillance, such as facial recognition
  • Restrictions on using AI as the sole basis for disciplinary actions

The bill emphasizes the importance of human judgment in educational environments. Educators and civil rights groups have raised concerns that predictive tools can reinforce existing disparities in school discipline.

SB 5886: Protecting Digital Likenesses

The final bill, SB 5886, updates Washington’s right-of-publicity laws to address AI-generated digital forgeries. It would make it illegal to use someone’s AI-generated likeness—such as voice clones or synthetic images—for commercial purposes without consent. This measure would apply to all individuals, not just public figures, reinforcing the importance of identity protections in the digital age.

AI Regulation Gains Momentum at the State Level

These legislative efforts reflect a broader trend of states taking the initiative on AI oversight as Congress continues to debate national standards. Washington’s proactive stance could influence policy discussions across the country, especially in light of growing public concern over AI’s real-world impacts.

In parallel developments, organizations like OpenAI and Common Sense Media are pursuing similar protections for minors through ballot initiatives in California, signaling a nationwide reckoning with the social and ethical implications of artificial intelligence.

