New York Joins the AI Regulation Movement
In early 2025, New York joined a group of forward-thinking states, including Colorado, Connecticut, New Jersey, and Texas, by stepping into the arena of state-level artificial intelligence (AI) regulation. On January 8, 2025, legislators introduced two pivotal bills in the New York Senate and State Assembly addressing the use of AI, particularly in employment contexts. Both bills aim to mitigate algorithmic bias and safeguard civil rights.
Senate Bill 1169: The NY AI Act
State Senator Kristen Gonzalez spearheaded the introduction of the NY AI Act, citing growing evidence that unchecked AI systems could perpetuate existing societal inequalities. The Act covers “consumers,” defined as any New York resident, and includes a private right of action, permitting individuals to file lawsuits against tech companies for alleged violations.
The Act regulates AI systems used in decisions that affect important aspects of consumers’ lives. It requires deployers of high-risk AI systems used in consequential decisions to inform end users before deployment and to allow them to opt out. Violations could result in significant penalties, and the Act also imposes audit obligations on these companies.
Assembly Bill 768: The Protection Act
In parallel, New York State Assembly Member Alex Bores introduced the Protection Act with analogous objectives: preventing algorithmic discrimination against protected classes. The bill mandates an independent “bias and governance audit” of AI decision systems to ensure fairness and transparency.
Starting January 1, 2027, deployers of high-risk AI systems will need to demonstrate due diligence in managing these systems and notify consumers about their operation and potential risks. The Protection Act also emphasizes establishing robust risk management policies aligned with recognized standards such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
New York City’s Local Law 144 (Int. No. 1894-A)
Even as the state-level bills await passage, New York City’s Local Law 144 already stands as a regulatory pillar, with enforcement in effect since July 5, 2023. The law aims to mitigate bias in employment decisions made with the help of automated tools, setting an important precedent for the state.
Employers in New York City must conduct bias audits and provide the requisite notices when using automated employment decision tools in employment processes. Consequences for non-compliance include fines and the possibility of private legal action, underscoring the city’s commitment to fairness in the workplace.
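To make the audit requirement concrete, the sketch below illustrates the kind of impact-ratio calculation a bias audit typically reports: each group’s selection rate divided by the selection rate of the most-selected group. The group labels and data here are hypothetical, and an actual audit must be performed by an independent auditor under the enforcing agency’s rules; this is only an illustration of the underlying arithmetic.

```python
# Minimal sketch of an impact-ratio calculation of the kind a bias audit
# typically reports. Group names and outcomes are hypothetical; a real
# audit must follow the enforcing agency's rules and be performed by an
# independent auditor.
from collections import Counter

# Hypothetical outcomes of an automated screening tool:
# (demographic category, 1 if advanced to interview else 0)
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, sel in outcomes if sel)

# Selection rate per group, then impact ratio relative to the top rate.
selection_rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / top_rate:.2f}")
```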
Key Takeaways for Employers
- Assess AI Implementations: Identify and inventory all AI systems involved in significant decision-making processes within the company (a minimal inventory sketch follows this list).
- Review Data Policies: Ensure data policies align with stringent data protection and privacy standards.
- Prepare for Audits: Stay informed and prepare for compliance audits on high-risk AI systems.
- Establish Internal Protocols: Develop internal mechanisms to disclose and handle AI-related violations.
- Monitor Legislative Developments: Keep abreast of new legislative proposals and stay aligned with federal guidelines.
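As a starting point for the first and third items above, the hypothetical sketch below shows one way an employer might track high-risk AI systems and their audit and notice status internally. The record fields, example entry, and flagging logic are assumptions for illustration, not requirements drawn from any of the statutes discussed.

```python
# Hypothetical internal inventory of AI systems used in consequential
# employment decisions. Field names and the example entry are illustrative
# assumptions, not requirements taken from any statute or framework.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    name: str                            # internal name of the tool
    vendor: str                          # supplier, or "in-house"
    decision_context: str                # e.g., resume screening, promotion
    high_risk: bool                      # used in consequential decisions?
    last_bias_audit: Optional[date] = None
    candidate_notice_in_place: bool = False
    risk_framework: str = "NIST AI RMF"  # framework the policies align with

inventory = [
    AISystemRecord(
        name="resume-screener-v2",       # hypothetical tool
        vendor="ExampleVendor",
        decision_context="initial resume screening",
        high_risk=True,
        last_bias_audit=date(2025, 3, 1),
        candidate_notice_in_place=True,
    ),
]

# Flag high-risk systems that still lack an audit record or a notice.
needs_follow_up = [
    rec.name
    for rec in inventory
    if rec.high_risk
    and (rec.last_bias_audit is None or not rec.candidate_notice_in_place)
]
print("Systems needing follow-up:", needs_follow_up)
```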
Employers should remain vigilant, complying with the current NYC AI Law while preparing for potential upcoming regulations. For those with broader interests in regulatory developments, visit aitechtrend.com to stay updated.
Note: This article is inspired by content from https://natlawreview.com/article/q1-2025-new-york-artificial-intelligence-developments-what-employers-should-know. It has been rephrased for originality. Images are credited to the original source.
