New York Eyes AI Rules After California Bill Fails

California’s AB 331: A Bold Attempt at AI Oversight
Driven by concerns about potential bias in critical areas like healthcare, housing, and employment, California’s AB 331 sought to regulate the use of automated decision systems (ADS) – essentially, AI systems that make decisions impacting people’s lives. The bill, as detailed by Hackler Flynn, aimed to bring transparency, fairness, and accountability to the world of AI.
Key Provisions of AB 331:
- Transparency: Companies using ADS would have to disclose when these systems were used in decisions impacting individuals’ rights, explaining how the ADS works and what factors it considers.
- Fairness: The bill aimed to prohibit algorithmic discrimination, preventing AI systems from perpetuating or worsening existing societal biases.
- Accountability: Regular audits of ADS were mandated to ensure fairness and accuracy, with mechanisms for correcting data inaccuracies.
- Notification and Alternatives: Individuals were to be notified when an automated decision tool was used to make a consequential decision about them, and an alternative selection process was to be offered whenever feasible.
Why Did AB 331 Fail?
Despite its ambitious goals, AB 331 faced significant hurdles. Tech companies expressed concerns that the bill was overly burdensome and could stifle innovation. There were also doubts about the feasibility of enforcing its provisions, particularly regarding impact assessments and bias detection.
As Capitol Weekly notes in its analysis, “There was a lack of clear agreement on what constituted an ‘automated decision tool’ or ‘algorithmic discrimination,’ leading to ambiguity and potential loopholes in the bill.” This lack of clarity made it difficult to determine which systems would be subject to the bill’s requirements and how those requirements would be enforced.
Stanford’s HAI, in its report on The AI Regulatory Alignment Problem, further emphasizes the difficulty: “Experts questioned the practicality of enforcing the bill’s provisions, particularly the requirements for impact assessments and bias detection.” The report also points to the challenges of defining and measuring algorithmic discrimination, as well as the limitations of current bias detection techniques.
New York Picks Up the Torch
The failure of AB 331 in California hasn’t deterred other states. New York is now considering its own legislation to regulate AI, particularly in employment decisions. While the specifics are still under wraps, it’s likely that the New York proposal will draw inspiration from AB 331, focusing on similar principles of transparency, fairness, and accountability.
The Broader Context: AI Regulation in the US
The efforts in California and New York are part of a growing trend of states taking the lead on AI regulation. As reported by the Business Software Alliance, in 2024 alone, state lawmakers across the United States introduced almost 700 AI-related bills, with 113 enacted into law. These bills tackle a range of issues:
- Algorithmic Bias and Discrimination: Ensuring AI systems don’t perpetuate societal inequalities.
- Privacy and Data Security: Protecting personal information from misuse.
- Transparency and Explainability: Making AI systems understandable and accountable.
- Safety and Security: Preventing AI from causing harm or being used maliciously.
However, this state-by-state approach, as highlighted by Cato at Liberty, raises concerns about a fragmented regulatory landscape. The lack of comprehensive federal AI regulation has led to a patchwork of state laws, which could create challenges for businesses operating across state lines and lead to inconsistencies in how AI is regulated.
Furthermore, a study by Encina Advisors, LLC, suggests that some AI-related industries in California might cease operations or relocate due to regulatory burdens. This highlights the need to strike a balance between fostering innovation and addressing ethical concerns.
States like Illinois, New York, Texas, and Vermont are adopting a collaborative approach, bringing together stakeholders from various disciplines to study AI’s potential impacts, as outlined by The Council of State Governments.
The US appears to be leaning towards a decentralized, “bottom-up” approach to AI regulation, with states taking the lead. While this could be more adaptable to the rapid pace of AI innovation, it also carries the risk of creating a complex and potentially inconsistent regulatory landscape.
The Global Landscape: AI Regulation Around the World
The US isn’t alone in this journey. Countries worldwide are developing their own approaches to AI regulation. The European Union is leading the way with its AI Act, a comprehensive legal framework that categorizes AI systems based on risk and imposes corresponding obligations.
In contrast, the G7 is taking a more voluntary approach, focusing on international guiding principles and a voluntary code of conduct for AI developers. Other countries, like China, are also implementing regulations, with a strong emphasis on government oversight and control, as reported by Communications of the ACM.
Potential Benefits and Risks of AI
The debate over AI regulation stems from the understanding that AI presents both significant benefits and substantial risks. According to the University of Cincinnati, AI offers numerous benefits, including:
- Increased Efficiency and Productivity: Automating tasks and optimizing processes.
- Enhanced Healthcare: Developing new treatments and improving diagnoses.
- Economic Growth: Creating new jobs and boosting innovation.
- Solutions to Global Challenges: Addressing climate change, improving education, and strengthening cybersecurity.
However, WalkMe outlines several potential risks:
- Job Displacement: Automating tasks currently performed by humans.
- Privacy Violations: Misuse of personal data collected by AI systems.
- Algorithmic Bias and Discrimination: Perpetuating societal biases through AI algorithms.
- Security Risks: Hacking or manipulation of AI systems.
- Existential Threats: Long-term risks to humanity from advanced AI.
Conclusion: Navigating the Future of AI
The revival of AB 331’s core ideas in New York underscores the ongoing debate about regulating AI. The failure of California’s bill, despite its good intentions, highlights the challenges of regulating a rapidly evolving technology. Policymakers must grapple with complex issues, balancing innovation with ethical concerns.
The patchwork of state-level regulations has significant implications. While state-level initiatives can be more agile, they also create potential inconsistencies and compliance challenges. A balanced approach is crucial, one that fosters innovation while addressing ethical concerns and societal risks.
Ongoing dialogue and collaboration between policymakers, industry leaders, and the public are essential. By working together, we can ensure that AI is developed and used in a way that benefits society as a whole.