AI Regulation Is Under Threat. Here’s What It Means for People with Disabilities 

AI systems already influence who receives housing, healthcare, and public benefits. A new executive order limiting state oversight poses serious risks for people with disabilities who are disproportionately harmed by automated decision-making.

December 10, 2025

Technology can help expand access and independence for people with disabilities – but when it is built or deployed without their input, it can increase discrimination and cut people off from essential services. As AI becomes more embedded in decisions about healthcare, housing, employment, education, and public benefits, policy debates have struggled to keep pace.

That challenge became even more urgent with President Trump’s new executive order, expected to be signed this week, which would create a single federal standard for AI and prevent states from adopting their own regulatory safeguards. As described by the White House, the order would limit states’ ability to craft stronger protections and concentrate authority in the federal government. If upheld, this approach would sideline local innovation and undermine transparency, accountability, and civil rights enforcement. For people with disabilities – who are often disproportionately harmed by automated systems – it raises profound questions about who will ensure AI is accessible and subject to meaningful oversight.

WHAT AI MEANS

To understand why this matters, we first need to clarify what “AI” – a term that’s been used loosely and inconsistently – actually is. The National Institute of Standards and Technology (NIST) has defined AI as “the capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement.” California state law defines AI as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

HOW ALGORITHMS SHAPE ACCESS TO HEALTHCARE AND DISABILITY SERVICES

Some predictive algorithms take the form: If you are like X, we may not give you a job. If you are like Y, then you may lose custody of your child. If you are like Z, then you can only receive a certain number of hours of home healthcare. Many of these are not much different from the tools and equations used in statistics for decades, and they can be manual or automated, analog or AI. Other algorithms draw on large datasets, using similar past inputs to predict what an output might be. These differences are important, especially when we discuss how existing federal and state laws apply and how to regulate these wide-ranging technologies.
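To make that distinction concrete, here is a minimal, purely illustrative Python sketch – every name, field, number, and threshold in it is hypothetical, not drawn from any real system – contrasting a fixed rule-based determination with a data-driven prediction:

```python
# Illustrative only: hypothetical rule-based vs. data-driven determinations.

# A manual/analog-style rule: a fixed cutoff, like the statistical tools
# used for decades. "If you are like Z, you get only N hours."
def rule_based_hours(assessment_score: float) -> int:
    """Cap home-healthcare hours using a fixed, hand-written cutoff."""
    return 20 if assessment_score < 50 else 40

# A data-driven predictor: infers an output from patterns in past cases
# rather than from any individualized judgment.
def data_driven_hours(person: dict, past_cases: list[dict]) -> float:
    """Predict hours by averaging outcomes of the most 'similar' past cases."""
    def similarity(a: dict, b: dict) -> float:
        shared = (set(a) & set(b)) - {"hours"}
        return sum(1.0 for k in shared if a[k] == b[k])

    nearest = sorted(past_cases, key=lambda c: -similarity(person, c))[:2]
    return sum(c["hours"] for c in nearest) / len(nearest)

past = [
    {"age_band": "65+", "lives_alone": True, "hours": 12},
    {"age_band": "65+", "lives_alone": True, "hours": 10},
    {"age_band": "18-64", "lives_alone": False, "hours": 35},
]
applicant = {"age_band": "65+", "lives_alone": True}
print(rule_based_hours(assessment_score=42))  # -> 20 (fixed rule)
print(data_driven_hours(applicant, past))     # -> 11.0 (average of lookalikes)
```

The second function never asks what this particular applicant actually needs; it only asks what happened to people who resemble them – which is why the question of individualized review recurs throughout the examples below.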

Collectively, these technologies are reshaping the landscape of U.S. healthcare, often with the promise of increasing efficiency and reducing human bias. However, the unchecked and unregulated integration of these tools – from determining patient eligibility for programs like Medicare and Medicaid by government agencies, to determining levels of care and treatment coverage by private insurers – presents a threat to patients’ rights and access to care. And when AI systems malfunction, the errors often go undetected or unexplained, making it difficult for patients to understand, question, or appeal decisions that affect their care. For people with disabilities, whose right to live independently and receive community-based services is protected by civil rights law, algorithmic decision-making can quietly erode hard-won protections.

The promise of AI in healthcare is that it can reduce or eliminate error and subjectivity in its determinations and increase efficiency. But these systems are not neutral or infallible; they are built and trained by humans and operate within structures shaped by human judgment – which invariably carry human biases that show up in algorithmic outputs. Algorithms are trained on datasets; sometimes that data underrepresents a particular group, as is often the case for people with disabilities, and other times it overrepresents people with disabilities due to longstanding systemic and institutional inequities. As a result, the software can reproduce and amplify existing discrimination. When an algorithm flags a patient for reduced services, for example, it is doing so based on patterns in historical data – not individualized medical judgment. Because healthcare is a business, these systems can also embed profit-driven incentives into clinical decisions, which in turn can lead to inappropriate denials of coverage and medically necessary care.
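As a rough illustration of that feedback loop – using hypothetical numbers, not data from any actual insurer – here is a toy sketch of how a model “trained” on historically skewed denials simply reproduces the skew:

```python
# Illustrative only: hypothetical data showing how historical bias
# carries through into an algorithm's outputs.

# Past coverage decisions, where one group was denied more often for
# systemic reasons unrelated to medical need.
history = [
    {"group": "A", "denied": False}, {"group": "A", "denied": False},
    {"group": "A", "denied": True},
    {"group": "B", "denied": True},  {"group": "B", "denied": True},
    {"group": "B", "denied": False},
]

def predicted_risk(group: str) -> float:
    """'Train' on history: the predicted denial risk is just the past rate."""
    cases = [c for c in history if c["group"] == group]
    return sum(c["denied"] for c in cases) / len(cases)

# The 'model' flags anyone whose predicted risk exceeds a cutoff, so the
# historically over-denied group keeps getting flagged for reduced services.
for g in ("A", "B"):
    risk = predicted_risk(g)
    print(g, f"predicted risk = {risk:.2f}", "flagged" if risk > 0.5 else "ok")
# A predicted risk = 0.33 ok
# B predicted risk = 0.67 flagged
```

Nothing in this sketch asks whether the past denials were justified; the history itself becomes the standard – the pattern-over-judgment problem described above.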

These systemic failures are evident in both the private and public sectors, and they have had devastating consequences for people’s health.

  • In North Carolina, an algorithm denied services that an individual with an intellectual disability had relied on for years to maintain their community placement (and avoid institutionalization), even though their needs had remained constant.
  • Private insurers, including United Healthcare and Humana, used an AI tool called nH Predict to “predict” discharge dates for patients receiving post-acute care through Medicare Advantage. This system resulted in severely ill patients, such as a woman paralyzed by a stroke or a legally blind man facing heart and kidney failure, being granted a fraction of the rehabilitation time they needed. Employees reportedly risked discipline if they deviated from the algorithm’s “target” date.
  • Optum, UnitedHealth Group’s behavioral health subsidiary, used a bundle of algorithms called ALERT to identify and flag therapy “overuse.” This system incentivized care advocates with bonuses for handling cases quickly and curtailing therapists’ treatment plans, resulting in service denials and reportedly causing some Optum-insured patients to be hospitalized after losing access to therapy.

The Bazelon Center has been looking into some of these tools being rolled out in California to see how they are affecting people with disabilities and to help develop localized, community-responsive advocacy strategies that reduce harm and establish guardrails. People with intellectual and developmental disabilities and psychiatric disabilities have reported errors in their medical records, unexplained Medi-Cal service cuts, and confusing appeals processes, leaving them without the supports they need to live independently.

WHERE FEDERAL AND STATE POLICY IS FALLING SHORT

As concerns about algorithmic decision-making grow, lawmakers at both the state and federal levels have taken initial steps toward regulation. In Congress, hearings have examined the impact of healthcare algorithms, but meaningful legislative action has yet to follow.

Some states have taken more concrete action. Colorado passed a broad AI law requiring companies to evaluate high-risk systems for bias, but its implementation has already been delayed after intense pushback from the tech industry. In January 2025, a California state law went into effect that regulates health insurers’ use of algorithms like ALERT for utilization management. The law mandates that utilization management decisions be based on a patient’s individual clinical information, not just a comparison against a group dataset. A licensed health care professional – not the tool – must also make the final decision about whether treatment is medically necessary. While promising, the law is fairly narrow, and enforcement remains uncertain.

Meanwhile, the federal executive branch is moving aggressively toward deregulation. President Trump’s executive order, if upheld, would not only block states from adopting their own protections; it also warns that federal AI funds may be withheld from states that implement “burdensome” rules. This approach sidelines state innovation and removes an important layer of oversight.

The choices policymakers make today will determine whether these technologies become a force that supports autonomy and community living or one that deepens the very systems of exclusion that disability rights laws were designed to dismantle. Without strong guardrails, AI will continue to make life-altering decisions in ways that are invisible, difficult to challenge, and potentially discriminatory. Where we go from here will require deeper collaboration and community engagement, a far greater role for people with disabilities in the development, implementation, and regulation of these tools, and clear safeguards that reflect the lived experiences of the people most affected by them.

States have long played a critical role in advancing civil rights protections, and limiting their ability to regulate AI removes a vital layer of accountability from systems that make life-altering decisions. Through disability-led advocacy, cross-movement partnerships, and sustained government oversight, we must ensure that AI strengthens, rather than undermines, the promise of community integration and self-determination for all people with disabilities.