

What plaintiffs and attorneys need to know about algorithm-based claim evaluations
As insurers adopt artificial intelligence to evaluate personal injury claims, concerns are mounting over whether these systems fairly value injuries—or automate bias. While AI can speed up processing and reduce costs, plaintiffs risk receiving lowball offers or being misjudged by opaque algorithms.
You might not actively seek out artificial intelligence (AI) in your daily life. ChatGPT, Gemini, or others might not be your style, and you may prefer a more traditional approach to research and task management.
But you’re in an AI world now, and there’s no avoiding it—whether you like it or not. Even if you choose not to seek out AI for your personal use, it’s being used in matters that affect you, and you might not even know when or why.
AI is rapidly reshaping the insurance industry, including how personal injury claims are handled. As insurers seek to process claims more efficiently and reduce operating costs, AI-powered tools have become central in evaluating personal injury claims.
These tools analyze medical records, estimate damages, and even predict litigation outcomes. This can accelerate claims decisions, but attorneys and injured plaintiffs have concerns.
Is AI fairly valuing human suffering? Or is it automating bias and undercutting valid claims?
How is AI used in personal injury insurance claims?
Over the past decade or so, major insurers have been adopting AI-driven platforms to streamline claims management. They’re using machine learning algorithms, natural language processing, and predictive analytics to perform a variety of functions (sketched in simplified form after this list), including:
- Evaluating the severity of injuries, based on information in medical records and claim forms
- Comparing claims to historical data to estimate “reasonable” settlement values
- Predicting whether a claimant will likely accept a settlement offer or pursue litigation
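To make that concrete, here is a deliberately simplified Python sketch of how such a scoring pipeline might anchor an offer to historical settlements and a litigation-propensity score. It is not any insurer’s actual system; every field name, dollar figure, and weight below is hypothetical.

```python
# Toy illustration of how an insurer's model might "value" a claim by
# comparing it to historical settlements. All data, field names, and
# weights here are hypothetical; real systems are proprietary and far
# more complex.
from dataclasses import dataclass
from statistics import median

@dataclass
class Claim:
    injury_type: str         # e.g. "soft_tissue", "fracture"
    medical_costs: float     # billed treatment costs pulled from records
    lost_wages: float        # documented wage loss
    attorney_represented: bool

# Hypothetical historical settlements the model was "trained" on.
HISTORY = [
    ("soft_tissue", 12_000.0), ("soft_tissue", 9_500.0),
    ("fracture", 48_000.0), ("fracture", 61_000.0),
]

def estimate_settlement(claim: Claim) -> float:
    """Anchor the offer to the median of 'similar' past claims,
    then add documented economic damages."""
    similar = [amount for kind, amount in HISTORY if kind == claim.injury_type]
    anchor = median(similar) if similar else 0.0
    return anchor + claim.medical_costs + claim.lost_wages

def litigation_propensity(claim: Claim) -> float:
    """Crude stand-in for a litigation-risk score: represented claimants
    are assumed more likely to sue, so their offers are trimmed less."""
    return 0.6 if claim.attorney_represented else 0.2

claim = Claim("soft_tissue", medical_costs=4_200.0, lost_wages=1_800.0,
              attorney_represented=False)
# Offers to "low-risk" claimants are discounted more aggressively.
offer = estimate_settlement(claim) * (0.75 + 0.25 * litigation_propensity(claim))
print(f"Algorithmic offer: ${offer:,.0f}")
```

Notice that nothing in this toy model even attempts to price pain and suffering, which is exactly the gap discussed in the next section.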
Often, these processes run with little to no human review. The insurer benefits from efficiency, consistency, and reduced overhead (i.e., fewer employees), but the claimant rarely shares in those benefits.
How AI insurance processes can affect a claim
- Undervaluing pain and suffering
An AI system can reasonably analyze quantifiable data: it can calculate or estimate treatment costs or lost wages. Pain and suffering, however, are subjective. An AI system has difficulty putting a financial value on emotional distress or loss of enjoyment of life, for instance. Using AI for these intangibles, factors that don’t translate easily into an algorithm, can lead to offers that undervalue the true impact of an injury.
- Racial and socioeconomic bias
AI models learn from historical data, including any biases embedded in that data. An insurer that historically paid lower settlements to certain racial or demographic groups might have since changed its policies, but a model trained on that history doesn’t know it. The AI system can keep reproducing outdated patterns that deepen disparities while appearing neutral (see the sketch after this list).
- Lack of transparency
When an AI algorithm drives a claim decision, the claimant (and often their attorney) doesn’t know how the decision was reached. Without that explanation, it’s nearly impossible to challenge the decision or determine whether a mistake was made in the evaluation.
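To illustrate the bias point above, here is a minimal sketch of how a model that simply averages past payouts can carry an old disparity into every new claim. The ZIP codes and dollar amounts are invented, and real systems use far more features, but the mechanism is the same.

```python
# Minimal sketch of bias carried forward by a model that averages past
# payouts. ZIP codes and dollar amounts are invented for illustration.
from statistics import mean

# Historical settlements for effectively identical injuries; one area was
# systematically paid less in the past.
HISTORY = {
    "90001": [8_000, 8_500, 7_900],      # historically underpaid area
    "94105": [14_000, 13_500, 14_200],
}

def model_estimate(zip_code: str) -> float:
    """A 'neutral' looking lookup that simply averages past payouts and so
    reproduces the old disparity for every new claim."""
    return mean(HISTORY[zip_code])

print(model_estimate("90001"))   # ~8,133: the old disparity carries forward
print(model_estimate("94105"))   # ~13,900 for the same injury
```

Because the ZIP code acts as a proxy for race or income, the output looks neutral while quietly encoding the historical disparity.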
Is this just… ok?
No, it’s not. Courts and consumer advocacy groups are beginning to scrutinize the use of AI in insurance. One 2024 California class action, for example, alleged that an insurer’s AI systematically underpaid claims for soft tissue injuries.
Jong, et al. v. Blue Shield of California
A class of plaintiffs filed a lawsuit against Blue Shield in Alameda County. The plaintiffs claimed Blue Shield used an AI system called Claims Data Activator to automatically deny claims for lack of medical necessity. According to the complaint, the system rejected claims without reviewing the actual medical files, discriminating against claimants and violating California’s requirement for a thorough, individualized review.
Prior legal claims related to AI insurance decisions
Jong isn’t the only legal battle involving AI and insurance claims.
Cigna faced a 2023 federal lawsuit in the Eastern District of California. There, the insurer allegedly used an algorithm called PxDx to reject more than 300,000 benefit claims over a two-month period, with reviewing doctors spending as little as 1.2 seconds on each.
Plaintiffs further alleged that Cigna violated the Employee Retirement Income Security Act (ERISA) because the software delegated medical necessity decisions to AI rather than licensed doctors, and that the practice violated California law (Cal. Health & Safety Code § 1367.01(e)), which requires a physician to review denials.
Legal implications for AI in personal injury insurance
- Legal precedent.
These lawsuits matter. When a court allows such a lawsuit to advance, it pushes back against unchecked automated claim denials. That can spur better legislation to handle these issues and deter other insurers from adopting similar practices, since they don’t want to face similar lawsuits.
- Increasing regulation.
California and other states have begun to enact legislation banning AI-only claim denials, reinforcing the need for human review.
Increased scrutiny could require insurers to build human oversight, transparency, and auditability into their AI systems.
As personal injury claims involving AI-based evaluations move forward, plaintiffs will likely focus on the lack of human discretion, algorithmic bias, and opaque decision-making.
In other words, lawmakers are exploring legislation that would require:
- Disclosure when AI is used in claim evaluation
- Human review of any denied or reduced claims
- Audits to test for racial, gender, or economic bias
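The last item, bias audits, is the most mechanical of the three. A minimal sketch of such a check might compare the model’s approval rates across demographic groups and flag large gaps, similar to the “four-fifths” screen used in employment law. The groups and decisions below are invented for illustration.

```python
# Hedged sketch of the kind of bias audit regulators might require: compare
# the model's approval rates across groups and flag large gaps. The groups
# and decisions are invented; a real audit would use actual claim outcomes.
def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

decisions_by_group = {               # True = claim approved by the model
    "group_a": [True] * 85 + [False] * 15,
    "group_b": [True] * 62 + [False] * 38,
}

rates = {group: approval_rate(d) for group, d in decisions_by_group.items()}
baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "REVIEW" if ratio < 0.8 else "ok"   # the common "four-fifths" screen
    print(f"{group}: approval {rate:.0%}, ratio vs. best group {ratio:.2f} -> {flag}")
```

A real audit would also look at offer amounts, denial reasons, and proxies such as ZIP code, not just approval rates.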
How plaintiffs’ attorneys can disrupt the AI-based insurance claims process
In a personal injury insurance claim, your attorney should consider whether it’s necessary to:
- Demand transparency about how your settlement offer was determined;
- Submit detailed narratives and documents that humanize your experience;
- Consult medical or vocational experts to challenge the AI valuation of your claim; and
- File a discovery request about the insurer’s claim evaluation method.
But… there’s also a flip side.
Plaintiffs’ attorneys might begin to use AI, too.
There are tools available that might predict case outcomes based on the jurisdiction and judge, assess jury verdict trends, and model lifetime care costs based on the type and severity of your injury. This type of “plaintiff science” might work to level the playing field if attorneys are equipped with the tools and knowledge to leverage it correctly.
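As one example of what such a tool might look like under the hood, here is a hedged sketch of a lifetime care cost model: project each year’s care cost forward at a medical inflation rate, then discount it back to present value. The function name and every rate and dollar figure are hypothetical; a real life-care plan comes from qualified experts.

```python
# Hedged sketch of a lifetime care cost model, one piece of the "plaintiff
# science" described above. Every rate, year count, and dollar figure is
# hypothetical; a real life-care plan comes from qualified experts.
def lifetime_care_present_value(annual_cost: float,
                                years: int,
                                medical_inflation: float = 0.03,
                                discount_rate: float = 0.04) -> float:
    """Project each year's care cost forward at a medical inflation rate,
    then discount it back to today's dollars and sum the results."""
    total = 0.0
    for year in range(1, years + 1):
        projected = annual_cost * (1 + medical_inflation) ** year
        total += projected / (1 + discount_rate) ** year
    return total

# Example: $25,000 per year of care over 30 years.
print(f"${lifetime_care_present_value(25_000, 30):,.0f}")
```

Even a simple projection like this gives an attorney an independent anchor to set against an insurer’s algorithmic offer.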
AI might improve efficiency in claims processing, but it must not come at the expense of fairness and justice. As the legal field catches up with the rapid pace of technological change, transparency, oversight, and advocacy will be key to ensuring injured individuals receive the compensation they deserve. Plaintiffs and their lawyers must stay alert to both the potential and the pitfalls of AI in personal injury law.

