The topic of artificial intelligence (AI) is on every conference agenda. How is it really being used in oncology? Oncologists see a completely different value and utility for AI applications than do managed care and employers. What are the issues and what guardrails and transparency will be needed in 2025 and beyond?
Algorithms and AI
Many state laws restrict insurers’ abilities to make medical necessity determinations using their own clinical criteria. State laws do not apply to self-insured employers. Utilization management processes have been implicated in lawsuits against insurers involving the improper use of algorithms or AI to make claims determinations.
Algorithms are a series of human-provided instructions that are activated in response to certain situations and change only when a person changes the algorithm. AI is made up of several algorithms, is not completely dependent on human intervention to change its algorithms, and is able to adapt itself to situations based on available data.
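The distinction can be illustrated with a toy sketch (all names, rules, and numbers here are hypothetical, not drawn from any payer system): a fixed algorithm encodes rules a human wrote and changes only when a human edits it, while an adaptive model derives its behavior from the data it is given.

```python
# A fixed, human-written algorithm: its behavior changes only when a
# person edits the rule. (Threshold is an illustrative placeholder.)
def rule_based_flag(length_of_stay_days: int) -> bool:
    """Flag any stay longer than a hard-coded 3-day threshold."""
    return length_of_stay_days > 3


# A simple adaptive approach: the threshold is re-derived from observed
# data, so behavior shifts as the data shifts, without a human rewriting
# the rule. (Real AI systems are far more complex; this only shows the
# "adapts from data" property.)
def fit_adaptive_threshold(observed_stays: list[int]) -> float:
    """Set the threshold from the data itself (here, the mean stay)."""
    return sum(observed_stays) / len(observed_stays)


stays = [2, 3, 4, 10, 2]
threshold = fit_adaptive_threshold(stays)  # changes whenever the data changes

def adaptive_flag(length_of_stay_days: int) -> bool:
    return length_of_stay_days > threshold
```

The point of the contrast: the first function's behavior is fully specified in advance, while the second's depends on the data available at the time it is fit.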
The common issue in lawsuits against health plans over algorithm and AI use is the removal of human review of the appropriateness of care. Although these suits mainly focus on algorithms today, they seem to be an indicator of future health-plan litigation involving AI.1
Growth of AI in the Public Domain
In 2017, Google introduced a transformer model that could figure out relationships between billions of text examples and predict the next text in the sequence. In 2020, an AI startup released GPT-3, a generative pretrained transformer model, which absorbed public domain data in an unequaled manner.
In December 2022, OpenAI released ChatGPT, a chatbot that exploded onto the scene, reaching 100 million active users within 2 months. Microsoft invested $10 billion in OpenAI and has already integrated its technology into Windows and the Bing search engine. By March 2023, OpenAI released GPT-4, reportedly trained on 500 times more data than the earlier model.
You have probably already used ChatGPT, perhaps without realizing it. It is available for free at https://openai.com/blog/chatgpt. You ask it a question, and it gives you an answer.
Algorithms, AI, and Cancer Care
Cancer is a leading driver of morbidity and mortality across the world, and an estimated 30.2 million new cancer cases will be diagnosed in 2040. Despite significant improvements in diagnosis and management, we need to do better, which will require working smarter, faster, and more effectively with shrinking financial and labor resources. The large volume of data related to diagnosis, the development of therapies, patient outcomes, imaging, laboratory results, and clinical research is rapidly outstripping the capacity of the human brain to process it. It is possible that AI may evolve into an important tool to distill this information into relevant patterns for the benefit of individual patients and providers.
Ways AI Is Already Helping Cancer Detection
All signs point toward a near future where AI models play an increasingly present role in combating cancer, from the earliest moments to late-stage treatment.
A new medical AI model has been claimed to accurately predict the activity of genes at the cellular level, spotting cancerous cells before they metastasize. Researchers trained the model to detect signs of 19 different types of tumors from patient images. The model was reportedly able to detect cancer and predict a tumor’s molecular profile based on cellular features in its training data, and it could also forecast a patient’s survival across different cancer types. The model, called CHIEF (Clinical Histopathology Imaging Evaluation Foundation), was trained on 60,000 whole-slide images of tissue from lungs, prostates, colons, and other organs.2
The promise of AI for cancer treatment broadly falls into several categories: prediction, detection, drug discovery, and treatment implementation. A study published in Nature Medicine involving nearly 500,000 patients in Germany concluded that doctors using an AI detection model confirmed more cases of breast cancer than doctors acting on their own. Specifically, doctors using the AI achieved a cancer detection rate 17.6% higher than those who didn’t. The FDA has also approved marketing for AI software designed to help identify signs of prostate cancer.3
A separate AI model created by researchers at the National Institutes of Health called LOgistic Regression-based Immunotherapy-response Score (LORIS) demonstrated the ability to predict certain groups of cancer patients who might benefit best from certain immunotherapy treatments.4
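As its name indicates, LORIS is built on logistic regression: clinical and genomic features are combined in a weighted sum and passed through a sigmoid to yield a score between 0 and 1. The sketch below shows only that general mechanism; the feature names and coefficients are invented placeholders, not the published model's actual parameters (those are in the cited paper).

```python
import math

# Hypothetical feature weights for illustration only. A positive weight
# pushes the score up; a negative weight pushes it down.
WEIGHTS = {"tmb": 0.8, "albumin": 0.5, "nlr": -0.6}
BIAS = -1.0  # intercept (also illustrative)


def response_score(features: dict[str, float]) -> float:
    """Logistic-regression score in (0, 1): weighted sum -> sigmoid."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# Two hypothetical patients; a higher score would suggest a greater
# likelihood of benefiting from immunotherapy under this toy model.
patient_a = {"tmb": 1.2, "albumin": 0.9, "nlr": 1.1}
patient_b = {"tmb": 2.4, "albumin": 0.9, "nlr": 1.1}  # higher tumor mutational burden
```

Because the score is a monotonic function of the weighted sum, patient_b (identical except for a higher positively weighted feature) necessarily scores higher than patient_a.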
A diagnosis still requires a human doctor who can review the evidence and draw an expert conclusion based on years of real-world experience. We are already living in a world where doctors can use these tools to bolster their own abilities. It is less clear, though, whether AI will ever be reliable enough to remove doctors from that dynamic entirely.5
As We Learn, Warnings Increase
There is a risk of placing too much faith in AI screening and detection tools too quickly. Several of the early clinical application models are in research phases and will require more testing before they are deployed in healthcare facilities at scale. There’s also the risk of an opportunist taking advantage of the overly broad umbrella term “AI” to pitch far less tested models as more effective than they actually are.
There are already numerous cases of people receiving wrong and potentially dangerously incorrect diagnoses after interacting with popular large language models. One study published in JAMA Pediatrics last year found that OpenAI’s ChatGPT incorrectly diagnosed 83% of pediatric case studies it was presented with.6 Models like these are also prone to occasionally hallucinating false facts and doing so with a confident tone.
Guidance Is Being Developed
In 2023, the American Medical Association issued “Principles for Augmented Intelligence Development, Deployment, and Use.”7 In May 2024, the American Society of Clinical Oncology published “Principles for the Responsible Use of Artificial Intelligence in Oncology.”8
Federal and State Regulations Slowly Evolving
At the federal level, most AI-related policies are coming from the Centers for Medicare & Medicaid Services (CMS). The April 12, 2023, CMS Medicare Program Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program Final Rule, in its provisions on the use of AI in utilization management and prior authorization processes, included guidance against using algorithms or AI tools that do not account for a patient’s individual circumstances. An October 30, 2023, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence required the Department of Health and Human Services to develop a strategic plan on the deployment of AI in the health sector and an AI assurance policy for the evaluation of AI-enabled healthcare tools.
The January 17, 2024, CMS Interoperability and Prior Authorization Final Rule addressed payer/provider interoperability for prior authorization processes by January 2027; it allows AI to be used to meet the new rules but requires that providers be properly involved in decision-making. On February 6, 2024, a CMS Frequently Asked Questions document offered further guidance, noting that plans must base coverage decisions on an individual patient’s condition, supported by medical notes, patient history, and the patient’s provider recommendations.9 Despite much discussion, only a handful of states passed regulations related to AI and algorithms in 2024: New York, Utah, Colorado, Illinois, and California. The general tone of state regulation focused on consumer protection.10
Use of AI and Algorithms in Healthcare Can Be Dangerous
AI tools could help solve workforce challenges, but implementation can be difficult. The University of Illinois Hospital and Health Sciences System was testing an AI-backed tool that drafts responses to patient messages. A patient misspelled the name of a medication, and the mistake led the AI to give side effects for a drug the patient wasn’t using; a nurse forgot to double-check the response. It almost killed the patient, and it happened on day 1. Administrative or operational products, such as ambient scribes or revenue cycle management tools, may be a safer place to start.12
Unfolding Investigations Into Use of AI and Algorithms by Payers
The Senate published a detailed report, “Refusal of Recovery: How Medicare Advantage Insurers Have Denied Patients Access to Post-Acute Care.”13 The October 17, 2024, report focused on the 3 largest Medicare Advantage (MA) insurers: UnitedHealthcare, Humana, and CVS (Aetna), and on how algorithms and AI were developed and used in prior authorization and utilization management to boost profits by targeting costly yet critical stays in post-acute care facilities, endangering the health of vulnerable MA patients. This chilling report addressed an Optum-owned algorithm (often called NaviHealth or nHealth), which was described in the August 2023 issue of Oncology Practice Management, “Artificial Intelligence Is Here: What You Need to Know.”
The National Association of Insurance Commissioners (NAIC) recently issued its own review of the threats of AI and algorithms, “Consumer Health Advocacy at the NAIC: Artificial Intelligence in Health Insurance: The Use and Regulation of AI in Utilization Management,” in November 2024.14 The NAIC findings suggest that “while AI presents opportunities for plan efficiency, unregulated use of AI could also exacerbate existing bias and discrimination, particularly for marginalized and disenfranchised communities who already experience disparate health care access challenges…AI is regularly used by health insurance plans to conduct UM activities. Proponents cite reductions in administrative burden and expedited approvals...However, there are risks that must be considered, such as the exacerbation of existing biases, prioritization of misaligned incentives, and use of technologies outside their original use case or design leading to unintended harm.”
ProPublica has issued 2 very disturbing reports on its investigations into payer abuse of algorithms and AI. The first focused on companies owned or used by insurers (including Anthem), such as EviCore and Carelon.15 EviCore (owned by Express Scripts) was called out for using a “dial” algorithm process to increase or decrease denial rates and medical review based on profitability and control of medical claims rather than patient need and medical necessity. The report also cited the findings of a 2018 audit by the Centers for Medicare & Medicaid Services: Health Care Service Corporation (HCSC), a Blue Cross Blue Shield insurer, had hired EviCore to review prior authorizations, and the audit found that EviCore played a role in making “inappropriate denials” for 30 patients because it failed to keep the cancer guidelines in its policies and algorithms up-to-date. As a result, EviCore retrained its staff; HCSC did not respond for comment. The report also called out a lawsuit against Carelon (owned by Elevance Health, formerly Anthem). In 2022, Carelon settled the suit for $13 million; it alleged that the company, then called AIM, had used a variety of techniques to avoid approving coverage requests. Among them: the company set its fax machines to receive only 5 to 10 pages, and when doctors faxed prior authorization requests longer than the limit, company representatives would repeatedly deny them for failing to have enough documentation.
The second ProPublica report exposed a Cigna algorithm process and was followed by lawsuits in Connecticut and California on behalf of patients affected by it. Cigna built a technology-based system that allows its doctors to instantly reject a claim on medical grounds without opening the patient file, leaving people with unexpected bills, according to corporate documents and interviews with former Cigna officials. Over a period of 2 months, Cigna doctors denied more than 300,000 requests for payment using this method, spending an average of 1.2 seconds on each case, the documents show. Cigna emphasized that its system does not prevent a patient from receiving care; it only decides when the insurer won’t pay. “Reviews occur after the service has been provided to the patient and does not result in any denials of care,” the statement said. Cigna knows that many patients will pay such bills rather than deal with the hassle of appealing a rejection.
The Cigna algorithm, called Procedure to Diagnosis or PxDx, created a list of tests and procedures approved for use with certain illnesses. The system would automatically turn down payment for a treatment that didn’t match one of the conditions on the list. Denials were then sent to medical directors, who would reject these claims with no review of the patient file.16
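The mechanism described above is a simple lookup: each diagnosis maps to a list of approved tests and procedures, and any claim whose procedure does not appear on the list for its diagnosis is flagged for denial. A minimal sketch of that logic follows; the diagnosis and procedure codes are placeholders, not Cigna's actual lists.

```python
# Placeholder diagnosis -> approved-procedure mapping, illustrating the
# PxDx-style lookup described in the ProPublica report. Not real codes.
APPROVED_PROCEDURES = {
    "diagnosis_A": {"proc_1", "proc_2"},
    "diagnosis_B": {"proc_3"},
}


def auto_review(diagnosis: str, procedure: str) -> str:
    """Pay the claim only if the procedure is on the list for the diagnosis.

    Anything else is flagged for denial without any review of the patient
    file, which is the core concern raised in the report.
    """
    approved = APPROVED_PROCEDURES.get(diagnosis, set())
    return "pay" if procedure in approved else "flag_for_denial"
```

The sketch makes the report's point concrete: the decision depends only on a code match, so no clinical judgment about the individual patient ever enters the process.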
What Can Practices Do This Year?
Algorithm and AI use is rapidly moving forward in clinical care, in operations, and in payer management of prior authorization, coverage decisions, and denials. Regulation, guidance, and legislation are not keeping pace with the technology and its applications. Patients with cancer can be helped, but also hurt or abused, by various applications of these advances.
To help protect, support, and defend vulnerable cancer patients, practices can start with these steps:
- Designate an AI lead in your group. Support the time they will need to meet locally and nationally, virtually or in person, to join networking and informational groups.
- As you learn, consider what applications may be good first steps for your patients and team.
- Watch for signs of increasing insurer AI denials, a lack of transparency, or denials that reference comparisons with “like” patients and diagnoses.
- Learn who in your state, town, and federal leadership is involved or interested in AI oversight and monitoring, collaborate, and join forces.
- Read the major federal and national reports on AI referenced in this editorial, as well as AI lawsuits to see how patients were impacted.
- AI may not significantly affect oncology payer coverage now, but watch for lessons in payers’ focus on post-acute care, which was targeted because of its cost outliers. Oncology will soon become a target as well.
Share your thoughts and experiences with me at
References
- Alexander A, Tellner C, Norwood H. Algorithms in private health insurance litigation: the precursor to AI? Bloomberg Law. Accessed January 27, 2025. www.kaufmandolowich.com/news-resources/algorithms-in-private-health-insurance-litigation-the-precursor-to-ai-bloomberg-law-contributors-abbye-e-alexander-esq-christopher-j-tellneresq-henry-e-norwood-esq-8-20-2024/
- Wang X, Zhao J, Marostica E, et al. A pathology foundation model for cancer diagnosis and prognosis prediction. Nature. 2024;634(8035):970-978.
- Eisemann N, Bunk S, Mukama T, et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nat Med. 2025. https://doi.org/10.1038/s41591-024-03408-6
- Chang TG, Cao Y, Sfreddo HJ. LORIS robustly predicts patient outcomes with immune checkpoint blockade therapy using common clinical, pathologic and genomic features. Nat Cancer. 2024;5(8):1158-1175.
- Degeurin M. AI is already changing the ways we fight cancer. Popular Science. January 8, 2025. Accessed January 27, 2025. www.popsci.com/technology/ai-cancer-research/#:~:text=It's%20supercharging%20how%20we%20detect,human%20doctors%20are%20still%20critical.&text=Though%20AI%20still%20can't,to%20make%20them%20more%20effective.
- Barile J, Margolis A, Cason G. Diagnostic accuracy of a large language model in pediatric case studies. JAMA Pediatr. 2024;178(3):313-315.
- American Medical Association. Principles for Augmented Intelligence Development, Deployment, and Use. Accessed January 27, 2025.
- American Society of Clinical Oncology. Six guiding principles for AI in oncology. Accessed January 27, 2025. https://society.asco.org/news-initiatives/policy-news-analysis/asco-sets-six-guiding-principles-ai-oncology
- Silverboard D. Regulation of AI in healthcare utilization management and prior authorization increases. Holland & Knight Alert. Accessed January 27, 2025. www.hklaw.com/en/insights/publications/2024/10/regulation-of-ai-in-healthcare-utilization-management
- Silverboard D. Consumer health advocacy at the NAIC: artificial intelligence in health insurance: the use and regulation of AI in utilization management. Final report, November 2024. Accessed January 27, 2025. www.hklaw.com/en/insights/publications/2024/10/regulation-of-ai-in-healthcare-utilization-management
- National Association of Insurance Commissioners. Artificial Intelligence and Health Insurance Report. Accessed January 27, 2025. https://content.naic.org/sites/default/files/national_meeting/Final-CR-Report-AI-and-Health-Insurance-11.14.24.pdf
- Olsen E. AI could be a game changer, but healthcare needs to be ‘exceedingly careful’. Healthcare Dive. October 28, 2024. Accessed January 27, 2025. www.healthcaredive.com/news/AI-healthcare-implementation-risks-patient-care/730953/
- Senate Homeland Security and Governmental Affairs Committee. Refusal of Recovery: How Medicare Advantage Insurers Have Denied Patients Access to Post-Acute Care. Accessed January 27, 2025. www.hsgac.senate.gov/wp-content/uploads/2024.10.17-PSI-Majority-Staff-Report-on-Medicare-Advantage.pdf
- National Association of Insurance Commissioners. Artificial Intelligence and Health Insurance Report. Accessed January 27, 2025. https://content.naic.org/sites/default/files/national_meeting/Final-CR-Report-AI-and-Health-Insurance-11.14.24.pdf
- Miller T. “Not medically necessary”: inside the company helping America’s biggest health insurers deny coverage for care. ProPublica. October 26, 2024. Accessed January 27, 2025. www.propublica.org/article/evicore-health-insurance-denials-cigna-unitedhealthcare-aetna-prior-authorizations
- Rucker P. How Cigna saves millions by having its doctors reject claims without reading them. ProPublica. March 25, 2023. Accessed January 27, 2025. www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
