Imagine a world in which health care companies use AI to compile and analyze individuals' private health data. That world exists today, and these companies argue it serves solely to provide more efficient care. But now imagine a world where that same data, meticulously analyzed by advanced AI systems, could be wielded against you by the very companies that collected it.
It sounds like dystopian fiction.
But today, there is genuine concern and growing debate about whether the information you provide to health insurance companies — information they in fact demand, intended to facilitate access to care and secure approval for necessary treatments — might one day serve as a tool to deny you coverage or inflate your premiums.
Since being signed into law in 2010, the Affordable Care Act (ACA) — a.k.a. Obamacare — has been a safeguard for millions, ensuring that pre-existing conditions cannot be used as a basis to deny coverage or increase premiums. Yet with every election cycle, the political fight reignites, with Republicans seeking to repeal or significantly alter the laws governing the ACA, often by restructuring the law's required Health Insurance Marketplace.
Unfortunately, such changes could strip away protections for individuals with cancer, diabetes, cardiac disease, or other chronic illnesses.
Meanwhile, the healthcare insurance sector is embracing the role of artificial intelligence. Anthem, UnitedHealth Group and Cigna are all now harnessing AI to revolutionize utilization management and review processes. On its own, this is no bad thing. Such advancements could potentially expedite pre-authorizations, better predict patient outcomes and more effectively manage costs. Furthermore, medical privacy laws, such as the Health Insurance Portability and Accountability Act (HIPAA) of 1996, protect our personal health information.
Yet AI-powered innovation could become a double-edged sword. Without guardrails to protect those with pre-existing conditions, the very data that fuels AI management systems could pose a substantial risk to consumers. Companies could be tempted to use private patient data, processed through AI, for risk profiling — a move that could lead to coverage denials or skyrocketing premiums based on an individual's health profile. And even if that data were protected by law, AI is making it possible to identify individuals from just a few snippets of personal data, and with that comes the ability to stratify patients by diagnosis and risk factor simply by associating those identifiers with diagnostic lab and test results.
We’ve been here before. Pre-existing conditions have long been used to deny insurance coverage or charge exorbitant fees for even minimal coverage, effectively locking people out of affordable health care. Indeed, previous battles over pre-existing conditions have shaped the current protections offered by the ACA.
High-risk pools, the primary alternative for those unable to get insurance, were often underfunded and provided limited benefits with higher costs. The passage of HIPAA in 1996 offered some protections for individuals switching jobs. Still, access remained precarious. Passing the ACA, which extended comprehensive protections to everyone, regardless of health history, proved pivotal. No longer could insurers deny coverage or inflate premiums based on a patient’s previous or current conditions.
Despite numerous attempts to dismantle or dilute the ACA, the judicial system — notably in landmark Supreme Court cases such as King v. Burwell and Texas v. United States — has so far ruled to uphold these vital protections. But the lesson is clear. As emerging technologies like AI and data-driven solutions add new layers of complexity, we must remain vigilant in advocating for the welfare of the public.
Healthcare is a landscape of constant evolution, characterized by technological innovation and medical advancement as well as shifts in financing and delivery. Major insurers must balance these developments by prioritizing ethics and transparency in data handling even as they continue to invest in AI.
Robust privacy protocols are not just a legal necessity; they are crucial for maintaining trust between patients and providers. Such measures will ensure that AI’s capabilities enhance patient care and do not become instruments of exclusion.
Richard Cote, MD, is the Edward Mallinckrodt Professor and chair of the department of pathology and immunology at the Washington University School of Medicine in St. Louis and faculty co-director of the university’s Cordell Institute for Policy in Medicine & Law. Mary Mason, MD, is associate director of the Cordell Institute and an adjunct professor of law.