By Bradley Merrill Thompson and Jason E. Christ, Epstein Becker Green

The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. While AI may promote better treatment decisions and streamline coding and claims submission, it also carries risks of unintended bias lurking in the algorithms. AI is trained on data, and to the extent that data encodes historical bias, that bias may cause unintended errors when the algorithm is applied to new patients. The result can be errors in utilization management, coding, billing, and health care delivery.

The following hypothetical illustrates the problem.

A physician practice management services organization (MSO) adopts a third-party software tool to assist its personnel in making treatment decisions for both a fee-for-service population and a Medicare Advantage population for which the MSO is at financial risk. The tool is used for both pre-authorizations and ICD diagnostic coding for Medicare Advantage patients.

The MSO’s compliance officer observes two issues:

1. It appears Native American patients seeking substance abuse treatment are being approved by the MSO’s team far more frequently than other cohorts who are seeking the same care, and

2. Since the deployment of the software, the MSO is realizing increased risk adjustment revenue attributable to a significant increase in rheumatic condition codes being identified by the AI tool.

Though the compliance officer doesn’t have any independent studies to support it, she is comfortable that the program is making appropriate substance abuse treatment and utilization management recommendations because she believes that there may be a genetic reason why Native Americans are at greater risk than others. With regard to the diagnostic coding, she:

1. Is also comfortable with the vendor’s assurances that its software is more accurate than eyes-on coding;

2. Understands that prevalence data suggests that the elderly population in the United States likely has undiagnosed rheumatic conditions; and,

3. Finds through her own investigation that, anecdotally, the software, while perhaps over-inclusive, appears to be catching some diagnoses that the clinician alone could have missed.

Is the compliance officer’s comfort warranted?

The short answer is, of course, no.

There are two fundamental issues that the compliance officer needs to identify and investigate, both related to possible bias. First, is the tool authorizing unnecessary substance use disorder treatments for Native Americans (overutilization) while failing to approve medically necessary treatments for other ethnicities (underutilization)? Overutilization drives health spend and can result in payment errors, while underutilization can result in improper denials, patient harm, and legal exposure. The second issue is that the AI tool may be “finding” diagnostic codes that, while statistically supportable based on the population data the vendor used in its training set, are not supported in the MSO’s own population. This error can result in the submission of unsupported codes that drive risk adjustment payment, which can carry significant legal and financial exposure.
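A compliance team could begin probing the first issue with a simple monitoring check: compare each cohort's authorization approval rate against the overall rate and flag outsized gaps for further review. The sketch below is purely illustrative; the cohort labels, counts, and the 15-point disparity threshold are assumptions for demonstration, not a validated fairness test or a substitute for a proper statistical and clinical audit.

```python
# Hypothetical bias-audit sketch: compare authorization approval rates
# across patient cohorts and flag disparities beyond a chosen threshold.
# All cohort names, counts, and the threshold are illustrative only.

def approval_rate(approved, requested):
    """Fraction of authorization requests approved for a cohort."""
    return approved / requested if requested else 0.0

def flag_disparities(cohort_counts, threshold=0.15):
    """Return cohorts whose approval rate differs from the overall rate
    by more than `threshold` (absolute difference), mapped to the gap."""
    total_approved = sum(a for a, _ in cohort_counts.values())
    total_requested = sum(r for _, r in cohort_counts.values())
    overall = approval_rate(total_approved, total_requested)
    flags = {}
    for cohort, (approved, requested) in cohort_counts.items():
        rate = approval_rate(approved, requested)
        if abs(rate - overall) > threshold:
            flags[cohort] = round(rate - overall, 3)
    return flags

# Illustrative numbers only: (approved, requested) per cohort.
counts = {
    "cohort_a": (90, 100),   # 90% approval
    "cohort_b": (55, 100),   # 55% approval
    "cohort_c": (60, 100),   # 60% approval
}
print(flag_disparities(counts))  # cohort_a exceeds the threshold
```

A flagged cohort is not proof of bias; it is a signal that the tool's recommendations for that group warrant clinical and statistical review, which is exactly the follow-up the compliance officer skipped in the hypothetical. An analogous check could compare the prevalence of rheumatic condition codes the tool identifies against documented prevalence in the MSO's own population.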

 


About the Authors:

Bradley Merrill Thompson is a Member of the Firm at Epstein Becker Green. He counsels software, medical device, and drug companies on a wide range of FDA regulatory issues. Mr. Thompson leads the firm’s AI initiative to comprehensively serve the legal needs of clients that develop or use artificial intelligence tools. To develop a deeper understanding of machine learning algorithms, Mr. Thompson is also completing a Master’s in Applied Data Science at the University of Michigan.

Jason E. Christ is a Member of the Firm at Epstein Becker Green. He applies defense strategies honed in government enforcement actions to safeguard health plan clients’ risk adjustment payment programs. As one of the first attorneys focused on risk adjustment in Medicare Advantage plans, Mr. Christ has transformed how clients organize, monitor, and ultimately defend their programs. Committed to keeping clients ahead of health care regulations, he speaks about fraud and abuse issues to industry groups.