
The MEAT Criteria Audit Simulation That Reveals Exactly How Bad Your Coding Really Is


Your coding team says they understand MEAT (Monitor, Evaluate, Assess, Treat) criteria. They’ve been through training. They pass your internal QA. Your capture rates are decent. You assume your MEAT criteria coding is solid.

Then you run a RADV audit simulation using actual CMS audit standards, and you discover that 18% of your HCCs don’t have adequate MEAT criteria documentation. That’s not a minor problem. That’s a potential multi-million-dollar recoupment risk.

Most organizations don’t know how bad their MEAT criteria coding is because they’ve never tested it properly. Here’s how to run an audit simulation that reveals the truth.

The Problem with Internal QA

Your internal QA team reviews charts and validates that MEAT criteria are present. They’re using your organization’s standards for what constitutes adequate MEAT. Those standards are probably more lenient than what CMS auditors will accept.

Internal QA reviewers know your providers. They understand your documentation patterns. They give benefit of the doubt in ambiguous situations. “Dr. Johnson always evaluates this thoroughly even when she doesn’t document every detail, so we’ll pass it.”

CMS auditors don’t know Dr. Johnson. They don’t give benefit of the doubt. They apply documentation standards literally. If the MEAT criteria aren’t explicitly documented in the encounter note, the HCC doesn’t survive the audit.

The gap between your internal standards and CMS audit standards is where your risk lives.

Setting Up the Simulation

A proper RADV audit simulation requires treating your own charts like a hostile auditor would treat them. Here’s the process.

Pull a random sample of 200 member-year records from last year. Use true random sampling, not convenience sampling. If you cherry-pick charts that look good, you’re defeating the purpose.
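A minimal sketch of the draw, assuming your member-year IDs can be exported to a flat file (the file name and format here are assumptions, not anyone’s standard). Seeding the generator makes the sample reproducible, which is useful evidence that it really was random:

```python
import random

# Hypothetical export: one member-year ID per line. The file name and
# format are assumptions; use whatever your risk adjustment system emits.
with open("member_years_2023.txt") as f:
    population = [line.strip() for line in f if line.strip()]

# Fixed seed: anyone can re-run the draw and get the identical sample,
# which documents that the 200 records were not cherry-picked.
rng = random.Random(2024)
sample = rng.sample(population, k=200)

for member_year in sorted(sample):
    print(member_year)
```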

For each member, identify all HCCs that were submitted for that year. Pull the source documentation that supported each HCC. This is exactly what happens in a real RADV audit.
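One way to assemble that audit packet, sketched under the assumption of a flat extract of submitted HCCs; the tuple layout and every ID below are purely illustrative:

```python
from collections import defaultdict

# Hypothetical extract of submitted HCCs for the sampled members:
# (member_year_id, hcc_code, supporting_encounter_id). All IDs invented.
submissions = [
    ("M1043-2023", "HCC85", "ENC-7781"),
    ("M1043-2023", "HCC18", "ENC-7781"),
    ("M2210-2023", "HCC136", "ENC-9102"),
]

# Audit packet: for each member-year, every HCC and the encounter notes
# whose documentation must independently support it.
audit_packet: dict[str, dict[str, list[str]]] = defaultdict(lambda: defaultdict(list))
for member_year, hcc, encounter in submissions:
    audit_packet[member_year][hcc].append(encounter)

for member_year, hccs in audit_packet.items():
    print(member_year, dict(hccs))
```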

Now comes the critical part: have external reviewers audit those HCCs using strict CMS standards. Not your internal team. External reviewers who don’t know your organization and have no incentive to be lenient.

Give them clear instructions: “Validate whether documentation supports each HCC using the strict interpretation of MEAT criteria. If you need to make assumptions or give benefit of the doubt, the HCC fails.”

What the Simulation Reveals

When organizations run rigorous audit simulations, they discover patterns they didn’t know existed.

Chronic kidney disease is often coded based on outdated GFR values. The encounter note mentions CKD but the most recent GFR is from 18 months ago. There’s no current evaluation of kidney function. Under strict MEAT standards, that fails.

Diabetes with complications is coded but the complication isn’t actually evaluated during the encounter. The note says “diabetes with neuropathy” in the assessment but doesn’t document any neuropathy symptoms, exam findings, or treatment modifications. That’s just copying forward a diagnosis, not demonstrating active management.

CHF is coded but the note doesn’t document volume status, functional capacity, medication management, or any evidence that heart failure was actually evaluated. Just “CHF” in the problem list and “continue current medications.” That fails strict MEAT criteria.

These patterns are invisible in internal QA because your team has been trained to accept this level of documentation. The audit simulation reveals that CMS won’t accept it.

The Error Rate That Matters

In a RADV audit, CMS calculates an error rate based on the percentage of sampled HCCs that don’t have adequate documentation. If your error rate exceeds certain thresholds, they can extrapolate and demand recoupment across your entire membership.
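The core arithmetic is simple, even though CMS’s actual extrapolation methodology is payment-weighted and considerably more involved. A deliberately naive sketch, with every number invented:

```python
# Deliberately naive sketch -- real RADV extrapolation is payment-weighted
# and methodologically more complex. Every number here is invented.
sampled_hccs = 450       # HCCs reviewed across the 200 member-year sample
failed_hccs = 54         # HCCs without adequate MEAT documentation

error_rate = failed_hccs / sampled_hccs              # 0.12 -> 12%

annual_hcc_revenue = 80_000_000                      # hypothetical plan-wide figure
naive_exposure = error_rate * annual_hcc_revenue     # extrapolated across membership

print(f"Simulated error rate: {error_rate:.1%}")           # 12.0%
print(f"Naive extrapolated exposure: ${naive_exposure:,.0f}")  # $9,600,000
```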

When organizations run audit simulations, error rates typically fall into three categories:

Below 5%: You’re doing well. Some errors are inevitable. Below 5% suggests your MEAT criteria coding is solid.

5-10%: You’ve got problems but they’re probably not catastrophic. Focus on the specific conditions where errors are concentrated and fix them systematically.

Above 10%: You’re in trouble. This error rate would trigger significant recoupment in a real audit. You need immediate systematic remediation.
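A trivial sketch that encodes these three bands. The cutoffs are the ones above; exactly where a rate of precisely 5% or 10% lands is a boundary convention you’d fix in your own audit protocol:

```python
def risk_band(error_rate: float) -> str:
    """Map a simulated MEAT error rate onto the three bands above.

    Boundary handling (is exactly 5% 'doing well' or 'problems'?) is a
    convention to set in your own audit protocol.
    """
    if error_rate < 0.05:
        return "solid: maintain current process"
    if error_rate <= 0.10:
        return "problems: fix the conditions where errors concentrate"
    return "trouble: immediate systematic remediation"

print(risk_band(0.12))  # trouble: immediate systematic remediation
```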

Most organizations running their first rigorous audit simulation discover they’re in the 8-15% error range. That’s fixable, but it requires acknowledging the problem and investing in solutions.

The Condition-Specific Pattern Analysis

The real value of audit simulation isn’t just knowing your overall error rate. It’s understanding which conditions are driving errors.

Run your error analysis by HCC category. You might discover that diabetes coding is solid (2% error rate) but your CHF coding is terrible (25% error rate). That tells you exactly where to focus improvement efforts.
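With the simulation results in a flat table, one row per reviewed HCC, this is a short groupby. The column names and rows below are invented for illustration:

```python
import pandas as pd

# Hypothetical audit-simulation export: one row per reviewed HCC.
results = pd.DataFrame({
    "hcc_category": ["Diabetes w/ comp.", "CHF", "CHF", "CKD",
                     "Diabetes w/ comp.", "CHF", "CKD"],
    "provider":     ["Practice A", "Practice B", "Practice B", "Practice A",
                     "Practice A", "Practice B", "Practice B"],
    "failed":       [False, True, False, True, False, True, False],
})

# Error rate and review volume per HCC category, worst first. Swapping
# "hcc_category" for "provider" (or a coder column) surfaces the
# practice- and coder-level patterns discussed below.
by_condition = (
    results.groupby("hcc_category")["failed"]
           .agg(error_rate="mean", hccs_reviewed="count")
           .sort_values("error_rate", ascending=False)
)
print(by_condition)
```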

You might find that certain provider practices have much higher error rates than others. That suggests documentation problems specific to those practices, which you can address through targeted education.

You might discover that certain coders are more lenient than others in applying MEAT standards. That reveals the need for coder calibration and more consistent QA standards.

Without this granular analysis, you’re trying to fix everything at once. With it, you can prioritize the highest-risk areas.

Closing the Gap

Once the audit simulation reveals your MEAT criteria gaps, fixing them requires three parallel efforts.

Immediate remediation: Go back through recently coded charts in the high-error categories and revalidate them. Delete HCCs that won’t survive audit. Yes, this reduces your current risk scores. Better to reduce them voluntarily than have CMS force recoupment later.

Provider education: Share specific examples from the audit simulation with providers. “Here’s a CHF note that failed MEAT criteria audit. Here’s what adequate CHF documentation looks like. This is the standard CMS will hold us to.”

Process changes: Build MEAT criteria validation into your coding workflow. If a coder assigns an HCC, they need to document which specific MEAT elements they identified in the note. That documentation becomes your audit trail.
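One lightweight way to enforce that, sketched as a data structure. The class, field names, and pass rule below are assumptions for illustration, not a standard schema:

```python
from dataclasses import dataclass, field

# The four MEAT elements: Monitor, Evaluate, Assess, Treat.
MEAT_ELEMENTS = {"monitor", "evaluate", "assess", "treat"}

@dataclass
class HccAssignment:
    """Hypothetical record a coder completes when assigning an HCC."""
    member_id: str
    hcc_code: str
    meat_elements: set[str] = field(default_factory=set)    # elements the coder identified
    note_excerpts: list[str] = field(default_factory=list)  # where each appears in the note

    def audit_ready(self) -> bool:
        # Require at least one recognized MEAT element plus the note
        # excerpt that evidences it -- this becomes the audit trail.
        return bool(self.meat_elements & MEAT_ELEMENTS) and bool(self.note_excerpts)

claim = HccAssignment(
    member_id="M1043",
    hcc_code="HCC85",
    meat_elements={"evaluate", "treat"},
    note_excerpts=["Volume status stable; continue furosemide 40 mg daily."],
)
assert claim.audit_ready()
```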

Most organizations resist running rigorous RADV audit simulations because they’re afraid of what they’ll find. That’s backward. Finding problems through internal simulation is infinitely better than finding them through CMS audit. Internal simulation gives you time to fix problems. CMS audit gives you recoupment demands.

If you haven’t run a true RADV audit simulation using external reviewers and strict standards in the past 12 months, you don’t actually know how good your MEAT criteria coding is. You’re operating on hope instead of data. Stop hoping and start testing.