The CBIZ 2024 SOC Benchmark Study analyzed 193 SOC reports across industries. User access reviews ranked as the #2 exception category at 15.6%, trailing only logical and physical access controls at 17.2%. Failure to remove terminated access — a direct consequence of broken access review processes — ranked third at 12%. Add those two together and access management is responsible for nearly one in three SOC 2 exceptions written.
That number is consistent year over year. Not because companies aren't trying. Because the way most companies approach access reviews doesn't match what auditors actually look for when they open the tenant.
This is what auditors look for. And this is where most Okta and Entra ID environments fall short.
"Access reviews are failing not because the process doesn't exist — but because the process that's documented doesn't reflect what's actually happening inside the IdP."
The access review ran. The revocations didn't.
This is the most common pattern. A quarterly access certification was completed on schedule. Reviewers confirmed or flagged access across the user population. Some flags were raised. Most were not acted on before the certification cycle closed.
An auditor asking for access review evidence doesn't just want the campaign completion report. They want to see the follow-through: which access was flagged for revocation, when the revocation was executed, and how quickly. A completed campaign with a 40% revocation follow-through rate is not a clean control. It's a documented gap with timestamps attached.
In Okta, this shows up as accounts where the last access certification action was "revoke" and the account is still active. In Entra ID, it shows up in Access Review history where recommendations were generated but not enforced because the setting to auto-apply reviewer decisions wasn't enabled. Both are the same problem: the review ran, the decision was made, and the system didn't close the loop.
The fix is enforcement, not process. Auto-apply reviewer decisions in Entra ID Access Reviews. Build a revocation SLA into your Okta campaign completion criteria. Don't close a campaign until the revocations are confirmed executed. A review without enforcement is a checkbox, not a control.
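What closing the loop looks like mechanically: compare the decisions exported from a completed campaign against the set of accounts that are still active. The sketch below assumes you have already pulled campaign decisions (from an Okta certification export or Entra Access Review history) into a simple structure; the field names are illustrative, not a vendor schema.

```python
from dataclasses import dataclass

@dataclass
class CertDecision:
    user_id: str
    decision: str  # "approve" or "revoke", as recorded in the campaign export

def revocation_gaps(decisions: list[CertDecision], active_user_ids: set[str]):
    """Return revoke decisions that were never enforced, plus the follow-through rate."""
    revokes = [d for d in decisions if d.decision == "revoke"]
    # The gap: flagged for revocation, but the account is still active in the tenant.
    unenforced = [d for d in revokes if d.user_id in active_user_ids]
    executed = len(revokes) - len(unenforced)
    rate = executed / len(revokes) if revokes else 1.0
    return unenforced, rate
```

A campaign-close gate then becomes a one-liner: don't close until the follow-through rate is 1.0 and the unenforced list is empty.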
The wrong people are reviewing access.
SOC 2 CC6.2 requires that access is reviewed periodically by a person with the authority to make informed decisions about whether that access is appropriate. In practice, access certifications get routed to whoever is listed as the manager in the HR system — which is often not the person who knows what the application does, who needs it, or what level of access is appropriate.
When a reviewer doesn't understand what they're reviewing, they approve everything. A 100% approval rate on an access certification is an auditor's first signal that the review wasn't substantive. Reviewers who approve every access right in their queue, regardless of what the access actually is, are certifying the status quo, not verifying appropriate access.
The Schneider Downs 2024 analysis of recurring SOC 2 exceptions identified this pattern explicitly: access certifications where reviewers are not positioned to assess the appropriateness of the access being certified are treated as a design deficiency, not just an operating gap. A deficiency at the design level requires a program fix, not a quarterly remediation.
For each application in scope, identify the person who knows what appropriate access looks like — typically the application owner or department head, not the org-chart manager. Route certifications to that person. Document the routing logic. An auditor who sees named application owners as reviewers across your campaigns understands that the review had substance.
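You can surface rubber-stamping before the auditor does by computing per-reviewer approval rates from the same campaign export. A minimal sketch, assuming a flat list of (reviewer, decision) pairs; the threshold and minimum queue size are illustrative tuning knobs, not audit standards.

```python
from collections import defaultdict

def rubber_stamp_reviewers(decisions, threshold=0.98, min_items=10):
    """decisions: iterable of (reviewer, decision) pairs from a campaign export.
    Flag reviewers who approved (nearly) everything in a non-trivial queue."""
    counts = defaultdict(lambda: [0, 0])  # reviewer -> [approvals, total]
    for reviewer, decision in decisions:
        counts[reviewer][1] += 1
        if decision == "approve":
            counts[reviewer][0] += 1
    # Small queues are excluded: approving 3 of 3 items is not evidence of anything.
    return {
        reviewer: approved / total
        for reviewer, (approved, total) in counts.items()
        if total >= min_items and approved / total >= threshold
    }
```

Run it each cycle and feed the flagged reviewers into your routing fix: a reviewer who approves everything is usually the wrong reviewer, not a careless one.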
Service accounts are excluded from the scope.
Access reviews in most companies cover human users. The non-human accounts — service accounts, API keys, OAuth application grants, automation identities — get excluded because they're harder to attribute to a human reviewer and harder to revoke without breaking something.
Auditors know this. When they pull the access certification scope and see that service accounts and machine identities are excluded, that's a scope gap that requires a separate control narrative. If you can't certify non-human access on the same cadence as human access, you need a documented compensating control that explains how you know those accounts still have appropriate access. "We haven't looked" is not a compensating control.
With the deployment of Copilot, Power Automate, GitHub Actions, and SaaS-embedded AI agents accelerating, the non-human account population is growing faster than the human one at most companies this size. NIST's 2026 guidance on AI agent identity explicitly calls out that AI agents require the same access lifecycle management as human users: ownership assignment, access reviews, and deprovisioning on termination. If your access review scope doesn't include machine identities, your 2026 SOC 2 evidence package has a gap that wasn't there in 2024.
Start by inventorying every non-human account in your Okta or Entra tenant. Assign a human owner to each one. Build a lightweight annual review process for service accounts that answers three questions: does this account still need to exist, does the access level still match the business need, and who is accountable if it's compromised. Document the output. That's a defensible control.
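The three questions above reduce to a mechanical triage pass over the inventory. A sketch, assuming you have already exported non-human accounts into a simple owner/last-reviewed structure; the field names are hypothetical, not an Okta or Graph API schema.

```python
from datetime import date

def service_account_triage(accounts, today, review_sla_days=365):
    """accounts: list of dicts with 'name', 'owner', and 'last_reviewed' (date or None).
    Returns findings for every account that fails the annual-review control:
    no named owner, never reviewed, or reviewed longer ago than the SLA."""
    findings = []
    for acct in accounts:
        if not acct.get("owner"):
            findings.append((acct["name"], "no named owner"))
        last = acct.get("last_reviewed")
        if last is None:
            findings.append((acct["name"], "never reviewed"))
        elif (today - last).days > review_sla_days:
            findings.append((acct["name"], "review overdue"))
    return findings
```

The output of this pass, with dates and named owners, is exactly the artifact an auditor accepts as a compensating control for non-human identities.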
The evidence doesn't match the policy.
The access review policy says quarterly. The last completed campaign was 14 months ago. The access review policy says reviews are completed within 30 days of campaign launch. The evidence shows campaigns that ran for 60 and 90 days before closing.
Auditors compare policies to evidence. When the evidence doesn't match the policy, the policy is the problem — because the policy is what sets the control expectation. A company that commits to quarterly reviews and completes one per year has documented its own failure. A company with an annual review policy and one completed review has a clean control.
Write your policy to match what you can actually execute, not what sounds most rigorous. An annual review completed on schedule with documented follow-through is a stronger control than a quarterly commitment with spotty execution. If quarterly certifications in Okta are creating reviewer fatigue and campaign abandonment, change the cadence before you change the evidence. Fix the policy first, then run the campaign.
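Checking whether your evidence matches your policy is also something you can do before the auditor does: take the completion date of every campaign and measure the intervals against the cadence the policy commits to. A sketch with an illustrative grace window; adjust both numbers to whatever your policy actually says.

```python
from datetime import date

def cadence_gaps(completion_dates, policy_cadence_days, grace_days=14):
    """completion_dates: campaign completion dates, sorted oldest first.
    Flag every interval between consecutive campaigns that exceeds the
    policy cadence plus a grace window."""
    limit = policy_cadence_days + grace_days
    gaps = []
    for prev, curr in zip(completion_dates, completion_dates[1:]):
        interval = (curr - prev).days
        if interval > limit:
            gaps.append((prev, curr, interval))
    return gaps
```

An empty result is the evidence you want; a non-empty one tells you, in days, how far your execution has drifted from your own policy.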
What compliance automation gets right — and what it misses.
Platforms like Drata, Vanta, and Secureframe do real work. They monitor control state continuously, alert on drift, surface evidence automatically, and connect to your identity provider via API to pull user population data. For a SOC 2 Type I assessment, they reduce the evidence collection burden significantly. That's legitimate value.
What they can't do is fix what's inside the tenant. They can tell you that your access review campaign completion rate is 64%. They can't tell you why 36% of reviewers didn't complete their queues, whether the access that wasn't reviewed is high-risk or low-risk, or what it will take to get your Okta provisioning process to a state where the evidence doesn't require manual explanation.
In early 2026, the compliance automation market went through a significant credibility event when a major provider's audit process came under scrutiny for producing standardized reports that didn't reflect actual control state. The details made clear what practitioners already knew: the gap between a compliance platform's dashboard and a clean auditor opinion lives inside the tenant, not inside the tool. The tool can report what's there. It can't fix it. And it can't tell the difference between a control that's implemented and a control that's implemented correctly.
If you use a compliance automation platform, use it for evidence collection and continuous monitoring. Use a practitioner review — someone who reads Okta system logs and Entra ID audit trails — to assess whether what the platform is reporting reflects a defensible control. The platform answers "do you have an access review process." The practitioner answers "will your access review evidence survive the auditor's follow-up questions."
Access reviews fail SOC 2 audits for the same reasons year after year. Not because the policy doesn't exist. Because the policy doesn't describe what actually happens, the review doesn't cover what auditors actually check, or the evidence doesn't connect to action. Any one of those breaks the control.
The CBIZ data makes the pattern clear: access reviews and terminated access removal together account for nearly 30% of all SOC 2 exceptions. These are not novel findings or unusual environments. They are the default state of an identity program that hasn't been reviewed by someone who knows what auditors look for.
Your auditor will find the gap. The question is whether you find it first.
Get your access reviews audit-ready before the window closes
The gaps above are findable and fixable — but timing matters. Access review remediation takes 2–4 weeks to execute and 60–90 days to generate clean evidence. Starting after your auditor opens the fieldwork phase is too late.
Risk Ready Identity conducts focused Identity Security assessments for compliance-driven SaaS companies. A two-week engagement that reads your Okta or Entra ID tenant directly, identifies every access review gap an auditor will find, and hands you a prioritized remediation plan before the audit clock starts. Fixed scope, fixed price, no open-ended billing.
The next one goes deeper.
First-hand observations from inside real IAM assessments. No pitch. No filler. Straight to your inbox when it publishes.
Subscribe, it's free.

Sources: CBIZ 2024 SOC Benchmark Study — access review exception frequency (15.6%) and terminated access removal exception frequency (12.0%) across 193 SOC reports. Schneider Downs 2024 Top SOC 2 Findings — recurring design deficiencies in access certification programs. NIST Special Publication 800-63-4 — digital identity lifecycle guidance. NIST 2026 AI Agent Identity research — access lifecycle requirements for non-human identities and AI agents. Gartner IAM for LLM-Based AI Agents (G00852165, April 2026) — shadow agent density and access review requirements for autonomous AI workflows.