Then someone asks: “What did your AI agent do last Tuesday?”
You don’t know. That’s the gap.
What SOC2 Actually Requires
SOC2 isn’t a checklist you complete once. It’s a trust framework built around five criteria — security, availability, processing integrity, confidentiality, and privacy — and at the heart of each is one persistent question: Who did what, when, and can you prove it?
For years, “who” meant a person. A user logs in. A developer pushes code. An admin changes a permission. Every meaningful action traces back to a human identity, and logging infrastructure was built to capture exactly that.
AI agents are quietly breaking this assumption — and most organizations have no idea the exposure is building.
The Problem: AI Agents Are Not People
An AI agent running inside your environment doesn’t have a user account in the traditional sense. It holds a credential — an API key, an OAuth token, a service account with elevated permissions. When it acts, it may execute a chain of dozens of decisions in seconds: reading a file, calling an API, querying a database, updating a CRM record, triggering a downstream workflow.
Each action gets logged somewhere. The problem is that what gets logged is the system event, not the agent decision. Your SIEM sees “Service account X read file Y at 14:32.” It does not see “The AI agent was trying to summarize last quarter’s vendor contracts and chose this file as part of that task.”
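The contrast is easiest to see side by side: the bare system event a SIEM actually captures, and the agent-level context that existed only inside the inference call. All field names here are illustrative assumptions, not a real SIEM schema.

```python
import json

# What the SIEM records: a bare system event (hypothetical fields).
siem_event = {
    "timestamp": "2025-06-10T14:32:00Z",
    "principal": "svc-agent-prod-07",   # a service account, not a person
    "action": "file.read",
    "resource": "s3://contracts/vendor-q1.pdf",
}

# The context that lived only in the inference call and was never logged.
missing_context = {
    "task": "Summarize last quarter's vendor contracts",
    "initiator": "scheduled workflow",
    "why_this_file": "matched the agent's search for Q1 vendor contracts",
}

print(json.dumps(siem_event, indent=2))
```

The first record answers "what happened"; only the second answers "why" — and it is the second one that disappears.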
For humans, context is recoverable. You can ask them. For agent actions, the context lived in an inference call that ran and disappeared.
What an Auditor Sees When They Look at Your AI Deployment
Put yourself in an auditor’s chair. They’re evaluating your Security criterion — specifically, whether you have logical access controls and meaningful monitoring of user and system activity.
You show them your SIEM logs. Clean. Comprehensive.
Then they ask about your AI deployment. You explain that agents run automated workflows with access to customer data, vendor systems, and internal APIs. The auditor follows up:
- What data did each agent access, and for what stated purpose?
- Who authorized that specific agent run?
- What decisions did the agent make during execution?
- If something went wrong, how would you reconstruct what happened?
If your honest answer is “we can see the API calls but not the reasoning,” you have a problem that your current logging infrastructure was never designed to solve.
The Four Things You Can’t Prove Without Agent Logs
Standard infrastructure logs tell you that something happened. They don’t tell you why. When auditors or incident responders need to trace an agentic event, they need four layers of information:
Intent — What task was the agent running? Was it authorized by a human, or triggered automatically by a schedule or another system?
Scope — What resources did the agent access? Did it stay within its defined boundaries, or reach something outside its intended domain?
Decision trace — What choices did the agent make during execution? If it accessed one file over another, why? If it chose to escalate an action, what triggered that decision?
Outcome — What did the agent produce or change? What downstream effects occurred, and in what order?
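One way to make the four layers concrete is a single structured record per agent run. This is a sketch under assumed field names, not a standard or vendor schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentRunRecord:
    """One auditable record per agent run, covering all four evidence layers.
    Field names are illustrative, not a standard schema."""
    session_id: str
    # Intent: what task, and who or what authorized it
    task: str
    initiator: str
    # Scope: resources touched vs. the agent's allowed boundary
    resources_accessed: List[str] = field(default_factory=list)
    allowed_scope: List[str] = field(default_factory=list)
    # Decision trace: key choices made during execution
    decisions: List[str] = field(default_factory=list)
    # Outcome: what the run changed, in order
    changes: List[str] = field(default_factory=list)

    def out_of_scope(self) -> List[str]:
        """Resources accessed outside the defined boundary."""
        return [r for r in self.resources_accessed
                if not any(r.startswith(p) for p in self.allowed_scope)]

record = AgentRunRecord(
    session_id="run-0042",
    task="summarize Q1 vendor contracts",
    initiator="schedule:weekly-vendor-review",
    resources_accessed=["s3://contracts/vendor-a.pdf", "crm://accounts/vendor-a"],
    allowed_scope=["s3://contracts/"],
    decisions=["selected vendor-a.pdf: newest Q1 contract on file"],
    changes=["crm://accounts/vendor-a: summary field updated"],
)
print(record.out_of_scope())  # the CRM access falls outside the declared scope
```

A record like this is exactly what an auditor's four questions map onto: intent (`task`, `initiator`), scope (`resources_accessed` vs. `allowed_scope`), decision trace, and outcome.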
SOC2 as a framework hasn’t explicitly named AI agents yet — the standard was written before agentic systems were a real concern. But it does require that you can answer auditor questions about any system that interacts with in-scope data. AI agents are that system now.
The Gap Is Growing — and Auditors Are Starting to Notice
Something shifted in 2024 and 2025: AI agents moved from prototype to production. Real companies are running agents that touch customer records, manage vendor relationships, and make decisions with genuine compliance implications. And the pace of deployment is only accelerating.
Auditing standards lag deployment by a few years, but they do catch up. The AICPA has been working through how AI-assisted systems fit within the Trust Services Criteria. Forward-thinking auditors are already asking questions that standard logging tools can’t answer.
The gap that nobody is talking about today will be the audit finding everybody is managing by 2027.
What Good Agent Observability Looks Like
Organizations getting ahead of this are building what practitioners call “agent observability” — structured logging that captures behavior, not just system events:
- Session-level tracing: Every agent run gets a unique session ID that groups all downstream actions under a single traceable task
- Authorization records: Who or what initiated the agent, under what policy, and with what permissions at the time of execution
- Action manifests: Structured logs of every read, write, API call, and state change — linked back to the originating session
- Decision artifacts: Where the model supports it, preserved records of key reasoning points — what the agent was asked, what data it considered, and what it chose
- Diff records: Before-and-after states for anything the agent modified, queryable by session
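The bullets above can be sketched as a minimal session logger: every action an agent takes is written as a structured event keyed to one session ID, so an auditor can replay the entire run in order. Everything here — class name, event shape, field names — is an assumption for illustration, not an established tool.

```python
import json
import time
import uuid

class AgentSession:
    """Groups every agent action under one traceable session ID (sketch)."""

    def __init__(self, task: str, initiator: str, permissions: list):
        self.session_id = str(uuid.uuid4())  # session-level tracing
        self.events = []
        # Authorization record: who/what started the run, under which permissions
        self._log("session.start", task=task, initiator=initiator,
                  permissions=permissions)

    def _log(self, kind: str, **fields):
        self.events.append({"session_id": self.session_id,
                            "ts": time.time(), "kind": kind, **fields})

    def action(self, verb: str, resource: str, reason: str):
        # Action manifest entry: every read/write/call, with its stated purpose
        self._log("action", verb=verb, resource=resource, reason=reason)

    def diff(self, resource: str, before, after):
        # Diff record: before/after state for anything the agent modified
        self._log("diff", resource=resource, before=before, after=after)

    def export(self) -> str:
        # One JSON-lines trace per session, ready to ship to a SIEM
        return "\n".join(json.dumps(e) for e in self.events)

# Usage: a run that reads a contract and updates a CRM field
s = AgentSession("summarize Q1 contracts", "schedule:weekly", ["s3://contracts/*"])
s.action("read", "s3://contracts/vendor-a.pdf", "newest Q1 contract on file")
s.diff("crm://vendor-a/summary", before="", after="Q1 summary text")
print(s.export())
```

The design choice that matters is the shared `session_id`: it is what turns a pile of disconnected system events into a single answerable question — "what did this run do, and why?"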
None of this requires inventing new SOC2 controls. It requires extending your existing privileged access monitoring to treat agent sessions the way you treat privileged human sessions.
What to Do Before Your Next Audit
If you’re running AI agents in a SOC2 environment — or planning to — three things are worth doing now:
Map your agent surface. List every AI agent, automation, or scheduled workflow that touches in-scope systems. Include API integrations, background tasks, and anything operating under a service account.
Run a logging gap assessment. For each agent, ask: if something went wrong or an auditor asked, could you fully reconstruct what the agent did and why? If the honest answer is “partly,” you have a gap.
Start the conversation with your auditors. Teams that get ahead of this bring it to the table first. “Here’s how we’re thinking about agent auditability” is a very different conversation than “we hadn’t thought about it.”
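The gap assessment in the second step can be made mechanical: for each agent in your inventory, check which of the four evidence layers — intent, scope, decision trace, outcome — its current logs actually capture. The inventory below is a made-up illustration of that exercise.

```python
# The four evidence layers an auditor needs per agent run.
REQUIRED_LAYERS = {"intent", "scope", "decision_trace", "outcome"}

# Hypothetical inventory: which layers each agent's logging captures today.
agents = {
    "contract-summarizer": {"scope", "outcome"},
    "crm-sync-bot": {"intent", "scope", "decision_trace", "outcome"},
}

def logging_gaps(inventory: dict) -> dict:
    """Return, per agent, the evidence layers its logs are missing."""
    return {name: sorted(REQUIRED_LAYERS - layers)
            for name, layers in inventory.items()
            if REQUIRED_LAYERS - layers}

print(logging_gaps(agents))
# contract-summarizer is missing intent and decision_trace
```

If the output is non-empty for any agent, the honest answer to "could you fully reconstruct what the agent did and why?" is "partly" — and that is the gap to close before the audit, not during it.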
SOC2 was built for a world where humans took actions and machines recorded them. That world is changing faster than most compliance teams realize. The organizations that extend their logging posture now will be ahead — in audits, in incident response, and in the trust they earn with customers who need to know their data is being handled responsibly.
The gap nobody is talking about is the one that shows up in your audit report.