Rethinking Compliance In The Age Of Intelligent Agents

Tony Bradley Senior Contributor
Tony Bradley covers the intersection of tech and entertainment.
June 11, 2025

As AI takes on a larger role in managing compliance, the definition of trust—and who maintains it—is rapidly evolving.

Compliance has long been one of the least glamorous aspects of cybersecurity. Necessary, yes—but often repetitive, reactive and resource-draining. That’s changing fast.

AI is starting to reason over frameworks, detect inconsistencies and make recommendations about what your business should do next. Vanta's AI Agent is a clear example of this evolution, aiming to turn governance into a dynamic, data-driven process. But it also raises new questions about transparency, accountability and whether trust itself can—or should—be automated.

I recently spoke with Jeremy Epling, chief product officer at Vanta, about the motivation behind the agent. “From day one, this whole notion of automated compliance and continuous GRC, continuous control monitoring has been at the heart of our founding mission,” he told me. Epling described the current landscape of compliance as burdened by unstructured files—policy documents, screenshots and spreadsheets—and emphasized that the AI Agent is designed to automate and unify those fragmented processes.

Compliance, Once a Bottleneck, Now a Business Enabler

For many companies, compliance has historically been a blocker—something that slows down audits, sales and vendor onboarding. Tony English, CISO at WorkJam, described that pain firsthand for me. “Before Vanta, our compliance efforts were manual and largely time-consuming,” he said. “It became a bottleneck for our small security team, slowing down sales cycles and diverting valuable time toward documentation and evidence gathering.”

With the shift to continuous monitoring, platforms like Vanta—and increasingly, their AI agents—promise not only faster audits but smarter ones. English said WorkJam now spends about an hour a week on compliance tasks instead of seven or eight. “Compliance has moved from a resource-draining task into a function that strengthens our overall security posture.”

The significance here isn’t about one vendor. It’s about a broader industry trend: compliance moving from episodic to real-time, from reactive to proactive. And AI is the connective tissue making that shift possible.

What It Means to Trust an Algorithm

Of course, the more autonomy we grant AI, the more critical it becomes to know how it works. Is it explaining its reasoning? Is it using up-to-date evidence? Can it cite its sources?

“A major focus for us has been on AI quality,” Epling said. “We have an internal team of former auditors and GRC experts that go through and run our human eval loop on golden data sets… and we lean into references and explanations. If we give a recommendation, we tell you where it came from.”

That traceability matters. With security reviews and audits becoming more dynamic, AI has to be more than helpful—it has to be right. And when it’s not, there must be clear signals and paths for correction. Platforms that support feedback loops, accuracy metrics and user control (such as setting concise vs. verbose answer preferences) are more likely to foster real trust.

The Human Element in a Machine-Led World

Despite impressive gains, AI agents aren’t eliminating human expertise—they’re redefining it. “We’ve seen a huge shift,” English told me. “Responsibilities are now more transparent, ownership is better distributed and our security and engineering teams operate from a shared view of strong compliance.”

The AI Agent, in this case, isn’t replacing the team—it’s amplifying it. By detecting policy conflicts, pre-validating evidence and flagging overlooked risks, it frees up human bandwidth to focus on higher-order tasks. And that kind of augmented intelligence might be the most responsible application of AI in compliance today.

But the temptation to over-trust is real. Over time, users will grow comfortable with the AI’s outputs—especially if those outputs pass audits and reduce friction. At what point does convenience erode scrutiny? And who’s watching the watcher?

Epling acknowledged the concern and said his team is working toward more customer-facing transparency. Ideally, customers should have visibility into how a solution or AI agent is performing against their success criteria. That kind of transparency keeps vendors accountable too.

A Blueprint for What Comes Next

WorkJam sees Vanta’s AI Agent as the next logical step—automating routine tasks, identifying inconsistencies early and creating space for security to be a proactive business function. That aligns with what many GRC leaders now want: not just to check the box, but to build a culture of trust that’s as responsive as the threats it faces.

As AI begins to write, monitor and enforce compliance, it’s reshaping more than workflows. It’s redefining the relationship between security teams and the systems they manage. The challenge ahead isn’t simply deploying more advanced agents—it’s making sure those agents remain transparent, accurate and accountable to human judgment.

Because trust can be accelerated by automation—but it can’t be outsourced entirely.
