
Why AI Speed Without Access Control Is a Recipe for Risk

Chris Wolf

Here's what keeps CIOs in regulated industries up at night: you can build AI data collectors in minutes today using open source tools like LlamaIndex and the Model Context Protocol (MCP). But if employees are going out and doing this on their own, you run the risk of massive exposure, and not just from a security perspective. You're also looking at legal and compliance risks: GDPR violations, running afoul of HIPAA, or breaking the NDA terms in existing customer and partner contracts.
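To see how low the barrier is, here's roughly all it takes, a minimal sketch assuming LlamaIndex's standard ingestion API and a local folder of documents:

```python
# A minimal ad-hoc "data collector": point LlamaIndex at a folder of
# internal documents and expose them to an LLM. Note what's absent:
# no access control, no audit trail, no data classification.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./internal_docs").load_data()  # anything the employee can read
index = VectorStoreIndex.from_documents(documents)  # embeds and indexes it all

query_engine = index.as_query_engine()
print(query_engine.query("Summarize our customer contracts."))
```

A handful of lines like these can quietly pull regulated data into an LLM workflow. That's the exposure.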

The problem isn't that AI is inherently dangerous. The problem is speed without governance.

When I talk to enterprises about AI governance, I hear them voice the same assumption repeatedly: "If I'm logged into the corporate network using my access credentials, then the AI tool will use my credentials on my behalf. I can pass a token to the tool, so everything's secure and perfect."

That's the kind of thinking that gets you into trouble.

Where things get slippery is when AI tools and APIs start working together. Now they're using service accounts that might have elevated privileges. When you're dealing with AI agents talking to other AI agents, the chain of trust can disappear very quickly if you're not properly governing it.

Another issue nobody talks about: your data collectors could inadvertently turn into data writers. If I'm creating an MCP server to access data, how do I know it's not also capable of writing data back into a database? That creates all kinds of potential problems.
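Here's a hedged illustration using the MCP Python SDK's FastMCP interface; the server name, tool names, and database are hypothetical. Nothing in the protocol distinguishes a collector from a writer; both are just tools:

```python
# A hypothetical MCP server that looks like a read-only collector
# but also registers a write path. The protocol sees two "tools".
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-data")

@mcp.tool()
def query_records(customer_id: str) -> list:
    """Read path: fetch a customer's records."""
    conn = sqlite3.connect("crm.db")
    rows = conn.execute(
        "SELECT * FROM records WHERE customer_id = ?", (customer_id,)
    ).fetchall()
    conn.close()
    return rows

@mcp.tool()
def update_record(record_id: int, value: str) -> str:
    """Write path: shipped quietly alongside the reads."""
    conn = sqlite3.connect("crm.db")
    conn.execute("UPDATE records SET value = ? WHERE id = ?", (value, record_id))
    conn.commit()
    conn.close()
    return "updated"

if __name__ == "__main__":
    mcp.run()
```

Unless someone actually reviews the tool list, the write capability ships invisibly.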

The Token Problem Gets Worse With AI

Traditional access control assumes you're dealing with human users and well-established APIs. But when you get into agentic AI, how are you controlling token passing and permissions between different AI agents?

Tokens can be cached, but cached tokens become high-value targets. If attackers get hold of them, they can impersonate agents, communicate with other AI agents, and exploit data collection processes. Credential theft is already one of the most common attack patterns, and cached agent tokens hand it a fresh target.

As new technologies emerge, attack surfaces continue to evolve. We need to create safeguards against the exploits we can anticipate.
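One safeguard we can anticipate today: don't cache long-lived tokens at all. Mint a short-lived, narrowly scoped token for each agent hop instead. Here's a sketch built on OAuth 2.0 Token Exchange (RFC 8693); the identity provider endpoint, audience, and scope names are illustrative assumptions:

```python
# Sketch: trade the user's token for a short-lived, down-scoped token
# before handing anything to a downstream agent. The grant type is
# standard RFC 8693; the endpoint and names are hypothetical.
import requests

def downscoped_token(user_token: str, audience: str, scope: str) -> str:
    resp = requests.post(
        "https://idp.example.com/oauth2/token",  # hypothetical IdP endpoint
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": audience,  # pin the token to one downstream agent
            "scope": scope,        # e.g. read-only, never write
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Each hop gets its own expiring, single-audience token, so a stolen
# cache entry is worth far less to an attacker.
token = downscoped_token(user_token="...", audience="reporting-agent",
                         scope="records:read")
```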

Why Data Duplication Is Making Everything Worse

Some teams try to handle access control by copying their data sets and locking each copy down separately. I’ve seen this happen more than once. It may work initially, but it doesn’t scale: in a few early RAG setups, I’ve heard of organizations growing their data footprint by 7x just trying to make permissions work.

That’s a lot of duplication. You’re spinning up multiple versions of the same data using the same models simply to manage access. Now, you need extra storage, more GPUs, and way more infrastructure to run what’s basically the same thing over and over.

Costs pile up quickly. And the security model ends up being clunky and hard to manage. It turns something that should be smart and flexible into a heavy, fragile mess.

The hidden costs go deeper. Once you start duplicating, each copy is effectively its own repository with its own permissions, which means its own backups, its own logging, and its own audit controls. From a regulatory compliance standpoint, you now have that many more repositories to manage and audit.
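The alternative most teams eventually land on is one shared index with permission metadata enforced at query time, rather than N locked-down copies. A minimal sketch assuming LlamaIndex's metadata filter API, with hypothetical group labels:

```python
# One shared index; each document carries an access label, and every
# query is filtered by the caller's group instead of duplicating data.
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

docs = [
    Document(text="Q3 revenue details...", metadata={"access_group": "finance"}),
    Document(text="Onboarding guide...", metadata={"access_group": "all_staff"}),
]
index = VectorStoreIndex.from_documents(docs)

def query_as(user_group: str, question: str):
    filters = MetadataFilters(
        filters=[ExactMatchFilter(key="access_group", value=user_group)]
    )
    # Retrieval only sees documents the caller's group is cleared for.
    return index.as_query_engine(filters=filters).query(question)

print(query_as("all_staff", "How do I get onboarded?"))
```

One copy of the data, one backup regime, one audit surface.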

The Approval Problem Nobody's Solving

Here's a question most enterprises can't answer: How are your MCP servers approved?

In most organizations, they're not tightly controlled at all. That's the problem. Individual software developers can wire uncontrolled data collectors into AI services, completely out of sight of IT operations.

VMware’s approach focuses on what I call the unglamorous but essential work—the client side of MCP where we can enforce role-based access controls and identity controls. IT operations should have a central MCP tool registry where they decide which MCP servers are approved for use. Your AI and application workflows should flow through that central control point.
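In practice, that control point can start as something very simple. Here's a hedged sketch of a client-side gate, with hypothetical registry entries; this is an illustration, not a product feature:

```python
# Client-side gate: AI workflows may only connect to MCP servers that
# IT has registered and approved. All names and URLs are hypothetical.
APPROVED_MCP_SERVERS = {
    "crm-readonly": {"url": "https://mcp.corp.example/crm", "write_allowed": False},
    "wiki-search":  {"url": "https://mcp.corp.example/wiki", "write_allowed": False},
}

def resolve_mcp_server(name: str) -> str:
    """Return the URL of an approved server, or refuse the connection."""
    entry = APPROVED_MCP_SERVERS.get(name)
    if entry is None:
        raise PermissionError(f"MCP server '{name}' is not in the approved registry")
    return entry["url"]

# A developer adding a new collector has to get it registered first;
# anything unregistered fails closed.
url = resolve_mcp_server("crm-readonly")
```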

This isn't about stifling innovation. It's about making sure IT governance and your CISO have a say in how AI tools access your data.

"Boring" Infrastructure Matters

If you're an ML engineer, you don't want to spend time developing role-based access controls. That's not exciting work. But just because it's not exciting doesn't mean it's not important.

At VMware, we don’t treat access management like someone else’s problem. Our tools help govern your data and how it’s accessed, centrally and securely. It’s not flashy work, but it’s essential for safe AI.

Some vendors say they'll support their own data collectors, but in the same breath add that third-party data collectors are not their problem. That's where things get out of hand quickly. You need centralized control where IT truly has a say, not individual developers adding ungoverned data collectors.

The best practices come down to collaboration with your security and identity teams and your CISO. Make sure you feel confident about where you're at from a security and compliance perspective. If you're not confident, then the answer should be simple: This is too immature and too much of a business risk, so we're not going to do it.

For governance frameworks, start with the MCP project community. There are solid recommendations around secure authorization for MCP servers using OAuth. Read those technical guidelines and become familiar with what exists today.
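The shape of that guidance is conventional OAuth: the MCP server acts as a resource server and validates a bearer token on every request. A hedged sketch using PyJWT, where the issuer, audience, and key handling are illustrative assumptions:

```python
# Sketch: validate the bearer token an MCP client presents before
# serving any tool call. Issuer, audience, and key are placeholders.
import jwt  # PyJWT

PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----..."  # fetched from the IdP's JWKS in practice

def authorize_request(authorization_header: str) -> dict:
    if not authorization_header.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    token = authorization_header.removeprefix("Bearer ")
    # Rejects expired tokens, wrong audiences, and bad signatures.
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience="mcp.corp.example",       # this server, specifically
        issuer="https://idp.example.com",  # the IdP IT actually trusts
    )
```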

But here's the key: You have to be able to prove your approach to an auditor. Your opinion only carries you so far. You need provability end-to-end.
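In practice, provability starts with an append-only record of every tool invocation. A minimal sketch, with illustrative field names:

```python
# Every AI tool call emits a structured audit event an auditor can
# replay. Field names and the file sink are illustrative; production
# systems would use an append-only, tamper-evident store.
import functools, json, time

def audited(tool_name: str, user: str):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            event = {
                "ts": time.time(), "user": user, "tool": tool_name,
                "args": repr(args), "kwargs": repr(kwargs), "outcome": "ok",
            }
            with open("ai_audit.log", "a") as f:
                f.write(json.dumps(event) + "\n")
            return result
        return inner
    return wrap

@audited(tool_name="query_records", user="alice")
def query_records(customer_id: str) -> list:
    return []  # placeholder for the real read path
```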

The organizations that will succeed with AI are the ones that resist the temptation to move fast and break things. They're building the boring but critical infrastructure that lets them innovate safely. If your AI initiative turns into a compliance nightmare or a security breach, nobody will pat you on the back for having prioritized speed over safety.