As generative AI moves from the experimental phase into mainstream enterprise adoption, it’s driving a wave of truly game-changing innovations. Large language models are already delivering enormous value in everything from cancer detection to product design to language translation, setting in motion an estimated $4.4 trillion in annual economic value according to McKinsey [1]. For the first time in history, we’re able to interact with powerful AI tools in a conversational way, and generative AI combines that natural-language interaction with a “human-like” creativity that can rapidly produce brand-new content, including text, code, video, audio and more. It’s not an overstatement to say we’re in the early stages of a once-in-a-generation leap forward in productivity – one that will transform major business functions such as software development, customer support, sales and marketing.
But amid all this promise and excitement, there’s a core challenge that’s looming large: to capture AI’s full potential over the next decade, we must collectively address the essential issue of privacy. Of course, privacy has long been a hot-button issue. But among the CEOs and CIOs I talk to, there’s an understanding that generative AI is now making the privacy challenge both more consequential and more complex. Data is the indispensable “fuel” that powers AI innovation, and the job of keeping proprietary data private and protected has intensified. In short, we need to architect a new approach that balances the tremendous business value of AI with privacy safeguards we can trust.
Enterprises are now being asked to address three key privacy issues in particular: First, how do you minimize the risk of intellectual property “leakage” when employees interact with AI models? Second, how do you ensure that sensitive corporate data will not be shared externally? And third, how do you maintain complete control over access to your AI models? Many of the CEOs I speak with are actively asking their legal teams to dig in and collaborate with IT to define a new set of privacy standards built for the complex nuances of generative AI. It’s a complicated undertaking, to say the least.
Enter VMware AI Labs
Inside VMware, we’ve been navigating these very same challenges, which inspired us to form VMware AI Labs more than a year ago. We gave this team of engineers a simple charter: Build an enterprise-grade architecture for AI that addresses the need for robust privacy guardrails, while preserving our freedom of choice to select from a variety of AI models and tools, including open-source innovations. Every step of the way, VMware AI Labs worked in close collaboration with our General Counsel, Amy Fliegelman Olli, and her legal team. Together, our engineers and lawyers sorted through the complex intricacies of how to choose an AI model, how to train it using domain-specific data, and how to manage the inferencing phase in which employees interact with the model.
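To make those phases concrete, the sketch below is a simplified illustration, not VMware’s actual implementation, of the training step: an open-source model is fine-tuned on domain-specific data that never leaves the organization’s own infrastructure. The model name, file paths, and hyperparameters are placeholders.

```python
# Illustrative only: fine-tune an open-source model on proprietary data that
# stays entirely on internal infrastructure. Model name, paths, and
# hyperparameters are placeholders, not a recommended configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "mistralai/Mistral-7B-v0.1"        # any permissively licensed open model
DATA_PATH = "/mnt/private/domain_corpus.jsonl"  # proprietary corpus stays on-prem

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Tokenize the internal corpus; nothing is uploaded to an external service.
dataset = load_dataset("json", data_files=DATA_PATH, split="train")

def tokenize(batch):
    tokens = tokenizer(batch["text"], truncation=True, max_length=512,
                       padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()  # causal-LM targets
    return tokens

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="/mnt/private/checkpoints",  # checkpoints also remain private
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
)
trainer.train()
```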
After conducting an in-depth assessment of the offerings available in the market, our VMware AI Labs team realized there was simply nothing commercially available that addressed their needs. So they set to work building an AI architecture that addresses five requirements that are essential to every enterprise: privacy, choice, cost, performance, and compliance.
Private AI: An Enterprise-Grade Architecture for AI Innovation
The outcome of all this intensive work by our internal team is an approach we call Private AI. Simply put, Private AI is an enterprise-grade architecture that balances the business advantages of AI with the privacy and compliance needs of the organization. We view this architecture as a ground-breaking way to help businesses capture the full potential of AI, and we’re excited to make it available to all our customers globally.
In contrast to public AI models that can expose businesses to a variety of risks, Private AI is an architecture built from the ground up to give businesses greater control over how they select, train and manage their AI models. That level of control and transparency is precisely what every legal team is now demanding.
AI Innovation in a Multi-Cloud World
At its core, the Private AI architecture we’ve built represents a multi-cloud approach, enabling our customers to utilize valuable data that’s spread across multiple clouds. Today, 87 percent of businesses [2] are using two or more public clouds, and as they accelerate their AI initiatives the pressure is on to better manage data residing across public clouds, private data centers, and the edge, where data is generated and processed.
With a Private AI architecture, businesses have the flexibility to run their AI models of choice – both proprietary and open-source – in close proximity to where their data resides. That leads to better performance and faster response times when employees query and interact with the model. This approach also improves cybersecurity protections, because it gives IT teams greater visibility and control, enabling them to establish smart, automated security policies that safeguard the sensitive data powering their AI applications.
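As a simplified illustration of that proximity, the sketch below (the internal endpoint and payload format are hypothetical) routes an employee’s prompt to a model served inside the corporate network, so a prompt that may contain sensitive data never leaves the environment.

```python
# Hypothetical example: send a prompt to a model hosted inside the corporate
# network instead of an external public API. Endpoint URL, certificate path,
# and response format are placeholders.
import requests

INTERNAL_ENDPOINT = "https://llm.internal.example.com/v1/completions"  # placeholder

def query_private_model(prompt: str) -> str:
    """Send the prompt to the internally hosted model and return its completion."""
    response = requests.post(
        INTERNAL_ENDPOINT,
        json={"prompt": prompt, "max_tokens": 256},
        timeout=60,
        verify="/etc/ssl/certs/corp-ca.pem",  # trust only the corporate CA
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

if __name__ == "__main__":
    print(query_private_model("Summarize the Q3 roadmap for the sales team."))
```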
Equally important, a Private AI architecture enables businesses to operate AI and non-AI workloads together, using common management and operations models. That reduces total cost of ownership, and it allows organizations to easily adopt new AI models and services as they become available. The leaders I talk to are keenly aware that AI innovation is advancing at lightning speed, and they don’t want to bet on a single, vertical AI stack for all their business needs. In general, many of our customers are discovering that their AI strategy is deeply intertwined with their multi-cloud environment, which they rely on to run their business every day.
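One way to picture that flexibility is the illustrative sketch below: each model, whether proprietary or open-source and wherever it runs, sits behind one small interface, so the application can adopt a new model without being rewritten. The class names and backends are placeholders, not part of any VMware product.

```python
# Illustrative only: decouple applications from any single AI stack by putting
# every model backend behind the same minimal interface.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class OpenSourceModel:
    """Placeholder backend for a self-hosted open-source model."""
    def generate(self, prompt: str) -> str:
        return f"[open-source completion for: {prompt}]"

class VendorModel:
    """Placeholder backend for a commercially licensed model in a private cloud."""
    def generate(self, prompt: str) -> str:
        return f"[vendor completion for: {prompt}]"

def summarize(model: TextModel, document: str) -> str:
    # The calling code neither knows nor cares which backend is in use.
    return model.generate(f"Summarize:\n{document}")

print(summarize(OpenSourceModel(), "Internal design notes..."))
```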
We recently announced a joint offering with NVIDIA, a recognized global leader in AI innovation, along with a Private AI reference architecture for organizations looking to build their own deployments.
Privacy in the Age of AI Innovation
Protecting the privacy of enterprise data and intellectual property has always been strategically important. But as generative AI takes root in mainstream businesses, the privacy challenge has become even more urgent and mission critical. With a Private AI approach, we’re equipping organizations to accelerate their AI initiatives while giving them greater control over how they choose, train and utilize large language models. Ultimately, Private AI is about unleashing the enormous business value of AI applications, while mitigating the risks inherent in this next wave of AI innovation.
[1] McKinsey, “The economic potential of generative AI: The next productivity frontier,” June 2023.
[2] VMware FY23 H1 Benchmark, May 2022; N=1,080 enterprise (5,000+ employee) technology decision makers.