
Private AI: Realizing the Value of AI in Government While Controlling the Costs and Risks of Innovation

Authors:
Massimiliano Claps, Research Director, Government Insights
Aaron Walker, Research Manager, Government Trust and Resiliency Strategy

IDC Guest Post

Federal, state, local, and tribal governments have realized the benefits of AI for years — particularly in tax and revenue agencies1, health and human service agencies2, homeland security3, and the defense and intelligence4 community. The foundational elements of generative AI (GenAI) developed over the past decade; however, the advent of consumer GenAI tools, with user-friendly, multimodal capabilities, triggered interest among global government technology leaders. As they came to understand the opportunities of GenAI, government agencies began defining a road map to increase employee productivity, enhance citizen experience, integrate data, improve digital resilience, and eventually automate end-to-end processes through multiagent orchestration.

Government leaders quickly concluded that “irresponsible use (of AI) could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”5 Therefore, several governments have issued guidelines to ensure responsible use of AI, including (but not limited to):

  • The U.S.’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, stating that the federal government should lead in managing “the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI”
  • The U.S. NIST's AI Risk Management Framework, which aims to help the private sector and governments to “incorporate trustworthiness considerations in the design, development, deployment, and use of AI systems”6
  • The EU AI Act, passed in 2024, which standardizes requirements for AI solutions placed on the EU market, prohibits certain AI practices deemed to pose unacceptable risk, imposes requirements on high-risk systems, creates transparency obligations, and applies to EU providers as well as any third-country providers or companies placing AI systems on the EU market. The European Commission also established the EU AI Office, which aims to coordinate, develop, implement, and enforce AI policies across the EU.
  • South Korea’s comprehensive AI act, which is under review by the National Assembly and includes a regulatory framework, standards on copyright for AI-generated content, and guarantees of developer access to AI technologies. It aims to promote growth within Korea’s AI industry while enforcing stricter requirements on high-risk systems.
  • The G7’s Hiroshima AI Process, launched under Japan’s presidency in 2023, a comprehensive policy framework focused on analyzing AI risks, developing guiding principles for AI, creating a code of conduct, and promoting cooperation in support of responsible AI solution development.

These policies gave further impetus to governments’ investments in AI and accelerated government leaders’ desire to work with technology partners to identify specific AI platforms and infrastructure that maximize investment value while controlling costs and risks.

Making Strategic Deployment Choices to Maximize the Value of AI While Controlling the Risks and Costs of Innovation

As the need has grown to deploy AI responsibly on high-performing, scalable, and secure infrastructure and on agile data and application development platforms, government technology leaders, including chief artificial intelligence officers (CAIOs) and chief data officers (CDOs), have worked with CIOs and CTOs to explore alternative pathways to standardize governance policies and adopt AI securely.

Government IT and business leaders are exploring private AI capabilities that can be deployed on-premises or in sandboxed or hosted environments. IDC defines private AI as the use of enterprise datacenter infrastructure and AI framework capabilities by IT practitioners for developing/deploying enterprise-specific AI/ML workflows. The underlying datacenter assets may be hosted in enterprise-owned facilities or by interconnected providers. When using private AI, enterprises will also look for AI platforms that support hybrid deployment options that allow them to govern model development and use.
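
To make the hybrid deployment point concrete, the minimal sketch below (Python, with purely hypothetical endpoint names and classification labels that are not drawn from any specific platform) shows one way a routing layer could keep sensitive workloads on agency-controlled infrastructure while allowing low-sensitivity requests to use a hosted environment.

```python
from dataclasses import dataclass

# Hypothetical endpoints; names are illustrative, not a specific product's API.
ON_PREM_ENDPOINT = "https://ai.internal.agency.example/v1/generate"
HOSTED_ENDPOINT = "https://hosted-ai.example.com/v1/generate"

@dataclass
class InferenceRequest:
    prompt: str
    data_classification: str  # e.g., "public", "sensitive", "restricted"

def select_endpoint(request: InferenceRequest) -> str:
    """Route a request based on data classification: sensitive or restricted
    data never leaves agency-controlled infrastructure; everything else may
    use a hosted environment."""
    if request.data_classification in {"sensitive", "restricted"}:
        return ON_PREM_ENDPOINT
    return HOSTED_ENDPOINT

# Example: a case-summarization prompt containing citizen data stays on premises.
request = InferenceRequest(prompt="Summarize this case file ...",
                           data_classification="sensitive")
assert select_endpoint(request) == ON_PREM_ENDPOINT
```

In practice, private AI platforms enforce this kind of policy through their own governance tooling rather than hand-written routing code; the sketch only illustrates the principle of governing where data and models are allowed to run.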

Government technology and business leaders that invest in private AI capabilities do so to maximize:

  1. Control over data location, compliance, access, and security. Across the globe, policymakers are ramping up digital sovereignty requirements. According to IDC’s Digital Sovereignty survey, 45% of government IT executives recognize the need to maintain a challenging balance between AI innovation and digital sovereignty. Private AI gives government CAIOs and other leaders direct control over the procedures and tools used to manage data governance, sovereignty, and security.
  2. Predictability of capital expenditure. Although “the fastest acquisition for Generative AI tools may be no new acquisition, or as simple as using an existing cloud platform contract to access Generative AI tools,” it should be acknowledged that on top of cloud subscriptions, cloud service providers charge for usage, premium feature tiers, and more, increasing the risk of losing control of costs.
  3. Ownership of strategic architectural choices over IT infrastructure, data, models, and applications. Although over 60% of governments globally think GenAI is already significantly disrupting their organization or will do so in the next 18 months,7 most of them are at the early stages of that journey. Therefore, it is imperative to ensure that modeling, coding, and other architectural components are future-proof. Private AI gives government IT leaders an avenue to better control the risk of lock-in. Taking a platform approach typically allows more flexibility to utilize tools and APIs from multiple vendors and to update services without disruption, as illustrated in the sketch after this list. Platforms also provide better cloud scaling and lower resource requirements by pooling AI infrastructure.
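
As a rough illustration of the lock-in point in item 3, the sketch below (Python, using stubbed, hypothetical backends rather than any real vendor SDK) shows how a thin, platform-style abstraction lets an agency swap the model provider behind an application without touching the application logic.

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Provider-agnostic interface that application code depends on."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class LocalOpenWeightModel(TextModel):
    """Stub standing in for a model served on agency-owned infrastructure."""

    def generate(self, prompt: str) -> str:
        return f"[on-prem model] response to: {prompt}"

class VendorApiModel(TextModel):
    """Stub standing in for a commercial API; a real client would go here."""

    def generate(self, prompt: str) -> str:
        return f"[vendor API] response to: {prompt}"

def draft_citizen_reply(model: TextModel, inquiry: str) -> str:
    # Application logic depends only on the TextModel interface,
    # so the backing provider can change without edits here.
    return model.generate(f"Draft a plain-language reply to: {inquiry}")

# Swapping providers is a one-line change at composition time.
print(draft_citizen_reply(LocalOpenWeightModel(), "How do I renew my permit?"))
print(draft_citizen_reply(VendorApiModel(), "How do I renew my permit?"))
```

Swapping the concrete class at composition time is the only change required, which is the essence of the flexibility a platform approach aims to provide.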

Key Considerations for Governments Investing in Private AI

Private AI can help government CAIOs and other IT leaders accelerate the time to value of AI and GenAI while controlling its risks and costs. As leaders embrace private AI, they should consider the following areas of investment:

  1. The need to invest in capacity and competencies to translate national policies into practical controls, so that private AI can be implemented with appropriate data governance and cybersecurity safeguards
  2. The need to align private AI capabilities with use cases whose attributes, such as predictable scalability and performance requirements and multiple interoperability dependencies, justify the investment
  3. The need to invest in AI and data architecture and engineering competencies that make the most of best-in-class private AI platform capabilities and integrate them with workloads running in other environments

Leaders should also consider the benefits of a platform approach that allows increased flexibility to experiment with and utilize new AI models and services as market conditions change. These platforms should come with built-in automation and tools, significantly reducing the need to maintain specialized internal skill sets to ensure success. By strategically investing in these areas and leveraging a platform approach, government CAIOs and IT leaders can maximize the benefits of private AI while effectively managing its risks and costs.

Conclusion

IDC believes the public sector AI and GenAI markets will continue to grow rapidly in the coming years as emerging use cases demonstrate the technology’s ability to help government organizations achieve desired mission outcomes.

Private AI solutions can help government organizations empower citizens and employees with improved usability and automation, navigate AI adoption responsibly, and maximize returns on AI investments, all without increasing the risk of sensitive data loss, so that innovative solutions remain secure and future-proof.