Could AI Become a Public Utility?

As AI becomes ubiquitous, a fundamental question has emerged among policymakers, strategists, and investors: should AI be treated as a public utility?

QUICK TAKE

AI is critical infrastructure. Foundational AI now underpins economies and national security, prompting calls for utility-style oversight.

Too few control too much. A handful of firms dominate AI, creating risks around access, equity, and resilience.

Governments are stepping in. Nations are testing public and hybrid models to govern AI as a shared resource.

In the span of just a few years, artificial intelligence has transitioned from a niche technology to an essential foundation of digital economies and national security strategies. It powers search engines, optimizes logistics, automates customer service, and increasingly augments—or replaces—human decision-making. As the technology becomes ubiquitous, a fundamental question has emerged among policymakers, strategists, and investors: should AI be treated as a public utility?

To explore this question, it’s necessary to understand what constitutes a public utility. Traditionally, utilities are services so vital to daily life—such as electricity, water, and telecommunications—that governments intervene to ensure universal access, fair pricing, and systemic reliability. These sectors often begin with private innovation but eventually face public regulation or direct involvement to protect collective interests.

Applying that framework to AI requires distinguishing between consumer-facing applications and the underlying infrastructure—namely, the large foundational models, training data, and compute resources that support AI at scale. It is this infrastructure layer—models such as GPT-4, the high-performance compute clusters powering them, and the algorithmic advances behind general-purpose intelligence—that now most closely resembles the foundational systems of a public good.

There are four major forces pushing AI toward utility status. First, its criticality is no longer speculative. Generative AI alone could contribute trillions to the global economy, with applications in nearly every sector from finance to healthcare to education. It is rapidly evolving into a general-purpose technology akin to the steam engine or the internet, with the potential to shape productivity, labor markets, and global competitiveness.

Second, the AI landscape is currently dominated by a small group of actors. A handful of American and Chinese firms control the vast majority of frontier AI development and compute infrastructure. This level of concentration raises systemic risks similar to those that led governments in the early 20th century to regulate energy monopolies. If access to advanced AI remains a privilege controlled by a few firms, it could stifle competition, exacerbate inequalities, and create vulnerabilities across sectors that depend on the technology.

Third, there is a growing national security imperative. Governments around the world are increasingly framing AI as a strategic asset. The United Arab Emirates is investing heavily in sovereign AI models and data centers through its “Stargate” initiative. France, Germany, and others in the EU are funding open foundational models that are aligned with regional values and sovereignty concerns. These developments reflect a growing realization that reliance on foreign-controlled AI systems poses unacceptable risks, especially when AI is used in defense, cybersecurity, intelligence, and critical infrastructure.

Finally, there are substantial market failures in long-term safety and alignment. Private firms have incentives to prioritize speed and performance over transparency, alignment, or robustness. While some companies publish voluntary safety and ethics guidelines, the lack of enforceable, binding frameworks means risks such as disinformation, systemic bias, or model misuse remain largely unaddressed. Public utilities are often regulated precisely to internalize these kinds of externalities—an approach that might be necessary for AI if voluntary compliance proves insufficient.

What would it actually mean to treat AI as a public utility? Around the world, various models are emerging. Some governments are investing directly in national AI capabilities, building public infrastructure akin to electricity grids or railways. Others are experimenting with hybrid public-private partnerships, combining state funding and oversight with private sector innovation. Still others are exploring open-source AI ecosystems, where foundational models are shared as public goods and supported by academic and nonprofit institutions.

Each approach has trade-offs. Full nationalization can ensure security and policy alignment but may come at the cost of speed and innovation. Public-private models allow for dynamism but raise concerns about regulatory capture and inconsistent accountability. Open-source models can democratize access but make it harder to control for safety and misuse. And a purely market-led approach, even if buttressed by regulation, risks repeating the excesses and vulnerabilities seen in the early days of telecom and energy deregulation.

From a financial perspective, the implications are significant. Governments may need to redirect capital toward sovereign compute infrastructure, public AI research institutions, and open foundational model development—investments that resemble the large-scale industrial initiatives of the past century. At the same time, private investors will need to recalibrate their expectations. As foundational models become commoditized or regulated, the high-margin, monopoly-like returns seen in early AI may compress. Future value creation may shift to application-layer startups, AI compliance and audit tools, and verticalized enterprise solutions with proprietary data or domain-specific expertise.

This shift also introduces geopolitical complexity. If AI is treated as a utility, its governance cannot remain entirely domestic. Just as nuclear energy and financial systems required multilateral coordination, AI’s cross-border risks and dependencies demand shared standards, agreements, and institutions. The contrast between the United States’ decentralized innovation ecosystem, China’s tightly controlled AI industrial policy, and the European Union’s rights-based regulatory regime illustrates the difficulty of reaching consensus. Yet without some degree of global cooperation, AI could become fragmented, unequal, and unstable.

Looking ahead, governments and businesses must make strategic choices. Policymakers should consider establishing national AI trusts or public investment vehicles to steward long-term infrastructure and model development. Regulatory sandboxes can help balance innovation and oversight in frontier research. Meanwhile, companies should prepare for a shift in value capture—away from proprietary models and toward integration, agility, and trust. In a future where AI is increasingly seen as essential infrastructure, strategic differentiation will depend less on scale and more on governance, reliability, and the ability to integrate responsibly into critical workflows.

So, could AI become a public utility? The answer is not a simple yes or no. Rather, the shift is already happening, gradually and unevenly. In some countries, AI infrastructure is being built, funded, or governed with clear public-interest mandates. In others, debates are still in their early stages. But the underlying logic—concerning criticality, concentration, security, and safety—mirrors the path that utilities like electricity and telecommunications have followed in the past.

AI will likely not be a utility in the traditional sense, owned and operated entirely by the state. But it will require utility-like governance: public accountability, systemic oversight, and equitable access. The real strategic question is not whether AI will be governed as a utility—but how quickly we recognize its public importance, and whether we act early enough to shape its trajectory for the benefit of all.