How the U.S. Could Reclaim AI from Tech Oligarchs

AI and National Security: An Uneasy Nexus

In July, the U.S. Department of Defense took a significant step toward embedding artificial intelligence (AI) deeper into its military operations, awarding contracts worth up to $200 million each to OpenAI, Google, Anthropic, and xAI. Though modest by military standards, especially next to billion-dollar submarine deals, these contracts are emblematic of a larger governmental strategy: the Trump administration has increasingly framed AI as central to maintaining a competitive edge over geopolitical rivals, particularly China.

This alignment of public funds with private firms has been hailed by Pentagon officials as essential to preserving a strategic advantage. At the same time, it marks a victory for a new breed of tech magnates turned defense contractors. Figures like Alex Karp of Palantir and Larry Ellison of Oracle now occupy a dual role as both entrepreneurs and national security stakeholders. Their message is clear: AI isn’t just a commercial asset—it’s a civilizational imperative.

The Bubble Nears Its Breaking Point

Despite these optimistic portrayals, the AI industry’s financial underpinnings are shakier than they appear. Companies like OpenAI burn through billions of dollars annually, and the sector is increasingly described as a bubble poised to burst. Although industry leaders publicly insist they are not seeking government bailouts, their recent statements suggest otherwise. OpenAI’s CFO recently floated the idea of a government “backstop” for the company’s large-scale infrastructure purchases, while White House AI czar David Sacks warned that pulling back on AI investment could trigger a recession.

These declarations serve a dual purpose: they both justify continued public investment and prepare the ground for future financial rescues. While AI may not yet be as systemically critical as mortgage-backed securities were in 2008, the industry’s influencers are working hard to make it so.

Public Subsidy Disguised as Strategic Necessity

The recent White House initiative dubbed “the Genesis Mission” is a case in point. This program aims to provide AI firms with access to government datasets to improve model training, effectively offering another layer of public support. When the bubble finally bursts, these same firms are likely to invoke national security rhetoric to demand bailouts, masking corporate welfare as patriotic duty.

Americans will be asked to dig into their pockets not to save jobs or stabilize markets, but to reimburse billionaires like Sam Altman, Larry Ellison, and Elon Musk for their speculative ventures. The justification? To ensure that AI remains an American asset, not a Chinese one.

Enter the Defense Production Act

Given the AI industry’s increasing entwinement with national security, it is worth considering whether such a critical sector should remain in private hands. The Defense Production Act (DPA) of 1950 provides a potential legal pathway for more direct government control. Originally passed during the Korean War, the DPA grants the executive branch substantial authority to commandeer resources for national defense.

Although its language is broad and somewhat vague, it has been used over the years to support everything from energy suppliers during California’s 2001 crisis to pandemic-related medical equipment under the Trump administration. The act allows the president to prioritize government contracts and even penalize non-compliant companies. While outright nationalization has legal constraints, the DPA offers significant leverage—especially if AI is deemed a “scarce” or “critical” resource.

Weaponizing Propaganda Against Its Creators

The irony is that the industry’s own rhetoric provides the strongest argument for its expropriation. If AI is as indispensable as its champions claim, then the government has a duty to ensure its continuity and accessibility. Why should such a vital national resource be monopolized by a handful of ultra-wealthy individuals? The same justification used to funnel public funds into AI firms can be used to bring them under public control.

Legal precedents do impose limits. In Youngstown Sheet & Tube Co. v. Sawyer (1952), the Supreme Court blocked President Truman from seizing the steel industry because he acted without congressional authorization. The DPA, however, is precisely such an authorization: its provisions can be activated through executive orders once a resource is deemed essential to national defense, providing a statutory basis for significant intervention in the AI sector that Truman lacked.

Building Political Will for Expropriation

Currently, the political appetite for such bold action is minimal. But that can change. In recent years, policies once thought unfeasible—like rent freezes—have gained traction. Framing the issue as a choice between expropriation or bailout could galvanize public support. It would transform the DPA from a relic of Cold War-era governance into a tool for reclaiming democratic control over critical infrastructure.

Creating a constituency around this vision will require work. Legal scholars and policy experts must draft viable frameworks, while activists and lawmakers push the narrative that reclaiming AI is not only possible but necessary. The growing influence of socialists and progressives in American politics creates fertile ground for such initiatives.

Reimagining National Security

Ultimately, this is about more than AI. It’s about redefining what national security means in a democratic society. It’s not merely about defense against foreign adversaries, but also about protecting the public from the unchecked power of oligarchs. The DPA could serve as the foundation for a new era of economic democracy—one in which strategic industries serve the public good, not private profit.

Perhaps it’s time to turn the oligarchs’ own arguments against them. If artificial intelligence is too important to fail, then it’s too important to be privately owned.

