The AI Bubble Might Pop Even If There Are No “Dark GPUs”
Data center GPUs are “melting” under subsidized and unprofitable workloads, while circular revenues mask fundamentally unsustainable economics that are increasingly detached from the real-world economy. Combined with rising data-center-related debt, we may now be in the midst of an AI bubble.
This is the author’s opinion only, not financial advice, and is intended for entertainment purposes only. The author receives no compensation for writing this article and has no business relationship with any of the companies mentioned.
No, you could not call me an AI bear in the three years since ChatGPT was released. I argued that Nvidia was far from a bubble in March 2024, dismissed rumors of major problems with Nvidia’s new Blackwell chip in August 2024, and pushed back hard against the DeepSeek panic in January 2025. Since I experimented with Google’s TensorFlow a decade ago, I have been an AI enthusiast who believes Nvidia is one of the greatest companies in economic history, led by Jensen Huang, one of the great visionaries of our time.
The narrative following ChatGPT’s release in November 2022 was straightforward: generative AI changes everything, and Nvidia’s GPUs are the engines of that revolution. This narrative triggered unprecedented capital expenditure (capex) by hyperscalers like Microsoft, Google, Amazon, and Meta, which in 2025 have been spending well over half of operating cash flow on capex. Given the disruption I saw in AI, it did not concern me at the beginning that this colossal spending consistently outpaced any near-term revenue from AI services. However, the increasingly circular nature of AI investments and revenues among these large players, coupled with a recent surge in AI-related bond issuance, has made me increasingly cautious.
A primary bull case for AI, recently articulated by investors like Gavin Baker, cites the never-ending demand for Nvidia’s GPUs and the fact that there are almost no “dark GPUs”. This, they argue, distinguishes today from the dot-com bubble, when vast amounts of fiber optic cable were laid and then sat unused. But who argues that the dot-com bubble will repeat itself exactly? The concern is not an identical rerun, but the underlying dynamic of unsustainable capital expenditure.
It may be true that chips inside data centers are “melting” under intense loads. Yet this high utilization says little about actual return on investment when a large share of AI applications is offered for free or priced far below computing cost. Much of this apparent “demand” is therefore a function of subsidized supply. Free tiers and heavily subsidized monthly plans are the true drivers of “hot GPU” utilization, fueling a race for user acquisition rather than sustainable profit. An AI bubble can certainly form even while data center chips are “melting” and Nvidia’s revenue is surging.
This structurally unprofitable cycle is being financed by unprecedented capex. The top four hyperscalers are on track to spend between $350 and $400 billion on AI infrastructure in 2025. This staggering expenditure is often justified by “booming” cloud revenues, such as Microsoft’s 39% Azure growth in the most recent quarter. However, part of this revenue is illusory. It circulates within a closed loop in which hyperscalers invest billions in AI labs, and those labs then use that capital to buy compute from the investing parent or sister company, for example OpenAI’s Azure commitment, Anthropic’s AWS primary status, and Google’s investment in Anthropic. The scale challenge is stark: J.P. Morgan estimates the industry would need roughly $650 billion of annual revenue, indefinitely, to earn even a 10% return on the AI buildout.
The hyperscalers themselves are effectively captive customers of a near-monopoly. Nvidia essentially created the data center GPU market and has controlled about 90% of it for the past three years, with a business performance unprecedented for a hardware company: in 2023, industry analysts estimated that the H100 accelerator, then the workhorse of AI training, cost roughly $3,320 to manufacture yet sold for $25,000–$30,000. This profitability continues: the company reported fiscal Q2 2026 results on August 27, with its data center business generating $41.1 billion in quarterly revenue and an overall GAAP gross margin of 72.4%. The economics of Nvidia’s latest Blackwell-generation GPUs remain staggering. The flagship dual-GPU GB200 is estimated to sell for approximately $60,000–$70,000. Given that Nvidia maintains roughly 70% gross margins and its manufacturer, TSMC, operates at around 60%, this implies an actual manufacturing cost for the GB200 of only about $7,200–$8,400. Such profitability relies entirely on the absence of viable alternatives.
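The margin chain above can be sanity-checked with simple arithmetic. The sketch below walks backwards from the sale price through the two gross margins; the prices and margin percentages are the public estimates cited in this article, not confirmed cost data:

```python
# Back-of-envelope check of the GB200 cost estimate discussed above.
# All inputs are public estimates, not confirmed figures.

def implied_fab_cost(sale_price: float,
                     nvidia_gross_margin: float = 0.70,
                     tsmc_gross_margin: float = 0.60) -> float:
    """Walk the margin chain backwards from the sale price.

    Nvidia keeps ~70% of the sale price as gross profit, so its cost
    of goods is ~30% of what it charges. TSMC in turn keeps ~60% of
    what Nvidia pays it, so the underlying manufacturing cost is ~40%
    of Nvidia's input cost.
    """
    nvidia_cogs = sale_price * (1 - nvidia_gross_margin)
    return nvidia_cogs * (1 - tsmc_gross_margin)

for price in (60_000, 70_000):
    print(f"GB200 at ${price:,}: implied fab cost ~${implied_fab_cost(price):,.0f}")
# → GB200 at $60,000: implied fab cost ~$7,200
# → GB200 at $70,000: implied fab cost ~$8,400
```

In other words, only about 12% of the sticker price reflects manufacturing cost under these assumptions; the rest is margin captured along the chain.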
While some investors, notably Dr. Michael Burry, have recently focused on the allegedly overstated depreciation periods hyperscalers apply to GPUs, I doubt the hyperscalers are fundamentally miscalculating the technical lifespans of the chips, especially as older GPU generations like Nvidia’s Ampere and Volta remain operational. In my opinion, the core of the bubble lies in the monopoly prices Nvidia can command. Hyperscalers have purchased GPUs worth hundreds of billions at monopoly prices, even though downstream AI applications are nowhere near generating revenues on that scale.
The speculative capex, once funded by massive operating cash flows, is now shifting to the debt markets. AI-related capex is projected to consume 94% of the hyperscalers’ operating cash flow. Consequently, a borrowing frenzy has begun. In just two months of 2025, Meta, Oracle, and Alphabet issued a combined $75 billion in bonds. Meta alone borrowed $27 billion to fund a single data center. AI-related corporate debt issuance in 2025, at $141 billion, has already surpassed the total for all of 2024.
This is the true parallel to the dot-com era. The 2000 crash was also marked by circular revenue schemes and a sharp increase in corporate debt. Another parallel is the nineteenth-century railroad boom, a speculative, debt-driven capex cycle. But while that bubble left behind durable assets such as railroads and land, the AI bubble is built on ephemeral assets. Hyperscalers are stretching their balance sheets to pay monopoly prices for hardware useful for years, not decades. The “hot GPUs” process mostly unprofitable work, justified by accounting fictions and now increasingly funded by debt.
The economics of the AI labs illustrate this growing unsustainability. According to analysis by Ed Zitron, Anthropic spent $2.66 billion year-to-date on AWS compute to generate $2.44 billion in revenue, while OpenAI spent $8.7 billion on inference alone on Microsoft’s Azure against a reported $13 billion in revenue. These are not the metrics of economically sustainable enterprises.
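Putting those reported figures side by side makes the point concrete. The short calculation below uses the estimates cited above (Zitron's figures, not audited financials):

```python
# Compute spend as a fraction of revenue, using the figures cited in
# this article (estimates in billions of USD; not audited data).

labs = {
    # name: (compute spend, revenue)
    "Anthropic (AWS, YTD)": (2.66, 2.44),
    "OpenAI (Azure, inference only)": (8.7, 13.0),
}

for name, (spend, revenue) in labs.items():
    ratio = spend / revenue
    print(f"{name}: compute spend is {ratio:.0%} of revenue")
# → Anthropic (AWS, YTD): compute spend is 109% of revenue
# → OpenAI (Azure, inference only): compute spend is 67% of revenue
```

Note that the OpenAI figure covers inference alone; training runs, salaries, and other operating costs come on top of it, so the true cost-to-revenue ratio is higher still.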
No one doubts that AI is real. On the contrary, that is part of the problem. Thousands of white-collar jobs are being replaced faster than the economy can adapt. Consumers are already facing a recession, and consumer spending accounts for roughly 70% of U.S. GDP. A bubble means a decoupling from fundamentals. When consumers can no longer afford a bottle of Heinz ketchup or a bowl from Chipotle while big tech burns billions on an unprofitable technology, the disconnect is plain: we are already in the midst of one.
Follow me on X for frequent updates (@chaotropy).