
Why Putting AI Data Centers In Space Doesn’t Make Much Sense

Jeff Bezos says gigawatt AI data centers will orbit Earth within 10 to 20 years. Thermodynamics makes the idea infeasible: without convection, waste heat must be radiated away through panels spanning millions of square meters. Latency, radiation risks, and launch costs only compound the problem.

ISS-54 ELC-1, main solar arrays and radiators seen from the Cupola (NASA, Public domain, via Wikimedia Commons)

Disclosure: The author holds beneficial long positions in Rocket Lab Corp. (NASDAQ: RKLB), Kraken Robotics Inc. (OTC Markets: KRKNF), Northrop Grumman Corp. (NASDAQ: NOC) and Amazon.com, Inc. (NASDAQ: AMZN). This article is provided for informational and entertainment purposes only and does not constitute financial advice. The views expressed here represent the author’s personal opinion. The author receives no compensation for this article and has no business relationship with the companies mentioned. Please see the full "Legal Information and Disclosures" section below.

Jeff Bezos predicted at Italian Tech Week in Turin on October 3 that gigawatt-scale data centers will be deployed in space within the next two decades, citing continuously available solar energy as the decisive edge. I'm extremely bullish on the space economy in general, but I couldn't help pumping the brakes on this particular vision.

At first glance, the concept is undeniably appealing. As NVIDIA’s CEO Jensen Huang emphasized in his GTC 2025 keynote, AI is becoming a "power‑limited industry", and constraints on terrestrial infrastructure are real. Solar in orbit has clear advantages: above the atmosphere you get roughly the full solar constant (~1,361 W/m²) versus ~1,000 W/m² at sea level under ideal conditions, plus no clouds or night, delivering ~30–40% more irradiance and a far better duty cycle. Launch costs have fallen from around $10,000 per kilogram to under $2,000 with SpaceX’s Falcon Heavy, and projected sub-$100 costs with Starship make the economics look increasingly viable.
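The irradiance and duty-cycle advantage above can be sketched as a back-of-envelope calculation. The 99% sunlight fraction for a dawn-dusk sun-synchronous orbit and the 25% terrestrial solar capacity factor are illustrative assumptions, not figures from the article:

```python
ORBIT_IRRADIANCE = 1361.0   # W/m^2, solar constant above the atmosphere
GROUND_IRRADIANCE = 1000.0  # W/m^2, ideal clear-sky peak at sea level

# Peak-irradiance gain from being above the atmosphere (~36%)
peak_gain = ORBIT_IRRADIANCE / GROUND_IRRADIANCE - 1

# Average delivered power per m^2 of panel, including duty cycle
orbit_avg = ORBIT_IRRADIANCE * 0.99    # assumed near-continuous sun in a dawn-dusk orbit
ground_avg = GROUND_IRRADIANCE * 0.25  # assumed typical terrestrial capacity factor

# An orbital panel delivers roughly 5x the daily energy of a ground panel
energy_ratio = orbit_avg / ground_avg
```

Under these assumptions the same panel collects about five times more energy per day in orbit, which is the core of Bezos's argument.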

Companies are already moving internet infrastructure skyward. SpaceX’s Starlink serves millions of users globally, proving the operational viability of large LEO constellations. And AST SpaceMobile has unfolded the largest commercial communications array ever in low Earth orbit—BlueWalker‑3’s 693‑square‑foot phased array—demonstrating we can build sizable orbital hardware. With falling launch costs, maturing satellite technology, and surging demand for compute, the narrative is compelling for investors seeking the next frontier of computing literally above the clouds.

But here’s where physics becomes the party crasher: cooling. High‑performance computing turns nearly all consumed electric power into heat. On Earth, we rely on conduction into liquid coolants and convection to air (plus heat rejection through cooling towers or dry coolers). In space, you can still conduct heat within the spacecraft to a radiator, but in the vacuum there is no ambient gas or fluid to convect into; the only way to dump heat to the environment is radiation. The Stefan‑Boltzmann law is unforgiving at the temperatures electronics prefer: at radiator temperatures around 300–350 K (≈ 27–77 °C), even a near‑ideal surface emits only a few hundred to ~800 W per square meter. Real space radiators at ~300 K typically reject ~100–350 W/m² once you include emissivity, view factors, and operational constraints. That implies on the order of 1–3 m² of radiator area per kilowatt, or 1–3 million m² for a gigawatt data center. With realistic assumptions, my calculations for a one‑gigawatt data center yield at least 2.2 million m² of emitting surface; since a panel radiates from both faces, that means a radiator of roughly 1.1 million m², equivalent to a square with edges exceeding one kilometer. I doubt this would be economically feasible.
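The radiator sizing above follows directly from the Stefan-Boltzmann law. A minimal sketch, assuming a 90% emissivity and a double-sided panel that ignores absorbed sunlight, albedo, and view-factor losses (so this is an optimistic lower bound):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_panel_area(power_w, temp_k, emissivity=0.9, sides=2):
    """Minimum panel area needed to radiate power_w to deep space at temp_k.

    Optimistic lower bound: neglects solar/albedo heat load on the panel
    and assumes a perfect view of cold deep space from both faces.
    """
    flux_per_face = emissivity * SIGMA * temp_k**4  # W/m^2 from one face
    return power_w / (flux_per_face * sides)

panel_area = radiator_panel_area(1e9, 300)  # 1 GW at ~27 C radiator temperature
edge_m = panel_area ** 0.5                  # side length of an equivalent square
```

Even under these generous assumptions, the panel comes out around 1.2 million m², a square more than a kilometer on a side, in line with the estimate in the text.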

The International Space Station, often cited to show complex computing works in orbit, actually illustrates the constraint. The ISS averages ~75–90 kW of electrical power (peaking higher in sunlight), and it carries extensive radiator wings and an active thermal control system just to stay in balance. Its onboard computing is modest by data‑center standards, yet the thermal hardware is substantial and must constantly optimize pointing to deep space to radiate heat. Scaling that to thousands of AI accelerators is a heat-rejection problem that today's technologies cannot easily overcome.

The radiation environment poses additional complications. Cosmic rays threaten data integrity and generate additional heat through particle interactions with electronics. Error-correction systems, redundancy, and radiation hardening all add computational overhead and energy consumption, further exacerbating the thermal challenge. Modern GPUs and AI accelerators, optimized for maximum performance per watt on Earth, would require substantial redesigns to operate reliably in space while managing these constraints.

Maintenance is another issue. Swapping a failed pump or board in a terrestrial data center takes hours. In orbit, a failed radiator panel or loop could idle racks until a servicing mission intervenes. Robotic servicing is advancing (e.g., Northrop Grumman’s MEV life‑extension missions in GEO), but ambitious NASA efforts like OSAM‑1 were canceled—underscoring how hard it is to make on‑orbit servicing routine and economical at scale.

Even the latency “advantage” deserves nuance. LEO links add on the order of tens of milliseconds round‑trip, which is great for consumer internet but glacial for tightly coupled AI training that relies on microsecond‑class fabrics (NVLink/NVSwitch domains, InfiniBand). Big distributed training jobs synchronize frequently; adding 20–50 ms to collective operations is a non‑starter for many workloads meant for in‑rack or in‑row interconnects.
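A toy model makes the synchronization penalty concrete. The compute time per step and the number of blocking collective operations are hypothetical round numbers, not measurements; the point is only how a millisecond-class link dominates once syncs are frequent:

```python
def step_time(compute_s, syncs_per_step, rtt_s):
    """Time per training step when each sync blocks on one round trip."""
    return compute_s + syncs_per_step * rtt_s

compute = 0.5  # hypothetical 500 ms of pure compute per training step
syncs = 10     # hypothetical blocking collective ops (all-reduces) per step

fabric_step = step_time(compute, syncs, 10e-6)  # ~10 us in-rack fabric RTT
leo_step = step_time(compute, syncs, 30e-3)     # ~30 ms LEO link RTT

slowdown = leo_step / fabric_step  # ~1.6x slower per step in this toy model
```

With tighter coupling (more syncs per step, shorter compute phases) the slowdown grows quickly, which is why latency-tolerant batch workloads, not large-scale training, are the plausible fit for orbital compute.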

The economics, too, require a reality adjustment. Launch prices have dropped dramatically, but they’re still far above pouring concrete and laying fiber. Add the mass and complexity of multi‑square‑kilometer radiators, redundancy, radiation hardening, and insurance for billion‑dollar assets in debris‑rich orbits and the business case thins out fast.

None of this dismisses space‑based computing entirely. Specialized edge cases make sense: on‑orbit preprocessing of Earth‑observation data before downlink, or small compute nodes integrated into comms satellites where the data already is. But the vision of gigawatt-scale orbital data centers powering Earth's AI revolution ignores fundamental physical constraints.

The space economy will boom nonetheless. Launch providers (SpaceX, Rocket Lab), direct‑to‑device players (AST SpaceMobile), and Earth‑observation companies will thrive by delivering indispensable services. Investors should absolutely watch those platforms. But when it comes to replacing terrestrial data centers with orbital ones, the laws of thermodynamics argue for keeping our servers—and expectations—grounded.

A more pragmatic near‑term path: subsea and offshore hybrids. Underwater or floating near‑shore data centers paired with offshore renewables (floating solar, wind) and seawater‑linked liquid cooling can deliver large thermal headroom without radiators the size of cities. Microsoft’s Project Natick found underwater modules had markedly lower failure rates than comparable land‑based gear, and multiple pilots are now exploring offshore floating data‑center platforms. A potentially interesting play on subsea infrastructure could be the Canadian company Kraken Robotics (OTC Markets: KRKNF).

Follow me on X for frequent updates (@chaotropy).

General Disclaimer & No Financial Advice: The content of this article is for informational, educational, and entertainment purposes only. It represents the personal opinions of the author as of the date of publication and may change without notice. The author is not a registered investment advisor or financial analyst. This content is not intended to be, and shall not be construed as, financial, legal, tax, or investment advice. It does not constitute a personal recommendation or an assessment of suitability for any specific investor. Readers should conduct their own independent due diligence and consult with a certified financial professional before making any investment decisions.

Accuracy and Third-Party Data: Economic trends, technological specifications, and performance metrics referenced in this article are sourced from independent third parties. While the author believes these sources to be reliable, the completeness, timeliness, or correctness of this data cannot be guaranteed. The author assumes no liability for errors, omissions, or the results obtained from the use of this information.

Disclosure of Interest: The author holds beneficial long positions in Rocket Lab Corp. (NASDAQ: RKLB), Kraken Robotics Inc. (OTC Markets: KRKNF), Northrop Grumman Corp. (NASDAQ: NOC) and Amazon.com, Inc. (NASDAQ: AMZN). The author reserves the right to buy or sell these securities at any time without further notice. The author receives no direct compensation for the production of this content and maintains no business relationship with the companies mentioned.

Forward-Looking Statements & Risk: This article contains forward-looking statements regarding product adoption, technological trends, and market potential. These statements are predictions based on current expectations and are subject to significant risks and uncertainties. Investing in technology and growth stocks is speculative, subject to rapid change and competition, and involves a risk of loss. Past performance is not indicative of future results.

Copyright: All content is the property of the author. This article may not be copied, reproduced, or published, in whole or in part, without the author's prior written consent.
