China's Underwater Data Centers vs. Liquid Cooling for AI

The global race to mitigate the immense energy footprint of artificial intelligence has moved from land to sea. As companies build AI models with trillions of parameters, the search for sustainable infrastructure is no longer a niche concern but a central economic and environmental challenge. Building on foundational research by Microsoft, Chinese technology firms are now deploying the world’s first commercial-scale underwater data centers (UDCs), representing a significant development in high-efficiency computing. These projects in Hainan and Shanghai apply the principles of seawater cooling for AI efficiency at an unprecedented scale. This analysis examines the documented capabilities, engineering trade-offs, and the competitive landscape of this subsea approach, providing a reality check on its role in the future of sustainable AI.
Key Points
• Microsoft’s Project Natick established the viability of UDCs, demonstrating an eightfold increase in server reliability and a Power Usage Effectiveness (PUE) of 1.07.
• China’s commercial projects are applying this model to support high-density AI workloads, with the Hainan facility designed to save 122 million kWh of electricity and 105,000 tons of freshwater annually.
• A primary engineering trade-off is the high Mean Time To Repair (MTTR) for “lights-out” modules, which contrasts with the lower server failure rate.
• UDCs are one of several competing sustainable AI cooling technologies, alongside on-land solutions like liquid immersion and direct-to-chip cooling, each suited for different applications.
Diving Deep: Natick’s Underwater Legacy
The concept of placing data centers on the seafloor was validated by Microsoft’s multi-year Project Natick. This research provided the critical engineering proof that underpins today’s commercial ventures, establishing the core metrics for judging any subsea deployment. The project’s second phase deployed a 40-foot-long data center containing 864 servers at a depth of 117 feet off Scotland’s Orkney Islands from 2018 to 2020.
The findings were significant. The underwater servers showed a failure rate just one-eighth that of an equivalent land-based facility. Microsoft attributed this remarkable reliability to the inert nitrogen atmosphere, which mitigates corrosion, and the absence of human intervention and temperature fluctuations. Furthermore, the facility achieved a Power Usage Effectiveness (PUE) of 1.07 by using a direct heat-exchange system with seawater. This is a substantial improvement over the 2021 global average PUE of 1.57, as reported by the Uptime Institute. The prefabricated module was also built and deployed in under 90 days, demonstrating a model for rapid capacity expansion near coastal population centers.
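For reference, PUE is total facility energy divided by IT equipment energy, so a PUE of 1.0 would mean every watt goes to computing. A minimal sketch of the arithmetic behind the two figures cited above (nothing beyond those figures is assumed):

```python
def cooling_overhead(pue: float) -> float:
    """Fraction of facility energy spent on non-IT load such as cooling."""
    return 1 - 1 / pue

# PUE = total facility energy / IT equipment energy
natick_pue = 1.07     # Project Natick, per Microsoft
land_avg_pue = 1.57   # 2021 global average, per the Uptime Institute

print(f"Natick overhead:       {cooling_overhead(natick_pue):.1%}")    # ~6.5%
print(f"Land-average overhead: {cooling_overhead(land_avg_pue):.1%}")  # ~36.3%
```

In other words, a typical 2021 facility spent more than a third of its energy on overhead, versus under 7% for the subsea module.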

From Lab to Ocean: China’s AI Plunge
Leveraging the principles proven by Project Natick, Chinese companies are advancing the UDC model from research to commercial reality. The project off Hainan Island, operated by Highlander, is the world’s first commercial-scale UDC, with its initial phase installed in December 2022. The full facility is planned to comprise 100 data cabins, each a 1,300-ton pressurized vessel.
The technical specifications indicate a clear focus on high-performance and AI workloads. Highlander states a single cabin has computing power equivalent to 60,000 PCs and can process over 4 million high-definition images in 30 seconds. The project is designed to save 122 million kWh of electricity, 105,000 tons of freshwater, and 68,000 square meters of land annually compared to a land-based counterpart. An even more ambitious project is planned for Shanghai’s Lingang New Area. According to reporting from Yicai Global, it aims to deliver exascale-level computing (one quintillion calculations per second) to serve the intense demands of AI model training and financial trading in the Yangtze River Delta.
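To put the headline savings in context, a quick back-of-envelope conversion (assuming, as the phrasing suggests, that the 122 million kWh figure is an annual total) expresses it as an average continuous power draw:

```python
ANNUAL_SAVINGS_KWH = 122_000_000  # Hainan facility, per Highlander
HOURS_PER_YEAR = 8_760

avg_power_saved_mw = ANNUAL_SAVINGS_KWH / HOURS_PER_YEAR / 1_000
print(f"Average power avoided: {avg_power_saved_mw:.1f} MW")  # ~13.9 MW
```

That is on the order of 14 MW of load that a comparable land-based facility would have to provision continuously, avoided largely by not running chillers.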
Submerged Servers: Balancing the Equation
While the efficiency gains are well-documented, a balanced analysis must weigh the pros and cons of underwater data centers. The primary benefit is the near-elimination of cooling costs, which, as the U.S. Department of Energy notes, can account for up to 40% of a traditional data center’s energy use. Because roughly half of the world’s population lives within 200 km of a coast, locating these facilities offshore also directly reduces latency for end users.
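The latency claim is easy to quantify: light in optical fiber travels at roughly two-thirds the speed of light in vacuum, about 200,000 km/s. A rough sketch of best-case round-trip times (routing and switching delays, which dominate in practice, are ignored here):

```python
FIBER_SPEED_KM_S = 200_000  # ~2/3 of c; typical for optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case fiber round-trip time, ignoring routing and queuing delays."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1_000

for d in (200, 1_000, 3_000):
    print(f"{d:>5} km one-way -> {min_rtt_ms(d):.0f} ms RTT")
# 200 km -> 2 ms, 1,000 km -> 10 ms, 3,000 km -> 30 ms
```

Shaving the propagation budget from tens of milliseconds to single digits matters most for interactive inference and financial workloads, the very use cases the Shanghai project targets.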
However, significant operational hurdles remain. The “lights-out” model that enhances reliability creates a maintenance paradox. An IEEE Spectrum analysis points out that while server failure rates are lower, the Mean Time To Repair (MTTR) is drastically higher, as a failed component may require retrieving the entire multi-ton module. Furthermore, the long-term environmental impact is still under study. A report in Frontiers in Marine Science calls for detailed assessments, citing concerns that concentrated thermal plumes, acoustic noise, and electromagnetic fields could disrupt local marine ecosystems. These engineering realities must be managed for the technology to achieve widespread, responsible adoption.
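The maintenance paradox can be made concrete with the standard steady-state availability model, availability = MTBF / (MTBF + MTTR). The eightfold MTBF factor below comes from the Natick result; the baseline failure interval and both repair times are illustrative assumptions, not published figures:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

HOURS_PER_YEAR = 8_760

# Hypothetical land baseline: one failure every 2 years, repaired in 8 hours.
land = availability(2 * HOURS_PER_YEAR, 8)

# Subsea: 8x the MTBF (per Natick), but repair may mean retrieving the
# whole module; the 30-day MTTR here is an assumption for illustration.
subsea = availability(8 * 2 * HOURS_PER_YEAR, 30 * 24)

print(f"Land:   {land:.4%}")    # ~99.9544%
print(f"Subsea: {subsea:.4%}")  # ~99.4889%
```

Under these toy numbers, the retrieval penalty more than erases the eightfold reliability gain for any single server, which is why the lights-out model treats failed hardware as dead weight to be left dark rather than repaired in place.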

Cooling Titans: The Competitive Landscape
Underwater data centers are among the most notable of the latest sustainable AI cooling technologies to enter a rapidly expanding market. Driven by the immense energy demands of AI (training GPT-3 was estimated to consume 1,287 MWh of electricity, according to one foundational study), the green data center market is projected by Allied Market Research to grow from $62.3 billion in 2022 to over $265 billion by 2032. In this competitive landscape, comparing China’s underwater data centers with on-land liquid cooling for AI highlights distinct trade-offs.
UDCs offer a unique combination of minimal land and freshwater use and exceptionally low PUE, while on-land technologies such as liquid immersion and direct-to-chip cooling provide compelling alternatives for scenarios where hands-on maintenance and incremental deployment matter.
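As a sanity check on the market projection above, the implied compound annual growth rate is straightforward arithmetic:

```python
start_billion, end_billion, years = 62.3, 265.0, 10  # 2022 -> 2032, per Allied Market Research

cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~15.6%
```

A sustained growth rate near 16% per year underscores why so many competing cooling approaches are attracting investment at once.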
Oceans of Data: Computing’s New Frontier
China’s commercial UDC deployments represent a powerful, real-world test of a radical infrastructure design. If these projects prove economically viable and environmentally manageable, they will provide a blueprint for other coastal megacities struggling with high land costs and strained power grids. This development challenges the data center industry to pursue transformative designs that treat sustainability as a core architectural principle. The success of these subsea facilities will be watched closely by the global tech industry. As AI’s energy appetite continues to grow, will the industry’s most significant innovations be found on land or beneath the waves?