Is your private cloud (on-prem) ready for you to move your AI off the public cloud?

CIO.com reported that “As their AI programs mature, many IT leaders appear to be moving AI workloads from the public cloud to the private cloud or on-premises environments to control costs and protect data privacy. But data center experts warn that huge and hidden costs may await CIOs attempting to update legacy data centers for the AI era, as preparing for AI workloads can be much more involved than adding a couple of GPUs.” The July 16, 2025 article entitled “Moving AI workloads off the cloud? A hefty data center retrofit awaits” (https://tinyurl.com/56n8xrvv) included the following comments about “Millions of dollars in build costs”:

A greenfield build of an AI-ready data center could cost $11 million to $15 million per megawatt, not including compute power, says Everett Thompson, founder and CEO of data center sale and lease company WiredRE.

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says.

“As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.”

Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack.

“Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.”

Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says.
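To make the scale of those numbers concrete, here is a rough back-of-the-envelope sketch that combines the quoted figures. The rack power draw, rack count, and deployment size below are illustrative assumptions, not figures from the article; only the 50–100 kW per rack and $11–15 million per megawatt ranges come from the quoted comments.

```python
# Back-of-the-envelope estimate of an AI-ready greenfield build cost.
# Assumed inputs (hypothetical, for illustration only):
RACK_POWER_KW = 80                 # per-rack draw, within the quoted 50-100 kW training range
NUM_RACKS = 50                     # hypothetical deployment size
BUILD_COST_PER_MW = (11e6, 15e6)   # quoted greenfield range, excluding compute hardware

# Convert total IT load from kilowatts to megawatts.
total_mw = RACK_POWER_KW * NUM_RACKS / 1000

low, high = (total_mw * cost for cost in BUILD_COST_PER_MW)

print(f"IT load: {total_mw:.1f} MW ({NUM_RACKS} racks at {RACK_POWER_KW} kW each)")
print(f"Estimated build cost: ${low/1e6:.0f}M to ${high/1e6:.0f}M, not counting GPUs or other compute")
```

Under those assumptions, 50 AI training racks alone represent roughly a 4 MW facility and a build cost somewhere in the tens of millions of dollars before a single GPU is purchased, which is exactly the kind of hidden cost the article warns about.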

Is anyone surprised?
