November 15, 2025
AI’s Dirty Little Secret: It’s Out of Juice
Houston, We Have a Grid Problem
Data is the new oil. AI is the new rocket fuel. Everyone’s building digital engines to go to the moon: startups, hyperscalers, your nan’s knitting club. But here’s the snag: those engines need power. And we’re not just talking a few extra plugs behind the sofa.
According to TechRadar, nearly 40% of new data centres might hit a brick wall by 2027 — not because of cost, regulation, or even talent (shocker), but because they simply can’t get enough juice to run. No power = no AI. Even Skynet would be stuck twiddling its metallic thumbs.
Let’s unpack the mess: how AI’s appetite for compute is creating gridlock, what it means for scaling, and what fast-moving tech companies should do before committing to their next GPU-palooza.
AI’s Power Hunger Is Not Sustainable
Training an LLM like GPT-4 is estimated to burn through tens of gigawatt-hours of electricity, on the order of what a small country uses in a week (looking at you, Malta). And that’s before factoring in daily inference, edge computing, and the endless stream of A/B tests your product teams are obsessing over.
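How big is ‘big’? The arithmetic is easy to sanity-check. Here’s a back-of-envelope sketch in Python; every figure in it (GPU count, per-GPU draw, PUE, run length) is an illustrative assumption, not a published spec:

```python
# Back-of-envelope energy estimate for a frontier-scale training run.
# Every figure below is an illustrative assumption, not a published number.

gpus = 25_000          # assumed accelerator count for the run
watts_per_gpu = 700    # assumed draw per GPU under load (H100-class)
pue = 1.3              # assumed power usage effectiveness of the facility
days = 90              # assumed wall-clock training time

energy_mwh = gpus * watts_per_gpu * pue * days * 24 / 1_000_000
print(f"~{energy_mwh:,.0f} MWh")  # ~49,000 MWh on these assumptions
```

On those assumptions the run pulls roughly 23 MW continuously for three months, before a single user query is served. Swap in your own numbers; the conclusion rarely gets more comforting.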
This isn’t theoretical—it’s happening now:
- NVIDIA’s latest data centre chips are built for raw performance; each new generation draws more absolute power, even where performance-per-watt improves.
- AI startups are snatching up server space like it's Glasto tickets.
- Utilities can't approve new infrastructure fast enough to keep up.
The result? Bottlenecks. Literal physical ones. Grid congestion in major metro areas where data centres want to be, but can’t afford to wait five years for new transmission lines. High-density computing meets ye olde infrastructure. Something’s got to give.
From Hype to ‘Help, It’s Not Plugged In’
I spoke to a CTO last month who was eyeing a move to a hybrid AI-cloud model. His devs were buzzing with transformer models and inference pipelines. The blocker? Not talent. Not budget. Their hosting provider couldn’t guarantee the power supply for the GPU rack they booked four months ago.
It’s becoming common:
- Colocation delays: Power constraints are delaying colocation builds by 12–24 months in London, Frankfurt, Amsterdam, and beyond.
- Redistribution strategies: Some teams are shipping compute-intensive training workloads to Iceland, or wherever green energy is cheap and plentiful.
- Startup stallouts: I've seen early-stage AI companies pivot, not because their model didn’t work, but because they couldn’t operate at the required scale without crippling latency and infra headaches.
Moral of the story? You can't just 'scale' AI like a SaaS feature release. The pipes need to be there first.
What It Means For Hiring and Talent Strategy
Let’s connect some not-so-obvious dots. If you’re hiring for high-performance data science, MLOps, or AI engineering—but your infra can’t support training a simple LSTM—then you're not scaling AI. You're collecting salaries.
Before you open the floodgates on senior hires in ML, ask:
- What’s our actual compute profile? (A rough sizing sketch follows this list.)
- Can we access scalable power—or is this a rental issue disguised as a build one?
- Are we better served by inference optimisation over model re-training?
- Where are our blockers—code, people, or electrons?
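That first question deserves more than a shrug. A minimal sizing sketch, with placeholder numbers you’d replace with your own, looks like this:

```python
# Rough sizing pass: does a planned GPU fleet fit the power you can get?
# All figures are placeholder assumptions to be swapped for real ones.
import math

planned_gpus = 256     # assumed fleet size for the workload
watts_per_gpu = 700    # assumed per-GPU draw under load
overhead = 1.5         # assumed multiplier for CPUs, networking, cooling
kw_per_rack = 40       # assumed power budget per rack from your provider

total_kw = planned_gpus * watts_per_gpu * overhead / 1_000
racks = math.ceil(total_kw / kw_per_rack)
print(f"Fleet draw: ~{total_kw:.0f} kW across {racks} racks")
```

If your provider can only guarantee 10 kW per rack, as many older facilities do, that footprint roughly quadruples, and your blocker is suddenly floor space and feeder capacity, not headcount.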
That role you think is 'urgent AI hire #3' might actually be a glorified hamster on a wheel… if there's no infrastructure to unleash their output. Harsh? Maybe. True? Definitely.
Founders, Don’t Just Power Up — Think Bigger
If you’re a founder leading a Greentech or Fintech scaleup betting big on AI: great. You’re in a thrilling space. But you are now also in the power business—whether you like it or not.
My advice?
- Map the grid first. Speak to hosting providers about any regional constraints before you commit to new workloads.
- Prioritise energy efficiency skills. Staff with experience in distributed compute, model compression, or green infra design will be in hot demand.
- Challenge assumptions. Sometimes the smart move isn’t hiring more AI engineers… it’s optimising inference pathways and buying time (see the quantisation sketch after this list).
- Be boringly strategic. Like it or not, power constraints will become a board-level discussion. Wrap it into your ops planning early.
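To make ‘optimising inference pathways’ concrete, here’s a minimal sketch of one such lever: post-training dynamic quantisation in PyTorch. The toy model is a stand-in, and real savings depend on your architecture and hardware, so treat it as illustrative rather than a benchmark:

```python
# Minimal sketch: shrink an existing float32 model for cheaper CPU
# inference via dynamic quantisation. No retraining cluster required.
import torch
import torch.nn as nn

# Toy stand-in for a real trained model.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Convert Linear layers to int8: smaller weights, cheaper matmuls.
quantised = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantised(x).shape)  # torch.Size([1, 10])
```

Levers like this buy you months of runway on the hardware you already have, which is often cheaper than another GPU order stuck behind a grid connection queue.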
Conclusion: AI Scale = Talent + Infra + Electrons
This is the bit people don’t tell you when they sell you the ‘AI will eat the world’ dream. Power is finite. Infrastructure is slow. And even the smartest LLM team in the world is basically stranded on a powerboat without petrol if the grid says, ‘not today, mate.’
If you're hiring aggressively for AI capability, you'd better make sure your infrastructure team has green lights across the board. And if you're not sure—ask. Then decide: do we need another AI head, or an electrical engineer?
Build smart. Scale sustainably. And before you go full Skynet—check the plug.