
Understanding the data center load: From AI’s energy demands to grid stability

What happens when AI consumes more electricity than entire nations use in a year? We are talking about hundreds of terawatt-hours of data center demand annually, and AI is expected to represent about 20 per cent of it.

Figure 1. Global data center electricity consumption, by equipment, base case, 2020–2030 by IEA reports.

Electricity demand driven by the proliferation of AI-focused data centers is rising fast. It is projected to more than double, from 415 TWh in 2024 to over 945 TWh by 2030, making data centers one of the fastest-growing sources of electrical load in advanced economies. The race to scale AI is no longer just about training bigger models; it's about energy. Data centers, once the silent backbone of the digital economy, are now front and center in global energy debates. Goldman Sachs research projects that the five largest US hyperscalers will have a combined $736 billion in capital expenditures across 2025 and 2026. That's not just a statistic; it's a wake-up call.

The hidden challenge: Power fluctuations in AI workloads

Large AI training workloads involve tens of thousands of GPUs, which makes power management a challenge in its own right. A traditional Google search consumes approximately 0.3 watt-hours (Wh); a query using an AI feature such as ChatGPT or a Google AI-powered response can consume up to ten times as much, from 2.9 to 3.6 Wh or more, depending on the task and model. To put this into perspective, Google processes about 571 million searches every hour (≈158,548 per second), while ChatGPT alone handles around 104 million prompts per hour (≈2.5 billion per day), as reported by Axios. This energy footprint is not concentrated in one location; it is spread across hundreds of hyperscale and enterprise data centers worldwide, each operating vast fleets of servers and GPUs. Even with far fewer queries overall, the energy footprint of AI-driven searches can rapidly surpass that of traditional ones, because generative AI models perform far more computation per query than the comparatively simple information retrieval of classical search.
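The per-query figures above lend themselves to a quick back-of-envelope comparison. The sketch below uses the numbers cited in this section; all are approximate, and real per-query energy varies widely by model, hardware, and task:

```python
# Back-of-envelope hourly energy comparison using the figures cited
# above (illustrative only; real per-query energy varies widely).

GOOGLE_WH_PER_QUERY = 0.3        # ~0.3 Wh per traditional search
AI_WH_PER_QUERY = 3.0            # midpoint of the 2.9-3.6 Wh range cited
GOOGLE_QUERIES_PER_HOUR = 571e6  # ~571 million searches per hour
CHATGPT_PROMPTS_PER_HOUR = 104e6 # ~104 million prompts per hour

google_mwh = GOOGLE_QUERIES_PER_HOUR * GOOGLE_WH_PER_QUERY / 1e6
ai_mwh = CHATGPT_PROMPTS_PER_HOUR * AI_WH_PER_QUERY / 1e6

print(f"Traditional search: ~{google_mwh:.0f} MWh per hour")
print(f"AI prompts:         ~{ai_mwh:.0f} MWh per hour")
```

With these assumptions, AI prompts draw nearly twice the hourly energy of traditional search despite roughly five times fewer queries, which is the point made above.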

Figure 2. Hyperscale company capital expenditures, Source: Goldman Sachs

Grid stability under threat

Because these training jobs are synchronous, each iteration alternates between a computation-heavy phase, where each GPU works on its local data, and a communication-heavy phase, where all the GPUs synchronize. Compute-heavy phases draw far more power than communication phases, so large power swings result. As the number of training jobs increases, so does the amplitude of these fluctuations. Their frequency range presents an even greater difficulty (shown in Figure 3 for Llama), because if it coincides with the grid's critical frequencies, it can physically disrupt power grid infrastructure. The power of such workloads therefore needs to be stabilized.
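To see why the fluctuation frequency matters, here is a minimal sketch that models node power as a square wave alternating between compute and communication phases. The power levels follow the scale of Figure 3; the phase durations are assumptions for illustration, not measured values:

```python
# Minimal sketch of training-induced power swings (assumed numbers,
# not measurements): each iteration alternates a compute phase and a
# communication phase, producing a roughly square power waveform.

P_COMPUTE_W = 8800.0   # node power during compute phase (Fig. 3 scale)
P_COMM_W = 6000.0      # node power during communication phase (Fig. 3 scale)
T_COMPUTE_S = 0.7      # compute-phase duration per iteration (assumed)
T_COMM_S = 0.3         # communication-phase duration per iteration (assumed)

period_s = T_COMPUTE_S + T_COMM_S
fundamental_hz = 1.0 / period_s       # dominant fluctuation frequency
swing_w = P_COMPUTE_W - P_COMM_W      # peak-to-peak power swing per node

n_nodes = 8
print(f"Fluctuation fundamental: {fundamental_hz:.1f} Hz")
print(f"Per-node swing: {swing_w:.0f} W; "
      f"{n_nodes}-node cluster swing: {n_nodes * swing_w / 1e3:.1f} kW")
```

Because the nodes are synchronized, they swing in phase and the amplitude scales with cluster size; at data-center scale this can place megawatt-level oscillations at low frequencies where the grid has its own oscillation modes.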

Figure 3. Example of power demand during Llama-70B training across 8 nodes, Source: 2024 United States Data Center Energy Usage Report by Berkeley Lab Energy Analysis and Environmental Impact Division.

Typical data center load: Cooling, backup, and complex power electronics

Each data center houses thousands of servers, which generate a large amount of heat as they process data. Running those servers and keeping them cool accounts for much of a data center's electricity consumption.
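A standard way to quantify this cooling and facility overhead is Power Usage Effectiveness (PUE): the ratio of total facility power to IT power. The numbers below are illustrative assumptions, not measurements from any particular facility:

```python
# Power Usage Effectiveness (PUE) = total facility power / IT power.
# All values are illustrative assumptions.

it_load_mw = 10.0    # servers, storage, network equipment
cooling_mw = 3.0     # chillers, air handlers, pumps
other_mw = 0.5       # lighting, UPS and distribution losses

pue = (it_load_mw + cooling_mw + other_mw) / it_load_mw
print(f"PUE = {pue:.2f}")  # every IT watt costs extra facility watts
```

A PUE of 1.35 means every watt of compute carries 0.35 W of overhead; an ideal facility would approach 1.0.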

These loads are characterized by high energy usage, fluctuating operational demand, and substantial cooling needs. The load's internal configuration, which includes backup power and internal protection, can be extremely complicated. Because data centers and other computational loads cluster in places with affordable and dependable electricity, rigorous grid planning and interconnection practices are needed. As large-scale loads come to dominate future power systems, the main challenge is to address these difficulties proactively and better understand the resulting grid dynamics.

Equally important, the industry still lacks clear regulations and standardized methods for modeling and integrating such complex loads, making data access and consistency a growing challenge for planners and researchers alike.

For data centers, the challenge is that several components interact across very different time scales: protection systems, power electronics, controllers, and the energy management system (EMS), especially since most data centers rely on backup sources such as UPS units or generators. Fast switching events inside converters happen in microseconds, while protection systems and ride-through behaviors are observed in milliseconds.

Testing the interactions between all these systems is crucial, not only for validating current designs, but also for exploring new control approaches and technologies that could help mitigate grid impacts, such as E-STATCOM, peak shaving, or reactive compensation. Hardware-in-the-Loop (HIL) simulation makes it possible to test both ends of this spectrum. FPGA-based tools capture the ultra-fast switching harmonics of rectifiers, inverters, and solid-state transformers, while CPU-based solvers handle slower but equally critical behaviors like voltage stability and fault ride-through. By combining these, you can validate how a data center’s internal systems interact with the larger grid, ensuring that cooling, backup power, and complex electronic loads remain stable even under stress.
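The multi-rate partitioning described above can be sketched as a toy co-simulation loop: a fast solver advances a converter model in microsecond steps while a slow solver updates the EMS once per millisecond. Both models are placeholder first-order stand-ins with assumed parameters, not real converter or EMS implementations:

```python
# Toy multi-rate loop mimicking HIL partitioning: a fast solver steps
# the converter model at 1 us (FPGA-side in real HIL) while a slow
# solver updates the EMS at 1 ms (CPU-side). Models are placeholders.

FAST_DT = 1e-6                            # converter time step (s)
SLOW_DT = 1e-3                            # EMS/controller time step (s)
STEPS_PER_SLOW = int(SLOW_DT / FAST_DT)   # 1000 fast steps per slow step

def ems_update(setpoint_kw: float) -> float:
    """Placeholder EMS: ramp the power setpoint (e.g. peak shaving)."""
    return min(setpoint_kw + 50.0, 500.0)

def converter_step(power_kw: float, setpoint_kw: float) -> float:
    """Placeholder first-order converter response toward its setpoint."""
    tau = 200e-6  # assumed converter response time constant (s)
    return power_kw + (FAST_DT / tau) * (setpoint_kw - power_kw)

power_kw, setpoint_kw = 0.0, 0.0
for _ in range(10):                       # 10 ms of simulated time
    setpoint_kw = ems_update(setpoint_kw)
    for _ in range(STEPS_PER_SLOW):
        power_kw = converter_step(power_kw, setpoint_kw)

print(f"Power after 10 ms: {power_kw:.1f} kW (setpoint {setpoint_kw:.1f} kW)")
```

In a real HIL setup the fast loop runs on an FPGA with detailed switching models and the slow loop on a CPU solver; the sketch only illustrates how the two rates interleave.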

As data centers continue to expand in scale and complexity, their impact on global energy systems cannot be ignored. The combination of massive electricity demand, fluctuating loads, and significant cooling requirements underscores the urgent need for smarter grid planning and innovative engineering solutions. By embracing real-time simulation, advanced modeling, and more efficient infrastructure design, the industry can ensure that data centers remain reliable, sustainable, and resilient cornerstones of the digital economy.