
By: Julia Matevosyan, Associate Director & Chief Engineer, Energy Systems Integration Group (ESIG)
Julia Matevosyan is a Senior Member of IEEE with a PhD in Electrical Engineering from the Royal Institute of Technology, Stockholm. She is an expert in renewable energy integration, with extensive experience in grid code compliance, wind and hydro power production planning, and transmission system development. Julia has contributed significantly to advancing the technical requirements for connecting renewable generation to power systems and is a recognized authority on power system analysis and planning. She previously served as Lead Planning Engineer at ERCOT and is now Associate Director and Chief Engineer at ESIG.
Data centers as large loads
Data centers are rapidly emerging as one of the most significant new categories of electricity demand. According to projections from the U.S. Department of Energy (DOE), EPRI, and Grid Strategies, their installed capacity will grow quickly, from approximately 20 GW in 2023 to more than 120 GW by 2030 in the upper-bound scenario. They are also geographically concentrated in a few regions, where their interconnection requests represent a substantial portion of total new load.
Data centers belong to a broader class of modern large loads, which also include electric vehicle fleet depots, hydrogen electrolysis plants, semiconductor and battery manufacturing facilities, and other energy-intensive industries. These facilities differ fundamentally from historical industrial loads such as aluminum smelters or chemical plants. While older industrial loads were large but steady and predictable, modern large loads have characteristics that create new and complex challenges for the power system.

Key characteristics that distinguish data centers in particular as a new large load category include:
- Load demand: A single facility can be hundreds of megawatts, and gigawatt-scale projects are now being proposed.
- Voltage level at interconnection: They typically connect at transmission-level voltages, not distribution.
- Geographic clustering: Multiple facilities are concentrated in areas such as Northern Virginia or Dallas–Fort Worth, creating very large, localized demand.
- Power electronics-based interface: Data centers use converters to interface with the grid, leading to unique interconnection challenges, including sensitivity to voltage and frequency disturbances as well as power quality and protection issues.
- Dynamic load profiles: AI training clusters can create large and rapid fluctuations, with tens of megawatts of change within seconds, a behavior more akin to variable generation than traditional demand. Figure 1 shows an AI data center’s load profile exhibiting high-speed, high-amplitude active power consumption fluctuations during AI training.
- Opaque to system operators: Many large loads are developed by private entities that disclose little about their timing, characteristics, load shapes, or flexibility potential, creating significant uncertainty for load growth forecasting and operational planning.
- Speed of buildout: Hyperscale data centers are typically built in two to three years, much faster than the seven to ten years required for major transmission upgrades, creating timing mismatches that cause interconnection backlogs, resource adequacy concerns, and transmission bottlenecks.
- Fault ride-through and tripping behavior: During transmission-level disturbances, large load facilities may switch from the grid to their backup supply, amplifying the effect of otherwise routine events. If multiple large load facilities concentrated in one geographic area switch simultaneously, the result can be a cascading loss of generation and involuntary customer load shedding.
Taken together, these characteristics position hyperscale data centers as the most pressing example of modern large loads. Their unique features create broad implications for the planning, operation, and reliability of the power system — challenges that are examined in the following sections.
Power system-level implications
The emergence of large loads has significant implications for the power system. Their growth exposes a mismatch between the speed at which these facilities are built and the timelines required to reinforce the grid and build new generation. Developers can bring large data centers online in about two to three years, while transmission and generation planning, permitting, and construction often take a decade or more. This mismatch creates risks of congestion and inadequate capacity to serve demand where it arises. In operations, short-term load forecasting uncertainty (day-ahead and intraday) leads to sub-optimal generation unit commitment, while high demand variability increases operating reserve needs.
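As a rough illustration of how demand variability translates into reserve needs, the minimal sketch below sizes an additional reserve requirement as a high percentile of short-term net load forecast errors. The error distribution, the 95% coverage criterion, and the function name are illustrative assumptions, not an operator's actual methodology.

```python
import numpy as np

def reserve_from_forecast_errors(errors_mw: np.ndarray, coverage: float = 0.95) -> float:
    """Reserve (MW) sized to cover a given fraction of absolute short-term
    forecast errors. Illustrative criterion only."""
    return float(np.percentile(np.abs(errors_mw), coverage * 100))

# Hypothetical intraday forecast errors (MW) for a region with fast-ramping
# data center load; positive values mean load came in above forecast.
rng = np.random.default_rng(seed=1)
errors = rng.normal(loc=0.0, scale=60.0, size=2000)  # assumed 60 MW standard deviation

print(f"Reserve covering 95% of errors: {reserve_from_forecast_errors(errors):.0f} MW")
```

With a wider error distribution, which is what opaque and highly variable large loads produce, the same criterion immediately calls for more operating reserves.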
Reliability concerns also stem from data centers’ fault ride-through behavior, their impact on voltage and frequency stability, and their potential for oscillatory behavior. Large clusters of facilities can trip offline during otherwise routine faults, producing cascading events that affect the bulk system.
In Northern Virginia, disturbance reviews documented 1,500–1,800 MW of load tripping offline during transmission faults. ERCOT studies have shown that the simultaneous tripping of 2.6 GW of concentrated data center load could push frequency beyond safe limits, creating the risk of cascading generator outages. These examples demonstrate that the challenges posed by hyperscale data centers are no longer theoretical but are already being observed in practice.
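To give a sense of the scale involved, the sketch below applies the aggregate swing equation to a sudden loss of 2.6 GW of load. The system kinetic energy value is an assumed illustrative figure, and governor and load-damping response are ignored; this is not a reproduction of the ERCOT studies.

```python
# Initial rate of change of frequency (RoCoF) after a sudden loss of load,
# from the aggregate swing equation: df/dt = f0 * dP / (2 * E_kinetic).
# All numbers are illustrative assumptions, not ERCOT study inputs.

F_NOMINAL_HZ = 60.0         # nominal system frequency
LOAD_LOSS_GW = 2.6          # concentrated data center load that trips offline
KINETIC_ENERGY_GWS = 150.0  # assumed synchronous kinetic energy (GW*s); varies widely by hour

rocof_hz_per_s = F_NOMINAL_HZ * LOAD_LOSS_GW / (2 * KINETIC_ENERGY_GWS)
print(f"Initial RoCoF: +{rocof_hz_per_s:.2f} Hz/s (frequency rises when load is lost)")

# Excursion after one second if nothing responded (an upper bound; governors
# and load damping act well within that time in practice).
print(f"Frequency after 1 s with no response: {F_NOMINAL_HZ + rocof_hz_per_s:.2f} Hz")
```

Even under these simplified assumptions, a single correlated trip moves frequency far more than routine load fluctuations do, which is why geographic concentration matters.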
Power quality impacts also arise in the form of harmonics, flicker, and rapid reactive power swings that propagate beyond the facility’s converters and cooling systems into transmission corridors, complicating protection schemes and degrading power quality for other customers. Finally, limited observability and incomplete data sharing hinder system operators’ ability to assess the full implications of these loads. With the increasing size, variability, and new performance characteristics introduced by large loads, there is a growing need for high-speed data recording, such as phasor measurement units, digital fault recorders, and dynamic disturbance recorders.
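As one concrete example of what high-speed recordings enable, the sketch below computes total harmonic distortion (THD) from a sampled current waveform. The synthetic waveform, sampling rate, and harmonic magnitudes are illustrative assumptions.

```python
import numpy as np

def total_harmonic_distortion(samples: np.ndarray, fs: float, f1: float = 60.0) -> float:
    """THD: root-sum-square of harmonic magnitudes 2-25 divided by the
    fundamental magnitude. Assumes an integer number of fundamental cycles."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    fundamental = spectrum[np.argmin(np.abs(freqs - f1))]
    harmonics = [spectrum[np.argmin(np.abs(freqs - h * f1))] for h in range(2, 26)]
    return float(np.sqrt(np.sum(np.square(harmonics))) / fundamental)

# Synthetic 60 Hz current with 5th and 7th harmonics, typical of converter front ends.
fs = 15_360.0                      # 256 samples per 60 Hz cycle
t = np.arange(0.0, 0.2, 1.0 / fs)  # 12 fundamental cycles
current = (np.sin(2 * np.pi * 60 * t)
           + 0.06 * np.sin(2 * np.pi * 300 * t)
           + 0.04 * np.sin(2 * np.pi * 420 * t))

print(f"THD: {100 * total_harmonic_distortion(current, fs):.1f} %")
```

Metrics like this require point-on-wave or high-reporting-rate measurements, which is the gap that the recorder deployments mentioned above are meant to fill.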
Interconnection studies and modeling needs
Existing interconnection study practices are not well suited to data centers and other large loads. Current methods rely heavily on static load models, which are inadequate for capturing fast dynamics, converter behavior, ride-through performance, and the complex interactions of large loads with other grid elements. As a result, interconnection processes often fail to capture the true impact of these facilities on system stability and reliability.
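For context, the sketch below shows the kind of static, voltage-dependent (ZIP) load representation that conventional power flow and stability studies often rely on. It returns a single steady-state power per voltage point, so it cannot represent UPS transfers, converter control loops, or sub-second power swings; the coefficients and the 300 MW facility are illustrative assumptions.

```python
def zip_load_power(v_pu: float, p0_mw: float, z: float = 0.2, i: float = 0.3, p: float = 0.5) -> float:
    """Static ZIP load model: constant-impedance, constant-current, and
    constant-power fractions (z + i + p = 1). Purely algebraic, with no dynamics."""
    return p0_mw * (z * v_pu ** 2 + i * v_pu + p)

# A 300 MW data center modeled this way responds only to steady-state voltage...
for v in (1.00, 0.95, 0.70):
    print(f"V = {v:.2f} pu -> P = {zip_load_power(v, 300.0):.1f} MW")
# ...and says nothing about whether the facility rides through or trips
# during the 0.70 pu voltage sag.
```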
Advanced modeling is required to address these limitations. Electromagnetic transient (EMT) simulations are necessary to study the high-speed dynamics of Uninterruptible Power Supply (UPS) systems, converters, and clustered data center behaviors. Without detailed and accurate EMT studies, important oscillatory modes and fault interactions may be missed.
For detailed interconnection studies to be representative, high-fidelity equipment models are needed in both the phasor and EMT domains. While the requirements for generator models are by now well understood, and both vendor-specific and generic models can be obtained and parameterized to represent actual power plants, the same cannot be said of large load models, for the following reasons:
- Lack of established modeling, model quality testing, model validation and benchmarking requirements for data centers or other large loads.
- Diversity in large load site uses and designs as well as confidentiality issues that obstruct the development of representative standard library models.
- Limited understanding of which aspects of large loads are important for grid impact modeling and studies.
- Lack of high-resolution data and no process for post-commissioning model validation.
The above gaps parallel the early days of inverter-based resource (IBR) integration, when incomplete models slowed the ability to study system impacts. An iterative process is similarly needed for large load model development. System operators need to define modeling requirements that ensure the usefulness and fidelity of interconnection and planning studies. Equipment manufacturers need to develop validated vendor-specific equipment models, focusing on the equipment aspects relevant to the grid. Developers and consultants will then be able to piece together a model representative of each large load site that fulfills the system operator’s requirements. High-resolution data measurement and retention requirements need to be established to enable post-commissioning model validation. Standard library models of large loads may still be useful, e.g., for wide-area planning studies or feasibility studies where the specific large load design is still uncertain.
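As an illustration of what post-commissioning model validation could look like once high-resolution measurements are retained, the minimal sketch below compares a simulated active power response against a recorded one and reports a normalized error. The traces, the 10% tolerance, and the metric are illustrative assumptions rather than an established benchmarking requirement.

```python
import numpy as np

def normalized_rmse(measured_mw: np.ndarray, simulated_mw: np.ndarray) -> float:
    """Root-mean-square error between measured and simulated active power,
    normalized by the measured peak-to-peak range."""
    rmse = np.sqrt(np.mean((measured_mw - simulated_mw) ** 2))
    return float(rmse / (measured_mw.max() - measured_mw.min()))

# Hypothetical 30 samples/s recordings of a facility's response to a nearby fault at t = 1 s.
t = np.arange(0.0, 10.0, 1.0 / 30.0)
measured = 250.0 - 120.0 * np.exp(-(t - 1.0)) * (t > 1.0)        # recorded load dip and recovery
simulated = 250.0 - 110.0 * np.exp(-0.9 * (t - 1.0)) * (t > 1.0)  # model's response to the same event

error = normalized_rmse(measured, simulated)
verdict = "within tolerance" if error < 0.10 else "model needs retuning"
print(f"Normalized RMSE: {error:.1%} ({verdict})")
```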
ESIG Large Load Task Force
To address some of these gaps, the Energy Systems Integration Group (ESIG) established the Large Load Task Force (LLTF). Its main goal is to coordinate research, share best practices, and develop solutions for the interconnection of emerging large loads. The task force brings together system operators, utilities, researchers, and industry stakeholders to tackle these challenges collectively.

As shown in Figure 2, the LLTF’s scope spans all of the areas described in this article, including:
- Data collection and load forecasting
- Interconnection process and performance
- Modeling requirements
- Transmission planning
- Resource adequacy
- Wholesale market options
While interconnection issues have been the immediate priority, the task force also considers broader system impacts such as transmission planning, resource adequacy and market integration of large loads. These areas are less developed but are critical to ensure that the growth of large loads supports, rather than undermines, system reliability and efficiency.
Key takeaways
Large data centers exemplify the new class of large loads that are reshaping the power system. Their unique characteristics—including size, clustering, converter interfaces, dynamic behavior, and opacity—create implications across planning, operations, interconnection, and reliability. Real-world events in regions such as PJM and ERCOT confirm that these challenges are already materializing.
Addressing them requires new approaches to interconnection studies, with EMT modeling, standardized load models, and transparent data sharing at the center. Beyond technical studies, broader coordination is needed across markets, transmission planning, and adequacy assessments.
The ESIG Large Load Task Force provides a unique forum to organize this work, convene experts across domains, and develop practical outputs that support system operators and industry stakeholders. Continued collaboration through the LLTF will be essential to ensure that large data centers — and other emerging large loads — can be reliably integrated into the power system of the future.