Meeting electrical infrastructure demands in data centers

A data center’s electrical system requires a robust, reliable infrastructure that far surpasses that of its commercial and industrial facility peers.

By Christopher M. Johnston, PE, Syska Hennessy Group, Atlanta May 16, 2013

Learning objectives

  • Understand today’s data center demands and how to meet them.
  • Know the requirements for key equipment and its installation.
  • Understand how to properly specify wiring for various voltages.

To use an analogy, a data center is a womb with no view, built for computers. Designed to make complex equipment comfortable, the data center requires a robust and highly reliable electrical infrastructure that far surpasses that of its commercial and industrial facility peers.

This high-reliability infrastructure is achieved by meeting unique operational efficiency goals, properly selecting and installing electrical equipment, and specifying proper wiring and design methods with appropriate voltages, all while meeting maintainability requirements for planned maintenance.

The first step in this process is to define key electrical system base requirements/data center goals. These are typical for a high-reliability installation:

1. Redundant components and systems are equivalent to a person leaving the house in the morning with pants that are too big, so he grabs a belt and a pair of suspenders. If the belt breaks, the suspenders will keep the pants up and vice versa. Either way, he’s covered.

2. Concurrent maintainability means assuring that every component and system (both power and cooling) that supplies the computers can be taken out of service for replacement, repair, or maintenance without shutting the computers down.

3. Fault tolerance, different from concurrent maintainability, means that when any component or system breaks or fails, the systems automatically reconfigure so that the computers don’t shut down. Fault tolerance is an automatic process; concurrent maintainability is a manual process. Part of fault tolerance is compartmentation so that a fire or explosion in one area does not result in total loss of power, cooling, or both to the computers.

4. Complete standby power backup is achieved with a generator plant that is set to provide power when utility power is not available.

5. Selective overcurrent coordination of circuit breakers and/or fuses is achieved so that during a fault, only the minimum amount of the system is shut down. Ideally, the system will open only the circuit breakers that supply the single piece of failed equipment and nothing else upstream.

6. Modular, scalable construction allows a data center to expand down the road without overbuilding capacity on day one. This is crucial for two reasons: First, everyone is watching their pocketbooks, so if 10 MW of computers will eventually be needed but only 5 MW is needed on day one, the total cost of ownership (TCO) can be minimized by building a modular, scalable shell for 10 MW, but only 5 MW of interior infrastructure for day one. Second, the modular, scalable data center is easier to maintain. Data centers with an excess of unused capacity are a maintenance headache. Careful consideration of the ultimate facility configuration and expansion phases is necessary to minimize risk and eliminate the need for computer equipment shutdown during expansion.

7. Underground circuits are employed in data centers for two reasons: Contractors think they are less expensive to install, and they provide physical security and compartmentation for the data center’s wiring system. However, they require special calculations during the design phase. The Neher-McGrath calculations, found in National Electrical Code (NEC) 310.15(C) and Annex B, must be used to design all underground circuits (see the sketch following this list). These calculations often result in the number and size of conductors run underground being substantially larger than would be required overhead. Thus, the anticipated economy compared to overhead circuits is often a false hope.

8. Emphasis on operating efficiency (reduced operating expense, or OPEX) and on minimizing TCO is addressed largely by reducing power usage effectiveness (PUE), the ratio of total facility power to the power delivered to the IT equipment.
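As a rough illustration of item 7, the sketch below implements the basic Neher-McGrath ampacity equation as it appears in NEC 310.15(C). The function name and the duct-bank inputs (earth ambient, dielectric loss rise, ac resistance increment, effective thermal resistance) are illustrative placeholders rather than design values; a real underground design needs a detailed duct-bank thermal model or ampacity software.

```python
import math

def neher_mcgrath_ampacity_ka(tc, ta, delta_td, rdc_uohm_per_ft, yc, rca):
    """Basic Neher-McGrath ampacity equation from NEC 310.15(C).

    tc              -- maximum allowable conductor temperature, deg C
    ta              -- ambient earth temperature, deg C
    delta_td        -- conductor temperature rise from dielectric loss, deg C
    rdc_uohm_per_ft -- dc resistance of the conductor at tc, micro-ohms per foot
    yc              -- ac resistance increment (skin and proximity effects)
    rca             -- effective thermal resistance, conductor to ambient,
                       thermal ohm-feet (insulation, duct, concrete, earth)
    Returns ampacity in kiloamperes.
    """
    return math.sqrt((tc - (ta + delta_td)) /
                     (rdc_uohm_per_ft * (1.0 + yc) * rca))

# Placeholder inputs for a hypothetical 600 kcmil copper feeder in a duct bank.
amps = 1000.0 * neher_mcgrath_ampacity_ka(
    tc=75.0, ta=20.0, delta_td=0.0, rdc_uohm_per_ft=21.6, yc=0.04, rca=12.0)
print(f"~{amps:.0f} A")  # the duct bank's thermal resistance caps the ampacity
```

The effective thermal resistance (rca) is the term that grows underground, because the earth around the duct bank holds heat in; that is why the calculation so often calls for more or larger conductors below grade than overhead.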

Each one of these requirements/goals is crucial because, unlike the typical commercial or industrial facility, the data center load is continuous, with increased ambient temperatures in many areas. For example, the back sections of data cabinets can reach 104 to 113 F where branch circuit wiring is installed, and hot aisles can reach the same 104 to 113 F where branch circuit wiring is run upstream from the cabinets. These elevated temperatures result from higher supply air temperatures to the computer equipment, a strategy used to reduce PUE. Electrical equipment rooms (except those containing storage batteries) can operate at up to 104 F, also to reduce PUE. These elevated temperatures make designing for high operating temperature, on top of meeting code requirements, far more critical in a data center than in a typical commercial or industrial facility.

Operations and maintenance

Beyond unique base design requirements, skilled data center designers also must consider equipment maintenance during design, as ease of maintenance will be crucial to meeting continuous and reliable data center operations. Because a lot of maintenance is required to sustain the critical environment, concurrent maintainability, arc flash labeling, and mean time to repair (MTTR) reduction all play a role in maintaining a data center’s electrical operations.

Designing the data center’s electrical systems to achieve concurrent maintainability means creating an arrangement where any piece of equipment or system that supplies the computers can be taken offline for maintenance purposes while the load continues to operate.

On occasion, maintenance is performed on a piece of equipment while it’s energized (hot work). While a high-reliability data center is designed for concurrent maintainability, some operators choose hot work to reduce maintenance time. Although there are many safety procedures in place for this type of maintenance, the best way to understand the risks associated with each piece of data center equipment is to understand its arc flash labeling. This label reflects the arc flash hazard calculated for each piece of equipment and identifies the level of personal protective equipment (PPE) and the distances required for safe maintenance. It’s important to understand that some maintenance on small parts in restricted spaces cannot be performed while wearing NFPA 70E category 3 or 4 PPE.

Minimizing the mean time to repair (MTTR), the time it takes to repair a piece of crucial data center electrical equipment and return it to service, is also important when planning data center maintenance in advance and when specifying equipment. Proper specification can reduce MTTR. For example, a 4000 A, low voltage, drawout-mounted circuit breaker can be withdrawn and replaced from shelf stock in 15 minutes, while a similar stationary-mounted circuit breaker might take an hour or more to replace.
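To show why those minutes matter, the hedged sketch below applies the standard steady-state availability ratio, MTBF/(MTBF + MTTR), to the two repair times above. The MTBF figure is an assumed placeholder rather than a vendor statistic, and the calculation treats the breaker in isolation, ignoring the redundancy a real data center would provide.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of a single repairable component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

MTBF_HOURS = 100_000.0  # assumed placeholder, not a vendor figure

for label, mttr_hours in [("drawout breaker, ~15 min swap", 0.25),
                          ("stationary breaker, ~1 hr replacement", 1.0)]:
    a = availability(MTBF_HOURS, mttr_hours)
    downtime_min_per_year = (1.0 - a) * 8760.0 * 60.0
    print(f"{label}: availability {a:.7f}, "
          f"~{downtime_min_per_year:.1f} min of downtime per year")
```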

Electrical equipment selection

Now that base and maintenance design requirements have been addressed, a data center’s electrical equipment selection takes center stage. Circuit breakers are used almost exclusively in data centers (except for the occasional use of medium-voltage fuses in utility switchgear outside the building) because they reduce MTTR, aid concurrent maintainability, and make selective overcurrent coordination relatively easy to achieve.

Circuit breakers can be mounted in one of two ways: stationary-mounted, where the breaker is bolted to the bus, or drawout-mounted, where it’s connected to the bus through a finger mechanism that allows it to be withdrawn by turning a crank or lever. A drawout-mounted circuit breaker therefore reduces MTTR and helps concurrent maintainability. All fusible switchgear, by contrast, is stationary mounted and takes longer to replace than a drawout-mounted circuit breaker.

UL 1558 switchgear is often specified instead of UL 891 switchboards in data centers. A switchboard is rated to carry fault current for no more than 3 cycles, which at 60 Hz is 0.05 sec, or 50 millisec. Switchgear, on the other hand, is rated to carry fault current for 30 cycles, or 0.5 sec. While stronger and more robust, switchgear also carries a higher price tag and often a larger space requirement. The selection of switchboards or switchgear becomes critical when performing selective overcurrent coordination.

Two techniques employed to achieve selective coordination are zone selective interlocking and layering of short-time breaker trips. Regardless of the technique, the switchgear or switchboard circuit breaker that clears the fault may be programmed to wait several tenths of a second before tripping; this is termed short-time delay. Common settings are 0.1, 0.2, 0.3, 0.4, and 0.5 sec. UL 1558 switchgear should be specified instead of UL 891 switchboards if the upstream circuit breaker has a short-time trip but no instantaneous trip.
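A minimal sketch of that selection rule, assuming a 60 Hz system: without an instantaneous trip, the bussing must carry the fault for at least the short-time delay, which a 3-cycle (0.05 sec) UL 891 rating cannot do but a 30-cycle (0.5 sec) UL 1558 rating can. The function names and the 0.4 sec setting are illustrative.

```python
FREQ_HZ = 60.0  # assumed 60 Hz system

def withstand_seconds(cycles):
    """Convert a short-circuit withstand rating in cycles to seconds."""
    return cycles / FREQ_HZ

def gear_adequate(withstand_cycles, short_time_delay_s, has_instantaneous_trip):
    """Crude check: with no instantaneous trip, the bus must carry the fault
    at least as long as the breaker's short-time delay setting."""
    if has_instantaneous_trip:
        return True
    return withstand_seconds(withstand_cycles) >= short_time_delay_s

# Compare a UL 891 switchboard (3 cycles) with UL 1558 switchgear (30 cycles)
# behind a breaker set for a 0.4 sec short-time delay and no instantaneous trip.
for name, cycles in [("UL 891 switchboard", 3), ("UL 1558 switchgear", 30)]:
    ok = gear_adequate(cycles, short_time_delay_s=0.4, has_instantaneous_trip=False)
    print(f"{name}: {'adequate' if ok else 'not adequate'}")
```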

The circuit breakers may have to be derated if the calculated X/R at a fault is unusually high. (This is another way of stating that the calculated power factor at a fault is unusually low.) Molded case circuit breakers are rated for various maximum X/R values, depending on their interrupting rating (IR): 1.73 for an IR of 10 kA interrupting capacity (kAIC) or less, 3.18 for 10 to 20 kAIC, and 4.9 for more than 20 kAIC. Insulated case circuit breakers are rated for an X/R of 6.59. Power circuit breakers are rated for an X/R of 6.59 if they are unfused, but only 4.9 if they are fused. Derating can be substantial: if a fused power circuit breaker rated 200 kAIC is applied where the X/R is 19.9, the 200 kAIC interrupting rating must be derated by 17%, to 166 kAIC.
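The hedged sketch below reproduces that arithmetic with one commonly cited multiplying factor, computed on a peak-current basis from the fault X/R versus the test X/R. The exact factor depends on the breaker type and governing standard, so treat this as a sanity check rather than a substitute for the manufacturer’s derating data.

```python
import math

def peak_basis_multiplying_factor(fault_xr, test_xr):
    """One common multiplying factor for applying a breaker where the fault
    X/R exceeds the X/R at which it was tested (peak-current basis; confirm
    the applicable factor in the governing standard before relying on it)."""
    def peak(xr):
        return 1.0 + math.exp(-math.pi / xr)
    return peak(fault_xr) / peak(test_xr)

# Fused power circuit breaker tested at X/R = 4.9, applied at X/R = 19.9.
mf = peak_basis_multiplying_factor(fault_xr=19.9, test_xr=4.9)
# mf is about 1.21, so 200 kAIC derates to roughly 165 kAIC -- approximately
# the 17% derating cited in the text.
print(f"multiplying factor ~{mf:.3f}, 200 kAIC -> ~{200.0 / mf:.0f} kAIC")
```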

These high X/R situations typically occur in data centers when the standby power plant is paralleled with the utility for a closed transition load transfer. This is when the available fault current and X/R are at their maximum; it’s not unusual for a standby generator to have an X/R of 32. Ideally, the design engineer should analyze the electrical system to determine the maximum fault current and X/R available at each breaker and ensure that the breaker can safely interrupt that fault as applied. This analysis also should include consideration of the anticipated trip unit settings of the circuit breaker. If a circuit breaker is part of a selective overcurrent coordination scheme, as data center breakers should be, a breaker without an instantaneous trip must be able to carry the available fault current until its short-time trip times out and it clears the fault. In this situation, the circuit breaker should be applied at its withstand rating, which is usually lower than its interrupting rating. Once this analysis is completed, the X/R rating, and therefore the interrupting and withstand ratings, of the circuit breaker needed in this location can be properly specified.

Because the data center load is both critical and continuous, all circuit breakers supplying critical load should be 100% rated; using 80% rated circuit breakers unnecessarily increases cabling costs. For example, if the data center has a 400 amp continuous load, a 500 amp circuit breaker applied at 80% capacity would be sufficient; however, 500 amp wiring must then be supplied downstream of the circuit breaker, costing up to 25% more than what’s actually needed with a 100% rated breaker. While more costly itself, the 100% rated circuit breaker reduces TCO and the cost of overdesign.
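A minimal sketch of the sizing arithmetic behind this example, assuming the usual NEC treatment of continuous loads (125% of the load for a standard 80%-rated breaker, 100% for a breaker listed for operation at 100% of its rating). Actual conductor selection involves more than this ratio, so the output is illustrative only.

```python
def minimum_breaker_rating(continuous_load_a, rated_100_percent):
    """Minimum breaker rating for a continuous load: 125% of the load for a
    standard (80%-rated) breaker, 100% for a breaker listed for 100% duty."""
    factor = 1.0 if rated_100_percent else 1.25
    return continuous_load_a * factor

load_a = 400.0  # continuous data center load, amperes
standard = minimum_breaker_rating(load_a, rated_100_percent=False)    # 500 A
fully_rated = minimum_breaker_rating(load_a, rated_100_percent=True)  # 400 A
print(f"80%-rated breaker: {standard:.0f} A, with downstream wiring to match")
print(f"100%-rated breaker: {fully_rated:.0f} A")
print(f"extra feeder ampacity with the 80%-rated breaker: "
      f"{standard / fully_rated - 1:.0%}")
```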

Switchgear bus and circuit breaker terminations are routinely designed to permit conductors to operate at a 90 C rating during maintenance and emergency conditions. While a piece of wire for commercial use might be rated to operate at a peak condition of 75 C, data centers demand a higher ampere rating and conductor temperature to supply more power when needed. Often, these needs occur during an emergency or maintenance condition.

It is best practice that all circuit breakers carrying critical load (IT, network, and continuous cooling equipment) be tested to the ANSI/NETA Standard for Acceptance Testing Specifications for Electrical Power Equipment and Systems during commissioning. Circuit breakers that are put directly into service without testing may fail to trip, or may trip when they should not. It’s rare to find the opportunity to take a circuit breaker offline for performance testing later, especially in a critical facility with a continuous load, even if the electrical systems are concurrently maintainable. It’s not uncommon for engineers to see small circuit breaker failure rates of as much as 6% to 15%. It goes without saying that this step is crucial.

Wiring types, methods

Base electrical infrastructure design and equipment selection are supported by appropriate specification of data center wiring. From the type of wiring chosen to its installation methods, voltages, and support, wiring forms the veins of the data center’s body.

Copper is the preferred conductor material for its ease of use, historically low risk, and ability to work in tight places. That being said, aluminum conductors can be used for large feeders when first-cost reduction is necessary, even though aluminum wires are more challenging to terminate into circuit breakers or a bus because aluminum expands and contracts more than copper as the load changes. The larger aluminum conductors often require more space within switchgear, switchboards, and panelboards. Aluminum connections also require more testing and maintenance. A best practice with aluminum conductors is to thermoscan the joints and terminations annually during peak loading conditions. Tightening substandard joints and connections is then performed at a time when the risk of a critical load outage is minimized.

A variety of wiring methods are used in data centers. Data centers are primarily filled with overhead and underground conductors in conduits and ducts, while bus ducts, cable trays, and cable buses are also employed.

Electrical contractors prefer underground conductors because they perceive that installed costs will be reduced by saving 5 ft of run at each end and eliminating the expense of overhead hangers. They also assume that the same number and size of conductors are installed underground as overhead. Proper design using the Neher-McGrath calculations, however, often requires larger numbers and sizes of conductors underground than overhead, reducing or eliminating this perceived advantage. Underground conductors must be sized larger to counter the thermal insulation naturally provided by the earth; with overhead conductors, it’s easier to dissipate the heat the conductors generate.

Additionally, scalable, modular data center construction can make proper installation of underground ducts for future equipment challenging, as there’s no 100% accurate way to know where the ducts should surface for future build-out.

Care must be taken to size conductors appropriately for elevated ambient temperatures in computer equipment racks, data hall hot aisles, and electrical equipment rooms. NEC Table 310.15(B)(16) assumes an ambient temperature of 86 F (30 C). Where the ambient temperature is higher than 86 F, the conductor cannot continuously carry the load current for which it’s rated at 86 F and must be derated for the actual ambient temperature.
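A hedged sketch of that correction, using the square-root ambient correction equation in NEC 310.15(B)(2) against the table’s 86 F (30 C) baseline. The 113 F hot-aisle figure comes from earlier in this article; the 30 A table ampacity is an arbitrary illustration, and the published NEC correction tables govern.

```python
def ambient_correction_factor(conductor_rating_c, ambient_c, table_ambient_c=30.0):
    """NEC 310.15(B)(2) ambient correction: sqrt((Tc - Ta) / (Tc - Ttable))."""
    return ((conductor_rating_c - ambient_c) /
            (conductor_rating_c - table_ambient_c)) ** 0.5

def f_to_c(deg_f):
    return (deg_f - 32.0) * 5.0 / 9.0

# 90 C insulated conductor routed through a 113 F (45 C) hot aisle
factor = ambient_correction_factor(90.0, f_to_c(113.0))
print(f"correction factor ~{factor:.2f}")                    # about 0.87
print(f"a 30 A table ampacity becomes ~{30.0 * factor:.0f} A in the hot aisle")
```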

While sometimes used in the data center’s electrical infrastructure, bus ducts face both reliability and maintainability issues because of their multiple joints. Joints are typically found every 10 ft in straight runs, so for every 100 ft of straight run there can be as many as 11 joints (and fittings, elbows, etc., add more). This can make bus ducts more susceptible to failure and maintenance more difficult. Additionally, bus ducts are factory-assembled products made to fit field measurements. If any of the measurements are wrong or a bus duct piece doesn’t fit, it cannot be field-modified; a new piece must be ordered from the factory, often with a considerable wait attached.
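One way to reach that figure of 11, sketched with illustrative assumptions: 10-ft sections create a joint at each section boundary, plus a termination joint at each end of the run.

```python
def straight_run_joints(run_ft, section_ft=10.0):
    """Joints in a straight bus duct run: one per section boundary plus a
    termination at each end (fittings and elbows would add more)."""
    boundaries = int(run_ft / section_ft) - 1
    return boundaries + 2

print(straight_run_joints(100.0))  # 11 joints in 100 ft of straight run
```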

Cable trays, typically used overhead, resemble a ladder hanging from the ceiling and are used in a data center’s electrical design for their reliable, flexible, low-cost installation. Single conductor and multi-conductor cables can be installed in cable tray, and armored cables are often specified to provide increased fault tolerance. Cable tray can be easily modified in the field to fit conditions, so precise measurement is not as critical as it is for bus duct. It is important to realize that every cable in a cable tray can be lost if just one faults and burns, unless all of the cables are armored. Another crucial point is that stacking cable trays one above another can lead to cascading failures. If a cable faults in the bottom tray, it could cause a fire that burns every cable in that tray as well as in the ones above.

Cable bus is an alternative to bus duct that has many advantages. Assembled like a cable tray with large single conductor power cables run within it and spacer blocks between the cables, it can be easily modified in the field to fit conditions. Unlike bus ducts, cable buses typically have only two terminations (one at each end, with solid cable in between) and no joints, making them more reliable. The reduced number of terminations and joints also reduces maintenance.

Voltage and installation

Both low and medium voltages are used in today’s data centers. Proper voltage selection is beyond the scope of this article, but selection of appropriate insulation types is essential for providing the desired reliability. Low-voltage (600 V or below) insulation on conductors is usually rated at 194 F (90 C), with NEC type THHN (thermoplastic high heat-resistant, nylon-jacketed) insulation used overhead in dry locations and NEC type RHW-2 (rubber) or XHHW-2 (cross-linked polyethylene) insulation used in damp, wet, or underground locations. Medium-voltage cables (1,000 V or more) are usually shielded, with 194 F (90 C) or 221 F (105 C) rated ethylene propylene rubber (EPR) or cross-linked polyethylene (XLP) insulation, and with 100%, 133%, or 173% insulation levels selected based on the system neutral grounding.

If the system neutral is solidly grounded, then a 100% insulation level is normally specified. If the system neutral is impedance grounded and the system will be allowed to operate for up to one hour with a phase grounded, then a 133% insulation level is normally specified. If the system neutral is impedance grounded and the system will be allowed to operate for more than one hour with a phase grounded, then a 173% insulation level is normally specified. (High voltage, 69,000 V or greater, is not normally used inside data centers and is typically limited to outdoor utility installations.)
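Those three rules can be summarized in a small sketch; it simply encodes the selection logic stated above (the function name and inputs are illustrative) and is not a substitute for cable manufacturer or ICEA guidance.

```python
def mv_insulation_level(neutral_grounding, ground_fault_duration_hr=None):
    """Insulation level (percent) for a shielded MV cable, per the rules above.

    neutral_grounding        -- "solid" or "impedance"
    ground_fault_duration_hr -- how long an impedance-grounded system may run
                                with one phase grounded before clearing
    """
    if neutral_grounding == "solid":
        return 100
    if ground_fault_duration_hr is None:
        raise ValueError("specify the ground-fault duration for impedance grounding")
    return 133 if ground_fault_duration_hr <= 1.0 else 173

print(mv_insulation_level("solid"))                                  # 100
print(mv_insulation_level("impedance", ground_fault_duration_hr=1))  # 133
print(mv_insulation_level("impedance", ground_fault_duration_hr=8))  # 173
```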

Data centers require a highly robust and reliable electrical infrastructure that far surpasses that of commercial and industrial facilities. In addition, elevated temperatures are encountered in many areas as operators try to reduce PUE and increase operational efficiency. Meeting these efficiency goals requires proper specification of equipment and wiring, and implementation of design methods with appropriate voltages and systems. A coordinated effort is needed to ensure the data center’s electrical infrastructure will be built to last.


Christopher M. Johnston is a senior vice president and the chief engineer for Syska Hennessy Group‘s critical facilities team. Johnston specializes in the planning, design, construction, testing, and commissioning of mission critical 7×24 facilities, and leads team research and development efforts to address current and impending technical issues in critical and hypercritical facilities. With more than 40 years of engineering experience, he has served as quality assurance officer and supervising engineer on many projects.