Centering on data

With the need to balance a number of complex, changing demands (such as scalability, sustainability, and shifts in codes), data center projects are among the most complex an engineer can face. Here, top experts in the data center field offer advice on getting projects to compute.

By Jenni Spinner, Contributing Editor March 14, 2011

Meet our roundtable participants

  • Paul Bearn, PE, Associate, Electrical Services Engineer, KlingStubbins, Philadelphia, Pa.
  • Eric Kirkland, PE, LEED AP, Senior Vice President, Director of Engineering, SmithGroup, Phoenix, Ariz.
  • Bill Kosik, PE, LEED AP, Chicago Managing Principal, HP EYP Mission Critical Facilities, Inc., Chicago, Ill.
  • Shariar Zaimi, PE, President/CEO, EDG2, McLean, Va.

General

CSE: What engineering challenges does a data center project pose that are different from other projects?

Eric Kirkland: Data centers pose unique challenges driven by the need for system reliability with near-zero fault tolerance, coupled with dynamic growth in IT processing power that drives high energy demands for power and cooling. Design solutions require strategies that reduce energy consumption without increasing operational risk.

Shariar Zaimi: One major challenge in designing contemporary data centers is supporting load densities in excess of 12 kW per server rack (approximately 375 W/sq ft) while at the same time achieving a power usage effectiveness (PUE) of less than 1.5. There is also an interesting dynamic to contend with: artificially engineered and publicized PUEs of 1 to 1.1!
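For readers who want to see how Zaimi's two density figures relate, here is a minimal arithmetic sketch in Python; the per-rack footprint is an assumed value (rack plus its share of aisle space), not a figure from the interview.

```python
# Relating per-rack load to floor-area power density.
# The 32 sq ft/rack footprint (rack plus aisle share) is an assumption
# made for illustration; it is not a figure from the interview.
rack_load_kw = 12.0      # kW per server rack, per the interview
footprint_sqft = 32.0    # assumed floor area attributable to one rack

density = rack_load_kw * 1000 / footprint_sqft
print(f"{density:.0f} W/sq ft")  # -> 375 W/sq ft
```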

Paul Bearn: Data centers require that an extraordinary amount of electrical power be delivered into a computer room area, and they need that power delivered without interruption. These criteria lead directly to several key engineering problems: how to remove the heat generated by the high power densities, how to ensure that power and cooling will be delivered during maintenance or in the event of an equipment failure, and how to do all this without wasting energy, and in a cost-effective manner. Challenges to solving these problems include efficient utilization of space within the facility to house multiple redundant substations, generators, and cooling systems; access to ventilation air for air-side economizers for the computer systems (free cooling), and for cooling of the generators; and storage of the fuel and water required to support operations in the event of a prolonged utility outage. The devil is in the details with data centers as in no other facility.

Bill Kosik: Electrical power density, reliability, and cost. On a per-unit-area basis, a data center will have 20 times the power of an equally sized commercial office building and 40 times as much annual energy consumption. The design of a data center requires expertise to fit all of the systems and equipment in the building, and cool the very high-density computer equipment in a small space. Data centers have a much higher level of reliability compared to an office building. This requires more equipment to meet the redundancy requirements. So in addition to the added level of sophistication in designing and operating a data center, the redundant equipment takes additional space. Finally, construction cost is commensurate with data center size and reliability. The best costing metric is a blend between building size, reliability, and total IT system power.
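A quick sanity check on Kosik's ratios: since energy is power multiplied by time, 40 times the annual energy on only 20 times the power implies the data center also logs roughly twice the office's effective operating hours. A minimal sketch:

```python
# Energy = power x time, so Kosik's two ratios imply an operating-hours ratio.
power_ratio = 20    # data center vs. office power, per unit area
energy_ratio = 40   # data center vs. office annual energy, per unit area

hours_ratio = energy_ratio / power_ratio
print(f"Implied operating-hours ratio: {hours_ratio:.0f}x (24/7 vs. part-time occupancy)")
```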

CSE: How have the needs and characteristics of data centers changed in recent years?

Kosik: Data centers in general are getting larger and more powerful; cooling and power system technology are also advancing to meet the needs of the advanced computing platforms.

Zaimi: Constant development in server technology has contributed to increased power densities. As a result, data centers face increased risk of thermal shutdown or thermal runaway. This increase in power use has many facility owners focusing on energy efficiency and sustainable, PUE-driven design. The focus of data center design has shifted from reliability to supporting high-density computer loads using less power.

Bearn: The power densities in today’s data centers are nearly an order of magnitude higher than they were 10 or 20 years ago. Compared to an office space, which might demand 7 to 12 W/sq ft, data center power demand might range from 6 to 100 W/sq ft for a “low” density data center, to 15 to 250 W/sq ft for “medium” density, to 500 W/sq ft or more for a high-density data center space. And weekend shutdowns are now a thing of the past; operators and their customers expect facilities to be available all day, every day.

CSE: Please describe a recent project you’ve worked on—share problems you’ve encountered, how you’ve solved them, and aspects of the project you’re especially proud of.

Zaimi: A client recently changed requirements during project design and implementation due to an interest in technology innovations. This required flexibility to meet the schedule while our team investigated and proved concepts that validated some innovations and debunked others. The program of requirements (POR) had been written a full year before work began on the basis of design (BOD).

Kirkland: The economic downturn has increased the availability of commercial real estate, and we see a number of clients looking at these properties for data center conversion. Upgrading the utility infrastructure and configuring the systems for maximum effectiveness, without compromising the user’s ability to incorporate new technology, can be quite challenging yet very rewarding. For a recent project in an existing structure in the southwestern United States, we were able to incorporate best-in-class design strategies including outside air economizers, indirect evaporative cooling of chilled water, fan array technology, server cabinet hot aisle containment, provisions to support liquid-cooled cabinets, overhead power busway distribution, adiabatic direct air evaporative cooling, and humidification.

Bearn: A key client request was for closed transition at both the medium- and low-voltage levels to reduce the exposure of the UPS system batteries to power outages. Closed transition initially appeared to result in an unacceptably high magnitude of available fault current during parallel operations, as well as unacceptably high arc-flash hazard. Various solutions were reviewed, including adding reactance to the system to reduce fault current, but in the end the problem was solved by ensuring that closed transitions could never occur simultaneously on the medium-voltage and low-voltage distribution systems. Equipment interrupting ratings were not exceeded in any mode of operation, and arc-flash levels were reduced to within our customer’s target levels. Another frequently encountered challenge is dispelling the myth that Emergency Power Off (EPO) is an automatic code requirement for all computer room spaces. In fact, NFPA 75 and NEC Article 645 make it clear that EPO is not a strict requirement; rather, it is a condition that must be satisfied only if the designer wishes to take advantage of the less stringent wiring methods permitted by NEC Article 645, such as running network cabling beneath the raised floor.

CSE: Several companies, such as IBM, recently released guidelines for owners to use in gauging data center effectiveness. What recommendations would you offer other engineers in maximizing the effectiveness of their data center projects?

Kirkland: The data center design community is united in its effort to improve energy efficiency and manage the environmental impact of data center operations. To that end, there is a vast amount of sharing of best practices. Any number of professional organizations, sustainable design councils, federal agencies, IT equipment manufacturers, and associations actively disseminate information to raise awareness and steward the advancement of data center design.

Bearn: The “tiering” system defined by TIA-942 is perhaps the most common benchmark used to compare data center resiliency; PUE is becoming the standard for comparing the efficiency of data center support infrastructure. Unfortunately, these standards are frequently misinterpreted and misapplied; for the metrics to be useful, the standards must be understood thoroughly and employed consistently. Groups such as the Data Center Metrics Coordination Task Force (representing 7×24 Exchange, ASHRAE, The Green Grid, Silicon Valley Leadership Group, the DOE Save Energy Now Program, the EPA ENERGY STAR Program, the U.S. Green Building Council (USGBC), and The Uptime Institute) are working to eliminate these inconsistencies through such publications as “Recommendations for Measuring and Reporting Overall Data Center Efficiency.” Engineers need to stay current with the latest research—or better yet, get involved with these organizations to help push the envelope even further.

Kosik: I think it is important to not reinvent the wheel on metrics, especially in the data center market. I would recommend using technical data from the Green Grid, the USGBC’s LEED for data centers, the U.S. federal government greenhouse gas protocol, and others as appropriate. Not every standard is perfect, and it is appropriate to augment the published material to better suit the client’s requirements (as long as it is duly noted and does not violate copyright laws!), but I don’t think it is advisable to use a proprietary guideline that might not represent the industry as a whole.

CSE: The U.S. Dept. of Energy has launched an initiative to help increase the energy efficiency of data centers. Why is this such a concern, and how have you dealt with it in your work?

Kosik: I think this is a natural course for the DOE to take through its Industrial Technologies Program (ITP). The DOE has been targeting commercial office buildings and heavy energy users such as industrial and manufacturing facilities for a number of years, and has built a successful program assisting end users in reducing their energy use. I suspect that since the data center marketplace is starting to mature, it was time to tackle the energy monsters called data centers.

Bearn: The aggregate national energy consumption of data centers doubled from 2000 to 2006, and is expected to double again by the end of this year (see the U.S. EPA ENERGY STAR Program’s “Report to Congress on Server and Data Center Energy Efficiency”). The cost both to our national economy and to individual data center owners is significant, as is the environmental impact of such high power usage and its associated carbon footprint.

Zaimi: Our company was ahead of the curve and has U.S. DOE-certified Data Center Energy Practitioners (DCEP) on staff. The program introduces a number of new metrics for energy and air management in data centers, which we have already implemented in a number of our clients’ facilities. All of our LEED engineers are DCEP certified and follow the DOE’s procedures, which provide an effective benchmark.

Kirkland: It is estimated that the information and communications technology sector accounts for 2% of global carbon emissions. The U.S. EPA estimates that servers and data centers consume up to 1.5% of total U.S. electricity. Data centers that use water-based cooling systems, generally considered the most efficient design, consume large quantities of water as part of their operation. To improve energy performance, lower operational cost, and become more eco-friendly, our designs must focus on purposefully exploring the design decisions that reduce waste and drive efficient operation.

Automation and controls

CSE: What factors do you need to take into account when designing building automation and controls for a data center?

Zaimi: A comprehensive monitoring system must integrate all of the control points for both mechanical and electrical systems. They operate at different speeds and must be integrated on the same platform for effective monitoring. This architecture should be developed so that it provides effective control and monitoring to data center operators. Additionally, monitoring within the data center must cover particulates and gaseous contaminants: the advent of economizers introduces possible contaminants into the data center environment, and with them the need to monitor air quality.

Kirkland: To manage data center energy use, the facility must be fully instrumented to meter the demands of all service utilities, with submetering for all major energy consumers. The system should measure and track the entire data center energy profile, including water use, cooling energy (Btu/h), power distribution units, power panels serving HVAC equipment, central plant systems, lighting, etc. The system should also monitor weather conditions at the site: dry bulb temperature, dew point, and wind speed. On the floor, the system will monitor server cabinet inlet temperature, room dew point, server cabinet hot aisle temperature, and building pressure relationships. The control system will monitor system and equipment alarm conditions and initiate operation of backup systems when component failures occur.
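As an illustration of what this instrumentation buys you, here is a minimal Python sketch that rolls submetered demand readings up into a facility-level PUE figure; the meter names and values are invented for the example.

```python
# Rolling hypothetical submetered demand readings up into a PUE figure.
# All meter names and kW values are invented for illustration.
it_meters_kw = {"pdu_a": 420.0, "pdu_b": 415.0}              # IT load at the PDUs
support_meters_kw = {"chiller_plant": 310.0, "crah_fans": 95.0,
                     "ups_losses": 38.0, "lighting": 12.0}   # everything else

it_kw = sum(it_meters_kw.values())
total_kw = it_kw + sum(support_meters_kw.values())
print(f"IT load {it_kw:.0f} kW, total {total_kw:.0f} kW, PUE = {total_kw / it_kw:.2f}")
```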

CSE: How does implementing automated building controls in an existing structure differ from designing one for a new one?

Kirkland: Automated building control system architecture is generally set up to be readily expandable. The designer must understand the control network protocol to ensure new devices and equipment are compatible.

Zaimi: Whenever work is being performed on an existing facility, special precautions must be in place to prevent unscheduled outages. Implementing new control systems is probably the most intrusive type of retrofit, and a project of this nature requires detailed planning. The controls retrofit must take place in logical stages, with carefully written methods of procedure, including back-out procedures for each step. New facilities are much easier: the controls can be tested and changed as required to work out all of the bugs before the facility comes online.

CSE: What are some common problems you encounter when working on such systems?

Kirkland: One issue is the ability of the system to be properly tuned to forecast trends and respond to them predictively. This is vitally important if we are going to be successful in maintaining close control of the building environment. The system should not undershoot or overshoot control setpoints.
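The tuning trade-off Kirkland describes can be seen in even the simplest feedback loop. Below is a toy discrete PI (proportional-integral) control loop in Python; the plant model and gain values are invented purely to illustrate the undershoot/overshoot balance, not taken from any real building control system.

```python
# Toy discrete PI loop for a supply-air temperature setpoint. The plant
# model and gains are invented for illustration only.
setpoint = 75.0        # deg F target
temp = 80.0            # starting room temperature
kp, ki = 0.4, 0.05     # proportional and integral gains (the tuning knobs)
integral = 0.0

for step in range(12):
    error = temp - setpoint
    integral += error
    cooling = kp * error + ki * integral  # control effort
    temp -= 0.5 * cooling                 # crude first-order plant response
    print(f"step {step:2d}: temp = {temp:.2f} F")

# Overly aggressive gains make temp ring below the setpoint (overshoot);
# overly timid gains make it creep toward the setpoint (undershoot).
```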

Codes and standards

CSE: How have changing HVAC and/or electrical codes and standards affected your work on data centers?

Zaimi: Although well-intended, codes can differ from or even contradict one another. For example, ASHRAE, the International Building Codes, and the USGBC each have requirements for energy efficiency that may not be practical for data centers or may contradict code requirements of the NEC. We have been waiting for several years for the new LEED data center guidelines while our clients have been requesting LEED-certified facilities. The NEC has recently indicated that it is ready to provide the long-awaited changes to the Emergency Power Off (EPO) requirements, which have plagued the operation of data centers for decades.

Kirkland: ASHRAE Technical Committee 9.9’s publication of “Thermal Guidelines for Data Processing Environments” has made a profound impact on the design of energy-efficient data centers. The publication built consensus between the professional building engineering community and the IT equipment suppliers on the thermal environment required for highly reliable data processing systems, and established design criteria that were otherwise difficult to gain agreement on. It has opened the door to unprecedented advancement and groundbreaking achievements in energy-efficient data center design.

CSE: Which codes and standards prove to be most challenging in data center work?

Kirkland: In some building development standards it is difficult to classify and account for the uniqueness of the data center building type. Many of these standards base occupancy on the size of the facility, which drives design planning elements for internal exiting of space largely occupied by equipment. Many of my clients would opt for total flooding extinguishing systems if given the choice, but most jurisdictions require water-based fire sprinkler systems, forcing them to install both and driving up the cost of construction. Parking area requirements are grossly overstated for this building type, which in some cases hampers future building expansion in a market that’s growing at unprecedented rates.

Zaimi: The most challenging are LEED energy and atmosphere requirements, EPO requirements, and EPA emissions requirements as they pertain to generator plants. It is nearly impossible to obtain a 10% to 14% reduction in energy when more than half of a data center’s load is processing load, meaning nearly 100% of the savings must be taken out of the mechanical plant. The current NFPA EPO requirements are so out of date that they are impractical. There is very little sense in requiring EPO systems to shut down the IT equipment when there is no requirement to shut down the UPS plant if it is outside the IT equipment area. We are looking forward to seeing the revised EPO requirements. The EPA tier classification requirements for generator emissions are confusing, as the tier depends on the size of the generator and on whether it is to be used as an emergency or non-emergency generator.
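Zaimi's point is easy to quantify: if the IT load is fixed, every watt of savings has to come from the remaining share of the facility. The Python sketch below uses an assumed 55%/45% IT-to-support split for illustration.

```python
# If IT (processing) load is fixed, a whole-building reduction target must
# come entirely from the support systems. The 55% IT share is assumed.
it_share = 0.55
support_share = 1.0 - it_share

for target in (0.10, 0.14):
    required_cut = target / support_share
    print(f"{target:.0%} total reduction -> {required_cut:.0%} cut in support energy")
# -> roughly a 22% to 31% cut in the mechanical/electrical plant alone
```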

CSE: Can you name a recent challenge you encountered in this area, and how you worked to overcome it?

Kirkland: Many times EPO devices are installed in IT equipment spaces in accordance with NEC Articles 645 and 685. Most data center managers are understandably nervous about this provision, considering it a single point of failure in the system. Many of our newer data center designs have done away with raised underfloor systems for cable management, opting instead for overhead ladder rack systems populated with plenum-rated cable. Our air distribution systems use the data center room itself as the supply plenum and build cabinet enclosures to create separation for equipment heat rejection. The fire alarm system monitors the IT floor for any signs of a fire ignition source and initiates the fire suppression system; if the suppression system discharges, the fire alarm system shuts down all HVAC equipment and UPS power to the IT floor. Under these conditions the requirements of NEC Articles 645 and 685 are met with no requirement for EPO.

Zaimi: Use and class for buildings; zoning issues for commercial parks and industrial areas, which have conflicting requirements; and electrical capacity of the utilities within these areas are recent challenges.

CSE: What’s the most important factor to keep in mind when wrestling with codes/standards issues on a data center?

Kirkland: First and foremost, understand that codes were created to make projects safer or address issues that have been problematic in the past. We must all bear this in mind when we feel a code shouldn’t apply to our specific project. When wrestling with these issues, I first research the code provision to fully understand its intent and then determine what provisions or mitigating circumstances justify an appeal to the authority having jurisdiction.

Electrical/power

CSE: What’s the one factor most commonly overlooked in electrical systems at data centers?

Bearn: No matter how much automation is incorporated into a data center, it will be operated by people and must be designed with this in mind. There’s a frequent misconception that a reliable data center is a complex data center. However, such complexity can often lead to human error, and therefore loss of reliability, whereas a simple design can help ensure that such mistakes are avoided.

Kirkland: The factor most commonly overlooked at data centers is the specific IT load programming requirement, and its watts-per-rack density, from the day-one installation through the final phase of a phased buildout of the data floor. The local utility’s available capacity and growth plans, as well as the ability to provide an economical solution that offers a modular, scalable, phased growth strategy, are critical to any data center.

CSE: What types of products do you most commonly specify in data centers? Describe the UPS system, generators, etc.

Zaimi: As a part of design of any data center, we specify UPS systems, standby power generation plants and switchgear, as well as power distribution units, substations, etc. We traditionally design more medium-voltage power distribution systems than low-voltage systems.

Kirkland: The types of electrical products most commonly specified in data centers are uninterruptible power supplies, power distribution units, switchgear/switchboards, generators, and transfer switches. To allow for greater flexibility in distribution from the main switchboards to the computer/IT equipment, we specify overhead busway distribution systems. UPS systems are now available with 96% to 98% efficiencies, using technologies such as modular load sharing with automatic load sensing/shedding and 3-phase IGBT switching that reduce the losses of the double-conversion process. Additionally, grounding, lightning protection, and power metering are significant parts of the electrical installation.

CSE: How have sustainability requirements affected how you approach electrical systems?

Kirkland: Sustainability requirements for data centers have brought attention to the efficiencies of the electrical distribution system components (such as UPSs, transformers, and cable losses) as well as the voltage requirements of the IT rack equipment. Designing for higher voltage at the racks, such as 400 V, 415 V, or 480 V, eliminates the need for transformers, improving overall system efficiency. Also, many data center floor spaces will use zoned occupancy sensors to provide general lighting within aisle spaces, supplemented with local task lighting. The cascading effect of these strategies leads to lower cooling requirements and improved power usage effectiveness (PUE).
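A related effect, not spelled out in the interview: for the same power, higher distribution voltage means lower current, and resistive cable loss falls with the square of the current. A quick illustrative calculation (the load and power factor are assumed values):

```python
# Same load at two rack distribution voltages: current falls with voltage,
# and I^2*R cable loss falls with the square of the current.
# The 10 kW load and 0.95 power factor are assumed for illustration.
import math

power_w, pf = 10_000.0, 0.95
for volts in (208.0, 415.0):
    amps = power_w / (math.sqrt(3) * volts * pf)   # 3-phase line current
    print(f"{volts:.0f} V: {amps:.1f} A per phase")

loss_ratio = (208.0 / 415.0) ** 2
print(f"Cable loss at 415 V is {loss_ratio:.0%} of the loss at 208 V "
      f"for the same conductors")
```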

Zaimi: With requirements for modern data centers to have a PUE less than 1.6, we must use innovative strategies such as 240/415 V power distribution systems, medium-voltage power distribution systems, medium-voltage UPS systems, and DC power distribution systems.

Kosik: There is a big industry push to make the entire power delivery train, from the main incoming electrical service to the power distribution units, respond as linearly as possible to the actual IT load. For example, at low loads (<25% of the peak IT load), a typical electrical system may be operating at less than 80% efficiency, while at 100% load the same system will operate at 95% efficiency. We want to see that 95% at all loading levels. The manufacturers have responded to this and are putting equipment on the market that is significantly more efficient.
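A simple fixed-loss model shows where the part-load penalty Kosik describes comes from; the 6% fixed-loss figure below is an assumed parameter chosen to roughly reproduce his numbers, not a measured value.

```python
# Fixed losses (controls, magnetics, fans) are paid regardless of load,
# so light loading dilutes them across little useful output.
# The 6% fixed loss is an assumed parameter, not a measured value.
FIXED_LOSS = 0.06   # as a fraction of rated load

for load in (0.10, 0.25, 0.50, 1.00):
    efficiency = load / (load + FIXED_LOSS)
    print(f"{load:4.0%} of rated load -> {efficiency:.1%} efficient")
# -> ~81% at quarter load, ~94% at full load; a "linear" system would
#    hold roughly the full-load efficiency at every loading level.
```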

Bearn: In addition to a general desire to reduce their impact on the environment, data center operators also have a strong economic incentive to reduce power usage, and a great deal of attention is focused in this area. A few key areas where this is most apparent include UPS systems, use of 415/240-V distribution, and power monitoring systems.

CSE: What common mistakes do you see in data center systems designed by other engineers?

Zaimi: We see many, including single points of failure, electrical infrastructures not well matched to the client’s mission, incorrect use of low-voltage 480 V in platforms with loads in excess of 5 MVA when closed transition is a requirement, and incorrect relaying and protective device settings.

Bearn: We quite often come across data centers that are stated to meet the Tier 3 or 4 requirements of TIA-942 or The Uptime Institute, but on closer examination do not fully conform. The quantities of major MEP system equipment modules might comply, but are all the distribution and support systems also compliant? If there’s a refrigerant leak into the chilled water system, can cooling be maintained? Are there redundant control networks, with redundant power sources? Are generators rated for unlimited runtime as required by The Uptime Institute, and has each generator been provided with dual-redundant starter motors and battery systems as mandated by TIA-942? Has sufficient separation, compartmentalization, or protection been provided for redundant components to ensure that a single event, such as a sprinkler head discharge or localized fire, will not cause a complete outage?

Kirkland: Commonly overlooked items often are related to implementation of a modular, scalable, phased growth strategy that provides for higher watts per rack density as the data center floor is populated with equipment. The ability for concurrent maintenance and the ability to maintain uptime in a critical environment can be hampered by not carefully planning for the future IT programming needs.

CSE: Discuss power usage effectiveness (PUE) and how you use it to measure your engineering design in data centers. Are there other metrics you use?

Kirkland: PUE is the ratio of total facility power to IT load power. By paying critical attention to the efficiencies of UPSs, transformers, rack distribution voltage levels, and lighting, we can reduce losses in our distribution system, with cascading downstream effects that lower the mechanical cooling demand; all of this leads to lower PUE. Additional metrics include IT power usage effectiveness, the ratio of IT equipment power draw in watts to computing output in floating-point operations per second (FLOPS); cooling system efficiency, equipment power draw in kilowatts per ton of cooling produced; HVAC system effectiveness, the ratio of power supplied to IT equipment to power supplied to HVAC equipment; and airflow efficiency, the ratio of fan power in watts to airflow produced in cubic feet per minute.
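To make those definitions concrete, here is a short Python sketch computing three of Kirkland's supplemental metrics; every input value is invented for illustration.

```python
# Three of the supplemental metrics, from hypothetical readings.
it_power_kw = 800.0        # total IT equipment draw
hvac_power_kw = 260.0      # total HVAC draw
chiller_power_kw = 210.0   # cooling plant draw
cooling_tons = 340.0       # cooling produced
fan_power_w = 45_000.0     # total fan power
airflow_cfm = 180_000.0    # airflow delivered

print(f"cooling efficiency:  {chiller_power_kw / cooling_tons:.2f} kW/ton (lower is better)")
print(f"HVAC effectiveness:  {it_power_kw / hvac_power_kw:.2f} IT kW per HVAC kW (higher is better)")
print(f"airflow efficiency:  {fan_power_w / airflow_cfm:.2f} W/CFM (lower is better)")
```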

Kosik: PUE is a good metric in that it is very accessible and well understood. It is one of the primary metrics we use in conveying energy efficiency of a data center. The Green Grid is the primary keeper of the calculation methodology for PUE and a number of emerging metrics like WUE (water use effectiveness) and metrics on IT productivity. Water and carbon footprint are also in our stable of metrics.

Fire/life safety

CSE: What trends and technologies have effected changes in fire detection/suppression systems in data centers?

Kirkland: The very nature of data centers, with their high-powered electronic equipment, poses concerns about fires initiated by electrically energized equipment. Early-warning monitoring of the data center floor for products of combustion at the incipient stage, using photoelectric, ionization, and air-sampling detection technologies, is highly recommended. The options for total flooding extinguishing systems have increased: manufacturers of suppression agents have developed more choices with regard to chemical compositions that are environmentally friendly. Other options include de-ionized water mist systems with submicron droplets.

Zaimi: Limited-area gaseous suppression, widespread use of pre-action sprinkler systems, and very early smoke detection (VESDA) are among the trends and technologies that have effected changes in fire detection/suppression systems.

CSE: How have the costs and complexity of fire protection systems changed in recent years?

Kirkland: The costs and complexity have gone up, mostly due to the design and arrangement of the data center equipment cabinets to gain power utilization efficiencies that compact and compartmentalize the data center floor. This arrangement requires a higher level of zoning of the fire detection and suppression systems.

CSE: What are some important factors to consider when designing a fire and life safety system in a data center within an occupied building? What things often get overlooked?

Kirkland: Careful consideration of the fire suppression system to be used is important. Some suppression agents are safe for human occupancy when discharged while others are not, so understanding how the space will be occupied is one consideration. Another important design consideration is space for suppression agent tanks: some agents are stored as a gas, others as a liquid, which affects the number and size of tanks required. Things that often get overlooked include proper allocation of space for the agent storage tanks; accurately calculating the amount of agent required, taking room leakage rates into consideration; proper placement of, and pipe lengths to, discharge heads; development of operational procedures for testing the system; and the need for a backup supply of suppression agent to recharge the system after a discharge.

Sustainable buildings/energy efficiency

CSE: How often is sustainability a concern with data center clients?

Kosik: Five years ago sustainability was just starting to be embraced by a handful of forward-thinking clients. I even had a client tell me (in so many words) that energy efficiency was absolutely the last criteria that our design team should consider. Now I see the same client promoting his more recent facilities as energy efficient and green.

Bearn: Sustainability is a consideration on every construction project, and data centers are no exception. Because data centers operate on a 24-hour basis, the savings from any energy-saving measure accrue much faster, with payback achieved in one-third the time compared to an 8-hour operation. The high energy use of a typical data center means that real operating-dollar savings are available.
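Bearn's one-third figure is just the ratio of operating hours. A minimal payback sketch, with the measure's cost and hourly savings invented for illustration:

```python
# Annual savings scale with operating hours, so payback time scales
# inversely. The cost and hourly-savings figures are invented.
measure_cost = 30_000.0    # installed cost of an efficiency measure ($)
savings_per_hour = 1.50    # energy cost avoided per operating hour ($)

for hours_per_day, label in ((8, "office, 8 h/day"), (24, "data center, 24 h/day")):
    annual_savings = savings_per_hour * hours_per_day * 365
    print(f"{label}: payback in {measure_cost / annual_savings:.1f} years")
# -> about 6.8 years vs. 2.3 years: exactly the 3:1 ratio Bearn cites
```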

Kirkland: Many large public companies want to carry over a sustainable image and practices into all of their facilities, including data centers. Federal, state, and local government agencies are also enforcing their own mandates of energy efficiency and sometimes LEED certification requirements into new data facilities.

CSE: With changing awareness of sustainability issues and an increased number of products, has working on green structures become easier, or more challenging?

Kirkland: With this awareness, engineers are given the chance to think outside the box and introduce strategies that may be unconventional. It’s an exciting time, and we look forward to the challenge. Facility operators and IT managers are still developing a comfort level with these new strategies.

Bearn: The greater interest in sustainability issues has made it easier for designers to incorporate green design elements. However, green design implies a substantial time commitment to review various technologies and system topologies, perform energy calculations and compare options, and generate the required documentation if a project is to be LEED certified.

Kosik: There are some things that are easier, like more guidelines and standards. Vendors have responded well in releasing new equipment like UPS systems, modular air handling units, and computer room air handling units, but it is like peeling back an onion: once you think you’ve solved a problem, another one is exposed. The analysis just keeps getting more granular.

CSE: What issues affect your ability to retrofit or retrocommission data centers?

Kirkland: In retrofitting data centers the main challenge is determining the current state of operations when the building control systems aren’t fully instrumented to measure system demands. Keeping the existing data center fully operational while implementing an expansion can be challenging. If the original system infrastructure was laid out in a modular approach, then it becomes a little easier to add new components without affecting operating systems.

Bearn: With careful planning most critical facilities can be successfully upgraded and retrofitted. It’s key to review and take advantage of existing distribution maintenance and bypass elements. In many existing facilities, select redundant elements can be temporarily repurposed as primary delivery paths without the need for interruption to ongoing operations. By updating a facility one subsystem at a time, the benefits of an up-to-date, energy-efficient, and sustainable facility can be provided with little or no impact to end users.

HVAC

CSE: What unique requirements do data center HVAC systems have that you wouldn’t encounter on other structures?

Zaimi: An expanded range of operating temperatures and humidity, driven by energy concerns, is a unique requirement. For example, ASHRAE criteria for office buildings designed for human occupants have different temperature/humidity requirements than the TC 9.9 guidelines for the data center environment. The trend now is to push the temperature/humidity envelope within the data center, but this often creates operating conditions that are out of control, either too dry or too wet. It is difficult to recover from either extreme and reset to the previous condition.

Kosik: The sheer size and sophistication certainly differentiates data centers from other building types. It is not uncommon to see a 5000-ton central chilled water plant in a large data center.

Kirkland: Rarely do you find buildings that require the level of uptime you find in data centers with parallel utility paths and standby equipment for all supporting mechanical and electrical systems. Taking it one step further, you commonly find data centers backing up data centers.

CSE: How do data center projects differ by region, due to climate differences and other factors?

Zaimi: Cost of power and use of free cooling are the major differences from one region to another.

Kirkland: The power needs of data centers are growing at an astronomical pace, so the cost of power is important. You also need a high-speed fiber telecom infrastructure with multiple pathways. And if you can build in a mild climate, you can significantly reduce the amount of power consumed by the HVAC system.

Kosik: Based on our analyses, it is feasible to see an annual PUE of 1.8 go down as low as 1.3 just by changing location, cooling system type, economizer strategy, and electrical system type.
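In energy terms, for a fixed IT load that swing in PUE is a large fraction of total facility consumption. A quick illustrative calculation (the 1 MW IT load is assumed):

```python
# Total facility demand = IT load x PUE, so Kosik's 1.8 -> 1.3 swing is
# a ~28% cut in total energy. The 1 MW IT load is an assumed figure.
it_load_kw = 1000.0
for pue in (1.8, 1.3):
    print(f"PUE {pue}: total facility demand = {it_load_kw * pue:.0f} kW")

print(f"Reduction: {(1.8 - 1.3) / 1.8:.0%} of total facility energy")
```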