The evolution of data center infrastructure in North America

Data centers have become increasingly important amid the Industrial Internet of Things revolution. What are the most crucial considerations for a data center's IT infrastructure?

By Anil Gosine, MG Strategy+ May 5, 2017

The IT infrastructure market is undergoing a rapid digital transformation. Canada and parts of the United States have become ideal locations for primary co-location and disaster recovery data centers hosting global enterprises. Data center modernization or new construction projects can deliver highly automated facilities in which applications and data are deployed and provisioned based on evolving workload demand. Power and cooling, the backbone of any data center, must be designed and constructed to be flexible enough to keep up with automated, virtualized, dynamic technologies while also accounting for capacity growth, efficiency demands, and budgets.

Is the data center secure enough?

With rising cyber security concerns, protecting servers and information assets in data centers is critical. Security, both physical and cyber, has to be assessed and continuously improved, and new systems may need to be put in place to strengthen the security posture of this sector. IT operations are a crucial aspect of most organizations around the world, and one of the main concerns is business continuity: organizations rely on their information systems to run operations and serve customers. A reliable infrastructure for IT operations is therefore necessary to minimize any chance of disruption. Data center upgrades or new construction must offer a secure environment that minimizes the chances of a security breach. The data center must therefore hold to high standards for assuring the integrity and functionality of its hosted computing environment, which should include redundancy of mechanical cooling and power systems and of network fiber-optic cables.

How can data centers be cooled down?

A number of data center hosts are selecting geographic areas that take advantage of the cold climate to mitigate the extensive costs of cooling their server infrastructure. As data centers pack in more computing power, managing the significant heat that the semiconductors generate consumes more and more of a data center's operating costs; data centers account for approximately 2% of total U.S. power consumption. Facilities in the northern United States and Canada can employ a technology called "free cooling" to cut this energy cost roughly in half by using outdoor air to supplement the work done by the energy-intensive components of the cooling system. Free cooling uses a specialized heat exchanger in which cool outdoor air chills the water or glycol that circulates to the server racks, reducing the load on the largest energy consumers: compressors and pumps. Ideally located facilities deploying free cooling can operate two-thirds of the time without running their chillers, which can easily save US $250,000 per year.
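As a rough illustration of how those savings add up, the sketch below simply multiplies chiller plant power, electricity price, and the fraction of hours spent on free cooling. The 500 kW plant size, $0.08/kWh rate, and two-thirds fraction are illustrative assumptions rather than figures from any particular facility, but they show how annual savings on the order of US $250,000 can arise.

```python
# Back-of-the-envelope estimate of annual "free cooling" savings.
# All inputs are illustrative assumptions, not measurements from the article.

HOURS_PER_YEAR = 8_760

def free_cooling_savings(chiller_kw: float,
                         price_per_kwh: float,
                         free_cooling_fraction: float) -> float:
    """Energy cost avoided while outdoor air does the chillers' work."""
    avoided_kwh = chiller_kw * HOURS_PER_YEAR * free_cooling_fraction
    return avoided_kwh * price_per_kwh

# Example: a 500 kW chiller plant, $0.08/kWh, chillers off two-thirds of the year.
savings = free_cooling_savings(chiller_kw=500, price_per_kwh=0.08,
                               free_cooling_fraction=2 / 3)
print(f"Estimated annual savings: ${savings:,.0f}")  # roughly $234,000
```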

Public, private or hybrid: What’s best for your data?

For companies that continue to own and operate their own data center, their servers run the Internet and intranet services needed by internal users within the organization, e.g., e-mail servers, proxy servers, and domain name system (DNS) servers. Network security elements should be deployed: firewalls, virtual private network (VPN) gateways, situational awareness platforms, intrusion detection systems, etc. An on-site monitoring system for the network and applications also should be deployed to provide insight into hardware health, multi-vendor device support, automated network device discovery, and quick deployment. In addition, off-site monitoring systems can be implemented to provide a holistic view of LAN and WAN performance. A company’s data center provides the capability to optimize the equipment housed within it, strengthen the network infrastructure, and work with partners on the integration of their data. Any internal upgrade project should reduce the power usage effectiveness (PUE) and create a culture focused on being green and efficient that provides the infrastructure for the delivery of digital business services. Companies that continue to operate their own data centers will face future challenges to ensure that every asset is utilized optimally.
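PUE is total facility power divided by the power that actually reaches the IT equipment, so a value near 1.0 means very little energy is spent on cooling and power-distribution overhead. The minimal sketch below computes it; the before-and-after figures are hypothetical, offered only to show how an upgrade project's effect on PUE might be tracked.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.
    1.0 is the theoretical ideal; lower values mean less overhead."""
    return total_facility_kw / it_equipment_kw

# Hypothetical before/after figures for an internal upgrade project.
print(pue(total_facility_kw=1_800, it_equipment_kw=1_000))  # 1.8 before the upgrade
print(pue(total_facility_kw=1_300, it_equipment_kw=1_000))  # 1.3 after the upgrade
```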

Business impact examples for upgrades

Data center infrastructure management

Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize monitoring, management, and intelligent capacity planning of a data center’s critical systems. By embracing DCIM, data center operators can bring consistency, predictability, and control to IT operational metrics while also improving service assurance. This is achieved through specialized software, hardware, and sensors that enable common, real-time monitoring and management of all interdependent systems across the IT and facility infrastructures. A DCIM product can help a data center manager identify and eliminate sources of risk and increase the availability of critical IT systems. It can be used to identify interdependencies between the facility and the IT infrastructure, to alert facility managers to gaps in system redundancy, and to provide dynamic, holistic benchmarks on power consumption and efficiency that measure the effectiveness of "green IT" initiatives. It is important to measure and understand data center efficiency metrics. Energy metrics, as well as server, storage, and staff utilization metrics, allow a more complete view of all enterprise data centers.
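As a minimal sketch of what common, real-time monitoring across IT and facility systems might look like, the example below pools a few power and temperature readings and derives one shared metric and one alert. The Reading structure, the source names, and the 27°C inlet-temperature limit are all hypothetical and stand in for whatever a real DCIM platform would collect.

```python
from dataclasses import dataclass

# Minimal sketch of the DCIM idea: consolidate facility and IT telemetry in one
# place and derive shared metrics and alerts. Every name and threshold here is
# hypothetical; no specific DCIM product or API is implied.

@dataclass
class Reading:
    source: str   # e.g. "UPS-A", "CRAC-2", "rack-14"
    kind: str     # "power_kw", "inlet_temp_c", ...
    value: float

def total_power_kw(readings: list[Reading]) -> float:
    """Sum power readings from all monitored facility and IT sources."""
    return sum(r.value for r in readings if r.kind == "power_kw")

def hot_racks(readings: list[Reading], limit_c: float = 27.0) -> list[str]:
    """Flag racks whose inlet temperature exceeds a configured limit."""
    return [r.source for r in readings
            if r.kind == "inlet_temp_c" and r.value > limit_c]

readings = [
    Reading("UPS-A", "power_kw", 620.0),
    Reading("CRAC-2", "power_kw", 180.0),
    Reading("rack-14", "inlet_temp_c", 29.5),
    Reading("rack-15", "inlet_temp_c", 24.0),
]
print("Total monitored power (kW):", total_power_kw(readings))
print("Racks over the temperature limit:", hot_racks(readings))
```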

Data center boom in North America

The combination of cheap power and cold weather puts Canada and the upper regions of the United States in a similar class with Sweden and Finland, which host huge data centers for Facebook and Google. The data managed by companies is estimated to grow by a multiple of 40 over the next decade, requiring huge growth in the data center footprint over the next five years. Data custody is another dimension that plays into the selection of data center location. Canada is seen as having an advantage over the United States because many companies are concerned about the U.S. Patriot Act, which allows the U.S. government to intercept and examine data stored in the U.S. without a search warrant.

Key items that will revolutionize data center architecture, and how people and firms manage it, include:

  • Modular data centers – scalability and capacity will require pre-fabricated deployments with flexible packages.
  • Software-defined networking – decouples network topology and traffic management from network hardware to enable seamless interaction between application workloads and network infrastructure.
  • Energy efficiency – DCIM software packages to automate management processes, consolidate servers, and improve transactional performance.
  • Interconnected framework – customer demands will require reliable and secure resources with faster access that are governed with Master Service Agreements for managing various service level agreements (SLAs).

With Asia growing faster than expected, providers have been working hard to keep up with the region’s growth. This has led to data centers that may not have the best energy and water efficiencies designed into them; this will be an area needing future upgrades and consumer education on the efficiency benefits. Data center operators will continue to be challenged to convince customers that their facility is a secure place to store data, as data is the lifeblood of their business.

Data center facilities are the heart of any company’s electronic business infrastructure, giving life to a significant percentage of business activities. Because these mission-critical facilities are unmanned and remotely managed from a network operations center, there are vulnerabilities that need to be addressed and mitigated. Access controls will be important to manage: the data center management team must know the vendors and the internal and external customers, and maintenance practices and procedures must be rigid, centralized, and authenticated.

Another key is to ensure that the IT support staff does not become understaffed and overworked, as it is expected to deliver uptime as high as 99.999%. That statistic has a powerful significance: virtual perfection is expected. Because a data center environment is complex, administrators must avoid technical shortcuts; these take a toll on support procedures and compromise overall system security. Server virtualization has been a game-changing technology for IT, providing efficiencies and capabilities that were not possible within a purely physical environment. While server virtualization has continued to mature and advance, companies must have a strategy for implementing it in order to realize its benefits and manage cybersecurity threats. Data center operators can now plan, deploy, and maintain a sound virtual infrastructure.
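To put the 99.999% figure in perspective, the short calculation below converts availability targets into allowed downtime per year; five nines leaves barely five minutes, which is why virtual perfection is the right description.

```python
# Allowed downtime per year implied by an availability target.
# 99.999% ("five nines") leaves roughly five minutes per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> {downtime_min:,.1f} minutes of downtime per year")
```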

About the author

Anil Gosine has over 18 years of construction management, operations, and engineering experience within the industrial sector, with a primary focus on electrical, instrumentation, and automation processes and systems in the U.S., Canada, and Central America. Anil is the global program manager for industrial projects with MG Strategy+ and leads the Strategic Efficiency Consortium Security Workgroup, with a specific focus on cybersecurity metrics, threats, vulnerabilities, and mitigation strategies for ICS, as well as security intelligence and analysis. MG Strategy+ is a CFE Media content partner.