Virtualization benefits and challenges
Virtualization has significant benefits in computing and in networking, but IT staff, OT staff, and system administrators must truly know their servers and network to be ready for challenges and potential cybersecurity breaches.
Virtualization has significant benefits in computing and in networking, which is why both have been adopted so readily. This is especially true in operational technology (OT) networking and control systems, where the rest of the system is intended to live for 30 years while the computer and network components last less than two years.
Virtualization also permits rapid changes and agile re-deployment, which are necessary in the Internet of Things (IoT) environment.
Virtualizing computers and servers, as well as network components, can add a significant measure of safety and robustness to the network.
Storing images of the virtual machines off-site, in the cloud, or at another location means that if the site suffers an accident, or the site network is destroyed by weather (as Hurricane Katrina destroyed networks at many petrochemical plants), the systems can be reconstructed from the stored disk images and the business brought back online quickly. In addition, virtual systems support a failover mode: a defective disk switches to a backup on the fly, and the failed component can be repaired while the system continues to run.
There is, however, a fundamental issue with lifecycle. This is especially true with OT systems such as building automation, factory automation, and process control system networks.
The control system, its input/output (I/O), and the final control elements (valves, etc.) are built to last the life of the project: easily 30 years. Unfortunately, through the action of the market and Moore’s Law, computer, server, and network components have a lifecycle of about 18 months. This disparity is infamous in project work, where the time from project initiation to startup of the network and control system may be as long as 48 months. So, when the end user receives control of the system, it is obsolete by as much as 30 months and may not be maintainable.
Virtualization solves this problem by hosting, in virtual machines, operating systems that would otherwise be obsolete and no longer maintained. As an extreme example, a laboratory information management system (LIMS) that runs on Microsoft Windows 95 might otherwise have to be completely re-written for a modern operating system. Running the application in a virtual machine lets the user keep both the application and the operating system it was written for, without worrying that either is obsolete and unmaintainable.
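In practice this often means running the old disk image under an emulator. The sketch below assembles a QEMU command line for a hypothetical Windows 95-era LIMS image; the image name and memory size are assumptions for illustration, not details from any specific deployment.

```python
# Sketch: wrapping a legacy OS image in a VM so an old application keeps running.
# The image name "lims_win95.img" is hypothetical; qemu-system-i386 is one common
# emulator for 32-bit x86 guests of this era.
legacy_vm_cmd = [
    "qemu-system-i386",       # 32-bit x86 guest, suitable for a Windows 95-era OS
    "-m", "64",               # 64 MB of guest RAM, generous for the period
    "-hda", "lims_win95.img", # disk image holding the legacy LIMS installation
]

# To actually launch the guest (requires QEMU installed and the image present):
# import subprocess
# subprocess.run(legacy_vm_cmd, check=True)
print(" ".join(legacy_vm_cmd))
```

Keeping the invocation as data like this also makes it easy to version-control alongside the rest of the system's configuration.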
More secure environment
Completely virtualizing the servers and networks provides a measure of security that wasn’t there before. Virtualization by itself won’t necessarily make the system secure, but it eliminates much of the opportunity for hardware to be compromised by, say, inserting a USB stick or a CD-ROM or DVD carrying malware. Virtualization also severely reduces the number of physical devices the user needs to control.
Network segmentation is also easier, and there is more direct control through policies and procedures. This matters a great deal in a dynamic network whose edges shift as user devices join and leave.
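One way to see how policies give that direct control is to express the segmentation itself as data and check each flow against it. The sketch below is a minimal illustration; the segment names, host names, and allowed-flow table are invented for the example.

```python
# Sketch: segmentation expressed as data plus a policy check.
# Hosts map to segments; traffic is permitted within a segment or
# along explicitly allowed segment pairs. All names are illustrative.
SEGMENTS = {
    "hmi-01": "supervisory",
    "plc-07": "control",
    "guest-laptop": "corporate",
}

# Only these (source, destination) segment pairs may exchange traffic.
ALLOWED_FLOWS = {
    ("supervisory", "control"),
    ("corporate", "supervisory"),
}

def flow_permitted(src_host: str, dst_host: str) -> bool:
    """Return True if policy allows traffic from src_host to dst_host."""
    src = SEGMENTS.get(src_host)
    dst = SEGMENTS.get(dst_host)
    if src is None or dst is None:
        return False  # unknown devices (e.g., transient edge devices) are denied
    return src == dst or (src, dst) in ALLOWED_FLOWS
```

Note how a device the inventory has never seen is denied by default, which is exactly the property that helps when the network's edges are in flux.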
While there are great benefits from virtualization, there can also be serious challenges. One of the challenges is that the IT staff, OT staff, or system administrators must truly know their servers and network. Especially in a virtualization overlay on an existing physical network, the administrator must know exactly what the system is doing, what it needs to do, and how it will grow for future expansion.
The user can’t just throw another managed switch on a line and call it good. The data center that is being virtualized needs to have adequate and appropriate electric power and backup generation in case of power outages. The user also needs to make sure the building has adequate heating and cooling resources and that it is secure from physical penetration.
From a design standpoint, the user needs to make sure the virtualized system delivers at least the availability of the system it replaces, and preferably more.
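One way to quantify "at least the availability of the system it replaces" is the standard series/parallel arithmetic from reliability engineering: n redundant components, each with availability A, yield 1 - (1 - A)^n, while components that must all work multiply their availabilities. A minimal sketch, with an illustrative 99% per-host figure:

```python
# Sketch: standard series/parallel availability arithmetic, used to check that
# a redundant virtualized design meets or beats the availability of the system
# it replaces. The 99% host availability is illustrative.
def parallel(a: float, n: int) -> float:
    """Availability of n redundant components, each with availability a."""
    return 1 - (1 - a) ** n

def series(*avail: float) -> float:
    """Availability of components that must all work (a chain)."""
    result = 1.0
    for a in avail:
        result *= a
    return result

single = 0.99               # one 99%-available host: ~3.65 days of downtime/year
pair = parallel(single, 2)  # a failover pair of those hosts: 99.99%
chain = series(pair, pair)  # two redundant stages that must both work
print(f"{pair:.4f} {chain:.6f}")
```

The point of the arithmetic is that redundancy at each stage compounds: two so-so hosts in a failover pair can beat one excellent host, which is why virtualized failover designs can out-perform the hardware they replace.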
As a part of that understanding, users should consider:
- Changing best practices. System administrators need to borrow from OT systems the concept of front end engineering design (FEED). The virtualized network must be specified at least as well as a physical set of hardware and software, and the FEED must be clear, complete, and understood by all stakeholders in the system.
- Changing standards. "The nice thing about standards is that there are so many of them," said legendary computer scientist Andrew S. Tanenbaum. Tanenbaum may be cynical, but he is not wrong. A change in a standard can break, or degrade, the way a system is virtualized, so the system administrator needs to track standards more closely than when running a conventional hardware/firmware system. Hardware is another issue virtual systems must deal with. The fact that a system is virtual is often taken to mean it can run on significantly less costly servers and other hardware. This is far from true: the hardware and firmware under a virtual system need to be much more robust than in a conventional system.
- Changing the architecture of the network. From the very beginning, a network information management tool needs to be implemented. In any virtual environment, it is even more critical than in a standard networking situation to be able to see down into the system: all the devices and nodes, virtual or not, that are on the network. The user needs the ability to scale from a small system to a huge one with varied interfaces. The user also needs to avoid virtual machine sprawl, keeping storage centralized rather than scattered across individual computers, while maintaining security. In a virtual network, combining a network information management tool with a good vulnerability scanner is critical to proper security implementation.
- New skills and organization for IT and admin staff. The user and the user’s staff need to have training and experience in handling virtualization and virtual networks. The system is not the same as a standard system. It needs to be operated, designed, and maintained differently, and those skill sets must be available before the system is virtualized.
- IoT, the cloud, and virtualization. Virtualization is ubiquitous, and the sensor-centric networks that make up the IoT are becoming ubiquitous as well. Most data goes to the cloud, where virtual servers and hosted desktops permit data as a service (DaaS) applications to be ubiquitous as well.
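The virtual machine sprawl concern above lends itself to a simple centralized check: keep one inventory and flag machines that have gone idle or have no registered owner. A minimal sketch, in which the field names, dates, and 90-day idle threshold are assumptions for illustration:

```python
# Sketch: flagging virtual machine sprawl from a central inventory.
# The records, field names, and the 90-day idle threshold are illustrative.
from datetime import date, timedelta

inventory = [
    {"name": "hist-srv",  "last_used": date(2024, 5, 1),  "owner": "ops"},
    {"name": "test-vm-3", "last_used": date(2023, 1, 15), "owner": None},
]

def sprawl_candidates(vms, today, idle_days=90):
    """Return names of VMs idle past the threshold or with no registered owner."""
    cutoff = today - timedelta(days=idle_days)
    return [vm["name"] for vm in vms
            if vm["last_used"] < cutoff or vm["owner"] is None]

stale = sprawl_candidates(inventory, today=date(2024, 6, 1))
print(stale)
```

Running a check like this on a schedule keeps the inventory honest, which in turn keeps storage centralized and gives the vulnerability scanner a complete picture of what is actually on the network.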
Virtualization technology is already in thousands of devices and systems, and with the huge growth of IoT and cloud computing, it will make engineers’ work in the demanding, intense manufacturing automation environment smoother, more efficient, and more profitable.
Frank Williams is the chief executive at Statseeker, a provider of network monitoring technology. This content originally appeared on ISSSource.com. ISSSource is a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, firstname.lastname@example.org.
Original content can be found at www.isssource.com.