Engineering and IT Insight: Future is virtual for manufacturing IT
Virtualization benefits include faster startup, more computing power, and easier upgrades.
I have seen the IT future, and the future is virtual. Future IT environments will be built on virtualized systems, including the many IT systems used in manufacturing: the databases, historians, HMIs, schedulers, and even controllers that we use to run our manufacturing operations. Today virtual systems take many forms, from Internet-accessed "cloud" systems to company-owned server farms running hypervisor servers. Virtualization benefits for manufacturing IT include faster applications, startup in hours instead of months, more computing power, and easier upgrades.
Virtual systems are an old idea resurrected with modern technology. In the early days of computing, IBM offered the Time Sharing Option (TSO) on its mainframe operating systems, including MVS. It essentially made one IBM mainframe computer look like multiple IBM mainframe computers, each linked to a separate terminal and each running one application. The modern equivalent takes this to the next level and makes one server act like multiple servers at the operating system layer. There are multiple methods for virtualization, but the one that will be most common in future server farm environments is based on the hypervisor concept. A hypervisor is a stripped-down operating system that provides a virtual environment for other operating systems. It masks the underlying hardware by presenting each virtual machine with a complete virtual hardware environment: a number of processor cores, memory, disk space, and network access. The hypervisor runs guest operating systems such as Microsoft Windows Server 2008, Microsoft Windows 7, Linux, and even MS-DOS. The most widely used hypervisor systems are VMware's vSphere, Citrix's XenServer (which uses a stripped-down Linux as its management domain), and Microsoft's Hyper-V (built on a stripped-down Windows Server 2008 core).
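As a concrete illustration of the virtual hardware a hypervisor hands to each guest, the sketch below models such a specification in Python. The class and field names are invented for illustration and do not correspond to any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachineSpec:
    """Hypothetical virtual hardware presented to one guest OS.

    Field names and units are illustrative only, not any
    hypervisor vendor's real interface.
    """
    name: str
    vcpus: int           # processor cores presented to the guest
    memory_mb: int       # RAM visible to the guest
    disk_gb: int         # virtual disk capacity
    networks: list[str]  # virtual network connections

# A Windows Server guest for a plant historian might be specified as:
historian_vm = VirtualMachineSpec(
    name="historian-01", vcpus=4, memory_mb=8192,
    disk_gb=200, networks=["plant-lan"])
```

The point of the sketch is that, to the guest, these numbers look like real hardware; the hypervisor maps them onto whatever physical resources the host actually has.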
Moore’s Law is the reason that virtualization will become the dominant IT implementation architecture. Moore’s Law is an empirical observation, made in 1965, that the number of components that can be placed on an integrated circuit doubles roughly every two years. Forty-five years later, most experts believe that we still have several more doubling cycles to go before we reach hard physical limits.
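The arithmetic behind that observation compounds quickly. The short Python sketch below assumes the commonly cited two-year doubling period; the function name and baseline are illustrative, not from any standard source.

```python
def transistor_count(base_count: int, years: float,
                     doubling_period: float = 2.0) -> int:
    """Project component count after `years` of Moore's Law doubling,
    assuming one doubling every `doubling_period` years."""
    return int(base_count * 2 ** (years / doubling_period))

# Forty-five years at one doubling every two years is 2**22.5,
# roughly a 5.9-million-fold increase over a 1965 baseline.
growth_factor = transistor_count(1, 45)
```

Even a few more doubling cycles multiply available transistors several-fold again, which is why per-server capacity keeps outrunning what any single application needs.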
While processor clock speeds have leveled off at about 3 GHz, due to limits on heat dissipation and speed-of-light signal delays, chip manufacturers have instead added more cores per chip. Four-, eight-, and 16-core processors are common, and future systems will have hundreds of cores. Memory density has also increased, following Moore’s Law, making massive amounts of RAM available to the multicore processors. Disk density and network speeds are also doubling at their own rates, providing terabytes of inexpensive disk storage and gigabit network access. These advances mean that a single virtualization server can replace tens of nonvirtualized servers, and this will cause a major change in how we build, buy, use, and manage IT services. With so much computing power available in a single server, the old IT concept of one server per main application no longer makes sense.
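One reason a single host can replace tens of servers is CPU overcommit: hypervisors routinely allocate more virtual CPUs than physical cores, because most guests sit idle most of the time. The sketch below is a rough capacity estimate; the 4:1 overcommit ratio is an assumption for illustration, not a vendor recommendation.

```python
def vm_capacity(physical_cores: int, vcpus_per_vm: int,
                overcommit: float = 4.0) -> int:
    """Rough count of guests one host can carry.

    `overcommit` (assumed 4:1 here) reflects that idle guests
    rarely use their full vCPU allocation at the same time.
    """
    return int(physical_cores * overcommit) // vcpus_per_vm

# A 16-core host running 2-vCPU guests at 4:1 overcommit
# carries about 32 virtual machines.
capacity = vm_capacity(16, 2)
```

In practice memory, disk I/O, and network bandwidth constrain capacity as well, so real sizing takes the minimum across all resources; this sketch shows only the CPU dimension.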
Another feature of the hypervisor environment is that it takes only minutes to create a new virtualized server. All of the major hypervisor systems allow you to take a snapshot of an existing system and to use that snapshot to create a new server by copying the snapshot file. The first installation of an operating system takes the normal amount of time, but later installations can be done in minutes. Most virtualized IT departments keep snapshots of multiple operating systems at different patch levels so that they can quickly build any system that an application needs.
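At its core, cloning from a snapshot is a file copy plus bookkeeping. The Python sketch below imitates that idea with plain files; real hypervisors add metadata, linked clones, and guest registration steps, and all file names here are invented for illustration.

```python
import shutil
import tempfile
from pathlib import Path

def clone_from_snapshot(snapshot: Path, vm_dir: Path, vm_name: str) -> Path:
    """Provision a new VM disk by copying a golden snapshot image file."""
    new_disk = vm_dir / f"{vm_name}.img"
    shutil.copyfile(snapshot, new_disk)
    return new_disk

# Demo with a throwaway directory standing in for hypervisor storage.
store = Path(tempfile.mkdtemp())
golden = store / "win2008-sp2-golden.img"
golden.write_bytes(b"fake-disk-image")  # stand-in for a real snapshot file
new_vm = clone_from_snapshot(golden, store, "historian-02")
```

Because the copy runs at disk speed rather than installer speed, this is why the second and later "installations" of an operating system take minutes instead of hours.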
The new IT hardware model will be a utility model, much like those for electrical power and telephone service. In today’s environment it may take weeks or months to obtain a new server: time to get approval, check for space in the server farm, determine the hardware the application requires, order the hardware, install the operating system, and add it to the corporate network. All this must be done before the application is even loaded on the server. Compare this to ordering a new phone line or an additional electrical outlet. The support organization manages the available resources, autonomously adds or deletes hardware to meet needs, and rapidly provides requested services. The modern IT environment will run in a similar manner. The cost of a virtual server (processor, memory, and disk) in a virtualized environment is only a few percent of the cost of an equivalent physical server. Virtual server costs are so low that they will probably be handled as normal expenses and not as capitalized purchases. In the future you will decide on the application needed, obtain the required server specifications from the vendor, order a virtual server from the IT department, and install the software in hours or days instead of months. IT departments will manage the virtual servers, expand the server farm to meet expected needs, and decommission virtual servers (freeing their resources) when the applications are no longer needed.
Future columns will address the economic drivers that will force most, if not all, manufacturing IT systems into the virtualized environment. These include the actual cost savings in hardware, HVAC, and space that users will see in this environment; how virtualization provides a cost-effective patching strategy; using virtualization in application upgrades; and how virtualization handles backups and standby problems.
- Dennis Brandl is president of BR+L Consulting in Cary, NC, www.brlconsulting.com. His firm focuses on manufacturing IT. Contact him at dbrandl(at)brlconsulting.com. Edited by Mark T. Hoske, Control Engineering, www.controleng.com.