Five technologies for a future-proof edge control platform and flexible stack deployment

Digital transformation insights

  • Fault tolerance, virtualization, containerization, remote management and cybersecurity enable reliable, flexible and scalable control systems.
  • A vendor-agnostic “one-box” edge solution simplifies operations, supports multiple applications and enables seamless remote management and upgrades.
  • Upgrading control platforms enhances operations, meets compliance and lays the groundwork for long-term innovation while minimizing disruptions.

Digital transformation has aptly been described as a journey, and it’s a journey that nearly every manufacturer has been on since the introduction of Industry 4.0 a decade ago. Each organization goes at its own pace, with different stops along the way toward the goal of new efficiency, insight and safety. While each path is different, the commonality is the need to change without disrupting the business.

Day to day, digital transformation manifests itself most practically at the ground level. For most organizations, there are limited windows to make a technology change, and any change must be weighed against potential risk, impact or disruption to operations.

Modernization of control platforms provides a key opportunity to leverage new technologies to improve current operations as well as lay a foundation for long-term growth and digital innovation. These projects are often catalyzed by a few common drivers where the business must adapt:

  • External triggers: Upgrading processes and data capture to comply with new regulations, reporting requirements and/or industry mandates, which are often environmental or security related.

  • Reliability: Organizations looking to escape the break-fix cycle, or that have faced too many disruptions from data loss and downtime (planned or unplanned). Each of these can introduce risk into the business and divert teams from higher-value work, particularly innovation and transformation.

  • Supportability: Underlying computer hardware is no longer supportable due to parts availability, application support or OS support. Applications may be several versions behind and have fallen out of vendors’ support windows.

  • More computing power: Modern software that delivers new capability and security requires new levels of compute and memory capacity. Additionally, older infrastructure, deployed years ago for traditional on-premises operations, is not equipped to support interaction with cloud applications.

Organizations can take advantage of these opportunities to build and deploy modern control architectures that are future-proof and provide the reliability and computing power to support the evolving requirements of the business. Deploying a resilient and flexible edge compute infrastructure provides the foundation for increased automation and digitalization through software deployment and data acquisition. Importantly, it also opens the door for other functions to join the conversation and align current and future needs across the organization.

Deploy one box for edge control

Digital transformation outcomes are predicated on the ability to translate the abundance of data generated into actionable insights. Automation and control generate most of this data at edge locations.

Automation and control platforms today are increasingly software driven, requiring greater processing power and a level of reliability far beyond the off-the-shelf computing platforms and traditional industrial PCs (IPCs) the industry has relied on for years as good enough.

Organizations now want it all. They’re looking for one “box” they can deploy in any environment — whether on a remote asset, the factory floor or in an operations center — with the power to run any software stack they choose. They want to use open, vendor-agnostic tools, re-task equipment, deploy new software without disruption and run the box remotely without downtime or human touch.

What’s in the box?

By converging operational technology (OT) and information technology (IT) tools and techniques, teams can deploy a single hardware box that supports all those needs. They can run production applications on one computing platform that is remotely accessible and built on open, non-proprietary standards. This enables organizations to deploy their own stack wherever they need it and run it with maximum reliability, with little if any touch. Using a core set of technologies — fault tolerance, virtualization, containerization, remote management and cybersecurity — organizations are building a future-proof control platform for current and future needs.

Fault tolerance

Some of the most fundamental aspects of digitalization are the availability of computing resources and the reliability of those resources to run without unplanned downtime. More than ever, OT and IT teams must jointly select purpose-built compute hardware capable of meeting current and future needs. As control architectures become more complex and run higher-level production software, it is essential for computing resources to run continuously with minimal on-site IT support.

There are now fault-tolerant servers available that combine redundant internal CPU, storage, I/O and power sources with monitoring software that predicts failures and proactively resolves issues to deliver 99.99999% availability. This hardware-based fault tolerance in a single computing platform reduces failover latency to seconds (versus the failover of a cluster) and requires a fraction of the complexity and cost of standing up and managing a cluster, to say nothing of the recovery time when a cluster does fail.
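
To put the seven-nines figure in more concrete terms, availability translates directly into expected downtime per year. The short Python sketch below makes the conversion; the lower availability tiers are included only as common points of comparison, not as vendor data.

```python
# Convert an availability percentage into expected downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Expected unplanned downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999, 99.99999):
    print(f"{pct}% availability -> {downtime_minutes_per_year(pct):.2f} minutes/year")
```

At seven nines, expected unplanned downtime works out to roughly three seconds per year, compared with almost nine hours per year at 99.9%.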

This level of computing reliability opens doors to new flexibility and the ability to comfortably consolidate the existing computing infrastructure and deploy more complex software stacks. A control engineer, for example, gains the option to move control into a fault-tolerant platform and can now decide whether to continue to use traditional programmable logic controllers (PLCs) or move the logic to a control platform — or even embrace use of software-based PLCs. By using such platforms, organizations are able to go years without unplanned downtime or the need to refresh computing infrastructure. This reliability matches the long lifespan of OT assets that teams are accustomed to, rather than the three-year tech refresh cycle of typical IT equipment.

The benefits of virtualization

Operations teams need to deploy a range of production applications. Building on the fault tolerance of a computing platform, teams can consolidate software workloads. Virtualization provides the means to deploy multiple applications as partitioned individual virtual machines (VMs).

Using virtualization, the core human-machine interface (HMI), supervisory control and data acquisition (SCADA), distributed control system (DCS) and historian applications continue to run on the platform, but as virtual machines. They may easily be supplemented with batch capability or complemented with engineering workstations. Additionally, running multiple VMs enables a team to run applications from multiple vendors simultaneously. A single compute platform with virtualization may run Rockwell Automation PlantPAx alongside AVEVA System Platform and Inductive Automation Ignition. Virtualization provides an efficient way for teams to deploy all the software they require.

Importantly, manufacturers can connect to cloud implementations, particularly for manufacturing execution systems (MES) integrated into enterprise resource planning (ERP) and order operations at the enterprise level. This integration is essential for multisite deployment and standardization. Depending on the compute platform sizing, organizations are running anywhere from three or four VMs up to a few dozen on a single platform. Running the virtualized software stack on a fault-tolerant computing platform gives every application the reliability of the underlying hardware and shields it from compute-related downtime. Virtualization is a key tool for OT teams to deploy layered software, re-task equipment and support deployment of future software as needed.
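
What this consolidation looks like day to day depends on the hypervisor in use, but the partitioning itself is straightforward to inspect. The sketch below is a minimal illustration, assuming a KVM host that exposes the standard libvirt API; the connection URI is the libvirt default for a local system connection.

```python
# Minimal sketch: enumerate the virtual machines partitioned onto a single
# edge compute platform, using the libvirt management API (KVM assumed).
import libvirt  # pip install libvirt-python

conn = libvirt.open("qemu:///system")  # default local hypervisor connection
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        status = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():<24} {status:<8} {vcpus} vCPU  {max_mem_kib // 1024} MiB")
finally:
    conn.close()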

Using containerization

Applications are deployed as containers: software packages that bundle all code and dependencies so the application runs consistently across computing environments. For operations teams managing heterogeneous environments, containers provide a consistent way to deploy. These containers are then managed through orchestration; common tools include Docker as the container engine and Kubernetes for orchestration.
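
As a hedged illustration of how little ceremony a containerized deployment requires, the sketch below uses the Docker SDK for Python to launch one application whose code and dependencies are packaged in a single image. The image name, port and volume path are placeholders invented for the example.

```python
# Minimal sketch: launch a containerized application whose code and
# dependencies are packaged in a single image (Docker SDK for Python).
import docker  # pip install docker

client = docker.from_env()  # connects to the local container engine

# "acme/edge-collector" is a hypothetical image used only for illustration.
container = client.containers.run(
    "acme/edge-collector:1.4.2",
    name="edge-collector",
    detach=True,                                # run in the background
    restart_policy={"Name": "unless-stopped"},  # restart with the engine
    ports={"8080/tcp": 8080},                   # expose the application's API
    volumes={"/opt/edge/data": {"bind": "/data", "mode": "rw"}},
)
print(container.name, container.status)
```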

Containers provide a useful complement to VMs for a few reasons. First, they provide the ability to deploy software or potentially test in a lighter-weight package that offers continuous updating and deployment. Second, use of containers and orchestration (discussed in the next section) is well understood by IT teams. They provide IT teams with segmented space on an OT compute platform, and IT applications can easily run alongside OT applications. From an OT standpoint, running containers within a segmented VM provides a landing spot for IT to deploy their applications without risk of disrupting OT applications running in parallel VMs.

Most notably, continuous deployment with containers meets both OT and IT requirements to evolve software and deploy new software in an ongoing fashion without disrupting operations.

Remote management and edge orchestration

Remote management has always been a key requirement in automation and control, particularly related to stranded assets or remote sites with limited IT staff. Many options exist today to build remote management into the control platform, and they are a key element in the control platforms of the future.

Teams can deploy thin client management and remote desktop tools to monitor asset performance remotely. Many organizations strive for single-pane-of-glass visibility of operations. Connectivity via OPC UA is another simple means of connecting the control platform into the operations center. The platform can also store and forward data when connectivity is intermittent due to network disruption or low-bandwidth WAN links.
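
A minimal sketch of that pattern is shown below, assuming a Python OPC UA client (the freely available opcua package): it polls one tag and appends each reading to a local buffer file, the simplest form of store-and-forward. The endpoint URL, node identifier and buffer path are placeholders.

```python
# Minimal sketch: poll one tag over OPC UA and append each reading to a local
# buffer file, the simplest form of store-and-forward. Endpoint, node id and
# file path are placeholders.
import json
import time
from opcua import Client  # pip install opcua

ENDPOINT = "opc.tcp://edge-controller.example:4840"
TAG_NODE = "ns=2;s=Line1.Mixer.Temperature"
BUFFER_FILE = "/var/spool/edge/readings.jsonl"

client = Client(ENDPOINT)
client.connect()
try:
    while True:
        value = client.get_node(TAG_NODE).get_value()
        reading = {"ts": time.time(), "tag": TAG_NODE, "value": value}
        # A fuller implementation would try the operations center first and
        # only fall back to the buffer; here every reading is buffered locally.
        with open(BUFFER_FILE, "a") as f:
            f.write(json.dumps(reading) + "\n")
        time.sleep(5)
finally:
    client.disconnect()
```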

Perhaps of greatest emerging interest is edge orchestration, a method of deploying software one-to-many from a central location. This approach to deploying and updating software is well established in the IT world and is now increasingly common in OT applications. It allows continuous integration and continuous deployment (CI/CD) for rapid updates. Using this approach, teams can remotely deploy new software and incorporate DevOps practices into the OT world.
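
The sketch below illustrates the one-to-many idea in its simplest form, pushing an updated container image to a short list of edge nodes from a central script. The hostnames and image tag are hypothetical, and a production deployment would typically rely on an orchestration platform rather than addressing each node's container engine directly.

```python
# Minimal sketch of one-to-many deployment: roll an updated container image
# out to a fleet of edge nodes from a central location. Hostnames and image
# tag are hypothetical; a production system would normally use an
# orchestration platform instead of addressing each engine directly.
import docker  # pip install docker
from docker.errors import NotFound

EDGE_NODES = ["edge-site-a:2376", "edge-site-b:2376", "edge-site-c:2376"]
IMAGE = "acme/edge-collector"
TAG = "1.5.0"

for node in EDGE_NODES:
    # Assumes each node exposes its container engine over a TLS-secured endpoint.
    client = docker.DockerClient(base_url=f"tcp://{node}", tls=True)
    client.images.pull(IMAGE, tag=TAG)        # stage the new version
    try:
        old = client.containers.get("edge-collector")
        old.stop()
        old.remove()
    except NotFound:
        pass                                  # first deployment on this node
    client.containers.run(
        f"{IMAGE}:{TAG}",
        name="edge-collector",
        detach=True,
        restart_policy={"Name": "unless-stopped"},
    )
    print(f"{node}: updated to {IMAGE}:{TAG}")
```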

Cybersecurity concerns

Cybersecurity concerns and considerations are perhaps the most consistent requirement and often the top issue for both OT and IT teams. Any modernization project or software deployment catalyzes the discussion of cybersecurity, a topic that often brings OT and IT teams together to find a joint solution.

Cybersecurity teams often reference the NIST Cybersecurity Framework (CSF), which outlines specific capabilities for an organization to consider. Meeting the framework encompasses a range of software applications and myriad vendor choices. This full range of cybersecurity solutions can be deployed using virtualization to run alongside other OT applications. Vendors such as Fortinet, Palo Alto, Xage, Claroty, Dragos and others are well recognized in industrial automation and provide a comprehensive range of cybersecurity capabilities. Each also provides applications available as VMs.

The fault-tolerant computing platform is crucial not just for enhancing cybersecurity applications, but also for delivering the essential reliability OT applications demand. This reliability enables IT applications to run on the same platform, allowing teams to streamline operations by managing a single platform instead of maintaining two separate systems.

The formula for future-proof control

Control is one of the building blocks of the smart factory or any other highly automated and digitalized system. Organizations need the ability to deploy compute infrastructure for advanced control that supports a digital transformation journey, but they face the conundrum of when to make a change and what to implement. By combining fault tolerance, virtualization, containerization, remote operation and orchestration, and cybersecurity, they have the foundation and flexibility to meet their current and future needs. Fault tolerance, or in more practical terms reliability, is a fundamental requirement for advanced operations. With this in place, companies have a core architecture to move teams forward, embrace new capabilities and connect on-premises operations to the cloud at the pace they desire.

Author: DoShik Wood, Senior Director, Product and Solution Marketing, Penguin Solutions

Author Bio: For more than 20 years, DoShik has focused on the integration of industrial software and software automation. His primary areas of interest are exploring use cases and getting a practical understanding of how organizations are successfully achieving digital transformation.

KEYWORDS

Digital transformation, digitalization, Industry 4.0

LEARNING OBJECTIVES

  • Understand the critical role of fault-tolerant computing platforms in ensuring continuous operation and minimizing downtime in manufacturing environments.
  • Explore how virtualization and containerization technologies optimize software deployment, allowing for flexibility and scalability in industrial control systems.
  • Evaluate the integration of edge orchestration and remote management techniques to enable seamless deployment and management of software updates across distributed manufacturing environments.

CONSIDER THIS

In the modern, highly connected industrial environment, how can manufacturers modernize control platforms to ensure reliability, scalability and future-proofing while supporting digital transformation?