Insights on multi-core processors
Industrial systems have traditionally been built from a collection of separate platforms, each running its own operating system on independent hardware. This was driven by a need for critical applications to respond quickly and predictably to real-time events. However, such task isolation in hardware comes at a significant expense in terms of space, excess heat generation, and greater management complexity for a production facility. (Link to three other Control Engineering articles in this four-part series, at bottom of this posting.)
Multi-core processors (MCPs) can consolidate various functional requirements of an automation system, including control, safety, visualization, and network security, onto a single board. For an example of this consolidation, consider a hard real-time process, such as a robot arm for assembly operations. Using an MCP, motion control of the robot arm can run on one core, while a second core running Microsoft Windows updates a human-machine interface (HMI) display and connects to industrial Ethernet.
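Dedicating a core to the real-time task typically means pinning that task's process to a specific CPU via the operating system's affinity interface. As a minimal sketch, the Linux-only `os.sched_setaffinity` call in Python illustrates the idea; the core numbering and the choice of core 0 here are assumptions for illustration, and a production system would use the vendor's real-time partitioning tools instead.

```python
import os

def pin_to_core(core_id):
    """Pin the calling process to a single CPU core (Linux only).

    Returns the resulting affinity set, or None if the platform does
    not expose sched_setaffinity (e.g., Windows or macOS).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, {core_id})  # 0 means "the current process"
    return os.sched_getaffinity(0)

# Illustrative only: dedicate core 0 to the control task, leaving the
# remaining cores free for HMI and networking workloads.
affinity = pin_to_core(0)
```

The same partitioning is usually done once at startup, before the control loop begins, so the scheduler never migrates the real-time task between cores.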
This consolidation scenario is becoming almost commonplace today: embedded systems that formerly required multiple hardware platforms are shrinking onto a single platform, and the resulting cost, space, and management benefits are driving adoption.
Other drivers for MCPs’ adoption into industrial systems are:
- Bounded determinism to ensure predictable behavior, which is a requirement of hard real-time automation systems. Faster CPU cycles can increase the amount of work done within a given cycle time; however, bounded determinism is still necessary to ensure that a stable and accurate system can be deployed regardless of the compute performance of that system. Bounded determinism thus ensures that real-time tasks run with guaranteed, repeatable real-time responses.
- Reduced interrupt latency. CPU cores dedicated to a specific operating system improve interrupt latency compared to traditional platforms, where a real-time task must share a CPU with nondeterministic tasks. By guaranteeing low interrupt latencies, the real-time performance of control loops can be improved.
- Decreased clock jitter. MCPs can deliver improved clock jitter performance. In a system where jitter (that is, spread of variations in response to an event) becomes a considerable percentage of cycle time, it adversely affects stability and quality of the control algorithm. This can be especially relevant for naturally unstable systems such as motion controllers.
- Expanded resources for scaling. By migrating digital signal processing workloads onto dedicated cores of a multi-core CPU, machine builders gain the headroom to scale more easily to more complex algorithms.
- Optimized resource contention. With a multi-core CPU, contention for shared resources such as pipelines and cache is reduced, real-time interrupt latencies shrink, and control-loop cycle times execute with very high precision, all fundamental factors for best-in-class industrial applications.
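The jitter discussed above, the spread of variations in cycle time around the requested period, can be measured directly. The following is a minimal, non-real-time sketch in Python (the `measure_jitter` function name, the 1 ms period, and the cycle count are assumptions for illustration); on a general-purpose OS it demonstrates the measurement technique, not the bounded figures a dedicated core would deliver.

```python
import time

def measure_jitter(period_s=0.001, cycles=200):
    """Run a busy-waiting periodic loop and report worst-case jitter:
    the maximum deviation of an observed cycle time from the requested
    period, in microseconds."""
    deadline = time.perf_counter()
    prev = deadline
    worst = 0.0
    for _ in range(cycles):
        deadline += period_s
        while time.perf_counter() < deadline:  # spin until the next cycle
            pass
        now = time.perf_counter()
        worst = max(worst, abs((now - prev) - period_s))
        prev = now
    return worst * 1e6  # microseconds

jitter_us = measure_jitter()
```

When jitter becomes a considerable percentage of the cycle time, as the bullet above notes, control quality degrades; pinning such a loop to a dedicated core is one way MCPs keep that percentage small.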
– Ian Gilvarry is strategic marketing manager for Industrial Automation, Intel Corp. [NASDAQ: INTC] – Edited by Frank J. Bartos, P.E., a Control Engineering contributing content specialist. Reach him at email@example.com.
– Multi-core processors: Software key to success – Multi-thread parallel computing software and appropriate user tools advance the promise of high performance and power efficiency from multi-core processors.
– Growing applications for multi-core processors – Multi-core processors have wide industrial application potential—from vision inspection systems to motion control—as developers increasingly implement the technology, initially in high-end systems.
– Computing Power: Multi-Core Processors Help Industrial Automation – Two or more independent execution cores on one microprocessor chip can match—or exceed—single-core chip performance by running at lower frequencies and using less power. Different software programming is required to obtain full benefits.