Consequence-driven engineering needed for critical systems and processes
Learning from cyber-related incidents in the manufacturing automation sector is one of the best ways to teach users how to remain secure.
When the first cyber attack on Ukraine's power grid occurred in December 2015, leaving 225,000 residents without power for six hours, Idaho National Laboratory (INL) sent officials to the affected facility to learn what happened. They then created a hands-on cybersecurity training program based on what they learned from the attack.
"We need cyber-informed engineering for the absolutely most critical systems and processes and integrated risk management for executives," said Zach Tudor, associate lab director at INL, during his keynote address at the 12th Annual API Cybersecurity Conference in Houston, TX, Tuesday. "We want to use the right technology in the right spot."
His point was that manufacturers today need a new approach to cybersecurity, and one such approach is consequence-driven, cyber-informed engineering (CCE).
CCE gives organizations a structured set of steps:

- Examine their own environments for high-impact events and risks
- Identify the key devices and components whose implementation enables those risks
- Illuminate specific, plausible cyber attack paths an adversary could use to manipulate those devices
- Develop concrete mitigations, protections, and tripwires to address the high-consequence risk.

The ultimate goal of CCE is to help users take the steps necessary to thwart cyber attacks, even those from top-tier, highly resourced adversaries, that would otherwise result in a catastrophic physical effect.
That is all part of an increased cyber awakening, said Tudor, who is responsible for INL's national and homeland security mission and its nuclear nonproliferation, critical infrastructure protection, and defense systems missions.
"We know there is a lot to do, the adversaries are relentless, but we are too," he said.

One of the new ways organizations are looking at security is implementing and using artificial intelligence, but Tudor is not totally sold on that right now. "How much control do we give to artificial intelligence?" Tudor asked.
Yes, AI allows for a more autonomous system, but "We are looking at what the bounds of autonomy should be. Even the autonomous system needs bounds to see what it can do."

Building on the idea of autonomy, Tudor asked how risk changes when a system becomes autonomous. On top of that, how does it all work, and integrate, with legacy systems?
In terms of looking at new approaches to security, users today are facing more intense, sophisticated challenges that continue to grow, such as:
- Hacking is a commodity
- Evolution of the Internet of Things (IoT)
- Digital islands won’t exist
- Increased attack and response surfaces
One way to handle those challenges is the concept of resiliency: accepting that attackers may get in, but being able to react through a defense-in-depth profile.

The idea is that systems have to be more resilient against any and all kinds of attacks, but there is more to it than individual resiliency. There needs to be a combined effort.
Looking at the audience, Tudor pointed to various people and said, "You may be resilient and you may be resilient, but we are not resilient together."
Gregory Hale is the editor and founder of Industrial Safety and Security Source (ISSSource.com), a news and information Website covering safety and security issues in the manufacturing automation sector. This content originally appeared on ISSSource.com. ISSSource is a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, firstname.lastname@example.org.