Cyber security experiment reveals threats to industrial systems

A recent report shows how “honey pots” designed to look like municipal water utility networks attract many hackers. Security experts offer their analysis of the findings and suggest how they could influence your defensive strategies.

By Peter Welander October 4, 2013

At the Black Hat conference in July, Trend Micro presented a report about an experiment the company conducted where it deployed 12 honey pots around the world that were designed to look like the ICS (industrial control system) networks of municipal water utilities. Between March and June, these attracted 74 intentional attacks, including at least 10 where the attackers were able to take over the control system.

The report is available at the bottom of this article.

To unpack the significance of this experiment and draw out lessons you can use as you plan your defensive strategies, three cyber security experts offer their analysis.

Michael Assante is ICS and SCADA lead for the SANS Institute, and was vice president and chief security officer at NERC. He developed strategy for the control systems group at Idaho National Labs, and was vice president and chief security officer for American Electric Power.

Tim Conway is technical director, ICS and SCADA for the SANS Institute and was director of NERC compliance and operations technology at NIPSCO.

Matt Luallen is founder of Cybati, a cyber security training and consulting company, and is a frequent columnist and security contributor to Control Engineering.

There is one aspect of Trend Micro’s report that is not included in the discussion: it attributes at least some of the hacking activity to groups connected with the Chinese Army. The consensus of our participants is that while this specific attribution might well be correct, it goes beyond what could be proven conclusively from the available evidence. Moreover, the pattern of activity suggests that such groups were looking for targets of opportunity rather than following a strategy that selects specific types of systems to attack.

Let’s begin with an overview of the study and what it involves.

Assante: Trend Micro put together a number of honey pots that appeared to exist geographically in different parts of the world. They picked a municipal water system as the design for the honey pot, so they built something that looks like a water system, I assume it has names and labels like a water system, and it has some level of technical architecture that makes it look like control systems linked to a municipal water utility. The system was put out there, accessible from the Internet. The researcher was interested in the amount of activity such Web-facing control systems receive from the threat community. He’s done some research in this area, so this project had two goals: First, to validate his interest by seeing if those people are looking for such targets, if they would find the honey pot realistic, and then what they would do with it. Second, he wanted to gauge the technical capabilities of those who came looking.

The honey pots’ creators at Trend Micro probably had two things in mind: First, they chose a target like the water system of a small municipality to keep the project manageable. Second, they picked architectures that tend to be more susceptible in light of where they see the threat community. As targets, such systems are low-hanging fruit. Being a water system, it’s a very small control system, which sends a message to all of us: there’s no such thing as too small. Those who came to look at the target came with attack capabilities aligned with it. So if you believe you’re too small and nothing like this is ever going to happen to you, guess what: these systems were designed to look like a small system in the middle of nowhere, and attackers still came.

Did the attackers know how to approach industrial systems and communication protocols, or did unfamiliarity with them reduce their effectiveness? Many users take comfort in the idea of “security by obscurity,” believing that hackers don’t know how to deal with industrial networks.

Assante: Most of the attackers came simply because they had some general Internet exploit capabilities, but they weren’t fully prepared to deal with the realities of a control system. Control systems have common elements like an OS layer and the application layer, and in this case Web-based remote access. But a small subset of the people who came was prepared to dig into control systems and had enough capability to take over the systems they found. Around 10 of those who got in were able to establish full control over the system being simulated in the honey pot. Four of them did it by manipulating the simulated industrial protocol or hardware devices.

Those attackers came with the right tools, experience, and a plan of what they wanted to do to operate at the level of the industrial protocol and hardware—not just at the application level or OS level.

That leaves a question: Of the 10 who took control of the process, did they do anything that might have harmed it, or was this just a learning expedition? Could they change HMIs? Could they move setpoints? Did they put down a Trojan to keep a foothold and maintain access? There has to be some indication of the motivation of the threat actors that took over the process. What do we think their intent was? Is there anything we can learn from their motivation?

Since this was a simulated target, would a skilled hacker be able to realize that he wasn’t in a real control system?

Luallen: I have assessed systems for my courseware, deciding whether to virtualize them or use the real equipment. When I look at the virtualized path, I know it doesn’t have the sophistication needed for the types of attack surfaces I want to represent. If I flip that around and think about how an attacker expects the system to react, I don’t think you have to be very sophisticated to make that judgment as part of an evaluation. If you don’t want to get caught, you have to make sure something is real before you go after it.

So there are hackers and there are hackers. We tend to think of them in a more abstract sense rather than as individuals.

Assante: We use the hacker label in a very general sense, but different individuals and groups bring different skill sets. If the actors involved can actually see how they’re interacting with the target system, and they are highly experienced with the components of that system and how those components behave, then they will notice when they don’t see the things they expect to see, which will help them determine that they are looking at a facsimile and not the real thing.

There are ways to say, “What am I looking at?” You give it a command with the expectation that a particular component will respond in a particular way, and if it doesn’t, you know you aren’t dealing with a real-world situation. The good news is that I don’t think many threat actors are at that level of sophistication and experience with ICS components. Every system is made up of many different things in different layers. Different hackers are good at different parts.

Conway: The bad-news side of that discussion is that the very good people are very limited in number, and those very good people would have identified that this was a honey net. They would not have brought all their tools and capabilities to bear just for someone else to capture and analyze them. So if you’re talking about people who are not the best of the best, look at what they achieved; that’s the scary piece of information. This system was online and available for a short period of time, and you had a number of people getting in, doing HMI attacks using SQL injection, cross-site request forgery, stealing credentials, exfiltrating the VPN configuration files, and so on. A lot of bad things happened, and we can say this wasn’t the best of the best, because they would have known they were in a honey net. [Honey net and honey pot are similar in concept, but the former suggests a larger-scale system. Ed.]

Assante: Another bad thing that is harder to get our arms around is that all this activity was on a few honey nets. In defensive circles, we know incidents are occurring, and we have generalized reporting by ICS-CERT and that kind of thing, but we know that real-world reporting is much more limited. If this experiment is any indicator, we have to believe that attacks against real systems are occurring, or at least intrusions and reconnaissance, and those compromises are very difficult for the system owners to detect. Owners have a hard time acknowledging and understanding that their systems have been subjected to reconnaissance or a real live intrusion. Most end users don’t have the capability for detection, and for those that do, their freedom or desire to share that information is limited. Unfortunately, we as defenders have a very limited view of the state of play.

Scary stuff, certainly. So now what?

Conway: When we look at it and say, “What do we do about it?”, I think of things like disabling Internet access, looking at your trusted resources, imposing a USB media lockdown, whitelisting applications, and so on. But then I ask myself, “Did Trend Micro do anything to make these honey nets more visible as targets?” Look at how much time and effort they put in to make sure these systems were indexed by Google and discoverable through SHODAN. They went into all the environments and customized and tailored them so they had the right language settings for the different web browsers. So turn that around and take the approach that asset owners should do that kind of reconnaissance on themselves. Asset owners should ask, “How attractive a target are we? Can someone find our system through Google? Are we visible on SHODAN?” If you try it and find that you are easy to locate, how do you make yourself less visible to attackers? We say security by obscurity is a waste of time and irrelevant, and I think that’s true if you’re being specifically targeted, but if people are just looking for a target of opportunity, it definitely makes sense to keep yourself more hidden.
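As a rough illustration of the self-reconnaissance Conway describes, the sketch below checks whether anything in your own public address space is indexed by SHODAN. It is only a sketch: it assumes the official Shodan Python library and a valid API key, and the network range shown is a documentation placeholder you would replace with your utility’s own.

    # Sketch: check whether your own public address space shows up in SHODAN.
    # Assumes the "shodan" Python library (pip install shodan) and an API key;
    # the network range below is a documentation placeholder, not a real target.
    import shodan

    API_KEY = "YOUR_SHODAN_API_KEY"        # placeholder
    OWN_RANGE = "net:198.51.100.0/24"      # replace with your public range

    api = shodan.Shodan(API_KEY)
    try:
        results = api.search(OWN_RANGE)
        print(f"SHODAN knows about {results['total']} host(s) in this range")
        for match in results["matches"]:
            # Each match includes the IP, port, and the banner SHODAN collected.
            print(match["ip_str"], match["port"], match.get("product", ""))
    except shodan.APIError as exc:
        print("Query failed:", exc)

If a query like this turns up a banner naming your HMI or PLC vendor, that is exactly the kind of observable worth removing or masking.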

Luallen: That’s a key point. Think about the open-source intelligence people can gather from companies promoting themselves, connecting their systems, or making too much information available, whether through SHODAN, vendor documentation, or even presentations at cheer-me-on conferences.

Assante: Reducing the attractiveness of your system certainly works when people are applying a capability or tool to look for targets, for example, crafted searches for Internet-facing ICS components. If you reduce the observables that let them find you, that’s a good thing. What it doesn’t help with is somebody finding you for a different reason, meaning you’re a target because of the community you serve or some other motive for a directed attack.

How practical is it for individual companies to reduce their visibility? How do you do that?

Assante: If you’re web accessible, there are things you can’t do. You can’t hide that fact, but you can reduce the likelihood that somebody is going to correlate what’s there. As a hacker, I can see A, B, C, and D in your system, which leads me to believe that you are this kind of operation and I should use this tool on you.

The first thing you should be doing is looking at yourself and saying, “What am I telling people?” That’s the first thing to understand. Is there a reason I need to make that information available? Is there an operational benefit? If there isn’t, figure out how you can deny that information. Once you do that, stand back and say, “I did the best I could here. Now, what’s the next thing I can do to mitigate the risk?” 

It seems that one of the toughest things for asset owners to determine is if they have experienced intrusions. Most companies aren’t going to set up a honey pot or honey net to determine if hackers have broken in or are trying to break in. But aren’t there easier methods? What about canaries?

Assante: A canary is anything that can send up an observable alert if anything happens to it. It can be as simple as putting a computer on a subnet that no other computer should ever access. If something touches it, you know that activity is outside your normal behavior.
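As a minimal sketch of that trip-wire idea, assuming a spare machine on a segment nothing should ever touch, a few lines of standard-library Python are enough to log every connection attempt. The port number is an arbitrary choice for illustration, and in practice the alert would go to a SIEM or an operator rather than the console.

    # Sketch: a bare-bones canary host. Nothing legitimate should ever connect,
    # so every connection attempt is treated as an alert. Standard library only.
    import socket
    import datetime

    CANARY_PORT = 502  # pose as a Modbus device; any port works for a trip wire

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", CANARY_PORT))
        srv.listen()
        while True:
            conn, (addr, port) = srv.accept()
            # Replace the print with a syslog/SIEM notification in practice.
            stamp = datetime.datetime.now().isoformat()
            print(f"{stamp} ALERT: {addr}:{port} touched the canary")
            conn.close()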

Conway: If you have a network that uses all IPv4 or all Modbus for normal communication, you can put in a canary with listeners for all other protocols. If anybody talks to it using a different protocol, you know something’s configured wrong or something worse is happening. Another possibility: most medium to large utilities have test networks, and attackers don’t necessarily know they are in a test network. So many companies are already running a honey net for all practical purposes, where they can install some of these canary devices. If somebody is trolling around, he won’t know it’s a test network, and the test network doesn’t actually have connectivity to real devices. To an attacker, it looks exactly like a real system. You should be looking for activity in the test networks all the time. Use the honey pots you already have.
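To illustrate the protocol-listener variant Conway mentions, here is a hedged sketch using the third-party Scapy library. It assumes a segment where Modbus/TCP on port 502 is the only expected traffic; the interface name is a placeholder, and packet capture requires administrative privileges.

    # Sketch: passively watch a segment that should only carry Modbus/TCP
    # (TCP port 502) and flag anything else. Requires Scapy and capture rights.
    from scapy.all import sniff, IP, TCP

    MODBUS_PORT = 502
    IFACE = "eth0"  # adjust to the monitored interface

    def inspect(pkt):
        if IP not in pkt:
            return
        if TCP in pkt and MODBUS_PORT in (pkt[TCP].sport, pkt[TCP].dport):
            return  # expected Modbus/TCP traffic
        print(f"ALERT: unexpected traffic {pkt[IP].src} -> {pkt[IP].dst} "
              f"({pkt.summary()})")

    sniff(iface=IFACE, prn=inspect, store=False)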

Assante: You can find canaries that align with your skill set, set them up, and then watch and listen. You might not be able to do the forensic investigation afterward, but at least you have a trip wire that says you might have a bigger problem. You can go to your supplier and ask, “Is our system supposed to do that?” That’s a very important capability.

Luallen: When you look at what you’ve got and the resources you have available, there’s a strong incentive to avoid deploying additional equipment. This isn’t a skill that you can just throw onto all your existing personnel without additional investments of training and time. When you look at the range of tools you might put in place, it’s important to recognize what skills and tools are already there, so you don’t have to add more systems and then manage them. The canary model is great for spotting traffic that shouldn’t be there, but to know what shouldn’t be there, you need to know what should be there. That means knowing what you already have and how it communicates. Go down to the grass roots: What do I have, and how do those things talk to each other? And if you do put in a canary, what are you going to do when it detects something?

Assante: When you’re getting a new control system, or you have come into a new situation with an existing control system, you have to establish your baselines. How does this work? What is required for it to work? What is spurious or unnecessary? You should be able to get this from your supplier, particularly during the procurement phase. There are tools available, like the SOPHIA tool from Idaho National Labs, that are designed to passively baseline your communications at the port and channel level. You have to build a profile of the system, and then you can tell when there’s a deviation. Most deviations are misconfigurations or somebody making a change in settings, but you still need to do something about it. You have to run it down and find out why it changed. That requires an investment in time and resources.
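For a sense of what passive baselining at the port and channel level involves, here is a simplified sketch of the idea, not the SOPHIA tool itself. It uses Scapy to learn which conversations occur during a known-good window and then flags anything new; the 10-minute learning period is an arbitrary choice for illustration.

    # Sketch: passive port-and-channel baselining. Learn the set of
    # (src, dst, protocol, dst-port) conversations during a known-good window,
    # then alert on any conversation never seen before. Requires Scapy.
    from scapy.all import sniff, IP, TCP, UDP

    def channel(pkt):
        """Reduce a packet to a (src, dst, proto, dst-port) tuple, or None."""
        if IP not in pkt:
            return None
        if TCP in pkt:
            return (pkt[IP].src, pkt[IP].dst, "tcp", pkt[TCP].dport)
        if UDP in pkt:
            return (pkt[IP].src, pkt[IP].dst, "udp", pkt[UDP].dport)
        return None

    baseline = set()

    def learn(pkt):
        ch = channel(pkt)
        if ch:
            baseline.add(ch)

    # Phase 1: learn for 10 minutes on a known-good system.
    sniff(timeout=600, store=False, prn=learn)

    # Phase 2: watch indefinitely and flag deviations from the baseline.
    def detect(pkt):
        ch = channel(pkt)
        if ch and ch not in baseline:
            print("Deviation from baseline:", ch)

    sniff(prn=detect, store=False)

A real tool would persist the baseline, understand protocol details beyond port numbers, and help distinguish benign configuration changes, but the learn-then-compare structure is the core of the approach.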

Luallen: You also have to know what you don’t need. When somebody buys a new control system, during procurement they list all the functionality they need. By the time it gets on site, it has all sorts of other functionality. You have to ask your supplier what’s in there that you don’t need, because anything that’s in there, even if you don’t use it, has to be secured and maintained. There’s a major supplier of panel-based HMIs that now includes Adobe Reader in all its products. That is a horizontal application that has had vulnerabilities, and it ends up in a situation where the user may not know it’s there and there is little chance it will ever be patched. Unless you have a very good reason why you need it, take it off.

So, ultimately, was this test a good idea?

Assante: I applaud the project in that we have very few learning opportunities in the industrial control system space. We have to learn what’s going on and then use that to determine how we defend these systems. Honey pots are good because the people owning the system don’t mind sharing what happened. We have to share it in enough detail that we can extract some lessons learned.

Edited by Peter Welander, pwelander@cfemedia.com

ONLINE

For more information, visit:

https://cybati.org

www.inl.gov

www.sans.org

www.trendmicro.com

Key concepts:

  • Cyber security researchers can create test targets for hackers to measure numbers and skills of attackers.
  • Analysis of data collected helps provide defenders with a better sense of who the threat actors are and how they break into networks.
  • Results can provide practical suggestions for defense strategies. 

Link for Trend Micro digital edition report