CONTROL-SYSTEM SECURITY ATTACK MODELS


By Andrew Ginter, VP Industrial Security, Waterfall Security Solutions

Many cybersecurity practitioners assume that standard IT security practices are sufficient to secure industrial control systems, but this is not so. The difference between IT systems and control systems lies not in the kind of technology deployed on the networks, but in that technology’s focus. The focus for an industrial control system is, not surprisingly, control. Industrial control systems control large, complex and often dangerous physical processes. The attacks that control system owners and operators lose sleep over are cybersabotage attacks, not cyberespionage attacks. Any mis-operation of complex, dangerous physical equipment at a power plant or refinery by attackers on the other side of the planet, however briefly, is an unacceptable risk.

Classic IT security tolerates certain risks and expects a certain degree of compromise from time to time. IT firewalls are porous by design – after all, they permit email messages and Web pages into protected IT networks. Every one of those emails or Web pages can contain an attack, and attacks do reach through IT firewalls from time to time, in spite of the best efforts of firewall vendors or IT staff. This is why intrusion detection is so important on IT networks. Applying this approach to control-system networks is, frankly, dangerous.

Recent reports show that intrusion detection systems take an average of 1-2 months to detect compromised equipment but, again, any mis-operation of physical equipment, no matter how brief, is unacceptable. Control system security standards are evolving to emphasise strong intrusion prevention – unidirectional security gateways and removable media controls – over intrusion detection systems. Modern attack modelling makes the reasons for such measures very clear.

BLIND TO ATTACKS

Too many control-system security practitioners apply IT security best practices to control-system networks without a clear idea of how those practices leave networks at risk. Fundamentally, all software has bugs and all software can be hacked, even security software. All intrusion detection systems are software and can be defeated. All firewalls are software and can be defeated.

For example, imagine a control-system technician working from home. Imagine that the technician’s laptop has been compromised by an attacker who wants to sabotage operations at a particular site. The compromise arrived via a spear-phishing attack, and is now running in the background on the laptop. Anti-virus did not catch the attacker’s malware because the attacker wrote this little piece of code, and no other machine in the world has ever seen it. This means, of course, that there is no anti-virus signature for the malware.

When the technician uses the compromised laptop to open a VPN connection to the plant to do some control-system work, the malware wakes up. The technician starts a remote desktop type of tool and logs in using a two-factor login dongle hanging on his keychain. The malware waits several minutes – long enough to be confident that the login process has completed – and then makes the remote desktop window disappear. For example, this can be done by moving the window to an invisible virtual screen.

The malware now starts sending pictures of the remote desktop window to the attacker, and sending the attacker’s mouse movements and keystrokes into the remote session. In effect, the attacker has taken over the remote-desktop window from the technician. The laptop’s VPN may have been configured to prevent such “split tunneling”, but the malware has access to the raw network devices on the laptop. With a bit of coding, the malware can send anything the attacker wishes to the technician’s home network, no matter how the VPN is configured. Bringing up a deceptive window for the technician saying “your remote desktop session has become unresponsive – checking for a solution to this problem” is “icing on the cake”.

What about IT protections? Will the plant’s intrusion-detection system catch this attack? Well, let’s think this through. Did the intrusion-detection system complain when the technician logged in? Of course not. The technician logs in remotely all the time. Does the intrusion-detection system complain when the technician starts to use tools on the industrial network, reprogram devices or operate physical equipment? Of course not. This is what the technician does remotely all the time. Did the extra-long encryption key on the VPN save us? No. The attack came through the VPN. Encryption protects against man-in-the-middle attacks, not against compromised endpoints. Did two-factor authentication save us? No. The attacker waited until the technician had logged in using the two-factor authentication mechanism before hiding and taking control of the remote control window.

ATTACK TRAINING IS ESSENTIAL

Remote access attacks are only one example. Nothing is ever absolutely secure. The real question is attack difficulty. A reality that all risk teams must take into account is that attack difficulty decreases with every year that passes. Attack tools continue to become more sophisticated; what five years ago might have seemed an “advanced attack” is today within the reach of script kiddies.

All this is elementary to penetration testers and other white-hat experts who specialise in attacks on industrial control systems. This is especially true of attack specialists with experience in software development. Turning “advanced attacks” into checkbox options on an attack tool is, after all, simply a matter of programming.

Too many control-system experts evaluate risk based on “vulnerabilities, impacts and likelihood” without a clear understanding of modern attack tools, techniques and capabilities. Too many experts use IT security best practices, without having evaluated those practices against the protective needs of industrial control systems.

Carrying out an accurate risk assessment of an industrial control system requires a team with at least the following skill sets:

• Physical security: explore insider and physical attacks
• Physical process: evaluate physical consequences of mis-operation
• Safety systems: understand limitations of these systems
• Equipment-protection systems: explore how to damage equipment
• Control systems: understand how the physical process can be mis-operated
• Cyberattacks: work with the other experts to invent attack scenarios
• Information technology: evaluate potential attacks against deployed defenses
• Business costs: understand the cost of compromise

The focus of the team is to compile a representative list of types of high-consequence attacks that are possible on control systems, and evaluate how confident we should be that each class of attack would be defeated by existing security measures. The team must also evaluate how much effort or money today’s attacker must invest to bring about each kind of attack. The team still carries out all of the normal functions of a risk-assessment team, including threat-actor enumeration and evaluation, but expresses the results of those efforts in terms of attack capabilities.

COMMUNICATING RISK

A standard risk calculation evaluates possible attacks by their likelihood and consequences and fits attacks into a matrix such as the one below. This is the matrix senior management and boards of directors are accustomed to seeing for commonplace risks, such as earthquakes, tornadoes and flu pandemics. The problem with this risk matrix is that there is no reliable way to assign a likelihood to an attack. Any calculation of likelihood is, bluntly, fictional.

[Figure: standard likelihood/consequence risk matrix]
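The mechanics of the standard calculation can be sketched in a few lines of code. The 1-5 scales and the band thresholds below are illustrative assumptions chosen only to show the arithmetic, not values from any particular standard:

```python
# Minimal sketch of a standard likelihood x impact risk calculation.
# The 1-5 scales and band thresholds are illustrative assumptions.

def risk_band(likelihood: int, impact: int) -> str:
    """Classify a risk by the product of likelihood and impact (each 1-5)."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# An earthquake: rare but severe.
print(risk_band(likelihood=2, impact=5))   # -> medium
```

The calculation works for earthquakes and flu pandemics because historical frequencies exist. For a deliberate attack there is no defensible way to choose the “likelihood” input in the first place, which is precisely the problem the article identifies.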

A more appropriate way to describe risks due to deliberate attacks is a “design basis threat” model, or “attack model”. Design basis threat is used in physical attack modelling, and is a document describing the strongest kinds of physical attacks a site is required to repel with a set, high degree of confidence.

An attack model enumerates representative threats on one axis, and attack capabilities on the other. Each cell in the model is a representative example of a kind of attack. A line through the model represents the design basis threat: for a given level of confidence, the security risk team is confident that attacks below the line will be repelled, but attacks above the line may not be.
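This structure can be sketched in code. The threat classes, capability levels, and the position of the design basis threat line below are hypothetical examples invented for illustration, not a recommended model:

```python
# Sketch of an attack model: threat classes on one axis, attack
# capability levels on the other. The design basis threat (DBT)
# line records, per threat class, the highest capability level the
# site is confident it can repel. All values are illustrative.

CAPABILITIES = ["commodity malware", "targeted malware",
                "remote-control attack", "insider-assisted attack"]

# Highest capability level (index into CAPABILITIES) the current
# defensive posture is assumed to repel, per threat class.
DESIGN_BASIS_THREAT = {
    "hacktivist": 1,        # repel up to targeted malware
    "organised crime": 1,
    "nation state": 0,      # repel only commodity malware
}

def within_design_basis(threat: str, capability: str) -> bool:
    """True if this attack falls below the DBT line, i.e. the
    security team is confident it would be repelled."""
    return CAPABILITIES.index(capability) <= DESIGN_BASIS_THREAT[threat]

print(within_design_basis("hacktivist", "commodity malware"))        # True
print(within_design_basis("nation state", "remote-control attack"))  # False
```

The useful property of this representation is that it expresses the security posture entirely in terms of attack capabilities, with no likelihood estimate anywhere in the model.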

[Figure: Standard IT Security Applied to Control System Cybersabotage Risks]

A sample attack model for cybersabotage attacks on an industrial site, such as a power plant or chemical plant, is shown below. The red line in the model shows the design basis threat represented by standard IT security protections for this typical site – anti-virus systems, firewalls, VPNs, security update programs, intrusion detection and so on. Standard protections provide surprisingly little protection against the spectrum of attacks industrial sites face today.

USING ATTACK MODELS

We can evaluate the effects of proposed security enhancements using this attack model. A security team might look at the model above and conclude that it is unacceptably weak.

The team might propose, for example, to upgrade all of the plant firewalls to new “next-gen” firewalls with “deep packet inspection”. How would this change the model? Not very much, unfortunately. When the risk team re-evaluates the attacks against the new defensive posture, little has changed. This is a cybersabotage attack matrix and “next-gen” firewalls are better at repelling espionage attacks than sabotage attacks.

We can do the same for a proposal to install removable media control software throughout the control system, and train control-system personnel about the risks of removable media. Again, little changes. High-volume attacks have already been dealt with by deploying anti-virus throughout the control system, and the targeted attackers we are worried about prefer to operate through firewalls by remote control.

Consider instead a proposal to deploy unidirectional security gateways at the control-system perimeter. When the team re-evaluates the model, entire classes of remote attack through the perimeter are eliminated. With the gateways deployed, the team could then take a second look at removable media controls and training: it turns out these measures now block additional classes of attacks. With the preferred attack path through firewalls eliminated, the new measures further reduce the attack surface.
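The team’s iterative re-evaluation can be sketched as a loop over attack classes and candidate measures. Which measure blocks which attack class is, again, a purely illustrative assumption:

```python
# Sketch of re-evaluating an attack model as defences are added.
# The attack classes and the BLOCKS mapping are illustrative
# assumptions, not a real assessment.

ATTACK_CLASSES = {
    "autonomous USB malware",
    "remote-control attack through firewall",
    "compromised remote-access laptop",
}

# Attack classes each candidate measure blocks, given the measures
# already in place (anti-virus, firewalls, VPN, intrusion detection).
BLOCKS = {
    "next-gen firewall": set(),   # little change for sabotage attacks
    "unidirectional gateway": {"remote-control attack through firewall",
                               "compromised remote-access laptop"},
    "removable media controls": {"autonomous USB malware"},
}

def residual(deployed):
    """Attack classes still not confidently repelled."""
    blocked = set().union(*(BLOCKS[m] for m in deployed)) if deployed else set()
    return ATTACK_CLASSES - blocked

print(residual(["next-gen firewall"]))          # all three classes remain
print(residual(["unidirectional gateway",
                "removable media controls"]))   # set()
```

The sketch captures the ordering effect described above: removable media controls change little on their own, but close out the remaining attack classes once the gateways have eliminated the path through the firewalls.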

UNDERSTANDING RISK

Telling our senior managers that all risk is “likelihood multiplied by impact” does them a disservice. Senior management and board members have a fiduciary duty to manage risks for shareholders. These decision-makers deserve accurate information, not made-up likelihoods, as a basis for their decisions. Exposing an attack model to these people allows them to participate meaningfully in design basis threat decisions. A board member, for example, could lean forward and draw her own line, saying, “these are the threats we are willing to accept and not accept”, leaving the security team to go away and determine what it would cost to meet that new standard.

Cybersabotage-attack modelling serves a second purpose in that it makes clear to security teams the serious limitations of IT-style security postures on industrial control-system networks. Because of the very serious consequences of any mis-operation of control-system computers and physical equipment, control-system security must be based on a much stronger foundation of intrusion-prevention technology than is possible for IT networks. Firewalls are too porous for industrial networks, and intrusion detection systems are too slow.

Almost all industrial sites publish large quantities of information to corporate networks, and accept almost nothing back. Unidirectional-gateway technology can replicate database servers and file servers from industrial networks to corporate networks with no risk to the industrial network. Removable media controls can further lock down industrial networks in ways that make no sense with porous firewalls at the perimeter.
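The replication pattern can be sketched with a strictly send-only channel. In the sketch below a fire-and-forget UDP sender stands in for the gateway hardware; the hostname, port and record format are hypothetical, and a real unidirectional gateway enforces one-way flow in hardware rather than software:

```python
# Sketch of unidirectional replication: the industrial side only ever
# sends; it never reads from the socket, so nothing flows back in at
# the application layer. A real unidirectional gateway enforces this
# physically; this software sketch only illustrates the pattern.
# Host, port and payload format are illustrative assumptions.

import json
import socket
import time

CORPORATE_REPLICA = ("historian-replica.corp.example", 5140)  # hypothetical

def publish(readings: dict) -> None:
    """Push a snapshot of process data one way, with no acknowledgement."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = json.dumps({"ts": time.time(), "readings": readings}).encode()
    sock.sendto(payload, CORPORATE_REPLICA)   # fire and forget: no recv()
    sock.close()

# Example (not executed here):
# publish({"turbine_rpm": 3600, "drum_pressure_kpa": 10400})
```

Because the industrial network never accepts a reply, corporate users query the replica servers rather than the control system itself, and no message from the corporate side can reach the industrial network.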

The time has come to start evaluating control system attack risks accurately, and to continue the trend towards deploying much stronger intrusion prevention for industrial networks than is possible for IT networks. ■


ABOUT THE AUTHOR

Andrew Ginter is the Vice President of Industrial Security for Waterfall Security Solutions. Andrew spent 25 years leading the development of software products for communications networks, industrial control systems, control system to enterprise middleware, and industrial control system security. He now represents Waterfall to NERC CIP, FERC, ANSSI, NIST, NRC and other standards authorities and regulators. Andrew is currently the co-chair of the ISA SP99 Working Group 1, and an active contributor to the Industrial Internet Consortium’s Security Framework.


This article first appeared in Cyber Security Review, Autumn 2015 edition published by Delta Business Media.