CYBERSECURITY WON’T EVOLVE IF WE’RE ONLY CLEANING UP AFTER A BREACH


By Ryan Olson, Intelligence Director at Palo Alto Networks

In recent years we have seen dramatic changes in both the perpetrators of cyberattacks and the techniques they use. Cyber-attackers are carrying out sophisticated and multi-faceted attacks that are costing businesses millions in revenue, fines, and reputational damage, while more politically-motivated attackers are actively building cyber military capabilities and launching sophisticated campaigns, looking to take advantage of weak links in cyber defences. If cybersecurity is to evolve, we need to prevent attacks before they take hold, rather than simply cleaning up the mess after a breach has occurred.

By concentrating too hard on the small number of sophisticated attacks that make newspaper headlines, organisations are at risk of failing to address the threat from attackers who aren’t giant criminal networks, terror organisations or nation states, but who are still challenging adversaries.

You hear it quite a lot these days: a company concludes it has fallen victim to an advanced attacker. It hires a firm that specialises in investigating breaches to take a look. The investigators confirm there was no way the attack could have been stopped. What a terrible, unavoidable disaster. Except, quite often, that’s simply not the case.

Over the last six years, there has been a series of sophisticated attacks that have received significant media coverage and been analysed in considerable depth. From the Aurora attack on Google to the Stuxnet attack on the Natanz nuclear site in Iran – these are sophisticated, large-scale attacks using one or more zero-day vulnerabilities to crack particularly powerful, well protected resources. Without doubt, there has been an increase in the number of advanced attacks of this nature.

However, these are not the only attacks to consider. It is also the case that attackers rarely use their most effective techniques first, and the vast bulk of successful attacks are far from advanced. And it’s highly likely that sophisticated attacks that prove effective have already been documented for others to learn from.

My perspective is that several things need to happen to reverse this trend of organisations mistakenly assuming they can do little to protect themselves from attack.

Firstly, we need to understand that we can’t cede so much territory to attackers who are using basic approaches first and saving rare, valuable or expensive attacks for harder or more valuable targets.

Secondly, we need to educate employees to reduce the chances of their actions enabling an attack to succeed.

Finally, we have to build resilient infrastructure that prevents attacks from doing any damage in the first place.

IT’S ABOUT CULTURE

Threat detection is a must-have – but it’s not the first layer of any defence, and it shouldn’t be. It’s vital to change the mindset of both IT security and other staff from detection to prevention as the first step. The best IT staff in the world are powerless in the face of a workforce that couldn’t care less about security or, even worse, takes proactive steps to circumvent it.

Any security system has to be flexible enough to allow access, while still maintaining security. The users have to buy into the process, and that means explaining what security is in place and why it needs to be there. Department-specific policies, reflecting the functions and needs of different teams, are generally regarded as a good thing – even best practice. But when a business restructures or, for example, undertakes digital transformation, these policies have to be reconfigured, and the entire approach reconsidered. Organisations that embrace agility are also a challenge for security teams – information flows in multiple, often unexpected, directions.

The penalty for not addressing these requirements is twofold. Firstly, the IT department certainly doesn’t want to be seen as the part of the organisation preventing business transformation, close co-operation or other, similar corporate trends. Secondly – and most importantly – the early adopters who most often run up against these barriers are usually influential within the business, and more likely than most other groups to circumvent or subvert security protocol in order to just get stuff done.

Some users will inevitably try to sidestep individual security steps, and as policy becomes more draconian, savvy users will try to evade security altogether. There’s a clear need to get the users on side – while also keeping the relevant business leaders happy that security is addressing the risks to the business. If policy shifts too far toward the draconian, the users themselves may become the threat.

One of the most effective ways to approach this problem is to look at combining application and user IDs, which removes some of the awkwardness of securing access via specific computers or appliances. This allows organisations to be more flexible and to let staff with the right credentials make use of their company’s applications, data and networks.
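As a rough sketch of the idea – using entirely hypothetical group and application names, not any vendor’s actual policy format – access can be evaluated against the application being requested and the groups the user belongs to, rather than against the specific machine or port the request arrives from:

```python
# Hypothetical sketch: access decisions keyed on (application, user group)
# rather than on the device or port a request arrives from.

# Illustrative policy: which user groups may use which applications.
APP_POLICY = {
    "crm": {"sales", "support"},
    "erp": {"finance"},
    "code-repo": {"engineering"},
}

def is_allowed(application, user_groups):
    """Allow the request if the user belongs to a group entitled to the application."""
    permitted_groups = APP_POLICY.get(application, set())
    return bool(permitted_groups & user_groups)

# The same user, identified through the directory, can work from any machine;
# the decision follows the user and the application, not the device.
print(is_allowed("crm", {"sales"}))        # True
print(is_allowed("erp", {"engineering"}))  # False
```

The point is that the decision follows the user and the application; someone with the right credentials can work from any machine without the policy being rewritten around devices.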

To a certain extent, policy is often the result of a failure in technology. Get the tech right, and policies can be more flexible.

THE INFRASTRUCTURE TO PREVENT OCCURRENCE

It’s vital, given the two previous points, to have the infrastructure in place to stop an attack occurring. Log analysis, or monitoring traffic offline, is only useful in retrospect.

More important is the design and creation of networks that make it possible to understand and control exactly who has access to what.

Perhaps the single most interesting illustration of this is the convergence of Information Technology (IT) with Operational Technology (OT). Traditionally, OT, the engineering-centric software and equipment used by utility companies and manufacturers to control industrial equipment, was airgapped from frailer, faster-evolving and fundamentally less controllable IT. OT was – and is – built around slow, careful iteration and reliability – not fast development and raw performance. This worked fine, but it’s become less and less realistic to ignore the benefits of adding IT to the mix. Performance tracking, remote diagnostics and health and safety improvements – all of these make convergence an imperative. But it’s even more important to ensure that only those applications, people and data that are needed go anywhere near controllers whose operation can literally hold the power of life and death.

On the understanding that an attack will, at some point, occur, the next step is to ensure that the security strategy addresses every stage of the attack lifecycle. Once an attacker has broken through the perimeter, there are still ways and means of preventing further access – for example, by having rules in place to establish who should have database access, and from which locations. This reduces the chances of sensitive or valuable information being breached.

Creating network segmentation with a zero-trust model, so no network component implicitly trusts another, is a good first step. Combining it with ‘data diodes’ – technologies that only let data travel in one direction – application IDs rather than port-based policies, and evaluations of access based on user ID are all elements of an effective, integrated approach to security.
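A minimal sketch of how those elements might fit together, again with hypothetical zone names and a deliberately simplified rule format rather than any product’s configuration: traffic between segments is denied unless an explicit rule permits that application, in that direction, for that user group, and the one-way rule out of the controller segment stands in for a data diode.

```python
# Hypothetical zero-trust segmentation sketch: deny by default, allow only
# explicit (source zone, destination zone, application, user group) rules.
# The single rule out of "ot-controllers" mimics a data diode: telemetry may
# leave the controller segment, but no rule allows traffic back in.

RULES = [
    # (source zone,     destination zone, application,  allowed user groups)
    ("corporate-lan",   "db-segment",     "sql-client", {"dba"}),
    ("ot-controllers",  "historian",      "telemetry",  {"scada-service"}),
]

def evaluate(src_zone, dst_zone, application, user_groups):
    """Return 'allow' only if an explicit rule matches; otherwise deny."""
    for rule_src, rule_dst, rule_app, rule_groups in RULES:
        if (src_zone, dst_zone, application) == (rule_src, rule_dst, rule_app) \
                and user_groups & rule_groups:
            return "allow"
    return "deny"  # zero trust: nothing is implicitly trusted between segments

print(evaluate("corporate-lan", "db-segment", "sql-client", {"dba"}))           # allow
print(evaluate("corporate-lan", "ot-controllers", "ssh", {"engineering"}))      # deny
print(evaluate("historian", "ot-controllers", "telemetry", {"scada-service"}))  # deny (one-way)
```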

WELCOME THE ZERO DAY

At the beginning of this article, I argued that most attacks are less than sophisticated. And if you do have a sophisticated team of attackers, they’re not going to start with their best and brightest tools. They won’t use a zero-day exploit, because if they can get in with a three-year-old vulnerability, they save themselves a lot of bother. If an attacker has to use a zero day, then congratulations: you’re a tough target. On the other hand, if they use an old attack – you’re not trying hard enough.

If there is a positive to take from a zero-day attack, it’s that you’ve raised the bar so high that attackers must spend significant time and resources to target your network – a very expensive proposition that only a very small subset of adversaries can afford.

FROM RESPONSE TO INTELLIGENCE

Observing and acting first always beats reacting after the fact, and there is, thankfully, a move to reverse some of the shrugging acceptance of the last six years. Sharing information, and observing attacker tactics, is a valuable defensive approach given new power by collaborative tools and services. If you can see that attacker A has deployed tactics in the past that are now being flagged on your systems, you can do something about it before it becomes a problem.
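As a simple illustration – the indicator feed and log format below are hypothetical, not a specific sharing standard – indicators shared by peers and tied to an attacker’s past tactics can be matched against local connection logs, so that hits are blocked or investigated before they escalate:

```python
# Hypothetical sketch: match shared threat indicators against local logs.
# The indicator feed and log format are illustrative, not a real standard.

SHARED_INDICATORS = {
    "domains": {"bad-updates.example.com", "login-portal.example.net"},
    "ips": {"203.0.113.45", "198.51.100.7"},
}

connection_log = [
    {"src": "10.1.4.22", "dst_ip": "198.51.100.7",  "dst_domain": None},
    {"src": "10.1.7.8",  "dst_ip": "93.184.216.34", "dst_domain": "example.org"},
]

def find_matches(log_entries, indicators):
    """Return log entries whose destination matches a shared indicator."""
    hits = []
    for entry in log_entries:
        if entry["dst_ip"] in indicators["ips"] or \
                entry["dst_domain"] in indicators["domains"]:
            hits.append(entry)
    return hits

for hit in find_matches(connection_log, SHARED_INDICATORS):
    print("Investigate or block:", hit)
```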

The transition from threat response to threat intelligence – the sharing of open-source information and insight – is the final pillar of a robust defence. Accepting that we can do something about breaches, whether trivial or serious, and that proactivity, shared intelligence and collaboration, both within and beyond an organisation, are a viable approach, is the best hope we have of securing and protecting our organisations. ■


ABOUT THE AUTHOR

Ryan Olson is the director of Palo Alto Networks’ threat intelligence team, Unit 42, responsible for the collection, analysis and production of intelligence on adversaries targeting organisations around the world. Prior to joining Palo Alto Networks, Olson served as Senior Manager in Verisign’s iDefense Threat Intelligence service. His area of expertise is detecting and identifying actors and groups conducting cyber-crime and cyber-espionage operations. He is a contributing author to the book, “Cyber Fraud: Tactics, Techniques and Procedures,” and primary author of “Cyber Security Essentials.” He holds a Bachelor of Science degree in Management Information Systems from Iowa State University, and a Master of Science degree in Security Informatics from The Johns Hopkins University.


This article first appeared in Cyber Security Review, Autumn 2015 edition published by Delta Business Media.