The Behavioral Economics of Why Executives Underinvest in Cybersecurity

Determining the ROI for any cybersecurity investment, from staff training to AI-enabled authentication managers, can best be described as an enigma shrouded in mystery. The digital threat landscape changes constantly, and it’s very difficult to know the probability of any given attack succeeding — or how big the potential losses might be. Even the known costs, such as penalties for data breaches in highly regulated industries like health care, are only a small piece of the ROI calculation. In the absence of good data, decision makers must use something less than perfect to weigh the options: their judgment.

But insights from behavioral economics and psychology show that human judgment is often biased in predictably problematic ways. In the case of cybersecurity, some decision makers use the wrong mental models to help them determine how much investment is necessary and where to invest. For example, they may think about cyber defense as a fortification process — if you build strong firewalls, with well-manned turrets, you’ll be able to see the attacker from a mile away. Or they may assume that complying with a security framework like NIST or FISMA is sufficient security — just check all the boxes and you can keep pesky attackers at bay. They may also succumb to counterfactual thinking — We didn’t have a breach this year, so we don’t need to ramp up investment — when in reality they probably either got lucky this year or are unaware that a bad actor is lurking in their system, waiting to strike.

The problem with these mental models is that they treat cybersecurity as a finite problem that can be solved, rather than as the ongoing process that it is. No matter how fortified a firm may be, hackers, much like water, will find the cracks in the wall. That’s why cybersecurity efforts have to focus on risk management, not risk mitigation. But this pessimistic outlook makes for a very tough sell. How can security executives get around the misguided thinking that leads to underinvestment, and secure the resources they need?

Over the past year, my behavioral science research and design firm, ideas42, has been interviewing experts across the cybersecurity space and conducting extensive research to identify human behavioral challenges at the levels of engineers, end users, IT administrators, and executives. We’ve uncovered insights about why people put errors into code, fail to install software updates, and poorly manage access permissions. (We delve into these challenges in Deep Thought: A Cybersecurity Story, a research-based novella.) Our findings point to steps that security executives and other cybersecurity professionals can take to work around CEOs’ human biases and motivate decision makers to invest more in cyber infrastructure.

Appeal to the emotions of financial decision makers.

The way that information is conveyed to us has a huge effect on how we receive and act on it. For cybersecurity professionals, it’s intuitive to describe cyber risk in terms of the integrity and availability of data, or with quantifiable metrics like packet loss, but these concepts aren’t likely to resonate with decision makers who think about risk very differently. Instead, cybersecurity professionals should take into account people’s tendency to overweight information that portrays consequences vividly and tugs at their emotions. To leverage this affect bias, security professionals should explain cyber risk by using clear narratives that connect to risk areas that high-level decision makers are familiar with and already care deeply about. For example, your company’s risk areas may include customer data loss as well as the regulatory costs and PR fallout that can affect the company’s reputation. It’s not just about data corruption — it’s also about how the bad data will reduce operational efficiency and bring production lines to a standstill.

Replace your CEO’s mental model with new success metrics.

Everyone uses mental models to distill complexity into something manageable. Having the wrong mental model about what a cybersecurity program is supposed to do can be the difference between a thwarted attack and a significant breach. Some CEOs may think that security investments are for building an infrastructure, that creating a fortified castle is all that’s needed to keep a company safe. With this mental picture, the goals of a financial decision maker will always be oriented toward risk mitigation instead of risk management.

To get around this, CISOs should work with boards and financial decision makers to reframe metrics for success in terms of the number of vulnerabilities that are found and fixed. No cybersecurity system will ever be impenetrable, so working to find the cracks will shift leaders’ focus from building the right system to building the right process. Counterintuitively, a firm’s security team uncovering more vulnerabilities should be considered a positive sign. All systems have bugs, and all humans can be hacked, so treating vulnerabilities as shortcomings will create an unintended incentive for an internal security team to hide them. Recognize that the stronger the security processes and team capabilities are, the more vulnerabilities they’ll discover (and be able to fix).


Source: Harvard Business Review