McKinsey Says Companies Are Underestimating Insider Threats to Cybersecurity
Cyber programs often miss a significant portion of the risk generated by employees, and current tools are blunt instruments. A new method can yield better results.
Insider threat via a company’s own employees (and contractors and vendors) is one of the largest unsolved issues in cybersecurity. It’s present in 50 percent of breaches reported in a recent study. Companies are certainly aware of the problem, but they rarely dedicate the resources or executive attention required to solve it. Most prevention programs fall short either by focusing exclusively on monitoring behavior or by failing to consider cultural and privacy norms.
Some leading companies are now testing a microsegmentation approach that can target potential problems more precisely. Others are adopting in-depth cultural change and predictive analytics. These new approaches can yield more accurate results than traditional monitoring and can also help companies navigate the tricky business of safeguarding assets while also respecting employees’ rights.
Organizations sometimes struggle to clearly define insider threat. In this article, we use the term to mean the cyberrisk posed to an organization by the behavior of its employees, rather than other kinds of insider threat, such as harassment, workplace violence, or misconduct. For these purposes, contractors and vendors are also considered employees; many of the largest cases in recent memory have involved trusted third parties.
Briefly, insider threats arise from two kinds of employees: those who are negligent and those with malicious intent (see sidebar, “Double trouble”). Negligent or co-opted insiders are easy for companies to understand: through poor training, middling morale, or pure carelessness, usually reliable workers can expose the company to external risks. However, organizations often misunderstand malicious insiders in two ways.
First, malicious insiders do not always seek to harm the organization. Often, they are motivated by self-interest. For example, an employee might use client information to commit fraud or identity theft; the motive is self-enrichment rather than harm to the employer. In other cases, employees may seek attention or have a “hero complex” that leads them to divulge confidential information. They might even think they are acting for the public good, but in reality they are acting for their own benefit. Understanding motive can help companies shape their mitigation strategy.
Second, malicious insiders rarely develop overnight or join the company intending to do it harm. In most recent examples of malicious insider events, normal employees became malicious insiders gradually, with months or years of warning signs leading up to a culminating insider event.
How big an issue is it, really?
In a world of competing cyber-priorities, where needs always seem to outpace budgets, it can be tempting to underinvest in combating insider threat. The risk is not well understood, and the solution feels less tangible than in other cyber areas. Many executives have asked us, “Is this actually an important issue? How much risk does it represent?”
We recently reviewed the VERIS Community Database, which contains about 7,800 publicly reported breaches from 2012 to 2017, to identify the prevalence of insider threat as a core element of cyberattacks. We found that 50 percent of the breaches we studied had a substantial insider component (Exhibit 1). What’s more, the behavior involved was not mostly malicious, the focus of so many companies’ mitigation efforts: negligence and co-opting accounted for 44 percent of insider-related breaches, making these issues all the more important.
In addition to being frequent, insider-threat breaches often create substantial damage. In recent years, we have seen high-value events in financial services, healthcare, retail, and telecom in which customer information was stolen by both negligent and malicious insiders. Some companies lost hundreds of millions of dollars. Pharmaceutical and medical-products companies, as well as governments, have seen a significant rise in intellectual-property theft by malicious insiders.
Why current solutions fall short
To combat the risks of malicious insiders, most companies rely on user-behavior monitoring software (Exhibit 2). These rules-based or machine-learning-based applications ingest troves of data about employee actions, especially their use of IT systems. Generally, they attempt to identify divergence from what is considered “normal” behavior for each employee. When the software spots an anomaly, a small team investigates.
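To make the mechanics concrete, the sketch below reduces this kind of per-user baselining to a single feature. It is a minimal sketch, not any vendor’s implementation; the feature (daily megabytes downloaded), the three-sigma rule, and all numbers are illustrative assumptions.

```python
# Minimal sketch of per-user behavioral baselining. The feature, the
# 3-sigma threshold, and the numbers are illustrative assumptions,
# not any specific product's detection logic.
import statistics
from dataclasses import dataclass

@dataclass
class UserBaseline:
    history: list[float]  # rolling record of one daily feature, e.g., MB downloaded

    def anomaly_score(self, today: float) -> float:
        """Standard deviations between today's value and this user's own mean."""
        mean = statistics.mean(self.history)
        stdev = statistics.pstdev(self.history) or 1.0  # guard against zero spread
        return abs(today - mean) / stdev

# A normally quiet user suddenly moves ~1 GB in a day: flag for investigation.
baseline = UserBaseline(history=[120.0, 95.0, 130.0, 110.0, 105.0])
if baseline.anomaly_score(980.0) > 3.0:
    print("Anomaly: route to investigation queue")
```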
While this method can be helpful, we find that it usually falls short, for four reasons:
- By the time negative behaviors are detected, the breach has often already occurred. The organization is already at a disadvantage, and it cannot deploy an active defense.
- Monitoring for “divergence from normal behavior” creates a huge number of false positives, wasting much of the investigation team’s time.
- Serial bad actors may not be caught; their malicious activity may be built into the baseline of “normal” activity (see the sketch after this list).
- Collecting massive amounts of employee data creates privacy concerns and significant potential for abuse.
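The second and third shortcomings can be seen directly in the earlier sketch. If a serial bad actor’s exfiltration is already part of the recorded history, further theft scores as statistically ordinary, while an innocent one-off spike from a quiet user trips the same rule. The continuation below, again with purely illustrative numbers, shows both failure modes.

```python
# Continuing the illustrative UserBaseline sketch above.

# Baseline contamination: a serial bad actor whose daily exfiltration is
# already baked into the "normal" history. Ongoing theft passes the check.
contaminated = UserBaseline(history=[900.0, 1010.0, 950.0, 980.0, 1020.0])
assert contaminated.anomaly_score(985.0) < 3.0  # theft reads as normal

# False positive: a legitimate one-off spike (say, a sanctioned data
# migration) from an otherwise quiet user lands in the investigation queue.
honest = UserBaseline(history=[40.0, 55.0, 38.0, 60.0, 47.0])
assert honest.anomaly_score(600.0) > 3.0  # benign event flagged anyway
```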
Beyond these issues, some organizations take this type of monitoring to an extreme, deploying military-grade software and conducting full-blown intelligence operations against their employees. Several recent news stories have highlighted the risks of overstepping the organization’s cultural and privacy norms. Best practices and necessary precautions in the defense industry may be seen as invasive at a bank or insurer.