CrashOverride: its aftermath and its implications.
Robert M. Lee (of Dragos) began his talk with some skepticism about our capacity to learn. “In the aftermath of each attack, people tend to use it to advocate for the positions they’ve already held. There’s little evidence of people learning and changing their minds.”
That said, however, he offered a moderately encouraging assessment of the current reality of ICS security. “As the desired scalability of an attack increases, it requires an exponential level of resource investment by the adversary to achieve increased levels of disruption; as complexity of the system increases, the resources required to achieve scalable disruptive attacks also increases while adversary confidence of success decreases.” The North American power grid is a very complex system, and we can take some comfort from that: “We’re not,” Lee said, “all going to die” (at least not in a massive cyber attack; he wasn’t promising conditional immortality).
CrashOverride and the Ukrainian grid attacks.
CrashOverride is a malware framework used in the cyber-attack Ukraine’s electric grid sustained in 2016. (It was not used in the 2015 attack against power distribution in western Ukraine.) The Ukrainian government has attributed that attack to Russian security services. Lee declined to offer specific attribution (he regards attribution to nation-states as an essentially idle activity, inherently uncertain and not something that readily lends itself to analysis by security companies), but he did say that Dragos saw connections with the Sandworm team, threat actors that targeted infrastructure companies in the United States and Europe in 2014, and Ukrainian electrical utilities in 2015.
CrashOverride was interesting because it was “very scalable to the grid as a whole.” In such attacks we see familiar stages. Stage 1 involves intrusion, reconnaissance, and so on, “all the typical stuff you’d do against an IT network.” Stage 2 is the more interesting attack on the industrial control systems themselves. “A lot of what we see in the media is Stage 1 stuff. It’s a big leap from spearphishing to a grid failure. So we need nuance.”
CrashOverride was a Stage 2 attack.
CrashOverride was interesting, Lee said, because no exploits or vulnerabilities were required to execute the attack. Rather, the attackers learned how the system ran, used completely legitimate techniques, and played those techniques back in their attack.
Our adversaries have not only learned our systems, but also how we respond to outages, incidents, perturbations. They can build this knowledge into their original attack. “If you were running indicators of CrashOverride today,” Lee said, “you’d find nothing. The threat is a human attacker using a methodology. What matters is not the malware, but the underlying tradecraft.”
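Lee’s point about static indicators can be made concrete with a small illustration (this sketch is mine, not from the talk; all payloads, operation names, and the behavioral sequence are hypothetical). Hash-based IOC matching finds nothing once the adversary retools, while a crude check on the sequence of legitimate operations — the methodology — still fires:

```python
import hashlib

# Hypothetical IOC feed: SHA-256 hashes of previously observed payloads.
old_payload = b"payload-version-1"
ioc_hashes = {hashlib.sha256(old_payload).hexdigest()}

# The adversary retools: same tradecraft, new bytes.
new_payload = b"payload-version-2"

def ioc_match(payload: bytes) -> bool:
    """Static indicator matching: only catches exactly what was seen before."""
    return hashlib.sha256(payload).hexdigest() in ioc_hashes

# Behavioral detection keys on the sequence of otherwise-legitimate
# operations the attacker must perform, regardless of which binary
# performs them. (Illustrative operation names, not real protocol calls.)
SUSPICIOUS_SEQUENCE = ("enumerate_devices", "open_breaker", "wipe_config")

def behavior_match(observed_ops: tuple) -> bool:
    """Flag any window of operations matching the known tradecraft sequence."""
    n = len(SUSPICIOUS_SEQUENCE)
    return any(observed_ops[i:i + n] == SUSPICIOUS_SEQUENCE
               for i in range(len(observed_ops) - n + 1))

ops = ("login", "enumerate_devices", "open_breaker", "wipe_config")
print(ioc_match(new_payload))   # False: the hash IOC finds nothing
print(behavior_match(ops))      # True: the methodology is still visible
```

The design point is the one Lee makes: the indicator expires with the sample, but the tradecraft — the sequence of legitimate actions the attacker must take — persists across retooling.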
What the incident reveals about us and about the opposition.
When we look dispassionately at CrashOverride and what it can tell us about ourselves, we see, Lee said, hindsight bias. We see the difficulty of defense. We see the copying and pasting of IT (and AI) best practices. And above all we see fear dominating facts.
Considering the adversary, we see indications of the adversary’s intent, and the ease with which they can conduct attacks, and the ways in which attacks can be effectively undetectable.
Like Joe Weiss, who delivered the address immediately before his, Lee strongly advocated mission assurance. He urged the audience to challenge the received wisdom people repeat after an attack, and to challenge exaggerated fear. “Understand the limitations of your adversary—they have bad managers and people problems, too, just like everyone else.” And the attacker has to be right a lot, too: every single step taken on offense is another chance to be caught, and so works against you.
What’s difficult is not defense or offense per se, but rather operations. “If you suck at defense, you’re going to suck at offense,” Lee argued. He also cautioned against thinking that more widespread access to classified intelligence (or information) will make significant inroads against the threat. “I don’t understand why people associate ‘TS/SCI/NOFORN’ with ‘accurate.’” (To a question, Lee said that we tend to over-focus on indicators. “Indicators are useful in forensics, not in predicting the next attack. The problem isn’t sharing. And if the Government’s classified data are so good, why did OPM happen?” The kind of sharing we need is knowledge sharing. Tell people how you solved a problem. Sharing an IP address isn’t particularly useful; sharing a lesson learned is.)
A realistic look at the problems that remain.
We do, Lee concluded, have real problems. Few people know how to protect the industrial control systems that run the world, and the threat landscape remains mostly unknown. “The number one infection vector every year is unknown. The number two vector is spearphishing. What all this means is, if we caught it, that’s because IT caught it going in.”
He also argued that intelligence is not an indicator feed. It’s an answer to a set of questions, and it should enable you to know yourself, know the adversary, and know what to do. Intelligence focuses on the last two things, but we tend, as a community, to focus on the first.
What should intelligence provide?
The goal of intelligence should be, Lee said, reduction of adversary dwell time (not, he emphasized, prevention of attacks). Detection and remediation are the important things. Intelligence should also help us focus on risk reduction and indicate whether our security investments match the threat landscape and the enterprise’s requirements.
Malware is not a threat, Lee said. It’s a capability. “We talk about activity groups, which are the real threats.”
And attribution too often amounts to an attractive nuisance. “Get off this attribution crap,” is how Lee put it, lapsing into the analyst’s demotic argot. “Most people aren’t very good at it, and what matters isn’t whether this is the Russian state, but whether this is someone acting in the interest of the Russian state.”
Figure out your crown jewels, and figure out the adversaries out there who’ve gone after that sort of thing. Do threat modeling, “and remember that intelligence answers a question.”
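Lee’s closing advice can be sketched as a simple prioritization exercise. The following is my own minimal illustration, not anything presented in the talk; the asset names, activity-group labels, and counts are all hypothetical:

```python
# Hypothetical crown-jewel analysis: for each asset class, list the
# activity groups that have historically targeted that sort of thing,
# then rank assets by how much adversary interest they draw.
crown_jewels = {
    "substation_rtus":    ["group_a", "group_b"],
    "historian_database": ["group_a"],
    "corporate_email":    ["group_a", "group_b", "group_c"],
}

# The ranking tells you which intelligence questions to ask first:
# who is going after your most-targeted assets, and what do they do?
priorities = sorted(crown_jewels,
                    key=lambda asset: len(crown_jewels[asset]),
                    reverse=True)
print(priorities)  # ['corporate_email', 'substation_rtus', 'historian_database']
```

The point of the exercise is Lee’s: start from what you must protect and who has gone after that sort of thing, so that intelligence is answering a question rather than feeding you indicators.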