Monday, October 17, 2011
The Myth of Cyberterror
UPI's article on cyberterrorism helpfully states the obvious: there's no such thing. This is in sharp contrast to the rhetoric in cybersecurity discussions, which highlights purported threats from terrorists to the power grid, the transportation system, and even the ability to play Space Invaders using the lights of skyscrapers. It's all quite entertaining, except for two problems: 1) perception frequently drives policy, and 2) all of these risks are chimerical. Yes, non-state actors are capable of defacing Web sites and even launching denial-of-service attacks, but that's a far cry from train bombings or shootings in hotels.
The response from some quarters is that, while terrorists do not currently have the capability to execute devastating cyberattacks, they will at some point, and so we should act now. I find this unsatisfying. Law rarely imposes large current costs, such as changing how the Internet's core protocols run, to address remote risks of uncertain (but low) incidence and uncertain magnitude. In 2009, nearly 31,000 people died in highway car crashes, but we don't require people to drive tanks. (And few people choose to do so, except for Hummer employees.)
Why, then, the continued focus on cyberterror? I think there are four reasons. First, terror is the policy issue of the moment: connecting an issue to it both focuses people's attention and draws funding. Second, we're in an age of rapid and constant technological change, which always produces some level of associated fear. Few of us understand how BGP works, or why its lack of built-in authentication creates risk, and we are afraid of the unknown. Third, terror attacks are like shark attacks: we are afraid of dying in gory or horrific fashion, rather than basing our worries on the actual incidence of harm (compare our fear of terrorists with our fear of bad drivers, then look at the underlying number of fatalities in each category). Lastly, cybersecurity is a battleground not merely for machines but for money. Federal agencies, defense contractors, and software companies all hold a stake in concentrating attention on cyber-risks and offering their wares as a means of remediating them.
So what should we do at this point? For cyberterror, the answer is "nothing," or at least nothing that we wouldn't do anyway. Preventing cyberattacks by terrorists, nation-states, and spies involves the same measures, as I argue in Conundrum. But: this approach gets called "naive" with some regularity, so I'd be interested in your take...
"Law rarely imposes large current costs, such as changing how the Internet's core protocols run, to address remote risks of uncertain (but low) incidence and uncertain magnitude." I concur. I would also apply that principle to proposals to restrict carbon output in order to address the possible risks of AGW.
Posted by: Doug Levene | Oct 17, 2011 8:47:22 PM
When the evidence of a threat is largely classified, public discussions of security threats tend to be frustrating because values often masquerade as empirical evidence. One side always says there is a huge threat; another side always responds that there is no real threat. In my experience, these sorts of empirical claims are often driven by normative values: the more one favors a particular response, the more one assesses the threat to match that response. But perhaps I'm too cynical.
Posted by: Orin Kerr | Oct 18, 2011 1:37:48 AM
Might I suggest that, perhaps, the concept of "changing how the Internet's core protocols run" misconstrues the problem. Many (I would argue most, but for the purposes of this argument "many" suffices) of the common vulnerabilities the popular media (over?)hypes stem from the failure of private actors to implement well-known security measures. (I can provide a nice list of these; see also the list of the FTC's data security enforcement actions.)
While I do not argue that such implementations are without cost (they ARE expensive!), I do suggest that this is not a question of "regulation restructuring existing infrastructures" but rather "regulation encouraging behavior that avoids negative externalities where the market fails to do so."
As to whether such externalities (i.e., cyber terror attacks) are likely to occur, or constitute a Type I, Type II, or some other hybrid risk/error, I suggest that *at least* we know of one high-probability, non-trivial-cost consequence: identity theft/fraud. A second consequence of non-trivial cost (I am unsure of its probability) is theft of intellectual property.
The first consequence, at least according to 46 state legislatures, the Federal legislature, and several federal administrative agencies, is of sufficient societal impact to justify legislating and/or promulgating regulations. (e.g., state security breach notification statutes, HIPAA, GLBA, and the FTC's enforcement actions - also, the SEC stepped into the fray last week promulgating data security risk disclosure regulations in investor reports.)
Obviously this isn't a conclusive response, but I want to illustrate the difference between encouraging "good" behavior on the part of individual actors and "retooling the whole system."
Posted by: David Thaw | Oct 18, 2011 1:42:09 AM
Orin: I agree with you. I feel particularly nervous where there are not only structural reasons for people to take a position (for example, they are a government official with responsibility for cybersecurity, and hence face concentrated reputational risk if there is an attack), but financial ones. A number of former cybersecurity officials, including Richard Clarke and Mike McConnell (along with Stewart Baker), have taken private sector jobs that involve cybersecurity services. That conflict of interest is underappreciated in media coverage of threats.
Posted by: Derek Bambauer | Oct 18, 2011 1:36:56 PM
David: you're right. There's no such thing as "cybersecurity." The rubric covers a number of different risks and threats. Identity theft is obnoxious and expensive, but hardly a national security risk. IP theft seems more profound, especially if it's defense-related technologies. And stuff like Stuxnet is of sufficient gravity that regulation seems warranted. The challenge is to figure out how much security is enough: do we need two-factor authentication everywhere, or just for tasks like banking? Should all data be encrypted? (Note the wave of unencrypted backup tape losses recently.) Overall, though, I do think information security is a classic externality, which is concomitantly a classic rationale for regulation.
Posted by: Derek Bambauer | Oct 18, 2011 1:41:57 PM
In my experience, the conflict of interest is pretty widely understood on the government side, and not, on the whole, all that likely to influence the positions of the individuals in the government. I've spoken with Stewart Baker about these issues, for example, and whether he is right or wrong, I haven't the slightest doubt that he absolutely believes what he says. The tougher case is on the other side, I think: For example, the media often presents civil liberties advocates as neutral "experts" even when their funding depends on them maintaining a particular message -- and even when there is a significant gap between on-the-record and off-the-record positions. That has been my experience, at least.
Posted by: Orin Kerr | Oct 19, 2011 1:29:20 AM
Orin: I can't speak to the civil-liberties side (any more than can anyone who watches cable news), but over the past two years I've had a good measure of interaction with folks in government positions "responsibl[e] for cybersecurity" and almost uniformly saw a bias toward their protective responsibilities, if for no other reason than that they are overwhelmed and underfunded. Most of the CISOs I interviewed for my doctoral research (admittedly this data is now 2-3 years old) exhibited similar characteristics.
Posted by: David Thaw | Oct 19, 2011 3:28:02 AM