

Identifying new nodes also requires a model that takes into account the creative possibilities that exist in the cyber world (which do not exist as concretely in, for example, the nuclear world) for moves that serve what biological models call "posturing": flexing muscles to show capability rather than to enact any immediate goal. Species which posture rather than fight tend to compete via a "war of attrition." Applying this to international security reveals that there are more available cyberwar decision paths than those which enact straightforward violence. As Rohde (2010) stated, taking posturing into account is useful because it accounts for different forms of power on the changing landscape in which the competition occurs. Rohde explains, "Climate change, for example, may have unforeseen consequences for how nations behave: a war of attrition may become more aggressive." This game cannot be modeled linearly based on how many cannons or bombs a country has stockpiled; actual capabilities may be less or more than those the country chooses to posture. (See, e.g., Woodward 2010 on the "speculative" possibility that Stuxnet was an Israeli attack on an Iranian target.) Cyberwar posturing requires a model more nuanced than M.A.D. To fully exploit the potential for modeling game-theoretic strategies, we must recruit diverse minds to think up new possible nodes, and validate different forms of power to determine what strategies serve the end goal.
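One way to make the posturing logic concrete is the classic hawk-dove model from biological game theory, in which "display" (posturing) sits alongside "fight" as a strategy. The Python sketch below is purely illustrative: the function, the resource value V, and the fight cost C are assumptions for exposition, not figures from this paper or from any empirical cyber data.

    # Hawk-dove payoff sketch: 'display' (posturing) versus 'fight'.
    # V = value of the contested resource, C = cost of an actual fight.
    # All numbers are illustrative assumptions.

    def hawk_dove_payoffs(V, C):
        """Expected payoff to the row player for each strategy pair."""
        return {
            ("fight", "fight"): (V - C) / 2,    # escalated conflict, costly for both
            ("fight", "display"): V,            # posturing opponent backs down
            ("display", "fight"): 0,            # posturing player withdraws
            ("display", "display"): V / 2,      # mutual posturing: war of attrition
        }

    if __name__ == "__main__":
        for profile, payoff in hawk_dove_payoffs(V=10, C=30).items():
            print(profile, "->", payoff)
        # With C > V, always fighting is not a stable strategy, so posturing
        # persists as a rational decision path distinct from open violence.

When the cost of fighting exceeds the value of the prize, even this toy model shows why posturing belongs on the decision tree at all: it is not noise around the "real" conflict but a payoff-relevant move in its own right.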

5.3 Weighting nodes intelligently

Once one isolates the problem and defines the corresponding set of goals in a given situation, one must evaluate the other players' likely moves. Game theory can play an important role at this stage because it is well established that human cognition tends not to react to threats in a fully rational way, or as economics would dictate. Jonathan Renshon and Nobel Prize winner Daniel Kahneman have written on these human cognitive obstacles to economically optimal decisions. According to Kahneman and Renshon (2006), "humans cannot make the rational calculations required by conventional economics. They rather tend to take mental shortcuts that may lead to erroneous predictions, i.e., they are biased." Using game theory to make a security strategy that is a calculated derivative of mapped potential outcomes allows decision-makers to lessen those biases and respond to threats proportionately and economically.
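A short sketch may clarify what a "calculated derivative of mapped potential outcomes" could look like in practice. The code below weights each branch of a decision tree by an assumed probability and payoff and selects the strategy with the highest expected value; the strategy names, probabilities, and payoffs are hypothetical, invented for illustration rather than taken from the paper.

    # Minimal sketch: weight decision-tree branches by expected value so the
    # chosen response follows the mapped outcomes rather than a gut reaction.
    # All strategies, probabilities, and payoffs below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Branch:
        label: str
        probability: float   # assumed likelihood of this opponent response
        payoff: float        # assumed utility of the resulting outcome

    def expected_value(branches):
        return sum(b.probability * b.payoff for b in branches)

    strategies = {
        "escalate": [Branch("opponent retaliates", 0.7, -50),
                     Branch("opponent backs down", 0.3, 40)],
        "posture":  [Branch("opponent escalates", 0.2, -20),
                     Branch("stalemate", 0.8, 5)],
    }

    best = max(strategies, key=lambda name: expected_value(strategies[name]))
    print(best, {name: expected_value(b) for name, b in strategies.items()})

Even this toy version makes the bias-correction point: whichever branch feels most threatening in the moment, the selected strategy is the one the weighted tree supports.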

The fact that there are limited existing examples of cyberwarfare interactions complicates this stage of analysis; successful programming in games like chess and Othello has relied upon finite patterns of previous actions: "A hill climbing algorithm can… be used based on a function of the number of correct opponent move predictions taken from a list of previous opponent moves or games" (Hamilton et al., 2002: 4). The lack of behavioral precedent models will increase the margin of error; if one could use a killer heuristic (prioritizing moves that have been shown to produce cutoffs in other situations), the pruning would be more successful (Winands 2004). It is possible that red-teaming could provide some approximations of history; indeed, one of the recommendations in the Report of the Defense Science Board (2010: viii) is to "establish red teaming as the norm instead of the exception." And all players must play on the board of limited empirical history.
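The approach quoted from Hamilton et al. can be sketched in a few lines: score a candidate opponent model by how many recorded opponent moves it predicts correctly, then hill-climb over the model's weights. The move names, toy history, and weighting scheme below are assumptions for illustration only; the very thinness of such a history in the cyber setting is the margin-of-error problem noted above.

    # Sketch of the hill-climbing idea quoted from Hamilton et al. (2002):
    # score an opponent model by correct predictions over past moves, then
    # climb toward better-scoring weights. All data here is hypothetical.

    import random

    def predict_move(weights, state):
        # Toy opponent model: choose the legal move with the largest weight.
        return max(state["legal_moves"], key=lambda m: weights.get(m, 0.0))

    def prediction_score(weights, history):
        # history is a list of (game_state, actual_opponent_move) pairs.
        return sum(1 for state, actual in history
                   if predict_move(weights, state) == actual)

    def hill_climb(history, moves, steps=200):
        weights = {m: 0.0 for m in moves}
        best = prediction_score(weights, history)
        for _ in range(steps):
            candidate = dict(weights)
            candidate[random.choice(moves)] += random.uniform(-1, 1)  # local tweak
            score = prediction_score(candidate, history)
            if score >= best:                 # keep improvements (and ties)
                weights, best = candidate, score
        return weights, best

    if __name__ == "__main__":
        moves = ["scan", "phish", "ddos"]                  # hypothetical move set
        history = [({"legal_moves": moves}, "phish")] * 5  # toy "past games"
        weights, score = hill_climb(history, moves)
        print(score, "of", len(history), "moves predicted correctly")

With a rich archive of prior games, this score gives the search something to climb; without behavioral precedent, the routine has little to learn from, which is exactly why red-teamed approximations of history matter.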

In an intersecting sense, the uses of game theory in assigning weight neutrally to nodes of a decision tree may be especially useful in the cyber context because our reactions seem to derive from evolutionary strategies, and cyber may activate those uniquely. Having a "face" to the threat is crucial to our reaction, according to psychologist Daniel Gilbert (2007), who offers as an example that global warming does not push our buttons the way terrorism and other threats "with a mustache" do (think of the resources we devote to deaths by terrorism, compared to deaths by cancer or hunger). Cyberwar has a sanitized quality to it: unlike bombs and tanks, it does not necessitate face-to-face confrontation with the effects of one's decisions (see Baer 2010b).

6. Avoiding cyberwar: Could we have cyber disarmament?

The economic inefficiencies of an offensive cyber arms race (not to mention the danger of allowing the US and others to stockpile a cyber arsenal) have led some to propose solutions to avoid this altogether. Harvard Professor Jack Goldsmith (2010) has proposed something akin to an international negotiating architecture to preempt cyberwar and the costs of cyberdefense. Certainly, the U.S. would benefit from having red lines drawn. But even if we could have the prescience to create a set of rules that would anticipate the new ways in which the Internet will be useful for attack (which is unlikely given the range of possibilities, many of which might not be directly violent: "the range of possible options is very large, so that cyberattack-based operations might be set in motion to influence an election, instigate conflict between political factions, harass disfavored leaders or entities, or divert money" (National Research Council Committee on Offensive Information Warfare, Section

