Attacker Models for Wireless Sensor Networks
(How Security in WSNs Differs From Traditional Security)

Felix Freiling
University of Mannheim, Germany
joint work with Zinaida Benenson

Summer School "Protocols and Security for Wireless Sensor Actor Networks"
Schloss Dagstuhl, March 2008
"A system without an adversary definition cannot be secure."
Virgil Gligor
MobiCom 2001
• Section 2.2 ...
  – "... we assume that individual sensors are untrusted. Our goal is to design the SPINS key setup so a compromise of a node does not spread to other nodes."
  – "... any adversary can eavesdrop on traffic, inject new messages, and replay old messages."
• What does the adversary do with compromised nodes?
• How many nodes can the adversary compromise?
ACM CCS 2002
• Section 2.4 ...
  – "... active manipulation of sensor's data-inputs."
  – "... sensor node is under complete physical control of the adversary."
• Is this the same assumption as in SPINS?
• Are injection, replay, and eavesdropping considered?
Problem and Goal
• Assumptions about adversaries critically influence the correctness and efficiency of protocols
  – So they should be as precise as possible
• Precise adversary assumptions
  – help to increase confidence in the security of a scheme
  – help to make schemes comparable
• Goal: propose a set of well-defined attacker models for WSNs
• Advantages:
  – Gives algorithm designers a toolbox to choose from
  – Makes it easier to compare algorithms
  – Precise enough to support rigorous analysis
Toolbox Overview
• Node-centered model
  – How is the behavior of nodes affected by the adversary?
  – Effects on channels are attributed to a particular node
• An attacker model is a set of basic attacker models
  – A basic attacker model is a pair (i, p) of values from two dimensions
Two Main Dimensions
• Intervention: What can the attacker do?
  – How does the attacker potentially change the normal behavior of an individual sensor node?
• Presence: Where can he do it?
  – Which part of the sensor network can the attacker potentially influence?
Presence
• Local
  – Attacker affects a small connected subset of sensors
  – Example: single-person ad-hoc attacker
• Distributed
  – Attacker affects multiple small connected subsets of sensors; the total set of influenced sensors is unconnected
  – Examples: single-person mobile adversary, uniformly distributed adversary, threshold adversary (t-out-of-k)
• Global
  – Attacker affects all sensor nodes
  – Examples: powerful organization with large eavesdropping and manpower resources, worst-case assumption about a software vulnerability (WSN worm)
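The three presence levels can be read as a statement about the connectivity of the affected node set within the network's communication graph. The following is a minimal sketch of that reading (the encoding and the function names are mine, not from the slides), assuming the topology is given as an undirected adjacency dict:

```python
from collections import deque


def connected_components(graph, nodes):
    """Connected components of `nodes` inside `graph`
    ({node: set_of_neighbors}), found by breadth-first search."""
    nodes = set(nodes)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        component, queue = set(), deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            component.add(u)
            for v in graph[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    queue.append(v)
        components.append(component)
    return components


def classify_presence(graph, affected):
    """Map an affected node set to the slide's presence levels:
    all nodes -> global, one connected subset -> local,
    several unconnected subsets -> distributed."""
    if set(affected) == set(graph):
        return "global"
    return "local" if len(connected_components(graph, affected)) == 1 else "distributed"
```

For a line topology 0-1-2-3-4, `classify_presence(g, {0, 1})` is "local", `classify_presence(g, {0, 3})` is "distributed", and the full node set is "global".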
Intervention (1/2)
• Crash
  – Attacker can destroy the sensor node without reading any internally stored data
  – Examples: physical destruction, removing the battery
• Eavesdropping
  – Attacker can see all messages on the incoming and outgoing channels of the sensor node
  – Example: place a wireless receiver close to the node
• X-ray
  – Crash + Eavesdropping +
  – Attacker can read the full memory contents of the sensor node (data and code)
  – Examples: access via the JTAG interface, exploit a software vulnerability, take the sensor node to the lab and open it physically
Intervention (2/2)
• Disturbing
  – Crash + Eavesdropping +
  – Attacker can modify part of the data memory of the sensor node
  – Examples: influence the routing table by sending fake routing-table updates, influence a temperature reading by placing a cigarette lighter close to the temperature sensor, inject fake packets, replay packets, spoof packets
• Modifying
  – Disturbing + X-ray +
  – Attacker can inspect and modify the full data memory of the sensor node
  – Examples: read access via JTAG, exploit a software vulnerability
• Reprogramming
  – Modifying +
  – Attacker can inspect and modify the full data and code memory of the sensor node
  – Example: read/write access via JTAG
Lattice of Basic Attacker Models

[Figure: Hasse diagram of the intervention levels, where an edge X → Y means that Y includes all attacker behavior of X. Reading the edges off the definitions above: null → crash and eavesdropping; crash and eavesdropping → X-ray and disturbing; X-ray and disturbing → modifying; modifying → reprogramming.]
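The inclusion relation of the lattice can be checked mechanically. A small sketch (the encoding is mine) that stores the edges as read off the intervention slides and tests inclusion by reachability:

```python
# Hasse-diagram edges of the intervention lattice, taken from the
# definitions on the slides: an edge X -> Y means that Y includes
# all attacker behavior of X.
COVERS = {
    "null": {"crash", "eavesdropping"},
    "crash": {"X-ray", "disturbing"},
    "eavesdropping": {"X-ray", "disturbing"},
    "X-ray": {"modifying"},
    "disturbing": {"modifying"},
    "modifying": {"reprogramming"},
    "reprogramming": set(),
}


def includes(weaker: str, stronger: str) -> bool:
    """True iff `stronger` includes all attacker behavior of `weaker`,
    i.e. `stronger` is reachable from `weaker` along lattice edges."""
    if weaker == stronger:
        return True
    return any(includes(above, stronger) for above in COVERS[weaker])
```

Note that `includes("X-ray", "disturbing")` and `includes("disturbing", "X-ray")` are both false: the two levels are incomparable, which is exactly why the structure is a lattice rather than a chain.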
Combining Basic Attackers to Attacker Models
• An attacker model is a set of basic attacker models
  – This makes it possible to define hybrid attacker assumptions
• Example: an adversary can be
  – local reprogramming and
  – global eavesdropping
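The hybrid example above can be written down directly as a set of (intervention, presence) pairs. A minimal sketch (the helper `can` is hypothetical, not part of the slides' formalism):

```python
# The slide's hybrid adversary: local reprogramming plus global eavesdropping.
attacker_model = {
    ("reprogramming", "local"),
    ("eavesdropping", "global"),
}


def can(model, intervention, presence):
    """Check whether the basic attacker model (intervention, presence)
    is part of `model`. Exact matching only; combining this with the
    lattice ordering would allow 'at least as strong as' queries."""
    return (intervention, presence) in model
```

So `can(attacker_model, "eavesdropping", "global")` holds, while `can(attacker_model, "reprogramming", "global")` does not: the adversary can reprogram nodes only locally.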
Usage in the Literature
• L. Eschenauer, V. D. Gligor: A key-management scheme ... (ACM CCS 2002)
  – "... active manipulation of sensor's data-inputs."
  – "... sensor node is under complete physical control of the adversary."
  – Probably a combination of distributed disturbing, distributed reprogramming, and global eavesdropping
• H. Chan, A. Perrig, D. Song: Random key predistribution ... (IEEE S&P 2003)
  – "... an adversary may be able to undetectably take control of a sensor node and compromise the cryptographic keys."
  – Probably distributed X-ray, but maybe distributed modifying
• A. Perrig, R. Szewczyk, V. Wen, D. Culler, J. D. Tygar: SPINS ... (MobiCom 2001)
  – "... we assume that individual sensors are untrusted. Our goal is [that] the compromise of a node does not spread to other nodes."
  – Probably local or distributed X-ray, or distributed reprogramming
Comparison
• Fault tolerance / self-stabilization:
  – Byzantine processes (Lamport, Shostak, Pease) as random node behavior
    • Byzantine = reprogramming adversary
  – Arbitrary data (not code) perturbation in self-stabilization (Dijkstra)
    • The self-stabilization adversary is a "finite" modifying adversary
• Dolev-Yao attacker model
  – Network-centered model aiming at confidentiality and formal analysis
    • Global eavesdropping with parts of the disturbing adversary (no data modifications)
• Cryptographic attacker models
  – Notion of a polynomially bounded adversary
  – Practical vs. information-theoretic security
  – Notion of a passive adversary (X-ray without injection)
Why is the toolbox specific to WSNs?
• The existence of environmental sensors results in the disturbing adversary
• If you can inspect a node, you can also crash it (X-ray vs. the passive crypto adversary)
• The difference between data and code in the Harvard architecture motivates the difference between the modifying and reprogramming adversaries
Theoretical Basis of the Intervention Dimension
• Three general dimensions of behavior, based on the goals of the attacker
  – Different property types are fundamentally different
• Three types of properties:
  – Safety: if something is done, then it is done correctly
    • Examples: correct aggregation computation, integrity of computation, authentication of messages
  – Liveness: something is (eventually) done
    • Examples: messages are sent periodically, termination of a protocol run
  – Information flow: who learns which information
    • Examples: confidentiality of message contents, confidentiality of cryptographic keys stored in memory
• Specific useful combinations yield the intervention levels
• Unification of attacker models from cryptography and fault tolerance
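One way to connect the property types back to the intervention levels is to record which types each level can attack. The table below is my own reading of the slides' definitions (crash only stops activity, so it threatens liveness; eavesdropping and X-ray leak information; any level that modifies data or code can also violate safety), not something the slides state verbatim:

```python
# Hypothetical mapping (my inference, not stated on the slides):
# which property types each intervention level can attack.
THREATENS = {
    "crash":         {"liveness"},
    "eavesdropping": {"information flow"},
    "X-ray":         {"liveness", "information flow"},
    "disturbing":    {"liveness", "information flow", "safety"},
    "modifying":     {"liveness", "information flow", "safety"},
    "reprogramming": {"liveness", "information flow", "safety"},
}
```

Under this reading, the threatened-property sets grow monotonically along the lattice: a stronger intervention level never threatens fewer property types than a weaker one.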
[Figure: a diagram relating the three property types (safety, liveness, information flow) to attacker behaviors — crash, crash-recovery, limited state perturbation, full state perturbation, full code perturbation — and to the scope of the attacker's view (participants, network).]
Discussion
• Time-dependent development of basic attacker models can be incorporated
• The base station is usually assumed uncompromised; is this necessary?
• Is a presence level between local and global necessary?
  – Example: jamming in large connected parts of the network
• Open for suggestions ...
References (1/3)
Attacks on sensor networks:
• Physical attacks:
  – A. Becher, Z. Benenson, M. Dornseif: Tampering with motes: Real-world physical attacks on wireless sensor networks. Security in Pervasive Computing (SPC), 2006.
• Performing stack smashing (a buffer-overflow exploit) on a mote:
  – http://travisgoodspeed.blogspot.com/2007/08/machine-code-injection-for-wireless.html
  – http://travisgoodspeed.blogspot.com/2007/09/memory-constrained-code-injection.html
• Software attacks and countermeasures:
  – N. Cooprider, W. Archer, E. Eide, D. Gay, J. Regehr: Efficient Memory Safety for TinyOS. SenSys 2007.
  – Q. Gu, R. Noorani: Towards Self-propagate Mal-packets in Sensor Networks. Proc. ACM Conference on Wireless Network Security, 2007.
References (2/3)
Investigated WSN security papers:
• L. Eschenauer, V. D. Gligor: A key-management scheme for distributed sensor networks. ACM CCS 2002.
• H. Chan, A. Perrig, D. Song: Random key predistribution schemes for sensor networks. IEEE Symp. on Security and Privacy, 2003.
• A. Perrig, R. Szewczyk, V. Wen, D. Culler, J. D. Tygar: SPINS: Security protocols for sensor networks. MobiCom 2001.
References (3/3)
Background on attacker modeling:
• Z. Benenson, P. M. Cholewinski, F. C. Freiling: Vulnerabilities and Attacks in Wireless Sensor Networks. Chapter in Wireless Sensor Networks Security, Cryptology & Information Security Series (CIS). IOS Press, 2007.
• Z. Benenson, F. C. Freiling, T. Holz, D. Kesdogan, L. Draque Penso: Safety, Liveness, and Information Flow: Dependability Revisited. ARCS Workshops, 2006.
Classic descriptions of attacker models in crypto and fault tolerance:
• D. Dolev, A. C. Yao: On the security of public key protocols. IEEE Transactions on Information Theory, Vol. 29 (2), 1983.
• M. Fitzi, M. Hirt, U. Maurer: General Adversaries in Unconditional Multi-party Computation. Proc. ASIACRYPT 1999.
• L. Lamport, R. Shostak, M. Pease: The Byzantine generals problem. ACM Trans. Programming Languages and Systems, Vol. 4 (3), 1982.
• Z. Liu, M. Joseph: Specification and verification of fault-tolerance, timing and scheduling. ACM Trans. Programming Languages and Systems, Vol. 21 (1), 1999.
• H. Schepers, J. Hooman: A trace-based compositional proof theory for fault tolerant distributed systems. Theoretical Computer Science, Vol. 128 (1-2), 1994.
Advertisement: Utimaco Cryptoserver
• A (German) competitor of the IBM 4758 PCI