1
Revisiting Random Key Pre-distribution Schemes for Wireless Sensor Networks, by Joengmin Hwang and Yongdae Kim, Computer Science and Engineering, University of Minnesota
2
Outline
- Introduction
- Notation
- Background of existing key pre-distribution schemes based on random graph theory
- Evaluation of giant component theory for key pre-distribution schemes
- Utilizing sensor hardware to control transmission range
- Authors' solution: an efficient path-key identification algorithm compared to existing schemes
3
Goal of the Authors: Revisit random graph theory and use the giant component theory of Erdos and Renyi to show that even if the node degree is small, most of the nodes in the network can be connected. Analyze the relationship between connectivity, memory size and security to show that the amount of memory required can be reduced, or security improved, by trading off a very small number of isolated nodes.
4
Notation
- d: the expected degree of a node, i.e., the expected number of secure links a node can establish during key set-up
- k: number of keys in a node's key ring
- n: network size in nodes
- r: communication radius
- n': the expected number of neighbor nodes within communication radius of a given node
- p: probability that 2 nodes share a key
- Pc: probability that the graph is connected
- B: ratio of the largest component size to the network size
- P: size of the key pool
- A: area of the field
- w: number of key spaces constructed in the network
- t: number of key spaces carried by each node
- x: number of nodes captured
5
Key pre-distribution in wireless sensor networks: Eschenauer and Gligor (EG). In EG, each node randomly picks a subset (called a key ring) of keys from a large key pool, and any pair of nodes can establish a secure connection if they share at least one common key. The three phases are: 1. initialization, 2. key set-up, and 3. path-key identification.
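To make the phases concrete, here is a minimal Python sketch of the EG basic scheme, assuming illustrative pool and key-ring sizes; the helper names (make_key_ring, shared_keys) are my own, not from the paper.

```python
import random

POOL_SIZE = 100000   # P: size of the key pool (value reused later in the slides)
RING_SIZE = 214      # k: keys per node's key ring (illustrative)

# Initialization: generate a large key pool of random symmetric keys.
key_pool = {key_id: random.getrandbits(128) for key_id in range(POOL_SIZE)}

def make_key_ring(k=RING_SIZE):
    """Give a node a key ring: k distinct key ids drawn at random from the pool."""
    return set(random.sample(range(POOL_SIZE), k))

def shared_keys(ring_a, ring_b):
    """Key set-up: two nodes exchange key ids and look for a common key."""
    return ring_a & ring_b

ring_a, ring_b = make_key_ring(), make_key_ring()
if shared_keys(ring_a, ring_b):
    print("secure link established directly")
else:
    print("no common key: fall back to path-key identification via an intermediary")
```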
6
Chan, Perrig and Song (CPS). Extends EG and develops 2 key pre-distribution techniques: q-composite key pre-distribution and a random pairwise keys scheme. 1. q-composite requires any 2 nodes to share at least q common keys to establish a secure link. 2. Random pairwise keys is based on the observation that not all n-1 pairwise keys need to be stored in a node's key ring to have a connected random graph with high probability.
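A sketch of the q-composite check, reusing the hypothetical key rings from the previous snippet; the threshold q = 2 is only an example.

```python
Q = 2  # q: minimum number of common keys required (illustrative)

def can_link_q_composite(ring_a, ring_b, q=Q):
    """q-composite variant: a secure link requires at least q shared keys; the
    pairwise key would then be derived from all of the shared keys together."""
    return len(ring_a & ring_b) >= q
```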
7
Du, Deng, Han and Varshney (DDHV). Combines the basic scheme with Blom's scheme, which allows any pair of nodes to find a secret pairwise key between them while storing a much smaller number of keys than the actual number of nodes. The trade-off is that, unlike the (n-1) pairwise key scheme, Blom's scheme is not perfectly resilient against node capture.
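Since the slide only names Blom's scheme, here is a hedged sketch of a single Blom key space (DDHV actually combines several such spaces); the field size, LAM and the helper names are illustrative assumptions.

```python
import random

PRIME = 2_147_483_647   # field modulus for the key space (illustrative)
LAM = 2                 # security parameter: scheme tolerates up to LAM captured nodes
N_NODES = 5             # illustrative network size

# Public Vandermonde-style matrix G: column j is (1, s_j, s_j^2, ..., s_j^LAM) mod PRIME,
# where the seeds s_j are distinct.
seeds = random.sample(range(1, PRIME), N_NODES)
G = [[pow(s, row, PRIME) for s in seeds] for row in range(LAM + 1)]

# Secret symmetric matrix D, known only to the key set-up server.
D = [[0] * (LAM + 1) for _ in range(LAM + 1)]
for r in range(LAM + 1):
    for c in range(r, LAM + 1):
        D[r][c] = D[c][r] = random.randrange(PRIME)

def private_row(i):
    """Row i of A = (D*G)^T, the secret share pre-loaded into node i."""
    return [sum(D[r][t] * G[t][i] for t in range(LAM + 1)) % PRIME
            for r in range(LAM + 1)]

def pairwise_key(i, j):
    """K_ij = A[i] . G[:, j]; symmetric because G^T * D * G is symmetric, so K_ij = K_ji."""
    a_i = private_row(i)
    return sum(a_i[r] * G[r][j] for r in range(LAM + 1)) % PRIME

assert pairwise_key(0, 3) == pairwise_key(3, 0)
```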
8
In all of the above schemes it is not certain that two nodes can generate a pairwise key; there is only a guarantee with probability p that this will be possible. To find the p needed so that the n nodes in the sensor network are connected, they use random graph theory.
9
Random Graph Theory and Key Pre-distribution Schemes. Erdos and Renyi provided a theory of how to determine p so that Pc is almost 1 (i.e., the graph is almost surely connected); this is called global connectivity. For a network to be connected with probability Pc, the p determined by the key information (key ring or pool size) should be greater than the p obtained from Erdos and Renyi's theory. P required comes from Erdos and Renyi's theory (d/n'). P actual, the actual local connectivity, is determined by the key ring size and key pool size in EG, by the key spaces in DDHV, and by the number of pairwise keys stored in each node in CPS.
10
Theorem 1: P required = d/n' (attempts to get the entire graph connected)
EG: Pactual = 1 - ((P-k)!)^2 / ((P-2k)! P!)
DDHV: Pactual = 1 - ((w-t)!)^2 / ((w-2t)! w!)
CPS: Pactual = m/n
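A small Python sketch of these formulas (the helper names are my own); log-gamma is used so the huge factorials in the EG and DDHV expressions do not overflow.

```python
from math import lgamma, exp

def p_required(d, n_prime):
    """Theorem 1: required local connectivity p_required = d / n'."""
    return d / n_prime

def p_actual_eg(P, k):
    """EG: probability two key rings of size k drawn from a pool of size P overlap:
    1 - ((P-k)!)^2 / ((P-2k)! P!), computed via log-gamma."""
    log_miss = 2 * lgamma(P - k + 1) - lgamma(P - 2 * k + 1) - lgamma(P + 1)
    return 1 - exp(log_miss)

def p_actual_ddhv(w, t):
    """DDHV: probability two nodes carrying t of w key spaces share at least one."""
    log_miss = 2 * lgamma(w - t + 1) - lgamma(w - 2 * t + 1) - lgamma(w + 1)
    return 1 - exp(log_miss)

def p_actual_cps(m, n):
    """CPS random pairwise keys: each node stores m of the n-1 possible keys."""
    return m / n

print(p_actual_eg(100000, 214))  # about 0.368, above the 0.3664 required later
```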
11
Desirable Node Degree Desired node degree is discussed in the context of network connectivity, network capacity and energy consumption by controlling transmission power. The node degree is controlled by adjusting the transmission power. Higher node degree requires higher transmission power, which increases the energy consumption. All we have to do is crank up the power, right?
12
No! With a high node degree, the network connectivity increases, but the interference between the neighboring nodes increases, and therefore, the network capacity decreases. If the node degree is decreased, the connectivity decreases, which in the extreme case results in the network being disconnected. The low degree reduces the communication interference, but since the connectivity is poor the number of hops will be increased, which will decrease the network capacity. (Not to mention power consumption issues)
13
Desired Node Degree (cont'd). The optimal node degree to maximize the network capacity has been treated as a guesstimate in recent literature; the exact constant remains an open problem. (If each node is connected to fewer than 0.074 log n nearest neighbors, then the network is asymptotically disconnected with probability 1 as n increases, while if each node is connected to more than 5.1774 log n nearest neighbors, then the network is asymptotically connected with probability approaching 1 as n increases.)
14
Required Local Connectivity Revisited – Theorem 2. When the node degree is 6, B is close to 1; when the node degree is 14, Pc approaches 1. This means that even with a very small node degree most of the nodes are connected to each other; increasing the node degree to make Pc very close to 1 only connects a small number of otherwise isolated nodes. Their goal is to utilize the transmission power control feature to bridge the gap between the network transmission range and the secure transmission range.
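The claim can be checked empirically with a quick Erdos-Renyi simulation; this is only a sketch, assuming networkx is available, with n and the seed chosen arbitrarily.

```python
import networkx as nx

n = 10000
for d in (6, 14):
    # G(n, p) with p = d/n gives an expected node degree of d.
    G = nx.fast_gnp_random_graph(n, d / n, seed=1)
    giant = max(nx.connected_components(G), key=len)
    print(f"d={d}: giant component covers {len(giant) / n:.4f}, "
          f"connected={nx.is_connected(G)}")
# With d = 6 the giant component already covers almost all nodes (B close to 1);
# only around d = 14 does the whole graph, with high probability, become connected.
```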
15
Revisiting random graph theory to re-evaluate pre-distribution schemes (varies with k).
Theorem 1: P required = d/n' (Erdos-Renyi); attempts to get the whole graph connected.
Theorem 2: P required = a/n' (Erdos-Renyi); attempts to get a sufficiently large giant component.
For EG with P (size of the key pool) = 100000 and n (network size in nodes) = 10000:
Theorem 1: to make P actual >= P required = 0.3664, k = 214 is needed.
Theorem 2: to make P actual >= P required = 0.0794, k = 91 is enough.
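Reusing the hypothetical p_actual_eg helper from the earlier sketch, the slide's key-ring sizes can be reproduced with a simple search:

```python
def min_key_ring_size(P, p_req):
    """Smallest k with p_actual_eg(P, k) >= p_req (hypothetical helper)."""
    k = 1
    while p_actual_eg(P, k) < p_req:
        k += 1
    return k

print(min_key_ring_size(100000, 0.3664))  # Theorem 1 target -> k = 214
print(min_key_ring_size(100000, 0.0794))  # Theorem 2 target -> k = 91
```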
16
Comparing this estimate for different key ring sizes: since Theorem 2 lets us reduce k, the number of links an adversary could attack decreases, so a reduction of k leads to higher resilience against node capture.
17
DDHV
19
Re-evaluation of DDHV (varies with r). With r = 40 and fixed t = 2, to obtain Pc = 0.9999 by Theorem 1, at most w = 10 key spaces can be selected. But to create a giant component covering more than 98% of the nodes by Theorem 2, w = 49 can be selected. If the memory size k is fixed, the security level is 4.9 times higher than that provided by Theorem 1 (w/t^2 = 49/4 vs. 10/4).
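Under the same assumptions, the slide's values of w can be found by searching with the p_actual_ddhv helper from the earlier sketch (the loop bound is arbitrary):

```python
def max_key_spaces(t, p_req, w_limit=1000):
    """Largest w such that nodes carrying t of w key spaces still share one
    with probability >= p_req; p_actual_ddhv decreases as w grows."""
    best = None
    for w in range(2 * t, w_limit + 1):
        if p_actual_ddhv(w, t) >= p_req:
            best = w
    return best

print(max_key_spaces(2, 0.3664))  # Theorem 1 -> w = 10
print(max_key_spaces(2, 0.0794))  # Theorem 2 -> w = 49
```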
20
Re-evaluation of CPS. Maximum network size n = m/p. By applying Theorem 2 instead of Theorem 1, we can increase the supportable network size. For example, for fixed m = 200, to satisfy Pc = 0.9999 by Theorem 1, p = 0.3664 and n = 545. But to create a giant component covering more than 98% of the nodes, p = 0.0794 is enough, which increases the supportable network size to n = 2518.
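The CPS numbers follow directly from n = m/p; a one-line check under the same illustrative targets:

```python
def max_network_size(m, p_req):
    """CPS: with m pairwise keys per node and local connectivity target p_req,
    the supportable network size is n = m / p_req."""
    return int(m / p_req)

print(max_network_size(200, 0.3664))  # Theorem 1 -> 545 nodes
print(max_network_size(200, 0.0794))  # Theorem 2 -> 2518 nodes
```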
21
Communication Overhead. In the previous sections, by reducing the shared key information we could reduce the memory size. However, after the key set-up phase, the resulting graph of secure links has a very sparse node degree, which can increase communication overhead.
22
Cascade On – Ratio of Hops
23
Cascade off – Ratio of Hops
24
Transmission Range
25
Required Range Extension
26
Communication Overhead
27
Computational Cost. FTTL – if the number of hops in the path between i and j is h, the total numbers of encryptions and decryptions are both h. CSN – the number of hops the unicast message traverses is always 2, so a total of 2 encryptions and 2 decryptions is used.
28
Communication Cost. The cost of FTTL is bigger than the unicast cost of CSN; however, CSN trades off communication with computation.
29
The authors' solution is a new protocol described in the extension of the paper. 1. The key set-up phase is the same as the original. 2. After the key set-up phase, each node i broadcasts its own id and the ids of the nodes not yet connected to it. After the first broadcast message from each node, some of the neighboring nodes will become connected. Node i, in turn, checks its neighbors' unconnected lists to see if it can help connect others. This may not require any additional broadcast message after the first broadcast of the unconnected node list. As time goes by, more neighbor nodes become connected.
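A hedged sketch of how the broadcast step could be realized; the data structures (neighbors, secure_links) and helper names are illustrative assumptions, not the authors' exact protocol messages.

```python
def unconnected_list(i, neighbors, secure_links):
    """Neighbors of node i that share no key with i after key set-up."""
    return neighbors[i] - secure_links[i]

def find_helpers(i, neighbors, secure_links):
    """After node i broadcasts (i, unconnected_list(i)), any neighbor m that already
    has secure links to both i and an unconnected node j can bridge a path key."""
    helpers = {}
    for j in unconnected_list(i, neighbors, secure_links):
        for m in secure_links[i] & neighbors[i]:
            if j in secure_links[m]:
                helpers[j] = m   # m generates a path key and sends it securely to i and j
                break
    return helpers
```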
30
Recall the Goal of the Authors: Revisit random graph theory and use the giant component theory of Erdos and Renyi to show that even if the node degree is small, most of the nodes in the network can be connected. Analyze the relationship between connectivity, memory size and security to show that the amount of memory required can be reduced, or security improved, by trading off a very small number of isolated nodes.
31
Conclusion
- Showed that they could reduce the amount of memory required or improve security by trading off a very small number of isolated nodes.
- Simulation shows that communication overhead does not increase significantly even after reducing the node degree.
- Showed that nodes can dynamically adjust their transmission power to establish secure links with the targeted network neighbors.
- Proposed an efficient path-key identification algorithm and compared it with existing schemes.