A Trust Evaluation Framework in Distributed Networks: Vulnerability Analysis and Defense Against Attacks (IEEE INFOCOM 2006)
Background
Trust: a well-studied concept in sociology and psychology.
Distributed computer networks (ad hoc networks, sensor networks, P2P networks) rely on collaboration among network participants.
To secure distributed networks:
– Traditional approaches: data integrity, confidentiality, authentication, etc.
– A new, non-cryptographic security approach: trust evaluation.
Traditional schemes for securing ad hoc routing protocols focus on keeping attackers out of the network through secure key distribution/authentication and secure neighbor discovery. Such schemes, however, are not effective when malicious nodes have already gained access to the network or when some nodes in the network have been compromised.
Trust functions:
– Provide an incentive for good behavior.
– Provide a prediction of an entity's future behavior.
– Detect malicious and selfish entities.
Outline
Trust Evaluation Foundation
– Trust concept, notation, metric, models
Attacks and Protection
– Bad-mouthing; on-off; conflicting-behavior; etc.
Trust Management System and its Applications in Ad Hoc Networks
– Secure routing; malicious node detection
Trust Concept in Computer Networks
The most appropriate interpretation of trust in computer networks is belief.
– One entity believes that another entity will act in a certain way, or believes that the network will operate in a certain way.
Trust Notation and Metrics
Notation of a trust relationship: {subject: agent, action}
– The subject trusts the agent to perform an action.
– Subject: usually a single entity; can be a group of entities.
– Agent: one entity, a group of entities, or even the network.
– Action: an action performed (or a property possessed) by the agent.
Case 1: the subject believes the agent will perform the action for sure; the subject "trusts" the agent to perform the action. No uncertainty.
Case 2: the subject believes the agent will not perform the action for sure; the subject "trusts" the agent not to perform the action. No uncertainty.
Case 3: the subject has no idea whether the agent will perform the action or not; the subject has no trust in the agent. Highest uncertainty.
What is the physical meaning of these trust values? Uncertainty ↔ Trustworthiness.
Entropy-based Trust Metric
– Trust value T measures uncertainty and is a function of entropy.
– p: the probability with which the agent will perform the action, from the subject's point of view.
– p ↔ T: a one-to-one mapping.
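The slide omits the formula itself. As a minimal sketch, assuming the commonly cited entropy-based definition (T = 1 − H(p) for p ≥ 0.5 and T = H(p) − 1 for p < 0.5, with H the binary entropy), the p ↔ T mapping can be written as:

```python
import math

def entropy(p: float) -> float:
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def trust_from_prob(p: float) -> float:
    """Entropy-based trust value (assumed form):
    T = 1 - H(p) for p >= 0.5, T = H(p) - 1 for p < 0.5.
    T ranges over [-1, 1]; T = 0 corresponds to p = 0.5 (highest uncertainty)."""
    return 1 - entropy(p) if p >= 0.5 else entropy(p) - 1

# p = 1 -> T = 1 (full trust); p = 0.5 -> T = 0 (no trust); p = 0 -> T = -1.
for p in (1.0, 0.9, 0.5, 0.1, 0.0):
    print(p, round(trust_from_prob(p), 3))
```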
Trust Models
– Estimate trust value based on direct observation.
– Estimate trust value based on recommendations (third parties' opinions): trust propagation.
  Concatenation trust propagation
  Multipath trust propagation
A trust model specifies how to calculate trust via trust propagation.
Axioms of Trust Propagation
– Axiom 1: Concatenation propagation of trust does not increase trust.
– Axiom 2: Multipath propagation of trust does not reduce trust.
– Axiom 3: Trust based on multiple recommendations from a single source should not be higher than trust based on recommendations from independent sources.
Trust models should satisfy all three axioms; trust models are not unique.
Notation: action-r is the action of making recommendations about other nodes performing the action; R_AB denotes A's recommendation trust in B, and the propagation rules are applied when R_AB is positive.
Entropy-based Model
– Concatenation trust propagation
– Multipath trust propagation
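The propagation formulas are not reproduced on the slide. A minimal sketch, assuming the commonly cited form of the entropy-based model: concatenation discounts the recommended trust by the recommendation trust, and multipath forms a weighted combination of paths with weights proportional to the recommendation trusts.

```python
def concatenate(r_ab: float, t_bc: float) -> float:
    """Concatenation propagation: A trusts B's recommendations with r_ab,
    and B trusts C to perform the action with t_bc.
    Since 0 < r_ab <= 1, |result| <= |t_bc| (consistent with Axiom 1)."""
    return r_ab * t_bc

def multipath(r_ab: float, t_bc: float, r_ad: float, t_dc: float) -> float:
    """Multipath propagation: combine the recommendation paths A-B-C and
    A-D-C, weighting each path by its recommendation trust."""
    w_b = r_ab / (r_ab + r_ad)
    w_d = r_ad / (r_ab + r_ad)
    return w_b * concatenate(r_ab, t_bc) + w_d * concatenate(r_ad, t_dc)

# Example: two recommenders with different reliability.
print(concatenate(0.8, 0.9))           # 0.72
print(multipath(0.8, 0.9, 0.4, 0.5))   # weighted combination of the two paths
```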
Probability-based Model
– With S positive and F negative feedback (and a uniform prior), the probability p follows the posterior distribution Beta(S + 1, F + 1).
– Example: S = 6 and F = 1 give Beta(7, 2).
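A small numerical illustration of the beta posterior, consistent with the Beta(7, 2) example on the slide and assuming a uniform Beta(1, 1) prior:

```python
def beta_posterior(successes: int, failures: int):
    """With a uniform Beta(1, 1) prior, S successes and F failures give the
    posterior Beta(S + 1, F + 1) for the probability p."""
    alpha, beta = successes + 1, failures + 1
    mean = alpha / (alpha + beta)          # E[p] = (S + 1) / (S + F + 2)
    return alpha, beta, mean

# 6 positive and 1 negative observations -> Beta(7, 2).
alpha, beta, p_hat = beta_posterior(6, 1)
print(alpha, beta, round(p_hat, 3))        # 7 2 0.778

# The estimated p can then be mapped to a trust value, e.g. with the
# entropy-based mapping trust_from_prob(p_hat) sketched earlier.
```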
Probability-based Model: Concatenation trust propagation
Probability-based Model: Multipath trust propagation
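The slides for the probability-based propagation rules carry no formulas here. As a loosely hedged sketch, the two operations can be realized over (S, F) evidence counts in the beta-reputation style (discounting by recommendation trust, then summing evidence); this illustrates the idea and is not necessarily the exact formulation used in the paper.

```python
def discount(r_ab: float, s_bc: float, f_bc: float):
    """Concatenation: B's evidence about C is discounted by A's
    recommendation trust in B before A uses it."""
    return r_ab * s_bc, r_ab * f_bc

def aggregate(*evidence):
    """Multipath: evidence collected through independent paths is summed."""
    s_total = sum(s for s, _ in evidence)
    f_total = sum(f for _, f in evidence)
    return s_total, f_total

def prob_estimate(s: float, f: float) -> float:
    """Posterior mean of p under the Beta(S + 1, F + 1) posterior."""
    return (s + 1) / (s + f + 2)

# A hears about C through B (recommendation trust 0.9) and D (0.4).
path_b = discount(0.9, s_bc=8, f_bc=1)   # B saw 8 good, 1 bad action by C
path_d = discount(0.4, s_bc=2, f_bc=3)   # D saw 2 good, 3 bad actions by C
s, f = aggregate(path_b, path_d)
print(round(prob_estimate(s, f), 3))
```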
Outline
Trust Evaluation Foundation
– Trust concept, notation, metric, models
Attacks and Protection
– Bad-mouthing; on-off; conflicting-behavior; etc.
Trust Management System and its Applications in Ad Hoc Networks
– Secure routing; malicious node detection
Attacks and Protection
Trust evaluation is an attractive target for attackers. Attackers' goals:
– Damage the network, e.g., reduce its performance.
– Keep their own trust values above a certain threshold.
– Cause inaccurate trust records: good nodes get low trust values, bad nodes get high trust values.
– Discourage cooperation.
Bad Mouthing Attack
Malicious nodes provide dishonest recommendations to:
– Frame up good entities.
– Boost the trust values of malicious peers.
Defense: recommendation trust
– Action trust and recommendation trust records are maintained separately.
– Assign low weight to recommendations from nodes with low recommendation trust (see the sketch below).
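A minimal sketch of the defense's bookkeeping, with a hypothetical record structure: each known node has separate action-trust and recommendation-trust entries, and a received recommendation is weighted by the recommender's recommendation trust before being used.

```python
from dataclasses import dataclass

@dataclass
class TrustRecord:
    action_trust: float = 0.0          # trust in performing the action
    recommendation_trust: float = 0.5  # trust in giving honest recommendations

# Subject's local trust table, one record per known node (hypothetical data).
records: dict[str, TrustRecord] = {
    "B": TrustRecord(action_trust=0.7, recommendation_trust=0.9),
    "D": TrustRecord(action_trust=0.6, recommendation_trust=0.1),  # bad-mouther
}

def weighted_opinion(recommendations: dict[str, float]) -> float:
    """Combine recommendations about a target node, weighting each one by the
    recommender's recommendation trust; low-trust recommenders count little."""
    num = sum(records[r].recommendation_trust * v for r, v in recommendations.items())
    den = sum(records[r].recommendation_trust for r in recommendations)
    return num / den if den > 0 else 0.0

# D bad-mouths the target, but its low recommendation trust limits the damage.
print(round(weighted_opinion({"B": 0.8, "D": -1.0}), 3))
```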
On-off Attack
Time-domain inconsistency attack.
Dynamic property of trust:
– Observations made a long time ago should not carry the same weight as those made recently.
– Forgetting factor β (0 < β ≤ 1): K actions observed at time t1 count as roughly β^(t2-t1) · K actions when trust is evaluated at time t2.
A simple scenario:
– Stage 1: behave well 100 times.
– Stage 2: behave badly 100 times.
– Stage 3: stop doing anything for some time.
– Stage 4: behave well again.
On-off Attack
– When the system does not forget (β = 1), the attacker still has a positive trust value throughout stage 2; it keeps a good trust value even after performing many bad actions. With a large forgetting factor, the trust value may not represent the entity's latest status.
– With a small forgetting factor (β = 0.001), the attacker's trust value drops rapidly after it starts behaving badly in stage 2. However, it can regain trust simply by waiting in stage 3, because the system quickly forgets its bad behavior.
Large β: the trust value cannot keep up with the user's current status. Small β: attackers can recover their trust values by waiting.
On-off Attack: Defense
Solution: adaptive (dynamic) forgetting
– When the trust value is high, forget faster; when the trust value is low, forget slower.
– The trust value keeps up with the entity's current status after the entity turns bad.
– An entity can still recover its trust value after some bad behavior, but the recovery requires many good actions (see the sketch below).
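A minimal simulation sketch of the forgetting mechanism under the four-stage on-off scenario. The β schedule below (forget fast when trust is high, slowly when trust is low) is an assumption consistent with the slide's description, not the paper's exact parameters.

```python
def forget_and_update(s, f, outcome, beta):
    """Exponentially discount old evidence, then add the new observation."""
    s, f = beta * s, beta * f
    return (s + 1, f) if outcome else (s, f + 1)

def trust(s, f):
    """Simple trust estimate from the discounted good/bad counts."""
    return (s + 1) / (s + f + 2)

def adaptive_beta(t):
    """Assumed schedule: short memory while trust is high, long memory while low."""
    return 0.5 if t > 0.5 else 0.99

s = f = 0.0
# Stage 1: 100 good; stage 2: 100 bad; stage 3: 100 idle; stage 4: 100 good.
history = [True] * 100 + [False] * 100 + [None] * 100 + [True] * 100
for outcome in history:
    beta = adaptive_beta(trust(s, f))
    if outcome is None:              # idle period: evidence still decays
        s, f = beta * s, beta * f
    else:
        s, f = forget_and_update(s, f, outcome, beta)
print(round(trust(s, f), 3))
```

With this schedule, trust collapses quickly in stage 2, stays low through the idle stage 3 (long memory of bad actions), and is regained only through the sustained good behavior of stage 4, matching the property stated on the slide.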
Conflicting Behavior Attack
User-domain inconsistency: the attacker behaves well toward one group of users and badly toward another, so the two groups develop conflicting opinions about the malicious user.
– T{A: X, action} = T1 (high), T{B: X, action} = T2 (low).
– B provides a recommendation about X to A.
– A compares B's recommendation with its own experience.
– A assigns a low recommendation trust to B.
As a result, the two groups no longer trust each other's recommendations.
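A small sketch of how recommendation trust could be adjusted by comparing a received recommendation with the subject's own experience; the update step and threshold below are illustrative assumptions, not the paper's exact rule.

```python
def update_recommendation_trust(rec_trust: float,
                                recommended_value: float,
                                own_experience: float,
                                step: float = 0.1,
                                threshold: float = 0.5) -> float:
    """Raise the recommender's recommendation trust when its recommendation
    agrees with the subject's own observation, lower it when they conflict."""
    if abs(recommended_value - own_experience) <= threshold:
        rec_trust += step          # consistent recommendation
    else:
        rec_trust -= step          # conflicting recommendation
    return max(0.0, min(1.0, rec_trust))

# A's own experience with X is good (0.8), but B (in the other group) reports
# -0.6: B's recommendation trust at A drops, so the two groups gradually stop
# using each other's recommendations, which is exactly the attack's goal.
r = 0.5
r = update_recommendation_trust(r, recommended_value=-0.6, own_experience=0.8)
print(round(r, 2))   # 0.4
```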
Conflicting Behavior Attack (simulation setup)
– Group A: users 1, 4–10. Group B: users 11–20. Attackers: users 2 and 3.
Other Attacks
Sybil attack
– A malicious node creates several fake IDs; the fake IDs can share, or even take, the blame that should go to the malicious node itself.
– Defense: authentication.
Newcomer attack
– A malicious node re-registers as a newcomer to erase its bad history, which significantly reduces the effectiveness of trust management.
– Defense: access-control policy and authentication.
Outline
Trust Evaluation Foundation
– Trust concept, notation, metric, models
Attacks and Protection
– Bad-mouthing; on-off; conflicting-behavior; etc.
Trust Management System and its Applications in Ad Hoc Networks
– Secure routing; malicious node detection
Communication Procedure
– Find multiple routes to the destination.
– Obtain the packet-forwarding trustworthiness of the nodes on those routes.
– Select a trustworthy route to transmit the data.
– After the transmission, update the trust record based on the observed route quality.
A sketch of the route-selection step follows.
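A minimal sketch of the route-selection step, assuming route trustworthiness is summarized as the product of the per-node packet-forwarding trust values; the paper's exact route metric may differ.

```python
from math import prod

def route_trust(route, trust_table, default=0.5):
    """Trustworthiness of a route: product of the packet-forwarding trust of
    its intermediate nodes (unknown nodes get a neutral default value)."""
    return prod(trust_table.get(node, default) for node in route)

def select_route(routes, trust_table):
    """Pick the most trustworthy of the discovered candidate routes."""
    return max(routes, key=lambda r: route_trust(r, trust_table))

# Hypothetical example: two candidate routes to the destination.
trust_table = {"B": 0.9, "C": 0.8, "M": 0.2}      # M is suspected malicious
routes = [["B", "M"], ["B", "C"]]
print(select_route(routes, trust_table))           # ['B', 'C']
```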
Effects of Trust Management
– First, network throughput can be significantly degraded by malicious attackers.
– Second, with trust management, network performance recovers because the route-selection process can avoid less trustworthy nodes.
– Third, as the simulation time increases, trust management brings performance close to that of the scenario with no attackers present, since more and more accurate trust records are built over time.
Trust evaluation improves network throughput because a malicious node has less chance to be on the selected route, and it can be detected.
Effects of Trust Management
– Detection metrics: missed detection probability (MDP) and false alarm rate (FAR).
– The recommendation mechanism improves the performance of malicious node detection.
Bad Mouthing Attack
– The bad-mouthing attack has little effect on throughput.
Bad Mouthing Attack
– Using recommendation trust in malicious node detection significantly improves the detection rate.
On-off Attack
– For the same packet delivery ratio, the adaptive forgetting scheme yields the lowest trust values for malicious users.
Conflicting-behavior Attack
Attacker recommendation strategies:
– R1: no recommendations to subgroup A, honest recommendations to subgroup B.
– R2: no recommendations to subgroup A, no recommendations to subgroup B.
– R3: bad recommendations to subgroup A, no recommendations to subgroup B.
– R4: bad recommendations to subgroup A, honest recommendations to subgroup B.
Observations:
– In R1 and R4, the attackers can in fact help network performance, and they obtain higher recommendation trust than good nodes.
– In R3, malicious nodes always have much lower recommendation trust than good nodes.
– In R2, attackers do not help the network by providing honest recommendations, and they cannot be detected easily.
Conflicting-behavior Attack
– When the conflicting-behavior attack is present, using recommendation trust for malicious node detection can reduce the detection rate.
Discussion: what we can learn
– Organization and expression.
– The entropy-based model can partly explain the meaning of trust.
– Clear statement of the attacks and careful simulation design.
– Effective against the gray-hole and bad-mouthing attacks.
Discussion: what needs improvement
– Is the scheme really effective against the on-off and conflicting-behavior attacks? Fig. 14 suggests adaptive forgetting improves little.
– Section V.D, other attacks: error reporting, collusion, ...
– Formulas: variable errors.