A survey of trust and reputation systems for online service provision


A survey of trust and reputation systems for online service provision Audun Jøsang, Roslan Ismail, Colin Boyd

Background for trust and reputation systems
Definition 1 (Reliability trust). Trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends.
This definition includes the concept of dependence on the trusted party, and the reliability (probability) of the trusted party as seen by the trusting party.
Problem: having high (reliability) trust in a person in general is not necessarily enough to decide to enter into a situation of dependence on that person.

Background for trust and reputation systems
Definition 2 (Decision trust). Trust is the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible (for details see [Josang-IFIPTM04], next week).
It explicitly and implicitly includes aspects of a broad notion of trust:
- dependence on the trusted party
- subjectivity, in "willing" and "with a feeling of relative security"
- reliability of the trusted party, a.k.a. success probability
- utility, in the sense that positive utility results from a positive outcome and negative utility from a negative outcome
- risk attitude, in the sense that the trusting party (trustor) is willing to accept the situational risk of negative outcomes, given that the success probability is less than 1

Background for trust and reputation systems
Definition 3 (Reputation). Reputation is what is generally said or believed about a person's or thing's character or standing.
The difference between trust and reputation can be illustrated by two perfectly normal and plausible statements:
(1) I trust you because of your good reputation.
(2) I trust you despite your bad reputation.
Statement (2) reflects that the relying party has some private knowledge about the trustee, e.g. through direct experience or an intimate relationship, and that these factors overrule any reputation the person might have.

Background for trust and reputation systems
Reputation can be considered a collective measure of trustworthiness (in the sense of reliability) based on referrals or ratings from members of a community. An individual's subjective trust can be derived from a combination of received referrals and personal experience.
To avoid dependence and loops, referrals must be based on first-hand experience only, not on other referrals.

Background for trust and reputation systems
Trust systems produce a score that reflects the relying party's subjective view of an entity's trustworthiness.
Reputation systems produce an entity's (public) reputation score as seen by the whole community.

Security and trust
Traditional security mechanisms typically protect resources from malicious users by restricting access to authorized users only. However, information providers can themselves act maliciously by providing false or misleading information, and traditional security mechanisms cannot protect against such threats. Trust and reputation systems, on the other hand, can.
- Hard security: provided by traditional security mechanisms such as authentication and access control.
- Soft security: provided by social control mechanisms, of which trust and reputation systems are examples.

Security and trust
Communication security includes encryption of the communication channel and cryptographic authentication of identities. Authentication provides so-called identity trust.
Users are also interested in the reliability of authenticated parties, or the quality of the goods and services they provide. This latter type of trust is called provision trust, and only trust and reputation systems (i.e., soft security mechanisms) are useful tools for deriving it.

Collaborative filtering
Collaborative filtering (CF) systems are similar to reputation systems in that both collect ratings from members of a community.
Difference between CF and reputation systems: CF takes ratings subject to taste as input, whereas reputation systems take ratings assumed insensitive to taste.
Why: the assumption behind CF systems is that different people have different tastes and rate things differently according to subjective taste. If two users rate a set of items similarly, they share similar tastes, and this information can be used to recommend items that one of them likes.
Implementations of CF techniques (assuming participants are trustworthy and sincere) are often called recommender systems.
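The "similar ratings imply similar tastes" idea above is often quantified with a similarity measure over the items two users have both rated. A minimal sketch using cosine similarity; the user names and ratings are invented for illustration:

```python
import math

def cosine_similarity(ratings_a, ratings_b):
    """Cosine similarity of two users' ratings over their co-rated items."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0  # no co-rated items: no evidence of shared taste
    dot = sum(ratings_a[i] * ratings_b[i] for i in common)
    norm_a = math.sqrt(sum(ratings_a[i] ** 2 for i in common))
    norm_b = math.sqrt(sum(ratings_b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

# Hypothetical item ratings on a 1-5 scale.
alice = {"film1": 5, "film2": 3, "film3": 4}
bob = {"film1": 4, "film2": 2, "film4": 5}
print(round(cosine_similarity(alice, bob), 3))
```

A recommender would then suggest to Alice the items (here "film4") rated highly by her most similar peers.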

Reputation network architectures
The reputation network architecture determines how ratings and reputation scores are communicated between participants in a reputation system.
- Centralized (e.g., eBay): a central authority derives reputation scores for each participant.

Reputation network architectures
- Distributed (e.g., P2P networks): each node derives reputation scores of other nodes. Scalability is a problem for capacity-limited entities.

Reputation computation engines
- Summation or average of ratings:
  - positive score minus negative score (e.g., eBay)
  - average of all ratings (e.g., Amazon)
  - weighted average of all ratings (e.g., weights based on factors such as a rater's reputation)
- Bayesian systems (see [Josang-BECC02] next week) take binary ratings as input (i.e., positive or negative) and compute reputation scores by statistically updating a beta probability density function. The a posteriori (i.e. updated) reputation score is computed by combining the a priori (i.e. previous) reputation score with the new rating.
- Discrete measures rate performance in the form of discrete verbal statements (VT, T, UT, VUT); a trustor can weigh the raters based on the discrepancy between its own interaction experience and the ratings received.
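The Bayesian update above can be sketched in a few lines: with r positive and s negative ratings and a uniform prior, the reputation score is the mean of the posterior Beta(r + 1, s + 1) distribution. A minimal sketch (the numeric examples are invented):

```python
def beta_reputation(r, s):
    """Expected reputation given r positive and s negative ratings.

    Posterior mean of Beta(r + 1, s + 1): a uniform prior Beta(1, 1)
    updated with the observed binary outcomes.
    """
    return (r + 1) / (r + s + 2)

# A new agent with no rating history starts at the neutral prior.
print(beta_reputation(0, 0))   # 0.5
# Each new rating shifts the posterior: 8 positives, 2 negatives.
print(beta_reputation(8, 2))   # 0.75
```

Each incoming rating simply increments r or s, so the a posteriori score is obtained from the a priori counts plus the new observation, exactly as the slide describes.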

Reputation computation engines
Belief theory
- A's belief in B (called an opinion) is denoted by W(A:B) = (b, d, u, a), where b, d, and u represent belief, disbelief, and uncertainty respectively, with b + d + u = 1, and a is the base rate probability in the absence of evidence.
- The opinion's probability expectation value E(W(A:B)) = b + a·u can be interpreted as A's reliability trust in B.
- Subjective logic operators are used to combine opinions:
  - Two trust paths (Alice-Bob-David and Alice-Claire-David) can each be computed with the discounting operator.
  - The two paths can then be combined using the consensus operator.
  - The end product is W(Alice:David).
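The Alice-Bob-David example can be sketched with the standard discounting and consensus operators. The opinion values below are invented for illustration, and the consensus base rate is simplified by assuming all opinions share the same base rate a:

```python
from collections import namedtuple

# A subjective-logic opinion (b, d, u, a) with b + d + u = 1.
Opinion = namedtuple("Opinion", "b d u a")

def discount(ab, bx):
    """A's derived opinion about X via advisor B: A's belief in B
    scales B's opinion, and A's disbelief/uncertainty in B becomes
    uncertainty about X."""
    return Opinion(ab.b * bx.b, ab.b * bx.d,
                   ab.d + ab.u + ab.b * bx.u, bx.a)

def consensus(o1, o2):
    """Fuse two independent opinions about the same target
    (assumes equal base rates and not both opinions dogmatic)."""
    k = o1.u + o2.u - o1.u * o2.u
    return Opinion((o1.b * o2.u + o2.b * o1.u) / k,
                   (o1.d * o2.u + o2.d * o1.u) / k,
                   (o1.u * o2.u) / k, o1.a)

def expectation(o):
    """Probability expectation E = b + a*u (reliability trust)."""
    return o.b + o.a * o.u

# Hypothetical opinions along the two trust paths.
alice_bob = Opinion(0.8, 0.1, 0.1, 0.5)
bob_david = Opinion(0.7, 0.1, 0.2, 0.5)
alice_claire = Opinion(0.6, 0.2, 0.2, 0.5)
claire_david = Opinion(0.9, 0.0, 0.1, 0.5)

path1 = discount(alice_bob, bob_david)       # Alice-Bob-David
path2 = discount(alice_claire, claire_david)  # Alice-Claire-David
alice_david = consensus(path1, path2)         # W(Alice:David)
print(expectation(alice_david))
```

Both operators preserve the invariant b + d + u = 1, so the fused W(Alice:David) is again a well-formed opinion.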

Reputation computation engines
Fuzzy logic
- Reputation is represented as a fuzzy measure, with membership functions describing degrees of trust; e.g., trust values in the range [−1.25, 1.25] denote very low trust, and so on.
- A node with a trust value of 0.25 is 75% "very low trust" (one membership function) and 25% "low trust" (another membership function).
- Fuzzy logic provides the rules for reasoning with such fuzzy measures.
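Partial membership in overlapping trust levels can be sketched with triangular membership functions. The functions below are hypothetical choices picked only to reproduce the 75%/25% example above, not the survey's actual definitions:

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical overlapping trust levels: "very low" peaks at 0,
# "low" peaks at 1, and they overlap on (0, 1).
def very_low(x):
    return triangular(x, -1.0, 0.0, 1.0)

def low(x):
    return triangular(x, 0.0, 1.0, 2.0)

t = 0.25
print(very_low(t), low(t))  # 0.75 0.25 -- partial membership in both sets
```

Fuzzy inference rules then combine such membership degrees (e.g., min for AND, max for OR) to derive an overall trust conclusion.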

Reputation computation engines
Flow model
- Trust or reputation is computed by transitive iteration through looped or arbitrarily long chains.
- There is typically a constant reputation weight for the whole community, so participants can only increase their reputation at the cost of others.
- Google: many hyperlinks into a web page will contribute to increased PageRank, whereas many hyperlinks out of a web page will contribute to decreased PageRank of that web page.
- Let P be the set of hyperlinked web pages, and let u and v denote web pages in P. Let N−(u) denote the set of web pages pointing to u, let N+(v) denote the set of web pages that v points to, and let E be some vector over P corresponding to a source of rank. Then the PageRank of a web page u is:

  R(u) = c · Σ_{v ∈ N−(u)} R(v) / |N+(v)| + c·E(u)

  where c is chosen such that Σ_{u ∈ P} R(u) = 1.
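The flow model can be sketched as power iteration over a link graph: each page repeatedly distributes its rank equally among its out-links, with the damping term playing the role of the rank source E. The graph, damping factor, and iteration count below are illustrative choices:

```python
def pagerank(links, damping=0.85, iters=50):
    """Simplified PageRank.

    links: dict mapping each page to the list of pages it points to (N+).
    Assumes no dangling pages, so total rank is conserved at 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # The (1 - damping) share is the constant rank source E(u).
        new = {p: (1 - damping) / n for p in pages}
        for v, outs in links.items():
            share = damping * rank[v] / len(outs)
            for u in outs:  # v passes rank to every page it links to
                new[u] += share
        rank = new
    return rank

# Tiny hypothetical web: c gets two in-links, b only one.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(sum(ranks.values()))  # total rank stays ~1 (fixed community weight)
```

Note the zero-sum property the slide mentions: the total rank is constant, so a page can only gain rank at the expense of others.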

Summary
Criteria for evaluating reputation computation engines:
(1) Accuracy for long-term performance: distinguish between a new entity of unknown quality and an entity with poor long-term performance.
(2) Weighting toward current behavior: the system must recognize and reflect recent trends in entity performance.
(3) Robustness against attacks: the system should resist attempts by entities to manipulate reputation scores.
(4) Smoothness: adding any single rating (i.e., recommendation) should not influence the score significantly.

Summary
Criteria (1), (2) and (4) are easily satisfied by most reputation engines. Criterion (3), on the other hand, will probably never be solved completely, because there will always be new and unforeseen attacks for which solutions must be found.
The problems of unfair ratings (due to subjectivity) and ballot stuffing (due to collusion) are probably the hardest to solve in any reputation system based on subjective ratings from participants. How to detect and filter out unfair ratings and ballot stuffing (e.g., discounting based on statistical analysis or reputation/trust weights) is an active research direction.