
1-1 Incentive Mechanisms for Large Collaborative Resource Sharing
Objectives:
 Why resource harnessing
 Examples of resource harnessing
  Grid computing
  P2P computing
 Resource sharing
  Assumptions
  Considerations
 What are incentives?
 Trust as a mechanism to provide incentives

1-2 Resource Harnessing
 Huge interest in linking up resources
  Grid computing, P2P computing, computing utilities, etc.
 It is all about sharing
  Quality of service
  Security
  Participation versus cost

1-3 Resource Harnessing: Grid Example
 A Virtual Private Grid (VPG) is a framework for “renting” a collection of resources
 The “collection” is defined as follows:
  able to deliver predefined performance metrics
  performance delivered at predefined geographical locations
  cost of provisioning is optimized or bounded

1-4 Resource Harnessing: Grid Example
 [Figure: grid resources (GRs) within a grid domain are multiplexed to form a base VPG resource (VPGR)]

1-5 Resource Harnessing: Grid Example
 The service originator (SO) presents the VPG specification (location spec, QoS specs, cost preference) to a VPG Manager (VPGM)
 The VPGM negotiates with different Grids via a MetaGrid Resolver (MGR)
 Grids (GRs) bid for the VPG creation requests with QoS/cost offers
 The VPGM selects the best bid (see the sketch below)
 [Figure: the SO sends the VPG spec to the VPGM; the VPGM issues VPG creation requests to GRs through the MGR; GRs return bids, followed by contract negotiation, grid engineering, and admission control]
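To make the bid-selection step concrete, here is a minimal sketch of how a VPGM might rank bids. The Bid fields, the weighted QoS/cost scoring rule, and the select_best_bid helper are illustrative assumptions; the slides only say that the VPGM selects the best bid among QoS/cost offers.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    """A GR's offer for a VPG creation request (fields are illustrative)."""
    grid_id: str
    qos_score: float   # how well the offer meets the QoS spec (higher is better)
    cost: float        # provisioning cost quoted by the grid

def select_best_bid(bids, qos_weight=0.7, cost_weight=0.3):
    """Pick the bid with the best QoS/cost trade-off (assumed scoring rule)."""
    if not bids:
        return None
    max_cost = max(b.cost for b in bids) or 1.0
    def score(b):
        # Reward QoS, penalize cost normalized by the most expensive offer.
        return qos_weight * b.qos_score - cost_weight * (b.cost / max_cost)
    return max(bids, key=score)

# Example: three grids answer a VPG creation request.
bids = [Bid("GR-1", 0.9, 120.0), Bid("GR-2", 0.8, 80.0), Bid("GR-3", 0.95, 200.0)]
print(select_best_bid(bids).grid_id)
```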

1-6 Resource Sharing
 Assumptions
  Resource owners have committed their resources:
   honestly
   to be used efficiently
   to be used for the overall good of the community
 Considerations
  Free riding
  Malicious entities
  Non-cooperative entities
Incentives are needed for resources to cooperate honestly

1-7 Resource Harnessing: P2P Example
 Since we deal with public resources, we need to address the following:
  How can we encourage resources to cooperate?
   70% of all users do not share files
   50% of all requests are satisfied by the top 1% of sharing hosts
  How can we deal with security?
   We do not want security to become an overhead!
 Can we use “trust” as an incentive?

1-8 Trust Considerations
 How can we define “trust” in an operational way? Who will evaluate trust?
 Maintaining trust can be an expensive process, especially in a very large-scale system. Hence, our task is to come up with an efficient model for maintaining trust:
  techniques for managing and evolving trust in a large-scale distributed system
  mechanisms for maintaining trust from ongoing transactions

1-9 Overall Trust Model

1-10 Trust Terminology
 Identity trust
 Behavior trust
 Honesty
 Accuracy
 Set of recommenders
 Set of trusted allies
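As a rough aid, this terminology can be pictured as per-domain state. The following is a minimal sketch with assumed field names; the slides do not define a concrete data structure.

```python
from dataclasses import dataclass, field

@dataclass
class TrustState:
    """Illustrative per-domain trust state; all field names are assumed."""
    identity_trust: int = 1          # trust in who the peer claims to be (1-5 scale)
    behavior_trust: int = 1          # trust earned from observed behavior (1-5 scale)
    honesty: dict = field(default_factory=dict)      # recommender id -> judged honest? (bool)
    accuracy: dict = field(default_factory=dict)     # recommender id -> typical shift in its recommendations
    recommenders: set = field(default_factory=set)   # set R of recommenders consulted
    trusted_allies: set = field(default_factory=set) # set T of domains trusted to monitor/verify
```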

1-11 Trust Model Characteristics
 To make the trust model efficient:
  the overall network computing (NC) system is divided into network computing domains (NCDs)
  trust is a slowly varying attribute
  the number of contexts is limited to printing, storage, and computing

 Trust Level (TL) | Equivalent numerical value | Description
 A                | 1                          | very low TL
 B                | 2                          | low TL
 C                | 3                          | medium TL
 D                | 4                          | high TL
 E                | 5                          | very high TL
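For reference, the five-level scale in the table maps directly to a lookup; this tiny snippet simply restates the table above.

```python
# Trust-level scale from the table above (letter grade -> numeric value -> description).
TRUST_LEVELS = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
DESCRIPTIONS = {1: "very low TL", 2: "low TL", 3: "medium TL", 4: "high TL", 5: "very high TL"}
```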

1-12 Why Behavior Trust
 Trust attribute | Identity    | Behavior
 Importance      | foundation  | layer
 Cost            | fixed       | variable
 Changeability   | very seldom | yes
 Nature          | given       | gained
 Replacement     | yes         | no
 Propagation     | immediate   | with time
 Perception      | exists      | learned

1-13 Notation
 Let R and T represent the recommenders set and the trusted allies set, respectively
 Let the honesty of a recommender, as observed by a given domain, be denoted by an honesty indicator
 Let one symbol denote the recommendation for a target domain given by a recommender to a requesting domain at a given time for a given context
 Let a second symbol denote the recommendation for the same target given by the same recommender to a different domain, for the same time and context

1-14 Computing Honesty
 Let a difference term be the gap between the two recommendations that the same recommender gives about the same target for the same time and context
 The value of this difference will be less than a small threshold if the recommender is honest
 Therefore, the honesty of the recommender is computed by checking this difference against the threshold (see the sketch below)
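The slide's formula did not survive transcription; the following is a minimal sketch of the check it describes, where the threshold name `epsilon`, its value, and the pairwise-comparison structure are assumptions.

```python
def is_honest(recommendations, epsilon=0.5):
    """Judge a recommender honest if all the recommendations it gave about the
    same target (same context, same time window) agree to within epsilon.

    `recommendations` is a list of numeric trust levels (1-5) the recommender
    reported to different domains; the threshold value is assumed.
    """
    return all(abs(a - b) <= epsilon
               for i, a in enumerate(recommendations)
               for b in recommendations[i + 1:])

# Example: telling two allies 4 and 4 -> honest; telling one 4 and another 1 -> dishonest.
print(is_honest([4, 4]), is_honest([4, 1]))
```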

1-15 Computing Accuracy
 Let the true trust level of the target be the level obtained by the monitoring domain as a result of monitoring the transaction
 Let the shift be the difference between this true trust level and the recommended trust level
 The value of the shift will be an integer ranging from 0 to 4
 Therefore, the accuracy of the recommender is computed from the shifts observed over its past recommendations (see the sketch below)
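Again the formula itself is missing from the transcript; a minimal sketch consistent with the description (trust levels on the 1-5 scale, so shifts fall in 0-4) might look like this. Summarizing a recommender by the mean of its shifts is an assumption.

```python
def recommendation_shift(true_tl, recommended_tl):
    """Shift between the monitored (true) TL and the recommended TL.
    With TLs in 1..5 the shift is an integer in 0..4."""
    return abs(true_tl - recommended_tl)

def accuracy_shift(history):
    """Characteristic shift of a recommender over past (true TL, recommended TL)
    pairs; using the mean here is an assumption."""
    shifts = [recommendation_shift(t, r) for t, r in history]
    return sum(shifts) / len(shifts) if shifts else 0.0

# Example: a recommender that is consistently one level too optimistic.
print(accuracy_shift([(3, 4), (2, 3), (4, 5)]))  # -> 1.0
```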

1-16 Computing Trust & Reputation
 Before a domain can use the recommendation given by a recommender to calculate the reputation of the target, the recommendation needs to be adjusted to reflect the accuracy of that recommender
 The adjustment is given by the recommender's characteristic shift (see the sketch below)
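The adjustment formula is not in the transcript; as a sketch, one way to apply the recommender's characteristic shift before using its recommendation. The sign convention and the clamping to the 1-5 scale are assumptions for illustration.

```python
def adjust_recommendation(recommended_tl, mean_shift, optimistic=True):
    """Correct a recommendation by the recommender's characteristic shift.

    If the recommender historically over-reports by `mean_shift` levels,
    subtract it (and vice versa); clamp to the 1-5 trust-level scale.
    """
    adjusted = recommended_tl - mean_shift if optimistic else recommended_tl + mean_shift
    return max(1.0, min(5.0, adjusted))

print(adjust_recommendation(5, 1.0))  # an over-optimistic recommender's 5 counts as 4
```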

1-17 Computing Trust & Reputation
 The trust relationship between two domains is expressed for a given context and time
 The direct (transaction-based) trust relationship and the reputation of the target are expressed separately
 Past observations are discounted by a decay function of the elapsed time
 The overall trust is obtained by combining direct trust and reputation, each weighted by a coefficient (see the sketch below)
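The combining equation did not survive extraction; a commonly used form consistent with the slide (a weighted sum of decayed direct trust and reputation, with weights `alpha` and `beta` summing to 1) is sketched below as an assumption, not as the authors' exact formula.

```python
import math

def decay(age, rate=0.1):
    """Discount old observations; the exponential form and rate are assumptions."""
    return math.exp(-rate * age)

def overall_trust(direct_trust, reputation, age, alpha=0.5, beta=0.5):
    """Combine decayed direct trust with reputation.

    direct_trust, reputation: trust levels in [1, 5]
    age: time elapsed since the direct-trust observation
    alpha + beta is expected to be 1 (the weighting is an assumption).
    """
    return alpha * decay(age) * direct_trust + beta * reputation

print(overall_trust(direct_trust=4, reputation=3, age=5))
```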

1-18 Simulation Setup
 A discrete-event simulator was used
 The transaction arrival process was modeled as a Poisson random process
 30 NCDs were used in the simulation
 The size of the recommenders set R is fixed and set to 4
 The size of the trusted allies set T is fixed and set to 3
 The TLs were randomly generated from [1, 5]
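A minimal sketch of such a setup; only the counts 30, 4, 3 and the 1-5 TL range come from the slide, while the seed, the arrival rate, and the overall structure are assumptions.

```python
import random

random.seed(42)                      # assumed; not specified on the slide
NUM_NCDS = 30                        # number of network computing domains
R_SIZE, T_SIZE = 4, 3                # sizes of the recommenders and trusted allies sets
ARRIVAL_RATE = 1.0                   # assumed mean transaction rate (Poisson process)

ncds = list(range(NUM_NCDS))
# Randomly generated "ground truth" trust levels between every ordered pair of NCDs.
true_tl = {(x, y): random.randint(1, 5) for x in ncds for y in ncds if x != y}
# Fixed-size recommenders set R and trusted allies set T per NCD.
recommenders = {x: random.sample([n for n in ncds if n != x], R_SIZE) for x in ncds}
allies = {x: random.sample([n for n in ncds if n != x], T_SIZE) for x in ncds}

def next_arrival():
    """Inter-arrival time of transactions under a Poisson process (exponential)."""
    return random.expovariate(ARRIVAL_RATE)
```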

1-19 Performance Measurement
 The measure of performance used is the ability of the trust model to correctly predict the trust that exists between two NCDs
 This is quantified by the success ratio, i.e., the fraction of trust predictions that turn out to be correct (see the sketch below)
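The formula on the slide did not survive extraction; the obvious reading is sketched here, where counting a prediction as correct only when the predicted TL equals the monitored TL is an assumption.

```python
def success_ratio(predictions):
    """Fraction of correct trust predictions.

    `predictions` is a list of (predicted_tl, actual_tl) pairs; exact TL match
    as the correctness criterion is an assumption.
    """
    if not predictions:
        return 0.0
    correct = sum(1 for predicted, actual in predictions if predicted == actual)
    return correct / len(predictions)

print(success_ratio([(4, 4), (3, 2), (5, 5), (1, 1)]))  # -> 0.75
```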

1-20 Performance Evaluation
 Using accuracy & honesty measures: success ratio with 150 transactions per relation (columns: number of malicious domains)

 Monitor frequency | value | 0      | 10     | 20
 1                 | 1.0   | 100%   | 100%   | 100%
 1                 | 0.5   | 100%   | 100%   | 100%
 1                 | 0.0   | 100%   | 100%   | 100%
 10                | 1.0   | 98.39% | 92.76% | 91.95%
 10                | 0.5   | 100%   | 97.24% | 98.51%
 10                | 0.0   | 100%   | 98.04% | 99.54%
 20                | 1.0   | 93.45% | 82.98% | 81.38%
 20                | 0.5   | 99.77% | 82.99% | 81.72%
 20                | 0.0   | 100%   | 79.54% | 78.74%

1-21 Performance Evaluation
 Using the accuracy measure: success ratio with 150 transactions per relation (columns: number of malicious domains)

 Monitor frequency | value | 0      | 10     | 20
 1                 | 1.0   | 100%   | 100%   | 100%
 1                 | 0.5   | 100%   | 100%   | 100%
 1                 | 0.0   | 100%   | 100%   | 100%
 10                | 1.0   | 98.62% | 93.22% | 92.30%
 10                | 0.5   | 100%   | 95.86% | 92.53%
 10                | 0.0   | 100%   | 96.09% | 91.72%
 20                | 1.0   | 94.37% | 82.18% | 80.22%
 20                | 0.5   | 99.66% | 78.62% | 71.03%
 20                | 0.0   | 100%   | 62.41% | 47.13%

1-22 Performance Evaluation
 Using accuracy & honesty measures: success ratio progress (columns: number of iterations per relation)

 0 malicious NCDs, monitor frequency 20:
 value | 5      | 10     | 25     | 50     | 150
 1.0   | 62.07% | 65.06% | 71.26% | 80.69% | 93.45%
 0.5   | 80.69% | 83.45% | 87.93% | 93.56% | 99.77%
 0.0   | 92.76% | 96.09% | 98.51% | 100%   | 100%

 10 malicious NCDs, monitor frequency 20:
 value | 5      | 10     | 25     | 50     | 150
 1.0   | 51.26% | 53.68% | 59.20% | 65.40% | 82.99%
 0.5   | 49.89% | 52.87% | 55.63% | 61.38% | 82.99%
 0.0   | 49.77% | 49.77% | 50.11% | 52.64% | 79.54%

1-23 Case Study: Trust Modeling on P2P Grids
 The P2P Grid is segmented into Grid domains (GDs)
 Two virtual domains are associated with each GD:
  a resource domain and a client domain
 Each resource domain has 3 attributes:
  ownership
  the types of activities (ToA) it supports
  a TL for each ToA
 Similarly, each client domain has 3 attributes (see the sketch below)
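A minimal sketch of these per-domain attributes; the field names, and the idea that the client domain mirrors the resource domain's attributes with required rather than offered TLs, are assumptions drawn only from this slide.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceDomain:
    owner: str
    activity_tls: dict = field(default_factory=dict)   # ToA -> trust level offered (1-5)

@dataclass
class ClientDomain:
    owner: str
    activity_rtls: dict = field(default_factory=dict)  # ToA -> trust level required (1-5)

@dataclass
class GridDomain:
    name: str
    resource_domain: ResourceDomain = None
    client_domain: ClientDomain = None
```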

1-24 Case Study: Trust Modeling on P2P Grids
 Suppose a client from one GD wants to engage in two activities on a resource at another GD
 Offered TL (OTL) = min(TL for the first activity, TL for the second activity)
 There are two required TLs (RTLs):
  one from the client domain
  one from the resource domain
 Expected trust supplement (ETS) = RTL - OTL (see the sketch below)
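A minimal sketch of the OTL/ETS computation as the slide states it; the dictionary of per-activity TLs and the example numbers are illustrative.

```python
def offered_tl(resource_tls, activities):
    """OTL = min over the requested activities of the TL the resource offers."""
    return min(resource_tls[a] for a in activities)

def expected_trust_supplement(rtl, otl):
    """ETS = RTL - OTL; a positive value means extra trust is required."""
    return rtl - otl

# A resource offering two activities; the client requests both.
resource_tls = {"storage": 4, "computing": 3}
otl = offered_tl(resource_tls, ["storage", "computing"])   # min(4, 3) = 3
print(otl, expected_trust_supplement(rtl=5, otl=otl))      # ETS = 5 - 3 = 2
```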

1-25 Case Study: Trust Modeling on P2P Grids
 An example of the ETS table

1-26 Case Study: Trust Modeling on P2P Grids
 A batch-mode mapping heuristic called the “Sufferage heuristic” was used (see the sketch below)

 Expected cost | machine one | machine two
 task one      | 30          | 35
 task two      | 35          | 50
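The heuristic is not spelled out on the slide; the standard formulation (sufferage = second-best completion time minus best completion time; in each step, map the task that would suffer most if denied its best machine) is sketched here as an assumption about what was used, in a simplified one-assignment-per-iteration form.

```python
def sufferage_schedule(eec):
    """Batch-mode Sufferage mapping (simplified variant).

    eec[t] is the list of expected execution costs of task t on each machine.
    Returns a dict task -> machine index; machine ready times start at zero.
    """
    num_machines = len(next(iter(eec.values())))
    ready = [0.0] * num_machines
    unassigned = set(eec)
    mapping = {}
    while unassigned:
        best_task, best_machine, best_suff = None, None, -1.0
        for t in unassigned:
            # Completion time of t on each machine = machine ready time + cost.
            completions = sorted((ready[m] + eec[t][m], m) for m in range(num_machines))
            suff = (completions[1][0] - completions[0][0]) if num_machines > 1 else 0.0
            if suff > best_suff:
                best_task, best_machine, best_suff = t, completions[0][1], suff
        mapping[best_task] = best_machine
        ready[best_machine] += eec[best_task][best_machine]
        unassigned.remove(best_task)
    return mapping

# The 2x2 example from the table above: task two suffers more (50-35=15 vs 35-30=5),
# so it is mapped to machine one first, and task one then goes to machine two.
print(sufferage_schedule({"task one": [30, 35], "task two": [35, 50]}))
```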

1-27 Case Study: Trust Modeling on P2P Grids
 Two different classes of Expected Execution Cost (EEC) matrices were used:
  Consistent, low task / low machine (LoLo) heterogeneity: models networks of “related” machines that are “similar” in performance
  Inconsistent, low task / low machine (LoLo) heterogeneity: models networks where the machines are not related
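One common way to generate such matrices in mapping-heuristic studies is sketched below; the heterogeneity ranges and the row-sorting step used to make a matrix consistent are assumptions, not details taken from the slide.

```python
import random

def make_eec(num_tasks, num_machines, task_het=10, machine_het=10, consistent=False):
    """Generate an Expected Execution Cost matrix with low task / low machine
    heterogeneity. Sorting each row makes the matrix consistent: a machine that
    is faster for one task is faster for every task."""
    eec = []
    for _ in range(num_tasks):
        base = random.uniform(1, task_het)                # task's baseline workload
        row = [base * random.uniform(1, machine_het) for _ in range(num_machines)]
        if consistent:
            row.sort()                                    # machine 0 is always fastest
        eec.append(row)
    return eec

random.seed(0)
print(make_eec(2, 3, consistent=True))
print(make_eec(2, 3, consistent=False))
```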

1-28 Case Study: Performance Evaluation

