1
Approximated Provenance for Complex Applications
Susan B. Davidson, University of Pennsylvania
Eleanor Ainy, Daniel Deutch, Tova Milo, Tel Aviv University
2
Crowd Sourcing
The engagement of crowds of Web users for data procurement and knowledge creation.
Crowd-sourcing: harness the crowd to perform some task.
Galaxy Zoo: classifying galaxies according to their shape.
CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart.
3
Why now? We are all connected, all the time!
4
Complexity? Many of the initial applications were quite simple:
Specify a Human Intelligence Task (HIT) using e.g. Mechanical Turk, collect responses, and aggregate them to form the result.
Newer ideas are multi-phase and complex, e.g. mining frequent fact sets from the crowd (OASSIS).
Model these as workflows with global state.
5
Outline
“State-of-the-art” in crowd data provenance
New challenges
A proposal for modeling crowd data provenance
6
Outline
“State-of-the-art” in crowd data provenance
New challenges
A proposal for modeling crowd data provenance
7
Crowd data provenance?
TripAdvisor: aggregates reviews and presents average ratings. Individual reviews are part of the provenance.
Wikipedia: keeps extensive information about how pages are edited: the ID of the user who generated the page as well as changes to the page (when, who, summary). Provides several views of this information, e.g. by page or by editor.
Mainly used for presentation and explanation.
8
The TripAdvisor rank is based on an algorithm that accounts not only for the ratings given, but also for the number of reviews and the age of reviews, with newer reviews receiving more weight than older ones. What that specific algorithm is, none of us knows.
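Since the actual algorithm is unknown, the following is a purely hypothetical sketch of one recency-weighted scheme consistent with that description; the exponential decay and the half-life parameter are invented for illustration and are not TripAdvisor's method.

```python
from datetime import date

# Hypothetical recency-weighted rating: NOT TripAdvisor's actual algorithm,
# which is unknown. Each review's weight decays exponentially with its age,
# so newer reviews contribute more to the aggregate score.
def weighted_rating(reviews, today, half_life_days=365):
    """reviews: list of (score, review_date) pairs."""
    num = den = 0.0
    for score, d in reviews:
        age_days = (today - d).days
        w = 0.5 ** (age_days / half_life_days)  # assumed decay; parameter is made up
        num += w * score
        den += w
    return num / den if den else None

reviews = [(5, date(2024, 12, 1)), (3, date(2022, 6, 15)), (4, date(2023, 9, 30))]
print(weighted_rating(reviews, today=date(2025, 1, 1)))  # newer 5 dominates the old 3
```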
16
Outline
“State-of-the-art” in crowd data provenance
New challenges
A proposal for modeling crowd data provenance
17
Challenges for crowd data provenance
Complexity of processes and the number of user inputs involved: provenance can be very large, leading to difficulties in viewing and understanding it.
Need for:
Summarization
Multidimensional views
Provenance mining
Compact representation for maintenance and cleaning
Data is collected through a variety of means, ranging from simple questions (e.g. “In which picture is this person the oldest?” or “What is the capital of France?”), to Datalog-style reasoning [7], to dynamic processes that evolve as answers are received from the crowd (e.g. mining association rules from the crowd [4]). The complexity of the process, together with the number of user inputs involved in the derivation of even a single value, leads to an especially large provenance size, which in turn makes the information harder to view and understand.
18
Summarization
Large size of provenance → need for abstraction.
E.g., in heavily edited Wikipedia pages: “x1, x2, x3 are formatting changes; y1, y2, y3, y4 add content; z1, z2 represent divergent viewpoints”, “u1, u2, u3 represent edits by robots; v1, v2 represent edits by Wikipedia administrators”.
E.g., in a movie-rating application, to summarize the provenance of the average rating for “MatchPoint”: “Audience crowd members gave higher ratings (8-10) whereas critics gave lower ratings (3-5).”
19
Multidimensional Views
A “perspective” through which provenance can be viewed or mined.
E.g., in TripAdvisor, if there is an “outlier” review it would be useful to see other reviews by that person to “calibrate” it.
A “question” perspective could show which questions are bad/unclear.
20
Maintenance and Cleaning
May need update propagation to remove certain users, questions and/or answers, e.g. spammers or bad questions.
Mining of provenance may lag behind the aggregate calculation. E.g., detecting a spammer may only be possible once they have answered enough questions, or once enough answers have been obtained from other users. Note that the aggregate calculation may in turn have already been used in a downstream calculation, or have been used to direct the process itself.
21
Outline
“State-of-the-art” in crowd data provenance
New challenges
A proposal for modeling crowd data provenance
22
Crowd Sourcing Workflow
Consider a movie reviews aggregator platform, whose logic is captured by the workflow in Figure 1. Inputs for the platform are reviews (scores) from users, whose identifiers and names are stored in the Users table. Users have different roles (e.g. movie critics, directors, audience); information about two such roles, Critics and Audience, is shown in the corresponding relations. Reviews are fed through different reviewing modules, which “crawl” different reviewing platforms such as IMDB, newspaper web-sites, etc. Each such module updates statistics in the Stats table, e.g. how many reviews the user has submitted (NumRate) and what their average score is (computed as SumRate divided by NumRate). A reviewing module also consults Stats to output a “sanitized review”, by implementing some logic. The sanitized reviews are then fed to an aggregator, which computes aggregate movie scores. There are many plausible logics for the reviewing modules; we exemplify one in which each module “sanitizes” the reviews by joining the Users, Reviews and Audience/Critics relations (depending on the module), keeping only reviews of users who are listed under the corresponding role (audience/critic) and are “active”, i.e. have submitted more than 2 reviews. The aggregator combines the reviews obtained from all modules to compute overall movie ratings (sum, num, avg). A sketch of this logic appears below. [Figure 1: Movie reviews aggregator platform]
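A minimal sketch of the sanitize-and-aggregate logic just described, using in-memory tables; the schemas and sample data are assumptions, and a real implementation would read the Users, Stats, Audience/Critics and Reviews relations from the platform.

```python
# Minimal sketch of one reviewing-module logic described above; schemas
# and sample data are illustrative assumptions.
users = {"U1": "Ann", "U2": "Bob", "U3": "Carl"}            # id -> name
audience = {"U1", "U2"}                                      # ids with the Audience role
stats = {"U1": {"NumRate": 5, "SumRate": 40},
         "U2": {"NumRate": 1, "SumRate": 9},
         "U3": {"NumRate": 7, "SumRate": 21}}
reviews = [("U1", "MatchPoint", 8), ("U2", "MatchPoint", 9), ("U3", "MatchPoint", 4)]

def sanitize(role_ids):
    """Keep only reviews by users listed under the role who are 'active',
    i.e. have submitted more than 2 reviews according to Stats."""
    return [(u, movie, score) for (u, movie, score) in reviews
            if u in role_ids and u in users and stats[u]["NumRate"] > 2]

def aggregate(sanitized):
    """Combine sanitized reviews into per-movie (sum, num, avg)."""
    agg = {}
    for _, movie, score in sanitized:
        s, n = agg.get(movie, (0, 0))
        agg[movie] = (s + score, n + 1)
    return {m: (s, n, s / n) for m, (s, n) in agg.items()}

print(aggregate(sanitize(audience)))   # {'MatchPoint': (8, 1, 8.0)}: U2 is inactive
```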
23
Provenance expression
We want an expression that captures the users (U), their type, e.g. Audience (A), the logic involved (the stats (S) for the user (U)), their answers, and how these are combined to compute the final result.
24
Propagating provenance annotations through joins
R:
A | B | C | annotation
a | b | c | p

S:
D | B | E | annotation
d | b | e | r

R ⋈ S (join on B):
A | B | C | D | E | annotation
a | b | c | d | e | p * r

The annotation p * r means joint use of data annotated by p and data annotated by r. [Green, Karvounarakis, Tannen, Provenance Semirings. PODS 2007]
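A minimal sketch of this propagation, with relations held as lists of (tuple, annotation) pairs and annotations kept symbolic as strings; the encoding is an assumption for illustration.

```python
# Sketch: propagating semiring annotations through a natural join on B.
# Annotations stay symbolic; "*" denotes joint use (semiring product).
R = [({"A": "a", "B": "b", "C": "c"}, "p")]
S = [({"D": "d", "B": "b", "E": "e"}, "r")]

def join_on_B(r_rel, s_rel):
    out = []
    for r_tup, r_ann in r_rel:
        for s_tup, s_ann in s_rel:
            if r_tup["B"] == s_tup["B"]:
                merged = {**r_tup, **s_tup}
                out.append((merged, f"({r_ann} * {s_ann})"))  # joint use
    return out

print(join_on_B(R, S))
# [({'A': 'a', 'B': 'b', 'C': 'c', 'D': 'd', 'E': 'e'}, '(p * r)')]
```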
25
Propagating provenance annotations through unions and projections
R:
A | B | C | annotation
a | b | c1 | p
a | b | c2 | r
a | b | c3 | s

πAB(R):
A | B | annotation
a | b | p + r + s

+ means alternative use of data, which arises in both PROJECT and UNION. [Green, Karvounarakis, Tannen, Provenance Semirings. PODS 2007]
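A companion sketch for projection, under the same assumed encoding as above: tuples agreeing on (A, B) are collapsed and their annotations combined with +.

```python
# Sketch: projection π_AB collapses tuples that agree on (A, B); their
# annotations are combined with "+" (alternative use / semiring sum).
R = [({"A": "a", "B": "b", "C": "c1"}, "p"),
     ({"A": "a", "B": "b", "C": "c2"}, "r"),
     ({"A": "a", "B": "b", "C": "c3"}, "s")]

def project_AB(rel):
    anns = {}
    for tup, ann in rel:
        anns.setdefault((tup["A"], tup["B"]), []).append(ann)
    return [({"A": a, "B": b}, " + ".join(group)) for (a, b), group in anns.items()]

print(project_AB(R))  # [({'A': 'a', 'B': 'b'}, 'p + r + s')]
```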
26
Annotated Aggregate Expressions
Q = select Dept, sum(Sal) from R group by Dept

R:
Eid | Dept | Sal | annotation
1 | d1 | 20 | p1
2 | d1 | 10 | p2
3 | d1 | 15 | p3

The sum salary for d1 could be represented by the expression (p1 ⊗ 20) + (p2 ⊗ 10) + (p3 ⊗ 15). This provenance-aware value “commutes” with deletion. [Amsterdamer, Deutch, Tannen, Provenance for Aggregate Queries. PODS 2011]
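A minimal sketch of why such a value “commutes” with deletion, assuming a simplified representation of the aggregate as a list of (annotation, value) pairs and 0/1 valuations of annotations; this is a toy version of the paper's semimodule formalism.

```python
# Sketch: a provenance-aware aggregate as a list of (annotation, value)
# pairs, p_i ⊗ v_i, summed only when evaluated. Mapping an annotation to 0
# deletes its tuple's contribution without recomputing from scratch.
sum_sal_d1 = [("p1", 20), ("p2", 10), ("p3", 15)]

def evaluate(expr, valuation):
    """valuation maps each annotation to 0 (deleted) or 1 (present)."""
    return sum(valuation[ann] * val for ann, val in expr)

print(evaluate(sum_sal_d1, {"p1": 1, "p2": 1, "p3": 1}))  # 45
print(evaluate(sum_sal_d1, {"p1": 1, "p2": 0, "p3": 1}))  # 35: tuple p2 deleted
```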
27
Provenance expression
28
Provenance expression: Benefits
Can understand how movie ratings were computed.
Can be used for data maintenance and cleaning: e.g., if U2 is discovered to be a spammer, “map” its provenance annotation to 0.
29
Summarizing provenance
Map annotations to a corresponding “summary” h: Ann → Ann′, where |Ann′| << |Ann|.
E.g., in our example, let h(Ui) = h(Si) = 1, h(Ai) = A, h(Ci) = C, reducing the provenance expression to a smaller one, which then simplifies further (the expressions themselves are shown on the slides).
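A minimal sketch of applying such a mapping h, assuming the provenance expression is held as a nested term of + and * nodes; the encoding and the concrete expression are illustrative assumptions.

```python
# Sketch: apply a summary mapping h to a provenance expression held as a
# nested binary term: ("+", left, right), ("*", left, right), or a leaf.
h = {"U1": "1", "U2": "1", "S1": "1", "S2": "1", "A1": "A", "A2": "A"}

def summarize(expr):
    if isinstance(expr, tuple):
        op, left, right = expr
        return (op, summarize(left), summarize(right))
    return h.get(expr, expr)  # leaf: map the annotation to its summary

# (U1 * A1 * S1) + (U2 * A2 * S2), written as nested binary terms:
expr = ("+", ("*", "U1", ("*", "A1", "S1")), ("*", "U2", ("*", "A2", "S2")))
print(summarize(expr))
# ('+', ('*', '1', ('*', 'A', '1')), ('*', '1', ('*', 'A', '1')))
# Since 1 * A * 1 = A in the semiring, both summands collapse and the
# summary simplifies to A + A.
```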
30
Constructing mappings?
How do we define and find “good” mappings? Criteria include (see the sketch below):
Provenance size.
Semantic constraints (e.g., two annotations can only be mapped to the same annotation if they come from the same input table).
Distance between the original provenance expression and the mapped expression (e.g., grouping all young French people and giving them an average rating for some movie).
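A minimal sketch of scoring candidate mappings against the first two criteria, size and the same-table semantic constraint; the source-table assignment and the scoring rule are assumptions for illustration.

```python
# Sketch: score a candidate mapping h by (i) size reduction and
# (ii) the semantic constraint that merged annotations share a source table.
source_table = {"U1": "Users", "U2": "Users", "A1": "Audience", "C1": "Critics"}

def valid(h):
    """Annotations mapped to the same summary must share a source table."""
    seen = {}
    for ann, summary in h.items():
        if seen.setdefault(summary, source_table[ann]) != source_table[ann]:
            return False
    return True

def score(h):
    """Prefer valid mappings with fewer distinct summary annotations."""
    return len(set(h.values())) if valid(h) else float("inf")

good = {"U1": "U", "U2": "U", "A1": "A", "C1": "C"}
bad = {"U1": "X", "A1": "X", "U2": "U", "C1": "C"}   # merges across tables
print(score(good), score(bad))  # 3 inf
```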
31
Conclusions
Provenance is needed for crowd-sourcing applications to help understand the results and reason about their quality.
Techniques from database/workflow provenance can be used, but there are special challenges and “opportunities”.