
1 Big Data Infrastructure, Week 4: Analyzing Text (2/2). This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States license; see http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details. CS 489/698 Big Data Infrastructure (Winter 2016), Jimmy Lin, David R. Cheriton School of Computer Science, University of Waterloo, January 28, 2016. These slides are available at http://lintool.github.io/bigdata-2016w/

2 Count. Search! Source: http://www.flickr.com/photos/guvnah/7861418602/

3 Abstract IR Architecture. [Diagram: offline (indexing), documents pass through a representation function to produce document representations, which are built into the index; online (retrieval), the query passes through a representation function to produce a query representation, and a comparison function matches it against the index to produce hits.]

4 [Figure: term-document incidence matrix for four documents (Doc 1 “one fish, two fish”, Doc 2 “red fish, blue fish”, Doc 3 “cat in the hat”, Doc 4 “green eggs and ham”), one row per term (blue, cat, egg, fish, green, ham, hat, one, red, two) and one column per document.] What goes in each cell? Boolean? Count? Positions?

5 [Figure: the same term-document matrix converted to postings lists: for each term, the list of docnos containing it, e.g. fish → 1, 2; blue → 2; cat → 3; ham → 4.] Postings lists (always in sorted order).

6 [Figure: postings lists augmented with term frequencies (tf) and document frequencies (df), e.g. fish → df 2, postings (1, tf 2), (2, tf 2).]

7 [Figure: positional postings: each posting additionally stores the positions of the term within the document, e.g. fish → (1, tf 2, [2,4]), (2, tf 2, [2,4]).]

8 Inverted Indexing with MapReduce. [Figure: each mapper processes one document and emits (term, docno: tf) pairs, e.g. Doc 1 “one fish, two fish” gives (one, 1:1), (two, 1:1), (fish, 1:2); the shuffle aggregates values by key; each reducer writes one term’s postings list, e.g. fish → 1:2, 2:2.] Map → Shuffle and Sort: aggregate values by keys → Reduce.

9 Inverted Indexing: Pseudo-Code
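The pseudo-code image did not survive the transcript. A rough sketch of the baseline algorithm the slide describes, in Scala (names such as Posting and tokenize are illustrative, not from the deck):

    // Baseline inverted indexing: one (term, posting) pair per distinct
    // term in each document; the framework groups pairs by term.
    case class Posting(docno: Int, tf: Int)

    def tokenize(doc: String): Seq[String] =
      doc.toLowerCase.split("\\W+").filter(_.nonEmpty).toSeq

    def map(docno: Int, doc: String): Seq[(String, Posting)] =
      tokenize(doc).groupBy(identity).toSeq.map {
        case (term, occurrences) => (term, Posting(docno, occurrences.size))
      }

    // The reducer buffers all postings for a term in memory in order to
    // sort them by docno before writing out the postings list.
    def reduce(term: String, values: Iterable[Posting]): (String, Seq[Posting]) =
      (term, values.toSeq.sortBy(_.docno))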

10 Positional Indexes. [Figure: the same MapReduce flow, with a position list attached to each emitted posting, e.g. (fish, 1:2 [2,4]) from Doc 1; reducers write positional postings lists.] Map → Shuffle and Sort: aggregate values by keys → Reduce.

11 Inverted Indexing: Pseudo-Code What’s the problem?

12 Another Try… [Figure: instead of buffering (fish, list of postings), emit composite (fish, docno) keys, e.g. (fish, 1), (fish, 9), (fish, 21), (fish, 34), (fish, 35), (fish, 80), each with its position list [2,4], [9], [1,8,22], [23], [8,41], [2,9,76] as the value.] How is this different? Let the framework do the sorting. Term frequency implicitly stored. Where have we seen this before?

13 Inverted Indexing: Pseudo-Code
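Again the pseudo-code itself is missing; a sketch of the revised version, which moves the docno into the key (value-to-key conversion) so the in-memory sort disappears (helper names illustrative, tokenize as in the earlier sketch):

    // Emit ((term, docno), positions). With a partitioner on the term
    // alone and a sort order on (term, docno), each reducer sees one
    // term's postings already sorted by docno and writes them as a stream.
    def map(docno: Int, doc: String): Seq[((String, Int), Seq[Int])] =
      tokenize(doc).zipWithIndex
        .groupBy { case (term, _) => term }
        .toSeq
        .map { case (term, occs) =>
          ((term, docno), occs.map { case (_, pos) => pos + 1 })  // 1-based positions
        }
    // tf need not be stored explicitly: it is the length of the position list.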

14 Postings Encoding. Conceptually: fish → (1,2)(9,1)(21,3)(34,1)(35,2)(80,3)… In practice: don’t encode docnos, encode gaps (or d-gaps): fish → (1,2)(8,1)(12,3)(13,1)(1,2)(45,3)… But it’s not obvious that this saves space… (= delta encoding, delta compression, gap compression)
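A tiny sketch of the gap conversion, using the docnos from the slide (1, 9, 21, 34, 35, 80 become gaps 1, 8, 12, 13, 1, 45); the gaps only pay off once they are fed to a variable-length code like those on the next slides:

    // Delta (gap) encoding of a sorted docno list, and its inverse.
    def toGaps(docnos: Seq[Int]): Seq[Int] =
      docnos.zip(0 +: docnos).map { case (cur, prev) => cur - prev }

    def fromGaps(gaps: Seq[Int]): Seq[Int] =
      gaps.scanLeft(0)(_ + _).tail

    // toGaps(Seq(1, 9, 21, 34, 35, 80)) == Seq(1, 8, 12, 13, 1, 45)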

15 Overview of Integer Compression. Byte-aligned techniques: VByte. Bit-aligned: unary codes, γ/δ codes, Golomb codes (local Bernoulli model). Word-aligned: Simple family, bit packing family (PForDelta, etc.)

16 VByte. Simple idea: use only as many bytes as needed. Need to reserve one bit per byte as the “continuation bit”; use the remaining bits for encoding the value. [Figure: one byte holds 7 payload bits, two bytes 14 bits, three bytes 21 bits.] Works okay, easy to implement… Beware of branch mispredicts!
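A minimal VByte sketch; note this convention marks the last byte of a value with the high bit (some implementations flag continuation instead, hence “continuation bit”):

    import scala.collection.mutable.ArrayBuffer

    def vbyteEncode(value: Int, out: ArrayBuffer[Byte]): Unit = {
      var v = value
      while (v >= 128) { out += (v & 0x7F).toByte; v >>>= 7 }  // 7 payload bits per byte
      out += (v | 0x80).toByte  // high bit set marks the final byte
    }

    def vbyteDecode(bytes: Seq[Byte]): Seq[Int] = {
      val out = ArrayBuffer[Int]()
      var v = 0; var shift = 0
      for (b <- bytes) {
        if ((b & 0x80) != 0) {                // the data-dependent branch
          out += v | ((b & 0x7F) << shift)    // the slide's warning is about
          v = 0; shift = 0
        } else { v |= (b & 0x7F) << shift; shift += 7 }
      }
      out.toSeq
    }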

17 Simple-9. How many different ways can we divide up 28 bits? 28 1-bit numbers, 14 2-bit numbers, 9 3-bit numbers, 7 4-bit numbers, … (9 total ways, identified by “selectors”). Efficient decompression with hard-coded decoders. Simple family: the general idea applies to 64-bit words, etc. Beware of branch mispredicts?
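The slide lists only the first few selectors; the full table for the standard scheme (stated from memory, not from the slide) packs a 32-bit word as a 4-bit selector plus 28 data bits:

    // Simple-9 selectors: (how many numbers, bits each). The 9x3, 5x5 and
    // 3x9 configurations leave 1, 3 and 1 of the 28 bits unused.
    val selectors: Seq[(Int, Int)] = Seq(
      28 -> 1, 14 -> 2, 9 -> 3, 7 -> 4, 5 -> 5,
      4 -> 7, 3 -> 9, 2 -> 14, 1 -> 28
    )
    // A decoder switches on the selector and runs one of nine unrolled,
    // hard-coded loops: fast, but the switch itself can mispredict.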

18 Bit Packing. What’s the smallest number of bits we need to code a block (= 128) of integers? [Figure: successive blocks packed at 3, 4, 5, … bits per integer.] Efficient decompression with hard-coded decoders. PForDelta: bit packing + separate storage of “overflow” bits. Beware of branch mispredicts?
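A sketch of the core bit-packing decision, assuming blocks of 128 d-gaps:

    // Width needed to bit-pack a block: every value gets the width of the
    // largest one. PForDelta instead picks a smaller width that covers
    // most values and stores the few "overflow" values separately.
    def bitsNeeded(block: Array[Int]): Int = {
      val max = block.max
      if (max == 0) 1 else 32 - Integer.numberOfLeadingZeros(max)
    }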

19 Golomb Codes. For x ≥ 1 and parameter b: q + 1 in unary, where q = ⌊(x − 1) / b⌋; r in binary, where r = x − qb − 1, in ⌊log₂ b⌋ or ⌈log₂ b⌉ bits. Example: b = 3, r = 0, 1, 2 (0, 10, 11); b = 6, r = 0, 1, 2, 3, 4, 5 (00, 01, 100, 101, 110, 111). x = 9, b = 3: q = 2, r = 2, code = 110:11. x = 9, b = 6: q = 1, r = 2, code = 10:100. Optimal b ≈ 0.69 (N/df). Different b for every term!
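A sketch of the encoder, returning bits as a string for readability; the remainder uses a truncated binary code, which is where the choice between ⌊log₂ b⌋ and ⌈log₂ b⌉ bits comes from (b ≥ 2 assumed):

    def toBits(v: Int, width: Int): String =
      if (width == 0) "" else
        (width - 1 to 0 by -1).map(i => (v >> i) & 1).mkString

    def golombEncode(x: Int, b: Int): String = {
      val q = (x - 1) / b
      val r = x - q * b - 1
      val unary = "1" * q + "0"                         // q + 1 bits
      val k = 32 - Integer.numberOfLeadingZeros(b - 1)  // ceil(log2 b)
      val u = (1 << k) - b
      val binary = if (r < u) toBits(r, k - 1)          // short codewords
                   else toBits(r + u, k)                // long codewords
      unary + binary
    }

    // golombEncode(9, 3) == "11011" (110:11)
    // golombEncode(9, 6) == "10100" (10:100)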

20 Chicken and Egg? [Figure: the reducer streams (fish, docno) keys with position-list values (1, [2,4]), (9, [9]), (21, [1,8,22]), (34, [23]), (35, [8,41]), (80, [2,9,76]) and writes postings as they arrive.] Write postings… Sound familiar? But wait! How do we set the Golomb parameter b? Recall: optimal b ≈ 0.69 (N/df). We need the df to set b… but we don’t know the df until we’ve seen all postings!

21 Getting the df. In the mapper: emit “special” key-value pairs to keep track of df. In the reducer: make sure the “special” key-value pairs come first, and process them to determine df. Remember: proper partitioning!

22 Getting the df: Modified Mapper. [Figure: for input document Doc 1 “one fish, two fish”, emit normal key-value pairs ((fish, 1) → [2,4], (one, 1) → [1], (two, 1) → [3]) plus “special” key-value pairs that keep track of df ((fish, *) → [1], (one, *) → [1], (two, *) → [1]).]

23 Getting the df: Modified Reducer. [Figure: for key fish, the “special” pairs (fish, *) → [63] [82] [27] … arrive first; sum their contributions to compute the df, compute b from the df, then write the postings (1, [2,4]), (9, [9]), (21, [1,8,22]), (34, [23]), (35, [8,41]), (80, [2,9,76]) ….] Important: properly define the sort order to make sure “special” key-value pairs come first! Where have we seen this before?
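A sketch of the modified mapper with the special pairs made concrete (the sentinel docno and sort order are illustrative choices; tokenize as in the earlier sketch):

    // Order inversion: a reserved docno that sorts before every real one
    // carries the term's df contribution from this document.
    val Special = -1

    def map(docno: Int, doc: String): Seq[((String, Int), Seq[Int])] =
      tokenize(doc).zipWithIndex
        .groupBy { case (term, _) => term }
        .toSeq
        .flatMap { case (term, occs) =>
          Seq(((term, Special), Seq(1)),            // df contribution
              ((term, docno), occs.map(_._2 + 1)))  // real posting
        }

    // Partitioned by term and sorted by (term, docno), the reducer first
    // sums the Special values to get df, sets b from 0.69 * (N / df), and
    // can then Golomb-code each gap as the real postings stream past.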

24 Inverted Indexing: IP What’s the assumption? Is it okay?

25 Merging Postings. Let’s define an operation ⊕ on postings lists: Postings(1, 15, 22, 39, 54) ⊕ Postings(2, 46) = Postings(1, 2, 15, 22, 39, 46, 54). Then we can rewrite our indexing algorithm! flatMap: emit singleton postings; reduceByKey: ⊕. What exactly is this operation? What have we created?
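One way to answer the slide’s questions, sketched as a merge of sorted lists; with the empty list as identity this forms a monoid, and it is commutative here because any two documents’ postings are disjoint:

    // ⊕ on docno-sorted postings lists: a standard two-way merge.
    def plus(a: List[Int], b: List[Int]): List[Int] = (a, b) match {
      case (Nil, ys) => ys
      case (xs, Nil) => xs
      case (x :: xs, y :: ys) =>
        if (x < y) x :: plus(xs, b) else y :: plus(a, ys)
    }

    // plus(List(1, 15, 22, 39, 54), List(2, 46))
    //   == List(1, 2, 15, 22, 39, 46, 54)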

26 What’s the issue? The merged lists keep growing: Postings1 ⊕ Postings2 = PostingsM can be as large as both inputs combined, so intermediate postings must be held in memory. Solution: apply compression as needed!

27 Inverted Indexing: LP. A slightly less elegant implementation… but it uses the same idea.

28 Inverted Indexing: LP

29 IP vs. LP? From: Elsayed et al., Brute-Force Approaches to Batch Retrieval: Scalable Indexing with MapReduce, or Why Bother? (2010). Experiments on the ClueWeb09 collection, segments 1 + 2: 101.8m documents (472 GB compressed, 2.97 TB uncompressed).

30 Another Look at LP. Remind you of anything in Spark? RDD[(K, V)] → aggregateByKey(seqOp: (U, V) ⇒ U, combOp: (U, U) ⇒ U) → RDD[(K, U)]
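A sketch of LP in those terms (the input RDD is assumed; note that in Spark’s actual API the zero value is a separate argument list):

    import org.apache.spark.rdd.RDD
    import scala.collection.mutable.ArrayBuffer

    case class Posting(docno: Int, tf: Int)

    def buildIndex(postings: RDD[(String, Posting)]): RDD[(String, ArrayBuffer[Posting])] =
      postings.aggregateByKey(ArrayBuffer.empty[Posting])(
        (buf, p) => { buf += p; buf },  // seqOp: fold a posting into a partial list
        (b1, b2) => { b1 ++= b2; b1 }   // combOp: merge partial lists across partitions
      )
    // In a real indexer the buffers would be kept sorted and compressed
    // as they grow, per the preceding slides.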

31 Algorithm design in a nutshell… Exploit associativity and commutativity via commutative monoids (if you can). Exploit framework-based sorting to sequence computations (if you can’t). Source: Wikipedia (Walnut)

32 Abstract IR Architecture. [Same diagram as slide 3: offline indexing builds the index from document representations; online retrieval compares the query representation against the index to produce hits.]

33 [Figure: the term-document incidence matrix from slide 4, for Docs 1-4.] Indexing: building this structure. Retrieval: manipulating this structure.

34 MapReduce it? The indexing problem: scalability is critical; must be relatively fast, but need not be real time; fundamentally a batch operation; incremental updates may or may not be important; for the web, crawling is a challenge in itself. Perfect for MapReduce! The retrieval problem: must have sub-second response time; for the web, only need relatively few results. Uh… not so good…

35 Assume everything fits in memory on a single machine… (For now)

36 Boolean Retrieval. Users express queries as a Boolean expression: AND, OR, NOT, which can be arbitrarily nested. Retrieval is based on the notion of sets: any given query divides the collection into two sets, retrieved and not-retrieved. Pure Boolean systems do not define an ordering of the results.

37 Boolean Retrieval. To execute a Boolean query: build the query syntax tree; for each clause, look up the postings; traverse the postings and apply the Boolean operator. [Figure: syntax tree for (blue AND fish) OR ham, with postings lists for blue, fish, and ham.]
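A sketch of the evaluation, with AND and OR as set operations over sorted lists (the postings values here are made up, since the slide’s figure did not survive; production code would use a linear-time merge rather than intersect/distinct):

    def and(a: List[Int], b: List[Int]): List[Int] = a.intersect(b)
    def or(a: List[Int], b: List[Int]): List[Int] = (a ++ b).distinct.sorted

    val blue = List(2, 5, 9)
    val fish = List(1, 2, 3, 5, 6, 7, 8, 9)
    val ham  = List(1, 3, 4, 5)

    or(and(blue, fish), ham)  // List(1, 2, 3, 4, 5, 9)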

38 Term-at-a-Time. [Figure: the tree for (blue AND fish) OR ham evaluated one operator at a time: blue AND fish produces an intermediate postings list, which is then ORed with ham.] What’s RPN? Efficiency analysis?

39 Document-at-a-Time. [Figure: the same query evaluated by scanning the postings lists for blue, fish, and ham in parallel, document by document.] Tradeoffs? Efficiency analysis?

40 Strengths and Weaknesses. Strengths: precise, if you know the right strategies; precise, if you have an idea of what you’re looking for; implementations are fast and efficient. Weaknesses: users must learn Boolean logic; Boolean logic is insufficient to capture the richness of language; no control over the size of the result set, either too many hits or none (when do you stop reading?); all documents in the result set are considered “equally good”; what about partial matches? Documents that “don’t quite match” the query may be useful also.

41 Ranked Retrieval. Order documents by how likely they are to be relevant: estimate relevance(q, d_i); sort documents by relevance; display sorted results. User model: present hits one screen at a time, best results first; at any point, users can decide to stop looking. How do we estimate relevance? Assume a document is relevant if it has a lot of query terms; replace relevance(q, d_i) with sim(q, d_i); compute the similarity of vector representations.

42 Vector Space Model. Assumption: documents that are “close together” in vector space “talk about” the same things. [Figure: documents d1…d5 as vectors in a space with term axes t1, t2, t3, with angles θ and φ between them.] Therefore, retrieve documents based on how close the document is to the query (i.e., similarity ~ “closeness”).

43 Similarity Metric. Use the “angle” between the vectors; or, more generally, inner products.
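The formulas on this slide were images; the standard definitions they refer to (a reconstruction, not copied from the deck) are, in LaTeX:

    % Cosine similarity between query q and document d_j
    \mathrm{sim}(q, d_j) = \cos\theta
      = \frac{q \cdot d_j}{\lVert q \rVert \, \lVert d_j \rVert}
      = \frac{\sum_i w_{q,i}\, w_{i,j}}
             {\sqrt{\sum_i w_{q,i}^2}\, \sqrt{\sum_i w_{i,j}^2}}

    % Or, more generally, just the inner product
    \mathrm{sim}(q, d_j) = q \cdot d_j = \sum_i w_{q,i}\, w_{i,j}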

44 Term Weighting. Term weights consist of two components: local (how important is the term in this document?) and global (how important is the term in the collection?). Here’s the intuition: terms that appear often in a document should get high weights; terms that appear in many documents should get low weights. How do we capture this mathematically? Term frequency (local) and inverse document frequency (global).

45 TF.IDF Term Weighting. w_ij: weight assigned to term i in document j; tf_ij: number of occurrences of term i in document j; N: number of documents in the entire collection; n_i: number of documents with term i.
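The weighting formula itself was an image on the slide; the standard tf.idf weight matching this legend is, in LaTeX:

    w_{i,j} = \mathrm{tf}_{i,j} \cdot \log \frac{N}{n_i}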

46 Retrieval in a Nutshell. Look up the postings lists corresponding to the query terms; traverse the postings for each query term; store partial query-document scores in accumulators; select the top k results to return.

47 Retrieval: Document-at-a-Time. Evaluate documents one at a time (score all query terms). [Figure: the postings for fish and blue are traversed in parallel; a min-heap of accumulators holds the current top k. Is the document’s score in the top k? Yes: insert the document score, extract-min if the heap is too large. No: do nothing.] Tradeoffs: small memory footprint (good); skipping possible to avoid reading all postings (good); more seeks and irregular data accesses (bad).
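A sketch of the strategy; scoring here is just summed tf as a stand-in for tf.idf, and real implementations advance cursors over the postings instead of scanning (Posting as in the earlier sketches):

    import scala.collection.mutable

    def daat(postingsLists: Seq[Seq[Posting]], k: Int): Seq[(Int, Double)] = {
      // min-heap on score: the root is the weakest of the current top k
      val heap = mutable.PriorityQueue.empty[(Double, Int)](
        Ordering.by[(Double, Int), Double](_._1).reverse)
      val docnos = postingsLists.flatten.map(_.docno).distinct.sorted
      for (d <- docnos) {
        // score all query terms for this document before moving on
        val score = postingsLists.flatMap(_.find(_.docno == d))
                                 .map(_.tf.toDouble).sum
        if (heap.size < k) heap.enqueue((score, d))
        else if (score > heap.head._1) { heap.dequeue(); heap.enqueue((score, d)) }
      }
      heap.dequeueAll.reverse.map { case (s, d) => (d, s) }
    }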

48 Retrieval: Term-at-a-Time. Evaluate documents one query term at a time, usually starting from the rarest term (often with tf-sorted postings). [Figure: after traversing a term’s postings, a hash map of accumulators holds each document’s partial score for the query terms processed so far.] Tradeoffs: early termination heuristics (good); large memory footprint (bad), but filtering heuristics are possible.
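The same query with accumulators, sketched (again with tf standing in for a real weight; Posting as above):

    import scala.collection.mutable

    def taat(postingsLists: Seq[Seq[Posting]], k: Int): Seq[(Int, Double)] = {
      val acc = mutable.Map.empty[Int, Double].withDefaultValue(0.0)
      // one full pass per query term; rarest-first ordering and early
      // termination are the heuristics the slide alludes to
      for (plist <- postingsLists; p <- plist)
        acc(p.docno) += p.tf.toDouble
      acc.toSeq.sortBy { case (_, score) => -score }.take(k)
    }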

49 Assume everything fits in memory on a single machine… Okay, let’s relax this assumption now

50 Important Ideas Partitioning (for scalability) Replication (for redundancy) Caching (for speed) Routing (for load balancing) The rest is just details!

51 Term vs. Document Partitioning. [Figure: the term-document matrix T × D split two ways: term partitioning gives each server a slice of the terms (T1, T2, T3) over all documents; document partitioning gives each server a slice of the documents (D1, D2, D3) over all terms.]

52 [Figure: a search serving architecture: a front end (FE) with a cache, brokers that route queries, and index servers arranged as partitions × replicas.]

53 [Figure: the same partitions/replicas/cache structure repeated across multiple tiers within a datacenter, and across multiple datacenters, each fronted by brokers.]

54 Important Ideas Partitioning (for scalability) Replication (for redundancy) Caching (for speed) Routing (for load balancing)

55 Questions? Remember: Assignment 3 due next Tuesday at 8:30am. Source: Wikipedia (Japanese rock garden)

