
1 Compression of documents
Paolo Ferragina, Dipartimento di Informatica, Università di Pisa. Algoritmi per "Information Retrieval".

2 Raw docs are needed


6 LZ77
Algorithm's step: output a triple <dist, len, next-char>, then advance by len + 1. A buffer "window" of fixed length slides over the already-seen text, and the dictionary consists of all substrings starting inside it. In the slide's example the encoder emits, among others, the codewords <6,3,a> and <3,4,c>.
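A minimal sketch of this single step in C, assuming a brute-force scan of the window (the input string, the window size W = 6, and the name lz77_step are made up for illustration; real encoders such as gzip speed the search up with hash chains):

    #include <stdio.h>
    #include <string.h>

    #define W 6   /* window size, as in the slide's example */

    /* Longest match for the text starting at cur, inside the last W chars. */
    static void lz77_step(const char *s, size_t n, size_t cur,
                          size_t *dist, size_t *len) {
        *dist = 0; *len = 0;
        size_t start = cur > W ? cur - W : 0;
        for (size_t i = start; i < cur; i++) {
            size_t l = 0;
            /* the match may run past cur: overlapping copies are allowed */
            while (cur + l < n - 1 && s[i + l] == s[cur + l]) l++;
            if (l > *len) { *len = l; *dist = cur - i; }
        }
    }

    int main(void) {
        const char *s = "aacaacabcabaaac";   /* illustrative input */
        size_t n = strlen(s), cur = 0;
        while (cur < n) {
            size_t dist, len;
            lz77_step(s, n, cur, &dist, &len);
            printf("<%zu,%zu,%c>\n", dist, len, s[cur + len]);
            cur += len + 1;                  /* advance by len + 1 */
        }
        return 0;
    }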

7 Example: LZ77 with window
With window size 6, the encoder emits one codeword per step: (0,0,a) when no match exists in the window W, then e.g. (1,1,c), (3,4,b), (3,3,a), (1,2,c), each pairing the longest match within W with the next character. Gzip's levels -1…-9 differ in how hard they search for this longest match.

8 LZ77 Decoding
Which gzip level among -1…-9 is faster in decompression? (Essentially all the same: decoding just copies substrings, whatever effort went into finding them.) The decoder keeps the same dictionary window as the encoder: for each codeword <dist, len, char> it finds the substring in the previously decoded text, inserts a copy of it, then appends char. What if len > dist, i.e., the copy overlaps the text still being decoded? E.g. seen = abcd, next codeword is <2,9,e>:

for (i = 0; i < len; i++) out[cursor + i] = out[cursor - dist + i];

Copying left to right makes the overlap work, and the output is correct: abcdcdcdcdcdce.
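A self-contained check of that overlapping copy, with the slide's seen text abcd and codeword <2,9,e> (buffer size and variable names are ours):

    #include <stdio.h>

    int main(void) {
        char out[64] = "abcd";        /* text decoded so far */
        size_t cursor = 4;
        size_t dist = 2, len = 9;     /* codeword <2,9,e> */

        /* left-to-right copy: each written byte becomes a source
           for the bytes that follow, which is why len > dist works */
        for (size_t i = 0; i < len; i++)
            out[cursor + i] = out[cursor - dist + i];
        out[cursor + len] = 'e';      /* append the next-char */
        out[cursor + len + 1] = '\0';

        printf("%s\n", out);          /* prints abcdcdcdcdcdce */
        return 0;
    }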

9 You find this at: www.gzip.org/zlib/


14 Transfer + Decompression

15 https://quixdb.github.io/squash-benchmark/

16 Compression & Networking

17 Background
The sender transmits over the network to a receiver that already holds some knowledge about the data. Network links are getting faster and faster, but many clients are still connected by fairly slow (mobile?) links, people wish to send more and more data, and battery life is a key constraint. How can we exploit what the receiver already knows, transparently to the user?

18 Two standard techniques
Caching: "avoid sending the same object again". It is done on the basis of "atomic" objects, thus it only works if objects are unchanged. What about objects that are slightly changed?
Compression: "remove redundancy in transmitted data". It avoids repeated substrings in the transmitted data, and can be extended to the history of past transmissions (with some overhead). What if the sender has never seen the data at the receiver?

19 Types of Techniques
Common knowledge between sender & receiver — unstructured file: delta compression.
"Partial" knowledge — unstructured files: file synchronization; record-based data: set reconciliation.

20 Formalization
Delta compression [diff, zdelta, REBL, …]: compress a file f deploying a known file f'; compress a group of files; speed up web access by sending the difference between the requested page and the ones available in cache.
File synchronization [rsync, zsync]: the client updates its old file f_old with the f_new available on a server. Used in mirroring, shared crawling, content distribution networks.
Set reconciliation: the client updates its structured file f_old with the f_new available on a server. Used to update contacts or appointments, or to intersect inverted lists in a P2P search engine.

21 Z-delta compression (one-to-one)
Problem: we have two files, f_known (known to both parties) and f_new, and the goal is to compute a file f_d of minimum size such that f_new can be derived from f_known and f_d. Assume that block moves and copies are allowed, i.e., find an optimal covering set of f_new based on f_known.
The LZ77 scheme provides an efficient, optimal solution: f_known is the "previously encoded text", so compress the concatenation f_known · f_new starting from f_new. zdelta is one of the best implementations. Used e.g. in version control, backups, and transmission.
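The "compress f_known · f_new starting from f_new" idea can be approximated with stock zlib by installing f_known as a preset dictionary; a sketch under that assumption (file contents invented; this is not zdelta itself, which modifies zlib with larger windows and a tuned codeword format):

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void) {
        const char *f_known = "the quick brown fox jumps over the lazy dog";
        const char *f_new   = "the quick brown fox naps beside the lazy dog";
        unsigned char f_delta[256];

        z_stream zs;
        memset(&zs, 0, sizeof zs);
        if (deflateInit(&zs, Z_BEST_COMPRESSION) != Z_OK) return 1;

        /* f_known plays the "previously encoded text" */
        deflateSetDictionary(&zs, (const Bytef *)f_known, (uInt)strlen(f_known));

        zs.next_in   = (Bytef *)f_new;
        zs.avail_in  = (uInt)strlen(f_new);
        zs.next_out  = f_delta;
        zs.avail_out = sizeof f_delta;
        deflate(&zs, Z_FINISH);

        printf("f_new: %zu bytes -> f_delta: %lu bytes\n",
               strlen(f_new), zs.total_out);
        deflateEnd(&zs);
        /* the receiver calls inflateSetDictionary() with the same f_known
           to recover f_new from f_known + f_delta */
        return 0;
    }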

22 Efficient Web Access
Dual proxy architecture: a pair of proxies (one in the client cache, one at the provider's proxy), located on the two sides of the slow link, use a proprietary protocol to increase communication performance: the web request crosses the slow link, the proxy fetches the page over the fast link, and a delta-encoded page travels back.
Use zdelta to reduce traffic: the old version is available at both proxies (one copy in the client cache, one on the proxy), so only a small cache is needed. Restricted to pages already visited (30% cache hits) or to URL-prefix matches.

23 Cluster-based delta compression
Problem: we wish to compress a group of files F. Useful on a dynamic collection of web pages, back-ups, …
Apply pairwise zdelta, finding a good reference for each f ∈ F. This reduces to the Min Branching problem on directed graphs: build a (complete?) weighted graph G_F with nodes = files and weights = zdelta-sizes; insert a dummy node connected to all others, with weights equal to the gzip-coding sizes; compute the directed spanning tree of minimum total cost covering G's nodes (see the sketch below).
Experimentally (space, time): uncompressed 30Mb, ---; tgz 20%, linear; THIS 8%, quadratic.
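To make the reduction concrete, a toy sketch that builds the weight matrix (row 0 is the dummy node, i.e., plain gzip) and greedily picks each file's cheapest in-edge; the weights are invented, and the greedy choice only gives a lower bound, since it may form cycles — exactly what Edmonds' min-branching algorithm contracts and breaks:

    #include <stdio.h>

    #define N   4        /* dummy node + 3 files */
    #define INF 1000000

    int main(void) {
        /* w[i][j] = size of deriving file j from i:
           w[0][j] = gzip size of j, w[i][j] = zdelta size of j given i */
        int w[N][N] = {
            { INF, 620, 2000, 220 },
            { INF, INF,  220,  20 },
            { INF, 123,  INF,   5 },
            { INF,  20,  220, INF },
        };
        long total = 0;
        for (int j = 1; j < N; j++) {
            int best = 0;                       /* 0 = gzip alone */
            for (int i = 1; i < N; i++)
                if (i != j && w[i][j] < w[best][j]) best = i;
            printf("file %d <- node %d (cost %d)\n", j, best, w[best][j]);
            total += w[best][j];
        }
        /* with these weights the picks form the cycle 1<-3, 3<-2, 2<-1,
           so Edmonds' contraction step would be needed for a real tree */
        printf("lower bound on total size: %ld\n", total);
        return 0;
    }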

24 Improvement (group of files)
Problem: constructing G is very costly, n^2 edge calculations (zdelta executions); we wish to exploit some pruning approach.
Collection analysis: cluster the files that appear similar, and are thus good candidates for zdelta-compression; build a sparse weighted graph G'_F containing only edges between pairs of files in the same cluster.
Assign weights: estimate appropriate edge weights for G'_F, thus saving zdelta executions. Nonetheless, this remains n^2 time in the worst case.
Experimentally (space, time): uncompressed 260Mb, ---; tgz 12%, 2 mins; THIS 8%, 16 mins.

25 File Synchronization

26 File synch: The problem
The client asks to update its old file f_old; the server has the new file f_new but does not know the old one, and must update f_old without sending the entire f_new. rsync is a file-synch tool distributed with Linux. Delta compression is a sort of "local" synch, since there the server knows both files.

27 The rsync algorithm
The client sends the server a few hashes, one per block of f_old; the server matches them against f_new and replies with an encoded file: references to the blocks it recognized, plus literals for the rest.

28 The rsync algorithm (contd.)
Simple, widely used, single roundtrip. Optimizations: a 4-byte rolling hash plus a 2-byte MD5 per block, and gzip for the literals. The choice of the block size is problematic (default: max{700, √n} bytes). Not good in theory: the granularity of the changes may disrupt the use of blocks.
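A sketch of the rolling checksum idea (Adler-style, as in rsync), showing the O(1) update when the block slides by one byte; the block size K and the toy input are ours:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define K 4   /* toy block size; rsync defaults to max{700, sqrt(n)} */

    int main(void) {
        const char *s = "abcdefgh";
        size_t n = strlen(s);
        uint32_t a = 0, b = 0;

        for (size_t i = 0; i < K; i++) {   /* checksum of s[0..K-1] */
            a += (uint8_t)s[i];
            b += (uint32_t)(K - i) * (uint8_t)s[i];
        }
        printf("offset 0: %08x\n", (unsigned)((b << 16) | (a & 0xffff)));

        for (size_t off = 1; off + K <= n; off++) {
            /* slide by one byte: drop s[off-1], add s[off+K-1], in O(1) */
            a += (uint8_t)s[off + K - 1] - (uint8_t)s[off - 1];
            b += a - (uint32_t)K * (uint8_t)s[off - 1];
            printf("offset %zu: %08x\n", off,
                   (unsigned)((b << 16) | (a & 0xffff)));
        }
        return 0;
    }

The server computes this value at every offset of f_new and looks it up in the client's hash set; only on a match does it pay for the strong (MD5) check.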

29 A new framework: zsync
The server sends the hashes (unlike rsync, where the client does) and the client checks them. The server deploys the common f_ref to compress the new f_tar (rsync just compresses f_tar on its own).

30 Small differences (e.g. agenda)
The server sends hashes to the client level by level, and the client checks equality against the hashes of its own substrings. There are log(n/k) levels, down to n/k blocks of k elements each. If there are d differences, then on each level ≤ d hashes mismatch and need to be pursued further. The communication complexity is O(d · lg(n/k) · lg n) bits [one upward path per differing k-block].
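A sketch of this level-by-level pruning (toy hash and invented strings; the real protocol exchanges the hashes over the network rather than comparing them in one process):

    #include <stdio.h>
    #include <string.h>

    /* toy stand-in for a proper hash function */
    static unsigned long toy_hash(const char *s, size_t from, size_t to) {
        unsigned long h = 5381;
        for (size_t i = from; i < to; i++) h = h * 33 + (unsigned char)s[i];
        return h;
    }

    /* compare one hash per block; recurse only where they differ */
    static void sync_range(const char *srv, const char *cli,
                           size_t from, size_t to, size_t k) {
        if (toy_hash(srv, from, to) == toy_hash(cli, from, to))
            return;                        /* equal hashes: prune subtree */
        if (to - from <= k) {              /* differing block of k elems  */
            printf("send block [%zu,%zu)\n", from, to);
            return;
        }
        size_t mid = from + (to - from) / 2;
        sync_range(srv, cli, from, mid, k);
        sync_range(srv, cli, mid, to, k);
    }

    int main(void) {
        const char *server = "abcdefghijklmnop";
        const char *client = "abcdeXghijklmnoY";   /* d = 2 differences */
        sync_range(server, client, 0, strlen(server), 2);
        return 0;
    }

With d differing elements, at most d subtrees stay open per level, which is where the O(d · lg(n/k) · lg n) bound comes from.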

