c-Perfect Hashing Schemes for Arrays, with Applications to Parallel Memories
G. Cordasco 1, A. Negro 1, A. L. Rosenberg 2 and V. Scarano 1
1 Dipartimento di Informatica ed Applicazioni "R.M. Capocelli", Università di Salerno, 84081 Baronissi (SA), Italy
2 Dept. of Computer Science, University of Massachusetts at Amherst, Amherst, MA 01003, USA
Workshop on Distributed Data and Structures

Summary
- The Problem
- The Motivation and Some Examples
- Our Results
- Conclusions
The Problem
Mapping the nodes of a data structure onto a parallel memory system in such a way that data can be accessed efficiently by templates.
- Data structures (D): arrays, trees, hypercubes, ...
- Parallel memory systems (PM):
[Figure: processors P_0, ..., P_{P-1} connected to memory modules M_0, ..., M_{M-1}]
The Problem (2)
Mapping the nodes of a data structure onto a parallel memory system in such a way that data can be accessed efficiently by templates.
- Templates: distinguished sets of nodes (for arrays: rows, columns, diagonals, subarrays; for trees: subtrees, simple paths, levels; etc.).
- Formally: let G_D be the graph that describes the data structure D. A template t for D is defined as a subgraph of G_D, and each subgraph of G_D isomorphic to t is called an instance of the template t.
- Efficiently: few (or no) conflicts, i.e., few processors need to access the same memory module at the same time.
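The slides carry no code, so here is a minimal sketch of what "conflicts" means, with hypothetical names (`conflicts`, `row_major`) not taken from the paper:

```python
# Sketch: the number of conflicts of a template instance under a mapping
# is the largest number of its items that land in the same memory module.
# A conflict-free (CF) access has value 1.
def conflicts(instance, mapping, num_modules):
    counts = [0] * num_modules
    for node in instance:
        counts[mapping(node)] += 1
    return max(counts)

# Example: naive row-major storage of a 4x4 array in 4 modules is
# conflict-free for rows but maximally conflicting for columns.
row_major = lambda ij: (ij[0] * 4 + ij[1]) % 4  # equals j, independent of i
row0 = [(0, j) for j in range(4)]
col0 = [(i, 0) for i in range(4)]
print(conflicts(row0, row_major, 4))  # 1: conflict-free
print(conflicts(col0, row_major, 4))  # 4: all items in one module
```

The example motivates why nontrivial mappings (skewing, Latin squares) are needed once more than one template type must be served.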
What a mapping algorithm should be:
- Efficient: the number of conflicts for each instance of the considered template type should be minimized.
- Versatile: it should allow efficient data access by an algorithm that uses different templates.
- Memory-load balancing: it should balance the number of data items stored in each memory module.
- Fast memory-address retrieval: the algorithm for retrieving the memory module where a given data item is stored should be simple and fast.
- Memory efficient: for fixed-size templates, it should use the same number of memory modules as the template size.
Why versatility?
- Multi-programmed parallel machines: different sets of processors run different programs and access different templates.
- Algorithms using different templates: in manipulating arrays, for example, accessing lines (i.e., rows and columns) is common.
- Composite templates: some templates are "composite", e.g. the range-query template, consisting of a path with complete subtrees attached. A versatile algorithm is likely to perform well on them.
Previous results: Arrays
Research in this field originated with strategies for mapping two-dimensional arrays into parallel memories.
- Euler, Latin squares (1778): conflict-free (CF) access to lines (rows and columns).
- Budnik, Kuck (IEEE TC 1971): skewing schemes; CF access to lines and some subblocks.
- Lawrie (IEEE TC 1975): skewing schemes; CF access to lines and main diagonals, but requires 2N modules for N data items.
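A sketch of skewed storage in the spirit of Lawrie's scheme, assuming the commonly cited form mu(i, j) = (2i + j) mod 2N for an N x N array with N a power of two (the exact scheme in the 1975 paper may differ in detail):

```python
# Skewing sketch: an N x N array (N a power of two) is stored in M = 2N
# memory modules via mu(i, j) = (2*i + j) mod 2N, which uses 2N modules
# for N*N items but serves rows, columns, and the main diagonal CF.
def mu(i, j, N):
    return (2 * i + j) % (2 * N)

def conflict_free(cells, N):
    """True if all cells of the instance fall in distinct modules."""
    mods = [mu(i, j, N) for (i, j) in cells]
    return len(mods) == len(set(mods))

N = 8
rows = [[(i, j) for j in range(N)] for i in range(N)]
cols = [[(i, j) for i in range(N)] for j in range(N)]
main_diag = [(i, i) for i in range(N)]
assert all(conflict_free(r, N) for r in rows)
assert all(conflict_free(c, N) for c in cols)
assert conflict_free(main_diag, N)
```

Rows are CF because j ranges over distinct residues; columns because 2i does; the main diagonal maps i to 3i mod 2N, which is injective since gcd(3, 2N) = 1 when N is a power of two.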
Previous results: Arrays (2)
- Colbourn, Heinrich (JPDC 1992): Latin squares; CF access to arbitrary subarrays, i.e., to r × s and s × r subarrays with r > s.
- Lower bound: any mapping algorithm for arrays that is CF for lines and for r × s and s × r subarrays (r > s) requires n > rs + s memory modules.
- Corollary: more than one-seventh of the memory modules are idle when any mapping is used that is CF for lines and for r × s and s × r subarrays (r > s).
Previous results: Arrays (3)
- Kim, Prasanna Kumar (IEEE TPDS 1993): Latin squares; CF access to lines and to main squares (i.e., the aligned squares whose top-left item ⟨i, j⟩ satisfies i ≡ 0 and j ≡ 0 modulo the square's side).
- Perfect Latin squares: main diagonals are also CF.
- Every subarray has at most 4 conflicts (it intersects at most 4 main squares).
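Not the Kim-Prasanna construction itself: just a minimal Latin-square mapping over Z_5 that illustrates why a Latin square gives CF access to lines, and why a "perfect" one also covers both diagonals:

```python
# module(i, j) = (2*i + j) mod 5 defines a Latin square of order 5:
# each module number appears exactly once in every row and every column,
# so rows and columns are conflict-free; since gcd(2, 5) = gcd(3, 5) = 1,
# both diagonals are conflict-free as well.
N = 5
def module(i, j):
    return (2 * i + j) % N

def distinct(cells):
    mods = [module(i, j) for (i, j) in cells]
    return len(mods) == len(set(mods))

assert all(distinct([(i, j) for j in range(N)]) for i in range(N))  # rows CF
assert all(distinct([(i, j) for i in range(N)]) for j in range(N))  # columns CF
assert distinct([(i, i) for i in range(N)])          # main diagonal CF
assert distinct([(i, N - 1 - i) for i in range(N)])  # antidiagonal CF
```

Unlike the skewing above, this uses exactly N modules for the N columns of each row, which is why Latin-square schemes are the natural tool for memory-efficient line access.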
Previous results: Arrays (4)
- Das, Sarkar (SPDP 1994): quasi-groups (or groupoids).
- Fan, Gupta, Liu (IPPS 1994): Latin cubes.
These sources offer strategies that scale with the number of memory modules, so that the number of available modules can change with the problem instance.
Our results: Templates
We study templates that can be viewed as generalizing array blocks and "paths" originating from the origin vertex:
- Chaotic array (C): a (two-dimensional) chaotic array C is an undirected graph whose vertex set V_C is a subset of N × N that is order-closed, in the sense that for each vertex v ≠ ⟨0,0⟩ of C, the set ...
Our results: Templates (2)
- Ragged array (R): a (two-dimensional) ragged array R is a chaotic array whose vertex set V_R satisfies the following: if ⟨v1, v2⟩ ∈ V_R, then {v1} × [v2] ⊆ V_R; if ⟨v1, v2⟩ ∈ V_R, then [v1] × {0} ⊆ V_R. (For each n ∈ N, [n] denotes the set {0, 1, ..., n−1}.)
- Motivation: pixel maps; lists of names in tables that change shape dynamically; relational tables in relational databases.
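The two closure conditions can be checked mechanically; a small sketch (the name `is_ragged` is ours, not from the paper):

```python
# Checks the two raggedness conditions on a vertex set V of pairs (v1, v2):
# every row must be a left-justified prefix {v1} x [v2], and the first
# column must contain the prefix [v1] x {0}.
def is_ragged(V):
    for (v1, v2) in V:
        if any((v1, k) not in V for k in range(v2)):
            return False  # a hole in row v1 to the left of v2
        if any((k, 0) not in V for k in range(v1)):
            return False  # a hole in column 0 above row v1
    return True

# Rows of lengths 3, 1, 2 anchored at column 0: ragged.
assert is_ragged({(0, 0), (0, 1), (0, 2), (1, 0), (2, 0), (2, 1)})
# A hole in row 0 violates the row-prefix condition.
assert not is_ragged({(0, 0), (0, 2)})
```

Note that row lengths need not be monotone, which is exactly what makes the array "ragged" rather than a staircase.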
Our results: Templates (3)
- Rectangular array (S): a (two-dimensional) rectangular array S is a ragged array whose vertex set has the form [a] × [b] for some a, b ∈ N.
Our results: Templates (4)
[Figure: containment of the template families: Rectangular Arrays ⊂ Ragged Arrays ⊂ Chaotic Arrays]
Our results: c-contraction
For any integer c > 0, a c-contraction of an array A is a graph G that is obtained from A as follows:
1. Rename A as G(0); set k = 0.
2. Pick a set S of c vertices of G(k) that were vertices of A. Replace these vertices by a single vertex v_S; replace all edges of G(k) that are incident to vertices of S by edges that are incident to v_S. The graph so obtained is G(k+1).
3. Iterate step 2 some number of times; G is the final G(k).
[Figure: A = G(0), G(1), G(2) = G]
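Step 2 above can be sketched as follows (graphs as adjacency dicts; the helper name `contract` is ours):

```python
# One contraction step: replace a set S of vertices of G by a single new
# vertex v_S, redirecting all edges incident to S and dropping the
# self-loops created by the merge. Graphs: dict vertex -> set of neighbours.
def contract(G, S, v_S):
    assert S <= set(G)
    rename = lambda u: v_S if u in S else u
    H = {}
    for u, nbrs in G.items():
        ru = rename(u)
        H.setdefault(ru, set())
        for w in nbrs:
            rw = rename(w)
            if ru != rw:  # edge inside S becomes a self-loop: drop it
                H[ru].add(rw)
    return H

# Contract the two right-hand vertices of the path a - b - c.
G = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
H = contract(G, {'b', 'c'}, 'bc')
print(H)  # {'a': {'bc'}, 'bc': {'a'}}
```

Iterating this step, always merging sets of c original vertices, yields exactly the c-contractions G(0), G(1), ... of the definition.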
Our results: some definitions
Our results are achieved by proving a (more general) result, of independent interest, about the sizes of graphs that are "almost" perfectly universal for arrays.
- A graph G_c = (V_c, E_c) is c-perfect-universal for the family A_n if for each array A ∈ A_n there exists a c-contraction of A that is a labeled subgraph of G_c.
- A_n denotes the family of arrays having n or fewer vertices.
Our results: c-perfection number
The c-perfection number for A_n, denoted Perf_c(A_n), is the size of the smallest graph that is c-perfect-universal for the family A_n.
Theorem (F.R.K. Chung, A.L. Rosenberg, L. Snyder, 1983): Perf_1(C_n) = Perf_1(R_n) = Perf_1(S_n) = n.
Our results
- Theorem: for all integers c and n, letting X ambiguously denote C_n and R_n, we have Perf_c(X) ≥ ... .
- Theorem: for all integers c and n, Perf_c(C_n) ≤ ... .
- Theorem: for all integers c and n, Perf_c(R_n) ≤ ... .
- Theorem: for all integers c and n, Perf_c(S_n) = ... .
Our results: A Lower Bound on Perf_c(C_n) and Perf_c(R_n)
Perf_c(C_n) ≥ ... ; Perf_c(R_n) ≥ ... .
Our results: An Upper Bound on Perf_c(C_n)
Perf_c(C_n) ≤ ... . Let d = ⌈(n+1)/c⌉ and V_d = { v = ⟨v1, v2⟩ | v1 < d and v2 < d }. For each vertex v = ⟨v1, v2⟩:
- if v ∈ V_d, then color(v) = v1 · d + v2;
- if v ∉ V_d, then color(v) = color(...).
[Figure: V_d for d = 3]
Our results: An Upper Bound on Perf_c(C_n) (2)
Perf_c(C_n) ≤ Size(V_d) = d².
Example: n = 13, c = 5, d = ⌈(n+1)/c⌉ = 3.
[Figure: V_d for n = 13, c = 5, d = 3]
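A numeric companion to the slide's example (the slides omit the coloring rule for vertices outside V_d, so only the part stated above is computed here):

```python
import math

# Slide example: n = 13, c = 5, d = ceil((n+1)/c) = 3, V_d = [d] x [d].
# Inside V_d the coloring color(v) = v1*d + v2 is a bijection onto
# [d^2] = {0, ..., d^2 - 1}, so d^2 = 9 colors (memory modules) are used.
n, c = 13, 5
d = math.ceil((n + 1) / c)
V_d = [(v1, v2) for v1 in range(d) for v2 in range(d)]
colors = {v1 * d + v2 for (v1, v2) in V_d}

print(d)                            # 3
print(len(V_d))                     # 9, i.e. Size(V_d) = d^2
print(colors == set(range(d * d)))  # True: coloring is a bijection onto [d^2]
```

This matches the slide's figure: a 3 x 3 corner block V_d whose d² distinct colors bound Perf_c(C_n) from above.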
Conclusions
- A mapping strategy for versatile templates with a given "conflict tolerance" c.
- c-Perfect Hashing Schemes for Binary Trees, with Applications to Parallel Memories (accepted at Euro-Par 2003).
- Future work: matching lower and upper bounds for binary trees.