
1 Data Management
Reading: Chapter 5, "Data-Intensive Computing," and "A Network-Aware Distributed Storage Cache for Data Intensive Environments"

2 What is Data Management? It depends…
- Storage systems: disk arrays, network caches (e.g., DPSS), hierarchical storage systems (e.g., HPSS)
- Efficient data transport mechanisms: striped, parallel, secure, reliable, third-party transfers

3 What is data management? (cont.)
- Replication management: associate files into collections; mechanisms for reliably copying collections, propagating updates to collections, and selecting among replicas
- Metadata management: associate attributes that describe data; select data based on attributes
- Publishing and curation of data: official versions of important collections; digital libraries

4 Outline for Today
- Examples of data-intensive applications
- Storage systems: disk arrays, high-performance network caches (DPSS), hierarchical storage systems (Chris: HPSS)
- Next two lectures: gridFTP, Globus replica management, metadata systems, curation

5 Data-Intensive Applications: Physics
- CERN Large Hadron Collider
- Several terabytes of data per year, starting in 2005 and continuing for 15 to 20 years
- Replication scenario: copy of everything at CERN (Tier 0); subsets at national centers (Tier 1); smaller regional centers (Tier 2); individual researchers will have copies

6 GriPhyN Overview (www.griphyn.org)
- 5-year, $12.5M NSF ITR proposal to realize the concept of virtual data, via:
- Key research areas: virtual data technologies (information models, management of virtual data software, etc.); request planning and scheduling (including policy representation and enforcement); task execution (including agent computing, fault management, etc.)
- Development of the Virtual Data Toolkit (VDT)
- Four applications: ATLAS, CMS, LIGO, SDSS

7 GriPhyN Participants
- Computer science: U.Chicago, USC/ISI, UW-Madison, UCSD, UCB, Indiana, Northwestern, Florida
- Toolkit development: U.Chicago, USC/ISI, UW-Madison, Caltech
- Applications: ATLAS (Indiana), CMS (Caltech), LIGO (UW-Milwaukee, UT-B, Caltech), SDSS (JHU)
- Unfunded collaborators: UIC (STAR-TAP), ANL, LBNL, Harvard, U.Penn

8 The Petascale Virtual Data Grid (PVDG) Model
- Data suppliers publish data to the Grid
- Users request raw or derived data from the Grid, without needing to know where the data is located or whether it is stored or computed
- Users can easily determine what it will cost to obtain the data and the quality of derived data
- PVDG serves requests efficiently, subject to global and local policy constraints

9 PVDG Scenario
User requests may be satisfied via a combination of data access and computation at local, regional, and central sites.

10 Other Application Scenarios
- Climate community: terabyte-scale climate model datasets (collecting measurements, simulation results); must support sharing, remote access to, and analysis of datasets
- Distance visualization: remote navigation through large datasets, with local and/or remote computing

11 Storage Systems: Disk Arrays
- What is a disk array? A collection of disks
- Advantages:
  - Higher capacity: many small, inexpensive disks
  - Higher throughput: higher bandwidth (Mbytes/sec) on large transfers and a higher I/O rate (transactions/sec) on small transfers (see the sketch below)
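
A back-of-the-envelope sketch of the throughput claim (the per-disk figures below are assumed, illustrative values, not from the slides): large transfers see the sum of the disks' bandwidths, while independent small transfers see the sum of their I/O rates.

    /* Illustrative sketch: aggregate bandwidth and I/O rate of a disk array.
       The per-disk figures are assumed example values. */
    #include <stdio.h>

    int main(void) {
        int n_disks = 8;                      /* assumed array size */
        double mb_per_sec_per_disk = 20.0;    /* assumed streaming bandwidth */
        double ios_per_sec_per_disk = 100.0;  /* assumed small-transfer rate */
        printf("aggregate bandwidth: %.0f Mbytes/sec\n", n_disks * mb_per_sec_per_disk);
        printf("aggregate I/O rate:  %.0f transactions/sec\n", n_disks * ios_per_sec_per_disk);
        return 0;
    }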

12 Trends in Magnetic Disks
- Capacity increases: 60% per year
- Cost falling at a similar rate ($/MB or $/GB)
- Evolving to smaller physical sizes: 14in → 5.25in → 3.5in → 2.5in → 1.0in → …?
- Put lots of small disks together
- Problem: RELIABILITY. Reliability of N disks = reliability of 1 disk divided by N (see the sketch below)
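
A small illustrative calculation of the reliability problem (the 50,000-hour per-disk MTTF and the 70-disk array are assumed example figures): if disk failures are independent, the array's mean time to failure is roughly the single-disk figure divided by the number of disks.

    /* Illustrative sketch: MTTF of an array of N independent disks.
       The per-disk MTTF and disk count are assumed example values. */
    #include <stdio.h>

    int main(void) {
        double mttf_disk_hours = 50000.0;   /* assumed single-disk MTTF */
        int    n_disks         = 70;        /* assumed array size */
        double mttf_array = mttf_disk_hours / n_disks;
        printf("array MTTF: %.0f hours (about %.1f weeks)\n",
               mttf_array, mttf_array / (24 * 7));
        return 0;
    }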

13 Key Concepts in Disk Arrays: Striping for High Performance
- Interleave data from a single file across multiple disks
- Fine-grained interleaving: every file is spread across all disks; any access involves all disks
- Coarse-grained interleaving: interleave in large blocks; small accesses may be satisfied by a single disk (see the sketch below)
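
A minimal sketch of the coarse-grained (block-interleaved) layout, assuming a round-robin mapping and a made-up disk count: each logical block lives entirely on one disk, so a small request touches only that disk, whereas a fine-grained layout would split every block across all disks.

    /* Illustrative sketch: coarse-grained interleaving maps each logical
       block to a single disk (round-robin). Disk count is an assumed value. */
    #include <stdio.h>

    int main(void) {
        int n_disks = 4;   /* assumed array width */
        for (int block = 0; block < 8; block++) {
            int disk   = block % n_disks;   /* which disk holds the block */
            int offset = block / n_disks;   /* position on that disk */
            printf("logical block %d -> disk %d, physical block %d\n",
                   block, disk, offset);
        }
        return 0;
    }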

14 Key Concepts in Disk Arrays: Redundancy
- Maintain extra information in the disk array: duplication, parity, Reed-Solomon error correction codes, others
- When a disk fails: use the redundancy information to reconstruct the data on the failed disk

15 RAID Levels
- Defined by combinations of striping & redundancy
- RAID Level 1: Mirroring or Shadowing
  - Maintain a complete copy of each disk
  - Very reliable
  - High cost: twice the number of disks
  - Great performance: on a read, may go to the disk with the faster access time
- RAID Level 2: Memory-Style Error Detection and Correction
  - Not really implemented in practice
  - Based on DRAM-style Hamming codes
  - In disk systems, don't need detection; use less expensive correction schemes

16 RAID Levels (cont.)
- RAID Level 3: Fine-grained Interleaving and Parity
  - Many commercial RAIDs
  - Calculate parity bit-wise across the disks in the array (using exclusive-OR logic)
  - Maintain a separate parity disk; update it on write operations
  - When a disk fails, use the other data disks and the parity disk to reconstruct the data on the lost disk (see the sketch below)
  - Fine-grained interleaving: all disks are involved in any access to the array
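
A minimal sketch of the bit-wise parity idea on toy data (stripe width, block size, and contents are assumed): the parity block is the exclusive-OR of the data blocks, and XOR-ing the surviving blocks with the parity block regenerates a lost block.

    /* Illustrative sketch: XOR parity over one stripe, then reconstruction
       of a failed disk. Sizes and contents are assumed toy values. */
    #include <stdio.h>
    #include <string.h>

    #define NDATA 3   /* data disks in the stripe (assumed) */
    #define BLOCK 8   /* bytes per block (assumed) */

    int main(void) {
        unsigned char data[NDATA][BLOCK] = { "block-0", "block-1", "block-2" };
        unsigned char parity[BLOCK] = {0};

        /* Parity = XOR of all data blocks; updated on every write. */
        for (int d = 0; d < NDATA; d++)
            for (int i = 0; i < BLOCK; i++)
                parity[i] ^= data[d][i];

        /* Disk 1 fails: rebuild it from the surviving disks plus parity. */
        unsigned char rebuilt[BLOCK];
        memcpy(rebuilt, parity, BLOCK);
        for (int d = 0; d < NDATA; d++)
            if (d != 1)
                for (int i = 0; i < BLOCK; i++)
                    rebuilt[i] ^= data[d][i];

        printf("reconstructed disk 1: %s\n", (char *)rebuilt);  /* prints "block-1" */
        return 0;
    }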

17 RAID Levels (cont.)
- RAID Level 4: Large Block Interleaving and Parity
  - Similar to level 3, but interleave on larger blocks
  - Small accesses may be satisfied by a single disk
  - Supports a higher rate of small I/Os
  - Parity disk may become a bottleneck with multiple concurrent I/Os
- RAID Level 5: Large Block Interleaving and Distributed Parity
  - Similar to level 4
  - Distributes parity blocks throughout all disks in the array (see the placement sketch below)
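
A small sketch of how level 5 spreads parity around (the rotation rule shown here is one common convention, assumed rather than taken from the slide): each stripe places its parity block on a different disk, so no single disk absorbs every parity update.

    /* Illustrative sketch: rotating parity placement in a RAID level 5 array.
       The particular rotation rule and array width are assumed values. */
    #include <stdio.h>

    int main(void) {
        int n_disks = 5;   /* assumed array width */
        for (int stripe = 0; stripe < 5; stripe++) {
            int parity_disk = (n_disks - 1) - (stripe % n_disks);
            printf("stripe %d: parity on disk %d, data on the other %d disks\n",
                   stripe, parity_disk, n_disks - 1);
        }
        return 0;
    }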

18 RAID Levels (cont.)
- RAID Level 6: Reed-Solomon Error Correction Codes
  - Protection against two disk failures
- Disks are getting so cheap: consider massive storage systems composed entirely of disks. No tape!!

19 DPSS: Distributed Parallel Storage System
- Produced by Lawrence Berkeley National Laboratory
- Cache: provides storage that is faster than typical local disk, and temporary
- Virtual disk: appears to be a single large, random-access, block-oriented I/O device
- Isolates the application from the tertiary storage system: acts as a large buffer between slow tertiary storage and high-performance network connections (impedance matching)

20 Features of DPSS
- Components:
  - DPSS block servers: typically low-cost workstations, each with several disk controllers and several disks per controller
  - DPSS master process: data requests are sent from the client to the master process, which determines which DPSS block server stores the requested blocks and forwards the request to that block server (see the sketch below)
- Note: servers can be anywhere on the network (a distributed cache)
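
A minimal sketch of the request path described above (the lookup rule, server names, and round-robin placement are assumptions for illustration, not the actual DPSS implementation): the master maps each requested logical block to the block server that holds it and forwards the request there.

    /* Illustrative sketch of the master's role: map a requested logical block
       to a block server and forward the request. The server names and the
       round-robin placement rule are assumed for illustration. */
    #include <stdio.h>

    int main(void) {
        const char *servers[] = { "server-a", "server-b", "server-c" };
        int  n_servers = 3;
        long requested_blocks[] = { 0, 1, 2, 7, 42 };

        for (int i = 0; i < 5; i++) {
            long b = requested_blocks[i];
            int  s = (int)(b % n_servers);   /* assumed round-robin layout */
            printf("block %ld -> forward request to %s\n", b, servers[s]);
        }
        return 0;
    }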

21 Features of DPSS (cont.)
- Client API library supports a variety of I/O semantics: dpssOpen(), dpssRead(), dpssWrite(), dpssLSeek(), dpssClose() (see the hypothetical usage sketch below)
- Application controls data layout in the cache: for typical applications that read sequentially, stripe blocks of data across servers in round-robin fashion
- DPSS client library is multi-threaded: the number of client threads is equal to the number of DPSS servers, so client speed scales with server speed
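
The slide lists the call names but not their signatures, so the following is only a hypothetical usage sketch: the header name, argument lists, flag constant, and file name are assumptions; only the function names come from the slide.

    /* Hypothetical usage sketch of the DPSS client API named above.
       The header name, signatures, and DPSS_RDONLY flag are assumptions;
       only the function names appear on the slide. */
    #include <stdio.h>
    #include "dpss.h"   /* assumed header name */

    int main(void) {
        int fd = dpssOpen("experiment/run42.dat", DPSS_RDONLY);  /* assumed signature */
        if (fd < 0) {
            fprintf(stderr, "dpssOpen failed\n");
            return 1;
        }

        char buf[64 * 1024];
        dpssLSeek(fd, 0);                          /* assumed: seek to start */
        long n = dpssRead(fd, buf, sizeof buf);    /* assumed signature */
        printf("read %ld bytes from the cache\n", n);

        dpssClose(fd);
        return 0;
    }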

22 Features of DPSS (cont.)
- Optimized for a relatively small number of large files: several thousand files, each greater than 50 MB
- DPSS blocks are available as soon as they are placed in the cache: good for staging large files to/from tertiary storage; don't have to wait for a large transfer to complete
- Dynamically reconfigurable: add or remove servers or disks on the fly

23 Features of DPSS (cont.)
- Agent-based performance monitoring system
- Client library automatically sets the TCP buffer size to the optimal value, using information published by the monitoring system (see the sketch below)
- Load balancing: supports replication of files on multiple servers; the DPSS master uses status information stored in an LDAP directory to select the replica that will give the fastest response
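
The "optimal value" for the TCP buffer is typically the bandwidth-delay product of the path between client and server; a small worked sketch (the link speed and round-trip time are assumed example numbers):

    /* Illustrative sketch: TCP buffer size as the bandwidth-delay product.
       Link speed and round-trip time are assumed example values. */
    #include <stdio.h>

    int main(void) {
        double bandwidth_bits_per_sec = 622e6;   /* assumed OC-12 path */
        double rtt_seconds            = 0.05;    /* assumed 50 ms round trip */
        double buffer_bytes = bandwidth_bits_per_sec * rtt_seconds / 8.0;
        printf("TCP buffer: about %.1f Mbytes\n", buffer_bytes / (1024.0 * 1024.0));
        return 0;
    }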

24 Hierarchical Storage System
- Fast disk cache in front of larger, slower storage
- Works on the same principle as other hierarchies:
  - Level-1 and Level-2 caches: minimize off-chip memory accesses
  - Virtual memory systems: minimize page faults to disk
- Goal: keep popular material in faster storage; keep most of the material on cheaper, slower storage. Locality: 10% of the material gets 90% of the accesses (see the sketch below)
- Problem with tertiary storage (especially tape): very slow; tape seek times can be a minute or more…
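
The locality claim can be turned into an average access time with the usual weighted-latency formula (the hit rate and device latencies below are assumed example numbers): effective time = hit_rate × cache_time + (1 − hit_rate) × tape_time.

    /* Illustrative sketch: effective access time of a disk cache in front of
       tape. Hit rate and latencies are assumed example values. */
    #include <stdio.h>

    int main(void) {
        double hit_rate     = 0.90;    /* assumed: 90% of accesses hit the disk cache */
        double disk_seconds = 0.010;   /* assumed disk access time */
        double tape_seconds = 60.0;    /* assumed tape mount + seek time */
        double effective = hit_rate * disk_seconds + (1.0 - hit_rate) * tape_seconds;
        printf("average access time: %.2f seconds\n", effective);   /* about 6 s */
        return 0;
    }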

