
1 Stork: State of the Art
Tevfik Kosar, Computer Sciences Department, University of Wisconsin-Madison
kosart@cs.wisc.edu
http://www.cs.wisc.edu/condor/stork

2 The Imminent Data "Deluge"
Moore's Law is being outpaced by the growth of scientific data.
Exponential growth of scientific data:
- 2000: ~0.5 Petabytes
- 2005: ~10 Petabytes
- 2010: ~100 Petabytes
- 2015: ~1000 Petabytes
"I am terrified by terabytes" -- Anonymous
"I am petrified by petabytes" -- Jim Gray

3 Data-intensive application domains
- Bioinformatics: BLAST
- High Energy Physics: LHC
- Astronomy: LSST, 2MASS, SDSS, DPOSS, GSC-II, WFCAM, VISTA, NVSS, FIRST, GALEX, ROSAT, OGLE, ...
- Educational Technology: WCER EVP
Per-project data rates quoted on the slide: 500 TB/year, 2-3 PB/year, 11 PB/year, and 20 TB - 1 PB/year.

4 How do we access and process distributed data at the terabyte and petabyte scale?

5 I/O Management in History
Hardware level: bus, CPU, disk, memory, I/O processor, controller, DMA.

6 I/O Management in History
Hardware level: bus, CPU, disk, memory, I/O processor, controller, DMA.
Operating systems level: I/O control system, CPU scheduler, I/O scheduler, I/O subsystem.

7 I/O Management in History
Hardware level: bus, CPU, disk, memory, I/O processor, controller, DMA.
Operating systems level: I/O control system, CPU scheduler, I/O scheduler, I/O subsystem.
Distributed systems level: batch schedulers.

8 I/O Management in History
Hardware level: bus, CPU, disk, memory, I/O processor, controller, DMA.
Operating systems level: I/O control system, CPU scheduler, I/O scheduler, I/O subsystem.
Distributed systems level: batch schedulers and a data placement scheduler.

9 Individual Jobs vs. Compute and Data Placement Jobs
Traditional approach: each job (job i, job j, job k, ...) does everything itself: allocate space for input and output data, stage-in, execute, stage-out, release input space, release output space.
Separated approach: the same steps become distinct jobs: data placement jobs allocate space, stage-in (get), stage-out (put), and release input and output space, while compute jobs only execute (e.g. job j).

10 Separation of Jobs
The workflow manager reads a DAG specification that mixes data placement jobs and compute jobs, e.g.:
  Data A A.stork
  Data B B.stork
  Job C C.condor
  .....
  Parent A child B
  Parent B child C
  Parent C child D, E
  .....
It then routes each ready node of the workflow (A, B, C, D, E, F) to the right scheduler: compute jobs (e.g. C) go to the compute job queue, data placement jobs (e.g. E) go to the data job queue.
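To make the routing concrete, here is a minimal Python sketch of how a workflow manager could walk such a DAG and send each ready node to the right queue. It is illustrative only, not DAGMan or Stork code; the parser, node names, and queue labels are simplified assumptions based on the slide.

from collections import deque

# A toy DAG spec in the style shown on the slide (node names from the slide).
dag_spec = """
Data A A.stork
Data B B.stork
Job  C C.condor
Parent A child B
Parent B child C
"""

def parse_dag(text):
    """Parse node declarations and Parent/child edges from a DAG-style spec."""
    nodes, children, parents = {}, {}, {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] in ("Data", "Job"):
            kind, name, submit_file = parts[0], parts[1], parts[2]
            nodes[name] = (kind, submit_file)
            children.setdefault(name, [])
            parents.setdefault(name, 0)
        elif parts[0].lower() == "parent":
            # e.g. "Parent A child B" adds the edge A -> B
            parent, child = parts[1], parts[3]
            children[parent].append(child)
            parents[child] += 1
    return nodes, children, parents

def dispatch(nodes, children, parents):
    """Release nodes whose parents are done; route each to the right scheduler."""
    ready = deque(n for n, deg in parents.items() if deg == 0)
    while ready:
        name = ready.popleft()
        kind, submit_file = nodes[name]
        queue = "data job queue (Stork)" if kind == "Data" else "compute job queue (Condor)"
        print(f"submit {submit_file} ({name}) to {queue}")
        for child in children[name]:
            parents[child] -= 1
            if parents[child] == 0:
                ready.append(child)

nodes, children, parents = parse_dag(dag_spec)
dispatch(nodes, children, parents)

Running the sketch submits A.stork and B.stork to the data job queue and C.condor to the compute job queue, in dependency order.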

11 Stork: Data Placement Scheduler
- The first scheduler specialized for data movement and placement.
- De-couples data placement from computation.
- Understands the characteristics and semantics of data placement jobs.
- Can make smart scheduling decisions for reliable and efficient data placement.
- A prototype is already implemented and deployed at several sites.
- Now distributed with the Condor developers release v6.7.6.
http://www.cs.wisc.edu/condor/stork

12 Support for Heterogeneity
- Provides uniform access to different data storage systems and transfer protocols.
- Acts as an I/O control system (IOCS) for distributed systems.
- Multilevel policy support.
- Protocol translation: using the Stork disk cache or the Stork memory buffer.
Example job description:
  [
    Type = "Transfer";
    Src_Url = "srb://ghidorac.sdsc.edu/kosart.condor/x.dat";
    Dest_Url = "nest://turkey.cs.wisc.edu/kosart/x.dat";
    ......
    Max_Retry = 10;
    Restart_in = "2 hours";
  ]
[ICDCS'04]
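As a rough illustration of the protocol-translation idea (not Stork's actual module interface), the following Python sketch stages a file through a local disk cache when the source and destination speak different protocols. The srb and nest schemes come from the example above; the fetch/push functions are hypothetical placeholders.

import tempfile

# Hypothetical single-protocol transfer tools keyed by URL scheme.
# In Stork these would be pluggable transfer modules (srb, nest, gsiftp, ...).
def fetch_srb(src_url, local_path):  print(f"srb get  {src_url} -> {local_path}")
def push_nest(local_path, dest_url): print(f"nest put {local_path} -> {dest_url}")

FETCHERS = {"srb": fetch_srb}
PUSHERS  = {"nest": push_nest}

def scheme(url):
    return url.split("://", 1)[0]

def transfer(src_url, dest_url):
    """Translate between protocols by staging through a local disk cache."""
    src_proto, dst_proto = scheme(src_url), scheme(dest_url)
    if src_proto == dst_proto:
        print(f"direct {src_proto} transfer {src_url} -> {dest_url}")
        return
    cache_file = tempfile.NamedTemporaryFile(prefix="stork-cache-", delete=False)
    FETCHERS[src_proto](src_url, cache_file.name)   # stage-in to the cache
    PUSHERS[dst_proto](cache_file.name, dest_url)   # stage-out from the cache

transfer("srb://ghidorac.sdsc.edu/kosart.condor/x.dat",
         "nest://turkey.cs.wisc.edu/kosart/x.dat")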

13 Dynamic Protocol Selection
The preferred protocol and the allowed alternatives can be given explicitly:
  [
    dap_type = "transfer";
    src_url = "drouter://slic04.sdsc.edu/tmp/test.dat";
    dest_url = "drouter://quest2.ncsa.uiuc.edu/tmp/test.dat";
    alt_protocols = "gsiftp-gsiftp, nest-nest";
  ]
or Stork can be left to pick any available protocol:
  [
    src_url = "any://slic04.sdsc.edu/tmp/test.dat";
    dest_url = "any://quest2.ncsa.uiuc.edu/tmp/test.dat";
  ]
In the experiment, transfers continued across a DiskRouter crash and resume by switching protocols. Traditional scheduler: 48 Mb/s. Using Stork: 72 Mb/s. [ICDCS'04]
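A minimal Python sketch of the fallback behavior behind alt_protocols: try the preferred protocol pair first, then each alternative in turn when a transfer fails. The per-protocol transfer functions are hypothetical stand-ins, not Stork's real transfer modules; the drouter failure here just mimics the DiskRouter crash in the experiment.

# Hypothetical per-protocol transfer functions; each returns True on success.
def transfer_drouter(src, dst): raise ConnectionError("DiskRouter is down")
def transfer_gsiftp(src, dst):  print(f"gsiftp {src} -> {dst}"); return True
def transfer_nest(src, dst):    print(f"nest   {src} -> {dst}"); return True

PROTOCOLS = {
    "drouter-drouter": transfer_drouter,
    "gsiftp-gsiftp":   transfer_gsiftp,
    "nest-nest":       transfer_nest,
}

def run_transfer(src, dst, preferred, alt_protocols):
    """Try the preferred protocol pair first, then each alternative in order."""
    for proto in [preferred] + alt_protocols:
        try:
            if PROTOCOLS[proto](src, dst):
                print(f"transfer completed using {proto}")
                return proto
        except Exception as err:
            print(f"{proto} failed ({err}), trying next alternative")
    raise RuntimeError("all protocols failed")

run_transfer("slic04.sdsc.edu/tmp/test.dat",
             "quest2.ncsa.uiuc.edu/tmp/test.dat",
             preferred="drouter-drouter",
             alt_protocols=["gsiftp-gsiftp", "nest-nest"])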

14 Run-time Auto-tuning
Example of tuned transfer parameters for a link:
  [
    link = "slic04.sdsc.edu - quest2.ncsa.uiuc.edu";
    protocol = "gsiftp";
    bs = 1024KB;      // I/O block size
    tcp_bs = 1024KB;  // TCP buffer size
    p = 4;            // number of parallel streams
  ]
GridFTP transfers before tuning: parallelism = 1, block_size = 1 MB, tcp_bs = 64 KB.
After tuning: parallelism = 4, block_size = 1 MB, tcp_bs = 256 KB.
Traditional scheduler (without tuning): 0.5 MB/s. Using Stork (with tuning): 10 MB/s. [AGridM'03]
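The kind of heuristic an auto-tuner might apply can be sketched as follows: size the TCP buffer toward the link's bandwidth-delay product and cover the remainder with parallel streams. This is an illustrative calculation under assumed link figures, not the tuning algorithm from [AGridM'03].

def tune(link_bandwidth_mbps, rtt_ms, max_tcp_bs_kb=256, max_streams=8):
    """Pick TCP buffer size and stream count from the bandwidth-delay product."""
    # Bandwidth-delay product: the amount of data "in flight" needed to fill the pipe.
    bdp_kb = link_bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1000) / 1024

    # A single stream can only use up to the OS-limited TCP buffer size.
    tcp_bs_kb = min(bdp_kb, max_tcp_bs_kb)

    # Use enough parallel streams so that streams * buffer roughly covers the BDP.
    streams = min(max_streams, max(1, round(bdp_kb / tcp_bs_kb)))
    return int(tcp_bs_kb), streams

# Hypothetical wide-area link: ~100 Mb/s with ~80 ms round-trip time.
tcp_bs, p = tune(link_bandwidth_mbps=100, rtt_ms=80)
print(f"tcp_bs = {tcp_bs} KB, parallel streams = {p}")

With these assumed link figures the sketch lands on tcp_bs = 256 KB and 4 streams, which happens to match the tuned values above.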

15 Controlling Throughput
- Increasing concurrency/parallelism does not always increase the transfer rate.
- The effect differs between local area and wide area networks.
- Concurrency and parallelism have slightly different impacts on transfer rate.
(Plots: wide area vs. local area.) [Europar'04]

16 Controlling CPU Utilization
Concurrency and parallelism have totally opposite impacts on CPU utilization at the server side.
(Plots: client-side vs. server-side CPU utilization.) [Europar'04]

17 Detecting and Classifying Failures
When a transfer fails, a chain of checks is run to pinpoint the cause: check the DNS server, the DNS entry, the network, the host, the protocol, the user's credentials, and the source file, and finally run a test transfer.
Each failing check yields a specific error (DNS server error, no DNS entry, network outage, host down, protocol unavailable, not authenticated, source file does not exist, transfer failed), which is classified as transient or permanent and handed to the policies.
[Grid'04]
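A minimal Python sketch of such a check chain: run probes in order and map the first failing probe to an error class, which a policy can then act on. The probe functions are hypothetical placeholders, and the transient/permanent labels below are illustrative assumptions rather than the classification from [Grid'04].

# Hypothetical probes; each returns True if the check passes.
def dns_server_up(host):             return True
def dns_entry_exists(host):          return True
def network_reachable(host):         return True
def host_up(host):                   return True
def protocol_available(host, proto): return True
def credentials_valid():             return False   # pretend the proxy expired
def source_file_exists(url):         return True

def classify_failure(host, proto, src_url):
    """Walk the check chain and return (error, kind) for the first failing probe."""
    checks = [
        (lambda: dns_server_up(host),             "DNS server error",           "transient"),
        (lambda: dns_entry_exists(host),          "no DNS entry",               "permanent"),
        (lambda: network_reachable(host),         "network outage",             "transient"),
        (lambda: host_up(host),                   "host down",                  "transient"),
        (lambda: protocol_available(host, proto), "protocol unavailable",       "transient"),
        (lambda: credentials_valid(),             "not authenticated",          "permanent"),
        (lambda: source_file_exists(src_url),     "source file does not exist", "permanent"),
    ]
    for probe, error, kind in checks:
        if not probe():
            return error, kind
    return "transfer failed", "transient"   # everything checks out: retry the transfer

error, kind = classify_failure("slic04.sdsc.edu", "gsiftp",
                               "gsiftp://slic04.sdsc.edu/tmp/test.dat")
print(f"failure cause: {error} ({kind})")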

18 Detecting Hanging Transfers
- Collect job execution time statistics and fit a distribution.
- Detect and avoid black holes and hanging transfers.
- E.g. for a normal distribution, 99.7% of job execution times should lie within [avg - 3*stdev, avg + 3*stdev]; in the example the 99.7% cutoff is 15.8 minutes.
[Cluster'04]
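A minimal Python sketch of the 3-sigma rule above: estimate mean and standard deviation from past transfer times and flag a running transfer that exceeds avg + 3*stdev as a likely hang. The history values are made up for illustration.

import statistics

def hang_threshold(durations_min):
    """Upper bound of the 99.7% interval for normally distributed run times."""
    avg = statistics.mean(durations_min)
    stdev = statistics.stdev(durations_min)
    return avg + 3 * stdev

def is_hanging(elapsed_min, durations_min):
    """Flag a running transfer whose elapsed time exceeds the 3-sigma cutoff."""
    return elapsed_min > hang_threshold(durations_min)

# Hypothetical history of completed transfer times for this link (minutes).
history = [9.5, 10.2, 11.0, 10.8, 9.9, 10.5, 11.3, 10.1]

print(f"cutoff = {hang_threshold(history):.1f} min")
print("transfer at 25 min is hanging:", is_hanging(25, history))
print("transfer at 12 min is hanging:", is_hanging(12, history))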

19 Stork can also:
- Allocate and de-allocate (optical) network links
- Allocate and de-allocate storage space
- Register and un-register files to a Meta Data Catalog
- Locate the physical location of a logical file name
- Control concurrency levels on storage servers
For details, see [ICDCS'04], [JPDC'05], [AGridM'03].

20 Applying Stork to Real-Life Applications

21 DPOSS Astronomy Pipeline

22 Failure Recovery
The DPOSS pipeline recovered from several failures during the run: UniTree not responding, DiskRouter reconfigured and restarted, an SDSC cache reboot and a UW CS network outage, and a software problem.

23 End-to-end Processing of 3 TB of DPOSS Astronomy Data
Traditional scheduler: 2 weeks. Using Stork: 6 days.

24 Summary
- Stork provides solutions for the data placement needs of the Grid community.
- It is ready to fly: now distributed with the Condor developers release v6.7.6.
- All the basic features you will need are included in the initial release.
- More features are coming in future releases.

25 Thank you for listening. Questions?

