Deadline-based Grid Resource Selection for Urgent Computing Nick Trebon 6/18/08 University of Chicago.

2 Deadline-Based Grid Resource Selection
MS Committee: Dr. Ian Foster, Dr. Pete Beckman (ANL), Dr. Rick Stevens
Special thanks to Dr. Rich Wolski and Dan Nurmi (UCSB)

3 University of Chicago Deadline-Based Grid Resource Selection Outline Introduction  Introduction to SPRUCE  Problem Statement  Literature Review Methodology  Bounding Total Turnaround Time  Other Considerations Case Study (6 Experiments)  Results and Analysis Conclusions and Contributions Future Work

4 Introduction: SPRUCE
Urgent computations are defined by a strict deadline -- late results are useless -- and must be kept in “warm standby,” ready to run when and where needed.
SPRUCE provides token-based, priority access for urgent computing jobs:
 Elevated batch queue priority
 Resource reservations (HARC - prototype implemented)
 Elevated network bandwidth priority (NLR - prototype implemented)
Resources respond to urgent requests based on local policy.

5 Problem Statement
 Given an urgent computation and a deadline, how does one select the “best” resource?
 “Best”: the resource that provides the configuration most likely to meet the deadline.
 Configuration: a specification of the runtime parameters for an urgent computation on a given resource, e.g., number of CPUs, input/output repository, priority.
 Cost function (priority): normal priority incurs no additional cost; SPRUCE priority costs a token plus intrusiveness to other users.

6 Literature Review: Grid Resource Selection
 J. Schopf, “Grid Resource Management: State of the Art and Future Trends,” Chapter 2, Springer.
 B. K. Alunkal, et al., “Reputation-based grid resource selection,” Proceedings of AGridM.
 R. Raman, et al., “Matchmaking: Distributed resource management for throughput computing,” Symposium on High Performance Distributed Computing.
 R. Buyya, et al., “Economic models for resource management and scheduling in grid computing,” Concurrency and Computation: Practice and Experience, vol. 14.
 D. Nurmi, et al., “Evaluation of a Workflow Scheduler Using Integrated Performance Modelling and Batch Queue Wait Time Prediction,” SuperComputing, 2006.

7 Literature Review, cont.
Queue Delay
 J. Brevik, et al., “Predicting bounds on queuing delay for batch-scheduled parallel machines,” in PPoPP '06: Proceedings of the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, New York, NY, USA, 2006, ACM Press.
File Transfer Delay
 R. Wolski, “Dynamically forecasting network performance using the network weather service,” Cluster Computing.
 R. Wolski, “Experiences with predicting resource performance on-line in computational grid settings,” SIGMETRICS Perform. Eval. Rev., vol. 30, no. 4, pp. 41–49.
 M. Allen, et al., “Comparing network measurement time-series,” in MetroGrid Workshop, October.
Performance Modeling
 A. Mandal, et al., “Scheduling Strategies for Mapping Application Workflows onto the Grid,” in 14th IEEE Symposium on High Performance Distributed Computing (HPDC14), 2005.

8 Outline Introduction  Introduction to SPRUCE  Problem Statement  Literature Review Methodology  Bounding Total Turnaround Time  Other Considerations Case Study (6 Experiments)  Results and Analysis Conclusions and Contributions Future Work

9 Probabilistic Bounds on Total Turnaround Time
Each phase is characterized by a quantile Q and a corresponding delay bound B:
 Input Phase: (I_Q, I_B)
 Resource Allocation Phase: (R_Q, R_B)
 Execution Phase: (E_Q, E_B)
 Output Phase: (O_Q, O_B)
If each phase is independent, then:
 Overall bound = I_B + R_B + E_B + O_B
 Overall quantile ≥ I_Q × R_Q × E_Q × O_Q
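The composition rule above can be sketched in a few lines of Python; the phase quantiles and bounds below are illustrative numbers, not measurements from the case study.

```python
def compose_phases(phases):
    """Combine independent per-phase (quantile, bound) pairs into an
    overall turnaround-time bound: bounds add, quantiles multiply."""
    overall_bound = sum(bound for _, bound in phases)
    overall_quantile = 1.0
    for quantile, _ in phases:
        overall_quantile *= quantile
    return overall_quantile, overall_bound

# Input, resource allocation, execution, output -- each at the 0.95 quantile:
q, b = compose_phases([(0.95, 600), (0.95, 1200), (0.95, 3600), (0.95, 600)])
```

Note how quickly the composite quantile falls: four phases bounded at the 0.95 quantile guarantee only a 0.95^4 ≈ 0.81 composite quantile, even though each individual phase looks tight.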

10 I_B + R_B + E_B + O_B: File Staging Delay
 The methodology is the same for input and output file staging.
 Utilize the Network Weather Service (NWS) to generate bandwidth predictions based upon probes.
 Wolski, et al.: use the forecaster's MSE as the sample variance of a normal distribution to generate an upper confidence interval.
Problems?
 Predicting bandwidth for output file transfers
 NWS uses TCP-based probes, while the actual transfers utilize GridFTP
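The confidence-interval construction from Wolski et al. can be sketched as follows; the staging-time bound comes from the lower confidence limit on bandwidth. The file size, forecast, and MSE values here are made up for illustration.

```python
import math

Z_95 = 1.96  # two-sided 95% normal quantile

def staging_time_bound(size_mb, forecast_mbps, mse, z=Z_95):
    """Treat the forecaster's MSE as the sample variance of a normal
    error distribution, take the lower confidence limit on bandwidth,
    and divide the file size by it to upper-bound the staging delay."""
    bw_lower = forecast_mbps - z * math.sqrt(mse)
    if bw_lower <= 0:
        raise ValueError("confidence interval includes zero bandwidth")
    return size_mb / bw_lower

# 3.9 GB file, 40 MB/s forecast, MSE of 25 (MB/s)^2 (all illustrative):
bound_s = staging_time_bound(3900.0, 40.0, 25.0)
```

With these numbers the lower confidence limit on bandwidth is 40 - 1.96 × 5 = 30.2 MB/s, so the staging bound is about 129 seconds.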

11 I_B + R_B + E_B + O_B: Resource Allocation Delay
Normal priority (no SPRUCE)
 Utilize the existing Queue Bounds Estimation from Time Series (QBETS).
SPRUCE policies
 Next-to-run: modified QBETS, a Monte Carlo simulation on previously observed job history.
 Preemption: utilize the Binomial Method Batch Predictor (BMBP) to predict bounds based upon preemption history.
 Other possibilities: elevated priority, checkpoint/restart.
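A non-parametric order-statistic bound in the spirit of BMBP can be sketched as below. This is a simplified reading of the binomial method, not the authors' implementation: the k-th smallest of n i.i.d. observed delays upper-bounds the q-quantile with confidence P[Binomial(n, q) < k], so we return the smallest order statistic whose confidence reaches the target.

```python
from math import comb

def bmbp_quantile_bound(history, q, confidence):
    """Return an upper bound on the q-quantile of the delay distribution
    at the given confidence level, using only the observed history.
    The k-th smallest of n observations works whenever
    P[Binomial(n, q) < k] >= confidence."""
    xs = sorted(history)
    n = len(xs)
    cum = 0.0
    for k in range(1, n + 1):
        # Add P[Binomial(n, q) = k - 1] to the running CDF.
        cum += comb(n, k - 1) * q ** (k - 1) * (1 - q) ** (n - k + 1)
        if cum >= confidence:
            return xs[k - 1]
    raise ValueError("history too short for requested quantile/confidence")
```

For example, with 100 observed delays, an upper bound on the 0.95 quantile at 95% confidence is the 99th smallest observation.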

12 Resource Allocation Concerns
Resource allocation bounds are predicted entirely from historical data, not the current queue state.
 What if the necessary nodes are immediately available?
 What if it is clear that the bounds may be exceeded? Example: higher-priority jobs are already queued.
Solution: query the Monitoring and Discovery Services framework (Globus) to return the current queue state.

13 I_B + R_B + E_B + O_B: Execution Delay
Approach: generate an empirical bound for a given urgent application on each warm standby resource.
 Utilize the BMBP methodology to generate probabilistic bounds.
 The methodology is general and non-parametric.

14 Improving the Probability
The composite quantile quickly decreases as phases multiply (e.g., four phases at the 0.95 quantile compose to only 0.95^4 ≈ 0.81).
Overlapping phases
 E.g., stage input files while the job is queued.
Speculative execution
 Initiate two independent configurations.
 Pr(C_1 or C_2) = Pr(C_1) + Pr(C_2)(1 - Pr(C_1))
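The speculative-execution formula is inclusion-exclusion for two independent configurations; a one-line sketch, with illustrative probabilities:

```python
def speculative_success(p1, p2):
    """Probability that at least one of two independently launched
    configurations meets the deadline (independence assumed):
    Pr(C1 or C2) = Pr(C1) + Pr(C2) * (1 - Pr(C1))."""
    return p1 + p2 * (1.0 - p1)

# Two configurations that each meet the deadline with probability 0.81:
p = speculative_success(0.81, 0.81)
```

Launching a second 0.81-probability configuration lifts the overall success probability to about 0.96, at the cost of running the job twice.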

15 Overlapping Phases Example
Input file staging
 Pr(files staged < 10 minutes) = 0.90
 Pr(files staged < 20 minutes) = 0.95
 Pr(files staged < 30 minutes) = 0.99
Queue delay
 Pr(job begins in < 10 minutes) = 0.75
 Pr(job begins in < 20 minutes) = 0.85
 Pr(job begins in < 30 minutes) = 0.95
Serial: Pr(job begins in < 30 minutes) ≤ 0.90 * 0.85 = 0.765
Overlap: Pr(job begins in < 30 minutes) ≤ 0.95 * 0.99 = 0.94
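The serial-versus-overlap arithmetic can be checked directly, with the quantile tables copied from the example above:

```python
# Pr(phase delay < t minutes), from the example.
input_q = {10: 0.90, 20: 0.95, 30: 0.99}
queue_q = {10: 0.75, 20: 0.85, 30: 0.95}

# Serial: stage files within 10 minutes, then queue for at most 20 more.
serial = input_q[10] * queue_q[20]

# Overlapped: staging runs while the job is queued, so both phases
# get the full 30-minute window.
overlap = input_q[30] * queue_q[30]
```

For the same 30-minute deadline, the serial schedule guarantees only 0.765 while the overlapped schedule guarantees 0.9405.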

16 Calculating the Probability
Goal: determine a probabilistic upper bound for a given configuration.
Approximate approach: query a small subset (e.g., 5) of quantiles for each individual phase and select the combination that yields the highest composite quantile with a bound below the deadline.
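The approximate approach can be implemented as a brute-force search over the small per-phase quantile tables. The tables and deadline below are illustrative, not measurements from the case study.

```python
from itertools import product

def best_configuration(phase_tables, deadline):
    """Each phase exposes a small table of (quantile, bound-in-seconds)
    pairs; pick the per-phase combination whose summed bound fits under
    the deadline and whose composite quantile (product of per-phase
    quantiles, assuming independence) is highest."""
    best = None  # (composite_quantile, total_bound, choice)
    for choice in product(*phase_tables):
        total = sum(bound for _, bound in choice)
        if total >= deadline:
            continue
        q = 1.0
        for quantile, _ in choice:
            q *= quantile
        if best is None or q > best[0]:
            best = (q, total, choice)
    return best

# Illustrative (quantile, bound) tables for input, queue, execution, output:
input_t  = [(0.99, 900), (0.95, 600), (0.90, 450)]
queue_t  = [(0.95, 7200), (0.75, 3600), (0.50, 1800)]
exec_t   = [(0.99, 4000), (0.90, 3600)]
output_t = [(0.99, 900), (0.95, 600)]
best = best_configuration([input_t, queue_t, exec_t, output_t], deadline=10000)
```

With these numbers the search must drop the queue phase to its 0.75 quantile to fit the deadline, giving a total bound of 9,400 seconds and a composite quantile of 0.99^3 × 0.75 ≈ 0.73.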

17 Outline Introduction  Introduction to SPRUCE  Problem Statement  Literature Review Methodology  Bounding Total Turnaround Time  Other Considerations Case Study (6 Experiments)  Results and Analysis Conclusions and Contributions Future Work

18 Case Study
Application: flash2
File staging requirements (modeled after a LEAD workflow):
 Input file: 3.9 GB
 Input repositories: Indiana and UCAR
 Output file: 3.9 GB
 Output repository: Indiana

19 Case Study Resources
UC/ANL supports normal, next-to-run and preemption priorities; Mercury and Tungsten support normal priority only.

Name | CPU | Total Nodes | Total CPUs | Req. Nodes | Req. PPN
UC/ANL-IA64 | Itanium 2 (1.3/1.5 GHz) | | | |
Mercury | Itanium 2 (1.5 GHz) | | | |
Tungsten | Intel Xeon (3.5 GHz) | | | |

20 NWS Probes v. GridFTP Transfers

21 GridFTP Probe Framework
Purpose: generate probe measurements whose behavior matches that of large GridFTP transfers.
Probes:
 Tuned for each source, destination pair
 Sent every 15 minutes
 Initiated as third-party transfers
 Measurements stored directly into NWS
Results:
 Greater similarity between probe and transfer behavior
 Correctness and accuracy of the predicted bounds are discussed later

22 Experiment 1
Purpose: validate the methodology for bounding GridFTP transfers.
Methodology:
 For each source, destination pair, generate a bound and then initiate a transfer.
 Complete at least 100 trials and evaluate the success of the bounds.

23 Experiment 1 Results
Table 1: Success rate for predicted bounds on GridFTP transfers

Source | Destination | # Trials | % > bound (b/w) | Avg % error
Indiana | UC/ANL | | |
Indiana | Mercury | | |
Indiana | Tungsten | | |
UCAR | UC/ANL | | |
UCAR | Mercury | | |
UCAR | Tungsten | | |
UC/ANL | Indiana | | |
Mercury | Indiana | | |

24 Experiment 1 Conclusions
 The GridFTP probes combined with NWS forecasting allow for both correct and accurate bounds.
 The trials originating from UCAR are omitted from later experiments.

25 Experiment 2
Purpose:
 1. Validate the composite bounding methodology for normal priority.
 2. Evaluate the performance of the BMBP methodology for bounding execution delay.
Methodology:
 Generate individual phase bounds, followed by execution of the actual configuration.
 For each of the three resources, utilizing the Indiana input and normal priority, complete approximately 100 trials.
 Compare predictions with actual performance.

26 Experiment 2 Results

27 Experiment 2 Results, cont.
Table 2: Success rate for individual and composite phases

Resource | # Trials | Input | Queue | Exec. | Output | Overall
UC/ANL-IA64 | | | 97% | 100% | 83% | 97%
Mercury | 100 | 99% | | 93% | 86% | 100%
Tungsten | 100 | 88% | 99% | 96% | 77% | 99%

Table 3: Percent overprediction for each phase

Resource | Input | Queue | Exec. | Output | Overall
UC/ANL-IA64 | 8% | 2,379% | 7% | 22% | 65%
Mercury | 10% | 4,468% | 4% | 5% | 1,417%
Tungsten | 11% | 4,811% | 13% | -5% | 1,842%

28 Experiment 2 Conclusions
 Utilizing the BMBP methodology for execution bounds performs well.
 Composite bounds are correct, but not accurate.

Table 3: Average delay and bounds (seconds)

Resource | Avg. Delay | Avg. Bound
UC/ANL-IA64 | 10,833 | 17,869
Mercury | 8,696 | 131,951
Tungsten | 7,631 | 148,233

29 Experiment 3
Purpose: validate the composite bounding methodology for next-to-run priority.
Methodology:
 Generate individual bounds, followed by execution of the actual configuration.
 Conduct approximately 100 trials utilizing the UC/ANL resource, Indiana input and next-to-run priority.
 Emulate the execution delay.
 Compare predictions with actual performance.

30 Experiment 3 Results

31 Experiment 3 Results, cont.
Table 4: Success rate of individual and composite phases

Resource | # Trials | Input | Queue | Exec. | Output | Overall
UC/ANL-IA64 | | | 92% | 91% | 92% | 94%

Table 5: Percent overprediction for each phase

Resource | Input | Queue | Exec. | Output | Overall
UC/ANL-IA64 | 8% | 46% | 5% | 41% | 11%

32 Experiment 3 Conclusions
 The next-to-run bounds predictor performs well.
 Composite bounds are much tighter than in the normal priority cases.
 Average delay: 11,562 seconds
 Average bound: 12,966 seconds

33 Experiment 4
Purpose: validate the composite bounding methodology for preemption priority.
Methodology:
 Generate individual phase bounds, followed by execution of the actual configuration.
 Conduct approximately 100 trials utilizing the UC/ANL resource, Indiana input and preemption policy.
 Emulate the execution delay.
 Preempt a “dummy” job.
 Compare predictions with actual performance.

34 Experiment 4 Results

35 Experiment 4 Results, cont.
Table 6: Success rate for individual and composite phases

Resource | # Trials | Input | Queue | Exec. | Output | Overall
UC/ANL-IA64 | | | 94% | 93% | 85% | 95%

Table 7: Percent overprediction for each phase

Resource | Input | Queue | Exec. | Output | Overall
UC/ANL-IA64 | 6% | 75% | 4% | 7% | 6%

36 Experiment 4 Conclusions
 The bounding methodology for preemption priority is both accurate and correct.
 The average overall bounds are tighter than in the next-to-run experiment.
 Average delay: 10,350 seconds
 Average bound: 11,019 seconds

37 Experiment 5
Purpose: determine the significance of the savings from overlapping phases.
Methodology:
 Generate individual phase bounds, followed by execution of the configuration.
 Conduct approximately 100 trials utilizing the Mercury resource, Indiana input and normal priority.
 Emulate the execution.
 0.95 quantile for the queue, execution and output phases; quantile for input
 Compare predictions with actual performance.

38 Experiment 5 Results

39 Experiment 5 Results, cont.
Table 8: Success rate for individual and composite phases

Resource | # Trials | Input | Queue | I & Q | Exec. | Output | Overall
Mercury | 102 | 100% | | | 96% | 92% | 100%

Table 9: Percent overprediction for each phase

Resource | Input | Queue | I & Q | Exec. | Output | Overall
Mercury | 19% | 15,117% | 6,480% | 3% | 4% | 1,863%

40 Experiment 5 Conclusions
 The quantile for the overlapped input and queue phases decreased by more than 50%.
 The overprediction of the composite bound is still driven by the overprediction of the normal priority queue bounds.
 Future work: demonstrate the savings with elevated priority configurations.

41 Experiment 6
Purpose: demonstrate the Advisor algorithm, which determines the approximately optimal probability that a configuration will meet a given deadline.
Methodology:
 Query the following quantiles for each phase:
 Input: [0.9995, 0.999, , 0.995, 0.99, 0.975, 0.95, 0.90]
 Queue: [0.95, 0.75, 0.50]
 Execution: [0.99, 0.90, 0.80, 0.70, 0.60, 0.50]
 Output: [0.9995, 0.999, , 0.995, 0.99, 0.975, 0.95, 0.90]
 Deadline: 6 hours

42 Experiment 6 Results

43 Experiment 6 Results, cont.

44 Experiment 6 Results, cont.

45 Experiment 6 Conclusions
 Under normal priority, lowering the targeted quantile for the queue bound significantly decreases the composite bound.
 Elevated priorities offer lower and less variable bounds.
 The user may have the option of selecting a less invasive priority that provides a similar probability of meeting the deadline.

46 Outline Introduction  Introduction to SPRUCE  Problem Statement  Literature Review Methodology  Bounding Total Turnaround Time  Other Considerations Case Study (6 Experiments)  Results and Analysis Conclusions and Contributions Future Work

47 Conclusions
 The queue bounds predictions for normal priority jobs are correct, but not very accurate; overprediction of the normal priority queue bound drives overprediction of the composite bound.
 The composite bounding methodology was correct in all experiments, and accurate in the SPRUCE experiments.
 The individual bounding methodologies were, for the most part, correct and accurate.

48 Contributions
 First research to utilize a next-to-run bounds predictor in a resource selection context.
 First research to apply the BMBP methodology to bound preemption delay and execution delay.
 First research to generate composite turnaround time bounds composed of individual phase bounds generated by distinct methodologies.

49 Outline Introduction  Introduction to SPRUCE  Problem Statement  Literature Review Methodology  Bounding Total Turnaround Time  Other Considerations Case Study (6 Experiments)  Results and Analysis Conclusions and Contributions Future Work

50 Future Work
 Port the next-to-run predictors to other resources.
 Conduct overlapping-phases experiments on a resource with SPRUCE.
 Automate and standardize the warm standby mechanism.
 Incorporate resource reservations and network bandwidth reservations.
 Adapt the methodology to Grid workflows.

51 Extra Slides

52 NWS Probes v. GridFTP Transfers, cont.