Information and Scheduling: What's Available and How Does It Change
Jennifer M. Schopf, Argonne National Laboratory
Information and Scheduling (Oct 20)
- How a scheduler works is closely tied to the information available
- The choice of algorithm depends on the accessible data
This Talk
- What approaches expect from information
- What data is actually available, and some open questions
- How data changes
- What to do about changing data
NB
- I'm speaking (pessimistically) from my own background
- We've heard some talks earlier today (for example PACE) that address some of these problems
- I still think these are interesting open issues to think about
Information Systems (NOTE: taken from my standard MDS2 talk)
- Information is always old
  - Time of flight, changing system state
  - Need to provide quality metrics
- Distributed system state is hard to obtain
  - Information is not contemporaneous (thanks j.g.)
  - Complexity of a global snapshot
- Components will fail
- Scalability and overhead
  - Approaches are changed for scalability, and this affects the information available
Scheduling approaches assume
- A lot of data is available
- All information is accurate
- Values don't change
Example: System data
1. The bandwidth b_ij: the maximum data rate of the link, in bits per second.
2. The flow f_ij: the effective data rate on the link, in bits per second.
3. The utilization u_ij: the ratio of the effective flow to the bandwidth, u_ij = f_ij / b_ij.
4. The length l_ij: the Euclidean distance between the link's end peers.
5. The cost C_ij: a function of the link length and its bandwidth, C_ij = S * (l_ij / b_ij), where S is a constant.
6. T_i: the processor speed of the peer, i.e., the number of work units the peer can execute per unit of time.
etc.
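The link metrics above can be written out directly; a minimal sketch (function names are my own, not from any particular system):

```python
def utilization(flow_bps, bandwidth_bps):
    """u_ij = f_ij / b_ij: the fraction of the link's capacity in use."""
    return flow_bps / bandwidth_bps

def link_cost(length, bandwidth_bps, S=1.0):
    """C_ij = S * (l_ij / b_ij): cost grows with distance, shrinks with capacity."""
    return S * (length / bandwidth_bps)
```

For example, a link carrying an effective flow of 5 Mbit/s over 10 Mbit/s of bandwidth has utilization 0.5.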
Example: Application information
1. B_i: the number of work units (in terms of computation) in the task, so the number of time units task t_i needs to execute on peer v_k is B_i / T_k.
2. u_i: the number of packets required to transfer the task, so task t_i needs u_i * w / b_ij time units to be transferred from peer v_i to peer v_j, assuming the two peers are direct neighbors and network conditions are ideal.
3. Implicit: an exact mapping of tasks and data in a DAG
etc.
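Combining the two definitions gives the task-time estimate these models rely on; a sketch under the slide's own idealized assumptions (w, the bits per packet, is my reading of the formula):

```python
def task_time(B_i, T_k, u_i, w, b_jk):
    """Estimated time for task t_i on peer v_k fed over link (j, k):
    compute time (B_i / T_k) plus transfer time (u_i * w / b_jk).

    B_i: work units in the task; T_k: work units/sec on peer k;
    u_i: packets to transfer; w: bits per packet (assumed);
    b_jk: link bandwidth in bits/sec. Assumes direct neighbors
    and an ideal network, as the slide does.
    """
    return B_i / T_k + (u_i * w) / b_jk
```

The rest of the talk is about why each input to a formula like this is hard to obtain in practice.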
What some people expect
- Perfect bandwidth information
- The number of operations in an application
- A scalar value for computer "power"
- A mapping of "power" to applications
- Perfect load information
Bandwidth data
- Network Weather Service (Wolski, UCSB)
  - 64 KB probe bandwidth data
  - Latency data
  - Predictions
- PingER (Les Cottrell, SLAC)
  - Creates long-term baselines for expectations on means/medians and variability of response time, throughput, and packet loss
- Predicting TCP performance (Allen Downey)
- But what do Grid applications need?
Perfect bandwidth data
- 64 KB probes don't look like large file transfers
[Figure: LBL-ANL GridFTP end-to-end bandwidth (approximately 400 transfers at irregular intervals) vs. NWS probe bandwidth (approximately 1,500 probes, one every five minutes) for the two-week August 2001 dataset]
Predicting large file transfers
- Vazhkudai and Schopf: use GridFTP logs plus some background data (NWS, iostat) (HPDC 2002)
  - Error rate of ~15%
- M. Faerman, A. Su, R. Wolski, and F. Berman (HPDC '99)
  - Similar results for SARA data
- Hu and Schopf: use an AI learning technique on GridFTP log files only (not yet published)
  - Picks the best place to get a file from 60-80% of the time; using averages only gives you ~50% "best chosen"
- This topic needs much more study!
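For concreteness, here is the "averages only" baseline that the ~50% "best chosen" figure refers to, not the published algorithms: predict each source's bandwidth as the mean of its past GridFTP transfers and fetch from the best-looking source (names and data shape are my own):

```python
def pick_source(transfer_logs):
    """Naive log-based source selection.

    transfer_logs: {source_name: [bandwidths observed in past GridFTP
    transfers]}. Returns the source with the highest mean observed
    bandwidth -- the baseline the learning techniques improve on.
    """
    means = {src: sum(obs) / len(obs)
             for src, obs in transfer_logs.items() if obs}
    return max(means, key=means.get)
```

Anything smarter (regression on file size, time of day, NWS background data) is what moves "best chosen" from ~50% toward 60-80%.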
Data generally available from an application
- What some scheduling approaches want:
  - The number of operations in an application
  - Exact execution time on a platform
  - Perfect models of applications
Application data currently available
- Bad models of applications
- No models of applications
  - Some work (Prophesy, Taylor at Texas A&M) does logging to create models
- Many interesting applications have non-deterministic run times
- User estimates of application run time are (historically) off by 20%+
- We need to figure out ways to predict application run times WITHOUT models
Scalar value of computer "power"
- MDS2 gives me:
  - CPU vendor, model, and version
  - CPU speed
  - OS name, release, and version
  - RAM size
  - Node count
  - CPU count
- Where is "compute power" in this data?
What is compute "power"?
- I could get benchmark data, but what are the right benchmark(s) to use?
- Computer "power" simply isn't scalar, especially in a Grid environment
- The goal is really to understand how an application will run on a machine
- Given three different benchmarks, three different platforms will perform very differently: one best on BM1, another best on BM2
Mapping "power" to applications
- Many scheduling approaches assume "power" is a scalar: just multiply it by the set application time and we're set
- Only problem:
  - Power isn't a scalar
  - No one knows absolute application run times
  - The mapping will NOT be straightforward
- We need a way to estimate application time on a contended system
Perfect load information
- MDS2 gives me:
  - Basic queue data
  - Host load 5/10/15 minute averages
  - Last value only
Load predictions
- Network Weather Service
  - 12+ prediction techniques
  - Works on any time series
  - Expects regularly arriving data
- But it gives only a prediction of the next value
  - *I* want to know what the load is going to be like in 20 minutes
  - Or the AVERAGE over the next 20 minutes
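The distinction can be made concrete. A minimal sketch (my own, not one of the NWS techniques) of predicting the average over an upcoming window rather than just the next sample, using a naive persistence-style forecast:

```python
def predict_window_average(history, window=4):
    """Predict the MEAN load over the next `window` samples
    (e.g. the next 20 minutes at 5-minute samples) as the mean
    of the last `window` observations.

    history: list of load samples, oldest first. A next-value
    predictor would return only an estimate of history[-1+1];
    this returns an estimate of an average over a time period.
    """
    recent = history[-window:]
    return sum(recent) / len(recent)
```

A real predictor would also attach an error estimate to the window, which is exactly where the variance discussion below comes in.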
Information and Scheduling
- What approaches expect us to have
- What we actually have access to
- How it changes
- What to do about changing data
Dedicated SOR experiments
- Platform: 2 SPARC 2s, 1 SPARC 5, 1 SPARC 10
- 10 Mbit Ethernet connection
- Quiescent machines and network
- Prediction within 3% before memory spill
Non-dedicated SOR results
- Available CPU on workstations varied from 0.43 to 0.53
SOR with higher variance in CPU availability
Improving predictions
- Available CPU has a range of values, not a single value
- The prediction should also have a range
Scheduling needs to consider variance
- "Conservative Scheduling: Using Predicted Variance to Improve Scheduling Decisions in Dynamic Environments"
  - Lingyun Yang, Jennifer M. Schopf, Ian Foster
  - To appear at SC'03, November 15-21, 2003, Phoenix, Arizona, USA
  - www.mcs.anl.gov/~jms/Pubs/lingyun-SC-scheduling.pdf
Scheduling with variance
- Summary: scheduling with variance can give better mean performance and less variance in overall execution time
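The idea in miniature (a sketch of the general "conservative" principle, not the SC'03 algorithm itself): discount each host's mean CPU availability by a multiple of its standard deviation, so a high-variance host looks worse than a steady one with the same mean, then place work on the best conservative estimate:

```python
import statistics

def conservative_pick(cpu_samples, alpha=1.0):
    """Pick a host using mean minus alpha * stddev of its observed
    fractional CPU availability, penalizing variance.

    cpu_samples: {host: [observed CPU availability samples]}.
    alpha: how conservatively to weight variance (assumed knob).
    """
    def effective(samples):
        return statistics.mean(samples) - alpha * statistics.pstdev(samples)
    return max(cpu_samples, key=lambda h: effective(cpu_samples[h]))
```

With alpha=1, a host steady at 0.5 beats a host that swings between 0.9 and 0.1 even though both average 0.5, which is the behavior the summary above is pointing at.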
Lessons
- We need work on predicting large file transfers, NOT bandwidth
- We need to figure out ways to predict application run times WITHOUT models
- We need predictions over time periods, not just a next value
- We need a way to represent the "power" of a machine that takes variance into account
- We need a way to map power to application behavior
- We need better scheduling approaches that take variance into account
Contact Information
- Jennifer M. Schopf
- Links to some of the publications mentioned
- Links to the co-edited book "Grid Resource Management: State of the Art and Future Trends"