Presentation transcript: “Achieving Application Performance on the Information Power Grid” (Francine Berman, U. C. San Diego and NPACI)

1 Achieving Application Performance on the Information Power Grid
Francine Berman
U. C. San Diego and NPACI

2 IPG = “Distributed Computer” comprising
–clusters of workstations
–MPPs
–remote instruments
–visualization sites
–data archives
For users, performance is the key criterion in evaluating the platform.

3 Program Performance
Current grid programs achieve performance by
–dedicating resources
–careful staging of computation and data
–considerable coordination
It must be possible to achieve program performance on the IPG by ordinary users on ordinary days...

4 Achieving Performance
On ordinary days, many users share system resources
–load and availability of resources vary
–application behavior hard to predict
–poor predictions make scheduling hard
Challenge: Develop application schedules which can leverage deliverable performance of the system at execution time.

5 Whose Job Is It?
Application scheduling can be performed by many entities
–Resource Scheduler
–Job Scheduler
–Programmer or User
–System Administrator
–Application Scheduler

6 Scheduling and Performance
Goal of scheduling the application is to promote application performance
Achieving application performance can conflict with achieving performance for other system components
–Resource Scheduler -- perf measure is utilization
–Job Scheduler -- perf measure is throughput
–System Administrator -- focuses on system perf
–Programmer or User -- may miss most current info
–Application Scheduler -- can access most current info

7 Self-Centered Scheduling
Everything in the system is evaluated in terms of its impact on the application.
–performance of each system component can be considered as a measurable quantity
–forecasts of quantities relevant to the application can be manipulated to determine schedule
This simple paradigm forms the basis for AppLeS.
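
As a rough illustration of this paradigm (not taken from the talk), the sketch below scores each candidate resource set by the completion time the application itself would see, using made-up forecast quantities for CPU availability and network bandwidth; all names and numbers are assumptions.

    # Hypothetical sketch of self-centered scheduling: every system quantity is
    # reduced to its forecast impact on this application's completion time.

    def forecast_completion_time(work_flops, input_bytes, resources):
        """Predict completion time on one candidate resource set.
        `resources` holds forecast quantities (all values here are made up)."""
        compute_time = work_flops / resources["available_flops"]   # CPU forecast
        transfer_time = input_bytes / resources["bandwidth_Bps"]   # network forecast
        return compute_time + transfer_time

    candidates = {
        "workstation_cluster": {"available_flops": 2.0e8, "bandwidth_Bps": 4.0e5},
        "remote_mpp":          {"available_flops": 1.5e9, "bandwidth_Bps": 5.0e4},
    }

    # Choose the schedule that minimizes the application's own performance measure.
    best = min(candidates, key=lambda name: forecast_completion_time(
        work_flops=1.0e10, input_bytes=3.0e6, resources=candidates[name]))
    print("schedule on:", best)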

8 AppLeS
Joint project with Rich Wolski
AppLeS = Application-Level Scheduler
Each application has its own self-centered AppLeS
Schedule achieved through
–selection of potentially efficient resource sets
–performance estimation of dynamic system parameters and application performance for execution time frame
–adaptation to perceived dynamic conditions

9 AppLeS Architecture
AppLeS incorporates
–application-specific information
–dynamic information
–prediction
Schedule developed to optimize user’s performance measure
–minimal execution time
–turnaround time = staging/waiting time + execution time
–other measures: precision, resolution, speedup, etc.
[Architecture diagram: NWS (Wolski), User Prefs, App Perf Model, Planner, Resource Selector, Application Act., IPG resources/infrastructure]
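
To make the performance-measure bullets concrete, here is a minimal sketch (an illustration built on assumptions, not AppLeS code) in which the user's preferred measure selects the objective a candidate schedule is judged by; the turnaround decomposition follows the slide, the numbers are invented.

    # Illustrative only: the user's performance measure drives the schedule.
    # Objective names and the numbers are assumptions, not AppLeS internals.

    def turnaround_time(staging, waiting, execution):
        # turnaround time = staging/waiting time + execution time (as on the slide)
        return staging + waiting + execution

    objectives = {
        "execution_time": lambda est: est["execution"],
        "turnaround_time": lambda est: turnaround_time(
            est["staging"], est["waiting"], est["execution"]),
    }

    estimate = {"staging": 12.0, "waiting": 45.0, "execution": 180.0}  # seconds, made up
    user_pref = "turnaround_time"
    print(user_pref, "=", objectives[user_pref](estimate), "seconds")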

10 Network Weather Service (Wolski)
The NWS provides dynamic resource information for AppLeS
NWS
–monitors current system state
–provides best forecast of resource load from multiple models
[Diagram: Sensor Interface, Reporting Interface, Forecaster, Model]
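
A hedged sketch of the “best forecast from multiple models” idea: keep a few simple predictors, rank them by their recent one-step-ahead error on the measurement history, and report the winner's prediction. The predictors and the bandwidth history below are invented for illustration and are not the actual NWS models.

    # Invented sketch of "best forecast from multiple models": rank simple
    # predictors by recent one-step-ahead error, forecast with the winner.

    def last_value(history):
        return history[-1]

    def sliding_mean(history, k=5):
        window = history[-k:]
        return sum(window) / len(window)

    def best_forecast(history, models=(last_value, sliding_mean)):
        def mean_abs_error(model):
            predictions = [model(history[:i]) for i in range(1, len(history))]
            errors = [abs(p - actual) for p, actual in zip(predictions, history[1:])]
            return sum(errors) / len(errors)
        winner = min(models, key=mean_abs_error)
        return winner(history)

    bandwidth_history = [1.2, 1.1, 0.9, 1.4, 1.3, 0.7, 0.8]  # Mbit/s, made up
    print("forecast bandwidth:", best_forecast(bandwidth_history))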

11 SARA: An AppLeS-in-Progress
SARA = Synthetic Aperture Radar Atlas
–application developed at JPL and SDSC
Goal: Assemble/process files for user’s desired image
–thumbnail image shown to user
–user selects desired bounding box within image for more detailed viewing
–SARA provides detailed image in variety of formats

12 Focusing in with SARA
[Figure: thumbnail image with user-selected bounding box]

13 Simple SARA
Focuses on obtaining remote data quickly
Code developed by Alan Su
[Diagram: one Compute Server connected to three Data Servers over a shared network]
Computation servers and data servers are logical entities, not necessarily different nodes
Network shared by variable number of users
Computation assumed to be done at compute servers

14 Simple SARA AppLeS
Focus on resource selection problem: Which site can deliver data the fastest?
–Data for image accessed over shared networks
–Data sets 1.4 - 3 megabytes, representative of SARA file sizes
–Servers used for experiments (reached via the vBNS or via the general Internet):
lolland.cc.gatech.edu
sitar.cs.uiuc
perigee.chpc.utah.edu
mead2.uwashington.edu
spin.cacr.caltech.edu
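
A small sketch of the resource-selection question above: estimate each server's delivery time from forecast latency and bandwidth, and pick the minimum. Only the hostnames come from the slide; the forecast numbers and field names are assumptions.

    # Sketch of "which site can deliver data the fastest?": predict transfer
    # time per server from forecast latency and bandwidth, pick the minimum.
    # Hostnames are from the slide (as listed); the forecasts are invented.

    FILE_BYTES = 3_000_000  # within the 1.4 - 3 megabyte range quoted above

    forecasts = {
        "lolland.cc.gatech.edu": {"bandwidth_Bps": 2.5e5, "latency_s": 0.08},
        "sitar.cs.uiuc":         {"bandwidth_Bps": 3.0e5, "latency_s": 0.07},
        "perigee.chpc.utah.edu": {"bandwidth_Bps": 4.0e5, "latency_s": 0.06},
        "mead2.uwashington.edu": {"bandwidth_Bps": 3.2e5, "latency_s": 0.05},
        "spin.cacr.caltech.edu": {"bandwidth_Bps": 1.8e5, "latency_s": 0.04},
    }

    def predicted_transfer_time(forecast):
        return forecast["latency_s"] + FILE_BYTES / forecast["bandwidth_Bps"]

    best_server = min(forecasts, key=lambda h: predicted_transfer_time(forecasts[h]))
    print("fetch image data from:", best_server)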

15 Simple SARA Experiments
Ran back-to-back experiments from remote sites to UCSD/PCL
Wolski’s Network Weather Service provides forecasts of network load and availability
Experiments run during normal business hours mid-week

16 Which is “Closer”?
Sites on the east coast or sites on the west coast?
Sites on the vBNS or sites on the general Internet?
Consistently the same site or different sites at different times?

17 Which is “Closer”?
Sites on the east coast or sites on the west coast?
Sites on the vBNS or sites on the general Internet?
Consistently the same site or different sites at different times?
Depends a lot on traffic...

18 Preliminary Results
Experiment with larger data set (3 Mbytes)
During this time frame, general Internet sites provide data mostly faster than vBNS sites

19 9/21/98 Experiments
Clinton Grand Jury webcast commenced at iteration 62

20 More Preliminary Results
Experiment with smaller data set (1.4 Mbytes)
During this time frame, east coast sites provide data mostly faster than west coast sites

21 Distributed Data Applications
SARA representative of larger class of distributed data applications
Simple SARA template being extended to accommodate
–replicated data sources
–multiple files per image
–parallel data acquisition
–intermediate compute sites
–web interface, etc.

22 SARA AppLeS -- Phase 2
Client and servers are “logical” nodes; which servers should the client use?
[Diagram: Client connected to multiple Comp. Servers and Data Servers]
Move the computation or move the data?
Computation and data servers may “live” at the same nodes
Data servers may access the same storage media
How long will data access take when data is needed?
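
A toy comparison for the “move the computation or move the data?” question: estimate the cost of shipping the data to a compute server versus computing where the data lives, and take the cheaper option. The two-term cost model and every number here are assumptions for illustration only.

    # Toy cost comparison for "move the computation or move the data?".
    # The cost model and all numbers are assumptions.

    def cost_move_data(data_bytes, bandwidth_Bps, remote_compute_s):
        # Ship the data to a (possibly faster) compute server, then run there.
        return data_bytes / bandwidth_Bps + remote_compute_s

    def cost_move_computation(startup_s, local_compute_s):
        # Run at the data server: no bulk transfer, but possibly slower compute.
        return startup_s + local_compute_s

    data_bytes = 3_000_000
    ship_data = cost_move_data(data_bytes, bandwidth_Bps=2.0e5, remote_compute_s=4.0)
    ship_code = cost_move_computation(startup_s=1.0, local_compute_s=15.0)
    print("move the data" if ship_data < ship_code else "move the computation")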

23 A Bushel of AppLeS … almost
During the first “phase” of the project, we’ve focused on getting experience building AppLeS
–Jacobi2D, DOT, SRB, Simple SARA, Genetic Algorithm, Tomography,...
Using this experience, we are beginning to build AppLeS “templates”/tools for
–master/slave applications
–parameter sweep applications
–distributed data applications
–proudly parallel applications, etc.
What have we learned...

24 Lessons Learned from AppLeS
Dynamic information is critical

25 Lessons Learned from AppLeS
Program execution and parameters may exhibit a range of performance

26 Lessons Learned from AppLeS
Knowing something about performance predictions can improve scheduling

27 Lessons Learned from AppLeS
Performance of scheduling policy sensitive to application, data, and system characteristics

28 A First IPG AppLeS
Focus on class of parameter sweep applications
Building AppLeS template for INS2D that can be used with other applications from the class
AppLeS INS2D scheduler
–first phase focuses on interactive clusters
–second phase will target clusters and batch-scheduled platforms
–goal is to minimize turnaround time
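
The following work-queue sketch illustrates one plausible way to schedule parameter-sweep cases on interactive hosts so that faster hosts naturally take more cases and turnaround time stays low. It is not the INS2D AppLeS itself; host names, speeds, and the number of cases are invented.

    # Not the INS2D AppLeS: a minimal work-queue sketch in which each
    # interactive host pulls a new sweep case as soon as it finishes the
    # last one, so faster hosts do more cases.
    from queue import Queue, Empty
    from threading import Thread
    import time

    cases = Queue()
    for case_id in range(20):            # 20 hypothetical sweep cases
        cases.put(case_id)

    def worker(host, seconds_per_case):
        while True:
            try:
                case = cases.get_nowait()
            except Empty:
                return                    # queue drained: this host is done
            time.sleep(seconds_per_case)  # stand-in for running one case remotely
            print(host, "finished case", case)

    hosts = {"node0": 0.05, "node1": 0.10, "node2": 0.20}  # relative speeds, made up
    threads = [Thread(target=worker, args=(h, s)) for h, s in hosts.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()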

29 Parameter Sweep AppLeS Architecture
Being developed by Dmitrii Zagorodnov
AppLeS schedules work on interactive resources
AppLeS tuned to leverage underlying resource management system
[Architecture diagram: AppLeS, API, Resources, App-specific case gen., Exp., Act., Sched.]

30 INS2D AppLeS Project Goals
Complete design and deployment of INS2D AppLeS for interactive cluster
–focus on socket design for first phase
Conduct experiments to assess AppLeS performance on interactive cluster and to compare with batch system performance
Expand INS2D AppLeS to target both batch and interactive systems
–target to evolving IPG resource management system

31 Show Stoppers
Queue prediction time
–How long will the program wait in a batch queue?
–How accurate is the prediction?
Experimental Verification
–How do we verify the performance of schedulers in production environments?
–How do we achieve reproducible and relevant results?
–What are the right measures of success?
Uncertainty
–How do we capture time-dependent information?
–What do we do if the range of information is large?

32 AppLeS and the IPG
[Roadmap table with Usability/Integration and Performance rows across Short-term, Medium-term, and Long-term columns; a “You are here” marker shows current status. Entries: development of basic IPG infrastructure; “grid-aware” programming; application scheduling; resource scheduling; throughput scheduling; multi-scheduling; resource economy; integration of schedulers and other tools, performance interfaces; integration of multiple grid constituencies; architectural models which support multiple constituencies; automation of program execution.]

33 Getting There: Current Projects
AppLeS and more AppLeS
–AppLeS applications
–AppLeS templates/tools
–Globus AppLeS, Legion AppLeS, IPG AppLeS
–Plans for integration of AppLeS and NWS with NetSolve, Condor, Ninf
Performance Prediction Engineering
–structural modeling with stochastic predictions
–development of quality of information measures: accuracy, lifetime, overhead
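
As one hypothetical way to carry the quality-of-information measures named above, a prediction could be tagged with its accuracy, lifetime, and overhead and checked for freshness before use; the field names and values below are assumptions, not an existing AppLeS or NWS structure.

    # Hypothetical structure tagging a prediction with quality-of-information
    # measures (accuracy, lifetime, overhead); all fields are assumptions.
    from dataclasses import dataclass
    import time

    @dataclass
    class Prediction:
        value: float        # e.g. forecast bandwidth in Mbit/s
        accuracy: float     # expected forecast error, in the same units as value
        lifetime_s: float   # how long the forecast is considered valid
        overhead_s: float   # cost of producing/refreshing the forecast
        made_at: float      # timestamp when the forecast was produced

        def is_fresh(self, now=None):
            now = time.time() if now is None else now
            return now - self.made_at <= self.lifetime_s

    p = Prediction(value=1.2, accuracy=0.3, lifetime_s=30.0, overhead_s=0.01,
                   made_at=time.time())
    print("usable forecast" if p.is_fresh() else "stale forecast")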

34 New Directions
Contingency Scheduling
–scheduling during execution
Scheduling with partial information, poor information, dynamically changing information
Multischeduling
–resource economies
–scheduling “social structure”

35 The Brave New World
Grid-aware Programming
–development of adaptive poly-applications
–integration of schedulers, PSEs and other tools
[Diagram: PSE, config. object program, whole-program compiler, source application, libraries, realtime perf monitor, dynamic optimizer, Grid runtime system, negotiation, software components, service negotiator, scheduler, performance feedback, perf problem]

36 Project Information
Thanks to NSF, NPACI, Darpa, DoD, NASA
AppLeS Corps:
–Francine Berman
–Rich Wolski
–Walfredo Cirne
–Marcio Faerman
–Jaime Frey
–Jim Hayes
–Graziano Obertelli
–Jenny Schopf
–Gary Shao
–Neil Spring
–Shava Smallen
–Alan Su
–Dmitrii Zagorodnov
AppLeS Home Page: http://www-cse.ucsd.edu/groups/hpcl/apples.html

