1. The Case For Prediction-based Best-effort Real-time
Peter A. Dinda, Bruce Lowekamp, Loukas F. Kallivokas, David R. O’Hallaron
Carnegie Mellon University

2. Overview
- Distributed interactive applications could benefit from best-effort real-time
- Example: QuakeViz (earthquake visualization) and the DV (distributed visualization) framework
- Evidence for the feasibility of a prediction-based best-effort real-time service for these applications:
  - Mapping algorithms
  - Execution time model
  - Host load prediction

3. Application Characteristics
- Interactivity: users initiate tasks with deadlines and expect timely, consistent, and predictable feedback
- Resilience: missed deadlines are acceptable
- Distributability: tasks can be initiated on any host
- Adaptability: task computation and communication can be adjusted
- Runs in shared, unreserved computing environments

4. Motivation for QuakeViz: Teora, Italy, 1980

5. Northridge Earthquake Simulation
- Real event: 40 seconds of an aftershock of the Jan 17, 1994 Northridge quake in the San Fernando Valley of Southern California
- Huge model: 50 x 50 x 10 km region; 13,422,563 nodes; 76,778,630 tetrahedra; 1 Hz frequency resolution; 20 meter spatial resolution
- High-performance simulation: 16,666 time steps; 16,666 40M x 40M SMVPs; 15 GB of RAM; 6.5 hours on 256 T3D PEs; 80 trillion (10^12) FLOPs; 3.5 GFLOP/s sustained; 1.4 GB/s peak
- Huge output: 13,422,563 3-tuples per step for 16,666 time steps, about 6 terabytes
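A quick back-of-the-envelope check of the output size quoted on the slide. The precision of the stored values is an assumption (the slide does not state it); 8-byte doubles give a total in the right neighborhood:

```python
# Rough check of the dataset size: 16,666 time steps, each holding one
# 3-tuple per node. Assumes 8 bytes per value (an assumption; the
# slide does not state the precision used).
steps = 16_666
nodes = 13_422_563
bytes_per_value = 8          # assumed double precision
total_bytes = steps * nodes * 3 * bytes_per_value
print(f"{total_bytes / 1e12:.1f} TB")  # about 5.4 TB, i.e. ~6 TB as quoted
```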

6. Problem: Must Visualize Massive Remote Datasets
- One month turnaround time
- Datasets must be kept at the remote supercomputing site due to their sheer size
- Visualization is inherently distributed

7. QuakeViz: Distributed Interactive Visualization of Massive Remote Earthquake Datasets
- Goal: interactive manipulation of massive remote datasets from arbitrary clients
- Sample: two-host visualization of the Northridge earthquake

8. DV: A Framework For Building Distributed Interactive Visualizations of Massive Remote Datasets
- Logical view: distributed pipelines of vtk* modules: reading, interpolation, morphology reconstruction, isosurface extraction, scene synthesis, rendering, and local display
- User feedback and quality settings: region of interest (ROI), resolution, contours
- Example deadline: display update latency
*Visualization Toolkit, an open source C++ library

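The logical pipeline above can be sketched as a simple chain of stages. The stage names come from the slide; the callables are placeholders, not vtk bindings:

```python
# Minimal sketch of a DV-style logical pipeline: data flows through a
# chain of modules in order. The stages here are toy stand-ins for
# reading -> interpolation -> isosurface extraction -> scene synthesis
# -> rendering; real DV stages are vtk modules.
def run_pipeline(data, stages):
    for stage in stages:
        data = stage(data)
    return data

stages = [lambda d: d + ["read"], lambda d: d + ["interpolate"],
          lambda d: d + ["isosurface"], lambda d: d + ["synthesize"],
          lambda d: d + ["render"]]
print(run_pipeline([], stages))
```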

10. Active Frames
Physical view of the example pipeline: each Active Frame (n, n+1, n+2, ...) carries its own deadline through the interpolation, isosurface extraction, and scene synthesis stages
- Encapsulates data, computation, and path through the pipeline
- Launched from the server by user interaction
- Dynamically chooses on which host each pipeline stage will execute and what quality settings to use

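The Active Frame idea can be sketched as follows. All names here are hypothetical illustrations, not the actual DV API; the host choice is a stand-in for the real mapping algorithm:

```python
# Sketch of an Active Frame (hypothetical names): a frame bundles its
# data, its remaining pipeline stages, and a deadline, and at each hop
# chooses the host for the next stage.
from dataclasses import dataclass, field

@dataclass
class ActiveFrame:
    data: bytes
    stages: list          # remaining pipeline stages, e.g. ["interpolate", ...]
    deadline: float       # absolute deadline in seconds
    quality: dict = field(default_factory=dict)

    def next_hop(self, hosts, predict_time):
        """Pick the host with the smallest predicted execution time for
        the next stage (a toy stand-in for the mapping algorithm)."""
        stage = self.stages[0]
        return min(hosts, key=lambda h: predict_time(h, stage, self.quality))
```

For example, with a predictor that estimates 1 s on hostA and 2 s on hostB, `next_hop` picks hostA.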

12. Active Frame Execution Model
An arriving Active Frame carries its pipeline stage, quality parameters, and deadline. The mapping algorithm consults an execution time model, which draws resource predictions from a prediction layer fed by host load and network measurements collected through CMU's Remos measurement infrastructure and its API.


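A hedged sketch of how an execution time model might turn a host load prediction into a predicted stage runtime. The formula and names are illustrative assumptions, not the model from the paper:

```python
# Illustrative execution time model: a task competing with `load`
# other runnable tasks on a UNIX host gets roughly a 1/(1 + load)
# share of the CPU, so its runtime stretches by (1 + load).
def predicted_exec_time(nominal_seconds, predicted_load):
    """nominal_seconds: stage runtime on an idle host.
    predicted_load: predicted run-queue length (0.0 = idle host)."""
    return nominal_seconds * (1.0 + predicted_load)

print(predicted_exec_time(2.0, 1.0))  # 4.0: one competing task doubles the runtime
```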

15. Feasibility of Best-effort Mapping Algorithms


17. Feasibility of Execution Time Models


19. Why Is Prediction Important?
- Good predictions result in smaller confidence intervals on predicted execution time
- Smaller confidence intervals simplify the mapping decision
- With bad prediction, wide intervals straddle the deadline and there is no obvious choice of host; with good prediction, narrow intervals leave two good choices in the example
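The effect of interval width on the mapping decision can be illustrated with a small sketch. This is not the DV mapping algorithm; the viability rule and the numbers are assumptions for illustration:

```python
# Illustrative mapping rule: a host is viable only if the upper end of
# its predicted execution time's confidence interval meets the deadline.
def viable_hosts(predictions, deadline):
    """predictions: {host: (predicted_time, ci_half_width)}."""
    return [h for h, (t, ci) in predictions.items() if t + ci <= deadline]

# Narrow intervals (good prediction): two clear choices.
good = {"a": (2.0, 0.2), "b": (3.0, 0.2), "c": (6.0, 0.2)}
# Wide intervals (bad prediction): no host is clearly safe.
bad = {"a": (2.0, 4.0), "b": (3.0, 4.0), "c": (6.0, 4.0)}
print(viable_hosts(good, 5.0))  # ['a', 'b']
print(viable_hosts(bad, 5.0))   # []
```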

20. Feasibility of Host Load Prediction

21. Comparing Prediction Models
- Good models achieve consistently low error
- Method: run 1000s of randomized testcases, measure the prediction error for each, and datamine the results into mean squared error distributions (2.5%, 25%, 50%, mean, 75%, 97.5% quantiles)
- Model A: inconsistent low error; Model B: consistent low error; Model C: consistent high error
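The evaluation method above can be sketched as follows: collect a per-testcase MSE for each model and summarize it with the quantiles shown on the slide. The synthetic error distributions are assumptions purely for illustration:

```python
# Sketch of the datamining step: summarize a list of per-testcase MSEs
# with the quantiles from the slide. The two synthetic models below are
# toy stand-ins: one with consistently low error, one inconsistent.
import random
import statistics

def quantile(xs, q):
    xs = sorted(xs)
    return xs[min(int(q * len(xs)), len(xs) - 1)]

def summarize(mses):
    qs = [0.025, 0.25, 0.50, 0.75, 0.975]
    return {q: quantile(mses, q) for q in qs} | {"mean": statistics.mean(mses)}

random.seed(0)
consistent_low = [random.uniform(0.01, 0.05) for _ in range(1000)]
inconsistent = [random.uniform(0.01, 0.50) for _ in range(1000)]
# The consistent model's 97.5% error quantile is far lower.
print(summarize(consistent_low)[0.975] < summarize(inconsistent)[0.975])  # True
```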

22. Comparing Linear Models for Host Load Prediction
- 15 second predictions for one host
- [Chart: MSE quantiles (2.5%, 25%, 50%, mean, 75%, 97.5%) for models ranging in cost from raw through cheap and expensive to very expensive]
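To make the "raw" versus "cheap" end of the cost spectrum concrete, here are two simple host load predictors. These are illustrative assumptions, not the linear models evaluated in the paper:

```python
# Two toy host load predictors: "raw" carries the last measurement
# forward; an exponential moving average smooths the trace first.
def raw_predictor(trace):
    return trace[-1]                      # last-value prediction

def ema_predictor(trace, alpha=0.3):
    est = trace[0]
    for x in trace[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

trace = [0.2, 0.3, 1.8, 0.4, 0.3]         # load spike at t=2
print(raw_predictor(trace))                # 0.3
print(round(ema_predictor(trace), 3))      # 0.517: the spike still lingers
```

The raw predictor is free but noisy around spikes; smoothing costs a little more and damps them, which is the cost/accuracy trade-off the slide's chart explores.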

23. Conclusions
- Identified and described a class of applications that benefit from best-effort real-time: distributed interactive applications (example: QuakeViz / DV)
- Showed the feasibility of prediction-based best-effort real-time systems: mapping algorithms, execution time model, host load prediction

24. Status (http://www.cs.cmu.edu/~cmcl)
- QuakeViz / DV: overview in PDPTA'99, Aeschlimann et al. (http://www.cs.cmu.edu/~quake); currently under construction
- Remos: overview in HPDC'98, DeWitt et al.; available from http://www.cs.cmu.edu/~cmcl/remulac/remos.html; integrating prediction services
- Network measurement and analysis: HPDC'98, DeWitt et al.; HPDC'99, Lowekamp et al.; currently studying network prediction
- Host load measurement and analysis: LCR'98, Dinda; SciProg'99, Dinda
- Host load prediction: HPDC'99, Dinda et al.



27. Comparing Linear Models for Host Load Prediction
- 15 second predictions aggregated over 38 hosts
- [Chart: MSE quantiles (2.5%, 25%, 50%, mean, 75%, 97.5%) for models ranging in cost from raw through cheap and expensive to very expensive]

