
Urs Hengartner Sonesh Surana Yinglian Xie Mentor: Dushyanth Narayanan.


1 Urs Hengartner Sonesh Surana Yinglian Xie Mentor: Dushyanth Narayanan

2 Applications demand low latency even when CPU availability varies. Multi-fidelity applications can adapt their fidelity to such changes. Goal: maintain a latency bound for a CPU-bound process facing varying CPU availability:
- monitor CPU availability
- predict future CPU availability
- predict latency as a function of fidelity (Odyssey)
- use this function to find the right application fidelity for a given latency constraint (Odyssey)
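The four steps above can be sketched as a small control loop. All names here are our own illustration (the real mechanism lives inside Odyssey); `predict_cpu` and `latency_model` stand in for the prediction and latency functions the slides describe:

```python
def adapt(latency_bound, fidelity_levels, predict_cpu, latency_model):
    """Pick the highest fidelity whose predicted latency meets the bound.

    predict_cpu():            predicted CPU availability (fraction, 0..1)
    latency_model(fid, cpu):  predicted latency at fidelity fid given
                              that CPU share (steps 1-3 of the slide)
    """
    cpu = predict_cpu()                        # steps 1-2: monitor + predict
    feasible = [f for f in fidelity_levels     # step 3: latency as f(fidelity)
                if latency_model(f, cpu) <= latency_bound]
    # step 4: best fidelity within the bound; fall back to the lowest
    return max(feasible) if feasible else min(fidelity_levels)
```

For example, with a toy model `latency_model = lambda f, cpu: f * 2.0 / cpu` and 50% predicted availability, a two-second bound selects fidelity 0.5 rather than 1.0.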

3 Monitoring: periodic measurement of
- the number of runnable processes
- the CPU ticks consumed by the multi-fidelity application
Prediction of CPU availability is based on:
- n: number of runnable processes (smoothed)
- f: fraction of CPU ticks consumed by the multi-fidelity application (smoothed)
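The slides say n and f are "smoothed" but do not give the smoothing method; a common choice for this kind of monitor is an exponentially weighted moving average. A minimal sketch under that assumption (class name, alpha, and update interface are ours, not from the slides):

```python
class CpuMonitor:
    """Maintains smoothed estimates of the run-queue length n and the
    fraction f of CPU ticks consumed by the multi-fidelity application."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight given to each new sample (assumed value)
        self.n = None        # smoothed number of runnable processes
        self.f = None        # smoothed fraction of CPU ticks

    def _smooth(self, old, sample):
        # Exponentially weighted moving average; first sample is taken as-is.
        if old is None:
            return sample
        return self.alpha * sample + (1 - self.alpha) * old

    def update(self, runnable, app_ticks, total_ticks):
        """Feed one periodic measurement into the smoothed estimates."""
        self.n = self._smooth(self.n, runnable)
        self.f = self._smooth(self.f, app_ticks / total_ticks)
        return self.n, self.f
```

On Linux, `runnable` and the tick counts would come from places such as `/proc/loadavg` and `/proc/stat`; the exact sources used in this work are not stated on the slide.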

4 Predict CPU availability over the next second and over the next ten seconds, under a background load of CPU-intensive processes or a make job; 50 experiments per data point.

5 Why are short-term predictions less accurate? An artifact of the Linux scheduler: with 100 ticks per second and a per-process time quantum of 20 ticks, the split between two CPU-bound processes A and B within any one second is uneven:
- process A gets 60 ticks in one second
- process B gets 40 ticks in that second (and the split flips in the next)
Accurate short-term prediction is impossible unless
- the scheduler is simulated at user level, or
- the kernel provides more scheduling information
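The 60/40 split follows directly from the quantum arithmetic and can be reproduced with a tiny user-level simulation of the round-robin behaviour described above (this illustrates the slide's arithmetic only; it is not the actual Linux scheduler):

```python
def simulate(quantum=20, ticks_per_second=100, seconds=2):
    """Round-robin two CPU-bound processes A and B with a fixed time
    quantum; count the ticks each receives in every wall-clock second."""
    per_second = []
    counts = {"A": 0, "B": 0}
    current, other = "A", "B"
    remaining = quantum                  # ticks left in the current quantum
    for tick in range(ticks_per_second * seconds):
        counts[current] += 1
        remaining -= 1
        if remaining == 0:               # quantum expired: context switch
            current, other = other, current
            remaining = quantum
        if (tick + 1) % ticks_per_second == 0:
            per_second.append(dict(counts))
            counts = {"A": 0, "B": 0}
    return per_second

# With a 20-tick quantum, 100 ticks hold five quanta: A,B,A,B,A in the
# first second (60/40), then B,A,B,A,B in the next (40/60) -- so a
# one-second prediction of "50 ticks each" is always off by 10 ticks.
```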

6 Why are predictions for the make background load less accurate? Another artifact of the Linux scheduler:
- a make process is added to the run queue but is not immediately scheduled
- the run queue therefore always contains make process(es)
- however, the make process(es) always consume less than their share
- this makes our formula under-predict CPU availability
Our formula is not powerful enough to capture this.

7 CPU prediction is feasible, but predictions are difficult for
- short prediction intervals
- I/O-bound background processes
Future work:
- what kind of kernel-level support would help?
- a more sophisticated prediction formula (e.g., techniques from machine learning)

8 Demo: an interactive rendering application with multiple fidelity levels based on the number of rendered polygons; background load of three CPU-intensive processes and a desired latency of two seconds.
- Scenario 1: prediction always returns 100% CPU availability
- Scenario 2: prediction returns CPU availability based on our formula
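The difference between the two scenarios can be illustrated with rough numbers of our own (the fidelity levels, the linear latency model, and the cost constant are all assumptions, not measurements from the demo). With three CPU-bound background processes the application is one of four runnable processes, so it can expect roughly a quarter of the CPU:

```python
def predicted_latency(polygons, cpu_share, cost_per_polygon=1e-4):
    # Toy latency model: rendering work grows linearly with the number
    # of polygons, and slows down in proportion to the CPU share.
    return polygons * cost_per_polygon / cpu_share

levels = [5_000, 10_000, 20_000, 40_000]   # hypothetical fidelity levels
bound = 2.0                                # desired latency (seconds)

# Scenario 1: prediction always returns 100% availability.
naive = max(p for p in levels if predicted_latency(p, 1.0) <= bound)
# Scenario 2: formula-based prediction; one of four runnable
# CPU-bound processes -> roughly 25% of the CPU.
informed = max(p for p in levels if predicted_latency(p, 0.25) <= bound)

actual_share = 0.25
# Scenario 1 picks 20,000 polygons, which actually takes 8 s at a 25%
# share and blows the 2 s bound; scenario 2 picks 5,000 and stays within it.
```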

