Slide 1: Providing QoS with Virtual Private Machines
Kyle J. Nesbit, James Laudon, and James E. Smith
Slide 2: Motivation for QoS
- Multithreaded chips
  - Resource sharing gives higher utilization (e.g., Niagara)
  - Inter-thread interference
- Applications
  - Soft real-time applications: cell phones and game consoles
  - Fine-grain parallel applications: scheduling and synchronization
  - Server consolidation: hosting services
Slide 3: QoS Objectives
- Isolation
- Priority
- Fairness
- Performance
- Objectives are combined, e.g., isolation and performance
Slide 4: QoS Framework
- Separation of objectives, policies, and mechanisms leads to well-structured solutions
- [Figure: four processors, each with a private L1 cache, share an interconnect, an L2 cache (capacity C), a memory controller, and main memory (capacity M); the shared paths provide bandwidths K and L. A local policy drives the mechanisms at each shared resource, and a global policy translates the objectives into those local policies; a sketch of this layering follows.]
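A minimal sketch of that layering, using hypothetical class and method names that are not from the paper: mechanisms enforce settings at one shared resource, a local policy computes those settings, and a global policy maps objectives onto every local policy.

    # Hypothetical sketch of the objectives -> global policy -> local policies
    # -> mechanisms layering; all names are illustrative, not from the paper.
    class Mechanism:
        """Hardware knob at one shared resource (L2 capacity, bandwidth K or L)."""
        def __init__(self, name):
            self.name = name
            self.settings = {}
        def configure(self, per_thread_share):
            self.settings = dict(per_thread_share)

    class LocalPolicy:
        """Turns per-thread shares of one resource into mechanism settings."""
        def __init__(self, mechanism):
            self.mechanism = mechanism
        def apply(self, shares):
            self.mechanism.configure(shares)

    class GlobalPolicy:
        """Maps system-level objectives onto the per-resource local policies."""
        def __init__(self, local_policies):
            self.local_policies = local_policies  # resource name -> LocalPolicy
        def enforce(self, objective):
            # objective: resource name -> {thread: share of that resource}
            for name, policy in self.local_policies.items():
                policy.apply(objective.get(name, {}))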
Slide 5: Local Policy – Resource-Directed
- Isolation: allocations guarantee each thread a minimum service (allocated vs. consumed)
- Priority: a priority vector distributes unallocated or unused service as relative service
- Fairness: multiple definitions? Unused service goes first to higher-priority threads and is shared among threads at the same priority
- Work-conserving policies (see the sketch below)
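One way to read these bullets is as a work-conserving arbiter: each thread is first granted up to its allocated minimum share, and whatever is unallocated or unused flows to the highest-priority threads that still have demand, split evenly within a priority level. The sketch below is my own illustration of that reading (the function and parameter names are hypothetical), not the paper's mechanism.

    # Hypothetical work-conserving service arbiter. `allocation`, `demand`, and
    # `priority` are dicts keyed by thread id; allocation values are fractions
    # of `total` and must sum to at most 1.
    def distribute_service(total, allocation, demand, priority):
        # Isolation: every thread gets up to its allocated minimum service.
        grant = {t: min(demand[t], allocation[t] * total) for t in demand}
        leftover = total - sum(grant.values())
        # Work conservation: hand leftover service out by descending priority,
        # splitting it evenly among same-priority threads that still want more.
        for level in sorted(set(priority.values()), reverse=True):
            threads = [t for t in demand
                       if priority[t] == level and demand[t] > grant[t]]
            while leftover > 1e-9 and threads:
                share = leftover / len(threads)
                for t in list(threads):
                    extra = min(share, demand[t] - grant[t])
                    grant[t] += extra
                    leftover -= extra
                    if demand[t] - grant[t] <= 1e-9:
                        threads.remove(t)
        return grant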
Slide 6: Global Policy – Virtual Private Machines
[Figure: the shared machine is carved into four VPMs, each with its own processor and L1 cache plus a slice of the shared resources. VPM 1 runs a real-time thread at priority 3 and receives half of the L2 capacity (0.5C) and half of each bandwidth (0.5K, 0.5L). VPMs 2-4 each run a background thread at priority 0 under a fairness policy and receive 0.1C, 0.1K, and 0.1L. A data-only encoding of this configuration follows.]
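A plain-data encoding of this configuration; the field and thread names are my own, the fractions come from the slide, and the priority-to-thread mapping follows the figure description above.

    # Hypothetical encoding of the VPM allocations shown on this slide.
    vpms = {
        "VPM 1": {"thread": "real-time",  "priority": 3, "l2": 0.5, "bw_k": 0.5, "bw_l": 0.5},
        "VPM 2": {"thread": "background", "priority": 0, "l2": 0.1, "bw_k": 0.1, "bw_l": 0.1},
        "VPM 3": {"thread": "background", "priority": 0, "l2": 0.1, "bw_k": 0.1, "bw_l": 0.1},
        "VPM 4": {"thread": "background", "priority": 0, "l2": 0.1, "bw_k": 0.1, "bw_l": 0.1},
    }
    # Allocated shares of each shared resource cannot exceed the whole machine;
    # the remaining 0.2 of each resource, plus any allocated-but-unused service,
    # is distributed by the priority and fairness policy (e.g., with a
    # work-conserving arbiter like the one sketched under Slide 5).
    for resource in ("l2", "bw_k", "bw_l"):
        assert sum(v[resource] for v in vpms.values()) <= 1.0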
Slide 7: Global Policy – Performance-Directed
- A global optimization problem: use the local policies to control resource allocations
- Optimizing away one bottleneck can make the bottleneck appear at another resource
- Performance-directed policies need to fit into the VPM policy, e.g., optimize aggregate performance within a priority level (see the sketch below)
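As one way to picture "optimize aggregate performance within a priority level", the sketch below hill-climbs by shifting small slices of each resource between equal-priority threads and keeping only the changes that improve an aggregate performance measure. Everything here is hypothetical (including measure_aggregate_ipc); it is not the authors' algorithm.

    # Hypothetical performance-directed tuning within one priority level.
    def tune_within_level(threads, allocations, measure_aggregate_ipc, step=0.05):
        """allocations: thread -> {resource: fraction};
        measure_aggregate_ipc(allocations) returns an aggregate performance score."""
        best = measure_aggregate_ipc(allocations)
        improved = True
        while improved:
            improved = False
            for resource in ("l2_capacity", "interconnect_bw", "memory_bw"):
                for donor in threads:
                    for receiver in threads:
                        if donor == receiver or allocations[donor][resource] < step:
                            continue
                        # Tentatively move a slice from donor to receiver.
                        allocations[donor][resource] -= step
                        allocations[receiver][resource] += step
                        score = measure_aggregate_ipc(allocations)
                        if score > best:
                            best, improved = score, True
                        else:
                            # Revert moves that do not help aggregate performance.
                            allocations[donor][resource] += step
                            allocations[receiver][resource] -= step
        return allocations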
Slide 8: Status
- Completed
  - Secondary cache [ISCA ’07] and SDRAM memory system [MICRO ’06] mechanisms
  - Bandwidth mechanisms
  - Cache capacity mechanisms
- Ongoing and future work
  - Multithreaded processors
  - Work-conserving cache capacity policy
  - Priority policy
  - Aggregate performance policy
Slide 9: Conclusion
- Objectives: isolation, priority, fairness, performance
- Implementation: separation of policies and mechanisms
- Abstraction: Virtual Private Machines
- Composable global policies that coexist on a per-application basis
Slide 10: Questions and Comments
- Do Virtual Private Machines meet all of the requirements of software-controlled microarchitecture resource management?