1
Modeling Architecture-OS Interactions using LQNS
CADlab, SERC, Indian Institute of Science, Bangalore-12, India.
2
Agenda
3
Promise of System Virtualization on multi-core servers.
Multi-core systems + system virtualization → a solution to enterprise data-center issues such as server outages, real-estate and energy footprint reduction, support for legacy applications, etc.
Current design approach of multi-core systems:
– Replacement for mid-sized distributed clusters.
– Facilitate compute-intensive workloads (e.g. scientific workloads).
– Lopsided resource approach – many processors with limited I/O devices.
4
Characteristics of Enterprise workloads.
Workloads are dominated by a mix of I/O- and compute-intensive jobs that are either response-time or throughput sensitive.
Jobs are sequential or parallel and require significantly fewer processors than scientific workloads.
Many independent jobs need to share a common I/O device such as a network interface (NIC) or a storage controller.
I/O devices should enable independent, concurrent access that meets application-specific QoS (performance) requirements.
5
I/O Device Virtualization – Overview
Issues: Two basic modes of I/O device virtualization:
– Hardware virtualization using emulation.
– Software/para-virtualization using driver domains.
QoS support on the device access path is provided as an extension of the system software of the hosting/driver domain.
Requirement: On multi-core servers, while consolidating I/O-dominant workloads, we need better ways to manage and access I/O devices without losing out on performance or usable bandwidth.
[Figure: Para-virtualization of I/O devices – web server, database server and application server VMs reach the NIC through a driver domain running on top of the Virtual Machine Monitor and the hardware.]
6
I/O Device Virtualization – Xen
[Figure: Xen para-virtualized network path – an application in a VM uses the netfront device driver, which communicates through the VMM with the netback driver and network bridge in the driver domain, which owns the physical NIC on the hardware.]
7
Issues in evaluating alternative end-to-end architecture design approaches
Preferred approaches:
– Simulation
– Analytical (maybe!)
Simulation framework:
– Need for a seamless, flexible and complete system-simulation environment that allows changes in the hardware and the system software (VMM and guest OS) and can execute real benchmarks.
– Almost all system simulators allow changes in only some of these components.
Analytical approach: obvious difficulty in modeling shared, constrained resources and exact system and workload parameters.
8
Layered Queuing Network Models
Based on queuing-theory principles; intuitively captures software and device contention when multiple processes share a common software resource or device.
Using the Method of Layers (MOL) on LQN models, performance estimates of throughput or response time can be made.
To assess architecture capability and scalability, bottleneck analysis studies are also possible on the model.
For this study, the LQNs software package developed at the RADS lab of Carleton University was used. LQM and JLQNDEF (model development tools), LQSIM (model analysis tool) and PARASRVN (tool for simulation studies on LQN models) are the different tools available.
End-to-end system architecture expressed using standard UML notation can be converted to LQN models.
9
Intuition behind using the MOL solution approach on LQN models.
Ref: Rolia et al., "The Method of Layers," IEEE Transactions on Software Engineering, vol. 21, no. 8, Aug. 1995.
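To make the layering intuition concrete, here is a minimal sketch (not part of the original slides, and not the MOL/LQNS implementation) of exact single-class Mean Value Analysis, the kind of closed queuing submodel that the MOL solves repeatedly while alternating between software and device layers, feeding one layer's residence times back as the other layer's service demands. The demand values in the example are illustrative, rounded to the order of the entries in the service-time table on slide 16.

```python
def mva(demands, n_customers):
    """Exact single-class MVA for a closed network of queuing centers.

    demands[k]  -- service demand (visits * service time) at center k, in seconds
    n_customers -- closed population (e.g. concurrently active requests)
    Returns (throughput, residence_times, queue_lengths).
    """
    queue = [0.0] * len(demands)           # mean queue length at each center
    residence = list(demands)
    throughput = 0.0
    for n in range(1, n_customers + 1):
        # An arriving customer sees the queue left behind by n-1 customers.
        residence = [d * (1.0 + q) for d, q in zip(demands, queue)]
        throughput = n / sum(residence)    # Little's law over the whole cycle
        queue = [throughput * r for r in residence]
    return throughput, residence, queue


if __name__ == "__main__":
    # Hypothetical per-request demands (seconds) at an IDD CPU, a VM CPU and the NIC.
    demands = [2.3e-5, 2.1e-4, 9.2e-5]
    x, r, q = mva(demands, n_customers=8)
    print(f"throughput ~ {x:.0f} req/s, bottleneck demand = {max(demands):.1e}s")
```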
10
Procedure used for generating LQN models
1. Generate a software and device contention model for the end-to-end system architecture.
2. Expand the contention model into an interaction workflow model.
3. Convert the workflow into an LQN model.
11
Software and device contention model for NIC device sharing in Xen.
[Figure: App1, App2 and App3 clients drive App1, App2 and App3 servers hosted in VM1, VM2 and VM3 on processors p1, p2 and p3; the Xen VMM and the Xen IDD (on processor p0) sit between the VMs and the shared NIC of the consolidated server.]
12
Xen network packet reception workflow model.
Components: NIC and NIC device memory; VMM; IDD with the NIC device driver, packet bridging and the netback device driver; VM with the netfront device driver and the application; reception I/O ring and event channels between IDD and VM.
1. Packets arrive at the NIC.
2. Copy to NIC device memory.
3. DMA data.
4. Device interrupt.
5. Forward interrupt.
6. Post notify event.
7. Receive event notification.
8. Swap page with VM.
9. Copy to socket buffer.
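The numbered reception path above is what the LQN model and the LQSIM runs encode. As a rough illustration only, the sketch below replays the same path in the general-purpose SimPy discrete-event simulator, with contention at three assumed resources (NIC/DMA, IDD CPU, VM CPU); the grouping of steps onto resources and the service times are illustrative assumptions of the same order as the slide-16 demands, not values taken from the study.

```python
import random
import simpy

# Illustrative per-step service times (seconds); hypothetical values of the
# same order as the LQN entry demands on slide 16, not measurements.
NIC_DMA  = 9.2e-5   # steps 1-3: copy to NIC memory + DMA
IDD_RECV = 2.4e-5   # steps 4-6: interrupt forwarding, bridging, netback notify
VM_RECV  = 2.1e-4   # steps 7-9: netfront receive, page swap, copy to socket buffer

completed = 0

def packet(env, nic, idd_cpu, vm_cpu):
    """One received packet flowing through the reception path in order."""
    global completed
    for resource, demand in ((nic, NIC_DMA), (idd_cpu, IDD_RECV), (vm_cpu, VM_RECV)):
        with resource.request() as req:
            yield req
            yield env.timeout(demand)
    completed += 1

def arrivals(env, rate, nic, idd_cpu, vm_cpu):
    """Poisson packet arrivals at `rate` packets per second."""
    while True:
        yield env.timeout(random.expovariate(rate))
        env.process(packet(env, nic, idd_cpu, vm_cpu))

env = simpy.Environment()
nic, idd_cpu, vm_cpu = (simpy.Resource(env, capacity=1) for _ in range(3))
env.process(arrivals(env, rate=2000, nic=nic, idd_cpu=idd_cpu, vm_cpu=vm_cpu))
env.run(until=1.0)          # one simulated second
print(f"received {completed} packets; the VM CPU saturates near "
      f"{1 / VM_RECV:.0f} packets/s in this toy setup")
```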
13
Xen network packet transmission workflow model.
Components: VM with the application and the netfront device driver; IDD with the netback device driver, packet bridging and the NIC device driver; VMM; NIC and NIC device memory; I/O ring and event channels between VM and IDD.
1. Copy data from socket buffer.
2. Swap page with VMM.
3. Post notify event.
4. Receive event notification.
5. DMA data.
6. Transmit packet.
7. Packets sent over the network.
14
LQN Model for NIC sharing across two VMs in Xen.
15
LQN modeling conventions.
Each functional unit of the workflow is represented as a task.
A task carries out its functions through different entries.
Tasks interact with other tasks using synchronous (blocking) or asynchronous communication among their entries.
A workflow sequence of multiple independent entries within a task is captured using phases within the task.
Each task can be hosted to execute on a specified processor.
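As an informal illustration of these conventions only (this is not the LQNS input format or tool API), the sketch below encodes processors, tasks, entries with two phases of service demand, and synchronous/asynchronous calls between entries as plain Python data classes, and builds a small hypothetical fragment of the NIC-sharing model using entry names and demands from the table on slide 16; the processor assignment is assumed.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Processor:
    name: str                      # physical resource a task is hosted on

@dataclass
class Call:
    target: "Entry"                # called entry
    mean_calls: float              # mean number of calls per invocation
    synchronous: bool = True       # True = blocking (rendezvous), False = async

@dataclass
class Entry:
    name: str
    phase1_demand: float           # demand (s) while the caller is still blocked
    phase2_demand: float = 0.0     # demand (s) after replying to the caller
    calls: List[Call] = field(default_factory=list)

@dataclass
class Task:
    name: str
    processor: Processor
    entries: List[Entry] = field(default_factory=list)

# Hypothetical fragment of the NIC-sharing model: the IDD's receive entry
# synchronously drives the HTTP server's receive entry inside the VM.
p0, p1 = Processor("p0"), Processor("p1")
recv_req = Entry("Recv1_Req", phase1_demand=2.1069e-4)
recv_pkt = Entry("Recv1_Pkt", phase1_demand=2.3297e-5,
                 calls=[Call(recv_req, mean_calls=1.0, synchronous=True)])
idd   = Task("DD1_Recv", processor=p0, entries=[recv_pkt])
httpd = Task("httpS1_Recv", processor=p1, entries=[recv_req])
```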
16
Service time demands for entries in the LQN model.

Task Name       Entry Name    Phase 1      Phase 2
httperf1_post   Request1      1e-10        0
NIC_IN          DMA1_IN       9.24e-05     0
NIC_IN          DMA2_IN       9.24e-05     0
VMM_ISRIN       ISRIN1        1e-10        0
VMM_ISRIN       ISRIN2        1e-10        0
VMM_ISRINT      intr1H        4.7783e-05   0
DD1_Recv        Recv1_Pkt     2.3297e-05   0
httpS1_Recv     Recv1_Req     0.00021069   0
httpS1_Reply    Send1_Rep     0.00021069   0
DD1_RevC        Rev1_Pkt      1e-10        3.7767e-05
DD1_Send        Send1_Pkt     2.3297e-05   0
VMM_ISROUT      ISR1OUT       1e-10        0
VMM_ISROUT      ISR2OUT       1e-10        0
NIC_OUT         DMA1_OUT      9.24e-05     0
NIC_OUT         DMA2_OUT      9.24e-05     0
httperf1_recv   Reply1        1e-10        0
Timer1          Timer1_intr   1e-10        0
DD1_ForwC       Forw1_Pkt     3.7767e-05   1e-10
httperf2_post   Request2      1e-10        0
httpS2_Recv     Recv2_Req     0.00021069   0
httpS2_Reply    Send2_Rep     0.00021069   0
httperf2_recv   Reply2        1e-10        0
VM1_ISRINT      intr2H        4.7783e-05   0
VM2_ISRINT      intr3H       4.7783e-05   0
Timer2          Timer2_intr   1e-10        0
Timer3          Timer3_intr   1e-10        0
DD2_ForwC       Forw2_Pkt     3.7767e-05   1e-10
DD2_Recv        Recv2_Pkt     2.3297e-05   0
DD2_RevC        Rev2_Pkt      1e-10        3.7767e-05
DD2_Send        Send2_Pkt     2.3297e-05   0
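Slide 8 notes that bottleneck analysis is possible on the model. As a rough, back-of-the-envelope illustration (not the MOL solution, and ignoring blocking and asynchronous phases), the sketch below sums the per-request demand seen by each hosting resource along the VM1 request path, assuming the table's demands are in seconds, and reports the operational-analysis upper bound on throughput, 1/max demand. The grouping of entries onto resources (NIC, VMM, IDD CPU, VM1 CPU) is an assumption made for illustration, not something stated on the slide.

```python
# Per-request service demands (seconds) for the VM1 path, taken from the
# entry table above. The grouping of entries onto resources is assumed.
demand_by_resource = {
    "NIC": 2 * 9.24e-5,                       # DMA1_IN + DMA1_OUT
    "VMM": 2 * 1e-10 + 4.7783e-5,             # ISRIN1 + ISR1OUT + intr1H
    "IDD CPU": (2.3297e-5 + 2.3297e-5         # Recv1_Pkt + Send1_Pkt
                + 3.7767e-5 + 3.7767e-5),     # Forw1_Pkt + Rev1_Pkt
    "VM1 CPU": (4.7783e-5                     # intr2H
                + 0.00021069 + 0.00021069),   # Recv1_Req + Send1_Rep
}

# Asymptotic bound from operational analysis: X <= 1 / max_k D_k.
bottleneck = max(demand_by_resource, key=demand_by_resource.get)
bound = 1.0 / demand_by_resource[bottleneck]
for res, d in sorted(demand_by_resource.items(), key=lambda kv: -kv[1]):
    print(f"{res:8s} demand = {d * 1e6:8.2f} us/request")
print(f"bottleneck: {bottleneck}, throughput upper bound ~ {bound:.0f} req/s")
```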
17
Enterprise workload used to analyze NIC sharing in Xen – httperf
httperf is a tool for measuring web-server performance; it generates a mix of compute and I/O load on the server.
It provides a flexible facility for generating various HTTP workloads and for measuring server performance; HTTP workloads are connection-oriented TCP transactions.
The tool provides a client called "httperf" that issues a given number of HTTP requests to a specific, standard HTTP server to fetch a specified file.
The benchmark workload is specified as a number of HTTP requests per second.
Observable metrics at the client end, which depend on the server performance, are the average response time of replies, the number of replies received per second (throughput), and errors such as client timeouts, lack of file descriptors, etc.
httperf reports achieved throughput as an average over a limited set of observed samples.
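For concreteness, a small Python driver of the kind that could sweep the offered request rate from the client side is sketched below. The flags used (--server, --port, --uri, --rate, --num-conns, --timeout) are standard httperf options, but the host name, URI, rates and connection counts are placeholders, not the settings used in the study.

```python
import subprocess

def run_httperf(server, rate, num_conns, uri="/index.html", port=80, timeout=5):
    """Launch one httperf run at a fixed request rate and return its stdout."""
    cmd = [
        "httperf",
        "--server", server,
        "--port", str(port),
        "--uri", uri,
        "--rate", str(rate),            # offered load: requests per second
        "--num-conns", str(num_conns),  # total connections per run
        "--timeout", str(timeout),      # seconds before a reply counts as an error
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return result.stdout

if __name__ == "__main__":
    # Sweep the offered load and print each run's httperf report.
    # "consolidated-server.example" is a placeholder host name.
    for rate in range(100, 1100, 100):
        output = run_httperf("consolidated-server.example", rate, num_conns=rate * 10)
        print(f"--- offered rate {rate} req/s ---")
        print(output)
```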
18
httperf throughput for non-virtualized and virtualized single-core system.
19
LQN model validation for httperf throughput: non-virtualized server and virtualized server.
20
LQN model validation for httperf throughput: httperf throughput for the two-VM consolidated server.
21
LQN model assumptions and result-validation analysis.
In the LQN model the workload input is represented as a number of HTTP requests; in reality, each HTTP request is broken into network packets, so packet-level queuing delays are missing from the LQN model.
– In the non-virtualized server no deviation is observed.
– In the virtualized server this causes the simulated throughput to be optimistic (<10%); the reason is the asynchronous device channels between the IDD and the VM.
For evaluation, the simulation results can be used as upper bounds on achievable throughput.
22
Case study: evaluating a NIC virtualization architecture using LQNs.
Proposed I/O device virtualization architecture: I/O devices are made virtualization-aware; the physical device enables logical partitioning of device resources guided by QoS guarantees.
The VMM controls and defines a virtual device using the logical partitioning of the device.
A virtual device is exported to a VM; the virtual device is private to that VM and is managed by the VM.
The IDD is eliminated.
23
Proposed Architecture schematic diagram.
24
Contention model for the proposed architecture.
25
LQN model for NIC sharing across two VMs under the proposed I/O virtualization architecture.
26
httperf achievable throughput results comparison: existing Xen architecture on a multi-core server vs. the proposed I/O virtualization architecture on a multi-core server.
27
Conclusion
Proposed an I/O device virtualization architecture.
Evaluated the architecture using simulation of LQN models for the httperf workload.
The architecture shows a benefit of up to 60% in achievable server throughput compared to the existing Xen virtualization architecture.
Simulation results indicate performance similar to that of real implementations.