Presentation on theme: "Improving IPC by Kernel Design"— Presentation transcript:

1 Improving IPC by Kernel Design
Jochen Liedtke
Presentation: Rebekah Leslie

2 Microkernels and IPC:
Microkernel architectures introduce a heavy reliance on IPC, particularly in modular systems
Mach pioneered an approach to highly modular and configurable systems, but had poor IPC performance
Poor performance leads people to avoid microkernels entirely, or to architect their designs to reduce IPC
The paper examines a performance-oriented design process and specific optimizations that achieve good IPC performance

3 Performance vs. Protection:
Mach provided strong isolation between tasks using indirect IPC transfer (ports), with access limited by capabilities (port rights)
L3 removes indirect transfer, capabilities, and RPC validation to achieve better performance
Provides basic address space protection
Does not provide true isolation
Recent L4 designs reintroduce capabilities to achieve isolation in security-critical systems

4 L3 System Architecture:
Mach-like design with a focus on highly modular systems
Kernel features are limited to functionality that absolutely cannot be implemented at user level
Relies on user-level servers for many “traditional” OS features: page-fault handling, exception handling, device drivers
System organized into tasks and threads
IPC allows direct data transfer and memory sharing
Threads communicate directly with one another via thread IDs

5 Performance-centric Design:
Focus on IPC
  Any feature that would increase IPC cost must be closely evaluated
  When in doubt, design in favor of IPC
Design for Performance
  A poorly performing technique is unacceptable
  Evaluate feature cost against a concrete baseline
  Aim for a concrete performance goal
Comprehensive design
  Consider the synergistic effects of all methods and techniques
  Cover all levels of implementation, from design to coding

6 Performance Baseline:
The cost of each feature must be evaluated relative to a concrete performance baseline
For IPC, the theoretical minimum is an empty message: this measures the overhead without any data-transfer cost
127 cycles without prefetching delays or cache misses + 45 cycles for TLB misses = 172-cycle minimum time
GOAL: 350 cycles (7 μs) for short messages
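
Sanity check (assuming the 50 MHz i486 the paper targets): the 172-cycle floor corresponds to 172 / 50 MHz ≈ 3.4 μs, and the 350-cycle goal works out to 350 / 50 MHz = 7 μs, about twice the theoretical minimum.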

7 Messages in L3:
[Figure: message layout - a tag, followed by a direct string, indirect strings, and memory objects]
Tag: Description of message contents
Direct string: Data to be transferred directly from send buffer to receive buffer
Indirect string: Location and size of data to be transferred by reference
Memory object: Description of a region of memory to be mapped into the receiver's address space (shared memory)
System calls: Send, receive, call (send and receive), reply/wait (receive and send)
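
A minimal C sketch of what such a message descriptor could look like (all names, counts, and sizes are illustrative, not the actual L3 definitions):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of an L3-style message descriptor.
 * Field names, array sizes, and layout are hypothetical. */

typedef struct {
    void   *base;    /* user-space address of the data           */
    size_t  len;     /* number of bytes to transfer by reference */
} indirect_string;

typedef struct {
    uintptr_t start; /* region to map into the receiver's space  */
    size_t    size;  /* (shared memory, mapped lazily)           */
} memory_object;

typedef struct l3_message {
    uint32_t        tag;          /* describes what follows          */
    uint8_t         direct[64];   /* data copied buffer-to-buffer    */
    size_t          direct_len;
    indirect_string strings[4];   /* copied kernel-side by reference */
    size_t          nstrings;
    memory_object   objects[2];   /* mapped, not copied              */
    size_t          nobjects;
} l3_message;
```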

8 Basic Message Optimizations:
The ability to transfer long, complex messages reduces the number of messages that need to be sent (and thus system calls)
Indirect strings avoid copy operations at user level
  The user specifies the data location, rather than copying the data to a buffer
  The receiver specifies the destination, rather than copying from a buffer
Memory objects are transferred lazily, i.e., the page table is not modified until access is required
Combined send/receive calls reduce the number of traps (see the sketch below)
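
An illustration of that last point, using hypothetical stub names; each stub stands for one trap into the kernel, so the combined call halves the kernel entries of an RPC round trip:

```c
/* Hypothetical syscall stubs: each call below is one trap into the
 * kernel. The message type is opaque here. */
typedef struct l3_message l3_message;

extern int ipc_send(int dest_tid, const l3_message *msg);
extern int ipc_receive(int from_tid, l3_message *msg);
extern int ipc_call(int dest_tid, const l3_message *req, l3_message *reply);

/* RPC the naive way: two traps (and two kernel exits) per request. */
void rpc_two_traps(int server, const l3_message *req, l3_message *reply)
{
    ipc_send(server, req);
    ipc_receive(server, reply);
}

/* RPC with the combined call: one trap, and the kernel can switch
 * directly to the server without returning to the client in between. */
void rpc_one_trap(int server, const l3_message *req, l3_message *reply)
{
    ipc_call(server, req, reply);
}
```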

9 Optimization - Direct Transfer via Temporary Mapping:
Two-copy message transfer is expensive: its cost grows linearly with the message length
L3 copies the data only once, to a special communication window in kernel space, mapped with kernel-only permission
The window is mapped to the receiver's memory for the duration of the call (by rewriting a page directory entry)
[Figure: the kernel adds a mapping for space B into its window, then a single copy moves the data from A directly into B]
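
A simplified sketch of that path, assuming a 4 MB window covered by one page-directory entry on 32-bit x86; every constant and helper here (COMM_WINDOW, pde_covering, flush_tlb_entry) is hypothetical:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical kernel-only "communication window": one 4 MB region
 * covered by a single page-directory entry. */
#define COMM_WINDOW ((uint8_t *)0xF0400000u)

extern uint32_t *current_page_dir;                  /* sender's page dir */
extern uint32_t  pde_covering(const void *uaddr,
                              const void *space_b); /* B's page table    */
extern void      flush_tlb_entry(const void *vaddr);

/* Move 'len' bytes from the sender's buffer into the receiver's buffer
 * with a SINGLE copy: the receiver's pages are aliased into the kernel
 * window of the sender's address space for the duration of the call. */
void transfer_direct(const void *src, void *dst_in_b,
                     const void *space_b, size_t len)
{
    /* 1. Point the window's PDE at the receiver's page table (the
     *    mapping is kernel-only, so user code never sees the window). */
    current_page_dir[(uintptr_t)COMM_WINDOW >> 22] =
        pde_covering(dst_in_b, space_b);
    flush_tlb_entry(COMM_WINDOW);

    /* 2. One copy, directly from A's buffer into B's buffer. */
    memcpy(COMM_WINDOW + ((uintptr_t)dst_in_b & 0x3FFFFFu), src, len);

    /* 3. Tear the temporary mapping down before returning to user mode. */
    current_page_dir[(uintptr_t)COMM_WINDOW >> 22] = 0;
    flush_tlb_entry(COMM_WINDOW);
}
```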

10 Optimization - Transfer Short Messages in Registers:
IPC messages are often very short: for example, device-driver acks or error replies
On average, between 50% and 80% of L3 messages are less than eight bytes long
Even on the register-poor x86, 2 registers can be set aside for short message transfer
The register-transfer implementation saved 2.4 μs, even more than the overhead of temporary mapping (1.2 μs), because it enabled further optimizations
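
A sketch of the fast-path packing, with the two reserved registers modeled as a plain struct (the real kernel keeps the payload in actual CPU registers across the thread switch; all names are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Payloads of up to eight bytes travel in two 32-bit words standing in
 * for the two reserved registers; longer payloads take the memory path. */
typedef struct {
    uint32_t w0, w1;
} reg_pair;

int pack_short_message(const void *buf, size_t len, reg_pair *regs)
{
    if (len > 8)
        return 0;                 /* too long: use the memory path */

    const uint8_t *p = buf;
    regs->w0 = regs->w1 = 0;
    for (size_t i = 0; i < len; i++) {
        /* byte i lands in w0 (i < 4) or w1 (i >= 4), little-endian */
        uint32_t *w = (i < 4) ? &regs->w0 : &regs->w1;
        *w |= (uint32_t)p[i] << (8 * (i & 3));
    }
    return 1;                     /* register fast path taken */
}
```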

11 Thread Scheduling in L3:
The scheduler maintains several queues to keep track of relevant thread-state information
  The ready queue stores threads that are able to run
  Wakeup queues store threads that are blocked waiting for an IPC operation to complete or time out (organized by wakeup-time region)
  A polling-me queue stores the threads waiting to send to a given thread
Efficient representation of the data structures
  Queues are stored as doubly-linked lists distributed across the TCBs (sketched below)
  Scheduling never causes page faults
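
A sketch of that representation, with an abbreviated and illustrative TCB: the queue links live inside the TCB itself, so queue operations touch no memory outside the (resident) TCBs and need no allocation:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct tcb {
    struct tcb *ready_next, *ready_prev;    /* ready-queue links      */
    struct tcb *wakeup_next, *wakeup_prev;  /* wakeup-queue links     */
    uint64_t    wakeup_time;                /* IPC timeout deadline   */
    /* ... thread id, register save area, kernel stack, ... */
} tcb;

static tcb *ready_head;

static void ready_enqueue(tcb *t)
{
    t->ready_prev = NULL;
    t->ready_next = ready_head;
    if (ready_head)
        ready_head->ready_prev = t;
    ready_head = t;
}

static void ready_dequeue(tcb *t)
{
    if (t->ready_prev) t->ready_prev->ready_next = t->ready_next;
    else               ready_head = t->ready_next;
    if (t->ready_next) t->ready_next->ready_prev = t->ready_prev;
}
```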

12 Optimization - Lazy Dequeueing
A significant cost in a microkernel is the scheduling overhead for kernel threads (recall the user-level threads papers)
Sometimes threads are removed from a queue, only to be inserted again a short while later
With weak invariants on the scheduling queues, the kernel can delay deleting a thread from a queue and save that overhead (see the sketch below)
  The ready queue contains at least all ready threads
  A wakeup queue contains at least all waiting threads
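
A sketch of the lazy variant, with illustrative types: blocking only flips a state flag, and the scheduler prunes stale entries while it scans, which is what makes the weak "at least all ready threads" invariant cheap to maintain:

```c
#include <stddef.h>

typedef enum { T_READY, T_WAITING } tstate;

typedef struct tcb {
    struct tcb *next, *prev;
    tstate      state;
} tcb;

static tcb *ready_head;

static void unlink_thread(tcb *t)
{
    if (t->prev) t->prev->next = t->next;
    else         ready_head = t->next;
    if (t->next) t->next->prev = t->prev;
}

/* Block for IPC in O(1): no list surgery. If the reply arrives before
 * the scheduler runs again, the thread was never dequeued at all. */
static void block_for_ipc(tcb *t)
{
    t->state = T_WAITING;
}

/* The scheduler tolerates stale entries and removes them as it scans. */
static tcb *pick_next_ready(void)
{
    tcb *t = ready_head;
    while (t && t->state != T_READY) {
        tcb *stale = t;
        t = t->next;
        unlink_thread(stale);
    }
    return t;
}
```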

13 Optimization - Store Task Control Blocks in Virtual Arrays
A task control block (TCB) stores the kernel data for a particular thread
Every operation on a thread requires a lookup, and possibly a modification, of that thread's TCB
Storing the TCBs in a virtual array provides fast access to TCB structures: a thread ID maps to its TCB by address arithmetic alone (sketched below)
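
A sketch of that address computation, with hypothetical constants; unused slots simply stay unmapped in the virtual array, so no lookup table is ever consulted:

```c
#include <stdint.h>

#define TCB_AREA_BASE  0xE0000000u  /* assumed start of the TCB region */
#define TCB_SIZE_LOG2  10           /* assumed 1 KB per TCB            */
#define TID_INDEX_MASK 0x3FFFFu     /* assumed index bits of the tid   */

typedef struct tcb tcb;

/* Thread id -> TCB address by pure arithmetic: base + index * size. */
static inline tcb *tcb_of(uint32_t tid)
{
    return (tcb *)(uintptr_t)
        (TCB_AREA_BASE + ((tid & TID_INDEX_MASK) << TCB_SIZE_LOG2));
}
```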

14 Optimization - Compact Structures with Good Locality
Access TCBs through a pointer to the center of the structure so that short displacements can be used
  One-byte (signed) displacements then reach twice as much TCB data as with a pointer to the start of the structure
Group related TCB information on cache line boundaries to minimize cache misses
Store frequently accessed kernel data in the same page as the hardware tables (IDT, GDT, TSS)
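
A sketch of the center-pointer trick, assuming (illustratively) a 256-byte TCB: from the start, one-byte signed displacements reach only offsets 0..127, while from the center they reach -128..+127, i.e. the whole structure:

```c
#include <stdint.h>

typedef struct {
    uint8_t bytes[256];   /* stand-in for the real TCB layout */
} tcb;

/* The kernel passes THIS pointer around instead of &t->bytes[0]. */
static inline uint8_t *tcb_center(tcb *t)
{
    return t->bytes + 128;
}
/* Frequently used fields are laid out closest to the center, so the
 * generated x86 code can address them as center[-4], center[+8], etc.,
 * using the short one-byte displacement encodings. */
```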

15 Optimization - Reduce Segment Register Loads
Loading segment registers is expensive (9 cycles per register), so many systems use a single, flat segment
Kernel preservation of the segment registers requires 66 cycles with the naive approach (always reload the registers)
L3 instead checks whether the flat value is still intact, and only reloads if not
The check alone costs 10 cycles, so the common case is much cheaper than the unconditional reload
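
A sketch of the check-before-reload idea (x86, GNU C inline assembly; the selector value is an assumption, and the cycle counts in the comments are the slide's):

```c
#include <stdint.h>

#define FLAT_USER_DATA 0x23u   /* assumed selector of the flat segment */

static inline uint16_t read_ds(void)
{
    uint16_t sel;
    __asm__ volatile("mov %%ds, %0" : "=r"(sel));
    return sel;
}

static inline void load_flat_segments(void)   /* ~9 cycles per register */
{
    uint16_t sel = FLAT_USER_DATA;
    __asm__ volatile("mov %0, %%ds\n\tmov %0, %%es" : : "r"(sel));
}

/* Naive exit path: always reload -> ~66 cycles. Checked exit path: the
 * ~10-cycle test below, plus the reload only if user code actually
 * changed the segment registers. */
static inline void restore_user_segments(void)
{
    if (read_ds() != FLAT_USER_DATA)
        load_flat_segments();
}
```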

16 Performance Impact of Specific Optimizations:
[Table of per-optimization savings; notes: (a) for RPC, (b) reduced TLB misses, (c) only the reduced-TLB-miss effect]
Condensed, virtual tables reduce TLB misses and provide fast access
Large messages are dominated by copy overhead
Small messages get the benefit of the optimized path through the kernel: faster context switching, fewer system calls, and fast access to kernel structures

17 IPC Performance Compared to Mach (Short Message):
Measured using a ping-pong micro-benchmark that makes use of the combined send/receive calls
For an n-byte message, the measured cost in L3 grows linearly with n

18 IPC Performance Compared to Mach (Long Messages):
Same benchmark with larger messages
For n-byte messages larger than 2K, cache misses increase, so the linear IPC-time formula has a slightly higher base cost and a higher per-byte cost than in the short-message range
By comparison, Mach's time is also linear in n, but with a far higher base cost and per-byte cost

19 Comparison of L3 RPC to Previous Systems:
[Table: RPC times of L3 and earlier systems, measured with null messages]

20 Conclusions:
Poor IPC performance was a major limitation of Mach; well-performing IPC is essential for microkernels to gain wide adoption
L3 demonstrates that good performance is attainable in a microkernel system, with IPC performance 10 to 22 times better than Mach's
The performance-centric design techniques demonstrated in the paper can be employed in any system, even if the specific optimizations cannot

