
1 Automatic Compaction of OS Kernel Code via On-Demand Code Loading
Haifeng He, Saumya Debray, Gregory Andrews
The University of Arizona

2 Background
General-purpose operating systems are now used on both desktops and embedded devices. Embedded devices impose resource constraints, in particular a limited amount of memory, so the goal is to reduce the memory footprint of OS kernel code as much as possible.

3 General OS with Embedded Apps
For a Linux kernel with a minimal configuration, profiled with the MiBench suite, only about 32% of the kernel code is executed; about 68% is not executed. Of the unexecuted code, prior work can statically prove only 18%-24% unnecessary; the rest is either needed (e.g. for exception handling) or not needed but missed by existing analysis, and so still cannot be discarded.

4 Our Approach
Exploit the memory hierarchy: a limited amount of main memory backed by a greater amount of secondary storage. Partition the kernel code accordingly: hot code lives in memory, while cold code lives in secondary storage and is brought in through on-demand code loading.

5 A Big Picture
Main memory holds the memory-resident kernel code: core code (scheduler, memory management, interrupt handling), hot code, and a code buffer that accommodates one cluster at a time. The remaining kernel code is partitioned into clusters by code clustering and kept in secondary storage, with size(cluster) ≤ size(code buffer).

6 Memory Requirement for Kernel Code
- Core code: its size is predetermined, and it must stay in memory.
- Code buffer: its size is specified by the user.
- Hot code: the most frequently executed code is selected. How much hot code should stay in memory? Keep the total size of memory-resident code ≤ size(core code) × (1 + δ), where δ is specified by the user (e.g. 0%, 10%).
- Together these give the upper bound on memory usage for kernel code.
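A quick worked example with illustrative numbers (not taken from the slides): if size(core code) = 200 KB and δ = 10%, the memory-resident code may total at most 200 KB × 1.10 = 220 KB, i.e. at most 20 KB of hot code in addition to the core code; adding a 2 KB code buffer gives an upper bound of 222 KB for kernel code.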

7 Our Approach
- Reminiscent of the old idea of overlays: a purely software-based approach that does not require an MMU or OS support for virtual memory.
- Main steps:
  - Apply clustering to the whole-program control flow graph: group "related" code together to reduce the cost of code loading.
  - Transform the kernel code to support overlays: modify control flow edges.

8 Code Clustering
- Objective: minimize the number of code loads.
- Given:
  - an edge-weighted whole-program control flow graph,
  - a list of functions marked as core code,
  - a growth bound δ for memory-resident code,
  - a code buffer size BufSz.
- Apply a greedy node-coalescing algorithm until no coalescing can be carried out without violating:
  - size of memory-resident code ≤ size(core code) × (1 + δ),
  - size of each cluster ≤ BufSz.
(A sketch of such a greedy coalescing pass appears below.)
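The slides give only the constraints, not the coalescing details. Below is a minimal C sketch of one way a greedy, edge-weight-driven coalescing pass could look; it enforces only the per-cluster buffer-size constraint and omits core-code handling and the memory-resident growth bound (all names are illustrative, not the paper's implementation):

    /* Minimal sketch of a greedy node-coalescing pass for code clustering.
     * Assumptions (not from the slides): nodes are functions with a code size;
     * edges carry profile weights.  Heaviest edges are considered first, and
     * two clusters are merged only if the result still fits in the buffer. */
    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_NODES 1024

    typedef struct { int src, dst; long weight; } Edge;

    static int  cluster_of[MAX_NODES];    /* union-find parent: node -> cluster */
    static long cluster_size[MAX_NODES];  /* total code size of each cluster    */

    static int find(int n) {
        while (cluster_of[n] != n) {
            cluster_of[n] = cluster_of[cluster_of[n]];   /* path halving */
            n = cluster_of[n];
        }
        return n;
    }

    static int by_weight_desc(const void *a, const void *b) {
        long wa = ((const Edge *)a)->weight, wb = ((const Edge *)b)->weight;
        return (wb > wa) - (wb < wa);
    }

    static void cluster_code(const long node_size[], int nnodes,
                             Edge edges[], int nedges, long BufSz) {
        for (int i = 0; i < nnodes; i++) {
            cluster_of[i]   = i;
            cluster_size[i] = node_size[i];
        }
        qsort(edges, nedges, sizeof(Edge), by_weight_desc);
        for (int i = 0; i < nedges; i++) {
            int a = find(edges[i].src), b = find(edges[i].dst);
            if (a == b)
                continue;                              /* already coalesced     */
            if (cluster_size[a] + cluster_size[b] > BufSz)
                continue;                              /* would overflow buffer */
            cluster_of[b]    = a;                      /* merge b into a        */
            cluster_size[a] += cluster_size[b];
        }
    }

    int main(void) {
        long size[4]  = { 800, 600, 900, 500 };        /* code sizes in bytes */
        Edge edges[3] = { {0, 1, 1000}, {1, 2, 50}, {2, 3, 700} };
        cluster_code(size, 4, edges, 3, 2048);         /* 2 KB code buffer    */
        for (int i = 0; i < 4; i++)
            printf("function %d -> cluster %d\n", i, find(i));
        return 0;
    }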

9 Code Transformation
Apply the code transformation to:
- inter-cluster control flow edges,
- control flow edges from memory-resident code to clusters (but not needed the other way),
- all indirect control flow edges (targets only known at runtime).

10 Code Transformation (example)
After clustering, a call in cluster A to function F (at original address 0x220 in cluster B) is rewritten as "push &F; call dyn_loader". At runtime, the dyn_loader routine in the runtime library:
1. looks up the address &F,
2. loads cluster B into the code buffer (which starts at 0x500, so F ends up at 0x520),
3. translates the target address &F into the corresponding relative address in the code buffer.
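The slides describe dyn_loader only at this high level. The following is a minimal C sketch of those three steps under assumed data structures (Cluster, cluster_table, code_buffer, and the simulated cluster image are illustrative names, not the paper's actual implementation):

    /* Hypothetical sketch of the dyn_loader runtime step.  Given the original
     * link-time address of a call target, find the cluster containing it, copy
     * that cluster into the 2 KB code buffer if it is not already resident, and
     * return the translated address inside the buffer. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define CODE_BUF_SIZE 2048                    /* 2 KB buffer, as in the slides */

    typedef struct {
        uintptr_t   start, end;                   /* original address range        */
        const void *image;                        /* cluster bytes kept in secondary
                                                     storage (simulated here)      */
    } Cluster;

    static uint8_t cluster_B_image[0x100];        /* stand-in for cluster B's code */
    static const Cluster cluster_table[] = {      /* built at kernel-rewrite time  */
        { 0x200, 0x300, cluster_B_image },        /* cluster B: F lives at 0x220   */
    };
    static const int num_clusters = 1;

    static uint8_t code_buffer[CODE_BUF_SIZE];
    static int     current_cluster = -1;          /* which cluster is resident     */

    void *dyn_loader(uintptr_t target) {
        for (int i = 0; i < num_clusters; i++) {  /* 1. address lookup for target  */
            const Cluster *c = &cluster_table[i];
            if (target >= c->start && target < c->end) {
                if (current_cluster != i) {       /* 2. load cluster into buffer   */
                    memcpy(code_buffer, c->image, (size_t)(c->end - c->start));
                    current_cluster = i;
                }
                return code_buffer + (target - c->start);  /* 3. translate address */
            }
        }
        return NULL;                              /* target is memory-resident     */
    }

    int main(void) {
        void *f = dyn_loader(0x220);              /* call target F in cluster B    */
        printf("F loaded at buffer offset %ld\n",
               (long)((uint8_t *)f - code_buffer));
        return 0;
    }

A rewritten call site ("push &F; call dyn_loader") would then transfer control to the pointer returned here; in the real system the lookup and translation are presumably performed by PLTO-generated binary stubs rather than by C code.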

11 Issue: Call Return in Code Buffer
Cluster A is executing in the code buffer (which starts at 0x500). Its rewritten call "push &F; call dyn_loader" pushes the return address 0x540, the code-buffer address of the instruction after the call, rather than the original address 0x140 in cluster A.

12 Call Return in Code Buffer
dyn_loader loads cluster B into the code buffer, and F (now at 0x520) executes. A has been overwritten by B, so when F returns to 0x540 it would land in the wrong code.

13-14 Call Return in Code Buffer: the Fix
Before B is loaded, the runtime replaces the pushed return address 0x540 with the address of a restore stub, dyn_restore_A, and records the actual return address, 0x140.

15-16 Call Return in Code Buffer: the Fix (continued)
B is then loaded into the code buffer and F executes. When F returns, control reaches dyn_restore_A, which restores cluster A into the code buffer and resumes execution at the actual return address 0x140.
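How the return-address patching might look in C, as a purely illustrative sketch: the real mechanism is generated at the binary level; saved_return_addr, fix_return_address, and load_cluster_of are invented names, and a single saved slot ignores nested calls.

    /* Hypothetical sketch of the call-return fix.  Before the callee's cluster
     * is loaded, the on-stack return address, which points into the soon-to-be
     * overwritten code buffer (e.g. 0x540), is replaced by the address of a
     * per-cluster restore stub, and the real return address in the original
     * code (e.g. 0x140 in cluster A) is remembered. */
    #include <stdint.h>
    #include <stdio.h>

    static uintptr_t saved_return_addr;          /* real return address, e.g. 0x140 */

    static void load_cluster_of(uintptr_t addr)  /* assumed helper: reload the      */
    {                                            /* cluster containing addr into    */
        printf("reloading cluster containing 0x%lx into code buffer\n",
               (unsigned long)addr);             /* the code buffer (simulated)     */
    }

    void dyn_restore_A(void);                    /* forward declaration             */

    /* Called while handling a rewritten call: ret_slot points at the pushed
     * return address (a code-buffer address such as 0x540). */
    void fix_return_address(uintptr_t *ret_slot, uintptr_t original_ret) {
        saved_return_addr = original_ret;        /* remember 0x140                  */
        *ret_slot = (uintptr_t)&dyn_restore_A;   /* F's "ret" will land here        */
    }

    /* Restore stub for cluster A: bring A back into the code buffer, then
     * continue at the remembered return address (printed here, since this
     * sketch cannot jump into simulated code). */
    void dyn_restore_A(void) {
        load_cluster_of(saved_return_addr);
        printf("resuming at original return address 0x%lx\n",
               (unsigned long)saved_return_addr);
    }

    int main(void) {
        uintptr_t stack_ret = 0x540;             /* return address pushed in buffer */
        fix_return_address(&stack_ret, 0x140);   /* patch it before loading B       */
        ((void (*)(void))stack_ret)();           /* F "returns": control reaches
                                                    the restore stub               */
        return 0;
    }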

17 Context Switches and Interrupts
- Context switches: suppose thread 1 is executing cluster A in the code buffer when a context switch occurs. The identity of A is remembered in thread 1's task_struct. Thread 2 then executes and may change the contents of the code buffer. At the next context switch back to thread 1, cluster A is reloaded into the code buffer and thread 1 continues executing.
- Interrupts: interrupt handlers are currently kept in main memory.
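A minimal sketch of the context-switch handling, with invented structure and helper names (the real implementation would presumably record the cluster identity in the Linux task_struct, as the slide says):

    /* Hypothetical sketch of preserving the code-buffer contents across
     * context switches. */
    #include <stdio.h>

    struct task_codebuf_state {
        int cur_cluster;                       /* cluster this thread had in the
                                                  code buffer, or -1 for none     */
    };

    static int code_buffer_cluster = -1;       /* cluster currently in the buffer */

    static void load_cluster(int id) {         /* simulated reload of a cluster   */
        printf("loading cluster %d into code buffer\n", id);
        code_buffer_cluster = id;
    }

    /* Called as part of a context switch from prev to next: remember what prev
     * had in the code buffer and, if next was running cluster code, reload it. */
    void switch_code_buffer(struct task_codebuf_state *prev,
                            struct task_codebuf_state *next) {
        prev->cur_cluster = code_buffer_cluster;       /* remember A for thread 1 */
        if (next->cur_cluster >= 0 && next->cur_cluster != code_buffer_cluster)
            load_cluster(next->cur_cluster);           /* reload A when resuming  */
    }

    int main(void) {
        struct task_codebuf_state t1 = { -1 }, t2 = { -1 };
        load_cluster(7);                       /* thread 1 runs cluster A (id 7)  */
        switch_code_buffer(&t1, &t2);          /* switch to thread 2              */
        load_cluster(3);                       /* thread 2 uses the buffer        */
        switch_code_buffer(&t2, &t1);          /* switch back: cluster 7 reloaded */
        return 0;
    }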

18 Experimental Setup
- Start with a minimally configured kernel (Linux 2.4.31), compiled with optimization for code size (gcc -Os); original code size: 590 KB.
- Implemented using the binary rewriting tool PLTO.
- Benchmarks: MiBench, MediaBench, httpd.

19 Memory Usage Reduction for Kernel Code
(Chart; code buffer size = 2 KB.) The reduction decreases because the amount of memory-resident code increases.

20 Estimated Cost of Code Loading
- All experiments were run in a desktop environment.
- We estimated the cost of code loading as follows: choose Micron NAND flash memory as an example (2 KB page, with a fixed time to read a page).
- Est. cost = (number of code loads) × (time to read one 2 KB page).
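For illustration only, a tiny calculation in the spirit of that estimate; the page-read latency and number of loads below are made-up placeholders, not values from the paper:

    /* Rough illustration of the loading-cost estimate. */
    #include <stdio.h>

    int main(void) {
        const double page_read_us = 25.0;      /* assumed NAND page-read latency   */
        const long   num_loads    = 100000;    /* assumed number of cluster loads  */
        /* Each cluster fits in the 2 KB code buffer, i.e. one 2 KB flash page.    */
        double est_cost_ms = num_loads * page_read_us / 1000.0;
        printf("estimated code-loading cost: %.1f ms\n", est_cost_ms);
        return 0;
    }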

21 Overhead of Code Loading
(Chart comparing runtime overhead against the unmodified kernel for configurations with 57%, 56%, and 55% memory reduction.)

22 Related Work
- Code compaction of OS kernels: D. Chanet et al., LCTES 05; H. He et al., CGO 07.
- Reducing memory requirements in embedded systems: C. Park et al., EMSOFT 04; H. Park et al., DATE 06; B. Egger et al., CASES 06, EMSOFT 06.
- Binary rewriting of OS kernels: Flower et al., FDDO-4.

23 Conclusions
- Embedded devices typically have a limited amount of memory.
- General-purpose OS kernels contain a lot of code that is not executed in an embedded context.
- We reduce the memory requirement of the OS kernel by using an on-demand code overlay mechanism.
- Memory requirements are reduced significantly with little degradation in performance.

24 Estimated Cost of Code Loading

25 A Big Picture
Main memory holds the memory-resident kernel code: core code (scheduler, memory management, interrupt handling), hot code, and a code buffer that is reused to accommodate one cluster of cold code at a time; the cold code is partitioned by code clustering.

26 Memory Requirement for Kernel Code
- Core code: needs to be in memory; its size is predetermined.
- Code buffer: size specified by the user (we chose 2 KB).
- Hot code: select the most frequently executed code. How much hot code should stay in memory? Keep the total size of memory-resident code ≤ size(core code) × (1 + δ), where δ is specified by the user (0%, 10%).
- Together these give the upper bound on memory usage for kernel code.

