PRACTICAL, TRANSPARENT OPERATING SYSTEM SUPPORT FOR SUPERPAGES

1 PRACTICAL, TRANSPARENT OPERATING SYSTEM SUPPORT FOR SUPERPAGES
J. Navarro Rice University/Universidad Católica de Chile S. Iyer, P. Druschel, A. Cox Rice University

2 Paper Highlights Presents a general, efficient mechanism that lets the OS manage VM pages of different sizes (superpages) Without user intervention Main motivation is to address the limitations of current translation lookaside buffers (TLBs)

3 THE PROBLEM

4 The translation lookaside buffer
Small high-speed memory Contains a fixed number of page table entries Content-addressable memory Each entry pairs a page number with a page frame number
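As a minimal sketch of the slide above (class and field names are illustrative, not taken from any real processor), a TLB can be modeled as a small, fixed-capacity content-addressable map from page numbers to page frame numbers:

```python
from dataclasses import dataclass

@dataclass
class TLBEntry:
    page_number: int        # virtual page number (the lookup key)
    frame_number: int       # physical page frame number
    valid: bool = True
    dirty: bool = False

class TLB:
    """Toy TLB: a dict stands in for the content-addressable memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # page_number -> TLBEntry

    def lookup(self, page_number):
        e = self.entries.get(page_number)
        return e.frame_number if e and e.valid else None  # None = TLB miss

    def insert(self, entry):
        if len(self.entries) >= self.capacity:
            # Evict the oldest entry (a FIFO stand-in for real replacement policies)
            self.entries.pop(next(iter(self.entries)))
        self.entries[entry.page_number] = entry

tlb = TLB(capacity=128)
tlb.insert(TLBEntry(page_number=0x42, frame_number=0x1000))
assert tlb.lookup(0x42) == 0x1000   # hit
assert tlb.lookup(0x43) is None     # miss
```

A real TLB resolves the lookup in hardware in a single cycle; the dict here only models the associative behavior, not the timing.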

5 TLB organization Usually fully associative
Not always true (see Intel Nehalem) Considerably fewer entries than an L1 cache Speed considerations

6 Realizations (I) TLB of ULTRA SPARC III 64-bit addresses
Do not even attempt to memorize this! Maximum program size is 16 TB (2^44 bytes) Supported page sizes were 4 KB, 16 KB, 64 KB, 4 MB ("superpages") External L2 cache had a maximum capacity of 8 MB

7 Realizations (II) TLB of ULTRA SPARC III Dual direct-mapping TLB
Do not even attempt to memorize this! 64 entries for code pages 64 entries for data pages Each entry occupies 64 bits Page number and page frame number Context Valid bit, dirty bit, …

8 Realizations (III) Intel Nehalem Architecture:
Do not even attempt to memorize this! Two-level TLB, first level has two parts: Data TLB has 64 entries for 4 KB pages or 32 for big pages (2 MB/4 MB) Instruction TLB has 128 entries for 4 KB pages and 7 for big pages.

9 Realizations (IV) Second level: Unified cache
Do not even attempt to memorize this! Can store up to 512 entries Operates only with 4 KB pages

10 The main problem TLB sizes have not grown with sizes of main memories
Define TLB coverage as the amount of main memory that can be accessed without incurring TLB misses Typically a few megabytes or less Relative TLB coverage is the fraction of main memory that can be accessed without incurring TLB misses

11 Back to our examples Ultra SPARC III
With 4 KB pages: (64 + 64)×4 KB = 512 KB With 16 KB pages: (64 + 64)×16 KB = 2 MB

12 Back to our examples Intel Nehalem with 4 KB pages: Level 1:
(64 + 128)×4 KB = 768 KB Level 2: 512×4 KB = 2 MB
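The coverage arithmetic on these two slides can be checked with a one-line helper; the entry counts come from the earlier slides on the UltraSPARC III and Nehalem TLBs:

```python
def tlb_coverage(num_entries, page_size_bytes):
    """TLB coverage = memory reachable without incurring a TLB miss."""
    return num_entries * page_size_bytes

KB, MB = 1024, 1024**2

# UltraSPARC III: 64 code entries + 64 data entries
assert tlb_coverage(64 + 64, 4 * KB) == 512 * KB
assert tlb_coverage(64 + 64, 16 * KB) == 2 * MB

# Nehalem level 1: 64 data + 128 instruction entries, 4 KB pages
assert tlb_coverage(64 + 128, 4 * KB) == 768 * KB

# Nehalem level 2 unified TLB: 512 entries, 4 KB pages only
assert tlb_coverage(512, 4 * KB) == 2 * MB
```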

13 Relative TLB coverage evolution

14 Consequences Processes with very large working sets incur too many TLB misses "Significant performance penalty" Some machines have L2 caches bigger than their TLB coverage Can have TLB misses for data in L2 cache!

15 Solutions (I) Increase TLB size: Would increase TLB access time
Would slow down memory accesses Increase page sizes: Would increase internal fragmentation Poor utilization of main memory

16 Solutions (II) Use multiple page sizes:
Keep a relatively small "base" page size Say 4 KB Let them coexist with much larger page sizes Superpages Intel Nehalem solution

17 Hardware limitations (I)
Superpage sizes must be supported by hardware: 4 KB, 16 KB, 64 KB, 4 MB for UltraSPARC III 4 KB, 2 MB and 4 MB for Intel Nehalem Ten possible page sizes from 4 KB to 256 MB for Intel Itanium

18 Hardware limitations (II)
Superpages must be contiguous and properly aligned in both virtual and physical address spaces Single TLB entry for each superpage All its base pages must have Same protection attributes Same clean/dirty status Will cause problems
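The contiguity/alignment constraint is easy to state in code. A hypothetical helper (not from the paper) that checks whether a candidate mapping could be a valid superpage might look like:

```python
def valid_superpage(vaddr, paddr, size_bytes):
    """A superpage must be size-aligned in BOTH the virtual and the
    physical address space (and occupy size_bytes contiguously)."""
    return vaddr % size_bytes == 0 and paddr % size_bytes == 0

MB = 1024**2
assert valid_superpage(0x400000, 0x800000, 4 * MB)       # both 4 MB-aligned
assert not valid_superpage(0x401000, 0x800000, 4 * MB)   # virtual side misaligned
```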

19 ISSUES AND TRADE-OFFS

20 Allocation When we bring a page into main memory, we can
Put it anywhere in RAM Will need to relocate it to a suitable place when we merge it into a superpage Put it in a location that would let us "grow" a superpage around it: Reservation-based allocation Must pick a maximum size for the potential superpage
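Reservation-based allocation can be sketched in a few lines (a simplification: real systems also check that the reserved frames are actually free). On a fault, the OS reserves the whole aligned region that could one day become the superpage:

```python
def reservation_for(fault_page, superpage_pages):
    """On a page fault, reserve the aligned, superpage-sized region of
    base pages that contains the faulting page. superpage_pages is the
    chosen maximum superpage size, in base pages."""
    start = (fault_page // superpage_pages) * superpage_pages
    return range(start, start + superpage_pages)

# Fault on base page 10, maximum superpage of 8 base pages:
# the reservation covers the aligned run of pages 8..15.
r = reservation_for(10, 8)
assert list(r) == [8, 9, 10, 11, 12, 13, 14, 15]
```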

21 Fragmentation control
The OS must keep contiguous chunks of memory available at all times OS will break previous reservation commitments if the superpage is unlikely to materialize Must "treat contiguity as a potentially contended resource"

22 Promotion Once a sufficient number of base pages within a potential superpage have been allocated, the OS may elect to promote them into a superpage. This requires Updating PTEs for all base pages in the new superpage Bringing the missing base pages into main memory

23 Promotion Promotion can be incremental
Progressively larger and larger superpages [Figure: adjacent in-use base pages are merged into progressively larger superpages as free neighbors become populated]
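Incremental promotion can be sketched as follows (an illustrative simplification, assuming superpage sizes that are powers of the same radix): each time a page is populated, check the largest aligned region around it that is now fully populated.

```python
PAGE_SIZES = [1, 4, 16, 64]  # superpage sizes in base pages (hardware-dependent)

def promotable_size(populated, fault_page):
    """Return the largest superpage size (in base pages) whose aligned
    region around fault_page is fully populated - the candidate for
    incremental promotion."""
    best = 1
    for size in PAGE_SIZES[1:]:
        start = (fault_page // size) * size
        if all(p in populated for p in range(start, start + size)):
            best = size
    return best

populated = set(range(0, 4))                # base pages 0..3 are resident
assert promotable_size(populated, 2) == 4   # promote to a 4-page superpage
populated |= set(range(4, 16))              # more pages arrive over time
assert promotable_size(populated, 2) == 16  # later promote to 16 pages
```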

24 Demotion OS should disband or reduce the size of a superpage whenever some portions of it fall in disuse Main problem is that OS can only track accesses at the level of the superpage

25 Eviction Not that different from expelling individual base pages
Must flush out all base pages of any superpage containing dirty pages OS cannot ascertain which base pages remain clean

26 RELATED APPROACHES Many OS kernels use superpages Focus here is on application memory

27 Reservations Talluri and Hill: propose a reservation-based scheme
Reservations can be preempted Emphasis is on partial subblocks HP-UX and IRIX Create superpages at page fault time User must specify a preferred per-segment page size

28 Page relocation Relocation-based schemes
Let base pages reside any place in main memory Migrate these pages to a contiguous region of main memory once superpages appear "likely to be beneficial" Disadvantage: cost of copying base pages Advantage: "more robust to fragmentation"

29 Hardware support Two proposals
Having multiple valid bits in each TLB entry Would allow small superpages to contain missing base pages Partial subblocking (Talluri and Hill) Adding additional level of address translation in memory controller Would "eliminate the contiguity requirement for superpages" (Fang et al.)

30 DESIGN

31 Allocation Uses a reservation-based scheme for superpages
Assumes a preferred superpage size for a given range of addresses A buddy system to manage main memory Think of scheme used to manage block fragments in Unix FFS
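A minimal sketch of the buddy system mentioned above (illustrative only, and omitting the merge-on-free path): larger free blocks are split in halves until a block of the requested order is obtained, which is how the allocator keeps size-aligned contiguous runs available for superpages.

```python
class BuddyAllocator:
    """Toy buddy allocator over physical page frames."""
    def __init__(self, total_frames, max_order):
        self.max_order = max_order
        self.free = {order: [] for order in range(max_order + 1)}
        # Initially everything sits in blocks of the maximum order.
        self.free[max_order] = list(range(0, total_frames, 1 << max_order))

    def alloc(self, order):
        """Return the first frame of a 2**order-frame, 2**order-aligned run,
        or None if no contiguity of that size is available."""
        for o in range(order, self.max_order + 1):
            if self.free[o]:
                block = self.free[o].pop()
                # Split larger blocks down, returning each buddy to its list.
                while o > order:
                    o -= 1
                    self.free[o].append(block + (1 << o))
                return block
        return None

b = BuddyAllocator(total_frames=64, max_order=6)
f = b.alloc(3)          # an 8-frame, 8-aligned run for an 8-page superpage
assert f % 8 == 0
```

This is the same splitting idea used for block fragments in the Unix Fast File System, applied here to page frames.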

32 Preferred superpage size (I)
For fixed-size memory objects, pick largest aligned superpage that Contains the faulting base page Does not overlap with other superpages or tentative superpages Does not extend over the boundaries of the object

33 Preferred superpage size (II)
For dynamically-sized memory objects, pick the largest aligned superpage that Contains the faulting base page Does not overlap with other superpages or tentative superpages Does not exceed the current size of the object
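The selection rule on these two slides can be sketched directly (a simplification: the name `preferred_size` and the set-based overlap check are illustrative, not from the paper):

```python
def preferred_size(fault_page, object_pages, sizes, reserved):
    """Pick the largest aligned superpage that contains the faulting base
    page, stays within the object, and overlaps no existing reservation.
    All quantities are in base pages; `reserved` is a set of base pages."""
    for size in sorted(sizes, reverse=True):
        start = (fault_page // size) * size
        region = range(start, start + size)
        if start + size <= object_pages and not any(p in reserved for p in region):
            return start, size
    return fault_page, 1  # fall back to a single base page

# Object currently 10 pages long, no prior reservations, fault on page 5:
# a 16-page superpage would overrun the object, so a 4-page one is chosen.
assert preferred_size(5, 10, [1, 4, 16], set()) == (4, 4)
```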

34 Fragmentation control
Mostly managed by buddy allocator Helped by page replacement daemon Modified BSD daemon is made "contiguity-aware"

35 Promotion Use incremental promotion
Wait until superpage is fully populated Conservative approach

36 Demotion (I) Incremental demotion Required when
A base page of a superpage is expelled from main memory Protection attributes of some base pages are changed

37 Demotion (II) Speculative demotion
Could be done each time a superpage referenced bit is reset When memory becomes scarce Let system know which parts of a superpage are still in use

38 Handling dirty superpages (I)
Demote a superpage as soon as one of its base pages is modified Otherwise the whole superpage would have to be flushed out when it is expelled from main memory Because there is one single dirty bit per superpage

39 Handling dirty superpages (II)
A superpage has been modified The whole superpage is marked dirty We break up the superpage so that only the modified base page is dirty All other pages remain clean

40 Multi-list reservation scheme
Maintains separate lists for each superpage size supported by the hardware, except the largest Each list contains reserved frames that could still accommodate a superpage of that size Sorted by time of their most recent page frame allocation Oldest entries are preempted first
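A simplified stand-in for this scheme (the real lists are re-sorted by most-recent allocation; a plain FIFO per size approximates "oldest preempted first"):

```python
from collections import deque

class ReservationLists:
    """One queue per supported superpage size, except the largest.
    Each queue holds reservations that could still reach that size."""
    def __init__(self, sizes):
        self.lists = {s: deque() for s in sizes[:-1]}

    def enqueue(self, size, reservation):
        self.lists[size].append(reservation)

    def preempt(self, size):
        """Break the reservation whose most recent page-frame allocation
        is oldest (front of the queue in this FIFO approximation)."""
        return self.lists[size].popleft() if self.lists[size] else None

r = ReservationLists([4, 16, 64])       # sizes in base pages; 64 gets no list
r.enqueue(4, "res-A")
r.enqueue(4, "res-B")
assert r.preempt(4) == "res-A"          # oldest entry is preempted first
```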

41 Example A region of 8 page frames is reserved for a possible superpage Three frames are allocated, five are free Breaking the reservation will free space for A superpage with 4 base pages or Two superpages with two base pages each

42 Population maps One per memory object
Keep track of allocated pages within each object
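A population map can be sketched as a per-object set of resident base pages (the paper uses a more compact hierarchical structure; this flat version only shows the interface):

```python
class PopulationMap:
    """Tracks which base pages of one memory object are resident, so the
    OS can quickly decide whether a candidate superpage is fully populated."""
    def __init__(self):
        self.populated = set()

    def add(self, page):
        self.populated.add(page)

    def fully_populated(self, start, size):
        return all(p in self.populated for p in range(start, start + size))

pm = PopulationMap()
for p in range(8):
    pm.add(p)
assert pm.fully_populated(0, 8)        # ready to promote an 8-page superpage
assert not pm.fully_populated(0, 16)   # a 16-page superpage is not
```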

43 EVALUATION

44 Benchmarks Thirty-five representative programs running on an Alpha processor Four page sizes: 8 KB, 64 KB, 512 KB and 4 MB Fully associative TLB with 128 entries for code and 128 for data 512 MB of RAM Separate 64 KB code and 64 KB data L1 caches 4 MB unified L2 cache

45 Results (I) Eighteen out of 35 benchmarks showed improvements of over 5 percent Ten out of 35 showed improvements of over 25 percent A single application showed a degradation of 1.5 percent Allocator does not distinguish zeroed-out pages from other free pages

46 Results (II) Different applications benefit most from different superpage sizes Should let system choose among multiple page sizes Contiguity-aware page replacement daemon can maintain enough contiguous regions Huge penalty for not demoting dirty superpages Overheads are small

47 CONCLUSION It works and does not require any changes to existing hardware

