Segmentation COMP 755

How do we support a large address space? With a 32-bit address space (4 GB in size), a typical program will use only megabytes of memory, yet a single base-and-bounds pair would still demand that the entire address space be resident in physical memory. This is just a waste of space. To solve this problem, an idea was born, and it is called segmentation.

Segmentation: Generalized Base/Bounds Instead of having just one base and bounds pair in our MMU, why not have a base and bounds pair per logical segment of the address space? We have three logically-different segments: code, stack, and heap. What this allows the OS to do is to place each one of those segments in different parts of physical memory, and thus avoid filling physical memory with unused virtual address space.

Base and Bounds For example, see Figure 16.2 (page 3); there you see a 64KB physical memory with those three segments in it (and 16KB reserved for the OS). The hardware structure for our MMU would then be a set of base and bounds registers, one pair per segment.
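Below is a minimal sketch, in C, of what such per-segment MMU state might look like, using the base and bounds values from the running example (code at 32KB with a 2KB bound, heap at 34KB, stack at 28KB with a 2KB bound); the heap size shown is an illustrative assumption, and the struct and names are not a real hardware interface.

#include <stdint.h>

/* One base/bounds pair per logical segment (sketch, not real hardware). */
struct seg_reg {
    uint32_t base;   /* physical address where the segment starts */
    uint32_t bounds; /* size of the segment in bytes */
};

#define KB(x) ((uint32_t)(x) * 1024u)

struct seg_reg segments[3] = {
    { KB(32), KB(2) },  /* code  */
    { KB(34), KB(3) },  /* heap  (size assumed for illustration) */
    { KB(28), KB(2) },  /* stack */
};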

Example Translation Assume a reference is made to virtual address 100 (which is in the code segment). When the reference takes place (say, on an instruction fetch), the hardware will add the base value to the offset into this segment (100 in this case) to arrive at the desired physical address: 100 + 32KB, or 32868. It will then check that the address is within bounds (100 is less than 2KB), find that it is, and issue the reference to physical memory address 32868.
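As a rough sketch of that check-and-translate step (the function name and error convention are assumptions, not the hardware's actual interface), the arithmetic looks like this:

#include <stdint.h>
#include <stdio.h>

/* Check the offset against the segment's bounds, then add the base.
 * Returns -1 on a bounds violation (where real hardware would fault). */
int64_t translate(uint32_t base, uint32_t bounds, uint32_t offset)
{
    if (offset >= bounds)
        return -1;                 /* out of bounds: protection fault */
    return (int64_t)base + offset;
}

int main(void)
{
    /* Code segment from the example: base 32KB, bounds 2KB, offset 100. */
    printf("%lld\n", (long long)translate(32 * 1024, 2 * 1024, 100)); /* prints 32868 */
    return 0;
}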

SEGMENTATION FAULT The term segmentation fault (or violation) arises from a memory access to an illegal address on a segmented machine. Humorously, the term persists even on machines with no support for segmentation at all. Or not so humorously, if you can’t figure out why your code keeps faulting. int array[10]; … array[10] = 100; // Segmentation fault!

Address in heap example Suppose we attempt to access virtual address 4200, which lies in the heap. If we just add the virtual address 4200 to the base of the heap (34KB), we get a physical address of 39016, which is not the correct physical address. What we need to do first is extract the offset into the heap, i.e., which byte(s) in this segment the address refers to. Because the heap starts at virtual address 4KB (4096), the offset of 4200 is actually 4200 minus 4096, or 104. We then take this offset (104) and add it to the heap's base register (34KB) to get the desired result: 34920.

Which Segment Are We Referring To? The hardware uses segment registers during translation. How does it know the offset into a segment, and to which segment an address refers? One way to do this is the explicit approach, which uses the top few bits of the virtual address to select the segment. In our example, if the top two bits are 00, the hardware knows the virtual address is in the code segment, and thus uses the code base and bounds pair to relocate the address to the correct physical location. If the top two bits are 01, the hardware knows the address is in the heap, and thus uses the heap base and bounds, as shown in the sketch below.
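Here is a hedged sketch of that explicit approach for the example's 14-bit virtual addresses (top two bits select the segment, bottom 12 bits are the offset); the mask and shift constants simply follow from that layout, and the heap base is the 34KB value used earlier.

#include <stdint.h>
#include <stdio.h>

#define SEG_MASK    0x3000  /* top two bits of a 14-bit virtual address */
#define SEG_SHIFT   12
#define OFFSET_MASK 0x0FFF  /* bottom 12 bits: offset within the segment */

int main(void)
{
    uint32_t vaddr   = 4200;                            /* heap address from the example */
    uint32_t segment = (vaddr & SEG_MASK) >> SEG_SHIFT; /* 01 -> heap */
    uint32_t offset  = vaddr & OFFSET_MASK;             /* 0x068 = 104 */

    /* Heap base from the example is 34KB; the bounds check is omitted here. */
    uint32_t phys = 34 * 1024 + offset;                 /* 34920 */
    printf("segment=%u offset=%u phys=%u\n", segment, offset, phys);
    return 0;
}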

Heap example again Let's take the virtual address from above (4200) and translate it, just to make sure this is clear. The virtual address 4200, in binary form, is 01 0000 0110 1000. The top two bits (01) tell the hardware which segment we are referring to. The bottom 12 bits are the offset into the segment: 0000 0110 1000, or hex 0x068, or 104 in decimal.

Implicit approach In the implicit approach, the hardware determines the segment by noticing how the address was formed. For example, if the address was generated from the program counter (i.e., it was an instruction fetch), then the address is within the code segment; if the address is based off of the stack or base pointer, it must be in the stack segment; any other address must be in the heap.

Stack The stack has been relocated to physical address 28KB (see Figure 16.2), but with one critical difference: it grows backwards. In physical memory, it starts at 28KB and grows back to 26KB, corresponding to virtual addresses 16KB down to 14KB; translation must therefore proceed differently. The first thing we need is a little extra hardware support. Instead of just base and bounds values, the hardware also needs to know which way the segment grows (a bit, for example, that is set to 1 when the segment grows in the positive direction, and 0 for negative).

Stack virtual address Given virtual address 15KB, which should map to physical address 27KB, or, in binary form, 11 1100 0000 0000 (hex 0x3C00). The hardware uses the top two bits (11) to designate the segment, but then we are left with an offset of 3KB. To obtain the correct negative offset, we subtract the maximum segment size from 3KB: 3KB - 4KB = -1KB. We then add the negative offset (-1KB) to the base: 28KB - 1KB = 27KB. The bounds check is performed by ensuring that the absolute value of the negative offset is less than the segment's size.
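A small sketch of that negative-offset arithmetic, using the values from this example (maximum segment size 4KB, stack base 28KB); the function name and layout are illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

#define MAX_SEG_SIZE (4 * 1024)   /* maximum segment size in this example */

/* Translate an offset within a segment that grows downward (negative). */
uint32_t translate_negative(uint32_t base, uint32_t offset_in_segment)
{
    int32_t neg_offset = (int32_t)offset_in_segment - MAX_SEG_SIZE; /* 3KB - 4KB = -1KB */
    /* Bounds check (not shown): |neg_offset| must stay within the
     * segment's current size, 2KB for the stack in this example. */
    return (uint32_t)((int32_t)base + neg_offset);                  /* 28KB - 1KB = 27KB */
}

int main(void)
{
    /* Virtual address 15KB: the top two bits select the stack, leaving a 3KB offset. */
    printf("%u\n", translate_negative(28 * 1024, 3 * 1024)); /* prints 27648 (27KB) */
    return 0;
}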

Support for Sharing As support for segmentation grew, system designers soon realized that they could realize new types of efficiencies with a little more hardware support. Specifically, to save memory, sometimes it is useful to share certain memory segments between address spaces. In particular, code sharing is common and still in use in systems today.

Protection bits The hardware, in the form of protection bits, provides basic support for sharing by adding a few extra bits per segment, indicating whether or not a program can read or write a segment, or perhaps execute code that lies within the segment.
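As a rough illustration (the bit names and encoding here are assumptions, not a real MMU's layout), a per-segment permission check might look like this:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative protection bits; real hardware encodings differ. */
#define SEG_READ    0x1u
#define SEG_WRITE   0x2u
#define SEG_EXECUTE 0x4u

/* Before completing a translation, the hardware would also verify that
 * the requested access is permitted by the segment's protection bits. */
bool access_allowed(uint8_t seg_prot, uint8_t requested)
{
    return (seg_prot & requested) == requested;
}

A shared code segment would typically be marked read-execute, so access_allowed(SEG_READ | SEG_EXECUTE, SEG_WRITE) returns false, which is what lets multiple processes safely map the same physical code.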

OS Support: two issues The first issue: what should the OS do on a context switch? The segment registers must be saved and restored. Each process has its own virtual address space, and the OS must make sure to set up these registers correctly before letting the process run again. The second, and more important, issue is managing free space in physical memory. When a new address space is created, the OS has to be able to find space in physical memory for its segments.

Memory Allocation The general problem that arises is that physical memory quickly becomes full of little holes of free space, making it difficult to allocate new segments or to grow existing ones. We call this problem external fragmentation; see Figure 16.6 (left). One response is to compact physical memory, but compaction is expensive: copying segments is memory-intensive and generally uses a fair amount of processor time, so it is a poor general solution.

Summary Segmentation solves a number of problems, and helps us build a more effective virtualization of memory. Beyond just dynamic relocation, segmentation can better support sparse address spaces, by avoiding the huge potential waste of memory between logical segments of the address space. It is also fast, as doing the arithmetic segmentation requires is easy and well-suited to hardware; the overheads of translation are minimal. A fringe benefit arises too: code sharing.

Summary Continued Allocating variable-sized segments in memory does lead to some problems. The first is external fragmentation. The second is that segmentation still isn’t flexible enough to support our fully generalized, sparse address space.