ECE 391 Exam 2 HKN Review Session

ECE 391 Exam 2 HKN Review Session By: Kevin Guo, Mrigank Bhardwaj, and Evan Lissoos

Virtual Memory
In x86, virtual memory is implemented in two ways:
- Segmentation
  - Linux doesn't really use segmentation. Since it can't be turned off in x86, all segments are set to span the entire memory space (0-4GB) for Kernel Code, Kernel Data, User Code, and User Data: the "flat memory model".
- Paging
  - Key to how Linux manages memory, from context switching to memory allocation.
  - x86 supports 4KB and 4MB pages.
Virtual memory translation is handled in hardware by the Memory Management Unit (MMU).

Paging
- Translation Lookaside Buffer (TLB)
  - Small, fast cache that stores the translations for pages
  - Located in the MMU
- Page Directory Base Register (PDBR), a.k.a. Control Register 3 (CR3)
  - Pointer to the base of the page directory
  - Flushes the TLB when written to
- Page directory / page tables
  - Tables that hold pointers to page tables or to pages
  - Entries include extra bits: User/Supervisor, Read/Write, Global flag, Page Size (a minimal setup sketch follows)
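As a rough illustration (not taken from the MP3 code), here is a sketch of what setting up one 4MB page directory entry and loading CR3 might look like. The bit positions come from the IA-32 manuals; the names (map_4mb_page, load_cr3, page_directory) are hypothetical, and enabling paging itself (CR0.PG) and 4MB pages (CR4.PSE) is assumed to happen elsewhere.

```c
#include <stdint.h>

#define NUM_PDE        1024        /* 1024 entries x 4MB = 4GB address space   */
#define PDE_PRESENT    0x001       /* bit 0: entry is valid                    */
#define PDE_RW         0x002       /* bit 1: writable                          */
#define PDE_USER       0x004       /* bit 2: user-accessible (else supervisor) */
#define PDE_SIZE_4MB   0x080       /* bit 7: this entry maps a 4MB page        */
#define PDE_GLOBAL     0x100       /* bit 8: not flushed from TLB on CR3 write */

/* Page directory must be 4KB-aligned; unused entries stay zero (not present). */
static uint32_t page_directory[NUM_PDE] __attribute__((aligned(4096)));

/* Map one 4MB page: virtual address vaddr -> physical address paddr. */
static void map_4mb_page(uint32_t vaddr, uint32_t paddr, uint32_t flags)
{
    uint32_t index = vaddr >> 22;                  /* top 10 bits pick the PDE */
    page_directory[index] = (paddr & 0xFFC00000) | PDE_SIZE_4MB | flags;
}

/* Load CR3 with the page directory base; this flushes the TLB (except
   entries marked global, if CR4.PGE is set). Assumes the directory's
   virtual and physical addresses are the same here. */
static void load_cr3(uint32_t *pd)
{
    asm volatile ("movl %0, %%cr3" : : "r"(pd) : "memory");
}

/* Example: kernel code/data at 4MB-8MB, identity-mapped, supervisor-only. */
void setup_kernel_page(void)
{
    map_4mb_page(0x400000, 0x400000, PDE_PRESENT | PDE_RW | PDE_GLOBAL);
    load_cr3(page_directory);
}
```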

Paging (continued)

Paging (continued)
Pros and Cons of Different Page Sizes
- Larger pages
  - Faster table walk
  - More TLB hits (fewer misses)
  - More fragmentation than smaller pages
- Smaller pages
  - Slower table walk
  - Fewer TLB hits (more misses)
  - Less fragmentation than larger pages
Both suffer from internal and external memory fragmentation (a worked address-split example follows):
- Internal fragmentation: wasted space from rounding up (e.g. allocating a 4KB page when only 3KB is needed)
- External fragmentation: unevenly scattered free pages waste space and make it difficult to allocate a page of the appropriate size (e.g. a 4MB region that is free except for a single in-use 4KB page prevents a 4MB page from being allocated there)
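To make the "faster table walk" point concrete, here is a small sketch (not from the slides) of how a 32-bit linear address is split in the two x86 cases: 4KB pages use a two-level walk (10-bit directory index, 10-bit table index, 12-bit offset), while 4MB pages use a single-level walk (10-bit directory index, 22-bit offset). The example address is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vaddr = 0x08049ABC;   /* an arbitrary example linear address */

    /* 4KB pages: two-level walk -> directory index, table index, offset */
    uint32_t pde_idx_4k = (vaddr >> 22) & 0x3FF;   /* bits 31..22 */
    uint32_t pte_idx_4k = (vaddr >> 12) & 0x3FF;   /* bits 21..12 */
    uint32_t off_4k     =  vaddr        & 0xFFF;   /* bits 11..0  */

    /* 4MB pages: single-level walk -> directory index, offset */
    uint32_t pde_idx_4m = (vaddr >> 22) & 0x3FF;   /* bits 31..22 */
    uint32_t off_4m     =  vaddr & 0x3FFFFF;       /* bits 21..0  */

    printf("4KB: PDE %u, PTE %u, offset 0x%03X\n", pde_idx_4k, pte_idx_4k, off_4k);
    printf("4MB: PDE %u, offset 0x%06X\n", pde_idx_4m, off_4m);
    return 0;
}
```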

Paging (continued)
A few other things to remember:
- The MP3 kernel maps kernel code to 0x400000-0x800000 (4MB-8MB), with the same virtual and physical address.
- Video memory is a single 4KB page (a mapping sketch follows).
- Fewer levels => faster translation (on a TLB miss).
- Page faults
  - Minor: page is in memory, but marked as not present
  - Major: page is not in memory
  - Invalid: address is not part of the memory space, or the access violates permissions (privilege/write)
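A hedged sketch of mapping that single 4KB video memory page: this assumes text-mode VGA memory at physical 0xB8000, identity-mapped through the page table covering virtual 0-4MB; the names (page_table0, map_video_memory) are hypothetical, and hooking the table into the page directory and flushing the TLB are not shown.

```c
#include <stdint.h>

#define PTE_PRESENT  0x001
#define PTE_RW       0x002
#define PTE_USER     0x004
#define VIDEO_MEM    0xB8000       /* physical address of VGA text memory */

/* First page table covers virtual 0-4MB; must be 4KB-aligned. */
static uint32_t page_table0[1024] __attribute__((aligned(4096)));

/* Identity-map the single 4KB video memory page (supervisor-only here). */
void map_video_memory(void)
{
    uint32_t index = (VIDEO_MEM >> 12) & 0x3FF;        /* = 0xB8 = 184 */
    page_table0[index] = (VIDEO_MEM & 0xFFFFF000) | PTE_PRESENT | PTE_RW;
    /* The page directory entry for virtual 0-4MB must point at
       page_table0 (not shown), and the TLB must be flushed afterwards. */
}
```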

ModeX
What the heck is this? And why is it so convoluted and weird?
1. The Build Buffer: a system to represent 2D image data in memory and allow for window scrolling within a larger image with few writes/deletes.
- We "reverse" the planes so that they don't collide with each other when shifting. (We could also add spaces between all planes.)
- We can shift around the relative positions of these planes in the build buffer, so we don't need to worry about the size.
- To determine which plane you write to, you set a mask. There are 4 planes behind each single memory address (i.e. each video memory address = 4 pixels). A sketch of setting the plane mask follows.
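A minimal sketch of setting that plane write mask, based on the standard VGA Sequencer Map Mask register (index 0x02 at ports 0x3C4/0x3C5). The function names (set_write_mask, write_pixel) are hypothetical, not the MP2 macros, and the outb helper is defined locally with the (data, port) argument order used in the course.

```c
#include <stdint.h>

/* Minimal port I/O helper (the course library provides an equivalent). */
static inline void outb(uint8_t data, uint16_t port)
{
    asm volatile ("outb %0, %1" : : "a"(data), "Nd"(port));
}

#define SEQ_ADDR_PORT   0x3C4    /* VGA Sequencer address port */
#define SEQ_DATA_PORT   0x3C5    /* VGA Sequencer data port    */
#define SEQ_MAP_MASK    0x02     /* Map Mask register index    */

/* Select which of the 4 planes subsequent writes to video memory hit.
   mask is a 4-bit value: bit 0 = plane 0, ..., bit 3 = plane 3. */
static void set_write_mask(uint8_t mask)
{
    outb(SEQ_MAP_MASK, SEQ_ADDR_PORT);      /* select the Map Mask register */
    outb(mask & 0x0F, SEQ_DATA_PORT);       /* enable the requested planes  */
}

/* Example: in ModeX a pixel at (x, y) lives in plane (x & 3), at byte
   offset y * (320 / 4) + (x >> 2) within video memory. */
void write_pixel(volatile uint8_t *vmem, int x, int y, uint8_t color)
{
    set_write_mask(1 << (x & 3));
    vmem[y * 80 + (x >> 2)] = color;         /* 320 / 4 = 80 bytes per row */
}
```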

ModeX
2. Map from logical view -> video screen memory: just a basic cyclic shift. Nothing fancy here.

ModeX
3. Palette
- Each 'pixel' in ModeX takes up a byte of memory, corresponding to an entry in a 256-color palette. This saves memory while still allowing a wide range of colors, letting us pick exactly which colors we want (or change the palette to easily recolor existing pixels).
- Each palette entry is 18 bits (6 bits each of red, green, and blue), which gives a wide RGB range. A sketch of programming a palette entry follows.
4. Double-Buffering
- Not actually used in your MP (I think?), but basically you write to two different regions and alternate which region you actually display.
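A small sketch of writing one palette entry through the standard VGA DAC ports (0x3C8 write index, 0x3C9 data); the function name is hypothetical, and the outb helper uses the (data, port) argument order from the course library.

```c
#include <stdint.h>

/* Minimal port I/O helper (the course library provides an equivalent). */
static inline void outb(uint8_t data, uint16_t port)
{
    asm volatile ("outb %0, %1" : : "a"(data), "Nd"(port));
}

#define DAC_WRITE_INDEX  0x3C8   /* VGA DAC: palette index to start writing */
#define DAC_DATA         0x3C9   /* VGA DAC: 3 successive writes = R, G, B  */

/* Program one 256-color palette entry. Each channel is 6 bits (0-63),
   which is where the "18 bits per entry" comes from. */
static void set_palette_entry(uint8_t index, uint8_t r, uint8_t g, uint8_t b)
{
    outb(index, DAC_WRITE_INDEX);
    outb(r & 0x3F, DAC_DATA);
    outb(g & 0x3F, DAC_DATA);
    outb(b & 0x3F, DAC_DATA);
}
```

For example, set_palette_entry(0x20, 63, 0, 0) would make palette index 0x20 pure red; every pixel byte holding 0x20 changes color immediately, with no writes to video memory.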

R T (D) C
The RTC is a clock that generates interrupts at a configurable frequency. Most of dealing with the RTC is just knowing what the right code to copy from OSDev is, to initialize the RTC and to change its frequency. Remember, the outb/inb calls there use a different convention, so the ordering of the arguments is flipped. https://wiki.osdev.org/RTC
There are alternatives to the RTC, like the Programmable Interval Timer (PIT), which has higher interrupt priority and can generate interrupts faster. You'll use that later.

R T (D) C Initialization:
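The slide's code did not survive the transcript; here is a hedged sketch of enabling periodic RTC interrupts, following the OSDev RTC page linked above. Unmasking IRQ 8 on the PIC and installing the IDT entry are not shown, and the outb/inb helpers use the course's (data, port) convention.

```c
#include <stdint.h>

/* Minimal port I/O helpers (the course library provides equivalents). */
static inline void outb(uint8_t data, uint16_t port)
{
    asm volatile ("outb %0, %1" : : "a"(data), "Nd"(port));
}
static inline uint8_t inb(uint16_t port)
{
    uint8_t data;
    asm volatile ("inb %1, %0" : "=a"(data) : "Nd"(port));
    return data;
}

#define RTC_INDEX_PORT  0x70   /* select RTC register (bit 7 also masks NMI)  */
#define RTC_DATA_PORT   0x71   /* read/write the selected register            */
#define RTC_REG_B       0x8B   /* register B, with NMI disabled (0x80 | 0x0B) */

/* Enable periodic interrupts from the RTC (IRQ 8).
   Interrupts should be disabled around this sequence. */
void rtc_init(void)
{
    uint8_t prev;

    outb(RTC_REG_B, RTC_INDEX_PORT);   /* select register B              */
    prev = inb(RTC_DATA_PORT);         /* read its current value         */
    outb(RTC_REG_B, RTC_INDEX_PORT);   /* reading resets the index, so   */
                                       /* select register B again        */
    outb(prev | 0x40, RTC_DATA_PORT);  /* set bit 6: periodic interrupts */
}
```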

R T (D) C Frequency Change: Handling Interrupts:
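Again, the slide's code images are missing; a hedged sketch of both operations, following the OSDev page: the frequency is set through the low 4 bits of register A, and the interrupt handler must read register C or the RTC will never fire again. Sending EOI to the PIC is not shown, and the names are hypothetical.

```c
#include <stdint.h>

/* Minimal port I/O helpers (the course library provides equivalents). */
static inline void outb(uint8_t data, uint16_t port)
{
    asm volatile ("outb %0, %1" : : "a"(data), "Nd"(port));
}
static inline uint8_t inb(uint16_t port)
{
    uint8_t data;
    asm volatile ("inb %1, %0" : "=a"(data) : "Nd"(port));
    return data;
}

#define RTC_INDEX_PORT  0x70
#define RTC_DATA_PORT   0x71
#define RTC_REG_A       0x8A   /* register A (NMI disabled) */
#define RTC_REG_C       0x0C   /* register C                */

/* Set the periodic interrupt rate. Valid rates are 3..15;
   frequency = 32768 >> (rate - 1), so rate 6 => 1024 Hz, rate 15 => 2 Hz. */
void rtc_set_rate(uint8_t rate)
{
    uint8_t prev;

    outb(RTC_REG_A, RTC_INDEX_PORT);
    prev = inb(RTC_DATA_PORT);
    outb(RTC_REG_A, RTC_INDEX_PORT);
    outb((prev & 0xF0) | (rate & 0x0F), RTC_DATA_PORT);  /* keep upper bits */
}

/* Interrupt handler body: register C must be read, or the RTC will not
   raise another interrupt. */
void rtc_handle_interrupt(void)
{
    outb(RTC_REG_C, RTC_INDEX_PORT);
    (void)inb(RTC_DATA_PORT);          /* throw away the contents */
}
```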

File System
Inode: information about a file that the filesystem needs, e.g. type, access permissions, owner, etc.
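As a rough illustration of what those fields look like, here is a sketch of a classic Unix-style inode; the field names and sizes are illustrative only (real filesystems such as ext2 differ, and the MP3 read-only filesystem's inode is much simpler).

```c
#include <stdint.h>
#include <time.h>

/* A rough, illustrative Unix-style inode; names are hypothetical. */
struct inode {
    uint16_t mode;          /* file type and access permissions (rwx bits) */
    uint16_t uid;           /* owner user id                               */
    uint16_t gid;           /* owner group id                              */
    uint32_t size;          /* file length in bytes                        */
    uint16_t link_count;    /* number of directory entries pointing here   */
    time_t   atime, mtime, ctime;   /* access / modify / change times      */
    uint32_t direct[12];    /* data block numbers                          */
    uint32_t indirect;      /* block holding additional block numbers      */
};
/* Note: the file's name is NOT in the inode; names live in directory
   entries, which map a name to an inode number. */
```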

Tux Controller Example Question: go to Practice Exam 2015.

System Calls
System calls are how user-level processes get the kernel to do things for them. We don't let user-level processes do things like access hardware, access the hard drive, or touch critical kernel state, because we only trust the kernel to do that.
The OS uses rings of protection: code at a lower privilege level can't access resources reserved for a higher privilege level. Privilege is also enforced through paging (the User/Supervisor bit in page directory/table entries). In your OS, you only have ring 3 (user level) and ring 0 (kernel level), and system calls are what let you jump between these levels.
If your untrustworthy friend wanted to borrow some money, would you let them walk into your safe and trust them to take the exact amount? NO! You'd take out the exact money and bring it to them. That's how system calls work: we can never trust those sneaky user-level processes to do just what they say they'll do, so we do it for them.

System Calls
You invoke a system call by executing int 0x80 (IDT vector 0x80), passing the system call number in EAX and the arguments in other registers, to tell the kernel which specific call you want executed. The common system call linkage pushes the registers onto the stack, then jumps to the actual handler for the desired call. Now you're in kernel space! The kernel looks at your request, decides whether to handle it, and returns. The system call passes its return value back in EAX. A minimal invocation sketch follows.
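A minimal user-space sketch of that invocation, assuming the usual Linux-style int 0x80 register convention (EAX = call number, EBX/ECX/EDX = first three arguments, return value in EAX). The call number 4 for write and the helper name syscall3 are hypothetical, not taken from the slides.

```c
#include <stdint.h>

#define SYS_WRITE  4   /* hypothetical system call number */

/* Issue a three-argument system call via int 0x80. */
static int32_t syscall3(int32_t num, int32_t a, int32_t b, int32_t c)
{
    int32_t ret;
    asm volatile ("int $0x80"
                  : "=a"(ret)                        /* return value in EAX   */
                  : "a"(num), "b"(a), "c"(b), "d"(c) /* number + 3 arguments  */
                  : "memory", "cc");
    return ret;
}

int main(void)
{
    const char msg[] = "hello\n";
    /* e.g. write(fd = 1, buf, nbytes); returns bytes written or -1 */
    return syscall3(SYS_WRITE, 1, (int32_t)(uintptr_t)msg, sizeof(msg) - 1);
}
```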

Questions? Anything we missed?