Real-Time Concepts for Embedded Systems Author: Qing Li with Caroline Yao ISBN: 1-57820-124-1 CMPBooks.


Real-Time Concepts for Embedded Systems Author: Qing Li with Caroline Yao ISBN: 1-57820-124-1 CMP Books

Chapter 13 Memory Management

Outline  13.1 Introduction  13.2 Dynamic Memory Allocation in Embedded Systems  13.3 Fixed-size Memory Management in Embedded Systems  13.4 Blocking vs. Non-blocking Memory Functions  13.5 Hardware Memory Management Unit (MMU)

13.1 Introduction  Embedded systems developers commonly implement custom memory-management facilities on top of what the underlying RTOS provides  Understanding memory management is therefore an important aspect of developing for embedded systems

Common Requirements  Regardless of the type of embedded system, the same requirements are placed on a memory management system: Minimal fragmentation Minimal management overhead Deterministic allocation time

13.2 Dynamic Memory Allocation in Embedded Systems  The program code, program data, and system stack occupy the physical memory after program initialization completes  The kernel uses the remaining physical memory for dynamic memory allocation – the heap

Memory Control Block  Maintains internal information for a heap: The starting address of the physical memory block used for dynamic memory allocation The size of this physical memory block An allocation table that indicates which memory areas are in use and which are free The size of each free region
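The fields listed above can be sketched as a C structure. The field names and types here are illustrative only; the book lists the information a control block keeps, not an actual layout:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative memory control block for one heap (hypothetical layout). */
typedef struct {
    uintptr_t start_addr;   /* start of the physical block used for the heap */
    size_t    total_size;   /* size of that physical memory block            */
    uint8_t  *alloc_table;  /* which memory areas are in use / free          */
    size_t   *free_sizes;   /* size of each free region                      */
} mem_control_block;
```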

Memory Fragmentation and Compaction  The heap is broken into small, fixed-size blocks Each block has a unit size that is a power of two  Internal fragmentation If malloc is called with a request for 100 bytes But the unit size is 32 bytes The malloc will allocate 4 units, i.e., 128 bytes  28 bytes of memory are wasted
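The unit-rounding arithmetic behind this example can be written out directly; `UNIT_SIZE` here is the hypothetical 32-byte unit from the example:

```c
#include <stddef.h>

#define UNIT_SIZE 32  /* hypothetical block-unit size; a power of two */

/* Round a request up to whole allocation units. */
size_t units_needed(size_t request) {
    return (request + UNIT_SIZE - 1) / UNIT_SIZE;
}

/* Bytes lost to internal fragmentation for one request. */
size_t internal_waste(size_t request) {
    return units_needed(request) * UNIT_SIZE - request;
}
```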

Memory Fragmentation and Compaction (Cont.)  The memory allocation table can be represented as a bitmap  Each bit represents a block unit
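A minimal sketch of such a bitmap, assuming one bit per block unit with 1 meaning "in use" (the polarity is a choice; the slides do not fix it):

```c
#include <stdint.h>

#define NUM_UNITS 64
static uint32_t alloc_map[NUM_UNITS / 32];  /* one bit per block unit */

void map_set(int unit)    { alloc_map[unit / 32] |=  (1u << (unit % 32)); }
void map_clear(int unit)  { alloc_map[unit / 32] &= ~(1u << (unit % 32)); }
int  map_in_use(int unit) { return (alloc_map[unit / 32] >> (unit % 32)) & 1; }
```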

Example 1: States of a Memory Allocation Map

Memory Fragmentation and Compaction (Cont.)  Another form of memory fragmentation: external fragmentation  For example, the free blocks at 0x10080 and 0x101C0 Cannot be used for any memory allocation request larger than 32 bytes

Memory Fragmentation and Compaction (Cont.)  Solution: compact the area adjacent to these two blocks Move the memory content from 0x100A0 to 0x101BF to the new range 0x10080 to 0x1019F  Effectively combines the two free blocks into one 64-byte block This process continues until all of the free blocks are combined into one large chunk
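The core of one compaction step is a single overlapping copy that slides a live region down over an adjacent free region, so the two free areas coalesce above it. A minimal sketch using `memmove` (offsets are parameters, not the figure's addresses):

```c
#include <string.h>
#include <stddef.h>

/* Slide the live region at heap[live_at .. live_at+live_len-1] down to
   heap[free_at], merging the free space that was below it with the free
   space above it. memmove handles the overlapping ranges safely. */
void compact_step(unsigned char *heap, size_t free_at,
                  size_t live_at, size_t live_len) {
    memmove(heap + free_at, heap + live_at, live_len);
}
```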

Example 2: Memory Allocation Map with Possible Fragmentation

Problems with Memory Compaction  Allowed if the tasks that own those memory blocks reference them using virtual addresses Not permitted if tasks hold physical addresses to the allocated memory blocks  Time-consuming  The tasks that currently hold ownership of those memory blocks are prevented from accessing their contents during compaction  Almost never done in practice in embedded designs

Requirements for an Efficient Memory Manager  An efficient memory manager needs to perform the following chores quickly: Determine whether a free block large enough to satisfy the allocation request exists (malloc) Update the internal management information (malloc and free) Determine whether the just-freed block can be combined with its neighboring free blocks to form a larger piece (free) The structure of the allocation table is the key to efficient memory management

An Example of malloc and free  We use an allocation array to implement the allocation map  Similar to the bitmap Each entry represents a corresponding fixed-size block of memory  However, the allocation array uses a different encoding scheme

An Example of malloc and free (Cont.)  Encoding scheme To indicate a range of contiguous free blocks  A positive number is placed in the first and last entry representing the range The number is equal to the number of free blocks in the range  For example: in the next slide, array[0] = array[11] = 12 To indicate a range of allocated blocks  Place a negative number in the first entry and a zero in the last entry The number is equal to -1 times the number of allocated blocks  For example: in the next slide, array[9]=-3, array[11]=0
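The encoding rules above can be captured in two small helpers over a static allocation array (function names are illustrative):

```c
#define MAP_SIZE 12
int array[MAP_SIZE];

/* Free range: a positive block count in both the first and last entry. */
void mark_free(int start, int count) {
    array[start] = count;
    array[start + count - 1] = count;
}

/* Allocated range: -1 * block count in the first entry, zero in the last. */
void mark_allocated(int start, int count) {
    array[start] = -count;
    array[start + count - 1] = 0;
}
```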

An Example of malloc and free Static array implementation of the allocation map

Finding Free Blocks Quickly  malloc() always allocates from the largest available range of free blocks However, the entries in the allocation array are not sorted by size Finding the largest range therefore always entails an end-to-end search  Thus, a second data structure is used to speed up the search for a free block The heap data structure

Finding Free Blocks Quickly (Cont.)  Heap: a data structure that is a complete binary tree with one property The value contained at a node is no smaller than the value in any of its child nodes  The sizes of free blocks within the allocation array are maintained using the heap data structure The largest free block is always at the top of the heap

Finding Free Blocks Quickly (Cont.)  However, in actual implementation, each node in the heap contains at least two pieces of information The size of a free range Its starting index in the allocation array  Heap implementation Linked list Static array, called the heap array. See next slide
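A minimal array-backed max-heap of free ranges, keyed on range size so the largest free range stays at the root. This is only a sketch of the insertion path; a full allocator would also need deletion and re-sorting of entries:

```c
/* A free range: its size and its starting index in the allocation array. */
typedef struct { int size; int start; } free_range;

#define HEAP_MAX 32
free_range heap[HEAP_MAX];
int heap_len = 0;

/* Insert a range and sift it up. The max-heap property (a parent's size is
   never smaller than a child's) keeps the largest free range at heap[0]. */
void heap_insert(free_range r) {
    int i = heap_len++;
    heap[i] = r;
    while (i > 0 && heap[(i - 1) / 2].size < heap[i].size) {
        free_range tmp = heap[i];
        heap[i] = heap[(i - 1) / 2];
        heap[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}
```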

Free Blocks in a Heap Arrangement

The malloc() Operation  Examine the heap to determine if a free block that is large enough for the allocation request exists.  If no such block exists, return an error to the caller.  Retrieve the starting allocation-array index of the free range from the top of the heap.  Update the allocation array  If the entire block is used to satisfy the allocation, update the heap by deleting the largest node. Otherwise update the size.  Rearrange the heap array
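The steps above can be sketched as follows. For brevity this version finds the largest free range with a linear scan of the allocation array; the slides' heap structure exists precisely to replace that end-to-end search:

```c
#define MAP_SIZE 12
int map[MAP_SIZE];

/* Allocate n blocks from the largest free range; returns the starting
   index of the allocation, or -1 if no free range is large enough. */
int alloc_blocks(int n) {
    int best = -1, best_size = 0;

    /* Step 1: find the largest free range (the heap makes this O(1)). */
    for (int i = 0; i < MAP_SIZE; ) {
        if (map[i] > 0) {          /* free range of map[i] blocks */
            if (map[i] > best_size) { best = i; best_size = map[i]; }
            i += map[i];
        } else if (map[i] < 0) {   /* allocated range of -map[i] blocks */
            i += -map[i];
        } else {
            break;                 /* uninitialized tail of the map */
        }
    }
    if (best_size < n) return -1;  /* step 2: no block is big enough */

    /* Steps 3-4: update the allocation array. */
    map[best] = -n;                /* allocated: -count in first entry ... */
    map[best + n - 1] = 0;         /* ... and zero in the last entry */
    if (best_size > n) {           /* re-mark the leftover as a free range */
        map[best + n] = best_size - n;
        map[best + best_size - 1] = best_size - n;
    }
    return best;
}
```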

The free Operation  The main operation of the free function To determine whether the block being freed can be merged with its neighbors  Assume index points to the block being freed. The merging rules are Check the value of array[index-1]  If the value is positive, the left neighbor can be merged Check the value of array[index+number of blocks]  If the value is positive, the right neighbor can be merged

The free Operation  Example 1: the block starting at index 3 is being freed Following rule 1:  array[3-1] = array[2] = 3 > 0, thus merge Following rule 2:  array[3+4] = array[7] = -3 < 0, no merge  Example 2: The block starting at index 7 is being freed  Following rules 1 and 2: no merge The block starting at index 3 is being freed  Following rules 1 and 2: merge with both neighbors
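The two merging rules reduce to small predicates over the allocation array, using the encoding described earlier (a sketch; function names are illustrative):

```c
/* Rule 1: a free range's last entry sits at array[index - 1];
   a positive value there means a free range ends just before this block. */
int can_merge_left(const int *array, int index) {
    return index > 0 && array[index - 1] > 0;
}

/* Rule 2: the next range's first entry sits just past this block's last
   unit; a positive value there means a free range starts right after it. */
int can_merge_right(const int *array, int map_size, int index, int nblocks) {
    int next = index + nblocks;
    return next < map_size && array[next] > 0;
}
```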

The free Operation

 Update the allocation array and merge neighboring blocks if possible.  If the newly freed block cannot be merged with any of its neighbors, insert a new entry into the heap array.  If the newly freed block can be merged with one of its neighbors The heap entry representing the neighboring block must be updated And the updated entry rearranged according to its new size  If the newly freed block can be merged with both of its neighbors The heap entry representing one of the neighboring blocks must be deleted from the heap The heap entry representing the other neighboring block must be updated and rearranged according to its new size

13.3 Fixed-Size Memory Management in Embedded Systems  Another approach to memory management uses the method of fixed-size memory pools  The available memory space is divided into variously sized memory pools For example, pools of 32-byte, 50-byte, and 128-byte blocks  Each memory-pool control structure maintains information such as The block size, total number of blocks, and number of free blocks

Fixed-Size Memory Management in Embedded Systems  Management based on memory pools

Fixed-Size Memory Management in Embedded Systems  Advantages More deterministic than the heap method (constant allocation time) Reduced internal fragmentation and high utilization for static embedded applications  Disadvantages Increased internal fragmentation per allocation in dynamic environments, since each request must be rounded up to the nearest pool block size
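A minimal fixed-size pool for one block size, with free blocks threaded onto a singly linked free list so both allocation and deallocation run in constant time (a sketch; the block and pool sizes are arbitrary):

```c
#include <stddef.h>

#define BLOCK_SIZE 32
#define NUM_BLOCKS 8

/* A block either holds user data or, while free, a link to the next
   free block - the classic free-list trick that needs no extra memory. */
typedef union block { union block *next; unsigned char data[BLOCK_SIZE]; } block;

static block pool[NUM_BLOCKS];
static block *free_list;

void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS - 1; i++) pool[i].next = &pool[i + 1];
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = pool;
}

void *pool_alloc(void) {            /* O(1): pop the head of the free list */
    block *b = free_list;
    if (b) free_list = b->next;
    return b;                       /* NULL when the pool is exhausted */
}

void pool_free(void *p) {           /* O(1): push back onto the free list */
    block *b = p;
    b->next = free_list;
    free_list = b;
}
```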

13.4 Blocking vs. Non-Blocking Memory Functions  The malloc and free functions discussed before do not allow the calling task to block and wait for memory to become available  However, in practice, a well-designed memory allocation function should allow allocation that blocks forever, blocks for a timeout period, or does not block at all

Blocking vs. Non-Blocking Memory Functions (Cont.)  A blocking memory allocation can be implemented using both a counting semaphore and a mutex lock Both are created for each memory pool and kept in its control structure The counting semaphore is initialized with the total number of available memory blocks at the creation of the memory pool

Blocking vs. Non-Blocking Memory Functions (Cont.)  The mutex lock is used to guarantee a task exclusive access to Both the free-blocks list and the control structure  The counting semaphore is used to acquire a memory block A successful acquisition of the counting semaphore reserves one of the available blocks in the pool

Implementing A Blocking Allocation Function: Using A Mutex and A Counting Semaphore

Blocking Allocation/Deallocation  Pseudo code for memory allocation Acquire(Counting_Semaphore) Lock(mutex) Retrieve the memory block from the pool Unlock(mutex)  Pseudo code for memory deallocation Lock(mutex) Release the memory block back into the pool Unlock(mutex) Release(Counting_Semaphore)
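The pseudocode above maps naturally onto POSIX primitives. A sketch, assuming a small pool guarded by a pthread mutex and a POSIX counting semaphore (function and variable names are invented for illustration):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define POOL_BLOCKS 4
#define BLOCK_SIZE  32

static char  storage[POOL_BLOCKS][BLOCK_SIZE];
static void *free_list[POOL_BLOCKS];   /* stack of free block pointers */
static int   free_top;

static sem_t blocks_avail;             /* counts blocks left in the pool */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

void bpool_init(void) {
    sem_init(&blocks_avail, 0, POOL_BLOCKS);
    for (free_top = 0; free_top < POOL_BLOCKS; free_top++)
        free_list[free_top] = storage[free_top];
}

/* Blocks until a memory block is available. */
void *bpool_alloc(void) {
    sem_wait(&blocks_avail);           /* reserve a block (may block)    */
    pthread_mutex_lock(&pool_lock);    /* exclusive access to the list   */
    void *b = free_list[--free_top];
    pthread_mutex_unlock(&pool_lock);
    return b;
}

void bpool_release(void *b) {
    pthread_mutex_lock(&pool_lock);
    free_list[free_top++] = b;
    pthread_mutex_unlock(&pool_lock);
    sem_post(&blocks_avail);           /* wake one waiting task, if any  */
}
```

Note the ordering: the semaphore is acquired before the mutex on allocation, and released after the mutex on deallocation, so a task never holds the mutex while blocked.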

Blocking vs. Non-Blocking Memory Functions (Cont.)  A task first tries to acquire the counting semaphore If no blocks are available, the task blocks on the counting semaphore  Once a task acquires the counting semaphore The task then locks the mutex and retrieves the block from the list

13.5 Hardware Memory Management Units  The memory management unit (MMU) provides several functions Translates a virtual address to a physical address for each memory access (a feature many commercial RTOSes do not support) Provides memory protection  If an MMU is enabled on an embedded system, the physical memory is typically divided into pages

Hardware Memory Management Units (Cont.)  Provides memory protection A set of attributes is associated with each memory page  Whether the page contains code or data  Whether the page is readable, writable, executable, or a combination of these  Whether the page can be accessed when the CPU is not in privileged execution mode, accessed only when the CPU is in privileged mode, or both All memory access goes through the MMU when it is enabled  Therefore, the hardware enforces memory access according to page attributes
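Page attributes of the kind described can be modeled as bit flags feeding a toy access check. The flag names and encoding here are invented; real MMU page-table formats are architecture-specific:

```c
/* Illustrative page attributes (hypothetical encoding). */
enum page_attr {
    PAGE_READ  = 1 << 0,
    PAGE_WRITE = 1 << 1,
    PAGE_EXEC  = 1 << 2,
    PAGE_USER  = 1 << 3    /* page may be accessed outside privileged mode */
};

/* Returns 1 if an access with the requested permissions is allowed. */
int access_allowed(unsigned attrs, unsigned requested, int privileged) {
    if (!privileged && !(attrs & PAGE_USER))
        return 0;                               /* user access, kernel page */
    return (attrs & requested) == requested;    /* all requested bits set   */
}
```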