Mode, space, and context: the basics Jeff Chase Duke University

mem.c (OSTEP)

int main(int argc, char *argv[]) {
    int *p;
    p = malloc(sizeof(int));
    *p = atoi(argv[1]);
    while (1) {
        Spin(1);
        *p = *p + 1;
        printf("(pid:%d) value of p: %d\n", getpid(), *p);
    }
}

chase$ cc -o mem mem.c
chase$ ./mem 21 &
chase$ ./mem 42 &
(pid:5587) value of p: 22
(pid:5587) value of p: 23
(pid:5587) value of p: 24
(pid:5588) value of p: 43
(pid:5588) value of p: 44
(pid:5587) value of p: 25
(pid:5587) value of p: 26
…

[Figure: two processes, pid 5587 and pid 5588, each with its own data segment holding its own private copy of p.]
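
For reference, a self-contained variant of mem.c that compiles without OSTEP's helper library; replacing Spin(1) with sleep(1) is my assumption about its behavior (delay roughly one second), and the argument check is an addition of mine, not part of the slide.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: mem <value>\n");   /* added check, not in the slide */
        exit(1);
    }
    int *p = malloc(sizeof(int));      /* allocate an integer on the heap */
    *p = atoi(argv[1]);
    while (1) {
        sleep(1);                      /* stand-in for OSTEP's Spin(1) */
        *p = *p + 1;
        /* each process prints its own private copy of *p */
        printf("(pid:%d) value of p: %d\n", (int)getpid(), *p);
    }
}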

[Figure: two identical virtual address space layouts spanning 0x0 to 0x7fffffff, each with segments for Text (code), Static data, Dynamic data (heap/BSS), Stack, and a Reserved region.]

Operating Systems: The Classical View. Programs run as independent processes. Each process has a private virtual address space and one or more threads. Threads enter the kernel for OS services through protected system calls; the kernel can also make upcalls (e.g., signals) into processes. The protected OS kernel mediates access to shared resources. The kernel code and data are protected from untrusted processes.

Processes: A Closer Look. [Figure: a process descriptor (PCB) holding the user ID, process ID, parent PID, sibling links, children, and resources, plus the process virtual address space and a thread with its stack.] Each process has a thread bound to the VAS. The thread has a stack addressable through the VAS. The kernel can suspend/restart the thread wherever and whenever it wants. The OS maintains some state for each process in the kernel’s internal data structures: a file descriptor table, links to maintain the process tree, and a place to store the exit status. The address space is a private name space for a set of memory segments used by the process. The kernel must initialize the process memory for the program to run.
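
As a concrete illustration of the per-process kernel state described above (a sketch of mine, not from the slides): a minimal fork/wait program in which the kernel links the child into the process tree and later reports the child's stored exit status to the parent.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* kernel creates a child PCB; its parent PID is ours */
    if (pid == 0) {
        printf("child %d, parent %d\n", (int)getpid(), (int)getppid());
        exit(42);                        /* exit status is saved in the child's kernel state */
    }
    int status = 0;
    waitpid(pid, &status, 0);            /* parent collects the stored exit status */
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}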

Processes and threads: the rest of the story. [Figure: a process virtual address space with the main thread and its stack, plus optional other threads.] Each process has a thread bound to the VAS, with stacks (user and kernel). If we say a process does something, we really mean its thread does it. The kernel can suspend/restart the thread wherever and whenever it wants. Each process has a virtual address space (VAS): a private name space for the virtual memory it uses. The VAS is both a “sandbox” and a “lockbox”: it limits what the process can see/do, and protects its data from others. From now on, we suppose that a process could have additional threads. We are not concerned with how to implement them, but we presume that they can all make system calls and block independently.

A process can have multiple threads

volatile int counter = 0;
int loops;

void *worker(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        counter++;
    }
    pthread_exit(NULL);
}

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: threads <loops>\n");
        exit(1);
    }
    loops = atoi(argv[1]);
    pthread_t p1, p2;
    printf("Initial value : %d\n", counter);
    pthread_create(&p1, NULL, worker, NULL);
    pthread_create(&p2, NULL, worker, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    printf("Final value : %d\n", counter);
    return 0;
}

Much more on this later!

Key Concepts for Classical OS
kernel: the software component that controls the hardware directly, and implements the core privileged OS functions. Modern hardware has features that allow the OS kernel to protect itself from untrusted user code.
thread: an executing instruction path and its CPU register state.
virtual address space: an execution context for thread(s) defining a name space for executing instructions to address data and code.
process: an execution of a program, consisting of a virtual address space, one or more threads, and some OS kernel state.

The theater analogy: running a program is like performing a play. [Figure: the threads are the actors, the program is the script, and the address space (virtual memory) is the stage. Credit: lpcox]

The sheep analogy. [Figure: thread; code and data; address space.]

CPU cores. [Figure: Core #1 and Core #2.] The machine has a bank of CPU cores for threads to run on. The OS allocates cores to threads. Cores are hardware. They go where the driver tells them. Switch drivers any time.

Threads drive cores

What was the point of that whole thing with the electric sheep actors? A process is a running program. A running program (a process) has at least one thread (“main”), but it may (optionally) create other threads. The threads execute the program (“perform the script”). The threads execute on the “stage” of the process virtual memory, with access to all of the program’s code and data. A thread can access any virtual memory in its process, but is contained by the “fence” of the process virtual address space. Threads run on cores: a thread’s core executes instructions for it. Sometimes threads idle to wait for a free core, or for some event. Sometimes cores idle to wait for a ready thread to run. The operating system kernel shares/multiplexes the computer’s memory and cores among the virtual memories and threads.

A thread running in a process VAS. [Figure: a CPU with registers R0..Rn, PC, and SP; the PC points into your program’s code and the SP points into the stack, within an address space (virtual or physical, e.g., a virtual memory for a process) containing code, library, your data, heap, stack, and common runtime segments.]

Thread context
Each thread has a context (exactly one).
– Context == the thread’s register values.
– Including a (protected) identifier naming its VAS.
– And a pointer to thread’s stack in VAS/memory.
Each core has a context (at least one).
– Context == a register set that can hold values.
A core can change drivers: context switch.
– Save register values into memory.
– Load new register values from memory.
– (Think of driver settings for the seat, mirrors, audio…)
– Enables time slicing or time sharing of the machine.
[Figure: a CPU core with registers R0..Rn, PC, and SP.]

Messing with the context

#include <ucontext.h>

int count = 0;
ucontext_t context;

int main() {
    int i = 0;
    getcontext(&context);
    count += 1;
    i += 1;
    sleep(2);
    printf("…", count, i);
    setcontext(&context);
}

ucontext: standard C library routines to:
– Save the current register context to a block of memory (getcontext, from the core)
– Load/restore the current register context from a block of memory (setcontext)
– Also: makecontext, swapcontext
Details of the saved context (the ucontext_t structure) are machine-dependent.

Messing with the context (2)

#include <ucontext.h>

int count = 0;
ucontext_t context;

int main() {
    int i = 0;
    getcontext(&context);        /* save core context to memory */
    count += 1;
    i += 1;
    sleep(1);
    printf("…", count, i);
    setcontext(&context);        /* load core context from memory */
}

Loading the saved context transfers control to this block of code. (Why?) What about the stack?

Messing with the context (3)

#include <ucontext.h>

int count = 0;
ucontext_t context;

int main() {
    int i = 0;
    getcontext(&context);
    count += 1;
    i += 1;
    sleep(1);
    printf("…", count, i);
    setcontext(&context);
}

chase$ cc -o context0 context0.c
< warnings: ucontext deprecated on MacOS >
chase$ ./context0
…
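
A complete, compilable version of this example (a sketch: the printf format string and the extra #include lines are mine, since the slides elide them; as the warning above notes, ucontext is deprecated on some platforms):

#include <stdio.h>
#include <ucontext.h>
#include <unistd.h>

int count = 0;
ucontext_t context;

int main(void) {
    int i = 0;
    getcontext(&context);          /* save the core's register context into memory */
    count += 1;
    i += 1;
    sleep(1);
    printf("count=%d i=%d\n", count, i);
    setcontext(&context);          /* reload it: control resumes just after getcontext */
    return 0;
}

Built without optimization on a typical machine, it loops forever, printing a count and an i that both keep increasing; stop it with ^C.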

Reading behind the C

Disassembled code:

count += 1;
    movl   0x…(%rip), %ecx
    addl   $0x1, %ecx
    movl   %ecx, 0x…(%rip)

i += 1;
    movl   0xfc(%rbp), %ecx
    addl   $0x1, %ecx
    movl   %ecx, 0xfc(%rbp)

%rip and %rbp are set “right”, then these references “work”.

On MacOS:
chase$ man otool
chase$ otool -vt context0
…

On this machine, with this cc: static global _count is addressed relative to the location of the code itself, as given by the PC register [%rip is the instruction pointer register]. Local variable i is addressed as an offset from the stack frame [%rbp is the stack frame base pointer].

Messing with the context (4)

#include <ucontext.h>

int count = 0;
ucontext_t context;

int main() {
    int i = 0;
    getcontext(&context);
    count += 1;
    i += 1;
    sleep(1);
    printf("…", count, i);
    setcontext(&context);
}

chase$ cc -O2 -o context0 context0.c
< warnings: ucontext deprecated on MacOS >
chase$ ./context0
…

What happened?

The point of ucontext
The system can use ucontext routines to:
– “Freeze” at a point in time of the execution
– Restart execution from a frozen moment in time
– Execution continues where it left off…if the memory state is right.
The system can implement multiple independent threads of execution within the same address space.
– Create a context for a new thread with makecontext.
– Modify saved contexts at will.
– Context switch with swapcontext: transfer a core from one thread to another (“change drivers”)
– Much more to this picture: need per-thread stacks, kernel support, suspend/sleep, controlled ordering, etc.
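
To make the “change drivers” idea concrete, here is a hedged sketch (my own, not from the slides) of two cooperating user-level threads built with makecontext and swapcontext, each with its own stack; the names and stack sizes are arbitrary.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, t1_ctx, t2_ctx;

static void t1(void) {
    for (int n = 0; n < 3; n++) {
        printf("thread 1, step %d\n", n);
        swapcontext(&t1_ctx, &t2_ctx);   /* save t1's registers, load t2's: "change drivers" */
    }
    swapcontext(&t1_ctx, &main_ctx);     /* hand the core back to main */
}

static void t2(void) {
    for (int n = 0; n < 3; n++) {
        printf("thread 2, step %d\n", n);
        swapcontext(&t2_ctx, &t1_ctx);
    }
    swapcontext(&t2_ctx, &main_ctx);
}

int main(void) {
    static char s1[16384], s2[16384];    /* per-thread stacks */

    getcontext(&t1_ctx);                 /* template context for thread 1 */
    t1_ctx.uc_stack.ss_sp = s1;
    t1_ctx.uc_stack.ss_size = sizeof s1;
    t1_ctx.uc_link = &main_ctx;          /* where to go if t1 simply returns */
    makecontext(&t1_ctx, t1, 0);

    getcontext(&t2_ctx);
    t2_ctx.uc_stack.ss_sp = s2;
    t2_ctx.uc_stack.ss_size = sizeof s2;
    t2_ctx.uc_link = &main_ctx;
    makecontext(&t2_ctx, t2, 0);

    swapcontext(&main_ctx, &t1_ctx);     /* start thread 1 */
    printf("back in main\n");
    return 0;
}

The two threads alternate on one core purely by saving and loading register contexts; nothing here is preemptive, which is why real thread libraries also need timer-driven kernel support.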

Two threads: closer look. [Figure: an address space (code, library, data, program, common runtime) now containing two stacks; the CPU (core) registers R0..Rn, PC, and SP point into the code and stack of the running thread, while a second thread is “on deck” and ready to run.]

Thread context switch. [Figure: the same address space with two thread stacks; switching out saves the running thread’s registers (1. save registers) and switching in loads the next thread’s registers (2. load registers).]

A metaphor: context/switching. Page links and back button navigate a “stack” of pages in each tab. Each tab has its own stack. One tab is active at any given time. You create/destroy tabs as needed. You switch between tabs at your whim. Similarly, each thread has a separate stack. The OS switches between threads at its whim. One thread is active per CPU core at any given time.

Messing with the context (5)

#include <ucontext.h>

int count = 0;
ucontext_t context;

int main() {
    int i = 0;
    getcontext(&context);
    count += 1;
    i += 1;
    sleep(1);
    printf("…", count, i);
    setcontext(&context);
}

What does this do?

Thread/process states and transitions. [Figure: states running (“driving a car”), ready (“requesting a car”), and blocked (“waiting for someplace to go”); transitions: dispatch (ready to running), sleep on wait, STOP, read, write, listen, receive, etc. (running to blocked), wakeup (blocked to ready), and yield (running to ready).] Scheduler governs these transitions. Sleep and wakeup are internal primitives. Wakeup adds a thread to the scheduler’s ready pool: a set of threads in the ready state.

Programs gone wild int main() { while(1); } Can you hear the fans blow? How does the OS regain control of the core from this program? How to “make” the process save its context and give some other process a chance to run? How to “make” processes share machine resources fairly?

Timer interrupts, faults, etc.
When a processor core is running a user program, the user program/thread controls (“drives”) the core. The hardware has a timer device that interrupts the core after a given interval of time. The interrupt transfers control back to the OS kernel, which may switch the core to another thread, or resume.
Other events also return control to the kernel:
– Wild pointers
– Divide by zero
– Other program actions
– Page faults
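
The kernel's timer device is not visible from user space, but (as an analogy only, not the slides' mechanism) a POSIX interval timer gives a feel for it: the process below "loses control" every 10 ms when SIGALRM is delivered, much as a running thread loses the core when the hardware timer fires.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks = 0;

static void on_tick(int sig) {
    (void)sig;
    ticks++;                              /* the handler "steals" control briefly */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);        /* register the "interrupt handler" */

    struct itimerval it;
    memset(&it, 0, sizeof it);
    it.it_interval.tv_usec = 10000;       /* fire every 10 ms */
    it.it_value.tv_usec = 10000;
    setitimer(ITIMER_REAL, &it, NULL);

    while (ticks < 100)                   /* the "user program" just spins... */
        ;                                 /* ...yet the handler keeps running */
    printf("observed %d timer ticks\n", (int)ticks);
    return 0;
}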

Entry to the kernel. [Figure: syscall trap/return, fault/return, interrupt/return, I/O completions, and timer ticks all enter the OS kernel, whose code and data implement system calls (files, process fork/exit/wait, pipes, binder IPC, low-level thread support, etc.) and virtual memory management (page faults, etc.).] Every entry to the kernel is the result of a trap, fault, or interrupt. The core switches to kernel mode and transfers control to a handler routine. The handler accesses the core register context to read the details of the exception (trap, fault, or interrupt). It may call other kernel routines.

Protected CPU mode. [Figure: a timeline alternating between user mode and kernel mode: user start, syscall trap, u-return, fault, u-return, clock interrupt, interrupt return; the kernel “top half” handles traps and faults, and the kernel “bottom half” holds the interrupt handlers.] Any kind of machine exception transfers control to a registered (trusted) kernel handler running in a protected CPU mode. Kernel handler manipulates CPU register context to return to selected user context.

Kernel mode. [Figure: a CPU core’s registers R0..Rn, PC, and a mode field (U/K).] CPU mode (a field in some status register) indicates whether a machine CPU (core) is running in a user program or in the protected kernel (protected mode). Some instructions or register accesses are legal only when the CPU (core) is executing in kernel mode. CPU mode transitions to kernel mode only on machine exception events (trap, fault, interrupt), which transfer control to a handler registered by the kernel with the machine at boot time. So only the kernel program chooses what code ever runs in kernel mode (or so we hope and intend). A kernel handler can read the user register values at the time of the event, and modify them arbitrarily before (optionally) returning to user mode.

Exceptions: trap, fault, interrupt
Synchronous (caused by an instruction), intentional (happens every time): trap. A system call: open, close, read, write, fork, exec, exit, wait, kill, etc.
Synchronous, unintentional (contributing factors): fault. Invalid or protected address or opcode, page fault, overflow, etc.
Asynchronous (caused by some other event), unintentional: interrupt. Caused by an external event: I/O op completed, clock tick, power fail, etc.
Asynchronous, intentional: “software interrupt”. Software requests an interrupt to be delivered at a later time.

Example: System Call Traps
Programs in C, C++, etc. invoke system calls by linking to a standard library of procedures written in assembly language.
– the library defines a stub or wrapper routine for each syscall
– the stub executes a special trap instruction (e.g., chmk or callsys or int)
– syscall arguments/results are passed in registers or on the user stack

read() in Unix libc.a Alpha library (executes in user mode):

#define SYSCALL_READ 27        # op ID for a read system call
move arg0…argn, a0…an          # syscall args in registers A0..AN
move SYSCALL_READ, v0          # syscall dispatch index in V0
callsys                        # kernel trap
move r1, _errno                # errno = return status
return

(Alpha CPU architecture)
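
On Linux, the same idea can be seen without writing assembly: the syscall(2) wrapper loads the given system call number and arguments into the registers the kernel expects and issues the trap. (A hedged illustration of mine, not part of the original slides; SYS_write is Linux-specific.)

#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from a raw system call\n";
    /* equivalent to the libc stub call write(1, msg, sizeof msg - 1) */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}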

Processes and threads. [Figure: a process virtual address space with the main thread and its stack, plus optional other threads.] Each process has a thread bound to the VAS, with stacks (user and kernel). If we say a process does something, we really mean its thread does it. The kernel can suspend/restart the thread wherever and whenever it wants. Each process has a virtual address space (VAS): a private name space for the virtual memory it uses. The VAS is both a “sandbox” and a “lockbox”: it limits what the process can see/do, and protects its data from others. From now on, we suppose that a process could have additional threads. We are not concerned with how to implement them, but we presume that they can all make system calls and block independently.

Kernel Stacks and Trap/Fault Handling. Processes execute user code on a user stack in the user virtual memory in the process virtual address space. Each process has a second kernel stack in kernel space (VM accessible only to the kernel). System calls and faults run in kernel mode on the process kernel stack. Kernel code running in P’s process context (i.e., on its kstack) has access to P’s virtual memory. The syscall handler makes an indirect call through the syscall dispatch table to the handler registered for the specific system call.
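
A minimal sketch of what a syscall dispatch table might look like (all names here, such as syscall_fn, sys_read, sys_write, and NSYSCALL, are hypothetical, not from any particular kernel): the trap handler uses the syscall number the user program left in a register to index an array of handler functions.

typedef long (*syscall_fn)(long a0, long a1, long a2);

static long sys_read(long fd, long buf, long len)  { (void)fd; (void)buf; return len; }  /* stub */
static long sys_write(long fd, long buf, long len) { (void)fd; (void)buf; return len; }  /* stub */

#define NSYSCALL 2
static syscall_fn syscall_table[NSYSCALL] = { sys_read, sys_write };

/* Called from the trap handler with the number and arguments the
   user program left in registers. */
long syscall_dispatch(long num, long a0, long a1, long a2) {
    if (num < 0 || num >= NSYSCALL)
        return -1;                        /* unknown syscall, e.g., ENOSYS */
    return syscall_table[num](a0, a1, a2);
}

int main(void) {                          /* tiny demo: dispatch the "write" stub by number */
    return (int)syscall_dispatch(1, 1, 0, 0);
}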

The kernel. [Figure: the OS kernel is entered via syscall trap/return, fault/return, and interrupt/return, plus I/O completions and timer ticks; inside: a system call layer (files, processes, IPC, thread syscalls), fault entry (VM page faults, signals, etc.), thread/CPU/core management (sleep and ready queues), and memory management (block/page cache).]

The kernel, revisited. [Figure: the same kernel diagram as above, now with a “policy” annotation added.]

Separation of policy and mechanism
Every OS platform has mechanisms that enable it to mediate access to machine resources:
– Gain control of a core by timer interrupts
– Fault on access to non-resident virtual memory
– I/O through system call traps
– Internal code and data structures to track resource usage and allocate resources
The mechanisms enable resource management policy. But the mechanisms do not (and should not) determine the policy: we might want to change the policy!

Goals of policy Share resources fairly. Use machine resources efficiently. Be responsive to user interaction. But what do these things mean? How do we know if a policy is good or not? What are the metrics? What do we assume about the workload?

Time sharing vs. space sharing. [Figure: a resource divided across time vs. divided across space.] Two common modes of resource allocation. What kinds of resources do these work for?

Example: Processor Allocation
The key issue is: how should an OS allocate its CPU resources among contending demands?
– We are concerned with resource allocation policy: how the OS uses underlying mechanisms to meet design goals.
– Focus on the OS kernel: user code can decide how to use the processor time it is given.
– Which thread to run on a free core?
– For how long? When to take the core back and give it to some other thread? (timeslice or quantum)
– What are the policy goals?

CPU Scheduling 101
The OS scheduler makes a sequence of “moves”.
– Next move: if a CPU core is idle, pick a ready thread t from the ready pool and dispatch it (run it).
– Scheduler’s choice is “nondeterministic”.
– Scheduler’s choice determines the interleaving of execution.
[Figure: blocked threads enter the ready pool via Wakeup; GetNextToRun picks a thread from the ready pool and SWITCH() dispatches it; a running thread returns to the pool or blocks if the timer expires, or on wait/yield/terminate.]
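
A toy sketch of these scheduler “moves” (my own illustration; Thread, wakeup, and get_next_to_run are made-up names): a FIFO ready pool from which an idle core picks the next thread to dispatch.

#include <stddef.h>

typedef struct thread {
    struct thread *next;
    /* saved register context, stack pointer, state, etc. would live here */
} Thread;

static Thread *ready_head = NULL, *ready_tail = NULL;

/* Wakeup: add a thread to the scheduler's ready pool. */
void wakeup(Thread *t) {
    t->next = NULL;
    if (ready_tail) ready_tail->next = t; else ready_head = t;
    ready_tail = t;
}

/* GetNextToRun: pick a ready thread (FIFO here; the policy could differ). */
Thread *get_next_to_run(void) {
    Thread *t = ready_head;
    if (t) {
        ready_head = t->next;
        if (!ready_head) ready_tail = NULL;
    }
    return t;                             /* NULL means the core idles */
}

int main(void) {                          /* demo: two wakeups, dispatched in FIFO order */
    Thread a, b;
    wakeup(&a);
    wakeup(&b);
    return (get_next_to_run() == &a && get_next_to_run() == &b) ? 0 : 1;
}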

The story so far: OS platforms
OS platforms let us run programs in contexts. Contexts are protected/isolated to varying degrees. The OS platform TCB offers APIs to create and manipulate protected contexts.
– It enforces isolation of contexts for running programs.
– It governs access to hardware resources.
Classical example:
– Unix context: process
– Unix TCB: kernel
– Unix kernel API: syscalls

“Classic Linux Address Space” [Figure: the classic Linux process address space layout.]

Something wild (1)

#include <ucontext.h>

int count = 0;
int set = 0;
ucontext_t contexts[2];

void proc() {
    int i = 0;
    if (!set) {
        getcontext(&contexts[count]);
    }
    printf("…", count, i);
    count += 1;
    i += 1;
    if (set) {
        setcontext(&contexts[count & 0x1]);
    }
}

int main() {
    set = 0;
    proc();
    set = 1;
    proc();
}

Something wild (2)

#include <ucontext.h>

ucontext_t contexts[2];

void proc() {
    int i = 0;
    getcontext(&contexts[count]);
    printf("…", count, i);
    count += 1;
    i += 1;
}

int main() {
    set = 0;
    proc();
    …
}

Something wild (3)

#include <ucontext.h>

ucontext_t contexts[2];

void proc() {
    int i = 0;
    printf("…", count, i);
    count += 1;
    i += 1;
    sleep(1);
    setcontext(&contexts[count & 0x1]);
}

int main() {
    …
    set = 1;
    proc();
}
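
For readers who want to run the “something wild” program, here is a self-contained variant. Note two assumptions of mine: the printf format string (elided in the slides), and a second proc() call in the set == 0 phase so that both saved contexts exist before the switching begins (the transcript shows only one such call).

#include <stdio.h>
#include <ucontext.h>
#include <unistd.h>

int count = 0;
int set = 0;
ucontext_t contexts[2];

void proc(void) {
    int i = 0;
    if (!set) {
        getcontext(&contexts[count]);        /* save a register context at this point */
    }
    printf("count=%d i=%d\n", count, i);
    count += 1;
    i += 1;
    sleep(1);
    if (set) {
        setcontext(&contexts[count & 0x1]);  /* alternate "even" and "odd" contexts */
    }
}

int main(void) {
    set = 0;
    proc();                                  /* saves contexts[0] */
    proc();                                  /* saves contexts[1] (my addition) */
    set = 1;
    proc();                                  /* from here on: ping-pong between the two */
    return 0;
}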

Something wild (4)

void proc() {
    int i = 0;
    printf("…", count, i);
    count += 1;
    i += 1;
    sleep(1);
    setcontext(…);
}

We have a pair of register contexts that were saved at this point in the code. If we load either of the saved contexts, it will transfer control to this block of code. (Why?) What about the stack? Switch to the other saved register context: alternate “even” and “odd” contexts. Lather, rinse, repeat. What will it print? The count is a global variable…but what about i?

Something wild (5)

void proc() {
    int i = 0;
    printf("%4d %4d\n", count, i);
    count += 1;
    i += 1;
    sleep(1);
    setcontext(…);
}

What does this do?