THE SPRITE NETWORK OPERATING SYSTEM PRESENTED BY PRASHANTHI NARAYAN NETTEM.

WHAT IS SPRITE? Sprite is an operating system implemented at the University of California, Berkeley as part of the development of SPUR, a high-performance multiprocessor workstation. Sprite is a distributed operating system that provides a single system image to a cluster of workstations.

Features of the Sprite Operating System: Process Migration, Virtual Memory, File Management, Remote Procedure Calls, Mutual Exclusion and Synchronization.

Process Migration Sprite presents the illusion of a single fast time-sharing system rather than a distributed system with many independent hosts. The process migration facility moves a process's execution site between two machines of the same architecture. Process migration is both transparent and automatic. Migration involves two phases: extraction and execution. Extraction captures the process's state on the source host and installs it on the target; its cost depends on the amount of state associated with the process.

Process Migration (Contd…) The second phase, process execution, depends not only on the way in which state is transferred but also on the degree to which migration is intended to be transparent. Transferring a process includes transferring its files, its virtual memory, and its open files. Sprite uses a common high-performance file system. The file server keeps track of which hosts have a file open for reading or writing. If a file is open for writing, caching is disabled and all hosts must forward their read and write requests for that file to the server so that they can be serialized.
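
As a rough illustration of the two-phase structure described above, the C sketch below separates extraction of a process's state on the source from installation and resumption on the target. The struct fields and function names are hypothetical, chosen only to show the shape of the protocol; they are not Sprite's actual kernel interfaces.

#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified view of the state shipped during migration.
 * Real Sprite transfers registers, the virtual-memory map (backing-file
 * references), and open-file state; a few fields stand in for that here. */
struct proc_state {
    int  pid;
    long pc;                 /* program counter */
    char open_file[64];      /* name of one open file, for illustration */
};

/* Phase 1a: extract state on the source host. */
static void extract_state(const struct proc_state *src, struct proc_state *out)
{
    memcpy(out, src, sizeof *out);
}

/* Phase 1b: install the extracted state on the target host. */
static void install_state(struct proc_state *target_slot, const struct proc_state *in)
{
    memcpy(target_slot, in, sizeof *in);
}

/* Phase 2: resume execution on the target host. */
static void resume(const struct proc_state *p)
{
    printf("resuming pid %d at pc=%ld with %s open\n", p->pid, p->pc, p->open_file);
}

int main(void)
{
    struct proc_state on_source = { 42, 0x1000, "/sprite/tmp/data" };
    struct proc_state shipped, on_target;

    extract_state(&on_source, &shipped);   /* capture on source */
    install_state(&on_target, &shipped);   /* install on target */
    resume(&on_target);                    /* continue execution there */
    return 0;
}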

Process Migration (Contd…) In Sprite, backing storage for virtual memory is implemented using ordinary files. These backing files are stored in the network file system and are therefore accessible throughout the network. They reside on network file servers, which cache recently used file data in memory.

Process Migration (Contd…) When an open file is migrated to a new host, access to the file is managed using the standard mechanisms of the Sprite file system. The server that stores a file is responsible for keeping the file's contents consistent among the processes that use it.

Virtual Memory Management Sprite's virtual memory is very similar to UNIX's but has been redesigned to support three things: multiprocessing, networks, and large physical memories. The large physical memories in Sprite offer an opportunity to speed up program startup by using free memory as a cache for recently used programs. Sprite uses the clock algorithm for page replacement and provides shared read-write data segments.
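
To make the page-replacement policy concrete, here is a minimal sketch of the clock (second-chance) algorithm in C. The frame table, reference bits, and frame count are invented for the example; Sprite's actual data structures differ.

#include <stdbool.h>
#include <stdio.h>

#define NFRAMES 4

/* One physical page frame: which virtual page it holds and its reference bit. */
struct frame {
    int  vpage;        /* virtual page number resident in this frame */
    bool referenced;   /* set when the page is accessed, cleared by the clock hand */
};

static struct frame frames[NFRAMES];
static int clock_hand = 0;

/* Pick a victim frame: sweep the clock hand, giving referenced pages a
 * second chance by clearing their bit and moving on. */
static int clock_evict(void)
{
    for (;;) {
        struct frame *f = &frames[clock_hand];
        if (!f->referenced) {
            int victim = clock_hand;
            clock_hand = (clock_hand + 1) % NFRAMES;
            return victim;
        }
        f->referenced = false;                  /* second chance */
        clock_hand = (clock_hand + 1) % NFRAMES;
    }
}

int main(void)
{
    /* Fill all frames, mark some as recently referenced, then evict one. */
    for (int i = 0; i < NFRAMES; i++)
        frames[i] = (struct frame){ .vpage = 100 + i, .referenced = (i % 2 == 0) };

    int victim = clock_evict();
    printf("evicting frame %d (virtual page %d)\n", victim, frames[victim].vpage);
    return 0;
}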

SHARED MEMORY Processes working together in a multiprocessor environment need mechanisms for shared memory and interprocess communication (IPC) in order to synchronize and to share data. Sprite provides shared writable memory and messages for IPC.
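
The slide above names the mechanisms Sprite offers rather than a specific API. As a rough POSIX analogue only, the sketch below shows two related processes sharing a writable region, which is the kind of facility the slide refers to; the use of mmap and fork here is illustrative and is not Sprite's interface.

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Anonymous shared mapping: visible to both parent and child after fork. */
    int *counter = mmap(NULL, sizeof *counter, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (counter == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    *counter = 0;

    pid_t pid = fork();
    if (pid == 0) {               /* child: write into the shared region */
        *counter = 42;
        _exit(0);
    }
    waitpid(pid, NULL, 0);        /* parent: observe the child's update */
    printf("shared counter = %d\n", *counter);
    munmap(counter, sizeof *counter);
    return 0;
}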

A virtual memory system must be able to load pages into memory when a process faults on a page and must provide backing storage for pages that are written out. Sprite uses the file system for both demand loading and backing store.

DEMAND LOADING OF CODE UNIX allows processes to share only code, but Sprite allows processes to share both the code and the heap of a process's address space. Sprite initially loads code and initialized heap pages from object files in the file system. Because Sprite uses high-performance file servers, the file system is fast enough for the virtual memory system to use it for all demand loading. Virtual memory is simplified because it does not have to worry about the physical location of data on disk, and the file server's cache increases performance because page reads can be serviced out of the cache instead of going to disk.
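
A minimal sketch of what "demand loading through the file system" looks like: on a fault, compute the page's offset in the object file and read it with an ordinary file read. The segment layout, page size, and helper names are invented for illustration, and a temporary file stands in for the object file so the example runs; Sprite's fault handler is of course inside the kernel.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define PAGE_SIZE 4096

/* Hypothetical code segment: a range of virtual addresses backed by a
 * region of an object file in the (network) file system. */
struct code_segment {
    int       obj_fd;       /* open object file */
    off_t     file_offset;  /* where the segment's image starts in the file */
    uintptr_t base_vaddr;   /* first virtual address of the segment */
};

/* Demand-load the page containing fault_addr with an ordinary file read. */
static int load_code_page(const struct code_segment *seg,
                          uintptr_t fault_addr, char *frame)
{
    uintptr_t page_vaddr = fault_addr & ~(uintptr_t)(PAGE_SIZE - 1);
    off_t off = seg->file_offset + (off_t)(page_vaddr - seg->base_vaddr);
    return pread(seg->obj_fd, frame, PAGE_SIZE, off) < 0 ? -1 : 0;
}

int main(void)
{
    /* Stand-in object file so the example runs end to end. */
    char path[] = "/tmp/objXXXXXX";
    int fd = mkstemp(path);
    char page[PAGE_SIZE];
    memset(page, 'C', sizeof page);          /* pretend this is code */
    write(fd, page, sizeof page);

    struct code_segment seg = { fd, 0, 0x400000 };
    char frame[PAGE_SIZE];
    if (load_code_page(&seg, 0x400010, frame) == 0)
        printf("loaded page, first byte = %c\n", frame[0]);

    close(fd);
    unlink(path);
    return 0;
}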

BACKING STORE MANAGEMENT Backing store holds dirty pages after they are taken away from a segment. In Sprite, each segment has its own file in the file system that it uses for backing store, instead of a special disk partition. When a segment needs to write out a dirty page, the page is written to the file with a normal file write operation, and pages are read back with normal read operations, using the virtual address of the page to locate it in the file. Advantages: (1) no disk partition has to be preallocated for backing storage; (2) migration is made easier, because only a pointer to the backing file needs to be transferred.
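
The sketch below illustrates that idea under the same assumptions as the previous example (invented page size and helper names, temporary file as a stand-in for the per-segment backing file): a dirty page is written with an ordinary pwrite at an offset derived from the page's virtual address, and read back the same way.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define PAGE_SIZE 4096

/* Write a dirty page to the segment's backing file; the page's offset in
 * the file is just its offset within the segment. */
static int page_out(int backing_fd, uintptr_t seg_base, uintptr_t page_vaddr,
                    const char *page)
{
    off_t off = (off_t)(page_vaddr - seg_base);
    return pwrite(backing_fd, page, PAGE_SIZE, off) == PAGE_SIZE ? 0 : -1;
}

/* Read the page back in with an ordinary file read at the same offset. */
static int page_in(int backing_fd, uintptr_t seg_base, uintptr_t page_vaddr,
                   char *page)
{
    off_t off = (off_t)(page_vaddr - seg_base);
    return pread(backing_fd, page, PAGE_SIZE, off) == PAGE_SIZE ? 0 : -1;
}

int main(void)
{
    char path[] = "/tmp/backingXXXXXX";     /* stand-in for a per-segment file */
    int fd = mkstemp(path);

    char dirty[PAGE_SIZE], restored[PAGE_SIZE];
    memset(dirty, 'D', sizeof dirty);

    uintptr_t seg_base = 0x800000, vaddr = 0x801000;
    if (page_out(fd, seg_base, vaddr, dirty) == 0 &&
        page_in(fd, seg_base, vaddr, restored) == 0)
        printf("round trip ok, byte = %c\n", restored[0]);

    close(fd);
    unlink(path);
    return 0;
}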

FILE MANAGEMENT Sprite's file system is implemented by a distributed set of computers, and its internal state is distributed among the operating system kernels at the different sites. Sprite supports pseudo-file-systems, a facility that accommodates foreign file systems and arbitrary user services. A pseudo-file-system allows extensions to the system to be implemented by user-level server processes instead of inside the kernel; the pseudo-file-system server takes care of recovery and caching. Thus Sprite's file system provides support for user-implemented services rather than relying on a message-based kernel.
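
As a generic illustration of the user-level-server idea only (not Sprite's actual pseudo-file-system or pseudo-device protocol), the sketch below shows the shape of such a server: it reads operation requests from a descriptor and services them in user space. The request format and the pipe used to drive the example are invented.

#include <stdio.h>
#include <unistd.h>

/* Invented request format standing in for whatever the kernel would hand
 * a user-level file server. */
enum fs_op { FS_READ, FS_WRITE };
struct fs_request {
    enum fs_op op;
    int        file_id;
    long       offset;
    long       length;
};

/* Core of a user-level server: receive a request, service it in user space. */
static void serve_one(const struct fs_request *req)
{
    printf("user-level server: %s file %d [%ld,+%ld)\n",
           req->op == FS_READ ? "read " : "write",
           req->file_id, req->offset, req->length);
}

int main(void)
{
    /* A pipe stands in for the kernel's request channel in this toy. */
    int chan[2];
    pipe(chan);

    struct fs_request req = { FS_READ, 7, 0, 4096 };
    write(chan[1], &req, sizeof req);

    struct fs_request incoming;
    if (read(chan[0], &incoming, sizeof incoming) == sizeof incoming)
        serve_one(&incoming);
    return 0;
}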

CACHING IN THE FILE SYSTEM Caches increase file system performance by keeping repeatedly accessed data in memory, avoiding the delay of going to disk. In Sprite, caching is done in the main memories of both the server and the client. Caching at the client reduces the communication delay caused by fetching blocks from the server; this speeds up programs and increases the number of clients the server can support. The size of the cache can change dynamically: the virtual memory system and the file system negotiate for the machine's physical memory as the needs of each change.
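
A minimal sketch of a client-side block cache keyed by (file, block number). The fixed table size, linear search, and "first free slot" insertion are simplifications for illustration, not Sprite's data structures or replacement policy.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096
#define CACHE_SLOTS 64

/* One cached file block, identified by file id and block number. */
struct cache_block {
    bool valid;
    int  file_id;
    long block_no;
    char data[BLOCK_SIZE];
};

static struct cache_block cache[CACHE_SLOTS];

/* Look a block up in the client cache; return its data or NULL on a miss
 * (a miss would be satisfied by an RPC to the file server). */
static char *cache_lookup(int file_id, long block_no)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && cache[i].file_id == file_id &&
            cache[i].block_no == block_no)
            return cache[i].data;
    return NULL;
}

/* Insert a block fetched from the server into the first free slot
 * (a real cache would evict, e.g. with LRU, when full). */
static void cache_insert(int file_id, long block_no, const char *data)
{
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].valid) {
            cache[i].valid = true;
            cache[i].file_id = file_id;
            cache[i].block_no = block_no;
            memcpy(cache[i].data, data, BLOCK_SIZE);
            return;
        }
    }
}

int main(void)
{
    char block[BLOCK_SIZE] = "block fetched from the server";
    printf("first lookup:  %s\n", cache_lookup(3, 0) ? "hit" : "miss");
    cache_insert(3, 0, block);
    printf("second lookup: %s\n", cache_lookup(3, 0) ? "hit" : "miss");
    return 0;
}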

CACHE CONSISTENCY Many clients can cache the same file simultaneously as long as none of them is writing the file. In Sprite, sequential write sharing and concurrent write sharing cause consistency problems. Sequential write sharing occurs when a file is written by one client and later accessed by another, but the file is never open on different clients at the same time. Concurrent write sharing occurs when multiple clients have a file open at the same time and at least one of them has it open for writing. Sprite uses the file servers as centralized points for enforcing cache consistency; there are no client-to-client interactions.
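
A rough sketch of the server-side decision at open time, using invented data structures: the server tracks which clients have each file open and in what mode, and disables client caching when concurrent write sharing arises. Sprite's real protocol also issues callbacks to flush or invalidate existing client caches; that part is omitted here.

#include <stdbool.h>
#include <stdio.h>

#define MAX_OPENS 16

/* Server-side record of who has a file open and how. */
struct open_entry { int client_id; bool writing; };

struct file_state {
    struct open_entry opens[MAX_OPENS];
    int  nopens;
    bool caching_enabled;    /* may clients cache this file? */
};

/* Handle an open request: if the open creates concurrent write sharing
 * (several clients, at least one writer), disable client caching. */
static void server_open(struct file_state *f, int client_id, bool writing)
{
    f->opens[f->nopens++] = (struct open_entry){ client_id, writing };

    bool any_writer = false, multiple_clients = false;
    for (int i = 0; i < f->nopens; i++) {
        if (f->opens[i].writing) any_writer = true;
        if (f->opens[i].client_id != f->opens[0].client_id) multiple_clients = true;
    }
    f->caching_enabled = !(any_writer && multiple_clients);
}

int main(void)
{
    struct file_state f = { .nopens = 0, .caching_enabled = true };
    server_open(&f, 1, false);   /* client 1 reads  */
    server_open(&f, 2, true);    /* client 2 writes: concurrent write sharing */
    printf("caching %s\n", f.caching_enabled ? "enabled" : "disabled");
    return 0;
}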

[Figure: two time lines. Sequential write sharing: C1 reads file, C2 writes to file, C1 reads file, with the file never open on both clients at once. Concurrent write sharing: C1 reads file, C2 writes to file, C1 writes to file, with the opens overlapping in time.]

REMOTE PROCEDURE CALLS A remote procedure call is a procedure executed on a foreign host, with the result sent back to the calling program. The foreign host is called the server and the calling program the client. Sprite uses a network RPC protocol for communication among the Sprite kernels. The protocol is an extension of the Birrell-Nelson protocol, optimized for bulk transfer of data. An RPC request or reply consists of two buffers plus a header: the first buffer is used for marshalling small parameters, and the second refers to a large uninterpreted block of data. The header contains a boot timestamp so that crashes and reboots can be detected easily.
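
The sketch below gives one possible shape for such a message: a header carrying the sender's boot timestamp plus two buffers, one for marshalled parameters and one pointing at a large data block. The field names and the reboot check are illustrative assumptions, not Sprite's wire format.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative RPC message layout: header plus two buffers. */
struct rpc_header {
    uint32_t rpc_id;          /* which remote procedure to invoke */
    uint32_t sequence;        /* matches replies to requests */
    uint64_t boot_timestamp;  /* sender's boot time, for crash/reboot detection */
};

struct rpc_message {
    struct rpc_header hdr;
    const void *params;       /* small marshalled parameters */
    size_t      params_len;
    const void *data;         /* large block of data (e.g., a file block) */
    size_t      data_len;
};

/* If the peer's boot timestamp changed, it has rebooted since we last
 * talked to it, so any state cached about that host must be recovered. */
static bool peer_rebooted(uint64_t last_seen_boot, const struct rpc_message *m)
{
    return last_seen_boot != 0 && m->hdr.boot_timestamp != last_seen_boot;
}

int main(void)
{
    struct rpc_message reply = { { 12, 1, 1700000000ULL }, NULL, 0, NULL, 0 };
    uint64_t last_seen_boot = 1690000000ULL;   /* boot time recorded earlier */

    if (peer_rebooted(last_seen_boot, &reply))
        printf("server rebooted: start recovery\n");
    return 0;
}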

[Figure: RPC data flow. The calling procedure passes arguments to the client stub; a request message travels through the RPC transport, across the network, to the server's RPC transport and server stub, which passes the arguments to the called procedure; the results return as a result message along the reverse path to the calling procedure.]

MULTI-PROCESSOR SPRITE KERNEL The Sprite operating system is designed to run on multiprocessors. The kernel is multi-threaded to allow more than one processor to execute kernel code simultaneously, so it has to provide means for synchronization and mutual exclusion between threads. It uses condition variables and monitor-style locks for this purpose. There are two types of locks. A monitor lock is used to implement a monitor: it is acquired at the beginning of a monitored routine and released at the end. When the lock is already held and another process attempts to acquire it, that process is put to sleep; when the lock is released, all waiting threads are awakened.
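
As a user-space analogue of the monitor-style lock and condition variable described above, the following pthreads sketch wraps a shared counter in a monitor: the lock is taken on entry and released on exit, and a waiter sleeps on the condition variable until another thread signals it. The pthread primitives are a stand-in for Sprite's kernel-internal ones.

#include <pthread.h>
#include <stdio.h>

/* A tiny monitor: one lock, one condition variable, one piece of state. */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  item_ready   = PTHREAD_COND_INITIALIZER;
static int items = 0;

/* Monitored routine: acquire the lock on entry, release it on exit. */
static void produce(void)
{
    pthread_mutex_lock(&monitor_lock);
    items++;
    pthread_cond_signal(&item_ready);     /* wake a waiter */
    pthread_mutex_unlock(&monitor_lock);
}

static void consume(void)
{
    pthread_mutex_lock(&monitor_lock);
    while (items == 0)                    /* wait for the interesting condition */
        pthread_cond_wait(&item_ready, &monitor_lock);
    items--;
    pthread_mutex_unlock(&monitor_lock);
}

static void *producer(void *arg)
{
    (void)arg;
    produce();
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    consume();                            /* sleeps until the producer signals */
    pthread_join(t, NULL);
    printf("consumed one item\n");
    return 0;
}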

The other type of lock is the master lock. It is used to provide mutual exclusion between processes and interrupt handlers. It is a spin lock acquired with interrupts disabled: if the lock is already held, the processor retries the locking operation until it succeeds. Interrupts are disabled because, if an interrupt were taken while a master lock is held, an interrupt routine that tried to acquire the same lock would spin forever waiting for it to be released. Condition variables are used for synchronization when a process has to wait for an interesting condition to occur: the process waits on the condition variable until it is signaled by another process.
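
A minimal sketch of a master-lock-style spin lock using C11 atomics. In a real kernel, disable_interrupts()/restore_interrupts() would manipulate the processor's interrupt state; here they are empty stand-ins so the example compiles and runs in user space.

#include <stdatomic.h>
#include <stdio.h>

/* Stand-ins for kernel interrupt control; a real kernel would change the
 * processor's interrupt-enable state here. */
static void disable_interrupts(void) { /* placeholder */ }
static void restore_interrupts(void) { /* placeholder */ }

/* Master-lock-style spin lock: busy-wait until the lock is free. */
typedef struct { atomic_flag held; } master_lock_t;

static void master_lock(master_lock_t *l)
{
    disable_interrupts();                       /* keep handlers from spinning on us */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;                                       /* retry until the lock is acquired */
}

static void master_unlock(master_lock_t *l)
{
    atomic_flag_clear_explicit(&l->held, memory_order_release);
    restore_interrupts();
}

int main(void)
{
    master_lock_t l = { ATOMIC_FLAG_INIT };
    master_lock(&l);
    puts("in critical section shared with interrupt handlers");
    master_unlock(&l);
    return 0;
}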

CONCLUSIONS Sprite is an operating system designed to provide high performance, consistency, and simplicity. High performance is achieved using server and client caches; Sprite provides a non-write-through file cache. The cache consistency mechanism permits files to be shared without the danger of stale data. Virtual memory uses ordinary files as backing storage for simplicity, easy implementation of process migration, and dynamic use of disk storage.