
1 IT 344: Operating Systems Winter 2010 Module 17 Distributed Systems

2 What is a “distributed system”?

Very broad definition:
– loosely-coupled to tightly-coupled

Nearly all systems today are distributed in some way:
– they use email
– they access files over a network
– they access printers over a network
– they’re backed up over a network
– they share other physical or logical resources
– they cooperate with other people on other machines
– they access the web
– they receive video, audio, etc.

3 Distributed systems are now a requirement

– Economics dictate that we buy small computers
– Everyone needs to communicate
– We need to share physical devices (printers) as well as information (files, etc.)
– Many applications are by their nature distributed (bank teller machines, airline reservations, ticket purchasing)
– To solve the largest problems, we will need to get large collections of small machines to cooperate (parallel programming)

4 Loosely-coupled systems

Earliest systems used simple explicit network programs:
– FTP (rcp): file transfer program
– telnet (rlogin/rsh): remote login program
– mail (SMTP)

Each system was a completely autonomous, independent system, connected to others on the network.

5 Even today, most distributed systems are loosely-coupled:
– each CPU runs an independent, autonomous OS
– computers don’t really trust each other
– some resources are shared, but most are not
– the system may look different from different hosts
– typically, communication times are long

6 Closely-coupled systems

A distributed system becomes more “closely-coupled” as it:
– appears more uniform in nature
– runs a “single” operating system
– has a single security domain
– shares all logical resources (e.g., files)
– shares all physical resources (CPUs, memory, disks, printers, etc.)

In the limit, a distributed system looks to the user as if it were a centralized timesharing system, except that it’s constructed out of a distributed collection of hardware and software components.

7 Tightly-coupled systems

A “tightly-coupled” system usually refers to a multiprocessor, which:
– runs a single copy of the OS with a single job queue
– has a single address space
– usually has a single bus or backplane to which all processors and memories are connected
– has very low communication latency
– lets processors communicate through shared memory (a sketch follows)
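
As a minimal illustration of that last point, here is a hedged C sketch in which two threads stand in for two processors: they communicate purely by reading and writing the same variable, with a lock for coordination, and no messages cross a network.

#include <pthread.h>
#include <stdio.h>

static int shared_value;                    // the "communication channel"
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    shared_value = 42;                      // "send" by writing memory
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    pthread_join(t, NULL);                  // wait for the writer
    printf("received %d via shared memory\n", shared_value);
    return 0;
}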

8 Some issues in distributed systems

– Transparency (how visible is the distribution?)
– Security
– Reliability
– Performance
– Scalability
– Programming models
– Communication models

9 Distributed file systems

The most common distributed services:
– printing
– email
– files
– computation

Basic idea of distributed file systems:
– support network-wide sharing of files and devices (disks)

Generally provide a “traditional” view:
– a centralized, shared local file system

But with a distributed implementation:
– read blocks from remote hosts, instead of from local disks

10 Issues

What is the basic abstraction?
– a remote file system? open, close, read, write, …
– a remote disk? read block, write block

Naming:
– how are files named?
– are those names location transparent? (is the file location visible to the user?)
– are those names location independent? (do the names change if the file moves? do the names change if the user moves?)

11 Caching:
– caching exists for performance reasons
– where are file blocks cached? on the file server? on the client machine? both?

Sharing and coherency:
– what are the semantics of sharing?
– what happens when a cached block/file is modified?
– how does a node know when its cached blocks are out of date?

12 Replication:
– replication can exist for performance and/or availability
– can there be multiple copies of a file in the network?
– if there are multiple copies, how are updates handled?
– what if there’s a network partition and clients work on separate copies?

Performance:
– what is the cost of a remote operation?
– what is the cost of file sharing?
– how does the system scale as the number of clients grows?
– what are the performance limitations: network, CPU, disks, protocols, data copying?

13 Example: Sun Network File System (NFS)

The Sun Network File System (NFS) has become a common standard for distributed UNIX file access. NFS runs over LANs (even over WANs – slowly).

Basic idea:
– allow a remote directory to be “mounted” (spliced) onto a local directory
– gives access to that remote directory and all its descendants as if they were part of the local hierarchy

Pretty much exactly like a “local mount” or “link” on UNIX – except for implementation and performance. (No, we didn’t really learn about these, but they’re obvious.)

14 For instance:
– I mount /u4/teng on Node1 onto /students/foo on Node2
– users on Node2 can then access this directory as /students/foo
– if I had a file /u4/teng/myfile, users on Node2 see it as /students/foo/myfile

Just as, on a local system, I might link /groups/it344/www/10wi/ as /u4/teng/it344 to allow easy access to my web data from my class home directory.
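
As a hedged sketch of what that splice boils down to in code (the node name and option string here are hypothetical; real NFS mounts are usually done via something like mount -t nfs node1:/u4/teng /students/foo, whose helper fills in many more options before making this system call):

#include <stdio.h>
#include <sys/mount.h>

int main(void) {
    // Splice node1's /u4/teng onto the local directory /students/foo.
    // "addr=10.0.0.1" stands in for node1's address; a real NFS mount
    // passes a much richer option string.
    if (mount("node1:/u4/teng", "/students/foo", "nfs", 0, "addr=10.0.0.1") != 0) {
        perror("mount");
        return 1;
    }
    printf("node1:/u4/teng is now visible as /students/foo\n");
    return 0;
}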

15 NFS implementation

NFS defines a set of RPC operations for remote file access:
– searching a directory
– reading directory entries
– manipulating links and directories
– reading/writing files

Every node may be both a client and a server.

16 NFS defines new layers in the Unix file system

[Figure: the system call interface sits on top of a virtual file system layer, which routes each call either to UFS (local files, via the buffer cache / i-node table) or to NFS (remote files, via RPCs to other server nodes); the same node also fields RPC requests from remote clients and sends server responses.]

The virtual file system (VFS) provides a standard interface, using v-nodes as file handles. A v-node describes either a local or a remote file.
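
The v-node dispatch can be sketched in C roughly as follows (the names are illustrative, not the real kernel API): each open file carries a pointer to a table of operations, and the system-call layer calls through that table, so it never needs to know whether the file is local (UFS) or remote (NFS).

#include <stddef.h>
#include <sys/types.h>

struct vnode;

struct vnode_ops {
    ssize_t (*read)(struct vnode *vn, void *buf, size_t len, off_t off);
    ssize_t (*write)(struct vnode *vn, const void *buf, size_t len, off_t off);
};

struct vnode {
    const struct vnode_ops *ops;   // UFS table for local files, NFS table for remote
    void *fs_private;              // i-node (UFS) or remote file handle (NFS)
};

// The system-call layer dispatches blindly through the table; the NFS
// implementations of read/write turn each call into an RPC to the server.
ssize_t vfs_read(struct vnode *vn, void *buf, size_t len, off_t off) {
    return vn->ops->read(vn, buf, len, off);
}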

17 NFS caching / sharing

– On an open, the client asks the server whether its cached blocks are up to date.
– Once a file is open, different clients can write it and get inconsistent data.
– Modified data is flushed back to the server every 30 seconds.
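
In C-flavored terms, the open-time check looks roughly like this (the types and the stub are invented for illustration; real NFS clients compare file attributes fetched via a GETATTR RPC):

#include <stdbool.h>
#include <time.h>

struct cached_file {
    time_t cached_mtime;   // server mtime when the blocks were cached
    bool   blocks_valid;
};

// Stub standing in for the GETATTR RPC to the server.
static time_t getattr_rpc_mtime(const char *path) { (void)path; return 0; }

static void nfs_open_validate(struct cached_file *f, const char *path) {
    // On open, ask the server for the file's current modification time;
    // if it changed since we cached the blocks, they are stale.
    if (getattr_rpc_mtime(path) != f->cached_mtime)
        f->blocks_valid = false;
}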

18 Example: CMU’s Andrew File System (AFS)

– Developed at CMU to support all of its student computing
– Consists of workstation clients and dedicated file server machines (differs from NFS)
– Workstations have local disks, used to cache files being used locally (originally whole files, subsequently 64 KB file chunks) (differs from NFS)
– Andrew has a single name space – your files have the same names everywhere in the world (differs from NFS)
– Andrew is good for distant operation because of its local disk caching: after a slow startup, most accesses are to the local disk

19 AFS caching/sharing

– The need for scaling required reducing client–server message traffic
– Once a file is cached, all operations are performed locally
– On close, if the file has been modified, the copy on the server is replaced
– The client assumes that its cache is up to date, unless it receives a callback message from the server saying otherwise
– On file open, if the client has received a callback on the file, it must fetch a new copy; otherwise it uses its locally-cached copy (differs from NFS)
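
A sketch of the callback logic in C (types and names invented for illustration); note the contrast with NFS: there is no round trip to the server on open unless a callback has arrived.

#include <stdbool.h>

struct afs_file {
    bool callback_broken;   // set when the server breaks our callback
    // ... cached copy of the file on local disk ...
};

// Invoked when the server tells us another client modified the file.
static void on_callback_break(struct afs_file *f) {
    f->callback_broken = true;
}

// On open: fetch a new copy only if our callback was broken.
static bool afs_open_needs_fetch(struct afs_file *f) {
    if (f->callback_broken) {
        f->callback_broken = false;   // a fresh copy comes with a fresh callback
        return true;
    }
    return false;                     // use the locally-cached copy
}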

20 Example: Berkeley Sprite File System

– A Unix file system developed at UCB for diskless workstations with large memories (differs from NFS, AFS)
– Considers memory a huge cache of disk blocks
  – memory is shared between the file system and VM
– Files are permanently stored on servers
  – servers have large memories that act as caches as well
– Several workstations can cache blocks of read-only files
– If a file is being written by more than one machine, client caching is turned off – all requests go to the server (differs from NFS, AFS)

21 Other approaches

Serverless:
– xFS, Farsite

Highly available:
– GFS

Mostly read-only:
– WWW

State, not files:
– SQL Server
– BigTable

22 Administrivia

– Case study: http://www.et.byu.edu/groups/it344/10wi/casestudies.htm#_Assignments
– Lab 8 – the last one of the semester; no write-up, passing it off gives you full credit
– No lab next week – catch up on past-due work and work on BYOOS
– BYOOS part 3 – write-up just to show progress
– HW 10 – the last one of the semester
– Final exam: on Blackboard, date TBD, probably the first week of April
– HONOR CODE

23 Example: Google File System (GFS)

Independence: small scale, many users, many programs.
Cooperation: large scale, few users, few programs.

(Slide © 2007 Gribble, Lazowska, Levy, Zahorjan.)

24 “Google” circa 1997 (google.stanford.edu)

25 Google (circa 1999)

26 Google data center (circa 2000)

27 Google new data center (2001)

28 Google data center (3 days later)

29 GFS: Google File System

Why did Google build its own FS? Google has unique FS requirements:
– huge read/write bandwidth
– reliability over thousands of nodes
– mostly operating on large data blocks
– need for efficient distributed operations

Unfair advantage:
– Google has control over the applications, libraries, and operating system

30 GFS ideology

– Huge amounts of data
– Ability to efficiently access data with low locality; a typical query reads 100s of MB of data
– Large quantities of cheap machines: performance vs. performance/$ and performance/W
– Replication: for scalability and against h/w failure
– Bandwidth more important than latency
– Component failures are the norm rather than the exception
– Atomic append operation so that multiple clients can append concurrently

31 Ideal hardware for Google

– Performance/Watt
– No need for instruction-level parallelism
  – very little floating-point or graphics need
– Small caches

32 GFS usage @ Google

– 200+ clusters
– Filesystem clusters of 1000s of machines
– Pools of 1000+ clients
– 4+ PB filesystems
– 40 GB/s read/write load (in the presence of frequent HW failures)

33 Files in GFS

– Files are huge by traditional standards
– Most files are mutated by appending new data rather than overwriting existing data
– Once written, files are only read, and often only sequentially
– Appending becomes the focus of performance optimization and atomicity guarantees

34 GFS setup

– The master manages metadata
– Data transfers happen directly between clients and chunkservers
– Files are broken into chunks (typically 64 MB)

[Figure: clients go through the GFS master (replicated, alongside misc. servers) for metadata, while chunks C0, C1, C2, C3, C5 are replicated across chunkservers 1, 2, …, N.]

35 Architecture

A GFS cluster consists of a single master and multiple chunkservers, and is accessed by multiple clients. Each of these is typically a commodity Linux machine running a user-level server process.

Files are divided into fixed-size chunks, identified by an immutable and globally unique 64-bit chunk handle. For reliability, each chunk is replicated on multiple chunkservers.

The master maintains all file system metadata. It periodically communicates with each chunkserver in HeartBeat (timer) messages to give it instructions and collect its state.

Neither the client nor the chunkserver caches file data, eliminating cache-coherence issues. Clients do cache metadata, however.

36 Architecture

[Figure: GFS architecture diagram – client, master, and chunkservers.]

37 Read process

A single master vastly simplifies the design. Clients never read or write file data through the master; instead, a client asks the master which chunkservers it should contact:
– using the fixed chunk size, the client translates the file name and byte offset specified by the application into a chunk index within the file
– it sends the master a request containing the file name and chunk index
– the master replies with the corresponding chunk handle and the locations of the replicas; the client caches this information using the file name and chunk index as the key
– the client then sends a request to one of the replicas, most likely the closest one; the request specifies the chunk handle and a byte range within that chunk
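
That read path can be sketched in C as follows (all names are invented for illustration; the fixed 64 MB chunk size is what lets the client turn a byte offset into a chunk index with one division):

#include <stdint.h>
#include <stdio.h>

#define CHUNK_SIZE (64ULL * 1024 * 1024)    // fixed chunk size

struct chunk_location {
    uint64_t chunk_handle;                  // immutable, globally unique
    const char *replica;                    // address of one chunkserver replica
};

// Stub standing in for the master RPC; real clients also cache the
// reply, keyed by (file name, chunk index).
static struct chunk_location master_lookup(const char *file, uint64_t chunk_index) {
    (void)file; (void)chunk_index;
    struct chunk_location loc = { 42, "chunkserver-7" };
    return loc;
}

static void gfs_read(const char *file, uint64_t offset, uint64_t len) {
    uint64_t chunk_index  = offset / CHUNK_SIZE;    // which chunk
    uint64_t chunk_offset = offset % CHUNK_SIZE;    // byte range within it

    struct chunk_location loc = master_lookup(file, chunk_index);

    // The data transfer goes straight to a (closest) replica;
    // the master is never on the data path.
    printf("read %llu bytes at offset %llu in chunk %llu from %s\n",
           (unsigned long long)len, (unsigned long long)chunk_offset,
           (unsigned long long)loc.chunk_handle, loc.replica);
}

int main(void) {
    gfs_read("/logs/web-00", 200ULL * 1024 * 1024, 4096);   // lands in chunk 3
    return 0;
}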

38 Specifications

– Chunk size = 64 MB
– Chunks are stored as plain Unix files on the chunkserver
– Clients keep a persistent TCP connection to the chunkserver over an extended period of time (reduces network overhead)
– Clients cache all the chunk-location information to facilitate small random reads
– The master keeps the metadata in memory

Disadvantage – small files become hotspots. Solution – higher replication for such files.

39 Microsoft Data Center 4.0

http://www.youtube.com/watch?v=PPnoKb9fTkA

40 Data center container

Microsoft’s $500M Chicago data center (2009):
– > 2000 servers/container (40 ft)
– 150 containers
– 11 diesel generators, each 2.8 megawatts
– 12 chillers, each 1260 tons

41 Data center container

Google, IBM, HP, …

42 [Image-only slide]

43 Cloud computing platforms

44 Client/server computing

– Mail server/service
– File server/service
– Print server/service
– Compute server/service
– Game server/service
– Music server/service
– Web server/service
– etc.

45 Peer-to-peer (p2p) systems

– Napster
– Gnutella (LimeWire)
  – example technical challenge: self-organizing overlay network
  – technical advantage of Gnutella?
  – er … legal advantage of Gnutella?

(Data source: Digital Music News Research Group)

46 Summary

There are a number of issues to deal with:
– what is the basic abstraction?
– naming
– caching
– sharing and coherency
– replication
– performance

There is no right answer! Different systems make different tradeoffs.

47 Performance is always an issue:
– there is always a tradeoff between performance and the semantics of file operations (e.g., for shared files)

Caching of file blocks is crucial in any file system:
– maintaining coherency is a crucial design issue

Newer systems are dealing with issues such as disconnected operation for mobile computers.

48 Service-Oriented Architecture

How do you allow hundreds of developers to work on a single website?

49 Amazon.com: the beginning

Initially, one web server (Obidos) and one database.

[Figure: Internet → Obidos → database]

Details: the front end consists of a web server (Apache) and “business logic” (Obidos).

50 Amazon: success disaster!

[Figure: Internet → load balancer → multiple Obidos servers → database]

Use redundancy to scale up and improve availability.

51 Obidos

Obidos was a single monolithic C application that comprised most of Amazon.com’s functionality. During scale-up, this model began to break down.

52 Problem #1: branch management

Merging code across branches becomes untenable.

[Figure: HelloWorld.c on development and release branches – blue changes depend on red changes (which may depend on other changes …)]

53 Problem #2: debugging

On a failure, we would like to inspect what happened “recently”:
– but the change log contains numerous updates from many groups

Bigger problem: lack of isolation
– a change by one group can impact others

54 Problem #3: linker failure

Obidos grew so large that standard build tools were failing.

55 Service-Oriented Architecture (1)

First, decompose the monolithic web site into a set of smaller modules, called services. Examples:
– recommendation service
– price service
– catalogue service
– and MANY others

56 Sidebar: modularity

Information hiding (Parnas, 1972): the purpose of a module is to hide secrets.

Benefits of modularity:
– groups can work independently: less “synchronization overhead”
– ease of change: we are free to change the hidden secrets
– ease of comprehension: can study the system at a high level of abstraction

public interface List {
    // This can be an array, a linked list,
    // or something else.
}

57 Systems and information hiding

There is often a tension between performance and information hiding. In OSes, performance often wins:

struct buffer {
    // DO NOT MOVE these fields!
    // They are accessed by inline assembly that
    // assumes the current ordering.
    struct buffer* next;
    struct buffer* prev;
    int size;
    ...
};

58 Service-Oriented Architectures (2)

Modularity + a network:
– services live on disjoint sets of machines
– services communicate using RPC (remote procedure call)

59 Remote Procedure Call

RPC exposes a programming interface across machines:

interface PriceService {
    float getPrice(long uniqueID);
}

[Figure: the client calls getPrice(), and the call is carried across the network to the server, where it is dispatched to PriceImpl.]
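
A sketch in C of what happens under the covers (names invented; real systems generate this marshalling code from the interface definition): the client-side stub packs the arguments into a message, ships it to the server, and unpacks the reply, so the caller sees an ordinary function call.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct rpc_msg {
    char    method[32];
    int64_t arg;          // uniqueID
    float   result;       // price
};

// Server-side dispatcher; stands in for the network transport here.
static void server_dispatch(struct rpc_msg *m) {
    if (strcmp(m->method, "getPrice") == 0)
        m->result = 9.99f;                  // really: PriceImpl.getPrice(m->arg)
}

// Client-side stub: looks like a local call to the caller.
static float getPrice(int64_t uniqueID) {
    struct rpc_msg m = { "getPrice", uniqueID, 0.0f };
    server_dispatch(&m);                    // really: send/receive over a socket
    return m.result;
}

int main(void) {
    printf("price = %.2f\n", getPrice(12345));
    return 0;
}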

60 SOA, visualized

[Figure: the Website calls the Shopping Cart, Price, Recommendation, and Catalogue services.]

All services reside on separate machines. All invocations are remote procedure calls.

61 Benefits of SOA

Modularity and service isolation:
– this extends all the way down to the OS, programming language, build tools, etc.

Better visibility:
– administrators can monitor the interactions between services

Better resource accounting:
– who is using which resources?

62 Performance issues

A web page can require dozens of service calls:
– the RPC system must be high-performance

Metrics of interest:
– throughput
– latency, both the average and the variance

63 SLAs

Service performance is dictated by contracts called Service Level Agreements (SLAs). For example, service Foo must:
– have 4 9’s of availability (up 99.99% of the time)
– have a median latency of 50 ms
– have a 3 9’s latency of 200 ms (99.9% of requests complete within 200 ms)
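
As a sketch of how the latency side of such an SLA might be checked against measured samples (the thresholds come from the slide; the nearest-rank percentile below is a simplification of real monitoring):

#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

// Nearest-rank percentile over a sample of latencies, simplified.
static double percentile(double *samples, size_t n, double p) {
    qsort(samples, n, sizeof *samples, cmp_double);
    return samples[(size_t)(p * (n - 1))];
}

int main(void) {
    double lat_ms[] = { 12, 48, 35, 220, 51, 47, 44, 49, 180, 42 };
    size_t n = sizeof lat_ms / sizeof lat_ms[0];
    printf("median: %.0f ms (SLA: <= 50 ms)\n", percentile(lat_ms, n, 0.50));
    printf("p99.9:  %.0f ms (SLA: <= 200 ms)\n", percentile(lat_ms, n, 0.999));
    return 0;
}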

64 Amazon and web services

Allow third parties to use some (but not all) of the Amazon platform.

[Figure: a third-party site (Sleds.com) calls into Amazon.com’s front-end website, Catalogue, Order Processing, and Shopping Carts services.]

65 Searching on a Web Site

66 Searching through a web service

// Client proxy classes (AWSECommerceService, ItemSearch, etc.) are
// generated from the Amazon E-Commerce Service WSDL.
using System;

class Program {
    static void Main(string[] args) {
        AWSECommerceService service = new AWSECommerceService();
        ItemSearch request = new ItemSearch();
        request.SubscriptionId = "0525E2PQ81DD7ZTWTK82";
        request.Request = new ItemSearchRequest[1];
        request.Request[0] = new ItemSearchRequest();
        request.Request[0].ResponseGroup = new string[] { "Small" };
        request.Request[0].SearchIndex = "Books";
        request.Request[0].Author = "Tom Clancy";
        ItemSearchResponse response = service.ItemSearch(request);
        Console.WriteLine(response.Items[0].Item.Length +
                          " books written by Tom Clancy found.");
    }
}

67 Other web services

Google:
– Calendar
– Maps
– Charts

Amazon infrastructure services (cloud):
– Simple storage (disk)
– Elastic compute cloud (virtual machines)
– SimpleDB

Facebook, eBay, …

