Slide 1: Workstation Clusters (COS 461, Fall 1997)
• replace big mainframe machines with a group of small, cheap machines
• get the performance of big machines on the cost curve of small machines
• technical challenges:
  – meeting the performance goal
  – providing a single system image
Slide 2: Supporting Trends
• economics
  – the consumer PC market leads to economies of scale and fierce competition among suppliers
    » result: lower cost
  – Gordon Bell's rule of thumb: double manufacturing volume, cut cost by 10%
• technology
  – PCs are big enough to do interesting things
  – networks have gotten really fast
Slide 3: Models
• machines on desks
  – pool resources among everybody's desktop machines
• virtual mainframe
  – build a "cluster system" that sits in a machine room
  – use dedicated PCs and a dedicated network
  – run special-purpose software
Slide 4: Model Comparison
• advantage of machines on desks
  – no hardware to buy
• advantages of the virtual mainframe
  – no change to the client OS
  – more reliable and secure
  – easier resource allocation
  – better network performance
Slide 5: Resource Pooling
• CPU
  – run each process on the best machine
  – stay close to the user
  – balance load
• memory
  – use idle memory to store VM pages and cached disk blocks
• storage
  – distributed file system (already covered)
Slide 6: CPU Pooling
• How should we decide where to run a computation?
• How can we move computations between machines?
• How should shared resources be allocated?
Slide 7: Efficiency of Distributed Scheduling
• queueing theory predicts performance
• assume:
  – 10 users
  – each user creates jobs randomly at rate C
  – each machine finishes jobs randomly at rate F
• compare three configurations:
  – a separate machine for each user
  – 10 machines with distributed scheduling
  – a single super-machine (10x faster)
Slide 8: Predicted Response Time
• separate machines: 1/(F - C)
• super-machine: 1/(10(F - C))
• pooled machines: between the other two
  – like separate machines under light load
  – like the super-machine under heavy load
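These predictions follow from standard queueing formulas: each separate machine is an M/M/1 queue, the super-machine is a 10x-faster M/M/1, and the pooled cluster behaves like an M/M/10 queue. A minimal Python sketch of the comparison (the Erlang C helper and function names are mine, not from the slides):

```python
from math import factorial

def erlang_c(c, a):
    """Probability that an arriving job must wait in an M/M/c queue
    with c servers and offered load a = lambda/mu."""
    rho = a / c
    top = a**c / factorial(c)
    bottom = (1 - rho) * sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def response_times(C, F, n=10):
    """Mean response times for the three slide configurations.
    C = per-user job arrival rate, F = per-machine service rate."""
    separate = 1 / (F - C)                       # each user: its own M/M/1
    super_machine = 1 / (n * (F - C))            # one n-times-faster M/M/1
    a = n * C / F                                # offered load for M/M/n
    pooled = erlang_c(n, a) / (n * F - n * C) + 1 / F
    return separate, pooled, super_machine

for C in (0.1, 0.5, 0.9):    # light, medium, heavy load (F = 1)
    s, p, m = response_times(C, F=1.0)
    print(f"load {C:.1f}: separate {s:5.2f}  pooled {p:5.2f}  super {m:5.2f}")
```

Running it shows the slide's claim numerically: at light load the pooled cluster's response time is close to a separate machine's (about 1/F), while at heavy load it approaches the super-machine's.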
Slide 9: Independent Processes
• simplest method (on vanilla Unix), sketched below:
  – monitor the load average of every machine
  – when a new process is created, place it on the least-loaded machine
  – processes never move
• pro: simple
• con: load is rebalanced only when new processes are created, and Unix isn't location-transparent
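A minimal sketch of this placement rule; load_average is a hypothetical probe standing in for whatever load monitor the system uses:

```python
def place_process(machines, load_average):
    """Pick the machine with the lowest current load average.
    `machines` is a list of host names; `load_average` is a callable
    that probes one host (hypothetical stand-in for a load monitor).
    The process then stays on the chosen host: processes never move."""
    return min(machines, key=load_average)

# example with canned load data
loads = {"node1": 2.4, "node2": 0.3, "node3": 1.1}
print(place_process(list(loads), loads.get))   # -> node2
```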
Slide 10: Location Transparency
• principle: a process should see itself as running on the machine where it was created
• location dependencies: process IDs, parts of the file system, sockets, etc.
• usual solution:
  – run a "proxy" process on the machine where the process was created
  – "system calls" become RPCs to the proxy
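A minimal sketch of the proxy idea, using Python's xmlrpc modules as a stand-in RPC layer; both sides run in one process here for demonstration, and the forwarded calls are illustrative:

```python
import os, threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# --- proxy on the home machine: services forwarded "system calls" ---
def run_home_proxy(port=8000):
    server = SimpleXMLRPCServer(("localhost", port), logRequests=False)
    server.register_function(os.getpid, "getpid")            # home-node pid
    server.register_function(lambda p: open(p).read(), "read_file")
    threading.Thread(target=server.serve_forever, daemon=True).start()

# --- migrated process: location-dependent calls become RPCs home ---
run_home_proxy()
home = ServerProxy("http://localhost:8000")
print("pid as the process sees it:", home.getpid())   # the home machine answers
```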
Slide 11: Process Migration
• idea: move running processes around to balance load
• problems:
  – how to move a running process
  – when to migrate
  – how to gather load information
Slide 12: Moving a Process
• steps:
  – stop the process, saving all of its state into memory
  – move the memory image to another machine
  – reactivate the memory image there
• problems:
  – can't move to a machine with a different architecture or OS
  – the image is big, so it is expensive to move
  – a proxy process must be set up on the home machine
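Real migration moves the raw memory image; a rough Python analogue checkpoints interpreter-level state with pickle instead (the toy "process" below is illustrative):

```python
import pickle

def run(state, steps):
    """A toy 'process': its state is a plain dict so it can be checkpointed."""
    for _ in range(steps):
        state["total"] += state["next"]
        state["next"] += 1
    return state

state = run({"total": 0, "next": 1}, steps=500)   # run on machine A

# --- migrate: stop, save state to a byte image, ship it, reactivate ---
image = pickle.dumps(state)          # "save all state into memory"
# ...send `image` over the network to machine B...
state_b = pickle.loads(image)        # "reactivate the memory image"
state_b = run(state_b, steps=500)    # resume where machine A stopped
print(state_b["total"])              # sum of 1..1000 = 500500
```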
Slide 13: Migration Policy
• migration is expensive, so do it rarely
• migration balances load, so do it often
• many policies exist
• typical design: let an imbalance persist for a while before migrating
  – the "patience time" is several times the cost of a migration
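A minimal sketch of a patience-time policy; the 4x multiplier, the imbalance threshold, and the cost figure are illustrative assumptions:

```python
MIGRATION_COST = 2.0                 # assumed seconds to move a process
PATIENCE = 4 * MIGRATION_COST        # tolerate imbalance this long first

def should_migrate(imbalance_history, now, threshold=1.5):
    """Migrate only if the load imbalance has exceeded `threshold`
    continuously for at least PATIENCE seconds.
    `imbalance_history` is a list of (timestamp, imbalance) samples."""
    for t, imbalance in reversed(imbalance_history):
        if imbalance < threshold:
            return False             # imbalance was recently acceptable
        if now - t >= PATIENCE:
            return True              # it has persisted long enough
    return False

samples = [(t, 2.0) for t in range(10)]    # imbalanced for 10 seconds
print(should_migrate(samples, now=10))     # True: 10 s > 8 s of patience
```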
Slide 14: Pooling Memory
• some machines need more memory than they have; some need less
• let machines use each other's memory for:
  – virtual-memory backing store
  – the disk block cache
• assume (for now) that all nodes use distinct pages and disk blocks
Slide 15: Failure and Memory Pooling
• remotely stored pages might be lost in a crash
• solution: make remote memory servers stateless
• only store pages you can afford to lose:
  – for virtual memory: write the page to local disk, then store a copy in remote memory
  – for disk blocks: only store "clean" blocks in remote memory
• drawback: no reduction in writes
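A minimal sketch of the "only store what you can afford to lose" rule; the RemoteCache class and its methods are hypothetical:

```python
class RemoteCache:
    """Remote memory used only for data that survives losing the cache:
    pages already written to local disk, and clean (unmodified) blocks."""
    def __init__(self):
        self.pages = {}

    def put_vm_page(self, page_id, data, on_local_disk):
        if not on_local_disk:
            raise ValueError("write the page to local disk first")
        self.pages[page_id] = data          # losing this copy is harmless

    def put_disk_block(self, block_id, data, dirty):
        if dirty:
            return                          # dirty blocks stay home
        self.pages[block_id] = data

cache = RemoteCache()
cache.put_disk_block("blk7", b"...", dirty=True)      # silently skipped
cache.put_vm_page("pg3", b"...", on_local_disk=True)
print(sorted(cache.pages))                            # ['pg3']
```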
Slide 16: Local Memory Management
[diagram: each machine's memory is divided into locally-used pages and a global page pool; within each part, LRU replacement is used]
Slide 17: Issues
• how to divide space between the local and global pools
  – goal: throw away the least recently used stuff
    » keep an (approximate) timestamp of the last access to each page
    » throw away the oldest page
• what to do with thrown-away pages
  – really throw them away, or migrate them to another machine?
  – if migrating, where to?
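A minimal sketch of the timestamp-based victim choice, with an approximate last-access time kept per page:

```python
import time

page_last_access = {}                 # page id -> approximate timestamp

def touch(page):
    """Record an (approximate) last-access time for the page."""
    page_last_access[page] = time.monotonic()

def pick_victim():
    """Evict the least recently used page across both pools."""
    return min(page_last_access, key=page_last_access.get)

for p in ("a", "b", "c"):
    touch(p)
touch("a")                            # "a" is now the most recent
print(pick_victim())                  # -> "b": the oldest timestamp
```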
Slide 18: Random Migration
• when evicting a page:
  – throw it away with probability P
  – otherwise, migrate it to a random machine
    » which may immediately have to evict it again
• good: simple, local decisions; generally does OK when load is reasonably balanced
• bad: does 1/P times as much work as necessary; makes bad decisions when load is imbalanced
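A minimal sketch of the random policy; P = 0.25 is an arbitrary choice:

```python
import random

def evict(page, machines, p_discard=0.25):
    """Random migration: discard the page with probability P,
    otherwise forward it to a uniformly random machine
    (which may immediately have to evict it again)."""
    if random.random() < p_discard:
        return None                           # the page is gone
    return random.choice(machines)            # the page's new home

machines = ["node1", "node2", "node3"]
print(evict("pg42", machines))
```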
Slide 19: N-chance Forwarding
• forward a page N times before discarding it
• forward to random places
• improvement:
  – gather hints about the oldest pages on other machines
  – use the hints to bias the decision about where to forward pages
• does a little better than random
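A minimal sketch, assuming the per-page hop count travels with the page; N = 3 is arbitrary. The loop simulates a page being evicted again at each machine it lands on:

```python
import random

N_CHANCES = 3

def evict_n_chance(page, hops, machines):
    """N-chance forwarding: a page survives N evictions, bouncing to a
    random machine each time, and is discarded on eviction N+1."""
    if hops >= N_CHANCES:
        return None, hops                     # out of chances: discard
    return random.choice(machines), hops + 1  # forward, count the hop

machines = ["node1", "node2", "node3"]
where, hops = "node1", 0
while where is not None:
    print(f"page on {where} after {hops} hop(s)")
    where, hops = evict_n_chance("pg42", hops, machines)
```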
Slide 20: Global Memory Management
• idea: always throw away a page that is one of the very oldest in the whole cluster
• periodically, gather state:
  – mark the oldest 2% of pages as "old"
  – count the number of old pages on each machine
  – distribute the counts to all machines
• each machine now has an idea of where the old pages are
Slide 21: Global Memory Management (continued)
• when evicting a page:
  – throw it away if it is old
  – otherwise, pick a machine to forward it to
    » the probability of sending to machine M is proportional to M's count of old pages
• when a node that had old pages runs out of them, stop and regather state
• good: only throws away old pages; fewer multi-migrations
• bad: the cost of gathering state
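A minimal sketch of one epoch, with made-up timestamps; the 2% fraction is raised to 50% so the toy data set contains any old pages at all:

```python
import random

def gather_state(all_pages, frac_old=0.02):
    """Epoch start: mark the globally oldest fraction of pages as 'old'
    and count how many each machine holds.  `all_pages` maps
    machine -> {page: last_access_time}."""
    everything = [(t, m, p) for m, pages in all_pages.items()
                  for p, t in pages.items()]
    everything.sort()                              # oldest first
    n_old = max(1, int(len(everything) * frac_old))
    old = {(m, p) for _, m, p in everything[:n_old]}
    counts = {m: sum(1 for mm, _ in old if mm == m) for m in all_pages}
    return old, counts

def forward_target(counts):
    """Pick a machine with probability proportional to its old-page count."""
    machines = [m for m, c in counts.items() for _ in range(c)]
    return random.choice(machines) if machines else None

def evict(machine, page, old, counts):
    if (machine, page) in old:
        return None                                # old page: discard it
    return forward_target(counts)                  # else forward toward old pages

pages = {"n1": {"a": 5, "b": 99}, "n2": {"c": 1, "d": 2}}
old, counts = gather_state(pages, frac_old=0.5)
print(counts)                                      # {'n1': 0, 'n2': 2}
print(evict("n1", "b", old, counts))               # forwards toward n2
```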
Slide 22: Virtual Mainframe
• the challenges are performance and a single system image
• lots of work on this in the commercial and research worlds
• case study: the SHRIMP project
  – two generations built here at Princeton
    » focus on the last generation
  – dual goals: parallel scientific computing and virtual-mainframe applications
Slide 23: SHRIMP-3
[diagram: applications on top; beneath them, message-passing libraries, shared virtual memory, fault tolerance, graphics, a scalable storage server, and performance measurement; at the bottom, a row of WinNT/Linux PCs, each with its own network interface]
Slide 24: Performance Approach
• a single user-level process on each machine
  – the processes cooperate to provide a single system image
  – a client can connect to any machine
• optimized user-level-to-user-level communication
  – low latency for control messages
  – high bandwidth for block transfers
Slide 25: Virtual Memory Mapped Communication
[diagram: two machines, each with virtual address spaces 1..N above network interfaces; the network connects the interfaces, so communication maps one VA space directly onto another]
Slide 26: Communication Strategy
• separate permission checking from communication:
  – establish a "mapping" once
  – move data many times
• communication looks like a local-to-remote memory copy
  – supported directly by hardware
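A minimal sketch of the map-once, send-many pattern; the VMMC class and its map/copy calls are hypothetical stand-ins for the real interface:

```python
class VMMC:
    """Toy model of virtual-memory-mapped communication: permission
    checking happens once at map time; sends are then plain copies."""
    def __init__(self):
        self.mappings = {}

    def map(self, local_buf, remote_node, remote_buf):
        # one-time permission check and mapping setup (hypothetical)
        self.mappings[local_buf] = (remote_node, remote_buf)

    def copy(self, local_buf, data):
        # no per-send checks: just a copy to the mapped destination
        node, buf = self.mappings[local_buf]   # KeyError = never mapped
        node[buf] = data

remote_memory = {}                              # stands in for the peer
vm = VMMC()
vm.map("send_buf", remote_memory, "recv_buf")   # once
for i in range(3):                              # many times
    vm.copy("send_buf", f"msg {i}")
print(remote_memory)                            # {'recv_buf': 'msg 2'}
```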
Slide 27: Higher-Level Communication
• support sockets and RPC via specialized libraries
• the calls do extra sender-to-receiver communication to coordinate the data transfer
• bottom line for sockets:
  – 15-microsecond latency
  – 90 Mbyte/sec bandwidth
  – much faster than the alternatives
Slide 28: Pulsar Storage Service
[diagram: each node runs a shared file system on top of a shared logical disk; the shared logical disk layers cooperate over the fast communication layer and store data on physical disks]
Slide 29: Single Network-Interface Image
• we want to tell clients there is just one server, even when there are many
  – so that load is balanced automatically
• methods:
  – DNS round-robin
  – IP-level routing
    » based on the IP address of the peer
    » dynamic, based on load
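Minimal sketches of both methods; the addresses are examples:

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # the cluster's nodes

# DNS round-robin: hand out server addresses in rotation
rotation = itertools.cycle(servers)
def dns_answer():
    return next(rotation)

# IP-level routing by peer address: a client always hits the same node
def route_by_peer(client_ip):
    return servers[hash(client_ip) % len(servers)]

print([dns_answer() for _ in range(4)])   # rotates through the list
print(route_by_peer("192.0.2.7"))         # deterministic per client
```

A dynamic, load-based router would replace the hash with a lookup of the currently least-loaded node, at the cost of tracking load centrally.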
Slide 30: Summary
• clusters of cheap machines can replace mainframes
  – keys: fast, flexible communication and a carefully implemented single system image
  – there is experience with databases too
• this approach is becoming mainstream
• more work is needed to make the machines-on-desks model work