240-322 Client/Server Distributed Systems, Semester 1, 2005-2006

2. Distributed Programming Concepts

Objectives:
– explain the general meaning of distributed programming beyond client/server
– look at the history of distributed programming

Overview
1. Definition
2. From Parallel to Distributed
3. Forms of Communication
4. Data Distribution
5. Algorithmic Distribution
6. Granularity
7. Load Balancing
8. Brief History of Distributed Programming

1. Definition
• Distributed programming is the spreading of a computational task across several programs, processes, or processors.
• It includes parallel (concurrent) and networked programming.
• The definition is a bit vague.

2. From Parallel to Distributed
• Most parallel languages talk about processes:
  – these can be on different processors or on different computers
• The implementor may choose to add language features that explicitly say where a process should run.
• They may also choose to address network issues (bandwidth, failure, etc.) at the language level.

• Often the resources required by programs are distributed, which means that the programs must be distributed too.

Network Transparency
• Most users want networks to be as transparent (invisible) as possible:
  – users do not want to care which machine stores their files
  – they do not want to know where a process is running

3. Forms of Communication
• 1-to-1 communication
• 1-to-many communication
These can be supported on top of shared-memory or distributed-memory platforms.

• many-to-1 communication
• many-to-many communication

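These patterns are easiest to see in code. The sketch below is not from the slides (the function and message names are illustrative): it models many-to-1 communication with several sender threads writing to one shared queue that a single receiver drains.

```python
import queue
import threading

def many_to_one(num_senders):
    """Many-to-1: several senders share one queue; one receiver drains it."""
    q = queue.Queue()

    def sender(ident):
        # Each sender process/thread posts its message to the shared channel.
        q.put("msg from sender %d" % ident)

    threads = [threading.Thread(target=sender, args=(i,))
               for i in range(num_senders)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The single receiver collects everything the senders produced.
    return [q.get() for _ in range(num_senders)]

print(sorted(many_to_one(3)))
# → ['msg from sender 0', 'msg from sender 1', 'msg from sender 2']
```

The same queue abstraction covers 1-to-many (one sender, several receivers taking items) and many-to-many (several of each), on either shared-memory threads as here or message-passing processes.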
4. Data Distribution
• Divide the input data between identical, separate processes.
• Examples:
  – database search
  – edge detection in an image
  – builders making a room with bricks

Boss–Workers
[Diagram: the boss sends part of the database to each worker (all database search engines); each worker sends back its answer.]

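A minimal sketch of the boss–workers pattern for the database-search example, assuming a toy in-memory "database" and Python's thread pool standing in for the identical worker processes (all names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical toy database; the boss splits it between the workers.
DATABASE = ["ant", "bee", "cat", "dog", "eel", "fox"]

def search_part(part, target):
    """One worker: search only its part of the database."""
    return target in part

def boss_search(target, num_workers=3):
    # The boss divides the input data into one chunk per worker...
    chunk = len(DATABASE) // num_workers
    parts = [DATABASE[i * chunk:(i + 1) * chunk]
             for i in range(num_workers - 1)]
    parts.append(DATABASE[(num_workers - 1) * chunk:])
    # ...sends each part out, then combines the answers.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        answers = pool.map(search_part, parts, [target] * num_workers)
    return any(answers)

print(boss_search("dog"))   # → True
print(boss_search("yak"))   # → False
```

Note that the same worker code runs in every worker; only the data differs, which is the defining feature of data distribution.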
• Workers often need to talk to one another.
[Diagram: the boss sends bricks to the workers (all builders); the workers talk among themselves and report when they are done.]

Boss – Eager Workers
[Diagram: the workers (all builders) ask the boss for bricks; the boss sends bricks; the workers talk among themselves.]

Things to Note
• The code is duplicated in every process.
• The maximum number of processes depends on the size of the task and the difficulty of dividing the data.
• Talking can be very hard to code.
• Talking is usually called communication, synchronisation, or cooperation.

• Communication is almost always implemented using message passing.
• How are processes assigned to processors?

5. Algorithmic Distribution
• Divide the algorithm into parallel parts / processes
  – e.g. UNIX pipes
[Diagram: dirty plates on the table → collector → washer (clean wet plates) → drier (wiped dry plates) → stacker → plates in the cupboard.]

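The plate-washing pipeline can be sketched with one thread per stage connected by queues; handing data to the next stage is exactly what "wakes up" that stage. The stage names and the sentinel-based shutdown are illustrative, not from the slides:

```python
import queue
import threading

def stage(name, inbox, outbox):
    """One pipeline stage: sleep until data arrives, transform it, pass it on."""
    while True:
        plate = inbox.get()
        if plate is None:       # sentinel: shut down and tell the next stage
            outbox.put(None)
            return
        outbox.put("%s->%s" % (plate, name))

def run_pipeline(plates):
    # Hypothetical stage names mirroring washer -> drier -> stacker.
    names = ["washed", "dried", "stacked"]
    queues = [queue.Queue() for _ in range(len(names) + 1)]
    threads = [threading.Thread(target=stage, args=(n, queues[i], queues[i + 1]))
               for i, n in enumerate(names)]
    for t in threads:
        t.start()
    for p in plates:            # the collector feeds the first stage
        queues[0].put(p)
    queues[0].put(None)
    results = []
    while (item := queues[-1].get()) is not None:
        results.append(item)
    for t in threads:
        t.join()
    return results

print(run_pipeline(["plate1", "plate2"]))
# → ['plate1->washed->dried->stacked', 'plate2->washed->dried->stacked']
```

Unlike data distribution, each stage runs *different* code; throughput is limited by the slowest stage, which motivates the "several workers per sub-task" idea on the next slide.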
Things to Note
• Talking is simple: pass data to the next process, which 'wakes up' that process.
• Talking becomes harder to code if there are loops.
• How to assign processes to processors?

Several Workers per Sub-task
• Use both algorithmic and data distribution.
• Problems: how to divide the data? how to combine the data?
[Diagram: the plate pipeline with duplicated stages, e.g. two collectors and two driers working in parallel.]

Parallelise Separate Sub-tasks
• Build a house: brick laying (b), plumbing (pl), electrical wiring (e), painting (pt)
• b | (pl & e) | pt

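One way to read the expression b | (pl & e) | pt — assuming '|' means "then" and '&' means "in parallel", which the slide does not spell out — is as a sketch where a thread pool runs the concurrent middle stage:

```python
from concurrent.futures import ThreadPoolExecutor

def build_house():
    log = []

    def task(name):
        log.append(name)    # list.append is atomic under CPython's GIL

    task("b")                              # brick laying runs first
    with ThreadPoolExecutor() as pool:     # plumbing and wiring may overlap
        pool.submit(task, "pl")
        pool.submit(task, "e")
    # leaving the `with` block waits for both sub-tasks to finish
    task("pt")                             # painting runs last
    return log

print(build_house())   # "b" first, "pt" last; "pl"/"e" in either order
```

The pool's shutdown acts as the synchronisation barrier: painting cannot start until both parallel sub-tasks are done.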
6. Granularity
The amount of data handled by a process:
• Coarse grained: lots of data per process
  – e.g. UNIX processes
• Fine grained: small amounts of data per process
  – e.g. UNIX threads, Java threads

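Granularity can be illustrated by how a fixed amount of work is split up; the `chunk` helper below is hypothetical, not from the slides:

```python
def chunk(data, grain):
    """Split work into pieces of `grain` items each; the grain size is the
    granularity, and each piece would go to one process or thread."""
    return [data[i:i + grain] for i in range(0, len(data), grain)]

work = list(range(8))
# Coarse grained: few big pieces, suited to heavyweight UNIX processes.
print(chunk(work, 4))   # → [[0, 1, 2, 3], [4, 5, 6, 7]]
# Fine grained: many small pieces, suited to lightweight threads.
print(chunk(work, 1))   # → [[0], [1], [2], [3], [4], [5], [6], [7]]
```

The trade-off: coarse grains minimise communication overhead, while fine grains expose more parallelism and make load balancing easier.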
7. Load Balancing
• How to assign processes to processors?
• We want to 'even out' the work so that each processor does about the same amount.
• But:
  – different processors have different capabilities
  – we must consider the cost of moving a process to a processor (e.g. network speed, load)

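A common way to 'even out' the work is dynamic load balancing: workers pull tasks from a shared queue, so a faster (or less loaded) processor naturally takes on more. A sketch with threads standing in for processors (all names are illustrative):

```python
import queue
import threading
from collections import Counter

def balance(num_tasks, num_workers):
    """Dynamic load balancing via a shared work queue."""
    tasks = queue.Queue()
    for t in range(num_tasks):
        tasks.put(t)
    for _ in range(num_workers):
        tasks.put(None)         # one shutdown sentinel per worker

    done = Counter()
    lock = threading.Lock()

    def worker(ident):
        # An idle worker simply grabs the next task; nobody pre-assigns work.
        while (t := tasks.get()) is not None:
            with lock:
                done[ident] += 1    # record who did the work

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return done

counts = balance(20, 4)
print(sum(counts.values()))   # → 20: every task done exactly once
```

Contrast this with static assignment (dividing the 20 tasks 5-per-worker up front), which wastes time whenever the processors have unequal capabilities.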
8. Brief History of (UNIX) Distributed Programming
• 1970s: UNIX was a multi-user, time-sharing OS
  – &, pipes
  – interprocess communication (IPC) on a single processor
• mid 1980s: System V UNIX
  – added extra IPC mechanisms: shared memory, messages, queues, etc.

• late 1970s to mid 1980s: ARPA
  – US Advanced Research Projects Agency
  – funded research that produced TCP/IP and sockets
  – added to BSD UNIX 4.2
• mid-to-late 1980s: utilities developed
  – telnet, ftp
  – r* utilities: rlogin, rcp, rsh
  – client-server model based on sockets

• 1986: System V UNIX
  – released TLI (Transport Layer Interface), a set of libraries that support OSI transport protocols
  – not widely used
• late 1980s: Sun Microsystems
  – NFS (Network File System)
  – RPC (Remote Procedure Call)
  – NIS (Network Information Services)

• early 1990s
  – POSIX threads (light-weight processes)
  – Web client-server model based on TCP/IP
• mid 1990s: Java
  – Java threads
  – Java Remote Method Invocation (RMI)
  – CORBA

• late 1990s / early 2000s
  – J2EE, .NET
  – peer-to-peer (P2P): Napster, Gnutella, etc.
  – JXTA
