1 Introduction
Outline: Statistical Multiplexing, Network Architecture, Performance Metrics (just a little)

2 Our Journey
Peterson text: Chapter 1, then Chapters 4, 5, 6, 8 (with Chapters 2, 3, 7 sprinkled in as needed). Why?

3 Building Blocks
- Nodes: PCs, special-purpose hardware, ... (hosts). A switch is a node connected to at least two links that runs software to forward data received on one link out on another.
- Links: coax cable, optical fiber, ... A link may be point-to-point or multiple-access.
Be sure to mention media access protocols when you have multiple access (maybe talk a little about Ethernet and its media access protocol). Note the difference between switch, host, and router (and that a host can also be a router).

4 Switched Networks
A network can be defined recursively as...
- two or more nodes connected by a link, or
- two or more networks connected by one or more nodes
Nodes on the inside of the cloud are called switches; nodes on the outside are called hosts. A node that sits between networks is called a router or gateway, and the result is an internetwork.
Mention that a single host can obviously have more than one network interface. The cloud can denote any type of network: point-to-point, multiple-access, or switched.

5 Routing
Just because we have physical connectivity between all hosts doesn't mean we have provided host-to-host connectivity.
- We need to be able to specify the hosts with which we wish to communicate, i.e., each host needs an address. (Note that the address is really only a name; it need not relate in any way to the physical location of the host.)
- Routers and switches between hosts need to use the address to decide how to forward messages.
- Routing: the process of systematically determining how to forward messages toward the destination node based on the destination node's address.

6 Not Just One Destination…
- Unicast: single source, single destination
- Broadcast: single source, all destinations (well, sort of)
- Multicast: single source, a whole group of destinations
- Anycast?! Why would you want this?

7 A Key Idea
We can define a network recursively as a network of networks. At the bottom layer it is implemented by some physical medium; at higher layers it is a "logical" network. Key issue: how do we assign addresses at each layer in a manner that allows us to efficiently route messages?

8 Strategies
- Circuit switching: carry bit streams (the original telephone network). A connection is established before any data is sent.
- Packet switching: store-and-forward messages (the Internet; why?). Send discrete blocks of data from node to node (called packets or messages).
Why packet switching versus circuit switching? Efficiency for data traffic, and also reliability in the face of a downed link (recall that the Internet was originally a military network, sort of like the Interstate system). Note that these can be logical things too (as we'll see).

9 Virtual Circuit Switching
- Explicit connection setup (and tear-down) phase
- Subsequent packets follow the same circuit
- Sometimes called the connection-oriented model
- Analogy: a phone call
- Each switch maintains a VC table
Advantages? Disadvantages?
(Diagram: Host A reaches Host B through Switch 1, Switch 2, and Switch 3, with a virtual-circuit identifier assigned on each link.)
Advantages: the mechanics of forwarding packets are easier (and thus quicker). Disadvantages: explicit setup/teardown, and what happens if a link goes down and then comes back up?
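To make the VC table concrete, here is a minimal sketch (not from the course materials) of how a switch might map an incoming (port, VCI) pair to an outgoing (port, VCI) pair. The structure names, table contents, and linear search are assumptions for illustration; a real switch fills this table during connection setup.

```c
#include <stdio.h>

/* Hypothetical VC table entry: maps (incoming port, incoming VCI)
 * to (outgoing port, outgoing VCI). Field names are illustrative. */
struct vc_entry {
    int in_port, in_vci;
    int out_port, out_vci;
};

/* A tiny static table; a real switch would populate this during
 * the connection-setup phase. */
static struct vc_entry vc_table[] = {
    { 0, 5, 1, 11 },  /* arrives on port 0 with VCI 5, leaves on port 1 with VCI 11 */
    { 1, 7, 3, 4  },
};

/* Look up the outgoing port/VCI for a packet; returns 0 on success. */
int vc_forward(int in_port, int in_vci, int *out_port, int *out_vci)
{
    for (size_t i = 0; i < sizeof(vc_table) / sizeof(vc_table[0]); i++) {
        if (vc_table[i].in_port == in_port && vc_table[i].in_vci == in_vci) {
            *out_port = vc_table[i].out_port;
            *out_vci  = vc_table[i].out_vci;
            return 0;
        }
    }
    return -1;  /* no circuit established: drop the packet */
}

int main(void)
{
    int port, vci;
    if (vc_forward(0, 5, &port, &vci) == 0)
        printf("forward on port %d with VCI %d\n", port, vci);
    return 0;
}
```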

10 Datagram Switching
- No connection setup phase
- Each packet is forwarded independently
- Sometimes called the connectionless model
- Analogy: the postal system
- Each switch maintains a forwarding (routing) table
(Diagram: Hosts A through H attached to Switches 1, 2, and 3; each switch forwards packets based on the destination address.)
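As a contrast with the VC sketch above, here is a similarly minimal and purely illustrative datagram forwarding table, keyed by destination address. Addresses are plain strings and the host names are invented, only to keep the example short; real tables hold address prefixes and are built by a routing protocol.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical forwarding table entry: destination address -> output port. */
struct fwd_entry {
    const char *dest;
    int out_port;
};

static struct fwd_entry fwd_table[] = {
    { "hostB", 1 },
    { "hostC", 2 },
    { "hostD", 2 },
};

/* Return the output port for a destination, or -1 if unknown. */
int fwd_lookup(const char *dest)
{
    for (size_t i = 0; i < sizeof(fwd_table) / sizeof(fwd_table[0]); i++)
        if (strcmp(fwd_table[i].dest, dest) == 0)
            return fwd_table[i].out_port;
    return -1;
}

int main(void)
{
    printf("hostC -> port %d\n", fwd_lookup("hostC"));
    return 0;
}
```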

11 Multiplexing
- (Synchronous) Time-Division Multiplexing (STDM)
- Frequency-Division Multiplexing (FDM)
(Diagram: flows L1, L2, L3 share the link between Switch 1 and Switch 2 to reach R1, R2, R3.)
STDM: time is broken into cycles, and each flow gets to transmit during its slot in the cycle. FDM: each flow gets a given frequency band and transmits only in its band. Problem: wasted resources. In STDM a flow may hold a slot while it has nothing to send (think of a traffic light), and the same goes for FDM, since a given frequency band may be idle (so frequency hopping can be used; this is done statistically to try to minimize "collisions"). Also, for both techniques you need to know the maximum number of flows ahead of time. Note that in either technique, bandwidth can be wasted!
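A toy simulation, not from the slides, of why fixed slots waste bandwidth: each flow owns one slot per cycle whether or not it has anything to send. The flow counts are made up.

```c
#include <stdio.h>

/* Toy STDM illustration: 3 flows, each owns one slot per cycle.
 * pending[i] is how many packets flow i has queued. */
int main(void)
{
    int pending[3] = { 2, 0, 5 };   /* flow 1 has nothing to send */
    int wasted = 0;

    for (int cycle = 0; cycle < 4; cycle++) {
        for (int flow = 0; flow < 3; flow++) {
            if (pending[flow] > 0) {
                pending[flow]--;
                printf("cycle %d: slot for flow %d carries a packet\n", cycle, flow);
            } else {
                wasted++;           /* the slot goes by unused */
                printf("cycle %d: slot for flow %d is wasted\n", cycle, flow);
            }
        }
    }
    printf("wasted slots: %d\n", wasted);
    return 0;
}
```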

12 Statistical Multiplexing
- On-demand time-division (rather than a specific time slot)
- Schedule the link on a per-packet basis
- Packets from different sources are interleaved on the link
- Buffer packets that are contending for the link
- Buffer (queue) overflow is called congestion
Fairly allocating link capacity and dealing with congestion are key issues here! May want to mention things like head-of-line blocking and queue placement in switches.
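A minimal sketch, illustrative only, of the buffering idea: contending packets go into a bounded queue, and when the queue is full the switch has to drop, which is exactly the congestion the slide mentions. The tiny queue size is chosen just to force an overflow.

```c
#include <stdio.h>

#define QUEUE_CAP 4   /* assumed tiny buffer, to force an overflow */

/* A bounded FIFO of packet ids; when it is full, arrivals are dropped. */
static int queue[QUEUE_CAP];
static int head = 0, count = 0;

int enqueue(int pkt)
{
    if (count == QUEUE_CAP)
        return -1;                       /* buffer overflow: congestion */
    queue[(head + count) % QUEUE_CAP] = pkt;
    count++;
    return 0;
}

int dequeue(void)
{
    int pkt = queue[head];
    head = (head + 1) % QUEUE_CAP;
    count--;
    return pkt;
}

int main(void)
{
    /* Sources burst 6 packets at a link that has not drained yet. */
    for (int pkt = 1; pkt <= 6; pkt++)
        if (enqueue(pkt) < 0)
            printf("packet %d dropped (queue full)\n", pkt);

    while (count > 0)
        printf("transmitting packet %d\n", dequeue());
    return 0;
}
```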

13 IPC Abstractions
Stream-based:
- video: a sequence of frames
- video applications: on-demand video, video conferencing
Request/Reply:
- distributed file systems
- digital libraries (the web)
Request/Reply is really the client/server type. Properties you might want include reliable message delivery (they all do arrive), only one copy received, and perhaps protection of privacy and data integrity so there is no eavesdropping. Stream-based channels might not need a delivery guarantee (video is fine with a few missing frames) but would probably require that packets maintain send order, and might be parameterized to support different delay properties or one-way versus two-way streams.
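In socket terms these two abstractions roughly map onto TCP stream sockets and UDP datagram sockets; the mapping is mine, not the slide's, but the calls below are the standard API. A minimal sketch of creating each:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    /* A reliable byte stream (TCP): a reasonable fit for stream-based channels. */
    int stream_fd = socket(AF_INET, SOCK_STREAM, 0);

    /* Unreliable datagrams (UDP): often the starting point for request/reply,
     * with the application adding its own retransmission if needed. */
    int dgram_fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (stream_fd < 0 || dgram_fd < 0) {
        perror("socket");
        return 1;
    }
    printf("stream socket fd=%d, datagram socket fd=%d\n", stream_fd, dgram_fd);

    close(stream_fd);
    close(dgram_fd);
    return 0;
}
```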

14 Layering
- Use abstractions to hide complexity
- Abstractions naturally lead to layering
- Alternative abstractions at each layer
(Diagram of the layered view: application programs on top; a request/reply channel and a message stream channel side by side beneath them; host-to-host connectivity below that; hardware at the bottom.)

15 Protocols
- Building blocks of a network architecture
- Each protocol object has two different interfaces:
  - service interface: operations on this protocol
  - peer-to-peer interface: messages exchanged with the peer
- The term "protocol" is overloaded: it can mean the specification of the peer-to-peer interface, or the module that implements this interface
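One way to picture the two interfaces in code (an assumption of mine, not the textbook's implementation): the peer-to-peer interface is the header format the two ends agree on, while the service interface is the set of operations the protocol exposes to the layer above. All names and field sizes here are invented.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Peer-to-peer interface: the header format the two ends of this
 * protocol agree on. Field names and sizes are invented. */
struct rrp_header {
    uint16_t msg_id;
    uint16_t flags;
    uint32_t length;
};

/* Service interface: the operations this protocol offers to the layer
 * above it. The layer above calls these; it never builds rrp_header itself. */
struct rrp_service {
    int  (*open)(const char *peer_addr);
    int  (*send)(int handle, const void *buf, size_t len);
    void (*close)(int handle);
};

int main(void)
{
    /* Nothing is sent here; the point is that the wire format (peer
     * interface) and the set of operations (service interface) are
     * two separate artifacts. */
    printf("rrp_header is %zu bytes on the wire\n", sizeof(struct rrp_header));
    return 0;
}
```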

16 Interfaces
(Diagram: on Host 1 and Host 2, a high-level object sits above a protocol object. The boundary between the high-level object and the protocol below it is the service interface; the boundary between the two protocol peers across the hosts is the peer-to-peer interface.)

17 Protocol Machinery
Protocol graph:
- most peer-to-peer communication is indirect
- peer-to-peer is direct only at the hardware level
(Diagram: on Host 1 and Host 2, file, digital-library, and video applications sit on top of RRP and MSP, which in turn run over HHP.)
RRP: request/reply protocol. MSP: message stream protocol. HHP: host-to-host protocol.

18 Machinery (cont)
- Multiplexing and demultiplexing (demux key)
- Encapsulation (header/body)
(Diagram: on Host 1, the application program's data is handed to RRP, which attaches an RRP header; HHP then wraps the RRP message in its own header. On Host 2 each layer strips its header in turn.)
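A minimal sketch, with invented names and header fields, of encapsulation: the lower protocol prepends its header to whatever the higher layer handed it, and the receiver strips the header off and demultiplexes on a key carried inside it.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Invented RRP header: the demux key tells the receiver which
 * higher-level application the message belongs to. */
struct rrp_hdr {
    uint16_t demux_key;
    uint16_t length;
};

/* Encapsulate: copy header followed by body into one outgoing buffer.
 * Returns the total number of bytes written into 'out'. */
size_t rrp_encapsulate(uint16_t key, const char *body, size_t body_len,
                       unsigned char *out)
{
    struct rrp_hdr hdr = { key, (uint16_t)body_len };
    memcpy(out, &hdr, sizeof hdr);
    memcpy(out + sizeof hdr, body, body_len);
    return sizeof hdr + body_len;
}

int main(void)
{
    unsigned char wire[128];
    size_t n = rrp_encapsulate(7, "hello", 5, wire);

    /* Receiver side: strip the header and demultiplex on the key. */
    struct rrp_hdr hdr;
    memcpy(&hdr, wire, sizeof hdr);
    printf("demux key %u, body of %u bytes (%zu bytes on the wire)\n",
           hdr.demux_key, hdr.length, n);
    return 0;
}
```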

19 Internet Architecture
- Defined by the Internet Engineering Task Force (IETF)
- Hourglass design
- Application vs. application protocol (FTP, HTTP)
(Diagram of the hourglass: FTP, HTTP, NV, and TFTP at the top; TCP and UDP beneath them; IP at the narrow waist; network technologies NET 1 through NET n at the bottom.)

20 ISO Architecture
Note that the transport layer is "end-to-end."
(Diagram: each end host runs the full stack of Application, Presentation, Session, Transport, Network, Data link, and Physical layers; one or more nodes within the network implement only the Network, Data link, and Physical layers.)
Note a key difference between UDP and IP, and also between the network layer and the transport layer. Talk here also about the user/kernel space boundary, about network interface cards, and perhaps about the path that a message takes as it goes from a user process out onto the network. Might mention a bit of architecture, and why memory bandwidth can have a big impact on network performance (and how each copy operation limits overall performance). Increasing processor speeds combined with slowly increasing memory speeds mean that copying is a problem. See p. 51 in the text: "copying data from one buffer to another is one of the most expensive things a protocol implementation can do!" Talk about the problems with this in terms of the changing relation between processor and link speeds: in the old days the link was the bottleneck, so a big protocol stack wasn't an issue; now the problem is the protocol stack.

21 Performance Metrics
Bandwidth (throughput):
- the "width" of the pipe
- data transmitted per unit time
- link versus end-to-end
- notation: KB = 2^10 bytes, Mbps = 10^6 bits per second
Latency (delay):
- the "length" of the pipe
- time to send a message from point A to point B, OR time for a bit to travel from point A to point B (also "link latency")
- one-way versus round-trip time (RTT)
- components: Latency = Propagation + Transmit + Queue, where Propagation = Distance / c and Transmit = Size / Bandwidth
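As a quick illustration of the latency formula (all numbers are made up, not from the slides): propagation across 4,000 km of fiber plus the transmit time for a 1 KB packet on a 1 Mbps link, ignoring queueing delay.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed example values, not from the slides. */
    double distance_m    = 4.0e6;     /* 4,000 km */
    double speed_mps     = 2.0e8;     /* speed of light in fiber, roughly 2/3 of c */
    double size_bits     = 1024 * 8;  /* 1 KB packet */
    double bandwidth_bps = 1.0e6;     /* 1 Mbps link */
    double queue_s       = 0.0;       /* ignore queueing delay here */

    double propagation = distance_m / speed_mps;
    double transmit    = size_bits / bandwidth_bps;
    double latency     = propagation + transmit + queue_s;

    printf("propagation = %.1f ms, transmit = %.1f ms, latency = %.1f ms\n",
           propagation * 1e3, transmit * 1e3, latency * 1e3);
    return 0;
}
```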

22 Bandwidth versus Latency
- Relative importance: which of the two dominates depends on the application and on how much data it moves
- Infinite bandwidth: even with unlimited bandwidth, transfer time is still bounded below by the latency
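A made-up comparison (all numbers assumed) that shows the relative-importance point: for a tiny request the round-trip latency dominates, so extra bandwidth barely helps, while for a large file the bandwidth term dominates.

```c
#include <stdio.h>

/* Transfer time modeled as one RTT plus size/bandwidth; values assumed. */
static double transfer_s(double rtt_s, double size_bytes, double bw_bps)
{
    return rtt_s + (size_bytes * 8.0) / bw_bps;
}

int main(void)
{
    double rtt = 0.100;  /* 100 ms round trip */

    /* 1-byte request: latency dominates, so more bandwidth barely helps. */
    printf("1 B   at 1 Mbps:   %.4f s\n", transfer_s(rtt, 1, 1e6));
    printf("1 B   at 100 Mbps: %.4f s\n", transfer_s(rtt, 1, 100e6));

    /* 25 MB object: bandwidth dominates, so more bandwidth helps a lot. */
    printf("25 MB at 1 Mbps:   %.1f s\n", transfer_s(rtt, 25e6, 1e6));
    printf("25 MB at 100 Mbps: %.1f s\n", transfer_s(rtt, 25e6, 100e6));
    return 0;
}
```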

23 Protocol Implementation Issues
Which process model?
- Process-per-protocol model: each protocol is implemented by a separate process (thread). Sometimes logically "cleaner" (one protocol, one process), but a context switch is required at each level of the protocol graph.
- Process-per-message model: each message is handled by a single process, with each protocol a separate procedure that invokes the subsequent protocol. An order of magnitude more efficient (a procedure call is much cheaper than a context switch).
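A sketch, with invented function names, of what process-per-message looks like on the send side: one thread simply calls down the protocol graph, procedure call by procedure call, so no context switch happens between layers.

```c
#include <stdio.h>
#include <string.h>

/* Invented layer functions: the thread carrying the message just calls
 * down the stack. */
static void hhp_send(const char *msg, size_t len)
{
    printf("HHP: putting %zu bytes (\"%s\") on the wire\n", len, msg);
}

static void rrp_send(const char *msg, size_t len)
{
    printf("RRP: adding request/reply header\n");
    hhp_send(msg, len);            /* direct call, no context switch */
}

static void app_send(const char *msg)
{
    rrp_send(msg, strlen(msg));    /* the same thread does all the work */
}

int main(void)
{
    app_send("hello");
    return 0;
}
```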

24 Protocol Implementation Issues
Service interface and its relationship with the process model:
- If a high-level protocol invokes send() on a lower-level protocol, it already has the message in hand, so this is no big deal.
- If a high-level protocol invokes receive() on a lower-level protocol, it must wait for receipt of a message, which basically forces a context switch. That is no big deal if the application directly calls the network subsystem, but it is a big deal if it occurs at each layer of the protocol stack.
- Cure: the low-level protocol does an upcall (a procedure call up the stack) to deliver the message to the higher level.
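A minimal sketch, again with invented names, of the upcall idea: the higher-level protocol registers a handler, and when data arrives the lower level calls up the stack instead of the upper layer sitting blocked in receive().

```c
#include <stdio.h>
#include <stddef.h>

/* The handler type a higher-level protocol registers for upcalls. */
typedef void (*deliver_fn)(const char *msg, size_t len);

static deliver_fn registered_handler;

/* Higher level registers its handler once, instead of blocking in receive(). */
static void rrp_register(deliver_fn fn)
{
    registered_handler = fn;
}

/* Lower level: when a message arrives, call *up* the stack. */
static void hhp_packet_arrived(const char *msg, size_t len)
{
    if (registered_handler)
        registered_handler(msg, len);   /* the upcall */
}

/* The higher-level protocol's handler. */
static void rrp_deliver(const char *msg, size_t len)
{
    printf("RRP received %zu bytes: %s\n", len, msg);
}

int main(void)
{
    rrp_register(rrp_deliver);
    hhp_packet_arrived("reply", 5);     /* simulate an arriving packet */
    return 0;
}
```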

25 Protocol Implementation Issues
Message buffers:
- In the socket API, the application process provides buffers for both outbound and incoming messages. This forces the topmost protocol to copy messages to/from network buffers.
- Copying data from one buffer to another is one of the most expensive operations a protocol implementation can perform, and memory is not getting faster as quickly as processors are.
- Solution: rather than copying from buffer to buffer at each layer of the protocol stack, the network subsystem defines a message abstraction shared by all protocols in the protocol graph. Operations on it can be viewed as string manipulations with pointers.
- Note: you can't move data any faster than the slowest copy operation!
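One concrete, standard way to avoid an extra copy (my example, not the text's message abstraction): POSIX scatter/gather I/O lets a protocol hand the kernel a header and a body as separate buffers, so they never have to be copied into one contiguous message first.

```c
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    /* Header and body live in separate buffers... */
    char header[] = "HDR:";
    char body[]   = "payload bytes";

    /* ...and writev() sends them as one message without an extra copy. */
    struct iovec iov[2];
    iov[0].iov_base = header;
    iov[0].iov_len  = strlen(header);
    iov[1].iov_base = body;
    iov[1].iov_len  = strlen(body);

    /* Written to stdout here just so the example is self-contained;
     * a protocol would use a socket descriptor instead. */
    ssize_t n = writev(STDOUT_FILENO, iov, 2);
    if (n < 0)
        perror("writev");
    else
        fprintf(stderr, "\nwrote %zd bytes in one call\n", n);
    return 0;
}
```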

26 Implementation
- Are you using streams or request/reply, and what are the ramifications?
- What operating system are you coding on/for, and where is the sockets library?
- What languages can you use, in theory? Why would you wish to use specific languages?
- What will you have to do in your first assignment?

27 Implementation
Port numbers:
- Solaris: reserved (1-1023), ephemeral (32768-65535)
- BSD: reserved (1-1023), ephemeral (1024-5000), nonprivileged servers (5001-65535)
- IANA: well known (1-1023), registered (1024-49151), dynamic (49152-65535)
Endian issues
Compiler flags:
- Solaris: -lsocket -lnsl
- Linux: no flags required
Specifying command line arguments
Null characters in strings?
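On the endian point, a minimal sketch of the standard byte-order conversions: port numbers and addresses go into network byte order with htons()/htonl() before they are put into a socket address. The port value below is chosen arbitrarily for illustration.

```c
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(void)
{
    /* Example port chosen arbitrarily for illustration. */
    unsigned short port = 5432;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(port);        /* host to network, short */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* host to network, long  */

    /* Converting back with ntohs() recovers the original value. */
    printf("port on the wire: 0x%04x, back in host order: %u\n",
           addr.sin_port, ntohs(addr.sin_port));
    return 0;
}
```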

