Application Protocols

Lecture 12, Part 1

Protocol: Communication Rules
- Syntax: how we phrase the information we exchange.
- Semantics: what actions/responses are expected for the information received.
- Synchronization: whose turn it is to speak (given the semantics defined above).

Protocol
- Initialization (handshake): communication begins when one party sends an initiation message to the other party.
- Synchronization: each party sends one message at a time, in round-robin fashion.

TCP 3-Way Handshake
- Used to establish (and tear down) TCP socket connections.
- Lets two computers attempting to communicate negotiate a TCP socket connection over the network.
- Both ends can initiate and negotiate separate TCP socket connections at the same time.

TCP 3-Way Handshake (SYN, SYN-ACK, ACK)

What happens behind accept()
1. A sends a SYNchronize packet to B.
2. B receives A's SYN.
3. B sends a SYNchronize-ACKnowledgement.
4. A receives B's SYN-ACK.
5. A sends an ACKnowledge.
6. B receives the ACK. The TCP socket connection is ESTABLISHED.
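By the time accept() returns, the operating system has already completed this handshake. A minimal Java sketch (the port number 7777 is an arbitrary choice, not from the slides):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class AcceptExample {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(7777)) {
            // accept() blocks until the kernel has completed the 3-way handshake
            // with some client; the returned Socket is already ESTABLISHED.
            Socket client = serverSocket.accept();
            System.out.println("connected: " + client.getRemoteSocketAddress());
            client.close();
        }
    }
}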

Message Format
Protocol syntax: the message is the atomic unit of data exchanged throughout the protocol.
Analogy: message = letter. For now, we concentrate on the delivery mechanism.

Framing
Streaming protocols (TCP) do not separate between different messages: all messages are sent on the same stream, one after the other, and the receiver must distinguish between them.
Solution: message framing - taking the content of the message and encapsulating it in a frame (letter in an envelope).

Framing – what is it good for?
- The sender and receiver agree on the framing method beforehand; framing is part of the message format/protocol.
- Framing enables the receiver to discover, within a stream of bytes, where a message starts and ends.

Framing – how?
Simple framing protocols for strings:
- A special FRAMING character (e.g., a line break): each message is framed by a FRAMING character at its beginning and end; the message itself must not contain the FRAMING character.
- A special tag at the start and end of each message, e.g., <begin> / <end> strings; the message body must not contain <begin> / <end>.

Framing – how?
A framing protocol can also employ a variable-length message format:
- A special tag marks the start of a frame.
- The message itself contains information on the message's length.
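A minimal sketch of length-prefixed framing, assuming a 4-byte length header written before each payload (the helper names writeFrame/readFrame are hypothetical):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class LengthPrefixedFraming {

    // Writes one frame: a 4-byte big-endian length followed by the payload bytes.
    public static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Reads one frame: the declared length, then exactly that many payload bytes.
    public static byte[] readFrame(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload);
        return payload;
    }
}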

Textual Data
- Many protocols exchange data in textual form: strings of characters in some character encoding (e.g., UTF-8).
- Very easy to document and debug: just print the messages.
- Limitation: difficult to send non-textual data. How do we send a picture? A video? An audio file?

Binary Data
Non-textual data is called binary data.
All data is eventually encoded in "binary" format, as a sequence of bits; here "binary data" means data that cannot be represented as a readable string of characters.

Binary Data
Sending binary data in raw binary format over a stream protocol is dangerous: it may contain any byte sequence, and so may corrupt the framing protocol.
Option 1: use a variable-length message format.

Encoding Binary Data: Base64
Option 2: encode the binary data using an encoding algorithm.
Base64 encoding encodes binary data into a string: every 3 bytes are converted into 4 ASCII characters.
Used by many "standard" protocols (e.g., email, to encode file attachments of any type of data).

Encoding Binary Data
Advantage: any stream of bytes can be "framed" as ASCII data, regardless of the character encoding used by the protocol.
Disadvantage: the size of the message increases by about 33% (4 output characters for every 3 input bytes).
(We will use the UTF-8 encoding scheme.)

Encoding using Poco
In C++, the Poco library includes a module for encoding/decoding byte arrays into/from Base64-encoded ASCII data.
The functionality is modeled as a stream "filter" that performs encode/decode on all data flowing through the stream: the classes Base64Encoder / Base64Decoder.

Encoding in Java
The iharder library: modeled as stream filters (wrappers around Java Input/Output streams).
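As an alternative to the iharder library, the JDK itself ships java.util.Base64 (since Java 8). A minimal sketch:

import java.util.Base64;

public class Base64Example {
    public static void main(String[] args) {
        byte[] binary = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE};

        // Encode arbitrary bytes into a printable ASCII string.
        String encoded = Base64.getEncoder().encodeToString(binary);
        System.out.println(encoded); // prints "yv66vg=="

        // Decode the string back into the original bytes.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(decoded.length); // prints 4
    }
}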

Protocol and Server Separation
Code reuse is one of our design goals!
- A generic server implementation, which handles all communication details.
- A generic protocol interface, which handles incoming messages, implements the protocol's semantics, and generates the reply messages.

Protocol-Server Separation: the Protocol Object
The protocol object is in charge of implementing the expected behavior of our server:
- What actions should be performed upon the arrival of a request.
- Requests may be correlated to one another, meaning the protocol should keep an appropriate state per client, e.g., authentication (logins).

A software architecture that separates tasks into separate interfaces

The actions that need to be performed by the server:
- Accept new connections.
- Receive new bytes from the connected client.
- Parse the bytes into messages (called "de-serialization", "unframing", or "decoding").
- Dispatch the message to the right method to execute whatever the request specifies.
- Send back the answer.

Interfaces & Classes
We define the following interfaces:
- ConnectionHandler: handles incoming messages from the client. Holds the Socket, the MessageEncoderDecoder and the MessagingProtocol instances.
- MessageEncoderDecoder: implements the protocol's syntax - encoding and decoding messages from and to bytes. For simplicity we will encode messages as UTF-8 text.
- MessagingProtocol: implements the protocol's semantics - handling the received messages and generating the appropriate responses.

MessageEncoderDecoder
- In charge of parsing a stream of bytes into a stream of messages, and vice versa.
- It is a filter that we put between the Socket input stream and the protocol.
- The protocol receives the messages from the EncoderDecoder and does not access the streams directly.
- This way we can replace the messaging format while keeping the protocol.

MessageEncoderDecoder
The MessageEncoderDecoder works in a byte-by-byte fashion. If we decode a byte that, together with the previous bytes, represents a complete message:
- We return the message.
- We restart the decoding procedure.

MessageEncoderDecoder
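A sketch of how the interface might look (the method names decodeNextByte and encode are assumptions for illustration):

public interface MessageEncoderDecoder<T> {

    // Adds the next byte to the decoding process.
    // Returns a complete message if this byte finishes one, or null otherwise.
    T decodeNextByte(byte nextByte);

    // Encodes (and frames) the given message into bytes ready to be written to the socket.
    byte[] encode(T message);
}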

MessagingProtocol
- Receives incoming messages and executes the actions requested by the client.
- Needs to look at the message and decide what needs to be done; the decision may depend on the state of the client (e.g., is it authenticated?).
- Once the action is performed, we expect to get an answer to send back to the client.

MessagingProtocol
We allow any type of message (T) to be used.
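A sketch of how the generic interface might look (the method names process and shouldTerminate are assumptions for illustration):

public interface MessagingProtocol<T> {

    // Processes the given message and returns a response to send back,
    // or null if no response is needed.
    T process(T msg);

    // Returns true if the connection should be terminated (e.g., after a "bye" message).
    boolean shouldTerminate();
}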

Implementations: ConnectionHandler
- The server accepts a new connection from a client and creates a ConnectionHandler.
- Handles incoming messages from the client; holds the Socket, the MessageEncoderDecoder and the MessagingProtocol instances.
- TCP socket: a pair of InputStream and OutputStream.
- Designed to be run by its own thread (Runnable).
- It handles the connection to a client for the whole session: from the moment the connection is accepted until one of the sides closes the connection.

ConnectionHandler
Generic - works with any protocol. Receives the socket (sock) from the server (the output of accept()).
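A possible blocking implementation, assuming the interface sketches above (the class name BlockingConnectionHandler is illustrative):

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.Closeable;
import java.io.IOException;
import java.net.Socket;

public class BlockingConnectionHandler<T> implements Runnable, Closeable {

    private final Socket sock;
    private final MessageEncoderDecoder<T> encdec;
    private final MessagingProtocol<T> protocol;
    private volatile boolean connected = true;

    public BlockingConnectionHandler(Socket sock,
                                     MessageEncoderDecoder<T> encdec,
                                     MessagingProtocol<T> protocol) {
        this.sock = sock;
        this.encdec = encdec;
        this.protocol = protocol;
    }

    @Override
    public void run() {
        try (Socket s = sock;
             BufferedInputStream in = new BufferedInputStream(s.getInputStream());
             BufferedOutputStream out = new BufferedOutputStream(s.getOutputStream())) {

            int read;
            // Read byte by byte until the client disconnects or the protocol terminates.
            while (!protocol.shouldTerminate() && connected && (read = in.read()) >= 0) {
                T nextMessage = encdec.decodeNextByte((byte) read);
                if (nextMessage != null) {                   // a complete message was decoded
                    T response = protocol.process(nextMessage);
                    if (response != null) {
                        out.write(encdec.encode(response));  // frame and send the reply
                        out.flush();
                    }
                }
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    @Override
    public void close() throws IOException {
        connected = false;
        sock.close();
    }
}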

MessageEncoderDecoder implementation for the "echo server"
Echo server: the server sends back an identical copy of the data it received.
We use a framing method based on a single FRAMING character; specifically, we use '\n'.
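A possible implementation, assuming the MessageEncoderDecoder sketch above:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// '\n'-framed, UTF-8 encoded MessageEncoderDecoder for string messages.
public class LineMessageEncoderDecoder implements MessageEncoderDecoder<String> {

    private byte[] bytes = new byte[1 << 10]; // start with 1KB, grow if needed
    private int len = 0;

    @Override
    public String decodeNextByte(byte nextByte) {
        if (nextByte == '\n') {        // FRAMING character: a message is complete
            String message = new String(bytes, 0, len, StandardCharsets.UTF_8);
            len = 0;                   // restart the decoding procedure
            return message;
        }
        if (len >= bytes.length) {     // grow the buffer when it fills up
            bytes = Arrays.copyOf(bytes, len * 2);
        }
        bytes[len++] = nextByte;
        return null;                   // message not complete yet
    }

    @Override
    public byte[] encode(String message) {
        // Append the FRAMING character so the receiver can unframe the message.
        return (message + "\n").getBytes(StandardCharsets.UTF_8);
    }
}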

MessagingProtocol implementation for the "echo server"
When the server receives a message:
- It prints it on the screen (on the server side) together with the time it was received.
- Then it returns it back to the sender while repeating the last two characters a couple of times. That is, if a client sends the server the line "hello", the server responds with the line "[time] hello .. lo .. lo ..".
- It also supports a "bye" message, which causes the server to close its connection.
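A possible implementation, assuming the MessagingProtocol sketch above:

import java.time.LocalDateTime;

public class EchoProtocol implements MessagingProtocol<String> {

    private boolean shouldTerminate = false;

    @Override
    public String process(String msg) {
        shouldTerminate = "bye".equals(msg);          // "bye" closes the connection
        String now = LocalDateTime.now().toString();
        System.out.println("[" + now + "]: " + msg);  // log on the server side
        return "[" + now + "] " + createEcho(msg);    // reply sent back to the client
    }

    // "hello" -> "hello .. lo .. lo .."
    private String createEcho(String message) {
        String tail = message.substring(Math.max(message.length() - 2, 0));
        return message + " .. " + tail + " .. " + tail + " ..";
    }

    @Override
    public boolean shouldTerminate() {
        return shouldTerminate;
    }
}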

Implementations: ConnectionHandler, MessageEncoderDecoder

Concurrency Models of TCP Servers
The actual server: an object that listens for new connections and assigns them to connection handlers.
A TCP server should strive to optimize the following quality criteria:
- Scalability: the capability to serve a large number of concurrent clients.
- Low accept latency: acceptance wait time.
- Low reply latency: reply wait time after a message is received.
- High efficiency: use few resources on the server host (RAM, number of threads, CPU usage).

Generic base server implementation
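A sketch of what such a generic server might look like, assuming the classes sketched above; execute() is left abstract so that each concurrency model can decide how to run the handler:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.function.Supplier;

public abstract class BaseServer<T> implements Runnable {

    private final int port;
    private final Supplier<MessagingProtocol<T>> protocolFactory;
    private final Supplier<MessageEncoderDecoder<T>> encdecFactory;

    public BaseServer(int port,
                      Supplier<MessagingProtocol<T>> protocolFactory,
                      Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        this.port = port;
        this.protocolFactory = protocolFactory;
        this.encdecFactory = encdecFactory;
    }

    @Override
    public void run() {
        try (ServerSocket serverSock = new ServerSocket(port)) {
            while (!Thread.currentThread().isInterrupted()) {
                Socket clientSock = serverSock.accept();   // blocks until a client connects
                BlockingConnectionHandler<T> handler = new BlockingConnectionHandler<>(
                        clientSock,
                        encdecFactory.get(),    // fresh encoder/decoder per connection
                        protocolFactory.get()); // fresh protocol (and state) per connection
                execute(handler);
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    // The concurrency model: how to run the handler (same thread, new thread, pool...).
    protected abstract void execute(BlockingConnectionHandler<T> handler);
}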

Some notes
- Supplier: a Java interface with a single non-default method called get(). A factory is a supplier of objects.
- Our TCP server needs to create a new Protocol and EncoderDecoder for every connection it accepts. Since it is generic, it does not know how to create such objects.
- This problem is solved using factories: the server receives factories in its constructor that create those objects for it.

Concurrency Models
To obtain good quality, a TCP server will most often use multiple threads. We will now investigate three simple models of concurrency for servers:
- Single thread.
- Thread per client.
- Constant number of threads.

Server Model 1: Single Thread
One thread for both:
- Accepting a new client.
- Handling its requests, by calling the run() method of the passive ConnectionHandler object.
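A sketch, assuming the BaseServer sketch above:

// Single-thread model: the handler runs on the accepting thread, so the next
// client is accepted only after the current one disconnects.
import java.util.function.Supplier;

public class SingleThreadServer<T> extends BaseServer<T> {

    public SingleThreadServer(int port,
                              Supplier<MessagingProtocol<T>> protocolFactory,
                              Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        super(port, protocolFactory, encdecFactory);
    }

    @Override
    protected void execute(BlockingConnectionHandler<T> handler) {
        handler.run(); // blocks the server thread for the whole session
    }
}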

Single Thread Model: Quality
- No scalability: at any given time, it serves only one client.
- High accept latency: a second client must wait until the first client disconnects.
- Low reply latency: all resources are concentrated on serving one client.
- Good efficiency: the server uses exactly the resources needed to serve one client.
Suitable only in cases where the processing time is small (echo/linePrint).

Server Model 2: Thread per Client
- Assigns a new thread for each connected client.
- In execute(), it allocates a new thread and invokes the start() method on the Runnable ConnectionHandler object.
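A sketch, assuming the BaseServer sketch above:

import java.util.function.Supplier;

// Thread-per-client model: every accepted connection gets its own thread
// for the duration of the session.
public class ThreadPerClientServer<T> extends BaseServer<T> {

    public ThreadPerClientServer(int port,
                                 Supplier<MessagingProtocol<T>> protocolFactory,
                                 Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        super(port, protocolFactory, encdecFactory);
    }

    @Override
    protected void execute(BlockingConnectionHandler<T> handler) {
        new Thread(handler).start(); // one thread per connection
    }
}

For example, the echo server sketched earlier could be started with new ThreadPerClientServer<String>(7777, EchoProtocol::new, LineMessageEncoderDecoder::new).run() (the port number is an arbitrary choice).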

Model Quality: Scalability
- Scalability: the server can serve several concurrent clients, up to the maximum number of threads that can run in the process.
- RAM: each thread allocates a stack and thus consumes RAM; approximately 500-1000 threads can be active in a single process.
- The process does not defend itself: it keeps creating new threads, which is dangerous for the host.

Model Quality: Latency
- Low accept latency: the time from one accept() to the next is the time needed to create a new thread, which is short compared to the delay between incoming client connections.
- Reply latency: the resources of the server are spread among the concurrent connections. As long as we have a reasonable number of active connections (~hundreds), the requested load is relatively low in CPU and RAM.

Model Quality: Efficiency
- Low efficiency: the server creates a full thread per connection, while the connection may be bound to Input/Output operations.
- A ConnectionHandler thread blocked waiting for I/O still consumes the resources of the thread (RAM and a thread slot).

Server Model 3: Constant Number of Threads
- A constant number of threads (e.g., 10), provided by Java's Executor framework.
- The Runnable ConnectionHandler object is added to the task queue of a thread-pool executor.
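A sketch, assuming the BaseServer sketch above:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Fixed-thread-pool model: handlers are queued and executed by a constant
// number of worker threads.
public class FixedThreadPoolServer<T> extends BaseServer<T> {

    private final ExecutorService pool;

    public FixedThreadPoolServer(int numThreads, int port,
                                 Supplier<MessagingProtocol<T>> protocolFactory,
                                 Supplier<MessageEncoderDecoder<T>> encdecFactory) {
        super(port, protocolFactory, encdecFactory);
        this.pool = Executors.newFixedThreadPool(numThreads);
    }

    @Override
    protected void execute(BlockingConnectionHandler<T> handler) {
        pool.execute(handler); // queued until one of the pool threads is free
    }
}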

Model Quality
- Avoids crashing the host when too many clients connect at the same time.
- Up to N concurrent client connections, the server behaves as "thread-per-connection".
- Above N, accept latency will grow.
- Scalability is limited to the number of concurrent connections we believe we can support.