Technology Integration: RSerPool & Server Load-balancing
Curt Kersey, Cisco Systems
Aron Silverton, Motorola Labs

Contents
- Motivation
- Background:
  - Server Load-balancing
  - Server Feedback
  - RSerPool
- Unified approach:
  - Description
  - Sample Flows
  - Work Items

Assumptions / Terminology
- All load-balancing examples use TCP/IP as the transport protocol; this could easily be any other protocol (e.g., SCTP).
- SLB = Server Load-Balancer.
- Virtual Server = virtual instance of an application running on the SLB device.
- Real Server = physical machine running application instances.

Motivation
- Highly redundant SLB.
- More accurate server pooling.

Server Load-balancing

What does an SLB do?
- Gets the user to the needed resource:
  - The server must be available.
  - The user's "session" must not be broken.
  - If the user must reach the same resource over and over, the SLB device must ensure that happens (i.e., session persistence).
- In order to do its work, the SLB must:
  - Know the servers: IP/port, availability.
  - Understand the details of some protocols (e.g., FTP, SIP).
- Network Address Translation (NAT):
  - Packets are rewritten as they pass through the SLB device.

Why Load-balance?
- Scale applications / services.
- Ease of administration / maintenance:
  - Easily and transparently remove physical servers from rotation in order to perform any type of maintenance on that server.
- Resource sharing:
  - Multiple instances of an application / service can run on one server, each on a different port; the SLB can load-balance to a different port based on the data it analyzes.

Load-Balancing Algorithms
- Most predominant:
  - Least connections: the server with the fewest flows gets the new flow request.
  - Weighted least connections: associate a weight / strength with each server and distribute load across the server farm based on the weights of all servers in the farm.
  - Round robin: cycle through the servers in the server farm.
  - Weighted round robin: give each server 'weight' flows in a row; the weight is set just as it is in weighted least connections.
- Other algorithms look at, or try to predict, server load when determining the load of the real server.
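As a minimal sketch (not from the slides), the two weighted policies above can be expressed in a few lines of Python; the server names, weights, and flow counts are illustrative:

    # Illustrative sketch of weighted least connections and weighted round robin.
    import itertools

    servers = {"server1": {"weight": 3, "flows": 0},
               "server2": {"weight": 1, "flows": 0}}

    def pick_weighted_least_connections():
        # Lowest flows-per-weight wins the new flow.
        return min(servers, key=lambda s: servers[s]["flows"] / servers[s]["weight"])

    def weighted_round_robin():
        # Each server is handed 'weight' flows in a row.
        order = [name for name, s in servers.items() for _ in range(s["weight"])]
        return itertools.cycle(order)

    rr = weighted_round_robin()
    for _ in range(4):
        target = next(rr)                    # or pick_weighted_least_connections()
        servers[target]["flows"] += 1
        print("new flow ->", target)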

How SLB Devices Make Decisions
- The SLB device can make its load-balancing decisions based on several factors.
- Some of these factors can be obtained from the packet headers (e.g., IP address, port numbers).
- Other factors are obtained by looking at the data beyond the network headers. Examples:
  - HTTP cookies
  - HTTP URLs
  - SSL client certificate
- The decisions can be based strictly on flow counts, or they can be based on knowledge of the application.
- For some protocols, such as FTP, the SLB must understand the protocol to load-balance correctly (e.g., the control and data connections must go to the same physical server).
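As a rough illustration (not from the slides), a Layer-3/4 decision can key on the packet 5-tuple, while a Layer-5+ decision keys on application data such as a cookie; the field names and cookie name below are assumptions:

    # Sketch: deriving a persistence key from headers vs. application data.
    import hashlib

    def l4_key(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
        # Everything needed is already in the headers of the first packet.
        return f"{proto}:{src_ip}:{src_port}->{dst_ip}:{dst_port}"

    def l5_key(http_request: str):
        # Needs payload inspection: pull a (hypothetical) session cookie value.
        for line in http_request.split("\r\n"):
            if line.lower().startswith("cookie:") and "session=" in line:
                return line.split("session=", 1)[1].split(";", 1)[0]
        return None

    def pick_server(key, servers):
        # Hashing the key keeps the same client/session on the same real server.
        digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return servers[digest % len(servers)]

    request = "GET / HTTP/1.1\r\nHost: example.com\r\nCookie: session=abc123\r\n\r\n"
    key = l5_key(request) or l4_key("198.51.100.7", 4242, "203.0.113.1", 80)
    print(pick_server(key, ["server1", "server2", "server3"]))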

When a New Flow Arrives
- Determine whether a virtual server exists.
  - If so, make sure the virtual server has available resources.
  - If so, determine the level of service needed by that client for that virtual server.
    - If the virtual server is configured with a particular type of protocol support or session persistence, do that work.
  - Pick a real server for that client.
    - The choice of real server is based on flow counts and information about the flow.
    - To do this, the SLB may need to proxy the flow to gather all the information needed to choose the real server; this depends on the services configured for that virtual server.
- If not, the packet is bridged to the correct interface based on Layer 2.
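A compressed sketch of that decision sequence, under assumed and heavily simplified data structures (this is not a real SLB implementation):

    # Sketch of the new-flow decision path; classes and thresholds are invented.
    class VirtualServer:
        def __init__(self, reals, inspect_payload=False, max_conns=10000):
            self.reals = reals
            self.inspect_payload = inspect_payload
            self.max_conns = max_conns
            self.conns = 0

        def has_capacity(self):
            return self.conns < self.max_conns

        def pick_real_server(self, flow_key):
            return self.reals[hash(flow_key) % len(self.reals)]

    def handle_new_flow(vservers, dst, flow_key, payload=None):
        vserver = vservers.get(dst)
        if vserver is None:
            return "bridge-at-L2"                  # not a configured virtual server
        if not vserver.has_capacity():
            return "reject"
        if vserver.inspect_payload and payload is None:
            return "proxy-until-decision-data"     # e.g., wait for the HTTP cookie
        vserver.conns += 1
        return vserver.pick_real_server(flow_key)

    vservers = {("10.0.0.1", 80): VirtualServer(["server1", "server2", "server3"])}
    print(handle_new_flow(vservers, ("10.0.0.1", 80), "198.51.100.7:4242"))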

SLB: Architectures
- Traditional:
  - The SLB device sits between the clients and the servers being load-balanced.
- Distributed:
  - The SLB device sits off to the side and only receives the packets it needs, i.e., those for flow setup and teardown.

SLB: Traditional View with NAT (diagram: Client -> SLB -> Server1 / Server2 / Server3)

SLB: Traditional View without NAT (diagram: Client -> SLB -> Server1 / Server2 / Server3)

Load-Balance: Layer 3 / 4
- The SLB looks at the destination IP address and port to make the load-balancing decision.
- Because only headers are needed, the real server can be chosen from the first packet that arrives.

Layer 3 / 4: Sample Flow (Client -> SLB -> Server1 / Server2 / Server3)
1: SYN (Client to SLB)
2: SLB makes its decision on the server.
3: SYN (SLB to chosen server)
4: SYN/ACK (server to SLB)
5: SYN/ACK (SLB to Client)
The rest of the flow continues through the HTTP GET and the server's response.

Load-Balance: Layer 5+
- The SLB device must terminate the TCP flow for some amount of time BEFORE the SLB decision can be made.
- For example, the cookie value is only sent by the client after the TCP handshake, so the SLB must complete the handshake before it can determine the real server.
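A minimal sketch of that idea, assuming plain HTTP and a list of (host, port) real servers; the socket handling is deliberately simplified and not how a production SLB is built:

    # Proxy the client connection only until the decision data (a cookie) arrives,
    # then open the server-side leg and replay the buffered bytes.
    import socket

    def choose_real_server(cookie, servers):
        return servers[hash(cookie) % len(servers)]

    def handle_client(client_sock, servers):
        buffered = b""
        while b"\r\n\r\n" not in buffered:           # read until end of HTTP headers
            chunk = client_sock.recv(4096)
            if not chunk:
                return
            buffered += chunk
        cookie = None
        for line in buffered.split(b"\r\n"):
            if line.lower().startswith(b"cookie:"):
                cookie = line.split(b":", 1)[1].strip()
        target = choose_real_server(cookie or buffered, servers)
        upstream = socket.create_connection(target)  # server-side leg opened late
        upstream.sendall(buffered)                   # replay what was buffered
        # ...from here the flow could be unproxied/spliced for efficiency.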

Layer 5+: Sample Flow (Client -> SLB -> Server1 / Server2 / Server3)
1: SYN (Client to SLB)
2: SLB device determines it must proxy the flow before a decision can be made.
3: SYN/ACK (SLB to Client)
4: ACK (Client to SLB)
5: GET with Cookie (Client to SLB)
6: SYN (SLB to chosen server)
7: SYN/ACK (server to SLB)
8: ACK (SLB to server)
9: GET with Cookie (SLB to server)
The rest of the flow continues with the server's response. Note: the flow can be unproxied at this point for efficiency.

SLB: Distributed Architecture (diagram: Client -> FE -> Server, with the SLB off to the side)
FE: Forwarding Engines, which are responsible for forwarding packets; they ask the SLB device where to send the flow.

Distributed Architecture: Sample Flow (Client -> FE -> Server1 / Server2 / Server3 / Server4, SLB off to the side)
1: TCP SYN (Client to FE)
2: The FE asks the SLB where to send the flow.
3: The SLB (service manager) tells it to use Server2.
4: The flow goes to Server2.
Subsequent packets flow directly from the Client to Server2 through the FE. The FE must notify the SLB device when the flow ends.
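A rough sketch of the FE side of that exchange (the message names and functions are assumptions, not the actual FE/SLB protocol): ask the SLB once per new flow, cache the answer, forward later packets directly, and tell the SLB when the flow ends.

    # Hypothetical FE-side logic for the distributed architecture.
    flow_table = {}

    def slb_pick(flow_key):
        # Stand-in for the real query to the SLB device over a control channel.
        return "Server2"

    def send_to(target, packet):
        print(f"forwarding packet to {target}")

    def notify_slb_flow_ended(flow_key):
        print(f"flow {flow_key} ended; telling SLB")

    def fe_forward(flow_key, packet, is_fin=False):
        if flow_key not in flow_table:
            flow_table[flow_key] = slb_pick(flow_key)   # steps 2-3: ask the SLB
        target = flow_table[flow_key]                   # step 4 onward: direct path
        send_to(target, packet)
        if is_fin:
            notify_slb_flow_ended(flow_key)
            del flow_table[flow_key]

    fe_forward("198.51.100.7:4242->10.0.0.1:80", b"SYN")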

Server Feedback

Determining Health of Real Servers
In order to determine the health of real servers, the SLB can:
- Actively monitor flows to that real server.
- Initiate probes to the real server.
- Get feedback from the real server or a third-party box.
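For instance, a minimal TCP-connect probe (illustrative only; real SLB probes are usually protocol-aware, with retries and thresholds) might look like:

    # Minimal TCP-connect probe: mark a real server down if the port does not
    # accept a connection within a timeout.
    import socket

    def probe(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True      # connection accepted: healthy
        except OSError:
            return False         # refused / timed out: take it out of rotation

    for server in [("192.0.2.10", 80), ("192.0.2.11", 80)]:
        print(server, "up" if probe(*server) else "down")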

Server Feedback
- We need information from a real server while it is part of a server farm.
- Why?
  - Dynamic load-balancing based on the ability of the real server.
  - Dynamic provisioning of applications.

Server Feedback: Use of Information
- The availability of a real server is reported as a 'weight' that is used by the SLB algorithms (e.g., weighted round robin, weighted least connections).
- As the weight value changes over time, the load distribution changes with it.

How to Get Weights
- Statically configured on the SLB device; never change.
- Start with a statically configured value on the SLB device for initial start-up, then get the weight from:
  - The real server.
  - A third-party box / collection point.
- It is assumed that if a third-party box is being used, it is used for all the real servers in a server farm.

Direct Host Feedback
- Description: "agents" run on the host to gather data points; that data is then sent to the SLB device for that physical server only.
  - Note: the agent could report for different applications on that real server.
  - The agent's report could be based on available memory, general resource availability, proprietary information, etc.

Direct Host Feedback
- Pros:
  - Provides a way to dynamically change a physical server's advertised capacity for SLB flows.
- Cons:
  - The SLB device must attempt to normalize the data for all real servers in a server farm; with heterogeneous servers this is difficult to do.
  - It is difficult for a real server to describe itself in SLB terms for the L3 vs. L4 vs. L5 (etc.) SLB scenarios.
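A toy host agent along those lines; the weight formula and the report format are assumptions made for illustration and are not part of any protocol discussed later:

    # Toy host agent: derive a 1-100 weight from local resource headroom and
    # report it. Uses os.getloadavg(), which is Unix-only.
    import json, os, time

    def compute_weight():
        load1, _, _ = os.getloadavg()               # 1-minute load average
        cpus = os.cpu_count() or 1
        headroom = max(0.0, 1.0 - load1 / cpus)     # rough idle fraction
        return max(1, int(headroom * 100))          # never advertise zero

    def report(server_id="server1"):
        msg = {"server": server_id, "weight": compute_weight(), "ts": time.time()}
        print(json.dumps(msg))                      # stand-in for sending to the SLB

    report()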

Third-Party Feedback: Network (diagram: Client -> SLB -> Server1 / Server2 / Server3, with a Collection Point gathering server feedback)

Host to Third-Party Feedback
- Description: real servers report data to a 'collection point'. The collection point can normalize the data as needed and then report for all physical servers to the SLB device.
- Pros:
  - A dedicated device can analyze and normalize the data from multiple servers; the SLB device can then concentrate on SLB functionality.
- Cons:
  - Requires more communication to determine the dynamic weight, which could delay the overall dynamic effect if it takes too long.
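One way a collection point might normalize heterogeneous reports into comparable weights; the metric and scaling rule are assumptions for illustration:

    # Sketch: a collection point normalizing raw per-server metrics into 1-100
    # weights before reporting them to the SLB device.
    raw_reports = {
        "server1": {"idle_cpu_pct": 80},
        "server2": {"idle_cpu_pct": 20},
        "server3": {"idle_cpu_pct": 55},
    }

    def normalize(reports):
        # Score each server by idle CPU, then scale so the best server gets 100.
        scores = {name: r["idle_cpu_pct"] for name, r in reports.items()}
        best = max(scores.values()) or 1
        return {name: max(1, round(100 * s / best)) for name, s in scores.items()}

    print(normalize(raw_reports))   # {'server1': 100, 'server2': 25, 'server3': 69}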

RSerPool

RSerPool: Architecture (diagram: PU and PE each communicate with the ENRP Servers via ASAP)

RSerPool: Overview
- The RSerPool protocols sit between the user application and the IP transport protocol (session layer).
- Application communication is now defined over a pair of logical session-layer endpoints that are dynamically mapped to transport-layer addresses.
- When a failure occurs at the network or transport layer, the session can survive because the logical session endpoints can be mapped to alternative transport addresses.
- The endpoint-to-transport mapping is managed by distributed servers, providing resiliency.
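To make that mapping concrete, here is a toy model of resolving a pool handle and failing over to another pool element; the data structures and function names are assumptions, not the ASAP API:

    # Toy model: a pool handle resolves to a set of transport addresses (PEs);
    # on failure the session re-maps to the next PE in the pool.
    pool_registry = {
        "echo-pool": [("10.0.0.11", 7), ("10.0.0.12", 7), ("10.0.0.13", 7)],
    }

    def resolve(pool_handle):
        # In real RSerPool this is an ASAP handle resolution via an ENRP server.
        return list(pool_registry[pool_handle])

    def transport_send(addr, payload):
        print(f"sent {payload!r} to {addr}")
        return addr

    def send_with_failover(pool_handle, payload):
        for addr in resolve(pool_handle):
            try:
                return transport_send(addr, payload)   # succeeds on a healthy PE
            except ConnectionError:
                continue                               # failover: try the next PE
        raise RuntimeError("no pool element reachable")

    send_with_failover("echo-pool", b"hello")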

RSerPool / SLB: Unified Approach (A Work in Progress)

Unified View: Overview
- Preserve the RSerPool architecture:
  - Any extensions or modifications are backwards compatible with current RSerPool.
  - SLB extensions at the ENRP Server and PE are optional, based on the pool policy chosen / implemented.
- Utilize the SLB distributed architecture:
  - Introduce an FE when using SLB pool policies.
- Add SLB technology to the ENRP Server:
  - SLB-specific versions of pool policies (SLB-<type>): for example, SLB-WRR takes into account additional host feedback such as the number of flows on each PE.
- Add server feedback:
  - Enable delivery of host feedback from PEs to the home ENRP Server.
  - Enable delivery of host feedback from the ENRP Server to the FE.

Unified: Component Description
- ASAP:
  - The exchange between PE and ENRP Server is extended to include additional host feedback, such as the current number of flows on the PE.
    - Encapsulation of a host feedback protocol in a pool element parameter.
    - The information will be replicated among peer ENRP Servers.
  - A subscription service and/or polling between the ENRP Server and the PU allows delivery of host feedback (membership, weights, flows, etc.).
    - The subscription is between the PU and its current ENRP Server (not replicated).
    - The PU must re-register its subscription upon selection of a new ENRP Server.
    - A subscription/polling service was previously discussed in the design team as an addition to core ASAP functionality.
  - The decision on the flow destination is made using the SLB-specific pool policy (i.e., the load-balancing algorithm).
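Purely as an illustration of the idea (this is not the real ASAP message or TLV layout; every field name here is invented), a registration carrying a host-feedback parameter alongside the pool element's weight might look like:

    # Illustration only: an ASAP-like registration with a host-feedback parameter.
    import json

    def build_registration(pool_handle, pe_id, transport, weight, flows):
        return {
            "msg": "REGISTRATION",
            "pool_handle": pool_handle,
            "pe_id": pe_id,
            "transport": transport,                  # e.g. ("10.0.0.11", 8080, "tcp")
            "policy": "SLB-WRR",
            "host_feedback": {"weight": weight, "current_flows": flows},
        }

    msg = build_registration("web-pool", 42, ("10.0.0.11", 8080, "tcp"), 70, 135)
    print(json.dumps(msg, indent=2))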

Unified: Component Description
- FE:
  - An RSerPool-enabled application (a PU):
    - Uses the RSerPool API for sending flows to a PE.
    - ASAP control plane for PE selection.
    - The bearer plane uses the flow-specific protocol (e.g., HTTP, SIP) and the corresponding transport (e.g., TCP, SCTP).
  - Must know which pools support which applications (SLB-<type>s).
    - Add a parameter to SLB-enabled PEs?
  - Chooses the pool handle based on incoming client requests and the supported SLB-types (SLB-L4, SLB-HTTP, SLB-SIP, etc.).
    - If no other SLB-type matches, SLB-L4 is used.
  - NAT, reverse NAT.
  - Proxy service.

Unified: Component Description
- FE (continued):
  - Configuration:
    - Server pools:
      - Static configuration of pool handles; pool names are resolved upon initialization.
      - Static configuration of pool handles and PE detail, including initial/default weights.
      - Automagic configuration?
    - Protocol table (see the sketch after this slide):
      - Maps supported SLB-types to pool handles by looking for the best match in the incoming packet, e.g.,
        - SLB-L4 (must implement).
        - SLB-HTTP.
        - SLB-SIP.
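A small sketch of such a protocol table: map an incoming request to the most specific SLB-type and its pool handle, falling back to SLB-L4. The match rules and pool handle names are assumptions:

    # Sketch of an FE protocol table with SLB-L4 as the mandatory fallback.
    protocol_table = [
        ("SLB-SIP",  lambda pkt: pkt["dst_port"] == 5060,       "sip-pool"),
        ("SLB-HTTP", lambda pkt: pkt["dst_port"] in (80, 8080), "http-pool"),
        ("SLB-L4",   lambda pkt: True,                          "default-pool"),
    ]

    def classify(packet):
        for slb_type, matches, pool_handle in protocol_table:
            if matches(packet):
                return slb_type, pool_handle    # first (most specific) match wins

    print(classify({"dst_port": 5060}))   # ('SLB-SIP', 'sip-pool')
    print(classify({"dst_port": 443}))    # ('SLB-L4', 'default-pool')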

Unified: Component Description
- PE:
  - SLB-enabled PEs must support dynamic host feedback.

Unified: Layer 3/4 Example (diagram: Client -> PU/FE -> PE1 / PE2 / PE3; the ENRP Servers exchange ASAP with the PEs for host feedback and with the PU/FE for pool handle resolution and subscription/polling)
1: TCP SYN (Client to FE)
2: Correlate the request to an SLB-type, choose the pool handle, then do a send to that pool handle.
3: The TCP SYN is sent to PE2.
4: SYN/ACK (PE2 to FE)
5: SYN/ACK (FE to Client)

Server Feedback: How to Implement with RSerPool

Unified: PE Communication
- PEs send their weights to the ENRP Server via the ASAP protocol.
- A server agent on the host provides the weight to the PE application.
- Some protocols already exist for reporting this information. The current list:
  - Server/Application State Protocol (SASP):
    - Joint IBM / Cisco protocol.
    - An IETF draft is currently available.
  - Dynamic Feedback Protocol (DFP):
    - Cisco-developed protocol.
    - An IETF draft is in progress.

Design Team Work Items

How to Implement: To Do List
- Details, Details, Details.....
- Reconcile the design with the pool policy draft:
  - Determine what information needs to be passed.
  - Determine which algorithms need to be added where.
- Define the SLB-<type> policies.
- Determine the best method for implementing host feedback.
- Complete a Layer 5 example with a session-persistence mechanism at the FE.

How to Implement: To Do List
- Polling / Subscriptions.
- Complete the DFP IETF draft so it can be considered.
- Everything else.