Slide 1: Cisco Load Balancing Solutions
© 1999, Cisco Systems, Inc.
Slide 2: Agenda
Problems We Are Solving
DistributedDirector
LocalDirector
MultiNode Load Balancing
Slide 3: Problems We Are Solving
Efficient, high-performance client access to large server complexes
Continuous availability of server applications
Scalable, intelligent load distribution across servers in the complex
Load distribution based on each server's capacity to do work and on application availability
Slide 4: DistributedDirector
Slide 5: What Is DistributedDirector?
Two pieces:
Standalone software/hardware bundle: special Cisco IOS®-based software on the Cisco 2501, 2502, and Cisco 4700M hardware platforms (11.1IA release train)
Cisco IOS software release 11.3(2)T and later on DRP-associated routers in the field
DistributedDirector is NOT a router; it is a dedicated box for DistributedDirector processing
Slide 6: What Does DistributedDirector Do?
Resolves domain or host names to a specific server (IP address)
Provides transparent access to the topologically closest Internet/intranet server relative to the client
Maps a single DNS host name to the server "closest" to the client
Dynamically binds one of several IP addresses to a single host name
Eliminates the need for end users to choose from a list of URLs/host names to find the "best" server
The only solution that uses intelligence in the network infrastructure to direct the client to the best server
Slide 7: DNS-Based Distribution
[Diagram: client, DistributedDirector (DD), and servers APPL1, APPL2, and APPL3 (1.1.1.1, 2.2.2.1, 3.3.3.1); steps 1-4 resolve appl.com]
Client connects to appl.com
The appl.com request is routed to DistributedDirector
DistributedDirector uses multiple decision metrics to select the appropriate server destination
DistributedDirector sends the destination address to the client
Client connects to the appropriate server
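A minimal Python sketch of that resolution flow (illustrative only, not the IOS implementation; the name and addresses come from the diagram, and round-robin stands in for the metric-based choice described on the following slides):

    import itertools

    # Server pool the director answers for; addresses are the ones on the slide.
    POOL = {"appl.com": ["1.1.1.1", "2.2.2.1", "3.3.3.1"]}
    _rr = {name: itertools.cycle(addrs) for name, addrs in POOL.items()}

    def director_resolve(name: str) -> str:
        """Stand-in for DistributedDirector: hand back one A record for the name."""
        return next(_rr[name])          # round-robin here; the real choice uses DRP metrics

    def client_connect(name: str) -> str:
        address = director_resolve(name)              # name resolution via the director
        return f"connecting to {name} at {address}"   # client then talks to that server

    print(client_connect("appl.com"))
    print(client_connect("appl.com"))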
Slide 8: How Are DistributedDirector Choices Made?
Director Response Protocol (DRP): interoperates with remote routers (DRP agents) to determine network topology and the network distance between clients and servers
Client-to-server link latency (RTT)
Server availability
Administrative "cost": take a server out of service for maintenance
Proportional distribution: for heterogeneous distributed server environments
Random distribution
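The slides do not say how these metrics are combined, so the following is only a sketch, assuming a simple weighted sum over per-server metrics; the weights and metric values are hypothetical, and the addresses are reused from the earlier diagram:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        address: str        # server IP address
        as_hops: int        # DRP "external" metric: BGP AS hops from the DRP agent to the client
        rtt_ms: float       # DRP round-trip-time metric
        admin_cost: int     # administrative cost (raise it to drain a server for maintenance)
        available: bool     # result of the availability probe

    def best_server(candidates, w_hops=10.0, w_rtt=1.0, w_cost=5.0):
        """Return the address of the lowest-scoring (best) available server."""
        usable = [c for c in candidates if c.available]
        if not usable:
            raise RuntimeError("no servers available")
        score = lambda c: w_hops * c.as_hops + w_rtt * c.rtt_ms + w_cost * c.admin_cost
        return min(usable, key=score).address

    servers = [
        Candidate("1.1.1.1", as_hops=1, rtt_ms=40.0, admin_cost=0, available=True),
        Candidate("2.2.2.1", as_hops=2, rtt_ms=25.0, admin_cost=0, available=True),
        Candidate("3.3.3.1", as_hops=1, rtt_ms=90.0, admin_cost=100, available=True),  # being drained
    ]
    print(best_server(servers))   # "2.2.2.1" with these example weights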
Slide 9: Director Response Protocol (DRP)
[Diagram: client reaching Web servers across the Internet; DRP agents sit with the Web servers and report back to DistributedDirector]
Operates with routers in the field to determine:
Client-to-server network proximity
Client-to-server link latency
Slide 10: DRP "External" Metric
[Diagram: client and servers with DRP agents in different autonomous systems (AS1-AS4), one and two AS hops apart]
Measures the distance from DRP agents to the client in BGP AS hop counts
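A minimal Python sketch of the idea (illustrative; the AS_PATH strings and labels are hypothetical, and a real DRP agent takes this information from BGP rather than from strings):

    def as_hop_count(as_path: str) -> int:
        """Number of AS hops in a space-separated AS_PATH, e.g. '64512 64700'."""
        return len(as_path.split())

    # Hypothetical paths toward the client as seen from two DRP agents:
    paths = {"server-in-AS3": "65001", "server-in-AS4": "65002 65003"}
    closest = min(paths, key=lambda k: as_hop_count(paths[k]))
    print(closest)   # server-in-AS3: one AS hop instead of two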
Slide 11: DRP "Round-Trip Time" Metric
[Diagram: RTT measurement from DRP agents (with servers in AS2, AS3, and AS4) to the client in AS1]
Measures client-to-DRP-server round-trip times
Compares link latencies
The server with the lowest round-trip time is considered "best"
Maximizes end-to-end server access performance
Slide 12: "Portion" Metric
Proportional load distribution across heterogeneous servers
Can also be used to enable traditional round-robin DNS

Server                       "Portion" metric value   Portion of connections
Server 1 (SPARCstation)      7                        7/24 = 29.2%
Server 2 (SPARCstation)      8                        8/24 = 33.3%
Server 3 (Pentium 60 MHz)    2                        2/24 = 8.3%
Server 4 (Pentium 60 MHz)    2                        2/24 = 8.3%
Server 5 (Pentium 166 MHz)   5                        5/24 = 20.8%
Total                        24                       24/24 = 100%
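A minimal Python sketch of portion-weighted selection using the weights from the table above (illustrative; random.choices stands in for however the director actually spreads answers across the pool):

    import random
    from collections import Counter

    portions = {
        "server1": 7,   # SPARCstation     -> 7/24 = 29.2%
        "server2": 8,   # SPARCstation     -> 8/24 = 33.3%
        "server3": 2,   # Pentium 60 MHz   -> 2/24 =  8.3%
        "server4": 2,   # Pentium 60 MHz   -> 2/24 =  8.3%
        "server5": 5,   # Pentium 166 MHz  -> 5/24 = 20.8%
    }

    def pick_server(weights):
        """Choose a server at random, biased by its portion of the total weight."""
        return random.choices(list(weights), weights=list(weights.values()))[0]

    # Over many requests the observed split approaches the configured portions.
    counts = Counter(pick_server(portions) for _ in range(24_000))
    print({s: round(100 * n / 24_000, 1) for s, n in sorted(counts.items())})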
Slide 13: Server Availability Parameter
DistributedDirector establishes a TCP connection to the service port on each remote server, verifying that the service is available
Verification is made at regular intervals
Port number and connection interval are configurable
The minimum configurable interval is ten seconds
Maximizes service availability as seen by clients
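A minimal Python sketch of this kind of availability probe (assumptions: a plain TCP connect to a configurable port on a fixed interval; the addresses are the ones from the earlier diagram, and the bookkeeping a real director performs is omitted):

    import socket
    import time

    def port_is_up(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def probe_loop(servers, port=80, interval=10):   # 10 s is the minimum interval on the slide
        while True:
            status = {s: port_is_up(s, port) for s in servers}
            print(status)        # a director would pull unreachable servers from rotation
            time.sleep(interval)

    # probe_loop(["1.1.1.1", "2.2.2.1", "3.3.3.1"])   # example server pool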
Slide 14: DistributedDirector—How Does It Work?
Two configuration modes:
DNS caching name server authoritative for the www.foo.com subdomain
HTTP redirector for http://www.foo.com
Modes are configurable on a per-domain basis
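A minimal Python sketch of the HTTP-redirector idea using the standard http.server module (illustrative only, not the IOS feature; choose_server() is a hypothetical stand-in for the DRP-based selection):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def choose_server() -> str:
        return "1.1.1.1"            # stand-in for the DRP-based selection

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            target = f"http://{choose_server()}{self.path}"   # preserve the requested path
            self.send_response(302)                           # send the client to the chosen server
            self.send_header("Location", target)
            self.end_headers()

    # HTTPServer(("", 8080), Redirector).serve_forever()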
Slide 15: DistributedDirector—Redundancy
DNS mode: use multiple DistributedDirectors as several name servers authoritative for a given host name
All DistributedDirectors are considered to be primary DNS servers
HTTP mode: use multiple DistributedDirectors and Cisco's Hot Standby Router Protocol (HSRP) to provide redundancy
Slide 16: LocalDirector
Slide 17: LocalDirector
[Diagram: LocalDirector front-ending a data center, reached by users over the Internet or an intranet]
LocalDirector appliance front-ends the server farm
Load balances connections to the "best" server
Failures and changes are transparent to end users
Improves response time
Simplifies operations and maintenance
Simultaneously supports different server platforms and operating systems
Any TCP service (not just Web)
Slide 18: LocalDirector—Server Management
Represents multiple servers with a single virtual address
Easily place servers in and out of service
Identifies failed servers and takes them offline
Identifies working servers and places them in service
IP address management
Application-specific servers
Maximum connections
Hot-standby server
Slide 19: LocalDirector—Specifications
80-Mbps throughput (model 416)
300-Mbps throughput (model 430)
Fast Ethernet channel
Supports up to 64,000 virtual and real IP addresses
Up to 16 10/100 Ethernet ports, 4 FDDI ports
One million simultaneous TCP connections
TCP and UDP applications supported
Slide 20: Network Address Translation
[Diagram: client traffic to virtual address 1.1.1.1 on LocalDirector, translated to real addresses 2.2.2.1, 2.2.2.2, and 3.3.3.1 in the server cluster]
Client traffic destined for the virtual address is distributed across multiple real addresses in the server cluster
Transparent to client and server
Network Address Translation (NAT) requires all traffic to pass through LocalDirector
Virtuals and reals are IP address/port combinations
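A minimal Python sketch of the virtual-to-real translation idea (illustrative, not LocalDirector's code; the addresses are the ones from the diagram, and a plain dict stands in for the per-connection state a real device keeps):

    import itertools

    VIRTUAL = ("1.1.1.1", 80)                                  # address/port advertised to clients
    REALS = [("2.2.2.1", 80), ("2.2.2.2", 80), ("3.3.3.1", 80)]
    _next_real = itertools.cycle(REALS)
    connections = {}                                           # (client ip, client port) -> chosen real

    def translate_inbound(src, dst):
        """Rewrite a client packet's destination from the virtual to a real server."""
        if dst != VIRTUAL:
            return dst                                         # not for the cluster; leave untouched
        return connections.setdefault(src, next(_next_real))   # pick once, reuse for this connection

    def translate_outbound(src, dst):
        """Rewrite a server reply's source back to the virtual address."""
        return VIRTUAL if dst in connections and src == connections[dst] else src

    client = ("9.9.9.9", 51000)
    real = translate_inbound(client, VIRTUAL)
    print(real)                              # e.g. ('2.2.2.1', 80)
    print(translate_outbound(real, client))  # ('1.1.1.1', 80): client only ever sees the virtual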
Slide 21: Session Distribution Algorithm
Passive approach
Least connections
Weighted
Fastest
Linear
Source IP
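Minimal Python sketches of two of the listed algorithms, least connections and source-IP stickiness (illustrative; the server names and connection counts are made up, and the other algorithms are not shown):

    import hashlib

    active = {"2.2.2.1": 12, "2.2.2.2": 3, "3.3.3.1": 7}   # current connection counts per real server

    def least_connections() -> str:
        """Pick the real server with the fewest active connections."""
        return min(active, key=active.get)

    def source_ip(client_ip: str) -> str:
        """Pin a client to a server by hashing its source address (sticky sessions)."""
        servers = sorted(active)
        digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        return servers[digest % len(servers)]

    print(least_connections())    # '2.2.2.2'
    print(source_ip("9.9.9.9"))   # always the same server for this client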
Slide 22: LocalDirector High-Availability Solution
[Diagram: LocalDirector front-ending TAP servers running mail, Web, FTP, and so on]
Ideal for mission-critical applications
Slide 23: LocalDirector Strengths
Network Address Translation (NAT) allows an arbitrary IP topology between LocalDirector and the servers
Proven market leader with extensive field experience
Rich set of features to map between virtual and real addresses
Bridge-like operation allows transparent deployment and gradual migration to NAT
Slide 24: LocalDirector Weaknesses
NAT requires all traffic to be routed through a single box
NAT requires that data be scanned and manipulated beyond the TCP/UDP header
Two interface types supported: Fast Ethernet (FE) and FDDI
Slide 25: MultiNode Load Balancing
Slide 26: MultiNode Load Balancing (MNLB)
Next-generation server load balancing
Unprecedented high availability: eliminates single points of failure
Unprecedented scalability: allows immediate incremental or large-scale expansion of application servers
New dynamic server feedback: balances load according to actual application availability and server workload
Slide 27: MNLB—What Is It?
A hardware and software solution that distributes IP traffic across server farms
Cisco IOS router- and switch-based
An implementation of Cisco's ContentFlow architecture
Utilizes a dynamic feedback protocol for balancing decisions
Slide 28: MNLB Features
Defines a single-system image, or "virtual address," for IP applications on multiple servers
Load balances across multiple servers
Uses server feedback or statistical algorithms for load-balancing decisions
Server feedback contains application availability and/or server work capacity
Algorithms include round robin, least connections, and best performance
Slide 29: MNLB Features (continued)
Session packet forwarding is distributed across multiple routers or switches
Supports any IP application: TCP, UDP, FTP, HTTP, Telnet, and so on
For IBM OS/390 Parallel Sysplex environments:
Delivers generic resource capability
Makes load-balancing decisions based on OS/390 Workload Manager data
Slide 30: MNLB Components
Services Manager: software runs on LocalDirector; the ContentFlow flow management agent
Makes load-balancing decisions
Uses MNLB to instruct Forwarding Agents of the correct server destination
Uses the server feedback protocol to maintain server capacity and application availability information
Backup Services Manager: enables 100% availability for the Services Manager; no sessions are lost due to a primary Services Manager failure
Slide 31: MNLB Components
Forwarding Agent: Cisco IOS router and switch software; the ContentFlow flow delivery agent
Uses MNLB to communicate with the Services Manager
Sends connection requests to the Services Manager
Receives the server destination from the Services Manager
Forwards data to the chosen server
Workload Agents: run on either server platforms or management consoles
Maintain information on server work capacity and application availability
Communicate with the Services Manager using the server feedback protocol
For IBM OS/390 systems, deliver OS/390 Workload Manager data
Slide 32: How Does MNLB Work?
[Diagram: client, Forwarding Agents, Workload Agents, and the Services Manager]
Initialization:
The Services Manager locates the Forwarding Agents
It instructs each Forwarding Agent to send session requests for defined virtuals to the Services Manager
It locates the Workload Agents and receives server operating and application information
Slide 33: How Does MNLB Work?
Session packet flow:
1. The client transmits a connection request to the virtual address
2. The Forwarding Agent transmits the packet to the Services Manager; the Services Manager selects the appropriate destination and tells the Forwarding Agent
3. The Forwarding Agent forwards the packet to the destination
4. Session data flows through any Forwarding Agent router or switch
The Services Manager is also notified on session termination
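A minimal Python sketch of that division of labor (illustrative, not the actual ContentFlow wire protocol; the Services Manager decision is stubbed out, and the flow table is a plain dict):

    flow_table = {}        # (client ip, client port, virtual ip) -> chosen real server

    def services_manager_pick(flow) -> str:
        """Stand-in for the Services Manager's balancing decision."""
        return "2.2.2.2"   # the real system would use workload feedback here

    def forwarding_agent(packet):
        flow = (packet["src_ip"], packet["src_port"], packet["dst_ip"])
        if flow not in flow_table:                        # step 2: new flow, ask the manager
            flow_table[flow] = services_manager_pick(flow)
        return flow_table[flow]                           # steps 3-4: forward without asking again

    pkt = {"src_ip": "9.9.9.9", "src_port": 51000, "dst_ip": "1.1.1.1"}
    print(forwarding_agent(pkt))   # first packet: decision comes from the manager
    print(forwarding_agent(pkt))   # later packets: served from the local flow table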
Slide 34: Dispatch Mode of Session Distribution
[Diagram: client, LocalDirector presenting virtual address 1.1.1.1, and a server cluster with real addresses 2.2.2.1, 2.2.2.2, and 3.3.3.1, each server also configured with VIPA 1.1.1.1]
Virtual IP address (VIPA) on the hosts (alias or loopback)
The load balancer presents the virtual IP address to the network
The load balancer forwards packets based on the Layer 2 address
Uses ARP to obtain the Layer 2 address
The IP header still contains the virtual IP address
Requires subnet adjacency, since it relies on Layer 2 addressing
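A minimal Python sketch of the Layer 2 rewrite idea (illustrative; the MAC addresses are made up, and a real device does this per packet in the forwarding path rather than on dicts):

    # Real server IP -> MAC address learned via ARP (hypothetical values).
    arp_cache = {
        "2.2.2.1": "00:00:0c:aa:aa:01",
        "2.2.2.2": "00:00:0c:aa:aa:02",
    }

    def dispatch(frame: dict, chosen_real_ip: str) -> dict:
        """Rewrite only the Ethernet destination; leave the IP header untouched."""
        return {**frame, "dst_mac": arp_cache[chosen_real_ip]}

    frame = {"dst_mac": "00:00:0c:ff:ff:ff", "dst_ip": "1.1.1.1", "payload": b"GET / ..."}
    print(dispatch(frame, "2.2.2.2"))   # dst_ip is still the VIPA 1.1.1.1, hence the subnet-adjacency requirement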
Slide 35: Dispatch Mode
Benefits:
No need to scan past the TCP/UDP header, so it may achieve higher performance
Outbound packets may travel any path
Issues:
Inbound packets must pass through the load balancer
Ignoring outbound packets does limit the effectiveness of the balancing decisions
Subnet adjacency can be a real network design problem
Slide 36: MNLB
Uses either NAT or modified dispatch mode
NAT: the MNLB architecture creates high availability (no single point of failure) with no throughput bottleneck
Modified dispatch mode: uses a Cisco tag-switched network to reach across multiple subnets; inbound and outbound traffic can travel through any path
The Services Manager is notified on session termination
Slide 37: Benefits
Slide 38: MNLB: The Next Generation
Unprecedented high availability: eliminates single points of failure
Unprecedented scalability: allows immediate incremental or large-scale expansion of application servers
New dynamic server feedback: balances load according to actual application availability and server workload
Slide 39: MNLB Single System Image
One IP address for the server cluster
Easy to grow and maintain the server cluster without disrupting availability or performing administrative tasks on clients
Easy to administer clients: only one IP address
Enhances availability
Slide 40: MNLB Server Independence
MNLB operates independently of the server platform
Server agents operate in IBM MVS, IBM OS/390, IBM TPF, NT, and UNIX sites
Application-aware load distribution is available in all server sites
Enables IP load distribution for large IBM Parallel Sysplex complexes
Slide 41: MNLB Application-Aware Load Balancing
Client traffic is distributed across the server cluster to the best server for the request
Transparent to the client
Allows agent(s) in the servers to provide intelligent feedback to the network as the basis for balancing decisions
Uses IBM's OS/390 Workload Manager in OS/390 Parallel Sysplex environments
Application-aware load balancing ensures session completion
Slide 42: MNLB Total Redundancy—Ultimate Availability
No single point of failure for applications, servers, or MNLB
Multiple Forwarding Agents ensure access to the server complex
Multiple Services Managers ensure load balancing is maintained through a failure
A single cluster address for multiple servers maintains access to applications in case of server failure or server maintenance
Slide 43: Unbounded Scalability
Scalability and performance are limited only by the number and throughput of Forwarding Agents
Forwarding Agents can be added at any time with no loss of service
Servers can be added with no network design changes
No throughput bottlenecks
Scales to the largest of Web sites
Slide 44: Implementation and Road Map
Slide 45: MNLB Phase One Implementation
MNLB components:
Cisco IOS-based Forwarding Agents in the Cisco 7500, 7200, 4000, 3600, and Catalyst® 5000R
Services Manager runs on the LocalDirector chassis
LocalDirector hot-standby serves as the phase-one backup manager
Workload Agents for IBM OS/390, IBM TPF, NT, and UNIX
Slide 46: Thank You
Q & A