
1 Module 1: Introduction to Windows Clustering

2 Overview
- Defining Clustering Features
- Introducing Application Architecture
- Identifying Availability and Scalability Requirements
- Introducing Microsoft Windows 2000 Clustering
- Comparing Network Load Balancing to Cluster Service
- Identifying the Application and Service Environments

3 As your organization’s business needs grow, you must be able to expand your organization’s system capacity economically, avoid single points of failure, and quickly restore failed services and applications for users. Microsoft® Windows® 2000 Clustering enables you to provide availability, scalability, and load balancing for applications and services. This module describes the central concepts of Cluster service and Network Load Balancing service by providing a brief background of clustering technologies and explaining what Windows 2000 Clustering provides.

4 In this course, a cluster is defined as a group of independent computers working together as a single system. Microsoft clustering technologies provide the functionality that is required to enable you to configure multiple computers as a single logical system. In this module, you will learn the key benefits of Microsoft Windows 2000 Clustering and how they apply within single and multiple tier application architectures.

5 After completing this module, you will be able to:
- Define clustering features.
- Define application architectures.
- Identify clustering technologies that can improve availability and scalability in an enterprise system.
- Identify the available Microsoft clustering technologies.
- Identify the similarities and appropriate use of the clustering technologies.
- Identify the applications and services that can benefit from clustering technologies.

6 Defining Clustering Features
A working knowledge of a clustering solution begins with the definitions of clustering features:
- High Availability and Fault Tolerance
- Manageability
- Scalability
- Comparing Reliability and Availability

7 High Availability and Fault Tolerance
A system that is available whenever users want to use it and provides service that meets a defined organizational standard is considered to have high availability. When a system or component in a cluster fails, the cluster software responds by reallocating the resources from the failed system to the remaining systems in the cluster, thereby ensuring that the system is providing high availability to client/server applications and services.

8 High Availability and Fault Tolerance (continued)
Throughout this process, client communications with applications or services usually continue with minimal interruption in service, and clustering presents a single, virtual image of the server to clients. Most client software applications will automatically recover from the broken connections with little or no interruption to the user. A fault-tolerant solution is one that addresses performance by offering error-free, nonstop availability, usually by keeping a backup of the primary system. This backup system remains idle and unused until a failure occurs, which makes this an expensive solution.

9 Manageability Although manageability is not a key feature of clustering technologies, it allows system administrators to perform all of the necessary functions of maintaining the system by providing a single point of control. Administrators can access a single point of control remotely or run tools that provide a view of the system members, which allows control of the servers as a single logical entity.

10 Scalability
A system can be scaled up, scaled out, or scaled down.
- Scaling up. Achieved by adding more resources, such as memory, processors, and disk drives to a system.
- Scaling out. Achieved by adding additional computers to deliver high performance when the throughput requirements of an application exceed the capabilities of an individual system.
- Scaling down. Achieved by reducing resources.

11 Scalability (continued)
When the overall load exceeds the capabilities of the systems in a cluster, you may need to add additional systems. You will find that clusters are highly scalable; you can add CPU, input/output (I/O) storage, and application resources incrementally to efficiently expand or contract capacity by implementing one of the three types of scaling architectures.

12 Comparing Reliability and Availability
High availability and high reliability are at times used interchangeably, but when considering complex systems, each can have a different meaning. When designing products, for example a computer motherboard, there is a failure rate defined for each component. The reliability number may be expressed as Mean Time Between Failure (MTBF), which shows the measured failure rate based on testing of the individual components.

13 Comparing Reliability and Availability (continued)
The testing regime usually involves a large number of components being tested in a benign environment within their operating parameters; the aggregate run hours without failure are used to ascertain the MTBF. Given the reliability figures of all of the components, it is possible to calculate the probability of failure of the motherboard within a given time. This MTBF number is a measure of the reliability of the component and recognizes that all components will fail in time. For example, disk drives may have an MTBF of 1x10^6 (one million) hours.
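To make the MTBF figure concrete, here is a small sketch (my own illustration, not part of the course material) that assumes the usual constant-failure-rate exponential model; the disk-drive and component-count figures are hypothetical:

    import math

    def failure_probability(hours, mtbf_hours):
        # Probability that a component fails within 'hours', assuming a constant
        # failure rate (the standard exponential reliability model).
        return 1 - math.exp(-hours / mtbf_hours)

    # Hypothetical figures: a disk drive with an MTBF of 1,000,000 hours,
    # running continuously for one year (8,760 hours).
    p = failure_probability(8760, 1_000_000)
    print(p)                # ~0.0087, roughly a 0.9% chance of failing within the year

    # A board built from 50 such components (assumed independent) survives the
    # year only if every component survives, so its overall reliability is lower:
    print((1 - p) ** 50)    # ~0.65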

14 Comparing Reliability and Availability (continued)
A system with high availability is one where you expect that whenever you want to use it, it is available to provide service meeting your defined standard. So a computer system might be expected to be available 24 hours a day, 7 days a week, 52 weeks a year; in other words, it can never stop working. There is a distinct advantage to using high reliability components to build high availability systems, because the probability of a failure is lower. However, you can build high availability systems by using unreliable components, provided that you use some fault-tolerant mechanism to maintain operation.
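To illustrate that last point with made-up numbers (this calculation is not from the module), redundancy raises availability even when the individual parts are unreliable, because service is lost only when every redundant copy is down at the same time:

    def system_availability(component_availability, redundant_copies):
        # Availability of a fault-tolerant system that stays up as long as at
        # least one of its redundant, independent copies is up.
        return 1 - (1 - component_availability) ** redundant_copies

    # A single server that is only 99% available (about 88 hours of downtime a year)...
    print(system_availability(0.99, 1))   # 0.99
    # ...paired with a second, identical server and a failover mechanism:
    print(system_availability(0.99, 2))   # 0.9999
    print(system_availability(0.99, 3))   # 0.999999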

15 Introducing Application Architecture
Layer | Two-Tier Thin Client | Two-Tier Fat Client | Three-Tier | N-Tier
User Services | User Interface (Microsoft Win32®) | User Interface (Win32), Most Business Logic | User Interface (Win32, Browser) | User Interface (Win32, Browser, DHTML, XML)
Business Services | | | Business Logic (COM Objects) | User Interface (ASP), Business Logic (COM Objects)
Data Services | Storage (RDBMS), All Business Logic (SP) | Storage (RDBMS), Min Business Logic (SP) | Storage (RDBMS), Min Business Logic (SP) | Storage (RDBMS), Min Business Logic (SP)

16 The application architecture defines how pieces of the application interact with each other, and what functionality each piece is responsible for performing. There are three main classes of application architecture that can be characterized by the number of layers between the user and the data. The three types of application architecture are two-tier, three-tier and n-tier, where n can be three or more. The table demonstrates the user, business, and data services layers in each of the application architectures. One of the benefits of a three-tier or n-tier model is that applications are divided cleanly into presentation, business logic, and data layers. This division results in enhanced scalability and manageability, which can be improved by Windows Clustering technologies.

17 Client/Server
[Diagram: a client application sends requests to server applications. 1. Client requests data. 2. Server fulfills the request. 3. Client receives the data.]

18 Two-Tier In a thin client, two-tiered model, the business logic is server-based and typically consists of stored procedures in the database server. You must install client code on every client accessing the application; the client code is responsible for the user interface only. In a fat client, two-tiered model, you must install client code on every client accessing the application; the client code is responsible for the user interface and most of the business logic. The database can still have stored procedures, but the requirements for these procedures are reduced. This model requires that more resources are available on the client.

19 Three-Tier
[Diagram: three-tier technologies. User services: native Win32, ActiveX, COM/COM+. Business services: IIS/ASP, COM/COM+, ADO, ADSI, CDO, MSMQ. Data services: SQL Server, Index Server catalog, Site Server directory, Exchange Server, SMTP server. Clients connect over the Internet.]

20 Three-Tier In a three-tiered model, the business layer or application layer lies between data and client. This layer is responsible for both the application's business logic and the overall management of business transactions. Often the application layer will utilize object technologies.

21 N-Tier
[Diagram: n-tier layout. User Services (examples: HTML, XML, Java applets, client-side script), Business Services server (examples: DCOM, ASP, MTS, MSMQ), and Data Services server (examples: SQL, Exchange, SMTP).]

22 N-Tier In an n-tier model, the user-services tier or first tier handles presentation of information and interaction with the users. Some sources refer to this first tier as presentation services, because some of the services that are performed in the middle or business services tier of an application, such as authenticating users, are also user services. The business-services tier provides most of an application's functionality. This tier handles the bulk of application-specific processing and enforces an application's business rules. Business logic built into custom components bridges the client environments and the data-services tier. The data-services tier in an n-tier application can consist of data residing in several different kinds of stores.

23 N-Tier (continued)
Although this split is conceptual, it can be mirrored in a real-world scenario by implementing the data tier on a number of computers running a high-performance database, such as Microsoft SQL Server™; implementing the business tier on a set of separate computers; and implementing the presentation tier on yet another set of computers. When you complete the n-tier implementation, you achieve redundancy, and if a single computer fails, the applications and services are available on the other computers. This environment also addresses the need for scalability by allowing users to incorporate different hardware.
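The tier separation can be pictured with a minimal sketch (illustrative only; the class names and data are invented, and a real Windows DNA implementation would use ASP pages and COM+ components rather than Python):

    # Data-services tier: owns storage and returns raw records.
    class DataService:
        def __init__(self):
            self._orders = {1001: {"customer": "Contoso", "total": 250.0}}

        def get_order(self, order_id):
            return self._orders.get(order_id)

    # Business-services tier: enforces the business rules; no storage, no UI.
    class OrderService:
        def __init__(self, data):
            self._data = data

        def order_summary(self, order_id):
            order = self._data.get_order(order_id)
            if order is None:
                raise ValueError("unknown order")
            order["priority"] = order["total"] > 100.0   # business rule lives here
            return order

    # Presentation tier: formats the result for the user; no business logic.
    def render(order):
        return f"{order['customer']}: {order['total']:.2f} (priority={order['priority']})"

    print(render(OrderService(DataService()).order_summary(1001)))

Because each tier talks only to the one below it, any tier can be moved to its own set of computers, which is what makes the clustering techniques in this module applicable layer by layer.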

24 Application Architecture Development Strategies
Applications developed for a Microsoft platform typically use Microsoft development tools and strategies. Current applications use the Windows Distributed interNet Application Architecture (Windows DNA) strategies for development, and future development extends this by using Microsoft .NET Enterprise Servers.

25 Application Architecture Development Strategies (continued)
Windows DNA The Windows DNA model distributes an application in several layers, called tiers, which often reside physically on different machines, emphasizing logical distribution. Microsoft developed Windows DNA as a way to fully integrate the Web with the n-tier model of development. Windows DNA defines a framework for delivering solutions that meet the demanding requirements of corporate computing, the Internet, intranets, and global electronic commerce, while reducing overall development and deployment costs. Windows DNA architecture employs standard Windows-based services to address the requirements of each tier in the multitiered solution: user interface and navigation, business logic, and data storage.

26 Application Architecture Development Strategies (continued)
Microsoft .NET. The core services of .NET are fulfilled by a set of strategies for the development of Internet-based applications. These core services include services and development strategies for user identification, data storage, calendar management, messaging, database, and many other services.

27 Identifying Availability and Scalability Solutions
- Assessing Risks
- Scalability
- High Availability

28 As a system administrator planning to expand your system’s capacity, you may be required to make commitments to expensive high-end servers that provide space for additional CPUs, drives, and memory. By using a clustering technology solution, you will be able to incrementally add smaller, standard systems as needed to meet overall processing power requirements. Clustering solutions are ideal when you need more system processing power or high availability. For example, you would consider using a clustering solution for an Internet server-based program supporting mission-critical applications, such as financial transactions, database access, corporate intranets, and other key functions that must run 24 hours a day, 7 days a week. Implementing a clustering solution makes it possible for you to share a computing load over several computer systems, without the users needing to know that more than one computer is involved. If any component in the system (hardware or software) fails, the user will not lose access to the service or application.

29 Performing a Risk Audit
[Diagram: assessing risks by performing a risk audit; a failure (X) can occur at the client, the router, the server, or the power supply.]

30 A risk audit helps you to identify system risk; it also helps to determine if clustering is an appropriate solution to reduce the risk. More specifically it helps to identify where you can use clustering to eliminate single points of failure and maintain availability.

31 Identifying Risks When you identify risks, you identify the possible failures that can interrupt access to resources. A single point of failure is any component in your environment that would block data or applications if it failed. A single point of failure can be caused by hardware, software, or external dependencies, such as power supplied by a utility company and dedicated wide area network (WAN) lines. In general, you provide improved reliability when you minimize the number of single points of failure in your environment. Maximum reliability is provided by mechanisms that maintain service when a failure occurs by providing fault tolerance.

32 Performing a Risk Audit
The following table lists some of the more commonly encountered points of failure.

Point of failure | Cluster service solution | Possible other solutions
Network component, such as a hub or router | None | Spare components or redundant routes
Power failure | | Uninterruptible power supply (UPS)
Server hardware, such as CPU, memory, or network card | Failover (the process of taking resources offline on one node and bringing them back online on another node) |
Disk (non-shared) | Failover |
Disk (shared) | | Redundant Array of Independent Disks (RAID)
Server connection | |
Server software, such as the operating system, a service, or an application | |

33 Note: Clustering cannot eliminate all possible points of failure. It is designed to protect the availability of data, but it cannot protect the data itself. Therefore, it is still important to have a backup strategy.

34 Scalability
- Enhanced Symmetric Multiprocessing
- Cluster Service
- Network Load Balancing

35 Microsoft Windows 2000 Advanced Server provides integrated system scalability through enhanced symmetric multiprocessing (SMP), in addition to the two Windows Clustering technologies, Cluster service and Network Load Balancing service. Combined with relatively inexpensive computer hardware, Windows 2000 Advanced Server gives organizations powerful and scalable alternatives to more expensive proprietary solutions.

36 Enhanced Symmetric Multiprocessing Scalability
SMP is a technology that allows software to use multiple processors on a single server to improve performance, a concept known as hardware scaling, or scaling up. Windows 2000 Advanced Server supports up to 8-way SMP. Improvements in the implementation of the SMP code allow for improved scaling linearity, making Windows Advanced Server an even more powerful platform for business-critical applications, databases, and Web services. In an SMP system, several processors share a global memory and I/O subsystem.

37 Enhanced Symmetric Multiprocessing Scalability (continued)
At the hardware level, the major drawback to SMP systems is that they encounter physical limitations in bus and memory speed that are expensive to overcome. As microprocessor speeds increase, shared-memory multiprocessors become increasingly expensive. There are large cost differences as customers increase their systems from one processor to two and then to four processors, and especially when implementing more than eight processors.

38 Cluster Service Cluster service is a feature of Windows 2000 Advanced Server that allows a pair of independent servers, referred to as nodes, to be managed as a single entity. The objective of Cluster service is to provide high levels of availability and scalability for applications and data.

39 Network Load Balancing
Network Load Balancing service enables organizations to cluster up to 32 servers running Windows 2000 Advanced Server to evenly distribute incoming traffic while also monitoring servers and the network. The dual benefits of simple, incremental scalability combined with high availability make Network Load Balancing service ideal for use with business-critical e-commerce, Internet Service Provider hosting, and Terminal Services applications. Network Load Balancing service introduces the concept of software scaling, or scaling out, in which system administrators can add capacity to their server farms by simply plugging in additional Network Load Balancing service-configured servers as needed.

40 High Availability
- Measuring High Availability
- Cluster Service
- Network Load Balancing

41 Windows 2000 Advanced Server provides system services for server clustering as a standard feature of the product. The objective of clustering is to provide very high levels of application and data availability. Availability refers to the percentage of time that a system is available for the users. Availability is increased by improving reliability and by reducing the amount of time that a system is down for various reasons, such as planned maintenance or recovery from failure.

42 Measuring High Availability
High availability is a measure of the time during which clients can successfully use a resource, application, or system within design specifications. Availability is normally expressed as a percentage. For example, a computer system that is required on a 24x365 basis and that is unavailable for 24 hours in a year would have an availability of approximately 99.7%. To achieve 99.99% availability, this system can only be unavailable for 53 minutes per year. To achieve 99.999% availability, this system can only be unavailable for 5.3 minutes a year. A computer system with high availability will optimally provide continuous service without interruptions that are caused by software or hardware failures.
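As a quick check of these figures (a sketch added for illustration, not part of the original module), downtime per year converts to an availability percentage as follows:

    def availability(downtime_hours_per_year, hours_per_year=24 * 365):
        # Percentage of the year a system is available, given its total downtime.
        return 100 * (1 - downtime_hours_per_year / hours_per_year)

    print(availability(24))        # ~99.73  (one full day of downtime per year)
    print(availability(53 / 60))   # ~99.99  (53 minutes of downtime per year)
    print(availability(5.3 / 60))  # ~99.999 (5.3 minutes of downtime per year)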

43 Comparing High Availability and Fault Tolerance
It is important to note that high availability is not fault tolerance. Fault-tolerant systems, such as those used for air traffic control applications, may be required to achieve even higher levels of availability. This is typically achieved by adding extensive redundancy to the system hardware, which instantly provides backup components in the event of primary component failure with no loss of process or data consistency. An example of a fault-tolerant system is RAID technology. Logical disk data is written to an array of disks with additional information so that the loss of a single disk can be tolerated without preventing access to the data. There is no backup component in this case; the data is dynamically rebuilt from the information on the other drives in the array. You can remove and replace the failed disk drive with a new one, and the system is repaired, returning to the initial state before the failure. A fault-tolerant system is designed to guarantee resource availability. A high-availability system is concerned with maximizing resource availability.

44 Cluster Service
The use of component hardware provides many advantages, including reduced purchase cost and greater standardization; these advantages can lead to reduced maintenance costs. But component hardware, like all hardware, is subject to periodic failure. Windows Advanced Server provides for high availability of these hardware components through the use of clustering. A cluster is a group of servers that appear to the client as a single entity. The nodes in a cluster access the same disk drives, so any single server in the cluster has access to the same set of data and programs. The servers in a cluster act as backups for each other; if any one server in the cluster stops working, its workload is automatically moved to another server in the cluster in a process called failover.

45 Network Load Balancing
Another way to improve availability is through the use of network load balancing. Network load balancing is a method where incoming requests for service are routed to one of several different computers. Network load balancing is provided by Network Load Balancing services in Windows 2000 Advanced Server and Microsoft Windows 2000 Datacenter Server.

46 Introducing Microsoft Windows 2000 Clustering
[Diagram: Internet clients reach a Web farm of four hosts over Ethernet (Network Load Balancing); the Web tier calls application servers (Component Load Balancing), which are backed by a 2-node Cluster service cluster hosting the customer database, messaging, and file shares.]

47 Windows 2000 Advanced Server provides two clustering technologies that can be used independently or in combination: Network Load Balancing service and Cluster service. A third technology, Component Load Balancing, is available with Application Center 2000. Together these technologies provide a complete set of clustered solutions to choose from, depending on your application or service. The preceding graphic is a graphical representation of these technologies:
- Network Load Balancing
- Cluster Service
- Component Load Balancing

48 Network Load Balancing
This service load balances incoming Internet Protocol (IP) traffic across clusters of up to 32 hosts. Network Load Balancing service enhances both the availability and scalability of Internet server-based programs, such as Web servers, streaming media servers, and Terminal Services. By acting as the load-balancing infrastructure and providing control information to management applications built on top of Windows Management Instrumentation (WMI), Network Load Balancing service can seamlessly integrate into existing Web server farm infrastructures. Network Load Balancing service will also serve as an ideal load-balancing architecture for use with the release of Microsoft Application Center 2000 in distributed Web farm environments.

49 Cluster Service
This service is intended primarily to provide failover support for applications, such as databases, messaging systems, and file and print services. Cluster service supports 2-node failover clusters in Windows 2000 Advanced Server and 4-node clusters in Datacenter Server. Cluster service is ideal for ensuring the availability of critical line-of-business and other back-end systems, such as Microsoft Exchange Server or a database running Microsoft SQL Server version 7.0 that is acting as a data store for an e-commerce Web site.

50 Component Load Balancing
This service will be a feature of Microsoft Application Center 2000. Component Load Balancing distributes workload across multiple servers running a site’s business logic components.

51 Network Load Balancing Service
[Diagram: Internet clients reach a Web farm of four hosts (Host 1 through Host 4) over Ethernet; the farm is backed by the customer database, messaging, and file shares.]

52 Windows 2000 Network Load Balancing service provides an integrated infrastructure for creating a distributed load-balanced environment for your critical services, such as high-demand Web sites. Designed for use with a diverse array of applications and services, Network Load Balancing uses a statistical or manual load-balancing algorithm to distribute incoming IP requests across a cluster of up to 32 servers.
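The idea behind statistical load balancing can be sketched as follows (a simplified illustration only, not the actual Network Load Balancing algorithm): each host applies the same deterministic hash to the incoming client address, so exactly one host handles each connection without a central dispatcher.

    import hashlib

    def owning_host(client_ip, client_port, host_count):
        # Every host in the farm sees the incoming packet and applies the same
        # deterministic hash; exactly one host accepts the connection and the
        # others drop it, so no single dispatcher is a point of failure.
        key = f"{client_ip}:{client_port}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % host_count

    # With four hosts, connections from different clients and ports spread
    # statistically across the farm.
    for port in (40001, 40002, 40003, 40004):
        print("handled by host", owning_host("203.0.113.17", port, host_count=4))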

53 As the system administrator deploying Network Load Balancing, you will be able to:
- Scale Web applications by quickly and incrementally adding additional servers.
- Ensure that your Web sites are always online for your customers. Network Load Balancing supports load balancing, which reduces the poor customer experience that results from unplanned downtime.
Note: Network Load Balancing service, combined with the application monitoring tools that are included in the Windows Resource Kit, ensures that your Web site is always available to customers.

- Scale virtual private network (VPN) and Point-to-Point Tunneling Protocol (PPTP) servers to accommodate every user account with simplified access to a central IP address.
- Scale streaming media services for performance and scalability.
- Scale Terminal Services to support large user accounts by distributing connections across multiple servers.

55 Component Load Balancing
[Diagram: clients connect through Network Load Balancing to IIS Web servers or other IP-based services, through Component Load Balancing to application servers running COM+ components, and through Cluster service to data servers such as SQL Server, Exchange Server, and file servers.]

56 Component Load Balancing service, a feature of Microsoft Application Center 2000, load balances different instances of the same COM+ components that are running on one or more servers. This dynamic load-balancing feature of Component Load Balancing enables applications that are using COM+ components to be distributed evenly across a group of application servers for increased reliability and scalability. In the event of a server failure, Component Load Balancing is notified and reroutes requests away from the failed node, ensuring continuous availability of the COM+ components even in the event of multiple system hardware or software failures. Component Load Balancing complements both Network Load Balancing service and Cluster service by acting on the middle- or business tier of a multitiered clustered network.

57 The key differences between Network Load Balancing service, Component Load Balancing, and Cluster service are:

- Network Load Balancing service cannot differentiate between Uniform Resource Locators (URLs) being sent in Hypertext Transfer Protocol (HTTP) requests, so it treats all of the requests, and all of the clustered computers, identically.
- Component Load Balancing mechanisms offer finer-grained control than Network Load Balancing service and are able to differentiate between URLs being sent in HTTP requests and route them in the most efficient manner.
- Cluster service nodes share a disk, which is important for storage services, such as databases or groupware messaging stores, but provides little benefit to the majority of COM+ component servers. Unlike Network Load Balancing and Component Load Balancing, Cluster service cannot work across more than two nodes in Windows 2000 Advanced Server.

59 Cluster Service
[Diagram: Internet clients reach a Network Load Balancing Web farm (Host 1 through Host 4) over Ethernet; behind it, a 2-node Cluster service cluster hosts the customer database, messaging, and file shares.]

60 Applications that are central to your organization’s operations include systems such as databases, messaging servers, enterprise resource planning applications, and core file and print services. Cluster service in the Windows 2000 operating system ensures that these critical applications are online when needed by removing the physical server as a single point of failure. In the event that a hardware or software failure occurs in either node, Cluster service migrates the applications currently running on that node to the surviving node and restarts them. Because Cluster service uses a shared-disk configuration with common bus architectures, such as small computer system interface (SCSI) and Fibre Channel, you will not lose any data during a failover.
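The failover step can be pictured with the following sketch (illustrative only; the real Cluster service also tracks quorum, restart policies, and preferred owners, none of which are shown here, and the node and resource names are invented):

    def fail_over(failed_node, nodes, resource_groups, move_group):
        # Move every resource group owned by the failed node to a surviving node
        # and restart it there (simplified: the first survivor takes everything).
        survivors = [n for n in nodes if n != failed_node]
        if not survivors:
            raise RuntimeError("no surviving node available")
        target = survivors[0]
        for group in resource_groups.pop(failed_node, []):
            move_group(group, target)                     # take offline, bring online elsewhere
            resource_groups.setdefault(target, []).append(group)

    # Hypothetical two-node cluster:
    groups = {"NodeA": ["SQL Server", "File Share"], "NodeB": ["Print Spooler"]}
    fail_over("NodeA", ["NodeA", "NodeB"], groups,
              move_group=lambda g, node: print(f"restarting {g} on {node}"))
    print(groups)   # {'NodeB': ['Print Spooler', 'SQL Server', 'File Share']}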

61 As the system administrator deploying Cluster service, you will be able to:
- Reduce unplanned downtime. Downtime caused by hardware or software failures can result in lost revenue and poor customer experience. Using Cluster service with a shared-disk solution on critical line-of-business applications can significantly reduce the amount of application downtime that unexpected failures cause.
- Deploy upgrades smoothly with rolling upgrade support. Cluster service is ideally suited for ensuring transparent upgrades of applications without interrupting your clients. By migrating your applications to one node, upgrading the first node, and then migrating them back, you can roll out hardware, software, and even operating system upgrades without taking the application offline. Cluster service in Windows 2000 supports rolling operating system upgrades from Microsoft Windows NT® Server version 4.0, Enterprise Edition clusters that are deployed with Service Pack 4 or higher.
- Deploy reliable applications. Cluster service is supported by dozens of cluster-aware applications spanning a wide range of functions and vendors.

62 (continued)
- Deploy applications on industry-standard hardware. Cluster service also allows you to cluster services such as Dynamic Host Configuration Protocol (DHCP), Windows Internet Name Service (WINS), and Distributed File System (DFS).
- Install and configure in less time. Cluster service in Windows 2000 is now easier to set up and use. With a substantially improved Setup Wizard, Cluster service setup requires fewer entries and less time to install and configure. Combined with the improved Cluster Administrator, now a Microsoft Management Console snap-in, the Cluster service in the Windows 2000 operating system is redefining how simple building clusters on standard Intel computer-based hardware can be.

63 Comparing Network Load Balancing to Cluster Service
Which clustering technology should be used for your application?

Scenario | Benefits | Cluster Service | Network Load Balancing
Web server farm | Quickly expand your capacity; minimize site downtime | | Yes
Terminal Services | Quickly expand your capacity; minimize the effects of server failures | | Yes
File/print servers | Minimize service downtime; ensure data consistency after failover | Yes |
Database/messaging | Minimize application downtime; ensure data consistency after failover | Yes |
E-commerce sites | Quickly expand your capacity; minimize the effects of server/application downtime | Yes | Yes

64 Windows 2000 Advanced Server and Datacenter Server operating systems support two clustering technologies that you can use independently or in combination: Cluster service and Network Load Balancing service. You can use combined Windows Clustering technologies to create e-commerce sites that have high scalability and high availability. By deploying Network Load Balancing service across a Web server farm, and clustering back-end line-of-business applications, such as databases, with Cluster service, you can gain all of the benefits of near-linear scalability with no server or application-based single points of failure. As a system administrator you will need to decide which clustering solution to implement, Cluster service or Network Load Balancing service. The preceding table demonstrates which clustering technology should be implemented, depending on the business need, and gives the benefits for each solution.

65 Identifying the Application and Service Environments
- Application Environment
- Services Environment

66 After you have identified the potential points of failure within your system, the next step is to determine which applications and services you can move to the cluster. Typically, the resources moved to a cluster are those that provide access to mission-critical data and where loss of access can have a negative impact on customer experience.

67 Application Environment
Microsoft developed the Application Specification for Windows 2000 in cooperation with customers and third-party developers to provide clear, concise guidelines to help developers create applications that deliver new levels of reliability and manageability. Three types of server applications benefit from clustering technologies:

68 Application Environment (continued)
In-the-box services of Windows 2000 Advanced Server: These services include file shares, print queues, Internet/intranet sites managed by Microsoft Internet Information Services (IIS), Microsoft Terminal Services, Microsoft Routing and Remote Access VPN Server, Microsoft Message Queue Server (MSMQ) services, and Component Services, all of which are part of Windows 2000 Advanced Server and Windows 2000 Datacenter Server.

69 Application Environment (continued)
Cluster-aware applications. Software vendors are testing and supporting their application products on Microsoft Cluster service. These vendors will also be integrating Cluster service-based enhancements, from simpler setup and faster failover, to cluster-enabled scalability and load balancing, into their software.

70 Application Environment (continued)
The following list shows common cluster-aware databases, messaging servers, management tools, and applications.
Databases: SQL Server 7.0, IBM DB2
Messaging servers: Exchange Server 5.5, Lotus Domino
Management tools: NetIQ AppManager, NSI DoubleTake 3.0
Applications: SAP, Baan, PeopleSoft, JD Edwards

71 Services Environment
Several services can take advantage of the clustering technology. These services include:
WINS, DFS, and DHCP support. Cluster service now supports WINS, DHCP, and DFS as cluster-aware resources that support failover and automatic recovery. A file share resource can now serve as a DFS root or share its folder subdirectories for efficient management of large numbers of related file shares.

72 Discussion: Evaluating Business Scenarios

73 The objective of this discussion is to provide you with the opportunity to analyze and evaluate two unique business scenarios. You will use what you have learned from this module to determine and apply an appropriate Windows clustering technology solution for each discussion.

74 While working in small groups, you will answer questions that help you to make decisions that lead to possible solutions. For each scenario you will:
- Read and evaluate each scenario carefully.
- Document the limitations of the current implementations.
- Suggest solutions that meet the specified requirements.
- Present these solutions to the class.

75 Scenario One
You work for a company that has implemented a static form of Internet Protocol load balancing, known as round robin Domain Name System (DNS), for the company Web site. The Web farm consists of four identical dual-processor Windows 2000-based IIS servers, and a single T3 link providing adequate bandwidth to the Internet for customer access. You can use two spare servers, which normally are used as staging servers, to accommodate failures, but they are single-processor computers with reduced performance.

76 Customers access the Web site to obtain documentation and sales information on company products, which is critical to the sales channel. In a recent server outage, however, many customer calls were received claiming that they could not access your site. The resulting loss of sales has raised concerns at a management level. You are required to present to the board of directors the reason that the customers could not access the site, and provide a viable solution to ensure that future access is uninterrupted.

77 Questions
Carefully read and answer the following questions to generate possible solutions for Scenario One:
- What are the limitations with the current corporate Web site implementation?
- How would you improve future availability of the site?
- What are the potential solutions to minimize downtime events?
- How might you implement these solutions?
Note: An upgrade of an active site can require a significant amount of time, because the client connections for any single server must be eliminated during the upgrade. If sufficient client capacity exists, you can remove half the servers, upgrade them and bring them online, and then upgrade the remainder.

78 Scenario Two
You work for a stock brokerage firm that deals in complex futures trading. The firm has a loyal customer base that is steadily growing, but is pressuring the organization to provide interactive online deals. Customers are currently handled by telephone, with the dealers running a complex analysis program when negotiating with customers. Enabling Internet stock trading for customers on the Web will involve moving the complex trading logic into COM-based objects, which can be accessed from the Web servers. The use of COM objects has been successfully tested by moving some of the dealers to a new COM-based application, with the business logic running on separate servers. A simple ASP-based Web front end has also successfully used the COM objects to provide access to the analysis logic.

79 The organization does not want to spend significant amounts of money to initially implement a solution, and wants to be able to scale up or scale out as more customer demand for the service occurs. There is an existing T1 line to an ISP that is used to provide Internet access, and the company wants to use this line to provide initial service. The ISP provides DNS services for the organization. The Information Technology manager has read about three-tier application architecture and is convinced that it will provide a viable solution for the organization.

80 The application developers want to focus on a stateless first tier design, and will maintain client state in the SQL Server database. They wish to implement a solution that will provide the highest possible availability when structured as follows:
- First tier: Presentation layer (HTTP)
- Second/middle tier: Business logic (COM objects), but without the use of Microsoft Application Center Server
- Third tier: Data services and client transaction state (SQL Server)

81 Questions
Carefully read and answer the following questions to generate possible solutions for Scenario Two:
- What are the possible risks associated with the architecture and infrastructure?
- What are the potential solutions to minimize downtime and maximize availability with the suggested architecture?
- How would you recommend fault tolerance be implemented within each tier of the architecture?

82 Review
- Defining Clustering Features
- Introducing Application Architecture
- Identifying Availability and Scalability Requirements
- Introducing Microsoft Windows 2000 Clustering
- Comparing Network Load Balancing to Cluster Service
- Identifying the Application and Service Environments

