
1 Clustering Technology In Windows NT Server, Enterprise Edition
Jim Gray, Microsoft Research
Gray@Microsoft.com
research.Microsoft.com/~gray

2 Today’s Agenda
Windows NT® clustering: MSCS (Microsoft Cluster Server)
MSCS demo
MSCS background: design goals, terminology, architectural details
Setting up an MSCS cluster: hardware considerations
Cluster application issues
Q&A

3 Extra Credit
Included in your presentation materials but not covered in this session:
Reference materials
SCSI primer
Speaker’s notes
Hardware certification

4 MSCS In Action

5 High Availability Versus Fault Tolerance
High availability: mask outages through service restoration
Fault tolerance: mask local faults (RAID disks, uninterruptible power supplies, cluster failover)
Disaster tolerance: mask site failures; protects against fire, flood, sabotage, ...; redundant system and service at a remote site

6 Windows NT Clusters
What is clustering to Microsoft? A group of independent systems that appear as, and are managed as, a single system:
Common namespace
Services are “cluster-wide”
Ability to tolerate component failures
Components can be added transparently to users
Existing client connectivity is not affected by clustered applications

7 Microsoft Cluster Server
Two-node version available 97Q3
Commoditize fault tolerance (high availability): commodity hardware (no special hardware), easy to set up and manage
Lots of applications work out of the box
Multi-node scalability in the NT5 timeframe

8 MSCS Initial Goals
Manageability: manage nodes as a single system; perform server maintenance without affecting users; mask faults, so repair is non-disruptive
Availability: restart failed applications and servers; unavailability ~ MTTR / MTBF, so repair quickly; detect failures and warn administrators
Reliability: accommodate hardware and software failures; redundant system without mandating a dedicated “standby” solution
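As a back-of-the-envelope illustration of the unavailability ~ MTTR / MTBF point (the numbers below are illustrative, not from the slides):

    \[
    \text{Availability} = \frac{\text{MTBF}}{\text{MTBF}+\text{MTTR}},
    \qquad
    \text{Unavailability} = \frac{\text{MTTR}}{\text{MTBF}+\text{MTTR}} \approx \frac{\text{MTTR}}{\text{MTBF}}
    \quad (\text{MTTR} \ll \text{MTBF}).
    \]

With MTBF = 1,000 hours and MTTR = 1 hour, unavailability is about 0.1%, roughly nine hours of downtime per year; automatic restart that cuts MTTR to about six minutes drops that to roughly 0.01%.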

9 MSCS Cluster
[Diagram: client PCs connect to Server A and Server B; the two servers are joined by a heartbeat / cluster-management interconnect and share access to disk cabinet A and disk cabinet B.]

10 Failover Example
[Diagram: a browser reaches a Web site and a database hosted across Server 1 and Server 2; when one server fails, its Web site or database restarts on the surviving server, which takes over the shared Web site files and database files.]

11 Basic MSCS Terms
Resource - basic unit of failover
Group - collection of resources
Node - Windows NT® Server running cluster software
Cluster - one or more closely coupled nodes, managed as a single entity

12 MSCS Namespace: Cluster View
[Diagram: the cluster name sits at the top; beneath it are two node names, and each node hosts two virtual server names.]

13 MSCS Namespace: Outside World View
[Diagram: the cluster (network name WHECCLUS) contains Node 1 (network name WHECNode1) and Node 2 (network name WHECNode2), plus three virtual servers (network names WHEC-VS1, WHEC-VS2, WHEC-VS3) hosting Internet Information Server, SQL Server, MTS, “Falcon” (MSMQ), and Microsoft Exchange; each of these entities has its own IP address (omitted in the transcript) and network name.]

14 Windows NT Clusters: Target Applications
Application and database servers
Groupware and productivity application servers
Transaction processing servers
Internet Web servers
File and print servers

15 MSCS Design Philosophy
Shared nothing
Simplified hardware configuration
Remoteable tools; Windows NT manageability enhancements
Never take a “cluster” down: shell-game rolling upgrade
Microsoft® BackOffice™ product support
Provide clustering solutions for all levels of customer requirements
Eliminate cost and complexity barriers

16 MSCS Design Philosophy
Availability is the core of every release
Single-server image for administration and client interaction
Failover is provided for unmodified server applications and unmodified clients (cluster-aware server applications get richer features)
Failover for file and print is the default
Scalability is the phase-2 focus

17 Non-Features Of MSCS
Not lock-step fault tolerance
Cannot “move” running applications; MSCS restarts applications that are failed over to other cluster members
Cannot recover shared state between client and server (e.g., file position)
All client/server transactions should be atomic; standard client/server development rules still apply: ACID always wins
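Because MSCS restarts applications rather than migrating them, a client sees a broken connection and must reconnect and retry. The C sketch below shows the shape of such a retry loop; connect_to_virtual_server() and submit_transaction() are hypothetical helpers standing in for whatever protocol the application actually uses, and the sketch assumes each request is atomic, so resubmitting it after a failover is safe.

    #include <stdio.h>
    #include <windows.h>    /* Sleep() */

    /* Hypothetical helpers -- stand-ins for the application's own protocol. */
    int connect_to_virtual_server(const char *name);   /* 0 on success */
    int submit_transaction(const char *request);       /* 0 on success (committed) */

    /* Retry an atomic request against the cluster's virtual server name.
       After a failover the name resolves to the surviving node, so simply
       reconnecting and resubmitting is enough -- provided the request is atomic. */
    int do_request_with_retry(const char *vserver, const char *request, int max_tries)
    {
        for (int attempt = 1; attempt <= max_tries; attempt++) {
            if (connect_to_virtual_server(vserver) == 0 &&
                submit_transaction(request) == 0)
                return 0;                       /* done */
            fprintf(stderr, "attempt %d failed; waiting for failover/restart\n", attempt);
            Sleep(5000);                        /* give the cluster time to restart the app */
        }
        return -1;                              /* give up; report the error to the user */
    }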

18 Setting Up MSCS Applications

19 Attributes Of Cluster-Aware Applications
A persistence model that supports orderly state transition (database example: ACID transactions, database log recovery)
Client application support: IP clients only; how are retries supported?
No name-service location dependencies
A custom resource DLL is a good thing

20 MSCS Services For Application Support
Name service mapper: GetComputerName resolves to the virtual server name
Registry replication: a key and its underlying keys and values are replicated to the other node; replication is atomic and logged to ensure partitions in time are handled

21 Application Deployment Planning
System configuration is crucial: an adequate hardware configuration is required (you can’t run Microsoft BackOffice on a 32-MB, 75-MHz Pentium)
Plan preferred group owners
A good understanding of single-server performance is critical: see the Windows NT Resource Kit performance planning section, and understand working set size
What is acceptable performance to the business units?

22 Evolution Of Cluster-Aware Applications
Active/passive - general out-of-the-box applications
Active/active - applications that can run simultaneously on multiple nodes
Highly scalable - extending active/active through I/O shipping, process groups, and other techniques

23 Application Evolution
[Table: two-node support matrix (check marks lost in transcription) for Microsoft SQL Server, Microsoft Transaction Server (MTS), Internet Information Server (IIS), and Microsoft Exchange Server on Node 1 and Node 2.]

24 Evolution Of Cluster-Aware Applications
[Table: four-node support matrix (check marks lost in transcription) for Microsoft SQL Server, Microsoft Transaction Server (MTS), Internet Information Server (IIS), and Microsoft Exchange Server across Node 1 through Node 4.]

25 Resources: What Are They?
Resources are basic system components (physical disks, processes, databases, IP addresses, etc.) that provide a service to clients in a client/server environment
They are online in only one place in the cluster at a time
They can fail over from one system in the cluster to another

26 Resources
MSCS includes resource DLL support for:
Physical and logical disks
IP address and network name
Generic service or application
File share
Print queue
Internet Information Server virtual roots
Distributed Transaction Coordinator (DTC)
Microsoft Message Queue (MSMQ)
Resources support dependencies and are controlled via a well-defined interface; a group of resources offers a “virtual server”
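The “well-defined interface” is the Resource API: a resource DLL exports entry points that the resource monitor calls to create, start, health-check, stop, and tear down a resource. The skeleton below is only a rough sketch of that shape; the entry-point names follow the Platform SDK Resource API (Open, Online, LooksAlive, IsAlive, Offline, Terminate, Close), but the signatures are simplified here, and a real DLL also registers these routines through a function table returned from its Startup routine.

    /* Rough sketch of a custom resource DLL.  Entry-point names follow the
       Resource API; signatures are simplified -- see RESAPI.H in the Platform
       SDK for the exact types and the Startup/function-table plumbing. */
    #include <windows.h>

    typedef void *MY_RESID;   /* simplified stand-in for the SDK's resource id */

    MY_RESID MyOpen(const wchar_t *resourceName)   /* allocate per-resource context */
    {
        (void)resourceName;
        return HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, 64);
    }

    DWORD MyOnline(MY_RESID id)      /* start the service/application this resource wraps */
    {
        (void)id;
        return ERROR_SUCCESS;        /* or a Win32 error code if the start fails */
    }

    BOOL MyLooksAlive(MY_RESID id)   /* cheap, frequent health check */
    {
        (void)id;
        return TRUE;
    }

    BOOL MyIsAlive(MY_RESID id)      /* thorough, less frequent health check */
    {
        (void)id;
        return TRUE;
    }

    DWORD MyOffline(MY_RESID id)     /* cleanly stop the resource before failover */
    {
        (void)id;
        return ERROR_SUCCESS;
    }

    void MyTerminate(MY_RESID id)    /* forcefully stop if Offline does not finish */
    {
        (void)id;
    }

    void MyClose(MY_RESID id)        /* free per-resource context */
    {
        if (id) HeapFree(GetProcessHeap(), 0, id);
    }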

27 Cluster Service To Resource
[Diagram: the Windows NT cluster service initiates changes through a resource monitor and receives resource events back; the resource monitor drives the physical disk, IP address, generic application, and database resource DLLs, which in turn control the disk, the network, the application, and the database.]

28 Cluster Abstractions
Resource: a program or device managed by a cluster (e.g., file service, print service, database server); can depend on other resources (startup ordering); can be online, offline, paused, or failed
Resource Group: a collection of related resources; hosts resources; belongs to a cluster; the unit of co-location; involved in naming resources
Cluster: a collection of nodes, resources, and groups that cooperate for authentication, administration, and naming

29 Resources
Resources have:
Type: what it does (file, DB, print, Web, …)
An operational state (online/offline/failed)
Current and possible nodes
A containing resource group
Dependencies on other resources
Restart parameters (in case of resource failure)
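As a purely conceptual sketch (my illustration, not the actual MSCS data structures or API types), the properties listed above map onto something like this:

    /* Conceptual model of the resource properties on this slide --
       not the real MSCS structures. */
    #include <stddef.h>

    typedef enum { RES_OFFLINE, RES_ONLINE, RES_PAUSED, RES_FAILED } ResourceState;

    struct Group;

    typedef struct Resource {
        const char        *name;             /* e.g. "SQL Server" */
        const char        *type;             /* file, DB, print, Web, ... */
        ResourceState      state;            /* operational state */
        const char        *current_node;     /* node hosting it right now */
        const char       **possible_nodes;   /* nodes it may fail over to */
        struct Group      *group;            /* containing resource group */
        struct Resource  **depends_on;       /* must be online before this one */
        size_t             num_deps;
        int                restart_attempts; /* restart parameters on failure */
        int                restart_period_s;
    } Resource;

    typedef struct Group {                   /* unit of co-location and failover */
        const char  *name;                   /* e.g. "Payroll group" */
        Resource   **resources;
        size_t       num_resources;
    } Group;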

30 Resource
Fails over (moves) from one machine to another
Examples: logical disk, IP address, server application, database
May depend on another resource
Has well-defined properties controlling its behavior

31 Resource Dependencies
A resource may depend on other resources
A resource is brought online after the resources it depends on
A resource is taken offline before the resources it depends on
All dependent resources must fail over together
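A minimal, self-contained illustration of that ordering rule (my own sketch, not MSCS code): bringing a resource online first recursively brings its dependencies online, and taking it offline works in the reverse order. The resource names reuse the dependency example on slide 32.

    /* Illustration of dependency-ordered online/offline -- not MSCS code. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct Res {
        const char   *name;
        int           online;
        struct Res  **deps;       /* resources this one depends on */
        size_t        num_deps;
    } Res;

    void bring_online(Res *r)
    {
        if (r->online) return;
        for (size_t i = 0; i < r->num_deps; i++)
            bring_online(r->deps[i]);          /* dependencies come up first */
        r->online = 1;
        printf("online:  %s\n", r->name);
    }

    void take_offline(Res *r)
    {
        if (!r->online) return;
        r->online = 0;                         /* the dependent resource goes down first */
        printf("offline: %s\n", r->name);
        for (size_t i = 0; i < r->num_deps; i++)
            take_offline(r->deps[i]);          /* simplification: assumes no other dependents */
    }

    int main(void)
    {
        Res diskE = { "Drive E:", 0, NULL, 0 };
        Res ip    = { "IP address", 0, NULL, 0 };
        Res *dbDeps[] = { &diskE, &ip };
        Res db    = { "Database", 0, dbDeps, 2 };

        bring_online(&db);    /* Drive E:, IP address, then Database */
        take_offline(&db);    /* Database first, then its dependencies */
        return 0;
    }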

32 Dependency Example
[Diagram: a database resource and a generic application depend on an IP address resource and on the Drive E: and Drive F: disk resources.]

33 Group Example
[Diagram: the “Payroll group” contains a database resource, a generic application, an IP address resource, and the Drive E: and Drive F: disk resources.]

34 MSCS Architecture
[Diagram: Cluster Administrator and Cluster.exe call through the Cluster API DLL (over the network, via the Cluster API stub) into the cluster service, whose components include the Log Manager, Global Update Manager, Database Manager, Membership Manager, Event Processor, Checkpoint Manager, Object Manager, Node Manager, Failover Manager, and Resource Manager; resource monitors host application, physical, and logical resource DLLs through the Resource API, and the nodes communicate over a reliable cluster transport with heartbeat.]

35 MSCS Architecture
The cluster service is composed of the following components:
Failover Manager (FM)
Resource Manager (RM)
Node Manager (NM)
Membership Manager (MM)
Event Processor (EP)
Database Manager (DM)
Object Manager (OM)
Global Update Manager (GUM)
Log Manager (LM)
Checkpoint Manager (CM)
More about these in the next session

36 Setting Up An MSCS Cluster

37 MSCS Key Components
Two servers: multiprocessor or uniprocessor; heterogeneous servers are allowed
Shared SCSI bus: SCSI HBAs, SCSI RAID HBAs, hardware RAID boxes
Interconnect: many types can be supported; remember, two NICs per node, with PCI for the cluster interconnect
A complete MSCS HCL configuration

38 MSCS Setup
Most common problems:
Duplicate SCSI IDs on adapters
Incorrect SCSI cabling
SCSI card order on the PCI bus
Configuration of SCSI firmware
Let’s walk through getting a cluster operational

39 Test Before You Build
Bring each system up independently and verify:
Network adapters (cluster interconnect and organization interconnect)
SCSI and disk function
NTFS volume(s)

40 Top Ten Setup “Concerns”
10. SCSI is not well known. Use the MSCS and IHV setup documentation, and consider the SCSI book referenced for this session
9. Build a support model that meets clustering requirements. For example, in a cluster, components are paired exactly (e.g., SCSI BIOS revision levels); include this in your plans
8. Build extra time into your deployment planning to accommodate cluster setup, both hardware (e.g., SCSI setup) and software (e.g., installation across cluster nodes)
7. Know the certification process and its support implications

41 Top Ten Setup “Concerns”
6. Applications will become more cluster-aware over time, with better setup, diagnostics, and documentation; in the meantime, plan and test accordingly
5. Clustering will affect your server maintenance and upgrade methodologies; plan accordingly
4. Use multiple network adapters and hubs to eliminate single points of failure (everywhere possible)
3. Today’s clustering solutions are more complex to install and configure than single servers; plan your deployments accordingly
2. Make sure that your cabinet solutions and peripherals both fit and function well; consider the serviceability implications
1. Cabling is a nightmare. Color-coded, heavily documented, Y-cable-inclusive, maintenance-designed products are highly desirable

42 Cluster Management Tools
Cluster Administrator: monitor and manage the cluster
Cluster CLI/COM: command-line and COM interfaces
Minor modifications to existing tools:
Performance Monitor - adds the ability to watch the entire cluster
Disk Administrator - adds understanding of shared disks
Event logger - broadcasts events to all nodes
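The same management operations are also reachable programmatically through the Cluster API in clusapi.h. The fragment below is a hedged sketch of moving a group to another node (the group name “Payroll group” is simply the example from slide 33, and error handling is abbreviated); consult the Platform SDK for the exact semantics, and note that MoveClusterGroup may return ERROR_IO_PENDING while the move completes asynchronously.

    /* Sketch: move a resource group with the Cluster API (clusapi.h).
       Link with clusapi.lib. */
    #include <windows.h>
    #include <clusapi.h>
    #include <stdio.h>

    int wmain(void)
    {
        HCLUSTER hCluster = OpenCluster(NULL);          /* NULL = the local cluster */
        if (!hCluster) {
            fprintf(stderr, "OpenCluster failed: %lu\n", GetLastError());
            return 1;
        }

        HGROUP hGroup = OpenClusterGroup(hCluster, L"Payroll group");   /* example name */
        if (!hGroup) {
            fprintf(stderr, "OpenClusterGroup failed: %lu\n", GetLastError());
            CloseCluster(hCluster);
            return 1;
        }

        /* A NULL destination node asks the cluster to pick the best node. */
        DWORD status = MoveClusterGroup(hGroup, NULL);
        if (status == ERROR_SUCCESS || status == ERROR_IO_PENDING)
            printf("Group move initiated (status %lu)\n", status);
        else
            fprintf(stderr, "MoveClusterGroup failed: %lu\n", status);

        CloseClusterGroup(hGroup);
        CloseCluster(hCluster);
        return (status == ERROR_SUCCESS || status == ERROR_IO_PENDING) ? 0 : 1;
    }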

43 MSCS Reference Materials
In Search of Clusters: The Coming Battle in Lowly Parallel Computing, Gregory F. Pfister. ISBN
The Book of SCSI, Peter M. Ridge. ISBN

44 The Basics Of SCSI
Why SCSI?
Types of interfaces
Caching and performance…
RAID
The future…

45 Why SCSI?
Faster than IDE - an intelligent card/drive
Uses less processor time
Can transfer data at up to 100 MB/sec
More devices on a single chain - up to 15
Wider variety of devices: DASD, scanners, CD-ROM writers and optical drives, tape drives

46 Types Of Interfaces
SCSI and SCSI II: 50-pin, 8-bit, max transfer = 10 MB/s (early versions 1.5 to 5 MB/s); internal transfer rate = 4 to 8 MB/s
Wide SCSI: 68-pin, 16-bit, max transfer = 20 MB/s; internal transfer rate = 7 to 15.5 MB/s
Ultra SCSI: 50-pin, 8-bit, higher transfer rate, max transfer = 20 MB/s
Ultra Wide: 68-pin, 16-bit, max transfer rate = 40 MB/s; internal transfer rate = 7 to 30 MB/s
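The pattern behind these maximum rates (my arithmetic, not from the slide) is simply bus width times bus clock:

    \[ \text{max transfer} \approx \text{bus width (bytes)} \times \text{bus clock} \]

so Fast SCSI is 1 byte × 10 MHz = 10 MB/s, Fast Wide SCSI is 2 × 10 = 20 MB/s, Ultra SCSI doubles the clock to 20 MHz for 20 MB/s, and Ultra Wide combines both for 2 × 20 = 40 MB/s.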

47 Performance Factors
Cache on the drive or controller
Caching in the OS
Different variables: seek time, transfer rates

48 Redundant Array Of Inexpensive Disks (RAID)
Developed from a paper published in 1987 at the University of California, Berkeley
The idea is to combine multiple inexpensive drives (eliminating the SLED - single large expensive drive)
Redundancy is provided by storing parity information
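To make the parity idea concrete, here is a small self-contained illustration (mine, not from the presentation): the parity block is the XOR of the data blocks, and any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity.

    /* RAID-style parity: parity = XOR of the data blocks; a single lost block
       is recovered by XOR-ing the remaining blocks with the parity. */
    #include <stdio.h>
    #include <string.h>

    #define NDRIVES 3
    #define BLOCK   8

    int main(void)
    {
        unsigned char data[NDRIVES][BLOCK] = { "drive-0", "drive-1", "drive-2" };
        unsigned char parity[BLOCK] = {0};

        /* Compute the parity block across all data drives. */
        for (int d = 0; d < NDRIVES; d++)
            for (int i = 0; i < BLOCK; i++)
                parity[i] ^= data[d][i];

        /* Pretend drive 1 is lost; rebuild its block from the others + parity. */
        unsigned char rebuilt[BLOCK];
        memcpy(rebuilt, parity, BLOCK);
        for (int d = 0; d < NDRIVES; d++)
            if (d != 1)
                for (int i = 0; i < BLOCK; i++)
                    rebuilt[i] ^= data[d][i];

        printf("rebuilt drive 1: %.8s\n", rebuilt);   /* prints "drive-1" */
        return 0;
    }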

49 The Future For SCSI
Faster interfaces - why? Fibre Channel:
An optical standard, proposed as part of SCSI-3 (not final)
Up to 100 MB/s transfer
Still using Ultra Wide SCSI inside enclosures
Drives with optical interfaces are not yet available in quantity and cost more than SCSI

50 The Future Of SCSI
Fibre Channel arbitrated loop:
Ring instead of bus architecture
Can support up to 126 devices/hosts
Hot pluggable through the use of a port bypass circuit, so there is no disruption of the loop as devices are added or removed
Generally implemented using a backplane design

51 HCL List For MSCS
Servers: on the normal Windows NT HCL; self-test of MP machines coming soon
MSCS SCSI component HCL: tested by WHQL; must pass the Windows NT HCT as well
MSCS interconnect HCL: not required to pass 100% of the HCT (e.g., point-to-point adapters)

52 MSCS System Certification Process
[Diagram: the Windows NT 4.0+ Server, SCSI, MSCS SCSI, and Network HCLs all feed into a complete MSCS configuration ready for self-test.]

53 Testing Phases
HW compatibility (24 hours): SCSI and interconnect testing
One-node testing (24 hours): eight clients
Two-node with failover (72 hours): eight clients with asynchronous failovers
Stress testing (24 hours): dual-initiator I/O, split-brain problems, simultaneous reboots

54 Final MSCS HCL
Only complete configurations are supported
Self-test results are sent to Microsoft; logs are checked and the configuration reviewed
The HCL is updated on the Web and for the next major Windows NT release
For more details see the MSCS Certification document

