
VCS I/O Fencing

I/O Fencing: After completing this topic, you will be able to define I/O fencing.

Split Brain: A failure of the private interconnects presents symptoms identical to a failed node. For example, if a system in a two-node cluster fails, it stops sending heartbeats over the private interconnects and the remaining node takes corrective action. However, if only the private interconnects fail, the symptoms are the same: each node determines that its peer has departed and attempts to take corrective action. This typically results in data corruption, because both nodes attempt to take control of the data storage in an uncoordinated manner.

Understanding Split Brain
- Split brain can cause database corruption.
- One node must survive; the other must be shut down.
- VERITAS I/O Fencing handles all scenarios:
  - Network failure or system failure?
  - System failure or system hang?

I/O Fencing I/O fencing is a feature within a kernel module of Storage Foundation designed to guarantee data integrity, even in the case of faulty cluster communications causing a split brain condition.

Need for I/O Fencing Split brain is an issue faced by all cluster solutions. To provide high availability, the cluster must be capable of taking corrective action when a node fails. In Storage Foundation for Oracle RAC, this is carried out by the reconfiguration of CVM, CFS, and RAC to change membership. Problems arise when the mechanism used to detect the failure of a node breaks down.

SCSI-3 Persistent Reservations SCSI-3 Persistent Reservations (SCSI-3 PR) are required for I/O fencing and resolve the issues of using SCSI reservations in a clustered SAN environment. SCSI-3 PR enables access for multiple nodes to a device and simultaneously blocks access for other nodes. SCSI-3 reservations are persistent across SCSI bus resets and support multiple paths from a host to a disk. Each system registers its own “key” with a SCSI-3 device. Multiple systems registering keys form a membership and establish a reservation, typically set to “Write Exclusive Registrants Only.”
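For illustration only, the registration and reservation operations described above can be exercised outside of VCS with the sg_persist utility from the sg3_utils package; the device path /dev/sdb and the key value 0xA001 below are placeholders, not values used by VCS, and the syntax shown is a sketch on a Linux host rather than a VCS procedure.
Register this node's key with the device:
# sg_persist --out --register --param-sark=0xA001 /dev/sdb
Read back all keys registered with the device:
# sg_persist --in --read-keys /dev/sdb
Establish a Write Exclusive Registrants Only (WERO) reservation, SCSI PR type 5:
# sg_persist --out --reserve --param-rk=0xA001 --prout-type=5 /dev/sdb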

SCSI-3 Persistent Reservation Mechanism
- The WERO setting enables only registered systems to perform write operations. For a given disk, only one reservation can exist amidst numerous registrations.
- With SCSI-3 PR technology, blocking write access is as simple as removing a registration from a device. Only registered members can "eject" the registration of another member. A member wishing to eject another member issues a "preempt and abort" command.
- Ejecting a node is final and atomic; an ejected node cannot eject another node. In VCS, a node registers the same key for all paths to the device, so a single preempt and abort command ejects a node from all paths to the storage device.
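Continuing the sg_persist illustration (again with placeholder device and key values, not VCS commands), a registered node can eject another node's registration with a preempt-and-abort:
Eject the node holding key 0xB002, using this node's registered key 0xA001:
# sg_persist --out --preempt-abort --param-rk=0xA001 --param-sark=0xB002 --prout-type=5 /dev/sdb
After this command, the node that registered 0xB002 can no longer write to the device. Within VCS, the equivalent operations are performed by the vxfen driver; administrators do not issue them manually.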

I/O Fencing: How Does It Work?
1. Eject the departed node from the coordinator disks.
2. Eject the departed node from the data disks.
3. The ejected node cannot write to the disks.

I/O Fencing Components: Data Disks
- Data disks are standard disk devices for data storage and are either physical disks or RAID Logical Units (LUNs). These disks must support SCSI-3 PR and are part of standard VxVM or CVM disk groups.
- CVM is responsible for fencing data disks on a disk group basis. Disks added to a disk group are automatically fenced, as are new paths discovered to a device.

I/O Fencing Components (contd.): Coordinator Disks
Coordinator disks are three standard disks or LUNs set aside for I/O fencing during cluster reconfiguration. Coordinator disks do not serve any other storage purpose in the VCS configuration. Users cannot store data on these disks or include the disks in a disk group for user data. The coordinator disks can be any three disks that support SCSI-3 PR; they cannot be the special devices that array vendors use. These disks provide a lock mechanism that determines which nodes get to fence off data drives from other nodes: a node must eject a peer from the coordinator disks before it can fence the peer from the data drives.

I/O Fencing Components (contd.): Dynamic Multipathing Devices with I/O Fencing
You can configure coordinator disks to use the Veritas Volume Manager Dynamic Multipathing (DMP) feature. DMP allows coordinator disks to take advantage of path failover and of the dynamic addition and removal capabilities of DMP.
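As an illustrative check (the device name is a placeholder, and this step is not part of the original procedure), the paths that DMP has discovered for a candidate coordinator disk can be listed with:
# vxdmpadm getsubpaths dmpnodename=c1t1d0s2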

Implementing I /O Fencing  Verifying the nodes see the same disk For example, an EMC disk is accessible by the /dev/rdsk/c2t13d0s2 path on node A and the /dev/rdsk/c2t11d0s2 path on node B. From node A, enter:  # vxfenadm -i /dev/rdsk/c2t13d0s2 Vendor id : EMC Product id : SYMMETRIX Revision : 5567Serial Number : a The same serial number information should appear when you enter the equivalent command on node B using the /dev/rdsk/c2t11d0s2 path.

Implementing I/O Fencing (contd.): Testing the Disks Using the vxfentsthdw Script
1. Make sure system-to-system communication is functioning properly.
2. From one node, start the utility. Do one of the following:
   If you use ssh for communication:
   # /opt/VRTSvcs/vxfen/bin/vxfentsthdw
   If you use rsh for communication:
   # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -n
3. After reviewing the overview and the warning that the tests overwrite data on the disks, confirm that you want to continue the process and enter the node names.
Warning: The tests overwrite and destroy data on the disks unless you use the -r option (a non-destructive run is sketched below).
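A non-destructive run can be sketched as follows; the -r option performs read-only testing as noted in the warning above, and the node names used when prompted (for example, galaxy and nebula) and the disk path are placeholders for your own environment:
# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r
When prompted, supply the two node names and the path to the disk to be tested from each node.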

Implementing I/O Fencing (contd.): Initializing the Disks
- Verify that the ASL for the disk array is installed on each of the nodes:
  # vxddladm listsupport all
- Scan all disk drives and their attributes:
  # vxdisk scandisks
- Use the vxdisksetup command to initialize a disk as a VxVM disk:
  # vxdisksetup -i c2t13d0s2 format=cdsdisk
- Repeat this command for each disk you intend to use as a coordinator disk (a one-line loop is sketched after this list).
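As a sketch only, the three coordinator candidates can be initialized in one pass; the second and third device names here are placeholders, so substitute the disks you have actually chosen:
# for d in c2t13d0s2 c2t14d0s2 c2t15d0s2; do vxdisksetup -i $d format=cdsdisk; done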

Implementing I/O Fencing (contd.): Requirements for Coordinator Disks
- You must have three coordinator disks.
- Each of the coordinator disks must use a physically separate disk or LUN.
- Each of the coordinator disks should exist on a different disk array, if possible.
- You must initialize each disk as a VxVM disk (a verification sketch follows this list).
- The coordinator disks must support SCSI-3 persistent reservations.
- The coordinator disks must exist in a disk group (for example, vxfencoorddg).
- Symantec recommends using hardware-based mirroring for coordinator disks.
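A quick way to confirm that each candidate disk has been initialized as a VxVM disk (an optional check, not a step in the original procedure) is to list the disks and inspect their type and status:
# vxdisk -o alldgs list
Each coordinator candidate should appear with the cdsdisk format and an online status.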

Implementing I/O Fencing (contd.): Creating the Coordinator Disk Group and Setting the Coordinator Attribute
To create the vxfencoorddg disk group:
1. On any node, create the disk group by specifying the device name of one of the disks:
   # vxdg -o coordinator=on init vxfencoorddg c1t1d0s0
2. Add the other two disks to the disk group:
   # vxdg -g vxfencoorddg adddisk c2t1d0s0
   # vxdg -g vxfencoorddg adddisk c3t1d0s0
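An optional verification, not part of the original procedure, is to display the disk group details and confirm that the coordinator attribute is set:
# vxdg list vxfencoorddg
The flags field of the output should indicate that the group is a coordinator disk group.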

Implementing I/O Fencing (contd.)
Stopping VCS on all nodes:
# hastop -all
Configuring the /etc/vxfendg file for I/O fencing:
1. Deport the disk group:
   # vxdg deport vxfencoorddg
2. Import the disk group with the -t option to avoid automatically importing it when the nodes restart:
   # vxdg -t import vxfencoorddg
3. Deport the disk group again. Deporting the disk group prevents the coordinator disks from serving other purposes:
   # vxdg deport vxfencoorddg
4. On all nodes, type:
   # echo "vxfencoorddg" > /etc/vxfendg
   Do not use spaces between the quotes in the "vxfencoorddg" text. This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group (a quick check is shown after this list).
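To confirm that the file was written identically on every node, simply display it; per step 4 above it should contain only the coordinator disk group name:
# cat /etc/vxfendg
vxfencoorddg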

Implementing I/O Fencing (contd.): The /etc/vxfentab File
- Based on the contents of the /etc/vxfendg and /etc/vxfenmode files, the rc script creates the /etc/vxfentab file for use by the vxfen driver when the system starts.
- The /etc/vxfentab file is a generated file; do not modify it. It is created when you start the I/O fencing driver.
- Example contents for raw disks:
  /dev/rdsk/c1t1d0s2
  /dev/rdsk/c2t1d0s2
  /dev/rdsk/c3t1d0s2
- Example contents for DMP disks:
  /dev/vx/rdmp/c1t1d0s2
  /dev/vx/rdmp/c2t1d0s2
  /dev/vx/rdmp/c3t1d0s2
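Once fencing is running, the registrations on the coordinator disks listed in /etc/vxfentab can be displayed with vxfenadm. The read-keys option differs between VCS releases (-g on some, -s on others), so treat the following as a sketch rather than an exact invocation:
# vxfenadm -g all -f /etc/vxfentab
The output lists the key that each cluster node has registered on every coordinator disk.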

Implementing I/O Fencing (contd.): Updating the /etc/vxfenmode File
On all cluster nodes, depending on the SCSI-3 mechanism you have chosen, type one of the following:
- For a DMP configuration:
  # cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode
- For a raw device configuration:
  # cp /etc/vxfen.d/vxfenmode_scsi3_raw /etc/vxfenmode
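For reference, the copied template sets the fencing mode and the disk policy; the exact contents vary by release, so the two lines below (shown for the DMP case) are indicative only:
vxfen_mode=scsi3
scsi3_disk_policy=dmp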

Implementing I/O Fencing (contd.): Modifying the VCS Configuration to Use I/O Fencing
- Save the configuration and make it read-only:
  # haconf -dump -makero
- Stop VCS on all nodes:
  # hastop -all
- Make a backup copy of the main.cf file:
  # cp main.cf main.orig
- On one node, use vi or another text editor to edit the main.cf file. Modify the list of cluster attributes by adding the UseFence attribute and assigning it the value SCSI3 (see the example after this list). Save and close the file.
- Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
  # hacf -verify /etc/VRTSvcs/conf/config
- Using rcp or another utility, copy the VCS configuration file from the node on which it was edited to the remaining cluster nodes.
- On each node, restart VCS:
  # /opt/VRTS/bin/hastart
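An illustrative fragment of the resulting cluster definition in main.cf; the cluster name vcs_clus is a placeholder, and only the UseFence line is the addition made in this step:
cluster vcs_clus (
    UseFence = SCSI3
    )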

Implementing I/O Fencing (contd.): Starting I/O Fencing on Each Node
VxFEN, the I/O fencing driver, may already be running, so you need to restart the driver for the new configuration to take effect.
Stop the I/O fencing driver:
# /etc/init.d/vxfen stop
Start the I/O fencing driver:
# /etc/init.d/vxfen start

Verifying GAB Port Membership
galaxy# /sbin/gabconfig -a
GAB Port Memberships
=============================================
Port a gen 4a1c0001 membership 01
Port b gen g8ty0002 membership 01
Port h gen d membership 01
Port functions:
a - GAB
b - I/O fencing
h - VCS (VERITAS Cluster Server: high availability daemon)

Verifying the I/O Fencing Configuration
# vxfenadm -d
The output of this command should look similar to:
I/O Fencing Cluster Information:
================================
Cluster Members:
* 0 (galaxy)
  1 (nebula)
RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)

How I/O Fencing Works in Different Event Scenarios