
1 High Availability options Explored with SQL Server
Balmukund Lakhani Technical Lead – SQL Support | Microsoft GTSC | Team Blog –

2 About me…
Working with SQL technology since 2001 (almost 10 years)
Currently working as Technical Lead with the Microsoft SQL Support team
(Career timeline: Ramco Systems, in ERP and onsite roles, then Microsoft Premier Field Engineering, now SQL Support)

3 Agenda
Why High Availability?
Backup/restore-related technologies
Database Snapshots
Log Shipping
Database Mirroring
Failover Clustering
Replication
SQL Server "Denali" High Availability

Intro: Today many companies require some or all of their critical data to be highly available. For example, a company requiring "24x7" availability is an online merchant, whose product databases and online sales applications must be available at all times; otherwise sales (and revenue) are lost. Another example is a hospital, where computerized patient records must be available at all times or a human life could be lost. In a perfect world, this critical data would remain available and nothing would threaten its availability. In the real world, however, there are numerous problems that can cause data to become unavailable. If high availability of the data is required, a proactive strategy must be formulated to mitigate the threats to availability, commonly called a "high-availability strategy". Such strategies always call for the implementation of multiple technologies that help maintain data availability; there is no single high-availability technology that can meet all requirements. The Microsoft® SQL Server® 2008 data management system includes a variety of technologies that can be used to increase and/or maintain high availability of critical data. This white paper will introduce these technologies and describe when and how they should be used.

4 Business Needs
RTO (Recovery Time Objective): the duration of acceptable application downtime, whether from an unplanned outage or from scheduled maintenance/upgrades
RPO (Recovery Point Objective): the amount of data loss from an outage that can be tolerated

The two main requirements around high availability are commonly known as RTO and RPO. RTO stands for Recovery Time Objective and is the maximum allowable downtime when a failure occurs. RPO stands for Recovery Point Objective and is the maximum allowable data loss when a failure occurs. Apart from specifying a number, it is also necessary to contextualize the number. For example, when specifying that a database must be available 99.99% of the time, is that 99.99% of 24x7 or is there an allowable maintenance window? A requirement that is often overlooked is workload performance. Some high-availability technologies can affect workload performance when implemented (either outright or when configured incorrectly). Also, workload performance after a failover must be considered: should the workload continue with the same throughput as before, or is some temporary degradation acceptable? Some examples of requirements are:
> Database X must be available as close to 24x7 as possible and no data loss can be tolerated. Database X also relies on stored procedures in the master database and must be hosted on a SQL Server 2008 instance on a server in security domain Y.
> Tables A, B, and C in database Z must be available from 8 A.M. to 6 P.M. on weekdays, no data loss can be tolerated, and they must all be available together. After a failover, workload performance cannot drop.

5 Causes of Downtime and Data Loss
Planned downtime: performing database maintenance; performing batch operations; performing an upgrade
Unplanned downtime: data center failure; server failure; I/O subsystem failure; human error

> Database maintenance can cause downtime if an operation is performed that needs to place a lock on a table for an extended period of time. Examples of such operations include: creating or rebuilding a nonclustered index (can prevent table modifications); creating, dropping, or rebuilding a clustered index (can prevent table reads and modifications).
> Performing batch operations can cause downtime through blocking locks. For example, consider a table that holds the year-to-date sales for a company. At the end of each month, the data from the oldest month must be deleted. If the number of rows being deleted is large enough, a blocking lock may be required, which prevents updates to the table while the delete operation is being performed. A similar scenario exists where data is being loaded into a table from another source.
> Performing an upgrade always incurs some amount of downtime, because there comes a point where the application must disconnect from the database that is being upgraded. Even with a hot standby technology (such as synchronous database mirroring), there is (at best) a very short time between the application disconnecting and then connecting to the redundant copy of the database.
Data center failure: This category of failures takes the entire data center offline, rendering any local redundant copies of the data useless. Examples include natural disasters, fire, power loss, or failed network connectivity.
Server failure: This category of failures takes the server hosting one or more SQL Server instances offline. Examples include failed power supply, failed CPU, failed memory, or operating system crashes.
I/O subsystem failure: This category of failures involves the hardware used to physically store the data and thus directly affects the data itself, usually causing data loss (which then may lead to downtime to perform disaster recovery). Examples include a drive failure, a RAID controller failure, or an I/O subsystem software bug causing corruption.
Human error: This category of failures involves someone (a DBA, a regular user, or an application programmer introducing an application software bug) making a mistake that damages data, usually causing data loss (and then potentially downtime to recover the data). Examples include dropping a table, deleting or updating data in a table without specifying a predicate, setting a database offline, or shutting down a SQL Server instance. Human errors are usually accidental, but they can also be malicious.

6 What do we need?
Minimize or avoid service downtime, whether planned or unplanned
When components fail, service interruption is brief or non-existent
Automatic failover
Eliminate single points of failure (as affordable)
Redundant components
Fault-tolerant servers

It should be noted here that high availability is not the same as disaster recovery, although the two terms are often (erroneously) interchanged. High availability is about putting a set of technologies into place before a failure occurs to prevent the failure from affecting the availability of data. Disaster recovery is about taking action after a failure has occurred to recover any lost data and make the data available again.

7 Single-Instance Technologies

8 Backup, Restore, and Related Technologies
Partial Database Availability and Online Piecemeal Restore
Instant File Initialization
Mirrored Backups
Backup Checksums
Database Snapshots
Backup Compression (introduced in SQL Server 2008 Enterprise; available in Standard Edition from SQL Server 2008 R2)

Fast recovery: A database cannot be made available until recovery has completed, with two exceptions. In SQL Server 2008 Enterprise, the database is made available after the redo phase has completed during regular crash recovery or as part of a database mirroring failover. In these two cases, the database comes online faster and downtime is reduced. This feature is called fast recovery.
Instant File Initialization: Without instant file initialization, a restore must first create and zero-initialize the data files. For example, with a 1-terabyte data file and a backup containing 120 gigabytes (GB) of data, the entire 1-terabyte data file must be created and zero-initialized, even though only 120 GB of data will be restored into it. Granting the SQL Server service account the Windows "Perform volume maintenance tasks" right allows data files to be created without zeroing, so the restore can begin sooner.
Resource Governor: In a high-availability strategy, Resource Governor is a different kind of preventative technology, in that it does not provide any protection against the usual causes of downtime. Instead it enables the DBA to configure SQL Server to protect against unforeseen resource contention that may manifest itself to users as downtime.
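A minimal T-SQL sketch of online piecemeal restore for partial database availability (database, filegroup, and backup file names are hypothetical; Enterprise Edition allows the database to stay online for filegroups already restored, and any log backups taken since must also be applied before the final recovery):

-- Step 1: bring the PRIMARY filegroup online first
RESTORE DATABASE SalesDB FILEGROUP = 'PRIMARY'
   FROM DISK = 'D:\Backups\SalesDB_full.bak'
   WITH PARTIAL, NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'D:\Backups\SalesDB_tail.trn' WITH RECOVERY;
-- SalesDB is now online; tables on the PRIMARY filegroup are queryable.

-- Step 2: restore the remaining filegroup later, while the database stays online
RESTORE DATABASE SalesDB FILEGROUP = 'Archive'
   FROM DISK = 'D:\Backups\SalesDB_full.bak'
   WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'D:\Backups\SalesDB_tail.trn' WITH RECOVERY;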

9 Minimizing downtime
Backup compression
Hot-add CPU
Hot-add memory
Online operations

Backup compression demo figures: uncompressed 4.01 GB, 295.8 s, 219 s; compressed 34.2 MB, 126.7 s, 126 s.
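A hedged example of taking a compressed, checksummed backup and then comparing native versus compressed sizes from the backup history (database name and path are hypothetical):

BACKUP DATABASE SalesDB
   TO DISK = 'D:\Backups\SalesDB_compressed.bak'
   WITH COMPRESSION, CHECKSUM, INIT;

-- msdb records both sizes for each backup set
SELECT TOP (5) database_name, backup_size, compressed_backup_size, backup_finish_date
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;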

10 Database Snapshots

11 What Are Database Snapshots?
Read-only, consistent view of a database at a specified point in time
Modifying data: copy-on-write of affected pages
Reading data: accesses the snapshot if the data has changed; redirected to the original database otherwise

It should also be noted that database snapshots are not a substitute for a comprehensive backup strategy. If the source database becomes corrupt or goes offline, the database snapshot is similarly affected (the converse is never true, however). Because of this linkage, a database snapshot cannot be used to protect against loss of the entire database, nor can it be used to perform more targeted disaster recovery operations such as page restore.

Database snapshots: A database snapshot is a read-only copy of a database taken at a particular point in time. Users can query a database snapshot in the same way they query an ordinary database. A database snapshot can be used to recover data and add it back into the original database quickly and easily. For a more detailed description of database snapshots, see "Using Database Snapshots" later in this module. While database mirroring provides continuous support, there are many scenarios in which a simple snapshot of the database is useful as a "warm to cold" standby database, as a test and development database, or simply as a reporting database. A database snapshot is a read-only, consistent view of a database from a specified point in time.

Modifying Data: SQL Server 2005 uses copy-on-write technology to implement database snapshots without incurring the overhead of creating a complete copy of the database. A database snapshot is initially empty, employing NTFS sparse files to allocate physical disk space only when required. When a page in the source database is first updated, the original image of that page is copied to the database snapshot. (Subsequent updates to the same page incur no additional copying overhead.) If a page is never modified, it is never copied. If you drop a file from the source database, the entire contents of the file will be copied to the database snapshot.

Reading Data: A user accessing a database snapshot will see the copy of a page in the snapshot only if that page has changed since the snapshot was created. Otherwise, the user is redirected to the corresponding page in the source database. This redirection is transparent to the user.
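A minimal sketch of creating a snapshot, assuming the AdventureWorks sample database with the logical data file name AdventureWorks_Data (the snapshot file path is hypothetical):

CREATE DATABASE AdventureWorks_dbsnapshot_1800
ON ( NAME = AdventureWorks_Data,
     FILENAME = 'D:\Snapshots\AdventureWorks_Data_1800.ss' )
AS SNAPSHOT OF AdventureWorks;
-- The .ss file is an NTFS sparse file; it grows only as source pages are first modified.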

12 Using Database Snapshot to Recover Data
Scenario / Example code or steps:
Undeleting rows: INSERT INTO Production.WorkOrderRouting SELECT * FROM AdventureWorks_dbsnapshot_1800.Production.WorkOrderRouting
Undoing an update: UPDATE HumanResources.Department SET Name = (SELECT Name FROM AdventureWorks_dbsnapshot_1800.HumanResources.Department WHERE DepartmentID = 1) WHERE DepartmentID = 1
Recovering a dropped object: script the object from the snapshot and recreate it in the source database

You can use a database snapshot to recover from an accidental change to a database by applying the data from the database snapshot to the source database. However, you should be aware that a database snapshot provides a very lightweight recovery mechanism that should not be used as a substitute for implementing a comprehensive backup and restore strategy. A variety of situations can result in data loss, ranging from the accidental deletion of a table or modification of a single row to corruption or loss of a database file. The nature of a database snapshot makes it ideal for recovering from application or user errors that cause rows to be deleted or updated accidentally, or tables to be dropped. Restoring data from a database snapshot is quicker and easier than performing a restore operation from a database backup. However, the copy-on-write mechanism prevents database snapshots from recovering a suspect database comprising corrupt files; this state requires restoring the missing files from a database backup. You should also note that you can only recover changes made up to the point in time at which the snapshot was taken. Recovering subsequent changes requires restoring the database from a backup and then rolling forward using the most recent transaction log backups.

The following scenarios are examples of when to use a database snapshot for recovery purposes. You should remember that these types of recovery statements recover only the data that you explicitly specify, and that when you restore data using these methods, any other modifications made to the data in the subsequent period will be lost.

Scenario 1: Undeleting rows. You can recover rows deleted from a table by copying them from the corresponding snapshot. For example, a user named Fred has reported that all the rows in the Production.WorkOrderRouting table in the AdventureWorks database have disappeared. You can restore the missing rows from the AdventureWorks_dbsnapshot_1800 database using the statements shown in the following code sample:
ALTER TABLE Production.WorkOrderRouting NOCHECK CONSTRAINT CK_WorkOrderRouting_ActualEndDate
INSERT INTO Production.WorkOrderRouting SELECT * FROM AdventureWorks_dbsnapshot_1800.Production.WorkOrderRouting
ALTER TABLE Production.WorkOrderRouting CHECK CONSTRAINT CK_WorkOrderRouting_ActualEndDate
It is common practice to disable any constraints when copying a large number of rows into a table, for performance purposes. In other cases, it may be necessary to disable constraints temporarily to prevent data from being rejected when it is reapplied.

Scenario 2: Undoing an update. You can use a similar technique of copying data from the snapshot to undo changes made to selected rows. Fred has now reported that he has mistakenly changed the name of department 1 in the HumanResources.Department table, but cannot remember what value it had before and so has not been able to change it back.
You can correct the error using the following statement:
UPDATE HumanResources.Department SET Name = (SELECT Name FROM AdventureWorks_dbsnapshot_1800.HumanResources.Department WHERE DepartmentID = 1) WHERE DepartmentID = 1

Scenario 3: Recovering a dropped object. Fred has reported another problem with the Production.WorkOrderRouting table: it has disappeared altogether. You can rebuild the table by following this procedure:
1. Script the object in the database snapshot: use Object Explorer in SQL Server Management Studio to script the Production.WorkOrderRouting table in the AdventureWorks_dbsnapshot_1800 database, generating the script to the Query Editor.
2. Execute the script in the source database: run the script in the AdventureWorks database. Note that depending on the options selected when you generated the script, it will also contain definitions of the table constraints and triggers attached to the table.
3. Repopulate the object (if appropriate): populate the table using the technique described in scenario 1.
You can use the same strategy to recreate any objects that have been dropped from the AdventureWorks database, including views, stored procedures, user-defined data types, user-defined functions, rules, and defaults.

Caution: a database snapshot is not a substitute for a comprehensive backup and restore strategy.
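Beyond copying individual rows or objects back, the entire source database can be reverted to the state captured in a snapshot. A hedged sketch (the snapshot name follows the examples above; any other snapshots of the database must be dropped first, and all changes made after the snapshot was created are lost):

RESTORE DATABASE AdventureWorks
   FROM DATABASE_SNAPSHOT = 'AdventureWorks_dbsnapshot_1800';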

13 Demo Database Snapshot

14 Quick Puzzle

15 Transaction log restore - Puzzle
Full backup 1 (12:00 PM)
T-Log backup 1 (1:00 PM)
T-Log backup 2 (2:00 PM)
T-Log backup 3 (3:00 PM)
Full backup 2 (3:25 PM)
T-Log backup 4 (3:35 PM)
Restore target: 3:22 PM
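One way the puzzle typically resolves, sketched as hedged T-SQL (database, file names, and the date are hypothetical): because a later full backup does not break the transaction log chain, the 3:22 PM point in time can be reached from Full backup 1 plus T-Log backups 1 through 4, stopping inside T-Log backup 4. Full backup 2 cannot be used, since it completed after the target time.

RESTORE DATABASE SalesDB FROM DISK = 'D:\Backups\Full1.bak' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'D:\Backups\TLog1.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'D:\Backups\TLog2.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'D:\Backups\TLog3.trn' WITH NORECOVERY;
-- T-Log backup 4 covers 3:00 to 3:35 PM, so stop at 3:22 PM inside it
RESTORE LOG SalesDB FROM DISK = 'D:\Backups\TLog4.trn'
   WITH STOPAT = '2011-03-19 15:22:00', RECOVERY;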

16 Multi-Instance Technologies

17 Log shipping

18 Log Shipping
Log shipping allows you to automatically send transaction log backups from one database to a secondary database on another server
Logs are restored automatically
The secondary server can be the failover server and can become the primary if the main server goes down

Log shipping allows you to automatically send transaction log backups from one database (known as the primary database) to a secondary database on another server (known as the secondary server). At the secondary server, these transaction log backups are restored to the secondary database, keeping it closely synchronized with the primary database. An optional third server, known as the monitor server, records the history and status of backup and restore operations and optionally raises alerts if these operations fail to occur as scheduled.
Log shipping consists of three operations:
1. Back up the transaction log at the primary server instance.
2. Copy the transaction log file to the secondary server instance.
3. Restore the log backup on the secondary server instance.
The log can be shipped to multiple secondary server instances. In such cases, operations two and three are duplicated for each secondary server instance. A log shipping configuration does not automatically fail over from the primary server to the secondary server. If the primary database becomes unavailable, any of the secondary databases can be brought online manually.
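The three operations can be sketched as plain T-SQL, which is roughly what the SQL Server Agent jobs automate (server, database, share, and file names are hypothetical; a real configuration would normally be set up through Management Studio or the log shipping stored procedures):

-- 1. On the primary: back up the transaction log to a shared folder
BACKUP LOG SalesDB TO DISK = '\\PRIMARY1\LogShip\SalesDB_20110319_0100.trn';

-- 2. Copy the backup file to the secondary server (handled by the copy job)

-- 3. On the secondary: restore the log backup, keeping the database restorable.
--    Use NORECOVERY (no access) or STANDBY (read-only access between restores).
RESTORE LOG SalesDB
   FROM DISK = 'E:\LogShip\SalesDB_20110319_0100.trn'
   WITH STANDBY = 'E:\LogShip\SalesDB_undo.tuf';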

19 Log Shipping (in Action)
(Diagram: backups are performed on the primary database, then copied and restored to one or more secondary databases; a monitor server records history and raises alerts.)

Log shipping also provides one way to reallocate query processing from the primary server to one or more of its secondary databases. The primary and secondary servers can be on the same computer; however, in this case, SQL Server failover clustering may provide better results. For more information, see Failover Clustering.

Primary Server and Database: The primary server in a log shipping configuration is the instance of the SQL Server database engine that is your production server. The primary database is the database on the primary server that you want to back up to another server. All administration of the log shipping configuration through SQL Server Management Studio is performed from the primary database. The primary database must use the full or bulk-logged recovery model; switching the database to simple recovery will cause log shipping to stop functioning.

Secondary Server and Databases: The secondary server in a log shipping configuration is the server where you want to keep an up-to-date copy of your primary database. A secondary server can contain backup copies of databases from several different primary servers. For example, a department could have five servers, each running a mission-critical database system. Rather than having five separate secondary servers, a single secondary server could be used. The backups from the five primary systems could be loaded onto the single backup system, reducing the number of resources required and saving money. It is unlikely that more than one primary system would fail at the same time. Additionally, to cover the remote chance that more than one primary system becomes unavailable at the same time, the secondary server could be of higher specification than the primary servers. The secondary database must be initialized by restoring a full backup of the primary database. The restore can be completed using either the NORECOVERY or STANDBY option. This can be done manually or through SQL Server Management Studio. The frequency of transaction log backups applied to the secondary server depends on the frequency of transaction log backups of the primary production server database.
Frequently applying the transaction log backups reduces the work required to bring the secondary server online in the event that the production system fails. You can optionally specify a delay for when the transaction log backup is restored in the secondary database. This provides an interval in which you can interrupt the restore in the event of a catastrophic action on the primary, such as accidentally deleting critical data.

Monitor Server: The optional monitor server tracks all of the details of log shipping, including:
• When the transaction log on the primary database was last backed up.
• When the secondary servers last copied and restored the backup files.
• Information about any backup failure alerts.
The monitor server should be on a server separate from the primary or secondary servers to avoid losing critical information and disrupting monitoring if the primary or secondary server is lost. A single monitor server can monitor multiple log shipping configurations. In such a case, all of the log shipping configurations that use that monitor server would share a single alert job. For more information, see Monitoring Log Shipping.

Log Shipping Operations: Log shipping is made up of four operations, which are handled by dedicated SQL Server Agent jobs: backup job, copy job, restore job, and alert job.
Backup Job: A backup job is created on the primary server instance for each primary database. It performs the backup operation, logs history to the local server and the monitor server, and deletes old backup files and history information. The SQL Server Agent job category "Log Shipping Backup" is created on the primary server instance when log shipping is enabled. By default, this job will run every two minutes.
Copy Job: A copy job is created on the secondary server instance for each log shipping configuration. This job copies the backup files from the primary server to the secondary server, and logs history on the secondary server and the monitor server. The SQL Server Agent job category "Log Shipping Copy" is created on the secondary server instance when log shipping is enabled.
Restore Job: A restore job is created on the secondary server instance for each log shipping configuration. This job restores the copied backup files to the secondary databases. It logs history on the local server and the monitor server, and deletes old files and old history information. The SQL Server job category "Log Shipping Restore" is created on the secondary server instance when log shipping is enabled.
Alert Job: If a monitor server is used, an alert job is created on the monitor server instance. This alert job is shared by the primary and secondary databases of all log shipping configurations using this monitor server instance. Any change to the alert job (such as rescheduling, disabling, or enabling the job) affects all databases using that monitor server. This job raises alerts (for which you must specify alert numbers) for primary and secondary databases when backup and restore operations have not completed successfully within specified thresholds. You must configure these alerts to have an operator receive notification of the log shipping failure. The SQL Server Agent job category "Log Shipping Alert" is created on the monitor server instance when log shipping is enabled. If a monitor server is not used, alert jobs are created locally on the primary server instance and each secondary server instance. The alert job on the primary server instance raises errors when backup operations have not completed successfully within a specified threshold. The alert job on the secondary server instance raises errors when local copy and restore operations have not completed successfully within a specified threshold.
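Once log shipping is configured, its status can be checked from T-SQL. A small hedged sketch using the standard log shipping catalog objects in msdb (run on the monitor or on the primary/secondary instances):

-- Summary of backup, copy, and restore status and any threshold alerts
EXEC sp_help_log_shipping_monitor;

-- Raw configuration and last-backup details on the primary
SELECT primary_database, backup_directory, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_primary_databases;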

20 Strengths & Weaknesses
Strengths: Can ship logs across a WAN (wide-area network); Protects an entire database
Weaknesses: Configured per database; NO AUTOMATIC FAILOVER

21 Demo Log shipping

22 Database Mirroring

23 Database Mirroring How it works
Mirror is always redoing – it remains current
(Diagram: an application commit is written to the principal's log, the log records are sent to the mirror and hardened there, the mirror acknowledges, and the principal then acknowledges the commit to the application; a witness observes both partners, and the mirror continuously redoes the log against its own data files.)

Database mirroring is different from log shipping or transactional replication in the following ways:
> Failures are automatically detected.
> Failovers can be made automatic.

24 Database Mirroring Modes
High-Availability Mode
Safety Full; synchronous operation
Database is available whenever a quorum exists
Automatic failover

High-Protection Mode
No witness – quorum provided by partners
If the principal loses quorum, it stops servicing the database
Ensures high protection; the database is never in an 'exposed' state
Manual failover only; no automatic failover
A transition mode; should not be in this mode for long

High-Performance Mode
Safety Off; asynchronous operation
Manual failover only
Supports only one form of role switching: forced service (with possible data loss)
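A hedged T-SQL sketch of setting up mirroring and choosing a mode (server names, ports, and the database name are hypothetical; the mirror copy must first be initialized with RESTORE ... WITH NORECOVERY):

-- On each partner instance: create a mirroring endpoint
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror instance first, pointing at the principal
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.contoso.com:5022';
-- Then on the principal instance, pointing at the mirror
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.contoso.com:5022';

-- Choose an operating mode
ALTER DATABASE SalesDB SET SAFETY FULL;   -- synchronous (high availability / high protection)
-- ALTER DATABASE SalesDB SET SAFETY OFF; -- asynchronous (high performance)

-- Optional witness, required for automatic failover
ALTER DATABASE SalesDB SET WITNESS = 'TCP://witness.contoso.com:5022';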

25 DBM – Automatic Page Recovery
New feature in SQL Server 2008
(Diagram: 1. A bad page is detected on the principal. 2. The principal requests the page from the mirror. 3. The mirror finds the page. 4. The page is retrieved and 5. transferred to the principal, which 6. writes the repaired page. The witness and clients are unaffected.)
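A hedged way to see repair activity is the automatic page repair DMV together with the suspect_pages table in msdb:

-- Pages the mirroring partners have attempted to repair automatically
SELECT DB_NAME(database_id) AS database_name, file_id, page_id, error_type, page_status, modification_time
FROM sys.dm_db_mirroring_auto_page_repair;

-- Pages SQL Server has flagged as suspect (checksum failures, 823/824 errors, ...)
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;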

26 DBM – Log Compression
New feature in SQL 2008
CPU usage goes up when compression is on, both because of compression/decompression and because the server can now process more transactions per second
The percentage increase in throughput is most dramatic at low network bandwidths
SELECT DB_NAME(database_id), size_on_disk_bytes FROM sys.dm_io_virtual_file_stats(DB_ID(N'AdventureWorks_DBSS1'), 1)
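One hedged way to observe the effect is through the Database Mirroring performance counters; the counter names below are the ones documented for SQL Server 2008 mirroring log send, queried here via the DMV rather than Performance Monitor:

SELECT instance_name AS database_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN (N'Log Bytes Sent/sec', N'Log Compressed Bytes Sent/sec');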

27 Strengths & Weaknesses
Strengths: Can mirror across a WAN; Automatic failover that is nearly instantaneous (faster than failover clustering); Protects an entire database
Weaknesses: Asynchronous (high-performance) mode requires Enterprise Edition; Must be configured per database

28 Demo Mirroring Setup

29 Demo Auto Page Repair

30 Failover Clustering

31 Failover Clustering
(Diagram: Node 1, Node 2, and Node 3 attached to a shared disk and presented to clients as a single virtual server.)

32 SQL Server Cluster Topologies
Supports many scenarios:
Single instance
Multiple instances
Multiple active nodes
N+1: N active nodes, 1 inactive node
N+M: N active nodes, M inactive nodes
(Diagram: example failover cluster layouts showing instances Inst1, Inst2, and Inst3 placed across active and standby nodes.)

33 Failover Clustering (Facts)
Redundancy at the database instance level
All databases fail over together
Shared copy of system databases
Single data copy on a shared storage device
No I/O overhead reducing throughput
The storage unit is a single point of failure for the cluster
All database services are clustered: SQL Agent, Analysis Services, Full-Text engine, MS DTC
Automatic failover (can take up to minutes)
DBMS accessed over a virtual IP
Storage is controlled by one cluster node at a time
Requires hardware certified by Microsoft for Microsoft Cluster Service
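A small sketch for checking, from T-SQL, whether an instance is clustered and which physical node currently owns it:

SELECT SERVERPROPERTY('IsClustered')                 AS is_clustered,        -- 1 on a failover cluster instance
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node,        -- node currently hosting the instance
       SERVERPROPERTY('ServerName')                  AS virtual_server_name; -- name clients connect to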

34 Strengths & Weaknesses
Strengths: Provides protection against a node failure; Protects the entire SQL Server instance; Automatic failover supported
Weaknesses: Generally expensive, with specialty hardware requirements; Not trivial to configure and manage; Doesn't protect against a complete site failure

35 Replication

36 Replication Primarily used where availability is required in conjunction with scale out of read activity Failover possible; a custom solution Not limited to entire database; Can define subset of source database or tables Copy of database is continuously accessible for read activity Latency between source and copy can be as low as seconds

37 Peer-to-Peer Replication
Provides high availability and read scalability
Builds redundancy by eliminating a single point of failure
Enables online upgrades of servers
Maximizes application uptime
Support for both ring and grid topologies
Centralized management using Management Studio
(Diagram: peer nodes replicating to one another in a ring or grid topology.)

38 New Features
Add and remove nodes without stopping
Visual configuration with the Topology Wizard
Ability to detect conflicts
(Diagram: user requests load-balanced across application servers, with writes and reads spread over the replicated data copies.)

39 Strengths & Weaknesses
Strengths: Perpetual or on-demand replication of data, local or remote; Protects (duplicates or merges) the exact portion of the database I want
Weaknesses: Configured per database, even per table; Generally does not protect or duplicate an entire database

40 SQL Server “Denali” High Availability

41 SQL Server AlwaysOn
Increased application availability at lower TCO with ease of use:
Multi-database failover
Multiple secondaries
Active secondaries
Fast client connection redirection
Windows Server Core
Multisite clustering

42 Availability Group Concepts
Availability Group
Defines the high-availability requirements: databases, replicas, availability mode, failover mode, etc.
Availability Replica
A SQL Server instance that is part of the availability group and hosts a physical copy of the databases
Role: Primary, Secondary, or Resolving
Availability Database
A SQL Server database that is part of an availability group
This can be a regular database or a contained database
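A hedged sketch of defining an availability group in T-SQL (instance, endpoint, and database names are hypothetical; the syntax shown is the form that shipped in SQL Server 2012, so individual "Denali" CTP builds may differ):

CREATE AVAILABILITY GROUP AG_Sales
FOR DATABASE SalesDB
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE     = AUTOMATIC);

-- On the secondary instance: join the replica and the database
-- (the database must already be restored there WITH NORECOVERY)
ALTER AVAILABILITY GROUP AG_Sales JOIN;
ALTER DATABASE SalesDB SET HADR AVAILABILITY GROUP = AG_Sales;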

43 Readable Secondary
(Diagram: databases DB1 and DB2 on the primary replica, Instance A, are kept synchronized via log synchronization with a secondary replica on Instance B, which serves reports.)
A readable secondary allows offloading read queries to the secondary
Data is close to real time; the latency of log synchronization affects data freshness
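A hedged sketch of allowing read access on a secondary replica and pointing report queries at it (names continue from the previous example; the SECONDARY_ROLE options are as documented for SQL Server 2012, so CTP syntax may differ):

-- Allow connections on the secondary replica (ALL, or READ_ONLY for read-intent connections only)
ALTER AVAILABILITY GROUP AG_Sales
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));

-- Reports then connect to the secondary, e.g. Server=SQLNODE2; ApplicationIntent=ReadOnly
SELECT COUNT(*) FROM SalesDB.dbo.Orders;  -- hypothetical reporting query run on the secondary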

44 SQL Server “Denali” High Availability
Demo SQL Server “Denali” High Availability Configuration Read-only Secondary

45 Comparing High Availability Technologies
Feature (Failover Clustering | Database Mirroring | Log Shipping | Replication)

Failover characteristics
Standby type: Hot | Hot | Warm | Warm
Automatic role change: Yes | Yes | No | No
Failover type: Automatic and manual | Automatic and manual | Manual | Manual
Database downtime during failover: 30 secs + database recovery | Less than 10 secs | Variable | Variable

Physical configuration
Redundant storage locations: No (single copy on shared storage) | Yes | Yes | Yes
Hardware requirements: Microsoft-certified cluster solutions | Standard servers | Standard servers | Standard servers
Physical distance limit: 100 miles | None | None | None
Additional server role: None | Witness | Monitor | Distributor

46 Comparing High Availability Technologies
Feature (Failover Clustering | Database Mirroring | Log Shipping | Replication)

Management complexity: High | Low | Low | Medium
Standby accessible: No | Via database snapshots | Read only | Yes
Multiple secondaries: No | No | Yes | Yes
Load delay on secondary: N/A | No | Yes | No
Scope of availability: Server instance | Database | Database | Database
Client redirect: None required | Support in SNAC and ADO.NET | Custom coding required | Custom coding required

Note: In replication, you can introduce a load delay on the secondary by scheduling the replication agents. Also, the scope of availability in replication is the most granular and can be at the table level or even the column level.

47 Summary SQL Server High Availability Technology
Database Server Failure or Disaster: Failover Clustering; Database Mirroring; Peer-to-Peer Replication
User or Application Error: Log Shipping; Database Snapshot
Data Access Concurrency Limitations: Snapshot Isolation; Online Index Operations; Replication
Database Maintenance and Operations: Fast Recovery; Partial Availability; Online Restore; Media Reliability; Dedicated Administration Connection; Dynamic Configuration
Availability at Scale: Data Partitioning; Replication Tuning; Database Tuning Advisor

48 References
SQL Server Code Name "Denali" CTP1
SQL Server Books Online

49 Feedback / Q&A
Your feedback is important!
Please take a few moments to fill out our online feedback form. For detailed feedback, use the detailed form, or email us.
Use the Question Manager on LiveMeeting to ask your questions now!

50 Contact
Blog Address: http://blogs.msdn.com/BLakhani
Email Address:

51 Thank You!!!


