Published by Chester Barker. Modified over 6 years ago.
Oracle Database 11g: Change Management Overview Seminar
Introduction
This seminar introduces the change management features of Oracle Database 11g. It is a complementary seminar to the Oracle Database 11g: New Features Overview Seminar. Experience with Oracle databases (particularly Oracle Database 10g Releases 1 and 2) is required for a full understanding of many new features. Overview This seminar introduces you to the change management features of Oracle Database 11g. It does not attempt to provide every detail about a feature or cover aspects of a feature that were available in previous releases. It is structured as a complementary seminar to the Oracle Database 11g: New Features Overview Seminar. The seminar will be most useful to those who have already administered Oracle databases (specifically Oracle Database 10g).
Oracle Database 11g: Main Focus Areas
Manageability Availability Performance Fault management Business intelligence and data warehousing Application development Security Oracle Database 11g: Main Focus Areas Oracle’s infrastructure grid technology enables IT systems to be built from pools of low-cost servers and storage that deliver the highest quality of service in terms of manageability, high availability, and performance. Oracle Database 11g extends Oracle’s existing grid capabilities in these areas. Many new features are introduced—and many existing features are further enhanced—to make your databases more manageable.
Oracle Database 11g: New Features Overview Seminar
Lesson Title 1 Managing Storage 2 Using Data Recovery Advisor and Flashback 3 RMAN and Data Guard Enhancements 4 Security New Features 5 Intelligent Infrastructure 6 Data Warehousing Enhancements 7 Additional Performance Enhancements Oracle Database 11g: New Features Overview Seminar Lesson 1 covers storage management in Oracle Database 11g. The main discussion points are the enhancements to Automatic Storage Management, the new ASMCMD command extensions, and the reengineered large-object SecureFiles. Lesson 2 covers the high availability features. The main components are the Data Recovery Advisor and the flashback data archive. Lesson 3 continues the high availability discussion with RMAN command enhancements and a discussion of standby database improvements. Lesson 4 describes the security features in Oracle Database 11g. Discussion points are password complexity enforcement and transparent data encryption extensions. Lesson 5 explains the automatic SQL tuning available in Oracle Database 11g and the automated maintenance tasks. Lesson 6 discusses the data warehousing enhancements, covering new partitioning options. Lesson 7 covers the performance features in Oracle Database 11g.
Oracle Database 11g: Change Management Overview Seminar
Lesson Title 1 Setting Up the Test Environment 2 Using Database Replay 3 Using SQL Performance Analyzer 4 Performing Online Changes 5 Using SQL Plan Management 6 Diagnosing Problems 7 Installing Patches Oracle Database 11g: Change Management Overview Seminar This seminar complements the Oracle Database 11g: New Features Overview Seminar and introduces you to the change management features of Oracle Database 11g. Lesson 1 introduces you to the concept of the life cycle of change management, with a focus on performing realistic testing by establishing a simple test environment using snapshot standby. Lesson 2 adds to the realistic testing discussion by introducing Database Replay—a feature that enables you to capture production workloads and replay them in a test environment. Lesson 3 continues with the SQL Performance Analyzer feature, which enables you to predict the impact of system changes on SQL workload response time. Lesson 4 furthers provisioning automation by discussing a collection of features that improve database maintenance activities on applications while they are in use. Lesson 5 introduces SQL Plan Management, which enables the system to automatically control SQL plan evolution and thereby prevent performance regressions. Lesson 6 introduces the diagnostic features that assist DBAs in detecting problems proactively. Lesson 7 discusses hot patching, which provides the ability to install, enable, and disable a bug fix or diagnostic patch on a live, running Oracle instance.
Management Automation
(Slide diagram: autotuning, advisory, and instrumentation capabilities spanning RAC, storage, backup, memory, schema, apps/SQL, recovery, and replication.) Management Automation Oracle Database 11g continues the efforts that began in Oracle9i Database and continued through Oracle Database 10g to dramatically simplify and ultimately fully automate the tasks that DBAs perform. New in Oracle Database 11g is Automatic SQL Tuning with self-learning capabilities. Other new capabilities include automatic, unified tuning of both SGA and PGA memory buffers and new advisors for partitioning, database repair, streams performance, and space management. Enhancements to Oracle Automatic Database Diagnostic Monitor (ADDM) give it a better global view of performance in Oracle Real Application Clusters (RAC) environments and improved comparative performance analysis capabilities.
Self-Managing Database: The Next Generation
Manage performance and resources Manage change Manage faults Self-Managing Database: The Next Generation Self-management is an ongoing goal for the Oracle database. Oracle Database 10g initiated the effort to make the database easier to use, with the focus on performance and resources. Oracle Database 11g extends the effort with two additional important axes for the overall self-management goal: change management and fault management.
Suggested Additional Courses
Oracle Database 11g: Real Application Clusters Oracle Database 11g: Data Guard Administration Oracle Enterprise Manager 11g Grid Control Suggested Additional Courses For more information about key grid computing technologies used by Oracle products, you can take additional courses (listed in the slide) from Oracle University.
Further Information For more information about topics that are not covered in this course, refer to the following: Oracle Database 11g: New Features Overview Seminar Oracle Database 11g: New Features eStudies A comprehensive series of self-paced online courses covering all new features in great detail Oracle By Example series: Oracle Database 11g Oracle OpenWorld events
Realistic Testing: Setting Up the Test Environment
Objectives After completing this lesson, you should be able to: Identify the challenges that result from system changes Set up snapshot standby for database testing
Challenges Faced by DBAs When Performing Changes
Maintaining service-level agreements through changes to hardware or software configurations Offering a production-level workload environment for testing purposes Effectively forecasting and analyzing impact on SQL performance Challenges Faced by DBAs When Performing Changes Large business-critical applications are complex and have highly varying load and usage patterns. At the same time, these business systems are expected to provide certain service-level guarantees in terms of response time, throughput, up time, and availability. Any change to a system (such as upgrading the database or modifying the configuration) often necessitates extensive testing and validation before these changes can make it to the production system. To be confident before moving to a production system, the database administrator (DBA) must expose a test system to a workload very similar to the workload to be experienced in a production environment. It is also beneficial for the DBA to have an effective way to analyze the impact of system-level changes on overall SQL performance so that any required tuning changes can be performed before production.
Change Is the Only Constant
Change is the most common cause of instability. Enterprise production systems are complex. Actual workloads are difficult to simulate. Realistic testing before production was once impossible; now it is possible. The consequences of untestable change are reluctance to make changes and inability to adopt new competitive technologies. Preserve order amid change. Change Is the Only Constant Oracle Database 11g is designed for data center environments that are rapidly evolving and changing to keep up with business demands, enabling database administrators to manage change effectively and efficiently. Building on the self-managing capabilities of Oracle Database 10g, Oracle Database 11g offers significant advances in the areas of automatic diagnostics, supportability, and change management. Oracle DBAs and information technology managers are leading the key initiatives in data centers today. Some of these data center initiatives are moving to low-cost computing platforms (such as Oracle Enterprise Linux) and simplifying storage management by using Automatic Storage Management (ASM). DBAs need to test the databases by using realistic workloads with new operating systems or storage platforms to ensure that migration is successful. Today’s enterprises must make significant investments in hardware and software to perform the infrastructure changes. For example, if the DBA wants to test moving the data files for a typical J2EE application’s database from a file system to Automatic Storage Management, the enterprise would need to invest in duplicate hardware for the entire application stack, including the Web server, application server, and database. The organization would also need to invest in expensive testing software to capture the end-user workload. These purchases make it very expensive for any organization to evaluate and implement changes to their data center infrastructure.
Oracle Database 11g addresses this issue with a collection of solutions under the umbrella of “Change Management.”
Life Cycle of Change Management
Test (Database Replay or SQL Performance Analyzer) Diagnose and resolve problems (advisors) Make change Set up test environments (snapshot standbys) Diagnose problems Provision for production Patches and workarounds Realistic testing Life Cycle of Change Management Oracle Database 11g supports realistic testing through the use of snapshot standbys to set up and test the physical environment. You can open a physical standby database temporarily (that is, activated) for read and write activities such as reporting and testing. Once testing is completed, you can then simply revert to the physical standby mode to allow catch-up to the primary site. This functionality is similar to storage snapshots, but it preserves zero data loss, continues to allow disaster recovery, and requires only a single copy of storage at the time of testing. For enterprises to be able to perform an accurate test of a database environment, it is vital that they be able to reproduce the production scenarios accurately. Further support for realistic testing comes from the Database Replay capability in Oracle Database 11g. Database Replay is designed to capture client requests on a given database to be reproduced on other copies of production databases. Oracle Enterprise Manager provides an easy-to-use set of steps to set up the capture of a workload. Some of the changes that a DBA deals with are database upgrades, new tuning recommendations, schema changes, statistics collection, and operating system or hardware changes. DBAs can use SQL Performance Analyzer to track and forecast SQL performance changes caused by these changes. If the SQL performance has regressed in some of the cases, the DBA can then run the SQL Tuning Advisor to tune the SQL statements.
Life Cycle of Change Management
Test Diagnose and resolve problems Make change Set up test environments Diagnose problems (ADR/Support Workbench) Provision for production (rolling upgrades) Patches and workarounds (Enterprise Manager) Provisioning automation Life Cycle of Change Management (continued) When upgrading from Oracle Database 11g Release 1, you can use the rolling upgrade functionality to ensure that various versions of the software can still communicate with each other. This allows independent nodes of an Automatic Storage Management (ASM) cluster to be migrated or patched without affecting the availability of the database, thereby providing higher up time and problem-free migration to new releases. ASM offers further system capacity planning and workload change enhancements (Fast Disk Resync, Preferred Mirror Read), and numerous enhancements to online functionality (online index reorganization and online table redefinition) further support application change. Automatic Diagnostic Repository (ADR) is a new system-managed repository for storing and organizing trace files and other error diagnostic data. You get a comprehensive view of all the serious errors encountered by the database and the relevant data needed for problem diagnosis and eventual resolution. You can also use EM Support Workbench, which provides a simple workflow interface to view and diagnose incident data and package it for Oracle Support. The Data Recovery Advisor tool can be used to automatically diagnose data failures and report on the appropriate repair option. Oracle Database 11g Enterprise Manager supports end-to-end automation of patch application on single-instance database homes and rolling patches on clusterware. You no longer need to perform manual steps for shutting down your system, invoking OPatch, applying SQL, and other such best-practice steps in the patching procedure.
Setting Up the Test Environment by Using the Snapshot Standby Database
(Slide diagram: a physical standby database receiving the redo stream is opened as a snapshot standby; testing is performed; testing changes are backed out; and the redo stream continues to be received throughout.) Setting Up the Test Environment by Using the Snapshot Standby Database In Oracle Database 11g, a physical standby database can be opened temporarily (that is, activated) for read or write activities such as reporting and testing. A physical standby database in the snapshot standby state still receives redo data from the primary database, thereby providing data protection for the primary database while still in the reporting or testing database role. You convert a physical standby database to a snapshot standby database, and you open the snapshot standby database for writes by applications for testing. When you have completed testing, you discard the testing writes and catch up to the primary database by applying the redo logs. While in the snapshot standby state, the database cannot be used for real-time query or as a fast-start failover target. Creating a snapshot standby database was possible with the previous releases. However, Oracle Database 11g greatly simplifies the way you set up a snapshot standby database. For more information about snapshot standby databases, refer to the Oracle Data Guard Concepts and Administration Guide.
Benefits of Snapshot Standby
A snapshot standby database is activated from a physical standby database. Redo stream is continually accepted. Provides for disaster recovery Users can continue to query or update. Snapshot standby is open read/write. Benefits reporting applications Reduces storage requirements Benefits of Snapshot Standby A snapshot standby database is a database that is activated from a physical standby database to be used for reporting and testing. The snapshot standby database receives redo from the primary database and continues to provide data protection for the primary database. The snapshot standby database: Is like the primary database in that users can perform queries or updates Is like a physical standby database in that it continues receiving redo data from the primary database A snapshot standby database provides the combined benefit of disaster recovery and of reporting and testing using a physical standby database. Although similar to storage snapshots, snapshot standby databases provide a single copy of storage while maintaining disaster recovery.
Using SQL to Create a Snapshot Standby Database
Activate the snapshot standby database: SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY; Convert the snapshot standby database back to a physical standby database: SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY; Using SQL to Create a Snapshot Standby Database You cannot convert a physical standby database that is a fast-start failover target to a snapshot standby database. You cannot perform a switchover or failover to a snapshot standby database. You cannot convert a physical standby database to a snapshot standby database if it is the only physical standby database in a Data Guard configuration operating in maximum protection mode. In a Real Application Clusters (RAC) configuration, shut down all instances except one and then activate it as the snapshot standby database. After the physical standby database is converted to a snapshot standby database, restart all the other instances.
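The two CONVERT commands sit inside a longer sequence of restarts on the standby site. The following is a hedged sketch of the full round trip based on the documented Data Guard procedure; the managed-recovery commands shown are one common form, and the exact steps may differ in your environment.

```sql
-- 1. On the mounted physical standby, stop redo apply and convert:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;    -- now open read/write for testing

-- 2. When testing is complete, discard the test changes:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

-- 3. Restart mounted and resume redo apply to catch up with the primary:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```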
Using DGMGRL to Create a Snapshot Standby Database
Activate the snapshot standby database: DGMGRL> CONVERT DATABASE <db_unique_name> TO SNAPSHOT STANDBY; Convert the snapshot standby database back to a physical standby database: DGMGRL> CONVERT DATABASE <db_unique_name> TO PHYSICAL STANDBY; Add a snapshot standby database to a broker configuration: DGMGRL> ADD DATABASE <db_unique_name> AS CONNECT IDENTIFIER IS <connect_identifier>; Using DGMGRL to Create a Snapshot Standby Database You can use the CONVERT DATABASE DGMGRL command to convert a physical standby database to a snapshot standby database, as shown in the following example: DGMGRL> convert database pdb1 to snapshot standby; Converting database "pdb1" to a snapshot standby database, please wait... Database "pdb1" converted successfully You also use the CONVERT DATABASE command to convert the database back to a physical standby database. You can add a snapshot standby database to a Data Guard Broker configuration by using the ADD DATABASE DGMGRL command. You do not need to specify that the database is a snapshot standby database because the Data Guard Broker determines the type of database that is being added to the configuration.
Viewing Snapshot Standby Database Information
View the database role by querying V$DATABASE: SQL> SELECT database_role FROM v$database; DATABASE_ROLE ---------------- SNAPSHOT STANDBY Viewing Snapshot Standby Database Information You can view the snapshot standby database role in the DATABASE_ROLE column of the V$DATABASE view.
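The query on the slide can be extended to confirm that the converted database is actually open for writes; OPEN_MODE is another documented V$DATABASE column. The output sketched below assumes the snapshot standby has been opened read/write.

```sql
SQL> SELECT database_role, open_mode FROM v$database;

DATABASE_ROLE    OPEN_MODE
---------------- ----------
SNAPSHOT STANDBY READ WRITE
```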
Using DGMGRL to View Snapshot Standby Database Information
DGMGRL> show configuration Configuration Name: DGConfig Enabled: YES Protection Mode: MaxPerformance Databases: orcl - Primary database pdb1 - Snapshot standby database Fast-Start Failover: DISABLED Current status for "DGConfig": SUCCESS Using DGMGRL to View Snapshot Standby Database Information You use the SHOW CONFIGURATION and SHOW CONFIGURATION VERBOSE DGMGRL commands to display information about the snapshot standby database.
Using DGMGRL to View Snapshot Standby Database Information
DGMGRL> show database pdb1 Database Name: pdb1 Role: SNAPSHOT STANDBY Enabled: YES Intended State: APPLY-OFF Instance(s): pdb1 Current status for "pdb1": SUCCESS Using DGMGRL to View Snapshot Standby Database Information (continued) When you execute the SHOW DATABASE and SHOW DATABASE VERBOSE commands for a snapshot standby database, “SNAPSHOT STANDBY” is displayed in the role field.
Snapshot Standby Database Considerations
Potential data loss with a corrupted log file Lengthy conversion of the snapshot standby database to a primary database in the event of a failure of the primary database Snapshot Standby Database Considerations You should consider the following when activating a snapshot standby database: Potential data loss when there is a corrupted log file: The snapshot standby database accepts redo log files but does not apply them. If there is a corrupted redo log file at the snapshot standby database, it will not be discovered until the database is converted back to a physical standby database and the managed recovery process (MRP) is started. If the primary database is unavailable at that time, there is no way to retrieve that log. Lengthy conversion of the snapshot standby database to a primary database: In the event of a failure of the primary database, the snapshot standby database can be converted back to a physical standby database. The redo that has been received can then be applied, and the physical standby can be converted to a primary database. If the snapshot standby database lags far behind the primary database, applying the redo that has been received and converting the physical standby to the primary database might take a long time.
Summary In this lesson, you should have learned how to: Identify the challenges that result from system changes Set up snapshot standby for database testing
Realistic Testing Using Database Replay
Objectives After completing this lesson, you should be able to: Identify the benefits of using Database Replay List the steps involved in Database Replay Use Enterprise Manager to record and replay workloads
Managing Change in Oracle Database 11g
Oracle Database 11g dramatically reduces the cost of system changes by providing: Database Replay Captures actual database workload on the production system Replays the captured workload (with same concurrency) on a test system Identifies performance changes and errors SQL Performance Analyzer Identifies SQL regressions and fixes them Managing Change in Oracle Database 11g Database Replay enables you to test the impact of a system change by replaying real-world workload on the test system before it is exposed to a production system. The production workload (including transaction concurrency and dependency) of the database server is recorded over an illustrative period of time (for example, a peak period). This recorded data is used to replay the workload on a test system that has been appropriately configured. You gain a high degree of confidence in the overall success of the database change by subjecting the database server in a test system to a workload that is practically indistinguishable from a production workload. SQL Performance Analyzer makes quantitative estimates of the system’s performance in the new environment and makes suggestions for avoiding potential SQL performance degradation.
Managing Change Effectively
Confidence in the success of a system change in minimizing system disruption Ease of upgrade to a different RDBMS release or server configuration Performing capacity planning Recording and replaying sessions when debugging issues Developing autonomic features on real-life workloads Preventing degradations that are potentially caused by a change in the production environment Managing Change Effectively By using Oracle Database 11g functionality, your business can be confident in the success of its change-management goals. The record and replay functionality offers confidence in the ease of upgrade during a database server upgrade. A useful application of Database Replay is to test the performance of a new server configuration. Consider a customer who is using a single-instance database and wants to move to a Real Application Clusters (RAC) setup. The customer records the workload of an interesting period and then sets up a RAC test system for replay. During replay, the customer is able to monitor the performance benefit of the new configuration by comparing the performance to the recorded system. This can also help convince customers to move to a RAC configuration after being shown the benefits of using the Database Replay functionality. Another application is debugging: you can record and replay sessions emulating an environment to make bugs more reproducible. Manageability feature testing is another benefit: self-managing and self-healing systems must implement tuning advice automatically, and multiple replay iterations allow testing and fine-tuning of the control strategies’ effectiveness and stability.
Why Use Database Replay?
System changes such as hardware and software upgrades are a fact of life. Customers want to identify the full impact of a change before going live. Extensive testing and validation can be expensive in time and money. Despite expensive testing, success rates can be low: Many issues may go undetected. Changes can impact system availability and performance negatively. Database Replay makes it possible to test with real-world production workloads. Why Use Database Replay? Large business-critical applications are complex and experience highly varying load and usage patterns. At the same time, these business systems are expected to provide certain service-level guarantees in terms of response time, throughput, up time, and availability. Any change to a system, such as upgrading the database or modifying the configuration, often necessitates extensive testing and validation before the changes can make it to the production system. To be confident before moving to a production system, the DBA must expose a test system to a workload very similar to the workload to be experienced in a production environment. It is also beneficial for the DBA to have an effective means of analyzing the impact of system-level changes on overall SQL performance so that any tuning changes required can be performed before production.
Database Replay Re-create actual production database workload in test environment. Identify, analyze, and fix potential instabilities before making changes to production. Capture workload in production: Capture full production workload with real load and concurrency. Move the captured workload to test system. Replay workload in test: Make the desired changes in test system. Replay workload with production load and concurrency. Honor commit ordering. Analyze and report: Errors Data divergence Performance divergence Database Replay Oracle Database 11g provides specific solutions to the challenges of managing change. Database Replay enables you to test the impact of a system change by replaying real-world workload on the test system before it is exposed to a production system. The production workload (including transaction concurrency and dependency) of the database server is recorded over an illustrative period of time (for example, a peak period). This recorded data is used to replay the workload on a test system that has been appropriately configured. You gain a high degree of confidence in the overall success of the database change by subjecting the database server in a test system to a workload that is practically indistinguishable from a production workload.
System Architecture: Capture
(Slide diagram: shadow processes on the production database pass through the recording infrastructure in the database stack, which writes shadow capture files to a capture directory; background processes are not recorded; a backup of the production database is taken.) System Architecture: Capture Here you see an illustration of a system that is being recorded. You should always record a workload that spans an “interesting” period in a production system. Typically, the replay of the recording is used to determine whether it is safe to upgrade to a new version of the RDBMS server. During recording, a special recording infrastructure built into the RDBMS records data about all external client requests while the production workload is running on the system. External requests are any SQL queries, PL/SQL blocks, PL/SQL remote procedure calls, DML statements, DDL statements, Object Navigation requests, or OCI calls. Background jobs and, in general, all internal clients continue their work during recording without being recorded. The end product is the workload recording, which contains all necessary information for replaying the workload as seen by the RDBMS in the form of external requests. The recording infrastructure imposes minimal performance overhead (extra CPU, memory, and I/O) on the recording system. You should, however, plan to accommodate the additional disk space that is needed for the actual workload recording. RAC Note: Instances in a RAC environment have access to the common database files. However, they do not need to share a common general-purpose file system. In such an environment, the workload recording is written on each instance’s file system during recording. For processing and replay, all parts of the workload recording must be manually copied into a single directory.
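Outside of Enterprise Manager, a capture can be driven from the DBMS_WORKLOAD_CAPTURE package. The sketch below is illustrative only; the directory path, directory object name, and capture name are assumptions, not values from the seminar.

```sql
-- Create a directory object for the capture files (path is an example).
SQL> CREATE DIRECTORY capture_dir AS '/u01/app/oracle/capture';

-- Start recording all external client requests.
SQL> BEGIN
  2    DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
  3      name => 'peak_workload',
  4      dir  => 'CAPTURE_DIR');
  5  END;
  6  /

-- ... let the interesting production period run ...

-- Stop recording.
SQL> EXEC DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
```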
System Architecture: Processing the Workload
(Slide diagram: the shadow capture files in the capture directory are run through a processing step that creates new replay metadata files alongside the originals.) System Architecture: Processing the Workload The workload capture data is processed, and new workload replay-specific metadata files are created that are required to replay the given workload capture. Only new files are created; no files are modified that were created during the workload capture. Because of this, you can run the preprocess multiple times on the same capture directory (for example, when the procedure encounters unexpected errors or is canceled). External client connections are remapped at this stage, and any replay parameters that affect the replay outcome can be modified. Note: Because processing workload capture can be relatively expensive, the best practice is to carry out this operation on a system other than the production database system.
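In PL/SQL terms, the preprocessing step is a single call to DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE, run (per the note above) on a system other than production after the capture files have been copied over; the directory object name is an assumption.

```sql
-- Preprocess the copied capture files; this only adds replay
-- metadata files and never modifies the original capture files.
SQL> EXEC DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'CAPTURE_DIR');
```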
System Architecture: Replay
(Slide diagram: replay clients on the replay system consume the processed capture files from the capture directory and drive the test system with changes, which was restored from the database backup.) System Architecture: Replay Prior to replaying the workload on the replay system, be sure to do the following: 1. Restore the replay database on a test system to match the capture database at the start of the workload capture. 2. Make changes (such as performing an upgrade) to the test system as needed. 3. Copy the workload to the test system. The workload recording is consumed by a special application called the replay driver, which sends requests to the RDBMS on which the workload is replayed; this RDBMS is usually a test system. It is assumed that the database of the replay system is suitable for the replay of the workload that was recorded. The internal RDBMS clients are not replayed. The replay driver is a special client that consumes the workload recording and sends appropriate requests to the test system to make it behave as if the external requests were sent by the clients used during the recording of the workload (see previous example). The use of a special driver that acts as the sole external client to the RDBMS enables the recording and replay infrastructure to be client agnostic. The replay driver consists of one or more clients that connect to the replay system and that send requests based on the workload capture. The replay driver distributes the workload capture streams equally among all the replay clients based on network bandwidth, CPU, and memory capability.
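The replay steps above can be sketched with the DBMS_WORKLOAD_REPLAY package and the wrc replay-client executable; the replay name, directory object, file system path, and credentials below are illustrative assumptions.

```sql
-- Initialize and prepare the replay on the restored test database.
SQL> BEGIN
  2    DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
  3      replay_name => 'peak_replay',
  4      replay_dir  => 'CAPTURE_DIR');
  5    DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
  6  END;
  7  /

-- From the operating system, start one or more replay clients:
$ wrc system/password mode=replay replaydir=/u01/app/oracle/capture

-- Once the clients have connected, start the replay:
SQL> EXEC DBMS_WORKLOAD_REPLAY.START_REPLAY;
```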
34
The Big Picture Pre-change production system Post-change test system
[Slide diagram: clients/app servers drive the pre-change production system and production database; a database backup is restored on the post-change test system with changes, where the processed capture files from the capture directory are replayed. A snapshot standby can be used as the test system.]
The Big Picture
The significant benefit with Oracle Database 11g is the added confidence to the business in the success of performing a change. Many Oracle customers have expressed strong interest in this change-management functionality. The database administrator, or a user with special privileges granted by the DBA, initiates the record and replay cycle and has full control of the entire procedure.
Oracle Database 11g: Change Management Overview Seminar I - 34
35
Pre-Change Production System
[Slide diagram: clients/app servers connect to the production system and production database; changes above the database tier are not supported.]
Supported changes:
- Database upgrades, patches
- Schema, parameters
- RAC nodes, interconnect
- OS platforms, OS upgrades
- CPU, memory
- Storage
Pre-Change Production System
Database Replay focuses on recording and replaying the workload that is directed to the RDBMS only. Recording at the RDBMS in the software stack makes it possible to exchange anything below this level and test the new setup using the record and replay functionality. While replaying the workload, the RDBMS performs the actions observed during recording. In other words, during the replay phase the RDBMS code is exercised very much like it was exercised during the recording phase. This is achieved by re-creating all external client requests to the RDBMS. External client requests include all the requests by all possible external clients of the RDBMS.
Oracle Database 11g: Change Management Overview Seminar I - 35
36
Workloads Supported
Supported:
- All SQL (DML, DDL, PL/SQL) with practically all types of binds
- Full LOB functionality (cursor based and direct OCI)
- Local transactions
- Logins/logoffs
- Session switching
- Limited PL/SQL calls (database interaction)
Limitations:
- Direct path load, import/export
- OCI-based object navigation (ADTs) and REF binds
- Streams, non-PL/SQL-based AQ
- Distributed transactions, remote describe, or commit operations
Workloads Supported
Workload initiated by a database user, and reaching the server through a connection from the user, is supported. All non-recursive SQL is recorded and replayed and falls into the categories listed above. Recursive SQL calls—calls that will be regenerated when the user calls are replayed—and SQL calls from parallel query slaves are not recorded. Parallel query slave calls are replayed as part of the top-level call that spawned them. User logins and logoffs are supported; however, passwords are not recorded and replayed. Remote procedure calls from PL/SQL, distributed transactions, debug calls, remote commits, remote describes, bundled PL/SQL, and session migrations are not supported. External interactions such as database links, external tables, directory objects, and URLs have to be correctly configured during replay by the replay user. Workload from Oracle utilities or features (such as Enterprise Manager, Data Pump, SQL*Loader, and Replication) is supported. Because Enterprise Manager is likely to be used to monitor and administer the database during recording, it is recommended that you filter out the EM workload. Data Pump, Export, and Import calls are SQL calls and as such are captured. System background workload (for example, workload from processes such as PMON, SMON, and LGWR) is not recorded.
Oracle Database 11g: Change Management Overview Seminar I - 36
37
Capture Considerations
The goal of capture planning is to ensure that:
- The database can be restored to the SCN at which capture starts (StartSCN)
- The system has enough resources for the capture
- You find a suitable period for restarting the database before capture starts (if selected)
- You can specify filters to capture a subset of the workload
- Users have SYSDBA or SYSOPER privileges and appropriate OS privileges
Capture Considerations
You perform the following tasks in the planning phase of the workload recording:
- Check the database backup strategy: You should ensure that the database can be restored to the StartSCN at which the recording starts.
- Plan the capture period: You select the capture period based on the application and the peak periods. You can use existing manageability features such as Automatic Workload Repository (AWR) and Active Session History (ASH) to select an appropriate period based on workload history. The starting time for capture should be carefully planned because it is recommended that you shut down and restart the database before starting the capture.
- Specify the location of the workload capture data: You must set up a directory that is to be used to store the workload capture data. You should provide ample disk space because the recording will stop if there is insufficient disk space. However, everything captured up to that point is usable for replay.
- Define capture filters for user sessions that are not to be captured: You can specify a recording filter to skip sessions that should not be captured.
No new privileges or user roles are introduced with the Database Replay functionality. The recording user and the replay user must have either the SYSDBA or SYSOPER privilege, because only a user with SYSOPER or SYSDBA can start up or shut down the database to start the recording. Correct operating system privileges should also be assigned so that the user is able to access the recording and replay directories and manipulate the files under those directories.
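As a minimal sketch of the planning steps above (the directory path, filter name, and username are illustrative, not from the seminar), the capture directory and a session filter might be set up as follows:

```sql
-- Create the directory object that will hold the capture files
-- (path is a hypothetical example; ensure ample disk space there).
CREATE DIRECTORY jun07 AS '/u01/app/oracle/dbreplay/jun07';

-- Skip sessions that should not be captured, here those of a
-- hypothetical monitoring user.
BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
    fname      => 'skip_monitor',
    fattribute => 'USER',
    fvalue     => 'MONITOR_APP');
END;
/
```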
Oracle Database 11g: Change Management Overview Seminar I - 37
38
Replay Considerations
Preprocess captured workload:
- One-time action
- On same database version as replay
- Can be performed anywhere (production, test system, or other system) if versions match
Restore database and then perform the change:
- Upgrade
- Schema changes
- OS change
- Hardware change
- Add instance
Replay Considerations
The preprocess phase is required only once for the specified database version. After the necessary metadata has been created, you can replay the workload as many times as required. You must restore the replay database to match the capture database at the start of the workload capture. A successful replay depends on application transactions accessing application data identical to that on the capture system. You can choose to restore the application data by using point-in-time recovery, flashback, and import/export.
Oracle Database 11g: Change Management Overview Seminar I - 38
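One way to restore the replay database to the capture start point is Flashback Database; a sketch, assuming flashback logging was enabled on the test system and the StartSCN is known (the SCN below is illustrative):

```sql
-- Rewind the test database to the SCN at which capture started,
-- then open it for the replay (SCN value is hypothetical).
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 3212345;
ALTER DATABASE OPEN RESETLOGS;
```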
39
Replay Considerations
Manage external interactions:
- Remap connection strings to be used for the workload
  - One-to-one: Allows simple instance-to-instance remapping
  - Many-to-one: Use of load balancer (single node to RAC)
- Modify external references (such as database links and directory objects) that point to production systems
Set up one or more replay clients:
- Multithreaded clients that can each drive multiple workload sessions
Replay Considerations (continued)
A captured workload may contain references to external systems that are meaningful only in the capture environment. Replaying a workload with unresolved references to external systems may cause unexpected problems in the production environment. A replay should be performed in a completely isolated test environment. You should make sure that all references to external systems have been resolved in the replay environment so that replaying a workload will cause no harm to your production environment. You can make one-to-one or many-to-one remappings. For example, database links in a captured production environment may reference external production databases that should not be referenced during replay. Therefore, you should modify any external references that could jeopardize the production environment during replay. The replay client (an executable named wrc) submits a captured session’s workload. You should install one or more replay clients, preferably on systems other than the database host. Each replay client must be able to access the directory that holds the preprocessed workload. You can also modify the replay parameters to change the behavior of the replay.
Oracle Database 11g: Change Management Overview Seminar I - 39
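A sketch of the remapping step, assuming replay data has already been initialized (the connection ID and the test-system connect string are illustrative; verify the DBA_WORKLOAD_CONNECTION_MAP view against your release):

```sql
-- List the connection strings recorded during capture.
SELECT conn_id, capture_conn
  FROM dba_workload_connection_map;

-- One-to-one remap: send a captured connection to the test system.
BEGIN
  DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION(
    connection_id     => 1,
    replay_connection => 'testhost:1521/testsvc');
END;
/
```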
40
Replay Analysis
Data divergence:
- Number of rows compared for each call (queries, DMLs)
Error divergence:
- New errors
- Mutated errors
- Errors disappear
Performance:
- Capture and replay report
- ADDM report
- ASH report for skew analysis
- AWR report
Replay Analysis
There may be some divergence of the replay compared to what was recorded. For example, when replaying on a newer version of the RDBMS, a new algorithm may cause specific requests to be faster, and thus divergence appears as faster execution. This is considered a desirable divergence. Another example of divergence is when a SQL statement returns fewer rows during replay than during recording. This is clearly not desirable; its root cause may be a new index look-up algorithm. The replay will identify this fact. For data divergence, the result of an action can be considered as:
- The result set of a SQL query
- An update to persistent database state
- A return code or an error code
Performance divergence is useful to determine how new algorithms introduced in the replay system may affect overall performance. There are numerous factors that can cause replay divergence. While some of them cannot be controlled, others can be mitigated. It is the task of the DBA to understand the workload run-time operations and take the necessary actions to reduce the level of record and replay divergence. Online divergence should aid the decision to stop a replay that has diverged significantly. The results of the replay before the divergence may still be useful, but further replay does not produce reliable conclusions. Offline divergence reporting is used to determine how successful the replay was after the replay has finished.
Oracle Database 11g: Change Management Overview Seminar I - 40
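Before drilling into the individual reports, a quick overview of past replays can be pulled from the data dictionary; a sketch (column list per the 11g reference, so verify against your release):

```sql
-- High-level outcome of completed replays; detailed divergence data
-- comes from the capture/replay, ADDM, ASH, and AWR reports.
SELECT id, name, status, duration_secs
  FROM dba_workload_replays
 ORDER BY id;
```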
41
Replay Data Divergence
Workload characteristics that increase data or error divergence:
- Implicit session dependencies by the application (for example, the use of dbms_pipe)
- Extensive use of multiple commits in PL/SQL
- User locks
- Use of nonrepeatable functions or system-dependent data
- External interactions via URLs or database links
Replay Data Divergence
Data divergence of the replay encompasses the results of both queries and errors. That is, errors that occurred during recording are considered proper results and any change during replay is reported. You can use existing tools such as ADDM to measure performance differences between the recording system and the replay system. Additionally, error comparison reports during the replay report on the following:
- Errors that did not happen during recording
- Errors that were not reproduced during replay
- Differences in error type
Oracle Database 11g: Change Management Overview Seminar I - 41
42
Using Enterprise Manager for Workload Capture
[Slide diagram: workflow from the plan phase, through an optional shutdown and restricted startup, to starting the workload recording and producing the raw captured data.]
Using Enterprise Manager for Workload Capture
Here you see the workflow for Database Replay. As shown, the workload capture and preprocessing need to be done only once. The data produced can be used for workload replay multiple times. The workload recording has three main steps:
1. Planning for capture
2. Preparing for capture
3. Capturing the workload
These steps are discussed in detail in the following slides.
Oracle Database 11g: Change Management Overview Seminar I - 42
43
Using Enterprise Manager for Workload Capture
Using Enterprise Manager for Workload Capture (continued)
Enterprise Manager (EM) provides you with a user interface to manage each component in the Database Replay process. The workflow and user interface apply to both EM Database Control and EM Grid Control. You access Database Replay on the “Software and Support” tab of Database Control. You are then directed to the page to create the necessary tasks to perform the following:
- Manage the workload capture operations
- View any previously captured workload
- Manage the workload replay operations
- Stop the active capture or replay (this option is available only during an active capture or replay session)
You can view previously captured workloads by clicking the View Workload Capture History link to see graphical and statistical details about captured workload.
Oracle Database 11g: Change Management Overview Seminar I - 43
44
Using Enterprise Manager for Workload Capture
Using Enterprise Manager for Workload Capture (continued) The EM wizard walks you through the pre-checks before beginning the database workload capture. On this page, you are asked to acknowledge the conditions that ensure a successful replay of the workload. It is recommended that you restart the database to reduce the potential for data divergence, confirm that you have sufficient disk space for the capture data, and ensure that you can effectively restore the database as of the time of capture. You should select the capture period based on the application and the peak periods. You can use existing manageability features such as Automatic Workload Repository (AWR) and Active Session History (ASH) to select an appropriate period based on workload history. Oracle Database 11g: Change Management Overview Seminar I - 44
45
Using Enterprise Manager for Workload Capture
Using Enterprise Manager for Workload Capture (continued)
As an option, you can restart the database prior to beginning the capture process. If you know your workload well, you can choose not to restart the database. Not restarting the database allows in-flight transactions to be present during the capture phase, thus increasing the potential for data divergence in the replay phase. Oracle recommends that you restart the database to minimize data divergence in the replay phase. You are then asked to set up any required capture filters to customize which data is captured (or filtered out of the captured data). Because EM will be used to monitor and administer the recording and replaying sessions (essentially duplicating its workload during replay), EM provides a default filter to filter itself out. You can add additional filtering components. You must specify the location for the workload capture data. You can either specify an existing database directory or create a directory from this page. You are then prompted for a directory object name and an OS path, which are validated. You should ensure that ample disk space exists to hold the captured workload because the recording stops if there is insufficient disk space. However, everything captured up to that point is usable for replay. You can optionally choose to export the AWR data at the same time as the capture process, ensuring in-depth capture and replay analysis. Alternatively, the export can be performed at a later stage on the View Workload Capture History page.
RAC Note: For RAC, the DBA should define the directory for captured data at a storage location accessible by all instances. Otherwise, the workload capture data needs to be copied from all locations to a single location before starting the processing of the workload recording.
Oracle Database 11g: Change Management Overview Seminar I - 45
46
Using Enterprise Manager for Workload Capture
Using Enterprise Manager for Workload Capture (continued) You complete the schedule information to submit a capture job (IMMEDIATELY, LATER), specifying whether the capture is manually terminated or time-bound (DURATION), and then review the job information before submitting the job. Monitoring of the capture shows you the progress and resource usage. Because workload capture is typically done on a production system with a heavy workload, monitoring during the capture phase is lightweight and adds only minimal overhead to the production workload. The monitor data is accessible through V$ views. Oracle Database 11g: Change Management Overview Seminar I - 46
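For lightweight monitoring outside EM, an active capture can also be checked from the data dictionary; a sketch (view and column names per the 11g reference, so verify against your release):

```sql
-- Show the capture currently running and how long it has been active.
SELECT id, name, status, duration_secs
  FROM dba_workload_captures
 WHERE status = 'IN PROGRESS';
```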
47
Using Enterprise Manager for Workload Replay
[Slide diagram: raw captured data from the production system is processed into replay files and metadata; the replay is then initialized, prepared, run, and analyzed, producing a replay report.]
Using Enterprise Manager for Workload Replay
The workload replay has four steps:
1. Initializing replay data
2. Preparing for replay
3. Replay
4. Replay analysis
These steps are discussed in detail in the following slides.
Oracle Database 11g: Change Management Overview Seminar I - 47
48
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued)
You begin the replay workflow by specifying the directory object where you stored the captured data. Once specified, the page in the slide is displayed with the Capture Summary of the selected workload. Select Preprocess Workload to commence the prepare phase. The preprocess phase asks you to validate the database version of the replayed workload and requests you to schedule the preprocess phase (IMMEDIATELY, LATER).
After the database has restarted in restricted mode, you begin the capture phase by calling the DBMS_WORKLOAD_CAPTURE package with the following arguments:
- A name for the capture (this allows reference to historical captured data on the capture system)
- A directory object pointing to the existing directory that stores the captured workload data
- A time duration T for the capture (this stops recording approximately after time T)
- The filtering mode
- The restart mode
Oracle Database 11g: Change Management Overview Seminar I - 48
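The arguments listed above map directly onto the START_CAPTURE call; a sketch, assuming a directory object named JUN07 already exists (capture name and duration are illustrative):

```sql
-- Start a time-bound capture: recording stops after roughly one hour
-- or when FINISH_CAPTURE is called, whichever comes first.
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'peak_capture',
    dir      => 'JUN07',
    duration => 3600);
END;
/
```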
49
Using Enterprise Manager for Workload Replay (continued) When you execute the FINISH_CAPTURE procedure, the capture stops and the database system flushes the capture buffers and closes all the open workload data files. After finishing the recording, you can request a report on the capture. This is used for comparison with the report generated through the replay phases. RAC Note: On instance failure during a RAC system capture, the capture continues and is not aborted. Any sessions that died as a result of the failure are replayed up to the point at which the instance died. When a dead instance is repaired and comes up again during capture, all new sessions are recorded normally. During replay, the death of instances is not replayed. Oracle Database 11g: Change Management Overview Seminar I - 49
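Stopping the capture and producing the report described above can be sketched as follows (the directory object name is illustrative; GET_CAPTURE_INFO returns the capture ID for the given directory):

```sql
-- Finalize the capture, then generate the text capture report that is
-- later compared with the replay report.
DECLARE
  cap_id NUMBER;
  rpt    CLOB;
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();
  cap_id := DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO(dir => 'JUN07');
  rpt    := DBMS_WORKLOAD_CAPTURE.REPORT(
              capture_id => cap_id,
              format     => DBMS_WORKLOAD_CAPTURE.TYPE_TEXT);
END;
/
```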
50
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued)
At this phase, the recorded data is transformed into a more suitable format. This transformation is done offline, preferably on a system other than the production system, because it is resource intensive. As a one-time activity, the capture-processing output can be used for multiple replays on the same RDBMS version as the one to be used for the replay. If the captured data has already been processed for a given RDBMS version (for example, version A), you must perform the process capture phase again if the replay is to be performed on an RDBMS version that is newer than A. The following actions are performed at this phase:
- Transforming workload capture data files into suitable replay streams (the replay files)
- Producing all necessary metadata
This phase is equivalent to the functionality in the DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE procedure.
RAC Note: In a RAC setup, one database instance of the replay system is selected for processing the workload recording. If recorded data was written to a local file system on the RAC nodes, the recorded data files from all the nodes should first be copied to the directory for the instance on which the preprocessing is to be done. If the captured data is stored in a shared file system, copying is not necessary.
Oracle Database 11g: Change Management Overview Seminar I - 50
51
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued) Select the required captured data from the replay history table (if one exists), and then click Set Up Replay to begin the replay process. Oracle Database 11g: Change Management Overview Seminar I - 51
52
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued)
The Set Up Replay phase can be done multiple times on the processed capture data. The above screen is displayed, reminding you to complete the following necessary steps:
- Restore Database: You need to restore the database objects used during capture to an equivalent state as of StartSCN (the system SCN at which the recording actually started).
- Perform System Changes: The intent is to test your workload under a different environment, so you make the necessary environment changes here.
- Resolve References to External Systems: A captured workload may contain references to external interactions that may be meaningful only in the capture environment. You should fix all the references prior to replay to ensure that replaying a workload will not cause harm to your production environment. Replaying a workload with unresolved references to an external interaction may affect your production environment.
- Set Up Replay Clients: Workload is replayed using replay clients connected to the replay database. You should install these replay clients on systems other than the database host. In addition, each replay client must be able to access the replay directory.
Oracle Database 11g: Change Management Overview Seminar I - 52
53
Using Enterprise Manager for Workload Replay
References to resolve: Using Enterprise Manager for Workload Replay (continued) A replay should be performed in a completely isolated test environment (for example, hosts, networks, servers, or storage systems). The captured workload may have database activities that refer or connect to some systems or devices that are meaningful only in the capture environment. The three links direct you to other EM pages to see if the database links, directory objects, and streams on the replay database (as restored from the capture database) refer to production-only systems or devices. You should modify these by using test environment values. You should ensure that all references to external interactions are resolved in the replay environment so that replaying a workload causes no harm to your production environment. RAC Note: In a RAC system, the replay data files should be stored in the shared storage or copied to the appropriate local directories so that all the database instances in the RAC and all the replay clients can access them. The remapping of external interactions should include the remapping of instances. In particular, every captured connection string probably needs to be remapped to a connection string in the replay system. If the capture system is a single instance database and the replay system is also a single instance database, the remapping of the connection string is straightforward and involves adding the appropriate entry to the configuration file. The same is valid when the capture and replay systems are both RAC databases with the same number of nodes. That is, there is a 1-to-1 mapping of the connection strings of the capture system to the connection strings of the replay system. Remapping becomes more complicated if the capture and replay systems have different numbers of nodes. Oracle Database 11g: Change Management Overview Seminar I - 53
54
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued) You use either the default options or the options from a previous replay. The next step enables you to further customize the chosen configuration. Oracle Database 11g: Change Management Overview Seminar I - 54
55
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued) You use the Connection Mappings page to conveniently use a single descriptor for all connections, either as a prepopulated string or as an alias name that maps to the descriptor string. You can also select a separate connect descriptor option that enables you to specify a unique descriptor for each connection. If you select the “Use replay options from a previous replay” option in the Choose Initial Options step, the “Use a separate connect descriptor” option is selected and the previous replay system values appear in the table below. The Replay Parameters page provides advanced parameters that control some aspects of the replay. These replay parameters are documented in the Oracle Database Performance Tuning Guide (11g Release 1). Oracle Database 11g: Change Management Overview Seminar I - 55
56
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued)
The Replay Parameters page provides advanced parameters that control aspects of the replay.
- SYNCHRONIZATION: You can disable SCN-based synchronization of the replay if your workload consists of transactions that do not heavily depend on each other, thereby accepting any divergence during replay.
- CONNECT_TIME_SCALE: This scales the time elapsed between the instant the workload capture was started and session connects by the given value. The input is interpreted as a percentage value. It can potentially be used to increase or decrease the number of concurrent users during the workload replay.
- THINK_TIME_SCALE: This scales the time elapsed between two successive user calls from the same session. The input is interpreted as a percentage value. Setting this value to 0 sends requests to the database as fast as possible.
- THINK_TIME_AUTO_CORRECT: This appropriately autocorrects the think time between calls when a user call takes longer to complete during replay than during the original capture. Note that THINK_TIME_AUTO_CORRECT corrects the think time that is calculated based on THINK_TIME_SCALE. If TRUE, it reduces (or increases) the think time when the replay is slower (or faster) than the capture. If FALSE, it does nothing.
Note: These replay parameters are documented in the Oracle Database Performance Tuning Guide (11g Release 1).
Oracle Database 11g: Change Management Overview Seminar I - 56
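The parameters above are supplied when preparing the replay; a sketch with illustrative values (keep commit-order synchronization, halve connect times, and remove think time):

```sql
BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(
    synchronization         => TRUE,  -- preserve commit order
    connect_time_scale      => 50,    -- connect at 50% of captured pace
    think_time_scale        => 0,     -- issue calls as fast as possible
    think_time_auto_correct => TRUE); -- compensate for slower calls
END;
/
```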
57
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued) The workload is replayed using replay clients connected to the database. You should be ready to start the replay clients at this point. When you are ready to start the replay clients, click Next and then start the clients. Oracle Database 11g: Change Management Overview Seminar I - 57
58
Using Enterprise Manager for Workload Replay
%> wrc userid=<user id> password=<password> server=<server connection string> replaydir=<replay directory> workdir=<client work directory>

$ wrc REPLAYDIR=/home/oracle/solutions/dbreplay USERID=system PASSWORD=oracle
Workload Replay Client: Release Production on Tue …
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Wait for the replay to start (21:47:01)
Replay started (21:48:14)

Using Enterprise Manager for Workload Replay (continued)
The Workload Replay Wizard waits for you to start the replay clients. You open a separate terminal window to start the replay clients. You can start multiple replay clients depending on the workload replay size. Each of the clients initiates one or more replay threads with the RDBMS, with each replay thread corresponding to a stream from the workload capture. The replay clients are started after the database server has entered replay PREPARE mode from the wizard, using the syntax illustrated in the slide. The userid and password parameters are the user ID and password of the replay user for the client. The server parameter is a connection string that connects to the instance of the replay system. The replaydir parameter points to the directory that contains the processed replay files. The workdir parameter defines the client’s working directory. If left unspecified, it defaults to the current directory. Before starting the replay clients, be sure that:
- The replay client software is installed on the hosts
- The client has access to the replay directory
- The replay directory contains the replay files that have been preprocessed
- The userid and password for the replay user are correct (furthermore, the replay user must be able to use the workload replay package and must have the user SWITCH privilege)
The Client Connections table is populated when at least one replay client is connected.
Oracle Database 11g: Change Management Overview Seminar I - 58
59
Using Enterprise Manager for Workload Replay
Using Enterprise Manager for Workload Replay (continued) After starting the replay clients, you review the replay setup and submit the job. A progress window is displayed, providing comparison statistics as the replay progresses. You can terminate the replay at any stage with the Stop Replay button. On successful completion of the replay, your terminal window that started the replay clients displays an information message (“Replay Finished”) followed by a time stamp. The replayed workload is now complete, and you can utilize existing manageability tools such as AWR and ASH for additional system performance information. The Elapsed Time Comparison chart shows how much time the replayed workload has taken to accomplish the same amount of work as the captured workload. The divergence table provides information about both the data and error discrepancies between the replay and capture environments, which can be used as a measure of the replay quality. You can click the View Workload Replay Report button (or Report tab after replay has completed) for a browser window that displays a report containing detailed information about the replay. RAC Note: If a specific captured instance is mapped to a new instance in the replay system, all the captured calls for the captured instances are sent to the new one. If the replay system is also RAC and a captured instance is mapped to the run-time load balancing of the replay system, all the captured calls for that recorded instance are dynamically distributed to instances in the replay RAC system using run-time load balancing. Oracle Database 11g: Change Management Overview Seminar I - 59
60
Database Replay: PL/SQL Example
exec DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname => 'sessfilt', fattribute => 'USER', fvalue => 'JFV');
exec DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'june_peak', dir => 'jun07');
Execute your workload
exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();
exec DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'jun07');
wrc userid=system password=oracle replaydir=/dbreplay
exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'j_r', replay_dir => 'jun07');
Database Replay: PL/SQL Example
In this example, the ADD_FILTER procedure adds a filter named sessfilt that filters out all sessions belonging to the username JFV. You use the START_CAPTURE procedure to start the workload capture. In the example in the slide, a capture named june_peak is captured and stored in a directory named jun07. Because the duration parameter is not specified, the workload capture continues until the FINISH_CAPTURE procedure is called. Now you can run your workload. To stop the workload capture, use the FINISH_CAPTURE procedure. This procedure finalizes the workload capture and returns the database to a normal state. At this point, you can generate a capture report by using the REPORT function. To preprocess a captured workload, use the PROCESS_CAPTURE procedure. In this example, the captured workload stored in the jun07 directory is preprocessed. When you are done, start your replay clients. To initialize replay data, use the INITIALIZE_REPLAY procedure. Initializing replay data loads the necessary metadata into tables required by workload replay. For example, captured connection strings are loaded into a table where they can be remapped for replay. In this example, the INITIALIZE_REPLAY procedure loads preprocessed workload data from the jun07 directory into the database.
Oracle Database 11g: Change Management Overview Seminar I - 60
61
Database Replay: PL/SQL Example
exec DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION(connection_id => 101, replay_connection => 'edlin44:3434/bjava21');
exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(synchronization => TRUE, think_time_scale => 2);
exec DBMS_WORKLOAD_REPLAY.START_REPLAY();

DECLARE
  cap_id  NUMBER;
  rep_id  NUMBER;
  rep_rpt CLOB;
BEGIN
  cap_id := DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO(dir => 'jun07');
  /* Get the latest replay for that capture */
  SELECT MAX(id) INTO rep_id
    FROM dba_workload_replays
   WHERE capture_id = cap_id;
  rep_rpt := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => rep_id,
                                         format => DBMS_WORKLOAD_REPLAY.TYPE_TEXT);
END;

Database Replay: PL/SQL Example (continued) To remap connections, use the REMAP_CONNECTION procedure. In the example in the slide, the connection that corresponds to connection ID 101 uses the new connection string defined by the replay_connection parameter. You then use the PREPARE_REPLAY procedure to prepare the workload replay on the replay system. In this example, the PREPARE_REPLAY procedure prepares the j_r replay to preserve the COMMIT order of the workload capture. Use the START_REPLAY procedure to start a workload replay. To stop a workload replay, use the CANCEL_REPLAY procedure. In the example in the slide, the REPORT function is used to generate a workload replay report. Note: You require the EXECUTE privilege on the capture and replay packages to execute these packages. These privileges are usually assigned by the DBA. For further details about the DBMS_WORKLOAD packages, see the Oracle Database PL/SQL Packages and Types Reference 11g Release 1. Oracle Database 11g: Change Management Overview Seminar I - 61
62
Calibrating Replay Clients
$ wrc mode=calibrate replaydir=/dbreplay

Workload Replay Client: Release Beta on Tue …
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Report for Workload in: /dbreplay

Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).

Workload Characteristics:
- max concurrency: 4 sessions
- total number of sessions: 11

Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE
$

Calibrating Replay Clients Because one replay client can initiate multiple sessions with the database, it is not necessary to start a replay client for each session that was captured. The number of replay clients that need to be started depends on the number of workload streams, the number of hosts, and the number of replay clients for each host. Run the wrc executable in calibrate mode to estimate the number of replay clients and hosts that are required to replay a particular workload. In calibrate mode, the wrc executable accepts the following parameters:
- replaydir specifies the directory that contains the preprocessed workload capture that you want to replay. If unspecified, it defaults to the current directory.
- process_per_cpu specifies the maximum number of client processes that can run on each CPU. The default value is 4.
- threads_per_process specifies the maximum number of threads that can run within a client process. The default value is 50.
The example in the slide shows how to run the wrc executable in calibrate mode. In this example, the wrc executable is run to estimate the number of replay clients and hosts that are required to replay the workload capture stored in the /dbreplay directory. Note: Use the list_hosts mode to list the hosts that participated in the capture. Oracle Database 11g: Change Management Overview Seminar I - 62
63
Packages and Procedures
DBMS_WORKLOAD_CAPTURE:
- START_CAPTURE
- FINISH_CAPTURE
- ADD_FILTER
- DELETE_FILTER
- DELETE_CAPTURE_INFO
- GET_CAPTURE_INFO()
- EXPORT_AWR
- IMPORT_AWR()
- REPORT()

DBMS_WORKLOAD_REPLAY:
- PROCESS_CAPTURE
- INITIALIZE_REPLAY
- PREPARE_REPLAY
- START_REPLAY
- CANCEL_REPLAY
- DELETE_REPLAY_INFO
- REMAP_CONNECTION
- EXPORT_AWR
- IMPORT_AWR
- GET_REPLAY_INFO
- REPORT

Packages and Procedures You need the EXECUTE privilege on the capture and replay packages to execute these packages. These privileges are usually assigned by the DBA. Note: For further details about the DBMS_WORKLOAD packages, see the Oracle Database PL/SQL Packages and Types Reference 11g Release 1. Oracle Database 11g: Change Management Overview Seminar I - 63
64
Data Dictionary Views: Database Replay
- DBA_WORKLOAD_CAPTURES: Lists all the workload captures performed in the database
- DBA_WORKLOAD_FILTERS: Lists all the workload filters defined in the database
- DBA_WORKLOAD_REPLAYS: Lists all the workload replays that have been performed in the database
- DBA_WORKLOAD_REPLAY_DIVERGENCE: Is used to monitor workload divergence
- DBA_WORKLOAD_CONNECTION_MAP: Is used to review all connection strings used by workload replays
- V$WORKLOAD_REPLAY_THREAD: Monitors the status of external replay clients

Data Dictionary Views: Database Replay For further details about the data dictionary views, see the Oracle Database Reference 11g Release 1 (11.1). Oracle Database 11g: Change Management Overview Seminar I - 64
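For quick monitoring from SQL*Plus, you might query these views directly. A minimal sketch (column choices based on the PL/SQL example in this lesson; the capture ID 21 is hypothetical, so verify names and values against your own system):

```sql
-- List all captures recorded in this database
SELECT id, name, status FROM dba_workload_captures;

-- List the replays performed against one capture
SELECT id, name
  FROM dba_workload_replays
 WHERE capture_id = 21;
```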
65
Summary In this lesson, you should have learned how to:
- Identify the benefits of using Database Replay
- List the steps involved in Database Replay
- Use Enterprise Manager to record and replay workloads
Oracle Database 11g: Change Management Overview Seminar I - 65
66
Using SQL Performance Analyzer
Realistic Testing Using SQL Performance Analyzer
67
Objectives After completing this lesson, you should be able to:
- Identify the benefits of using SQL Performance Analyzer
- Describe the SQL Performance Analyzer workflow phases
- Use SQL Performance Analyzer to evaluate performance gains following a database change
Oracle Database 11g: Change Management Overview Seminar I - 67
68
SQL Performance Analyzer: Overview
- Helps predict the impact of system changes on SQL workload response times
- Builds different versions of SQL workload performance (SQL execution plans and execution statistics)
- Executes SQL serially; concurrency is not respected
- Analyzes performance differences
- Offers fine-grained performance analysis of individual SQL statements
- Is integrated with SQL Tuning Advisor to tune regressions
- Targeted users: DBAs, QA testers, and application developers

SQL Performance Analyzer: Overview Oracle Database 11g introduces SQL Performance Analyzer, which gives you an accurate assessment of the impact of a change on the SQL statements in a workload. SQL Performance Analyzer helps you forecast the impact of a potential change on the performance of a SQL query workload. This capability provides DBAs with detailed information about the performance of SQL statements, such as before-and-after execution statistics and statements whose performance improved or regressed. This enables you to make changes in a test environment to determine, for example, whether the workload performance will be improved through a database upgrade. Oracle Database 11g: Change Management Overview Seminar I - 68
69
SQL Performance Analyzer: Uses and Benefits
SQL Performance Analyzer is beneficial in the following situations:
- Database upgrades
- Implementation of tuning recommendations
- Schema changes
- Statistics gathering
- Database parameter changes
- OS and hardware changes

SQL Performance Analyzer: Uses and Benefits SQL Performance Analyzer can be used to predict and prevent potential performance problems for any database environment change that affects the structure of SQL execution plans. The changes can include any of the following (but are not limited to these): database upgrades, implementation of tuning recommendations, schema changes, statistics gathering, database parameter changes, and OS and hardware changes. DBAs can use SQL Performance Analyzer to foresee SQL performance changes induced by the above changes, even in the most complex environments. As applications evolve through the development life cycle, database application developers can test changes to schemas, database objects, and rewritten applications (for example) to mitigate any potential performance impact. SQL Performance Analyzer also allows for the comparison of SQL performance statistics. Oracle Database 11g: Change Management Overview Seminar I - 69
70
Usage Model (1): Capture SQL Workload
SQL tuning set (STS) stores SQL workload. Includes: SQL text Bind variables Execution plans Execution statistics Incremental capture populates STS from cursor cache over a time period. STS’s filtering and ranking capabilities filter out undesirable SQL. Cursor cache Incremental capture Database instance Production database Usage Model (1): Capture SQL Workload The first step in using SQL Performance Analyzer is to capture the SQL statements that represent your workload. This step is done by using the SQL Tuning Set technology or by using the high-load SQL statements captured in the Automatic Workload Repository. Oracle Database 11g: Change Management Overview Seminar I - 70
71
Usage Model (2): Transport to a Test System
1. Copy the STS to a staging table ("pack").
2. Transport the staging table to the test system (using Data Pump or a database link, for example).
3. Copy the STS from the staging table ("unpack").

Usage Model (2): Transport to a Test System The next step is to transport these SQL statements to a similar system that is being tested. Here, the STS can be exported from production and then imported into a test system. Oracle Database 11g: Change Management Overview Seminar I - 71
72
Usage Model (3): Build Before-Change Performance
- Before the change, the SQL performance version is the SQL workload performance baseline.
- SQL performance = execution plans + execution statistics
- Test-execute the SQL in the STS to produce execution plans and statistics:
  - SQL is executed serially (no concurrency).
  - Every SQL statement is executed only once.
  - DDL/DML effects are skipped.
- Alternatively, explain plan the SQL in the STS to generate SQL plans only.

Usage Model (3): Build Before-Change Performance The third step is to capture a baseline of the test system performance consisting of the execution plans and execution statistics. Oracle Database 11g: Change Management Overview Seminar I - 72
73
Usage Model (4): Build After-Change Performance
Manually implement the planned change:
- Database upgrade
- Implementation of tuning recommendations
- Schema changes
- Statistics gathering
- Database parameter changes
- OS/hardware changes, and so on

Reexecute the SQL after the change:
- Test-execute the SQL in the SQL tuning set to generate SQL execution plans and statistics.
- Explain plan the SQL in the SQL tuning set to generate SQL plans.

Usage Model (4): Build After-Change Performance The fourth step is to make the change to the test system and rerun the SQL statements to assess the impact of the change on SQL performance. Oracle Database 11g: Change Management Overview Seminar I - 73
74
Usage Model (5): Compare and Analyze Performance
- Rely on a user-specified metric to compare SQL performance: elapsed_time, buffer_gets, or disk_reads.
- Calculate the change impact on individual SQL statements and on the SQL workload:
  - Overall impact on the workload
  - SQL net impact on the workload
- Use SQL execution frequency to define a weight of importance.
- Detect improvements, regressions, and unchanged performance.
- Detect changes in execution plans.
- Recommend running SQL Tuning Advisor to tune regressed SQL.
- Analysis results can be used to seed SQL plan management baselines.

Usage Model (5): Compare and Analyze Performance Enterprise Manager (EM) provides the tools to make a full comparison of performance data including execution statistics such as elapsed time, CPU time, and buffer gets. In the event that the SQL performance has regressed in some of the cases, the DBA must run the SQL Tuning Advisor to tune the SQL statements either immediately or at a scheduled time. As with any tuning strategy, it is recommended that only one change be implemented at a time and retested before making further changes. Oracle Database 11g: Change Management Overview Seminar I - 74
75
SQL Performance Analyzer: Summary
1. Capture the SQL workload on production.
2. Transport the SQL workload to a test system.
3. Build "before-change" performance data.
4. Make changes.
5. Build "after-change" performance data.
6. Compare results from steps 3 and 5.
7. Tune regressed SQL.

SQL Performance Analyzer: Summary
1. Capture SQL: In this phase, you collect the set of SQL statements that represent your SQL workload on the production system. You can use SQL tuning sets or the Automatic Workload Repository (AWR) to capture the information to transport. Because the AWR essentially captures high-load SQL, you should consider modifying the default AWR snapshot settings and captured Top SQL to ensure that the AWR captures the maximum number of SQL statements. This ensures a more complete SQL workload capture.
2. Transport: Here you transport the resultant workload to the test system. The STS is exported from the production system and imported into the test system.
3. Compute "before version" performance: Before any changes take place, you execute the SQL statements, collecting baseline information needed to assess the impact that a future change may have on the performance of the workload. The information collected in this stage represents a snapshot of the current state of the system workload. The performance data includes:
- Execution plans: For example, those generated by an explain plan
- Execution statistics: For example, elapsed time, buffer gets, disk reads, and rows processed
4. Make a change: After you have the "before version" data, you can implement your planned change and start viewing the impact on performance.
Oracle Database 11g: Change Management Overview Seminar I - 75
76
SQL Performance Analyzer: Summary (continued)
5. Compute "after version" performance: This step takes place after the change is made in the database environment. Each statement of the SQL workload runs under a mock execution (collecting statistics only), gathering the same information as captured in step 3.
6. Compare and analyze SQL performance: After you have both versions of the SQL workload performance data, you can perform the performance analysis by comparing the "after version" data with the "before version" data. The comparison is based on the execution statistics, such as elapsed time, CPU time, and buffer gets.
7. Tune regressed SQL: At this stage, you have identified exactly which SQL statements may cause performance problems when the database change is made. From here, you can use any of the database tools to tune the system. For example, you could run the SQL Tuning Advisor or Access Advisor against the identified statements, and implement those recommendations. Alternatively, you can seed SPM with the plans captured in step 3 to guarantee that the plans remain the same. After you implement any tuning action, you should repeat the process to create a new "after version" and analyze the performance differences to ensure that the new performance is acceptable.
Oracle Database 11g: Change Management Overview Seminar I - 76
77
Enterprise Manager: Capturing the SQL Workload
Creating a SQL tuning set: Use a representative workload on the production system. Workload captured is: SQL text Execution context Execution frequency Performance data is calculated on a test system. Enterprise Manager: Capturing the SQL Workload The Create STS Tuning Set Wizard provided in EM in Oracle Database 11g guides you through the capturing of SQL statements. You select a load method and a data source, specify filter conditions for selecting SQL statements to be loaded, and schedule it as a job to be executed at a given time. You access the SQL Tuning Sets page from the Performance tab in Database Control. The workload you capture should reflect a representative period of time (in captured SQL statements) that you wish to test under some changed condition. The following information is captured in this process: The SQL text The execution context, including bind values, parsing schema, and compilation environment that contains a set of initialization parameters under which the statement is executed The execution frequency, which tells how many times the SQL statement has been executed during the time interval of the workload Normally, the capture SQL happens on the production system to capture the workload running on the production system. The performance data is computed later on the test system by the compute SQL performance processes. SQL Performance Analyzer tracks the SQL performance of the same STS before and after a change is made to the database. Oracle Database 11g: Change Management Overview Seminar I - 77
78
Capturing the SQL Workload
You can do either of the following:
- Incrementally collect SQL workload over a period of time
- Collect SQL statements on a one-time-only basis from the following sources:
  - Cursor cache
  - AWR snapshots
  - AWR baselines
  - User-defined workload

Capturing the SQL Workload You can choose either to incrementally collect the SQL workload over a period of time, or to collect SQL statements one time only from the sources shown in the slide. A user-defined workload is a user-defined table that stores SQL statements and must include sql_text and parsing_schema_name columns. Ideally, it should also have columns that contain SQL statistics. EM provides the following support for SQL Performance Analyzer:
- Viewing previously captured workloads and their details
- Capturing the SQL
- Exporting a workload
- Importing a workload
- Computing SQL performance
- Managing SQL performance data
- Reporting analysis results
- Running SQL Tuning Advisor to tune regressed SQL statements
- Viewing previously executed SQL Performance Analyzer tasks and their results
Oracle Database 11g: Change Management Overview Seminar I - 78
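A user-defined workload table is simply a table you create yourself. A minimal sketch, assuming only the two required columns named in the notes above plus optional statistics columns (the table name and column sizes are illustrative):

```sql
CREATE TABLE my_sql_workload (
  sql_text            CLOB,          -- required
  parsing_schema_name VARCHAR2(30),  -- required
  elapsed_time        NUMBER,        -- optional SQL statistics
  buffer_gets         NUMBER         -- optional SQL statistics
);
```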
79
Creating a SQL Tuning Set
You can create filters against the type of SQL conditions for capture. In the example in the slide, the schema APPS, SELECT SQL statements, and APPS_DEMO module are selected for capture from the cursor cache. The actual filter options depend on the selected load methods. The final stage in the wizard enables you to select a job schedule time (IMMEDIATELY, LATER), review your job options, and submit the job. Oracle Database 11g: Change Management Overview Seminar I - 79
80
Exporting the SQL Workload
DBMS_SQLTUNE procedures:
- CREATE_STGTAB_SQLSET
- PACK_STGTAB_SQLSET

SQL> exec DBMS_SQLTUNE.CREATE_STGTAB_SQLSET('STS_APPS_STG','APPS');

Exporting the SQL Workload From this page, you can choose to export the selected STS for transport to the test system. You can also drill down and see the SQL statements contained within the selected STS. You also use this page to import an STS from a previously exported file. This is how you would load an STS on the test system for comparison purposes. In a common scenario, you would be exporting from an Oracle Database 10g environment, in which case you would need to create and pack a staging table using the DBMS_SQLTUNE procedures through which STSs are imported and exported. These procedures are shown in the slide. After you pack the STS in Oracle Database 10g, use Data Pump to export the schema to the Oracle Database 11g test system on which you will replay the workload after some environment changes. This allows you to test the new environment before upgrading your production system. Oracle Database 11g: Change Management Overview Seminar I - 80
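The full export sequence might look like the following sketch. The STS name STS_APPS and the APPS schema are illustrative, and the Data Pump command is just one possible way to move the staging table:

```sql
-- On the source system: create and fill the staging table
exec DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_APPS_STG', schema_name => 'APPS');
exec DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'STS_APPS', staging_table_name => 'STS_APPS_STG', staging_schema_owner => 'APPS');

-- Export the staging table and import it on the test system, e.g.:
--   $ expdp apps tables=APPS.STS_APPS_STG dumpfile=sts.dmp

-- On the test system: unpack the STS from the imported staging table
exec DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name => 'STS_APPS', replace => TRUE, staging_table_name => 'STS_APPS_STG');
```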
81
Creating a SQL Performance Analyzer Task
EM helps you manage each component in the SQL Performance Analyzer process and reports the analysis results. The workflow and user interface apply to both EM Database Control and EM Grid Control. You access SQL Performance Analyzer from the "Software and Support" tab of Database Control, or by selecting Database Instance > Advisor Central > Advisors > SQL Performance Analyzer. SQL Performance Analyzer offers three workflows that enable you to test different scenarios:
- Optimizer Upgrade Simulation: Test the effects of specified optimizer version changes on SQL tuning set performance.
- Parameter Change: Test and compare an initialization parameter change on SQL tuning set performance. A SQL Performance Analyzer task is created and an initial trial run is performed with the parameter set to the base value. A second trial run is performed with the parameter set to the changed value. A replay trial comparison report is then run for the two trials.
- Guided Workflow: Create a SQL Performance Analyzer task and execute custom experiments using manually created replay trials.
Oracle Database 11g: Change Management Overview Seminar I - 81
82
Optimizer Upgrade Simulation
This page allows you to create a task that measures the performance impact on an STS when the database version is upgraded from 10g to 11g. You can test the performance impact after a database is upgraded and then tune the database if a degradation occurs. To create a task, you must specify the following details:
- Enter the name of the task and a description.
- Click the Select icon and select a SQL tuning set from the list.
- Select the Per-SQL Time Limit from the list to specify the time limit for the execution of each SQL statement. This can be:
  - Unlimited: There is no time limit for the execution of each SQL statement.
  - Explain Only: If you select this option, the test plan is generated but not executed.
  - Customize: Select this option if you want to customize the execution time limit.
- Select the Optimizer Versions to indicate the current version of the database and the new version to which the database is being upgraded. Two replay trials will be created: the first captures STS performance with the optimizer simulating the 10g optimizer, and the second uses the native 11g optimizer.
- Select the Comparison Metric to be used to evaluate the performance impact of the database upgrade.
- Specify the Schedule for the task (IMMEDIATELY, LATER).
Oracle Database 11g: Change Management Overview Seminar I - 82
83
SQL Performance Analyzer: Task Page
A SQL Performance Analyzer Task allows you to execute a specific STS under changed environmental conditions. After you execute the task, you can then assess the impact of these changes on the performance of the STS. The comparison report is useful in assessing the impact of the changed environmental conditions on the performance of the specified STS. From this page you can also: Create a replay trial to test the performance of an STS under a specific environment. Click “Create Replay Trial.” Refer to the Guided Workflow page for details on creating a Replay Trial Run a replay trial comparison to compare the differences between the replay trials that have been created. A comparison report is generated for each replay trial run. Click “Run Replay Trial Comparison.” Refer to the Guided Workflow page for details on running a replay trial comparison. Click the icon in the Comparison Report column to view the Replay Trial Comparison report. Oracle Database 11g: Change Management Overview Seminar I - 83
84
Comparison Report The Projected Workload chart shows the projected workload for each replay trial based on the comparison metric (Execute Elapsed Time) along with the improvement and regression impact. Use the links to drill down to the details page, where you can click the associated Improvement, Regression, and Overall Impact links to view the SQL statements in each category. You can click the SQL ID to drill down to the SQL Details page. On the SQL Details page, you can view the SQL text, a line-by-line comparison of the execution statistics, and the explain plan comparison. The SQL Statement Count chart shows the number of SQL statements that have improved, regressed, or been unchanged in performance based on the comparison metric. The color of the bar indicates whether the plan changed between two trial runs. Use the links or the data buckets to access the SQL Statement Count Details page. On this page, you can see a list of SQL statements and click the SQL ID to access the SQL Details page. The report page also provides a summary of the ten statements (by SQL ID) that have had the most significant impact on performance. Use the SQL ID link to access the SQL details. Oracle Database 11g: Change Management Overview Seminar I - 84
85
Guided Workflow The trial environment must be set manually from a separate terminal session before the replay trials are captured. Guided Workflow You can use the Guided Workflow to define a sequence of steps to execute a two-trial SQL Performance Analyzer test. The steps are as follows: Create a SQL Performance Analyzer task based on a SQL tuning set. Replay the STS in the initial environment: Any changes to the trial environment that affect the STS must be made manually before the replay trial is executed. These trials may include changing initialization parameters, gathering optimizer statistics, and creating indexes. Create a replay trial using changed environment: You can now create the second replay trial using the changed environment, specifying all the necessary information. Performance differences between the trials are attributed to the environmental differences between trials. Create a replay trial comparison using trials from previous steps: This allows you to assess the performance impact on the STS when each replay trial is executed. View the Trial Comparison report: You can now generate the Replay Trial Comparison report. Oracle Database 11g: Change Management Overview Seminar I - 85
86
Viewing Analysis Results
When you click the SQL ID associated with the statement, you see a more detailed illustration of the SQL. The arrows give you a symbolic reference of the overall improvement expectation. The detailed page gives you SQL text, execution statistics for the selected SQL statement, and overall information findings. Using the scrollable windows, you can view the plan table of the SQL statement before and after the proposed change. The trial comparison allows you to assess the impact on SQL tuning set performance of any changes made between two trials. You should know the difference between Trial 1 and Trial 2 execution environments to understand the impact to the changes between trials. Tracking environmental changes between trials is currently a DBA responsibility. The selected comparison metric is used as the basis for comparison, and defaults to EXECUTE ELAPSED TIME when both trials contain test execution statistics. You can enact the SQL Tuning Advisor at the statement level or at the task level. Task-level activation will affect changes for all statements within the task. From the SQL Performance Analyzer Task Result page, you can directly tune all regressed statements by invoking SQL Tuning Advisor through clicking the Run SQL Tuning Advisor button, completing the job name and optional job details, and then submitting the job. You can also prevent regressions by using SQL plan baselines by clicking the Create SQL Plan Baselines button. Oracle Database 11g: Change Management Overview Seminar I - 86
87
Viewing Tuning Results
After successful job submission, go to the Advisor Central page and drill down to view the recommendations of the run. You can view the detail of the suggested statement improvements by selecting a specific type. The improvement identified by the SQL Tuning Advisor as having the highest benefit (percentage) is presented at the top of the list. It is recommended that you implement only one change at a time and repeat the analysis process, capturing “after tuning” performance data and reanalyzing the recommendations against the “after change” performance data. Oracle Database 11g: Change Management Overview Seminar I - 87
88
SQL Performance Analyzer: PL/SQL Example
exec :tname := dbms_sqlpa.create_analysis_task(sqlset_name => 'mysts', task_type => 'sqlpia');
exec dbms_sqlpa.execute_analysis_task(:tname, execution_type => 'test execute');
select dbms_sqlpa.report_analysis_task(task_name => :tname, type => 'text', section => 'summary') from dual;

-- Make changes

exec dbms_sqlpa.execute_analysis_task(:tname, execution_type => 'test execute');
select dbms_sqlpa.report_analysis_task(task_name => :tname, type => 'text', section => 'summary') from dual;

exec dbms_sqlpa.execute_analysis_task(:tname, execution_type => 'analyze performance');
select dbms_sqlpa.report_analysis_task(task_name => :tname, type => 'text', section => 'summary') from dual;

SQL Performance Analyzer: PL/SQL Example You can easily adapt the example in the slide to run your own DBMS_SQLPA analysis.
1. Create the tuning task to run SQL Performance Analyzer.
2. Execute the task once to build the before-change performance data and produce the before-change report. You can specify various parameters, some of which are:
- Set the execution_type parameter to EXPLAIN PLAN (generates explain plans for all SQL statements) or TEST EXECUTE (executes all SQL statements in the SQL workload).
- Specify execution parameters by using the execution_params parameter, specified as name-value pairs. The time_limit parameter specifies the global time limit to process all SQL statements in a SQL tuning set before timing out. The local_time_limit parameter specifies the time limit to process each SQL statement in a SQL tuning set before timing out.
3. Make your changes.
4. Execute the task another time after making the changes, and then get the after-change report.
5. Compare the two executions, and then get the analysis report.
Oracle Database 11g: Change Management Overview Seminar I - 88
89
SQL Performance Analyzer: Data Dictionary Views
Modified views in Oracle Database 11g:
- DBA{USER}_ADVISOR_TASKS: Displays details about the advisor task
- DBA{USER}_ADVISOR_FINDINGS: Displays analysis findings
New views in Oracle Database 11g:
- DBA{USER}_ADVISOR_EXECUTIONS: Lists metadata information for executions of a task
- DBA{USER}_ADVISOR_SQLPLANS: Displays the list of SQL execution plans
- DBA{USER}_ADVISOR_SQLSTATS: Displays the list of SQL compilation and execution statistics

SQL Performance Analyzer: Data Dictionary Views
- DBA{USER}_ADVISOR_SQLPLANS: Displays the list of all SQL execution plans (or those owned by the current user)
- DBA{USER}_ADVISOR_SQLSTATS: Displays the list of SQL compilation and execution statistics (or those owned by the current user)
- DBA{USER}_ADVISOR_TASKS: Displays details about the advisor task created to perform an impact analysis of a system environment change
- DBA{USER}_ADVISOR_EXECUTIONS: Lists metadata information for executions of a task. SQL Performance Analyzer creates a minimum of three executions to perform a change impact analysis on a SQL workload: one execution that collects performance data for the before-change version of the workload, a second execution for the after-change version of the workload, and a final execution to perform the actual analysis.
- DBA{USER}_ADVISOR_FINDINGS: Displays analysis findings. The advisor generates four types of findings: performance regression, symptoms, errors, and informative messages.
Oracle Database 11g: Change Management Overview Seminar I - 89
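To check the state of an analysis task from SQL*Plus, you might query the executions view. A minimal sketch (:tname is the task name bound in the earlier PL/SQL example; verify the column names against your release):

```sql
SELECT execution_name, execution_type, status
  FROM dba_advisor_executions
 WHERE task_name = :tname
 ORDER BY execution_start;
```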
90
Summary In this lesson, you should have learned how to:
- Identify the benefits of using SQL Performance Analyzer
- Describe the SQL Performance Analyzer workflow phases
- Use SQL Performance Analyzer to evaluate performance gains following a database change
Oracle Database 11g: Change Management Overview Seminar I - 90
91
Provisioning Automation
Performing Online Changes
92
Objectives After completing this lesson, you should be able to: Describe and use the enhanced online table redefinition and materialized views Describe fine-grained dependency management Describe and use the enhanced PL/SQL recompilation mechanism Use online table redefinition with materialized views and view logs Set up wait times for locks on DDL commands Make indexes invisible to test potential plan changes Oracle Database 11g: Change Management Overview Seminar I - 92
93
Online Application Maintenance
1. In-place redefinition:
Fast adding of mandatory columns with default values
Testing the removal of an index before dropping it
Using invisible indexes
2. Copy-based redefinition:
Creating an index
Changing a tablespace to read-only
Dropping an unused column from a table
Moving a table from one tablespace to another
Online Application Maintenance
Online Application Maintenance is a collection of features that improve database maintenance activities on applications while they are in use. In other words, you redefine the database objects that make up the application. The levels of redefinition, from simple to complex, are:
1. In-place redefinition
You modify a database object “in place” (in the open, active database without any intermediate copy or version). An immediate change occurs to an object, for example, with the ALTER TABLE command. An enhancement to this command is the ability to add a NOT NULL column with a default value in sub-second time (independent of how the table is populated) and without consuming space.
2. Copy-based redefinition
This process is transparent to the user. The database creates an intermediate copy of the object. When the hidden copy is ready, it substitutes for its predecessor. Online index rebuild and online table redefinition use this type of mechanism.
Oracle Database 11g: Change Management Overview Seminar I - 93
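The fast in-place addition of a mandatory column described above can be sketched as follows. This is an illustrative example only; the table and column names are hypothetical:

```sql
-- Hypothetical example of the 11g fast-add enhancement: a mandatory
-- column with a default value is added as a metadata-only operation,
-- so the statement completes in sub-second time and consumes no
-- space, regardless of how many rows the table contains.
ALTER TABLE sales_history
  ADD (load_status VARCHAR2(1) DEFAULT 'N' NOT NULL);
```

Existing rows return the default value 'N' without ever being physically updated; only rows inserted or modified afterward store the column value.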
94
Online Table Redefinition
Online table redefinition modifies the table structure without affecting the availability of the table. In Oracle Database 11g, this mechanism is enhanced to do the following: Support tables with materialized views and view logs Support triggers with ordering dependency Recompile only logically affected PL/SQL and dependent objects Perform redefinition with: EM Reorganize Objects Wizard DBMS_REDEFINITION package Online Table Redefinition The Oracle database provides a mechanism for making table structure modifications without affecting the availability of the table. The mechanism is called online table redefinition. When a table is redefined online, it is accessible to both queries and DML during much of the redefinition process. This basic mechanism has not changed in Oracle Database 11g. It is enhanced to support tables with materialized views and view logs. In addition, online redefinition supports triggers with the FOLLOWS or PRECEDES clause, which establishes an ordering dependency between the triggers. Also, PL/SQL and dependent objects are not invalidated after a redefinition unless they are logically affected. You can redefine a table online with the Enterprise Manager Reorganize Objects Wizard or with the DBMS_REDEFINITION package. You invoke the Reorganization Wizard from the Schema page of Enterprise Manager, and then click in the Select column to select the table to redefine. Oracle Database 11g: Change Management Overview Seminar I - 94
95
Redefinition and Materialized View
In prior database versions, a table could not be redefined if it had associated logs or materialized views (MV). In Oracle Database 11g, you can redefine tables with materialized views and MV logs. You can clone the materialized view log onto the interim table just as you can with triggers, indexes, and other similar dependent objects. At the end of the redefinition, ROWID logs are invalidated. Initially, all dependent materialized views need to have a complete refresh. This enhancement saves you the effort and time of dropping and re-creating the materialized views and the materialized view logs. For materialized view logs and queue tables, online redefinition is restricted to changes in physical properties. No horizontal or vertical subsetting is permitted, nor are any column transformations allowed. (The only valid value for the column mapping string is NULL). Oracle Database 11g: Change Management Overview Seminar I - 95
96
Using PL/SQL to Redefine a Table
1. Choose the redefinition method.
2. Use the DBMS_REDEFINITION.CAN_REDEF_TABLE procedure to verify that the table can be redefined.
3. Create an empty interim table without indexes.
4. Use the DBMS_REDEFINITION.START_REDEF_TABLE procedure to start redefinition.
5. Create indexes on the interim table.
6. Use DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS to copy dependent objects to the interim table.
7. Check for errors in the DBA_REDEFINITION_ERRORS view.
8. Use the DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure to complete the redefinition.
9. Drop the interim table.
Using PL/SQL to Redefine a Table
1. Choose the redefinition method: by key (primary key or pseudo-primary key) or by ROWID (if no key is available).
2. Verify that the table is a candidate for online redefinition with the CAN_REDEF_TABLE procedure.
3. Create an empty interim table (in the same schema as the table to be redefined) with the desired logical and physical attributes, but without indexes.
Note: Optional (and a best practice): If you are redefining a large table and want to improve the performance of the next step by running it in parallel, issue the following statements:
ALTER SESSION FORCE PARALLEL DML PARALLEL <DOP>;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL <DOP>;
4. Start the redefinition process by calling the START_REDEF_TABLE procedure. If you did not define indexes in step 3, the initial copy uses direct path inserts and does not have to maintain indexes at this point. This is a performance benefit.
5. Create any indexes and other dependent objects on the interim table.
6. Copy dependent objects of the original table to the interim table with the COPY_TABLE_DEPENDENTS procedure. This procedure clones and registers dependent objects of the base table, such as triggers, indexes, materialized view logs, grants, and constraints. It does not clone the already registered dependent objects.
Oracle Database 11g: Change Management Overview Seminar I - 96
97
Oracle Database 11g: Change Management Overview Seminar I - 97
Using PL/SQL to Redefine a Table (continued)
7. Query the DBA_REDEFINITION_ERRORS view to check for errors. Optional (and a best practice): Synchronize the interim and the original tables periodically with the SYNC_INTERIM_TABLE procedure. Perform a final synchronization before completing the redefinition.
8. Complete the redefinition with the FINISH_REDEF_TABLE procedure.
9. Drop the interim table.
The following are the end results of the redefinition process:
The original table is redefined with the columns, indexes, constraints, grants, triggers, and statistics of the interim table.
Dependent objects that were registered, either explicitly through the REGISTER_DEPENDENT_OBJECT procedure or implicitly through the COPY_TABLE_DEPENDENTS procedure, are renamed automatically, so that dependent object names on the redefined table are the same as before redefinition. If no registration is done or no automatic copying is done, then you must manually rename the dependent objects.
The referential constraints involving the interim table now involve the redefined table and are enabled.
Any indexes, triggers, materialized view logs, grants, and constraints defined on the original table (prior to redefinition) are transferred to the interim table and are dropped when the user drops the interim table. Any referential constraints involving the original table before the redefinition now involve the interim table and are disabled.
PL/SQL procedures and dependent objects are invalidated if they are logically affected by the redefinition. They are automatically revalidated whenever they are next used. Note: The revalidation can fail if the logical structure of the table was changed as a result of the redefinition process.
Notes only page
Oracle Database 11g: Change Management Overview Seminar I - 97
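Steps 1 through 9 above can be sketched end to end as follows. This is a hedged outline rather than a tested script: the schema (HR), original table (EMP), and interim table (INT_EMP) names are hypothetical, chosen only for illustration.

```sql
-- Step 2: verify that HR.EMP can be redefined using its primary key
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('HR', 'EMP', DBMS_REDEFINITION.CONS_USE_PK);

-- Step 3: create the empty interim table with the desired structure
-- (no indexes, so the initial copy uses direct path inserts)
CREATE TABLE hr.int_emp (
  employee_id NUMBER,
  last_name   VARCHAR2(25));

-- Step 4: start the redefinition
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'EMP', 'INT_EMP');

-- Step 5: create any indexes on the interim table here, if needed

-- Step 6: clone triggers, grants, constraints, and similar dependents
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('HR', 'EMP', 'INT_EMP',
      num_errors => num_errors);
END;
/

-- Step 7: check for cloning errors
SELECT object_name, base_table_name FROM dba_redefinition_errors;

-- Optional best practice: resynchronize before finishing
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HR', 'EMP', 'INT_EMP');

-- Step 8: complete the redefinition; step 9: drop the interim table
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'EMP', 'INT_EMP');
DROP TABLE hr.int_emp;
```

After FINISH_REDEF_TABLE, the names of the two tables are swapped in the dictionary, so dropping the interim table discards the old structure.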
98
Fine-Grained Dependency Management
Adding a column to a table no longer affects dependent views and does not invalidate the dependent objects. Dependencies are tracked automatically. This mechanism requires no configuration.
CREATE VIEW NEW_EMPLOYEES AS SELECT LAST_NAME FROM EMPLOYEES WHERE EMPLOYEE_ID > 20;
(Diagram: the dependent unit NEW_EMPLOYEES makes a cross-unit reference to the parent unit EMPLOYEES.)
Fine-Grained Dependency Management
In Oracle Database 11g, you now have access to records that describe more precise dependency metadata. This is called fine-grained dependency tracking; it ensures that dependent objects are not invalidated without a logical requirement. In Oracle Database 11g, dependencies are tracked at the element level within a unit. Element-based dependency tracking covers the following:
Dependency of a single-table view on its base table
Dependency of a PL/SQL program unit (package specification, package body, or subprogram) on other PL/SQL program units, tables, and views
A cross-unit reference creates a dependency from the unit making the reference (the dependent unit—for example, the NEW_EMPLOYEES view above) to the unit being referenced (the parent unit—for example, the EMPLOYEES table). Dependencies are always tracked automatically by the PL/SQL and SQL compilers. This mechanism is available without any configuration. Reducing the invalidation of dependent objects in response to changes to the objects on which they depend increases application availability, both in the development environment and during online application upgrade.
Oracle Database 11g: Change Management Overview Seminar I - 98
99
Minimizing Dependent PL/SQL Recompilation
After DDL commands
After online table redefinition
Transparent enhancement
Minimizing Dependent PL/SQL Recompilation
In prior versions of Oracle Database, all directly and indirectly dependent views and PL/SQL packages were invalidated after an online redefinition or other DDL operations. These views and PL/SQL packages were automatically recompiled whenever they were next invoked. If there are many dependent PL/SQL packages and views, the cost of the revalidation or recompilation can be significant. In Oracle Database 11g, views, synonyms, and other table-dependent objects (with the exception of triggers) that are not logically affected by the redefinition are not invalidated. For example, if referenced column names and types are the same after the redefinition, they are not invalidated. This optimization is “transparent”—that is, it is turned on by default.
Oracle Database 11g: Change Management Overview Seminar I - 99
100
More Precise Dependency Metadata
Recording additional, fine-grained dependency metadata increases application availability. Prior to Oracle Database 11g, adding column D to table T invalidated the dependent objects. In Oracle Database 11g, adding column D to table T does not affect view V and does not invalidate the dependent objects.
(Diagram: table T with columns A and B; view V exposing columns A and B; dependent procedure P and function F; column D is added to table T.)
More Precise Dependency Metadata
Earlier Oracle Database releases record dependency metadata—for example, that PL/SQL unit P depends on PL/SQL unit F, or that view V depends on table T—with the precision of the whole object. This means that dependent objects are sometimes invalidated without logical requirement. For example, if view V depends only on columns A and B in table T, and column D is added to table T, the validity of view V is not logically affected. Nevertheless, view V was invalidated by the addition of column D to table T. Now, in Oracle Database 11g, adding column D to table T does not invalidate view V. Similarly, if procedure P depends only on elements E1 and E2 within a package, adding element E99 to the package does not invalidate procedure P.
Oracle Database 11g: Change Management Overview Seminar I - 100
101
Managing Dependencies
Example 1:
CREATE TABLE t (col_a NUMBER, col_b NUMBER, col_c NUMBER);
CREATE VIEW v AS SELECT col_a, col_b FROM t;
SELECT ud.name, ud.type, ud.referenced_name, ud.referenced_type, uo.status
FROM user_dependencies ud, user_objects uo
WHERE ud.name = uo.object_name AND ud.name = 'V';

NAME TYPE REFERENCED_NAME REFERENCED_TYPE STATUS
V    VIEW T               TABLE           VALID

Example 2:
ALTER TABLE t ADD (col_d VARCHAR2(20));
SELECT ud.name, ud.type, ud.referenced_name, ud.referenced_type, uo.status
FROM user_dependencies ud, user_objects uo
WHERE ud.name = uo.object_name AND ud.name = 'V';

NAME TYPE REFERENCED_NAME REFERENCED_TYPE STATUS
V    VIEW T               TABLE           VALID

Example of Dependency of a Single-Table View on Its Base Table
In the first example in the slide, table T is created with three columns, COL_A, COL_B, and COL_C. A view named V is created based on columns COL_A and COL_B of table T. The dictionary views are queried; view V is dependent on table T and its status is valid. In the second example in the slide, table T is altered: a new column named COL_D is added. The dictionary views still report that view V is dependent on table T and that it remains valid, because element-based dependency tracking recognizes that columns COL_A and COL_B are not modified and, therefore, the view does not need to be invalidated.
Oracle Database 11g: Change Management Overview Seminar I - 101
102
Managing Dependencies
CREATE PACKAGE pkg IS
  PROCEDURE p1;
END pkg;
/
CREATE PROCEDURE p IS
BEGIN
  pkg.p1();
END;
/
CREATE OR REPLACE PACKAGE pkg IS
  PROCEDURE p1;
  PROCEDURE unheard_of;
END pkg;
/
SELECT status FROM user_objects WHERE object_name = 'P';

STATUS
VALID

Example of Dependency of a PL/SQL Program Unit on a PL/SQL Program Unit
In the example in the slide, you create a package named PKG that declares a procedure P1. Another procedure named P invokes PKG.P1. The definition of the package PKG is then modified: another subroutine is added to the package declaration. When you query the USER_OBJECTS dictionary view for the status of procedure P, it is still valid because the element you added to the definition of PKG is not referenced by procedure P.
Oracle Database 11g: Change Management Overview Seminar I - 102
103
Usage Guidelines to Reduce Invalidation
Original:
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
END;

Partial invalidation (FUN3 and PR2 inserted in the middle):
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  FUNCTION FUN3 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
  PROCEDURE PR2 (V1 VARCHAR2);
END;

No invalidation (PR2 appended at the end):
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
  PROCEDURE PR2 (V1 VARCHAR2);
END;

Recommended: Insert at the end.
Usage Guidelines to Reduce Invalidation
Add items to the end of a package to avoid changing slot numbers or entry point numbers of existing top-level elements.
Avoid SELECT *, table%rowtype, and INSERT with no column names in PL/SQL units to allow for the ADD COLUMN functionality without invalidation.
Use views or synonyms to provide a layer of indirection between PL/SQL code and tables. The CREATE OR REPLACE VIEW command does not invalidate views and PL/SQL dependents if the view’s new rowtype matches the old rowtype (this behavior is available in Oracle Database 10g Release 2). Likewise, the CREATE OR REPLACE SYNONYM command does not invalidate PL/SQL dependents if the old table and the new table have the same rowtype and privilege grants. Views and synonyms enable you to evolve tables independent of code in your application.
Oracle Database 11g: Change Management Overview Seminar I - 103
104
Serializing Locks
Oracle Database 11g allows DDL commands to wait for DML locks. The DDL_LOCK_TIMEOUT parameter is set at the system or session level. Values: 0–1,000,000 (in seconds); 0: NOWAIT; 1,000,000: very long WAIT.
Serializing Locks
You can limit the time that DDL commands wait for DML locks by setting the DDL_LOCK_TIMEOUT parameter at the system or session level. This initialization parameter is set by default to 0 (that is, NOWAIT), which ensures backward compatibility. The range of values is 0–1,000,000 (in seconds). The maximum value of 1,000,000 seconds enables the DDL statement to wait for a very long time (about 11.5 days) for the DML lock. If the lock is not acquired on timeout expiration, your application should handle the timeout accordingly.
Oracle Database 11g: Change Management Overview Seminar I - 104
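Setting the timeout at the session level can be sketched as follows; the table name below is hypothetical:

```sql
-- Let DDL in this session wait up to 30 seconds for DML locks
-- instead of failing immediately with ORA-00054.
ALTER SESSION SET ddl_lock_timeout = 30;

-- This DDL now waits up to 30 seconds for concurrent transactions
-- on the (hypothetical) table to release their locks before failing.
ALTER TABLE orders ADD (note VARCHAR2(100));
```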
105
Locking Tables Explicitly
Useful for adding a column (without a default value) to a table that is frequently updated.
Wait for up to 10 seconds for a DML lock:
LOCK TABLE hr.jobs IN EXCLUSIVE MODE WAIT 10;
Do not wait if another user has already locked the table:
LOCK TABLE hr.employees IN EXCLUSIVE MODE NOWAIT;
Lock a table that is accessible through the remote_db database link:
LOCK TABLE hr.employees@remote_db IN SHARE MODE;
Locking Tables Explicitly
DDL commands require exclusive locks on internal structures. If these locks are unavailable when a DDL command is issued, the DDL command fails, although it might have succeeded if it had been issued subseconds later. The WAIT option allows a DDL command to wait for its locks for a specified period of time before failing. The LOCK TABLE command has new syntax that lets you specify the maximum number of seconds the statement should wait to obtain a DML lock on the table: LOCK TABLE … IN lockmode MODE [NOWAIT | WAIT integer]. Specify NOWAIT if you want the database to return control to you immediately. If the specified table, partition, or table subpartition is already locked by another user, the database returns a message. Use the WAIT clause to indicate that the LOCK TABLE statement should wait up to the specified number of seconds to acquire a DML lock. There is no limit on the value of the integer. If you specify neither NOWAIT nor WAIT, the database waits indefinitely until the table is available, locks it, and returns control to you. When the database is executing DDL statements concurrently with DML statements, a timeout or deadlock can sometimes occur. The database detects such timeouts and deadlocks and then returns an error.
Oracle Database 11g: Change Management Overview Seminar I - 105
106
Sharing Locks
The following commands no longer acquire exclusive locks (X) but acquire shared exclusive locks (SX):
CREATE INDEX ONLINE
CREATE MATERIALIZED VIEW LOG
ALTER TABLE ENABLE CONSTRAINT NOVALIDATE
The benefit is that DML can continue while the DDL is executed. This change is transparent (that is, there is no syntax change).
Sharing Locks
In highly concurrent environments, the requirement of acquiring an exclusive lock, for example, at the end of an online index creation and rebuild could lead to a spike of waiting DML operations and, therefore, a short drop and spike in system usage. Although this is not an overall problem for the database, this anomaly in system usage could trigger operating system alarm levels. This feature eliminates the need for exclusive locks when creating or rebuilding an online index.
Oracle Database 11g: Change Management Overview Seminar I - 106
107
Invisible Index: Overview
(Diagram: from the optimizer viewpoint, with OPTIMIZER_USE_INVISIBLE_INDEXES=FALSE, a VISIBLE index is used and an INVISIBLE index is not; from the data viewpoint, DML updates the table and maintains both indexes.)
Invisible Index: Overview
Oracle Database 11g enables you to create and alter indexes as invisible. An invisible index is an index that is ignored by the optimizer unless you explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES initialization parameter to TRUE at the session or system level. The default value for this parameter is FALSE. Invisible indexes are maintained by DML operations, but are not used by the optimizer during queries unless the query includes a hint that names the invisible index. Using invisible indexes, you can:
Test the removal of an index before dropping it
Use temporary index structures for operations or modules of an application without affecting the overall application (for example, during an application upgrade process)
Making an index invisible is an alternative to making it unusable or dropping it. Unlike unusable indexes, an invisible index is maintained during DML statements.
Oracle Database 11g: Change Management Overview Seminar I - 107
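To test an invisible index without exposing it to other sessions, the parameter described above can be changed for the current session only, for example:

```sql
-- Default is FALSE: invisible indexes are ignored by the optimizer.
-- Setting TRUE at the session level lets this session's queries use
-- invisible indexes while the rest of the system is unaffected.
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;
```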
108
Invisible Indexes
Index is altered to be invisible to the optimizer:
ALTER INDEX ind1 INVISIBLE;
Optimizer considers this index for this statement:
SELECT /*+ index(TAB1 IND1) */ COL1 FROM TAB1 WHERE …;
Optimizer will always consider the index:
ALTER INDEX ind1 VISIBLE;
Creating an index as initially invisible:
CREATE INDEX IND1 ON TAB1(COL1) INVISIBLE;
Invisible Indexes
When an index is invisible, the optimizer generates plans that do not use the index. If there is no discernible drop in performance, you can then drop the index. If some queries show benefit from the index, you can make the index visible again, thus avoiding the effort of dropping an index and then having to re-create it. You can also create an index initially as invisible, perform testing, and then determine whether to make the index available. You can query the VISIBILITY column of the *_INDEXES data dictionary views:
SELECT INDEX_NAME, VISIBILITY FROM USER_INDEXES WHERE INDEX_NAME = 'IND1';

INDEX_NAME VISIBILITY
IND1       VISIBLE
Oracle Database 11g: Change Management Overview Seminar I - 108
109
Summary In this lesson, you should have learned how to: Describe and use the enhanced online table redefinition and materialized views Describe fine-grained dependency management Describe and use the enhanced PL/SQL recompilation mechanism Use online table redefinition with materialized views and view logs Set up wait times for locks on DDL commands Make indexes invisible to test potential plan changes Oracle Database 11g: Change Management Overview Seminar I - 109
110
Provisioning Automation
Using SQL Plan Management
111
Objectives After completing this lesson, you should be able to: Set up SQL Plan Management Set up various SQL Plan Management scenarios Oracle Database 11g: Change Management Overview Seminar I - 111
112
SQL Plan Management: Overview
SQL Plan Management is automatically controlled SQL plan evolution:
The optimizer automatically manages SQL plan baselines.
Only known and verified plans are used.
Plan changes are automatically verified. Only comparable or better plans are used going forward.
You can pre-seed critical SQL with SQL tuning sets from SQL Performance Analyzer.
SQL Plan Management: Overview
A potential performance risk occurs when the SQL execution plan changes for a SQL statement. A SQL plan change can occur for a variety of reasons: optimizer version, optimizer statistics, optimizer parameters, schema definitions, system settings, and SQL profile creation. Various plan control techniques, such as stored outlines and SQL profiles, have been introduced in past Oracle versions to address performance regressions due to plan changes. However, these techniques are reactive processes that require manual intervention. SQL Plan Management is a new feature introduced with Oracle Database 11g that enables the system to automatically control SQL plan evolution by maintaining what are called SQL plan baselines. With this feature enabled, a newly generated SQL plan can be integrated into a SQL plan baseline only if it has been proven that doing so will not cause a performance regression. Thus, during execution of a SQL statement, only a plan that is part of the corresponding SQL plan baseline can be used. As described later in this lesson, SQL plan baselines can be loaded automatically or seeded using SQL tuning sets. Various possible scenarios are studied later in the lesson. The main benefit of the SQL Plan Management feature is the performance stability of the system through the avoidance of plan regressions. Additionally, it saves the DBA time spent in identifying and analyzing SQL performance regressions and finding workable solutions.
Oracle Database 11g: Change Management Overview Seminar I - 112
113
SQL Plan Baseline Architecture
(Diagram: the SQL management base in the SYSAUX tablespace holds the statement log, plan histories, plan baselines, and SQL profiles; for each repeatable SQL statement, new plans enter the plan history, and the Automatic SQL Tuning task verifies them before they are integrated into the plan baseline.)
SQL Plan Baseline Architecture
The SQL Plan Management (SPM) feature introduces the necessary infrastructure and services in support of plan maintenance and performance verification of new plans. For this, the optimizer maintains a history of plans for individual SQL statements that are executed more than once. The optimizer recognizes a repeatable SQL statement by maintaining a statement log. A SQL statement is recognized as repeatable when it is parsed or executed again after it has been logged. After a SQL statement is recognized as repeatable, the various plans generated by the optimizer are maintained as a plan history, which contains the relevant information used by the optimizer to reproduce an execution plan, such as the SQL text, outline, bind variables, and compilation environment. As an alternative, or as a complement, to the automatic recognition of repeatable SQL statements and the creation of their plan history, manual seeding of plans for a set of SQL statements is also supported. A plan history contains the different plans generated by the optimizer for a SQL statement over time. However, only some of the plans in the plan history may be accepted for use. For example, a new plan generated by the optimizer is not normally used until it has been verified not to cause a performance regression. As delivered, plan verification is done as part of Automatic SQL Tuning, which runs as an automated task in a maintenance window.
Oracle Database 11g: Change Management Overview Seminar I - 113
114
Notes only page
SQL Plan Baseline Architecture (continued)
Automatic SQL Tuning targets only the high-load SQL statements. For them, it automatically implements actions such as making a successfully verified plan an accepted plan. A set of acceptable plans constitutes a SQL plan baseline. The very first plan generated for a SQL statement is obviously acceptable for use; therefore, it forms the original plan baseline. Any new plan subsequently found by the optimizer is part of the plan history but not part of the plan baseline initially. The statement log, plan history, and plan baselines are stored in the SQL Management Base (SMB), which also contains SQL profiles. The SMB is part of the database dictionary and is stored in the SYSAUX tablespace. The SMB has automatic space management, such as periodic purging of unused plans. You can configure the SMB to change the plan retention policy and set a space size limit.
Note: With Oracle Database 11g, if the database instance is up but the SYSAUX tablespace is OFFLINE, the optimizer is unable to access SQL management objects. This can affect the performance of some of the SQL workload.
Oracle Database 11g: Change Management Overview Seminar I - 114
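The SMB retention policy and space limit mentioned above are changed with DBMS_SPM.CONFIGURE; the values below are illustrative only:

```sql
BEGIN
  -- Keep unused plans for 105 weeks before automatic purging
  DBMS_SPM.CONFIGURE('plan_retention_weeks', 105);
  -- Allow the SMB to use at most 20% of the SYSAUX tablespace
  DBMS_SPM.CONFIGURE('space_budget_percent', 20);
END;
/
```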
115
Loading SQL Plan Baselines
(Diagram: with OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE, the first plan generated for a repeatable statement is captured on the fly into its SQL plan baseline (1); alternatively, the DBA bulk-loads plans with DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE or DBMS_SPM.LOAD_PLANS_FROM_SQLSET (2), changes plan status with DBMS_SPM.ALTER_SQL_PLAN_BASELINE (3), and moves baselines between databases through a staging table using the *_STGTAB_BASELINE procedures (4).)
Loading SQL Plan Baselines
There are two ways of loading SQL plan baselines:
“On-the-fly” capture: Use automatic plan capture by setting the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES initialization parameter to TRUE (default: FALSE). Setting it to TRUE turns on automatic recognition of repeatable SQL statements and automatic creation of plan history for such statements. This is illustrated in the left portion of the graphic in the slide, where you can see the first generated SQL plan automatically integrated into the original SQL plan baseline.
Bulk loading: The DBMS_SPM package enables you to manually manage SQL plan baselines. With this package, you can load SQL plans into a SQL plan baseline directly from the cursor cache or from an existing SQL tuning set (STS). For a SQL statement to be loaded into a SQL plan baseline from an STS, the SQL statement needs to have its SQL plan stored in the STS. DBMS_SPM allows you to change the status of a baseline plan from accepted to not accepted (and vice versa). It also allows you to export baseline plans to a staging table, which can then be used to load SQL plan baselines into other databases.
Oracle Database 11g: Change Management Overview Seminar I - 115
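A bulk load from the cursor cache might look like the following sketch; the SQL_ID value is hypothetical and would be taken from V$SQL on your system:

```sql
-- Load all cached plans for one statement into its SQL plan baseline.
-- Plans loaded manually this way are marked ACCEPTED by default.
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                    sql_id => 'b7a2cg1d9xphm');   -- hypothetical SQL_ID
  DBMS_OUTPUT.PUT_LINE(plans_loaded || ' plan(s) loaded');
END;
/
```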
116
Evolving SQL Plan Baselines
variable report clob
exec :report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SYS_SQL_593bc74fca8e6738');
print report

(Diagram: nonaccepted plans in the plan history are verified against the plan baseline, either by the DBA or by the Automatic SQL Tuning task through SQL Tuning Advisor.)
Evolving SQL Plan Baselines
During the SQL plan baseline evolution phase, the Oracle database routinely evaluates the performance of new plans and integrates plans with better performance into SQL plan baselines. When the optimizer finds a new plan for a SQL statement, the plan is added to the plan history as a nonaccepted plan. The plan is then verified for performance relative to the SQL plan baseline performance. When it is verified that a nonaccepted plan does not cause a performance regression (either manually or automatically), the plan is changed to an accepted plan and integrated into the SQL plan baseline. Successful verification of a nonaccepted plan consists of comparing its performance to that of one plan selected from the SQL plan baseline and ensuring that it delivers better performance. There are two ways to evolve SQL plan baselines:
By using the DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function. An invocation example is shown in the slide. The function returns a report that tells you whether some of the existing history plans were moved to the plan baseline. You can also specify specific plans in the history to be tested.
By running SQL Tuning Advisor: SQL plan baselines can be evolved by manually or automatically tuning SQL statements using SQL Tuning Advisor. When SQL Tuning Advisor finds a tuned plan and verifies its performance to be better than a plan chosen from the corresponding SQL plan baseline, it makes a recommendation to accept a SQL profile. When the SQL profile is accepted, the tuned plan is added to the corresponding SQL plan baseline.
Oracle Database 11g: Change Management Overview Seminar I - 116
117
Important Baseline SQL Plan Attributes
(Diagram: the plan history contains an enabled but not accepted plan; the plan baseline contains an enabled and accepted plan.)
select signature, sql_handle, sql_text, plan_name, origin, enabled, accepted, fixed, autopurge
from dba_sql_plan_baselines;

SIGNATURE  SQL_HANDLE   SQL_TEXT  PLAN_NAME         ORIGIN        ENA ACC FIX AUT
8.062E+18  SYS_SQL_6fe  select..  SYS_SQL_PLAN_1ea  AUTO-CAPTURE  YES NO  NO  YES
8.062E+18  SYS_SQL_6fe  select..  SYS_SQL_PLAN_4be  AUTO-CAPTURE  YES YES NO  YES
…

exec :cnt := dbms_spm.alter_sql_plan_baseline(sql_handle => 'SYS_SQL_37e0168b0…3efe', -
  plan_name => 'SYS_SQL_PLAN_8dfc352f359901ea', attribute_name => 'ACCEPTED_STATUS', -
  attribute_value => 'YES');

Important Baseline SQL Plan Attributes
When a plan enters the plan history, it is associated with a number of important attributes:
SIGNATURE, SQL_HANDLE, SQL_TEXT, and PLAN_NAME are important identifiers for search operations.
ORIGIN allows you to determine whether the plan was automatically captured (AUTO-CAPTURE), manually evolved (MANUAL-LOAD), automatically evolved by SQL Tuning Advisor (MANUAL-SQLTUNE), or automatically evolved by Automatic SQL Tuning (AUTO-SQLTUNE).
ENABLED and ACCEPTED: ENABLED means that the plan is enabled for use by the optimizer. If ENABLED is not set, the plan is not considered. ACCEPTED means that the plan was validated as a good plan, either automatically by the system or by the user manually changing it to ACCEPTED. Once a plan changes to ACCEPTED, it becomes not ACCEPTED only if someone uses DBMS_SPM.ALTER_SQL_PLAN_BASELINE() to change its status. An ACCEPTED plan can be temporarily disabled by removing the ENABLED setting. A plan has to be ENABLED and ACCEPTED for the optimizer to consider using it.
FIXED means that the optimizer considers only certain plans and not others. For example, if you have ten baseline plans and three of them are marked FIXED, the optimizer uses the best plan from only these three, ignoring all the others.
Oracle Database 11g: Change Management Overview Seminar I - 117
Notes only page Important Baseline SQL Plan Attributes (continued) You can look at each plan’s attributes using the DBA_SQL_PLAN_BASELINES view (as shown in the slide). You can then change some of them using the DBMS_SPM.ALTER_SQL_PLAN_BASELINE function. You can also remove plans or the complete plan history using the DBMS_SPM.DROP_SQL_PLAN_BASELINE function. The example shown in the slide changes the ACCEPTED attribute of the SYS_SQL_PLAN_8DFC352F359901EA to YES, making it ACCEPTED and thus part of the baseline. Note: The DBA_SQL_PLAN_BASELINES view contains additional attributes that enable you to determine when each plan was last used and whether a plan should be automatically cleared. Oracle Database 11g: Change Management Overview Seminar I - 118
SQL Plan Selection … > dbms_xplan.display_sql_plan_baseline
[Flowchart: when OPTIMIZER_USE_SQL_PLAN_BASELINES=TRUE, the best-cost plan is checked against the plan history and the plan baseline; if it is part of the baseline it is used, otherwise the baseline plan with the lowest best-cost is selected.]

SQL Plan Selection
If you are using automatic plan capture, the first time a SQL statement is recognized as repeatable, its best-cost plan is added to the corresponding SQL plan baseline, and that plan is used to execute the statement.
The optimizer uses a comparative plan selection policy when a plan baseline exists for a SQL statement and the initialization parameter OPTIMIZER_USE_SQL_PLAN_BASELINES is set to TRUE (the default). Each time a SQL statement is compiled, the optimizer first uses the traditional cost-based search method to build a best-cost plan. Then it tries to find a matching plan in the SQL plan baseline. If a match is found, the optimizer proceeds as usual. Otherwise, it first adds the new plan to the plan history, then costs each of the accepted plans in the SQL plan baseline and picks the one with the lowest cost. The accepted plans are reproduced using the outline that is stored with each of them. The effect of having a SQL plan baseline for a SQL statement is that the optimizer always selects one of the accepted plans in that SQL plan baseline.
With SQL Plan Management, the optimizer can produce a plan that is either a best-cost plan or a baseline plan. This information is dumped in the OTHER_XML column of the plan table upon explain plan.
You can display one or more execution plans for the specified SQL_HANDLE of a plan baseline by using the new DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE function. If PLAN_NAME is also specified, the corresponding execution plan is displayed.
Note: To preserve backward compatibility, if a stored outline for a SQL statement is active for the user session, the statement is compiled using the stored outline. 
In addition, a plan generated by the optimizer using a stored outline is not stored in the SMB even if automatic plan capture has been enabled for the session. Oracle Database 11g: Change Management Overview Seminar I - 119
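The baseline plans described above can be inspected from SQL*Plus with DBMS_XPLAN. A minimal sketch (the SQL handle is illustrative only):

```sql
-- Show every stored plan for a given plan baseline; pass plan_name as well
-- to restrict the output to a single plan.
SELECT t.*
FROM   TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
         sql_handle => 'SYS_SQL_37e0168b04e73efe',
         format     => 'BASIC')) t;
```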
Possible SQL Plan Manageability Scenarios
Database Upgrade / New Application Deployment
[Diagram: plans from an Oracle Database 10g production or development database are loaded into the Oracle Database 11g plan history and plan baseline, via a SQL tuning set or a baseline-plans staging table, so that only well-tuned plans are used and no plan regressions occur.]

Possible SQL Plan Manageability Scenarios
Database upgrade: Bulk SQL plan loading is especially useful when the system is being upgraded from a pre-Oracle Database 11g version to Oracle Database 11g. You can capture plans for a SQL workload into a SQL tuning set (STS) before the upgrade, and then load these plans from the STS into the SQL plan baseline immediately after the upgrade. This strategy can minimize plan regressions resulting from the use of the new optimizer version.
New application deployment: The deployment of a new application module means the introduction of new SQL statements into the system. The software vendor can ship the application software along with appropriate SQL plan baselines for the new SQL being introduced. Because of the plan baselines, the new SQL statements initially run with plans that are known to give good performance under a standard test configuration. If the customer's system configuration is very different from the test configuration, the plan baselines can evolve over time to produce better performance.
In both cases, you can use automatic SQL plan capture after manual loading to make sure that only better plans will be used for your applications in the future.
Note: All scenarios in this lesson assume that OPTIMIZER_USE_SQL_PLAN_BASELINES is set to TRUE.
Oracle Database 11g: Change Management Overview Seminar I - 120
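The upgrade scenario above can be sketched as follows, assuming the pre-upgrade workload was captured into an STS named UPGRADE_STS (the name is illustrative):

```sql
-- After the upgrade, seed the SQL plan baselines from the captured 10g
-- plans so the 11g optimizer keeps using them as ACCEPTED plans.
VARIABLE cnt NUMBER
EXEC :cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( -
               sqlset_name  => 'UPGRADE_STS', -
               sqlset_owner => 'SYS');
PRINT cnt   -- number of plans loaded
```

Plans loaded this way are marked ACCEPTED, so they are immediately eligible for selection by the optimizer.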
SQL Performance Analyzer and SQL Plan Baseline Scenario
[Diagram: plans captured in an STS on Oracle Database 10g are replayed on Oracle Database 11g with O_F_E set first to 10, then to 11; regressing statements are loaded into the plan baseline so that only well-tuned plans are used and no plan regressions occur.]

SQL Performance Analyzer and SQL Plan Baseline Scenario
A variation of the first method described in the previous slide uses SQL Performance Analyzer. You capture pre-Oracle Database 11g plans in a SQL tuning set (STS) and import them into Oracle Database 11g. You then set the optimizer_features_enable initialization parameter to the Oracle Database 10g value to make the optimizer behave like an Oracle Database 10g database, and run SQL Performance Analyzer for the STS. When that finishes, you set optimizer_features_enable back to the Oracle Database 11g value and rerun SQL Performance Analyzer for the STS. SQL Performance Analyzer produces a report that lists any SQL statement whose plan has regressed from Oracle Database 10g to Oracle Database 11g.
For those SQL statements that SQL Performance Analyzer shows to incur performance regression due to the new optimizer version, you can capture their plans using an STS and then load them into the SMB. This method represents the best form of the plan-seeding process because it helps prevent performance regressions while preserving performance improvements upon database upgrade.
O_F_E: optimizer_features_enable
Oracle Database 11g: Change Management Overview Seminar I - 121
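The two SQL Performance Analyzer trials can be sketched as follows; the STS name, execution names, and version strings are illustrative:

```sql
VARIABLE tname VARCHAR2(64)
EXEC :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'UPGRADE_STS');

-- Trial 1: optimizer behaves like Oracle Database 10g
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
       execution_type => 'TEST EXECUTE', execution_name => 'before_change');

-- Trial 2: optimizer behaves like Oracle Database 11g
ALTER SESSION SET optimizer_features_enable = '11.1.0.6';
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
       execution_type => 'TEST EXECUTE', execution_name => 'after_change');

-- Compare the two trials to find regressed statements
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
       execution_type => 'COMPARE PERFORMANCE');
```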
Automatic SQL Plan: Baseline Scenario
[Diagram: three Oracle Database 11g phases, all with optimizer_capture_sql_plan_baselines=true; while optimizer_features_enable is set to the 10g value, pre-upgrade plans are captured into the plan baseline with no plan regressions; once optimizer_features_enable is raised to the 11g value, new and better plans wait in the plan history for verification.]

Automatic SQL Plan: Baseline Scenario
Another possibility for the upgrade scenario is to use the automatic SQL plan capture mechanism. In this case, you set the OPTIMIZER_FEATURES_ENABLE (OFE) initialization parameter to the pre-Oracle Database 11g version value for an initial period of time (for example, a quarter), and you execute your workload after the upgrade with automatic SQL plan capture enabled. During this initial period, because of the OFE parameter setting, the optimizer reproduces pre-Oracle Database 11g plans for the majority of SQL statements. Because automatic SQL plan capture is also enabled during this period, the pre-Oracle Database 11g plans produced by the optimizer are captured as SQL plan baselines. After the initial period ends, you can remove the OFE setting to take advantage of the new optimizer version while incurring minimal or no plan regressions thanks to the plan baselines: regressed statements keep using plans from the previous optimizer version, while nonregressed statements benefit from the new optimizer version.
Oracle Database 11g: Change Management Overview Seminar I - 122
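The parameter settings for this scenario can be sketched as follows (the version strings are examples):

```sql
-- Initial period after the upgrade: old optimizer behavior, capture on
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4';
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

-- ... run the workload; repeatable statements seed the plan baselines ...

-- End of the initial period: switch to the new optimizer version
ALTER SYSTEM SET optimizer_features_enable = '11.1.0.6';
```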
SQL Management Base: Purging Policy
SQL> exec dbms_spm.configure('SPACE_BUDGET_PERCENT',20);
SQL> exec dbms_spm.configure('PLAN_RETENTION_WEEKS',105);
SQL> exec :cnt := dbms_spm.drop_sql_plan_baseline('SYS_SQL_37e0168b04e73efe');
[Diagram: the SQL Management Base inside SYSAUX; the space budget can range from 1% to 50% of SYSAUX (default 10%), a warning is written to the alert log when it is exceeded, and the plan retention period ranges from 5 to 523 weeks (default 53).]

SQL Management Base: Purging Policy
The space occupied by the SQL Management Base (SMB) is regularly checked against a defined limit based on the percentage size of the SYSAUX tablespace. The space budget limit for the SMB is 10 percent of the SYSAUX size by default. However, you can configure the SMB and change the space budget to a value between 1 percent and 50 percent.
A daily task measures the total space occupied by the SMB, and when it exceeds the defined percent limit, it generates a warning and writes it to the alert log. The alerts are generated daily until the SMB space limit is increased, the size of SYSAUX is increased, or the size of the SMB is decreased by clearing some of the SQL management objects such as SQL plan baselines or SQL profiles.
The space management of SQL plan baselines is done proactively using a regularly scheduled purging task that runs as an automated task in the maintenance window. Any plan that has not been used for more than 53 weeks is cleared. However, you can configure the SMB and change the unused-plan retention period to a value between 5 and 523 weeks (a little more than 10 years).
You can also manually clear the SMB by using the DBMS_SPM.DROP_SQL_PLAN_BASELINE function (as shown above).
Oracle Database 11g: Change Management Overview Seminar I - 123
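The current values set through DBMS_SPM.CONFIGURE can be verified with a simple query:

```sql
-- Shows SPACE_BUDGET_PERCENT and PLAN_RETENTION_WEEKS, along with
-- when and by whom each setting was last modified.
SELECT parameter_name, parameter_value
FROM   dba_sql_management_config;
```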
Summary In this lesson, you should have learned how to: Set up SQL Plan Management Set up various SQL Plan Management scenarios Oracle Database 11g: Change Management Overview Seminar I - 124
Provisioning Automation
Diagnosing Problems
Objectives After completing this lesson, you should be able to: Set up the Automatic Diagnostic Repository Create an incident package to capture diagnostic data Use a command-line tool to view diagnostic information Perform diagnostic tasks by using EM Support Workbench Run health checks on various database components Use SQL Repair Advisor to analyze critical SQL statement failures Oracle Database 11g: Change Management Overview Seminar I - 126
Oracle Database 11g Fault Management
Goal: Reduce time to resolution.
[Diagram: change management and automatic health checks, automatic diagnostic workflow, intelligent resolution, and proactive patching, spanning the prevention, diagnostic, resolution, and solution-delivery phases.]

Oracle Database 11g Fault Management
The goals of the fault diagnosability infrastructure are the following:
Detecting problems proactively
Limiting damage and interruptions after a problem is detected
Reducing problem diagnostic time
Reducing problem resolution time
Simplifying customer interaction with Oracle Support
Oracle Database 11g: Change Management Overview Seminar I - 127
Ease Diagnosis: Automatic Diagnostic Workflow
[Diagram: a critical error triggers first-failure capture and automatic incident creation in the Automatic Diagnostic Repository; the DBA is alerted, runs targeted health checks, and, if no known bug matches, uses EM Support Workbench to package the incident information for an assisted SR filing, then applies a patch or data repair via EM Support Workbench.]

Ease Diagnosis: Automatic Diagnostic Workflow
An always-on, in-memory tracing facility enables database components to capture diagnostic data upon first failure for critical errors. The Automatic Diagnostic Repository is a special repository that is automatically maintained to hold diagnostic information about critical error events. This information can be used to create incident packages to be sent to Oracle Support Services for investigation.
Here is a possible workflow for a diagnostic session:
1. An incident causes an alert to be raised in EM.
2. The DBA views the alert on the EM Alert page.
3. The DBA drills down to incident and problem details.
4. The DBA (or Oracle Support Services) decides or asks for that information to be packaged and sent to Oracle Support Services via MetaLink. The DBA can add files to the data to be packaged automatically.
Oracle Database 11g: Change Management Overview Seminar I - 128
Automatic Diagnostic Repository
[Diagram: the ADR base (set by DIAGNOSTIC_DEST, defaulting to $ORACLE_BASE or $ORACLE_HOME/log) contains diag/rdbms/<DB Name>/<SID> — the ADR home — with subdirectories metadata, alert (log.xml), cdump, incpkg, incident (incdir_1 … incdir_n), hm, trace (alert_SID.log), and others; it replaces BACKGROUND_DUMP_DEST, CORE_DUMP_DEST, and USER_DUMP_DEST and is accessed through Support Workbench, ADRCI, and V$DIAG_INFO.]

Automatic Diagnostic Repository (ADR)
The ADR is a file-based repository for database diagnostic data such as traces, incident dumps and packages, the alert log, health monitor reports, core dumps, and more. It has a unified directory structure across multiple instances and multiple products, stored outside of any database. It is therefore available for problem diagnosis when the database is down.
Beginning with Oracle Database 11g, the database, Automatic Storage Management (ASM), Oracle Clusterware, and other Oracle products or components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data in its own ADR home directory. For example, in a Real Application Clusters environment with shared storage and ASM, each database instance and each ASM instance has a home directory in the ADR. The ADR's unified directory structure, consistent diagnostic data formats across products and instances, and unified set of tools enable customers and Oracle Support to correlate and analyze diagnostic data across multiple instances. In Oracle Database 11g, the traditional …_DUMP_DEST initialization parameters are ignored.
The ADR root directory is known as the ADR base. Its location is set by the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted or left null on startup, the database sets DIAGNOSTIC_DEST as follows: if the ORACLE_BASE environment variable is set, DIAGNOSTIC_DEST is set to $ORACLE_BASE; if not, DIAGNOSTIC_DEST is set to $ORACLE_HOME/log.
Oracle Database 11g: Change Management Overview Seminar I - 129
Notes only slide Automatic Diagnostic Repository (ADR) (continued) In the ADR base, there can be multiple ADR homes, where each ADR home is the root directory for all diagnostic data for a particular instance of a particular Oracle product or component. The location of an ADR home for a database is shown in the previous graphic. Also, two alert files are now generated. One is textual, exactly like the alert file used with previous releases of the Oracle Database and is located under the TRACE directory of each ADR home. In addition, an alert message file conforming to the XML standard is stored in the ALERT subdirectory inside the ADR home. You can view the alert log in text format (with the XML tags stripped) with Enterprise Manager and with the ADRCI utility. The graphic in the slide shows you the directory structure of an ADR home. The INCIDENT directory contains multiple subdirectories, where each subdirectory is named for a particular incident, and where each contains dumps pertaining only to that incident. The HM directory contains the checker run reports generated by the Health Monitor. There is also a METADATA directory that contains important files for the repository itself. You can compare this to a database dictionary. This dictionary can be queried using ADRCI. The ADR Command Interpreter (ADRCI) is a utility that enables you to perform all of the tasks permitted by the Support Workbench, but in a command-line environment. ADRCI also enables you to view the names of the trace files in the ADR, and to view the alert log with XML tags stripped, with and without content filtering. In addition, you can use V$DIAG_INFO to list some important ADR locations. Oracle Database 11g: Change Management Overview Seminar I - 130
ADRCI: ADR Command-Line Tool
Allows interaction with ADR from OS prompt Can invoke the incident packaging service (IPS) with command line DBAs should use EM Support Workbench: Easy-to-follow GUI Leverages same toolkit and libraries that ADRCI is built on ADRCI> show incident ADR Home = /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/orcl/orcl: ***************************************************************************** INCIDENT_ID PROBLEM_KEY CREATE_TIME ORA-600_dbgris01:1,_addr=0xa JAN … ORA-600_dbgris01:12,_addr=0xa JAN … 2 incident info records fetched ADRCI> ADRCI: ADR Command-Line Tool ADRCI is a command-line tool that is part of the fault diagnosability infrastructure introduced in Oracle Database Release 11g. ADRCI enables you to: View diagnostic data in the Automatic Diagnostic Repository (ADR) Package incident and problem information into a zip file for transmission to Oracle Support ADRCI has a rich command set and can be used in interactive mode or in scripts. In addition, ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of SQL and PL/SQL commands. There is no need to log in to ADRCI because the data in the ADR is not intended to be secure. ADR data is secured only by operating system permissions on the ADR directories. The easiest way to package and otherwise manage diagnostic data is with the Support Workbench of Oracle Enterprise Manager. ADRCI provides a command-line alternative to most of the functionality of Support Workbench, and adds capabilities such as listing and querying trace files. The example in the slide shows you an ADRCI session where you are listing all open incidents stored in ADR. Note: For more information about ADRCI, refer to the Oracle Database Utilities guide. Oracle Database 11g: Change Management Overview Seminar I - 131
V$DIAG_INFO SQL> SELECT * FROM V$DIAG_INFO; NAME VALUE Diag Enabled TRUE ADR Base /u01/app/oracle ADR Home /u01/app/oracle/diag/rdbms/orcl/orcl Diag Trace /u01/app/oracle/diag/rdbms/orcl/orcl/trace Diag Alert /u01/app/oracle/diag/rdbms/orcl/orcl/alert Diag Incident /u01/app/oracle/diag/rdbms/orcl/orcl/incident Diag Cdump /u01/app/oracle/diag/rdbms/orcl/orcl/cdump Health Monitor /u01/app/oracle/diag/rdbms/orcl/orcl/hm Default Trace File /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_11424.trc Active Problem Count 3 Active Incident Count 8 V$DIAG_INFO The V$DIAG_INFO view lists all important ADR locations: ADR Base: Path of ADR base ADR Home: Path of ADR home for the current database instance Diag Trace: Location of the text alert log and background/foreground process trace files Diag Alert: Location of an XML version of the alert log … Default Trace File: Path to the trace file for your session. SQL Trace files are written here. Oracle Database 11g: Change Management Overview Seminar I - 132
Location for Diagnostic Traces
Diagnostic Data            Previous Location            ADR Location
Foreground process traces  USER_DUMP_DEST               $ADR_HOME/trace
Background process traces  BACKGROUND_DUMP_DEST         $ADR_HOME/trace
Alert log data             BACKGROUND_DUMP_DEST         $ADR_HOME/alert & trace
Core dumps                 CORE_DUMP_DEST               $ADR_HOME/cdump
Incident dumps             USER|BACKGROUND_DUMP_DEST    $ADR_HOME/incident/incdir_n

ADR trace = Oracle Database 10g trace minus critical error trace

Location for Diagnostic Traces
The table in the slide describes the different classes of trace data and dumps that reside in both Oracle Database 10g and Oracle Database 11g. With Oracle Database 11g, there is no distinction between foreground and background trace files: both types of files go into the $ADR_HOME/trace directory, and all nonincident traces are stored inside the TRACE subdirectory. This is the main difference from previous releases, where critical error information was dumped into the corresponding process trace files; starting with Oracle Database 11g, incident dumps are placed in files separate from the normal process trace files.
Note: The main difference between a trace and a dump is that a trace is more of a continuous output, such as when SQL tracing is turned on, whereas a dump is a one-time output in response to an event such as an incident. Also, a core is a binary memory dump that is port specific.
Oracle Database 11g: Change Management Overview Seminar I - 133
Using ADRCI to View the Alert Log
adrci>> show alert -tail
ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
:10: :00 ORA-1654: unable to extend index SYS.I_H_OBJ#_COL# by 128 in tablespace SYSTEM
:21: :00 Thread 1 advanced to log sequence 400
Current log# 3 seq# 400 mem# 0: +DATA/orcl/onlinelog/group_
Current log# 3 seq# 400 mem# 1: +DATA/orcl/onlinelog/group_
…
Thread 1 advanced to log sequence 401
Current log# 1 seq# 401 mem# 0: +DATA/orcl/onlinelog/group_
Current log# 1 seq# 401 mem# 1: +DATA/orcl/onlinelog/group_
DIA-48223: Interrupt Requested - Fetch Aborted - Return Code [1]
adrci>>

adrci>> SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-600%'"
ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
adrci>>

Using ADRCI to View the Alert Log
You can use the ADRCI command-line utility to view the content of your alert log file. Optionally, you can change the current ADR home: use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. You should ensure that OS environment variables such as ORACLE_HOME are set properly, and then enter the adrci command at the OS command prompt. The utility starts and displays its prompt as shown in the slide.
Then use the SHOW ALERT command. To limit the output, you can look at the last records using the -TAIL option. This displays the last portion of the alert log (about 20 to 30 messages) and then waits for more messages to arrive in the alert log. As each message arrives, it is appended to the display. This command enables you to perform live monitoring of the alert log. Press CTRL+C to stop waiting and return to the ADRCI prompt. You can also specify the number of lines to print.
You can also filter the output of SHOW ALERT, as shown in the bottom example in the slide, to display only those alert log messages that contain the string 'ORA-600'. 
ADRCI allows you to spool the output to a file exactly like in SQL*Plus. Note: You can also use Enterprise Manager, or a text editor, to view the alert log. Using EM, navigate to Related Links from the Database Home page and click Alert Log Contents. Oracle Database 11g: Change Management Overview Seminar I - 134
Problems Versus Incidents
[Diagram: a critical error automatically creates an incident (subject to flood control), while a noncritical error can be turned into an incident manually by the DBA; incidents move through the states Collecting, Ready, Tracking, Data-purged, and Closed, with automatic transitions and auto-purging by MMON; each incident maps to a problem identified by a problem ID and problem key, and its traces in the ADR can be packaged to be sent to Oracle Support.]

Problems Versus Incidents
To facilitate diagnosis and resolution of critical errors, the fault diagnosability infrastructure introduces two concepts for the Oracle database:
A problem is a critical error in the database and is tracked in the ADR. Each problem is identified by a unique problem ID and has a problem key, which is a set of attributes that describe the problem. The problem key includes the ORA error number, the error parameter values, and other information. Here is a list of possible critical errors:
All internal errors: ORA-60x errors
All system access violations: SEGV, SIGBUS
ORA-4020 (Deadlock on library object), ORA-8103 (Object no longer exists), ORA-1410 (Invalid ROWID), ORA-1578 (Data block corrupted), ORA (Node eviction), ORA-255 (Database is not mounted), ORA-376 (File cannot be read at this time), ORA-4030 (Out-of-process memory), ORA-4031 (Unable to allocate more bytes of shared memory), ORA-355 (The change numbers are out of order), ORA-356 (Inconsistent lengths in change description), ORA-353 (Log corruption), ORA-7445 (Operating system exception)
An incident is a single occurrence of a problem. When a problem occurs multiple times, as is often the case, an incident is created for each occurrence. Incidents are tracked in the ADR. Each incident is identified by a numeric incident ID, which is unique in an ADR home.
Oracle Database 11g: Change Management Overview Seminar I - 135
Notes only page Problems Versus Incidents (continued) When an incident occurs, the database makes an entry in the alert log, gathers diagnostic data about the incident (a stack trace, the process state dump, and other dumps of important data structures), tags the diagnostic data with the incident ID, and stores the data in an ADR subdirectory created for that incident. Each incident has a problem key and is mapped to a single problem. Two incidents are considered to have the same root cause if their problem keys match. Large amounts of diagnostic information can be created very quickly if a large number of sessions stumble across the same critical error. Therefore, ADR provides flood control so that only a certain number of incidents under the same problem key can be dumped in a given time interval. Flood-controlled incidents are still recorded; only the dump actions are skipped. By default, only five dumps per hour for a given problem are allowed. You can view a problem as a set of incidents that are perceived to have the same symptoms. This makes it easier to manage system errors. For example, a symptom that occurs 20 times should be reported to Oracle only once. Mostly, you will manage problems instead of incidents, using the incident packaging service (IPS) to package a problem to be sent to Oracle Support. Most commonly, incidents are automatically created when a critical error occurs. However, you can also create an incident manually, via the GUI provided by the EM Support Workbench. Manual incident creation is mostly done when you want to report problems that are not accompanied by critical errors raised inside the Oracle code. As time goes by, more and more incidents accumulate in the ADR. You can specify a retention policy to determine how long to keep the diagnostic data. ADR incidents are controlled by two different policies: The incident metadata retention policy controls how long the metadata is kept around. 
This policy has a default setting of one year. The incident files and dumps retention policy controls how long generated dump files are kept around. This policy has a default setting of one month. You can change these settings using the Incident Package Configuration link on the EM Support Workbench page. Inside the RDBMS component, MMON is responsible for purging automatically expired ADR data. If an incident has been in either the Collecting or the Ready state for over twice its retention length, the incident automatically moves to the Closed state. You can manually purge incident files. For simplicity, problem metadata is internally maintained by ADR. Problems are automatically created when the first incident (of the problem key) occurs. The problem metadata is removed after the last incident is removed from the repository. Note: It is not possible to disable automatic incident creation for critical errors. Oracle Database 11g: Change Management Overview Seminar I - 136
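Expired diagnostic data can also be purged manually from ADRCI; for example (the age is given in minutes, so 43200 corresponds to 30 days):

```
adrci> purge -age 43200 -type incident
```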
Incident Packaging Service (IPS)
IPS uses rules to correlate all relevant dumps and traces from ADR for a given problem and enables you to package them to ship to Oracle Support. Rules can involve files that were generated around the same time, associated with the same client, same error codes, and so on. DBAs can explicitly add, edit, or remove files before packaging. You access IPS through either EM or ADRCI. Incident Packaging Service With the incident packaging service (IPS), you can automatically and easily gather all diagnostic data (traces, dumps, health check reports, SQL test cases, and more) pertaining to a critical error and package the data into a zip file suitable for transmission to Oracle Support. Because all diagnostic data relating to a critical error is tagged with that error’s incident number, you do not have to search through trace files, dump files, and so on to determine the files that are required for analysis; the IPS identifies all required files automatically and adds them to the package. Oracle Database 11g: Change Management Overview Seminar I - 137
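From ADRCI, a packaging session for the problem with (for example) problem ID 4 might look like this; the file path and IDs are illustrative:

```
adrci> ips create package problem 4 correlate basic
adrci> ips add file /home/oracle/app.log package 1
adrci> ips generate package 1 in /tmp
```

IPS CREATE PACKAGE reports the new package number (1 here), which the subsequent commands reference; IPS GENERATE PACKAGE writes the zip file into the target directory.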
Incident Packages Zip file An incident package is a logical structure inside the ADR representing one or more problems. A package is a zip file containing dump information related to an incident package. By default, only the first and last three incidents of each problem are included in an incident package. You can generate complete or incremental zip files. ADR Base diag rdbms DB Name ADR Home SID metadata alert cdump incpkg incident hm trace (others) pkg_1 … pkg_n Incident Packages You must first collect the data into an incident package before uploading diagnostic data to Oracle Support Services. In creating an incident package, you select one or more problems to add to the incident package. Support Workbench then automatically adds the incident information, trace files, and dump files associated with the selected problems to the incident package. Because a problem can have many incidents (many occurrences of the same problem), by default only the first three and last three incidents for each problem are added to the incident package. You can change this default number on the Incident Packaging Configuration page accessible from the Support Workbench page. After the incident package is created, you can add any type of external file to the incident package, or edit selected files in the incident package to remove sensitive data. An incident package is a logical construct only, until you create a physical file from the incident package contents. That is, an incident package starts out as a collection of metadata in the ADR. As you add and remove incident package contents, only the metadata is modified. When you are ready to upload the data to Oracle Support Services, you either invoke Support Workbench or an ADRCI function that gathers all the files referenced by the metadata, places them into a zip file, and then uploads the zip to MetaLink. Note: ADRCI generates packages in your current directory by default. 
However, Support Workbench generates them in the following directory: /u01/app/oracle/product/11.1.0/db_1/<hostname_dbname>/sysman/emd/state Oracle Database 11g: Change Management Overview Seminar I - 138
EM Support Workbench: Overview
EM Support Workbench is a wizard that guides you through the process of handling problems. You can perform the following tasks with the Support Workbench: View details on problems and incidents. Run health checks. Generate additional diagnostic data. Run advisors to help resolve problems. Create and track service requests through MetaLink. Generate incident packages. Close problems after they are resolved. EM Support Workbench: Overview Enterprise Manager (EM) Support Workbench helps you through the process of handling critical errors. It displays incident notifications, presents incident details, and enables you to select incidents for further processing. Further processing includes running additional health checks, invoking the IPS to package all diagnostic data about the incidents, adding SQL test cases and selected user files to the package, filing a technical assistance request (TAR) with Oracle Support, shipping the packaged incident information to Oracle Support, and tracking the TAR through its life cycle. You can perform the following tasks with the Support Workbench: View details on problems and incidents. Manually run health checks to gather additional diagnostic data for a problem. Generate additional dumps and SQL test cases to add to the diagnostic data for a problem. Run advisors to help resolve problems. Create and track a service request through MetaLink, and add the service request number to the problem data. Collect all diagnostic data relating to one or more problems into an incident package and then upload the incident package to Oracle Support Services. Close the problem when the problem is resolved. Oracle Database 11g: Change Management Overview Seminar I - 139
140
Oracle Configuration Manager
Enterprise Manager Support Workbench uses Oracle Configuration Manager to upload the physical files generated by IPS to MetaLink. If Oracle Configuration Manager is not installed or properly configured, the upload may fail. In this case, a message is displayed with a path to the incident package zip file and a request that you upload the file to Oracle Support manually through MetaLink. During Oracle Database 11g installation, the Oracle Universal Installer displays a special Oracle Configuration Manager Registration screen, shown in the slide. On that screen, you need to select the Enable check box and accept the license agreement before you can enter your Customer Support Identifier (CSI), your MetaLink account username, and your country code. If you do not configure Oracle Configuration Manager, you can still manually upload incident packages to MetaLink. Note: For more information about Oracle Configuration Manager, see the Oracle Configuration Manager Installation and Administration Guide. Oracle Database 11g: Change Management Overview Seminar I - 140
141
EM Support Workbench: Roadmap
1 View critical error alerts in Enterprise Manager. 2 View problem details. 3 Gather additional diagnostic information. 4 Create a service request. 5 Package and upload diagnostic data to Oracle Support. 6 Track the SR and implement repairs. 7 Close incidents. EM Support Workbench: Roadmap The graphic in the slide is a summary of the tasks that you complete to investigate, report, and in some cases, resolve a problem using EM Support Workbench: 1. Start by accessing the Database Home page in Enterprise Manager, and reviewing critical error alerts. Select an alert for which to view details. 2. Examine the problem details and view a list of all incidents that were recorded for the problem. Display findings from any health checks that were automatically run. 3. Optionally, run additional health checks and invoke the SQL Test Case Builder, which gathers all required data related to a SQL problem and packages the information in a way that enables the problem to be reproduced at Oracle Support. 4. Create a service request with MetaLink and optionally record the service request number with the problem information. 5. Invoke a wizard that automatically packages all gathered diagnostic data for a problem and uploads the data to Oracle Support. Optionally, edit the data to remove sensitive information before uploading. 6. Optionally, maintain an activity log for the service request in the Support Workbench. Run Oracle Advisors to help repair SQL failures or corrupted data. 7. Set status for one, some, or all incidents for the problem to Closed. Note: The following slides show the EM screens for a subset of the tasks shown above. Oracle Database 11g: Change Management Overview Seminar I - 141
142
Viewing Critical Error Alerts in Enterprise Manager
1 View critical error alerts in Enterprise Manager. Viewing Critical Error Alerts in Enterprise Manager You begin the process of investigating problems (critical errors) by reviewing critical error alerts on the Database Home page. From the Home page, you can look at the Diagnostic Summary section, from where you can click the Active Incidents link if there are incidents. You can also use the Alerts section and look for critical alerts flagged as incidents. When you click the Active Incidents link, the Support Workbench page is displayed. From here, you can retrieve details about all problems and corresponding incidents. From there, you can also retrieve all Health Monitor checker runs and created packages. Note: The tasks described in this section are all Enterprise Manager based. You can also accomplish all of these tasks with the ADRCI command-line utility and PL/SQL package procedures. See Oracle Database Utilities for more information about the ADRCI utility. Oracle Database 11g: Change Management Overview Seminar I - 142
143
Creating a Service Request
4 Create a service request. Creating a Service Request Before you can package and upload diagnostic information for the problem to Oracle Support, you must create a service request. To create a service request, go to MetaLink first. MetaLink can be accessed directly from the Problem Details page by clicking the "Go to MetaLink" button in the Investigate and Resolve section of the page. Once on MetaLink, log in and create a service request in the usual manner. Once done, you can record that service request number for your problem. This is entirely optional and is for your reference only. In the Summary section, click the Edit button that is adjacent to the SR# label; in the window that opens, enter the SR#, and then click OK. Oracle Database 11g: Change Management Overview Seminar I - 143
144
Support Workbench: Completion Steps
5. Package and upload diagnostic data to Oracle Support. Two methods: Quick Packaging method Custom Packaging method 6. Track the SR and implement repairs. Perform additional tasks to aid tracking. Use advisors: Data Recovery Advisor: For corrupted blocks, corrupted or missing files, and other data failures SQL Repair Advisor: For SQL statement failures 7. Close incidents and problems. Purges all incidents after 30 days Can be manually disabled Support Workbench: Completion Steps Support Workbench provides two methods for creating and uploading an incident package: the Quick Packaging method and the Custom Packaging method. Quick packaging is a more automated method with a minimum of steps. You select a single problem, provide an incident package name and description, and then schedule the incident package upload, either immediately or at a specified date and time. Support Workbench automatically places diagnostic data related to the problem into the incident package, finalizes the incident package, creates the zip file, and then uploads the file. With this method, you cannot add, edit, or remove incident package files or add other diagnostic data such as SQL test cases. After uploading diagnostic information to Oracle Support, you might perform various activities to track the service request and implement repairs using EM Support Workbench. Among these activities are the following: adding an Oracle bug number to the problem information, adding comments to the problem activity log, responding to requests by Oracle Support to provide additional diagnostics, and running an Oracle advisor to implement repairs. You can close an incident once you are no longer tracking it. All incidents, whether closed or not, are purged after 30 days. You can manually disable purging for an incident. Oracle Database 11g: Change Management Overview Seminar I - 144
145
Incident Packaging Configuration
You configure various aspects of retention rules and package generation using the Support Workbench. Incident Metadata Retention Period: Metadata is information about the data. For incidents, it is the incident time, ID, size, problem, and so forth. The data is the actual contents of an incident, such as traces. Cutoff Age for Incident Inclusion: Includes only incidents that fall within this range, counting back from now. If the cutoff age is 90, for instance, the system includes only the incidents from the last 90 days. Leading Incidents Count: For every problem included in a package, the system selects a certain number of incidents of the problem from the beginning (leading) and the end (trailing). For example, if the problem has 30 incidents, and the leading incident count is 5 and the trailing incident count is 4, the system includes the first 5 leading incidents and the last 4 trailing incidents. Trailing Incidents Count: See the example in the previous bullet. Correlation Time Proximity: The exact time interval that defines "happened at the same time." There is a concept of incidents/problems correlated to a certain incident/problem. One criterion for correlation is time correlation: find the incidents that happened at the same time as the incidents in a problem. Time Window for Package Content: The time window for content inclusion runs from x hours before the first included incident to x hours after the last incident (where x is the number specified in that field). Oracle Database 11g: Change Management Overview Seminar I - 145
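The same retention and packaging parameters can also be inspected and changed from the ADRCI command line. A minimal sketch of such a session — the parameter ID and value shown are illustrative placeholders, so list the real IDs with IPS SHOW CONFIGURATION first:

```text
adrci> ips show configuration
adrci> ips set configuration 3 8
```

Here the first command lists each packaging parameter with its ID, current value, and description; the second assigns a new value to the parameter with the (hypothetical) ID 3.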
146
Invoking IPS by Using ADRCI
IPS SET CONFIGURATION
IPS CREATE PACKAGE [INCIDENT | PROBLEM | PROBLEMKEY | SECONDS | TIME]
IPS ADD [INCIDENT | NEW INCIDENTS | FILE]
IPS COPY [IN FILE | OUT FILE]
IPS REMOVE FILE
IPS FINALIZE PACKAGE
IPS GENERATE PACKAGE
Invoking IPS by Using ADRCI Creating a package is a two-step process: 1. Create the logical package. 2. Generate the physical package as a zip file. Both steps can be done using ADRCI commands. IPS provides several variants allowing you to create and manipulate the zip file contents, some of which are described here: IPS CREATE PACKAGE: Creates an empty package. IPS CREATE PACKAGE PROBLEMKEY: Creates a package based on the problem key. IPS CREATE PACKAGE TIME: Creates a package based on the specified time range. IPS COPY: Copies files between the ADR repository and the external file system. It has two forms: IN FILE, to copy an external file into ADR, associating it with an existing package and, optionally, an incident; and OUT FILE, to copy a file from ADR to a location outside ADR. IPS FINALIZE: Finalizes the package for delivery, calling components such as Health Monitor to add their correlated files to the package. Recent trace files and log files are also included in the package. If required, this step is run automatically when a package is generated. IPS GENERATE PACKAGE: Creates the physical zip file from the package metadata. Note: Refer to the Oracle Database Utilities guide for more information about ADRCI. Oracle Database 11g: Change Management Overview Seminar I - 146
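The two-step flow above can be sketched as a short ADRCI session. A hypothetical example — the problem ID, file path, and output directory are illustrative:

```text
adrci> ips create package problem 1 correlate basic
adrci> ips add file /tmp/app_trace.log package 2
adrci> ips generate package 2 in /tmp complete
```

The package number used in the ADD and GENERATE commands assumes the CREATE step reported package 2; in practice, use whatever package number ADRCI prints when the logical package is created.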
147
Health Monitor: Overview
[Slide diagram: the Health Monitor framework. Available checks are listed in V$HM_CHECK. DB-offline-capable checks: DB Structure Integrity Check, Data Block Integrity Check, Redo Integrity Check. DB-online checks: Transaction Integrity Check, Undo Segment Integrity Check, Dictionary Integrity Check. Runs are triggered reactively by critical errors or manually by the DBA through EM or DBMS_HM; reports are stored in the hm directory of the ADR and viewed through V$HM_RUN, DBMS_HM, ADRCI, or EM.]
Health Monitor: Overview Included in Oracle Database 11g is a framework called the Health Monitor for running diagnostic checks on various components of the database. The Health Monitor examines various components of the database, including files, memory, transaction integrity, metadata, and process usage. These checkers generate reports of their findings as well as recommendations for resolving problems. The Health Monitor can be run in two ways: Reactive: The fault diagnosability infrastructure can run Health Monitor checks automatically in response to critical errors. Manual: As a DBA, you can manually run Health Monitor health checks using the DBMS_HM PL/SQL package or the Enterprise Manager interface. You query the V$HM_CHECK view to see the numerous Health Monitor checks, which fall into one of two categories: DB-online: These checks can be run while the database is open (that is, in OPEN mode or MOUNT mode). DB-offline: In addition to being runnable while the database is open, these checks can also be run when the instance is available and the database itself is closed (that is, in NOMOUNT mode). Each checker generates a report of its execution in XML and stores the reports in ADR. You can view these reports using V$HM_RUN, DBMS_HM, ADRCI, or Enterprise Manager. Oracle Database 11g: Change Management Overview Seminar I - 147
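The set of available checks can be listed directly from SQL*Plus. A small sketch against the 11g V$HM_CHECK view, assuming the documented column names for that release:

```sql
-- List user-runnable Health Monitor checks and whether each one
-- can also run while the database is closed (OFFLINE_CAPABLE = 'Y').
SELECT name, offline_capable
FROM   v$hm_check
WHERE  internal_check = 'N'
ORDER  BY name;
```

The NAME values returned here (for example, "Dictionary Integrity Check") are the check names you pass to DBMS_HM.RUN_CHECK.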
148
Running Health Checks Manually: EM Example
You can access the Health Monitor checkers via the Checkers tab on the Advisor Central page. The page lists each checker type, and you can run a checker by clicking it and then clicking OK on the corresponding checker page after you enter the parameters for the run. This is illustrated in the slide, where you run the Data Block Checker manually. Once a check is completed, you can view the corresponding checker run details by selecting the checker run from the Results table and clicking Details. Checker runs can be reactive or manual. On the Findings subpage, you can see the various findings and corresponding recommendations extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. If you click View XML Report on the Runs subpage, you can view the run report in XML format. Viewing the XML report in Enterprise Manager generates the report if it has not yet been generated in your ADR; you can then view the report using ADRCI without needing to generate it. Oracle Database 11g: Change Management Overview Seminar I - 148
149
Running Health Checks Manually: PL/SQL Example
SQL> exec dbms_hm.run_check('Dictionary Integrity Check', 'DicoCheck',0,'TABLE_NAME=tab$'); SQL> set long SQL> select dbms_hm.get_run_report('DicoCheck') from dual; DBMS_HM.GET_RUN_REPORT('DICOCHECK') Basic Run Information (Run Name, Run Id, Check Name, Mode, Status) Input Parameters for the Run TABLE_NAME=tab$ CHECK_MASK=ALL Run Findings And Recommendations Finding Finding Name : Dictionary Inconsistency Finding ID : 22 Type : FAILURE Status : OPEN Priority : CRITICAL Message : SQL dictionary health check: invalid column number 8 on object TAB$ failed Message : Damaged rowid is AAAAACAABAAAS7PAAB - description: Object SCOTT.TABJFV is referenced Running Health Checks Manually: PL/SQL Example You can also use the DBMS_HM.RUN_CHECK procedure for running a health check. You call RUN_CHECK by supplying the name of the check found in V$HM_CHECK, a name for the run (a label used to retrieve reports later), and the corresponding set of input parameters for controlling its execution. You can view these parameters using V$HM_CHECK_PARAM. In the example above, you run a Dictionary Integrity Check on the TAB$ table, name the run DicoCheck, and do not set any timeout for this check (value 0). After the check is executed, you call the DBMS_HM.GET_RUN_REPORT function to get the report extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. The output above shows that a critical error was found in TAB$: the table contains an entry for a table with an invalid number of columns. The report also gives you the name of the damaged table referenced in TAB$. When you call the GET_RUN_REPORT function, an XML report file is generated in the HM directory of your ADR. For this example, the file is called HMREPORT_DicoCheck.hm. Note: Refer to the Oracle Database PL/SQL Packages and Types Reference for more information about DBMS_HM. Oracle Database 11g: Change Management Overview Seminar I - 149
150
Viewing HM Reports Using the ADRCI Utility
adrci> show hm_run
…
ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
HM RUN RECORD 1
**********************************************************
RUN_ID          …
RUN_NAME        HM_RUN_1
CHECK_NAME      DB Structure Integrity Check
NAME_ID         …
MODE            …
START_TIME      …
RESUME_TIME     <NULL>
END_TIME        …
MODIFIED_TIME   …
TIMEOUT         …
FLAGS           …
STATUS          …
SRC_INCIDENT_ID …
NUM_INCIDENTS   …
ERR_NUMBER      …
REPORT_FILE     <NULL>
adrci> create report hm_run HM_RUN_1
adrci> show report hm_run HM_RUN_1
Viewing HM Reports Using the ADRCI Utility You can create and view Health Monitor checker reports using the ADRCI utility. You must ensure that operating system environment variables such as ORACLE_HOME are set properly, and then enter the following command at the operating system command prompt: adrci The ADRCI utility starts and displays its prompt as shown above. You can optionally change the current ADR home. You use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. You can then enter the SHOW HM_RUN command to list all the checker runs registered in ADR and visible from V$HM_RUN. Locate the checker run for which you want to create a report and note the checker run name using the corresponding RUN_NAME field. The REPORT_FILE field contains a file name if a report already exists for this checker run. Otherwise, you can generate the report using the CREATE REPORT HM_RUN command as shown in the slide. You view the report using the SHOW REPORT HM_RUN command. Oracle Database 11g: Change Management Overview Seminar I - 150
151
SQL Repair Advisor: Overview
[Slide diagram: a SQL statement fails with a critical error during execution, automatically generating an incident and trace files in the ADR; the DBA is alerted and runs the SQL Repair Advisor, which investigates the failure and generates a SQL patch; the DBA accepts the patch, the statement is patched, and it executes successfully again.]
SQL Repair Advisor: Overview You run the SQL Repair Advisor after a SQL statement fails with a critical error that generates a problem in ADR. The advisor analyzes the statement and in many cases recommends a patch to repair it. If you implement the recommendation, the applied SQL patch circumvents the failure by causing the query optimizer to choose an alternate execution plan for future executions. This is done without changing the SQL statement itself. Note: If no workaround is found by the SQL Repair Advisor, you are still able to package the incident files and send the corresponding diagnostic data to Oracle Support. Oracle Database 11g: Change Management Overview Seminar I - 151
152
Using EM to Access SQL Repair Advisor
There are two ways to access the SQL Repair Advisor from Enterprise Manager. The first and easiest way is when you are alerted in the Diagnostic Summary section of the Database Home page. Following a SQL statement crash that generates an incident in ADR, you are automatically alerted through the Active Incidents field. You simply click the corresponding link to get to the Support Workbench Problems page, from where you click the corresponding problem ID link. This takes you to the Problem Details page, from where you can click the SQL Repair Advisor link in the Investigate and Resolve section of the page. Once the SQL statement is no longer active, the SQL Advisors link under Advisor Central offers the Support Workbench link. This takes you directly to the Problem Details page, where you can click the SQL Repair Advisor link. Once on the SQL Incident Analysis page, you can submit a SQL diagnostic analysis task. If you specify Immediately, the Processing: SQL Repair Advisor Task page is displayed, showing you the various steps of the task execution. Note: To access the SQL Repair Advisor in the case of nonincident SQL failures, you go to either the SQL Details page or the SQL Worksheet. Oracle Database 11g: Change Management Overview Seminar I - 152
153
Using SQL Repair Advisor from EM
Once the SQL Repair Advisor task is executed, you are taken to the SQL Recovery Results page for that task. On this page, you can see the corresponding recommendations and, in particular, whether a SQL patch was generated to fix your problem. If that is the case, you can select the statement for which you want to apply the generated SQL patch and click View. This takes you to the Repair Recommendations for SQL ID page, from where you can have the system implement the SQL patch by selecting the corresponding finding and clicking Implement. You then receive a confirmation of the implementation, and you can execute your SQL statement again. Oracle Database 11g: Change Management Overview Seminar I - 153
154
Using SQL Repair Advisor from PL/SQL
declare rep_out clob; t_id varchar2(50); begin t_id := dbms_sqldiag.create_diagnosis_task( sql_text => 'delete from t t1 where t1.a = ''a'' and rowid <> (select max(rowid) from t t2 where t1.a= t2.a and t1.b = t2.b and t1.d=t2.d)', task_name => 'sqldiag_bug_ ', problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_COMPILATION_ERROR); dbms_sqltune.set_tuning_task_parameter(t_id,'_SQLDIAG_FINDING_MODE', dbms_sqldiag.SQLDIAG_FINDINGS_FILTER_PLANS); dbms_sqldiag.execute_diagnosis_task (t_id); rep_out := dbms_sqldiag.report_diagnosis_task (t_id, DBMS_SQLDIAG.TYPE_TEXT); dbms_output.put_line ('Report : ' || rep_out); end; / execute dbms_sqldiag.accept_sql_patch(task_name => 'sqldiag_bug_ ', task_owner => 'SCOTT', replace => TRUE); Using SQL Repair Advisor from PL/SQL You can also invoke the SQL Repair Advisor directly from PL/SQL. Once you are alerted about an incident of SQL failure, you can execute a SQL Repair Advisor task using the DBMS_SQLDIAG.CREATE_DIAGNOSIS_TASK function, as illustrated in the slide. You specify the SQL statement for which you want the analysis to be done, as well as a task name and the problem type you want to analyze (possible values are PROBLEM_TYPE_COMPILATION_ERROR and PROBLEM_TYPE_EXECUTION_ERROR). You then set parameters for the created task using the DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER procedure. Once you are ready, you execute the task using the DBMS_SQLDIAG.EXECUTE_DIAGNOSIS_TASK procedure. Finally, you can get the task report using the DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK function. In the example in the slide, it is assumed that the report asks you to implement a SQL patch to fix the problem. You would then use the DBMS_SQLDIAG.ACCEPT_SQL_PATCH procedure to implement the SQL patch. Oracle Database 11g: Change Management Overview Seminar I - 154
155
Viewing, Disabling, and Removing a SQL Patch
Once you apply a SQL patch with the SQL Repair Advisor, you may want to view it to confirm its presence, to disable it, or to remove it. One reason to remove a patch is if you install a later release of Oracle Database that fixes the problem that caused the failure in the nonpatched SQL statement. To view, disable/enable, or remove a SQL Patch, access the Server page in EM and click the SQL Plan Control link in the Query Optimizer section of the page. This takes you to the SQL Plan Control page. From there, click the SQL Patch tab. From the resulting SQL Patch subpage, locate the desired patch by examining the associated SQL statement. Select it, and apply the corresponding task: Disable, Enable, or Delete. Oracle Database 11g: Change Management Overview Seminar I - 155
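The same view, disable, and remove operations are also available from the command line. A sketch using the 11g interfaces DBA_SQL_PATCHES and DBMS_SQLDIAG — the patch name my_sql_patch is an illustrative placeholder:

```sql
-- List existing SQL patches with their status.
SELECT name, status, created
FROM   dba_sql_patches;

-- Remove a patch that is no longer needed (for example, after
-- upgrading to a release that fixes the underlying bug).
EXEC dbms_sqldiag.drop_sql_patch(name => 'my_sql_patch');
```

Dropping the patch simply removes the optimizer workaround; the SQL statement itself is unchanged, so a statement that still hits the original bug would fail again.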
156
Oracle Database 11g: Change Management Overview Seminar I - 156
Summary In this lesson, you should have learned how to: Set up the Automatic Diagnostic Repository Create an incident package to capture diagnostic data Use a command-line tool to view diagnostic information Perform diagnostic tasks by using EM Support Workbench Run health checks on various database components Use SQL Repair Advisor to analyze critical SQL statement failures Oracle Database 11g: Change Management Overview Seminar I - 156
157
Provisioning Automation
Installing Patches
158
Oracle Database 11g: Change Management Overview Seminar I - 158
Objectives After completing this lesson, you should be able to: Discuss the use of hot patching to reduce system down time Compare the benefits of hot patching and conventional patching Oracle Database 11g: Change Management Overview Seminar I - 158
159
Hot Patching: Overview
For a bug fix or diagnostic patch on a running Oracle instance, hot patching provides the ability to do the following: Install Enable Disable Hot Patching: Overview Hot patching allows you to install, enable, and disable a bug fix or diagnostic patch on a live, running Oracle instance. You use hot patching as the recommended solution for avoiding down time when applying patches. Oracle Database 11g provides the capability to do hot patching with any Oracle database using the opatch command-line utility. Hot patches can be provided when the changed code is small in scope and complexity (for example, with diagnostic patches or small bug fixes). Oracle Database 11g: Change Management Overview Seminar I - 159
160
Oracle Database 11g: Change Management Overview Seminar I - 160
Installing a Hot Patch Applying a hot patch does not require instance shutdown, relinking of the Oracle binary, or instance restart. OPatch can be used to install or uninstall a hot patch. OPatch detects conflicts between two hot patches, as well as between a hot patch and a conventional patch. opatch query -is_online_patch <patch location> opatch query <patch location> -all Installing a Hot Patch Unlike traditional patching mechanisms, applying a hot patch does not require instance shutdown or restart. And similar to traditional patching, you can use OPatch to install a hot patch. You can determine whether a patch is a hot patch by using the commands shown above. Note: The patched code is shipped as a dynamic/shared library, which is then mapped into memory by each Oracle process. Oracle Database 11g: Change Management Overview Seminar I - 160
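The query and apply steps can be sketched as a short command sequence. A hypothetical example — the patch directory, SID, and credentials are illustrative, and opatch apply online with -connectString is the documented way to apply an online patch to a running instance:

```text
$ cd /tmp/patch_7654321
$ opatch query -is_online_patch .
$ opatch apply online -connectString orcl:sys:oracle
```

The query step confirms the patch is online-capable before you attempt a live apply; the connect string names the target instance and the credentials OPatch uses to install the patch into the running processes.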
161
Benefits of Hot Patching
No down time and no interruption of business Extremely fast install and uninstall times Integrated with OPatch: Conflict detection Listed in patch inventory Works in RAC environment Although the on-disk Oracle binary is unchanged, hot patches persist across instance shutdown and startup. Benefits of Hot Patching The main benefit of hot patching is that you do not have to shut down your database instance while you apply the hot patch. And unlike conventional patching, hot patching is extremely fast to install and uninstall. Because hot patching uses OPatch, you get all the benefits that you already have with conventional patching that uses OPatch. It does not matter how long or how many times you shut down your database—a hot patch always persists across instance shutdown and startup. Oracle Database 11g: Change Management Overview Seminar I - 161
162
Conventional Patching and Hot Patching
Conventional Patches                             Hot Patches
Require down time to apply or remove             Do not require down time to apply or remove
Installed and uninstalled via OPatch             Installed and uninstalled via OPatch
Persist across instance startup and shutdown     Persist across instance startup and shutdown
Take several minutes to install or uninstall     Take only a few seconds to install or uninstall
Conventional Patching and Hot Patching Conventional patching basically requires a shutdown of your database instance. Hot patching does not require any down time. Applications can keep running while you install a hot patch. Similarly, hot patches that have been installed can be uninstalled with no down time. Oracle Database 11g: Change Management Overview Seminar I - 162
163
Hot Patching Considerations
Hot patches may not be available on all platforms. They are currently available on: Linux x86 Linux x86-64 Solaris SPARC64 Some extra memory is consumed. Exact amount depends on: Size of patch Number of concurrently running Oracle processes Minimum amount of memory: approximately one OS page per running Oracle process Hot Patching Considerations One operating system (OS) page is typically 4 KB on Linux x86 and 8 KB on Solaris SPARC64. With an average of approximately one thousand Oracle processes running at the same time, this represents around 4 MB of extra memory for a small hot patch. Oracle Database 11g: Change Management Overview Seminar I - 163
164
Hot Patching Considerations
There may be a small delay (a few seconds) before every Oracle process installs or uninstalls a hot patch. Not all bug fixes and diagnostic patches are available as a hot patch. Use hot patches in situations when down time is not feasible. When down time is possible, you should install all relevant bug fixes as conventional patches. Hot Patching Considerations (continued) A vast majority of diagnostic patches are available as hot patches. For bug fixes, it really depends on their nature. Not every bug fix or diagnostic patch is available as a hot patch. But the long-term goal of the hot-patching facility is to provide hot-patching capabilities for Critical Patch Updates. Oracle Database 11g: Change Management Overview Seminar I - 164
165
Oracle Database 11g: Change Management Overview Seminar I - 165
Summary In this lesson, you should have learned how to: Discuss the use of hot patching to reduce system down time Compare the benefits of hot patching and conventional patching Oracle Database 11g: Change Management Overview Seminar I - 165