Agenda Performance Tuning Overview Oracle Database Internal

1 Agenda
Performance Tuning Overview
Oracle Database Internals
Oracle Memory Evolution from 9i to 12c
Oracle Database Wait Events
AWR, ASH and ADDM Analysis
Oracle Performance Tuning Techniques: Memory Tuning / Instance Tuning, Redo Log Tuning, SQL Tuning
Understanding SQL Tuning Tools (Explain Plan, Trace and TKPROF)
Diagnosing Database Performance Issues with Real-Time Scenarios
General Possible Problems
Index Rebuild Concept and Test Case
Advanced Database Performance Tools (RDA and OSWatcher)
Various OS Commands for Identifying Bottlenecks

2 Performance Tuning Overview
Database tuning is the activity of making a database application run more quickly. "More quickly" usually means higher throughput, though it may mean lower response time for time-critical applications. Improving performance means:
A) Reducing response time
B) Increasing throughput
There are four main areas of performance tuning:
1. SQL Tuning – responsibility of the developer
2. Database Tuning – responsibility of the database administrator
3. System Tuning – responsibility of the system administrator
4. Network Tuning – responsibility of the network / LAN / WAN administrator

3 Oracle Database Tuning tools/utilities
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
Automated Maintenance Tasks
ADDM Report / Regular ADDM Report (pre-12c)
ADDM Compare Report (new in 12c)
Real-Time ADDM Report (new in 12c)
Emergency Monitoring (enhancement of "Memory Access Mode")
ASH Report and AWR Report
Trace and TKPROF
Oracle Enterprise Manager / 12c Grid Control
RDA
SQLT / SQL Tuning Advisor (STA)
OS Watcher
Various OS commands for identifying bottlenecks

4 Oracle Memory Evolution from 9i to 12c
Introduction to Database Memory Components
The basic memory structures associated with Oracle Database include:

System Global Area (SGA): The SGA is a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. The SGA is shared by all server and background processes.

Program Global Area (PGA): A PGA is a memory region that contains data and control information for a server process. It is nonshared memory created by Oracle Database when a server process is started; access to the PGA is exclusive to that server process, and there is one PGA for each server process. Background processes also allocate their own PGAs. The total PGA memory allocated for all background and server processes attached to an instance is referred to as the total instance PGA memory, and the collection of all individual PGAs as the total instance PGA, or just instance PGA. The PGA contains global variables, data structures and control information for a server process; an example is the runtime area of a cursor. Each time a cursor is executed, a new runtime area is created for that cursor in the PGA of the server process executing it.

The performance of complex long-running queries, typical in a DSS environment, depends to a large extent on the memory available in the Program Global Area (PGA), also called the work area.

Evolution of Memory Management Features
Memory management has evolved with each database release:

Oracle 9i: Beginning with Oracle9i, the dynamic SGA infrastructure allowed sizing of the buffer cache, shared pool and large pool without having to shut down the database. Key features:
Dynamic memory resizing
DB_CACHE_SIZE instead of DB_BLOCK_BUFFERS
DB_nK_CACHE_SIZE for multiple block sizes
PGA_AGGREGATE_TARGET – introduction of automatic PGA memory management

Oracle Database 10g: Automatic Shared Memory Management (ASMM) was introduced in 10g. You enable ASMM by setting the SGA_TARGET parameter to a non-zero value.
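Enabling ASMM as described above can be sketched as follows (a minimal example; the 500M size is an illustrative assumption, not a recommendation):

```sql
-- Enable Automatic Shared Memory Management (10g+): a non-zero
-- SGA_TARGET lets Oracle auto-tune the buffer cache, shared pool,
-- large pool and java pool within the target.
ALTER SYSTEM SET sga_target = 500M SCOPE=BOTH;

-- Components left at 0 are fully auto-tuned; a non-zero explicit
-- size acts as a minimum for that component.
ALTER SYSTEM SET shared_pool_size = 0 SCOPE=BOTH;
ALTER SYSTEM SET db_cache_size = 0 SCOPE=BOTH;

-- Verify how Oracle has distributed the SGA:
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$sga_dynamic_components;
```

The final query reads the standard V$SGA_DYNAMIC_COMPONENTS view to confirm the resulting component sizes.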

6 Oracle Database 11g: Automatic Memory Management (AMM) was introduced in 11g. It enables automatic tuning of both the PGA and the SGA, using two new parameters named MEMORY_MAX_TARGET and MEMORY_TARGET.
Oracle Database 12c: Automatic Memory Management keeps the same behavior as in 11g.

Oracle Database 11g supports several memory management methods, chosen by initialization parameter settings. Oracle recommends that you enable the automatic memory management method.
Automatic Memory Management – for both the SGA and instance PGA
Automatic Shared Memory Management – for the SGA
Manual Shared Memory Management – for the SGA
Automatic PGA Memory Management – for the instance PGA
Manual PGA Memory Management – for the instance PGA

1. Automatic Memory Management – for both the SGA and instance PGA
Beginning with Oracle Database 11g, Oracle Database can manage the SGA memory and instance PGA memory completely automatically. You designate only the total memory size to be used by the instance, and Oracle Database dynamically exchanges memory between the SGA and the instance PGA as needed to meet processing demands. This capability is referred to as automatic memory management. With this method, the database also dynamically tunes the sizes of the individual SGA components and of the individual PGAs.

7 To achieve this, two new parameters were introduced: MEMORY_MAX_TARGET and MEMORY_TARGET. On most platforms, you set only a target memory size initialization parameter (MEMORY_TARGET) and optionally a maximum memory size initialization parameter (MEMORY_MAX_TARGET).

Switching to Automatic Memory Management
Check the current values configured for SGA_TARGET and PGA_AGGREGATE_TARGET:

SQL> SHOW PARAMETER target

NAME                  TYPE        VALUE
--------------------- ----------- -----
memory_max_target     big integer 0
memory_target         big integer 0
pga_aggregate_target  big integer 200M
sga_target            big integer 500M

Add the values of PGA_AGGREGATE_TARGET and SGA_TARGET; in our case they sum to 700M. Also check whether SGA_MAX_SIZE is set. When switching to AMM, i.e. using MEMORY_TARGET, the parameter SGA_MAX_SIZE (used for ASMM) should not be set, as doing so fixes the size of the SGA and hence conflicts with the intended use of MEMORY_TARGET.

8 Decide on the maximum amount of memory you want to allocate to the database; this determines the maximum value for the sum of the SGA and instance PGA sizes. In our case we decide on 808M.

Change the parameters in the initialization parameter file. When using a server parameter file, issue:

SQL> ALTER SYSTEM SET memory_max_target=808M SCOPE=SPFILE;
SQL> ALTER SYSTEM SET memory_target=808M SCOPE=SPFILE;
SQL> ALTER SYSTEM SET sga_target=0 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target=0 SCOPE=SPFILE;

When using a text initialization parameter file, edit the file and set the parameters manually:

MEMORY_MAX_TARGET=808M
MEMORY_TARGET=808M
SGA_TARGET=0
PGA_AGGREGATE_TARGET=0

9 If you do not specify a value for MEMORY_MAX_TARGET and only use MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, MEMORY_TARGET defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a non-zero value, provided it does not exceed the value of MEMORY_MAX_TARGET. MEMORY_MAX_TARGET is a static parameter, i.e. it cannot be changed dynamically and the instance has to be bounced to modify its value, so ensure you set it to an appropriate value.

Shut down and start up the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

10 SQL> show parameter target

NAME                  TYPE        VALUE
--------------------- ----------- -----
memory_max_target     big integer 808M
memory_target         big integer 808M
pga_aggregate_target  big integer 0
sga_target            big integer 0

The preceding steps instruct you to set SGA_TARGET and PGA_AGGREGATE_TARGET to zero so that the sizes of the SGA and instance PGA are tuned up and down as required, without restrictions. You can omit the statements that set these parameters to zero and leave either or both values as positive numbers; in that case the values act as minimum sizes for the SGA or instance PGA.

Note: if you set any related parameter to a value higher than MEMORY_TARGET, you get an ORA-00844 error. For example, setting SGA_MAX_SIZE to 900M results in the following:

SQL> startup
ORA-00844: Parameter not taking MEMORY_TARGET into account, see alert log for more information

The explanation given for this error is:
00844, 00000, "Parameter not taking MEMORY_TARGET into account, see alert log for more information"
// *Cause: The parameter was larger than MEMORY_TARGET.
// *Action: Set the parameter to a lower value than MEMORY_TARGET.
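Once AMM is active, one way to sanity-check the chosen MEMORY_TARGET is the memory advisor (a sketch using the standard 11g+ dictionary views):

```sql
-- How would estimated DB time change at other MEMORY_TARGET sizes?
-- MEMORY_SIZE_FACTOR = 1 corresponds to the current setting.
SELECT memory_size, memory_size_factor, estd_db_time
FROM   v$memory_target_advice
ORDER  BY memory_size_factor;

-- Current split between SGA and PGA under AMM:
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0;
```

If the advisor predicts a large DB time reduction at a higher size factor, the instance would likely benefit from a larger MEMORY_TARGET.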

11 Oracle Wait Events
Oracle wait events are conditions where a session is waiting for something to happen. A wait event can be caused by many things, from slow read/write speeds on disk, to locks held by other sessions, to various kinds of Oracle contention. Waits are either system-level or session-level: a session-level wait event affects a single user activity within the database, while system-level wait events affect the entire database system.

All wait events are classified into categories such as contention waits, OS service waits, DB service waits, idle waits, etc. This lets the user immediately find out whether the system is performing poorly due to excessive contention, a background process not performing well, or the operating system not having enough resources. A category has many wait events assigned to it, so the example below displays the total waits reported for each class. The wait event class gives an overall view of a particular area; for example, the I/O wait category contains all wait events associated with disk I/O. As a rule, the wait event categories with the highest wait times and counts become the focus of the tuning effort.

Example:

SELECT e.wait_class#, e.wait_class, SUM(s.total_waits), SUM(s.time_waited)
FROM   v$event_name e, v$system_event s
WHERE  e.name = s.event
GROUP  BY e.wait_class#, e.wait_class;
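Building on the class-level query above, a sketch of drilling down to the individual non-idle events that consume the most wait time (the top-10 cutoff is arbitrary):

```sql
-- Top 10 non-idle wait events by total time waited since startup.
-- time_waited is in centiseconds in v$system_event.
SELECT *
FROM (
  SELECT e.wait_class, s.event, s.total_waits,
         s.time_waited/100 AS time_waited_secs
  FROM   v$event_name e, v$system_event s
  WHERE  e.name = s.event
  AND    e.wait_class <> 'Idle'
  ORDER  BY s.time_waited DESC
)
WHERE ROWNUM <= 10;
```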

12 Classify the wait events into:
Idle Waits: Whenever an Oracle process has no work to do, this is an idle wait. For most processes this is because they are waiting for the user to provide a new SQL statement to execute.
Application: Waits caused by the way the application is designed. These include row lock waits, and table or other locks requested by the application either explicitly or implicitly (possibly due to DDL).
Configuration: Waits which occur in a badly configured system and will be reduced dramatically as a result of proper tuning.
Administrative: Waits imposed on users by a privileged user's action.
Concurrency: Waits that cannot be tuned and will occur on a system with high concurrency.
Commit: This class only has log file sync. It deserves a special class because it is a necessary event and is expected to be high on a system doing frequent commits.
Network: All waits due to network messaging delays belong here. They are supposed to point out network congestion or latency. They should not include think or processing time, only the time spent in the networking code and hardware.
User I/O Waits: All waits for disk I/O done by user queries, or even SMON and MMON.
System I/O Waits: All waits for disk I/O done by background processes like LGWR, DBWR, ARCH and RFS, but not SMON and MMON.
Cluster: Waits which occur only in RAC mode.
Other: All wait events which do not fit clearly into one of the above classes, or are not important to classify, meaning those that wait for an insignificant amount of time or really do not fit into any one class.
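The classification above is visible in the data dictionary; a quick sketch for listing how many registered events fall into each class:

```sql
-- Number of distinct wait events registered per wait class.
SELECT wait_class, COUNT(*) AS event_count
FROM   v$event_name
GROUP  BY wait_class
ORDER  BY event_count DESC;
```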

13 Most common wait events, along with explanations and potential solutions
Some "healthy" events are:
db file sequential read
db file scattered read
read by other session
log file switch (checkpoint incomplete)
log file sync
direct path read
direct path write
direct path read temp
direct path write temp
The presence of healthy events means that you will need either to add capacity, such as CPUs or I/O, or to tune the SQL. Either way, the solution is achievable by some common remedies.

14 If DB Time is greater during the bad period, it is likely the database is causing the problem;
you have to verify the problem inside the database. Some unhealthy events are:
Free buffer waits
Buffer busy waits
Enqueue waits
Latch events
Unhealthy events need more diagnostic effort to determine what is causing them.

15 db file sequential read
Single-block read (data via index and/or rowid). A large number of waits could indicate poor join orders of tables, or unselective indexing. When this occurs in conjunction with db file scattered read in the Top 5 Wait Events section of AWR, first examine the SQL Ordered by Physical Reads section of the report, and see the "Tablespace IO" and "File IO" sections of the AWR (or STATSPACK) reports.
Example of an access path that produces this wait:
Select * from emp where empno = 9999;   -- index on emp.empno
Note: "sequential" means a sequence, like a rowid, not sequential disk access.
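To find the SQL behind heavy single-block reads, one hedged starting point is the shared SQL area (columns per the standard V$SQLAREA view):

```sql
-- Top 5 cached statements by physical disk reads since load.
SELECT *
FROM (
  SELECT sql_id, disk_reads, executions, buffer_gets,
         SUBSTR(sql_text, 1, 60) AS sql_snippet
  FROM   v$sqlarea
  ORDER  BY disk_reads DESC
)
WHERE ROWNUM <= 5;
```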

16 Db file sequential read from AWR

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
Event                          Waits   Time (s)   % Total Ela Time
db file scattered read           ...        ...        ...
CPU time                                    ...        ...
db file sequential read          ...        ...        ...
control file sequential read     ...        ...        ...
control file parallel write      ...        ...        ...

17 Db file sequential read from OEM

18 db file scattered read
Multi-block read, e.g. SQL> select * from all_objects;
Full table scan: as full table scans are pulled into memory, the blocks rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number of these waits indicates that a table may have missing or suppressed indexes. Index fast full scans also produce this wait.
See the "Top SQL by Disk Reads" sections of AWR reports for clues about any SQL causing high I/O. See the "Tablespace IO" and "File IO" sections of the AWR (or STATSPACK) reports, along with ADDM and ASH output. If statistics gathering is enabled, V$SQL_PLAN can also give clues about SQL statements using FULL scans.
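A sketch of the V$SQL_PLAN check mentioned above, listing cached statements whose plans contain a full table scan (application schemas only; the owner filter is an illustrative assumption):

```sql
-- Statements in the shared pool whose plans include a full table scan.
SELECT DISTINCT p.sql_id, p.object_owner, p.object_name
FROM   v$sql_plan p
WHERE  p.operation = 'TABLE ACCESS'
AND    p.options   = 'FULL'
AND    p.object_owner NOT IN ('SYS', 'SYSTEM');
```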

19 db file scattered read from OEM

20 Read by other session
Multiple sessions read the same data; one session performs the I/O and the other sessions wait for it to finish.
To find more information:
Use ADDM and ASH reports for advice and information about the sessions and SQL involved.
Check the "Buffer Wait Statistics" section of AWR reports for details of the block classes causing waits.
The "Buffer Waits" columns of the Tablespace and File information in AWR reports help show where the waits are occurring, but be aware that these figures also include "buffer busy waits". Check whether the same tablespaces/files show excessive I/O or poor I/O performance.
Determine why the filesystems are performing poorly. Some common causes are:
"hot filesystems" – too many active files on the same filesystem, exhausting the I/O bandwidth
If Parallel Execution (PX) is being used, determine whether the I/O subsystem is saturated by having too many slaves in use.
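A hedged ASH sketch for locating the blocks behind 'read by other session' waits (V$ACTIVE_SESSION_HISTORY requires the Diagnostics Pack license):

```sql
-- Which objects/blocks are most often waited on behind another
-- session's read?
SELECT current_obj#, current_file#, current_block#, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  event = 'read by other session'
GROUP  BY current_obj#, current_file#, current_block#
ORDER  BY samples DESC;
```

The CURRENT_OBJ# values can then be resolved against DBA_OBJECTS to name the hot segments.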

21 Read by other session

22 log file switch (checkpoint incomplete)
This wait indicates that the process is waiting for the log file switch to complete, but the switch is not possible because the checkpoint for that log file has not completed.
Problem confirmation:
Only certain sessions, queries or jobs are experiencing slowness (not throughout the database).
The database has fewer than three redo log groups.
Waits for 'log file switch (checkpoint incomplete)' are a significant component of DB Time.
Reducing waits:
Enlarge the existing redo logs.
Add redo log groups.
One has to go beyond the AWR summary and look at log switches. Ideally there should be 2 or 3 log switches per hour. In this case the log switches under the Instance Activity Stats show that 33 switches happened during an hour, which means a switch every 2 minutes. We need to increase the size of the online redo log files to get rid of this wait event and improve the performance of the database.
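The log switch rate discussed above can also be checked directly from the dictionary; a sketch using the standard V$LOG_HISTORY view (the one-day window is an arbitrary choice):

```sql
-- Redo log switches per hour over the last day; compare against
-- the 2-3 switches/hour guideline.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;
```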

23 Log file sync
When a user session commits, the session's redo information needs to be flushed from memory to the redo log file to make it permanent. At commit time, the user session posts LGWR to write the log buffer (containing the current unwritten redo, including this session's redo) to the redo log file. When LGWR has finished writing, it posts the user session to notify it that the write has completed. The user session waits on 'log file sync' while waiting for LGWR to post it back to confirm that all redo changes are safely on disk. The time between the user session posting LGWR and LGWR posting the user session back after the write has completed is the 'log file sync' wait time that the user session shows.
To initially analyze 'log file sync' waits, the following information is helpful:
An AWR report from a similar time frame and period where 'log file sync' waits are not experienced, to use as a baseline of reasonable performance for comparison purposes.
An AWR report from when 'log file sync' waits are occurring. Note: the two reports should each cover a comparably short interval.
The LGWR trace file, which will show warning messages for periods when 'log file parallel write' may be high.
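A hedged comparison of the two events just discussed: if the average 'log file sync' wait is much larger than the average 'log file parallel write', the extra time is being spent outside the log I/O itself (for example, LGWR starved of CPU):

```sql
-- Average wait (ms) for commit-related events since instance startup.
SELECT event, total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```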

24 (Diagram: direct path I/O – the shadow process reads directly into the PGA sort area, bypassing the buffer cache; used for direct path reads, e.g. by sorts and PQO.)

25 Direct path read
Direct path reads are generally used by Oracle when reading directly into PGA memory (as opposed to into the buffer cache). This style of read request is typically used for:
Sort I/O, when memory sort areas are exhausted and temporary tablespaces are used to perform the sort
Parallel query slaves
Problem confirmation:
The time spent actively waiting on this event is significant.
'direct path read' waits are a significant component of total DB Time.
Reducing the number of waits:
1. If you are seeing long delays servicing this wait event, check the amount of I/O being performed on the device identified by the P1 argument of the wait event.
2. The device and/or the controller may be overloaded. If so, take the standard steps of spreading the files across more devices, etc.

26 Further investigation from ASH

col objn for a25
col otype for a15
col event for a25
col p3 for 999
col fn for 999
col sid for 9999
col qsid for 9999

select session_id sid,
       QC_SESSION_ID qsid,
       ash.p3,
       CURRENT_OBJ#||' '||o.object_name objn,
       o.object_type otype,
       CURRENT_FILE# fn,
       CURRENT_BLOCK# blockn,
       ash.SQL_ID
from   v$active_session_history ash, all_objects o
where  event like 'direct path read'
and    o.object_id (+)= ash.CURRENT_OBJ#
order  by sample_time;

In this example, every row returned referenced table TOTO with SQL_ID gp8tg0b2s722, produced by the parallel query:

select --+ parallel(a,4)
count(*) from toto a;

27 Direct path write
Direct path writes allow a session to queue an I/O write request and continue processing while the OS handles the I/O. When a process is writing buffers directly from the PGA (as opposed to DBWR writing them from the buffer cache), the process waits on this event to ensure that all outstanding write requests are completed. Examples of "direct path write" operations are: sorts that go to disk, parallel DML operations, direct-path INSERTs, parallel CREATE TABLE AS SELECT, and some LOB operations.
If the session needs to know whether an outstanding write is complete, it waits on this wait event. This can happen because the session is out of free slots and just needs an empty buffer (it waits on the oldest I/O), or because it needs to ensure all writes are flushed. If asynchronous I/O is not being used, the I/O write request blocks until completed, but this does not show as a wait at the time the I/O is issued; the session returns later to pick up the completed I/O data, and can then show a wait on "direct path write" even though this wait returns immediately.
This style of write request is typically used for:
Direct load operations (e.g. Create Table As Select (CTAS) may use this)
Parallel DML operations
Sort I/O (when a sort does not fit in memory)
Writes to uncached "LOB" segments (later releases wait on "direct path write (lob)")

28 One can look at:
V$SESSION_EVENT to identify sessions with high numbers of waits
V$SESSTAT to identify sessions with high "physical writes direct" (statistic only present in newer Oracle releases)
V$FILESTAT to see where the I/O is occurring
Problem confirmation:
The time spent actively waiting on 'direct path write' waits is significant.
'direct path write' waits are a significant component of total DB Time.
The average time for an I/O exceeds typical standards (i.e. takes greater than 20 ms).
Reducing the number of waits:
If you are seeing long delays servicing this wait event, check the amount of I/O being performed on the device identified by the P1 argument of the wait event. The device and/or the controller may be overloaded; if so, take the standard steps of spreading the files across more devices, etc. Check that the real problem isn't the amount of time the operating system takes to service the system call.
Find out which file numbers are causing the highest average waits, and then determine which filesystem contains those files.
Determine why the filesystems are performing poorly. Some common causes are:
"hot filesystems" – too many active files on the same filesystem, exhausting the I/O bandwidth
hardware problems
If Parallel Execution (PX) is being used, determine whether the I/O subsystem is saturated by having too many slaves in use.
Set PGA_AGGREGATE_TARGET, if not set already, to avoid incorrect manual workarea sizing, which could also lead to the issue.

29 Direct path read temp
Direct path reads from "temp" are used by Oracle when reading from TEMPORARY database files into PGA memory (as opposed to into the buffer cache). If asynchronous I/O is supported (and in use), Oracle can submit I/O requests and continue processing; it can then pick up the results of the request later, waiting on "direct path read temp" until the required I/O completes. If asynchronous I/O is not being used, the I/O requests block until completed, but do not show as waits at the time the I/O is issued; the session returns later to pick up the completed I/O data, and can then show a wait on "direct path read temp" even though this wait returns immediately.
Reducing waits / wait times:
The reads are for temporary tablespaces, so check for unexpected disk sort operations etc. that need temp space.
Ensure DISK_ASYNCH_IO is TRUE. This is unlikely to reduce wait times in the wait event timings, but may reduce sessions' elapsed times (as synchronous direct I/O is not accounted for in wait event timings).
Ensure the OS asynchronous I/O is configured correctly.
Check for I/O-heavy sessions / SQL and see if the amount of I/O can be reduced.
Ensure no disks are I/O bound.
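A quick sketch for checking whether sorts are spilling to disk at all, which is what drives the temp-related direct path waits (statistic names per V$SYSSTAT):

```sql
-- Compare disk sorts to in-memory sorts since startup; frequent
-- disk sorts drive 'direct path read/write temp' waits.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('sorts (memory)', 'sorts (disk)');
```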

30 One can look at:
AWR and ADDM reports
V$SESSION_WAIT to identify sessions with high numbers of waits
V$SESSTAT to identify sessions with high "physical reads direct temporary tablespace" (statistic not present in older Oracle releases)
AWR or V$SQLAREA for statements with SORTS and high DISK_READS (which may or may not be due to direct temp reads)

Top 5 Timed Foreground Events
Event                    Waits       Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                        ...        ...
direct path read temp    4,477,755    18,...            ...        ...  User I/O
library cache lock            ...        ...            ...        ...  Concurrency
direct path write temp   4,079,189    14,...            ...        ...  User I/O
db file scattered read     288,902     9,...            ...        ...  User I/O

31 Direct path write temp
Direct path writes allow a session to queue an I/O write request and continue processing while the OS handles the I/O. If the session needs to know whether an outstanding direct write to a TEMPFILE is complete, it waits on this wait event. This can happen because the session is out of free slots and just needs an empty buffer (it waits on the oldest I/O), or because it needs to ensure all writes are flushed. If asynchronous I/O is not being used, the I/O write request blocks until completed, but this does not show as a wait at the time the I/O is issued; the session returns later to pick up the completed I/O data, and can then show a wait on "direct path write temp" even though this wait returns immediately.
One can look at:
AWR and ADDM reports
V$SESSION_EVENT to identify sessions with high numbers of waits
V$SESSTAT to identify sessions with high "physical writes direct temporary tablespace" (statistic only present in newer Oracle releases)

32 Reducing waits / wait times:
The writes are for temporary tablespaces, so check for unexpected disk sort operations etc. that need temp space; e.g. ensure statistics are up to date so that the optimizer has the best chance of producing good execution plans.
Ensure DISK_ASYNCH_IO is TRUE. This is unlikely to reduce wait times in the wait event timings, but may reduce sessions' elapsed times (as synchronous direct I/O is not accounted for in wait event timings).
Ensure the OS asynchronous I/O is configured correctly.
Ensure no disks are I/O bound.

How to address high wait times for the 'direct path write temp' wait event:
Slow performance is observed, and significant waits for 'direct path write temp' are seen on the system:

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
Event                    Waits   Time (s)   Avg wait (ms)   % Total Call Time
direct path write temp     ...        ...             ...        ...
log file sync              ...        ...             ...        ...
log file parallel write    ...        ...             ...        ...

33 Waits for 'direct path write temp' are typically caused by sort operations that cannot be completed in memory and spill over to disk. Database sorting is mainly required for the following operations:
Index creation
Aggregation / GROUP BY
ORDER BY SQL
UNION / INTERSECT / MINUS operations
Sorting is usually performed in a memory area called the PGA (configured via the PGA_AGGREGATE_TARGET parameter). If memory is insufficient to accommodate the sort, it overflows to disk within the temporary tablespace. While writing to the temporary tablespace, waits for 'direct path write temp' occur. If these writes take significant time, they are reported in AWR, statspack and other performance tools.
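A hedged sketch for seeing which sessions and statements are currently consuming the temp space described above (V$TEMPSEG_USAGE is the standard view; in very old releases it was V$SORT_USAGE):

```sql
-- Sessions currently holding temp segments, largest first.
SELECT u.username, u.segtype,
       u.blocks * t.block_size / 1024 / 1024 AS temp_mb,
       u.sql_id
FROM   v$tempseg_usage u, dba_tablespaces t
WHERE  u.tablespace = t.tablespace_name
ORDER  BY u.blocks DESC;
```

The SQL_ID column ties a spilling session back to the statement doing the sort.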

34 Free buffer waits
Server processes scan the LRU list to get free buffers (e.g. while reading a block from disk, or cloning a buffer for CR). After scanning up to a threshold, if the server process cannot find a free buffer, it requests DBWR to write dirty buffers from the LRU lists to disk, or it waits until a pinned buffer is freed. While DBWR writes the dirty buffers, or until a pinned buffer is freed, the session waits on 'free buffer waits'.
Problem confirmation:
The AWR/statspack report's top timed events show a significant percentage of database time spent on this wait event.
Reducing waits:
There are two main scenarios where high 'free buffer waits' occur. First make sure that DBWR is able to write out the blocks quickly enough; if write throughput is within acceptable thresholds, the other alternative is that the buffer cache is too small. Use the following to address each scenario:

35 DBWR is not fast enough clearing dirty blocks.
Check that the CPU is not saturated. A saturated CPU can amplify wait events when a background process does not get enough CPU to progress.
Check for slow I/O (poor data file write performance). Some file systems have poor write performance (writes take too long), impacting DBWR's ability to keep enough clean buffers in the buffer cache. DBWR achieves optimal throughput when asynchronous I/O is available to it, and may not be able to keep up with buffer demands when it is not; if your platform doesn't support asynchronous I/O, adding multiple DB writers can help divide the workload.
Tune checkpoints, so that dirty buffers are flushed fast enough.
Tune the SQL to eliminate excess I/O.
Buffer cache is too small:
If the buffer cache is too small and filled with hot blocks, sessions will be starved for free buffers (clean, cold blocks) and will spend too much time looking for free buffers and/or posting DBWR to write dirty blocks and make them free. Increase DB_CACHE_SIZE (or the applicable SGA target parameter) and monitor the effect of the change.
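One hedged way to check the DBWR side of the first scenario is the average time of DBWR's own write event alongside the symptom event:

```sql
-- Average wait (ms): 'db file parallel write' covers DBWR's write
-- batches; consistently high averages point at slow data file writes.
SELECT event, total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
FROM   v$system_event
WHERE  event IN ('db file parallel write', 'free buffer waits');
```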

36 Buffer busy waits
This wait happens when a session wants to access a database block in the buffer cache but cannot, because the buffer is "busy". The two main cases where this can occur are:
Another session is reading the block into the buffer.
Another session holds the buffer in a mode incompatible with our request.
Reducing waits / wait times:
As buffer busy waits are due to contention for particular blocks, you cannot take any action until you know which blocks are being competed for and why; eliminating the cause of the contention is the best option. Note that "buffer busy waits" for data blocks are often due to several processes repeatedly reading the same blocks (e.g. if many sessions scan the same index): the first session processes the blocks already in the buffer cache quickly, but then a block has to be read from disk; the other sessions (scanning the same index) quickly catch up and want the block currently being read from disk, so they wait on the buffer because someone is already reading the block in.
The following hints may be useful for particular types of contention; these are things that MAY reduce contention in particular situations:

37 Block type – possible actions
data blocks: Eliminate hot blocks from the application. Check for repeatedly scanned / unselective indexes. Change PCTFREE and/or PCTUSED. Check for 'right-hand indexes' (indexes that get inserted into at the same point by many processes). Increase INITRANS. Reduce the number of rows per block.
segment header: Increase the number of FREELISTs. Use FREELIST GROUPs (even in a single instance this can make a difference).
freelist blocks: Add more FREELISTs. In case of Parallel Server, make sure that each instance has its own FREELIST GROUP(s).
undo header: Add more rollback segments.

The technique of analyzing both trace/TKProf and AWR/statspack report data is discussed below.
TKProf:
In the "Overall Totals" section, confirm that "buffer busy wait" events (overall totals, recursive and non-recursive) have the highest wait times.
Determine which call type is associated with the highest elapsed time: execute or fetch.
Generate a new TKProf report sorted by the call type found to have the highest elapsed times. For example:

38 Execute calls:
tkprof trace_file_name output_file sort=exeela

Fetch calls:
tkprof trace_file_name output_file sort=fchela

Choose a few of the top statements in this new TKProf report and find them in the original trace file. Examine the parts of the raw trace file where the top statements are running, and look at the lines with "WAIT #" for the buffer busy wait event corresponding to those cursors. For example:

WAIT #2: nam='buffer busy waits' ela= 222 file#=4 block#=78391 class#=1 obj#=57303 tim=

Find the values of the P1, P2 and P3 fields. These correspond to the file number, block number, and reason code (9.2 and below) or block class (10g and above). You will likely find many "WAIT #" lines for different combinations of P1, P2, and sometimes P3 values. The goal is to determine which segments are related to these waits. In 10g and above this is very easy, because the WAIT line includes the object ID for the segment (in the example above: obj#=57303). You can find information about the object using this query:

SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  object_id = 57303;

39 OWNER      OBJECT_NAME          OBJECT_TYPE
SCOTT      STOCK_PRICES         TABLE
If you need to find the segment using file# and block#, you can use this query:
SELECT owner, segment_name, file_id, block_id starting_block_id, block_id + blocks ending_block_id, blocks
FROM dba_extents
WHERE file_id = &file_num
AND (block_id <= &block_id AND &block_id < (block_id + blocks));
OWNER      SEGMENT_NAME            FILE_ID STARTING_BLOCK_ID ENDING_BLOCK_ID     BLOCKS
SCOTT      STOCK_PRICES                  4             78385           78393          8
Now you know the segment names and the SQL statements with the highest buffer busy waits.
AWR or statspack report (9.2 or higher): Review the section "Segments by Buffer Busy Waits" and note the segments with the highest waits (collected by statspack at level 7 or higher). It is difficult to point to a specific query, but sometimes the statements with the highest wait time (elapsed - cpu) are related to these waits. Review the statements in the Top SQL sections and find the ones with the highest wait times that use the segments with high buffer busy waits. Ideally, obtain an extended SQL trace and TKProf to accurately identify the statements with the highest waits (see above).

40 Enqueue
Enqueues are shared memory structures (locks) that serialize access to database resources. This wait event indicates a wait for a lock that is held by another session (or sessions) in a mode incompatible with the requested mode. Enqueues can be associated with a session or a transaction.
User Type Locks (TM, TX and UL Enqueues)
User locks are locks that are obtained by user applications to protect the integrity of data and the structure of schema objects. Contention on user type (TX/TM/UL) lock waits is not an Oracle code issue. If you encounter contention in this area, there is an application coding issue, and application developers need to be engaged to resolve it.

41 The following 3 enqueue types are defined as "User Type" Locks:
TM - DML (Table Manipulation) Enqueue called against a base table or partition for various table / partition operations that need to be co-ordinated. TX - Transaction Enqueue used to protect transaction information. UL - User Lock Enqueue used when an application makes use of the DBMS_LOCK package. TX Transaction Enqueue TX lock contention is an application coding, design and usage problem and can ONLY be resolved by making modifications to one or more of these 3 aspects of the application. TM - DML (Table Manipulation) Enqueue A TM lock is acquired by a transaction when a table is modified by an INSERT, UPDATE, DELETE, MERGE, SELECT with the FOR UPDATE clause, or LOCK TABLE statement. DML operations require table locks to reserve DML access to the table on behalf of a transaction and to prevent DDL operations that would conflict with the transaction.
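A quick way to see who is waiting on an enqueue and which session is holding it is the BLOCKING_SESSION column of V$SESSION (available from 10g onwards). This is a sketch; it lists blocked sessions together with their blocker:

```sql
-- Sessions currently blocked on a lock, and who is blocking them
SELECT sid, serial#, username, event, blocking_session, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL
ORDER BY seconds_in_wait DESC;
```

For TX contention, the event will typically be "enq: TX - row lock contention", and BLOCKING_SESSION identifies the session that must commit, roll back, or be killed to release the waiters.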

42 UL - User Defined Lock
UL locks are taken when an application makes use of the DBMS_LOCK package.
What is a deadlock?
A deadlock is a situation in which two or more users are each waiting for data locked by the other, so that none of them can proceed. This situation usually occurs because a consistent locking strategy has not been followed throughout an application. Deadlocks prevent some transactions from continuing to work. Oracle Database automatically detects deadlocks and resolves them by rolling back one statement involved in the deadlock, releasing one set of the conflicting row locks. The database returns a corresponding message to the transaction that undergoes statement-level rollback. The statement rolled back belongs to the transaction that detected the deadlock.
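A minimal sketch of how such a deadlock arises (the table and key values here are purely illustrative; run the statements in two sessions in the order shown):

```sql
-- Step 1, session 1: lock row A
UPDATE emp SET sal = sal + 100 WHERE empno = 7369;
-- Step 2, session 2: lock row B
UPDATE emp SET sal = sal + 100 WHERE empno = 7499;
-- Step 3, session 1: blocks, waiting on session 2's row lock
UPDATE emp SET sal = sal + 100 WHERE empno = 7499;
-- Step 4, session 2: blocks on session 1's row lock -> deadlock;
-- Oracle raises ORA-00060 in one session and rolls back that statement
UPDATE emp SET sal = sal + 100 WHERE empno = 7369;
```

Accessing rows in a consistent order across all transactions avoids this pattern.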

43 latch events A latch is a low-level internal lock used by Oracle to protect memory structures. The latch free event is updated when a server process attempts to get a latch, and the latch is unavailable on the first attempt. There is a dedicated latch-related wait event for the more popular latches that often generate significant contention. For those events, the name of the latch appears in the name of the wait event, such as latch: library cache or latch: cache buffers chains. This enables you to quickly figure out if a particular type of latch is responsible for most of the latch-related contention. Waits for all other latches are grouped in the generic latch free wait event. Actions This event should only be a concern if latch waits are a significant portion of the wait time on the system as a whole, or for individual users experiencing problems. ■ Examine the resource usage for related resources. For example, if the library cache latch is heavily contended for, then examine the hard and soft parse rates. ■ Examine the SQL statements for the sessions experiencing latch contention to see if there is any commonality. Check the following V$SESSION_WAIT parameter columns: ■ P1 - Address of the latch ■ P2 - Latch number ■ P3 - Number of times process has already slept, waiting for the latch
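Given the P1/P2 values from V$SESSION_WAIT, the latch can be identified as follows (a sketch; P2 is the latch number):

```sql
-- Map P2 (latch number) from V$SESSION_WAIT to a latch name
SELECT latch#, name FROM v$latchname WHERE latch# = &p2;

-- System-wide, see which latches are suffering the most sleeps
SELECT name, gets, misses, sleeps
FROM   v$latch
ORDER BY sleeps DESC;
```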

44 Latch free
Latches are like short-duration locks that protect critical bits of code. This wait indicates that the process is waiting for a latch that is currently busy (held by another process). For versions prior to 10g, there is an umbrella wait event called "latch free" that covers all latch waits. In these older releases one typically has to look at the wait parameters and latch sleep information to get an idea of which latches are causing the waits. In Oracle 10g or later, finding which latches are causing waits is easier because new wait event names have been introduced for the more common latch waits (e.g. a session will wait on "latch: shared pool" instead of "latch free"). However, some latch waits do still use the "latch free" wait event. When a session waits on "latch free" it effectively sleeps for a short time and then re-tests the latch to see if it is free. If it still cannot be acquired, then P3 is incremented and the session waits again. The wait time may increase exponentially and does not include spinning on the latch (active waiting). The exact wait behaviour varies by latch: for certain latches a waiting session may be posted once the latch is free. Oracle9i and later use this "latch posting" far more than Oracle8i (and earlier) releases. The SECONDS_IN_WAIT figure in V$SESSION_WAIT shows the total time spent waiting for the latch, including all sleeps.

45 One can also look at:
Whether the same session(s) keep appearing in V$LATCH_HOLDER
Sessions with high latch waits in V$SESSION_EVENT (although it is important to note that innocent sessions may show high numbers of waits if some other session is repeatedly holding the latch)
Reducing Waits / Wait times:
There is no general advice to reduce latch waits, as the action to take depends heavily on the latch type causing the waits. If there is no particular latch and waits occur across all latches, then check for CPU starvation or uneven O/S scheduling policies - a CPU-bound system will often show numerous latch waits across many types of latch.

46 shared pool latch The wait event is used when waiting for one of the shared pool latches. Shared pool latches protect critical operations on shared memory structures in the shared pool and need to be gotten in order to allocate or free chunks of memory from the top level shared pool memory heaps. In Oracle releases before 10g waits for "shared pool" latches showed up as "latch free" waits. Systemwide Waits: Waits for "latch: shared pool" indicate contention on the shared pool latch which protects critical operations when allocating and freeing memory in the shared pool. Heavy use of literal SQL will stress this latch significantly. If your online application makes heavy use of literal SQL statements then converting these to use bind variables will give significant improvements in latch contention in this area. Reducing Waits / Wait times: Since the shared pool covers a wide range of areas, typically you need to look at each individual area in turn to eliminate issues before moving to the next. If you are not using Automatic Memory Management (AMM) then it is usually good practice to start by checking that the shared pool is correctly sized, or to start using AMM, then look at how well your database is sharing cursors etc. Troubleshooting Contention on "latch: shared pool" is most commonly attributable to an undersized shared pool, non-sharing of SQL resulting in hard parsing or contention on the dictionary cache.
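One way to spot heavy literal SQL is to group statements that would be identical under CURSOR_SHARING=FORCE. This sketch relies on FORCE_MATCHING_SIGNATURE, available in V$SQLAREA from 10g onwards:

```sql
-- Statements that differ only in literal values share a FORCE_MATCHING_SIGNATURE;
-- a high count means many near-duplicate cursors bloating the shared pool.
SELECT force_matching_signature,
       COUNT(*)    AS versions,
       MIN(sql_id) AS sample_sql_id
FROM   v$sqlarea
WHERE  force_matching_signature > 0
GROUP BY force_matching_signature
HAVING COUNT(*) > 10
ORDER BY versions DESC;
```

Groups with many versions are prime candidates for conversion to bind variables.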

47 library cache latches
From Oracle 7.2 onwards the library cache latch has child latches. Problems with these latches are typically due to heavy use of literal SQL or very poor shared pool configuration. If your online application makes heavy use of literal SQL statements, then converting these to use bind variables will give significant improvements in latch contention in this area. See Note: for issues affecting the shared pool.

48 Cache buffers chains
"latch: cache buffers chains" contention is typically encountered because SQL statements read more buffers than they need to and multiple sessions are waiting to read the same block. If you have high contention, you need to look at the statements that perform the most buffer gets and then look at their access paths to determine whether these are performing as efficiently as you would like. Typical solutions are:
Look for SQL that accesses the blocks in question and determine if the repeated reads are necessary. This may be within a single session or across multiple sessions.
Check for suboptimal SQL (this is the most common cause of these waits) - look at the execution plan for the SQL being run and try to reduce the gets per execution, which will minimize the number of blocks being accessed and therefore reduce the chances of multiple sessions contending for the same block. If you can identify a poor SQL statement and a better plan for it, implement that plan.
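The statements with the most buffer gets can be ranked directly from V$SQLAREA (a sketch; high gets-per-execution statements are the usual suspects behind cache buffers chains contention):

```sql
-- SQL with the highest logical I/O
SELECT sql_id,
       buffer_gets,
       executions,
       ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
FROM   v$sqlarea
WHERE  executions > 0
ORDER BY buffer_gets DESC;
```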

49 General possible Problems
Possible problems:
Application issues - contention
High resource SQL - this is always the most likely
Database configuration
OS configuration
Tools:
Wait events (ADDM, AWR, ASH)
The V$ views
Tracing
Enterprise Manager

50 Diagnose Database Performance issues
As part of diagnosing database performance issues, we check database performance from OEM, AWR, ADDM and ASH, and investigate whether external factors are causing the issue, whether an application job is responsible, and whether there have been recent infrastructure or database changes or any new modules/jobs added.
1) Generate and check the ADDM report, implement its findings, and re-test
2) Gather diagnostics:
a) AWR report covering the problem period
b) AWR report covering a good period of similar load and duration for comparison
c) AWR Compare Periods report comparing the good and bad periods
d) An ASH report for the same period
3) Collect OSWatcher data
4) Collect the alert log and traces covering the duration of the problem:
a) Check the alert log for the period when the issue occurred
b) Find any trace files referenced in the problem period
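The reports above can be generated from SQL*Plus with the scripts shipped under $ORACLE_HOME/rdbms/admin; each script prompts for the snapshot range and output format:

```sql
-- Run as a DBA user in SQL*Plus:
@?/rdbms/admin/awrrpt.sql     -- AWR report for a snapshot range
@?/rdbms/admin/awrddrpt.sql   -- AWR Compare Periods report
@?/rdbms/admin/addmrpt.sql    -- ADDM report
@?/rdbms/admin/ashrpt.sql     -- ASH report
```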

51 Oracle Memory Tuning Monitoring and Tuning Automatic Memory Management
The dynamic performance view V$MEMORY_DYNAMIC_COMPONENTS shows the current sizes of all dynamically tuned memory components, including the total sizes of the SGA and instance PGA. The view V$MEMORY_TARGET_ADVICE provides tuning advice for the MEMORY_TARGET initialization parameter:
SQL> SELECT * FROM v$memory_target_advice ORDER BY memory_size;
You can also use V$MEMORY_RESIZE_OPS, which has a circular history buffer of the last 800 memory resize requests.
Note: if both the MEMORY_TARGET and PGA_AGGREGATE_TARGET instance parameters have been set, then querying V$PGASTAT can show a value for 'total PGA allocated' which is less than PGA_AGGREGATE_TARGET.
Total PGA allocated: This gives the current amount of PGA memory allocated by the instance. Oracle tries to keep this number less than the value of PGA_AGGREGATE_TARGET. However, it is possible for the PGA allocated to exceed that value by a small percentage and for a short period of time when the work area workload is increasing very rapidly or when PGA_AGGREGATE_TARGET is set to too small a value.

52 11g MEMORY_TARGET Parameter Dependency
If MEMORY_TARGET is set to a non-zero value:
If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they will be considered the minimum values for the sizes of the SGA and the PGA respectively. MEMORY_TARGET can range from SGA_TARGET + PGA_AGGREGATE_TARGET to MEMORY_MAX_TARGET.
If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, both parameters will still be auto-tuned. PGA_AGGREGATE_TARGET will be initialized to a value of MEMORY_TARGET - SGA_TARGET.
If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, both parameters will still be auto-tuned. SGA_TARGET will be initialized to the minimum of the non-zero values of MEMORY_TARGET - PGA_AGGREGATE_TARGET and SGA_MAX_SIZE, and its components will be auto-tuned.
If neither is set, they will be auto-tuned without any minimum or default values: the total memory set by the MEMORY_TARGET parameter is distributed in a fixed ratio between the SGA and PGA during initialization. The policy is to give 60% to the SGA and 40% to the PGA at startup.

53 If MEMORY_MAX_TARGET has not been explicitly set but MEMORY_TARGET has, the instance automatically sets MEMORY_MAX_TARGET to the same value as MEMORY_TARGET. If MEMORY_TARGET has not been explicitly set but MEMORY_MAX_TARGET has, then MEMORY_TARGET defaults to 0. After instance startup, it is then possible to dynamically change MEMORY_TARGET to a non-zero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.
If MEMORY_TARGET is not set, or is explicitly set to 0 (the default value for 11g):
If SGA_TARGET is set, only the sizes of the components of the SGA will be auto-tuned. The PGA will be auto-tuned whether it is explicitly set or not. However, the combination of SGA and PGA will not be auto-tuned, i.e. the SGA and PGA will not share memory and resize as they do when MEMORY_TARGET is set to a non-zero value.
If neither SGA_TARGET nor PGA_AGGREGATE_TARGET is set, the pre-11g policy is followed: the PGA is auto-tuned, the SGA is not, and parameters for some of the SGA components have to be set explicitly (in place of SGA_TARGET).

54 If only MEMORY_MAX_TARGET is set, MEMORY_TARGET will default to 0 and we will not auto tune the SGA and PGA. It will default to 10gR2 behavior. If SGA_MAX_SIZE is not user set, it is internally set to MEMORY_MAX_TARGET. 2. Automatic Shared Memory Management – For the SGA If you want to exercise more direct control over the size of the SGA, you can disable automatic memory management and enable automatic shared memory management. This feature was introduced in 10g with a parameter known as SGA_TARGET. When automatic SGA memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of current workload without requiring any additional configuration. In case you have enabled Automatic Memory Management and wish to switch to Automatic Shared Memory Management, then follow the below procedure: SQL> ALTER SYSTEM SET memory_target=0 SCOPE=BOTH; SQL> ALTER SYSTEM SET SGA_TARGET=500M SCOPE=BOTH;

55 3. Manual Shared Memory Management – For the SGA
If you want complete control of individual SGA component sizes, you can disable both automatic memory management and automatic shared memory management. In this mode, you need to set the sizes of several individual SGA components, thereby determining the overall SGA size. You then manually tune these individual SGA components on an ongoing basis. In this case you set SGA_TARGET and MEMORY_TARGET to 0 and set values for the other SGA components up to the total value of SGA_MAX_SIZE.
Please note that SGA resizes can occur after an upgrade to 11.2 despite the fact that automatic memory management (AMM/ASMM) is disabled via the MEMORY_TARGET and SGA_TARGET parameters being set to zero. This typically appears as growth in the __SHARED_POOL_SIZE value and a reduction in the __DB_CACHE_SIZE value being used in the instance, such that __DB_CACHE_SIZE may be shrunk below the DB_CACHE_SIZE value specified in the init.ora/spfile. This is expected behavior: 11.2 added immediate memory allocation requests as a new feature, and these can resize components even when automatic memory management is disabled.
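A sketch of switching to manual shared memory management (the sizes below are illustrative only, not recommendations):

```sql
-- Disable AMM and ASMM, then size each SGA component explicitly.
ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET shared_pool_size = 512M SCOPE = SPFILE;
ALTER SYSTEM SET db_cache_size = 1G SCOPE = SPFILE;
ALTER SYSTEM SET large_pool_size = 64M SCOPE = SPFILE;
ALTER SYSTEM SET java_pool_size = 64M SCOPE = SPFILE;
-- Restart the instance for the SPFILE changes to take effect.
```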

56 SGA Analysis from AWR

57 Analyze memory component from AWR

58 Buffer Pool Analysis

59 4. Automatic PGA Memory Management – For the Instance PGA While using Automatic memory management, PGA memory is allocated based upon the value of MEMORY_TARGET. In case you enable automatic shared memory management or manual shared memory management, you also implicitly enable automatic PGA memory management. For more information, see Document SGA and PGA Management in 11g's Automatic Memory Management (AMM). Automatic/Manual PGA memory management is decided by the initialization parameter WORKAREA_SIZE_POLICY which is a session- and system-level parameter that can take only two values: MANUAL or AUTO. The default is AUTO. You can set the parameter _MEMORY_IMM_MODE_WITHOUT_AUTOSGA=false in the instance to disable this feature with the consequence that in future an ORA-4031 error would be raised, e.g.: connect / as sysdba alter system set "_memory_imm_mode_without_autosga"=FALSE scope=both; exit Parameter:        _MEMORY_IMM_MODE_WITHOUT_AUTOSGA Default value:   0   (SGA autotuning is disabled for DEFERRED mode autotuning requests, but allowed for IMMEDIATE mode  autotuning requests)

60 4. Automatic PGA Memory Management – For the Instance PGA (continued)
With automatic PGA memory management, you set a target size for the instance PGA by defining a value for the parameter PGA_AGGREGATE_TARGET; sizing of SQL work areas is then automatic and all *_AREA_SIZE initialization parameters are ignored for these sessions. This feature is available from 9i. At any given time, the total amount of PGA memory available to active work areas on the instance is automatically derived from the parameter PGA_AGGREGATE_TARGET. This amount is set to the value of PGA_AGGREGATE_TARGET minus the PGA memory allocated for other purposes (for example, session memory). The resulting PGA memory is then allotted to individual active work areas based on their specific memory requirements.
5. Manual PGA Memory Management – For the Instance PGA
In case you wish to manually specify the maximum work area size for each type of SQL operator (such as sort or hash join), you can enable manual PGA memory management. Set WORKAREA_SIZE_POLICY to MANUAL and also specify values for the *_AREA_SIZE parameters such as SORT_AREA_SIZE, HASH_AREA_SIZE, BITMAP_MERGE_AREA_SIZE, and CREATE_BITMAP_AREA_SIZE.

61 Although Oracle Database 11g supports this manual PGA memory management method, Oracle strongly recommends that you leave automatic PGA memory management enabled.
SQL> show parameter pga
NAME                    TYPE        VALUE
----------------------- ----------- ------
pga_aggregate_target    big integer 17204M
Run this query to see total PGA memory used by the instance:
select sn.name, sum(s.value)
from v$sesstat s, v$statname sn
where s.statistic# = sn.statistic#
and sn.name like '%pga%'
group by sn.name;
select PGA_TARGET_FOR_ESTIMATE/1024/1024, BYTES_PROCESSED/1024/1024, ESTD_EXTRA_BYTES_RW/1024/1024
from V$PGA_TARGET_ADVICE;
SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
       ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
       ESTD_OVERALLOC_COUNT
FROM v$pga_target_advice;

62 PGA_AGGREGATE_TARGET is a soft limit: if any server process requires more memory than PGA_AGGREGATE_TARGET allows, the process can go on to use all available memory and can bring the server to its knees. This is a rare occurrence and normally happens due to a bug or a bad query.
'Estd PGA Overalloc Count' shows estimates of how many times the database would need to request more PGA memory from the OS than the amount shown in 'PGA Target Est (MB)'. Normally we should see a value of 0 in AWR reports, but it is not always easy to achieve this, as you may have to add gigabytes of memory.

63 The table below summarizes the various memory management methods:

Mode: Automatic memory management (AMM)
For: SGA and PGA
You set: Total memory target size for the Oracle instance (MEMORY_TARGET); (optional) maximum memory size for the Oracle instance (MEMORY_MAX_TARGET)
Oracle automatically tunes: Total SGA size, SGA component sizes, instance PGA size, individual PGA sizes

Mode: Automatic shared memory management (ASMM; AMM disabled)
For: SGA
You set: SGA target size (SGA_TARGET); (optional) SGA maximum size (SGA_MAX_SIZE)
Oracle automatically tunes: SGA component sizes

Mode: Manual shared memory management (AMM and ASMM disabled)
For: SGA
You set: Shared pool size (SHARED_POOL_SIZE), buffer cache size (DB_CACHE_SIZE or DB_BLOCK_BUFFERS), Java pool size (JAVA_POOL_SIZE), large pool size (LARGE_POOL_SIZE)
Oracle automatically tunes: None

Mode: Automatic PGA memory management
For: PGA
You set: Instance PGA target size (PGA_AGGREGATE_TARGET)
Oracle automatically tunes: Individual PGA sizes

Mode: Manual PGA memory management (not recommended)
For: PGA
You set: Maximum work area size for each type of SQL operator
Oracle automatically tunes: None

The Automatic Memory Management (AMM) feature uses the Memory Manager (MMAN) background process. This process was introduced in 10g, where it assisted Automatic Shared Memory Management (ASMM) using SGA_TARGET. MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components. The SGA Memory Broker keeps track of the sizes of the components and pending resize operations.

64 Important query to help in tuning
SET PAGESIZE 900
col 'Total SGA (Fixed+Variable)' format
col 'Total PGA Allocated (Mb)' format
col component format a40
col current_size format
SPOOL DBMEMINFO_camprd04aug.TXT
/* Database Identification */
select NAME, PLATFORM_ID, DATABASE_ROLE from v$database;
select * from V$version where banner like 'Oracle Database%';
select INSTANCE_NAME, to_char(STARTUP_TIME,'DD/MM/YYYY HH24:MI:SS') "STARTUP_TIME" from v$instance;
SET LINESIZE 200
/* Memory resize and SGA resize */
COLUMN parameter FORMAT A25
SELECT start_time, end_time, component, oper_type, oper_mode, parameter,
       ROUND(initial_size/1024/1024) AS initial_size_mb,
       ROUND(target_size/1024/1024) AS target_size_mb,
       ROUND(final_size/1024/1024) AS final_size_mb,
       status
FROM v$memory_resize_ops
ORDER BY start_time;

65 select * from DBA_HIST_MEMORY_RESIZE_OPS;
set linesize 90
set pagesize 60
column component format a25
column Final format 99,999,999,999
column Started format A25
SELECT COMPONENT, OPER_TYPE, FINAL_SIZE/1024/1024 Final, to_char(start_time,'dd-mon hh24:mi:ss') Started FROM V$SGA_RESIZE_OPS;
SELECT MAX(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='shared pool';
SELECT MAX(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='DEFAULT buffer cache';
SELECT MAX(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='large pool';
SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='shared pool';
SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='DEFAULT buffer cache';
SELECT MIN(final_size)/(1024*1024) FROM V$SGA_RESIZE_OPS WHERE component='large pool';

66 SET HEADING ON
SET PAGESIZE 20
SELECT name, bytes FROM v$sgastat WHERE pool = 'shared pool' AND (bytes > OR name = 'free memory') AND rownum < 16 ORDER BY bytes DESC;
/* buffer cache advice */
select size_for_estimate, size_factor, ESTD_PHYSICAL_READ_FACTOR, ESTD_PHYSICAL_READ_TIME from v$db_cache_advice;
column size_for_estimate format 999,999,999,999
column buffers_for_estimate format 999,999,999
column estd_physical_read_factor format
column estd_physical_reads format 999,999,999
SELECT size_for_estimate, buffers_for_estimate, estd_physical_read_factor, estd_physical_reads/1024/1024 estd_physical_reads
FROM V$DB_CACHE_ADVICE
WHERE name = 'DEFAULT'
AND block_size = (SELECT value FROM V$PARAMETER WHERE name = 'db_block_size')
AND advice_status = 'ON';
/* Memory advice */
select * from v$memory_target_advice order by memory_size;
/* free SGA */
SELECT bytes/1024/1024 FROM v$sgainfo WHERE name = 'Free SGA Memory Available';
/* AMM memory settings */
show parameter MEMORY%TARGET
show parameter SGA%TARGET
show parameter PGA%TARGET
/* Current memory settings */
select component, current_size/(1024*1024) from V$MEMORY_DYNAMIC_COMPONENTS;
/* SGA */
select sum(value)/(1024*1024) "Total SGA (Fixed+Variable)" from v$sga;
select * from v$sga_target_advice order by sga_size;
/* PGA */
select sum(PGA_ALLOC_MEM)/1024/1024 "Total PGA Allocated (Mb)" from v$process p, v$session s where p.addr = s.paddr;

67 select PGA_TARGET_FOR_ESTIMATE/1024/1024, BYTES_PROCESSED/1024/1024, ESTD_EXTRA_BYTES_RW/1024/1024 from V$PGA_TARGET_ADVICE;
SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb, ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc, ESTD_OVERALLOC_COUNT FROM v$pga_target_advice;
COLUMN name FORMAT A30
COLUMN value FORMAT A10
SELECT name, value FROM v$parameter WHERE name IN ('pga_aggregate_target', 'sga_target')
UNION
SELECT 'maximum PGA allocated' AS name, TO_CHAR(value) AS value FROM v$pgastat WHERE name = 'maximum PGA allocated';
-- Calculate MEMORY_TARGET
select memory_target/1024/1024 from (
  SELECT sga.value + GREATEST(pga.value, max_pga.value) AS memory_target
  FROM (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'sga_target') sga,
       (SELECT TO_NUMBER(value) AS value FROM v$parameter WHERE name = 'pga_aggregate_target') pga,
       (SELECT value FROM v$pgastat WHERE name = 'maximum PGA allocated') max_pga);
show parameter pga
show parameter sga
show parameter shared_pool
show parameter db_cache_size
show parameter memory
SPOOL OFF

68 Redo log Tuning
System-wide waits on "log file sync" show the time spent waiting for COMMITs (or ROLLBACKs) to complete. If this is significant, then there may be a problem with LGWR's ability to flush redo out quickly enough.
Reducing Waits / Wait times:
Here are some general tuning tips to help you reduce waits on "log file sync":
Tune LGWR to get good throughput to disk, e.g. do not put redo logs on RAID 5.
If there are lots of short-duration transactions, see if it is possible to BATCH transactions together so there are fewer distinct COMMIT operations. Each commit has to confirm that the relevant redo is on disk.
See if any of the processing can use the COMMIT NOWAIT option (be sure to understand the semantics of this before using it).
See if any activity can safely be done with NOLOGGING / UNRECOVERABLE options.
Check that the redo logs are large enough. Enlarge the redo logs so that log switches occur every 15 to 20 minutes.
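To check whether the redo logs are large enough, count log switches per hour; with the 15-20 minute guideline above, more than 3-4 switches per hour suggests the logs are too small. A sketch:

```sql
-- Log switches per hour over the retained history
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY hour;
```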

69 For diagnosis, check as below:
1) Alert.log file
2) LGWR trace file
3) Output of the queries below:
select to_char(sysdate,'Mondd hh24:mi:ss') TIME from dual;
SELECT distinct(to_char((bytes* ),' ')) size_mb FROM v$log;
SQL> Select * from v$logfile;
SQL> show parameter archive;
SQL> archive log list
PROMPT
PROMPT IMPORTANT PARAMETERS RELATING TO LOG FILE SYNC WAITS:
column name format a40 wra
column value format a40 wra
select inst_id, name, value
from gv$parameter
where ((value is not null and name like '%log_archive%')
   or name like '%commit%' or name like '%event=%' or name like '%lgwr%')
and name not in (select name from gv$parameter
                 where (name like '%log_archive_dest_state%' and value = 'enable')
                    or name = 'log_archive_format')
order by 1,2,3;

70 set linesize 200
select name,value from v$sysstat where name in ('redo synch poll writes','redo synch polls');
column EVENT format a30
select EVENT,TOTAL_WAITS,TOTAL_TIMEOUTS,TIME_WAITED,AVERAGE_WAIT from v$system_event where event in ('log file sync','log file parallel write');
select value from v$parameter where name = 'log_buffer';
column member format a50 wrap heading 'Member'
column group# format heading 'Group#'
column status format a10 wrap heading 'Status'
SELECT a.group#, a.member, b.bytes/1024/1024 FROM v$logfile a, v$log b WHERE a.group# = b.group#;
select group#,member from v$logfile;

71 From AWR and ASH

72 SQL Tuning What is a Query Tuning Issue?
A particular SQL statement or group of statements that runs slowly at a time when other statements run well
One or more sessions are running slowly, and most of the delay occurs during the execution of a particular SQL statement
You might have identified these queries from:
user complaints
statspack or AWR reports showing expensive SQL statements
a query appearing to hang
a session consuming a large amount of CPU
These problems might appear after:
schema changes
changes in stats
changes in data volumes
changes in application
database upgrades

73 Identifying the Problem
When does it happen?
Current session
At specific times
All the time
What is the scope of the problem?
Single user
Particular time/job
Overall performance
Has anything changed?

74 Root Causes of Poor SQL Performance
1.Optimizer statistics issues Stale/Missing statistics Incomplete statistics Improper optimizer configuration Upgraded database: new optimizer Changing statistics Rapidly changing data 2.Application Issues Missing access structures Poorly written SQL statements 3.Cursor sharing issues Bind-sensitive SQL with bind peeking Literal usage 4.Resource and contention issues Hardware resource crunch Contention (row lock contention, block update contention) Data fragmentation 5.Parallelism issues Not parallelized (no scaling to large data) Improperly parallelized (partially parallelized, skews)

75 Oracle Optimizer Statistics
Inaccurate statistics lead to suboptimal plans.
Optimizer statistics include:
Table statistics
Column statistics
Index statistics
Partition statistics
System statistics

76 Oracle Optimizer Statistics Preventing SQL Regressions
Automatic Statistics Collection Job (stale or missing statistics)
Out-of-the-box, runs in the maintenance window
Configuration can be changed (at table level)
Gathers statistics on user and dictionary objects
Uses a new collection algorithm with the accuracy of COMPUTE and speed faster than a 10% sample
Incrementally maintains statistics for partitioned tables - very efficient
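Where statistics are found to be stale or missing on a specific table, they can also be refreshed manually with DBMS_STATS (the schema and table names here are hypothetical):

```sql
-- Gather table and column statistics with automatic sample size
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'SALES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);   -- include index statistics
END;
/
```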

77 OEM find top SQL

78 Identify performance problems using ADDM Automatic Database Diagnostic Monitor
Provides database-wide and cluster-wide performance diagnostics
Throughput-centric: focuses on reducing 'DB time'
Identifies top SQL: shows SQL impact and frequency of occurrence
Pinpoints root causes, e.g.:
SQL statements waiting on row locks
SQL statements not shared

79 Identify performance problems using ADDM Automatic Database Diagnostic Monitor
Identify Top SQL by DB Time:
CPU
I/O
Non-idle waits
Different levels of analysis:
Historical analysis: AWR data (Performance Page)
Real-time analysis: ASH data (Top Activity Page); more granular analysis enables identification of transient problem SQL
Tune using SQL Tuning Advisor from the Performance and Top Activity pages

80 Find Sessions with the Highest Resource Consumption
-- sessions with highest CPU consumption
SELECT s.sid, s.serial#, p.spid as "OS PID", s.username, s.module, st.value/100 as "CPU sec"
FROM v$sesstat st, v$statname sn, v$session s, v$process p
WHERE sn.name = 'CPU used by this session' -- CPU
AND st.statistic# = sn.statistic#
AND st.sid = s.sid
AND s.paddr = p.addr
AND s.last_call_et < 1800 -- active within last 1/2 hour
AND s.logon_time > (SYSDATE - 240/1440) -- sessions logged on within 4 hours
ORDER BY st.value;

-- sessions with the highest time for a certain wait
SELECT s.sid, s.serial#, p.spid as "OS PID", s.username, s.module, se.time_waited
FROM v$session_event se, v$session s, v$process p
WHERE se.event = '&event_name'
AND se.sid = s.sid
AND s.paddr = p.addr
ORDER BY se.time_waited;

81 From AWR

82

83 Understanding SQL tuning tools
The foundation tools for SQL tuning are:
The EXPLAIN PLAN command
The SQL Trace facility
The tkprof trace file formatter
Effective SQL tuning requires familiarity with these tools.

84 Explain Plan
An execution plan is a list of steps that Oracle will follow in order to execute a SQL statement. Each step is one of a finite number of basic operations known to the database server. Even the most complex SQL statement can be broken down into a series of basic operations.
EXPLAIN PLAN is a statement that allows you to have Oracle generate the execution plan for any SQL statement without actually executing it. The plan reveals the exact sequence of steps that the Oracle optimizer has chosen to employ to process the SQL.
A plan table holds execution plans generated by the EXPLAIN PLAN statement. The typical name for a plan table is plan_table, but you may use any name you wish. Create the plan table by running utlxplan.sql, located in $ORACLE_HOME/rdbms/admin.
You examine an execution plan by querying the plan table; suitably formatted queries can be used to extract it.

85 Important Columns in the Plan Table
statement_id – Unique identifier for each execution plan
timestamp – When the execution plan was generated
operation – The operation performed in one step of the execution plan, such as "table access"
options – Additional information about the operation, such as "by index ROWID"
object_name – Name of the table, index, view, etc. accessed
optimizer – Optimizer goal used when creating the execution plan
id – Step number in the execution plan
parent_id – Step number of the parent step
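A sketch of pulling those columns back out of the plan table (the statement_id value is a placeholder you would have supplied via EXPLAIN PLAN ... SET STATEMENT_ID):

```sql
-- Retrieve one plan's steps in order (statement_id is a placeholder)
SELECT id, parent_id, operation, options, object_name, optimizer
FROM   plan_table
WHERE  statement_id = 'my_stmt'
ORDER BY id;
```

If you omit SET STATEMENT_ID when explaining, drop the WHERE clause and truncate plan_table between uses to avoid mixing plans.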

86 A simple Explain Plan
SQL> EXPLAIN PLAN FOR select count(*) from sales where product_id=1;
Explained.

SQL> SELECT RTRIM (LPAD (' ', 2 * LEVEL) || RTRIM (operation) || ' ' || RTRIM (options) || ' ' || object_name) query_plan
  2  FROM plan_table
  3  CONNECT BY PRIOR id = parent_id
  4  START WITH id = 0;

QUERY_PLAN
------------------------------------------
SELECT STATEMENT
  SORT AGGREGATE
    TABLE ACCESS FULL SALES

87 Interpreting Explain Plan
The more heavily indented an access path is, the earlier it is executed. If two steps are indented at the same level, the uppermost statement is executed first. Some access paths are "joined" – such as an index access that is followed by a table lookup.
An execution plan is a hierarchical listing of steps. Each step is one of a few basic data access operations known to the database server. The most complex SQL statement can be broken down into a series of basic operations.
"Read from the most indented step outward" – this common advice is not exactly correct! Instead, take this approach:
a) Start at the least indented step.
b) Find the step or steps that provide direct input to the step noted in (a).
c) Evaluate each of the steps found in (b). This may involve recursively finding steps that provide input and evaluating them.
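Applying steps (a)-(c) to a hypothetical plan (the index name EMP_DEPT_IX is invented for illustration):

```sql
-- Annotated plan listing; read using the approach above
-- SELECT STATEMENT                            step (a): least indented, start here
--   NESTED LOOPS                              its direct input, per step (b)
--     TABLE ACCESS FULL DEPARTMENTS           upper of two same-level inputs: runs first
--     TABLE ACCESS BY INDEX ROWID EMPLOYEES   second input, executed once per DEPARTMENTS row
--       INDEX RANGE SCAN EMP_DEPT_IX          provides ROWIDs to the table access above
```

Note how the index scan and table access form one of the "joined" access paths mentioned above: the index supplies ROWIDs that the table access consumes.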

88 SELECT customer_id, customer_name
FROM customers
WHERE UPPER (customer_name) LIKE 'ACME%'
ORDER BY customer_name;

OPERATION                OBJECT_NAME
SELECT STATEMENT
  SORT ORDER BY
    TABLE ACCESS FULL    CUSTOMERS

Execution Plan Operations
TABLE ACCESS FULL
Perform a full table scan of the indicated table and retrieve all rows that meet criteria from the WHERE clause. Input: no subordinate operations. Output: the necessary columns from the rows meeting all criteria.
SORT ORDER BY
Sort the input rows for the purpose of satisfying an ORDER BY clause. Input: the rows to be sorted. Output: the rows in sorted order.

89 INDEX UNIQUE SCAN
Look up a complete key in a unique index. Input: usually no subordinate operations (key values typically come from the original query or a parent operation). Output: zero or one ROWID from the index.
INDEX RANGE SCAN
Look up a key in a non-unique index, or an incomplete key in a unique index. Input: usually no subordinate operations. Output: zero or more ROWIDs from the index.
TABLE ACCESS BY INDEX ROWID
Look up rows in a table by their ROWIDs. Input: a list of ROWIDs to look up. Output: the necessary columns from the rows with the given ROWIDs.
NESTED LOOPS
Perform a join between two sets of row data using the nested loops algorithm. Inputs: two separate sets of row data. Output: the results of the join. For each row Oracle reads from the first input, the operations that make up the second input are executed once, and matching rows generate output.

90 HASH JOIN
Perform a join between two sets of row data using the hash join algorithm. Inputs: two separate sets of row data. Output: the results of the join. Oracle reads all rows from the second input and builds a hash structure before reading each row from the first input one at a time. For each row from the first input, the hash structure is probed and matching rows generate output.
NESTED LOOPS OUTER
Same as the NESTED LOOPS operation, except that an outer join is performed.
SORT GROUP BY
Same as the SORT ORDER BY operation, except that the rows are sorted and grouped to satisfy a GROUP BY clause.
FILTER
Read a set of row data and discard some rows based on various criteria. To determine the criteria, operations from a second input may need to be performed. Input: rows to be examined and, sometimes, an additional subordinate operation that must be performed for each row from the first input in order to evaluate criteria. Output: the rows from the first input that met the criteria.
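A sketch of the kind of statement that produces a HASH JOIN step (table names borrowed from Oracle's sample HR schema; the hint merely illustrates one way the operation can appear):

```sql
-- Equijoin that the optimizer may execute as a hash join
SELECT /*+ USE_HASH(e) */ d.department_name, e.last_name
FROM   departments d, employees e
WHERE  d.department_id = e.department_id;

-- A plan of roughly this shape might result:
-- SELECT STATEMENT
--   HASH JOIN
--     TABLE ACCESS FULL DEPARTMENTS
--     TABLE ACCESS FULL EMPLOYEES
```

Contrast this with the NESTED LOOPS shape: a hash join reads each input once, so it tends to win when both row sets are large and unindexed.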

91 VIEW
Build a physical representation of a database view or subset of a database view. Input: a set of row data. Output: the set of row data that implements the view or subset of the view.
The optimizer rewrites subqueries as joins and merges them into the main query whenever possible. If a subquery is completely independent of the main query and cannot be merged into the main query, the optimizer may treat the subquery as a separate statement and leave it out of the execution plan for the main query. The optimizer expands view definitions and merges them into the main query wherever possible. A VIEW operation will only appear in an execution plan when the view definition could not be merged.
REMOTE
Submit a SQL statement to a remote database via Oracle Net. Input: typically no subordinate operations. Output: the results of the query from the remote database. Note that the database link used to access the remote database and the actual SQL submitted to the remote database will be accessible from the execution plan.
SORT JOIN
Same as the SORT GROUP BY operation, except that the input is sorted by the join column or columns in preparation for a join using the merge join algorithm.

92 MERGE JOIN Perform a join between two sets of row data using the merge join algorithm. Inputs: two separate sets of row data. Output: the results of the join. Oracle reads rows from both inputs in an alternating fashion and merges together matching rows in order to generate output. The two inputs are assumed to be sorted on the join column or columns.

93 Display the execution plan with plan statistics (for the last executed cursor):
SQL> set linesize 150
SQL> set pagesize 2000
SQL> select * from TABLE(dbms_xplan.display_cursor(null, null, 'RUNSTATS_LAST'));
To get the plan of the last executed SQL, issue the following:
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALL'));
If the SQL has been executed and you know its SQL_ID value, you can pull the plan from the library cache as shown:
SQL> select * from TABLE(dbms_xplan.display_cursor('&SQL_ID', &CHILD, 'ALL'));
If the cursor happened to be executed when plan statistics were gathered, then use 'RUNSTATS_LAST' instead of just 'ALL'.

94 Alternate Approach
Use this approach if you are unable to capture the plan using the preferred approach. This approach may be used to collect plans reliably from queries that don't have bind variables.
a. Generate the execution plan:
SQL> EXPLAIN PLAN FOR < your query goes here >
b. Display the execution plan:
SQL> set lines 130
SQL> set head off
SQL> spool
SQL> alter session set cursor_sharing=EXACT;
SQL> select plan_table_output from table(dbms_xplan.display('PLAN_TABLE',null,'ALL'));
SQL> spool off

95 Explain Plan Limitations
The EXPLAIN PLAN statement provides a good-faith estimate of the execution plan that Oracle would use. The real plan that gets used may differ from what EXPLAIN PLAN tells you for many reasons:
Optimizer statistics, cursor sharing, bind variable peeking, and dynamic instance parameters make plans less stable.
EXPLAIN PLAN does not peek at bind variables.
EXPLAIN PLAN does not check the library cache to see if the statement has already been parsed.
EXPLAIN PLAN does not work for some queries: ORA-22905: cannot access rows from a non-nested table item
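One way to see the discrepancy is to compare the EXPLAIN PLAN estimate with the plan actually executed. This is a sketch; the orders table and status_id column are hypothetical:

```sql
VARIABLE status NUMBER
EXEC :status := 1

-- Estimated plan: EXPLAIN PLAN cannot peek at the bind value
EXPLAIN PLAN FOR SELECT COUNT(*) FROM orders WHERE status_id = :status;
SELECT * FROM TABLE(dbms_xplan.display);

-- Actual plan: execute the query, then display the last executed cursor
SELECT COUNT(*) FROM orders WHERE status_id = :status;
SELECT * FROM TABLE(dbms_xplan.display_cursor);
```

On a skewed status_id column, bind peeking at hard-parse time can make the two plans differ, which is exactly the instability described above.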

96 Viewing Actual Execution Plans
The v$sql view shows statements in the library cache. Here you can find the address, hash value, and child number for a statement of interest. The v$sql_plan view shows the actual execution plan for each statement, given its address, hash value, and child number. The columns are similar to the plan table. The v$sql_plan_statistics view shows actual statistics (rows, buffer gets, elapsed time, etc.) for each operation of the execution plan. The v$sql_plan and v$sql_plan_statistics views are available starting in Oracle 9i. v$sql_plan_statistics is not populated by default.
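Tying those views together might look like this sketch (the sql_text filter is a placeholder for something distinctive in your statement):

```sql
-- Locate the cursor of interest in the library cache
SELECT address, hash_value, child_number, sql_text
FROM   v$sql
WHERE  sql_text LIKE '%distinctive text%';   -- placeholder filter

-- Show the actual execution plan for that cursor
SELECT id, depth, operation, options, object_name
FROM   v$sql_plan
WHERE  address      = '&address'
AND    hash_value   = &hash_value
AND    child_number = &child_number
ORDER BY id;

-- Have v$sql_plan_statistics populated for subsequent executions
ALTER SESSION SET statistics_level = ALL;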

97 SQL_TRACE and tkprof
The Oracle server process managing a database session writes a verbose trace file when SQL trace is enabled for the session. ALTER SESSION SET sql_trace = TRUE causes a trace of SQL execution to be generated.
TKPROF is a utility provided by Oracle that formats SQL trace files into very helpful and readable reports. It is installed automatically when the database server software is installed. You invoke TKPROF from the operating system command line; there is no graphical interface for it.
Starting in Oracle 9i, TKPROF can read extended SQL trace files and report on wait event statistics.
TKPROF output contains a breakdown of execution statistics, the execution plan, and rows returned for each step. These statistics are not available from any other source.

98 We will collect data to help verify whether the suspected query is the one that should be tuned. We should be prepared to identify the specific steps in the application that cause the slow query to execute. We will trace the database session while the application executes this query.
The extended SQL trace (10046 trace at level 12) will capture execution statistics for all SQL statements issued by a session during the trace. It will show us how much time is being spent per statement, how much of that time was due to CPU or wait events, and what the bind values were. We will be able to verify whether the "candidate" SQL statement is truly among the SQL issued by a typical session.
Choose a session to trace:
Users that are experiencing the problem most severely; e.g., normally the transaction completes in 1 second, but now it takes 30 seconds.
Users that are aggressively accumulating time in the database.

99 Trace a Connected Session
Ideally, start the trace as soon as the user logs on and begins the operation or transaction. Continue tracing until the operation is finished. Try to avoid starting or ending the trace in the middle of a call unless you know the call is not important to the solution. This is the most common way to get a trace file.
Start tracing on a connected session
Coordinate with the user to start the operation
Let the trace collect while the operation is in progress
Stop tracing when the operation is done
Gather the trace file from the "user_dump_dest" location (you can usually identify the file just by looking at the timestamp).
Generate a TKPROF report and sort the SQL statements in order of most elapsed time using the following command:
tkprof <trace file name> <output file name> sort=fchela,exeela,prsela
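On 10g and later, the supported way to start and stop the trace on a connected session is DBMS_MONITOR (a sketch; the SID and serial# values are examples you would look up in v$session):

```sql
-- Start an extended trace (waits + binds, equivalent to 10046 level 12)
EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 4567, -
     waits => TRUE, binds => TRUE);

-- ... the user performs the slow operation while the trace collects ...

-- Stop tracing once the operation is done
EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 4567);
```

The trace file then appears in user_dump_dest as described above.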

100 Enabling SQL Trace
At the instance level:
sql_trace = true
timed_statistics = true (optional)
In your own session:
ALTER SESSION SET sql_trace = TRUE;
ALTER SESSION SET timed_statistics = TRUE; (optional)
In another session:
EXEC SYS.dbms_system.set_sql_trace_in_session(<SID>, <serial#>, TRUE);
Finding the Trace File
Look in the user dump destination. On OFA-compliant systems this will be $ORACLE_BASE/admin/$ORACLE_SID/udump.
Check timestamps and file contents to see which trace file is yours.
If non-DBAs need access to trace files, add _trace_files_public = true to the parameter file to avoid permissions problems on Unix platforms.
Use a dedicated server connection when tracing, if possible.

101 To identify the OS process ID
select p.PID, p.SPID, s.SID
from v$process p, v$session s
where s.paddr = p.addr
and s.sid = &SESSION_ID;
To identify the Oracle session ID from an OS process ID:
select p.PID, p.SPID, s.SID, s.serial#
from v$process p, v$session s
where s.paddr = p.addr
and p.SPID = 29179;
SPID is the operating system process identifier (OS PID); PID is the Oracle process identifier (Oracle PID).
Once the OS process ID for the process has been determined, the trace can be initialized as follows. Let's assume that the process to be traced has an OS PID of 9834. Log in to SQL*Plus as a DBA and execute the following:
connect / as sysdba
oradebug setospid 9834
oradebug unlimit
oradebug event 10046 trace name context forever, level 12
oradebug tracefile_name
Remember to replace the example '9834' value with the actual OS PID.
Note that it is also possible to attach to a session via oradebug using 'setorapid'. In this case the PID (Oracle process identifier) would be used (rather than the SPID) and the oradebug text would change to:

102 connect / as sysdba
oradebug setorapid 9834
oradebug unlimit
oradebug event 10046 trace name context forever, level 12
To identify the trace file location:
select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
from v$process a, v$session b, v$parameter c, v$instance d
where a.addr = b.paddr
and b.audsid = 4459
and c.name = 'user_dump_dest';
=======================
Other methods to trace
Run the package script dbmssupp.sql first for SQL tracing:
exec dbms_support.start_trace_in_session(4361, 2072, binds=>true, waits=>true);
exec dbms_support.stop_trace_in_session(4361, 2072);
exec dbms_system.set_sql_trace_in_session(2473, 1, true);
EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>FALSE);
Sample TKPROF invocations (file names are placeholders):
tkprof rcc1_ora_<pid>.trc rcc1_ora_<pid>.prf SYS=NO SORT=EXECPU,FCHCPU
or
tkprof rcc2_ora_12.trc tkp_rcc2.txt waits=yes sys=no sort=exeela,fchela explain='system/<password>'
tkprof user_sql_001.trc user1.prf explain=hr/hr table=hr.temp_plan_table_a sys=no sort=exeela,prsela,fchela

103 Formatting a Trace File with TKPROF
Invoke TKPROF from the operating system prompt like this:
tkprof <trace file> <output file> [explain=<username/password>] [sys=n] [sort=<keyword>]
TKPROF Command-line Arguments
trace file – The SQL trace file to be formatted
output file – The formatted output to be written by TKPROF
explain= – Database login to be used if you want the output to include execution plans
sys=n – Omit "recursive SQL" performed by the SYS user
sort= – List traced SQL statements in the output file in a specific order
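Putting the arguments together (a sketch; the trace file name, output name, and login are hypothetical):

```shell
# Format a trace file: include execution plans, skip SYS recursive SQL,
# and list statements by elapsed execute then fetch time, worst first
tkprof orcl_ora_12345.trc orcl_ora_12345.txt \
       explain=scott/tiger sys=no sort=exeela,fchela
```

The report then opens with the heading and per-statement sections described on the next slide.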

104 Report heading: TKPROF version, date run, sort option, trace file
One entry for each distinct SQL statement in the trace file:
Listing of the SQL statement
OCI call statistics: count of parse, execute, and fetch calls; rows processed; and time and I/O used
Parse information: parsing user, recursive depth, library cache misses, and optimizer mode
Row source operation listing
Execution plan listing (optional)
Wait event listing (optional)
Report summary:
OCI call statistics totals
Counts of how many statements were found in the trace file, how many were distinct, and how many were explained in the report

105 Tkprof output
TKPROF: Release Production on Wed Aug 9 19:06
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Trace file: example.trc
Sort options: default
************************************************************************
count = number of times the OCI procedure was executed
cpu = CPU time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call

106 Tkprof output
call     count   cpu   elapsed   disk   query   current   rows
Parse
Execute
Fetch
total

Rows   Execution Plan
   0   SELECT STATEMENT GOAL: CHOOSE
  99   FILTER
         TABLE ACCESS GOAL: ANALYZED (FULL) OF 'CUSTOMERS'
         TABLE ACCESS GOAL: ANALYZED (FULL) OF 'EMPLOYEES'

107 Sample OCI Call Statistics
SELECT table_name FROM user_tables ORDER BY table_name

call     count    cpu  elapsed   disk   query  current   rows
------- ------  -----  -------  -----  ------  -------  -----
Parse        1   0.01     0.02      0       0        0      0
Execute      1   0.00     0.00      0       0        0      0
Fetch       14   0.59     0.99      0   33633        0    194
------- ------  -----  -------  -----  ------  -------  -----
total       16   0.60     1.01      0   33633        0    194

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: RSCHRAG [recursive depth: 0]
The application called on Oracle to parse this statement once while SQL trace was enabled. The parse took 0.01 CPU seconds and 0.02 elapsed seconds.

108 No disk I/Os or buffer gets took place during the parse, suggesting there were no misses in the dictionary cache.
Oracle was called on to execute this statement once. The execution took under 0.01 CPU seconds. No disk I/Os or buffer gets took place during the execution. (Queries often defer the work to the fetch phase.)
Oracle was called on 14 times to perform a fetch, and a total of 194 rows were returned. Fetching took 0.59 CPU seconds and 0.99 elapsed seconds. Fetching required 33,633 buffer gets in consistent mode, but no physical reads were required.
The statement was not in the library cache (shared pool) when the parse call came in. The cost-based optimizer and a goal of "choose" were used to parse the statement. The RSCHRAG user was connected to the database when the parse occurred. This statement was executed directly by the application; it was not invoked recursively by the SYS user or a database trigger.
Execution plans are only included in TKPROF reports if the explain= parameter is specified when TKPROF is invoked. TKPROF will create and drop its own plan table if one does not already exist. The row counts on each step are actuals – not estimates. This can be very helpful when troubleshooting queries that perform poorly.

109 When TKPROF runs the EXPLAIN PLAN statement for a query, a different execution plan could be returned than was actually used in the traced session.
TKPROF Reports: More Than Just Execution Plans
The listing of SQL statements and library cache miss information helps you determine if applications are using Oracle's shared SQL facility effectively.
Parse, execute, and fetch call counts help you determine if applications are using Oracle APIs effectively.
CPU and I/O statistics help you zero in on resource-intensive SQL statements.
Row counts on individual steps of the execution plans help you rework inefficient execution plans.
3. Does the query spend most of its time in the execute/fetch phases (not the parse phase)?
Rationale: If the query spends most of its time parsing, normal query tuning techniques that alter the execution plan to reduce logical I/O during execute or fetch calls probably won't help. The focus of the tuning should be on reducing parse times; see the "Parse Reduction" strategy.

110 For example, here is an excerpt from a TKPROF report for a query:

SELECT * FROM ct_dn dn, ds_attrstore store . . .

call     count   cpu   elapsed   disk   query   current   rows
Parse
Execute
Fetch
total

The elapsed time here was dominated by parsing rather than fetching. This query is having trouble parsing – tuning the query's execution plan to reduce the number of buffers read during the fetch call will not give the greatest performance gain (in fact, only about 85 out of 386 seconds could be improved in the fetch call).

111 2. Does the time spent parsing, executing, and fetching account for most of the elapsed time in the trace?
If so, continue to the next question.
If not, check the client wait ("SQL*Net message from client") time between calls:
Are the client waits occurring in between fetch calls for the same cursor?
If so, update the problem statement to note this fact and continue with the next question.
If most of the time is spent waiting in between calls for different cursors, the bottleneck is in the client tier or network – SQL tuning may not improve the performance of the application. This is no longer a query tuning issue but requires analysis of the client or network.
Detailed Explanation
The goal of query tuning is to reduce the amount of time a query takes to parse, execute, and/or fetch data. If the trace file shows that these operations occur quickly relative to the total elapsed time, then we may actually need to tune the client or network.
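A quick check of the client-side wait time for a session might look like this sketch (substitute the SID of interest):

```sql
-- Cumulative SQL*Net wait time for one session
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = &sid
AND    event LIKE 'SQL*Net message%'
ORDER BY time_waited DESC;
```

Keep in mind that "SQL*Net message from client" is normally an idle wait; it only indicates a client/network bottleneck when it accumulates in the middle of an active operation, as described above.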

112 Contact Information

