Automatic Storage Management: The New Best Practice
Steve Adams, Ixora
Rich Long, Oracle Corporation
Session id: 40288

The Challenge
• Today's databases
  – large
  – growing
• Storage requirements
  – acceptable performance
  – expandable and scalable
  – high availability
  – low maintenance

Outline
• Introduction – get excited about ASM
• Current best practices – complex, demanding, but achievable
• Automatic storage management – simple, easy, better
• Conclusion

Current Best Practices
• General principles to follow
  – direct I/O
  – asynchronous I/O
  – striping
  – mirroring
  – load balancing
• Reduced expertise and analysis required
  – avoids all the worst mistakes

Buffered I/O (diagram: File System Cache, Database Cache, SGA, PGA)
• Reads
  – stat: physical reads
  – read from the file system cache
  – may still require a physical read
• Writes
  – written to the file system cache
  – synchronously (Oracle waits until the data is safely on disk too)
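
For example (a quick check, not from the original slides), the relevant counters can be compared in V$SYSSTAT; "physical reads" counts the block reads Oracle issued, while the direct figures show which of them bypassed any file system cache:

    -- With buffered I/O, many "physical reads" may be satisfied from the
    -- file system cache rather than from disk; direct reads never are.
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('physical reads', 'physical reads direct', 'physical writes');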

Direct I/O (diagram: File System Cache, Database Cache, SGA, PGA)
• I/O
  – bypasses the file system cache
• Memory
  – file system cache does not contain database blocks (so it's smaller)
  – database cache can be larger

Buffered I/O – Cache Usage (diagram comparing the File System Cache and the Database Cache; legend: hot data, recent warm data, older warm data, recent cold data, o/s data)

Direct I/O – Cache Usage (diagram comparing the File System Cache and the Database Cache; legend: hot data, recent warm data, older warm data, recent cold data, o/s data)

Cache Effectiveness
• Buffered I/O
  – overlap wastes memory
  – caches single-use data
  – simple LRU policy
  – file system cache hits are relatively expensive
  – extra physical read and write overheads
  – floods the file system cache with Oracle data
• Direct I/O
  – no overlap
  – no single-use data
  – segmented LRU policy
  – all cached data is found in the database cache
  – no physical I/O overheads
  – non-Oracle data cached more effectively

Buffered Log Writes (diagram: File System Cache, Log Buffer, SGA)
• Most redo log writes address part of a file system block
• The file system reads the target block first
  – then copies the data in
• Oracle waits for both the read and the write
  – a full disk rotation is needed in between

I/O Efficiency
• Buffered I/O
  – small writes must wait for a preliminary read
  – large reads & writes are performed as a series of single-block operations
  – tablespace block size must match the file system block size exactly
• Direct I/O
  – small writes: no need to re-write adjacent data
  – large reads & writes are passed down the stack without any fragmentation
  – may use any tablespace block size without penalty

Direct I/O – How To
• May need to
  – set the filesystemio_options parameter
  – set file system mount options
  – configure using operating system commands
• Depends on
  – operating system platform
  – file system type
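
As a minimal sketch (values and availability vary by platform, file system, and Oracle version), direct I/O is typically requested through the filesystemio_options initialization parameter:

    -- DIRECTIO requests unbuffered file system I/O; SETALL requests
    -- both direct and asynchronous I/O. The parameter is static, so an
    -- spfile and an instance restart are assumed here.
    ALTER SYSTEM SET filesystemio_options = 'DIRECTIO' SCOPE = SPFILE;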

Synchronous I/O (diagram: DBWn write batch)
• Processes wait for I/O completion and results
• A process can only use one disk at a time
• For a series of I/Os to the same disk
  – the hardware cannot service the requests in the optimal order
  – scheduling latencies

Asynchronous I/O (diagram: DBWn write batch)
• Can perform other tasks while waiting for I/O
• Can use many disks at once
• For a batch of I/Os to the same disk
  – the hardware can service the requests in the optimal order
  – no scheduling latencies

Asynchronous I/O – How To
• Threaded asynchronous I/O simulation
  – multiple threads perform synchronous I/O
  – high CPU cost if intensively used
  – only available on some platforms
• Kernelized asynchronous I/O
  – must use raw devices or a pseudo device driver product
    (e.g. Veritas Quick I/O, Oracle Disk Manager, etc.)
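
As a rough sketch of the related initialization parameters (both are static, so an spfile and a restart are assumed; defaults and support vary by platform):

    -- Asynchronous I/O to raw devices (usually enabled by default where supported).
    ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;

    -- ASYNCH requests asynchronous I/O to file system files;
    -- SETALL requests asynchronous and direct I/O together.
    ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;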

Striping – Benefits
• Concurrency
  – hot spots are spread over multiple disks, which can service concurrent requests in parallel
• Transfer rate
  – large reads & writes use multiple disks in parallel
• I/O spread
  – full utilization of the hardware investment
  – important for systems with relatively few large disks

Striping – Fine or Coarse
• Concurrency – coarse grain
  – most I/Os should be serviced by a single disk
  – caching ensures that disk hot spots are not small
  – 1 MB is a reasonable stripe element size
• Transfer rate – fine grain
  – large I/Os should be serviced by multiple disks
  – but very fine striping increases rotational latency and reduces concurrency
  – 128 KB is commonly optimal

Striping – Breadth
• Comprehensive (SAME) – all disks in one stripe
  – ensures even utilization of all disks
  – needs reconfiguration to increase capacity
  – without a disk cache, log write performance may be unacceptable
• Broad (SAME sets) – two or more stripe sets
  – one set may be busy while another is idle
  – can increase capacity by adding a new set
  – can use a separate disk set to isolate log files from I/O interference

Striping – How To
• Stripe breadth
  – broad (SAME sets)
    · to allow for growth
    · to isolate log file I/O
  – comprehensive (SAME) otherwise
• Stripe grain
  – choose coarse for high concurrency applications
  – choose fine for low concurrency applications

Data Protection
• Mirroring
  – only half the raw disk capacity is usable
  – can read from either side of the mirror
  – must write to both sides of the mirror
  – result: half the data capacity, maximum I/O capacity
• RAID-5
  – parity data uses the capacity of one disk
  – only one image from which to read
  – must read and write both the data and the parity
  – result: nearly full data capacity, less than half the I/O capacity
"Data capacity is much cheaper than I/O capacity."

Mirroring – Software or Hardware
• Software mirroring
  – a crash can leave the mirrors inconsistent
  – complete resilvering takes too long
  – so a dirty region log is normally needed
    · enumerates potentially inconsistent regions
    · makes resilvering much faster
    · but it is a major performance overhead
• Hardware mirroring is best practice
  – hot spare disks should be maintained

Data Protection – How To
• Choose mirroring, not RAID-5
  – disk capacity is cheap
  – I/O capacity is expensive
• Use hardware mirroring if possible
  – avoid dirty region logging overheads
• Keep hot spares
  – to re-establish mirroring quickly after a failure

Load Balancing – Triggers
• Performance tuning
  – poor I/O performance
  – adequate I/O capacity
  – uneven workload
• Workload growth
  – inadequate I/O capacity
  – new disks purchased
  – workload must be redistributed
• Data growth
  – data growth requires more disk capacity
  – placing the new data on the new disks would introduce a hot spot

Load Balancing – Reactive
• Approach
  – monitor I/O patterns and densities
  – move files to spread the load out evenly
• Difficulties
  – workload patterns may vary
  – file sizes may differ, thus preventing swapping
  – stripe sets may have different I/O characteristics

Load Balancing – How To
• Be prepared
  – choose a small, fixed datafile size
  – use multiple such datafiles for each tablespace
  – distribute these datafiles evenly over the stripe sets
• When adding capacity
  – for each tablespace, move datafiles pro-rata from the existing stripe sets onto the new one
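
For illustration only (paths and sizes are hypothetical), the "be prepared" layout might look like this, with several fixed-size datafiles per tablespace spread across the stripe sets:

    -- /stripe1 and /stripe2 stand for two existing stripe sets.
    CREATE TABLESPACE app_data
      DATAFILE '/stripe1/oradata/prod/app_data01.dbf' SIZE 4G,
               '/stripe2/oradata/prod/app_data02.dbf' SIZE 4G;

    -- When a new stripe set (/stripe3) is added, grow the tablespace onto it
    -- so that the I/O load is redistributed pro-rata.
    ALTER TABLESPACE app_data
      ADD DATAFILE '/stripe3/oradata/prod/app_data03.dbf' SIZE 4G;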

Automatic Storage Management
• What is ASM?
• Disk Groups
• Dynamic Rebalancing
• ASM Architecture
• ASM Mirroring

Automatic Storage Management
• New capability in the Oracle database kernel
• Provides a vertical integration of the file system and volume manager for simplified management of database files
• Spreads database files across all available storage for optimal performance
• Enables simple and non-intrusive resource allocation with automatic rebalancing
• Virtualizes storage resources

ASM Disk Groups
• A disk group is a pool of disks managed as a logical unit
• Partitions the total disk space into uniform-sized megabyte units
• ASM spreads each file evenly across all disks in a disk group
• Coarse or fine grain striping based on file type
• Disk groups are integrated with Oracle Managed Files
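
A minimal sketch (the disk group name and device paths are assumptions): a disk group is created from candidate disks in one statement, and the coarse or fine striping used for each file type can be adjusted through templates:

    -- Create a disk group from four candidate disks (hypothetical raw device paths).
    CREATE DISKGROUP data NORMAL REDUNDANCY
      DISK '/dev/raw/raw1', '/dev/raw/raw2', '/dev/raw/raw3', '/dev/raw/raw4';

    -- Striping granularity is set per file type via templates, for example:
    ALTER DISKGROUP data ALTER TEMPLATE datafile ATTRIBUTES (FINE);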

ASM Dynamic Rebalancing
• Automatic online rebalance whenever the storage configuration changes
• Only moves data proportional to the storage added
• No need for manual I/O tuning
• Online migration to new storage
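
A minimal sketch (disk group and disk names are assumptions): adding or dropping disks triggers the online rebalance automatically, and the optional POWER clause throttles how aggressively extents are moved:

    -- Adding capacity: ASM rebalances online, moving only a proportional share of the data.
    ALTER DISKGROUP data ADD DISK '/dev/raw/raw5' REBALANCE POWER 4;

    -- Migrating to new storage: add the new disks and drop the old ones in one
    -- statement; the data moves online, with no database downtime.
    ALTER DISKGROUP data
      ADD DISK '/dev/raw/raw6', '/dev/raw/raw7'
      DROP DISK data_0001;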

ASM Architecture (diagram): a single, non-RAC server runs one ASM instance alongside the Oracle DB instance; the disk group is a pool of storage managed by ASM.

ASM Architecture (diagram): in a RAC database, each clustered server runs its own ASM instance and Oracle DB instance, sharing a clustered pool of storage in the disk group.

ASM Architecture (diagram): one clustered pool of storage can serve several RAC or non-RAC databases, with multiple Oracle DB instances sharing the same disk group.
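
As a rough sketch of what distinguishes the ASM instance (parameter names are real, values are illustrative), its parameter file is tiny, and the mounted disk groups are visible to both ASM and database instances through the V$ASM views:

    -- The ASM instance is defined by a handful of parameters (pfile/spfile entries):
    --   instance_type  = ASM              -- an ASM instance, not a database instance
    --   asm_diskstring = '/dev/raw/raw*'  -- where to discover candidate disks
    --   asm_diskgroups = 'DATA'           -- disk groups to mount at startup

    -- From either kind of instance, query the mounted disk groups:
    SELECT name, state, type, total_mb, free_mb
    FROM   v$asm_diskgroup;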

ASM Mirroring
• Three choices for disk group redundancy
  – External: defers to hardware mirroring
  – Normal: 2-way mirroring
  – High: 3-way mirroring
• Integration with the database removes the need for dirty region logging

ASM Mirroring
• Mirror at the extent level
• Mix primary & mirror extents on each disk

ASM Mirroring
• No hot spare disk required
  – just spare capacity
  – a failed disk's load is spread among the survivors
  – maintains a balanced I/O load
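
A minimal sketch (names and paths are assumptions): with normal redundancy, ASM mirrors extents across failure groups, so disks that share a single point of failure, such as a controller, can be grouped together:

    -- Each extent's mirror copy is placed in a different failure group.
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP controller1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
      FAILGROUP controller2 DISK '/dev/raw/raw3', '/dev/raw/raw4';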

Conclusion
• Best practice is built into ASM
• ASM is easy
• ASM benefits
  – performance
  – availability
  – automation

Best Practice Is Built Into ASM
• I/O to ASM files is direct, not buffered
• ASM allows kernelized asynchronous I/O
• ASM spreads the I/O as broadly as possible
  – can have both fine and coarse grain striping
• ASM can provide software mirroring
  – does not require dirty region logging
  – does not require hot spares, just spare capacity
• When new disks are added, ASM does the load balancing automatically, without downtime

ASM is Easy
• You only need to answer two questions
  – Do you need a separate log file disk group?
    · intensive OLTP application with no disk cache
  – Do you need ASM mirroring?
    · storage not mirrored by the hardware
• ASM will do everything else automatically
• Storage management is entirely automated
  – using BIGFILE tablespaces, you need never name or refer to a datafile again
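
A minimal sketch (the +DATA disk group name is assumed): with Oracle Managed Files pointed at a disk group, tablespace creation never mentions a file name, and a BIGFILE tablespace keeps each tablespace to a single, automatically managed datafile:

    -- Point Oracle Managed Files at the ASM disk group.
    ALTER SYSTEM SET db_create_file_dest = '+DATA';

    -- No datafile name, no path: ASM and OMF create and manage the file.
    CREATE BIGFILE TABLESPACE sales_data DATAFILE SIZE 100G;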

ASM Benefits
• ASM will improve performance
  – very few sites follow the current best practices
• ASM will improve system availability
  – no downtime needed for storage changes
• ASM will save you time
  – it automates a complex DBA task entirely

Q & A – Questions & Answers

Next Steps…
• Automatic Storage Management demo in the Oracle DEMOgrounds
  – Pod 5DD
  – Pod 5QQ

Reminder – please complete the OracleWorld online session survey. Thank you.