
Automatic Data Optimization and Oracle Intelligent Storage Protocol


2 Automatic Data Optimization and Oracle Intelligent Storage Protocol
Kevin Jernigan, Senior Director of Product Management, Oracle Product Development

3 Program Agenda
Data Growth and Information Lifecycle Management
Database Partitioning and Compression
Heat Map and Automatic Data Optimization
Automatic Data Optimization Use Cases
Oracle Intelligent Storage Protocol and ZFS Storage

4 Growth in Data Diversity and Usage: 1,800 Exabytes of Data in 2011, 20x Growth by 2020
Today's drivers and emerging growth factors:
Enterprise: 45% per year growth in database data
Cloud: 80% of new applications and their data
Regulation: 300 exabytes in archives by 2015
Mobile: #1 Internet access device in 2013
Big Data: the largest customers top 50 PB
Social Business: $30B/year in commerce by 2015
Speaker notes: Let's talk for a moment about the challenges that data growth creates for your database environment. We are living in an era of data explosion: according to IDC, the total amount of digital information in the world has been increasing roughly ten-fold every five years. More storage means more disk drives, which consume more datacenter space, power, cooling, and human resources, so the storage budget keeps growing; companies will spend more than one-fourth of their IT budgets on storage. ILM is a response to this exponential growth. Today's digital universe will grow 20x over the next 10 years, and not just "Big Data": structured data in relational databases is growing too. The cost of storage hardware is shrinking, but not exponentially, and cost per IOP is not improving much, so adding more Tier 1 storage to production databases with performance problems can be expensive. Storage tiering and ILM are complex.

5 Information Lifecycle Management: Managing Data Over Its Lifetime
"The policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective IT infrastructure from the time information is conceived through its final disposition." (Storage Networking Industry Association (SNIA) Data Management Forum)
(Diagram: data value tiers from high to medium to low plotted against total cost of ownership from $ to $$$, with misaligned value at risk.)
Speaker notes: Let's start by talking about Information Lifecycle Management and why an ILM strategy is such an important approach in a database environment. This slide illustrates our working definition of ILM: customers value different parts of their data differently, and they want to associate different costs with storing that data. The objective is to apply the right storage optimizations to the right data at the right time. Oracle Database 11g provides an ideal environment for implementing an ILM solution: it is cost-effective, secure, and transparent to the application, and achieves all of this without compromising performance, often improving it. The solution is based on a combination of Partitioning, Compression, ASM, and other core database features, plus tools to define lifecycle policies, manage data movement, and report on benefits.

6 Information Lifecycle Management
ILM steps:
Define data classes (structured and unstructured)
Create storage tiers for the data classes
Create data access and migration policies
Define and enforce compliance policies
Benefits across the lifecycle (active, less active, historical):
Improve DW and batch performance by 10x or more with Partitioning and Compression
Improve OLTP performance with Compression and Flash Cache
Reduce TCO for database storage by 10x or more with Compression
Reduce storage requirements with Compression: 2x-4x with Advanced Compression, 10x-50x with Hybrid Columnar Compression
Automate tamper-proof history tracking with Flashback Data Archive
Use ZFS, SAM, and StorageTek tape solutions for hardware optimization
(Diagram: application, software, data, and hardware optimization layers spanning mainframe, desktop, and server environments.)

7 Database Partitioning

8 The Concept of Partitioning
Simple yet powerful:
A large table is difficult to manage.
Partitioning divides and conquers: an ORDERS table split by month (JAN, FEB) is easier to manage and improves performance.
Composite partitioning, such as ORDERS split by region (USA, EUROPE) and then by month, gives better performance and more flexibility to match business needs.
Transparent to applications. (See the SQL sketch below.)
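A minimal sketch of the ORDERS examples above; all table and column names here are illustrative, not taken from the slides:

CREATE TABLE orders_range (
  order_id   NUMBER,
  order_date DATE,
  region     VARCHAR2(10),
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_jan VALUES LESS THAN (DATE '2008-02-01'),
  PARTITION p_feb VALUES LESS THAN (DATE '2008-03-01')
);

-- Composite: partition by region, then by month within each region
CREATE TABLE orders_composite (
  order_id   NUMBER,
  order_date DATE,
  region     VARCHAR2(10),
  amount     NUMBER
)
PARTITION BY LIST (region)
SUBPARTITION BY RANGE (order_date)
SUBPARTITION TEMPLATE (
  SUBPARTITION sp_jan VALUES LESS THAN (DATE '2008-02-01'),
  SUBPARTITION sp_feb VALUES LESS THAN (DATE '2008-03-01')
) (
  PARTITION p_usa    VALUES ('USA'),
  PARTITION p_europe VALUES ('EUROPE')
);

Applications query these tables exactly as they would an unpartitioned table, which is what "transparent to applications" means here.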

9 What Can Be Partitioned?
Tables: heap tables and index-organized tables
Indexes: global indexes (partitioned or non-partitioned) and local partitioned indexes
Materialized views
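A hedged sketch of the index variants listed above, built against the illustrative orders_range table from the previous example:

-- Local partitioned index: one index partition per table partition
CREATE INDEX orders_date_ix ON orders_range (order_date) LOCAL;

-- Global partitioned index: partitioned independently of the table;
-- the highest partition must be bounded by MAXVALUE
CREATE INDEX orders_id_ix ON orders_range (order_id)
GLOBAL PARTITION BY RANGE (order_id) (
  PARTITION ix_low  VALUES LESS THAN (1000000),
  PARTITION ix_high VALUES LESS THAN (MAXVALUE)
);

-- A plain CREATE INDEX on a partitioned table yields a global
-- non-partitioned index.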

10 Partition for Performance
Partition Pruning
Q: What were the total sales from May 19th through May 22nd, 2008?

SELECT SUM(sales_amount)
FROM sales
WHERE sales_date BETWEEN TO_DATE('05/19/2008','MM/DD/YYYY')
                     AND TO_DATE('05/22/2008','MM/DD/YYYY');

(Diagram: the sales table partitioned by day, May 18th through May 24th, 2008.)
Only the relevant partitions are accessed: static pruning when values are known in advance, dynamic pruning at runtime. Pruning minimizes I/O operations and provides massive performance gains.
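To see pruning in action, the execution plan can be checked; a minimal sketch (DBMS_XPLAN output details vary by release):

EXPLAIN PLAN FOR
SELECT SUM(sales_amount)
FROM   sales
WHERE  sales_date BETWEEN TO_DATE('05/19/2008','MM/DD/YYYY')
                      AND TO_DATE('05/22/2008','MM/DD/YYYY');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- The Pstart and Pstop columns in the plan show which partitions are
-- actually scanned; a narrow Pstart..Pstop range confirms pruning.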

11 Partition for Tiered Storage
ORDERS table spanning 10 years (2003-2013): the most recent data, roughly 5%, is active and lives on a high-end storage tier; the remaining 95% is less active and lives on a low-end tier costing 2-3x less per terabyte.
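A hedged sketch of moving a cold partition to the cheaper tier; the partition and tablespace names are illustrative:

-- Relocate the 2003 partition to a tablespace on low-end storage,
-- rebuilding affected indexes in the same statement
ALTER TABLE orders MOVE PARTITION orders_2003
  TABLESPACE low_cost_ts
  UPDATE INDEXES;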

12 Database Compression

13 Compression Techniques
Compression types and suitable workloads:
Basic Compression: "mostly read" tables and partitions in data warehouse environments, or "inactive" data partitions in OLTP environments
Advanced Row Compression: active tables and partitions in OLTP and data warehouse environments
Advanced LOB Compression and Deduplication: non-relational data in OLTP and data warehouse environments
Advanced Network Compression and Data Guard Redo Transport Compression: all environments
RMAN/Data Pump Backup Compression: all environments
Index Key Compression: indexes on tables in OLTP and data warehouse environments
Hybrid Columnar Compression, Warehouse Level: "mostly read" tables and partitions in data warehouse environments
Hybrid Columnar Compression, Archive Level: "inactive" data partitions in OLTP and data warehousing environments
Notes: Basic compression, first made available with Oracle9i Database Enterprise Edition, compresses read-only data such as data warehouse tables or the non-active partitions of a large partitioned table underneath OLTP applications; data is compressed as it is loaded into the table (or moved into the partition), typically achieving a 2x to 4x compression ratio. SecureFiles compression: the Advanced Compression option also compresses data stored as large objects (LOBs), which is especially important for environments that mix structured and unstructured data, for example XML, spatial, audio, image, or video information; compression ratios depend on the type of media, and three compression levels let you balance compression against available CPU resources. Index compression: indexes can be compressed independently of whether the underlying table data is compressed, which can significantly reduce the storage used by large indexes. Backup compression: backups of rows and indexes can likewise be compressed independently of whether the data itself is compressed.
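Hedged sketches of how several of these options are declared (Oracle Database 12c syntax; all object names are illustrative):

-- Basic compression: applied on direct-path (bulk) loads
CREATE TABLE sales_hist (id NUMBER, amt NUMBER)
  ROW STORE COMPRESS BASIC;

-- Advanced row compression: applied on conventional DML too
CREATE TABLE sales_oltp (id NUMBER, amt NUMBER)
  ROW STORE COMPRESS ADVANCED;

-- Advanced LOB compression and deduplication (SecureFiles)
CREATE TABLE docs (id NUMBER, body CLOB)
  LOB (body) STORE AS SECUREFILE (COMPRESS MEDIUM DEDUPLICATE);

-- Index key compression (prefix length 1)
CREATE INDEX sales_oltp_ix ON sales_oltp (amt, id) COMPRESS 1;

-- Backup compression is configured in RMAN rather than SQL, e.g.:
-- CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;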

14 Advanced Row Compression
Partition, table, and tablespace data compression
Supports conventional DML operations (INSERT, UPDATE)
Customers indicate that 2x to 4x compression ratios are typical
Significantly reduces the write overhead of DML; batched compression minimizes the impact on transaction performance
"Database-aware" compression: data stays compressed in memory and does not need to be uncompressed to be read
Reads often see improved performance due to fewer I/Os and better memory efficiency
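A minimal sketch of enabling it on an existing table (the table name is illustrative):

-- New and updated rows are compressed from now on ...
ALTER TABLE orders ROW STORE COMPRESS ADVANCED;

-- ... while existing rows stay uncompressed until the segment is rebuilt
ALTER TABLE orders MOVE ROW STORE COMPRESS ADVANCED;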

15 Hybrid Columnar Compression
Hybrid columnar compressed tables can be modified using conventional DML operations
Useful for data that is bulk loaded and queried
How it works: tables are organized into Compression Units (CUs); a CU spans multiple database blocks; within a CU, data is organized by column instead of by row; column organization brings similar values close together, enhancing compression
Typical result: 10x to 15x reduction
(Diagram: a Compression Unit spanning Columns 1 through 5.)
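A hedged sketch in 12c syntax; HCC requires supported storage such as Exadata, the ZFS Storage Appliance, or Pillar Axiom, and the staging table is illustrative:

-- Warehouse level
CREATE TABLE sales_cu (id NUMBER, amt NUMBER)
  COLUMN STORE COMPRESS FOR QUERY HIGH;

-- Archive level, for rarely accessed data
CREATE TABLE sales_arch (id NUMBER, amt NUMBER)
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH;

-- HCC is applied on direct-path loads; conventional DML is allowed
-- afterwards but those rows are stored less compressed
INSERT /*+ APPEND */ INTO sales_cu SELECT id, amt FROM sales_stage;
COMMIT;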

16 Data Compression
Reduce storage footprint and read compressed data faster:
Hot data: 3x Advanced Row Compression
Warm data: 10x Columnar Query Compression
Archive data: 15x Columnar Archive Compression

17 Compression Benefits
Transparent: 100% application transparent.
Smaller: reduces footprint. CapEx: lowers server and storage costs for primary, standby, backup, test, and dev databases. OpEx: lowers heating, cooling, and floor-space costs, with additional ongoing savings over the life of a database as it grows in size.
Faster: transactional, analytic, and DW workloads. Greater speedup from in-memory processing: 3-10x more data fits in the buffer cache and flash cache, giving faster queries and faster backup and restore speeds.
End-to-end cost/performance benefits across CPU, DRAM, flash, disk, and network.

18 Heat Map and Automatic Data Optimization

19 Understanding Data Usage Patterns
Database "heat map" (diagram: a grid of tracked database blocks).

20 Understanding Data Usage Patterns
Database "heat map" (diagram continued: the grid fills in as more block accesses are tracked).

21 Heat Map: What It Tracks
Access and modification times are tracked for tables and partitions; modification times are tracked for database blocks.
Comprehensive: segment-level tracking shows both reads and writes, distinguishes index lookups from full table scans, and automatically excludes statistics gathering, DDL, table redefinitions, and similar operations.
High performance: object-level tracking at no cost; block-level tracking at less than 5% cost.
Data is classified along the lifecycle from hot to cold: active, frequent access, occasional access, dormant.
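A minimal sketch of turning Heat Map on and inspecting what it records (12c views; verify column names against your release):

-- Enable tracking instance-wide
ALTER SYSTEM SET HEAT_MAP = ON;

-- Latest read/write activity per segment
SELECT object_name, segment_write_time, segment_read_time,
       full_scan, lookup_scan
FROM   dba_heat_map_segment;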

22 Heat Map: Enterprise Manager Visualization
No performance impact. (Screenshot: the Heat Map view in Enterprise Manager.)

23 Automatic Data Optimization: Usage-Based Data Compression
Hot data: 3x Advanced Row Compression
Warm data: 10x Columnar Query Compression
Archive data: 15x Columnar Archive Compression

24 Automatic Data Optimization: Simple Declarative SQL Extension
ALTER TABLE sales ILM ADD ... policies across the lifecycle (completed in the sketch below):
Active (cached in DRAM and flash, row store): Advanced Row Compression (2-4x), affecting only candidate rows: "row store compress advanced row after 2 days of no update"
Frequent access (high-performance storage): Warehouse Compression (10x): "column store compress for query low after 1 week of no update"
Occasional access (low-cost storage): "tier to SATA tablespace"
Dormant (archival storage): Archive Compression (15-50x): "column store compress for archive high after 6 months of no access"
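A consolidated, hedged sketch of the four policies above in full 12c syntax; the slide's "no update" corresponds to the NO MODIFICATION condition, and the tablespace name is illustrative:

ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW
  AFTER 2 DAYS OF NO MODIFICATION;

ALTER TABLE sales ILM ADD POLICY
  COLUMN STORE COMPRESS FOR QUERY LOW SEGMENT
  AFTER 7 DAYS OF NO MODIFICATION;

ALTER TABLE sales ILM ADD POLICY
  TIER TO low_cost_sata_ts;

ALTER TABLE sales ILM ADD POLICY
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 6 MONTHS OF NO ACCESS;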

25 Automatic Data Optimization: Scheduled Policy Execution
Immediate and background policy execution.
Policies are executed in maintenance windows and can be manually executed as needed (see the sketch below).
Policies can be extended to incorporate business rules: users can add custom conditions to control placement (e.g., 3 months after the ship date of an order).
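A hedged sketch of both mechanisms, with DBMS_ILM as documented for 12c; the ship_date column and the function name are illustrative:

-- Run all ADO policies on a table immediately, outside the window
DECLARE
  v_task_id NUMBER;
BEGIN
  DBMS_ILM.EXECUTE_ILM(owner       => 'SH',
                       object_name => 'SALES',
                       task_id     => v_task_id);
END;
/

-- Custom business rule: fire 3 months after the latest ship date
CREATE OR REPLACE FUNCTION orders_cold (objn IN NUMBER)
RETURN BOOLEAN AS
  v_last_ship DATE;
BEGIN
  SELECT MAX(ship_date) INTO v_last_ship FROM orders;
  RETURN v_last_ship < ADD_MONTHS(SYSDATE, -3);
END;
/

ALTER TABLE orders ILM ADD POLICY
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT ON orders_cold;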

26 Automatic Data Optimization Use Cases

27 Automatic Data Optimization
As data ages, activity declines, volume grows, and older data is used primarily for reporting.
Lifecycle: online and current-quarter data stays in the row store for fast OLTP; this year's data moves to a 10x-compressed column store for fast analytics; prior years move to a 15x archive-compressed column store for maximum compression, compliance, and reporting.
As data cools down, Automatic Data Optimization converts it to columnar compressed format automatically and online.
Policy fragments from the slide, completed in the sketch below: "alter table … add policy … compress for query after 3 months of no modification … compress for archive after 1 year …"
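The slide's policy fragments, filled out as a hedged sketch using the 12c condition grammar:

ALTER TABLE orders ILM ADD POLICY
  COLUMN STORE COMPRESS FOR QUERY HIGH SEGMENT
  AFTER 3 MONTHS OF NO MODIFICATION;

ALTER TABLE orders ILM ADD POLICY
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 1 YEAR OF NO MODIFICATION;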

28 Up to 15x Smaller Footprint & Faster Queries
Automatic Data Optimization for OLTP: both columnar (query) and archive compression now complement Advanced Row Compression. Best practice:
Step 1: Use Advanced Row Compression for the entire database.
Step 2: ADO automatically converts segments to columnar compression once updates cool down and the data is used mainly for reporting: columnar query speed with a 10x smaller footprint.
Step 3: ADO automatically converts segments to archive compression once the data cools down further and is no longer frequently queried: a 15-50x smaller footprint.

29 Automatic Data Optimization for DW: Optimizes Data Based on Heat Map
Data generally arrives via bulk loading, and the workload is dominated by queries, even during loading.
Step 1: Bulk load directly into columnar (query) compressed format: a 10x smaller footprint with columnar query speed.
Step 2: ADO automatically converts segments to archive compression and moves them to lower-cost storage once they are queried infrequently. Data remains online, with a 15-50x smaller footprint and lower storage cost. (See the sketch below.)
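A hedged sketch of this two-step DW flow; names are illustrative, and HCC loading requires direct-path inserts and HCC-capable storage:

-- Step 1: create columnar-compressed and bulk load
CREATE TABLE fact_sales (id NUMBER, amt NUMBER)
  COLUMN STORE COMPRESS FOR QUERY HIGH;

INSERT /*+ APPEND */ INTO fact_sales
SELECT id, amt FROM staging_sales;
COMMIT;

-- Step 2: recompress and tier down once the segment goes quiet
ALTER TABLE fact_sales ILM ADD POLICY
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT
  AFTER 6 MONTHS OF NO ACCESS;

ALTER TABLE fact_sales ILM ADD POLICY
  TIER TO low_cost_ts;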

30 Fast, Flexible Loads & Queries on Columnar
Automatic Data Optimization for mixed use: fastest loads with uncompressed data, fastest queries with columnar.
Mixed workloads often use a Java application or third-party tools that insert and update data without bulk loads, so they cannot load directly into columnar format.
Step 1: Load uncompressed, using conventional inserts and updates: fast loading, plus the flexibility of using a regular OLTP application for loading.
Step 2: ADO moves data to row compression, columnar compression, or low-cost storage once updates cool down: faster queries and a 3-10x smaller footprint.

31 Oracle Intelligent Storage Protocol (OISP)

32 Introduction to NFS
NFS, the Network File System, was originally developed by Sun Microsystems in 1984 and is built on RPC.
NFS is a distributed file system protocol and an open standard that anyone can implement.
It provides secure data access anywhere in the enterprise.
It is commonly implemented with TCP over IP, and can use RPC over RDMA.
Thanks to Moore's Law, the choice between block and file protocols is now a matter of convenience, not performance.
Speaker notes: Show of hands: how many people like NFS storage for Oracle databases? How many wish NFS storage for Oracle databases was faster? Going back in history, Sun Microsystems released the Network File System in 1984; as Sun declared "the network is the computer," it delivered a distributed file system so data was accessible to anyone who should be able to access it. NFS was a breakthrough in simplifying data sharing, and as an open standard it gained wide adoption. Common NFS implementations are based on the simple, versatile TCP/IP stack, but advances in Oracle integration are bringing RPC over RDMA to the commercial market, and as this talk will show, that advancement changes the game for database storage. NFS is a feature-rich file system protocol that, compared to a block protocol, takes considerably more CPU to transmit I/O. While this limited the range of use cases for NFS in the 1980s, Moore's Law has delivered enough CPU bandwidth that in many practical use cases file and block protocol performance is close enough that your choice of protocol is a matter of convenience, not performance.

33 Direct NFS Client
A feature introduced in Oracle Database 11g Release 1: an NFS client embedded in the Oracle database kernel.
Improves NAS storage performance, high availability, and scalability for Oracle databases.
Vastly reduces system CPU utilization, yielding significant cost savings for database storage.
Simplifies client management and eliminates configuration errors.
Consistent configuration and performance across different platforms, even Microsoft Windows.
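A hedged sketch of pointing Direct NFS at a filer and verifying it is active; server names, addresses, and paths are illustrative, and the oranfstab format is per Oracle's documentation, so verify against your release:

# $ORACLE_HOME/dbs/oranfstab
server: zfssa1
local: 192.168.1.10
path: 192.168.1.20
export: /export/db1/datafile mount: /oradata/datafile

-- After the instance restarts, confirm Direct NFS is serving I/O:
SELECT svrname, dirname FROM v$dnfs_servers;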

34 Direct NFS Architecture
(Diagram: database processes connected through the Direct NFS client to NAS storage.)
Every Oracle process (LGWR, DBWR, PQ slaves, RMAN) has its own I/O queue and its own TCP connection to NAS storage.
Parallel network paths provide scalability and failover.
Because of this parallel architecture, Direct NFS can issue thousands of concurrent operations.
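Those per-process connections can be observed directly; a minimal sketch (V$DNFS_CHANNELS per the 12c reference, column set varies by release):

-- One row per open Direct NFS network channel, keyed by process number
SELECT pnum, svrname, path
FROM   v$dnfs_channels;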

35 Oracle Intelligent Storage Protocol
Oracle Database on ZFS Storage keeps database files (control files, data, online redo) and recovery files (archived redo, backup sets, image copies) on the appliance, with the Oracle Intelligent Storage Protocol in between.
A basic Oracle Database system consists of an Oracle database and a database instance, an object-relational database management system (RDBMS) with three major structure types: memory structures, process structures, and storage structures.
(Diagram: the database processes and memory in white; NFS on ZFS Storage, where the control files, datafiles, and redo log files live, in red; OISP as the green communication layer between them.)
OISP is the communication method between the database and storage. It is unique to Oracle and ZFS Storage: no third-party storage, and no other storage in Oracle's portfolio, has OISP.

36 Setting Up An Oracle Database Without OISP
Many steps to configure. Oracle best practice for database deployment is 7 shares, each needing its own record-size and LogBias tuning:
Production: 7 shares + 7 record-size tunes + 7 LogBias tunes = 21 steps
Test: 7 shares + 7 record-size tunes + 7 LogBias tunes = 21 steps
Dev: 7 shares + 7 record-size tunes + 7 LogBias tunes = 21 steps
Backup & recovery: best practice is 4 shares + 4 record-size tunes + 4 LogBias tunes = 12 steps
The more systems you need, the more steps, and the higher the OpEx.

37 Oracle Intelligent Storage Protocol (OISP)
Only on Oracle: simplified setup and tuning for Oracle Database 12c. The database exchanges I/O metadata with the ZFS Storage Appliance, which dynamically tunes critical storage parameters; OISP dynamically tunes 65% of the critical factors for optimal database performance, reducing administration time.
Speaker notes: Since the acquisition, Sun Storage has been deepening its integration with the Oracle database; Oracle Database runs best on Oracle Storage. Oracle Intelligent Storage Protocol, OISP for short, is a product of that integration work, a truly unique language between the Oracle database and ZFS Storage, available only on ZFS Storage. OISP starts by understanding the I/O requirements of Oracle Database data, everything from tablespaces to indexes, redo logs, and RMAN backups. Used in conjunction with forthcoming ZFS Storage Appliances, OISP communicates Oracle Database metadata about I/O patterns to ZFS Storage Appliances, and only ZFS Storage Appliances. The appliance then dynamically tunes critical I/O parameters to maximize query performance. In its initial implementation, OISP can use database metadata to reduce the configuration of storage for Oracle Database by more than 65%.

38 Setting Up An Oracle Database With OISP
Fewer steps to configure:
Production: 2 shares + 0 record-size tunes + 0 LogBias tunes = 2 steps
Test: 2 shares + 0 record-size tunes + 0 LogBias tunes = 2 steps
Dev: 2 shares + 0 record-size tunes + 0 LogBias tunes = 2 steps
Backup & recovery: 2 shares + 0 record-size tunes + 0 LogBias tunes = 2 steps

39 Auto Tuning of Record Size, LogBias
Without OISP: the NFS server exposes multiple shares, each with its own admin-set record size (RS) and LogBias (LB): mnt/dbname/redo, control, pfile, datafile, tempfile, chgtrack, and backup. Multiple shares are hard to manage.
With OISP: two shares, with OISP setting RS and LB: mnt/dbname/logfile (redo) and mnt/dbname/datafile (control, pfile, datafile, tempfile, chgtrack, backup).
The redo share needs LogBias = latency, which pushes I/O to write flash and waits for acknowledgement, appropriate when there is not a lot of pending I/O. The datafile share needs LogBias = throughput, which aggregates (batches) I/O and pushes it straight to spinning disk.
Today, per best practices, you configure 7 mount points/shares per database and set the record size and LogBias individually per share, so one database is 21 "tunes": (set up share + select record size + select LogBias) x 7. With OISP, record size and LogBias are set dynamically without admin intervention: configure 2 shares/mount points, one for datafiles and one for logfiles, and OISP automatically sets the record size for the file types and the LogBias for the directory. One database is now 2 "tunes": set up the logfile share and the datafile share.
Fewer shares also benefit snapshot and cloning performance.

40 Architecture
(Diagram: Oracle instance → Oracle Direct NFS → RPC → TCP → IP → NIC → link → NFS server → ZFS file system, with hints carried on I/O requests.)
How OISP works: Oracle Disk Manager (ODM) hints are passed via NFSv4 compound operations through Oracle Direct NFS to the NFS server and the ZFS file system.
Keys to the architecture:
Oracle Direct NFS is required; OISP does not work with block protocols or kernel NFS, dNFS only.
ZFS Storage must be running the 8S (AK8) software, not available until early September when the new ZS3 hardware product line launches.
Direct NFS issues an SNMP call to the storage to enable the protocol; ZFS Storage responds appropriately, activating OISP, while non-ZFS storage responds incorrectly and OISP stays off.
NFSv4 compound operations can carry hints; NFSv3 has no compound and therefore cannot pass hints.

41 Use Case Example: RMAN Backup
Non-OISP, untuned: 420 MBPS.
OISP-enabled: 734 MBPS, a 1.7x improvement over the untuned setup, and equivalent to a manually tuned setup (share logbias = throughput).

42 Summary: Oracle Advanced Compression + Oracle Database 12c
Advanced Row Compression
Advanced LOB Compression
Advanced LOB Deduplication
Heat Map (object and row level)
Automatic Data Optimization
Temporal Optimization
Plus Hybrid Columnar Compression (Exadata, ZFSSA, Axiom)

43 Summary
Partitioning: an enabler for data classification
Compression: reduces the data storage footprint and improves performance
Tiered storage: a key ILM strategy to lower storage usage and costs
Heat Map: automatically tracks data access patterns
Automatic Data Optimization: automates data movement for compression and storage tiering, enabling ILM for Oracle Database data
ZFS Storage Appliance: database-aware storage
Oracle Intelligent Storage Protocol: ZFS-specific administration reduction


