Oracle Data Pump
Carl Dudley, University of Wolverhampton, UK
UKOUG SIG Director
- Working with Oracle since 1986
- Oracle DBA - OCP Oracle7, 8, 9, 10
- Oracle DBA of the Year, 2002
- Oracle ACE Director
- Regular presenter at Oracle conferences
- Consultant and trainer
- Technical editor for a number of Oracle texts
- UK Oracle User Group Director
- Member of IOUC
Agenda
- Oracle10g Data Pump Environment
- Data Pump Exports
- The Master Table
- Data Pump Import
- Attaching to Data Pump Jobs
- Performance Tests
- Data Pump and External Tables
- Summary
The Data Pump Utility
- Enhanced Export and Import utility used for a variety of purposes:
  - Produce logical dumps of database objects
  - Reorganize database storage
  - Transfer data across systems
  - Upgrade (migrate) to different versions of Oracle
  - Store data offline for future use
  - Perform TableSpace Point-In-Time Recovery (TSPITR)
- Essential features:
  - Users may export/import their own objects (or row subsets)
  - Data Pump can use the direct path or external table method
  - Easiest method to restore a single table
  - Cannot be used to recover data
  - The Data Pump export file is a binary file in an "internal" Oracle format
  - Export does not drop exported objects
  - Import can create objects as well as import rows
Data Pump Architecture

[Diagram: client shadow processes (one per attached client) communicate with the master control process through status and control queues; worker processes, using direct path or the external table API (optionally with parallel processes), transfer data and metadata between the database and the dump file set (data, metadata, master table); a log file is also written]
Data Pump Architecture (continued)
- Shadow process
  - Creates a job, which includes the master table, master control process and queues
  - Checks job status during the run
  - If the client process detaches, the other processes remain active
    - Another shadow process can be invoked to connect to the job
    - Need to know the job name, which can be seen in user_datapump_jobs (see the query below)
    - Allows a change of parameters, e.g. PARALLEL
- Master control process
  - Controls execution and sequencing
  - Divides processing among worker processes
  - Manages information in the master table and log file
- Worker process
  - Loads and unloads data and metadata
  - When using the external table API, the number of worker processes can be set by the PARALLEL parameter (Enterprise Edition only)
  - Maintains the master table (type of object being handled, etc.)
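The job name needed for re-attaching can be found from the data dictionary; a minimal query (the column list is assumed from the standard view definition):

SELECT job_name, operation, job_mode, state
FROM   user_datapump_jobs;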
Directories for Data Pump
- Output is server-based, so directory objects are required to ensure security
- Directory objects must be created by a suitably privileged user, typically sys
  - Necessary because the privileged 'oracle' account is used to write the files, which presents a security risk
- READ and WRITE access on the directory must be granted to the Data Pump user
  - Oracle reads and writes files in the directory on the user's behalf
- The DATA_PUMP_DIR directory is used by default when no DIRECTORY is specified
  - Pre-defined on installation on Windows and UNIX
  - On Windows, if setting the DATA_PUMP_DIR environment variable, the directory name must be UPPERCASE

C:\> SET DATA_PUMP_DIR=DATA_PUMP_DIR

CREATE DIRECTORY dpump_dir AS 'c:\extfiles';
GRANT READ,WRITE ON dpump_dir TO fred;
Finding Permissions on Directories

SELECT grantee, privilege, directory_name
FROM   all_tab_privs t, all_directories d
WHERE  t.table_name = d.directory_name
ORDER  BY d.directory_name, t.privilege;

GRANTEE  PRIVILEGE  DIRECTORY_NAME
FRED     READ       DPUMP1_DIR
FRED     WRITE      DPUMP1_DIR
PUBLIC   READ       DPUMP2_DIR
PUBLIC   WRITE      DPUMP2_DIR
FRED     READ       FILE1_DIR
Data Pump Queues
- Two queues observed in dba_queues (names contain timestamps):
  - KUPC$S_1_... : status queue
  - KUPC$C_1_... : control queue
- The queue table used by both queues is observed in dba_queue_tables: KUPC$DATAPUMP_QUETAB
- In Release 2, Data Pump needs to have a Streams pool configured, as sketched below
  - Requires STREAMS_POOL_SIZE > 0
  - Or use Automatic Shared Memory Management (ASMM): SGA_TARGET > 0
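A sketch of meeting the Streams pool requirement; the sizes shown are arbitrary illustrations, not recommendations:

-- Explicitly size the Streams pool
ALTER SYSTEM SET streams_pool_size = 64M SCOPE = BOTH;

-- Or let ASMM manage it as part of the total SGA
ALTER SYSTEM SET sga_target = 1G SCOPE = SPFILE;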
Methods of Exporting/Importing
- Can interactively stop and restart jobs by attaching from another session
- Multiple clients (expdp) can attach to the same export job
- Certain operations can be performed within OEM
- All imported rows are placed in new blocks beyond the table HWM (no searching for free space)
- Data Pump uses direct path mode whenever possible
  - Structures such as clustered tables, or tables with triggers and/or active referential constraints, prevent this
  - The (slower) external table API is used instead
- Do not use sys except at the request of Oracle Technical Support
Data Pump Exports
Data Pump Export Levels
- Table
  - Specific tables can be exported (with or without the data)
  - Specific partitions and subpartitions
  - Row subsets using query specifications (forces the external table method)
- Schema (default level)
  - Allows export of all objects owned by one user
  - DBAs may use this to export a series of users
- Tablespace
  - Transportable tablespaces
  - Tablespace-level export
- Full
  - DBAs may export all objects in the database except those owned by sys

expdp amy/amypw DIRECTORY=dpump_dir DUMPFILE=amy_emp.dmp
  QUERY=emp:"WHERE job='CLERK' AND sal<900"
Exporting Tables from Different Schemas
- Original Export allowed DBAs to export tables owned by different users
- Cannot be done directly in Oracle10g Data Pump
  - All specified tables must reside in the same schema
- Must perform schema-level exports to export objects across schemas
  - Requires the EXP_FULL_DATABASE privilege
  - All objects (with their dependents) are exported
- Restriction removed in Oracle11g Release 2

exp system/manager tables=fred.emp,sh.sales,scott.dept

About to export specified tables by conventional path
Current user changed to FRED
. . exporting table EMP  14 rows exported
Current user changed to SH
. . exporting table SALES  ... rows exported
Exporting Tables from Different Schemas - Workaround
- Can specify a list of tables from within the schemas
- Note the need to escape the double-quote characters (Windows)
  - On UNIX, special characters such as parentheses and quotes may also need to be escaped
- Subqueries can also be used

expdp system/manager DIRECTORY=dpump_dir DUMPFILE=test.dmp
  SCHEMAS=fred,sh,scott INCLUDE=TABLE:\"IN('EMP','SALES','DEPT')\"

expdp system/manager DIRECTORY=dpump_dir DUMPFILE=test2.dmp
  INCLUDE=TABLE:\"IN (SELECT tname FROM tab
                      WHERE tname LIKE '%EMP%' AND tabtype = 'TABLE')\"

expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=trg.dmp
  INCLUDE=TRIGGER:\"IN (SELECT trigger_name FROM user_triggers
                        WHERE table_name = 'EMP')\"
Exporting the Metadata
- Data, metadata or both (the default) can be exported
  CONTENT=DATA_ONLY | METADATA_ONLY | ALL
- Metadata is written as XML (for portability) using dbms_metadata
  - Around seven times bigger than a normal export
- Schema-level dump of metadata:

expdp fred/fred DIRECTORY=dpump_dir DUMPFILE=meta.dmp CONTENT=METADATA_ONLY
Compressing and Encrypting
- In 10g Release 2, compression of metadata can occur before an export
  COMPRESSION=METADATA_ONLY | NONE
  - Uncompressed automatically during import
- Specific columns in tables may now be stored in encrypted form
  - Such columns can be re-encrypted in the dumpfile set if a password is supplied
    ENCRYPTION_PASSWORD=password
  - Otherwise encrypted column data is dumped as clear text
  - The same password is needed on import
  - Requires Transparent Data Encryption, part of the Advanced Security Option
- A combined example is sketched below
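A sketch combining the two Release 2 parameters; the user, directory, file name and password are illustrative:

expdp hr/hrpw DIRECTORY=dpump_dir DUMPFILE=hr_meta.dmp
  COMPRESSION=METADATA_ONLY ENCRYPTION_PASSWORD=mysecret

impdp hr/hrpw DIRECTORY=dpump_dir DUMPFILE=hr_meta.dmp
  ENCRYPTION_PASSWORD=mysecret

The same password must be supplied on import, otherwise the encrypted column data cannot be recovered.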
Filtering Objects to Export
Object types that can be filtered include:
- Tables
- Indexes
- Triggers
- Views
- Procedures
Selecting Objects to Export
- Export can EXCLUDE or INCLUDE certain objects
  - Can be used to exert fine control over what is exported
- Excludes all indexes, and all triggers with names beginning with 'COPY':

expdp amy/amypw DIRECTORY=dpump_dir DUMPFILE=amy.dmp
  EXCLUDE=INDEX,TRIGGER:\"LIKE 'COPY%'\"

- Exports only table-level grants:

expdp amy/amypw DIRECTORY=dpump_dir DUMPFILE=amy.dmp INCLUDE=TABLE/GRANT

- Exports only tables with names like 'DEP%':

expdp fred/fred DIRECTORY=dpump_dir DUMPFILE=amy3.dmp INCLUDE=TABLE:\"LIKE 'DEP%'\"

- If the colon and escaped characters are missed, the export defaults to schema level!
- The table name must be uppercase; the backslash is the escape character
Object Paths
- The set of object paths can be seen in datapump_paths

OBJECT_PATH        FULL_PATH
INDEX              TABLE_EXPORT/TABLE/INDEX
TABLE/INDEX        TABLE_EXPORT/TABLE/INDEX
INDEX              TABLE_EXPORT/TABLE/INDEX/INDEX
INDEX/INDEX        TABLE_EXPORT/TABLE/INDEX/INDEX
TABLE/INDEX/INDEX  TABLE_EXPORT/TABLE/INDEX/INDEX
CONSTRAINT         TABLE_EXPORT/TABLE/CONSTRAINT
TABLE/CONSTRAINT   TABLE_EXPORT/TABLE/CONSTRAINT

- This view contains 1903 rows
- The types of exports can be seen in dba_export_objects (571 rows)
Data Pump Export Files
- Perform exports in parallel for increased performance (PARALLEL=integer)
  - The dump file set will consist of one or more files, up to the value of PARALLEL
- Can use a substitution variable in the filename for automatic naming
  - For example: mydumpfileset%U.dmp
  - %U can have values from 1 to 99
- The export can also be created in multiple files based on a file size limit
  FILESIZE=integer[B | K | M | G]
  - Multiple file names or %U are required if multiple files are needed
- Multiple directories can be used: dpdir1:f1.dmp,dpdir2:f2.dmp
- The size of each file is independent of the direct or external table method
- Export will not overwrite existing files
- The master table cannot be stored across multiple files in an export
  - 400,000 objects (10,000 tables) creates a master table of 189MB
  - Make FILESIZE big enough to store the master table
- A combined example is sketched below
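A sketch pulling these parameters together; names and sizes are illustrative. Up to four worker streams write files of at most 2GB each, named via the %U substitution variable:

expdp system/manager FULL=y DIRECTORY=dpump_dir
  DUMPFILE=fulldb%U.dmp FILESIZE=2G PARALLEL=4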
Export and Block Corruption
- Data Pump Export does not detect corrupted blocks

SQL> select count(*) from scott.empb;
select count(*) from scott.empb
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 8, block # 17)
ORA-01110: data file 8: 'C:\T3_F1'

C:\>expdp scott/tiger dumpfile=dp:test.cor tables=empb

Export: Release ... Production on Sunday, 07 May ...
. . exported "SCOTT"."EMPB"  ... KB  ... rows
Master table "SCOTT"."SYS_EXPORT_TABLE_04" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_TABLE_04 is:
  C:\ATEST.COR
Job "SCOTT"."SYS_EXPORT_TABLE_04" successfully completed at 09:29:37
Data Pump Interfaces
- Interactive command line (very limited functionality)
- Command line
- Parameter file
- Data Pump API
- Database Control (OEM)
Interactive Method
- Schema-level export is automatically invoked
- Can use a default directory

c:\>expdp directory=data_pump_dir

Export: Release ... on Thursday, 22 September ...
Copyright (c) 2003, Oracle. All rights reserved.

Username: scott
Password: xxxxx

Connected to: Oracle Database 10g Enterprise Edition Release ...
With the Partitioning, OLAP and Data Mining options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** directory=data_pump_dir
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: ... MB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EMPBIG"  ... MB  ... rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORA1\DPDUMP\EXPDAT.DMP
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 20:48:31
Command Line Method

C:\>expdp scott/tiger tables=empbig,emp directory=dp dumpfile=emp.dmp job_name=q1

Export: Release ... on Sunday, 25 July ...
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release ...
With the Partitioning, OLAP and Data Mining options
Starting "SCOTT"."Q1": scott/******** tables=empbig,emp directory=dp dumpfile=emp.dmp job_name=q1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: ... MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EMPBIG"  ... MB  ... rows
. . exported "SCOTT"."EMP"  ... KB  ... rows
Master table "SCOTT"."Q1" successfully loaded/unloaded
Dump file set for SCOTT.Q1 is:
  C:\EMP.DMP
Job "SCOTT"."Q1" successfully completed at 18:20:50
Data Pump API
- PL/SQL interface allowing routines to be constructed and scheduled
- Can execute jobs and set parameters

DECLARE
  handle1 NUMBER;
BEGIN
  handle1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EX_JOB_1','LATEST');
  DBMS_DATAPUMP.SET_PARAMETER(handle1,'KEEP_MASTER',1);
  DBMS_DATAPUMP.ADD_FILE(handle1,'scott.dmp','DPDIR');
  DBMS_DATAPUMP.METADATA_FILTER(handle1,'SCHEMA_EXPR','IN(''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(handle1);
  DBMS_DATAPUMP.DETACH(handle1);
END;
/
Data Pump Export Features
- Flashback exports (time- or SCN-based) can be generated as of a specific recent point in time (consistent exports)
  FLASHBACK_SCN=234671
  FLASHBACK_TIME='"... :10:00"'
- Can be executed as a job and subsequently stopped and resumed
- Estimates the size of export file(s) based on blocks or statistics
  - Option to perform an 'estimate only' export (ESTIMATE_ONLY=Y)
  - Does not account for metadata
- Export can be performed from a remote database across a network
Estimates of Export Size
- Export estimates the size of export file(s) based on blocks or statistics
  ESTIMATE=BLOCKS|STATISTICS
- Option to perform an 'estimate only' export (ESTIMATE_ONLY=Y)
- Test conducted on a 220,000-row employee table:

Availability of statistics  Value of ESTIMATE    Estimated value
No statistics               ESTIMATE=BLOCKS      18M
No statistics               ESTIMATE=STATISTICS  8.058M
Statistics (computed)       ESTIMATE=STATISTICS  14.83M

- Actual size of the table export = 15.08M
Exporting from a Remote Database
- Based on database links and uses external tables
- If the export is run on a read-only database, it must be remote
  - Need to maintain the master table
- Creates the dumpfile set on the instance where the job is running
- Network bandwidth dictates performance
  - Parallel operations could saturate the link

expdp scott/tiger DIRECTORY=dp NETWORK_LINK=source_database_link
  DUMPFILE=net_exp.dmp LOGFILE=net_exp.log

[Diagram: expdp reads from the remote database across a database link; the export file is written on the local server]
The Master Table
Data Pump Master Table
- Created and maintained during the export operation
- Same name as the Data Pump job name, e.g. sys_export_schema_03
  - Can be set with the JOB_NAME parameter
  - Avoid job names like emp-history: Oracle will attempt to build a master table called emp
- Automatically dropped when the export completes successfully
  - Can be preserved using KEEP_MASTER=y
- The Data Pump user will need quota for the master table data
- Has over 70 columns
- Final object to be placed in the dump file set
- Allows monitoring of the export process and maintains context for stop/restart
Master Table Columns
- A selection of the master table's 70+ columns:

PROCESS_ORDER, BASE_PROCESS_ORDER, TOTAL_BYTES, DUPLICATE, BASE_OBJECT_TYPE,
METADATA_IO, DUMP_FILEID, BASE_OBJECT_NAME, DATA_IO, DUMP_POSITION,
BASE_OBJECT_SCHEMA, CUMULATIVE_TIME, DUMP_LENGTH, ANCESTOR_PROCESS_ORDER,
PACKET_NUMBER, DUMP_ALLOCATION, DOMAIN_PROCESS_ORDER, OLD_VALUE, SEED,
COMPLETED_ROWS, UNLOAD_METHOD, LAST_FILE, ERROR_COUNT, PARALLELIZATION,
USER_NAME, ELAPSED_TIME, GRANULES, OPERATION, OBJECT_TYPE_PATH, SCN,
JOB_MODE, OBJECT_PATH_SEQNO, GRANTOR, CONTROL_QUEUE, OBJECT_TYPE, XML_CLOB,
STATUS_QUEUE, IN_PROGRESS, NAME, REMOTE_LINK, OBJECT_NAME, VALUE_T,
VERSION_DB_VERSION, OBJECT_LONG_NAME, VALUE_N, TIMEZONE, OBJECT_SCHEMA,
IS_DEFAULT, STATE, ORIGINAL_OBJECT_SCHEMA, FILE_TYPE, PHASE, PARTITION_NAME,
USER_DIRECTORY, GUID, SUBPARTITION_NAME, USER_FILE_NAME, START_TIME, FLAGS,
FILE_NAME, BLOCK_SIZE, PROPERTY, EXTEND_SIZE, METADATA_BUFFER_SIZE,
COMPLETION_TIME, FILE_MAX_SIZE, DATA_BUFFER_SIZE, OBJECT_TABLESPACE,
PROCESS_NAME, DEGREE, SIZE_ESTIMATE, LAST_UPDATE, PLATFORM, OBJECT_ROW,
WORK_ITEM, ABORT_STEP, PROCESSING_STATE, OBJECT_NUMBER, INSTANCE,
PROCESSING_STATUS, COMPLETED_BYTES
Indexes on Master Table
- Indexes are built in the user's default tablespace:

sys_mtable_00000d5f6_ind_1   (object_schema, object_name, object_type)
sys_mtable_00000d5f6_ind_2   (base_process_order)
sys_mtable_00000d5f6_ind_3   (object_path_seqno)
sys_c006183                  (process_order, duplicate)
Data Pump Master Table (continued)
- The reason why Data Pump export cannot run against a READ ONLY database
  - Writes are performed on the master table
  - But Data Pump can work with a READ ONLY database via a network connection
    - Can export data out of read-only standby databases
- Allows the restart of Data Pump jobs
  - Records the current state of every object exported or imported
  - Holds locations in the dump file set, the status of worker processes, the current job status and restart information
Identifying Contents of Master Table
- To find the file names used by a Data Pump job:

SELECT user_file_name
FROM   <master_table_name>
WHERE  process_order IN (-22,-21);

- To find the kinds of database objects in the export:

SELECT object_type, COUNT(*)
FROM   <master_table_name>
GROUP  BY object_type;

- To find the tables collected in the export:

SELECT object_schema, object_name
FROM   <master_table_name>
WHERE  process_order > 0
AND    object_type = 'TABLE';
Using Data Pump as Part of a Backup Strategy
- A full database-level export is a logical backup of the database
- Slower than OS physical backups, but can be parallelized
- Useful for restoring single tables after a DROP command
- The database must be open to perform an export
- Export guarantees a read-consistent view as of the time of the export, or at a specified flashback SCN or time
- The database can be placed in RESTRICTED mode by DBAs to guarantee a consistent full-database export
  - Only users with the RESTRICTED SESSION system privilege can connect

STARTUP OPEN RESTRICT
Using Data Pump as Part of a Backup Strategy (continued)
- Data Pump can be used to restore or reorganize a database
  - Rebuild a database to effect a change in the block size
    - Must prebuild all tablespaces first
  - Move tables across users, reduce fragmentation and row-migration effects
- No rollforward recovery is possible
- Set VERSION on export so that the file can be read by a previous Oracle release (see the sketch below)
  - Values as far back as 9.2 are allowed
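A sketch of setting VERSION; the names are illustrative. Objects that are incompatible with the target release are omitted from the dump:

expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=down.dmp VERSION=10.1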
Data Pump Import
Data Pump Import
- The only utility that can read Data Pump export files
- Can selectively import individual database objects and types of objects using EXCLUDE or INCLUDE
- Can import only the metadata to a special sqlfile
  - The file contains the commands needed to recreate the original objects
  - Edit the CREATE statements before submission to the target database
  - Useful for moving from a development to a live database
  - Contains all the code for the procedural objects
  - If TRACE=2 is specified, the XML is also included
- If the job stops on a corrupted object, the import can jump over it on restart with START_JOB=SKIP_CURRENT submitted in an attached session, as sketched below

impdp fred/fred DIRECTORY=dpump_dir DUMPFILE=f1.dmp SQLFILE=fred_ddl.sql
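A sketch of skipping a failing object; the job name is illustrative. Attach to the stopped job, then restart it past the current object:

impdp fred/fred ATTACH=SYS_IMPORT_FULL_01
Import> START_JOB=SKIP_CURRENT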
Data Pump Import DDL Transformations - Remapping
- Schemas can be remapped from one user to another with REMAP_SCHEMA
  - Loads data from two schemas; fred's data is loaded into amy's schema:

impdp system/manager DIRECTORY=dpump_dir DUMPFILE=users.dmp
  SCHEMAS=fred,scott REMAP_SCHEMA=fred:amy

- Objects can be moved to different tablespaces using REMAP_TABLESPACE
  - Much more convenient than original exp/imp
  - The XML used for metadata allows easy transformation via XSL-T
- Files can be mapped to different file names using REMAP_DATAFILE
- MASTER_ONLY=Y (hidden parameter) will import only the master table
  - OEM uses the master table for other purposes
- Consider disabling referential constraints and triggers during import
Data Pump Import DDL Transformations (continued)
- The TRANSFORM parameter can prevent generation of:
  1. STORAGE and TABLESPACE clauses
  2. STORAGE clauses only

TRANSFORM=SEGMENT_ATTRIBUTES|STORAGE:{y|n}[:TABLE|INDEX]

- Applies to both tables and indexes unless TABLE or INDEX is specified
- STORAGE controls the use of existing storage parameters (default is y)
- SEGMENT_ATTRIBUTES controls the preservation of the tablespace (default is y)
  - Storage parameters are always ignored if SEGMENT_ATTRIBUTES is set to 'n'
- Example, omitting only the storage clauses for tables:

TRANSFORM=STORAGE:n:TABLE
Import of Data into Tables Already Present
- Import of rows is based on the value of TABLE_EXISTS_ACTION (an example of appending is sketched below):

Value     Action
SKIP      Leaves the table unchanged (default when not in DATA_ONLY mode)
APPEND    Adds new rows using the external table method (default in DATA_ONLY mode)
TRUNCATE  Removes existing data before importing new rows using the external table method
REPLACE   Drops the table and recreates it
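A sketch of loading fresh rows into a table that already exists; the names are illustrative:

impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp.dmp
  TABLES=emp TABLE_EXISTS_ACTION=APPEND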
Data Pump Import from a Remote Database
- Transfer data between development, production and standby databases
  - The source database can be read only
  - Schema owner(s) on the source must have access to a locally managed temporary tablespace
- No dumpfile is created
- Requires a database link; does not use network pipes
- Uses direct path and performs INSERT /*+APPEND*/ ...

CREATE DATABASE LINK test_public USING 'test';

impdp system/manager NETWORK_LINK=test_public DIRECTORY=dpump_dir SCHEMAS=fred,scott

- The log file is written to dpump_dir
Data Pump Import from a Remote Database (continued)

impdp scott/tiger TABLES=emp DIRECTORY=dpump_dir NETWORK_LINK=source_database_link

[Diagram: impdp starts a server process and Data Pump job on the target database, which pulls the data across the database link from the remote database; the log file is written on the target]

- A flashback SCN or time can be specified only when importing from a remote database
- ESTIMATE can also be used on an import from a remote database
  - Instructs the source system to estimate how much data will be generated
Attaching to Data Pump Jobs
Output from a Running Export Session

C:\>expdp sh/sh directory=ext_dir dumpfile=sh.dmp job_name=j1

Export: Release ... Production on Thursday, 07 October ...
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release ... Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SH"."J1": sh/****** directory=ext_dir dumpfile=sh.dmp job_name=J1
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: ... MB
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Setting up the Second (attach) Session

c:\>expdp sh/sh attach=j1

Job: J1
  Owner: SH
  Operation: EXPORT
  Creator Privs: FALSE
  GUID: A3FA0099CB2A49429F2B754C114F9E05
  Start Time: Thursday, 07 October ...
  Mode: SCHEMA
  Instance: orac
  Max Parallelism: 1
  EXPORT Job Parameters:
    Parameter Name        Parameter Value
    CLIENT_COMMAND        sh/******** directory=ext_dir dumpfile=sh.dmp job_name=j1
    DATA_ACCESS_METHOD    AUTOMATIC
    ESTIMATE              BLOCKS
    INCLUDE_METADATA      ...
    LOG_FILE_DIRECTORY    EXT_DIR
    LOG_FILE_NAME         export.log
    TABLE_CONSISTENCY     0
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\SH.DMP
    bytes written: 4,096
  Worker 1 Status: ...

- Hitting Ctrl-C in the original session will automatically place the session in 'attach' mode
Controlling Data Pump Jobs

Export> STATUS
Job: J1
  Operation: EXPORT
  Mode: SCHEMA
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: C:\SH.DMP
    bytes written: 4,096
  Worker 1 Status: ...

Export> STOP_JOB
Are you sure you wish to stop this job ([y]/n): y

- STOP_JOB preserves the master table for future START_JOB commands
- KILL_JOB deletes the master table
- Could also issue ADD_FILE, TRACE and/or PARALLEL commands
  - Gives the opportunity to fix space-related problems and then restart the export
- CONTINUE_CLIENT starts the job; logging information is sent to the client session
- STATUS 120 will display job status information every 2 minutes
Keywords in Interactive Mode

Command                Effect
ADD_FILE               Add additional dump files
CONTINUE_CLIENT        Exit interactive mode and enter logging mode (restarts the job if it is in the stopped state)
EXIT_CLIENT            Stop the export client session, but leave the job running
HELP                   Display a summary of available commands
KILL_JOB               Detach attached client sessions and kill the current job
PARALLEL               Adjust the number of active worker processes for the current job (Enterprise Edition only)
START_JOB              Restart a stopped job to which you are attached
STATUS                 Show status of the current job and/or set the status display interval
STOP_JOB [=IMMEDIATE]  Stop the current job for later restart; IMMEDIATE aborts worker processes
Monitoring and Removing Failed Export Jobs
- Relevant views: dba_datapump_jobs, dba_datapump_sessions, v$session_longops
- On stopping an export, the job remains visible in dba_datapump_jobs
- If the export file is unavailable or corrupted, the job cannot be killed by attaching to the export
  - Need to physically drop the master table from the user schema
  - This indirectly removes the job record from dba_datapump_jobs

SQL> CONNECT scott/tiger
SQL> DROP TABLE sys_export_schema_02;
dba_datapump_jobs

OWNER  JOB_NAME             OPERATION  JOB_MODE  STATE         DEGREE  ATTACHED_SESSIONS  DATAPUMP_SESSIONS
SCOTT  X                    EXPORT     TABLE     NOT RUNNING   ...     ...                ...
SCOTT  SYS_IMPORT_TABLE_01  IMPORT     TABLE     NOT RUNNING   ...     ...                ...
SCOTT  Y                    EXPORT     SCHEMA    NOT RUNNING   ...     ...                ...
SCOTT  Z                    EXPORT     TABLE     NOT RUNNING   ...     ...                ...
SCOTT  T                    EXPORT     TABLE     STOP PENDING  ...     ...                ...

- Four exports have completed with KEEP_MASTER=Y
- Export T has recently been stopped by an attached session
- The import carries a default job name (SYS_IMPORT_TABLE_01)
- The combination of owner name and job name uniquely identifies a job
- Y is a schema-level export
- The datapump_sessions column is not documented
  - Refers to the number of sessions attached to the job's queues
Performance Tests
Performance – Example Scenario
- Test has 2.0 GB of data (16.2M rows) involving two fact tables
- Export (single stream):
  - Original export: 10 min 40 sec
  - Data Pump export: 3 min 12 sec
  - Expect 1.5 to 2 times single-stream speed
  - Higher factors of improvement possible, depending on the degree of parallelism and sufficient hardware
- Import:
  - Original import: 2 hr 26 min 10 sec
  - Data Pump import: 0 hr 3 min 5 sec
Performance – Further Tests
- Timings taken for sample employee tables containing 1, 0.5M, 1M, 2M, 4M and 8M rows
  - Original export
  - Data Pump export using direct path and external table
  - Original import
- Sizes of dump file sets compared

[Sample data: classic EMP-style rows (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO, SEQID), e.g. 7369 SMITH CLERK, 7499 ALLEN SALESMAN, 7521 WARD SALESMAN, 7566 JONES MANAGER]
Export Timings and Sizes

Timings (seconds)
Rows                       1      0.5M  1M   2M   4M   8M
Data Pump direct path      28     38    58   69   90   114
Data Pump external table   ...    51    70   105  120  146
Original export            2      21    32   52   100  192

Size of export file (MB)
Rows                       1      0.5M  1M   2M   4M   8M
Data Pump direct path      0.008  20    41   82   169  331
Data Pump external table   ...    ...   ...  ...  ...  ...
Original export            ...    23    47   89   190  389
Export Performance

[Chart: response time (up to 3 min) against rows (0.5M to 8M) for conventional-path export, Data Pump direct path and Data Pump external table]
Import Timings

Timings (seconds)
Rows                       1    0.5M  1M   2M   4M   8M
Data Pump direct path      7    12    16   25   46   86
Data Pump external table   ...  33    44   63   120  ...
Original import            2    15    73   157  306  ...
Import Performance

[Chart: response time (up to 5 min) against rows (0.5M to 8M) for original import, Data Pump direct path and Data Pump external table]
Access Methods
- Oracle uses external table mode for imports when restrictions due to object types prevent the use of direct path
- External table mode is used when the table:
  - Has referential constraints
  - Has active triggers
  - Has a global index present during a single-partition load
  - Has a domain index on a LOB column
  - Is in a cluster
  - Has fine-grained access control enabled in insert mode
  - Contains BFILE columns or columns of opaque types
  - Contains VARRAY columns with an embedded opaque type
Setting the Data Access Method
- Undocumented parameter ACCESS_METHOD
  - Can have the values EXTERNAL_TABLE and DIRECT_PATH
  - Restrictions may not necessarily cause the value to be overridden
- The actual method used can be observed in a trace of the master process(es)
  ... DATA_ACCESS_METHOD = ...
- Significant start-up time is spent in estimation
- The performance improvement is highest with large tables
- A sketch of forcing the method follows
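A sketch of forcing the external table method; since the parameter is undocumented its behaviour may vary between releases, and the names are illustrative:

expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp_et.dmp
  TABLES=emp ACCESS_METHOD=EXTERNAL_TABLE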
Data Pump Tracing
- Tracing can be enabled at a number of levels using the TRACE parameter
- Can also be achieved by setting an event in the server parameter file
  EVENT='39089 TRACE NAME CONTEXT FOREVER, LEVEL 0nnn0300'
- The first four digits identify the level of tracing
  - 1FF0300 (or 01FF0300) is the highest level
  - A lower setting provides standard tracing and can identify most errors

Item to be traced  Setting
API                0001
MCP                0008
FILE               0010
QUEUE              0020
WORKER             0040
DATA               0080
METADATA API       0100

MCP = Master Control Process
Data Pump Tracing (continued)
- Trace files are written to BACKGROUND_DUMP_DEST
- Format of the trace filename for the master process: <sid>_dm<integer>_<process_id>.trc

expdp amy/amypw DIRECTORY=dp DUMPFILE=amy.dmp TRACE=01FF0300
  produces db1_dm00_2198.trc

- Format of the trace filename(s), one for each worker process: <sid>_dw<integer>_<process_id>.trc
  e.g. db1_dw01_3167.trc
- There is also a 'component' trace file in USER_DUMP_DEST
- The Data Pump user must be a DBA to perform traced exports or imports
Sampling the Data
- Can specify a percentage of data to be sampled and exported
  - Allows a subset of data to be obtained for testing
  - SAMPLE="SCOTT"."EMP":50 causes 50% of the rows to be exported
  - The sample is based on a random selection of blocks (up to 100%)
- Tablespace storage can be decreased on import
  - Large datafiles in production mapped to smaller files in a test database
  - PCTSPACE can be used to specify a percentage reduction: TRANSFORM=PCTSPACE:50
    - Extent allocations are altered and the sizes of datafiles are adjusted
  - Needs to be used in conjunction with the SAMPLE parameter; the size of the sample and the storage reduction need to be compatible
- A combined example is sketched below
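A sketch of the two parameters working together; the names are illustrative. A 50% sample is exported, then imported with extent allocations scaled down to match:

expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp50.dmp
  SAMPLE="SCOTT"."EMP":50

impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp50.dmp
  TRANSFORM=PCTSPACE:50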
Data Pump and External Tables
Handling External Tables
- External table files can be both read and written

[Diagram: original tables in the database are unloaded to an external table file, which can then be read back through external tables ext_tab1 and ext_tab2]
Writing to External Tables
- Unload and transform data into a flat file
- In Oracle9i, external tables had read-only access via the ORACLE_LOADER access driver
- The ORACLE_DATAPUMP access driver is now able to write to external tables
  - Only CREATE TABLE AS SELECT statements are allowed
  - Subsequent DML is not allowed
- The resultant flat file is in a platform-independent, Oracle-proprietary format
- Allows transform operations on the data as it is moved
  - Joins can be performed on the data as it is loaded or unloaded
  - More flexible than a simple Data Pump export/import
- Data is written out in granules to allow parallel processing
  - Parallelism is possible even with a single output file, except for writes to tape devices
- The external table file can be used from within different databases
Writing to External Tables - Example
- Create an external table by unloading data from the database:

CREATE DIRECTORY ext_dir AS 'c:\oracle\db1\unload';
GRANT READ,WRITE ON DIRECTORY ext_dir TO fred;   -- necessary if FRED is going to use the directory

CREATE TABLE emp_dept_xt (ename, job, hiredate, dname, loc)
ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_dir
   LOCATION ('emp_dept_file.dmp'))
AS
SELECT e.ename, e.job, e.hiredate, d.dname, d.loc
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;
Writing to External Tables – Example (2)
- Create an external table from a previously created external table file:

CREATE TABLE emp_dept_from_xt
  (ename    VARCHAR2(10)
  ,job      VARCHAR2(9)
  ,hiredate DATE
  ,dname    VARCHAR2(14)
  ,loc      VARCHAR2(12))
ORGANIZATION EXTERNAL
  (TYPE ORACLE_DATAPUMP
   DEFAULT DIRECTORY ext_dir
   LOCATION ('emp_dept_file.dmp'));

- Note the absence of the AS clause
- Can create multiple tables containing subsets of columns from the file
Contents of the External Table

SELECT * FROM emp_dept_xt WHERE dname = 'SALES';

ENAME   JOB       HIREDATE     DNAME  LOC
ALLEN   SALESMAN  20-FEB-1981  SALES  CHICAGO
WARD    SALESMAN  22-FEB-1981  SALES  CHICAGO
MARTIN  SALESMAN  28-SEP-1981  SALES  CHICAGO
BLAKE   MANAGER   01-MAY-1981  SALES  CHICAGO
TURNER  SALESMAN  08-SEP-1981  SALES  CHICAGO
JAMES   CLERK     03-DEC-1981  SALES  CHICAGO

- Test to see if there are any discrepancies between the data:

SELECT e.ename, e.job, e.hiredate, d.dname, d.loc
FROM   emp e, dept d
WHERE  e.deptno = d.deptno
MINUS
SELECT * FROM emp_dept_xt;

no rows selected
Oracle11g New Features
Compressed and Encrypted Export Parameters
- Compressed exports
  COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
  - ALL and DATA_ONLY require the Advanced Compression option
  - METADATA_ONLY: metadata is compressed (the default)
  - NONE disables compression
- Encrypted exports
  ENCRYPTION={ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
  - Default is ALL if ENCRYPTION_PASSWORD is set
  - DATA_ONLY, METADATA_ONLY and ALL are available only in 11g
  ENCRYPTION_MODE={DUAL | PASSWORD | TRANSPARENT}
  - Governs whether the dump can be imported using a password, a wallet, or either
  ENCRYPTION_ALGORITHM={AES128 | AES192 | AES256}
  - Available only in 11g
- A combined example is sketched below
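A sketch combining the 11g parameters; COMPRESSION=ALL assumes the Advanced Compression option is licensed, and all names and the password are illustrative:

expdp hr/hrpw DIRECTORY=dpump_dir DUMPFILE=hr_sec.dmp
  COMPRESSION=ALL ENCRYPTION=ALL
  ENCRYPTION_MODE=PASSWORD ENCRYPTION_PASSWORD=mysecret
  ENCRYPTION_ALGORITHM=AES256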
Oracle 11g Data Pump Additional Features
- Legacy mode
  - Automatic translation of old exp commands into Data Pump syntax
  - Legacy scripts still work
- Overwrite existing dump files with REUSE_DUMPFILES
- Metadata diff
  - Requires a Change Management licence
- Objects having deferred segment creation are exported even if the segment is not built
  - Original export cannot do this
- REMAP_DATA
  - Masks data values; useful for moving data from production to test (see the sketch below)
- Useful white papers with examples are available
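A minimal sketch of REMAP_DATA for masking on the way out, assuming a hypothetical package mask_pkg; the function receives each stored value and returns its masked replacement:

-- Hypothetical masking function: substitutes a random value for each real salary
CREATE OR REPLACE PACKAGE mask_pkg AS
  FUNCTION mask_sal(p_sal NUMBER) RETURN NUMBER;
END mask_pkg;
/
CREATE OR REPLACE PACKAGE BODY mask_pkg AS
  FUNCTION mask_sal(p_sal NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN ROUND(DBMS_RANDOM.VALUE(1000, 5000));
  END mask_sal;
END mask_pkg;
/

expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=emp_masked.dmp TABLES=emp
  REMAP_DATA=scott.emp.sal:scott.mask_pkg.mask_sal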
Compression and Encryption Performance
- Export of a 2-million-row employee table with compression and encryption:

                          Dumpfile size  WinZip size of dumpfile  Time to export
Normal                    82.6M          17.3M                    44 secs
Compressed                15.2M          11.7M                    46 secs
Encrypted                 ...            ...                      ...
Encrypted and compressed  ...            ...                      ...
Summary
Differences between Data Pump and EXP/IMP
- Start-up time is longer
  - Designed for big jobs; typical start-up time could be 10 seconds (or longer)
- The stream format makes Data Pump exports 15% smaller
- The master table is placed at the end of the exported file in direct path mode
  - Import needs to locate the master table and build its indexes
  - Importing a small subset of the data from an export could take a long time
  - The master table needs to be maintained
- Metadata construction performance is about the same
  - The metadata is seven times bigger than that from original export
  - gzip can be used to compress the metadata
Points to Consider
- Faster
  - But watch out for lots of small tables
- More flexible
  - Can export a selection of procedures and functions
    - The dumpfile can act as a backup of the source code
  - Multiple files can easily be used in multiple directories
- Good for restructuring tables
  - Set up a good PCTFREE before importing data
  - Drop old tables, then import under new storage definitions
- Can import database objects without including rows
  - Useful for moving from a development to a live database
- Can effect the transfer of tablespaces across databases and networks without the need for read-only status
  - Very useful for OLTP databases
Future of Original Export and Import Utilities
- Import will remain available in future releases
  - Will handle export files from all versions
- Export is deprecated
  - Oracle9i export can be used with 10g for downgrade purposes
  - New features in 10g and 11g are not supported in exp
- Data Pump start-up overheads show with many small tables
  - Schema containing 200 tables, each with 14 rows of employee data:
    - Time for exp: 23 secs; time for expdp: 2 mins 13 secs
    - Size of exp dumpfile: 246K; size of expdp dumpfile: 2764K
Oracle Data Pump
Carl Dudley, University of Wolverhampton, UK
UKOUG SIG Director