New Size Based ADR Purge Policy in 12.2

With the introduction of the Automatic Diagnostic Repository (ADR) we got some automatic purging of logs and traces too, at least for the database. This purging is based on the age of the files. All files in ADR fall into two categories: files with a short life and files with a long life. Consequently there are two retention policies for these files, the SHORTP_POLICY and the LONGP_POLICY. The SHORTP_POLICY defaults to 720 hours, which is roughly a month, whilst the LONGP_POLICY defaults to 8760 hours, which is roughly a year. You can read more on setting the policies in MOS Note “Retention Policy for ADR (Doc ID 564269.1)”. If you are unsure which files are controlled by which policy, have a look at MOS Note “Which Files Are Part Of SHORTP_POLICY And LONGP_POLICY In ADR? (Doc ID 975448.1)”; basically, traces, core dumps and IPS packages have a short life, all others have a long life.
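As a quick, hedged sketch (the hour values below are arbitrary examples, not recommendations), both time-based policies can be inspected and changed per ADR home in adrci:

adrci> set home mmidb
adrci> show control
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 2160)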

This is a good starting point for housekeeping, but there might be situations where a lot of large traces are written in a short amount of time. This might fill up your filesystem, leading to a database outage. With Oracle Database 12.2 and onwards, there is an additional size based policy. This policy is not immediately visible:

[oracle@odax7-2m rdbms]$ adrci

ADRCI: Release 12.2.0.1.0 - Production on Fri Mar 15 08:07:49 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"
adrci> set home mmidb
adrci> show homes
ADR Homes:
diag/rdbms/mmidb/MMIDB

adrci>  show control

ADR Home = /u01/app/oracle/diag/rdbms/mmidb/MMIDB:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME
-------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ----------------------------------------
1312824711           72                   72                   2019-03-14 22:37:18.660133 +01:00        2019-03-14 09:13:26.445071 +01:00        2019-03-15 07:58:30.921596 +01:00        1                    2                    107                  1                    2018-10-02 09:11:29.809205 +02:00
1 row fetched

To see the size based policy (SIZEP_POLICY), we need to query it explicitly:

adrci> select sizep_policy from adr_control_aux;

ADR Home = /u01/app/oracle/diag/rdbms/mmidb/MMIDB:
*************************************************************************
SIZEP_POLICY
--------------------
18446744073709551615
1 row fetched

This number represents the maximum size of the specific ADR home in bytes. To get an idea of how much space is required to keep files for a given amount of time, or vice versa, we can use the “estimate” command.

adrci> estimate (LONGP_POLICY = 72)
Estimate
Short Policy Hours: 72
Long Policy Hours: 72
Size Policy Bytes: 2459925363

adrci> estimate (SIZEP_POLICY = 1000000000)
Estimate
Short Policy Hours: 26
Long Policy Hours: 26
Size Policy Bytes: 1000000000

So if we want to limit the size to 1GB, we can set the SIZEP_POLICY accordingly.

adrci> set control (SIZEP_POLICY = 1000000000)

adrci> select sizep_policy from adr_control_aux;

ADR Home = /u01/app/oracle/diag/rdbms/mmidb/MMIDB:
*************************************************************************
SIZEP_POLICY
--------------------
1000000000
1 row fetched

But remember, this does not take immediate effect. The database is the only facility that does automatic purging; all ADR homes of other facilities need explicit purging via the “purge” command. And even the database does its purging only once a week, and there is no way to change that, see MOS Note “Is there a way to control Auto_Purge Frequency done by the MMON ? (Doc ID 1446242.1)” for details. So you should consider using a separate purge job for that. The “purgeLogs” script (MOS Note “purgeLogs: Cleanup traces, logs in one command (Doc ID 2081655.1)”) might be a good starting point; even though it was designed for Exadata, Oracle Database Appliance etc., it should also do its job on standard installations.
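As a hedged sketch of such a housekeeping job (the age of 43200 minutes, i.e. 30 days, and the chosen purge types are just examples), all ADR homes could be purged from the shell like this:

for h in $(adrci exec="show homes" | grep diag); do
  # purge traces and core dumps older than 30 days in every ADR home
  adrci exec="set home $h; purge -age 43200 -type TRACE; purge -age 43200 -type CDUMP"
done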


Platform change and upgrade in one step – Manually

In a previous blog post I outlined the steps to migrate between platforms and versions in one step and ended up with an error from the XTTS scripts provided by Oracle.
Now I’ll show you how easy this is, even if you do it by hand.

Step 1: create initial backup

First, we capture the current SCN of the source database (the RMAN target). This SCN will be used later for the incremental backup.

SQL>  select current_scn from v$database;

CURRENT_SCN
-----------
    1446020

Then we create a classic backup of all tablespaces that shall be transported. Note that the tablespace is still online and read/write at this point, so there is no impact on any production processes besides the load generated by the backup itself.

RMAN> backup for transport allow inconsistent incremental level 0 format '/tmp/MMI_%U.bkp' tablespace mmi;

Starting backup at 2018-09-24 13:43:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=271 device type=DISK


channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/MMI/DATAFILE/mmi.296.987685353
channel ORA_DISK_1: starting piece 1 at 2018-09-24 13:43:54
channel ORA_DISK_1: finished piece 1 at 2018-09-24 13:43:57
piece handle=/tmp/MMI_02tdtqsq_1_1.bkp tag=TAG20180924T134353 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 2018-09-24 13:43:57

Step 2: restore the backup at the destination

For the restore, we need to have an empty database in place. This database may be of a different, e.g. higher, version than the source. We do the restore and the necessary platform conversion in one step. You could also restore first and convert afterwards, but this would require twice the storage.

RMAN> restore  from platform 'Linux x86 64-bit'  all foreign datafiles to new from backupset '/tmp/MMI_02tdtqsq_1_1.bkp';

Starting restore at 2018-09-24 14:05:00
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all foreign files in backup piece
channel ORA_DISK_1: reading from backup piece /tmp/MMI_02tdtqsq_1_1.bkp
channel ORA_DISK_1: restoring foreign file 5 to +DATA/MMIRECO/DATAFILE/mmi.301.987689101
channel ORA_DISK_1: foreign piece handle=/tmp/MMI_02tdtqsq_1_1.bkp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
Finished restore at 2018-09-24 14:05:16

At this point we have the files available at the destination, but they are not yet part of the destination database.

Step 3: create incremental backup

To keep the destination files up to date, we create an incremental backup at the source that covers all changes since the start of the initial (or previous incremental) backup. That’s why we captured the SCN beforehand. And we capture the SCN again right before this incremental backup, as the starting point for the next increment.

SQL>  select current_scn from v$database;

CURRENT_SCN
-----------
    1458731

Now we create the incremental backup.

RMAN>  backup for transport allow inconsistent incremental from scn 1446020 format '/tmp/MMI_%U.bkp' tablespace mmi;

Starting backup at 2018-09-24 13:54:20
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/MMI/DATAFILE/mmi.296.987685353
channel ORA_DISK_1: starting piece 1 at 2018-09-24 13:54:21
channel ORA_DISK_1: finished piece 1 at 2018-09-24 13:54:22
piece handle=/tmp/MMI_03tdtrgd_1_1.bkp tag=TAG20180924T135421 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2018-09-24 13:54:22

Step 4: apply incremental backup

The incremental backup that we created in step 3 can now be converted to the destination platform and applied to the datafiles.

RMAN> recover foreign datafilecopy '+DATA/MMIRECO/DATAFILE/mmi.301.987689101' from backupset '/tmp/MMI_03tdtrgd_1_1.bkp' from platform 'Linux x86 64-bit';

Starting restore at 2018-09-24 14:06:19
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/MMIRECO/DATAFILE/mmi.301.987689101
channel ORA_DISK_1: reading from backup piece /tmp/MMI_03tdtrgd_1_1.bkp
channel ORA_DISK_1: foreign piece handle=/tmp/MMI_03tdtrgd_1_1.bkp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 2018-09-24 14:06:21

Step 5: keep datafile copies current

This is simply done by repeating steps 3 & 4 until the final downtime takes place.

Step 6: final incremental backup

For the final transport we create a last incremental backup, this time with the tablespace set to read-only.

RMAN> sql "alter tablespace mmi read only";

sql statement: alter tablespace mmi read only


RMAN> backup for transport incremental from scn 1458731 format '/tmp/MMI_%U.bkp' tablespace mmi DATAPUMP FORMAT '/tmp/mmi_ts.dmpdp';

Starting backup at 2018-09-24 14:10:48
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=271 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_MMI_nffu":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Master table "SYS"."TRANSPORT_EXP_MMI_nffu" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_MMI_nffu is:
   EXPDP>   /u01/app/oracle/product/12.2.0.1/db/dbs/backup_tts_MMI_95169.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace MMI:
   EXPDP>   +DATA/MMI/DATAFILE/mmi.296.987685353
   EXPDP> Job "SYS"."TRANSPORT_EXP_MMI_nffu" successfully completed at Mon Sep 24 14:12:46 2018 elapsed 0 00:01:29
Export completed

channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/MMI/DATAFILE/mmi.296.987685353
channel ORA_DISK_1: starting piece 1 at 2018-09-24 14:12:51
channel ORA_DISK_1: finished piece 1 at 2018-09-24 14:12:54
piece handle=/tmp/MMI_05tdtsj3_1_1.bkp tag=TAG20180924T141050 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.2.0.1/db/dbs/backup_tts_MMI_95169.dmp
channel ORA_DISK_1: starting piece 1 at 2018-09-24 14:12:55
channel ORA_DISK_1: finished piece 1 at 2018-09-24 14:12:56
piece handle=/tmp/mmi_ts.dmpdp tag=TAG20180924T141050 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2018-09-24 14:12:56

This step creates the last incremental backup that we need to apply at the destination, as well as a metadata export that we’ll need to plug the datafiles into the destination database.

Step 7: final apply and plugin

The application of the incremental backup is identical to the previous apply from step 4.

RMAN> recover foreign datafilecopy '+DATA/MMIRECO/DATAFILE/mmi.301.987689101' from backupset '/tmp/MMI_05tdtsj3_1_1.bkp' from platform 'Linux x86 64-bit';

Starting restore at 2018-09-24 14:14:29
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/MMIRECO/DATAFILE/mmi.301.987689101
channel ORA_DISK_1: reading from backup piece /tmp/MMI_05tdtsj3_1_1.bkp
channel ORA_DISK_1: foreign piece handle=/tmp/MMI_05tdtsj3_1_1.bkp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 2018-09-24 14:14:31

Now we need to get the metadata export dump from the final backupset.

RMAN> restore dump file datapump destination '/u01/app/oracle/admin/mmireco/dpdump/' from backupset '/tmp/mmi_ts.dmpdp';


Starting restore at 2018-09-24 15:04:15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=267 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/admin/mmireco/dpdump/backup_tts_MMIRECO_22317.dmp
channel ORA_DISK_1: reading from backup piece /tmp/mmi_ts.dmpdp
channel ORA_DISK_1: foreign piece handle=/tmp/mmi_ts.dmpdp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 2018-09-24 15:04:20

Finally, we can import the metadata export, making the foreign datafiles part of the destination database. In my example I used just one file/tablespace; you can easily plug in several datafiles at once by using wildcards in the file list. As a prerequisite, create all the users that own segments inside the transported tablespaces. Otherwise the import will fail.
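One way to script that prerequisite on the source, as a hedged sketch (the schema name is a placeholder for the actual owners):

set long 20000 pagesize 0
select dbms_metadata.get_ddl('USER', username) || ';'
from   dba_users
where  username in ('MMI_OWNER');  -- placeholder: list all owners of transported segments

Run the generated CREATE USER statements in the destination database before starting the import.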

[oracle@vm141 ~]$ impdp  dumpfile=data_pump_dir:backup_tts_MMIRECO_22317.dmp logfile=data_pump_dir:mmi_imp_tts.impdp.log transport_datafiles='+DATA/MMIRECO/DATAFILE/mmi.301.987689101';

Import: Release 12.2.0.1.0 - Production on Mon Sep 24 15:06:15 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":  /******** AS SYSDBA dumpfile=data_pump_dir:backup_tts_MMIRECO_22317.dmp logfile=data_pump_dir:mmi_imp_tts.impdp.log transport_datafiles=+DATA/MMIRECO/DATAFILE/mmi.301.987689101
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Mon Sep 24 15:07:02 2018 elapsed 0 00:00:43

Step 8: Finalization

Finally, we can switch the tablespace(s) read/write again at the destination.

SQL> alter tablespace mmi read write;

Tablespace altered.

And since the export only includes metadata for the segments inside the transported files, we need to copy all metadata that resides in other tablespaces, e.g. the SYSTEM tablespace. This includes views, PL/SQL, grants etc. So the best approach is to do a metadata-only export at the source and import that into the destination database. I assume you know how to do that, so I safely omit this step.
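For completeness, a hedged sketch of such a metadata pass (schema, directory and dump file names are placeholders):

# placeholder schema, directory and dump file names
[oracle ~]$ expdp system/**** schemas=MMI_OWNER content=metadata_only directory=dings dumpfile=mmi_meta.dmpdp
[oracle ~]$ impdp system/**** directory=dings dumpfile=mmi_meta.dmpdp full=y table_exists_action=skip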

Conclusion

Transporting tablespaces between platforms and versions is not a one-liner, but it is an acceptable effort when you consider the downtime that is needed to switch from one system to another. And in my opinion, when you do it manually without the scripts from Oracle, it becomes much clearer how things actually work.

Platform change and upgrade in one step – XTTS

Today I want to tell you something about cross platform transportable tablespaces (XTTS) and incrementally updated datafile copies. This is a very nice feature to move to another platform and change the Oracle version in one step. There are several MOS notes, whitepapers and of course some upgrade slides by Mike Dietrich covering this topic.

The basic steps in all cases and versions are these:

  1. set tablespace(s) read only
  2. export metadata for those tablespaces
  3. transport datafiles and dump to target system
  4. create necessary database users
  5. import dump into target database which makes the datafiles part of the target database
  6. set tablespace(s) read write
  7. optionally, transport other objects like views, PL/SQL etc.

But what if the tablespaces are too big and the copy time would exceed the allowed downtime window? Then we can use incremental backups to shorten the downtime. The steps to do this are quite similar.

  1. backup tablespace(s) for transport
  2. copy backup to target system
  3. restore and convert datafiles to target destination and platform
  4. create incremental backup for transported tablespace(s)
  5. recover transported datafiles using the incremental backup
  6. repeat steps 4 and 5 until final downtime window
  7. set tablespace(s) read only
  8. create final incremental backup for transported tablespace(s)
  9. apply incremental backup to transported datafiles
  10. continue with step 2 of the initial process above

Sounds a bit complicated, doesn’t it? Fortunately, Oracle was kind enough to put this whole process into scripts. The following MOS notes contain and describe these scripts:

  • 11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
  • 12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
  • V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

During a migration project from Linux to SPARC (don’t ask) I wanted to use those scripts. But I experienced the following during “Phase 2”:

oracle@server2:~/xtts> $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
============================================================
trace file is /oracle/xtts/backup_Sep20_Thu_12_20_52_917//Sep20_Thu_12_20_52_917_.log
=============================================================

[…]

--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------


--------------------------------------------------------------------
Find list of datafiles in system
--------------------------------------------------------------------

SP2-0027: Input is too long (> 2499 characters) - line ignored
SP2-0027: Input is too long (> 2499 characters) - line ignored
           from dual
           *
ERROR at line 19:
ORA-00936: missing expression

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error:
------
Error in executing /oracle/xtts/backup_Sep20_Thu_12_27_07_889//diff.sql

Okay. The reason is that I have loads of tablespaces that need to be transferred, and this long list messed up the whole process. So I tried to understand the process and just do it myself. What I learned is that it is quite easy to do manually. And I share my findings with you by simply outlining the basic process in an upcoming blog post.

Parallel Limit of RMAN Duplicate

A long time since my last post, and a lot of topics in the pipeline to be posted. So about time to get started.
In October last year I was part of a PoC which a customer initiated to find out whether Solaris SPARC together with a ZFS Storage Appliance might be a good platform to migrate and consolidate their systems to. A requirement was to have a Data Guard setup in place, so I needed to create the standby database from the primary. I used RMAN for this, and since SPARC platforms typically benefit from heavy parallelization, I tried to use as many channels as possible.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 40 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

Unfortunately this failed:

released channel: ORA_AUX_DISK_38
released channel: ORA_AUX_DISK_39
released channel: ORA_AUX_DISK_40
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/18/2018 12:02:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-17619: max number of processes using I/O slaves in a instance reached

The documentation says:

$ oerr ora 17619
17619, 00000, "max number of processes using I/O slaves in a instance reached"
// *Cause:  An attempt was made to start large number of processes
//          requiring I/O slaves.
// *Action: There can be a maximum of 35 processes that can have I/O
//          slaves at any given time in a instance.

Ok, there is a limit on I/O slaves per instance. By the way, this is all single instance, no RAC. So I reduced the number of channels to 35 and tried again.

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Oct 18 12:05:09 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> startup clone nomount force
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 35 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

But soon the duplicate errored out again.

channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using network backup set from service olga9788:1521/eddppocb
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00004 to /u02/app/oracle/oradata/POCDBB/datafile/o1_mf_sysaux__944906718442_.dbf
PSDRPC returns significant error 1013.
PSDRPC returns significant error 1013.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/18/2018 12:09:13
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script

ORA-19845: error in backupSetDatafile while communicating with remote database server
ORA-17628: Oracle error 17619 returned by remote Oracle server
ORA-17619: max number of processes using I/O slaves in a instance reached
ORA-19660: some files in the backup set could not be verified
ORA-19661: datafile 4 could not be verified
ORA-19845: error in backupSetDatafile while communicating with remote database server
ORA-17628: Oracle error 17619 returned by remote Oracle server
ORA-17619: max number of processes using I/O slaves in a instance reached

Obviously the instance still tries to allocate too many I/O slaves. I assume there are I/O slaves for normal channels as well as for auxiliary channels per instance. That’s why I tried again with a parallelism of 16, which would result in 32 I/O slaves.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

With this configuration the duplicate went fine without any further issues. Parallelization is good, but it has its limits.

Parallelizing Standard Edition Data Pump

Today’s blog post is about Data Pump. Sometimes we use this tool to migrate applications to another database, for example to reorganize data or to avoid difficult upgrade/migration paths, to mention just two reasons. With the Enterprise Edition of the Oracle Database we can do the export and import in parallel. But what if we are using Standard Edition only? Then we are bound to exporting and importing serially. This is a major restriction, especially if time is limited. But there is a way to make it run in parallel. The basic idea is to split the export/import process into several tasks which can then run in parallel; there is no restriction on having several Data Pump processes at a time. It makes the process more complex, but it can speed up the whole thing a lot. Let’s see how it is done. In my example I use a direct import over a network link, which skips the step of persisting the transported data in the filesystem and makes the process even faster.
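The network import needs a database link in the destination database pointing to the source; a minimal sketch, with link name, credentials and TNS alias being placeholders:

SQL> create public database link srcdb_link connect to system identified by "***" using 'SRCDB';

The link name is then used as the value of the network_link parameter in the parameter files below.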
Basically there are three steps that cost a lot of time:

  1. Transport table data
  2. Create indexes
  3. Create/validate constraints

To parallelize the transport of table data, I first analyze the size of the tables. Typically there are a handful of large tables and many, many small tables. I create sets of tables that should be transported together; these sets are made up of the large tables. Then there is another import that imports all the other tables. For the latter one I need to exclude indexes and constraints, since those will be created in parallel afterwards. For best reproducibility I use parameter files for all import runs. Besides that, it makes handling of quotes much easier than directly at the prompt. And I use “job_name” to better identify and monitor the Data Pump jobs.
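A hedged sketch of that size analysis (the schema name is a placeholder):

select segment_name, round(sum(bytes)/1024/1024/1024, 1) gb
from   dba_segments
where  owner = 'MMI_OWNER'          -- placeholder: the schema(s) to be migrated
and    segment_type like 'TABLE%'   -- includes table partitions
group  by segment_name
order  by sum(bytes) desc;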
First, the parameters for the sets of large tables; in my example there is just one table per set:

userid=
network_link=
logfile=DATA_PUMP_DIR:import_table1.log
include=TABLE:"=<TABLE>"
job_name=table_1

userid=
network_link=
logfile=DATA_PUMP_DIR:import_table2.log
include=TABLE:"=<TABLE>"
job_name=table_2

Repeat this for all the other table sets. Now for the parameter file for the rest of the data and all the other information: I use a schema-based import and exclude the large tables from above.

userid=
network_link=
logfile=DATA_PUMP_DIR:import_therest.log
schemas=
exclude=TABLE:"in ('TABLE_1', 'TABLE_2')"
exclude=INDEX
exclude=CONSTRAINT
job_name=the_rest

Since the last job creates all required users, I start this job first and wait until the user creation is finished. After that, the jobs for the large tables can be started.
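A hedged sketch of launching the jobs (the parameter file names are assumptions, matching the log file names above):

# schema/user and remaining objects first -- this job creates the users
nohup impdp parfile=import_therest.par &
# once the users exist, start the jobs for the large table sets in parallel
nohup impdp parfile=import_table1.par &
nohup impdp parfile=import_table2.par &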
When all of those jobs are finished, it is time to transport the indexes. To create all indexes, I use database jobs with no repeat interval; this makes the jobs go away once they finish successfully. The degree of parallelism can easily be adapted by setting the “job_queue_processes” parameter. The job action can be created using DBMS_METADATA. The following script generates a SQL script that creates one database job per index creation.

set serveroutput on
begin
  dbms_output.put_line('var x number');
  -- strip storage clauses from the generated index DDL
  dbms_metadata.set_transform_param(
    TRANSFORM_HANDLE=>DBMS_METADATA.SESSION_TRANSFORM,
    name=>'STORAGE',
    value=>FALSE
  );
  for obj in (
    -- one row per index of the transported schema, its DDL wrapped in EXECUTE IMMEDIATE
    select 'execute immediate ''''' || dbms_metadata.get_ddl('INDEX', index_name, owner) || ''''';' ddl
    from dba_indexes where owner=''
  ) loop
    -- each index creation becomes a one-off database job
    dbms_output.put_line('begin dbms_job.submit(:x, what=>''' || obj.ddl || ''', next_date=>sysdate, interval=>null); end;');
    dbms_output.put_line('/');
    dbms_output.put_line('');
  end loop;
  dbms_output.put_line('commit;');
end;
/
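The generated script can then simply be run in SQL*Plus. The number of jobs running concurrently could be throttled roughly like this (the value of 8 is just an example):

SQL> alter system set job_queue_processes=8;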

After that, the constraints can be created in a similar way.

set serveroutput on
begin
  dbms_output.put_line('var x number');
  for obj in (
    -- unique and primary key constraints, again with the DDL wrapped in EXECUTE IMMEDIATE
    select 'execute immediate ''''' || dbms_metadata.get_ddl('CONSTRAINT', constraint_name, owner) || ''''';' ddl
    from dba_constraints where owner=''
    and constraint_type IN ('U', 'P')
  ) loop
    -- one one-off database job per constraint
    dbms_output.put_line('begin dbms_job.submit(:x, what=>''' || obj.ddl || ''', next_date=>sysdate, interval=>null); end;');
    dbms_output.put_line('/');
    dbms_output.put_line('');
  end loop;
  dbms_output.put_line('commit;');
end;
/

This creates all the unique and primary key constraints. If you wish to include more constraint types, simply adapt the script; be aware that I did not include all of them. At the end, I do a last metadata-only import to make sure all remaining objects are copied to the new environment.
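A hedged sketch of that final pass, reusing the network link in the style of the parameter files above (content=metadata_only avoids re-transferring data; already existing objects are skipped):

userid=
network_link=
logfile=DATA_PUMP_DIR:import_metadata.log
schemas=
content=metadata_only
table_exists_action=skip
job_name=meta_rest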

This approach can also be used to speed up Data Pump with Enterprise Edition, because depending on the version you use, even a parallel Data Pump job only transports table data in parallel and still does the index creation etc. serially.

Hope this helps.

Transporting SQL Patches Between Databases

A while ago I wrote about SQL Patches – what hints do they exactly use? Now I came to the point where the underlying application was moved to another database using Data Pump. And consequently some parts of the application started to run into performance issues again, because the SQL patches that made some statements run fast were not transported to the new database. So the need arose to re-create or transport those SQL patches.

SQL patches are not part of the application schema, they are stored in the data dictionary of the database. That is the reason why they are not transported when using Data Pump. But there is a way to transport them. The DBMS_SQLDIAG package provides some procedures for this: CREATE_STGTAB_SQLPATCH, PACK_STGTAB_SQLPATCH and UNPACK_STGTAB_SQLPATCH. This listing also represents the order in which these procedures are used. Basically the procedure to move SQL patches is this:

  1. Create staging table in a schema other than SYS
  2. Import SQL patches (all or some) into this staging table
  3. Transport staging table to destination database (Data Pump, CTAS over DB-Link will not work since there is a LONG column in the staging table)
  4. Extract SQL patches (all or some) from staging table back into the dictionary

Let’s see how this works. First, create the staging table and pack the SQL patches into it.

SQL> select name from dba_sql_patches;

NAME
--------------------------------------------------------------------------------
Patch_2
Patch_5
Patch_3
Patch_4
Patch_1

SQL> exec dbms_sqldiag.create_stgtab_sqlpatch('SQLPATCH_STAGE','SYSTEM');

PL/SQL procedure successfully executed.

SQL>  exec  dbms_sqldiag.pack_stgtab_sqlpatch(staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');

PL/SQL procedure successfully executed.

Now move the staging table to the destination database.

[oracle ~]$ expdp system/**** tables=system.sqlpatch_stage directory=dings dumpfile=sqlpatch.dmpdp

[oracle ~]$ impdp system/**** directory=dings dumpfile=sqlpatch.dmpdp full=y

Finally, extract the SQL patches back into the data dictionary.

SQL>  exec dbms_sqldiag.unpack_stgtab_sqlpatch(replace=>true, staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');

PL/SQL procedure successfully executed.

That’s it. Nothing more to do. Keep that in mind in case your applications are fine-tuned using SQL patches and you need to move them to a different database.

Oracle SE2 and Instance Caging

At DOAG Exaday 2018 Martin Bach talked about using the Resource Manager. He explained that Resource Manager uses the CPU_COUNT initialization parameter to limit the CPU utilization of a database instance. This is called instance caging and requires an Enterprise Edition license. Resource Manager is not available in Standard Edition 2 (SE2) according to the Licensing Guide:

[Screenshot: Licensing Guide excerpt (se2_resmgr_cpucount_licguide)]

On the other hand, SE2 is limited to 16 threads per database. Franck Pachot investigated this in his blog post and found out that internally Resource Manager features are being used to enforce this limitation.

So the question came up: what will happen to an SE2 Oracle Database that has CPU_COUNT set? According to the documentation, CPU_COUNT works only with Resource Manager, which is not part of SE2, but SE2 is using Resource Manager internally.

Now, let’s try it. I used the same method to generate load that Franck Pachot used for his tests: a simple PL/SQL block running in several database jobs. For this test, I set CPU_COUNT to 4 and ran 10 parallel jobs, so my workload is definitely below the internal maximum of 16 threads. To measure the workload I used top, oratop and Statspack; the database version was 12.2.0.1.171017.
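A minimal sketch of such a test setup (my reading of the approach, not Franck Pachot’s exact script; the endless loop simply burns CPU):

SQL> alter system set cpu_count=4;

declare
  l_job number;
begin
  for i in 1 .. 10 loop
    -- each job runs an endless PL/SQL loop, i.e. pure CPU load
    dbms_job.submit(l_job, what => 'declare n number := 0; begin loop n := n + 1; end loop; end;');
  end loop;
  commit;  -- the jobs start once the transaction is committed
end;
/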

And these are the results. My server (ODA X7-2S, 1 CPU, 10 cores hyperthreaded) had a utilization of roughly 20% plus a little overhead. Having 20 logical CPUs in the OS and a CPU_COUNT of 4, I end up with a maximum of 1/5th of the server that I can utilize, in other words 20%. This is what “top” showed:

[Screenshot: top output during the test (se2_resmgr_cpucount_jobs_top)]

To see what the database instance is actually doing, I used “oratop”:

[Screenshot: oratop output during the test (se2_resmgr_cpucount_jobs_oratop)]

You see, there are 4 sessions on CPU and some others waiting for Resource Manager. That proves that the SE2-internal Resource Manager is using the CPU_COUNT parameter to limit the utilization. Finally, let’s check the Statspack report:

[Screenshot: Statspack report overview (sp_2_3_jobs)]

In this overview you can see the 10 sessions that are trying to do something (DB time) and the 4 sessions that are actually doing some work on CPU (DB CPU).

Conclusion / Disclaimer

Instance caging does work in an SE2 Oracle Database. You can limit the database to use even less than 16 threads. But the fact that this works does not necessarily mean that it is allowed. So in case you use this feature, you do so at your own risk.

Edit: Dominic Giles stated in the Twitter discussion that it is allowed to do instance caging in SE2.

[Screenshot: tweet by Dominic Giles (se2_resmgr_cpucount_giles)]