Parallel Limit of RMAN Duplicate

It has been a long time since my last post, and there are a lot of topics in the pipeline. So it is about time to get started.
Last October I was part of a PoC that a customer initiated to find out whether Solaris on SPARC together with a ZFS Storage Appliance might be a good platform to migrate and consolidate their systems to. One requirement was to have a Data Guard setup in place, so I needed to create the standby database from the primary. I used RMAN for this, and since SPARC platforms typically benefit from heavy parallelization, I tried to use as many channels as possible.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 40 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

Unfortunately this failed:

released channel: ORA_AUX_DISK_38
released channel: ORA_AUX_DISK_39
released channel: ORA_AUX_DISK_40
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/18/2018 12:02:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-17619: max number of processes using I/O slaves in a instance reached

The documentation says:

$ oerr ora 17619
17619, 00000, "max number of processes using I/O slaves in a instance reached"
// *Cause:  An attempt was made to start large number of processes
//          requiring I/O slaves.
// *Action: There can be a maximum of 35 processes that can have I/O
//          slaves at any given time in a instance.

Ok, there is a limit for I/O slaves per instance. By the way, this is all single instance, no RAC. So I reduced the number of channels to 35 and tried again.

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Oct 18 12:05:09 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> startup clone nomount force
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 35 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

But soon the duplicate errored out again.

channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using network backup set from service olga9788:1521/eddppocb
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00004 to /u02/app/oracle/oradata/POCDBB/datafile/o1_mf_sysaux__944906718442_.dbf
PSDRPC returns significant error 1013.
PSDRPC returns significant error 1013.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/18/2018 12:09:13
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script

ORA-19845: error in backupSetDatafile while communicating with remote database server
ORA-17628: Oracle error 17619 returned by remote Oracle server
ORA-17619: max number of processes using I/O slaves in a instance reached
ORA-19660: some files in the backup set could not be verified
ORA-19661: datafile 4 could not be verified
ORA-19845: error in backupSetDatafile while communicating with remote database server
ORA-17628: Oracle error 17619 returned by remote Oracle server
ORA-17619: max number of processes using I/O slaves in a instance reached

Obviously the instance still tries to allocate too many I/O slaves. I assume there are I/O slaves for normal channels as well as for auxiliary channels per instance, so both count against the limit of 35. That's why I tried again with a parallelism of 16, which would result in at most 32 I/O slaves.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

With this configuration the duplicate went fine without any further issues. Parallelization is good, but it has its limits.


Parallelizing Standard Edition Data Pump

Today's blog post is about Data Pump. Sometimes we use this tool to migrate applications to another database, for instance to reorganize data or to work around difficult upgrade/migration paths. With the Enterprise Edition of the Oracle Database we can do the export and import in parallel. But what if we are using Standard Edition only? Then we are bound to export and import serially, which is a major restriction, especially if time is limited. Still, there is a way to make it run in parallel. The basic idea is to split the export/import into several tasks which can then run concurrently; there is no restriction on having several Data Pump jobs at a time. It makes the process more complex, but it can speed things up a lot. Let's see how it is done. In my example I import directly over a network link, which skips the step of persisting the transported data in the filesystem and makes the process even faster.
Basically there are three steps that cost a lot of time:

  1. Transport table data
  2. Create indexes
  3. Create/validate constraints

To parallelize the transport of table data, I first analyze the size of the tables. Typically there are a handful of large tables and many small tables. I create sets of tables that should be transported together; these sets are made up of the large tables. Then there is another import that brings over all the other tables. For the latter I need to exclude indexes and constraints, since those will be created in parallel afterwards. For best reproducibility I use parameter files for all import runs. Besides that, it makes handling of quotes much easier than typing them directly at the prompt. And I use "job_name" to better identify and monitor the Data Pump jobs.
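
Such a sizing query could look like this (just a sketch, <SCHEMA> is a placeholder for the application schema):

-- list the tables of the schema to migrate by segment size, largest first
select segment_name, round(sum(bytes)/1024/1024/1024, 1) size_gb
from   dba_segments
where  owner = '<SCHEMA>'
and    segment_type like 'TABLE%'
group  by segment_name
order  by sum(bytes) desc;
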
First, the parameter files for the sets of large tables; in my example there is just one table per set:

userid=
network_link=
logfile=DATA_PUMP_DIR:import_table1.log
include=TABLE:"=<TABLE>"
job_name=table_1

userid=
network_link=
logfile=DATA_PUMP_DIR:import_table2.log
include=TABLE:"=<TABLE>"
job_name=table_2

Repeat this for all the other table sets. Now the parameter file for the rest of the data and all the other information: I use a schema-based import and exclude the large tables from above.

userid=
network_link=
logfile=DATA_PUMP_DIR:import_therest.log
schemas=
exclude=TABLE:"in ('TABLE_1', 'TABLE_2')"
exclude=INDEX
exclude=CONSTRAINT
job_name=the_rest

Since the last job creates all required users, I start this job first and wait until the user creation is finished. After that, the jobs for the large tables can be started.
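
Roughly, the workflow could look like this (assuming the parameter files are saved as import_therest.par, import_table1.par and import_table2.par; the names are just examples):

$ nohup impdp parfile=import_therest.par &
$ tail -f import_therest.log      # log in DATA_PUMP_DIR, wait here until the users have been created
$ nohup impdp parfile=import_table1.par &
$ nohup impdp parfile=import_table2.par &
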
When all of those jobs are finished, it is time to transport the indexes. To create all indexes, I use database jobs with no repeat interval; this makes the jobs go away once they finish successfully. The degree of parallelism can easily be adapted by setting the "job_queue_processes" parameter. The job action can be created using DBMS_METADATA. The following script generates a SQL script that creates one database job per index creation.

set serveroutput on
begin
  dbms_output.put_line('var x number');
  -- strip storage clauses from the generated DDL
  dbms_metadata.set_transform_param(
    TRANSFORM_HANDLE=>DBMS_METADATA.SESSION_TRANSFORM,
    name=>'STORAGE',
    value=>FALSE
  );
  for obj in (
    -- wrap every index DDL in an execute immediate call; <OWNER> is a placeholder for the application schema
    select 'execute immediate ''''' || dbms_metadata.get_ddl('INDEX', index_name, owner) || ''''';' ddl
    from dba_indexes where owner='<OWNER>'
  ) loop
    dbms_output.put_line('begin dbms_job.submit(:x, what=>''' || obj.ddl || ''', next_date=>sysdate, interval=>null); end;');
    dbms_output.put_line('/');
    dbms_output.put_line('');
  end loop;
  dbms_output.put_line('commit;');
end;
/
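
Just for illustration, the generated script then looks roughly like the following (the SCOTT index is a made-up example), and the number of jobs running at the same time is controlled via "job_queue_processes":

var x number
begin dbms_job.submit(:x, what=>'execute immediate ''CREATE INDEX "SCOTT"."EMP_NAME_IX" ON "SCOTT"."EMP" ("ENAME")'';', next_date=>sysdate, interval=>null); end;
/
commit;

-- limit how many of these jobs run concurrently (16 is just an example)
alter system set job_queue_processes=16;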

After that, the constraints can be created in a similar way.

set serveroutput on
begin
  dbms_output.put_line('var x number');
  for obj in (
    -- wrap every constraint DDL in an execute immediate call, just like for the indexes;
    -- <OWNER> is a placeholder for the application schema
    select 'execute immediate ''''' || dbms_metadata.get_ddl('CONSTRAINT', constraint_name, owner) || ''''';' ddl
    from dba_constraints where owner='<OWNER>'
    and constraint_type IN ('U', 'P')
  ) loop
    dbms_output.put_line('begin dbms_job.submit(:x, what=>''' || obj.ddl || ''', next_date=>sysdate, interval=>null); end;');
    dbms_output.put_line('/');
    dbms_output.put_line('');
  end loop;
  dbms_output.put_line('commit;');
end;
/

This creates all the unique and primary key constraints. If you wish to include more constraint types, simply adapt the script; be aware that I did not include all of them. At the end I do a last metadata-only import to make sure all remaining objects are copied to the new environment.
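
Such a final run could use a parameter file along these lines (a sketch only, again a metadata-only import over the network link; adjust the includes and excludes to whatever is still missing in your case):

userid=
network_link=
schemas=
logfile=DATA_PUMP_DIR:import_metadata.log
content=metadata_only
job_name=final_metadata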

This approach can also be used to speed up Data Pump with Enterprise Edition, because depending on the version even a parallel Data Pump job transports only the table data in parallel and still does the index creation and the like serially.

Hope this helps.

Transporting SQL Patches Between Databases

A while ago I wrote about SQL Patches – what hints do they exactly use? Now I came to the point where the underlying application was moved to another database using Data Pump. Consequently, some parts of the application started to run into performance issues again, because the SQL patches that made some statements run fast were not transported to the new database. So the need arose to re-create or transport those SQL patches.

SQL patches are not part of the application schema; they are stored in the data dictionary of the database. That is why they are not transported when moving the application with Data Pump. But there is a way to transport them. The DBMS_SQLDIAG package provides procedures for this: CREATE_STGTAB_SQLPATCH, PACK_STGTAB_SQLPATCH and UNPACK_STGTAB_SQLPATCH, listed in the order in which they are used. Basically the procedure to move SQL patches is this:

  1. Create staging table in a schema other than SYS
  2. Import SQL patches (all or some) into this staging table
  3. Transport the staging table to the destination database (use Data Pump; CTAS over a DB link will not work since there is a LONG column in the staging table)
  4. Extract SQL patches (all or some) from staging table back into the dictionary

Let's see how this works. First, create the staging table and pack the SQL patches into it.

SQL> select name from dba_sql_patches;

NAME
--------------------------------------------------------------------------------
Patch_2
Patch_5
Patch_3
Patch_4
Patch_1

SQL> exec dbms_sqldiag.create_stgtab_sqlpatch('SQLPATCH_STAGE','SYSTEM');

PL/SQL procedure successfully executed.

SQL>  exec  dbms_sqldiag.pack_stgtab_sqlpatch(staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');

PL/SQL procedure successfully executed.

Now move the staging table to the destination database.

[oracle ~]$ expdp system/**** tables=system.sqlpatch_stage directory=dings dumpfile=sqlpatch.dmpdp

[oracle ~]$ impdp system/**** directory=dings dumpfile=sqlpatch.dmpdp full=y

Finally, extract the SQL patches back into the data dictionary.

SQL>  exec dbms_sqldiag.unpack_stgtab_sqlpatch(replace=>true, staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');

PL/SQL procedure successfully executed.

That’s it. Nothing more to do. Keep that in mind in case your applications are fine tuned using SQL patches and you need to move them to different databases.

Oracle SE2 and Instance Caging

At DOAG Exaday 2018 Martin Bach talked about using the Resource Manager. He explained that Resource Manager uses the CPU_COUNT initialization parameter to limit the CPU utilization of a database instance. This is called instance caging and requires an Enterprise Edition license. Resource Manager is not available in Standard Edition 2 (SE2) according to the Licensing Guide:

[Screenshot: Licensing Guide entry showing Resource Manager is not included in SE2]

On the other hand, SE2 is limited to 16 threads per database. Franck Pachot investigated this in his blog post and found out that internally Resource Manager features are used to enforce this limitation.

So the question came up: what happens to an SE2 Oracle Database that has CPU_COUNT set? According to the documentation, CPU_COUNT works only with Resource Manager, which is not part of SE2, yet SE2 uses Resource Manager internally.

Now, let's try it. I used the same method to generate load that Franck Pachot used for his tests: a simple PL/SQL block running in several database jobs. For the test I set CPU_COUNT to 4 and ran 10 parallel jobs, so the workload is definitely below the internal maximum of 16 threads. To measure the workload I used top, oratop and Statspack; the database version was 12.2.0.1.171017.
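
The load generator could look roughly like this (a sketch of the idea, not necessarily the exact code Franck used; duration and job count are arbitrary):

declare
  l_job binary_integer;
begin
  -- submit ten jobs, each spinning on CPU for roughly five minutes
  for i in 1 .. 10 loop
    dbms_job.submit(
      job       => l_job,
      what      => 'declare t date := sysdate + 5/1440; begin while sysdate < t loop null; end loop; end;',
      next_date => sysdate,
      interval  => null);
  end loop;
  commit;  -- the jobs start running once the submits are committed
end;
/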

And these are the results. My server (ODA X7-2S, 1 CPU, 10 cores hyperthreaded) had a utilization of roughly 20% plus a little overhead. With 20 CPU threads visible to the OS and a CPU_COUNT of 4, I can utilize at most 1/5th of the server; in other words, 20%. This is what "top" showed:

[Screenshot: top output while the jobs were running]

To see what the database instance is actually doing, I used "oratop":

[Screenshot: oratop output while the jobs were running]

You see, there are 4 sessions on CPU and some others waiting for Resource Manager. That proves that the Resource Manager functionality SE2 uses internally honours the CPU_COUNT parameter to limit the utilization. Finally, let's check the Statspack report:

[Screenshot: Statspack report excerpt for the test run]

In this overview you can see the 10 sessions that are trying to do something (DB time) and the 4 sessions that are actually doing some work on CPU (DB CPU).

Conclusion / Disclaimer

Instance caging does work in an SE2 Oracle Database. You can even limit the database to less than 16 threads. But the fact that this works does not necessarily mean that it is allowed. So if you use this feature, you do so at your own risk.

Edit: Dominic Giles stated in the Twitter discussion that it is allowed to do instance caging in SE2.

[Screenshot: Twitter reply from Dominic Giles]

Transportable Tablespaces, Deferred Segment Creation and SE

Right now I am in the middle of a project which is about moving an old 11.2.0.2 Oracle Enterprise Edition database of roughly 1.5TB from the US to new hardware in Germany, including an upgrade to 12.2.0.1 and a switch to Standard Edition 2. As you see, there are a couple of things to do, and I am more than happy to take on this challenging project. The basic plan is this:

  1. Create a new 12.2.0.1 database in Standard Edition 2
  2. Get an initial copy of the source database to Germany
  3. Restore that copy on the new hardware with an 11.2.0.2 Oracle Home
  4. Get the archivelogs on a regular basis from the source database in the US
  5. Recover the German copy of the database
  6. Repeat 4 and 5 until cut-over
  7. Open the German copy of the database with RESETLOGS
  8. Move the data to the new 12.2.0.1 database using transportable tablespace

According to the Licensing Guide it is allowed to plug transportable tablespaces into an SE2 database. So I am completely happy with my approach.

But during the test phase of this migration I encountered a nasty error when plugging in the tablespace. I obfuscated some identifiers, so don’t be too strict:

Import: Release 12.2.0.1.0 - Production on Thu May 31 15:32:01 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=copy_dump logfile=some_ts.impdp.log dumpfile=some_ts.dmpdp transport_datafiles=/u02/app/oracle/oradata/SOMEDB/SOMEDB/datafile/e*dbf 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"SOME_USER"."SOME_TABLE" failed to create with error:
ORA-01647: tablespace 'SOME_TS' is read-only, cannot allocate space in it

Failing sql is:
CREATE TABLE "SOME_USER"."SOME_TABLE" (<Column list stripped>) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "SOME_TS" 

Ok, the tablespace is read-only. That is how it is supposed to be. A quick search on MOS revealed the note "ORA-39083 & ORA-01647 during TTS or EXPDP/IMPDP from Enterprise Edition to Standard Edition when deferred_segment_creation=True (Doc ID 2288535.1)", which states that deferred segment creation is the root cause. That makes sense somehow. The note further states that the workaround is either to use the version parameter of "impdp" during the TTS import or to disable deferred segment creation at the source database before starting the TTS.
To cut a long story short, neither workaround works; both end up with the exact same error. In my opinion this makes total sense, because the segment for the table in question is simply not present in the transported tablespace. And it cannot be created during the TTS import since the tablespace is marked read-only, no matter whether I use the version parameter for impdp or disable deferred segment creation at the source.
The only feasible solution is to create the segment at the source database before setting the tablespace to read-only. A simple

SQL> alter table SOME_USER.SOME_TABLE allocate extent;

Table altered.

does the trick. And that’s it. After creating a segment for this table the TTS import went fine without any further errors.
To make sure all tables do have a segment created, I use this query:

SQL>  select 'alter table ' || owner || '.' || table_name || ' allocate extent;'
  2   from dba_tables where SEGMENT_CREATED='NO'
  3*  and owner='&USERNAME.';

So when using Transportable Tablespaces to move to an SE2 database and the source is 11.2 or higher (the releases that have deferred segment creation), you better check that in advance. Hope that helps.

Why “configure-firstnet” must not be run twice (ODA)

Recently I was deploying some Oracle Database Appliance X7-2M systems at a customer's site. Everything went quite smoothly until we tried to configure the network using "configure-firstnet". The documentation is quite clear about this: one shall not run it more than once. But in our case we first configured the network without VLANs, which ended up with a nice "ifcfg-btbond1" configuration containing the IP information.
But as we figured out that we had to use a VLAN, we simply ran "configure-firstnet" again. This generated an "ifcfg-btbond1.26" configuration file for our VLAN with ID 26, containing the same IP information as the one without VLANs. Up to this point everything works, since the script creates the interfaces properly.
But after a

service network restart

the machine was not reachable anymore. When we investigated this, we saw that there were now two bond interfaces with a configured IP address, which obviously prevents any network communication.

So the solution was to remove all IP-related lines/parameters from the "ifcfg-btbond1" configuration file and restart the network again. In the end the issue was quite obvious and easy to remedy, but it took us some time which could have been used better. That emphasizes the importance of clear information and well-structured preparation when deploying an Oracle Database Appliance (and other appliances too).
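
Just to illustrate what I mean, a sketch of the resulting configuration, assuming a typical RHEL-style ifcfg layout; addresses, VLAN ID and options are placeholders and not taken from a real ODA:

# /etc/sysconfig/network-scripts/ifcfg-btbond1  (base bond, IP settings removed)
DEVICE=btbond1
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
# IPADDR, NETMASK and GATEWAY lines were deleted from this file

# /etc/sysconfig/network-scripts/ifcfg-btbond1.26  (VLAN interface keeps the IP)
DEVICE=btbond1.26
VLAN=yes
ONBOOT=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1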

Missing Disk / Dismounting Diskgroup after duplicate from ASM to ACFS

Last week I was asked to create a Data Guard environment. Quite a simple task, you may think. And actually it was, but with some funny side effects. The primary database is running on an Oracle Database Appliance X6-2M using ASM. The standby database was planned to run on another ODA, an X5-2HA, which uses pure ACFS. Both are running the 12.1.0.2.170418 Bundle Patch. Be aware that the HA ODAs are using PSUs whilst the smaller ones are using Bundle Patches. You should not mix these up, so I created another DB Home on the HA with the proper Bundle Patch. With the January ODA update for the HA models Oracle moved to Bundle Patches too, but we are not there yet. So much for the sake of completeness.

So the first thing I did, obviously, was duplicate the primary database to the HA ODA. Once that was finished, I wanted to clean up the controlfile, get rid of all the backup and archivelog records, and keep just the ones that are really available.

[oracle@odax51 ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Mar 16 09:11:42 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: COMA (DBID=1562414168, not open)

RMAN> catalog db_recovery_file_dest;

Starting implicit crosscheck backup at 2018-03-16 09:11:44
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
allocated channel: ORA_DISK_2
allocated channel: ORA_DISK_3
allocated channel: ORA_DISK_4
allocated channel: ORA_DISK_5
allocated channel: ORA_DISK_6
allocated channel: ORA_DISK_7
allocated channel: ORA_DISK_8

At this point RMAN was stuck. A quick look into the alert.log revealed a whole bunch of messages like these:

2018-03-16 09:08:32.000000 +01:00
WARNING: ASMB force dismounting group 3 (RECO) due to missing disks
SUCCESS: diskgroup RECO was dismounted
NOTE: ASMB mounting group 3 (RECO)
NOTE: ASM background process initiating disk discovery for grp 3 (reqid:0)
WARNING: group 3 (RECO) has missing disks
ORA-15040: diskgroup is incomplete
WARNING: group 3 is being dismounted.

The ASM alert.log had corresponding messages:

2018-03-16 09:11:48.567000 +01:00
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)

Oh sh… you might think, and that was exactly what I thought at that time. So I checked the ASM diskgroups, disks etc. but did not find anything that could be a problem.

So after a while of thinking, the idea came up that it might be related to the backup records in the controlfile. So I checked that and tried to unregister a backup piece manually. I used the undocumented DBMS_BACKUP_RESTORE package for that, so do this at your own risk.

SQL> select RECID, STAMP, SET_STAMP, SET_COUNT, HANDLE, PIECE# from v$backup_piece
2 where handle like '+%' and rownum=1;


    RECID      STAMP  SET_STAMP  SET_COUNT PIECE# HANDLE
--------- ---------- ---------- ---------- ------ ----------------------------------------------------------------------------
   129941  969656433  969656431     130820      7 +RECO/COMAX6/BACKUPSET/2018_03_01/nnndn1_tag20180301t210006_0.2815.969656433

SQL> exec dbms_backup_restore.changebackuppiece( -
2      recid => 129941, -
3      stamp => 969656433, -
4      set_stamp => 969656431, -
5      set_count => 130820, -
6      pieceno => 7, -
7      handle => '+RECO/COMAX6/BACKUPSET/2018_03_01/nnndn1_tag20180301t210006_0.2815.969656433', -
8      status => 'D' -
9	);

During the PL/SQL call I saw exactly one message like the ones above in the alert.log. That explains the behaviour: during the "catalog" call from RMAN, an implicit crosscheck takes place. Since this tries to access the files in the RECO diskgroup, and there is really nothing in that diskgroup except an ACFS volume, this error is thrown.

That means I need to get rid of all these records. A simple PL/SQL block helped me do that.

SQL> set serveroutput on 
SQL> begin
2  for rec in (select RECID, STAMP, SET_STAMP, SET_COUNT, HANDLE, PIECE# 
3              from v$backup_piece 
4			  where HANDLE like '+%'
5  ) loop 
6    dbms_output.put_line('deleting ''' ||rec.handle);
7    dbms_backup_restore.changebackuppiece( 
8       recid => rec.recid,
9       stamp => rec.stamp, 
10      set_stamp => rec.set_stamp,
11      set_count => rec.set_count,
12      pieceno => rec.piece#,
13      handle => rec.handle,
14      status => 'D'
15	 );
16   end loop;
17 end;
18 /

It took a while and again caused a lot of messages in both the database and the ASM alert.log, but finally I was able to run RMAN commands successfully again.
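
With the stale "+RECO" records gone, the controlfile clean-up I originally had in mind could be finished along these lines (a sketch, not the exact commands I ran back then):

RMAN> catalog db_recovery_file_dest;
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> delete noprompt expired backup;
RMAN> delete noprompt expired archivelog all;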

Maybe this helps you solve such issues, but be aware of the fact that using DBMS_BACKUP_RESTORE is not supported.