Platform change and upgrade in one step – XTTS

Today I want to tell you something about cross platform transportable tablespaces (XTTS) and incrementally updated datafile copies. This is a very nice way to move to another platform and change the Oracle version in one step. There are several MOS notes, whitepapers and of course some upgrade slides by Mike Dietrich covering this topic.

The basic steps are the same in all cases and versions (a minimal command sketch follows the list):

  1. set tablespace(s) read only
  2. export metadata for those tablespaces
  3. transport datafiles and dump to target system
  4. create necessary database users
  5. import dump into target database which makes the datafiles part of the target database
  6. set tablespace(s) read write
  7. optionally, transport other objects like views, PL/SQL etc.
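To make this concrete, here is a minimal sketch of these steps, assuming a single tablespace USERS, the same endianness on both platforms (otherwise an RMAN CONVERT comes into play in step 3) and an existing Data Pump directory; all names and paths are placeholders:

SQL> alter tablespace users read only;

$ expdp system/*** directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_tablespaces=USERS transport_full_check=y

$ scp /u01/app/oracle/oradata/SRC/users01.dbf target:/u02/oradata/TRGT/
$ scp /u01/app/oracle/admin/SRC/dpdump/tts_users.dmp target:/u01/app/oracle/admin/TRGT/dpdump/

-- on the target, after creating the owning users
$ impdp system/*** directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_datafiles='/u02/oradata/TRGT/users01.dbf'

SQL> alter tablespace users read write;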

But what if the tablespaces are too big and the copy time would exceed the allowed downtime window? Then we can use incremental backups to shorten the downtime. The steps are quite similar (again, a rough command sketch follows the list):

  1. backup tablespace(s) for transport
  2. copy backup to target system
  3. restore and convert datafiles to target destination and platform
  4. create incremental backup for transported tablespace(s)
  5. recover transported datafiles using the incremental backup
  6. repeat steps 4 and 5 until final downtime window
  7. set tablespace(s) read only
  8. create final incremental backup for transported tablespace(s)
  9. apply incremental backup to transported datafiles
  10. continue with step 2 of the initial process above
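Again just a rough sketch of the core RMAN commands behind this loop (12c syntax; SCN, tablespace and paths are placeholders, and the commands the MOS scripts actually generate differ in detail):

-- source: initial backup of the tablespace for transport; it may be taken
-- while the tablespace is read write, it gets rolled forward on the target
RMAN> backup for transport allow inconsistent tablespace users format '/stage/users_initial_%U.bck';

-- source: incremental backup of all changes since the given SCN;
-- repeat this (and the apply on the target) until the downtime window
RMAN> backup incremental from scn 4711 for transport allow inconsistent tablespace users format '/stage/users_incr_%U.bck';

-- target: roll the transported datafile copy forward with the incremental
RMAN> recover foreign datafilecopy '/u02/oradata/TRGT/users01.dbf' from backupset '/stage/users_incr_0abcdefg_1_1.bck';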

Sounds a bit complicated, doesn’t it? That’s why Oracle was kind enough to wrap the whole process in scripts. The following MOS notes contain and describe these scripts:

  • 11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
  • 12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
  • V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

During a migration project from Linux to SPARC (don’t ask) I wanted to use those scripts. But I experienced the following during “Phase 2”:

oracle@server2:~/xtts> $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
============================================================
trace file is /oracle/xtts/backup_Sep20_Thu_12_20_52_917//Sep20_Thu_12_20_52_917_.log
=============================================================

[…]

--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------


--------------------------------------------------------------------
Find list of datafiles in system
--------------------------------------------------------------------

SP2-0027: Input is too long (> 2499 characters) - line ignored
SP2-0027: Input is too long (> 2499 characters) - line ignored
           from dual
           *
ERROR at line 19:
ORA-00936: missing expression

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error:
------
Error in executing /oracle/xtts/backup_Sep20_Thu_12_27_07_889//diff.sql

Okay. The reason is that I have loads of tablespaces to transfer, and the resulting list of datafiles broke the generated SQL script. So I tried to understand the process and do it myself. What I learned is that it is quite easy to do manually. I will share my findings by outlining the basic process in an upcoming blog post.


Parallel Limit of RMAN Duplicate

It has been a long time since my last post, and there are a lot of topics in the pipeline. So it’s about time to get started.
Last October I was part of a PoC that a customer initiated to find out whether Solaris SPARC together with a ZFS Storage Appliance might be a good platform to migrate and consolidate their systems to. One requirement was a Data Guard setup, so I needed to create the standby database from the primary. I use RMAN for this, and since SPARC platforms typically benefit from heavy parallelization, I tried to use as many channels as possible.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 40 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

Unfortunately this failed:

released channel: ORA_AUX_DISK_38
released channel: ORA_AUX_DISK_39
released channel: ORA_AUX_DISK_40
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/18/2018 12:02:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-17619: max number of processes using I/O slaves in a instance reached

The documentation says:

$ oerr ora 17619
17619, 00000, "max number of processes using I/O slaves in a instance reached"
// *Cause:  An attempt was made to start large number of processes
//          requiring I/O slaves.
// *Action: There can be a maximum of 35 processes that can have I/O
//          slaves at any given time in a instance.

Ok, there is a limit for I/O slaves per instance. By the way, this is all single instance, no RAC. So I reduced the number of channels to 35 and tried again.

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Oct 18 12:05:09 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> startup clone nomount force
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 35 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

But soon the duplicate errored out again.

channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using network backup set from service olga9788:1521/eddppocb
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00004 to /u02/app/oracle/oradata/POCDBB/datafile/o1_mf_sysaux__944906718442_.dbf
PSDRPC returns significant error 1013.
PSDRPC returns significant error 1013.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/18/2018 12:09:13
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script

ORA-19845: error in backupSetDatafile while communicating with remote database server
ORA-17628: Oracle error 17619 returned by remote Oracle server
ORA-17619: max number of processes using I/O slaves in a instance reached
ORA-19660: some files in the backup set could not be verified
ORA-19661: datafile 4 could not be verified
ORA-19845: error in backupSetDatafile while communicating with remote database server
ORA-17628: Oracle error 17619 returned by remote Oracle server
ORA-17619: max number of processes using I/O slaves in a instance reached

Obviously the instance still tried to allocate too many I/O slaves. I assume the target channels and the auxiliary channels both count against the per-instance limit. That’s why I tried again with a parallelism of 16, which results in at most 2 × 16 = 32 processes using I/O slaves and thus stays below the limit of 35.

RMAN> connect target sys/***@pocsrva:1521/pocdba
RMAN> connect auxiliary sys/***@pocsrvb:1521/pocdbb
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO BACKUPSET;
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5>   set db_unique_name='POCDBB'
6>   reset control_files
7>   reset service_names
8> nofilenamecheck
9> dorecover;

With this configuration the duplicate completed without any further issues. Parallelization is good, but it has its limits.

Recyclebin vs. Autoextend

At the end of last year I presented at the Nuremberg DOAG Regional Meetup. One of my talks was about Flashback and un-dropping a table. The question came up whether a tablespace under space pressure will autoextend or overwrite objects in the recyclebin. I did not have an answer, so I investigated. Here’s the result.

First I created a user and a tablespace and granted some privileges to the user:

[oracle@oel6u4 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 5 12:02:44 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SYS@perf> create tablespace test datafile size 10m autoextend on next 1m;

Tablespace created.

SYS@perf> create user test identified by test default tablespace test quota unlimited on test;

User created.

SYS@perf> grant create session to test;

Grant succeeded.

SYS@perf> grant create table to test;

Grant succeeded.

SYS@perf> grant select any dictionary to test;

Grant succeeded.

This user can now be used to test the behaviour. First, I created a table that does not fit into the initial size of the tablespace.

SYS@perf> conn test/test
Connected.
TEST@perf> show parameter recyclebin

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
recyclebin                           string      on
TEST@perf> create table test as select * from all_source;

Table created.

The alert.log shows some file resize operations during this table creation:

2018-01-05 12:03:09.686000 +01:00
create tablespace test datafile size 10m autoextend on next 1m
Completed: create tablespace test datafile size 10m autoextend on next 1m
2018-01-05 12:05:08.112000 +01:00
Resize operation completed for file# 6, old size 10240K, new size 11264K
Resize operation completed for file# 6, old size 11264K, new size 12288K
2018-01-05 12:05:09.325000 +01:00
Resize operation completed for file# 6, old size 12288K, new size 13312K

Let’s check the current space usage from the database perspective:

TEST@perf> select sum(bytes),segment_name from user_segments group by segment_name;

SUM(BYTES) SEGMENT_NAME
---------- ----------------------------------------
  12582912 TEST

TEST@perf> select sum(bytes) from dba_free_space where tablespace_name='TEST';

SUM(BYTES)
----------
   1048576

TEST@perf> select sum(bytes) from dba_data_files where tablespace_name='TEST';

SUM(BYTES)
----------
  13631488

So I dropped the table and checked the situation again:

TEST@perf> drop table test;

Table dropped.

TEST@perf> show recyclebin
ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
TEST             BIN$YgZ7U68dFi7gU244qMDQ2Q==$0 TABLE        2018-01-05:12:09:26

TEST@perf> select sum(bytes),segment_name from user_segments group by segment_name;

SUM(BYTES) SEGMENT_NAME
---------- ----------------------------------------
  12582912 BIN$YgZ7U68dFi7gU244qMDQ2Q==$0

TEST@perf> select sum(bytes) from dba_free_space where tablespace_name='TEST';

SUM(BYTES)
----------
  13631488

Ok, the dropped segment is still there, but the whole tablespace is reported as free space.
Now for the funny part: what happens if I create another table?

TEST@perf> create table test as select * from all_source;

Table created.

TEST@perf> select sum(bytes),segment_name from user_segments group by segment_name;

SUM(BYTES) SEGMENT_NAME
---------- ----------------------------------------
  12582912 TEST

TEST@perf> show recyclebin
TEST@perf>

That means Oracle prefers to overwrite the dropped segment instead of increasing the tablespace size. That makes sense in my opinion: the “flashback to before drop” feature is meant to help in emergencies when something really bad has happened, not to keep old things around for as long as possible.
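By the way, if you want to see how much space dropped objects still occupy before Oracle reuses it, DBA_RECYCLEBIN can tell you. A quick sketch (SPACE is in blocks; an 8 KB block size is assumed here):

SQL> select owner, count(*) as objects,
  2         round(sum(space) * 8192 / 1024 / 1024) as mb
  3    from dba_recyclebin
  4   group by owner;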
So if you want to keep your objects, just do not drop them 🙂

RMAN delete obsolete but keep archivelogs

When doing RMAN backups we typically have one backup strategy and one retention policy. But the retention policy always considers not only backups and copies but also archivelogs. In some cases one might want to keep archivelogs for a longer period of time, for instance when running a non-Data-Guard standby database using DBVisit. Let’s say we want to keep two generations of backups; we would configure the following:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
new RMAN configuration parameters are successfully stored

Now let’s see what backups we have:

RMAN> list backup of database summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
23      B  F  A DISK        22.12.2016 09:19:14 1       1       NO         TAG20161222T091750
27      B  F  A DISK        22.12.2016 10:02:33 1       1       NO         TAG20161222T100145
28      B  F  A DISK        22.12.2016 10:02:57 1       1       NO         TAG20161222T100145
29      B  F  A DISK        22.12.2016 10:03:22 1       1       NO         TAG20161222T100145
33      B  F  A DISK        22.12.2016 10:17:14 1       1       NO         TAG20161222T101632
34      B  F  A DISK        22.12.2016 10:17:32 1       1       NO         TAG20161222T101632
35      B  F  A DISK        22.12.2016 10:18:00 1       1       NO         TAG20161222T101632


RMAN> list backup of archivelog all summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
24      B  A  A DISK        22.12.2016 09:19:30 1       1       NO         TAG20161222T091930
26      B  A  A DISK        22.12.2016 10:01:34 1       1       NO         TAG20161222T100127
30      B  A  A DISK        22.12.2016 10:03:35 1       1       NO         TAG20161222T100335
32      B  A  A DISK        22.12.2016 10:16:24 1       1       NO         TAG20161222T101615
36      B  A  A DISK        22.12.2016 10:18:16 1       1       NO         TAG20161222T101815

So we have three database backups and corresponding backups of archivelogs. And what archivelogs are still on disk?

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name OLTP
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - -------------------
46      1    15      A 21.12.2016 12:56:40
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

41      1    16      A 22.12.2016 09:17:31
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc

43      1    17      A 22.12.2016 09:19:29
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc

35      1    18      A 22.12.2016 10:01:26
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc

36      1    19      A 22.12.2016 10:03:33
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc

38      1    20      A 22.12.2016 10:16:10
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc

45      2    11      A 21.12.2016 12:56:30
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc

42      2    12      A 22.12.2016 09:17:20
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc

44      2    13      A 22.12.2016 09:19:14
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc

40      2    14      A 22.12.2016 10:01:12
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc

37      2    15      A 22.12.2016 10:03:18
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc

39      2    16      A 22.12.2016 10:15:58
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc

Those are all the archivelogs that would be needed to recover from the oldest database backup. Now let’s see what is obsolete:

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp
Archive Log          43     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
Archive Log          41     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
Archive Log          42     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc
Archive Log          44     22.12.2016 10:25:31 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
Archive Log          45     22.12.2016 10:25:35 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
Archive Log          46     22.12.2016 10:25:37 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

The oldest backup is obsolete, which is fine since we already have three full backups. All the archivelogs are obsolete too, since they were already backed up and can be restored from those backups in case of emergency. But what can we do if we want to keep the archivelogs on disk as long as possible? A “delete obsolete” would remove them along with the outdated database backup.
Let’s try to keep them using “change”:

RMAN> change archivelog all keep until time 'sysdate+3';

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 instance=oltp_1 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of KEEP command at 12/22/2016 10:29:34
RMAN-06529: CHANGE ... KEEP not supported for ARCHIVELOG

So that is not an option. Maybe changing them to “unavailable” will do the job?

RMAN> change archivelog all unavailable;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of unavailable command on ORA_DISK_1 channel at 12/22/2016 10:30:43
ORA-19813: cannot have unavailable file /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc in DB_RECOVERY_FILE_DEST

It is not possible to mark archivelog files in the FRA as unavailable. There must be another way to exempt the archivelogs from the retention policy. The policy applies to all files known to the database, so let’s make the database forget about the archivelogs:

RMAN> change archivelog all uncatalog;

uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc RECID=46 STAMP=931256737
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc RECID=41 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc RECID=43 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc RECID=35 STAMP=931255414
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc RECID=36 STAMP=931256170
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc RECID=38 STAMP=931256291
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc RECID=45 STAMP=931256735
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc RECID=42 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc RECID=44 STAMP=931256731
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc RECID=40 STAMP=931256729
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc RECID=37 STAMP=931256158
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc RECID=39 STAMP=931256278
Uncataloged 12 objects

That worked. The files are still there, but the database does not know about them anymore. Now we can apply the policy in the way we want it:

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp

The oldest backup is still obsolete, but no archivelogs are listed anymore. That’s what we wanted to achieve. I can now remove the old backup:

RMAN> delete noprompt obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp RECID=23 STAMP=931252741
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp RECID=24 STAMP=931252770
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp RECID=25 STAMP=931252773
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp RECID=26 STAMP=931255288
Deleted 4 objects

Afterwards I can re-register the archivelogs.

RMAN> catalog recovery area noprompt;

searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc

The archivelog files are back in the database catalog. Now I can handle them separately and apply any rule I want.

RMAN> delete noprompt archivelog until time 'sysdate-1/12';

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 instance=oltp_1 device type=DISK
List of Archived Log Copies for database with db_unique_name OLTP
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - -------------------
49      1    15      A 21.12.2016 12:56:40
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

50      2    11      A 21.12.2016 12:56:30
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc

deleted archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc RECID=49 STAMP=931257331
deleted archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc RECID=50 STAMP=931257331
Deleted 2 objects

Not very straightforward, but it does the job.

Inspired by a comment on Twitter from Franck Pachot: the best way to keep a history of archivelogs is not to use “ALL” but “FROM”. So the approach to keep a history of three days would be the following:

RMAN> change archivelog from time 'sysdate-3' uncatalog;
RMAN> delete noprompt obsolete;
RMAN> catalog recovery area noprompt;

That way all archivelogs from the last three days are uncataloged. All other archivelogs are handled by the “delete obsolete” operation and there is no need to remove archivelogs manually.
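If this housekeeping should run regularly, a tiny wrapper does the job. A hypothetical sketch (adjust ORACLE_SID, the environment and the three-day window to your needs):

#!/bin/sh
# keep the last 3 days of archivelogs on disk; the retention
# policy handles all other backups and archivelogs
export ORACLE_SID=oltp_1
rman target / <<EOF
change archivelog from time 'sysdate-3' uncatalog;
delete noprompt obsolete;
catalog recovery area noprompt;
EOF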

How DB_LOST_WRITE_PROTECT works with Data Guard

Preface

Today I will blog again about a Data Guard topic. There are a couple of best practices out there which one should follow. One of these best practices is enabling block checking and lost write protection. About the latter there is not much information out there, so I want to outline the concept and the importance of this feature. Actually this post is inspired by a conversation at DOAG Conference 2016: I gave a presentation about best practices in Data Guard, and someone from the audience asked how that lost write protection actually works.
Basically it is there to detect lost writes, as the parameter name clearly states. That means a write request to disk was confirmed and the database is happy with that, but the write did not actually happen for whatever reason. So when the block is read the next time, it is still in its old state; any changed, deleted or added values are not included. The block itself is consistent, it is not corrupted. The DBA will not notice, since there is no error. An error occurs only when you restore the tablespace containing the block and then try to apply the redo stream. The recovery will detect a newer SCN in the redo stream which does not match the block’s SCN. That is the point where it gets tricky.

The test environment

My simple test cases run on a VirtualBox VM with OEL 6.7, Oracle Restart 12.1.0.2 and Oracle Database 12.1.0.2. Primary and Standby run on the same host.
DB_NAME: db12c
DB_UNIQUE_NAME: db12ca
DB_UNIQUE_NAME: db12cb
You will see the names in my SQL prompt to make things clear.

This is the current state of the system:

SYS@db12ca> show parameter lost

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_lost_write_protect                string                            NONE

SYS@db12ca> select database_role from v$database;

DATABASE_ROLE
------------------------------------------------
PHYSICAL STANDBY
SYS@db12cb> show parameter lost


NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_lost_write_protect                string                            NONE


SYS@db12cb> select database_role from v$database;

DATABASE_ROLE
------------------------------------------------
PRIMARY

So “db12cb” is my primary and “db12ca” my standby instance. By the way, I gave them the suffixes “a” and “b” because they may change roles over and over again.

For testing I create a separate tablespace with manual segment space management. This allows me to specify FREELISTS=1; otherwise the changes to my data may end up in different blocks, which is not what I want for this test. Besides that, I create a user for testing and grant it the necessary privileges.

SYS@db12cb> create tablespace marco datafile size 100m segment space management manual;

Tablespace created.

SYS@db12cb> create user marco identified by marco default tablespace marco quota unlimited on marco;

User created.

SYS@db12cb> grant create session to marco;

Grant succeeded.

SYS@db12cb> grant create table to marco;

Grant succeeded.

Scenario #1: No Lost Write Detection

The new user can now create a table and insert some data, so let’s do that.

SYS@db12cb> conn marco/marco
Connected.
MARCO@db12cb> create table testtable (id number, txt varchar2(100)) storage (freelists 1);

Table created.

MARCO@db12cb> insert into testtable values (1, 'Test Lost Write Detection - 1');

1 row created.

MARCO@db12cb> commit;

Commit complete.

Now we can identify the block and check if the data is really in there.

SYS@db12cb> select file_name from dba_data_files where tablespace_name='MARCO';

FILE_NAME
------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf

SYS@db12cb> select block_id, blocks from dba_extents where segment_name='TESTTABLE' and owner='MARCO';

  BLOCK_ID     BLOCKS
---------- ----------
       128          8

SYS@db12cb> alter system checkpoint;

System altered.
[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf of=myblock.v1 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000162476 s, 50.4 MB/s
[oracle@oel6u4 ~]$ grep Detection myblock.v1
Binary file myblock.v1 matches

Ok, the data is in that block. In the same way I can now check if the DML was successfully applied on the standby.

SYS@db12ca> alter system flush buffer_cache;

System altered.

SYS@db12ca> select name from v$datafile where name like '%marco%';

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf
[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf of=sbblock.v1 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000662024 s, 12.4 MB/s
[oracle@oel6u4 ~]$ grep Detection sbblock.v1
Binary file sbblock.v1 matches

So far everything is fine, as it should be.
I will now insert another row into the test table, force that change to be written to disk and then clear the buffer cache.

MARCO@db12cb> insert into testtable values (2, 'Oh my god!');

1 row created.

MARCO@db12cb> commit;

Commit complete.

MARCO@db12cb>

MARCO@db12cb> conn / as sysdba
Connected.
SYS@db12cb> alter system checkpoint;

System altered.

SYS@db12cb> alter system flush buffer_cache;

System altered.

Again, check if it was written to disk.

[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf of=myblock.v2 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000318304 s, 25.7 MB/s
[oracle@oel6u4 ~]$ grep Detection myblock.v2
Binary file myblock.v2 matches
[oracle@oel6u4 ~]$ grep god myblock.v2
Binary file myblock.v2 matches

Both values that I inserted are on disk now. Just to make sure everything is ok, I check the block on the standby.

[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf of=sbblock.v2 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000162124 s, 50.5 MB/s
[oracle@oel6u4 ~]$ grep Detection sbblock.v2
Binary file sbblock.v2 matches
[oracle@oel6u4 ~]$ grep god sbblock.v2
Binary file sbblock.v2 matches

So far, so good. Now comes the funny part. I will simulate a lost write by just putting my first extracted block back in the datafile.

[oracle@oel6u4 ~]$ dd if=myblock.v1 of=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf seek=129 count=1 bs=8192 conv=notrunc
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000154517 s, 53.0 MB/s

Now let us query the test table and see what’s happening.

MARCO@db12cb> select * from testtable;

        ID
----------
TXT
--------------------------------------------------------------------------------
         1
Test Lost Write Detection - 1

No error, no warning, just the result. But the result set obviously lacks the row from the second insert. And as the block is completely intact and not corrupted, there is no need to raise any error.
So now it is time to do another INSERT.

MARCO@db12cb> insert into testtable values (3, 'Inconsistency!');

1 row created.

That is the point where it comes to light. The redo apply on the standby database detects a redo record which does not match the data block it has. It has no choice but to stop recovery and raise an error in the alert.log.

2016-11-26 09:52:02.752000 +01:00
ERROR: ORA-00600: internal error code, arguments: [3020] recovery detected a data block with invalid SCN raised at location:kcbr_media_ap_1
Checker run found 1 new persistent data failures
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr02_2466.trc  (incident=2705):
ORA-00600: internal error code, arguments: [3020], [2], [129], [8388737], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 2, block# 129, file offset is 1056768 bytes)
ORA-10564: tablespace MARCO
ORA-01110: data file 2: '/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 93368
2016-11-26 09:52:03.882000 +01:00
Incident details in: /u01/app/oracle/diag/rdbms/db12ca/db12ca/incident/incdir_2705/db12ca_pr02_2466_i2705.trc

Besides that, the primary keeps running fine: it accepts changes, commits, and just does what a database is supposed to do. This is very unkind, since the only way to recover from such a situation is to fail over to the standby and lose all changes that happened after the change to the damaged block. And that can be a lot.

Scenario #2: Lost Write Detection enabled

I enable it by simply setting the parameter to typical on both instances.

SYS@db12ca> alter system set db_lost_write_protect=typical;

System altered.
SYS@db12cb> alter system set db_lost_write_protect=typical;

System altered.

This parameter forces the database to record the SCNs of all blocks that it reads from disk in the redo stream. The standby database can use this information to compare the recorded SCN from the redo stream to the actual SCN of the block at the standby site. If there is a difference, it can report a lost write.

Now I walk through the same steps as above. But this time, after simulating the lost write, I simply query the table.

MARCO@db12cb> select * from testtable;

        ID
----------
TXT
--------------------------------------------------------------------------------
         1
Test Lost Write Detection - 1

The SELECT succeeds, but the alert.log of the primary reports the following error.

2016-11-26 10:40:47.143000 +01:00
DMON: A primary database lost write was reported by standby database db12ca. Please look at the alert and DRC logs of the standby database db12ca to see more information about the lost write.

The standby’s alert.log now reports an ORA-752 instead of an ORA-600.

No redo at or after SCN 3448159 can be used for recovery.
PR02: Primary database lost write detected by standby database db12ca
BLOCK THAT LOST WRITE 129, FILE 2, TABLESPACE# 7
The block read during the normal successful database operation had SCN 3346737 (0x0000.00331131) seq 1 (0x01)
ERROR: ORA-00752 detected lost write on primary
Slave exiting with ORA-752 exception
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr02_2924.trc:
ORA-00752: recovery detected a lost write of a data block
ORA-10567: Redo is inconsistent with data block (file# 2, block# 129, file offset is 1056768 bytes)
ORA-10564: tablespace MARCO
ORA-01110: data file 2: '/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3lnpn8n_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 93369
Recovery Slave PR02 previously exited with exception 752
MRP0: Background Media Recovery terminated with error 448
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr00_2919.trc:
ORA-00448: normal completion of background process

Recovering from a lost write

As in scenario #1, the only way to work around this error is to fail over to the standby database.

[oracle@oel6u4 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@db12ca
Password:
Connected as SYSDBA.
DGMGRL> failover to db12ca immediate
Performing failover NOW, please wait...
Failover succeeded, new primary is "db12ca"

Now I can query my test table at the new primary.

SYS@db12ca> select * from marco.testtable;

        ID TXT
---------- ------------------------------
         1 Test Lost Write Detection - 1
         2 Oh my god!

I now need to re-create the old primary. Reinstating it using Flashback Database will not work. The steps are basically these (a rough sketch follows the list):

  • remove database from configuration
  • recreate the database using duplicate
  • add database back to the configuration

A lot of effort for such a “small” failure….

Conclusion

Enabling lost write detection is crucial in a Data Guard setup. Lost writes are detected at read time, which allows you to perform recovery steps much earlier than without it. Nevertheless, lost writes should not occur at all. If one does occur, something really bad is going on in your environment and you need to investigate the root cause of the lost write.
That’s it, basically. I hope this makes things a little clearer.

Further reading

Resolving ORA-752 or ORA-600 [3020] During Standby Recovery (Doc ID 1265884.1)
Oracle Docs – Database Reference: DB_LOST_WRITE_PROTECT

OTN Appreciation Day: Flashback

Following Tim Hall’s suggestion, I will write about my favorite feature: Flashback.

The term “Flashback” covers several different technologies. The most useful ones in my opinion are Flashback Query and Flashback Drop, which shine in case of human error. Such errors are much more likely than errors caused by hardware failures; we are all human and we all make mistakes. In contrast to an RMAN restore, for instance, Flashback provides easy methods to recover from them.
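Two tiny examples of why I like them (table name and timestamp are made up, and Flashback Query of course depends on sufficient undo being available):

-- Flashback Query: read the data as it was 15 minutes ago
SQL> select * from emp as of timestamp systimestamp - interval '15' minute;

-- Flashback Drop: bring a dropped table back from the recyclebin
SQL> flashback table emp to before drop;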

I already wrote about this in the past; you can find the article here: Flashback – Recover without Recovery

ODA: Performance Impact due to Flashback Database

Here and there I am planning and actually doing migrations to the Oracle Database Appliance (ODA). I like this appliance because of its relative simplicity and ease of management. In most cases we connect two ODAs in different locations using Data Guard with the goal of maximizing availability. That means we have a primary 2-node RAC on ODA-1 and another 2-node RAC on ODA-2 which operates as standby.
In order to fulfill the requirements of availability and fault tolerance, the databases are configured for “Maximum Availability” (a setup sketch follows the list):

  • sufficient Standby Redologs configured – on the dedicated Redo SSDs
  • Force Logging mode
  • Flashback Database configured and activated – by definition located in the FRA
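A minimal sketch of that configuration (sizes and thread numbers are placeholders, and you typically want one standby redo log group more per thread than online log groups):

SQL> alter database add standby logfile thread 1 size 1g;
SQL> alter database force logging;
SQL> alter database flashback on;  -- requires db_recovery_file_dest to be set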

In terms of availability and manageability this is quite good, but in terms of performance it sometimes turned out to be really bad. Look at these AWR snippets:

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     88808 01-Sep-15 17:30:06       480       2.7
  End Snap:     88855 02-Sep-15 17:00:10       478       6.8
   Elapsed:            1,410.06 (mins)
   DB Time:            3,311.41 (mins)
Wait Classes by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        Avg             Avg
                                        Total Wait     Wait   % DB   Active
Wait Class                  Waits       Time (sec)     (ms)   time Sessions
---------------- ---------------- ---------------- -------- ------ --------
System I/O             11,522,327          161,304       14   81.2      1.9
DB CPU                                     101,562            51.1      1.2
User I/O               19,956,840           88,663        4   44.6      1.0
Background Wait Events                 DB/Inst: XXX/xxx  Snaps: 88808-88855
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was > .5%.  Value of null is truly 0

                                                             Avg                
                                        %Time Total Wait    wait    Waits   % bg
Event                             Waits -outs   Time (s)    (ms)     /txn   time
-------------------------- ------------ ----- ---------- ------- -------- ------
db file async I/O submit        969,898     0     90,151      93      0.3   46.5
db file parallel write        5,015,605     0     33,520       7      1.3   17.3
flashback log file write        617,118     0     30,295      49      0.2   15.6
Backup: MML write backup p   10,608,828     0     11,727       1      2.7    6.0
RMAN backup & recovery I/O      288,934     0      3,508      12      0.1    1.8

Remember, the flashback logs are stored inside the FRA. And where does the ODA place the FRA? On the slower part of the spinning disks. The system cannot be faster than the slowest link in the chain. And in this configuration the user I/Os are influenced by the flashback I/Os, making things even worse.
So turn off Flashback Database to improve performance.
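Disabling it is a single statement per database (depending on the version this may have to be done in MOUNT state, and remember the standby as well):

SQL> alter database flashback off;
SQL> select flashback_on from v$database;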

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     89528 16-Sep-15 17:30:45       393       5.0
  End Snap:     89575 17-Sep-15 17:00:44       576       5.2
   Elapsed:            1,409.98 (mins)
   DB Time:            2,792.69 (mins)
Wait Classes by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        Avg             Avg
                                        Total Wait     Wait   % DB   Active
Wait Class                  Waits       Time (sec)     (ms)   time Sessions
---------------- ---------------- ---------------- -------- ------ --------
System I/O              7,424,187           97,029       13   57.9      1.1
DB CPU                                      96,595            57.6      1.1
User I/O                8,191,268           63,769        8   38.1      0.8

Only about half the System I/O compared to the run with Flashback Database enabled.

Background Wait Events                 DB/Inst: XXX/xxx  Snaps: 89528-89575
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was > .5%.  Value of null is truly 0

                                                             Avg                
                                        %Time Total Wait    wait    Waits   % bg
Event                             Waits -outs   Time (s)    (ms)     /txn   time
-------------------------- ------------ ----- ---------- ------- -------- ------
db file async I/O submit        531,487     0     73,471     138      0.1   57.5
db file parallel write        2,384,365     0     15,169       6      0.7   11.9
Backup: MML write backup p   10,121,315     0     11,769       1      2.8    9.2
RMAN backup & recovery I/O      197,080     0      4,906      25      0.1    3.8

The background wait for “flashback log file write” is completely gone.

Keep that in mind when using Flashback Database. This also applies to hardware other than the ODA. Be careful which disks you use for the FRA and the flashback logs.