RMAN delete obsolete but keep archivelogs

When doing RMAN backups we typically have one backup strategy and one retention policy. But the retention policy always considers not only backups and copies but also archivelogs. In some cases you might want to keep archivelogs on disk for a longer period of time, for instance if you are running a non-Data-Guard standby database using Dbvisit Standby. Let's say we want to keep two generations of backups, we would do the following:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
new RMAN configuration parameters are successfully stored

Now let’s see what backups we have:

RMAN> list backup of database summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
23      B  F  A DISK        22.12.2016 09:19:14 1       1       NO         TAG20161222T091750
27      B  F  A DISK        22.12.2016 10:02:33 1       1       NO         TAG20161222T100145
28      B  F  A DISK        22.12.2016 10:02:57 1       1       NO         TAG20161222T100145
29      B  F  A DISK        22.12.2016 10:03:22 1       1       NO         TAG20161222T100145
33      B  F  A DISK        22.12.2016 10:17:14 1       1       NO         TAG20161222T101632
34      B  F  A DISK        22.12.2016 10:17:32 1       1       NO         TAG20161222T101632
35      B  F  A DISK        22.12.2016 10:18:00 1       1       NO         TAG20161222T101632


RMAN> list backup of archivelog all summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
24      B  A  A DISK        22.12.2016 09:19:30 1       1       NO         TAG20161222T091930
26      B  A  A DISK        22.12.2016 10:01:34 1       1       NO         TAG20161222T100127
30      B  A  A DISK        22.12.2016 10:03:35 1       1       NO         TAG20161222T100335
32      B  A  A DISK        22.12.2016 10:16:24 1       1       NO         TAG20161222T101615
36      B  A  A DISK        22.12.2016 10:18:16 1       1       NO         TAG20161222T101815

So we have three database backups and the corresponding backups of archivelogs. And which archivelogs are still on disk?

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name OLTP
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - -------------------
46      1    15      A 21.12.2016 12:56:40
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

41      1    16      A 22.12.2016 09:17:31
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc

43      1    17      A 22.12.2016 09:19:29
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc

35      1    18      A 22.12.2016 10:01:26
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc

36      1    19      A 22.12.2016 10:03:33
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc

38      1    20      A 22.12.2016 10:16:10
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc

45      2    11      A 21.12.2016 12:56:30
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc

42      2    12      A 22.12.2016 09:17:20
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc

44      2    13      A 22.12.2016 09:19:14
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc

40      2    14      A 22.12.2016 10:01:12
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc

37      2    15      A 22.12.2016 10:03:18
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc

39      2    16      A 22.12.2016 10:15:58
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc

These are all the archivelogs that would be needed to recover even the oldest database backup. Now let's see what is obsolete:

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp
Archive Log          43     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
Archive Log          41     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
Archive Log          42     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc
Archive Log          44     22.12.2016 10:25:31 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
Archive Log          45     22.12.2016 10:25:35 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
Archive Log          46     22.12.2016 10:25:37 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

The oldest backup is obsolete, which is fine since we already have three full backups. All the archivelogs are obsolete as well, since they have already been backed up and can be restored from those backups in case of emergency. But what can we do if we want to keep the archivelogs on disk as long as possible? A “delete obsolete” would remove them along with the outdated database backup.
Let's try to keep them using “change”:

RMAN> change archivelog all keep until time 'sysdate+3';

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 instance=oltp_1 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of KEEP command at 12/22/2016 10:29:34
RMAN-06529: CHANGE ... KEEP not supported for ARCHIVELOG

So that is not an option. Maybe changing them to “unavailable” might do the job?

RMAN> change archivelog all unavailable;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of unavailable command on ORA_DISK_1 channel at 12/22/2016 10:30:43
ORA-19813: cannot have unavailable file /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc in DB_RECOVERY_FILE_DEST

It is not possible to mark archivelog files in the FRA as unavailable. There must be another way to exempt the archivelogs from the retention policy. The policy applies to all files known to the database. So let's get rid of the archivelogs:

RMAN> change archivelog all uncatalog;

uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc RECID=46 STAMP=931256737
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc RECID=41 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc RECID=43 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc RECID=35 STAMP=931255414
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc RECID=36 STAMP=931256170
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc RECID=38 STAMP=931256291
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc RECID=45 STAMP=931256735
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc RECID=42 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc RECID=44 STAMP=931256731
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc RECID=40 STAMP=931256729
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc RECID=37 STAMP=931256158
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc RECID=39 STAMP=931256278
Uncataloged 12 objects

That worked. The files are still there but the database does not know that anymore. Now we can apply the policy in the way we want it:

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp

The oldest backup is still obsolete, but no archivelogs are listed anymore. That's what we wanted to achieve. I can now remove the old backup:

RMAN> delete noprompt obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp RECID=23 STAMP=931252741
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp RECID=24 STAMP=931252770
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp RECID=25 STAMP=931252773
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp RECID=26 STAMP=931255288
Deleted 4 objects

Afterwards I can re-register the archivelogs.

RMAN> catalog recovery area noprompt;

searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc

The archivelog files are back in the database catalog. Now I can handle them separately and apply any rule I want.

RMAN> delete noprompt archivelog until time 'sysdate-1/12';

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 instance=oltp_1 device type=DISK
List of Archived Log Copies for database with db_unique_name OLTP
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - -------------------
49      1    15      A 21.12.2016 12:56:40
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

50      2    11      A 21.12.2016 12:56:30
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc

deleted archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc RECID=49 STAMP=931257331
deleted archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc RECID=50 STAMP=931257331
Deleted 2 objects

Not very straightforward, but it does the job.

Inspired by a comment on Twitter by Franck Pachot, a better way to keep a history of archivelogs is not to use “ALL” but “FROM”. So the approach to keep a history of three days would be the following:

RMAN> change archivelog from time 'sysdate-3' uncatalog;
RMAN> delete noprompt obsolete;
RMAN> catalog recovery area noprompt;

That way all archivelogs from the last three days are uncataloged and thus exempt from the retention policy. All other archivelogs are handled by the “delete obsolete” operation, so there is no need to remove archivelogs manually.
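Putting it all together, this can be wrapped into a small shell script, for example for a nightly cron job. This is just a minimal sketch based on the commands above; it assumes the redundancy-2 policy from this post and a three-day archivelog window:

[oracle@oel6u4 ~]$ rman target / <<EOF
# hide the last three days of archivelogs from the retention policy
change archivelog from time 'sysdate-3' uncatalog;
# let the policy remove obsolete backups and older archivelogs
delete noprompt obsolete;
# re-register the uncataloged archivelogs from the FRA
catalog recovery area noprompt;
EOF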

How DB_LOST_WRITE_PROTECT works with Data Guard

Preface

Today I will blog again about a Data Guard topic. There are a couple of best practices out there which one should follow. One of these best practices is enabling block checking and lost write protection. About the latter there is not much information out there, so I want to outline the concept and importance of this feature. Actually this post is inspired by a talk I had during the DOAG Conference 2016: I gave a presentation about best practices in Data Guard, and someone from the audience asked how that lost write protection actually works.
Basically it is there to detect lost writes, as the parameter name clearly states. That means a write request to disk was acknowledged and the database is happy with that, but the write did not actually happen for whatever reason. So when the block is read the next time, it is still in its old state; any changed, deleted or added values are not included. The block itself is consistent, it is not corrupted, so the DBA will not notice anything since there is no error. An error occurs only when you restore the tablespace containing the block and then try to apply the redo stream: the recovery will detect a newer SCN in the redo stream which does not match the block's SCN. That is the point where it gets tricky.

The test environment

My simple test cases run on a VirtualBox VM with OEL 6.7, Oracle Restart 12.1.0.2 and Oracle Database 12.1.0.2. Primary and Standby run on the same host.
DB_NAME: db12c
DB_UNIQUE_NAME: db12ca
DB_UNIQUE_NAME: db12cb
You will see the names in my SQL prompt to make things clear.

This is the current state of the system:

SYS@db12ca> show parameter lost

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_lost_write_protect                string                            NONE

SYS@db12ca> select database_role from v$database;

DATABASE_ROLE
------------------------------------------------
PHYSICAL STANDBY
SYS@db12cb> show parameter lost


NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_lost_write_protect                string                            NONE


SYS@db12cb> select database_role from v$database;

DATABASE_ROLE
------------------------------------------------
PRIMARY

So “db12cb” is my primary and “db12ca” my standby instance. By the way, I gave them the suffixes “a” and “b” because they may change roles over and over again.

For testing I create a separate tablespace with manual segment space management. This allows me to specify FREELISTS=1; otherwise the changes to my data may end up in different blocks, which is not what I want for my testing. Besides that, I create a user which I will use for testing and grant it the necessary privileges.

SYS@db12cb> create tablespace marco datafile size 100m segment space management manual;

Tablespace created.

SYS@db12cb> create user marco identified by marco default tablespace marco quota unlimited on marco;

User created.

SYS@db12cb> grant create session to marco;

Grant succeeded.

SYS@db12cb> grant create table to marco;

Grant succeeded.

Scenario #1: No Lost Write Detection

The new user can now create a table and insert some data, so let’s do that.

SYS@db12cb> conn marco/marco
Connected.
MARCO@db12cb> create table testtable (id number, txt varchar2(100)) storage (freelists 1);

Table created.

MARCO@db12cb> insert into testtable values (1, 'Test Lost Write Detection - 1');

1 row created.

MARCO@db12cb> commit;

Commit complete.

Now we can identify the block and check if the data is really in there.

SYS@db12cb> select file_name from dba_data_files where tablespace_name='MARCO';

FILE_NAME
------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf

SYS@db12cb> select block_id, blocks from dba_extents where segment_name='TESTTABLE' and owner='MARCO';

  BLOCK_ID     BLOCKS
---------- ----------
       128          8

SYS@db12cb> alter system checkpoint;

System altered.
[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf of=myblock.v1 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000162476 s, 50.4 MB/s
[oracle@oel6u4 ~]$ grep Detection myblock.v1
Binary file myblock.v1 matches

Ok, the data is in that block. In the same way I can now check if the DML was successfully applied on the standby.

SYS@db12ca> alter system flush buffer_cache;

System altered.

SYS@db12ca> select name from v$datafile where name like '%marco%';

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf
[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf of=sbblock.v1 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000662024 s, 12.4 MB/s
[oracle@oel6u4 ~]$ grep Detection sbblock.v1
Binary file sbblock.v1 matches

So everything is fine so far, as it should be.
I will now insert another row into the test table, force that change to be written to disk, and then clear the buffer cache.

MARCO@db12cb> insert into testtable values (2, 'Oh my god!');

1 row created.

MARCO@db12cb> commit;

Commit complete.

MARCO@db12cb>

MARCO@db12cb> conn / as sysdba
Connected.
SYS@db12cb> alter system checkpoint;

System altered.

SYS@db12cb> alter system flush buffer_cache;

System altered.

Again, check if it was written to disk.

[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf of=myblock.v2 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000318304 s, 25.7 MB/s
[oracle@oel6u4 ~]$ grep Detection myblock.v2
Binary file myblock.v2 matches
[oracle@oel6u4 ~]$ grep god myblock.v2
Binary file myblock.v2 matches

Both values that I inserted are on disk now. Just to make sure everything is ok, I check the block on the standby.

[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf of=sbblock.v2 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000162124 s, 50.5 MB/s
[oracle@oel6u4 ~]$ grep Detection sbblock.v2
Binary file sbblock.v2 matches
[oracle@oel6u4 ~]$ grep god sbblock.v2
Binary file sbblock.v2 matches

So far, so good. Now comes the funny part. I will simulate a lost write by just putting my first extracted block back in the datafile.

[oracle@oel6u4 ~]$ dd if=myblock.v1 of=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf seek=129 count=1 bs=8192 conv=notrunc
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000154517 s, 53.0 MB/s

Now let us query the test table and see what’s happening.

MARCO@db12cb> select * from testtable;

        ID
----------
TXT
--------------------------------------------------------------------------------
         1
Test Lost Write Detection - 1

No error, no warning, just the result. But the result set obviously lacks the row from the second insert. And as the block is completely intact and not corrupted, there is no reason for the database to raise any error.
So now it is time to do another INSERT.

MARCO@db12cb> insert into testtable values (3, 'Inconsistency!');

1 row created.

That is the point where it comes to light. The redo apply of the standby database detects a redo record which does not match the data block it has. It has no choice but to stop recovery and raise an error in the alert.log.

2016-11-26 09:52:02.752000 +01:00
ERROR: ORA-00600: internal error code, arguments: [3020] recovery detected a data block with invalid SCN raised at location:kcbr_media_ap_1
Checker run found 1 new persistent data failures
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr02_2466.trc  (incident=2705):
ORA-00600: internal error code, arguments: [3020], [2], [129], [8388737], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 2, block# 129, file offset is 1056768 bytes)
ORA-10564: tablespace MARCO
ORA-01110: data file 2: '/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 93368
2016-11-26 09:52:03.882000 +01:00
Incident details in: /u01/app/oracle/diag/rdbms/db12ca/db12ca/incident/incdir_2705/db12ca_pr02_2466_i2705.trc

Besides that, the primary is still running fine, accepting changes, committing and just doing what a database is supposed to do. This is very unpleasant, since the only way to recover from such a situation is to fail over to the standby and lose all changes that happened after the change to the damaged block. And this can be a lot.

Scenario #2: Lost Write Detection enabled

I enable it by simply setting the parameter to typical on both instances.

SYS@db12ca> alter system set db_lost_write_protect=typical;

System altered.
SYS@db12cb> alter system set db_lost_write_protect=typical;

System altered.

This parameter forces the database to additionally record in the redo stream the SCN of each block that it reads from disk. The standby database can use this information to compare the recorded SCN from the redo stream with the actual SCN of the block at the standby site. If there is a difference, it can report a lost write.
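To quickly verify the setting on both sites, a simple check like this can be used (just an illustration; besides “typical” there is also a “full” setting, which extends the checking to read-only tablespaces):

SQL> select name, value from v$parameter where name = 'db_lost_write_protect';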

Now I walk through the same steps as above. But this time, after simulating the lost write, I simply query the table.

MARCO@db12cb> select * from testtable;

        ID
----------
TXT
--------------------------------------------------------------------------------
         1
Test Lost Write Detection - 1

The SELECT succeeds, but the alert.log of the primary reports the following error.

2016-11-26 10:40:47.143000 +01:00
DMON: A primary database lost write was reported by standby database db12ca. Please look at the alert and DRC logs of the standby database db12ca to see more information about the lost write.

The standby’s alert.log now reports an ORA-752 instead of an ORA-600.

No redo at or after SCN 3448159 can be used for recovery.
PR02: Primary database lost write detected by standby database db12ca
BLOCK THAT LOST WRITE 129, FILE 2, TABLESPACE# 7
The block read during the normal successful database operation had SCN 3346737 (0x0000.00331131) seq 1 (0x01)
ERROR: ORA-00752 detected lost write on primary
Slave exiting with ORA-752 exception
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr02_2924.trc:
ORA-00752: recovery detected a lost write of a data block
ORA-10567: Redo is inconsistent with data block (file# 2, block# 129, file offset is 1056768 bytes)
ORA-10564: tablespace MARCO
ORA-01110: data file 2: '/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3lnpn8n_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 93369
Recovery Slave PR02 previously exited with exception 752
MRP0: Background Media Recovery terminated with error 448
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr00_2919.trc:
ORA-00448: normal completion of background process

Recovering from a lost write

As in scenario #1, the only way to work around this error is to fail over to the standby database.

[oracle@oel6u4 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@db12ca
Password:
Connected as SYSDBA.
DGMGRL> failover to db12ca immediate
Performing failover NOW, please wait...
Failover succeeded, new primary is "db12ca"

Now I can query my test table at the new primary.

SYS@db12ca> select * from marco.testtable;

        ID TXT
---------- ------------------------------
         1 Test Lost Write Detection - 1
         2 Oh my god!

I now need to re-create the old primary; a reinstate using Flashback Database will not work. The steps are basically these, sketched below:

  • remove database from configuration
  • recreate the database using duplicate
  • add database back to the configuration
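A rough outline of those steps, assuming the Broker configuration and connect identifiers from this setup, with the db12cb instance started in NOMOUNT (a sketch only, not a tested recipe; the duplicate command needs the usual spfile and file name clauses for your environment):

DGMGRL> remove database db12cb;

RMAN> connect target sys@db12ca
RMAN> connect auxiliary sys@db12cb
RMAN> duplicate target database for standby from active database;

DGMGRL> add database db12cb as connect identifier is db12cb maintained as physical;
DGMGRL> enable database db12cb;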

A lot of effort for such a “small” failure….

Conclusion

Enabling lost write detection is crucial in a Data Guard setup. Lost writes are detected at read time, which allows you to perform recovery steps much earlier than without it. Nevertheless, lost writes should not occur at all. If one does occur, something really bad is going on in your environment and you need to investigate the root cause of the lost write.
That’s it, basically. I hope it makes things a little more clear.

Further reading

Resolving ORA-752 or ORA-600 [3020] During Standby Recovery (Doc ID 1265884.1)
Oracle Docs – Database Reference: DB_LOST_WRITE_PROTECT

DataPatch stuck on RAC – PSU October 2016

Yesterday one of my customers wanted to patch two 2-node clusters with the current PSU October 2016 (161018). Both are running 12.1.0.2 Grid Infrastructure and 12.1.0.2 Database, and the servers run SPARC Solaris 10. When applying the patch on the first cluster using “opatchauto”, everything went fine until the “trying to apply SQL Patch” part on the 2nd node. So I went to the log directory and found the following:

$ cd $ORACLE_BASE/cfgtoollogs/sqlpatch/sqlpatch_27075_2016_11_30_17_12_08
$ tail sqlpatch_catcon_0.log

SQL> GRANT SELECT ON sys.gv_$instance TO dv_secanalyst
  2  /

At that line it was stuck. Searching My Oracle Support brought up nothing helpful. So I had a look at the database sessions:

SQL> select sid, username, event, state, seconds_in_wait 
2    from v$session where username='SYS';

       SID USERNAME   EVENT                             STATE              SECONDS_IN_WAIT
---------- ---------- --------------------------------- ------------------ ---------------
        13 SYS        SQL*Net message from client       WAITING                        226
        30 SYS        SQL*Net message from client       WAITING                        473
        32 SYS        SQL*Net message to client         WAITED SHORT TIME                0
       411 SYS        SQL*Net message from client       WAITING                        473
       783 SYS        library cache lock                WAITING                        211
       786 SYS        SQL*Net message from client       WAITING                          4
      1155 SYS        SQL*Net message from client       WAITING                        467

Session 783 is waiting on a library cache lock, so it is blocked on something dictionary-related. Since the waiting statement was related to RAC, I stopped the other instance, which made sqlpatch continue immediately. So the workaround looked like this:

$ srvctl stop instance -db <dbname> -node <node1>
$ srvctl start instance -db <dbname> -node <node1>

This happened on both clusters. So be aware of that in case you are applying that PSU patch to RAC databases.
In case you miss stopping the 1st instance in time, the GRANT statement will run into a timeout (ORA-4021) and the SQL patch will be marked with “ERROR” in DBA_REGISTRY_SQLPATCH. In that case, just run “datapatch” again and monitor the logfile.
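To check the outcome of a datapatch run, the patch status can be queried from DBA_REGISTRY_SQLPATCH, for instance like this:

SQL> select patch_id, action, status, action_time
  2  from dba_registry_sqlpatch
  3  order by action_time;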
Happy patching.

Update 07-DEC-2016

I was not able to reproduce this issue on a Linux x86-64 system. So there is a chance that the issue is OS related.

Update 12-DEC-2016

Finally I reproduced this issue on my Linux x86-64 test system and opened an SR for it.

Update 13-DEC-2016

Thanks to a quick and efficient Oracle Support guy (yes, there are such people!) we found the root cause of the issue. There is a bug in the Enterprise Manager Agent (maybe in DB Express too) that makes it hold a shared lock on some GV$ views during the whole lifetime of a session. That's why datapatch got stuck. If you just stop the Agent, datapatch will continue immediately; there is no need to stop the whole instance. We just need to get rid of the Agent's sessions.
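So the gentler workaround looks like this sketch (the Agent home path is environment-specific, hence the placeholder):

# stop the EM Agent so its sessions release the shared locks
$ <AGENT_HOME>/bin/emctl stop agent
# re-run datapatch from the database home
$ $ORACLE_HOME/OPatch/datapatch -verbose
# start the Agent again afterwards
$ <AGENT_HOME>/bin/emctl start agent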
Thanks a lot to Prakash from Oracle Support for his engagement in investigating this issue.