Default Collation and PL/SQL

One of the new features introduced in Oracle Database 12.2 is the ability to define the default collation at schema, table or column level. This is quite nice because we no longer have to use NLS_SORT in each and every query.
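
To illustrate the feature itself, here is a quick sketch of the table- and column-level syntax. This is not part of the test case below, and as far as I know a named collation like BINARY_CI on an individual column additionally requires MAX_STRING_SIZE=EXTENDED:

SQL> create table t1 (txt varchar2(30)) default collation binary;
SQL> create table t2 (txt varchar2(30) collate binary_ci);
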
But obviously there are some restrictions. Let’s create two users, one with no specific collation and one that uses a non-default collation.

SQL> show user
USER is "SYS"
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 PDBMMI                         READ WRITE NO

SQL> create user usr_default identified by xxx;

User created.

SQL> grant create session, alter session, create procedure to usr_default;

Grant succeeded.

SQL> create user usr_collate identified by xxx default collation binary;

User created.

SQL> grant create session, alter session, create procedure to usr_collate;

Grant succeeded.

Now I create a simple piece of PL/SQL, first with the default user.

SQL> conn usr_default/xxx@pdbmmi
Connected.
SQL> show user
USER is "USR_DEFAULT"
SQL> select default_collation from user_users;

DEFAULT_COLLATION
--------------------------------------------------------------------------------
USING_NLS_COMP

SQL> create procedure do_nothing
  2  as
  3  begin
  4    null;
  5  end;
  6  /

Procedure created.

SQL> exec do_nothing;

PL/SQL procedure successfully completed.

SQL> drop procedure do_nothing;

Procedure dropped.

Nothing special here. Everything works as expected and as we have known it for years. So let’s try the same with the user that has a non-default collation.

SQL> conn usr_collate/xxx@pdbmmi
Connected.
SQL> show user
USER is "USR_COLLATE"
SQL> select default_collation from user_users;

DEFAULT_COLLATION
--------------------------------------------------------------------------------
BINARY

SQL> create procedure do_nothing
  2  as
  3  begin
  4    null;
  5  end;
  6  /

Warning: Procedure created with compilation errors.

SQL> show err
Errors for PROCEDURE DO_NOTHING:

LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0      PL/SQL: Compilation unit analysis terminated
0/0      PLS-00761: Program unit collation may only be USING_NLS_COMP

Obviously the PL/SQL compiler is not happy with the non-default collation. So we need to change it, at session level in my case, to make the procedure compile.

SQL> alter session set default_collation=USING_NLS_COMP;

Session altered.

SQL> alter procedure do_nothing compile;

Procedure altered.

SQL> show err
No errors.
SQL> exec do_nothing;

PL/SQL procedure successfully completed.

SQL> drop procedure do_nothing;

Procedure dropped.

Now it compiles without any warnings or errors.

So keep that in mind when you create users with a specific collation option.

Oracle 12.2 – SQL*Plus Command History

Currently I am preparing some slides for future events and will share some of the topics beforehand. The first topic is the one mentioned in the title: with Oracle Database 12.2 we finally get a command history in SQL*Plus. Though only a tiny addition, this is one of my favorite new features.
Let’s see how this works.

$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon May 8 09:49:04 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> history
SP2-1650: History is off, use "SET HIST[ORY] ON" to enable History.

Ok, the feature is disabled by default. We need to enable it by setting it either to “ON” or to a number specifying how many statements should be kept in the history.

SQL> set history 50

SQL> select * from dual;
D
-
X
SQL> history
  1  select * from dual;
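
If you want the history enabled in every session, the setting can go into a login script. A minimal sketch using the central glogin.sql; pick any size that suits you:

$ echo "set history 50" >> $ORACLE_HOME/sqlplus/admin/glogin.sql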

Let’s see what else is possible with the “history” command:

SQL> help history

 HISTORY
 -------

 Stores, lists, executes, edits of the commands
 entered during the current SQL*Plus session.

 HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]

 N is the entry number listed in the history list.
 Use this number to recall, edit or delete the command.

 Example:
 HIST 3 RUN - will run the 3rd entry from the list.

 HIST[ORY] without any option will list all entries in the list.

We can easily run any command by specifying its number from the history, or remove a command from the history.

SQL> hist
  1  select * from dual;
  2  select count(*) from dba_objects where status <> 'VALID';

SQL> hist 2 run

  COUNT(*)
----------
         0

SQL> hist 1 del
SQL> hist
  1  select count(*) from dba_objects where status <> 'VALID';

And by the way, the history works at statement level, not at line level. This is perhaps the most important advantage over OS-level command history using arrow-up and arrow-down (natively on Windows or with “rlwrap” on Linux).

SQL> select count(*) from dba_users
  2  where oracle_maintained = 'Y'
  3  and account_status = 'OPEN';

  COUNT(*)
----------
         2

SQL> hist
  1  select count(*) from dba_objects where status <> 'VALID';
  2  select count(*) from dba_users
     where oracle_maintained = 'Y'
     and account_status = 'OPEN';

SQL> hist 2 run

  COUNT(*)
----------
         2

Enjoy the new feature.

ORA-28545 when querying HS gateway in Oracle 12c

Since our customers are forced to move to Oracle Database 12.1, it is my job to support their upgrades. Recently I upgraded a Windows environment that used Heterogeneous Services to connect to an external ODBC data source via a database link. The configuration is a little tricky, but pretty straightforward once you get the idea.

First, there must be an ODBC data source. That is a simple System DSN, which was already in place.

Second, you have to choose a kind of “virtual” SID for your connection to this ODBC data source.

Third, create an init<virtual SID>.ora in the $ORACLE_HOME/hs/admin directory and put at least one parameter into it:

HS_FDS_CONNECT_INFO="<Name of ODBC System DSN>"
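
Optionally, a trace parameter can be prepared in the same file for troubleshooting later; just a suggestion, set it to DEBUG only while chasing a problem:

HS_FDS_TRACE_LEVEL=OFF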

Fourth, add a static entry for the virtual SID to the listener by modifying the listener.ora file:

SID_LIST_LISTENER_STD =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = <virtual SID>)
      (ORACLE_HOME = c:\oracle\product\12.1.0.2\dbprod)
      (PROGRAM = dg4odbc)
    )
  )

Then reload the listener configuration to make it active.

Fifth, create a tnsnames.ora entry that points to the listener and the virtual SID. Make sure you specify “HS=OK”.

MYODBCDS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = <listener host>)(PORT = 1521))
    (CONNECT_DATA =
      (SID = <virtual SID>)
    )
    (HS = OK)
  )

Sixth and finally, create a database link that points to the tnsnames entry:

SQL> create database link myodbcds connect to <user> identified by <pwd> using "MYODBCDS";

Database link created.

All those steps had been done years ago in the old 11.2 environment, and everything worked fine there. So I simply copied all the configuration details to the new 12.1 Oracle home, moved the listener to the new 12.1 home and upgraded the database. But when testing the HS connection, the following happened:

SQL*Plus: Release 12.1.0.2.0 Production on Tue Mar 28 11:30:45 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Tue Mar 28 2017 11:06:02 +02:00

Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> select * from dual@myodbcds ;
select * from dual@myodbcds
*
ERROR at line 1:
ORA-28545: error diagnosed by Net8 when connecting to an agent
Unable to retrieve text of NETWORK/NCR message 65535
ORA-02063: preceding 2 lines from MYODBCDS

I checked the listener services; the connection seemed to be established successfully.

C:\>lsnrctl services listener

LSNRCTL for 64-bit Windows: Version 12.1.0.2.0 - Production on 28-MAR-2017 11:39:05

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<hostname>)(PORT=1521)))
Services Summary...
Service "<virtual SID>" has 1 instance(s).
Instance "<virtual SID>", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:1 refused:0
LOCAL SERVER
The command completed successfully

So, to test things, I created a new listener in the old 11.2 home that used another port. The 11.2 listener worked fine. So what is the difference between the two listeners?
After a little research I stumbled upon the Windows services. The database service was not running as LocalSystem but as a domain user, because the database writes its backups to a UNC share. Both the 11.2 and the 12.1 listener used the LocalSystem account. So I changed the 12.1 listener to use the domain account too, and this solved the problem.

SQL*Plus: Release 12.1.0.2.0 Production on Tue Mar 28 11:39:10 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Tue Mar 28 2017 11:36:53 +02:00

Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> select * from dual@myodbcds;

D
-
X

I have no idea why the 11.2 listener can run as LocalSystem whilst the 12.1 listener cannot. But in general I think it is good practice to run the listener and database services with the same user credentials.
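
For reference, the service account can be changed in the Services console or on the command line. A quick sketch using the built-in “sc” tool; the service name below is just an example, check the real name with “sc query” first:

C:\> sc config OracleOraDB12Home1TNSListener obj= "MYDOMAIN\orasvc" password= <password>
C:\> net stop OracleOraDB12Home1TNSListener
C:\> net start OracleOraDB12Home1TNSListener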

12c Upgrade stuck at fixed object stats

A while ago I did some database upgrades to version 12.1.0.2, and one of those upgrades made me a bit nervous. I used “dbca” to perform the upgrade and it seemed to be stuck. So I went to the command prompt and started to investigate. What I found was this:

[Screenshot “12c_fixed_obj_stats”: a session waiting on “ADR block file read” while gathering statistics on X$DBKFDG]

A session was waiting for “ADR block file read” again and again. The SQL causing these waits can be seen in the screenshot too: it was gathering statistics for X$DBKFDG, part of the “gather fixed object stats” step.

A quick search in My Oracle Support did not bring up any helpful information. My first idea was to check the ADR, but I found it to be nearly empty, at least in the suspicious directories like “trace” etc. So I posted the issue on Twitter and asked for help. @realadrienne pointed to a similar issue related to the Recovery Advisor and its list of critical issues. So I checked that using

SQL> select * from v$ir_failure;

which returned loads of records with a priority of “critical” and a status of “closed”. So I tried to get rid of these records using “adrci”:

adrci> purge -age 1 -type hm

This took quite a while, about 10 minutes, to complete. But in the end the v$ir_failure view had no records anymore and the fixed object stats were gathered quickly.

So now there is an additional pre-upgrade check on my personal list that says “clean up v$ir_failure”. You should add it to your list too, to prevent unnecessary delays during database upgrades.
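
A minimal check like the following, run as SYSDBA right before the upgrade, shows whether there is anything to clean up; just a sketch of what I would add to a pre-upgrade script:

SQL> select priority, status, count(*)
  2  from v$ir_failure
  3  group by priority, status;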

Thanks to @realadrienne, @FranckPachot and @OracleSK for the quick and very helpful assistance.

Thanks also to @MikeDietrichDE for the feedback on this issue.

Things to consider when moving a database

A major task in a DBA’s life is keeping database versions and hardware up to date. This sounds easy, but actually it is not, especially if the system has not been touched for a long period of time. And there might also be a lot of applications using a database, which implies dependencies.
Today’s story is about a server migration that we did a couple of weeks ago. It meant moving 7 databases to new hardware, which we did by simply stopping the databases on the old server, stopping ASM on the old server, mounting the ASM disks to the new server(s) and creating cluster resources for the databases. But there were several other things to take care of. The following list is long but might not be complete, so please feel free to comment to make it as good as possible.

  • cron jobs of ALL users
    • copy cron entries and all referenced scripts/directories etc.
  • scripts
    • copy any scripts that are on the old system
  • local applications
    • copy any application and their configuration that might run on the old system
  • tnsnames.ora of ALL clients
    • prepare TNS aliases for the new system and activate them during the migration
    • notify all application owners who do not use tnsnames.ora or directory services (JDBC Thin, EZ-Connect, …)
  • external procedures
    • check the extproc configuration and copy any referenced external libraries
  • directories
    • adump
    • diagnostic_dest
    • directory objects
  • database links in ALL other databases
  • backup
    • RMAN configuration, especially locations for backups and snapshot controlfile
  • connect identifiers in applications that do not use tnsnames.ora (JDBC etc.)
  • SSH keys, authorized_keys
    • prepare any remote systems that were able to log in via SSH so they work seamlessly with the new server
    • add any necessary entries to known_hosts and/or authorized_keys on the new server
  • Monitoring systems
    • change your central monitoring tool to monitor the new environment


Once you have moved everything to the new system, it might be a good idea to keep the Oracle listener(s) up and running on the old system for a while. That way you can monitor the listener logfiles and identify any application/user/interface that was not updated and is still trying to connect to the old environment.
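
To see who is still connecting to the old environment, a rough one-liner over the listener log can help. A sketch, assuming the usual ADR log location and the default log format; adjust host and paths to your system:

$ grep -i " establish " /u01/app/oracle/diag/tnslsnr/oldhost/listener/trace/listener.log | grep -o "HOST=[^)]*" | sort | uniq -c | sort -rn

This counts connection attempts per client host, which makes the remaining stragglers easy to spot.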


RMAN delete obsolete but keep archivelogs

When doing RMAN backups, we typically have one backup strategy and one deletion policy. But the deletion policy always considers not only backups and copies but also archivelogs. In some cases you might want to keep archivelogs for a longer period of time, for instance if you are running a non-Data Guard standby database using DBVisit. Let’s say we want to keep two generations of backups; we would configure the following:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
new RMAN configuration parameters are successfully stored

Now let’s see what backups we have:

RMAN> list backup of database summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
23      B  F  A DISK        22.12.2016 09:19:14 1       1       NO         TAG20161222T091750
27      B  F  A DISK        22.12.2016 10:02:33 1       1       NO         TAG20161222T100145
28      B  F  A DISK        22.12.2016 10:02:57 1       1       NO         TAG20161222T100145
29      B  F  A DISK        22.12.2016 10:03:22 1       1       NO         TAG20161222T100145
33      B  F  A DISK        22.12.2016 10:17:14 1       1       NO         TAG20161222T101632
34      B  F  A DISK        22.12.2016 10:17:32 1       1       NO         TAG20161222T101632
35      B  F  A DISK        22.12.2016 10:18:00 1       1       NO         TAG20161222T101632


RMAN> list backup of archivelog all summary;


List of Backups
===============
Key     TY LV S Device Type Completion Time     #Pieces #Copies Compressed Tag
------- -- -- - ----------- ------------------- ------- ------- ---------- ---
24      B  A  A DISK        22.12.2016 09:19:30 1       1       NO         TAG20161222T091930
26      B  A  A DISK        22.12.2016 10:01:34 1       1       NO         TAG20161222T100127
30      B  A  A DISK        22.12.2016 10:03:35 1       1       NO         TAG20161222T100335
32      B  A  A DISK        22.12.2016 10:16:24 1       1       NO         TAG20161222T101615
36      B  A  A DISK        22.12.2016 10:18:16 1       1       NO         TAG20161222T101815

So we have three database backups and the corresponding backups of archivelogs. And which archivelogs are still on disk?

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name OLTP
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - -------------------
46      1    15      A 21.12.2016 12:56:40
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

41      1    16      A 22.12.2016 09:17:31
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc

43      1    17      A 22.12.2016 09:19:29
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc

35      1    18      A 22.12.2016 10:01:26
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc

36      1    19      A 22.12.2016 10:03:33
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc

38      1    20      A 22.12.2016 10:16:10
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc

45      2    11      A 21.12.2016 12:56:30
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc

42      2    12      A 22.12.2016 09:17:20
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc

44      2    13      A 22.12.2016 09:19:14
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc

40      2    14      A 22.12.2016 10:01:12
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc

37      2    15      A 22.12.2016 10:03:18
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc

39      2    16      A 22.12.2016 10:15:58
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc

Those are all the archivelogs that would be needed for a recovery of the older database backups. Now let’s see what is obsolete:

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp
Archive Log          43     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
Archive Log          41     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
Archive Log          42     22.12.2016 10:25:30 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc
Archive Log          44     22.12.2016 10:25:31 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
Archive Log          45     22.12.2016 10:25:35 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
Archive Log          46     22.12.2016 10:25:37 /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

The oldest backup is obsolete, which is fine since we already have three full backups. All the archivelogs are obsolete too, since they have already been backed up and can be restored from those backups in case of emergency. But what can we do if we want to keep the archivelogs on disk as long as possible? A “delete obsolete” would remove them along with the outdated database backup.
Let’s try to keep them using “change”:

RMAN> change archivelog all keep until time 'sysdate+3';

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 instance=oltp_1 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of KEEP command at 12/22/2016 10:29:34
RMAN-06529: CHANGE ... KEEP not supported for ARCHIVELOG

So that is not an option. Maybe changing them to “unavailable” might do the job?

RMAN> change archivelog all unavailable;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of unavailable command on ORA_DISK_1 channel at 12/22/2016 10:30:43
ORA-19813: cannot have unavailable file /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc in DB_RECOVERY_FILE_DEST

It is not possible to mark archivelog files in the FRA as unavailable. There must be another way to exempt the archivelogs from the retention policy. The policy applies to all files known to the database, so let’s remove the archivelogs from the catalog:

RMAN> change archivelog all uncatalog;

uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc RECID=46 STAMP=931256737
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc RECID=41 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc RECID=43 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc RECID=35 STAMP=931255414
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc RECID=36 STAMP=931256170
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc RECID=38 STAMP=931256291
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc RECID=45 STAMP=931256735
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc RECID=42 STAMP=931256730
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc RECID=44 STAMP=931256731
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc RECID=40 STAMP=931256729
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc RECID=37 STAMP=931256158
uncataloged archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc RECID=39 STAMP=931256278
Uncataloged 12 objects

That worked. The files are still there, but the database no longer knows about them. Now we can apply the policy in the way we want:

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
Report of obsolete backups and copies
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp

The oldest backup is still obsolete, but no archivelogs anymore. That’s what we wanted to achieve. I can now remove the old backup:

RMAN> delete noprompt obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 2
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           23     22.12.2016 09:19:16
  Backup Piece       23     22.12.2016 09:19:16 /u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp
Backup Set           24     22.12.2016 09:19:30
  Backup Piece       24     22.12.2016 09:19:30 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp
Backup Set           25     22.12.2016 09:19:34
  Backup Piece       25     22.12.2016 09:19:34 /u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp
Backup Set           26     22.12.2016 10:01:36
  Backup Piece       26     22.12.2016 10:01:36 /u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/43C6AA13C2390666E0538D24100A09EF/backupset/2016_12_22/o1_mf_nnndf_TAG20161222T091750_d5q33orv_.bkp RECID=23 STAMP=931252741
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T091930_d5q34ljw_.bkp RECID=24 STAMP=931252770
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/autobackup/2016_12_22/o1_mf_s_931252772_d5q34ol5_.bkp RECID=25 STAMP=931252773
deleted backup piece
backup piece handle=/u01/app/oracle/fra/OLTP/backupset/2016_12_22/o1_mf_annnn_TAG20161222T100127_d5q5m88s_.bkp RECID=26 STAMP=931255288
Deleted 4 objects

Afterwards I can re-register the archivelogs.

RMAN> catalog recovery area noprompt;

searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_20_d5q6lm3q_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_16_d5q6l6jm_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_13_d5q709ln_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_17_d5q709oq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_14_d5q709pq_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_18_d5q5q65p_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_19_d5q6gtmj_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_15_d5q6gg9k_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_16_d5q709x1_.arc
File Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_12_d5q709xo_.arc

The archivelog files are back in the database catalog. Now I can handle them separately and apply any rule I want.

RMAN> delete noprompt archivelog until time 'sysdate-1/12';

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=279 instance=oltp_1 device type=DISK
List of Archived Log Copies for database with db_unique_name OLTP
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - -------------------
49      1    15      A 21.12.2016 12:56:40
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc

50      2    11      A 21.12.2016 12:56:30
        Name: /u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc

deleted archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_1_15_d5q709b5_.arc RECID=49 STAMP=931257331
deleted archived log
archived log file name=/u01/app/oracle/fra/OLTP/archivelog/2016_12_22/o1_mf_2_11_d5q709gs_.arc RECID=50 STAMP=931257331
Deleted 2 objects

Not very straightforward, but it does the job.

Inspired by a comment on Twitter by Franck Pachot: the best way to keep a history of archivelogs is not to use “ALL” but “FROM”. So the approach to keep a history of three days would be the following:

RMAN> change archivelog from time 'sysdate-3' uncatalog;
RMAN> delete noprompt obsolete;
RMAN> catalog recovery area noprompt;

That way, all archivelogs from the last three days are uncataloged. All other archivelogs are handled by the “delete obsolete” operation, and there is no need to remove archivelogs manually.
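
If this is supposed to run regularly, the three commands can be wrapped in a small shell script. A minimal sketch, assuming OS authentication and a three-day window:

$ rman target / <<EOF
change archivelog from time 'sysdate-3' uncatalog;
delete noprompt obsolete;
catalog recovery area noprompt;
EOF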

How DB_LOST_WRITE_PROTECT works with Data Guard

Preface

Today I will blog again about a Data Guard topic. There are a couple of best practices out there which one should follow. One of these best practices is enabling block checking and lost write protection. About the latter there is not much information out there, so I want to outline the concept and the importance of this feature. Actually, this post is inspired by a talk I had during the DOAG Conference 2016: I gave a presentation about best practices in Data Guard, and someone from the audience asked how lost write protection actually works.
Basically, it is there to detect lost writes, as the parameter name clearly states. That means a write request to disk was confirmed and the database is happy with that, but the write did not actually happen for whatever reason. So when the block is read the next time, it is still in its old state; any changed, deleted or added values are not included. The block itself is consistent, it is not corrupted, and the DBA will not notice anything since there is no error. An error occurs only when you restore the tablespace containing the block and then try to apply the redo stream: the recovery will detect a newer SCN in the redo stream which does not match the block’s SCN. That is the point where it gets tricky.

The test environment

My simple test cases run on a VirtualBox VM with OEL 6.7, Oracle Restart 12.1.0.2 and Oracle Database 12.1.0.2. Primary and Standby run on the same host.
DB_NAME: db12c
DB_UNIQUE_NAME: db12ca
DB_UNIQUE_NAME: db12cb
You will see the names in my SQL prompt to make things clear.

This is the current state of the system:

SYS@db12ca> show parameter lost

NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_lost_write_protect                string                            NONE

SYS@db12ca> select database_role from v$database;

DATABASE_ROLE
------------------------------------------------
PHYSICAL STANDBY
SYS@db12cb> show parameter lost


NAME                                 TYPE                              VALUE
------------------------------------ --------------------------------- ------------------------------
db_lost_write_protect                string                            NONE


SYS@db12cb> select database_role from v$database;

DATABASE_ROLE
------------------------------------------------
PRIMARY

So “db12cb” is my primary and “db12ca” my standby instance. By the way, that’s why I gave them the suffixes “a” and “b”: they may change roles over and over again.

For testing, I create a separate tablespace with manual segment space management. This allows me to specify FREELISTS=1; otherwise the changes to my data might end up in different blocks, which is not what I want for my testing. Besides that, I create a user which I will use for testing and which gets the necessary grants.

SYS@db12cb> create tablespace marco datafile size 100m segment space management manual;

Tablespace created.

SYS@db12cb> create user marco identified by marco default tablespace marco quota unlimited on marco;

User created.

SYS@db12cb> grant create session to marco;

Grant succeeded.

SYS@db12cb> grant create table to marco;

Grant succeeded.

Scenario #1: No Lost Write Detection

The new user can now create a table and insert some data, so let’s do that.

SYS@db12cb> conn marco/marco
Connected.
MARCO@db12cb> create table testtable (id number, txt varchar2(100)) storage (freelists 1);

Table created.

MARCO@db12cb> insert into testtable values (1, 'Test Lost Write Detection - 1');

1 row created.

MARCO@db12cb> commit;

Commit complete.

Now we can identify the block and check if the data is really in there.

SYS@db12cb> select file_name from dba_data_files where tablespace_name='MARCO';

FILE_NAME
------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf

SYS@db12cb> select block_id, blocks from dba_extents where segment_name='TESTTABLE' and owner='MARCO';

  BLOCK_ID     BLOCKS
---------- ----------
       128          8

SYS@db12cb> alter system checkpoint;

System altered.
[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf of=myblock.v1 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000162476 s, 50.4 MB/s
[oracle@oel6u4 ~]$ grep Detection myblock.v1
Binary file myblock.v1 matches

Ok, the data is in that block. In the same way I can now check if the DML was successfully applied on the standby.

SYS@db12ca> alter system flush buffer_cache;

System altered.

SYS@db12ca> select name from v$datafile where name like '%marco%';

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf
[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf of=sbblock.v1 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000662024 s, 12.4 MB/s
[oracle@oel6u4 ~]$ grep Detection sbblock.v1
Binary file sbblock.v1 matches

So everything is fine until now as it should be.
I will now insert another row into the test table, force that change to be written to disk and then clear the buffer cache.

MARCO@db12cb> insert into testtable values (2, 'Oh my god!');

1 row created.

MARCO@db12cb> commit;

Commit complete.

MARCO@db12cb>

MARCO@db12cb> conn / as sysdba
Connected.
SYS@db12cb> alter system checkpoint;

System altered.

SYS@db12cb> alter system flush buffer_cache;

System altered.

Again, check if it was written to disk.

[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf of=myblock.v2 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000318304 s, 25.7 MB/s
[oracle@oel6u4 ~]$ grep Detection myblock.v2
Binary file myblock.v2 matches
[oracle@oel6u4 ~]$ grep god myblock.v2
Binary file myblock.v2 matches

Both values that I inserted are on disk now. Just to make sure everything is ok, I check the block on the standby.

[oracle@oel6u4 ~]$ dd if=/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf of=sbblock.v2 skip=129 count=1 bs=8192
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000162124 s, 50.5 MB/s
[oracle@oel6u4 ~]$ grep Detection sbblock.v2
Binary file sbblock.v2 matches
[oracle@oel6u4 ~]$ grep god sbblock.v2
Binary file sbblock.v2 matches

So far, so good. Now comes the funny part: I will simulate a lost write by simply putting my first extracted block back into the datafile.

[oracle@oel6u4 ~]$ dd if=myblock.v1 of=/u01/app/oracle/oradata/DB12CB/datafile/o1_mf_marco_d3llm6dd_.dbf seek=129 count=1 bs=8192 conv=notrunc
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.000154517 s, 53.0 MB/s

Now let us query the test table and see what’s happening.

MARCO@db12cb> select * from testtable;

        ID
----------
TXT
--------------------------------------------------------------------------------
         1
Test Lost Write Detection - 1

No error, no warning, just the result. But the result set obviously lacks the row from the second insert. And since the block is completely intact and not corrupted, there is no need to raise any error.
So now it is time to do another INSERT.

MARCO@db12cb> insert into testtable values (3, 'Inconsistency!');

1 row created.

That is the point where the problem comes to light. The redo apply of the standby database detects a redo record which does not match the data block it has. It has no choice but to stop recovery and raise an error in the alert.log.

2016-11-26 09:52:02.752000 +01:00
ERROR: ORA-00600: internal error code, arguments: [3020] recovery detected a data block with invalid SCN raised at location:kcbr_media_ap_1
Checker run found 1 new persistent data failures
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr02_2466.trc  (incident=2705):
ORA-00600: internal error code, arguments: [3020], [2], [129], [8388737], [], [], [], [], [], [], [], []
ORA-10567: Redo is inconsistent with data block (file# 2, block# 129, file offset is 1056768 bytes)
ORA-10564: tablespace MARCO
ORA-01110: data file 2: '/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3llm8nt_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 93368
2016-11-26 09:52:03.882000 +01:00
Incident details in: /u01/app/oracle/diag/rdbms/db12ca/db12ca/incident/incdir_2705/db12ca_pr02_2466_i2705.trc

Besides that, the primary is still running fine: it accepts changes, commits, and is just doing what a database is supposed to do. This is very unkind, since the only way to recover from such a situation is to fail over to the standby and lose all changes that happened after the change to the damaged block. And this can be a lot.

Scenario #2: Lost Write Detection enabled

I enable it by simply setting the parameter to typical on both instances.

SYS@db12ca> alter system set db_lost_write_protect=typical;

System altered.
SYS@db12cb> alter system set db_lost_write_protect=typical;

System altered.

This parameter forces the database to record the SCNs of all blocks that it reads from disk into the redo stream. The standby database can use this information to compare the recorded SCN from the redo stream with the actual SCN of the block at the standby site. If there is a difference, it can report a lost write.

Now I walk through the same steps as above. But this time, after simulating the lost write, I simply query the table.

MARCO@db12cb> select * from testtable;

        ID
----------
TXT
--------------------------------------------------------------------------------
         1
Test Lost Write Detection - 1

The SELECT succeeds, but the alert.log of the primary reports the following error.

2016-11-26 10:40:47.143000 +01:00
DMON: A primary database lost write was reported by standby database db12ca. Please look at the alert and DRC logs of the standby database db12ca to see more information about the lost write.

The standby’s alert.log now reports an ORA-752 instead of an ORA-600.

No redo at or after SCN 3448159 can be used for recovery.
PR02: Primary database lost write detected by standby database db12ca
BLOCK THAT LOST WRITE 129, FILE 2, TABLESPACE# 7
The block read during the normal successful database operation had SCN 3346737 (0x0000.00331131) seq 1 (0x01)
ERROR: ORA-00752 detected lost write on primary
Slave exiting with ORA-752 exception
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr02_2924.trc:
ORA-00752: recovery detected a lost write of a data block
ORA-10567: Redo is inconsistent with data block (file# 2, block# 129, file offset is 1056768 bytes)
ORA-10564: tablespace MARCO
ORA-01110: data file 2: '/u01/app/oracle/oradata/DB12CA/datafile/o1_mf_marco_d3lnpn8n_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 93369
Recovery Slave PR02 previously exited with exception 752
MRP0: Background Media Recovery terminated with error 448
Errors in file /u01/app/oracle/diag/rdbms/db12ca/db12ca/trace/db12ca_pr00_2919.trc:
ORA-00448: normal completion of background process

Recovering from a lost write

As in scenario #1, the only way to work around this error is to fail over to the standby database.

[oracle@oel6u4 ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@db12ca
Password:
Connected as SYSDBA.
DGMGRL> failover to db12ca immediate
Performing failover NOW, please wait...
Failover succeeded, new primary is "db12ca"

Now I can query my test table at the new primary.

SYS@db12ca> select * from marco.testtable;

        ID TXT
---------- ------------------------------
         1 Test Lost Write Detection - 1
         2 Oh my god!

I now need to re-create the old primary; reinstating it using Flashback Database will not work. The steps are basically these:

  • remove database from configuration
  • recreate the database using duplicate
  • add database back to the configuration

A lot of effort for such a “small” failure….
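
For completeness, a rough sketch of those steps using broker and RMAN commands. This assumes the broker configuration and TNS aliases from my setup, and that the old primary has been restarted in NOMOUNT for the duplicate:

DGMGRL> remove database db12cb;

RMAN> connect target sys@db12ca
RMAN> connect auxiliary sys@db12cb
RMAN> duplicate target database for standby from active database;

DGMGRL> add database db12cb as connect identifier is db12cb maintained as physical;
DGMGRL> enable database db12cb;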

Conclusion

Enabling lost write detection is crucial in a Data Guard setup. Lost writes are detected at read time, which allows you to perform recovery steps much earlier than you could without it. Nevertheless, lost writes should not occur in the first place. If one does occur, something really bad is going on in your environment and you need to investigate the root cause of the lost write.
That’s it, basically. I hope it makes things a little clearer.

Further reading

Resolving ORA-752 or ORA-600 [3020] During Standby Recovery (Doc ID 1265884.1)
Oracle Docs – Database Reference: DB_LOST_WRITE_PROTECT