Read Only Oracle Homes

For as long as I have been using Oracle Database, the configuration files have resided under the ORACLE_HOME path. There are files for SQL*Net under network/admin and files for the database instances under dbs (or database on Windows). If one wanted to store those files elsewhere, perhaps for centralized administration, one needed workarounds like the TNS_ADMIN environment variable or the IFILE syntax inside those configuration files. Since Oracle Database 18c this might be history: that version introduced a feature called “Read Only Oracle Home”.
This feature enables us to separate the software part from the dynamic part. To handle this, the already existing tool “orabase” got some friendly neighbors.

  • orabaseconfig – returns the base path for all the dynamic configuration files
  • orabasehome – returns the path that is currently being used to retrieve the configuration files
  • roohctl – used to configure an Oracle Home as read only or vice versa

This is how it works for a normal Oracle Home:

[oracle@vm121 ~]$ orabasehome                                                             
/u01/app/grid/18                                                                          
[oracle@vm121 ~]$ orabaseconfig
/u01/app/grid/18 

But how can I enable the read only option? This is done using “roohctl”.


[oracle@vm121 ~]$ roohctl -h
Usage:  roohctl [<flag>] [<command>]
Following are the possible flags:
        -help

Following are the possible commands:
        -enable Enable Read-Only Oracle home
                [-nodeList List of nodes in a cluster environment]

[oracle@vm121 ~]$ roohctl -enable                                                         
Enabling Read-Only Oracle home.
Cannot enable Read-Only Oracle home in a configured Oracle home.                          
The Oracle Home is configured with listeners 'LISTENER'. 

Ok, first try, first fail. In this case I am trying to modify a home that already has configured components. These components need to be stopped and removed beforehand. So let’s do that.

[oracle@vm121 ~]$ srvctl stop listener
[oracle@vm121 ~]$ srvctl remove listener

[oracle@vm121 ~]$ roohctl -enable
Enabling Read-Only Oracle home.
Update orabasetab file to enable Read-Only Oracle home.
Orabasetab file has been updated successfully.
Bootstrap files have been processed successfully.
Read-Only Oracle home has been enabled successfully.
Check the log file /refresh/home/oracle/cfgtoollogs/roohctl/roohctl-180611AM052009.log

[oracle@vm121 ~]$ srvctl add listener
[oracle@vm121 ~]$ srvctl start listener

Now that worked like a charm. Let’s check the output of the new commands.

[oracle@vm121 ~]$ orabaseconfig
/u01/app/oracle
[oracle@vm121 ~]$ orabasehome
/u01/app/oracle/homes/OraGI18Home1

As we can see, the configuration files will now be stored in and read from a directory outside the Oracle Home. This is what the tree looks like initially.

[oracle@vm121 ~]$ tree -a /u01/app/oracle/homes/OraGI18Home1/
/u01/app/oracle/homes/OraGI18Home1/
├── assistants
│   └── dbca
│       └── templates
├── dbs
├── install
├── log
│   └── vm121
│       └── client
├── network
│   ├── admin
│   │   └── listener.ora
│   ├── log
│   └── trace
└── rdbms
    ├── audit
    └── log

15 directories, 1 file

And after a while of operation, other files get added. This gives a rough picture of what it means to have a Read Only Oracle Home.

[oracle@vm121 ~]$ tree -a /u01/app/oracle/homes/OraGI18Home1/
/u01/app/oracle/homes/OraGI18Home1/
├── assistants
│   └── dbca
│       └── templates
├── cfgtoollogs
│   ├── opatchauto
│   │   ├── core
│   │   │   ├── opatch
│   │   │   │   ├── opatch2020-01-20_09-53-19AM_1.log
│   │   │   │   ├── opatch2020-01-20_09-59-52AM_1.log
│   │   │   │   ├── opatch_history.txt
│   │   │   │   └── SDKWork.txt
│   │   │   └── .patch_storage
│   │   ├── opatchauto2020-01-20_09-46-32AM.log
│   │   ├── opatchauto_2020-01-20_09-52-05_binary.log
│   │   └── opatchauto_2020-01-20_09-59-45_binary.log
│   ├── opatchautodb
│   │   ├── 2020-01-20-09-48-21
│   │   │   ├── log.txt
│   │   │   ├── log.txt.lck
│   │   │   ├── machine-readable
│   │   │   └── .tmp
│   │   ├── bootstrap2020-01-20_09-29-37AM.log
│   │   ├── hostlist.obj
│   │   └── systemconfig2020-01-20_09-36-27AM.log
│   └── oplan
├── dbs
├── install
├── log
│   └── vm121
│       └── client
│           ├── tnslsnr_10410.log
│           └── tnslsnr_9216.log
├── network
│   ├── admin
│   │   └── listener.ora
│   ├── log
│   └── trace
└── rdbms
    ├── audit
    └── log

25 directories, 15 files

So having the Oracle Home read-only does make sense. But if you want it that way, configure it right from the beginning to avoid downtime later on. Another thing to keep in mind is that the “read only” part covers the configuration only: applying one-off patches or Release Updates will still modify the files in the ORACLE_HOME.

MERGE burns Sequence Numbers

In my previous blog post I wrote about the MERGE statement and how it can be used efficiently. During the implementation of that solution a colleague stumbled upon some very high values for the sequence he used to generate the IDs. Of course I investigated this, and here is the result. First I start with a table that gets its IDs from a DEFAULT clause backed by a sequence.

SQL> create sequence tm_seq;

Sequence created.

SQL> create table test_merge (
  2   id  number default on null tm_seq.nextval,
  3   txt varchar2(30),
  4   last_updated date,
  5   constraint pk_test_merge primary key (id)
  6  )
  7  /

Table created.

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='TM_SEQ';

LAST_NUMBER
-----------
          1

When I now insert the first records into that table, the IDs get generated automatically using the Sequence.
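The full MERGE statement did not survive in the listings below, so here is roughly what it looks like as a reference (a sketch: the ON/WHEN clauses and the row count of 100000 are assumptions derived from the results that follow):

merge into test_merge t
using (
  select 'Zeile ' || to_char(level) zeile
  from dual
  connect by level <= 100000        -- assumed row count, matches the sequence values below
) s
on (t.txt = s.zeile)
when matched then
  update set t.last_updated = sysdate
when not matched then
  insert (txt, last_updated) values (s.zeile, sysdate);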

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level
[...]

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='TM_SEQ';

LAST_NUMBER
-----------
     100001

At the end, the Sequence has increased by exactly the number of records I inserted. But let’s see what happens when I run the same MERGE again. It will now do UPDATEs instead of INSERTs.

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level
[...]

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='TM_SEQ';

LAST_NUMBER
-----------
     200001

Look, the Sequence increased again by exactly the number of records I tried to MERGE. Obviously the ID values are generated regardless of the operation that is finally performed. Even if I try to update the same row multiple times, which of course ends up throwing an error, these Sequence numbers get burned.

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(mod(level,10000)) zeile from dual connect by level
[...]

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='TM_SEQ';

LAST_NUMBER
-----------
     700021

Next, I replaced the DEFAULT with an IDENTITY column. But the result is the same.

SQL> create table test_merge (
  2   id  number generated by default on null as identity,
  3   txt varchar2(30),
  4   last_updated date,
  5   constraint pk_test_merge primary key (id)
  6  )
  7  /

Table created.

SQL> select sequence_name from user_tab_identity_cols where table_name='TEST_MERGE';

SEQUENCE_NAME
------------------------------------------------------------------------------------------------------------------
ISEQ$$_1193040

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='ISEQ$$_1193040';

LAST_NUMBER
-----------
          1

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level
[...]

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level
[...]

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='ISEQ$$_1193040';

LAST_NUMBER
-----------
     200001

Last try, I switch to the Trigger-based solution that I created initially.

SQL> create sequence te_seq;

Sequence created.

SQL> create or replace trigger tm_bir
  2  before insert on test_merge
  3  for each row
  4  begin
  5    :new.id := tm_seq.nextval;
  6  end;
  7  /

Trigger created.

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level
[...]

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='TM_SEQ';

LAST_NUMBER
-----------
     100001

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level
[...]

SQL> select LAST_NUMBER from user_sequences where SEQUENCE_NAME='TM_SEQ';

LAST_NUMBER
-----------
     100001

You see, with the Trigger generating the IDs there is no burning of Sequence values. This is because the Trigger only fires when the INSERT really happens; it cannot assume that there might be an INSERT. With the first approaches using DEFAULT or IDENTITY, the IDs are generated right away, just in case. But this is a cosmetic thing: those IDs are just IDs, nobody should ever see or even care about these values. On the other hand, the Trigger-based solution is a bit slower than the other two; here the overhead of the PL/SQL call is measurable.

Merge vs. Insert-Exception-Update

Recently I had a discussion with one of our developers about how to bring new rows into a table or, if a row already exists, update it. His approach was a PL/SQL procedure that first tries to insert a single row and, if that fails because the row is already there, catches the exception and does an update instead. I told him that this approach has at least three issues that make it slow. First, it does things row by row, which is much less efficient than a set operation and does not scale well. Second, it has lots of context switches between SQL and PL/SQL, which are costly. And third, the exception handling is expensive too. My advice was to switch the code to a single MERGE statement. And to prove that I am right, I wrote a small example which I’ll share with you.

Let’s create two tables with identical structure; both will have a unique index and a trigger that fills the ID using a sequence.
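One note on the listings: the CREATE TABLE statement for test_merge does not appear below. Assuming it mirrors test_except, it would look roughly like this (a sketch, not the verbatim statement):

create table test_merge (
  id  number,
  txt varchar2(30),
  last_updated date,
  constraint pk_test_merge primary key (id)
)
/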

SQL> create unique index tm_txt_uq on test_merge(txt);

Index created.

SQL> create table test_except (
  2   id  number,
  3   txt varchar2(30),
  4   last_updated date,
  5   constraint pk_test_except primary key (id)
  6  )
  7  /

Table created.

SQL> create unique index te_txt_uq on test_except(txt);

Index created.

SQL> create sequence tm_seq;

Sequence created.

SQL> create sequence te_seq;

Sequence created.

SQL> create or replace trigger tm_bir
  2  before insert on test_merge
  3  for each row
  4  begin
  5    :new.id := tm_seq.nextval;
  6  end;
  7  /

Trigger created.

SQL> create or replace trigger te_bir
  2  before insert on test_except
  3  for each row
  4  begin
  5    :new.id := te_seq.nextval;
  6  end;
  7  /

Trigger created.

In the first run I populate both tables with 100k rows. Since both tables are empty, this will of course lead to INSERTs. And SQL*Plus will do the timing for me.

SQL> set timing on
SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level  set serveroutput on
SQL> declare
  2    l number;
  3    i number :=0;
  4    u number :=0;
  5  begin
  6    for l in 1..100000 loop
  7      begin
  8        insert into test_except (txt)
  9    values ('Zeile ' || to_char(l));
 10    i := i+1;
 11  exception
 12    when dup_val_on_index then
 13      update test_except set last_updated=sysdate where txt = 'Zeile ' || to_char(l);
 14      u := u+1;
 15      end;
 16    end loop;
 17    dbms_output.put_line(to_char(i) || ' rows inserted.');
 18    dbms_output.put_line(to_char(u) || ' rows updated.');
 19  end;
 20  /
100000 rows inserted.
0 rows updated.

PL/SQL procedure successfully completed.

Elapsed: 00:00:14.62

We see that the MERGE is more than 2x faster than the PL/SQL approach. When I now run both statements a second time, they will do UPDATEs since all the rows are already in the tables.

SQL> merge into test_merge
  2  using (
  3    select 'Zeile ' || to_char(level) zeile from dual connect by level
[...]

SQL> declare
  2    l number;
  3    i number :=0;
  4    u number :=0;
  5  begin
  6    for l in 1..100000 loop
  7      begin
  8        insert into test_except (txt)
  9    values ('Zeile ' || to_char(l));
 10    i := i+1;
 11      exception
 12    when dup_val_on_index then
 13      update test_except set last_updated=sysdate where txt = 'Zeile ' || to_char(l);
 14      u := u+1;
 15      end;
 16    end loop;
 17    dbms_output.put_line(to_char(i) || ' rows inserted.');
 18    dbms_output.put_line(to_char(u) || ' rows updated.');
 19  end;
 20  /
0 rows inserted.
100000 rows updated.

PL/SQL procedure successfully completed.

Elapsed: 00:00:55.36

The MERGE statement is even faster than in the first run. That might be due to the fact that there is less internal work to do, no index block splits and so on. But the elapsed time of the PL/SQL approach increased dramatically: it took about 4x longer than its initial run and is now roughly 11x slower than the MERGE.
So now it is really obvious that the PL/SQL approach might not be the best-performing solution. The MERGE solution may need some more thought and might not be as readable as the PL/SQL solution, but if you write it regularly you get used to it.
The good thing is: I convinced my colleague.

Size of Flashback Logs

Today’s blog post is about the Flashback Database feature of the Oracle Database. I use this feature mostly in the context of Data Guard, but there are other possible use cases too, like having a fallback during updates or upgrades. For Data Guard setups, Flashback Database simplifies and speeds up the reinstatement of a former primary database after it was lost and a failover to the standby database was performed. With Flashback Database there is no need to do a complete duplication of the database to bring it up and running as a standby again. The database is simply rolled back to just before the SCN at which the standby database was activated by the failover. It can then start applying the redo it gets from the new primary database.
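To make that concrete, here is a rough sketch of a manual reinstatement (hedged: the SCN is illustrative and must be taken from the new primary, and the Data Guard Broker can do all of this with a single REINSTATE DATABASE command):

-- on the new primary: find the SCN at which the failover happened
select standby_became_primary_scn from v$database;

-- on the former primary, started in mount state
flashback database to scn 1234567;           -- illustrative value
alter database convert to physical standby;

-- after restarting it into mount state, resume redo apply
alter database recover managed standby database disconnect from session;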
But how does it work? When this feature is activated, the database writes additional logs that contain the information needed to roll the database back. These logs are stored in the Fast Recovery Area and there is no way to influence that. Let’s have a look at it.

SQL> select group#, status, bytes from v$log;

    GROUP# STATUS                BYTES
---------- ---------------- ----------
         1 CURRENT          1073741824
         2 INACTIVE         1073741824
         3 INACTIVE         1073741824

SQL> alter database flashback on;

Database altered.

SQL> select * from v$flashback_database_logfile;

NAME                                                                                   LOG#    THREAD#  SEQUENCE#      BYTES FIRST_CHANGE# FIRST_TIME          TYPE          CON_ID
-------------------------------------------------------------------------------- ---------- ---------- ---------- ---------- ------------- ------------------- --------- ----------
/u03/app/oracle/fast_recovery_area/MMISI2/flashback/o1_mf_h1xp7lv0_.flb                   1          1          1 1073741824       3886676 2020-01-15 10:18:44 NORMAL             0
/u03/app/oracle/fast_recovery_area/MMISI2/flashback/o1_mf_h1xp7pnk_.flb                   2          1          1 1073741824             0                     RESERVED           0

SQL> !ls -l /u03/app/oracle/fast_recovery_area/MMISI2/flashback/
total 2097168
-rw-r----- 1 oracle dba 1073750016 Jan 15 10:22 o1_mf_h1xp7lv0_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 10:18 o1_mf_h1xp7pnk_.flb

Obviously the flashback logs are the same size as the redo logs. And they consume space inside your FRA. But what if one does not have sufficient space and/or wants to shrink those flashback logs? Since the size is apparently inherited from the redo log size, let’s try shrinking the redo logs by creating new, smaller groups and dropping the old ones.

SQL> alter database add logfile group 10 size 256m;

Database altered.

SQL> alter database add logfile group 11 size 256m;

Database altered.

SQL> alter database add logfile group 12 size 256m;

Database altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> alter system checkpoint;

System altered.

SQL> alter database drop logfile group 1;

Database altered.

SQL> select group#, bytes, status from v$log;

    GROUP#      BYTES STATUS
---------- ---------- ----------------
        10  268435456 CURRENT
        11  268435456 UNUSED
        12  268435456 UNUSED

Now let’s check what the flashback logs look like:

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

Still unchanged. Maybe it requires some workload to cycle through the logs.

SQL> create table dings tablespace users as select * from dba_source;

Table created.

SQL> insert into dings select * from dings;

283698 rows created.

SQL> insert into dings select * from dings;

567396 rows created.

SQL> insert into dings select * from dings;

1134792 rows created.

SQL> insert into dings select * from dings;

2269584 rows created.

SQL> insert into dings select * from dings;

4539168 rows created.

SQL> commit;

Commit complete.

SQL> select group#, sequence#, bytes, status from v$log;

    GROUP#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ----------------
        10         11  268435456 ACTIVE
        11         12  268435456 ACTIVE
        12         13  268435456 CURRENT

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

Ok, no change here. There isn’t even a new flashback log. Maybe it wasn’t enough change? So let’s try some row-by-row deletion. And please do not do this in production; deleting rows can hardly be more inefficient than that.

SQL> begin
   2   for rec in (select rowid from dings) loop
   3     delete from dings where rowid = rec.rowid;
   4     commit;
   5   end loop;
   6 end;
   7 /

PL/SQL procedure successfully completed.

SQL> select group#, sequence#, bytes, status from v$log;

    GROUP#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ----------------
        10         53  268435456 CURRENT
        11         51  268435456 INACTIVE
        12         52  268435456 INACTIVE

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

At least this produced quite some redo, but still no new flashback logs. Maybe the “db_flashback_retention_target” parameter has an impact on these logs?

SQL> show parameter flashb

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440

SQL> alter system set db_flashback_retention_target=10;

System altered.

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

Simply speaking, no, it hasn’t. Maybe other operations influence the generation of flashback logs, like index maintenance?

SQL> insert into dings select * from dba_source;

283698 rows created.

SQL> commit;

Commit complete.

SQL> create index dings_ix1 on dings(owner, name, line) tablespace new;

Index created.

SQL> insert into dings select * from dings;

283698 rows created.

SQL> insert into dings select * from dings;

567396 rows created.

SQL> insert into dings select * from dings;

1134792 rows created.

SQL> insert into dings select * from dings;

2269584 rows created.

SQL> commit;

Commit complete.

SQL> select group#, sequence#, bytes, status from v$log;

    GROUP#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ----------------
        10         65  268435456 ACTIVE
        11         66  268435456 ACTIVE
        12         67  268435456 CURRENT

SQL> select log#, thread#, sequence#, bytes, type, first_time from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE      FIRST_TIME
---------- ---------- ---------- ---------- --------- -------------------
         3          1          3 1073741824 NORMAL    2020-01-15 11:27:47
         4          1          1 1073741824 RESERVED
         1          1          1 1073741824 NORMAL    2020-01-15 10:18:44
         2          1          2 1073741824 NORMAL    2020-01-15 11:26:36

SQL> !ls -l /u03/app/oracle/fast_recovery_area/MMISI2/flashback/
total 4194336
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:26 o1_mf_h1xp7lv0_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:27 o1_mf_h1xp7pnk_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:28 o1_mf_h1xt6z4y_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:27 o1_mf_h1xt95vj_.flb

Ok, finally there are some new flashback logs. But the size of those logs is still 1GB.
I am running out of ideas; the last chance is to disable and re-enable the Flashback Database feature.

SQL> alter database flashback off;

Database altered.

SQL> alter database flashback on;

Database altered.

SQL> select log#, thread#, sequence#, bytes, type, first_time from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE      FIRST_TIME
---------- ---------- ---------- ---------- --------- -------------------
         1          1          1  268435456 NORMAL    2020-01-15 11:30:01
         2          1          1  268435456 RESERVED

SQL> !ls -l /u03/app/oracle/fast_recovery_area/MMISI2/flashback/
total 524304
-rw-r----- 1 oracle dba 268443648 Jan 15 11:30 o1_mf_h1xtf8yz_.flb
-rw-r----- 1 oracle dba 268443648 Jan 15 11:30 o1_mf_h1xtfck6_.flb

Here we go, the flashback logs have finally been shrunk. But this comes at the cost of losing all the flashback information that was previously there. Keep that in mind and schedule wisely when changing the size of your flashback logs.
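If you plan to resize the flashback logs this way, it can help to check the current flashback window and space usage beforehand, so you know what you are about to give up (a quick sketch; V$FLASHBACK_DATABASE_LOG is the standard view for this):

select oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       flashback_size,
       estimated_flashback_size
from   v$flashback_database_log;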

Change time zone in Oracle’s DBaaS

Recently I assisted in a project that is utilizing Oracle Cloud Infrastructure (OCI) to provide application servers and Oracle databases. The customer is located in New Zealand, so they obviously wanted to work with dates and times in their local time zone. But when it came to the Database Scheduler, we found that it was using UTC rather than the local time zone. This makes total sense in the first place, but it’s not what we wanted to have.
Here is what it looked like initially.

[oracle@ecotes-dbt ~]$ date
Thu Nov 14 07:05:29 UTC 2019

SQL> select sysdate from dual;

SYSDATE
----------------
14.11.2019 07:06

So at first we checked and changed the time zone for the underlying server. This is done by pointing /etc/localtime to another file.

[opc@ecotes-dbt ~]# sudo su - 
[root@ecotes-dbt ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 23 Oct  7 06:51 /etc/localtime -> /usr/share/zoneinfo/UTC
[root@ecotes-dbt ~]#  ll /usr/share/zoneinfo/NZ
-rw-r--r-- 4 root root 2434 Jan 29  2018 /usr/share/zoneinfo/NZ
[root@ecotes-dbt ~]# ll /usr/share/zoneinfo/Pacific/Auckland
-rw-r--r-- 4 root root 2434 Jan 29  2018 /usr/share/zoneinfo/Pacific/Auckland
[root@ecotes-dbt ~]# rm -f /etc/localtime
[root@ecotes-dbt ~]# ln -s /usr/share/zoneinfo/Pacific/Auckland /etc/localtime
[root@ecotes-dbt ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 36 Nov 14 20:10 /etc/localtime -> /usr/share/zoneinfo/Pacific/Auckland

[oracle@ecotes-dbt ~]$ date
Thu Nov 14 20:11:31 NZDT 2019

Now the database is picking up the new settings from the OS.

SQL> select sysdate from dual;

SYSDATE
----------------
14.11.2019 20:11

Unfortunately, the database scheduler has its own settings. So we needed to change that too.

SQL> SELECT *
2    FROM   dba_scheduler_global_attribute
3    WHERE  attribute_name = 'DEFAULT_TIMEZONE';   

ATTRIBUTE_NAME         VALUE
---------------------- -----------
DEFAULT_TIMEZONE       Etc/UTC

SQL> select dbms_scheduler.stime from dual;

STIME
---------------------------------------------------------------------------
14-NOV-19 08.26.59.556872000 AM ETC/UTC

But we can simply change this default with a single call to DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE.

SQL> exec DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('default_timezone', 'Pacific/Auckland');

PL/SQL procedure successfully completed.

SQL> SELECT *
2    FROM   dba_scheduler_global_attribute
3    WHERE  attribute_name = 'DEFAULT_TIMEZONE';   

ATTRIBUTE_NAME         VALUE
---------------------- -----------
DEFAULT_TIMEZONE       Pacific/Auckland

SQL> select dbms_scheduler.stime from dual;

STIME
---------------------------------------------------------------------------
14-NOV-19 09.31.26.365519000 PM PACIFIC/AUCKLAND

So that’s it. Now we have our DBaaS running with the time zone specific to New Zealand.
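To verify from inside the database what a new session actually picks up, a quick query like the following helps (a sketch; all of these are standard built-ins):

select sysdate,
       systimestamp,
       sessiontimezone,
       dbtimezone
from   dual;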


Update 2020-01-06

In the first version of my post, I did not mention the relevance of the server-side time zone settings. They matter because a database session inherits its environment, and with it the time zone, from the listener that forks the process for the session. You can find a complete checklist in MOS Note 1627439.1.
Basically, you set the time zone in /etc/sysconfig/clock. The possible values can be found under /usr/share/zoneinfo.

$ cat /etc/sysconfig/clock
ZONE="Pacific/Auckland"
UTC=true
ARC=false

Before doing so, you should check if the times for UTC and your desired time zone are accurate.

$ export TZ=UTC
$ date
Fri Dec 20 11:53:48 UTC 2019
$ export TZ=Pacific/Auckland
$ date
Sat Dec 21 00:53:52 NZDT 2019

The settings for the Clusterware may also need a change. The necessary steps can be found in part D) of the note mentioned above. Ideally you set the time zone only in $GRID_HOME/crs/install/s_crsconfig_<hostname>_env.txt:

$ grep ^TZ $GRID_HOME/crs/install/s_crsconfig_$(hostname -s)_env.txt
TZ=Pacific/Auckland

All other resource-specific settings should be removed using “srvctl unsetenv [database | asm | listener] -envs TZ”; that makes those resources use the settings defined for the Clusterware.
Once all those changes have been made, the Clusterware stack should be bounced to bring them into effect.

Privilege Analysis

Today’s blog post is about security. When creating database users, the least-privilege principle should apply. That means the user gets only those privileges that are needed for the work it has to do. But we all know that many applications come with extensive requirements for their database user accounts. In most cases the development was done using DBA privileges or similar, so the final installation requires these privileges too. A DBA will not like that at all, but it is difficult to identify all the privileges that are really needed to apply the least-privilege principle. That is where the “Privilege Analysis” feature comes into play. It was introduced with Oracle Database 12.1, but unfortunately it was part of the extra-cost “Database Vault” option. With Database Release 18 this limitation was lifted and backported to 12.1, so it is now available as a standard feature of Enterprise Edition.
The following small and simple example outlines the basic functionality of the feature. The main parts are the DBMS_PRIVILEGE_CAPTURE package and the DBA_USED_PRIVS view. In the example there is a central schema named “NEXUS” that is being accessed by other users whose names start with NEXUS (NEXUS11, NEXUS13 and so on). There are roles in place which contain all required privileges and are granted to those users. But in addition to that, the “SELECT ANY TABLE” system privilege is required to make the application work. This requirement needs to be eliminated.
The first step is to create a capture rule that defines what should be monitored. Since a “like” expression cannot be used in the condition, it is done with a “between” that effectively does the same.

BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name        => 'COLLECT_NEXUS' ,
    description => 'Ermittlung verwendeter Privilegien von Nexus' ,
    type        => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT ,
    condition   => 'SYS_CONTEXT (''USERENV'',''SESSION_USER'') between ''NEXUS'' and ''NEXUT'''
  );
END;
/

In order to gather some data, the rule needs to be enabled.

BEGIN DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('COLLECT_NEXUS'); END;
/

Having done this, we run through the questionable parts of the application. The database captures all the privileges that are used during that time, so it is essential to disable the capture right afterwards.

BEGIN DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('COLLECT_NEXUS'); END;
/

The captured data can now be analyzed using the DBA_USED_PRIVS view. In our case this looked like this:

SQL> SELECT DISTINCT USERNAME, SYS_PRIV, OBJ_PRIV, USER_PRIV, OBJECT_OWNER, OBJECT_NAME FROM DBA_USED_PRIVS WHERE CAPTURE='COLLECT_NEXUS';

USERNAME	SYS_PRIV	     OBJ_PRIV	 USER_PRIV	 OBJECT_OWNER	 OBJECT_NAME
----------- ---------------- ----------- ----------- --------------- ---------------
NEXUS13	    CREATE PROCEDURE				
NEXUS11	    CREATE PROCEDURE				
[...]
NEXUS11	    CREATE TABLE				
NEXUS13	    CREATE VIEW				
NEXUS11	    CREATE VIEW				
NEXUS11	    SELECT ANY TABLE                        NEXUS	        KASSEKAT
NEXUS11	    SELECT ANY TABLE			            NEXUS           SPRACHEN
NEXUS13		                  SELECT		        SYS             ALL_TAB_COLUMNS
NEXUS13		                  SELECT		        SYS             ALL_CONSTRAINTS
NEXUS11		                  SELECT		        NEXUS	        BEREICHSZAHLEN
NEXUS13		                  SELECT		        SYS	            ALL_OBJECTS
NEXUS		                  EXECUTE		        SYS	            DBMS_OUTPUT
NEXUS13		                  SELECT		        NEXUS	        KAPITEL
[...]

As we can see, the application used the SELECT ANY TABLE privilege to access two tables in the main schema. The users already have SELECT privileges on those tables, but only through a role. Since the application uses PL/SQL as well, and role privileges are not active inside definer's rights PL/SQL, this grant cannot be used there; in this case we need to grant SELECT on these two tables directly. And as soon as we did that, we were able to revoke the SELECT ANY TABLE privilege with no negative impact on the application.
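In SQL this boils down to a handful of statements like the following (a sketch; the table and user names are taken from the capture output above, the real list may be longer):

grant select on NEXUS.KASSEKAT to NEXUS11;
grant select on NEXUS.SPRACHEN to NEXUS11;

-- once the direct grants are in place, the broad system privilege can go
revoke select any table from NEXUS11;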
This example outlines the possibilities of the feature. Since it is now free-of-charge for Enterprise Edition users, it becomes quite useful. Only Standard Edition users are not allowed to make use of it at all.

Duplicate Column Names?

Recently, at a customer’s site, we stumbled upon an error in one of the databases’ alert.log. It turned out that it was caused by the nightly statistics gathering job: there was one table for which statistics could not be gathered.

SQL> exec dbms_stats.gather_table_stats('BASIS','VG_XXXXXXXXXXX', method_opt=>'for all columns size 1');

BEGIN dbms_stats.gather_table_stats('BASIS','VG_XXXXXXXXXXX', method_opt=>'for all columns size 1'); END;
 
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00001: unique constraint (SYS.I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST) violated
ORA-06512: at "SYS.DBMS_STATS", line 34850
ORA-06512: at line 1

So we first tried to remove the statistics, but we were unable to.

SQL> exec dbms_stats.delete_table_stats('BASIS','VG_XXXXXXXXXXX');
BEGIN dbms_stats.delete_table_stats('BASIS','VG_XXXXXXXXXXX'); END;
 
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00001: unique constraint (SYS.I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST) violated
ORA-06512: at "SYS.DBMS_STATS", line 17279
ORA-06512: at line 1
 
SQL> exec dbms_stats.delete_table_stats('BASIS','VG_XXXXXXXXXXX',force=>true);
BEGIN dbms_stats.delete_table_stats('BASIS','VG_XXXXXXXXXXX',force=>true); END;
 
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00001: unique constraint (SYS.I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST) violated
ORA-06512: at "SYS.DBMS_STATS", line 17279
ORA-06512: at line 1

So we started to dig deeper into the issue. We found that there is a duplicate column name when querying the Data Dictionary, while a simple “describe” does not show that.

SQL> desc owner.vg_xxxxxxxxxx

Name                                                  Null?    Type
----------------------------------------------------- -------- -----------------
[...]
XXXXXXXXXXXXXXX_ID                                             NUMBER(19)
XXXXXXXXXXXXXXXXX_XXX                                          NUMBER(19)
XXXXXXXXXXXXXXXXX_ID                                           NUMBER(19)
NAMENSZUSATZ                                                   NVARCHAR2(4000)
ENTITY_TIMESTAMP                                               TIMESTAMP(9)
OPTLOCKVERSION                                        NOT NULL NUMBER(19)
DATENUEBERNAHMETYP                                             NVARCHAR2(4000)
ZUGANGGEBUCHT                                         NOT NULL NUMBER(1)
UEBERSTELLER                                          NOT NULL NUMBER(1)
XXXXXXXXXXXXXX                                        NOT NULL NUMBER(1)
 
 
SQL> select column_id, column_name  from dba_tab_columns where table_name='VG_XXXXXXXXXXX' and column_name like 'Z%'
 
COLUMN_ID  COLUMN_NAME
---------- --------------------
        59 ZUGANGGEBUCHT
        59 ZUGANGGEBUCHT

The next step was to find out where the data in the dictionary view is coming from.

SQL> select text from dba_views where view_name='DBA_TAB_COLUMNS';

TEXT
--------------------------------------------------------------------------------
select OWNER, TABLE_NAME,
       COLUMN_NAME, DATA_TYPE, DATA_TYPE_MOD, DATA_TYPE_OWNER,
       DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, COLUMN_ID,
       DEFAULT_LENGTH, DATA_DEFAULT, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
       DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED, SAMPLE_SIZE,
       CHARACTER_SET_NAME, CHAR_COL_DECL_LENGTH,
       GLOBAL_STATS, USER_STATS, AVG_COL_LEN, CHAR_LENGTH, CHAR_USED,
       V80_FMT_IMAGE, DATA_UPGRADED, HISTOGRAM, DEFAULT_ON_NULL,
       IDENTITY_COLUMN, SENSITIVE_COLUMN,
       EVALUATION_EDITION, UNUSABLE_BEFORE, UNUSABLE_BEGINNING
  from DBA_TAB_COLS
where USER_GENERATED = 'YES'


SQL> select text from dba_views where view_name='DBA_TAB_COLS';

TEXT
--------------------------------------------------------------------------------
select
     OWNER, TABLE_NAME,
     COLUMN_NAME, DATA_TYPE, DATA_TYPE_MOD, DATA_TYPE_OWNER,
     DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, COLUMN_ID,
     DEFAULT_LENGTH, DATA_DEFAULT, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
     DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED, SAMPLE_SIZE,
     CHARACTER_SET_NAME, CHAR_COL_DECL_LENGTH,
     GLOBAL_STATS,
     USER_STATS, AVG_COL_LEN, CHAR_LENGTH, CHAR_USED,
     V80_FMT_IMAGE, DATA_UPGRADED, HIDDEN_COLUMN, VIRTUAL_COLUMN,
     SEGMENT_COLUMN_ID, INTERNAL_COLUMN_ID, HISTOGRAM, QUALIFIED_COL_NAME,
     USER_GENERATED, DEFAULT_ON_NULL, IDENTITY_COLUMN, SENSITIVE_COLUMN,
     EVALUATION_EDITION, UNUSABLE_BEFORE, UNUSABLE_BEGINNING
from dba_tab_cols_v$


SQL> select text from dba_views where view_name='DBA_TAB_COLS_V$';

TEXT
--------------------------------------------------------------------------------
select u.name, o.name,
       c.name,

[...]

Ok, DBA_TAB_COLUMNS gets its data from DBA_TAB_COLS, which in turn gets its data from DBA_TAB_COLS_V$. That view contains the final SQL that is used to retrieve the data; I shortened it above, but you get the idea.
We then used the SQL from DBA_TAB_COLS_V$ to find the root cause of our duplicate column.

SQL> select c.col#, c.obj#, c.name, c.intcol#, h.obj#, h.intcol#
  2  from sys.col$ c
  3     , sys."_CURRENT_EDITION_OBJ" o
  4     , sys.hist_head$ h
  5     , sys.user$ u
  6     , sys.coltype$ ac
  7     , sys.obj$ ot
  8     , sys."_BASE_USER" ut
  9     , sys.tab$ t
 10  where o.obj# = c.obj#
 11    and o.owner# = u.user#
 12    and o.obj# = t.obj#(+)
 13    and c.obj# = h.obj#(+) and c.intcol# = h.intcol#(+)
 14    and c.obj# = ac.obj#(+) and c.intcol# = ac.intcol#(+)
 15    and ac.toid = ot.oid$(+)
 16    and ot.type#(+) = 13
 17    and ot.owner# = ut.user#(+)
 18    and (o.type# in (3, 4)                                     /* cluster, view */
 19         or
 20         (o.type# = 2     /* tables, excluding iot - overflow and nested tables */
 21          and
 22          not exists (select null
 23                        from sys.tab$ t
 24                       where t.obj# = o.obj#
 25                         and (bitand(t.property, 512) = 512 or
 26                              bitand(t.property, 8192) = 8192))))
 27    and c.name='ZUGANGGEBUCHT';

      COL#       OBJ# NAME                    INTCOL#       OBJ#    INTCOL#
---------- ---------- -------------------- ---------- ---------- ----------
        60      95556 ZUGANGGEBUCHT                60
        68      95523 ZUGANGGEBUCHT                68
        59      94177 ZUGANGGEBUCHT                59      94177         59   
        59      94177 ZUGANGGEBUCHT                59      94177         59   
        74      95522 ZUGANGGEBUCHT                74
        74      95550 ZUGANGGEBUCHT                74

6 rows selected.

Have a look at the two rows for OBJ# 94177: there is the duplicate column. Obviously the join to HIST_HEAD$ introduced the second row. That means there must be something wrong inside that table. And that turned out to be true.

SQL> desc hist_head$
Name                        Null?    Type
--------------------------- -------- -------------
OBJ#                        NOT NULL NUMBER
COL#                        NOT NULL NUMBER
BUCKET_CNT                  NOT NULL NUMBER
ROW_CNT                     NOT NULL NUMBER
CACHE_CNT                            NUMBER
NULL_CNT                             NUMBER
TIMESTAMP#                           DATE
SAMPLE_SIZE                          NUMBER
MINIMUM                              NUMBER
MAXIMUM                              NUMBER
DISTCNT                              NUMBER
LOWVAL                               RAW(1000)
HIVAL                                RAW(1000)
DENSITY                              NUMBER
INTCOL#                     NOT NULL NUMBER
SPARE1                               NUMBER
SPARE2                               NUMBER
AVGCLN                               NUMBER
SPARE3                               NUMBER
SPARE4                               NUMBER

SQL> select  COL#, BUCKET_CNT, ROW_CNT, NULL_CNT, TIMESTAMP#, SAMPLE_SIZE from  hist_head$ where obj#=94177 and intcol#=59;

      COL# BUCKET_CNT    ROW_CNT   NULL_CNT TIMESTAMP#          SAMPLE_SIZE
---------- ---------- ---------- ---------- ------------------- -----------
         0          1          0      35656 2017-01-05 22:00:23        5502
         0          1          0      35383 2017-01-05 22:00:23        5775

A quick MOS search revealed two notes that describe the issue: Alter Table Drop Column Failing With ORA-00600[16515] (Doc ID 2375301.1) and DBMS_STATS.DELETE_TABLE_STATS Fails With ORA-600 [16515] (Doc ID 1233745.1). The latter one has the final solution for this issue.
We first identified the older of the two rows, which we then deleted.

SQL> SELECT rowid,obj#,intcol#,timestamp# FROM hist_head$ WHERE obj#=94177 and intcol#=59 order by timestamp#;

ROWID                    OBJ#    INTCOL# TIMESTAMP#
------------------ ---------- ---------- -------------------
AAAABEAABAAAWh0AAj      94177         59 2017-01-05 22:00:23
AAAABEAABAAAWh0AAi      94177         59 2017-01-05 22:00:23

SQL> DELETE FROM hist_head$ WHERE ROWID='AAAABEAABAAAWh0AAj';

1 row deleted.

SQL>  COMMIT;

Commit complete.

After this, DBA_TAB_COLUMNS is back to distinct column names.

SQL> select column_id, column_name  from dba_tab_columns where table_name='VG_XXXXXXXXXXX' and column_name like 'Z%';

COLUMN_ID COLUMN_NAME
---------- --------------------
        59 ZUGANGGEBUCHT

And of course, statistics gathering was and is running fine again.