Read Only Oracle Homes

For as long as I have been using Oracle Database, the configuration files have resided under the ORACLE_HOME path. There are files for SQL*Net under network/admin and files for the database instances under dbs (or database on Windows). If you wanted to store those files elsewhere, for example to have a centralized administration, you needed workarounds such as the TNS_ADMIN environment variable or the IFILE syntax inside those configuration files. Since Oracle Database 18c this might be history: that version introduced a feature called “Read Only Oracle Home”.
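
For reference, the classic workarounds looked roughly like this (the central directory is just an example path):

# point SQL*Net to a central configuration directory
[oracle@vm121 ~]$ export TNS_ADMIN=/u01/app/oracle/network_central

# or include a centrally managed file from within the local configuration
[oracle@vm121 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora
IFILE=/u01/app/oracle/network_central/tnsnames_central.ora
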
This feature enables us to separate the static software part from the dynamic configuration part. To handle this, the already existing tool “orabase” got some friendly neighbors.

  • orabaseconfig – returns the base path for all the dynamic configuration files
  • orabasehome – returns the path that is currently being used to retrieve configuration files
  • roohctl – used to configure an Oracle Home as read only or vice versa

This is how it works for a normal Oracle Home:

[oracle@vm121 ~]$ orabasehome                                                             
/u01/app/grid/18                                                                          
[oracle@vm121 ~]$ orabaseconfig
/u01/app/grid/18 

But how can I enable the read only option? This is done using “roohctl”.


[oracle@vm121 ~]$ roohctl -h
Usage:  roohctl [<flag>] [<command>]
Following are the possible flags:
        -help

Following are the possible commands:
        -enable Enable Read-Only Oracle home
                [-nodeList List of nodes in a cluster environment]

[oracle@vm121 ~]$ roohctl -enable                                                         
Enabling Read-Only Oracle home.
Cannot enable Read-Only Oracle home in a configured Oracle home.                          
The Oracle Home is configured with listeners 'LISTENER'. 

OK, first try, first fail. In this case I am trying to modify a home that already has components configured in it. These components need to be stopped and removed beforehand. So let’s do that.

[oracle@vm121 ~]$ srvctl stop listener
[oracle@vm121 ~]$ srvctl remove listener

[oracle@vm121 ~]$ roohctl -enable
Enabling Read-Only Oracle home.
Update orabasetab file to enable Read-Only Oracle home.
Orabasetab file has been updated successfully.
Bootstrap files have been processed successfully.
Read-Only Oracle home has been enabled successfully.
Check the log file /refresh/home/oracle/cfgtoollogs/roohctl/roohctl-180611AM052009.log

[oracle@vm121 ~]$ srvctl add listener
[oracle@vm121 ~]$ srvctl start listener

Now that worked like a charm. Let’s check the output of the new commands.

[oracle@vm121 ~]$ orabaseconfig
/u01/app/oracle
[oracle@vm121 ~]$ orabasehome
/u01/app/oracle/homes/OraGI18Home1

As we can see, the configuration files will now be stored in and read from a directory outside the Oracle Home. This is how the tree looks initially.

[oracle@vm121 ~]$ tree -a /u01/app/oracle/homes/OraGI18Home1/
/u01/app/oracle/homes/OraGI18Home1/
├── assistants
│   └── dbca
│       └── templates
├── dbs
├── install
├── log
│   └── vm121
│       └── client
├── network
│   ├── admin
│   │   └── listener.ora
│   ├── log
│   └── trace
└── rdbms
    ├── audit
    └── log

15 directories, 1 file

And after a while of operation, other files get added. This gives a rough picture of what it means to have a Read Only Oracle Home.

[oracle@vm121 ~]$ tree -a /u01/app/oracle/homes/OraGI18Home1/
/u01/app/oracle/homes/OraGI18Home1/
├── assistants
│   └── dbca
│       └── templates
├── cfgtoollogs
│   ├── opatchauto
│   │   ├── core
│   │   │   ├── opatch
│   │   │   │   ├── opatch2020-01-20_09-53-19AM_1.log
│   │   │   │   ├── opatch2020-01-20_09-59-52AM_1.log
│   │   │   │   ├── opatch_history.txt
│   │   │   │   └── SDKWork.txt
│   │   │   └── .patch_storage
│   │   ├── opatchauto2020-01-20_09-46-32AM.log
│   │   ├── opatchauto_2020-01-20_09-52-05_binary.log
│   │   └── opatchauto_2020-01-20_09-59-45_binary.log
│   ├── opatchautodb
│   │   ├── 2020-01-20-09-48-21
│   │   │   ├── log.txt
│   │   │   ├── log.txt.lck
│   │   │   ├── machine-readable
│   │   │   └── .tmp
│   │   ├── bootstrap2020-01-20_09-29-37AM.log
│   │   ├── hostlist.obj
│   │   └── systemconfig2020-01-20_09-36-27AM.log
│   └── oplan
├── dbs
├── install
├── log
│   └── vm121
│       └── client
│           ├── tnslsnr_10410.log
│           └── tnslsnr_9216.log
├── network
│   ├── admin
│   │   └── listener.ora
│   ├── log
│   └── trace
└── rdbms
    ├── audit
    └── log

25 directories, 15 files

So having the Oracle Home read-only can make sense. But if you want to have it that way, configure it right from the beginning to avoid downtime later on. Another thing to keep in mind is that the “read only” portion refers to configuration files only: applying one-off patches or Release Updates will still modify the files in the ORACLE_HOME.
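
By the way, since “orabasehome” returns the ORACLE_HOME itself for a classic home and a path below ORACLE_BASE for a read only one, a quick scripted check could look like this (just a sketch, assuming ORACLE_HOME is set for the home in question):

[oracle@vm121 ~]$ if [ "$($ORACLE_HOME/bin/orabasehome)" = "$ORACLE_HOME" ]; then
>   echo "read/write Oracle Home"
> else
>   echo "Read Only Oracle Home"
> fi
Read Only Oracle Home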

12c Upgrade stuck at fixed object stats

A while ago I did some database upgrades to version 12.1.0.2. One of those upgrades made me a bit nervous: I used “dbca” to perform the upgrade and it appeared to be stuck. So I went to the command prompt and started to investigate. What I found was this:

[Screenshot: 12c_fixed_obj_stats – a session waiting on “ADR block file read”]

A session was waiting for “ADR block file read” over and over again. The SQL that was causing these waits can be seen in the screenshot too: it was gathering statistics for X$DBKFDG as part of the “gather fixed object stats” step.
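
For reference, this step essentially gathers fixed object statistics via DBMS_STATS, roughly like this:

SQL> -- what the upgrade's "gather fixed object stats" step boils down to
SQL> exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;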

A quick search in My Oracle Support did not bring up any helpful information. My first idea was to check the ADR, but I found it to be nearly empty, at least in the suspicious directories like “trace” etc. So I posted this issue on Twitter and asked for help. @realadrienne pointed to a similar issue related to the Recovery Advisor and its list of critical issues. So I checked that using

SQL> select * from v$ir_failure;

which returned loads of records that had a priority of “critical” and a status of “closed”. So I tried to get rid of these records using “adrci”.

adrci> purge -age 1 -type hm

This took quite a while, about 10 minutes, to complete. But in the end the v$ir_failure view had no records anymore and the fixed object stats were gathered quickly.

So now there is an additional pre-upgrade check on my personal list that says “clean up v$ir_failure”. You should add it to your list too to prevent unnecessary delays during database upgrades.
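
A minimal version of that check could look like this (just a sketch):

SQL> -- count leftover Data Recovery Advisor records before the upgrade
SQL> select status, priority, count(*)
  2    from v$ir_failure
  3   group by status, priority;

If this returns lots of closed critical failures, purge the health monitor data with the adrci command shown above before starting the upgrade.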

Thanks to @realadrienne, @FranckPachot and @OracleSK for the quick and very helpful assistance.

Thanks also to @MikeDietrichDE for the feedback on this issue.

DataPatch stuck on RAC – PSU October 2016

Yesterday one of my customers wanted to patch two 2-node clusters with the current PSU October 2016 (161018). Both are running 12.1.0.2 Grid Infrastructure and 12.1.0.2 Database. The servers run SPARC Solaris 10. When applying the patch on the first cluster using “opatchauto” everything went fine until the “trying to apply SQL Patch” part on the 2nd node. So I went to the log directory and found the following:

$ cd $ORACLE_BASE/cfgtoollogs/sqlpatch/sqlpatch_27075_2016_11_30_17_12_08
$ tail sqlpatch_catcon_0.log

SQL> GRANT SELECT ON sys.gv_$instance TO dv_secanalyst
  2  /

At that line it was stuck. Searching My Oracle Support brought up nothing helpful. So I had a look at the database sessions:

SQL> select sid, username, event, state, seconds_in_wait 
2    from v$session where username='SYS';

       SID USERNAME     EVENT                            STATE               SECONDS_IN_WAIT
---------- ------------ -------------------------------- ------------------- ---------------
        13 SYS          SQL*Net message from client      WAITING                         226
        30 SYS          SQL*Net message from client      WAITING                         473
        32 SYS          SQL*Net message to client        WAITED SHORT TIME                 0
       411 SYS          SQL*Net message from client      WAITING                         473
       783 SYS          library cache lock               WAITING                         211
       786 SYS          SQL*Net message from client      WAITING                           4
      1155 SYS          SQL*Net message from client      WAITING                         467

The session is waiting for something dictionary related. Since the waiting statement was related to RAC, I stopped the other instance, which made sqlpatch continue immediately. So the workaround looked like this:

$ srvctl stop instance -db <dbname> -node <node1>
$ srvctl start instance -db <dbname> -node <node1>
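
By the way, to see which session is actually holding the lock without bouncing the instance, a query like this might help (a sketch; BLOCKING_SESSION is populated for many lock waits in 12c):

SQL> select sid, serial#, username, program, blocking_session
  2    from v$session
  3   where event = 'library cache lock';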

This happened on both clusters. So be aware of that in case you are applying that PSU patch to RAC databases.
If you do not stop the 1st instance in time, the GRANT statement will run into a timeout (ORA-4021) and the SQL patch will be marked as “ERROR” in DBA_REGISTRY_SQLPATCH. In that case, just re-run “datapatch” and monitor the logfile.
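
To verify the state of the SQL patch before and after re-running “datapatch”, a query like this can be used (a sketch; column names as of 12.1):

SQL> select patch_id, action, status, action_time
  2    from dba_registry_sqlpatch
  3   order by action_time;
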
Happy patching.

Update 07-DEC-2016

I was not able to reproduce this issue on a Linux x86-64 system. So there is a chance that the issue is OS related.

Update 12-DEC-2016

Finally I reproduced this issue on my Linux x86-64 test system. Now I opened an SR for that.

Update 13-DEC-2016

Thanks to a quick and efficient Oracle Support guy (yes, there are such people!) we found the root cause of the issue. There is a bug in the Enterprise Manager Agent (maybe in DB Express too): it holds a shared lock on some GV$ views for the whole lifetime of a session. That’s why datapatch got stuck. If you just stop the Agent, datapatch continues immediately. There is no need to stop the whole instance; we just need to get rid of the Agent’s sessions.
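
So the simpler workaround boils down to something like this (a sketch; “emctl” here is the one from the Agent home):

$ emctl stop agent        # releases the Agent's sessions and their GV$ locks,
                          # the hanging datapatch run continues on its own
$ emctl start agent       # once datapatch has finished
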
Thanks a lot to Prakash from Oracle Support for his engagement in investigating this issue.