Database Upgrade with Standby on ODA

A couple of days ago I did my easiest upgrade of an Oracle Database ever. This is the system:

  • 2 Oracle Database Appliances in two locations
  • Databases running as single instance due to licensing restrictions
  • DataGuard between ODA-1 and ODA-2 for mission-critical databases
  • several databases, versions 11.2.0.4 and 12.1.0.2
  • ODA software currently at 12.1.2.4, so admittedly not quite up to date

The customer wanted me to upgrade one of these 11.2.0.4 databases to 12.1.0.2. So the basic steps are as follows:

  1. Install new Oracle Home for version 12.1.0.2
  2. Disable DataGuard configuration
  3. Stop DataGuard Broker processes
  4. Stop Standby database
  5. Upgrade Primary database
  6. Start Standby database from new home

On an ODA this is quite simple to do. First, create a new Oracle Home on both ODAs:

root$ oakcli create dbhome -version 12.1.0.2.3
...
INFO: 2016-03-15 09:25:07: Installing a new Home : OraDb12102_home3 at /u01/app/oracle/product/12.1.0.2/dbhome_3
...
SUCCESS: 2016-03-15 09:30:55: Successfully created the Database Home : OraDb12102_home3

Now there is a new 12c home that we can use for the upgrade. Alternatively, an existing home can be chosen from the list:

root$ oakcli show dbhomes -detail

The next step is to disable the DataGuard configuration:

oracle$ . oraenv
oracle$ dgmgrl
DGMGRL> connect sys/****@primary
Connected.
DGMGRL> disable configuration
Disabled.
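
The broker state can be double-checked at any time from the same DGMGRL session; an optional sanity check, for example before and after disabling, is:

DGMGRL> show configuration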

Now stop the DataGuard Broker processes on both the primary and the standby:

oracle$ . oraenv
oracle$ sqlplus / as sysdba
SQL> alter system set dg_broker_start=false scope=both;

System altered.
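
Since the parameter is changed on both sides, an optional verification on each instance is a plain SQL*Plus check:

SQL> show parameter dg_broker_start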

Stop the Standby database:

oracle$ srvctl stop database -d <dbname>
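
Before touching the Primary it does not hurt to confirm that the Standby is really down; an optional status check is:

oracle$ srvctl status database -d <dbname>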

Now upgrade the Primary to the Oracle Home that was created or chosen in the first step. It will ask for the SYS password, so be prepared for that.

root$ oakcli upgrade database -db <dbname> -to OraDb12102_home3
INFO: 2016-03-15 10:30:30: Look at the log file '/opt/oracle/oak/log/<odabase-01>/tools/12.1.2.4.0/dbupgrade_2585.log' for more details 

Please enter the 'SYS' password: 
Please re-enter the 'SYS' password: 
2016-03-15 10:31:29: Upgrading the database <dbname>. It will take few minutes. Please wait... 

 SUCCESS: 2016-03-15 10:50:01: Successfully upgraded the database <dbname>
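
A short sanity check of the upgraded Primary is a good idea at this point, for example with standard dictionary views (not part of the oakcli flow):

oracle$ sqlplus / as sysdba
SQL> select banner from v$version;
SQL> select comp_id, version, status from dba_registry;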

That’s it for the Primary. Now modify the Standby to use the new 12c Database Home. Copy the password file to the new Oracle Home, and, if it is not in a shared location, the SPfile too.

oracle$ cp <old ORACLE_HOME>/dbs/orapw<dbname> <new ORACLE_HOME>/dbs/ 
oracle$ cp <old ORACLE_HOME>/dbs/spfile<dbname>.ora <new ORACLE_HOME>/dbs/

The database resource needs to be modified in order to use the new Oracle Home. That is the last time that “srvctl” is run from the old Oracle Home:

oracle$ srvctl config database -d <dbname> 
oracle$ srvctl modify database -d <dbname> -o /u01/app/oracle/product/12.1.0.2/dbhome_3

If required, change the SPfile location too:

oracle$ srvctl modify database -d <dbname> -p /u01/app/oracle/product/12.1.0.2/dbhome_3/dbs/spfile<dbname>.ora
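
Running "srvctl" from the new home, the resource definition can be verified once more; it should now show the new Oracle Home and, if changed, the new SPfile location:

oracle$ srvctl config database -d <dbname>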

Now the Standby can be started again:

oracle$ srvctl start database -d <dbname>
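
The Standby itself is not upgraded by oakcli; its data dictionary is brought to 12.1.0.2 through the redo stream. So it is worth confirming that the instance comes up cleanly from the new home, for example with these standard checks:

oracle$ srvctl status database -d <dbname>
oracle$ sqlplus / as sysdba
SQL> select database_role, open_mode from v$database;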

The last step is to start the DataGuard Broker processes again (on both the primary and the standby) and re-enable the configuration.

oracle$ . oraenv
oracle$ sqlplus / as sysdba
SQL> alter system set dg_broker_start=true scope=both;

System altered.

oracle$ . oraenv
oracle$ dgmgrl
DGMGRL> connect sys/****@primary
Connected.
DGMGRL> enable configuration
Enabled.
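
Once redo transport and apply have caught up, a final look at the configuration should show everything healthy again:

DGMGRL> show configuration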

Very simple and very straightforward. I like ODA 🙂

ODA: Performance Impact due to Flashback Database

Here and there I am planning and actually doing migrations to the Oracle Database Appliance (ODA). I like this appliance because of its relative simplicity and ease of management. In most cases we connect two ODAs in different locations using DataGuard with the goal of maximizing availability. That means we have a primary 2-node RAC on ODA-1 and another 2-node RAC on ODA-2 which operates as standby.
In order to fulfill the requirements for availability and fault tolerance, the databases are configured for “Maximum Availability”:

  • sufficient Standby Redologs configured – on the dedicated Redo SSDs
  • Force Logging mode
  • Flashback Database configured and activated – located in FRA per definition (see the check below)
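
A quick way to check two of these settings on a given database is a standard v$database query:

oracle$ sqlplus / as sysdba
SQL> select force_logging, flashback_on from v$database;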

In terms of availability and manageability this is quite good, but in terms of performance it sometimes turned out to be really bad. Look at these AWR snippets:

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     88808 01-Sep-15 17:30:06       480       2.7
  End Snap:     88855 02-Sep-15 17:00:10       478       6.8
   Elapsed:            1,410.06 (mins)
   DB Time:            3,311.41 (mins)
Wait Classes by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        Avg             Avg
                                        Total Wait     Wait   % DB   Active
Wait Class                  Waits       Time (sec)     (ms)   time Sessions
---------------- ---------------- ---------------- -------- ------ --------
System I/O             11,522,327          161,304       14   81.2      1.9
DB CPU                                     101,562            51.1      1.2
User I/O               19,956,840           88,663        4   44.6      1.0
Background Wait Events                 DB/Inst: XXX/xxx  Snaps: 88808-88855
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was > .5%.  Value of null is truly 0

                                                             Avg                
                                        %Time Total Wait    wait    Waits   % bg
Event                             Waits -outs   Time (s)    (ms)     /txn   time
-------------------------- ------------ ----- ---------- ------- -------- ------
db file async I/O submit        969,898     0     90,151      93      0.3   46.5
db file parallel write        5,015,605     0     33,520       7      1.3   17.3
flashback log file write        617,118     0     30,295      49      0.2   15.6
Backup: MML write backup p   10,608,828     0     11,727       1      2.7    6.0
RMAN backup & recovery I/O      288,934     0      3,508      12      0.1    1.8

Remember, the flashback logs are stored inside the FRA. And where does the ODA place the FRA? On the slower part of the spinning disks. The system cannot be faster than the slowest link in the chain, and in this configuration the user I/Os are influenced by the flashback I/Os, making things even worse.
So we turned off Flashback Database in order to improve performance.
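
If Fast-Start Failover is not in use (it requires Flashback Database on the primary and on the failover target), turning it off is a one-liner:

oracle$ sqlplus / as sysdba
SQL> alter database flashback off;

After that change, a comparable one-day AWR window looks like this: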

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     89528 16-Sep-15 17:30:45       393       5.0
  End Snap:     89575 17-Sep-15 17:00:44       576       5.2
   Elapsed:            1,409.98 (mins)
   DB Time:            2,792.69 (mins)
Wait Classes by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                        Avg             Avg
                                        Total Wait     Wait   % DB   Active
Wait Class                  Waits       Time (sec)     (ms)   time Sessions
---------------- ---------------- ---------------- -------- ------ --------
System I/O              7,424,187           97,029       13   57.9      1.1
DB CPU                                      96,595            57.6      1.1
User I/O                8,191,268           63,769        8   38.1      0.8

Only about half the System I/O compared to the period with Flashback Database enabled.

Background Wait Events                 DB/Inst: XXX/xxx  Snaps: 89528-89575
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was > .5%.  Value of null is truly 0

                                                             Avg                
                                        %Time Total Wait    wait    Waits   % bg
Event                             Waits -outs   Time (s)    (ms)     /txn   time
-------------------------- ------------ ----- ---------- ------- -------- ------
db file async I/O submit        531,487     0     73,471     138      0.1   57.5
db file parallel write        2,384,365     0     15,169       6      0.7   11.9
Backup: MML write backup p   10,121,315     0     11,769       1      2.8    9.2
RMAN backup & recovery I/O      197,080     0      4,906      25      0.1    3.8

The background wait for “flashback log file write” is completely gone.

Keep that in mind when using Flashback Database. This applies to hardware other than the ODA as well: be careful which disks you use for the FRA and the flashback logs.
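
To judge whether a particular database really suffers from this, the hourly flashback write statistics are a good starting point (standard view, covering roughly the last 24 hours):

oracle$ sqlplus / as sysdba
SQL> select begin_time, flashback_data, db_data, redo_data from v$flashback_database_stat order by begin_time;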

How to mount DFS shares on Oracle Database Appliance

First of all, the Oracle Database Appliance is a very nice machine in my opinion. It is easy to operate and performs very well. But since the complete stack is set up to work as well and as fast as possible, there are restrictions, especially at the OS layer. The ODA runs Enterprise Linux, but one is not allowed to use yum repositories to install additional packages.
In a current ODA migration project we needed to mount a Windows DFS share for backup and restore purposes. This requires installing the “keyutils” RPM package. So how do I do that on the ODA? My Oracle Support note 1461798.1 has the answer. Just follow these simple steps:

  1. download ODARPM_Config.zip from MOS
  2. copy ODARPM_Config.zip to ODA:/opt/oracle/oak/odarpm
  3. extract ODARPM_Config.zip
  4. set environment variable “http_proxy” if necessary
  5. run “./yumsetup” which can take a while, 2.5 hours in my case
  6. get the RPM you want, e.g. “./rpmget keyutils”
  7. install the RPM “rpm -ivh /opt/oracle/oak/pkgrepos/rpms/keyutils-1.2-1.el5.x86_64.rpm”
  8. configure keyutils to enable DFS mounts by adding these two lines to /etc/request-key.conf
    • create cifs.spnego * * /usr/sbin/cifs.upcall -c %k
    • create dns_resolver * * /usr/sbin/cifs.upcall %k
  9. mount your DFS share: mount -t cifs //mydomain.net/dfs /mnt -o username=myuser,domain=mydomain.net

That’s it. Enjoy your data transfers.
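
If the share has to survive reboots, a corresponding /etc/fstab entry can be used instead of the manual mount. This is only a sketch; /root/.dfscred is a hypothetical credentials file holding username and password:

# illustrative fstab entry, the credentials file path is an assumption
//mydomain.net/dfs  /mnt  cifs  credentials=/root/.dfscred,domain=mydomain.net,_netdev  0 0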