Turning a RAC into Data Guard
The headline sounds strange, but this really happened to me some time ago. The reason behind it is that a customer got a new second datacenter a few kilometers away from the current one. They were therefore forced to stretch the existing hardware over both datacenters with minimal investment. So we could either build a stretched RAC or something else. During the discussion we came to the conclusion that a real disaster recovery (DR) solution like Data Guard is a much better fit than RAC for this setup. That created the challenge of turning the existing 2-node RAC into a Data Guard configuration. And yes, we are aware of the gap in between, when there is no RAC anymore and no Data Guard yet 🙂
In the following post I’ll outline the steps that we went through. Most of the command output is in German since it was a German project, but I think you’ll get the idea.
1. What we had
A 2-node RAC, version 12.1.0.2. Here are the configuration details:
$ srvctl status database -db proddb
Instanz PRODDB_1 wird auf Knoten server-a ausgeführt
Instanz PRODDB_2 wird auf Knoten server-b ausgeführt

$ srvctl config database -db proddb
Eindeutiger Datenbankname: PRODDB
Datenbankname: PRODDB
Oracle-Standardverzeichnis: /u01/app/oracle/product/12.1.0.2/db
Oracle-Benutzer: oracle
Spfile: +ASMDAT/PRODDB/PARAMETERFILE/spfile.263.895424997
Kennwortdatei: +ASMDAT/PRODDB/PASSWORD/pwdproddb.258.895416527
Domain:
Startoptionen: open
Stoppoptionen: immediate
Datenbankrolle: PRIMARY
Verwaltungs-Policy: AUTOMATIC
Server-Pools: proddb
Datenträgergruppen: ASMDAT,ASMLOG1,ASMLOG2
Mount Point-Pfade:
Services:
Typ: RAC
Parallelität starten:
Parallelität stoppen:
OSDBA-Gruppe: oinstall
OSOPER-Gruppe:
Datenbankinstanzen: PRODDB_1,PRODDB_2
Konfigurierte Knoten:
Datenbank wird von Policy verwaltet
2. Remove node B from the cluster
First we have to stop the database instance on that node:
$ srvctl stop instance -db proddb -node server-b
It is a policy-managed cluster, so we moved the node to the Free pool:
$ srvctl relocate server -n server-b -g Free
This is now the status of the server pools:
$ srvctl status serverpool -detail
Server-Pool-Name: Free
Anzahl der aktiven Server: 1
Namen der aktiven Server: server-b
NAME=server-b STATE=ONLINE
Server-Pool-Name: Generic
Anzahl der aktiven Server: 0
Namen der aktiven Server:
Server-Pool-Name: proddb
Anzahl der aktiven Server: 1
Namen der aktiven Server: server-a
NAME=server-a STATE=ONLINE
At this point we can remove node B from the cluster configuration, following the documentation.
Remove node A from the oraInventory on node B:
$ /u01/app/grid/12.1.0.2/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={server-b}" CRS=TRUE -silent -local
Oracle Universal Installer wird gestartet...
Swap-Speicher wird geprüft: muss größer sein als 500 MB.   Tatsächlich 60145 MB    Erfolgreich
The inventory pointer is located at /var/opt/oracle/oraInst.loc
[...]
Once this is done, we can deconfigure the Oracle Clusterware stack on node B according to the documentation.
As root:
$ cd $ORACLE_HOME/crs/install/
$ ./rootcrs.sh -deconfig -force
Using configuration parameter file: /u01/app/grid/12.1.0.2/crs/install/crsconfig_params
Netzwerk 1 ist vorhanden
Subnet IPv4: 172.17.3.0/255.255.255.0/ipmpfs0, static
Subnet IPv6:
[...]
2017/09/06 09:46:21 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2017/09/06 09:47:05 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2017/09/06 09:47:11 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
Remove node B from the oraInventory on node A:
$ cd $ORACLE_HOME/oui/bin/
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={server-a}" CRS=TRUE -silent -local
Oracle Universal Installer wird gestartet...
Swap-Speicher wird geprüft: muss größer sein als 500 MB.   Tatsächlich 36559 MB    Erfolgreich
The inventory pointer is located at /var/opt/oracle/oraInst.loc
'UpdateNodeList' war erfolgreich.
Finally, remove node B from Grid Infrastructure (as root):
$ crsctl delete node -n server-b
CRS-4661: Node server-b successfully deleted.
3. Turn Cluster Grid Infrastructure into Single Instance
Grid Infrastructure (GI) and Single Instance (SI, i.e. Oracle Restart) installations are nearly the same. So instead of deinstalling GI and then installing and patching SI from scratch, we can simply change the installation type.
To do this, we first deconfigured GI:
$ cd $ORACLE_HOME/crs/install
$ ./rootcrs.sh -deconfig -force
Using configuration parameter file: /u01/app/grid/12.1.0.2/crs/install/crsconfig_params
Netzwerk 1 ist vorhanden
Subnet IPv4: 172.17.3.0/255.255.255.0/ipmpfs0, static
Subnet IPv6:
Ziele pingen: Netzwerk ist aktiviert
[…]
CRS-4133: Oracle High Availability Services has been stopped.
2017/09/19 12:20:22 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2017/09/19 12:20:54 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2017/09/19 12:21:00 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
GI sets some special root permissions on its installation files, so we needed to revert those permission settings:
$ ./rootcrs.sh -unlock -crshome $ORACLE_HOME -nocrsstop
There is a bug when configuring SI while left-over files from a previous GI installation are still present, so we got rid of those first:
$ cd /u01/app/oracle/
$ ls
admin  cfgtoollogs  crsdata  diagsnap  log
audit  checkpoints  diag     server-a  product
$ mv crsdata crsdata_old
$ cd -
/u01/app/grid/12.1.0.2/crs/install
A GI installation is linked with the RAC option enabled. With SI we do not need this, and besides that, the SI configuration would fail with it enabled. So we disabled the RAC option:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_off ioracle
Now everything is prepared to configure SI (as root):
$ /u01/app/grid/12.1.0.2/perl/bin/perl -I/u01/app/grid/12.1.0.2/perl/lib -I/u01/app/grid/12.1.0.2/crs/install /u01/app/grid/12.1.0.2/crs/install/roothas.pl
Using configuration parameter file: /u01/app/grid/12.1.0.2/crs/install/crsconfig_params
2017/09/19 12:29:06 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Knoten server-b erfolgreich gepinnt.
2017/09/19 12:29:17 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'
server-b     2017/09/19 12:29:58     /u01/app/grid/12.1.0.2/cdata/server-b/backup_20170919_122958.olr     257602229
CRS-2791: Herunterfahren der von Oracle High Availability Services verwalteten Ressourcen auf "server-b" wird gestartet
CRS-2673: Versuch, "ora.evmd" auf "server-b" zu stoppen
CRS-2677: Stoppen von "ora.evmd" auf "server-b" erfolgreich
CRS-2793: Herunterfahren der von Oracle High Availability Services verwalteten Ressourcen auf "server-b" ist abgeschlossen
CRS-4133: Oracle High Availability Services wurde gestoppt
CRS-4123: Oracle High Availability Services wurde gestartet.
2017/09/19 12:30:36 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
4. Configure SI
With SI up and running, we need to configure a few more things, namely ASM and a listener.
$ srvctl add asm
$ srvctl start asm
$ srvctl add listener
$ srvctl start listener
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       server-b                 STABLE
ora.asm
               ONLINE  ONLINE       server-b                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      server-b                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       server-b                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       server-b                 STABLE
--------------------------------------------------------------------------------
Configured this way, ASM comes without any configuration files, so we need to create the SPfile and the password file on our own. This is especially important for the following disk setup: we set the ASM discovery string so that ASM can find our disks.
Setting up SPfile:
SQL> create spfile from memory;

File created.

$ srvctl modify asm -d '/dev/rdsk/asm*'
$ srvctl config asm
ASM-Standardverzeichnis: <CRS home>
Kennwortdatei:
ASM-Listener: LISTENER
Spfile: ?/dbs/spfile@.ora
ASM-Datenträgergruppe-Discovery-Zeichenfolge: /dev/rdsk/asm*
$ srvctl stop asm
$ srvctl start asm
Setting up ASM password file:
ASMCMD> pwcreate --asm +ASMDAT <SYS password>
ASMCMD> lspwusr
Username  sysdba  sysoper  sysasm
     SYS    TRUE     TRUE   FALSE
ASMCMD>
ASMCMD> orapwusr --add asmsnmp
Enter password: ********
ASMCMD> lspwusr
Username  sysdba  sysoper  sysasm
     SYS    TRUE     TRUE   FALSE
 ASMSNMP   FALSE    FALSE   FALSE
ASMCMD> orapwusr --grant sysasm asmsnmp
ASMCMD> orapwusr --grant sysasm sys
ASMCMD> lspwusr
Username  sysdba  sysoper  sysasm
     SYS    TRUE     TRUE    TRUE
 ASMSNMP   FALSE    FALSE    TRUE
For the relocated node B we created new LUNs for the database; we cannot reuse the old LUNs since they still hold the existing database, and the new Data Guard setup requires separate storage for the standby. On these LUNs we set up the disk groups, as sketched below, then mounted them and checked the result.
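Just as an illustration of that step: creating one of these disk groups on the new LUNs could look roughly like the following. The disk device names are placeholders; external redundancy and the /dev/rdsk/asm* path match the discovery string and the lsdg output below.

SQL> -- hypothetical example: data disk group on the new LUNs of node B
SQL> -- (disk names are placeholders, external redundancy as shown by lsdg below)
SQL> create diskgroup ASMDAT external redundancy
  2    disk '/dev/rdsk/asmdat01', '/dev/rdsk/asmdat02';

Since a disk group is only auto-mounted at ASM startup when it is listed in the ASM_DISKGROUPS parameter, we mounted the disk groups explicitly and checked them: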
SQL> alter diskgroup asmdat mount;

Diskgroup altered.

SQL> alter diskgroup asmrec mount;

Diskgroup altered.

SQL> alter diskgroup asmlog1 mount;

Diskgroup altered.

SQL> alter diskgroup asmlog2 mount;

Diskgroup altered.

$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576    274408    36703                0           36703              0             N  ASMDAT/
MOUNTED  EXTERN  N         512    4096  1048576     10216     2893                0            2893              0             N  ASMLOG1/
MOUNTED  EXTERN  N         512    4096  1048576     10216     2893                0            2893              0             N  ASMLOG2/
MOUNTED  EXTERN  N         512    4096  1048576    102376    97502                0           97502              0             N  ASMREC/
5. Turn database installation into SI
Changing the database installation is pretty easy. We have to update the Oracle Inventory and disable the RAC option, just as we did for the GI installation.
First, we disable the RAC option for the database home:
$ cd $ORACLE_HOME/rdbms/lib/
$ make -f ins_rdbms.mk rac_off ioracle
Then we modify the inventory to get a single server installation:
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={server-b}" -silent -local
Oracle Universal Installer wird gestartet...
Swap-Speicher wird geprüft: muss größer sein als 500 MB.   Tatsächlich 61662 MB    Erfolgreich
The inventory pointer is located at /var/opt/oracle/oraInst.loc
'UpdateNodeList' war erfolgreich
6. Setting up Data Guard
I will not cover this in detail since it is a topic of its own. Just to outline it: we did a “duplicate for standby”, registered the duplicate database with SI, set up Data Guard between the existing database and the duplicate, and enabled the configuration.
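To make that outline a little more tangible, here is a rough sketch of the sequence. These are not the exact commands we ran: the standby DB_UNIQUE_NAME PRODDBS, the TNS aliases proddb and proddbs and the broker configuration name are placeholders, and the sketch assumes the standby instance is already started NOMOUNT with a minimal parameter file and that the network configuration is in place.

$ rman target sys@proddb auxiliary sys@proddbs
RMAN> duplicate target database for standby from active database nofilenamecheck;

$ # register the standby database with SI / Oracle Restart (names are placeholders)
$ srvctl add database -db proddbs -oraclehome /u01/app/oracle/product/12.1.0.2/db -dbname PRODDB -role PHYSICAL_STANDBY -startoption MOUNT -diskgroup ASMDAT,ASMLOG1,ASMLOG2,ASMREC

$ dgmgrl sys@proddb
DGMGRL> create configuration 'PRODDB_DG' as primary database is 'PRODDB' connect identifier is proddb;
DGMGRL> add database 'PRODDBS' as connect identifier is proddbs maintained as physical;
DGMGRL> enable configuration;

There are of course a few more prerequisites (force logging, standby redo logs, a copy of the password file and so on), but as said, that is a topic of its own.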
7. Modify server A
Changing server A from GI to SI requires pretty much the same steps that we applied to server B. We just did a Data Guard switchover to server B, disabled and dropped the Data Guard configuration, and then went through steps 1-6 on server A. That’s basically it.
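Roughly, the broker part of that looks like this (reusing the hypothetical configuration and database names from the sketch in step 6):

DGMGRL> switchover to 'PRODDBS';
DGMGRL> disable configuration;
DGMGRL> remove configuration;

Once server A had been rebuilt this way, step 6 set up Data Guard again in the opposite direction, with the database on server B acting as the primary and the new duplicate on server A as the standby.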
8. Conclusion
Finally we ended up with a clean Single Instance installation on both servers, with single-instance databases and a neat Data Guard setup between them.