Using a separate network for DataGuard in 12c RAC

Updated 2015-MAY-18, refined the listener parameters for the databases.

Lately I was asked to set up an MAA environment consisting of two 2-node clusters in separate locations. The customer wanted to separate the network traffic caused by DataGuard from the public network. I will outline the steps we took to get this done. We did this with a 12.1.0.2 database on Windows Server 2012, which is why you'll see all these "D:\>" prompts 😉 The approach should be pretty much the same on any other platform, but be careful which OS user you choose for the various operations.

So let’s get started. What do we already have in place?

  • two clusters with Grid Infrastructure installed and configured
  • database software installed for both clusters
  • a primary RAC database named CLU1DB running in cluster1
  • a duplicated standby RAC database named CLU2DB running in cluster2
  • DataGuard parameters prepared in both databases (a sketch follows below)
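Just to illustrate what "prepared" means here, a minimal set of DataGuard-related parameters could look like the following sketch. The values are assumptions based on the database names used in this setup; depending on your environment you may need more (FAL settings, or archive destinations if you do not let the broker manage them).

SQL> alter system set log_archive_config='DG_CONFIG=(clu1db,clu2db)' scope=both sid='*';
SQL> alter system set standby_file_management='AUTO' scope=both sid='*';
SQL> alter system set dg_broker_start=TRUE scope=both sid='*';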

And what do we need for the separate network?

  • one extra VIP per node in the new network (four in my case, one for each of the four servers)
  • one extra SCAN per cluster, with at least one IP per SCAN, preferably three

For the sake of simplicity, and since the dedicated network is only used for communication between the cluster nodes, we used a single IP for each SCAN and defined all the new addresses in the "hosts" files.

192.168.50.5             cluster1-dg-scan
192.168.50.6             cluster2-dg-scan
192.168.50.11           server1-dg-vip
192.168.50.12           server2-dg-vip
192.168.50.13           server3-dg-vip
192.168.50.14           server4-dg-vip

Let’s define that new network:

D:\>oifcfg setif -global DGNet/192.168.50.0:public

Next, we add the new network resource to the Grid Infrastructure, again on both clusters.

D:\>srvctl add network -netnum 2 -subnet 192.168.50.0/255.255.255.224/DGNet -nettype static

Now that we have a second network defined, we can continue to add the VIPs. In the same step we add the local listeners for those VIPs and start everything up.

D:\>srvctl add vip -node server1 -netnum 2 -address server1-dg-vip/255.255.255.224/DGNet
D:\>srvctl add vip -node server2 -netnum 2 -address server2-dg-vip/255.255.255.224/DGNet

D:\>srvctl add listener -listener listener_dg -netnum 2 -endpoints "TCP:1522"

D:\>srvctl start listener -listener listener_dg

D:\>srvctl status listener -listener listener_dg
Listener LISTENER_DG is enabled
Listener LISTENER_DG is running on node(s): server1,server2

The second cluster is prepared identically; just change the host names and VIPs.
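For illustration, the commands on the second cluster would look like this, assuming its nodes are named server3 and server4 and the same netmask and listener name are used:

D:\>srvctl add vip -node server3 -netnum 2 -address server3-dg-vip/255.255.255.224/DGNet
D:\>srvctl add vip -node server4 -netnum 2 -address server4-dg-vip/255.255.255.224/DGNet

D:\>srvctl add listener -listener listener_dg -netnum 2 -endpoints "TCP:1522"

D:\>srvctl start listener -listener listener_dg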

Now for the SCAN listeners. Since 12c we are able to create more than one SCAN per cluster, so let's use a dedicated SCAN listener to route the DataGuard traffic and simplify connections from the remote site. Again, do this on both clusters.

D:\>srvctl add scan -scanname cluster1-dg-scan -netnum 2
D:\>srvctl add scan_listener -netnum 2 -listener cluster1dg -endpoints "TCP:1525"
D:\>srvctl start scan_listener -netnum 2

D:\>srvctl status scan_listener -netnum 2
SCAN Listener CLUSTER1DG_SCAN1_NET2 is enabled
SCAN listener CLUSTER1DG_SCAN1_NET2 is running on node server1

To bring up the new network on all nodes and to check that the second SCAN listener is working, we relocated the listener.

D:\>srvctl relocate scan -netnum 2 -scannumber 1
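If you want to double-check on which node the SCAN VIP and its listener ended up after the relocation, the usual srvctl queries work for the second network as well (output omitted here):

D:\>srvctl config scan -netnum 2
D:\>srvctl status scan -netnum 2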

Ok, that's it for the clusterware. Time to configure the databases. To make the databases use the second network, the new listeners must know about them, which means the database services need to be registered with those listeners.

In my opinion, the easiest way to achieve this is the LISTENER_NETWORKS parameter. It allows us to list networks and define local and remote listeners for each of those networks. That implies in turn that we need to set the value per instance, since each instance needs to register its services with its own local listener.

SQL> alter system set listener_networks=
'((NAME=PUBLIC_NET)(LOCAL_LISTENER=server1-vip:1522)(REMOTE_LISTENER=cluster1-scan:1525))',
'((NAME=DG_NET)(LOCAL_LISTENER=server1-dg-vip:1522)(REMOTE_LISTENER=cluster1-dg-scan:1525))' 
scope=both sid='clu1db_1';

System altered.

SQL> alter system set listener_networks=
'((NAME=PUBLIC_NET)(LOCAL_LISTENER=server2-vip:1522)(REMOTE_LISTENER=cluster1-scan:1525))',
'((NAME=DG_NET)(LOCAL_LISTENER=server2-dg-vip:1522)(REMOTE_LISTENER=cluster1-dg-scan:1525))' 
scope=both sid='clu1db_2';

System altered.

Update 2015-MAY-18:

Since version 12c the cluster agents take care of the listener network configuration; the details are described in Oracle's SCAN whitepaper. That means we need to unset all the %LISTENER% parameters: setting one of them manually prevents the agent from setting it automatically. Especially take care of the REMOTE_LISTENER parameter, since this one is set by "dbca" during database creation.

SQL> alter system reset remote_listener scope=spfile sid='*';

System altered.
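The same kind of reset can be applied to the remaining %LISTENER% parameters in case they ended up in the spfile earlier. This is only a sketch; the SID clause must match the way the parameters were originally set, here the per-instance LISTENER_NETWORKS entries from above:

SQL> alter system reset local_listener scope=spfile sid='*';
SQL> alter system reset listener_networks scope=spfile sid='clu1db_1';
SQL> alter system reset listener_networks scope=spfile sid='clu1db_2';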

After this change restart the database and check the alert.log which should now contain something like this:

ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=10.174.1.158)(PORT=1522))' SCOPE=MEMORY SID='clu1db_1';
2015-05-18 09:39:10.375000 +02:00
ALTER SYSTEM SET remote_listener=' cluster1-scan:1525' SCOPE=MEMORY SID='clu1db_1';
ALTER SYSTEM SET listener_networks='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.50.11)(PORT=1522)))))','((NAME=net2)(REMOTE_LISTENER=cluster1-dg-scan:1525))' SCOPE=MEMORY SID='clu1db_1';


Then do the same for the standby database.

SQL> alter system set listener_networks=
'((NAME=PUBLIC_NET)(LOCAL_LISTENER=server3-vip:1522)(REMOTE_LISTENER=cluster2-scan:1525))',
'((NAME=DG_NET)(LOCAL_LISTENER=server3-dg-vip:1522)(REMOTE_LISTENER=cluster2-dg-scan:1525))' 
scope=both sid='clu2db_1';

System altered.

SQL> alter system set listener_networks=
'((NAME=PUBLIC_NET)(LOCAL_LISTENER=server4-vip:1522)(REMOTE_LISTENER=cluster2-scan:1525))',
'((NAME=DG_NET)(LOCAL_LISTENER=server4-dg-vip:1522)(REMOTE_LISTENER=cluster2-dg-scan:1525))' 
scope=both sid='clu2db_2';

System altered.

Update 2015-MAY-18:

Reset the %LISTENER% parameters at the standby site too and restart the standby database:

SQL> alter system reset remote_listener scope=spfile sid='*';

System altered.
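The restart itself is easiest done with srvctl; a sketch, assuming the standby database is registered in the clusterware under the name clu2db:

D:\>srvctl stop database -db clu2db
D:\>srvctl start database -db clu2db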

You may now check the changes by issuing some “lsnrctl status” commands. In our case everything went fine 🙂
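For example, the node listener created above can be queried like this (run from the Grid Infrastructure home on each node; the database services should show up in the output):

D:\>lsnrctl status listener_dg
D:\>lsnrctl services listener_dg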

Finally, we can create the DataGuard Broker configuration using the second network. It's as easy as this:

C:\>dgmgrl
DGMGRL for 64-bit Windows: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@clu1db
Password:
Connected as SYSDG.
DGMGRL> create configuration clu_maa as
> primary database is clu1db
> connect identifier is 'cluster1-dg-scan:1525/clu1db';
Configuration "clu_maa" created with primary database "clu1db"
DGMGRL> add database clu2db
> as connect identifier is 'cluster2-dg-scan:1525/clu2db'
> maintained as physical;
Database "clu2db" added

DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration verbose;

Configuration - clu_maa

  Protection Mode: MaxPerformance
  Members:
  clu1db - Primary database
    clu2db - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Now only the "StaticConnectIdentifier" DataGuard Broker property of the two databases still points to the public network. You may change this as well to use the DataGuard network, but since this property is only used by the DGMGRL client to start up databases, it causes no traffic to speak of. Or just leave the property set to NULL, then the "DGConnectIdentifier" property will be used instead.
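If you do decide to point it to the DataGuard network, the property can be checked and changed per instance, since StaticConnectIdentifier is an instance-specific broker property. The connect descriptor below is only illustrative; keep the CONNECT_DATA part of your current value and swap the host and port for the DataGuard VIP and listener:

DGMGRL> show instance verbose 'clu1db_1' on database clu1db;
DGMGRL> edit instance 'clu1db_1' on database clu1db
> set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=server1-dg-vip)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=clu1db_DGMGRL)(INSTANCE_NAME=clu1db_1)(SERVER=DEDICATED)))';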

Ok, that’s it. Enjoy your network traffic.