Utilizing Serverpools in Oracle Grid Infrastructure

Introduction

The concept of serverpools was already introduced with 11g Release 2. Before serverpools existed, the instances of a database had to be hard-wired to specific servers. Assume a 4-node cluster and a 2-instance RAC database configured to run on node-1 and node-2. If these two nodes are down for whatever reason, the RAC database is entirely unavailable, and we would need to reconfigure the database instances to run on the remaining nodes node-3 and node-4.
This is where serverpools come into play. Serverpools act as an abstraction layer between servers and instances: servers are allocated to serverpools according to rules we define, and databases (or rather: services) are configured to run on specific serverpools. There is no longer a hard dependency between instances and servers.
The following examples use the new 12c parameters since the short ones are now deprecated.
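As an illustration, the short 11g Release 2 options map to the longer 12c names roughly like this for the serverpool command used below (a sketch only; check the srvctl help output of your release for the exact set of options):

# 11g Release 2 style, short options ("srvpool" noun)
srvctl add srvpool -g gold -l 1 -u 2 -i 10

# 12c style, long options, as used in the rest of this post
srvctl add serverpool -serverpool gold -min 1 -max 2 -importance 10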

How it works – Basics

Assume the following environment:

  • 2 databases, one with “gold” service, one with “silver” service.
  • 4 servers in the cluster.

We want to make sure that the “gold” database stays available as long as possible, so we create two serverpools, one for each service level.

srvctl add serverpool -serverpool gold   -min 1 -max 2 -importance 10
srvctl add serverpool -serverpool silver -min 1 -max 2 -importance 8
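The pool definitions can be verified right away; srvctl config serverpool lists the minimum, maximum and importance of each pool (output omitted here):

srvctl config serverpool -serverpool gold
srvctl config serverpool -serverpool silver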

Now, make the databases/services use the newly created serverpools:

srvctl add service -db gold   -service goldsvc   -serverpool gold   -cardinality uniform
srvctl add service -db silver -service silversvc -serverpool silver -cardinality uniform
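This assumes the databases themselves are policy managed, i.e. registered against the serverpools instead of against fixed nodes. If they were originally created admin-managed, one possible way to move them into the pools is shown below (database and pool names taken from the example above):

srvctl modify database -db gold   -serverpool gold
srvctl modify database -db silver -serverpool silver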

What happens when the whole cluster starts up? Servers are assigned to serverpools in order of importance. First, all serverpools are filled until their defined minimum is reached. Once every serverpool has its “min” number of servers assigned, the serverpools are filled further until “max” is reached, again in order of importance. That means the serverpool with the highest importance is served first, regardless of which node comes up first. Look at this example:

  • node-4 comes up –> assigned to serverpool “gold” (highest importance, “min” not reached)
  • node-2 comes up –> assigned to serverpool “silver” (“gold” has reached “min”, “silver” has second-highest importance and “min” not reached)
  • node-1 comes up –> assigned to serverpool “gold” (all serverpools have reached “min”; “gold” has not reached “max” and has the highest importance)
  • node-3 comes up –> assigned to serverpool “silver” (“gold” has reached “max”, “silver” has second-highest importance and “max” not reached)

The result looks like this:

$ srvctl status serverpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 4
Server pool name: gold
Active servers count: 2
Server pool name: silver
Active servers count: 2
$ crsctl status srvpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=node-4 node-2 node-1 node-3

NAME=ora.gold
ACTIVE_SERVERS=node-4 node-1

NAME=ora.silver
ACTIVE_SERVERS=node-2 node-3
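
If you want to check a single server instead of the whole pool layout, crsctl can also print the attributes of an individual server, among them the pool it is currently active in (a quick sketch; the available options may differ per release):

$ crsctl status server node-4 -f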

The instances were automatically started on the servers belonging to the configured serverpool, without any manual intervention.

$ srvctl status database -db gold
Instance gold1 is running on node node-4
Instance gold2 is running on node node-1
$ srvctl status database -db silver
Instance silver1 is running on node node-2
Instance silver2 is running on node node-3
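
The services follow the same placement as the instances; their status can be checked per database (output omitted here):

$ srvctl status service -db gold   -service goldsvc
$ srvctl status service -db silver -service silversvc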

You can even modify the serverpool configuration on the fly; the handling of the database instances is done by the clusterware, so there is no need for human intervention. As servers are added to or removed from serverpools, instances are configured and started or stopped automatically.

$ srvctl modify serverpool -serverpool gold   -max 3
$ srvctl modify serverpool -serverpool silver -max 1

We need to modify the “min” of serverpool “silver” as well; obviously the rules are not completely re-evaluated after the changes.
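Before looking at the instances, the new server distribution can be re-checked with the same status commands as above (output omitted; “gold” should now hold three servers and “silver” only one):

$ srvctl status serverpool
$ crsctl status srvpool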

$ srvctl status database -db gold
Instance gold1 is running on node node-4
Instance gold2 is running on node node-1
Instance gold3 is running on node node-3
$ srvctl status database -db silver
Instance silver1 is running on node node-2

As you can see, there is now a third instance for database “gold”, since the serverpool was extended to span three nodes.

Conclusion

Serverpools provide an easy way to get rid of the hard dependency between servers and instances, which increases availability and flexibility and, thanks to the automatic instance handling, dramatically reduces administrative overhead.
But, on the other hand, there are some caveats when using Cloud Control to manage the cluster. As of now, Cloud Control is not aware of policy-managed databases. If an instance is started on a different node than it ran on before, it is marked as “down” on the old node, while Cloud Control knows nothing about the instance on the newly assigned node. It takes manual activity to reflect the changes and make everything “green” again.

There are other nice features around this topic; I will blog about them in the near future. Stay tuned.
