Oracle 12.2 – SQL*Plus Command History

Currently I am preparing some slides for future events, and I will share some of the topics beforehand. The first topic is the one mentioned in the title: with Oracle Database 12.2 we finally get a command history in SQL*Plus. Though only a tiny new bit, this is one of my favorite new features.
Let’s see how this works.

$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon May 8 09:49:04 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> history
SP2-1650: History is off, use "SET HIST[ORY] ON" to enable History.

Ok, the feature is inactive by default. We need to enable it by setting HISTORY either to “ON” or to a number specifying how many statements should be kept in the history.

SQL> set history 50

SQL> select * from dual;
D
-
X
SQL> history
  1  select * from dual;

Let’s see what else is possible with the “history” command:

SQL> help history

 HISTORY
 -------

 Stores, lists, executes, edits of the commands
 entered during the current SQL*Plus session.

 HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]

 N is the entry number listed in the history list.
 Use this number to recall, edit or delete the command.

 Example:
 HIST 3 RUN - will run the 3rd entry from the list.

 HIST[ORY] without any option will list all entries in the list.

We can easily run any command by specifying its number from the history, or remove a command from the history.

SQL> hist
  1  select * from dual;
  2  select count(*) from dba_objects where status <> 'VALID';

SQL> hist 2 run

  COUNT(*)
----------
         0

SQL> hist 1 del
SQL> hist
  1  select count(*) from dba_objects where status <> 'VALID';

And by the way, the history works at statement level, not at line level. This is perhaps its most important advantage over the OS-level command history you get with arrow-up and arrow-down (natively on Windows, or with “rlwrap” on Linux).

SQL> select count(*) from dba_users
  2  where oracle_maintained = 'Y'
  3  and account_status = 'OPEN';

  COUNT(*)
----------
         2

SQL> hist
  1  select count(*) from dba_objects where status <> 'VALID';
  2  select count(*) from dba_users
     where oracle_maintained = 'Y'
     and account_status = 'OPEN';

SQL> hist 2 run

  COUNT(*)
----------
         2
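By the way, the setting does not survive the session; SQL*Plus forgets it on exit. To have the history enabled permanently, the SET command can go into the SQL*Plus profile scripts. A sketch, assuming the standard site profile location (a user-level login.sql works as well):

```sql
-- $ORACLE_HOME/sqlplus/admin/glogin.sql is executed at every SQL*Plus startup
SET HISTORY 100
```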

Enjoy the new feature.

Cluster Health Monitor

Preface

Issues with the Oracle Grid Infrastructure are sometimes related to OS issues like heavy load or similar. That’s why Oracle provided the OSWatcher tool in the past and later included that functionality in the Grid Infrastructure as the Cluster Health Monitor (CHM). With version 12.1.0.1 the Grid Infrastructure Management Repository (GIMR) was introduced, and it became mandatory with 12.1.0.2. The GIMR now holds all the OS-related data which used to be stored in the filesystem before. The main tool to manage this data is called “oclumon”.

Setup

At first, one may want to check and change the retention of the OS performance data. The retention is always specified in seconds and corresponds to a size in MB; oclumon can do the conversion for us.
First, check the current retention:

[oracle@vm101 ~]$ oclumon manage -get repsize

CHM Repository Size = 68160 seconds

Ok, that is not even a full day of performance data (68160 seconds is roughly 19 hours). So let’s find out what is needed for a whole day:

[oracle@vm101 ~]$ oclumon manage -repos checkretentiontime 86400
The Cluster Health Monitor repository is too small for the desired retention. Please first resize the repository to 2598 MB

There we have the required size, which can now be used to resize the repository.

[oracle@vm101 ~]$ oclumon manage -repos changerepossize 2598
The Cluster Health Monitor repository was successfully resized.The new retention is 86460 seconds.
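From the two figures oclumon reported (2598 MB for 86400 seconds) one can sketch the seconds-to-MB correlation. A back-of-the-envelope helper, assuming the ratio scales roughly linearly, which is my assumption, not documented behavior:

```python
# Rough estimate of the CHM repository size for a desired retention,
# based on the ratio oclumon reported above: 2598 MB for 86400 seconds.
# Assumption: the size scales roughly linearly with the retention.

MB_PER_SECOND = 2598 / 86400  # ~0.03 MB of repository per retained second

def estimate_repo_size_mb(retention_seconds: int) -> int:
    """Return the estimated repository size in MB for a retention in seconds."""
    return round(retention_seconds * MB_PER_SECOND)

# Three days of retention would need roughly three times the space:
print(estimate_repo_size_mb(3 * 86400))
```

For the exact figure, of course, “oclumon manage -repos checkretentiontime” remains the authoritative source.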

That’s all.

Querying Data

Holding the data is one thing, but how do we get it out of the repository? Basically there are two ways: either “oclumon dumpnodeview” or Enterprise Manager.

oclumon dumpnodeview

Oclumon is the more powerful of the two: it can retrieve all the data, down to the process and device level, when using verbose mode. These are the available options:

[oracle@vm101 ~]$ oclumon dumpnodeview -h

dumpnodeview verb usage
=======================
The dumpnodeview command reports monitored records in the text format. The
collection of metrics for a node at a given point in time (a timestamp) is
called a node view.

* Usage
  dumpnodeview [-allnodes | -n <node1> ...] [-last <duration> |
                -s <timestamp> -e <timestamp>][-i <interval>][-v]
                [-system][-process][-device][filesystem][-nic]
                [-protoerr][-cpu][-topconsumer][-format <format type>]
                [-dir <directory> [-append]]

*Where
  -n <node1> ...   = Dump node views for specified nodes
  -allnodes        = Dump node views for all nodes
  -s <timestamp>   = Specify start time for range dump of node views
  -e <timestamp>   = Specify end time for range dump of node views
                     Absolute timestamp must be in "YYYY-MM-DD HH24:MI:SS"
                     format, for example "2007-11-12 23:05:00"
  -last <duration> = Dump the latest node views for a specified duration.
                     Duration must be in "HH24:MI:SS" format, for example
                     "00:45:00"
  -i               = Dump node views separated by the specified
                     interval in seconds. Must be a multiple of 5.
  -v               = Dump verbose node views containing all parts.
  -system, -cpu,.. = Dump each indicate node view parts.
  -format <format type> = format of the output.
                     <format type> can be legacy, tabular, or csv.
                     The default format is tabular.
  -dir <directory> = Dump node view part to file(s) in spceified dir.
                     With -append, will append the files. Overwrite otherwise.

The data is human readable but not very easy to analyze:

----------------------------------------
Node: vm101 Clock: '16-03-07 11.24.38 ' SerialNo:84380
----------------------------------------

SYSTEM:
#pcpus: 1 #vcpus: 4 cpuht: Y chipname: Dual-Core cpu: 7.25 cpuq: 0 physmemfree: 96760 physmemtotal: 5889124 mcache: 3344260 swapfree: 5953464 swaptotal: 6143996 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048 ior: 68 iow: 56 ios: 11 swpin: 0 swpout: 0 pgin: 0 pgout: 4 netr: 41.895 netw: 74.346 procs: 328 procsoncpu: 1 rtprocs: 18 rtprocsoncpu: N/A #fds: 22240 #sysfdlimit: 6815744 #disks: 9 #nics: 3 nicErrors: 0

TOP CONSUMERS:
topcpu: 'gipcd.bin(23768) 5.40' topprivmem: 'java(27097) 488384' topshm: 'oracle_3224_-mg(3224) 562184' topfd: 'oraagent.bin(24093) 266' topthread: 'java(27097) 102'

CPUS:
cpu3: sys-3.71 user-4.58 nice-0.0 usage-8.29 iowait-0.0 steal-1.96
cpu2: sys-3.6 user-4.59 nice-0.0 usage-7.65 iowait-0.0 steal-1.96
cpu1: sys-3.76 user-3.32 nice-0.0 usage-7.9 iowait-0.0 steal-2.43
cpu0: sys-2.20 user-3.75 nice-0.0 usage-5.96 iowait-0.0 steal-2.20

PROCESSES:

name: 'evmlogger.bin' pid: 23762 #procfdlimit: 65536 cpuusage: 0.60 privmem: 5172 shm: 11868 #fd: 30 #threads: 2 priority: 20 nice: 0 state: S
name: 'asm_lms0_+asm1' pid: 24304 #procfdlimit: 65536 cpuusage: 0.60 privmem: 11352 shm: 18712 #fd: 13 #threads: 1 priority: -2 nice: 0 state: S
name: 'ora_vkrm_rac01_' pid: 28506 #procfdlimit: 65536 cpuusage: 0.40 privmem: 2180 shm: 10576 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'asm_lmd0_+asm1' pid: 24302 #procfdlimit: 65536 cpuusage: 0.40 privmem: 12204 shm: 19872 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'asm_dia0_+asm1' pid: 24298 #procfdlimit: 65536 cpuusage: 0.40 privmem: 12024 shm: 24196 #fd: 16 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_diag_rac01_' pid: 28502 #procfdlimit: 65536 cpuusage: 0.40 privmem: 7980 shm: 11132 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_lmhb_rac01_' pid: 28530 #procfdlimit: 65536 cpuusage: 0.40 privmem: 2828 shm: 12212 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_lck0_rac01_' pid: 28570 #procfdlimit: 65536 cpuusage: 0.40 privmem: 3408 shm: 35388 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_dia0_rac01_' pid: 28512 #procfdlimit: 65536 cpuusage: 0.40 privmem: 13208 shm: 24336 #fd: 15 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_o000_-mgmtd' pid: 2688 #procfdlimit: 65536 cpuusage: 0.40 privmem: 2368 shm: 11932 #fd: 9 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_lmon_rac01_' pid: 28514 #procfdlimit: 65536 cpuusage: 0.40 privmem: 17184 shm: 32324 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_smco_-mgmtd' pid: 24896 #procfdlimit: 65536 cpuusage: 0.20 privmem: 2092 shm: 11692 #fd: 7 #threads: 1 priority: 20 nice: 0 state: S
name: 'UsmMonitor' pid: 23353 #procfdlimit: 1024 cpuusage: 0.20 privmem: 0 shm: 0 #fd: 2 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_q00e_rac01_' pid: 28741 #procfdlimit: 65536 cpuusage: 0.20 privmem: 4756 shm: 36972 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'asm_lmon_+asm1' pid: 24300 #procfdlimit: 65536 cpuusage: 0.20 privmem: 14312 shm: 27064 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_p00c_rac01_' pid: 28699 #procfdlimit: 65536 cpuusage: 0.20 privmem: 2028 shm: 8328 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_lgwr_-mgmtd' pid: 24838 #procfdlimit: 65536 cpuusage: 0.20 privmem: 4140 shm: 21876 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_dia0_-mgmtd' pid: 24833 #procfdlimit: 65536 cpuusage: 0.20 privmem: 3908 shm: 15792 #fd: 7 #threads: 1 priority: 20 nice: 0 state: S
name: 'ovmd' pid: 603 #procfdlimit: 1024 cpuusage: 0.20 privmem: 96 shm: 396 #fd: 5 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_p009_rac01_' pid: 28689 #procfdlimit: 65536 cpuusage: 0.20 privmem: 2032 shm: 8376 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'init.tfa' pid: 21628 #procfdlimit: 1024 cpuusage: 0.20 privmem: 724 shm: 1220 #fd: 4 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_p004_rac01_' pid: 28676 #procfdlimit: 65536 cpuusage: 0.20 privmem: 2028 shm: 8324 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'cssdagent' pid: 23811 #procfdlimit: 65536 cpuusage: 0.60 privmem: 37564 shm: 80744 #fd: 61 #threads: 16 priority: -100 nice: 0 state: S
name: 'kswapd0' pid: 44 #procfdlimit: 1024 cpuusage: 0.00 privmem: 0 shm: 0 #fd: 2 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_3224_-mg' pid: 3224 #procfdlimit: 65536 cpuusage: 0.80 privmem: 5692 shm: 562184 #fd: 9 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_dbw0_-mgmtd' pid: 24835 #procfdlimit: 65536 cpuusage: 0.00 privmem: 11332 shm: 512752 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_mman_rac01_' pid: 28498 #procfdlimit: 65536 cpuusage: 0.00 privmem: 2180 shm: 482056 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_ppa7_rac01_' pid: 28649 #procfdlimit: 65536 cpuusage: 0.20 privmem: 5584 shm: 479744 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_dbw0_rac01_' pid: 28534 #procfdlimit: 65536 cpuusage: 0.00 privmem: 10552 shm: 406356 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_3216_-mg' pid: 3216 #procfdlimit: 65536 cpuusage: 0.00 privmem: 6264 shm: 351364 #fd: 9 #threads: 1 priority: 20 nice: 0 state: S
name: 'java' pid: 27097 #procfdlimit: 65536 cpuusage: 0.40 privmem: 488384 shm: 6068 #fd: 151 #threads: 102 priority: 20 nice: 0 state: S
name: 'ora_lms1_rac01_' pid: 28524 #procfdlimit: 65536 cpuusage: 0.60 privmem: 16536 shm: 229772 #fd: 11 #threads: 1 priority: -2 nice: 0 state: S
name: 'ora_cjq0_rac01_' pid: 28663 #procfdlimit: 65536 cpuusage: 0.00 privmem: 14556 shm: 226124 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_lms0_rac01_' pid: 28520 #procfdlimit: 65536 cpuusage: 0.80 privmem: 16496 shm: 220516 #fd: 11 #threads: 1 priority: -2 nice: 0 state: S
name: 'ora_mmon_rac01_' pid: 28556 #procfdlimit: 65536 cpuusage: 0.40 privmem: 8364 shm: 201564 #fd: 15 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_mman_-mgmtd' pid: 24823 #procfdlimit: 65536 cpuusage: 0.00 privmem: 2092 shm: 194524 #fd: 9 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_mmon_-mgmtd' pid: 24856 #procfdlimit: 65536 cpuusage: 0.00 privmem: 20948 shm: 153272 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_smon_rac01_' pid: 28542 #procfdlimit: 65536 cpuusage: 0.00 privmem: 4800 shm: 159652 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_lck1_rac01_' pid: 28532 #procfdlimit: 65536 cpuusage: 0.00 privmem: 4128 shm: 156856 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w001_rac01_' pid: 28635 #procfdlimit: 65536 cpuusage: 0.00 privmem: 4720 shm: 128528 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_smon_-mgmtd' pid: 24844 #procfdlimit: 65536 cpuusage: 0.00 privmem: 4572 shm: 123232 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w000_rac01_' pid: 28633 #procfdlimit: 65536 cpuusage: 0.00 privmem: 3852 shm: 122216 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_4662_-mg' pid: 4662 #procfdlimit: 65536 cpuusage: 0.00 privmem: 8732 shm: 107292 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w002_rac01_' pid: 29643 #procfdlimit: 65536 cpuusage: 0.00 privmem: 3968 shm: 102284 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w004_rac01_' pid: 2548 #procfdlimit: 65536 cpuusage: 0.00 privmem: 3648 shm: 94840 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_lmd0_rac01_' pid: 28516 #procfdlimit: 65536 cpuusage: 0.40 privmem: 14832 shm: 83312 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_29094_ra' pid: 29094 #procfdlimit: 65536 cpuusage: 0.20 privmem: 7764 shm: 86696 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w005_rac01_' pid: 26461 #procfdlimit: 65536 cpuusage: 0.20 privmem: 3976 shm: 88140 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_25071_-m' pid: 25071 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5116 shm: 85268 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_30010_-m' pid: 30010 #procfdlimit: 65536 cpuusage: 0.00 privmem: 6348 shm: 83756 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_q004_-mgmtd' pid: 25060 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5532 shm: 81732 #fd: 7 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_lmd1_rac01_' pid: 28518 #procfdlimit: 65536 cpuusage: 0.40 privmem: 14752 shm: 73420 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_q002_rac01_' pid: 28711 #procfdlimit: 65536 cpuusage: 0.00 privmem: 9756 shm: 74516 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w006_rac01_' pid: 26712 #procfdlimit: 65536 cpuusage: 0.00 privmem: 3528 shm: 73032 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_mmnl_-mgmtd' pid: 24860 #procfdlimit: 65536 cpuusage: 0.20 privmem: 2308 shm: 72652 #fd: 9 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w003_rac01_' pid: 30569 #procfdlimit: 65536 cpuusage: 0.00 privmem: 3792 shm: 71664 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_27203_ra' pid: 27203 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5728 shm: 58184 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'crsd.bin' pid: 23906 #procfdlimit: 65536 cpuusage: 4.00 privmem: 68024 shm: 24676 #fd: 262 #threads: 47 priority: 20 nice: 0 state: S
name: 'ora_q004_rac01_' pid: 28715 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5808 shm: 54364 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_26988_ra' pid: 26988 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5872 shm: 53996 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_qm00_rac01_' pid: 28709 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5000 shm: 52896 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_w007_rac01_' pid: 15980 #procfdlimit: 65536 cpuusage: 0.00 privmem: 3776 shm: 51372 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_28695_ra' pid: 28695 #procfdlimit: 65536 cpuusage: 0.00 privmem: 4556 shm: 49604 #fd: 14 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_mmnl_rac01_' pid: 28558 #procfdlimit: 65536 cpuusage: 0.00 privmem: 3272 shm: 49672 #fd: 12 #threads: 1 priority: 20 nice: 0 state: S
name: 'oracle_28691_ra' pid: 28691 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5512 shm: 47584 #fd: 14 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_q006_rac01_' pid: 28719 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5176 shm: 47744 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_dbrm_-mgmtd' pid: 24829 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5236 shm: 46936 #fd: 10 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_ckpt_rac01_' pid: 28538 #procfdlimit: 65536 cpuusage: 0.20 privmem: 4512 shm: 45660 #fd: 13 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdb_ckpt_-mgmtd' pid: 24840 #procfdlimit: 65536 cpuusage: 0.00 privmem: 4616 shm: 44304 #fd: 11 #threads: 1 priority: 20 nice: 0 state: S
name: 'ohasd.bin' pid: 23597 #procfdlimit: 65536 cpuusage: 2.60 privmem: 48848 shm: 20988 #fd: 242 #threads: 31 priority: 20 nice: 0 state: S
name: 'ora_rmv1_rac01_' pid: 28566 #procfdlimit: 65536 cpuusage: 1.20 privmem: 2988 shm: 43556 #fd: 8 #threads: 1 priority: 20 nice: 0 state: S
name: 'ora_rmv0_rac01_' pid: 28564 #procfdlimit: 65536 cpuusage: 0.80 privmem: 2988 shm: 43536 #fd: 8 #threads: 1 priority: 20 nice: 0 state: S
name: 'orarootagent.bi' pid: 23644 #procfdlimit: 65536 cpuusage: 1.40 privmem: 44300 shm: 20824 #fd: 173 #threads: 24 priority: 20 nice: 0 state: S
name: 'oraagent.bin' pid: 24093 #procfdlimit: 65536 cpuusage: 2.20 privmem: 40068 shm: 22908 #fd: 266 #threads: 34 priority: 20 nice: 0 state: S
name: 'oraagent.bin' pid: 23719 #procfdlimit: 65536 cpuusage: 1.40 privmem: 37496 shm: 18652 #fd: 141 #threads: 20 priority: 20 nice: 0 state: S
name: 'ons' pid: 29963 #procfdlimit: 65536 cpuusage: 0.00 privmem: 5516 shm: 1372 #fd: 17 #threads: 18 priority: 20 nice: 0 state: S
name: 'gipcd.bin' pid: 23768 #procfdlimit: 65536 cpuusage: 5.40 privmem: 30204 shm: 17532 #fd: 198 #threads: 9 priority: 20 nice: 0 state: S
name: 'evmd.bin' pid: 23734 #procfdlimit: 65536 cpuusage: 2.40 privmem: 18364 shm: 16816 #fd: 165 #threads: 9 priority: 20 nice: 0 state: S
name: 'asm_vktm_+asm1' pid: 24284 #procfdlimit: 65536 cpuusage: 3.60 privmem: 1484 shm: 10980 #fd: 11 #threads: 1 priority: -2 nice: 0 state: S
name: 'ora_vktm_rac01_' pid: 28492 #procfdlimit: 65536 cpuusage: 3.40 privmem: 2188 shm: 9476 #fd: 10 #threads: 1 priority: -2 nice: 0 state: S
name: 'mdb_vktm_-mgmtd' pid: 24817 #procfdlimit: 65536 cpuusage: 3.40 privmem: 2100 shm: 10392 #fd: 7 #threads: 1 priority: -2 nice: 0 state: S
name: 'octssd.bin' pid: 23878 #procfdlimit: 65536 cpuusage: 2.00 privmem: 18116 shm: 15580 #fd: 99 #threads: 11 priority: 20 nice: 0 state: S
name: 'orarootagent.bi' pid: 24051 #procfdlimit: 65536 cpuusage: 2.00 privmem: 14164 shm: 14940 #fd: 53 #threads: 10 priority: 20 nice: 0 state: S
name: 'rcu_sched' pid: 10 #procfdlimit: 1024 cpuusage: 0.80 privmem: 0 shm: 0 #fd: 2 #threads: 1 priority: 20 nice: 0 state: S
name: 'mdnsd.bin' pid: 23732 #procfdlimit: 65536 cpuusage: 0.60 privmem: 6260 shm: 11268 #fd: 47 #threads: 3 priority: 20 nice: 0 state: S
name: 'gpnpd.bin' pid: 23749 #procfdlimit: 65536 cpuusage: 0.60 privmem: 18872 shm: 16416 #fd: 118 #threads: 8 priority: 20 nice: 0 state: S
name: 'osysmond.bin' pid: 23900 #procfdlimit: 65536 cpuusage: 2.60 privmem: 34464 shm: 80940 #fd: 108 #threads: 12 priority: -100 nice: 0 state: S
name: 'ocssd.bin' pid: 23822 #procfdlimit: 65536 cpuusage: 3.80 privmem: 101036 shm: 90008 #fd: 253 #threads: 28 priority: -100 nice: 0 state: S
name: 'cssdmonitor' pid: 23796 #procfdlimit: 65536 cpuusage: 0.80 privmem: 36816 shm: 80716 #fd: 61 #threads: 16 priority: -100 nice: 0 state: S
DEVICES:
xvdf ior: 0.000 iow: 0.000 ios: 0 qlen: 0 wait: 0 type: SYS
dm-2 ior: 0.000 iow: 4.002 ios: 1 qlen: 0 wait: 0 type: SYS
dm-1 ior: 0.000 iow: 0.000 ios: 0 qlen: 0 wait: 0 type: SYS
dm-0 ior: 0.000 iow: 0.000 ios: 0 qlen: 0 wait: 0 type: SWAP
xvde ior: 0.000 iow: 0.000 ios: 0 qlen: 0 wait: 0 type: ASM
xvda1 ior: 0.000 iow: 0.000 ios: 0 qlen: 0 wait: 0 type: SYS
xvdc ior: 0.000 iow: 0.000 ios: 0 qlen: 0 wait: 0 type: SYS
xvdb ior: 13.809 iow: 41.826 ios: 5 qlen: 0 wait: 0 type: ASM,OCR,VOTING[ONLINE]
xvda ior: 0.000 iow: 4.002 ios: 1 qlen: 0 wait: 0 type: SYS
xvda2 ior: 0.000 iow: 4.002 ios: 1 qlen: 0 wait: 0 type: SYS
xvdd ior: 54.435 iow: 6.403 ios: 3 qlen: 0 wait: 0 type: ASM
NICS:
lo netrr: 0.201  netwr: 0.201  neteff: 0.402  nicerrors: 0 pktsin: 1  pktsout: 1  errsin: 0  errsout: 0  indiscarded: 0  outdiscarded: 0  inunicast: 1  innonunicast: 0  type: PUBLIC
eth1 netrr: 41.132  netwr: 73.606  neteff: 114.738  nicerrors: 0 pktsin: 73  pktsout: 98  errsin: 0  errsout: 0  indiscarded: 0  outdiscarded: 0  inunicast: 73  innonunicast: 0  type: PRIVATE,ASM latency: <1
eth0 netrr: 0.562  netwr: 0.537  neteff: 1.099  nicerrors: 0 pktsin: 6  pktsout: 3  errsin: 0  errsout: 0  indiscarded: 0  outdiscarded: 0  inunicast: 6  innonunicast: 0  type: PUBLIC

FILESYSTEMS:
mount: /u01 type: xfs total: 47175680 used: 29903604 available: 17272076 used%: 63 ifree%: 99 [GRID_HOME]

PROTOCOL ERRORS:
IPHdrErr: 0 IPAddrErr: 0 IPUnkProto: 0 IPReasFail: 442 IPFragFail: 0 TCPFailedConn: 4000 TCPEstRst: 249227 TCPRetraSeg: 1980 UDPUnkPort: 3066 UDPRcvErr: 22563

These sections are repeated for every timestamp in the requested range.
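For scripting, the space-separated “key: value” layout of these node views can be split with a small helper. A minimal sketch in Python; it only handles unquoted values like those in the SYSTEM section, not the quoted entries under TOP CONSUMERS:

```python
import re

def parse_nodeview_line(line: str) -> dict:
    """Split a 'key: value' node view line (e.g. the SYSTEM section)
    into a dict, converting numeric values where possible."""
    result = {}
    for key, value in re.findall(r"(\S+): (\S+)", line):
        try:
            result[key] = float(value) if "." in value else int(value)
        except ValueError:
            result[key] = value  # non-numeric values such as 'Y' stay strings
    return result

system = parse_nodeview_line(
    "#pcpus: 1 #vcpus: 4 cpuht: Y cpu: 7.25 physmemfree: 96760"
)
print(system["#vcpus"], system["cpu"])
```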

Enterprise Manager

Enterprise Manager is able to display the data as charts, which is much more intuitive. But when you choose “Cluster Health Monitor” from the cluster target homepage

[Screenshot: CC-Cluster-Health-Monitoring]

you will face a login prompt:

[Screenshot: CC-Cluster-Health-Monitoring-login]

This is maybe not very straightforward. What Oracle wants to know is how to connect to the GIMR database. The performance data lives inside a PDB, but Enterprise Manager simply needs the DBSNMP user of the CDB.

[oracle@vm101 ~]$ export ORACLE_HOME=/u01/app/grid/12.1.0.2
[oracle@vm101 ~]$ export ORACLE_SID=-MGMTDB
[oracle@vm101 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Mar 7 11:40:40 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management and Advanced Analytics options

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MYCLUSTER                      READ WRITE NO

SQL> select username, account_status
  2  from dba_users;

USERNAME             ACCOUNT_STATUS
-------------------- --------------------------------
ANONYMOUS            EXPIRED & LOCKED
DBSNMP               EXPIRED & LOCKED
WMSYS                EXPIRED & LOCKED
XDB                  EXPIRED & LOCKED
APPQOSSYS            EXPIRED & LOCKED
GSMADMIN_INTERNAL    EXPIRED & LOCKED
GSMCATUSER           EXPIRED & LOCKED
SYSBACKUP            EXPIRED & LOCKED
OUTLN                EXPIRED & LOCKED
DIP                  EXPIRED & LOCKED
SYSDG                EXPIRED & LOCKED
ORACLE_OCM           EXPIRED & LOCKED
SYSKM                EXPIRED & LOCKED
XS$NULL              EXPIRED & LOCKED
GSMUSER              EXPIRED & LOCKED
AUDSYS               EXPIRED & LOCKED
SYSTEM               OPEN
SYS                  OPEN

19 rows selected.

SQL> alter user dbsnmp account unlock identified by dbsnmp;

User altered.

Now we can see the performance charts in real time:

[Screenshot: CC-Cluster-Health-Monitoring-rt]

Or historically:

[Screenshot: CC-Cluster-Health-Monitoring-hist]

Unfortunately there is no drill-down to the process level like in the database performance pages. And there is only data for CPU, network and memory, but not for disk. Please Oracle, add these features for us; it would help a lot. Thanks.
At least we can see details for each single CPU and network interface per node:

[Screenshot: CC-Cluster-Health-Monitoring-13c-CPU]

[Screenshot: CC-Cluster-Health-Monitoring-13c-network]

opatchauto Odyssey

A couple of days ago a customer asked for assistance in installing the January PSU in their RAC environment. The patch was to be applied to two systems: first the test cluster, then the production cluster. Makes sense so far. So we planned the steps that needed to be done:

  • Download the patch
  • Copy the patch to all nodes and extract it
  • Check the OPatch version
  • Create a response file for OCM and copy it to all nodes
  • Clear the ASM adump directory, since many files there may slow down the pre-patch steps
  • Run “opatchauto” on the first node
  • Run “opatchauto” on the second node
  • Run “datapatch” to apply the SQL changes to the databases
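The two “opatchauto” steps can be sketched like this; the patch number, staging path and response file name are placeholders, not the customer’s actual values. In 12.1, “opatchauto” lives in the OPatch directory of the Grid Infrastructure home and has to be run as root:

```shell
# Placeholders: 12345678 is not the real patch number, /u01/stage is an
# arbitrary staging directory. Run as root, one node at a time.
unzip -d /u01/stage p12345678_121020_Linux-x86-64.zip
/u01/app/grid/12.1.0.2/OPatch/opatchauto apply /u01/stage/12345678 \
    -ocmrf /u01/stage/ocm.rsp
```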

The whole procedure went through without any issues on the test cluster. We could even skip the last step, running “datapatch”, since “opatchauto” had already done that for us. This is in contrast to the Readme, which does not mention it.

So that was easy. But unfortunately the production system did not go as smoothly as the test system. “opatchauto” shut down the cluster stack and patched the RDBMS home successfully. But during the patch phase of the GI home, the logfile told us that there were still processes blocking some files. I checked and found a handful; one of them was the “ocssd”. The moment I killed all the left-over processes I knew that this was not the best idea: the server fenced and rebooted straight away. That left my cluster in a fuzzy state. The cluster stack came up again, but “opatchauto -resume” told me to proceed with some manual steps. So I applied the patches to the GI home, which had not been done before, and ran the post-patch script, which failed. Starting “opatchauto” in normal mode failed as well, since the cluster was already in “rolling” mode.

So finally I removed all the applied patches manually, put the cluster back into normal mode following MOS Note 1943498.1, and started the whole patching all over. This time everything went fine.

Conclusion

  1. Think before you act. Killing OCSSD is never a good idea.
  2. In contrast to what the Readme says, “datapatch” is executed by “opatchauto” as part of the patching process.
  3. Checking the current cluster status can be done like this:
[oracle@vm101 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3467666221].
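For monitoring scripts it can be handy to extract the upgrade state from that output automatically. A minimal sketch in Python, assuming the message format shown above:

```python
import re

def cluster_upgrade_state(crsctl_output: str) -> str:
    """Extract the upgrade state from 'crsctl query crs activeversion -f' output."""
    match = re.search(r"The cluster upgrade state is \[([^\]]+)\]", crsctl_output)
    if match is None:
        raise ValueError("upgrade state not found in crsctl output")
    return match.group(1)

output = ("Oracle Clusterware active version on the cluster is [12.1.0.2.0]. "
          "The cluster upgrade state is [NORMAL]. "
          "The cluster active patch level is [3467666221].")
print(cluster_upgrade_state(output))  # anything other than NORMAL needs attention
```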