Duplicating a PDB with Read-Only Tablespaces

I have been a big fan of Oracle Recovery Manager (RMAN) since the very beginning. It has evolved over time and makes the life of a DBA much easier, as long as you know the concepts. Today's blog post is about duplicating a database. Initially I wanted to perform a schema point-in-time restore, which is not natively supported by RMAN. So let me outline the system I have.

We are talking about an Oracle Database 19c, 19.18 to be precise. It is running the multitenant architecture and has three pluggable databases. I am using Oracle Managed Files (OMF) because this makes file handling easier for me. In one of these PDBs, several schemas are spread over a couple of tablespaces. This is the tablespace layout:

SYS @ ORCL19:CDB$ROOT:>select con_id, tablespace_name, status from cdb_tablespaces;

    CON_ID TABLESPACE_NAME                STATUS
---------- ------------------------------ ---------
         3 SYSTEM                         ONLINE
         3 SYSAUX                         ONLINE
         3 UNDOTBS1                       ONLINE
         3 TEMP                           ONLINE
         3 USERS                          READ ONLY     <<<<<<<<
         3 TBS_DATA                       READ ONLY
         3 TBS_LOB                        READ ONLY
         3 TBS_ORG_UTPLSQL                ONLINE
         1 SYSTEM                         ONLINE
         1 SYSAUX                         ONLINE
         1 UNDOTBS1                       ONLINE
         1 TEMP                           ONLINE
         1 USERS                          ONLINE
         4 SYSTEM                         ONLINE
         4 SYSAUX                         ONLINE
         4 UNDOTBS1                       ONLINE
         4 TEMP                           ONLINE
         4 USERS                          ONLINE
         4 DATA                           ONLINE
         4 TBS_DATA                       READ ONLY
         4 TBS_LOB                        READ ONLY
         5 SYSTEM                         ONLINE
         5 SYSAUX                         ONLINE
         5 UNDOTBS1                       ONLINE
         5 TEMP                           ONLINE
         5 USERS                          ONLINE

I wanted to restore a specific tablespace, USERS in CON_ID 3, to some point in the past. Since I cannot use an automatic auxiliary instance for this, I created my own auxiliary instance. Not that complex: just create a parameter file.

[oracle@vm160 ~]$ cat $ORACLE_HOME/dbs/initaux.ora
db_name=aux
db_block_size=8192
db_create_file_dest='/u01/app/oracle/oradata'
db_recovery_file_dest='/u01/app/oracle/fra'
db_recovery_file_dest_size=10g
sga_target=1500M
enable_pluggable_database=true

Having that, I started RMAN, connected to my target and my auxiliary database, and started the duplicate. The DUPLICATE command allows us to include or exclude PDBs and/or skip specific tablespaces. Since I wanted to restore only one tablespace of one PDB, I combined these features. Furthermore, I used the NOOPEN option to prevent the auxiliary database from being opened automatically after the duplication.

[oracle@vm160 ~]$ rman

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Mar 21 08:21:41 2024
Version 19.18.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/********@vm160/orcl19.support.robotron.de

connected to target database: ORCL19 (DBID=422142402)

RMAN> connect auxiliary /

connected to auxiliary database (not started)

RMAN> startup clone nomount

Oracle instance started

Total System Global Area    1577055136 bytes

Fixed Size                     9163680 bytes
Variable Size                385875968 bytes
Database Buffers            1174405120 bytes
Redo Buffers                   7610368 bytes

RMAN> sql clone 'create spfile from pfile';

using target database control file instead of recovery catalog
sql statement: create spfile from pfile

RMAN> startup clone nomount force

Oracle instance started

Total System Global Area    1577055136 bytes

Fixed Size                     9163680 bytes
Variable Size                385875968 bytes
Database Buffers            1174405120 bytes
Redo Buffers                   7610368 bytes

RMAN> duplicate database to aux pluggable database pdb1 skip tablespace pdb1:TBS_DATA,pdb1:TBS_LOB,pdb1:TBS_ORG_UTPLSQL until time 'sysdate-1' noopen;

Starting Duplicate Db at 2024-03-21 08:22:30
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=179 device type=DISK
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

contents of Memory Script:
{
   set until scn  42779928;
   sql clone "alter system set  control_files =
  ''/u01/app/oracle/oradata/AUX/controlfile/o1_mf_lzqr1q5x_.ctl'', ''/u01/app/oracle/fra/AUX/controlfile/o1_mf_lzqr1q60_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''ORCL19'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''aux'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile;
   alter clone database mount;
}
executing Memory Script

executing command: SET until clause

sql statement: alter system set  control_files =   ''/u01/app/oracle/oradata/AUX/controlfile/o1_mf_lzqr1q5x_.ctl'', ''/u01/app/oracle/fra/AUX/controlfile/o1_mf_lzqr1q60_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''ORCL19'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''aux'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area    1577055136 bytes

Fixed Size                     9163680 bytes
Variable Size                385875968 bytes
Database Buffers            1174405120 bytes
Redo Buffers                   7610368 bytes

Starting restore at 2024-03-21 08:22:53
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=4 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fra/ORCL19/autobackup/2024_03_20/o1_mf_s_1164074408_lznf8rgr_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fra/ORCL19/autobackup/2024_03_20/o1_mf_s_1164074408_lznf8rgr_.bkp tag=TAG20240320T020008
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/AUX/controlfile/o1_mf_lzqr1q5x_.ctl
output file name=/u01/app/oracle/fra/AUX/controlfile/o1_mf_lzqr1q60_.ctl
Finished restore at 2024-03-21 08:22:55

database mounted
Skipping pluggable database ORCL19PDB2
Skipping pluggable database PDB2
Checking that duplicated tablespaces are self-contained
Automatically adding tablespace SYSTEM
Automatically adding tablespace SYSAUX
Automatically adding tablespace PDB$SEED:SYSTEM
Automatically adding tablespace PDB$SEED:SYSAUX
Automatically adding tablespace UNDOTBS1
Skipping tablespace USERS
Skipping tablespace PDB1:TBS_ORG_UTPLSQL
Skipping tablespace PDB1:TBS_LOB
Skipping tablespace PDB1:TBS_DATA

contents of Memory Script:
{
   set until scn  42779928;
   set newname for clone datafile  1 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   set newname for clone datafile  5 to new;
   set newname for clone datafile  6 to new;
   set newname for clone datafile  8 to new;
   set newname for clone datafile  19 to new;
   set newname for clone datafile  20 to new;
   set newname for clone datafile  21 to new;
   set newname for clone datafile  22 to new;
   restore
   clone database
   skip forever tablespace  "USERS",
 "ORCL19PDB2":"USERS",
 "ORCL19PDB2":"UNDOTBS1",
 "ORCL19PDB2":"SYSTEM",
 "ORCL19PDB2":"SYSAUX",
 "PDB2":"USERS",
 "PDB2":"UNDOTBS1",
 "PDB2":"TBS_LOB",
 "PDB2":"TBS_DATA",
 "PDB2":"SYSTEM",
 "PDB2":"SYSAUX",
 "PDB2":"DATA",
 "PDB1":"TBS_ORG_UTPLSQL",
 "PDB1":"TBS_LOB",
 "PDB1":"TBS_DATA"   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 2024-03-21 08:23:13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00019 to /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00020 to /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00021 to /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00022 to /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240317_data_gh2lsmoo_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240317_data_gh2lsmoo_1_1 tag=TAG20240317T010008
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/AUX/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240317_data_gj2lsmq0_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240317_data_gj2lsmq0_1_1 tag=TAG20240317T010008
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240317_data_gl2lsmqn_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240317_data_gl2lsmqn_1_1 tag=TAG20240317T010008
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 2024-03-21 08:23:51

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=14 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/datafile/o1_mf_system_lzqr3jxq_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqr3jwy_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=16 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqr3jy4_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=17 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_lzqr400z_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=18 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqr4014_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=19 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqr401g_.dbf
datafile 19 switched to datafile copy
input datafile copy RECID=20 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_lzqr31r4_.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=21 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqr31rf_.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=22 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqr31rq_.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=23 STAMP=1164183831 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqr31r2_.dbf

contents of Memory Script:
{
   set until time  "to_date('2024/03/20 08:22:30', 'YYYY/MM/DD HH24:MI:SS')";
   recover
   clone database
   skip forever tablespace  "USERS",
 "ORCL19PDB2":"USERS",
 "ORCL19PDB2":"UNDOTBS1",
 "ORCL19PDB2":"SYSTEM",
 "ORCL19PDB2":"SYSAUX",
 "PDB2":"USERS",
 "PDB2":"UNDOTBS1",
 "PDB2":"TBS_LOB",
 "PDB2":"TBS_DATA",
 "PDB2":"SYSTEM",
 "PDB2":"SYSAUX",
 "PDB2":"DATA",
 "PDB1":"TBS_ORG_UTPLSQL",
 "PDB1":"TBS_LOB",
 "PDB1":"TBS_DATA"    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 2024-03-21 08:23:51
using channel ORA_AUX_DISK_1
datafile 22 not processed because file is read-only

Executing: alter database datafile 7 offline drop
Executing: alter database datafile 29, 30, 33 offline drop
Executing: alter database datafile 24, 25, 26, 27, 28, 31, 32 offline drop
Executing: alter database datafile 15, 16, 17, 23 offline drop
starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=234
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240317_arch_gm2lsmqq_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240317_arch_gm2lsmqq_1_1 tag=TAG20240317T010114
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_234_lzqr4b89_.arc thread=1 sequence=234
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_234_lzqr4b89_.arc RECID=726 STAMP=1164183834
channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=235
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240317_arch_go2lsq94_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240317_arch_go2lsq94_1_1 tag=TAG20240317T020004
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_235_lzqr4ck8_.arc thread=1 sequence=235
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_235_lzqr4ck8_.arc RECID=727 STAMP=1164183835
channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=236
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=237
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=238
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240318_arch_gq2lvel5_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240318_arch_gq2lvel5_1_1 tag=TAG20240318T020005
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_236_lzqr4gz1_.arc thread=1 sequence=236
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_236_lzqr4gz1_.arc RECID=729 STAMP=1164183840
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_237_lzqr4gys_.arc thread=1 sequence=237
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_237_lzqr4gys_.arc RECID=730 STAMP=1164183840
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_238_lzqr4h00_.arc thread=1 sequence=238
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_238_lzqr4h00_.arc RECID=728 STAMP=1164183840
channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=239
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=240
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=241
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=242
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240319_arch_gs2m2315_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240319_arch_gs2m2315_1_1 tag=TAG20240319T020005
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_239_lzqr55dj_.arc thread=1 sequence=239
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_239_lzqr55dj_.arc RECID=733 STAMP=1164183862
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_240_lzqr55f6_.arc thread=1 sequence=240
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_240_lzqr55f6_.arc RECID=731 STAMP=1164183861
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_241_lzqr55d4_.arc thread=1 sequence=241
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_241_lzqr55d4_.arc RECID=734 STAMP=1164183862
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_242_lzqr55ds_.arc thread=1 sequence=242
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_242_lzqr55ds_.arc RECID=732 STAMP=1164183861
channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=243
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=244
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240320_arch_gu2m4nd7_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240320_arch_gu2m4nd7_1_1 tag=TAG20240320T020007
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_243_lzqr5nvb_.arc thread=1 sequence=243
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_243_lzqr5nvb_.arc RECID=736 STAMP=1164183878
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_244_lzqr5nvn_.arc thread=1 sequence=244
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_244_lzqr5nvn_.arc RECID=735 STAMP=1164183877
channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=245
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/orcl19/ORCL19_20240321_arch_h02m7bp5_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/orcl19/ORCL19_20240321_arch_h02m7bp5_1_1 tag=TAG20240321T020005
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_245_lzqr6354_.arc thread=1 sequence=245
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/fra/AUX/archivelog/2024_03_21/o1_mf_1_245_lzqr6354_.arc RECID=737 STAMP=1164183891
media recovery complete, elapsed time: 00:00:02
Finished recover at 2024-03-21 08:24:55
Oracle instance started

Total System Global Area    1577055136 bytes

Fixed Size                     9163680 bytes
Variable Size                385875968 bytes
Database Buffers            1174405120 bytes
Redo Buffers                   7610368 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''AUX'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set  db_name =  ''AUX'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile
Oracle instance started

Total System Global Area    1577055136 bytes

Fixed Size                     9163680 bytes
Variable Size                385875968 bytes
Database Buffers            1174405120 bytes
Redo Buffers                   7610368 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "AUX" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP     1  SIZE 200 M ,
  GROUP     2  SIZE 200 M ,
  GROUP     3  SIZE 200 M
 DATAFILE
  '/u01/app/oracle/oradata/AUX/datafile/o1_mf_system_lzqr3jxq_.dbf',
  '/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_lzqr400z_.dbf',
  '/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_lzqr31r4_.dbf'
 CHARACTER SET AL32UTF8


contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   set newname for clone tempfile  2 to new;
   set newname for clone tempfile  3 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqr3jwy_.dbf",
 "/u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqr3jy4_.dbf",
 "/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqr4014_.dbf",
 "/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqr401g_.dbf",
 "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqr31rf_.dbf",
 "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqr31rq_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u01/app/oracle/oradata/AUX/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 2 to /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqr3jwy_.dbf RECID=1 STAMP=1164183915
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqr3jy4_.dbf RECID=2 STAMP=1164183915
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqr4014_.dbf RECID=3 STAMP=1164183915
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqr401g_.dbf RECID=4 STAMP=1164183915
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqr31rf_.dbf RECID=5 STAMP=1164183915
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqr31rq_.dbf RECID=6 STAMP=1164183915

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=1164183915 file name=/u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqr3jwy_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=1164183915 file name=/u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqr3jy4_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=3 STAMP=1164183915 file name=/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqr4014_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=4 STAMP=1164183915 file name=/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqr401g_.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=5 STAMP=1164183915 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqr31rf_.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=6 STAMP=1164183915 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqr31rq_.dbf
Leaving database unopened, as requested
Finished Duplicate Db at 2024-03-21 08:25:15

So far, so good. In the output above you can see that my datafile 22 got restored, and that it was switched to after the restore. But then it was skipped during the recovery part ("datafile 22 not processed because file is read-only"). That does not sound like a big deal, since the file, and the tablespace it belongs to, is read-only. But at the end you see that neither the CREATE CONTROLFILE statement nor the CATALOG and SWITCH steps contain my datafile 22. Strange. Let's see what we have at this point.

SYS @ AUX:CDB$ROOT:>select file#, name from v$datafile;

     FILE# NAME
---------- ------------------------------------------------------------------------------------------------------------------------
         1 /u01/app/oracle/oradata/AUX/datafile/o1_mf_system_lzqr3jxq_.dbf
         3 /u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqr3jwy_.dbf
         4 /u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqr3jy4_.dbf
         5 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_lzqr400z_.dbf
         6 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqr4014_.dbf
         8 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqr401g_.dbf
        19 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_lzqr31r4_.dbf
        20 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqr31rf_.dbf
        21 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqr31rq_.dbf

9 rows selected.

SYS @ AUX:CDB$ROOT:>show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 _###_UNKNOWN_PDB_#_3           MOUNTED
SYS @ AUX:CDB$ROOT:>select status from v$instance;

STATUS
------------
MOUNTED

As you can see, no file 22 is there. So let's go ahead and open the database read-only, just to check things.

SYS @ AUX:CDB$ROOT:>alter database open read only;
alter database open read only
*
ERROR at line 1:
ORA-16008: indeterminate control file checkpoint

SYS @ AUX:CDB$ROOT:>!oerr ora 16008
16008, 00000, "indeterminate control file checkpoint"
// *Cause:  The control file for the database being opened for read-only access
//          was created using the CREATE CONTROLFILE statement. Therefore, a
//          control file checkpoint could not be calculated and the database
//          could not be opened for read-only access.
// *Action: First, open the database for read/write access. This creates
//          a valid control file checkpoint. Then, reopen the database for
//          read-only access.

OK, that makes sense: RMAN created a new controlfile. So I'll open the database read/write and check again.

SYS @ AUX:CDB$ROOT:>alter database open resetlogs;

Database altered.

SYS @ AUX:CDB$ROOT:>show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
         5 ORCL19PDB2                     MOUNTED
         
SYS @ AUX:CDB$ROOT:>select file#, name from v$datafile;

     FILE# NAME
---------- ------------------------------------------------------------------------------------------------------------------------
         1 /u01/app/oracle/oradata/AUX/datafile/o1_mf_system_lzqr3jxq_.dbf
         3 /u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqr3jwy_.dbf
         4 /u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqr3jy4_.dbf
         5 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_lzqr400z_.dbf
         6 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqr4014_.dbf
         7 /u01/app/oracle/product/db/19/dbs/MISSING00007
         8 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqr401g_.dbf
        19 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_lzqr31r4_.dbf
        20 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqr31rf_.dbf
        21 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqr31rq_.dbf
        22 /u01/app/oracle/product/db/19/dbs/MISSING00022
        29 /u01/app/oracle/product/db/19/dbs/MISSING00029
        30 /u01/app/oracle/product/db/19/dbs/MISSING00030
        33 /u01/app/oracle/product/db/19/dbs/MISSING00033

14 rows selected.

As you can see, we have much more information after opening the database. This is because we now have access to the data dictionary too, not only to the controlfile. So we can see that the database knows about file 22, but not where to find it. But I do know where it is: it was written in the RMAN output and can also be derived from the paths of the other datafiles.

SYS @ AUX:CDB$ROOT:>!ls -l /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile
total 3490876
-rw-r-----. 1 oracle dba  555753472 Mar 21 08:29 o1_mf_sysaux_lzqr31rf_.dbf
-rw-r-----. 1 oracle dba  713039872 Mar 21 08:29 o1_mf_system_lzqr31r4_.dbf
-rw-r-----. 1 oracle dba  104865792 Mar 21 08:29 o1_mf_temp_lzqrh54t_.tmp
-rw-r-----. 1 oracle dba  157294592 Mar 21 08:29 o1_mf_undotbs1_lzqr31rq_.dbf
-rw-r-----. 1 oracle dba 1073750016 Mar 21 08:23 o1_mf_users_lzqr31r2_.dbf

The file is there, so let's just use it. The database allows us to rename files; that should do the trick. The file itself is consistent, since it was read-only in the source database.

SYS @ AUX:CDB$ROOT:>alter database rename file '/u01/app/oracle/product/db/19/dbs/MISSING00022' to '/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqr31r2_.dbf';

Database altered.

SYS @ AUX:CDB$ROOT:>alter pluggable database pdb1 open ;

Pluggable database altered.

SYS @ AUX:CDB$ROOT:>alter session set container=pdb1;

Session altered.

SYS @ AUX:CDB$ROOT:>select * from hr.employees;
select * from hr.employees
                 *
ERROR at line 1:
ORA-00376: file 22 cannot be read at this time
ORA-01110: data file 22: '/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqr31r2_.dbf'


SYS @ AUX:CDB$ROOT:>alter database datafile 22 online;
alter database datafile 22 online
*
ERROR at line 1:
ORA-01190: control file or data file 22 is from before the last RESETLOGS
ORA-01110: data file 22: '/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqr31r2_.dbf'

Obviously I cannot use the restored file at all. But why is that? To figure this out, I dropped the auxiliary database and did another DUPLICATE, this time omitting the NOOPEN option, just to see what RMAN would do. For the sake of clarity, I'll only paste the relevant parts of this operation.

RMAN> duplicate database to aux pluggable database pdb1 skip tablespace pdb1:TBS_ORG_UTPLSQL until time 'sysdate-1' ;

Starting Duplicate Db at 2024-03-21 08:37:59

[...]

channel ORA_AUX_DISK_1: restoring datafile 00022 to /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_%u_.dbf

[...]

Starting recover at 2024-03-21 08:39:22
using channel ORA_AUX_DISK_1
datafile 22 not processed because file is read-only

[...]

sql statement: CREATE CONTROLFILE REUSE SET DATABASE "AUX" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES     1024
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP     1  SIZE 200 M ,
  GROUP     2  SIZE 200 M ,
  GROUP     3  SIZE 200 M
 DATAFILE
  '/u01/app/oracle/oradata/AUX/datafile/o1_mf_system_lzqs0qjg_.dbf',
  '/u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_lzqs16m6_.dbf',
  '/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_lzqrzyg9_.dbf'
 CHARACTER SET AL32UTF8

[...]

contents of Memory Script:
{
   sql clone 'alter system set "_system_trig_enabled"=FALSE';
   Alter clone database open resetlogs;
   sql clone 'alter system reset "_system_trig_enabled"';
}
executing Memory Script

sql statement: alter system set "_system_trig_enabled"=FALSE

database opened

sql statement: alter system reset "_system_trig_enabled"
Executing: drop pluggable database "ORCL19PDB2" including datafiles
Executing: drop pluggable database "PDB2" including datafiles

contents of Memory Script:
{
   sql clone "alter pluggable database all open";
}
executing Memory Script

sql statement: alter pluggable database all open

contents of Memory Script:
{
   catalog clone datafilecopy  "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqrzyg7_.dbf",
 "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_data_lzqrzygh_.dbf",
 "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_lob_lzqrzyg4_.dbf";
   switch clone datafile  22 to datafilecopy
 "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqrzyg7_.dbf";
   switch clone datafile  29 to datafilecopy
 "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_data_lzqrzygh_.dbf";
   switch clone datafile  30 to datafilecopy
 "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_lob_lzqrzyg4_.dbf";
   #online the readonly tablespace
   sql clone 'PDB1' "alter tablespace
 USERS online";
   #online the readonly tablespace
   sql clone 'PDB1' "alter tablespace
 TBS_LOB online";
   #online the readonly tablespace
   sql clone 'PDB1' "alter tablespace
 TBS_DATA online";
}
executing Memory Script

cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqrzyg7_.dbf RECID=7 STAMP=1164184865
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_data_lzqrzygh_.dbf RECID=8 STAMP=1164184865
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_lob_lzqrzyg4_.dbf RECID=9 STAMP=1164184865

datafile 22 switched to datafile copy
input datafile copy RECID=7 STAMP=1164184865 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqrzyg7_.dbf

datafile 29 switched to datafile copy
input datafile copy RECID=8 STAMP=1164184865 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_data_lzqrzygh_.dbf

datafile 30 switched to datafile copy
input datafile copy RECID=9 STAMP=1164184865 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_lob_lzqrzyg4_.dbf

sql statement: alter tablespace  USERS online

sql statement: alter tablespace  TBS_LOB online

sql statement: alter tablespace  TBS_DATA online
Dropping offline and skipped tablespaces
Executing: alter database default tablespace system
Executing: drop tablespace "USERS" including contents cascade constraints
Executing: drop tablespace "TBS_ORG_UTPLSQL" including contents cascade constraints
Finished Duplicate Db at 2024-03-21 08:41:25

As you can see, the restore and recover work the same way as with the NOOPEN option, no surprise here. But then RMAN opens the database, and only after opening it does it catalog the remaining read-only files as datafile copies and switch the datafiles over to those copies (see the "catalog clone datafilecopy" and "switch clone datafile ... to datafilecopy" statements in the Memory Script above). So the SWITCH command does more than just rename a file. And just to prove it works, here we go.

SYS @ AUX:CDB$ROOT:>select status from v$instance;

STATUS
------------
OPEN

SYS @ AUX:CDB$ROOT:>col name format a120
SYS @ AUX:CDB$ROOT:>set lines 200
SYS @ AUX:CDB$ROOT:>set pages 200
SYS @ AUX:CDB$ROOT:>select file#, name from v$datafile;

     FILE# NAME
---------- ------------------------------------------------------------------------------------------------------------------------
         1 /u01/app/oracle/oradata/AUX/datafile/o1_mf_system_lzqs0qjg_.dbf
         3 /u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqs0qjb_.dbf
         4 /u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqs0qjk_.dbf
         5 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_lzqs16m6_.dbf
         6 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqs16mb_.dbf
         8 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqs16mf_.dbf
        19 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_lzqrzyg9_.dbf
        20 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqrzygc_.dbf
        21 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqrzygf_.dbf
        22 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqrzyg7_.dbf
        29 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_data_lzqrzygh_.dbf
        30 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_tbs_lob_lzqrzyg4_.dbf

12 rows selected.

SYS @ AUX:CDB$ROOT:>alter session set container=pdb1;

Session altered.

SYS @ AUX:CDB$ROOT:>select count(*) from hr.employees;

  COUNT(*)
----------
       107

But can we use the SWITCH trick for the NOOPEN case as well? I dropped the auxiliary database again and did another duplicate with the NOOPEN option, as in the first case. After this duplication, the situation looks like this again.

SYS @ AUX:CDB$ROOT:>select file#, name from v$datafile;

     FILE# NAME
---------- ------------------------------------------------------------------------------------------------------------------------
         1 /u01/app/oracle/oradata/AUX/datafile/o1_mf_system_lzqskm2l_.dbf
         3 /u01/app/oracle/oradata/AUX/datafile/o1_mf_sysaux_lzqskm2f_.dbf
         4 /u01/app/oracle/oradata/AUX/datafile/o1_mf_undotbs1_lzqskm2p_.dbf
         5 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_system_lzqsl251_.dbf
         6 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_sysaux_lzqsl256_.dbf
         8 /u01/app/oracle/oradata/AUX/ED7F22D941443E9AE053A024100AD0FB/datafile/o1_mf_undotbs1_lzqsl259_.dbf
        19 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_lzqsk40g_.dbf
        20 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_lzqsk40j_.dbf
        21 /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_lzqsk40k_.dbf

9 rows selected.

SYS @ AUX:CDB$ROOT:>!ls -l /u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/
total 2441260
-rw-r-----. 1 oracle dba  555753472 Mar 21 08:49 o1_mf_sysaux_lzqsk40j_.dbf
-rw-r-----. 1 oracle dba  713039872 Mar 21 08:49 o1_mf_system_lzqsk40g_.dbf
-rw-r-----. 1 oracle dba  157294592 Mar 21 08:49 o1_mf_undotbs1_lzqsk40k_.dbf
-rw-r-----. 1 oracle dba 1073750016 Mar 21 08:47 o1_mf_users_lzqsk40b_.dbf

No file 22 in the views, but at least it got restored again. So let's open the auxiliary database and use the SWITCH command this time. Note that the command needs to be put into a RUN block.

[oracle@vm160 ~]$ export ORACLE_SID=aux
[oracle@vm160 ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Mar 21 09:59:32 2024
Version 19.18.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: AUX (DBID=2017441707, not open)

RMAN> alter database open resetlogs;

Statement processed

RMAN> run {
2> catalog datafilecopy  "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqsk40b_.dbf";
3> switch datafile 22 to datafilecopy "/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqsk40b_.dbf";
4> }

cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqsk40b_.dbf RECID=7 STAMP=1164189777

datafile 22 switched to datafile copy
input datafile copy RECID=7 STAMP=1164189777 file name=/u01/app/oracle/oradata/AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_lzqsk40b_.dbf

RMAN> sql "alter pluggable database pdb1 open";

sql statement: alter pluggable database pdb1 open

RMAN> sql 'PDB1' "alter tablespace users online";

sql statement: alter tablespace users online

Looks promising so far. But am I able to query some data from this file now?

[oracle@vm160 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 21 10:04:55 2024
Version 19.18.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.18.0.0.0

SYS @ AUX:CDB$ROOT:>alter session set container=pdb1;

Session altered.

SYS @ AUX:CDB$ROOT:>select count(*) from hr.employees;

  COUNT(*)
----------
       107

Yes, I can! Finally! So what did I learn? RMAN duplication skips read-only files during the recover operation, which is totally fine. It adds those files *after* opening the database. If I open the database myself, I have to find the restored files myself and use the RMAN SWITCH command to make them part of the restored database. An "alter database rename file" is not sufficient, because SWITCH also updates the datafile record in the controlfile from the copy's header, which a plain rename does not. Good to know.
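Putting it all together, the manual fix after a NOOPEN duplicate boils down to these RMAN steps (a sketch based on the session above; the file number and the placeholder path are examples and must be taken from your own RMAN restore output):

```
# 1. open the freshly duplicated database
alter database open resetlogs;

# 2. catalog the restored read-only datafile and switch to it
#    (the path is a placeholder; use the file RMAN actually restored)
run {
  catalog datafilecopy '/path/to/restored_users_datafile.dbf';
  switch datafile 22 to datafilecopy '/path/to/restored_users_datafile.dbf';
}

# 3. open the PDB and bring the tablespace back online
sql "alter pluggable database pdb1 open";
sql 'PDB1' "alter tablespace users online";
```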

Restore a PDB into another CDB

In my previous post I already wrote about a special restore scenario with the Multitenant Architecture. Today the story continues with another scenario that sounds quite simple in the first place. I will be using Oracle Database Version 19.18 for this topic.

So let's get started. The goal is to restore a given PDB into another CDB at a specific point in time. Imagine some accidental data change happened which should be rolled back. Since only a small and rarely used table is affected, flashback or a complete point-in-time restore of the whole CDB is not an option. Instead, the PDB is restored into another CDB at the given point in time to analyze the issue further.

Instantiating a PDB in another CDB sounds like cloning at first glance. But cloning always uses the running PDB as a source; there is no option to specify a point in time. So let's try RMAN PDB duplication.

[oracle@odax5ha0 ~]$ rman

Recovery Manager: Release 19.0.0.0.0 - Production on Tue Mar 21 10:08:34 2023
Version 19.16.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/********@odax5ha0:1521/mpmcdb2x5.support.robotron.de

connected to target database: MPMCDB2 (DBID=889235828)

RMAN> connect auxiliary sys/********@odax5ha0:1521/mpmcdb1x5.support.robotron.de

connected to auxiliary database: MPMCDB1 (DBID=1547213674)

RMAN> duplicate pluggable database pdba as pdbclone  until time "to_date('21.03.2023 09:10:00')";

Starting Duplicate PDB at 2023-03-21 10:10:28
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate PDB command at 03/21/2023 10:10:28
RMAN-05501: aborting duplication of target database
RMAN-05649: duplicate pluggable database does not allow the use of UNTIL SCN or TO RESTORE POINT clause

Oh yes, what a great start. And RMAN tells the truth: the docs confirm exactly that restriction.

As in the previous post, I need a temporary auxiliary CDB instance to restore my PDB. Good news: this time the steps will be a little easier. I start with a Pfile derived from the target CDB.

SQL> create pfile='/tmp/initaux.ora' from spfile;

File created.

I remove the "control_files" parameter and set "db_unique_name" to something unique. With that I can start up my auxiliary instance and create an SPfile for it.
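The relevant edits to /tmp/initaux.ora look roughly like this (a sketch; the value for db_unique_name is an example, the rest of the Pfile stays as generated):

```
# removed: *.control_files='...'   (RMAN sets it when restoring the controlfile)
*.db_unique_name='CDBDUP'          # new, unique name for the auxiliary instance
```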

[oracle@odax5ha0 ~]$ export ORACLE_SID=CDBDUP

[oracle@odax5ha0 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Mar 21 10:14:22 2023
Version 19.16.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=/tmp/initaux.ora
ORACLE instance started.

Total System Global Area 4294963760 bytes
Fixed Size                  9142832 bytes
Variable Size             805306368 bytes
Database Buffers         3456106496 bytes
Redo Buffers               24408064 bytes

SQL> create spfile from pfile='/tmp/initaux.ora';

File created.

SQL> startup nomount force
ORACLE instance started.

Total System Global Area 4287959600 bytes
Fixed Size                  9142832 bytes
Variable Size             838860800 bytes
Database Buffers         3422552064 bytes
Redo Buffers               17403904 bytes

SQL>

Now I can go ahead and try the duplication again, this time at CDB level, because this allows me to specify a point in time.

[oracle@odax5ha0 ~]$ rman

Recovery Manager: Release 19.0.0.0.0 - Production on Tue Mar 21 10:28:43 2023
Version 19.16.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target sys/********@odax5ha0:1521/mpmcdb1x5.support.robotron.de

connected to target database: MPMCDB1 (DBID=1547213674)

RMAN> connect auxiliary /

connected to auxiliary database (not started)

RMAN> duplicate database to CDBDUP pluggable database pdba until time "to_date('21.03.2023 09:10:00', 'dd.mm.yyyy hh24:mi:ss')";

Starting Duplicate Db at 2023-03-21 10:29:44
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=26 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 03/21/2023 10:29:48
RMAN-05501: aborting duplication of target database
RMAN-06617: UNTIL TIME (2023-03-21 09:10:00) is ahead of last NEXT TIME in archived logs (2023-03-21 09:00:09)

Oh look, another error. This is because there was not much DML happening and no log switch occurred between the given point in time and now. But this is easy to work around.

RMAN> alter system archive log current;

Statement processed

RMAN> duplicate database to CDBDUP pluggable database pdba until time "to_date('21.03.2023 09:10:00', 'dd.mm.yyyy hh24:mi:ss')";

Starting Duplicate Db at 2023-03-21 10:30:25
using channel ORA_AUX_DISK_1
duplicating Online logs to Oracle Managed File (OMF) location
duplicating Datafiles to Oracle Managed File (OMF) location

[...]

sql statement: alter pluggable database all open
Dropping offline and skipped tablespaces
Executing: alter database default tablespace system
Executing: drop tablespace "USERS" including contents cascade constraints
Finished Duplicate Db at 2023-03-21 10:34:50

RMAN>

The rest of the work is to unplug the PDB from the temporary auxiliary CDB, drop the auxiliary CDB, and finally plug the PDB into the target CDB.

[oracle@odax5ha0 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Mar 21 10:41:30 2023
Version 19.16.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.16.0.0.0

SQL> select name from v$database;

NAME
---------
CDBDUP

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBA                           READ WRITE NO
SQL> alter pluggable database pdba close immediate;

Pluggable database altered.

SQL> alter pluggable database pdba unplug into '/tmp/pdba.xml';

Pluggable database altered.

SQL> drop pluggable database pdba keep datafiles;

Pluggable database dropped.

SQL> startup mount restrict exclusive force
ORACLE instance started.

Total System Global Area 4287959600 bytes
Fixed Size                  9142832 bytes
Variable Size             838860800 bytes
Database Buffers         3422552064 bytes
Redo Buffers               17403904 bytes
Database mounted.
SQL> drop database;

Database dropped.

Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.16.0.0.0

[oracle@odax5ha0 ~]$ export ORACLE_SID=MPMCDB2
[oracle@odax5ha0 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Mar 21 10:44:12 2023
Version 19.16.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.16.0.0.0

SQL> select name from v$database;

NAME
---------
MPMCDB2

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBB                           READ WRITE NO
SQL> create pluggable database PDBA using '/tmp/pdba.xml' copy;

Pluggable database created.

SQL> alter pluggable database pdba open;

Pluggable database altered.

SQL> alter pluggable database pdba save state;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBB                           READ WRITE NO
         4 PDBA                           READ WRITE NO

SQL>

And that's just it. Easier than the other scenario, but not quite straightforward either. I think this might help in some situations.

Restore a dropped Pluggable Database

Reading the headline, today's topic does not sound very complicated or special at all. But be warned, it definitely is. During a migration-to-multitenant project at a customer's site we walked through quite a couple of restore scenarios. The most obvious scenario is dropping a PDB by accident. Of course we have all the backups in place that we will need to restore and recover the lost PDB.

Let’s start with what I did to the PDB:

[oracle@vm160 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 23 13:54:28 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.18.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 ORCL19PDB3                     READ WRITE NO
         5 ORCL19PDB2                     READ WRITE NO
SQL> alter session set container=pdb1;

Session altered.

SQL> select count(*) from hr.employees;

  COUNT(*)
----------
       108

SQL> alter session set container=cdb$root;

Session altered.

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> drop pluggable database pdb1 including datafiles;

Pluggable database dropped.

That was the easiest part of the story. You see, there were 3 PDBs, and the dropped PDB had a table "EMPLOYEES" with 108 records. Now the simple goal is to restore that PDB to right before the drop. To get that timestamp, we simply look it up in the alert.log:

[oracle@vm160 ~]$ tail /u01/app/oracle/diag/rdbms/orcl19/orcl19/trace/alert_orcl19.log
2023-02-23T13:55:01.355396+01:00
alter pluggable database pdb1 close immediate
2023-02-23T13:55:01.357622+01:00
PDB1(3):Pluggable database PDB1 closing
PDB1(3):JIT: pid 9707 requesting stop
PDB1(3):Closing sequence subsystem (3457133901505).
PDB1(3):Buffer Cache flush started: 3
2023-02-23T13:55:02.353510+01:00
PDB1(3):Buffer Cache flush finished: 3
Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 close immediate
2023-02-23T13:55:14.816955+01:00
drop pluggable database pdb1 including datafiles
2023-02-23T13:55:17.338832+01:00
Deleted Oracle managed file /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_ksfwjspy_.dbf
Deleted Oracle managed file /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_temp_kq6nz8kv_.dbf
Deleted Oracle managed file /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_kq6nz8kv_.dbf
Deleted Oracle managed file /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_kq6nz8kt_.dbf
Deleted Oracle managed file /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_kq6nz8kh_.dbf
2023-02-23T13:55:17.616908+01:00
Stopped service pdb1
Completed: drop pluggable database pdb1 including datafiles

Now that we have identified the proper timestamp, which is roughly 13:55, let's start with RMAN:

[oracle@vm160 ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 23 13:57:26 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL19 (DBID=422142402)

RMAN> run {
2> set until time "to_date('23.02.2023 13:55:00', 'dd.mm.yyyy hh24:mi:ss')";
3> restore pluggable database pdb1;
4> recover pluggable database pdb1;
5> }

executing command: SET until clause

Starting restore at 2023-02-23 13:58:23
using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 02/23/2023 13:58:24
ORA-01403: no data found

RMAN> restore pluggable database pdb1;

Starting restore at 2023-02-23 13:58:31
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=277 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 02/23/2023 13:58:31
RMAN-06813: could not translate pluggable database PDB1

Ok, we can neither do a point-in-time restore nor a complete and current restore of the dropped PDB. This is because the controlfile is our single point of truth, and it simply forgot everything about the PDB when I dropped it.
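To see for yourself that the controlfile has forgotten the PDB, a quick check on the CDB helps (a sketch; con_id 3 is the container number of PDB1 in this example and will differ elsewhere):

```sql
-- the dropped PDB no longer shows up in the controlfile views
select con_id, name, open_mode from v$pdbs;

-- and none of its datafiles are listed either (con_id 3 was PDB1 here)
select file#, name from v$datafile where con_id = 3;
```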

What we need is an auxiliary instance to restore and recover the lost PDB. MOS Note "How to Restore – Dropped Pluggable database (PDB) in Multitenant (Doc ID 2034953.1)" uses DUPLICATE for that, but this requires all necessary backups to be in one location. So I chose a different approach. My auxiliary instance will be based on the original one, so I create a Pfile from it, which I will change according to my needs:

  • choose a new db_unique_name
  • remove “control_files” parameter
  • leave create-file-destinations unchanged
  • change memory parameters to minimum values
SQL> create pfile='/tmp/initaux.ora' from spfile;

File created.
*.audit_file_dest='/u01/app/oracle/admin/orcl19/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.container_data='ALL'
*.db_block_size=8192
*.db_create_file_dest='/u01/app/oracle/oradata'
*.db_domain='support.robotron.de'
*.db_name='orcl19'
*.db_unique_name='orcl19aux'
*.db_recovery_file_dest='/u01/app/oracle/fra'
*.db_recovery_file_dest_size=10494m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orcl19XDB)'
*.enable_pluggable_database=true
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=364m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sec_return_server_release_banner=TRUE
*.sga_target=2048m
*.undo_tablespace='UNDOTBS1'

Keeping the values for file destinations will restore the datafiles in their original paths in case of user-defined filenames. In case of OMF we will get new subdirectories according to the new db_unique_name. Either way the files will be created in the filesystem the original database is using.

With this Pfile I can now start my auxiliary instance and create an SPfile. Having an SPfile is handy because RMAN will update the "control_files" parameter accordingly when restoring the controlfile. Of course, I have to set ORACLE_SID to some value other than the original CDB's.

[oracle@vm160 ~]$ export ORACLE_SID=aux
[oracle@vm160 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 23 14:22:46 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=/tmp/initaux.ora
ORACLE instance started.

Total System Global Area 2147482320 bytes
Fixed Size                  9165520 bytes
Variable Size             486539264 bytes
Database Buffers         1644167168 bytes
Redo Buffers                7610368 bytes

SQL> create spfile from pfile='/tmp/initaux.ora';

File created.

SQL> startup nomount force
ORACLE instance started.

Total System Global Area 2147482320 bytes
Fixed Size                  9165520 bytes
Variable Size             486539264 bytes
Database Buffers         1644167168 bytes
Redo Buffers                7610368 bytes

Having that, I can now restore and recover my PDB using RMAN. Note that since we only restore the root and the one PDB, we have to skip the tablespaces of all the other PDBs and of the PDB$SEED during recovery.

[oracle@vm160 ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 23 14:27:46 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL19 (not mounted)

RMAN> set DBID 422142402

executing command: SET DBID

RMAN> run {
2> set until time "to_date('23.02.2023 13:55:00', 'dd.mm.yyyy hh24:mi:ss')";
3> set newname for database to new;
4> restore controlfile from '/u01/app/oracle/fra/ORCL19/autobackup/2023_02_23/o1_mf_s_1129557030_kzgr8qj7_.bkp';
5> alter database mount;
6> restore database root pluggable database pdb1;
7> switch datafile all;
8> switch tempfile all;
9> recover database skip forever tablespace
10> "ORCL19PDB3":"SYSTEM","ORCL19PDB3":"SYSAUX","ORCL19PDB3":"UNDOTBS1","ORCL19PDB2":"SYSTEM","ORCL19PDB2":"SYSAUX","ORCL19PDB2":"UNDOTBS1","PDB$SEED":"SYSTEM","PDB$SEED":"SYSAUX","PDB$SEED":"UNDOTBS1";
11> }

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 2023-02-23 14:51:50
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=256 device type=DISK

[...]

Starting recover at 2023-02-23 15:08:47
using channel ORA_DISK_1

Executing: alter database datafile 5, 6, 8 offline drop
Executing: alter database datafile 12, 13, 14 offline drop
Executing: alter database datafile 15, 16, 17 offline drop
starting media recovery

archived log for thread 1 with sequence 164 is already on disk as file /u01/app/oracle/fra/ORCL19/archivelog/2023_02_23/o1_mf_1_164_kzgr5rdp_.arc
archived log for thread 1 with sequence 165 is already on disk as file /u01/app/oracle/fra/ORCL19/onlinelog/o1_mf_3_kq6lk1ty_.log
archived log file name=/u01/app/oracle/fra/ORCL19/archivelog/2023_02_23/o1_mf_1_164_kzgr5rdp_.arc thread=1 sequence=164
archived log file name=/u01/app/oracle/fra/ORCL19/onlinelog/o1_mf_3_kq6lk1ty_.log thread=1 sequence=165
media recovery complete, elapsed time: 00:00:06
Finished recover at 2023-02-23 15:08:58

In case you have several PDBs to skip, or if you do not want to type all the tablespaces by hand, you can use this handy SQL on the original database to generate the list of tablespaces to skip:

SQL> select listagg('"'||p.name||'":"'||t.tablespace_name||'"', ',') within group (order by p.name)
  2  from v$pdbs p, cdb_tablespaces t
  3  where p.con_id=t.con_id
  4  and t.contents <> 'TEMPORARY'
SQL> /

LISTAGG('"'||P.NAME||'":"'||T.TABLESPACE_NAME||'"',',')
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
"ORCL19PDB3":"SYSTEM","ORCL19PDB3":"SYSAUX","ORCL19PDB3":"UNDOTBS1","ORCL19PDB2":"SYSTEM","ORCL19PDB2":"SYSAUX","ORCL19PDB2":"UNDOTBS1","PDB$SEED":"SYSTEM","PDB$SEED":"SYSAUX","PDB$SEED":"UNDOTBS1"

At this point we could open the auxiliary instance using the "open resetlogs" clause. But wait: we restored the controlfile from the original database, so we have the same filenames for the redo members. And we are on the same host. Clearing those redo files might not be the best option; it would simply overwrite the redo files of our original database, and we would end up doing another incomplete recovery... My solution is to create a new controlfile with modified redo members. As a basis, I back up the controlfile to trace, change the db_name right away, and restart my auxiliary instance to nomount.

SQL> alter database backup controlfile to trace as '/tmp/create_ctrl_aux.sql' resetlogs;

Database altered.

SQL> alter system set db_name="AUX" scope=spfile;

System altered.

SQL> startup nomount force
ORACLE instance started.

Total System Global Area 2147482320 bytes
Fixed Size                  9165520 bytes
Variable Size             486539264 bytes
Database Buffers         1644167168 bytes
Redo Buffers                7610368 bytes

Then I modified the “create controlfile” statement:

  • set the new database name
  • remove all the files that I didn’t restore
  • remove the redo file names (OMF) – or – change the redo file names

The final statement is this:

CREATE CONTROLFILE REUSE set DATABASE "AUX" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 1024
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 SIZE 200M BLOCKSIZE 512,
  GROUP 2  SIZE 200M BLOCKSIZE 512,
  GROUP 3 SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/u01/app/oracle/oradata/ORCL19AUX/datafile/o1_mf_system_kzgvw29p_.dbf',
  '/u01/app/oracle/oradata/ORCL19AUX/datafile/o1_mf_sysaux_kzgvw291_.dbf',
  '/u01/app/oracle/oradata/ORCL19AUX/datafile/o1_mf_undotbs1_kzgvw2b9_.dbf',
  '/u01/app/oracle/oradata/ORCL19AUX/datafile/o1_mf_users_kzgvw2f8_.dbf',
  '/u01/app/oracle/oradata/ORCL19AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_kzgw69jf_.dbf',
  '/u01/app/oracle/oradata/ORCL19AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_kzgw69jz_.dbf',
  '/u01/app/oracle/oradata/ORCL19AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_kzgw69kb_.dbf',
  '/u01/app/oracle/oradata/ORCL19AUX/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_kzgw69h9_.dbf'
CHARACTER SET AL32UTF8
;

Finally, I create the new controlfile, open the auxiliary database and unplug the PDB, keeping its datafiles.

SQL> CREATE CONTROLFILE REUSE set DATABASE "AUX" RESETLOGS  ARCHIVELOG
[...]

Control file created.

SQL> alter database open resetlogs;

Database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 PDB1                           MOUNTED
         4 ORCL19PDB3                     MOUNTED
         5 ORCL19PDB2                     MOUNTED
SQL> alter pluggable database PDB1 unplug into '/tmp/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database PDB1 keep datafiles;

Pluggable database dropped.

At this point we can transfer the restored PDB back to its original container. You can decide whether you want to use the restored datafiles right away or create another copy of them. Either way it is a simple one-liner.

SQL> create pluggable database PDB1 using '/tmp/pdb1.xml' copy;

Pluggable database created.

Or

SQL> create pluggable database PDB1 using '/tmp/pdb1.xml' nocopy tempfile reuse;

Pluggable database created.

The last step is to open the PDB again and check its contents.

SQL> alter  pluggable database PDB1 open;

Pluggable database altered.

SQL> alter  pluggable database PDB1 save state;

Pluggable database altered.

SQL> alter session set container=pdb1;

Session altered.

SQL> select count(*) from hr.employees;

  COUNT(*)
----------
       108

Et voilà, mission accomplished. The very last step now is removing the auxiliary instance.

SQL> startup mount restrict exclusive force

SQL> drop database;

Whoever said Multitenant will make the DBA’s life easier probably never did such a restore 🙂 … But to be fair, there are a lot of scenarios where Multitenant is very handy, I must admit.

What data can a common user see in Oracle Multitenant

With the introduction of the Oracle Multitenant architecture, some new dictionary views were introduced to reflect the new layer. Besides the well-known views prefixed by USER_, ALL_ and DBA_, there is now a fourth set of views prefixed by CDB_. The structure of the views is very similar, but the contents depend on the user’s privileges.

  • USER_: Everything the logged-on user owns
  • ALL_: Everything the logged-on user has privileges on
  • DBA_: Everything belonging to the current (pluggable) database
  • CDB_: Everything belonging to the CDB when queried from CDB$ROOT; similar to the DBA_ views when queried from within a pluggable database
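A quick way to see the scope difference is to run the same count against the DBA_ and the CDB_ variant of a view from CDB$ROOT. The exact numbers depend on your system; this is just an illustration:

```sql
-- connected to CDB$ROOT as a sufficiently privileged user
select count(*) from dba_users;                          -- users in the root only
select con_id, count(*) from cdb_users group by con_id;  -- users per visible container
```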

Of course there are V$ views containing container data too, which makes for a large set of containerized views.

SQL> SELECT count(*) FROM cdb_views WHERE container_data = 'Y';

  COUNT(*)
----------
      3118

Now let’s verify what data can be seen in these views at the different levels, CDB$ROOT or PDB. I created a common user with the appropriate privileges to check that.

SQL> create user c##marco identified by "R0b0tr0N##";

User created.

SQL> grant select any dictionary, create session, set container to c##marco container=all;

Grant succeeded.

Having that, it is time to check what the newly created user can read from the dictionary views at the CDB$ROOT and the PDB level.

SQL> conn c##marco/********
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> select con_id, name from v$datafile
  2  order by 1;

    CON_ID NAME
---------- --------------------------------------------------------------------------------------------------------------
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_undotbs1_kq6ld1qb_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_sysaux_kq6lbyd0_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_system_kq6l8vf4_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_users_kq6ld4xm_.dbf

4 rows selected.

SQL> alter session set container=pdb1;

Session altered.

SQL> select con_id, name from v$datafile;

    CON_ID NAME
---------- ------------------------------------------------------------------------------------------------------------------------
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_kq6nz8kh_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_kq6nz8kt_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_kq6nz8kv_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_ksfwjspy_.dbf

Obviously the user can only see data belonging to the current container. From within CDB$ROOT only content from CDB$ROOT is visible, nothing from any PDB. At the PDB level the user can only see the data for that specific PDB. The latter makes total sense, and so does the former at second glance: the PDBs might contain sensitive data, so their contents should not be exposed to everyone by default. Of course this can be changed. As mentioned, only data for the current container can be seen by default. The CONTAINER_DATA attribute of a user is used to control that.

SQL> conn / as sysdba
Connected.
SQL> alter user c##marco set container_data=all container=current;

User altered.

SQL> SELECT username,
  2         owner,
  3         object_name,
  4         all_containers,
  5         container_name
  6  FROM   cdb_container_data
  7  WHERE  username = 'C##MARCO'
  8  ORDER BY 1,2,3;

USERNAME             OWNER                OBJECT_NAME          A CONTAINER_NAME
-------------------- -------------------- -------------------- - --------------------
C##MARCO                                                       Y

Now the common user C##MARCO is able to see the CDB$ROOT data as well as the data from all the PDBs. With “CONTAINER_DATA=ALL” the setting applies to all existing PDBs and will also apply to all future PDBs.

SQL> conn c##marco/********
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> select con_id, name from v$datafile
  2 order by 1;

    CON_ID NAME
---------- --------------------------------------------------------------------------------------------------------------
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_undotbs1_kq6ld1qb_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_sysaux_kq6lbyd0_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_system_kq6l8vf4_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_users_kq6ld4xm_.dbf
         2 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_system_kq6m8z4c_.dbf
         2 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_sysaux_kq6m8z7g_.dbf
         2 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_undotbs1_kq6m8zgv_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_kq6nz8kv_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_ksfwjspy_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_kq6nz8kh_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_kq6nz8kt_.dbf
         4 /u01/app/oracle/oradata/ORCL19/EF6055BDD31101A7E053A024100A1966/datafile/o1_mf_sysaux_ks5op8o9_.dbf
         4 /u01/app/oracle/oradata/ORCL19/EF6055BDD31101A7E053A024100A1966/datafile/o1_mf_undotbs1_ks5op8o9_.dbf
         4 /u01/app/oracle/oradata/ORCL19/EF6055BDD31101A7E053A024100A1966/datafile/o1_mf_system_ks5op8n8_.dbf
         5 /u01/app/oracle/oradata/ORCL19/EF608A6519330511E053A024100A91E2/datafile/o1_mf_system_ks5pktn8_.dbf
         5 /u01/app/oracle/oradata/ORCL19/EF608A6519330511E053A024100A91E2/datafile/o1_mf_sysaux_ks5pktn9_.dbf
         5 /u01/app/oracle/oradata/ORCL19/EF608A6519330511E053A024100A91E2/datafile/o1_mf_undotbs1_ks5pktnb_.dbf

Since this setting is somewhat global, there is a method to restrict the visibility at the PDB level. To demonstrate that, I first remove the setting and replace it with a more granular one.

SQL> alter user c##marco set container_data=default container=current;

User altered.

SQL> SELECT username,
  2         owner,
  3         object_name,
  4         all_containers,
  5         container_name
  6  FROM   cdb_container_data
  7  WHERE  username = 'C##MARCO'
  8  ORDER BY 1,2,3;

no rows selected

SQL> alter user c##marco set container_data=(CDB$ROOT,PDB1) container=current;

User altered.

SQL> SELECT username,
  2         owner,
  3         object_name,
  4         all_containers,
  5         container_name
  6  FROM   cdb_container_data
  7  WHERE  username = 'C##MARCO'
  8  ORDER BY 1,2,3;

USERNAME             OWNER                OBJECT_NAME          A CONTAINER_NAME
-------------------- -------------------- -------------------- - --------------------
C##MARCO                                                       N PDB1
C##MARCO                                                       N CDB$ROOT

The value for “ALL_CONTAINERS” is now ‘N’, and every container that the user has access to is listed as a separate record. Only data from CDB$ROOT and PDB1 is now returned by the test query. Data from other PDBs is excluded, and data from newly created PDBs will not be visible either.

SQL> conn c##marco/********
Connected.
SQL> select con_id, name from v$datafile;

    CON_ID NAME
---------- --------------------------------------------------------------------------------------------------------------
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_system_kq6l8vf4_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_sysaux_kq6lbyd0_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_undotbs1_kq6ld1qb_.dbf
         1 /u01/app/oracle/oradata/ORCL19/datafile/o1_mf_users_kq6ld4xm_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_system_kq6nz8kh_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_sysaux_kq6nz8kt_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_undotbs1_kq6nz8kv_.dbf
         3 /u01/app/oracle/oradata/ORCL19/ED7F830814134AADE053A024100A5D08/datafile/o1_mf_users_ksfwjspy_.dbf

So this is a way to restrict the visibility of data from different containers for common users. This might be handy for monitoring users and sensitive databases.
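The CONTAINER_DATA clause can even be scoped to individual objects. A sketch for a hypothetical monitoring user that should see session information across containers, but nothing else:

```sql
-- object-level setting; the user and the object are chosen for illustration
alter user c##marco set container_data=(CDB$ROOT, PDB1)
  for sys.v_$session container=current;
```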

On top of that, the 19.10 Release Update introduced a parameter CONTAINER_DATA. The docs state that, for extended data-linked Oracle-supplied data dictionary objects, data from CDB$ROOT as well as the PDBs is returned with the default setting of ‘ALL’. Changing the value to ‘CURRENT’ or ‘CURRENT_DICTIONARY’ restricts the data returned by queries to the current PDB. I tried to verify that with the above example, but did not see any difference.

SQL> conn c##marco/********
Connected.
SQL> show parameter container_data

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
container_data                       string      ALL

SQL> select distinct con_id from cdb_objects;

    CON_ID
----------
         1
         3

SQL> alter session set container_data=current;

Session altered.

SQL> select distinct con_id from cdb_objects;

    CON_ID
----------
         1
         3

SQL> alter session set container=pdb1;

Session altered.

SQL> show parameter container_data

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
container_data                       string      CURRENT

SQL> select distinct con_id from cdb_objects;

    CON_ID
----------
         3

SQL> alter session set container_data=all;

Session altered.

SQL> select distinct con_id from cdb_objects;

    CON_ID
----------
         3

I have no clue what this parameter is really meant for. There is a MOS note “Query to ALL_SYNONYMS takes very long time after 19c upgrade (Doc ID 2915211.1)” that mentions some performance issues which can be worked around by setting this parameter. Maybe that is the main use case. If anybody knows how and when this parameter should be used, I am happy to learn about it.

DBA users at PDB level

Today’s blog post is about the Oracle Multitenant architecture and security. Everything I show was tested with Oracle Database 19.16, but should be valid for other versions too. With the introduction of pluggable databases we got a new kind of user, the PDBADMIN. The idea behind this user is to have a privileged user for administering just this one PDB. So when creating a PDB from scratch (i.e. from PDB$SEED), the specification of this user is mandatory. The documentation is quite clear about that.

As we can see, we can choose the username; it is not bound to be PDBADMIN. And there is an option to specify roles for that user.

But how does this work? Reading the docs answers that question quite well.

ADMIN USER

Use this clause to create an administrative user who can be granted the privileges required to perform administrative tasks on the PDB. For admin_user_name, specify name of the user to be created. Use the IDENTIFIED BY clause to specify the password for admin_user_name. Oracle Database creates a local user in the PDB and grants the PDB_DBA local role to that user.

pdb_dba_roles

This clause lets you grant one or more roles to the PDB_DBA role. Use this clause to grant roles that have the privileges required by the administrative user of the PDB. For role, specify a predefined role. For a list of predefined roles, refer to Oracle Database Security Guide.

You can also use the GRANT statement to grant roles to the PDB_DBA role after the PDB has been created. Until you have granted the appropriate privileges to the PDB_DBA role, the SYS and SYSTEM users can perform administrative tasks on a PDB.

Oracle Database Documentation 19c

That means the PDBADMIN user is always granted the PDB_DBA role. For the sake of completeness, let me point out that there are cases where we don’t have a PDBADMIN account. This is the case when the PDB was created by cloning or by plugging a non-CDB database into a CDB. In such a case, I recommend creating a PDBADMIN user immediately and granting it the PDB_DBA role.
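A minimal sketch of that recommendation, assuming the plugged-in PDB is called PDB1 and using a placeholder password:

```sql
-- run as SYS; container name and password are just examples
alter session set container=pdb1;
create user pdbadmin identified by "********";
grant pdb_dba, create session to pdbadmin;
```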

But what privileges do we end up with when we create a PDB and do not specify any additional roles? Or, in other words, which privileges make up the PDB_DBA role? Let’s give it a try.

SQL> create pluggable database ORCL12PDB3 admin user pdbadmin identified by "********";

Pluggable database created.

SQL> alter pluggable database ORCL12PDB3 open;

Pluggable database altered.

SQL> connect pdbadmin/********@vm160/orcl12pdb3
Connected.
SQL> select * from session_roles;

ROLE
--------------------------------------------------------------------------------
PDB_DBA

SQL> select * from session_privs;

PRIVILEGE
----------------------------------------
SET CONTAINER
CREATE PLUGGABLE DATABASE
CREATE SESSION

Basically the PDBADMIN can do nothing but connect to the PDB. At this point, I can enable the PDBADMIN to do more, by granting privileges or roles either directly to the user or to the PDB_DBA role.

So in case we know beforehand what the PDBADMIN shall be able to do, it is better to specify the necessary roles right at PDB creation time. Unfortunately we cannot add system privileges, only roles. If some system privileges shall be granted, we must grant those afterwards.

SQL> create pluggable database orcl12pdb2 admin user pdbadmin identified by "********" roles=(CONNECT, RESOURCE);

Pluggable database created.

SQL> alter  pluggable database orcl12pdb2 open;

Pluggable database altered.

SQL> connect pdbadmin/********@vm160/orcl12pdb2
Connected.
SQL> select * from session_roles;

ROLE
--------------------------------------------------------------------------------
CONNECT
RESOURCE
PDB_DBA
SODA_APP

SQL> select * from session_privs;

PRIVILEGE
----------------------------------------
CREATE SESSION
CREATE TABLE
CREATE CLUSTER
CREATE SEQUENCE
CREATE PROCEDURE
CREATE TRIGGER
CREATE TYPE
CREATE OPERATOR
CREATE INDEXTYPE
CREATE PLUGGABLE DATABASE
SET CONTAINER

11 rows selected.
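Granting those missing system privileges afterwards is a plain GRANT to the PDB_DBA role; a sketch with arbitrarily chosen privileges:

```sql
-- connected as SYS inside the PDB; the privileges shown are just examples
alter session set container=orcl12pdb2;
grant alter session, select any dictionary to pdb_dba;
```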

In case we want the PDBADMIN to be a real DBA, we can of course specify the DBA role instead. That should enable the PDBADMIN to perform all administrative tasks.

SQL> create pluggable database orcl12pdb1 admin user pdbadmin identified by "********" roles=(DBA);

Pluggable database created.

SQL> alter pluggable database orcl12pdb1 open;

Pluggable database altered.

SQL> connect pdbadmin/********@vm160/orcl12pdb1
Connected.
SQL> select * from session_roles;

ROLE
--------------------------------------------------------------------------------
DBA
PDB_DBA
SELECT_CATALOG_ROLE
EXECUTE_CATALOG_ROLE
CAPTURE_ADMIN
EXP_FULL_DATABASE
IMP_FULL_DATABASE
DATAPUMP_EXP_FULL_DATABASE
DATAPUMP_IMP_FULL_DATABASE
GATHER_SYSTEM_STATISTICS
OPTIMIZER_PROCESSING_RATE
EM_EXPRESS_BASIC
EM_EXPRESS_ALL
SCHEDULER_ADMIN
HS_ADMIN_SELECT_ROLE
HS_ADMIN_EXECUTE_ROLE
XDBADMIN
XDB_SET_INVOKER
WM_ADMIN_ROLE
JAVA_ADMIN
JAVA_DEPLOY
OLAP_XS_ADMIN
OLAP_DBA

23 rows selected.

SQL> select count(*) from session_privs;

  COUNT(*)
----------
       238

The only thing here is that the PDBADMIN behaves more like SYSTEM in a non-CDB environment rather than SYS. The user just isn’t SYS and therefore cannot grant privileges on objects owned by SYS to other users, even though the user has the “GRANT ANY” privileges.

SQL> show user
USER is "PDBADMIN"
SQL> select * from session_privs where privilege like 'GRANT%';

PRIVILEGE
----------------------------------------
GRANT ANY ROLE
GRANT ANY PRIVILEGE
GRANT ANY OBJECT PRIVILEGE

SQL> grant execute on sys.dbms_lock to test;
grant execute on sys.dbms_lock to test
                     *
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> grant select on sys.v_$session to test;
grant select on sys.v_$session to test
                    *
ERROR at line 1:
ORA-01031: insufficient privileges

At this point it is still necessary to use the SYS user to grant those privileges to the PDBADMIN with the GRANT OPTION. That enables the PDBADMIN to propagate those privileges to other users as well.

SQL> conn / as sysdba
Connected.
SQL> alter session set container=orcl12pdb1;

Session altered.

SQL> grant select on sys.v_$session to pdbadmin with grant option;

Grant succeeded.

SQL> grant execute on dbms_lock to pdbadmin with grant option;

Grant succeeded.

SQL> connect pdbadmin/********@vm160/orcl12pdb1.support.robotron.de
Connected.
SQL> grant execute on dbms_lock to test;

Grant succeeded.

SQL> grant select on sys.v_$session to test;

Grant succeeded.

The only thing that remains is the creation of objects in the SYS schema, which the PDBADMIN still can’t do. This is sometimes necessary, for instance to create a password verify function to be used by a profile. But is it really necessary to have such a function in the SYS schema?

SQL> show user
USER is "PDBADMIN"
SQL> REM create Oracle's password verify function in the PDBADMIN schema
SQL> @?/rdbms/admin/catpvf

SQL> create profile RDS_PROFILE limit password_verify_function  ORA12C_STIG_VERIFY_FUNCTION;

Profile created.

SQL> alter user test profile RDS_PROFILE;

User altered.

SQL> connect test/test@vm160/orcl12pdb1
Connected.
SQL> alter user test identified by "Test" replace "test";
alter user test identified by "Test" replace "test"
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-20001: Password length less than 15


SQL> alter user test identified by "G4nz_K0mpliziert!" replace "test";

User altered.

Obviously the password verify function does its job, even though it is owned by PDBADMIN in this case. That means there is no need to have this function in the SYS schema; this security feature can be used without involving the SYS user.

In the end, the PDBADMIN can be equipped quite well with privileges that allow the daily administrative tasks for just one PDB. Only granting privileges on SYS objects remains a topic. This requires using SYS to grant the privileges, at least for the initial setup. In most cases this should not be a major issue, except when we simply don’t have access to the SYS user, like in the Autonomous Database in the Oracle Cloud for instance. There we only get a PDBADMIN user. This user can do many things, but did Oracle also think of SYS privileges?

SQL> show user
USER is "ADMIN"
SQL> show con_name

CON_NAME
------------------------------
PPA2OTADQX9XMML_MMIATP
SQL> select host_name from v$instance;

HOST_NAME
----------------------------------------------------------------


SQL> select owner, privilege, count(*) anz from dba_tab_privs where grantable='YES' and grantee=user group by owner, privilege
  2  order by 1,2;

OWNER                PRIVILEGE                                       ANZ
-------------------- ---------------------------------------- ----------
AUDSYS               EXECUTE                                           1
AUDSYS               READ                                              8
C##ADP$SERVICE       EXECUTE                                           6
C##ADP$SERVICE       READ                                             17
C##ADP$SERVICE       UNDER                                             1
C##CLOUD$SERVICE     EXECUTE                                       11168
C##CLOUD$SERVICE     READ                                             33
C##CLOUD$SERVICE     SELECT                                           17
CTXSYS               EXECUTE                                          10
CTXSYS               READ                                              4
CTXSYS               SELECT                                            1
DVSYS                SELECT                                            1
GSMADMIN_INTERNAL    EXECUTE                                          15
GSMADMIN_INTERNAL    READ                                              3
LBACSYS              EXECUTE                                          35
LBACSYS              READ                                             19
LBACSYS              SELECT                                           38
MDSYS                SELECT                                            1
OUTLN                SELECT                                            3
SYS                  EXECUTE                                        1313
SYS                  FLASHBACK                                        14
SYS                  READ                                           1695
SYS                  SELECT                                         4718
SYS                  USE                                               1
SYS                  WRITE                                             2
SYSTEM               READ                                              1
SYSTEM               SELECT                                            8
XDB                  DELETE                                            4
XDB                  EXECUTE                                          13
XDB                  INSERT                                            4
XDB                  READ                                              1
XDB                  SELECT                                            5
XDB                  UPDATE                                            3

33 rows selected.


SQL> create user test identified by "G4nz_K0mpliziert!";

User created.

SQL> grant create session to test;

Grant succeeded.

SQL> grant execute on dbms_lock to test;

Grant succeeded.

SQL> grant select on v$session to test;

Grant succeeded.

SQL> conn test/"G4nz_K0mpliziert!"@mmiatp_high
Connected.
SQL> show user
USER is "TEST"
SQL> show con_name

CON_NAME
------------------------------
PPA2OTADQX9XMML_MMIATP

SQL> select sid, username from v$session;

       SID USERNAME
---------- --------------------
     10457 TEST

SQL> set timing on
SQL> exec dbms_lock.sleep(10);

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.04

Obviously yes: the PDBADMIN, or ADMIN in this case, in my Autonomous Database has numerous object privileges in several schemas, and they come with the GRANT option. How useful that is, is another story, as you can see when querying the V$SESSION view.

To cut a long story short, there are good ways to configure a PDB administrative user in order to allow for common DBA tasks. We just have to keep in mind that privileges on SYS objects are somewhat special, and that we might not have a PDB admin user at all in case we migrated directly from a non-CDB architecture.

Initialization Parameters in Multitenant

Preface

It’s been a long time since the last blog post, so today is the day to start sharing knowledge again. And the topic is quite an interesting one, I think. It came up during the creation of my slides for the DOAG Conference 2022 in Nuremberg, where I will be talking about my experiences from several Multitenant migration projects. A question that always occurs is how the Oracle Database handles parameter changes.

The thing is that, in contrast to the old-fashioned architecture, we can now set parameters at the CDB level (i.e. the CDB$ROOT container) as well as at the PDB level. My initial assumption was that parameter changes done at the CDB level will also affect all PDBs as long as they don’t have their own settings. But during the projects it turned out that this is not a reliable rule of thumb.

So I read through the documentation, starting with the Database Concepts and going to the Administrator’s Guide, which in turn led me to the Multitenant Administrator’s Guide and the topics of Modifying a CDB with ALTER SYSTEM and Modifying a PDB with ALTER SYSTEM. But for me that didn’t paint a clearer picture. That’s why I started investigating things further.

I will be using Oracle 19.16 for my testing. I started with the V$PARAMETER view which has a column called “ISPDB_MODIFIABLE”. This column may have values of either “FALSE” or “TRUE”.

SQL> select ispdb_modifiable, count(*) cnt# from v$parameter group by ispdb_modifiable;

ISPDB       CNT#
----- ----------
TRUE         200
FALSE        265

Out of these 200 parameters that can be modified from within a PDB, I chose roughly 60 that made sense to change. My idea was to create a table containing these selected parameter names, store the initial values of these parameters for the CDB as well as for one PDB, then change the values at the CDB level and store the values again for the CDB and the PDB. That would then make up for a comparison.

Initial setup

SQL> create table params_in_question (name varchar2(100));

Table created.

##### Do the inserts #####

SQL> select * from params_in_question order by 1;

NAME
--------------------------------------------------------------------------------
approx_for_aggregation
approx_for_count_distinct
approx_for_percentile
aq_tm_processes
bitmap_merge_area_size
blank_trimming
cpu_count
cpu_min_count
create_bitmap_area_size
cursor_sharing
db_domain
db_file_multiblock_read_count
db_securefile
deferred_segment_creation
global_names
job_queue_processes
max_dump_file_size
open_cursors
open_links
optimizer_adaptive_plans
optimizer_adaptive_reporting_only
optimizer_adaptive_statistics
optimizer_capture_sql_plan_baselines
optimizer_dynamic_sampling
optimizer_features_enable
optimizer_ignore_hints
optimizer_ignore_parallel_hints
optimizer_index_caching
optimizer_index_cost_adj
optimizer_inmemory_aware
optimizer_mode
optimizer_real_time_statistics
optimizer_secure_view_merging
optimizer_use_invisible_indexes
optimizer_use_pending_statistics
optimizer_use_sql_plan_baselines
parallel_degree_limit
parallel_degree_policy
parallel_force_local
parallel_max_servers
parallel_min_degree
parallel_min_time_threshold
parallel_servers_target
plscope_settings
query_rewrite_enabled
query_rewrite_integrity
recyclebin
resource_limit
result_cache_max_result
result_cache_max_size
result_cache_mode
result_cache_remote_expiration
resumable_timeout
session_cached_cursors
sga_min_size
sga_target
shared_pool_size
skip_unusable_indexes
smtp_out_server
sort_area_retained_size
sort_area_size
star_transformation_enabled
undo_retention

63 rows selected.
 

Now I am going to save the initial values for the CDB and PDB.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MMIPDB1                        READ WRITE NO

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create table cdb_start_params as select name, value from v$parameter where name in (select name from params_in_question);

Table created.

SQL> alter session set container=mmipdb1;


Session altered.

SQL> create table cdb_start_params as select name, value from v$parameter where name in (select name from params_in_question);

Table created.

Change the values

After setting that up, I altered all these parameters using the typical ALTER SYSTEM statement in the form

SQL> alter system set <parameter name> = <new value> scope=spfile;

and restarted the whole database instance. Note that if we do not specify any CONTAINER=XYZ clause, the default of CONTAINER=CURRENT is used, according to the documentation.

Check the values

After the reboot, I created another set of tables that hold the values after the change.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create table cdb_end_params as select name, value from v$parameter where name in (select name from params_in_question);

Table created.

SQL> alter session set container=mmipdb1;

Session altered.

SQL> create table cdb_end_params as select name, value from v$parameter where name in (select name from params_in_question);

Table created.

At this point I am now able to compare the settings from before and after the parameter changes.

SQL> select count(*) 
2    from 
3    (select * from containers(cdb_start_params) where con_id=1) b,
4    (select * from containers(cdb_end_params) where con_id=1) e
5    where e.name = b.name
6    and  e.value <> nvl(b.value, '');

  COUNT(*)
----------
        60

SQL> select count(*) 
2    from 
3    (select * from containers(cdb_start_params) where con_id=3) b,
4    (select * from containers(cdb_end_params) where con_id=3) e
5    where e.name = b.name
6    and  nvl(e.value, 'X') <> nvl(b.value, 'X');

  COUNT(*)
----------
        56

So 60 out of 63 parameter changes made it into the CDB$ROOT, but only 56 into the PDB. Which parameters did not change in the CDB$ROOT? Let’s check that.

select b.name 
from 
 (select * from containers(cdb_start_params) where con_id=1) b,
 (select * from containers(cdb_end_params) where con_id=1) e
where e.name = b.name
and  nvl(e.value, 'X') = nvl(b.value, 'X')
;

NAME
--------------------------------------------------------------------------------
parallel_max_servers
result_cache_max_size
parallel_servers_target

Ok, most of these parameters are related to parallel execution. In my example I am using a Standard Edition 2 database, which does not allow parallel operations at all. That might be the reason. Let’s check next which parameters did not make it into the PDB.

SQL> select b.name
  2  from
  3   (select * from containers(cdb_start_params) where con_id=1) b,
  4   (select * from containers(cdb_end_params) where con_id=1) e
  5  where e.name = b.name
  6  and  e.value <> nvl(b.value, '')
  7  minus
  8  select b.name
  9  from
 10   (select * from containers(cdb_start_params) where con_id=3) b,
 11   (select * from containers(cdb_end_params) where con_id=3) e
 12  where e.name = b.name
 13  and  nvl(e.value, 'X') <> nvl(b.value, 'X');

NAME
--------------------------------------------------------------------------------
sga_min_size
sga_target
shared_pool_size
undo_retention

Even though I altered the parameters with the default of CONTAINER=CURRENT, only these 4 parameters were left unchanged inside the PDB; all other changes were picked up by the PDB. But are these parameters really persisted in the PDB’s spfile?

SQL> select name, con_id, con_uid from v$pdbs;

NAME               CON_ID    CON_UID
-------------- ---------- ----------
PDB$SEED                2 3913790623
PDB1                    3 2275295767
PDB2                    4 3431655691

SQL> select name from pdb_spfile$ where pdb_uid=2275295767;

NAME
--------------
db_securefile

Obviously not.

Modify again

So my next check is to alter all these parameters again, but this time using CONTAINER=ALL, bounce the instance and compare the parameter settings again.
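
For completeness, this is the statement form I used this time (as before, the parameter name and value are placeholders, not concrete values from my test):

```sql
-- Same parameters as in the first run, but now explicitly propagated
-- to all containers instead of relying on the CONTAINER=CURRENT default.
alter system set <parameter name> = <new value> container=all scope=spfile;
```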

SQL> select b.name
  2  from
  3   (select * from containers(cdb_start_params) where con_id=1) b,
  4   (select * from containers(cdb_end_params) where con_id=1) e
  5  where e.name = b.name
  6  and   nvl(e.value, 'X') <> nvl(b.value, 'X')
  7  minus
  8  select b.name
  9  from
 10   (select * from containers(cdb_start_params) where con_id=3) b,
 11   (select * from containers(cdb_end_params) where con_id=3) e
 12  where e.name = b.name
 13  and   nvl(e.value, 'X') <> nvl(b.value, 'X');

no rows selected

SQL> select name from pdb_spfile$ where pdb_uid=2275295767;

NAME
-------------------------
cpu_min_count
db_securefile
parallel_servers_target
sga_min_size
sga_target
shared_pool_size
undo_retention

7 rows selected.

This time all parameter changes made it into the PDB. That means CONTAINER=ALL does exactly what it says. But interestingly, only a few parameters are stored in the PDB’s spfile, and not only the ones that were left unchanged in the first example but some others too.

Conclusion

I could not really find a pattern for which parameters make it into the PDB even though I did not specify CONTAINER=ALL. In the end I have to be careful about what I am doing, think twice before doing it and, of course, check the outcome afterwards to see whether it really did what I wanted it to do.

PDB parameters not persisted from CDB

Mike Dietrich wrote in this blog post about a change in behaviour when setting undo_retention in the CDB$ROOT. I will not repeat that, but I found something else when following his advice. My test database is version 19.10 on a Linux box. The CDB has two additional PDBs with which I tried to verify Mike’s approach.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MMI1                           READ WRITE NO
         4 MMI2                           READ WRITE NO

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     900

SQL> select name, value from v$spparameter where name='undo_retention';

NAME                 VALUE
-------------------- ---------------------------
undo_retention

SQL> alter session set container=mmi1;

Session altered.

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     900

SQL> alter session set container=mmi2;

Session altered.

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     900

You see, there is no setting for undo_retention at all, so the CDB as well as the PDBs have the default of 900 seconds. Now let’s increase that to 14400 seconds, which is exactly four hours. Following Mike’s advice, I do that for all PDBs.

SQL> alter session set container=cdb$root;

Session altered.

SQL> alter system set undo_retention=14400 container=all scope=both;

System altered.

SQL> alter session set container=mmi1;

Session altered.

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     14400

SQL> alter session set container=mmi2;

Session altered.

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     14400

As expected, all PDBs now keep their undo for four hours, or at least try to, depending on physical limits. But now I bounce my CDB and see what happens.

SQL> alter session set container=cdb$root;

Session altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 8589931480 bytes
Fixed Size                  9155544 bytes
Variable Size            2986344448 bytes
Database Buffers         5570035712 bytes
Redo Buffers               24395776 bytes
Database mounted.
Database opened.
SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     14400

SQL> alter session set container=mmi1;

Session altered.

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     900
SQL> alter session set container=mmi2;

Session altered.

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     900

The CDB still has the setting but the PDBs lost it. This is because it was not added to the internal PDB_SPFILE$ table even though I used “scope=both”.

SQL> select c.name, p.name, p.value$
  2  from v$pdbs c
  3  join pdb_spfile$ p
  4    on (c.con_uid = p.pdb_uid)
  5* where p.name = 'undo_retention'
SQL> /

no rows selected

I have to explicitly go to a PDB and set the value there to make it persistent.

SQL> alter session set container=mmi2;

Session altered.

SQL> alter system set undo_retention=28800 scope=both;

System altered.

SQL> alter session set container=cdb$root;

Session altered.

SQL> select c.name, p.name, p.value$
  2  from v$pdbs c
  3  join pdb_spfile$ p
  4    on (c.con_uid = p.pdb_uid)
  5* where p.name = 'undo_retention'
SQL> /

NAME                 NAME                 VALUE$
-------------------- -------------------- --------------------
MMI2                 undo_retention       28800


SQL> startup force
ORACLE instance started.

Total System Global Area 8589931480 bytes
Fixed Size                  9155544 bytes
Variable Size            2986344448 bytes
Database Buffers         5570035712 bytes
Redo Buffers               24395776 bytes
Database mounted.
Database opened.

SQL> alter session set container=mmi2;

Session altered.

SQL> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     28800

That means, unless I missed something, it is not sufficient to change the parameter from the CDB$ROOT for all PDBs; you have to touch every single PDB to make the change persistent.
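
If you have many PDBs, touching each one by hand gets tedious. As a sketch (untested here, and assuming a common user with the appropriate privileges), a small PL/SQL loop could switch through all open PDBs and persist the value locally. Note that ALTER SESSION SET CONTAINER inside PL/SQL comes with restrictions, for example no open transaction:

```sql
-- Sketch only: persist undo_retention in every open PDB by switching the
-- container and issuing the ALTER SYSTEM locally, so the value lands in
-- each PDB's own spfile entries (PDB_SPFILE$).
begin
  for p in (select name from v$pdbs where open_mode = 'READ WRITE') loop
    execute immediate 'alter session set container = "' || p.name || '"';
    execute immediate 'alter system set undo_retention = 14400 scope=both';
  end loop;
  execute immediate 'alter session set container = cdb$root';
end;
/
```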

Misleading ORA-1555

Recently a colleague of mine was upgrading a database from 12.1.0.2 non-CDB to 19c PDB. Since the database was more than 1 TB in size, he used the following approach:

  1. Create a 19c CDB without any PDBs
  2. Start up the old 12.1 database read only
  3. Create the XML manifest for plugging in the database
  4. Stop the old 12.1 database
  5. Plug the database into the 19c CDB using the XML and the NOCOPY option
  6. Upgrade the new PDB

So let me outline that in a bit more detail. This is what was done at the source database.

SQL> shutdown immediate
SQL> startup open read only

SQL> begin
2      dbms_pdb.describe(pdb_descr_file => '/u01/software/dbp/xxxx.xml');
3    end;
4    /

SQL> shutdown immediate

Having that, the next steps were done at the destination 19c CDB.

SQL> CREATE PLUGGABLE DATABASE XXXX USING '/u01/software/dbp/xxxx.xml'
2    NOCOPY
3    TEMPFILE REUSE;

SQL> alter pluggable database xxxx open upgrade;

$ cd $ORACLE_HOME/rdbms/admin
$ $ORACLE_HOME/perl/bin/perl catctl.pl -c "xxxx" -l /u01/oracle catupgrd.sql

And then, after rebooting everything, the fun started. Nearly every application that connected to the new PDB, including SQL*Developer, got an ORA-1555 error:

ORA-00604: error occurred at recursive SQL level 1
ORA-01555: snapshot too old: rollback segment number 10 with name "_SYSSMU10_2194929394$" too small

Since this looked like an undo retention issue somehow, we checked the undo_retention parameter, set it to a higher value but without any change. Next, we checked all the LOBs inside the database and set their retention accordingly, but still no change. We even created a new UNDO tablespace and switched the PDB to that newly created tablespace, but still no change. Also the alert.log had no additional information. That’s why we then added some events to get traces. There is a nice explanation here: https://blogs.oracle.com/db/master-note-for-ora-1555-errors.

SQL> alter system set events '1555 trace name errorstack forever, level 3';
SQL> alter system set events '10442 trace name context forever, level 10';

We then raised the error again and searched for the traces. But there were no traces at all, at least not where we would have expected them.

I tried to find any traces in ADR that were created in the last 60 minutes.

$ find $ORACLE_BASE -mmin -60 -name "*ora*trc" -type f

And I found one, but in the directory of the old 12.1 database:

Trace file /u01/app/oracle/diag/rdbms/xxxx/XXXX/trace/XXXX_ora_12346.trc
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1
System name:    Linux
Node name:      xxxx0003
Release:        4.12.14-95.6-default
Version:        #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8)
Machine:        x86_64
Instance name: XXXX
Redo thread mounted by this instance: 1
Oracle process number: 33
Unix process pid: 12346, image: oracle@xxxx0003


*** 2021-01-30 18:32:16.011
*** SESSION ID:(695.19613) 2021-01-30 18:32:16.011
*** CLIENT ID:() 2021-01-30 18:32:16.011
*** SERVICE NAME:(XXXX) 2021-01-30 18:32:16.011
*** MODULE NAME:(RCP:xxxxx:64040) 2021-01-30 18:32:16.011
*** CLIENT DRIVER:(jdbcthin : 19.3.0.0.0) 2021-01-30 18:32:16.011
*** ACTION NAME:() 2021-01-30 18:32:16.011

Attempt to read  undo record @ tsn: 188 uba: 0x01410eff.22ec.3e for CR-application for  xid: 0x000a.005.00007679 failed

And then I had the strange feeling that there might be something else accessing the database. A quick check revealed that the old 12.1 database was running again, for whatever reason.

My assumption is that, due to the plug-in operation, the PDB got new SYSTEM and UNDO tablespaces; the controlfiles belong to the CDB anyway. So the old 12.1 non-CDB and the new 19c PDB shared only the user datafiles and could start up concurrently without any trouble. Only when accessing the user data did the old instance try to perform a rollback for read consistency and fail to find the proper undo segment, since that undo information now resided in another place. That is my theory.

Finally, we stopped and disabled the old 12.1 database (instance). From that point on the PDB worked like a charm and fortunately there were no corruptions at all, as far as we could check.

Conclusion: If you migrate from non-CDB to PDB on the same host using the NOCOPY option, prevent the old instance from ever starting again. You might run into serious trouble.
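
On a plain single-instance host, one way to do that is to flip the dbstart flag for the old SID in /etc/oratab from Y to N (with Grid Infrastructure, `srvctl disable database` would be the equivalent). The SID "XXXX" and the Oracle Home below are the placeholders from this post, and the edit is demonstrated on a copy so nothing on the host is touched:

```shell
# Demonstrated on a sample file; SID and Oracle Home are placeholders.
printf 'XXXX:/u01/app/oracle/product/12.1.0/dbhome_1:Y\n' > /tmp/oratab.demo

# Flip the dbstart flag from Y to N for that SID only
sed -i 's|^\(XXXX:[^:]*\):Y$|\1:N|' /tmp/oratab.demo

cat /tmp/oratab.demo   # -> XXXX:/u01/app/oracle/product/12.1.0/dbhome_1:N
```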

Dynamic CPU Scaling

Starting with Oracle Database 19.4, a new feature called “Dynamic CPU Scaling” was introduced. There is an announcement from Oracle about this feature and also a couple of other blog posts related to it. The basic idea is that with the multitenant architecture all pluggable databases (PDBs) share the CPUs defined by CPU_COUNT in the container database (CDB). By default each and every PDB has the same value for CPU_COUNT as the CDB. Basically this is an over-allocation of CPU resources. That’s why Oracle introduced a new parameter called CPU_MIN_COUNT, which defines a minimum of CPU capacity that shall be available to a specific PDB in any case.

Since this new feature might become very handy sometime in the future, I wanted to try it out and see how it actually works. My setup consists of one CDB version 19.9 and two PDBs. For both PDBs I have a SwingBench SOE benchmark with 20 concurrent sessions prepared to put some CPU load on them.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO

SQL>  select CON_ID, name, value from v$system_parameter where name like 'cpu%count' and con_id in (0,3,4);

    CON_ID NAME                 VALUE
---------- -------------------- ----------
         0 cpu_count            16
         0 cpu_min_count        16
         3 cpu_min_count        16
         4 cpu_min_count        16

6 rows selected.

My CPU_COUNT equals 16, which by funny coincidence is the limit that a Standard Edition 2 has due to license restrictions.

Having that, let’s put some load on PDB1 and see what happens. For the whole blog post, PDB1 graphs will be on the left window, PDB2 graphs on the right window.

We see that we get a steady load on PDB1 with those 20 sessions. Actually we have 16 active sessions and 4 waiting for CPU. That is what I wanted to achieve. Now I start the other SwingBench run, which starts another 20 sessions, but against PDB2.

Unfortunately the graphs are sorted in a different manner, but we can see that the throughput of PDB1 drops because PDB2 now requires some CPU resources too. Interesting to see that the throughput does not decrease to a certain level and then stay there constantly. It just flips up and down for both PDBs. If PDB1 is in favour, PDB2 suffers. Some seconds later this changes to the opposite and PDB2 is in favour. And some more seconds later it changes again… and again… and again, see:

Now I reduce the CPU_MIN_COUNT for PDB1:

SQL> alter session set container=pdb1
  2  /

Session altered.

SQL> alter system set cpu_min_count=10 container=current;

System altered.

As you can see, there is not really a change in behaviour. It just stumbles a bit and then goes on as if nothing had changed.

So my next step was to reduce the CPU_MIN_COUNT for PDB2 down to 2. From the CPU_MIN_COUNT perspective that means we are not overallocating anymore.

SQL> alter session set container=pdb2
  2  /

Session altered.

SQL> alter system set cpu_min_count=2 container=current;

System altered.

Now the throughput of PDB1 rises again and the one for PDB2 goes down. Obviously the database is now able to manage the load according to the shares that I defined by setting the CPU_MIN_COUNT, maybe because I am not over-committing anymore.

My next step was to increase the CPU_MIN_COUNT for PDB2 again, to 6. That means I now have a minimum of 16 CPUs over both PDBs.

SQL> alter session set container=pdb2
  2  /

Session altered.

SQL> alter system set cpu_min_count=6 container=current;

System altered.

Again, the throughput of PDB1 goes down to allow for more throughput on PDB2. And after a while, it is flipping again.

It is also notable that the numbers are really similar even though the CPU_MIN_COUNT values are different. Remember, at this point PDB1 has a minimum of 10 CPUs whilst PDB2 has a minimum of 6.

Finally, I reduced the CPU_MIN_COUNT for PDB1 down to 2.

SQL> alter session set container=pdb1
  2  /

Session altered.

SQL> alter system set cpu_min_count=2 container=current;

System altered.

Now I am not over-committing anymore and the throughput for both PDBs reaches a steady level again without any flipping.

In the end this means to me that CPU scaling only works as I would expect if the sum of all CPU_MIN_COUNT values over all PDBs is not higher than the actual CPU_COUNT defined in the CDB. Nevertheless, it is a handy feature to limit CPU resources and noisy-neighbor issues in a multitenant environment.
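
Based on that observation, a quick sanity check could compare the sum of the PDBs’ CPU_MIN_COUNT values with the CDB’s CPU_COUNT, using the same v$system_parameter view as above. This is just a sketch; the con_id > 2 filter assumes user PDBs start at container ID 3:

```sql
-- If sum_pdb_cpu_min exceeds cdb_cpu_count, the minimums are over-allocated
-- and the flipping behaviour described above may occur.
select (select sum(to_number(value))
          from v$system_parameter
         where name = 'cpu_min_count'
           and con_id > 2) as sum_pdb_cpu_min,
       (select to_number(value)
          from v$system_parameter
         where name = 'cpu_count'
           and con_id = 0) as cdb_cpu_count
  from dual;
```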

Shrink SYSAUX of a Pluggable Database

Today’s topic is inspired by an experience I had some time ago. A customer complained about a huge SYSAUX tablespace inside one of his PDBs. They needed to add a second datafile since the first datafile had already reached its maximum size of 32G. When we investigated what was consuming all this space, we found that it was the unified audit trail. As it turned out, there were lots of “select any table” usages recorded. There was an application user that should only read some tables. We revoked the “select any table” privilege from that user and instead granted “read” on the tables. Now that the unified audit trail wasn’t growing that much anymore, we cleaned it up. You can read about this in one of my previous posts.
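
Just as a reminder of what that cleanup roughly looked like (a sketch using DBMS_AUDIT_MGMT; the 90-day retention here is an arbitrary example, see the earlier post for the real procedure):

```sql
-- Purge unified audit records older than a given timestamp.
-- Run as a suitably privileged user (e.g. one with AUDIT_ADMIN).
begin
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_unified,
    last_archive_time => systimestamp - interval '90' day);
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp => true);
end;
/
```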

So we ended up with a 32GB SYSAUX tablespace that only holds a handful of GB of data. Reorganizing is not as easy as with a user tablespace, for two reasons. Reason 1: There are lots of “special” segments like partitioned LOBs, nested tables, clusters etc. Reason 2: We are not allowed to touch objects inside SYSAUX.

SQL> alter table wmsys.WM$LOCKROWS_INFO$ move tablespace sysaux;
alter table wmsys.WM$LOCKROWS_INFO$ move tablespace sysaux
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

SQL> !oerr ora 65040
65040, 00000, "operation not allowed from within a pluggable database"
// *Cause: An operation was attempted that can only be performed in the root
// container.
// *Action: Switch to the root container to perform the operation.
//

Ok. Doing this from the root container does not get us any further, since there is another SYSAUX tablespace there which is not in scope of our reorganization. So we need to find another way to touch objects in our PDB’s SYSAUX tablespace. Basically there are two ways to do that: either by setting the underscore session parameter “_oracle_script” or by running the commands via “dbms_pdb.exec_as_oracle_script”.

SQL> alter session set "_oracle_script"=true;

Session altered.

SQL> alter table wmsys.WM$LOCKROWS_INFO$ move tablespace sysaux;

Table altered.

SQL> alter session set "_oracle_script"=false;

Session altered.

SQL> exec dbms_pdb.exec_as_oracle_script('alter table wmsys.WM$LOCKROWS_INFO$ move tablespace sysaux');

PL/SQL procedure successfully completed.

Now let’s check who has segments in the SYSAUX tablespace:

SQL> select distinct owner from dba_segments where tablespace_name='SYSAUX' order by 1;

OWNER
--------------------------------------------------------------------------------
APEX_040200
AUDSYS
CTXSYS
DBSNMP
DVSYS
GSMADMIN_INTERNAL
MDSYS
ORDDATA
ORDSYS
SYS
SYSTEM
WMSYS
XDB

13 rows selected.

In order to shrink the tablespace we create an intermediate tablespace where we can move most of the segments to. That will hopefully free up enough space so that we can shrink the datafile(s) and move the segments back.

SQL> create tablespace sysaux_neu datafile size 500M autoextend on;

Tablespace created.


All the owners need quota for the new tablespace.

SQL> alter user xdb quota unlimited on sysaux_neu;

User altered.

Do this for all the other users that own segments in SYSAUX.
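
Instead of typing one statement per user, a small sketch like this could derive the grants from DBA_SEGMENTS directly (assuming you run it as a sufficiently privileged user; some Oracle-maintained users may reject the quota change, hence the exception handler):

```sql
-- Grant quota on the new tablespace to every owner of SYSAUX segments,
-- so no owner is forgotten.
begin
  for u in (select distinct owner
              from dba_segments
             where tablespace_name = 'SYSAUX') loop
    begin
      execute immediate
        'alter user "' || u.owner || '" quota unlimited on sysaux_neu';
    exception
      when others then
        dbms_output.put_line('Skipped ' || u.owner || ': ' || sqlerrm);
    end;
  end loop;
end;
/
```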

Now we can start moving segments. For XDB there is a procedure that does that. It is described in MOS Note 1271560.1.

SQL> alter system enable restricted session;

System altered.

SQL> select any_path from resource_view;

ANY_PATH
--------------------------------------------------------------------------------
/home
/home/MDSYS

[...]
99 rows selected.

SQL> set serveroutput on
SQL> begin
  2  xdb.dbms_xdb_admin.movexdb_tablespace('SYSAUX_NEU', trace=> TRUE);
  3  end;
  4 /
[...]
sql stmt alter user xdb default tablespace "SYSAUX_NEU"

PL/SQL procedure successfully completed.

Another speciality are the AQ tables. For those, Oracle provides a PL/SQL package “move_qt_pkg” which is described in, and can be downloaded from, MOS Note 1410195.1.

We can simply loop through all AQ tables and move them to the new tablespace.

begin
  for rec in (
    select q.owner ow, q.queue_table nm
      from dba_queue_tables q
      join dba_segments s on (q.queue_table = s.segment_name and q.owner = s.owner)
     where s.tablespace_name = 'SYSAUX'
  )
  loop
    dbms_output.put( rpad(rec.ow || '.' || rec.nm, 60, '.') );
    begin
      dbms_pdb.exec_as_oracle_script('begin move_qt_pkg.move_queue_table('''||rec.ow||''' , '''||rec.nm||''', ''SYSAUX'', ''SYSAUX_NEU''); end;');
      dbms_output.put_line('OK');
    exception
      when others then
        dbms_output.put_line('NOK');
        dbms_output.put_line(SQLERRM);
    end;
  end loop;
end;
/

Now the rest is more or less straight forward:

  • move non-partitioned heap tables
  • move non-partitioned object tables
  • move non-partitioned IOTs
  • move non-partitioned IOT Overflows
  • move indexes of non-partitioned IOTs
  • move partitioned heap tables
  • move LOBs
  • move non-partitioned indexes
  • move partitioned indexes
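
As an illustration for the first bullet only (a sketch; the full script handles the other segment types too), moving the non-partitioned heap tables could look like this, again wrapped in dbms_pdb.exec_as_oracle_script because we are touching Oracle-maintained objects:

```sql
-- Move all non-partitioned heap tables found in SYSAUX to the new tablespace.
-- The join to dba_tables filters out IOTs, clusters and other segment types;
-- failing objects are reported and skipped.
begin
  for r in (select s.owner, s.segment_name
              from dba_segments s
              join dba_tables t
                on (t.owner = s.owner and t.table_name = s.segment_name)
             where s.tablespace_name = 'SYSAUX'
               and s.segment_type = 'TABLE') loop
    begin
      dbms_pdb.exec_as_oracle_script(
        'alter table "' || r.owner || '"."' || r.segment_name ||
        '" move tablespace sysaux_neu');
    exception
      when others then
        dbms_output.put_line('NOK ' || r.owner || '.' || r.segment_name ||
                             ': ' || sqlerrm);
    end;
  end loop;
end;
/
```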

That covered most of the segments and freed up enough space to shrink the datafiles. After that, we did the whole procedure all over again to move the segments back.

You can find my script here (in .docx format since WordPress does not allow SQL or TXT).
But be aware, there is no support from Oracle for these actions.