Platform change and upgrade in one step – Manually

In a previous blog post I outlined the steps to migrate between platforms and versions in one step and ended up with an error from the XTTS scripts provided by Oracle.
Now I’ll show you how easy this is, even when you do it by hand.

Step 1: create initial backup

First, we get the current SCN of the source database. This SCN will be used later as the starting point for the first incremental backup.

SQL>  select current_scn from v$database;

CURRENT_SCN
-----------
    1446020

Then we create a classic backup of all tablespaces that are to be transported. Note that everything is still online and read/write at this point, so there is no impact on production processes besides the load generated by the backup itself.

RMAN> backup for transport allow inconsistent incremental level 0 format '/tmp/MMI_%U.bkp' tablespace mmi;

Starting backup at 2018-09-24 13:43:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=271 device type=DISK


channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/MMI/DATAFILE/mmi.296.987685353
channel ORA_DISK_1: starting piece 1 at 2018-09-24 13:43:54
channel ORA_DISK_1: finished piece 1 at 2018-09-24 13:43:57
piece handle=/tmp/MMI_02tdtqsq_1_1.bkp tag=TAG20180924T134353 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 2018-09-24 13:43:57

Step 2: restore the backup at the destination

For the restore, we need an empty database in place at the destination. This database may be of a different, e.g. higher, version than the source. We do the restore and the necessary platform conversion in one step. You could also restore first and convert afterwards, but that would require twice the storage.

RMAN> restore  from platform 'Linux x86 64-bit'  all foreign datafiles to new from backupset '/tmp/MMI_02tdtqsq_1_1.bkp';

Starting restore at 2018-09-24 14:05:00
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all foreign files in backup piece
channel ORA_DISK_1: reading from backup piece /tmp/MMI_02tdtqsq_1_1.bkp
channel ORA_DISK_1: restoring foreign file 5 to +DATA/MMIRECO/DATAFILE/mmi.301.987689101
channel ORA_DISK_1: foreign piece handle=/tmp/MMI_02tdtqsq_1_1.bkp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
Finished restore at 2018-09-24 14:05:16

At this point we have the files available at the destination, but they are not yet part of the destination database.
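You can verify this at the destination: the restored copy exists in ASM, but the database does not know about it yet. A quick sanity check (a sketch, output omitted):

SQL> select name from v$datafile;

The foreign datafile copy +DATA/MMIRECO/DATAFILE/mmi.301.987689101 will not appear in this list until the metadata import in step 7 plugs it in.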

Step 3: create incremental backup

To keep the destination files up to date, we create an incremental backup at the source that covers all changes since the start of the initial (or previous incremental) backup. That is why we gathered the SCN beforehand. And we query the current SCN again right before this incremental backup, so it can serve as the starting point for the next increment.

SQL>  select current_scn from v$database;

CURRENT_SCN
-----------
    1458731

Now we create the incremental backup.

RMAN>  backup for transport allow inconsistent incremental from scn 1446020 format '/tmp/MMI_%U.bkp' tablespace mmi;

Starting backup at 2018-09-24 13:54:20
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/MMI/DATAFILE/mmi.296.987685353
channel ORA_DISK_1: starting piece 1 at 2018-09-24 13:54:21
channel ORA_DISK_1: finished piece 1 at 2018-09-24 13:54:22
piece handle=/tmp/MMI_03tdtrgd_1_1.bkp tag=TAG20180924T135421 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2018-09-24 13:54:22

Step 4: apply incremental backup

The incremental backup that we created in step 3 can now be converted to the destination platform and applied to the foreign datafile copies.

RMAN> recover foreign datafilecopy '+DATA/MMIRECO/DATAFILE/mmi.301.987689101' from backupset '/tmp/MMI_03tdtrgd_1_1.bkp' from platform 'Linux x86 64-bit';

Starting restore at 2018-09-24 14:06:19
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/MMIRECO/DATAFILE/mmi.301.987689101
channel ORA_DISK_1: reading from backup piece /tmp/MMI_03tdtrgd_1_1.bkp
channel ORA_DISK_1: foreign piece handle=/tmp/MMI_03tdtrgd_1_1.bkp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 2018-09-24 14:06:21

Step 5: keep datafile copies current

This is simply done by repeating steps 3 & 4 until the final downtime takes place.
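One such round trip boils down to three commands, shown here as a compact sketch (<scn> is the SCN gathered in the previous round, <piece> is the piece handle that RMAN reports for the new backup). At the source, gather the SCN for the next round:

SQL> select current_scn from v$database;

Then take the incremental backup starting at the previous round’s SCN:

RMAN> backup for transport allow inconsistent incremental from scn <scn> format '/tmp/MMI_%U.bkp' tablespace mmi;

And apply it to the foreign datafile copy at the destination:

RMAN> recover foreign datafilecopy '+DATA/MMIRECO/DATAFILE/mmi.301.987689101' from backupset '<piece>' from platform 'Linux x86 64-bit';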

Step 6: final incremental backup

For the final transport, we create one last incremental backup, this time with the tablespaces set to read-only.

RMAN> sql "alter tablespace mmi read only";

sql statement: alter tablespace mmi read only


RMAN> backup for transport incremental from scn 1458731 format '/tmp/MMI_%U.bkp' tablespace mmi DATAPUMP FORMAT '/tmp/mmi_ts.dmpdp';

Starting backup at 2018-09-24 14:10:48
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=271 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_MMI_nffu":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Master table "SYS"."TRANSPORT_EXP_MMI_nffu" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_MMI_nffu is:
   EXPDP>   /u01/app/oracle/product/12.2.0.1/db/dbs/backup_tts_MMI_95169.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace MMI:
   EXPDP>   +DATA/MMI/DATAFILE/mmi.296.987685353
   EXPDP> Job "SYS"."TRANSPORT_EXP_MMI_nffu" successfully completed at Mon Sep 24 14:12:46 2018 elapsed 0 00:01:29
Export completed

channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/MMI/DATAFILE/mmi.296.987685353
channel ORA_DISK_1: starting piece 1 at 2018-09-24 14:12:51
channel ORA_DISK_1: finished piece 1 at 2018-09-24 14:12:54
piece handle=/tmp/MMI_05tdtsj3_1_1.bkp tag=TAG20180924T141050 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.2.0.1/db/dbs/backup_tts_MMI_95169.dmp
channel ORA_DISK_1: starting piece 1 at 2018-09-24 14:12:55
channel ORA_DISK_1: finished piece 1 at 2018-09-24 14:12:56
piece handle=/tmp/mmi_ts.dmpdp tag=TAG20180924T141050 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2018-09-24 14:12:56

This step creates the last incremental backup that we need to apply at the destination, as well as a metadata export that we’ll need to plug the datafiles into the destination database.

Step 7: final apply and plugin

The application of the incremental backup is identical to the previous apply from step 4.

RMAN> recover foreign datafilecopy '+DATA/MMIRECO/DATAFILE/mmi.301.987689101' from backupset '/tmp/MMI_05tdtsj3_1_1.bkp' from platform 'Linux x86 64-bit';

Starting restore at 2018-09-24 14:14:29
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file +DATA/MMIRECO/DATAFILE/mmi.301.987689101
channel ORA_DISK_1: reading from backup piece /tmp/MMI_05tdtsj3_1_1.bkp
channel ORA_DISK_1: foreign piece handle=/tmp/MMI_05tdtsj3_1_1.bkp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 2018-09-24 14:14:31

Now we need to get the metadata export dump from the final backupset.

RMAN> restore dump file datapump destination '/u01/app/oracle/admin/mmireco/dpdump/' from backupset '/tmp/mmi_ts.dmpdp';


Starting restore at 2018-09-24 15:04:15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=267 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/admin/mmireco/dpdump/backup_tts_MMIRECO_22317.dmp
channel ORA_DISK_1: reading from backup piece /tmp/mmi_ts.dmpdp
channel ORA_DISK_1: foreign piece handle=/tmp/mmi_ts.dmpdp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 2018-09-24 15:04:20

Finally, we can import the metadata export, making the foreign datafiles part of the destination database. In my example I used just one file/tablespace; you can easily plug in several datafiles at once by using wildcards in the file list. As a prerequisite, create all the users that own segments inside the transported tablespaces, otherwise the import will fail.
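A minimal sketch of that prerequisite, assuming the transported segments are owned by a single (hypothetical) user named MMI:

SQL> create user mmi identified by "SomeSecret#1";
SQL> grant create session, create table to mmi;

Add further grants and quotas as appropriate for your environment. With the owner in place, the import itself looks like this: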

[oracle@vm141 ~]$ impdp  dumpfile=data_pump_dir:backup_tts_MMIRECO_22317.dmp logfile=data_pump_dir:mmi_imp_tts.impdp.log transport_datafiles='+DATA/MMIRECO/DATAFILE/mmi.301.987689101';

Import: Release 12.2.0.1.0 - Production on Mon Sep 24 15:06:15 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":  /******** AS SYSDBA dumpfile=data_pump_dir:backup_tts_MMIRECO_22317.dmp logfile=data_pump_dir:mmi_imp_tts.impdp.log transport_datafiles=+DATA/MMIRECO/DATAFILE/mmi.301.987689101
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Mon Sep 24 15:07:02 2018 elapsed 0 00:00:43

Step 8: finalization

Finally, we can switch the tablespace(s) read/write again at the destination.

SQL> alter tablespace mmi read write;

Tablespace altered.

And since the export only includes metadata for the segments inside the transported files, we need to copy all metadata that resides in other tablespaces, e.g. the SYSTEM tablespace: views, PL/SQL, grants etc. So it is best to do a metadata-only export at the source and import that into the destination database. I assume you know how to do that, so I will not walk through it in detail.
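Still, here is a minimal sketch of that metadata copy, assuming the (hypothetical) schema MMI and the default DATA_PUMP_DIR directory object. At the source, export the metadata only, without any rows:

$ expdp system schemas=MMI content=METADATA_ONLY directory=DATA_PUMP_DIR dumpfile=mmi_meta.dmp

Then, at the destination, import it; the tables that already arrived with the transportable import are skipped:

$ impdp system schemas=MMI directory=DATA_PUMP_DIR dumpfile=mmi_meta.dmp table_exists_action=SKIP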

Conclusion

Transporting tablespaces between platforms and versions is not a one-liner, but it is an acceptable effort when you consider the downtime that is needed to switch from one system to another. And in my opinion, when you do it manually without the scripts from Oracle, it becomes much clearer how things work.


Platform change and upgrade in one step – XTTS

Today I want to tell you something about cross-platform transportable tablespaces (XTTS) and incrementally updated datafile copies. This is a very nice feature for moving to another platform and changing the Oracle version in one step. There are several MOS notes, whitepapers and, of course, some upgrade slides by Mike Dietrich covering this topic.

The basic steps in all cases and versions are these (a short sketch of the commands follows the list):

  1. set tablespace(s) read only
  2. export metadata for those tablespaces
  3. transport datafiles and dump to target system
  4. create necessary database users
  5. import dump into target database which makes the datafiles part of the target database
  6. set tablespace(s) read write
  7. optionally, transport other objects like views, PL/SQL etc.
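
In terms of commands, this classic flow might look roughly like the following sketch (names and paths are illustrative only, not taken from a real run):

SQL> alter tablespace mmi read only;

$ expdp system directory=DATA_PUMP_DIR dumpfile=mmi_tts.dmp transport_tablespaces=MMI

Copy the datafile(s) and the dump to the target system (converting the files if the endianness differs), then:

$ impdp system directory=DATA_PUMP_DIR dumpfile=mmi_tts.dmp transport_datafiles='/u02/oradata/TGT/mmi01.dbf'

SQL> alter tablespace mmi read write;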

But what if the tablespaces are too big and the copy time would exceed the allowed downtime window? Then we can use incremental backups to shorten the downtime. The steps to do this are quite similar.

  1. backup tablespace(s) for transport
  2. copy backup to target system
  3. restore and convert datafiles to target destination and platform
  4. create incremental backup for transported tablespace(s)
  5. recover transported datafiles using the incremental backup
  6. repeat steps 4 and 5 until final downtime window
  7. set tablespace(s) read only
  8. create final incremental backup for transported tablespace(s)
  9. apply incremental backup to transported datafiles
  10. continue with step 2 of the initial process above

Sounds a bit complicated, doesn’t it? That’s why Oracle was kind enough to put this whole process into scripts. The following MOS notes contain and describe these scripts:

  • 11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
  • 12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
  • V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

During a migration project from Linux to SPARC (don’t ask) I wanted to use those scripts. But I experienced the following during “Phase 2”:

oracle@server2:~/xtts> $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
============================================================
trace file is /oracle/xtts/backup_Sep20_Thu_12_20_52_917//Sep20_Thu_12_20_52_917_.log
=============================================================

[…]

--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------


--------------------------------------------------------------------
Find list of datafiles in system
--------------------------------------------------------------------

SP2-0027: Input is too long (> 2499 characters) - line ignored
SP2-0027: Input is too long (> 2499 characters) - line ignored
           from dual
           *
ERROR at line 19:
ORA-00936: missing expression

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error:
------
Error in executing /oracle/xtts/backup_Sep20_Thu_12_27_07_889//diff.sql

Okay. The reason is that I have loads of tablespaces that need to be transferred, and this long list messed up the whole process. So I tried to understand the process and just do it myself. What I learned is that it is quite easy to do manually. And I share my findings with you by outlining the basic process in an upcoming blog post.