Transporting SQL Patches Between Databases

A while ago I wrote about SQL Patches – what hints do they exactly use? Now I came to the point where the underlying application was moved to another database using Data Pump. Consequently, some parts of the application started to run into performance issues again, because the SQL patches that made some statements run fast were not transported to the new database. So the need arose to re-create or transport those SQL patches.

SQL patches are not part of the application schema; they are stored in the data dictionary of the database. That is why they are not transported when using Data Pump. But there is a way to transport them: the DBMS_SQLDIAG package provides procedures for this, CREATE_STGTAB_SQLPATCH, PACK_STGTAB_SQLPATCH and UNPACK_STGTAB_SQLPATCH, listed here in the order in which they are used. Basically the procedure to move SQL patches is this:

  1. Create staging table in a schema other than SYS
  2. Import SQL patches (all or some) into this staging table
  3. Transport staging table to destination database (Data Pump; CTAS over a DB link will not work since there is a LONG column in the staging table)
  4. Extract SQL patches (all or some) from staging table back into the dictionary

Let’s see how this works. First, list the existing SQL patches, then create the staging table and pack the patches into it.

SQL> select name from dba_sql_patches;

NAME
--------------------------------------------------------------------------------
Patch_2
Patch_5
Patch_3
Patch_4
Patch_1

SQL> exec dbms_sqldiag.create_stgtab_sqlpatch('SQLPATCH_STAGE','SYSTEM');

PL/SQL procedure successfully completed.

SQL> exec dbms_sqldiag.pack_stgtab_sqlpatch(staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');

PL/SQL procedure successfully completed.
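
By the way, PACK_STGTAB_SQLPATCH can also stage just a subset of the patches instead of all of them. A minimal sketch of packing a single patch, assuming the patch_name filter parameter of DBMS_SQLDIAG.PACK_STGTAB_SQLPATCH:

SQL> exec dbms_sqldiag.pack_stgtab_sqlpatch(patch_name=>'Patch_3', staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');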

Now move the staging table to the destination database.

[oracle ~]$ expdp system/**** tables=system.sqlpatch_stage directory=dings dumpfile=sqlpatch.dmpdp

[oracle ~]$ impdp system/**** directory=dings dumpfile=sqlpatch.dmpdp full=y

Finally, extract the SQL patches back into the data dictionary.

SQL> exec dbms_sqldiag.unpack_stgtab_sqlpatch(replace=>true, staging_table_name=>'SQLPATCH_STAGE', staging_schema_owner=>'SYSTEM');

PL/SQL procedure successfully completed.
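
A quick look at DBA_SQL_PATCHES on the destination database should now show the same five patches as on the source:

SQL> select name from dba_sql_patches;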

That’s it, nothing more to do. Keep that in mind in case your applications are fine-tuned using SQL patches and you need to move them to different databases.


Transportable Tablespaces, Deferred Segment Creation and SE

Right now I am in the middle of a project which is about moving an old 11.2.0.2 Oracle Enterprise Edition database of roughly 1.5 TB from the US to new hardware in Germany, including an upgrade to 12.2.0.1 and a switch to Standard Edition 2. As you see, there are a couple of things to do, and I am more than happy to do this challenging project. The basic plan is this:

  1. Create a new 12.2.0.1 database in Standard Edition 2
  2. Get an initial copy of the source database to Germany
  3. Restore that copy on the new hardware with an 11.2.0.2 Oracle Home
  4. Get the archivelogs on a regular basis from the source database in the US
  5. Recover the German copy of the database (see the sketch after this list)
  6. Repeat 4 and 5 until cut-over
  7. Open the German copy of the database with RESETLOGS
  8. Move the data to the new 12.2.0.1 database using Transportable Tablespaces
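
For steps 4 to 7 the recovery loop can be driven from SQL*Plus on the mounted German copy. A minimal sketch, assuming the shipped archivelogs have been made available on the new hardware:

SQL> startup mount
SQL> recover database using backup controlfile until cancel;

Supply the shipped archivelogs when prompted and answer CANCEL once all available logs are applied; repeat this as new logs arrive. At cut-over, after the final recovery pass:

SQL> alter database open resetlogs;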

According to the Licensing Guide it is allowed to plug transportable tablespaces into an SE2 database, so I was completely happy with my approach.
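
Before setting the tablespaces read-only and exporting the metadata, it is also a good idea to verify that the transport set is self-contained. A minimal sketch (using the obfuscated tablespace name from below):

SQL> exec dbms_tts.transport_set_check('SOME_TS', true);
SQL> select * from transport_set_violations;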

But during the test phase of this migration I encountered a nasty error when plugging in the tablespace. I have obfuscated some identifiers, so don’t be too strict:

Import: Release 12.2.0.1.0 - Production on Thu May 31 15:32:01 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=copy_dump logfile=some_ts.impdp.log dumpfile=some_ts.dmpdp transport_datafiles=/u02/app/oracle/oradata/SOMEDB/SOMEDB/datafile/e*dbf 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"SOME_USER"."SOME_TABLE" failed to create with error:
ORA-01647: tablespace 'SOME_TS' is read-only, cannot allocate space in it

Failing sql is:
CREATE TABLE "SOME_USER"."SOME_TABLE" (<Column list stripped>) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "SOME_TS" 

Ok, the tablespace is read-only. That is how it is supposed to be. A quick search on MOS revealed the note “ORA-39083 & ORA-01647 during TTS or EXPDP/IMPDP from Enterprise Edition to Standard Edition when deferred_segment_creation=True (Doc ID 2288535.1)”, which states that deferred segment creation is the root cause. That makes sense somehow. The note further states that the workaround is either to use the version parameter of “impdp” during the TTS import or to disable deferred segment creation at the source database before starting the TTS.
To cut a long story short, both workarounds do not work and end up with the exact same error. In my opinion this makes total sense, because the segment for the table in question is simply not present in the transported tablespace. And it cannot be created during the TTS import since the tablespace is marked read-only. The segment is not created when I use the version parameter for impdp, nor is it created when I disable deferred segment creation.
The only feasible solution is to create the segment at the source database before setting the tablespace to read only. A simple

SQL> alter table SOME_USER.SOME_TABLE allocate extent;

Table altered.

does the trick. And that’s it. After creating a segment for this table, the TTS import went through without any further errors.
To make sure all objects have a segment created, I use this query to generate the necessary statements:

SQL> select 'alter table ' || owner || '.' || table_name || ' allocate extent;'
  2  from dba_tables where segment_created = 'NO'
  3  and owner = '&USERNAME.';
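
Alternatively, the extents can be allocated directly from a small PL/SQL block instead of spooling and running the generated statements. A minimal sketch using the same DBA_TABLES filter (partitioned tables would need per-partition handling, which this does not cover):

SQL> begin
  2    for t in (select owner, table_name
  3              from dba_tables
  4              where segment_created = 'NO'
  5              and owner = '&USERNAME.')
  6    loop
  7      execute immediate 'alter table "' || t.owner || '"."' ||
  8                        t.table_name || '" allocate extent';
  9    end loop;
 10  end;
 11  /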

So when using Transportable Tablespaces to move to an SE2 database and the source is 11.2 or higher (the release that introduced deferred segment creation), you better check that in advance. Hope that helps.

Things to consider when moving a database

A main task in a DBA’s life is keeping database versions and hardware up to date. This sounds easy, but actually it is not, especially if the system has not been touched for a long period of time. And there might be a lot of applications using a database, which implies dependencies.
Today’s story is about a server migration that we did a couple of weeks ago. It meant moving 7 databases to new hardware, which we did by simply stopping the databases on the old server, stopping ASM on the old server, mounting the ASM disks to the new server(s) and creating cluster resources for the databases (see the srvctl sketch after the list). But there were several other things to take care of. The following list is long but might not be complete, so please feel free to comment on it to make it as good as possible.

  • cron jobs of ALL users
    • copy cron entries and all referenced scripts/directories etc.
  • scripts
    • copy any scripts that are on the old system
  • local applications
    • copy any application and their configuration that might run on the old system
  • tnsnames.ora of ALL clients
    • prepare TNS aliases for the new system and activate them during the migration
    • notify all application owners who do not use tnsnames.ora or directory services (JDBC Thin, EZ-Connect, …)
  • external procedures
    • check the extproc configuration and copy any external libraries it references
  • directories
    • adump
    • diagnostic_dest
    • directory objects
  • database links in ALL other databases
  • backup
    • RMAN configuration, especially locations for backups and snapshot controlfile
  • connect identifiers in applications, that do not use tnsnames.ora (JDBC etc)
  • SSH keys, authorized_keys
    • prepare any remote systems that were able to login via SSH to work seamlessly with the new server
    • add any necessary entries to known_hosts and/or authorized_keys on the new server
  • Monitoring systems
    • change your central monitoring tool to monitor the new environment
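
As for the cluster resources mentioned above, registering and starting a moved database is a matter of a few srvctl calls. A minimal sketch (database name, Oracle Home, spfile and diskgroups are illustrative):

[oracle ~]$ srvctl add database -db SOMEDB -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_1 -spfile +DATA/SOMEDB/spfileSOMEDB.ora -diskgroup "DATA,FRA"
[oracle ~]$ srvctl start database -db SOMEDB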


Once you have moved everything to the new system, it might be a good idea to keep the Oracle listener(s) up and running on the old system for a while. That way you can monitor the listener logfiles and identify any application/user/interface that was not updated and is still trying to connect to the old environment.
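
The listener log can even be summarized by client host to see exactly who keeps connecting. A sketch, assuming the default listener.log format and an illustrative path:

[oracle ~]$ grep " establish " /u01/app/oracle/diag/tnslsnr/oldhost/listener/trace/listener.log | awk -F"HOST=" '{ sub(/\).*/, "", $NF); print $NF }' | sort | uniq -c | sort -rn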