Platform change and upgrade in one step – XTTS

Today I want to tell you something about cross-platform transportable tablespaces (XTTS) and incrementally updated datafile copies. This is a very nice feature for moving to another platform and changing the Oracle version in one step. There are several MOS notes, whitepapers and of course some upgrade slides by Mike Dietrich covering this topic.

The basic steps in all cases and versions are these (a minimal command sketch follows the list):

  1. set tablespace(s) read only
  2. export metadata for those tablespaces
  3. transport datafiles and dump to target system
  4. create necessary database users
  5. import dump into target database which makes the datafiles part of the target database
  6. set tablespace(s) read write
  7. optionally, transport other objects like views, PL/SQL etc.
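For the metadata part (steps 2 and 5), a minimal Data Pump sketch could look like this; the tablespace names, credentials and paths are placeholders, while transport_tablespaces, transport_full_check and transport_datafiles are the documented transportable tablespace parameters:

# source: tablespaces are read only at this point, export the metadata
expdp system/<pwd> directory=DATA_PUMP_DIR dumpfile=tts.dmp \
    transport_tablespaces=TS1,TS2 transport_full_check=y

# target: after copying datafiles and dump, plug everything in
impdp system/<pwd> directory=DATA_PUMP_DIR dumpfile=tts.dmp \
    transport_datafiles='/u02/oradata/TRG/ts1_01.dbf','/u02/oradata/TRG/ts2_01.dbf'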

But what if the tablespaces are too big and the copy time would exceed the allowed downtime window? Then we can use incremental backups to shorten the downtime. The steps are quite similar (see the RMAN sketch after the list):

  1. backup tablespace(s) for transport
  2. copy backup to target system
  3. restore and convert datafiles to target destination and platform
  4. create incremental backup for transported tablespace(s)
  5. recover transported datafiles using the incremental backup
  6. repeat steps 4 and 5 until final downtime window
  7. set tablespace(s) read only
  8. create final incremental backup for transported tablespace(s)
  9. apply incremental backup to transported datafiles
  10. continue with step 2 of the initial process above
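The heavy lifting in steps 4, 5, 8 and 9 is done by RMAN. Roughly, one roll-forward round looks like the sketch below; the SCN, file numbers and paths are made up, and the target-side command is the 12c-style foreign datafile copy recovery the notes below are built around, so take the exact clauses from your version's documentation:

-- source: incremental backup of the transported datafiles, starting at
-- the SCN the target copies have already been recovered to
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATAFILE 5,6 FORMAT '/stage/xtts_incr_%U';

-- target: roll the transported datafile copies forward with that backup
RMAN> RECOVER FOREIGN DATAFILECOPY '/u02/oradata/TRG/ts1_01.dbf'
        FROM BACKUPSET '/stage/xtts_incr_<piece>';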

Sounds a bit complicated, doesn’t it? Luckily, Oracle was kind enough to put this whole process into scripts. The following MOS notes contain and describe these scripts:

  • 11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
  • 12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
  • V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

During a migration project from Linux to SPARC (don’t ask) I wanted to use those scripts. But I experienced the following during “Phase 2”:

oracle@server2:~/xtts> $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
============================================================
trace file is /oracle/xtts/backup_Sep20_Thu_12_20_52_917//Sep20_Thu_12_20_52_917_.log
=============================================================

[…]

--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------


--------------------------------------------------------------------
Find list of datafiles in system
--------------------------------------------------------------------

SP2-0027: Input is too long (> 2499 characters) - line ignored
SP2-0027: Input is too long (> 2499 characters) - line ignored
           from dual
           *
ERROR at line 19:
ORA-00936: missing expression

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error:
------
Error in executing /oracle/xtts/backup_Sep20_Thu_12_27_07_889//diff.sql

Okay. The reason is that I have loads of tablespaces that need to be transferred, and the long list of datafiles messed up the generated script. So I tried to understand the process and do it myself. What I learned is that it is quite easy to do manually, and I will share my findings by outlining the basic process in an upcoming blog post.

Parallelizing Standard Edition Data Pump

Today's blog post is about Data Pump. Sometimes we use this tool to migrate applications to another database, for instance to reorganize data or to avoid difficult upgrade/migration paths. With the Enterprise Edition of the Oracle Database we can do the export and import in parallel. But what if we are using Standard Edition only? Then we are bound to export and import serially, which is a major restriction, especially if time is limited.

But there is a way to make it run in parallel. The basic idea is to split the export/import into several tasks which can then run in parallel; there is no restriction on having several Data Pump processes at a time. It makes the process more complex, but it can speed up the whole migration a lot. Let’s see how it is done. In my example I import directly over a network link, which skips the step of persisting the transported data in the filesystem and makes the process even faster.
Basically there are three steps that cost a lot of time:

  1. Transport table data
  2. Create indexes
  3. Create/validate constraints

To parallelize the transport of table data, I first analyze the size of the tables. Typically there are a handful of large tables and many, many small ones. I create sets of tables that should be transported together; these sets are made up of the large tables. Then there is one more import that transports all the remaining tables. For the latter I need to exclude indexes and constraints since those will be created in parallel afterwards. For best reproducibility I use parameter files for all import runs; besides that, it makes the handling of quotes much easier than directly at the prompt. And I use “job_name” to better identify and monitor the Data Pump jobs.
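To get an idea of the table sizes, a quick look at DBA_SEGMENTS on the source database is usually enough; a minimal sketch, the schema name being a placeholder:

select segment_name, round(sum(bytes)/1024/1024/1024, 1) size_gb
from dba_segments
where owner = '<SCHEMA>'
and segment_type = 'TABLE'
group by segment_name
order by size_gb desc;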
First, the parameter files for the sets of large tables; in my example there is just one table per set:

userid=
network_link=
logfile=DATA_PUMP_DIR:import_table1.log
include=TABLE:"=<TABLE>"
job_name=table_1

userid=
network_link=
logfile=DATA_PUMP_DIR:import_table2.log
include=TABLE:"=<TABLE>"
job_name=table_2

Repeat this for all the other table sets. Now the parameter file for the rest of the data and all the other information; here I use a schema-based import and exclude the large tables from above:

userid=
network_link=
logfile=DATA_PUMP_DIR:import_therest.log
schemas=
exclude=TABLE:"in ('TABLE_1', 'TABLE_2')"
exclude=INDEX
exclude=CONSTRAINT
job_name=the_rest

Since the last job creates all required users, I start this job first and wait until the user creation is finished. After that, the jobs for the large tables can be started.
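Put together, the whole sequence could look like this sketch; the parameter file names are the hypothetical ones from above:

# session 1: the schema import runs first, it creates the users
impdp parfile=import_therest.par

# sessions 2..n: as soon as the users exist, start the large-table imports
impdp parfile=import_table1.par
impdp parfile=import_table2.par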
When all of those jobs are finished, it is time to create the indexes. To create all indexes, I use database jobs with no repeat interval; this makes the jobs go away once they finish successfully. The degree of parallelism can easily be adapted by setting the “job_queue_processes” parameter. The job action can be created using DBMS_METADATA. The following script generates a SQL script that creates one database job per index creation; a usage sketch follows the script.

set serveroutput on
begin
  dbms_output.put_line('var x number');
  -- strip storage clauses from the DDL, they are not needed on the target
  dbms_metadata.set_transform_param(
    transform_handle => dbms_metadata.session_transform,
    name             => 'STORAGE',
    value            => false
  );
  for obj in (
    select 'execute immediate ''''' || dbms_metadata.get_ddl('INDEX', index_name, owner) || ''''';' ddl
    from dba_indexes where owner=''   -- fill in the schema owner
  ) loop
    -- one database job per index; no interval, so the job vanishes on success
    dbms_output.put_line('begin dbms_job.submit(:x, what=>''' || obj.ddl || ''', next_date=>sysdate, interval=>null); end;');
    dbms_output.put_line('/');
    dbms_output.put_line('');
  end loop;
  dbms_output.put_line('commit;');
end;
/
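Running it could look like the following sketch; the script names are made up, while “job_queue_processes” is the real parameter that controls how many jobs run concurrently:

SQL> spool create_index_jobs.sql
SQL> @generate_index_jobs.sql
SQL> spool off
SQL> alter system set job_queue_processes=8;
SQL> @create_index_jobs.sql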

After that, the constraints can be created in a similar way.

set serveroutput on
begin
  dbms_output.put_line('var x number');
  -- SQLTERMINATOR must be false here: the DDL gets wrapped in EXECUTE
  -- IMMEDIATE and must not carry a trailing semicolon
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', false);
  for obj in (
    select 'execute immediate ''''' || dbms_metadata.get_ddl('CONSTRAINT', constraint_name, owner) || ''''';' ddl
    from dba_constraints where owner=''   -- fill in the schema owner
    and constraint_type in ('U', 'P')
  ) loop
    dbms_output.put_line('begin dbms_job.submit(:x, what=>''' || obj.ddl || ''', next_date=>sysdate, interval=>null); end;');
    dbms_output.put_line('/');
    dbms_output.put_line('');
  end loop;
  dbms_output.put_line('commit;');
end;
/

This creates all the unique and primary key constraints. If you wish to include more, simply adapt the script; be aware that I did not include all constraint types. Finally, I do a last metadata-only import to make sure all remaining objects are copied to the new environment.
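Such a final pass could use a parameter file like this sketch; content and table_exists_action are the documented Data Pump parameters, the rest are placeholders as above:

userid=
network_link=
logfile=DATA_PUMP_DIR:import_metadata.log
schemas=
content=METADATA_ONLY
table_exists_action=SKIP
job_name=final_metadata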

The approach can also be used to speed up Data Pump for Enterprise Edition, because depending on the version, even a parallel Data Pump job transports only the table data in parallel and still does the index creation etc. serially.

Hope this helps.

Transportable Tablespaces, Deferred Segment Creation and SE

Right now I am in the middle of a project which is about moving an old 11.2.0.2 Oracle Enterprise Edition database of roughly 1.5TB from the US to new hardware in Germany, including an upgrade to 12.2.0.1 and a switch to Standard Edition 2. As you see, there are a couple of things to do and I am more than happy to do this challenging project. The basic plan is this (a sketch of the recovery loop follows the list):

  1. Create a new 12.2.0.1 database in Standard Edition 2
  2. Get an initial copy of the source database to Germany
  3. Restore that copy on the new hardware with an 11.2.0.2 Oracle Home
  4. Get the archivelogs on a regular basis from the source database in the US
  5. Recover the German copy of the database
  6. Repeat 4 and 5 until cut-over
  7. Open the German copy of the database with RESETLOGS
  8. Move the data to the new 12.2.0.1 database using transportable tablespace
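Steps 5 and 7 boil down to a classic manual recovery of a restored copy; a minimal SQL*Plus sketch, with paths and archivelog handling omitted:

SQL> -- apply the archivelogs shipped from the US
SQL> recover database using backup controlfile until cancel;
[…]
SQL> -- at cut-over, after the last batch of archivelogs:
SQL> alter database open resetlogs;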

According to the Licensing Guide it is allowed to plug transportable tablespaces into an SE2 database, so I am completely happy with my approach.

But during the test phase of this migration I encountered a nasty error when plugging in the tablespace. I obfuscated some identifiers, so don’t be too strict:

Import: Release 12.2.0.1.0 - Production on Thu May 31 15:32:01 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=copy_dump logfile=some_ts.impdp.log dumpfile=some_ts.dmpdp transport_datafiles=/u02/app/oracle/oradata/SOMEDB/SOMEDB/datafile/e*dbf 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"SOME_USER"."SOME_TABLE" failed to create with error:
ORA-01647: tablespace 'SOME_TS' is read-only, cannot allocate space in it

Failing sql is:
CREATE TABLE "SOME_USER"."SOME_TABLE" (<Column list stripped>) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "SOME_TS" 

Okay, the tablespace is read only. That is how it is supposed to be. A quick search at MOS revealed the note “ORA-39083 & ORA-01647 during TTS or EXPDP/IMPDP from Enterprise Edition to Standard Edition when deferred_segment_creation=True (Doc ID 2288535.1)”, which states that deferred segment creation is the root cause. That makes sense somehow. The note further states that the workaround is either to use the version parameter of “impdp” during the TTS import or to disable deferred segment creation on the source database before starting the TTS.
To cut a long story short, both workarounds do not work and end up with the exact same error. In my opinion this makes total sense, because the segment for the table in question is simply not present in the transported tablespace. And it cannot be created during the TTS import since the tablespace is marked read only. It is not created when I use the version parameter for impdp, nor when I disable deferred segment creation.
The only feasible solution is to create the segment at the source database before setting the tablespace to read only. A simple

SQL> alter table SOME_USER.SOME_TABLE allocate extent;

Table altered.

does the trick. And that’s it. After creating a segment for this table the TTS import went fine without any further errors.
To make sure all tables do have a segment created, I use this query to generate the necessary statements:

SQL>  select 'alter table ' || owner || '.' || table_name || ' allocate extent;'
  2   from dba_tables where SEGMENT_CREATED='NO'
  3*  and owner='&USERNAME.';

So when using transportable tablespaces to move to an SE2 database and the source is 11.2 or higher (the release that introduced deferred segment creation), you better check that in advance. Hope that helps.

Oracle 12.2.0.1 and the UPDATE privilege

Preface

Some days ago we upgraded an application schema from 11.2.0.4 to 12.2.0.1 using a simple Data Pump export/import. After the upgrade we found some errors inside the application. There are two users: the failing process connects as user A and tries to update data in a table owned by user B. User A has the UPDATE privilege on the table owned by user B, but only UPDATE, nothing else. This setup was fine in 11.2, but it is not anymore in 12.2. The reason is that the default for “sql92_security” has changed from FALSE to TRUE. If this parameter is set to TRUE, the SELECT privilege is required to UPDATE or DELETE from tables, as the documentation clearly states.

11.2.0.4 Setup

SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     FALSE


SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production



SQL> create user a identified by a quota unlimited on users;

User created.

SQL> create user b identified by b;

User created.

SQL> grant create session to a;

Grant succeeded.

SQL> grant create session to b;

Grant succeeded.

SQL> grant create table to a;

Grant succeeded.

11.2.0.4 Behaviour

Now I can create a table as user A and grant only the UPDATE privilege to user B; thus B is able to change data.

SQL> conn a/a
Connected.
SQL> create table t as select * from all_tables;

Table created.

SQL> grant update on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

12.2.0.1 Setup

I use the same setup, now in a 12.2.0.1 database.

SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     TRUE
SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production              0
PL/SQL Release 12.2.0.1.0 - Production                                                    0
CORE    12.2.0.1.0      Production                                                                0
TNS for Linux: Version 12.2.0.1.0 - Production                                            0
NLSRTL Version 12.2.0.1.0 - Production                                                    0

SQL> create user a identified by a quota unlimited on users;

User created.

SQL> create user b identified by b;

User created.

SQL> grant create session to a;

Grant succeeded.

SQL> grant create session to b;

Grant succeeded.

SQL> grant create table to a;

Grant succeeded.

12.2.0.1 Behaviour

Now I try the same update as above:

SQL> conn a/a
Connected.
SQL> create table t as select * from all_tables;

Table created.

SQL> grant update on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';
update a.t set owner='CHANGED' where owner='SYSTEM'
         *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL92 requires the additional SELECT privilege to allow updates on objects in other schemas; the READ privilege is not sufficient. This makes sense somehow, since only the SELECT privilege allows locking rows in a table with “SELECT … FOR UPDATE”. Let’s see:

SQL> conn a/a
Connected.
SQL> grant read on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';
update a.t set owner='CHANGED' where owner='SYSTEM'
         *
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> conn a/a
Connected.
SQL> revoke read on t from b;

Revoke succeeded.

SQL> grant select on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.
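As a side note, the difference between READ and SELECT also shows up with row locking; a quick sketch of what to expect, as user B:

-- with only READ granted on a.t, locking rows is not allowed:
select owner from a.t where rownum = 1 for update;  -- fails with ORA-01031
-- with SELECT granted instead, the same statement succeeds and locks the row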

Changing the parameter

Instead of adding privileges I can also revert the “sql92_security” parameter back to the 11.2 default of FALSE. The parameter is static, so I have to bounce the database.

SQL> conn / as sysdba
Connected.
SQL> alter system set sql92_security=false scope=spfile;

System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  8628936 bytes
Variable Size             658507064 bytes
Database Buffers          398458880 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     FALSE
SQL> conn a/a
Connected.
SQL> revoke select on t from b;

Revoke succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

Conclusion

Be aware of this change. When upgrading to 12.2 check the table privileges and search for users that have UPDATE only privileges on tables. I recommend adding the SELECT privilege instead of changing the parameter.
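To find affected grants up front, a query against DBA_TAB_PRIVS like this sketch can help; it lists grants that include UPDATE but not SELECT:

select grantee, owner, table_name
from dba_tab_privs
where privilege = 'UPDATE'
and (grantee, owner, table_name) not in (
    select grantee, owner, table_name
    from dba_tab_privs
    where privilege = 'SELECT'
);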