Restore Points and Upgrade – not for SE2

This time I want to tell you something about Oracle database upgrades and what to do if something goes wrong during the upgrade. In that case the database needs to be reset to the point in time before the upgrade started. The Database Upgrade Assistant as well as the AutoUpgrade tool can create a guaranteed restore point for that purpose. That means that even with Flashback Database disabled, the database generates flashback logs beginning with the creation of that restore point. These flashback logs are deleted only when the guaranteed restore point is dropped. For Standard Edition 2, AutoUpgrade does not create a restore point, and there is a reason for that. I tried it myself before doing a manual upgrade, and these are my results.

First, I checked the database and created a guaranteed restore point.

[oracle@vm123 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Feb 10 08:08:16 2021

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

SQL> col name format a60
SQL> set lines 200
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------------------------------------------
NO

SQL> create restore point upgrade_db guarantee flashback database;

Restore point created.

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------------------------------------------
RESTORE POINT ONLY

SQL> select * from v$flashback_database_logfile;

NAME                                                               LOG#    THREAD#  SEQUENCE#      BYTES FIRST_CHANGE# FIRST_TIME          TYPE
------------------------------------------------------------ ---------- ---------- ---------- ---------- ------------- ------------------- ---------------------------
/u01/app/oracle/fra/ORCL112B/flashback/o1_mf_j271mq1h_.flb            1          1          1 1073741824      46358813 2021-02-10 08:08:44 NORMAL
/u01/app/oracle/fra/ORCL112B/flashback/o1_mf_j271mzof_.flb            2          1          1 1073741824             0                     RESERVED

SQL> select NAME, TIME, GUARANTEE_FLASHBACK_DATABASE, PRESERVED from v$restore_point;

NAME                                                         TIME                                                                        GUARANTEE PRESERVED
------------------------------------------------------------ --------------------------------------------------------------------------- --------- ---------
UPGRADE_DB                                                   10.02.21 08:08:38,000000000                                                 YES       YES

You see, even in Standard Edition I can create a guaranteed restore point, and V$DATABASE reflects this, telling us that Flashback is enabled just for the guaranteed restore point. Also, some flashback logs are being created to hold the data necessary for performing the flashback. So everything looks fine at this point.

But what will happen when I try to rewind my database to that restore point? Let’s have a look.

[oracle@vm123 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Feb 10 08:16:00 2021

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

SQL> select NAME, TIME, GUARANTEE_FLASHBACK_DATABASE, PRESERVED from v$restore_point;

NAME          TIME                           GUARANTEE PRESERVED
------------- ------------------------------ --------- ---------
UPGRADE_DB    10.02.21 08:08:38,000000000    YES       YES

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------------------------------------------
RESTORE POINT ONLY

SQL> select status from v$instance;

STATUS
------------------------------------
MOUNTED

SQL> flashback database to restore point upgrade_db;
flashback database to restore point upgrade_db
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database

So even though we have everything in place to perform the Flashback operation, the database denies it. That is because Flashback Database is an Enterprise Edition feature. The fact that we can easily create a guaranteed restore point in a Standard Edition database is a little misleading, so be aware of that. In my opinion it would be better to get the error message right away when creating such a restore point, just as we do when trying to enable Flashback Database in general:

SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-00439: feature not enabled: Flashback Database

In case you have a single upgrade strategy for your databases that covers both Enterprise and Standard Edition, do not rely on the successful creation of a restore point as your fallback. Differentiate between the two editions and choose your backup wisely.
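
If you want to check up front which edition you are dealing with, the banner tells you:

SQL> select banner from v$version where banner like 'Oracle Database%';

Only an Enterprise Edition banner means that Flashback Database is actually available.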

Misleading ORA-1555

Recently a colleague of mine was upgrading a database from 12.1.0.2 non-CDB to a 19c PDB. Since the database was more than 1 TB in size, he used the following approach:

  1. Create a 19c CDB without any PDBs
  2. Start up the old 12.1 database read only
  3. Create the XML manifest for plugging in the database
  4. Stop the old 12.1 database
  5. Plug the database into the 19c CDB using the XML and the NOCOPY option
  6. Upgrade the new PDB

So let me outline that in a bit more detail. This is what was done at the source database.

SQL> shutdown immediate
SQL> startup open read only

SQL> begin
2      dbms_pdb.describe(pdb_descr_file => '/u01/software/dbp/xxxx.xml');
3    end;
4    /

SQL> shutdown immediate

Having that, the next steps were done at the destination 19c CDB.
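
A step worth adding here, by the way: before the actual plug-in you can verify the XML manifest against the target CDB. A minimal sketch, reusing the XML path from above:

SQL> set serveroutput on
SQL> declare
  2    compatible boolean;
  3  begin
  4    compatible := dbms_pdb.check_plug_compatibility(
  5                    pdb_descr_file => '/u01/software/dbp/xxxx.xml',
  6                    pdb_name       => 'XXXX');
  7    dbms_output.put_line(case when compatible then 'YES' else 'NO' end);
  8  end;
  9  /

If this prints "NO", the view PDB_PLUG_IN_VIOLATIONS lists the reasons.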

SQL> CREATE PLUGGABLE DATABASE XXXX USING '/u01/software/dbp/xxxx.xml'
2    NOCOPY
3    TEMPFILE REUSE;

SQL> alter pluggable database xxxx open upgrade;

$ cd $ORACLE_HOME/rdbms/admin
$ $ORACLE_HOME/perl/bin/perl catctl.pl -c "xxxx" -l /u01/oracle catupgrd.sql

And then, after rebooting everything, the fun started. Nearly every application that connected to the new PDB, including SQL Developer, got an ORA-1555 error:

ORA-00604: error occurred at recursive SQL level 1
ORA-01555: snapshot too old: rollback segment number 10 with name "_SYSSMU10_2194929394$" too small

Since this somehow looked like an undo retention issue, we checked the undo_retention parameter and set it to a higher value, but without any change. Next, we checked all the LOBs inside the database and set their retention accordingly, still no change. We even created a new UNDO tablespace and switched the PDB to that newly created tablespace, but still no change. The alert.log had no additional information either. That is why we then set some events to get traces. There is a nice explanation here: https://blogs.oracle.com/db/master-note-for-ora-1555-errors.

SQL> alter system set events '1555 trace name errorstack forever, level 3';
SQL> alter system set events '10442 trace name context forever, level 10';

We then raised the error again and searched for the traces. But there were no traces at all. At least not where we would have expected them.

So I searched the ADR for any trace files that were created within the last 60 minutes.

$ find $ORACLE_BASE -mmin -60 -name "*ora*trc" -type f

And I found one, but in the directory of the old 12.1 database:

Trace file /u01/app/oracle/diag/rdbms/xxxx/XXXX/trace/XXXX_ora_12346.trc
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1
System name:    Linux
Node name:      xxxx0003
Release:        4.12.14-95.6-default
Version:        #1 SMP Thu Jan 17 06:04:39 UTC 2019 (6af4ef8)
Machine:        x86_64
Instance name: XXXX
Redo thread mounted by this instance: 1
Oracle process number: 33
Unix process pid: 12346, image: oracle@xxxx0003


*** 2021-01-30 18:32:16.011
*** SESSION ID:(695.19613) 2021-01-30 18:32:16.011
*** CLIENT ID:() 2021-01-30 18:32:16.011
*** SERVICE NAME:(XXXX) 2021-01-30 18:32:16.011
*** MODULE NAME:(RCP:xxxxx:64040) 2021-01-30 18:32:16.011
*** CLIENT DRIVER:(jdbcthin : 19.3.0.0.0) 2021-01-30 18:32:16.011
*** ACTION NAME:() 2021-01-30 18:32:16.011

Attempt to read  undo record @ tsn: 188 uba: 0x01410eff.22ec.3e for CR-application for  xid: 0x000a.005.00007679 failed

And then I had the strange feeling that there might be something else accessing the database. A quick check revealed that the old 12.1 database was running again, for whatever reason.

My assumption is that due to the plug-in operation the database got new SYSTEM and UNDO tablespaces, and the controlfiles belong to the CDB anyway. So the old 12.1 non-CDB and the new 19c PDB shared only the user datafiles and could start up concurrently without any trouble. Only when accessing the user data did the instance want to perform a rollback for read consistency, and it was not able to find the proper undo segment since that undo segment resided in another place. That is my theory.

Finally, we stopped and disabled the old 12.1 database (instance). From that point on the PDB worked like a charm, and fortunately there were no corruptions at all as far as we checked.

Conclusion: If you migrate from non-CDB to PDB on the same host using the NOCOPY option, prevent the old instance from ever starting again. Otherwise you might run into serious trouble.
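
Assuming the old database was registered with Oracle Restart or Clusterware (database name obfuscated as above), stopping and disabling it could look like this:

$ srvctl stop database -d xxxx
$ srvctl disable database -d xxxx

If you start your databases via /etc/oratab and dbstart instead, set the start flag of the old entry to "N" or remove the entry altogether.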

Purging ADR home of upgraded Grid Infrastructure might fail

The second blog post today is about housekeeping. I typically use the “purgeLogs” script from MOS Doc ID 2081655.1. It was initially designed for engineered systems, but it works well on classic systems too. Today, however, I found an error in the output:

2019-06-21 00:00:06: I adrci GI purging diagnostic destination diag/crs/odax6m/crs
2019-06-21 00:00:06: I ... purging ALERT older than 8 days
2019-06-21 00:00:06: W Not able to purge ALERT, due to error: DIA-49803: Purge not possible due to incompatible schema version.

Typically this happens when you use an “adrci” binary that does not match the ADR home you are trying to purge. That is probably the reason why “purgeLogs” tries different adrci binaries for purging the RDBMS homes. But what is wrong here? Let’s try the manual way.

oracle@odax6m ~> adrci

ADRCI: Release 18.0.0.0.0 - Production on Fri Jun 21 13:53:17 2019

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"
adrci> set home crs
adrci> purge
DIA-49803: Purge not possible due to incompatible schema version.

When I put that message into MOS, I quickly found this bug: Bug 28375106 – ADRCI Purge of Grid Home Fails With “DIA-49803: Purge not possible due to incompatible schema version” After Upgrade from 12.1 (Doc ID 28375106.8). So I cross-checked that.

adrci> show version schema
Schema version (on-disk): 107
Schema version (library): 110

And indeed, the versions are different. Obviously the issue is related to upgrades. In my case this is an Oracle Database Appliance that has been patched from 18.3 to 18.5. Unfortunately MOS has no workaround for this. But let’s check what “adrci” may be able to do for me.

adrci> help extended

 HELP [topic]
   Available Topics:
        BEGIN BACKUP
        CD
        CREATE STAGING XMLSCHEMA
        CREATE VIEW
        DDE
        DEFINE
        DELETE
        DESCRIBE
        DROP VIEW
        END BACKUP
        INSERT
        LIST DEFINE
        MERGE ALERT
        MERGE FILE
        MERGE LOG
        MIGRATE
        QUERY
        REPAIR
        SET COLUMN
        SHOW CATALOG
        SHOW DUMP
        SHOW SECTION
        SHOW TRACE
        SHOW TRACEMAP
        SWEEP
        UNDEFINE
        UPDATE
        VIEW

The “MIGRATE” option sounds interesting.

adrci> help migrate

  Usage: MIGRATE [RELATION  | SCHEMA] [-DOWNGRADE | -RECOVER]

  Purpose: Upgrades the relation or schema to the latest version

  Options:
    [-downgrade] Downgrade schema to previous version
    [-recover]   Recover schema from failed migrate

  Examples:
    migrate schema
    migrate relation incident

Sounds like exactly what I want to do. Let’s give it a try.

adrci> migrate schema
Schema migrated.
adrci> show version schema
Schema version (on-disk): 110
Schema version (library): 110

And voilà, now I can purge the Grid Infrastructure ADR home again.

adrci> purge
adrci>

Since the bug is about the behaviour after upgrading from 12.1, this might apply to other upgrade or update paths too. Be aware of that and monitor your housekeeping scripts. And check your ADR schema version after you upgrade or patch your Grid Infrastructure installation.
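
To script such a check, “adrci” can also be called non-interactively, for example:

$ adrci exec="set home crs; show version schema"

A mismatch between the on-disk and library versions is the indicator to run “migrate schema” as shown above.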

Platform change and upgrade in one step – XTTS

Today I want to tell you something about cross-platform transportable tablespaces (XTTS) and incrementally updated datafile copies. This is a very nice feature for moving to another platform and changing the Oracle version in one step. There are several MOS notes, whitepapers and of course some upgrade slides by Mike Dietrich covering this topic.

The basic steps in all cases and versions are these:

  1. Set tablespace(s) read only
  2. Export metadata for those tablespaces
  3. Transport datafiles and dump to the target system
  4. Create the necessary database users
  5. Import the dump into the target database, which makes the datafiles part of the target database
  6. Set tablespace(s) read write
  7. Optionally, transport other objects like views, PL/SQL etc.

But what if the tablespaces are too big and the copy time would exceed the allowed downtime window? Then we can use incremental backups to shorten the downtime. The steps are quite similar; a small sketch of the underlying roll-forward idea follows the list.

  1. Back up the tablespace(s) for transport
  2. Copy the backup to the target system
  3. Restore and convert the datafiles to the target destination and platform
  4. Create an incremental backup for the transported tablespace(s)
  5. Recover the transported datafiles using the incremental backup
  6. Repeat steps 4 and 5 until the final downtime window
  7. Set tablespace(s) read only
  8. Create a final incremental backup for the transported tablespace(s)
  9. Apply the incremental backup to the transported datafiles
  10. Continue with step 2 of the initial process above
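
The roll-forward in steps 4 and 5 is essentially RMAN’s incrementally updated copy technique. On a single platform the core idea looks like this (a simplified sketch with a made-up tag; the XTTS scripts additionally take care of the SCN tracking and the cross-platform conversion):

RMAN> backup incremental level 1 for recover of copy with tag 'XTTS_MIG' database;
RMAN> recover copy of database with tag 'XTTS_MIG';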

Sounds a bit complicated, doesn’t it? Fortunately, Oracle was kind enough to put this whole process into scripts. The following MOS notes contain and describe these scripts:

  • 11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
  • 12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
  • V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)

During a migration project from Linux to SPARC (don’t ask) I wanted to use those scripts. But during “Phase 2” I experienced the following:

oracle@server2:~/xtts> $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
============================================================
trace file is /oracle/xtts/backup_Sep20_Thu_12_20_52_917//Sep20_Thu_12_20_52_917_.log
=============================================================

[…]

--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------


--------------------------------------------------------------------
Find list of datafiles in system
--------------------------------------------------------------------

SP2-0027: Input is too long (> 2499 characters) - line ignored
SP2-0027: Input is too long (> 2499 characters) - line ignored
           from dual
           *
ERROR at line 19:
ORA-00936: missing expression

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error:
------
Error in executing /oracle/xtts/backup_Sep20_Thu_12_27_07_889//diff.sql

Okay. The reason is that I have loads of tablespaces that need to be transferred, and this long list messed up the whole process. So I tried to understand the process and just do it myself. What I learned is that it is quite easy to do manually, and I will share my findings by outlining the basic process in an upcoming blog post.
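
If you would rather stick with the scripts, one possible workaround might be to split the tablespace list in xtt.properties into several smaller batches and run the phases once per batch. I have not tested this, so treat it as an idea rather than a recipe (tablespace names are placeholders):

tablespaces=TS_BATCH1_A,TS_BATCH1_B,TS_BATCH1_C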

Transportable Tablespaces, Deferred Segment Creation and SE

Right now I am in the middle of a project which is about moving an old 11.2.0.2 Oracle Enterprise Edition database of roughly 1.5 TB from the US to new hardware in Germany, including an upgrade to 12.2.0.1 and a switch to Standard Edition 2. As you see, there are a couple of things to do, and I am more than happy to do this challenging project. The basic plan is this:

  1. Create a new 12.2.0.1 database in Standard Edition 2
  2. Get an initial copy of the source database to Germany
  3. Restore that copy on the new hardware with an 11.2.0.2 Oracle Home
  4. Get the archivelogs from the source database in the US on a regular basis
  5. Recover the German copy of the database
  6. Repeat steps 4 and 5 until cut-over
  7. Open the German copy of the database with RESETLOGS
  8. Move the data to the new 12.2.0.1 database using transportable tablespaces

According to the Licensing Guide it is allowed to plug transportable tablespaces into an SE2 database. So I am completely happy with my approach.
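
Before exporting the metadata, it is also worth verifying up front that the tablespace set is self-contained. A minimal check (the tablespace name is a placeholder):

SQL> exec dbms_tts.transport_set_check('SOME_TS', true);
SQL> select * from transport_set_violations;

Any rows returned by the second query have to be resolved before the transport.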

But during the test phase of this migration I encountered a nasty error when plugging in the tablespace. I have obfuscated some identifiers, so don’t be too strict:

Import: Release 12.2.0.1.0 - Production on Thu May 31 15:32:01 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=copy_dump logfile=some_ts.impdp.log dumpfile=some_ts.dmpdp transport_datafiles=/u02/app/oracle/oradata/SOMEDB/SOMEDB/datafile/e*dbf 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"SOME_USER"."SOME_TABLE" failed to create with error:
ORA-01647: tablespace 'SOME_TS' is read-only, cannot allocate space in it

Failing sql is:
CREATE TABLE "SOME_USER"."SOME_TABLE" (<Column list stripped>) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "SOME_TS" 

Ok, the tablespace is read only. That is how it is supposed to be. A quick search on MOS revealed the note “ORA-39083 & ORA-01647 during TTS or EXPDP/IMPDP from Enterprise Edition to Standard Edition when deferred_segment_creation=True (Doc ID 2288535.1)”, which states that deferred segment creation is the root cause. That somehow makes sense. The note further states that the workaround is either to use the version parameter of “impdp” during the TTS import or to disable deferred segment creation at the source database before starting the TTS.
To cut a long story short, both workarounds do not work and end up with the exact same error. In my opinion this makes total sense, because the segment for the table in question is simply not present in the transported tablespace. And it cannot be created during the TTS import since the tablespace is marked read only. It is not created when I use the version parameter for impdp, nor is it created when I disable deferred segment creation.
The only feasible solution is to create the segment at the source database before setting the tablespace to read only. A simple

SQL> alter table SOME_USER.SOME_TABLE allocate extent;

Table altered.

does the trick. And that’s it. After creating a segment for this table, the TTS import went through without any further errors.
To make sure all tables have a segment created, I use this query:

SQL>  select 'alter table ' || owner || '.' || table_name || ' allocate extent;'
  2   from dba_tables where SEGMENT_CREATED='NO'
  3*  and owner='&USERNAME.';

So when you are using transportable tablespaces to move to an SE2 database and the source is 11.2 or higher (deferred segment creation was introduced in 11gR2), you had better check that in advance. Hope that helps.
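
In addition, you can prevent new segment-less tables from appearing at the source while the migration is being prepared by disabling deferred segment creation. Note that this only affects tables created afterwards; existing tables still need the ALLOCATE EXTENT treatment shown above.

SQL> alter system set deferred_segment_creation = false;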

Grid Infrastructure Upgrade to 12.2 requires reboot

Last week I had to upgrade a two-node Oracle cluster running Grid Infrastructure 12.1.0.2 with the April 2017 bundlepatch on Oracle Linux 7. The interesting thing is that the cluster uses the ASM Filter Driver (AFD) to present the disks to the GI. Since there were some caveats, I will walk you through the steps that led to a running 12.2 cluster. Unfortunately I have no terminal output or screenshots, but I am sure you will get the idea.

First, we updated the nodes OS-wise. So at the end we had OL 7.4 with the latest UEK kernel available at the time of patching. That went smoothly.

Second, we installed the new Grid Infrastructure 12.2 software. To do that, we extracted the ZIP to its final location as described in the documentation. Then we ran “gridSetup.sh” from this location, chose “Software only” and selected both nodes. This prepares the Oracle Homes on both nodes but does nothing else.

The next step was to patch the GI software to the latest (180116) bundlepatch. This is generally a good idea in order to fix as many issues as possible before setting up the cluster. The bundlepatch also provides newer versions of the kernel modules, which is important in our case since we updated the kernel in the first step. But since we do not have a running 12.2 cluster at the time of patching, we cannot use the “opatchauto” functionality to apply the patch. Instead, we needed to update OPatch to the latest version on both nodes and then apply the patches that come with the GI bundlepatch one by one, like this:

oracle$ export ORACLE_HOME=/u01/app/grid/12.2.0.1
oracle$ export PATH=$ORACLE_HOME/OPatch:$PATH
oracle$ cd /tmp/27100009
oracle$ cd 26839277
oracle$ opatch apply .
oracle$ cd ../27105253
oracle$ opatch apply .
oracle$ cd ../27128906
oracle$ opatch apply .
oracle$ cd ../27144050
oracle$ opatch apply .
oracle$ cd ../27335416
oracle$ opatch apply .

Note that this was run as the owner of the GI home, “oracle” in our case.
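
Of course the same sequence can be scripted. A small sketch, assuming the bundlepatch was extracted to /tmp/27100009 as above:

oracle$ export ORACLE_HOME=/u01/app/grid/12.2.0.1
oracle$ export PATH=$ORACLE_HOME/OPatch:$PATH
oracle$ for p in 26839277 27105253 27128906 27144050 27335416; do
>   (cd /tmp/27100009/$p && opatch apply -silent)
> done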

Before running the upgrade, we need to check whether there is sufficient space available for the GIMR. Unfortunately the upgrade process creates the new GIMR in the same diskgroup that is used for storing the OCR and voting disk, even if the GIMR is currently stored in another diskgroup. In contrast, a fresh installation can use a separate diskgroup for the GIMR. So be aware of that.

At this point we can start the upgrade process by running “gridSetup.sh” again and selecting the “Upgrade” option. Quickly we come to the point where the root scripts need to be run. That is where the fun starts. In our case “rootupgrade.sh” failed at the step where the AFD driver is updated.

CLSRSC-400: A system reboot is required to continue installing.

The reason for that is that the “oracleafd” kernel module is in use and thus cannot be unloaded.

[root ~]# lsmod |grep afd
oracleafd             204227  1
[root ~]# modprobe -r oracleafd
modprobe: FATAL: Module oracleafd is in use.

There are similar issues documented in MOS, but none of them matched our scenario and/or patch level.

So a reboot is required, nice. That means our “gridSetup.sh” GUI, which still has some work to do, will go away. Fortunately the documentation has a solution for that: we shall reboot the node and then run “gridSetup.sh” again, providing a response file. What the documentation does not tell you is that this response file has already been created in $ORACLE_HOME/install/response. We can identify the file by its name and timestamp.

So we went ahead and rebooted the first node. After it came up again, we checked the kernel modules, found “oracleafd” loaded again, but this time we were able to unload it.

[root ~]# lsmod |grep afd
oracleafd             204227  1
[root ~]# modprobe -r oracleafd
[root ~]# lsmod |grep afd
[root ~]# 

Maybe this step is not necessary, but it helped us stay calm at this point. We started “rootupgrade.sh” again, and this time it ran fine without any errors.

The next step was to run “rootupgrade.sh” on the remaining node. It ran into the same issue, so we rebooted that node, unloaded the “oracleafd” kernel module and ran “rootupgrade.sh” again, which then completed without errors.

We were now up and running with GI 12.2. The final step was to run “gridSetup.sh” again as described in the documentation to finalize the upgrade.

oracle$ $ORACLE_HOME/gridSetup.sh -executeConfigTools -responseFile $ORACLE_HOME/install/response/gridsetup.rsp

That went smoothly, and the cluster was finally upgraded to 12.2. As a last step we reconfigured the GIMR to use its dedicated diskgroup again. This is described in MOS Note 2065175.1 and is quite straightforward.

That’s it for today. I hope it helps you stay calm during your cluster upgrades.

Oracle 12.2.0.1 and the UPDATE privilege

Preface

Some days ago we upgraded an application schema from 11.2.0.4 to 12.2.0.1 using a simple Data Pump export/import. After the upgrade we found some errors inside the application. There are two users; the failing process connects as user A and tries to update data in a table owned by user B. User A has the UPDATE privilege on the table owned by user B, but only UPDATE, nothing else. This setup was fine in 11.2, but it is not anymore in 12.2. The reason is that the default of “sql92_security” has changed from FALSE to TRUE. If this parameter is set to TRUE, the SELECT privilege is required to UPDATE or DELETE from tables, as the documentation clearly states.

11.2.0.4 Setup

SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     FALSE


SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production



SQL> create user a identified by a quota unlimited on users;

User created.

SQL> create user b identified by b;

User created.

SQL> grant create session to a;

Grant succeeded.

SQL> grant create session to b;

Grant succeeded.

SQL> grant create table to a;

Grant succeeded.

11.2.0.4 Behaviour

Now I can create a table as user A, grant the UPDATE privilege only to user B, and thus B is able to change data.

SQL> conn a/a
Connected.
SQL> create table t as select * from all_tables;

Table created.

SQL> grant update on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

12.2.0.1 Setup

I use the same setup, now in a 12.2.0.1 database.

SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     TRUE
SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production              0
PL/SQL Release 12.2.0.1.0 - Production                                                    0
CORE    12.2.0.1.0      Production                                                                0
TNS for Linux: Version 12.2.0.1.0 - Production                                            0
NLSRTL Version 12.2.0.1.0 - Production                                                    0

SQL> create user a identified by a quota unlimited on users;

User created.

SQL> create user b identified by b;

User created.

SQL> grant create session to a;

Grant succeeded.

SQL> grant create session to b;

Grant succeeded.

SQL> grant create table to a;

Grant succeeded.

12.2.0.1 Behaviour

Now I try the same update as above:

SQL> conn a/a
Connected.
SQL> create table t as select * from all_tables;

Table created.

SQL> grant update on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';
update a.t set owner='CHANGED' where owner='SYSTEM'
         *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL92 requires the additional SELECT privilege to allow updates on foreign objects. The READ privilege is not sufficient. This somehow makes sense, since with the SELECT privilege, in contrast to READ, we are able to lock rows in a table with “SELECT … FOR UPDATE”. Let’s see:

SQL> conn a/a
Connected.
SQL> grant read on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';
update a.t set owner='CHANGED' where owner='SYSTEM'
         *
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> conn a/a
Connected.
SQL> revoke read on t from b;

Revoke succeeded.

SQL> grant select on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

Changing the parameter

Instead of adding privileges, I can also revert the “sql92_security” parameter back to the 11.2 default of FALSE. The parameter is static, so I have to bounce the database.

SQL> conn / as sysdba
Connected.
SQL> alter system set sql92_security=false scope=spfile;

System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  8628936 bytes
Variable Size             658507064 bytes
Database Buffers          398458880 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     FALSE
SQL> conn a/a
Connected.
SQL> revoke select on t from b;

Revoke succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

Conclusion

Be aware of this change. When upgrading to 12.2, check the table privileges and search for users that have UPDATE-only privileges on tables. I recommend adding the SELECT privilege rather than changing the parameter.
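
As a starting point, a query like the following (run as a DBA) lists grants of UPDATE or DELETE that are not accompanied by a SELECT grant on the same table:

SQL> select p.grantee, p.owner, p.table_name, p.privilege
  2  from dba_tab_privs p
  3  where p.privilege in ('UPDATE', 'DELETE')
  4  and not exists (
  5        select 1 from dba_tab_privs s
  6        where s.grantee = p.grantee
  7        and s.owner = p.owner
  8        and s.table_name = p.table_name
  9        and s.privilege = 'SELECT');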