Empty Backup Piece in RMAN

Yesterday I needed to do a complete database restore because the DATA diskgroup was corrupted and could not be mounted:

SQL> alter diskgroup data mount;
alter diskgroup data mount
ERROR at line 1:
ORA-15096: lost disk write detected

This was caused by a power supply interruption and buffered writes that never made it to disk. So the only solution was to drop and re-create the diskgroup from scratch, losing all the data inside that DATA diskgroup.

SQL> drop diskgroup data force including contents;

Diskgroup dropped.

SQL> create diskgroup data external redundancy disk '\\.\ORCLDISKDATA0', '\\.\ORCLDISKDATA1';

Diskgroup created.

Now, since every datafile was gone, I needed to restore the database from scratch. Fortunately there was another diskgroup, FRA, which was not corrupted, and it held a copy of the controlfile. So I first started the instance with a simple pfile in order to restore the spfile afterwards.
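For reference, such a temporary pfile can be as minimal as this (the file name and the controlfile path are illustrative, not taken from the actual system):

# c:\temp\initmydb.ora -- minimal bootstrap pfile, values are examples
*.db_name='MYDB'
*.control_files='+FRA/MYDB/CONTROLFILE/current.256.883008933'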


Then I went to RMAN and did the restore:

RMAN> startup mount;
RMAN> restore spfile to 'c:\temp\spfilemydb.ora';
RMAN> shutdown immediate;

Next I created a temporary pfile from that restored spfile, changed the control_files parameter to '+DATA','+FRA' in order to get two brand-new controlfiles, and created an spfile inside ASM, which I then used to restart the instance.
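Sketched out, those steps looked roughly like this (file names and the ASM controlfile path are illustrative, not quoted from the original session):

SQL> create pfile='c:\temp\initmydb.ora' from spfile='c:\temp\spfilemydb.ora';
-- edit c:\temp\initmydb.ora: *.control_files='+DATA','+FRA'
SQL> create spfile='+DATA/MYDB/spfilemydb.ora' from pfile='c:\temp\initmydb.ora';
SQL> startup nomount;
RMAN> restore controlfile from '+FRA/MYDB/CONTROLFILE/current.256.883008933';
RMAN> alter database mount;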

Now follows the funny part. I started the database restore which took a long time but went fine.

RMAN> restore database;

Easy. Now the recovery. I checked the available backup pieces, which were spread over tapes and removable disks. I found one backup piece that sounded like an archivelog backup but, according to the controlfile, contained nothing.

RMAN> list backuppiece 'G:\ARC_883008933_S28127_P1';

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
13625   12924   1   1   AVAILABLE   DISK        G:\ARC_883008933_S28127_P1                                                                                    
RMAN> list backupset 12924; 

List of Backup Sets
BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------                
12924   2.48G      DISK        00:01:49     22.06.2015 00:17:22                        

BP Key: 13625   Status: AVAILABLE  Compressed: NO  Tag: ARC                    
Piece Name: G:\ARC_883008933_S28127_P1 

The file was really there and was exactly the size RMAN reported. But for whatever reason no content was recorded for it. And I was missing some archivelogs that I really, really wanted to have in place for the recovery. That's why I tried to look into that backup piece, assuming it contained my missing archivelogs. Simply re-cataloging it is not sufficient:

RMAN> catalog start with 'G:\';

searching for all files that match the pattern G:\
no files found to be unknown to the database

So I just renamed the file to 'G:\TARC_883008933_S28127_P1' and tried the catalog command again.

RMAN> catalog start with 'g:\T';
searching for all files that match the pattern g:\T

List of Files Unknown to the Database
File Name: G:\TARC_883008933_S28127_P1

Do you really want to catalog the above files (enter YES or NO)? yes

cataloging files...
cataloging done

List of Cataloged Files
File Name: G:\TARC_883008933_S28127_P1

RMAN> list backuppiece 'G:\TARC_883008933_S28127_P1';

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
13633   12924   1   2   AVAILABLE   DISK        G:\TARC_883008933_S28127_P1

RMAN> list backupset 12924;

List of Backup Sets                                                            
BS Key  Size
------- ----------
12924   2.48G

List of Archived Logs in backup set 12924

Thrd Seq     Low SCN    Low Time            Next SCN   Next Time
---- ------- ---------- ------------------- ---------- ---------
2    68379   2830068124 21.06.2015 09:54:37 2830104357 21.06.2015 10:02:27
2    68380   2830104357 21.06.2015 10:02:27 2830156671 21.06.2015 10:10:46
2    68381   2830156671 21.06.2015 10:10:46 2830187258 21.06.2015 10:17:36
2    68382   2830187258 21.06.2015 10:17:36 2830213445 21.06.2015 10:22:01
2    68383   2830213445 21.06.2015 10:22:01 2830239509 21.06.2015 10:27:06
2    68384   2830239509 21.06.2015 10:27:06 2830257850 21.06.2015 10:30:47
2    68385   2830257850 21.06.2015 10:30:47 2830273799 21.06.2015 10:34:01
2    68386   2830273799 21.06.2015 10:34:01 2830306271 21.06.2015 10:41:34
2    68387   2830306271 21.06.2015 10:41:34 2830325027 21.06.2015 10:45:18
2    68388   2830325027 21.06.2015 10:45:18 2830352735 21.06.2015 10:51:05
2    68389   2830352735 21.06.2015 10:51:05 2830383695 21.06.2015 10:58:41
2    68390   2830383695 21.06.2015 10:58:41 2830414559 21.06.2015 11:04:32
2    68391   2830414559 21.06.2015 11:04:32 2830446480 21.06.2015 11:12:14
2    68392   2830446480 21.06.2015 11:12:14 2830471834 21.06.2015 11:18:15
2    68393   2830471834 21.06.2015 11:18:15 2830486546 21.06.2015 11:22:14
2    68394   2830486546 21.06.2015 11:22:14 2830509633 21.06.2015 11:28:05
2    68395   2830509633 21.06.2015 11:28:05 2830530886 21.06.2015 11:32:21
2    68396   2830530886 21.06.2015 11:32:21 2830576051 21.06.2015 11:39:08
2    68397   2830576051 21.06.2015 11:39:08 2830604761 21.06.2015 11:46:28
2    68398   2830604761 21.06.2015 11:46:28 2830623622 21.06.2015 11:51:07
2    68399   2830623622 21.06.2015 11:51:07 2830647907 21.06.2015 11:57:49
2    68400   2830647907 21.06.2015 11:57:49 2830669866 21.06.2015 12:01:42
2    68401   2830669866 21.06.2015 12:01:42 2830701975 21.06.2015 12:10:05
2    68402   2830701975 21.06.2015 12:10:05 2830717193 21.06.2015 12:14:11
2    68403   2830717193 21.06.2015 12:14:11 2830753093 21.06.2015 12:26:11
2    68404   2830753093 21.06.2015 12:26:11 2830791736 21.06.2015 12:34:43
2    68405   2830791736 21.06.2015 12:34:43 2830819126 21.06.2015 12:41:23
2    68406   2830819126 21.06.2015 12:41:23 2830842054 21.06.2015 12:46:41
2    68407   2830842054 21.06.2015 12:46:41 2830862566 21.06.2015 12:51:31
2    68408   2830862566 21.06.2015 12:51:31 2830880088 21.06.2015 12:55:39
2    68409   2830880088 21.06.2015 12:55:39 2830906001 21.06.2015 12:59:57
2    68410   2830906001 21.06.2015 12:59:57 2830925092 21.06.2015 13:03:40
2    68411   2830925092 21.06.2015 13:03:40 2830947332 21.06.2015 13:08:58
2    68412   2830947332 21.06.2015 13:08:58 2830969688 21.06.2015 13:15:03
2    68413   2830969688 21.06.2015 13:15:03 2830996091 21.06.2015 13:22:02
2    68414   2830996091 21.06.2015 13:22:02 2831029186 21.06.2015 13:28:27
2    68415   2831029186 21.06.2015 13:28:27 2831043107 21.06.2015 13:31:51
2    68416   2831043107 21.06.2015 13:31:51 2831066514 21.06.2015 13:36:33
2    68417   2831066514 21.06.2015 13:36:33 2831099209 21.06.2015 13:46:16
2    68418   2831099209 21.06.2015 13:46:16 2831127405 21.06.2015 13:57:43
2    68419   2831127405 21.06.2015 13:57:43 2831163498 21.06.2015 14:09:31
2    68420   2831163498 21.06.2015 14:09:31 2831200968 21.06.2015 14:25:49
2    68421   2831200968 21.06.2015 14:25:49 2831239025 21.06.2015 14:43:08
2    68422   2831239025 21.06.2015 14:43:08 2831287068 21.06.2015 15:02:15
2    68423   2831287068 21.06.2015 15:02:15 2831321905 21.06.2015 15:16:43
2    68424   2831321905 21.06.2015 15:16:43 2831376687 21.06.2015 15:38:14
2    68425   2831376687 21.06.2015 15:38:14 2831420993 21.06.2015 15:58:56
2    68426   2831420993 21.06.2015 15:58:56 2831446376 21.06.2015 16:10:36
2    68427   2831446376 21.06.2015 16:10:36 2831472168 21.06.2015 16:21:27
2    68428   2831472168 21.06.2015 16:21:27 2831499369 21.06.2015 16:31:42
2    68429   2831499369 21.06.2015 16:31:42 2831541993 21.06.2015 16:46:16
2    68430   2831541993 21.06.2015 16:46:16 2831578975 21.06.2015 17:00:32
2    68431   2831578975 21.06.2015 17:00:32 2831615358 21.06.2015 17:13:55
2    68432   2831615358 21.06.2015 17:13:55 2831645246 21.06.2015 17:28:32
2    68433   2831645246 21.06.2015 17:28:32 2831677592 21.06.2015 17:54:21
2    68434   2831677592 21.06.2015 17:54:21 2831728008 21.06.2015 18:32:20
2    68435   2831728008 21.06.2015 18:32:20 2831764928 21.06.2015 19:35:43
2    68436   2831764928 21.06.2015 19:35:43 2831803761 21.06.2015 21:06:29
2    68437   2831803761 21.06.2015 21:06:29 2831838675 21.06.2015 22:02:41
2    68438   2831838675 21.06.2015 22:02:41 2831883176 22.06.2015 00:00:45
2    68439   2831883176 22.06.2015 00:00:45 2831888127 22.06.2015 00:11:16

Backup Set Copy #1 of backup set 12924
Device Type Elapsed Time Completion Time     Compressed Tag
----------- ------------ ------------------- ---------- ---
DISK        00:01:49     22.06.2015 00:17:22 NO         ARC

List of Backup Pieces for backup set 12924 Copy #1

BP Key  Pc# Status      Piece Name
------- --- ----------- ----------
13625   1   AVAILABLE   G:\ARC_883008933_S28127_P1

Backup Set Copy #2 of backup set 12924

Device Type Elapsed Time Completion Time     Compressed Tag
----------- ------------ ------------------- ---------- ---
DISK        00:01:49     26.06.2015 10:42:47 NO         ARC

List of Backup Pieces for backup set 12924 Copy #2

BP Key  Pc# Status      Piece Name
------- --- ----------- ----------
13633   1   AVAILABLE   G:\TARC_883008933_S28127_P1

See, now I have two copies of my backup set: one containing a number of archivelogs, and the other one still containing nothing. Strange behavior, and I have no clue why the information was missing in the first place. Nevertheless, I was now able to recover the database, still incomplete but at least consistent, and make the customer happy. Weekend saved.
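For completeness, the final incomplete recovery then boiled down to something like this (the target sequence is just an example derived from the last archivelog in the backup set above; it is not quoted from the original session):

RMAN> run {
  set until sequence 68440 thread 2;
  recover database;
}
RMAN> alter database open resetlogs;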

SQL*Plus, Character Sets and Windows

Again and again I stumble upon issues regarding Windows, SQL*Plus and character encoding. Let me share my experience with you.

For whatever reason Microsoft decided to use the PC850 code page for the command prompt ("cmd") instead of the MSWIN1252 character set. That means that if you install Oracle on Windows and use MSWIN1252 as the NLS character set, "cmd" will mangle characters when writing to or reading from the database. So one must be very careful when using "cmd" in combination with SQL*Plus. There is a way around that issue, which I will describe.


You need to set NLS_LANG explicitly to the proper value, which is "WE8PC850" in this case, before starting SQL*Plus.

C:\>set NLS_LANG=american_america.we8pc850

C:\>sqlplus marco/*****@******

SQL> create table bla (txt varchar2(100));

Table created.

SQL> insert into bla values ('NLS850: äöü');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from bla;

NLS850: äöü
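If you want to be sure the umlauts really arrived in the database unscathed, you can inspect the stored bytes. A hex dump including the character set name looks like this (just a crosscheck, output omitted here):

SQL> select txt, dump(txt, 1016) from bla;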

Change Code Page

You may also change the code page that “cmd” is using before starting SQL*Plus:

C:\>chcp
Active code page: 850

C:\>chcp 1252
Active code page: 1252

C:\>sqlplus marco/*****@******

SQL> select * from bla;

NLS850: äöü

SQL> insert into bla values ('chcp1252: üöäß');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from bla;

NLS850: äöü
chcp1252: üöäß

Be sure to use a font other than “Raster Fonts” for “cmd”. You can change that in the “Properties” dialogue on the “Font” tab.

Change Code Page permanently

If you are not willing to set NLS_LANG or change the code page over and over again when using "cmd", you can also change the default code page for "cmd" permanently. One way to do that (via an "Autorun" entry that runs "chcp" for every new "cmd" session) is to import a registry file like this:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor]
"Autorun"="chcp 1252"

Every new "cmd" window then starts with the desired code page:

C:\>chcp
Active code page: 1252

C:\>sqlplus marco/*****@******

SQL> select * from bla;

NLS850: äöü
chcp1252: üöäß

Identifying the current NLS_LANG setting

Since the Oracle environment on Windows platforms can be defined via registry settings rather than environment variables, it is kind of tricky to query the setting that is currently in effect. But there is a simple trick to query it from SQL*Plus:

SQL> @ .[%NLS_LANG%]
SP2-0310: unable to open file ".[AMERICAN_AMERICA.WE8MSWIN1252]"

If you get

SQL> @ .[%NLS_LANG%]
SP2-0310: unable to open file ".[%NLS_LANG%]"

then there is no current setting, neither in the environment nor in the registry.

Additional Information

You can find a list of valid Windows code pages on MSDN.

Did you ever see “enq: CF – contention”?

I did. On a customer's database we saw massive waits for "enq: CF – contention". The sessions waiting for that event were all executing simple INSERT statements. So why would they wait for controlfile operations?
My Oracle Support note 1380939.1 pointed me to the root cause. The INSERTs were doing NOLOGGING operations, and hence the database updates the UNRECOVERABLE_TIME and UNRECOVERABLE_SCN information inside the controlfile over and over again. This serialization leads to massive performance degradation.
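By the way, the unrecoverable information that gets updated so frequently can be seen per datafile; a quick crosscheck (not from the original session) would look like this:

SQL> select file#, unrecoverable_change#, unrecoverable_time
  2  from v$datafile
  3  where unrecoverable_time is not null;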
But who is doing NOLOGGING operations? That is bad in terms of recoverability and, in our special case, really bad from a performance perspective too. So we started investigating. The database runs in NO FORCE LOGGING mode and all tablespaces are LOGGING too:

SQL> select force_logging from v$database;

FORCE_LOGGING
-------------
NO

SQL> select distinct logging from dba_tablespaces where contents='PERMANENT';

LOGGING
---------
LOGGING

Let’s check the logging attribute of the table:

SQL> select logging from dba_tables where table_name='TABLE_IN_QUESTION';

LOG
---


SQL> select partitioned,logging from dba_tables where table_name='TABLE_IN_QUESTION';

PAR LOG
--- ---
YES

Obviously the table is partitioned, so I needed to check the partitions themselves:

SQL> select distinct logging from dba_tab_partitions where table_name='TABLE_IN_QUESTION';

LOG
---
YES

So why are there NOLOGGING operations? The logging attribute belongs to the segments, so I checked for LOB columns in the table:

SQL> select column_name, logging, partitioned from dba_lobs where table_name='TABLE_IN_QUESTION';

COLUMN_NAME          LOGGING PAR
-------------------- ------- ---
VALUE                NONE    YES

SQL> select distinct column_name, logging from dba_lob_partitions where table_name='TABLE_IN_QUESTION';

COLUMN_NAME          LOGGING
-------------------- -------
VALUE                NO

That’s it. The LOB partitions are defined to do NOLOGGING operations. But we were not able to change that:

SQL> alter table TABLE_IN_QUESTION modify lob (value) (nocache logging);
alter table TABLE_IN_QUESTION modify lob (value) (nocache logging)
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

And of course the application owner, who by the way was complaining most about the bad performance, did not agree to switch the data loading off for a short period of time. You know, availability and so on... So we just enabled FORCE LOGGING mode for the database, which immediately removed the "enq: CF – contention" waits.

SQL> alter database force logging;

Database altered.
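A quick crosscheck confirms the new mode:

SQL> select force_logging from v$database;

FORCE_LOGGING
-------------
YES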

[Cloud Control screenshot: the "enq: CF – contention" waits disappear right after enabling FORCE LOGGING.]

But we need to keep in mind that at some point, when we are allowed to, we must change the LOB parameters back to LOGGING.

Utilizing Serverpools in Oracle Grid Infrastructure


The idea of serverpools was introduced back in 11g. Before serverpools existed, the instances of a database had to be hard-wired to servers. Assume a 4-node cluster and a 2-instance RAC database configured to run on node-1 and node-2. If these two nodes are down for whatever reason, the RAC database is entirely unavailable, and we would need to reconfigure the database instances to run on the remaining nodes node-3 and node-4.
That is where serverpools come into play. Serverpools are a kind of abstraction layer between servers and instances. Servers are allocated to serverpools according to rules that we define, and on the other hand databases (better: services) are configured to run on specific serverpools. There is no hard dependency between instances and servers anymore.
The following examples use the new 12c parameters since the short ones are now deprecated.

How it works – Basics

Assume the following environment:

  • 2 databases, one with “gold” service, one with “silver” service.
  • 4 servers in the cluster

We want to make sure, that the “gold” database is available as long as possible. So we create two serverpools for the two service levels.

srvctl add serverpool -serverpool gold   -min 1 -max 2 -importance 10
srvctl add serverpool -serverpool silver -min 1 -max 2 -importance 8
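The resulting pool definitions can be verified right away (output shortened; the exact formatting varies by Grid Infrastructure version):

$ srvctl config serverpool -serverpool gold
Server pool name: gold
Importance: 10, Min: 1, Max: 2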

Now, make the databases/services use the newly created serverpools:

srvctl add service -db gold   -service goldsvc   -serverpool gold   -cardinality uniform
srvctl add service -db silver -service silversvc -serverpool silver -cardinality uniform
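One thing to keep in mind: for the databases to become policy-managed, they themselves also have to be placed into the pools, not only the services. As a sketch:

$ srvctl modify database -db gold   -serverpool gold
$ srvctl modify database -db silver -serverpool silver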

What happens at startup of the whole cluster? Servers are assigned to serverpools in order of importance. First, all serverpools are filled until their defined minimum is reached. Once all serverpools have "min" servers assigned, the serverpools are filled further until "max" is reached, again in order of importance. That means the serverpool with the highest importance is served first, regardless of which node comes up first. Look at this example:

  • node-4 comes up –> assigned to serverpool “gold” (highest importance, “min” not reached)
  • node-2 comes up –> assigned to serverpool “silver” (“gold” has reached “min”, “silver” has second-highest importance and “min” not reached)
  • node-1 comes up –> assigned to serverpool “gold” (all serverpools have reached “min”; “gold” has the highest importance and has not reached “max”)
  • node-3 comes up –> assigned to serverpool “silver” (“gold” has reached “max”, “silver” has second-highest importance and “max” not reached)

The result looks like this:

$ srvctl status serverpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 4
Server pool name: gold
Active servers count: 2
Server pool name: silver
Active servers count: 2
$ crsctl status srvpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=node-4 node-2 node-1 node-3

NAME=ora.gold
ACTIVE_SERVERS=node-4 node-1

NAME=ora.silver
ACTIVE_SERVERS=node-2 node-3

The instances were automatically started on the servers belonging to the configured serverpool; no manual intervention was needed at all.

$ srvctl status database -db gold
Instance gold1 is running on node node-4
Instance gold2 is running on node node-1
$ srvctl status database -db silver
Instance silver1 is running on node node-2
Instance silver2 is running on node node-3

You can even modify the serverpool configuration on the fly; the handling of the database instances is done by the clusterware, so there is no need for human intervention. As we add servers to or remove servers from serverpools, instances are configured and started or stopped automatically.

$ srvctl modify serverpool -serverpool gold   -max 3
$ srvctl modify serverpool -serverpool silver -max 1

We need to modify the “min” of serverpool “silver” too; obviously the rules are not completely re-evaluated after such changes.

$ srvctl status database -db gold
Instance gold1 is running on node node-4
Instance gold2 is running on node node-1
Instance gold3 is running on node node-3
$ srvctl status database -db silver
Instance silver1 is running on node node-2

See, there is a third instance for database “gold” now since the serverpool was extended to use three nodes.


Serverpools provide an easy way to get rid of the dependency between servers and instances, which increases availability and flexibility and also reduces administrative overhead dramatically thanks to the automatic instance handling.
But, on the other hand, if you use Cloud Control to manage the cluster, there are some caveats. As of now, Cloud Control is not fully aware of policy-managed databases. If an instance is started on another node than it was on before, it is marked as "down" on the old node, and from a Cloud Control point of view the newly assigned node does not know anything about the new instance. It requires manual activity to reflect the changes and make everything "green" again.

There are other nice features around that, I will blog about that in the near future. Stay tuned.

Flashback – Recover without Recovery


You may know the situation: your databases are running fine, the backups are done regularly and without any issues. But there is one factor that breaks the whole thing, other humans called "users". And a special kind of "users", the "developers". Sometimes they just do things by accident which lead to incorrect updates or deletes. The administrators take care of the physical structure of a database, but that does not protect us from human errors leading to data inconsistencies. Recovering from these human errors is time-intensive. Ok, there are enhancements in 12c like the RMAN "recover table" command, but that is a different topic.

Flashback – An Overview

Oracle uses the term “Flashback” for a couple of techniques (SE = Standard Edition, EE = Enterprise Edition):

  • SE: Flashback Query – using UNDO data
  • EE: Flashback Version Query – using UNDO data
  • EE: Flashback Transaction – using Redo data (Logminer)
  • SE: Flashback Drop – using Recyclebin
  • EE: Flashback Database – using Flashback Logs
  • SE: Flashback Data Archive – using special data archives

The interesting thing is that some powerful features come for free even with the Standard Edition of the database. Even Flashback Data Archive (without the history table optimization) is included in all editions starting with 11.2.0.4.
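Just to illustrate that last one: a Flashback Data Archive is created once and then attached per table (tablespace, archive name, and retention are made up for the example):

SQL> create flashback archive fda1 tablespace users retention 1 year;
SQL> alter table emp flashback archive fda1;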


In order to use all the UNDO-related features we need to prepare some things. With the default configuration the datafiles of UNDO tablespaces are created with "autoextend on" and the UNDO_RETENTION parameter is set to 900 seconds. That means Oracle will keep only these 15 minutes of undo and overwrite older committed data even if there might be enough space available. But if we configure the UNDO datafiles with a fixed size, Oracle ignores the value of UNDO_RETENTION, starts using all the space that is available, and keeps committed transactions as long as there is space left in UNDO. That is visible in V$UNDOSTAT:

SQL> select * from (
  2  select TUNED_UNDORETENTION
  3  from   V$UNDOSTAT
  4  order by END_TIME desc
  5  )
  6  where ROWNUM = 1;
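Fixing the size of the UNDO datafiles is a one-liner per file (the file name and size here are illustrative):

SQL> alter database datafile '+DATA/MYDB/DATAFILE/undotbs1.260.883008933' resize 16g;
SQL> alter database datafile '+DATA/MYDB/DATAFILE/undotbs1.260.883008933' autoextend off;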


Flashback Query

This feature utilizes the read consistency that Oracle provides anyway; it just does not use the current SCN but an SCN from the past. You simply use the "select ... from ... as of ..." syntax to query data as it was at the given point in time.

SQL> select *
  2  from emp as of timestamp
  3  to_timestamp('2015-06-14 11:00:00',
  4  'yyyy-mm-dd hh24:mi:ss');

If this ends up with an "ORA-01555: snapshot too old", then the undo data has already been overwritten. But what can we do with this data? We should materialize it so that we no longer rely on UNDO:

SQL> create table emp_old
  2  as
  3  select *
  4  from emp as of timestamp
  5  to_timestamp('2015-06-14 11:00:00',
  6  'yyyy-mm-dd hh24:mi:ss');

Now we check for differences:

SQL> select * from emp_old
  2  minus
  3  select * from emp;

Oracle allows us to use the "as of" clause almost everywhere and to mix flashback data with current data, so we could also use it like this:

SQL> select * from emp as of timestamp
  2                to_timestamp('2015-06-14 11:00:00',
  3                'yyyy-mm-dd hh24:mi:ss')
  4  minus
  5  select * from emp;

But again, this only works as long as the UNDO information is not being overwritten.

Now, assume that some records were deleted by accident. We can re-insert these deleted records easily:

SQL> insert into emp
  2  select * from emp as of timestamp
  3                to_timestamp('2015-06-14 11:00:00',
  4                'yyyy-mm-dd hh24:mi:ss')
  5  minus
  6  select * from emp;

SQL> commit;

Or, if some records were modified unintentionally, we can revert these changes:

SQL> update emp e_live
  2  set sal = (select sal
  3             from emp as of timestamp
  4                      to_timestamp('2015-06-14 11:00:00',
  5                      'yyyy-mm-dd hh24:mi:ss') e_orig
  6             where e_orig.empno = e_live.empno
  7            )
  8  ;

SQL> commit;

These are only simple examples to outline the possibilities provided by flashback query. You may use all flavors of SQL DML to cope with your issue.

Flashback Export

A special use case of Flashback Query is the flashback export. The legacy "exp" as well as the Data Pump export utility can create dumps for a given point in time. That is most useful if there are dependencies between tables that cannot be handled easily. We create an export representing the data from the past, import it into another database (or another schema in the same database), and give the user/developer access to it.

$ expdp system/Oracle-1 dumpfile=scott_flashback.dmpdp  \ 
        logfile=scott_flashback.expdp.log \
        directory=data_pump_dir \
        flashback_time='2015-06-14-11:00:00' schemas=scott
$ exp system/Oracle-1 file=/tmp/scott_flashback.dmp \
      log=/tmp/scott_flashback.exp.log \
      flashback_time='2015-06-14-11:00:00' owner=scott
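Importing such a dump into a separate schema for the developer to look at could then be sketched like this (the target schema name is made up):

$ impdp system/Oracle-1 dumpfile=scott_flashback.dmpdp \
        directory=data_pump_dir \
        remap_schema=scott:scott_old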

Flashback Drop

What can we do if a table got dropped by accident? It is time for the Recyclebin.

SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME                OBJECT TYPE DROP TIME
------------- ------------------------------ ---------- -------------------
EMP           BIN$yxLVQQOqC6rgQBAKtBUKlA==$0 TABLE      2015-06-14:13:29:19

Were indexes defined on that table? To see that, we need to query the *_recyclebin views:

SQL> select original_name, object_name, operation, droptime
  2  from user_recyclebin;

ORIGINAL_NAME OBJECT_NAME                    OPERATION DROPTIME
------------- ------------------------------ --------- -------------------
EMP           BIN$yxLVQQOqC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19
PK_EMP        BIN$yxLVQQOpC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19
IX_ENAME      BIN$yxLVQQOoC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19
IX_MGR        BIN$yxLVQQOnC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19

Let's restore the table:

SQL> flashback table emp to before drop;

Flashback complete.

SQL> select original_name, object_name, operation, droptime
  2  from user_recyclebin;

no rows selected

Ok, no more indexes in the recyclebin. Let's crosscheck that:

SQL> select index_name from user_indexes
  2  where table_name='EMP';

INDEX_NAME
------------------------------
BIN$yxLVQQOpC6rgQBAKtBUKlA==$0
BIN$yxLVQQOoC6rgQBAKtBUKlA==$0
BIN$yxLVQQOnC6rgQBAKtBUKlA==$0

The index names were not restored to the original ones; we need to rename the indexes on our own. Good that we queried the names before:

SQL> alter index "BIN$yxLVQQOoC6rgQBAKtBUKlA==$0" rename to IX_ENAME;

Note that we must use double quotes around the recyclebin name since it contains special characters and mixed upper/lowercase.

Also note that foreign key constraints are not restored; we need to re-create them manually.
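For the demo schema that could look like this (constraint and column names are assumed from the standard SCOTT schema):

SQL> alter table emp add constraint fk_deptno
  2  foreign key (deptno) references dept (deptno);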

Flashback Version Query

This is an Enterprise Edition feature, so make sure you are licensed properly before you use it. Flashback Version Query provides some pseudo columns and a special FROM clause to show the evolution of rows over time. Let's see what happened to Mr. Miller's salary:

SQL> select
  2    versions_starttime, versions_endtime,
  3    versions_operation, versions_xid,
  4    sal
  5  from
  6    emp versions between timestamp
  7        systimestamp - interval '1' hour and
  8        systimestamp
  9  where
 10    ename = 'MILLER';

VERSIONS_STARTTIME VERSIONS_ENDTIME   V VERSIONS_XID           SAL
------------------ ------------------ - ---------------- ---------
14.06.15 14:35:41                     U 07000B00F7480000      1800
14.06.15 14:29:29  14.06.15 14:35:41  U                       1300
                   14.06.15 14:29:29                       1504,91

We see two updates of the row and the timestamps when the changes took place. Inserts and deletes can also be tracked using this technique, as long as you know what to look for:

SQL> select
  2    versions_starttime, versions_endtime,
  3    versions_operation,
  4    sal
  5  from
  6    emp versions between timestamp
  7        systimestamp - interval '1' hour and
  8        systimestamp
  9  where
 10    ename = 'FOO';

VERSIONS_STARTTIME VERSIONS_ENDTIME   V        SAL
------------------ ------------------ - ----------
14.06.15 15:30:05                     D       5046
14.06.15 14:57:08  14.06.15 15:30:05  U       5046
14.06.15 14:45:05  14.06.15 14:57:08  I       4711

We see the insert, followed by an update, and finally the delete. Nice.


The flashback features offer a nice and easy way to recover from human errors causing unintended changes to the data. My advice is to keep UNDO as long as possible; someday you will appreciate it. Your users or developers do not ask for help immediately. They first try to recover the data on their own, and only if they are unsuccessful do they come and ask. That may be hours or even days after the change happened...