Size of Flashback Logs

Today's blog post is about the Flashback Database feature of the Oracle Database. I use this feature mostly in the context of Data Guard, but there are other possible use cases as well, like having a fallback during updates/upgrades. In Data Guard setups, Flashback Database simplifies and speeds up the reinstantiation of a former primary database after it was lost and a failover to the standby database was performed. With Flashback Database there is no need to perform a complete duplication of the database to bring it back up as a standby database. The database is simply rolled back to just before the SCN at which the standby database was activated by the failover. It can then start applying the redo it receives from the new primary database.
But how does it work? When this feature is activated, the database writes additional logs that contain the information needed to roll the database back. These logs are stored in the Fast Recovery Area and there is no way to influence that. Let's have a look at that.
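As a rough sketch, and with a placeholder for the SCN since the value depends on your environment, reinstating the former primary looks like this (the Data Guard Broker's "reinstate database" command does essentially the same under the hood):

SQL> -- on the new primary: determine the SCN at which the standby became primary
SQL> select standby_became_primary_scn from v$database;

SQL> -- on the former primary, started in mount state
SQL> flashback database to scn <standby_became_primary_scn>;
SQL> alter database convert to physical standby;
SQL> shutdown immediate
SQL> startup mount

After that, redo apply can be started and the reinstated standby catches up with the new primary.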

SQL> select group#, status, bytes from v$log;

    GROUP# STATUS                BYTES
---------- ---------------- ----------
         1 CURRENT          1073741824
         2 INACTIVE         1073741824
         3 INACTIVE         1073741824

SQL> alter database flashback on;

Database altered.

SQL> select * from v$flashback_database_logfile;

NAME                                                                                   LOG#    THREAD#  SEQUENCE#      BYTES FIRST_CHANGE# FIRST_TIME          TYPE          CON_ID
-------------------------------------------------------------------------------- ---------- ---------- ---------- ---------- ------------- ------------------- --------- ----------
/u03/app/oracle/fast_recovery_area/MMISI2/flashback/o1_mf_h1xp7lv0_.flb                   1          1          1 1073741824       3886676 2020-01-15 10:18:44 NORMAL             0
/u03/app/oracle/fast_recovery_area/MMISI2/flashback/o1_mf_h1xp7pnk_.flb                   2          1          1 1073741824             0                     RESERVED           0

SQL> !ls -l /u03/app/oracle/fast_recovery_area/MMISI2/flashback/
total 2097168
-rw-r----- 1 oracle dba 1073750016 Jan 15 10:22 o1_mf_h1xp7lv0_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 10:18 o1_mf_h1xp7pnk_.flb

Obviously the flashback logs are the same size as the redo logs. And they consume space inside your FRA. But what if you do not have sufficient space and/or want to shrink those flashback logs? Since the size is inherited from the redo log size, let's try shrinking the redo logs by creating new, smaller groups and dropping the old ones.

SQL> alter database add logfile group 10 size 256m;

Database altered.

SQL> alter database add logfile group 11 size 256m;

Database altered.

SQL> alter database add logfile group 12 size 256m;

Database altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> alter system checkpoint;

System altered.

SQL> alter database drop logfile group 1;

Database altered.

SQL> select group#, bytes, status from v$log;

    GROUP#      BYTES STATUS
---------- ---------- ----------------
        10  268435456 CURRENT
        11  268435456 UNUSED
        12  268435456 UNUSED

Now let's check what the flashback logs look like:

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

Still unchanged. Maybe it requires some workload to cycle through the logs.

SQL> create table dings tablespace users as select * from dba_source;

Table created.

SQL> insert into dings select * from dings;

283698 rows created.

SQL> insert into dings select * from dings;

567396 rows created.

SQL> insert into dings select * from dings;

1134792 rows created.

SQL> insert into dings select * from dings;

2269584 rows created.

SQL> insert into dings select * from dings;

4539168 rows created.

SQL> commit;

Commit complete.

SQL> select group#, sequence#, bytes, status from v$log;

    GROUP#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ----------------
        10         11  268435456 ACTIVE
        11         12  268435456 ACTIVE
        12         13  268435456 CURRENT

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

Ok, no change here. There isn't even a new flashback log. Maybe that wasn't enough change? So let's try some row-by-row deletion. And please do not do this in production; deleting rows can hardly be done less efficiently than that.

SQL> begin
   2   for rec in (select rowid from dings) loop
   3     delete from dings where rowid = rec.rowid;
   4     commit;
   5   end loop;
   6 end;
   7 /

PL/SQL procedure successfully completed.

SQL> select group#, sequence#, bytes, status from v$log;

    GROUP#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ----------------
        10         53  268435456 CURRENT
        11         51  268435456 INACTIVE
        12         52  268435456 INACTIVE

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

At least this produced quite a lot of redo, but still no new flashback logs. Maybe the "db_flashback_retention_target" parameter has an impact on these logs?

SQL> show parameter flashb

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440

SQL> alter system set db_flashback_retention_target=10;

System altered.

SQL> select log#, thread#, sequence#, bytes, type from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE
---------- ---------- ---------- ---------- ---------
         1          1          1 1073741824 NORMAL
         2          1          1 1073741824 RESERVED

Simply speaking, no, it hasn’t. Maybe other operations influence the generation of flashback logs, like index maintenance?

SQL> insert into dings select * from dba_source;

283698 rows created.

SQL> commit;

Commit complete.

SQL> create index dings_ix1 on dings(owner, name, line) tablespace new;

Index created.

SQL> insert into dings select * from dings;

283698 rows created.

SQL> insert into dings select * from dings;

567396 rows created.

SQL> insert into dings select * from dings;

1134792 rows created.

SQL> insert into dings select * from dings;

2269584 rows created.

SQL> commit;

Commit complete.

SQL> select group#, sequence#, bytes, status from v$log;

    GROUP#  SEQUENCE#      BYTES STATUS
---------- ---------- ---------- ----------------
        10         65  268435456 ACTIVE
        11         66  268435456 ACTIVE
        12         67  268435456 CURRENT

SQL> select log#, thread#, sequence#, bytes, type, first_time from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE      FIRST_TIME
---------- ---------- ---------- ---------- --------- -------------------
         3          1          3 1073741824 NORMAL    2020-01-15 11:27:47
         4          1          1 1073741824 RESERVED
         1          1          1 1073741824 NORMAL    2020-01-15 10:18:44
         2          1          2 1073741824 NORMAL    2020-01-15 11:26:36

SQL> !ls -l /u03/app/oracle/fast_recovery_area/MMISI2/flashback/
total 4194336
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:26 o1_mf_h1xp7lv0_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:27 o1_mf_h1xp7pnk_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:28 o1_mf_h1xt6z4y_.flb
-rw-r----- 1 oracle dba 1073750016 Jan 15 11:27 o1_mf_h1xt95vj_.flb

Ok, finally there are some new flashback logs. But the size of those logs is still 1GB.
I am running out of ideas; the last resort is to disable and re-enable the Flashback Database feature.

SQL> alter database flashback off;

Database altered.

SQL> alter database flashback on;

Database altered.

SQL> select log#, thread#, sequence#, bytes, type, first_time from v$flashback_database_logfile;

      LOG#    THREAD#  SEQUENCE#      BYTES TYPE      FIRST_TIME
---------- ---------- ---------- ---------- --------- -------------------
         1          1          1  268435456 NORMAL    2020-01-15 11:30:01
         2          1          1  268435456 RESERVED

SQL> !ls -l /u03/app/oracle/fast_recovery_area/MMISI2/flashback/
total 524304
-rw-r----- 1 oracle dba 268443648 Jan 15 11:30 o1_mf_h1xtf8yz_.flb
-rw-r----- 1 oracle dba 268443648 Jan 15 11:30 o1_mf_h1xtfck6_.flb

Here we go, the flashback logs are finally shrunk. But this comes at the cost of losing all the flashback information that was there before. Keep that in mind and schedule wisely when changing the size of your flashback logs.
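Before toggling the feature off and on, it is worth checking how far back the current flashback logs actually reach; v$flashback_database_log shows the oldest SCN and time you can flash back to, as well as the current and estimated space usage:

SQL> select oldest_flashback_scn, oldest_flashback_time,
  2         flashback_size, estimated_flashback_size
  3  from v$flashback_database_log;

Whatever lies before oldest_flashback_time is what you give up by recreating the flashback logs.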

OTN Appreciation Day: Flashback

Following Tim Hall’s suggestion, I will write about my favorite feature: Flashback.

The term "Flashback" covers several different technologies. The most useful ones, in my opinion, are Flashback Query and Flashback Drop; they come in very handy in case of human errors. Those errors are much more likely to occur than errors caused by hardware failures. We are all human and we all make mistakes. Flashback technologies provide easy methods to recover from such errors, in contrast to RMAN features, for instance.

I have already written about this in the past; you can find the article here: Flashback – Recover without Recovery

Flashback – Recover without Recovery

Introduction

You may know the situation: your databases are running fine, the backups are done regularly and without any issues. But there is one factor that breaks the whole thing, other humans called "users". And a special kind of "users": the "developers". Sometimes they just do things by accident which lead to incorrect updates or deletes. The administrators take care of the physical structure of a database, but that does not protect us from human errors leading to data inconsistencies. That means in turn that recovering from these human errors is time-intensive. Ok, there are enhancements in 12c like the RMAN "recover table" command, but that is a different topic.

Flashback – An Overview

Oracle uses "Flashback" as an umbrella term for a couple of techniques (SE = Standard Edition, EE = Enterprise Edition):

  • SE: Flashback Query – using UNDO data
  • EE: Flashback Version Query – using UNDO data
  • EE: Flashback Transaction – using Redo data (Logminer)
  • SE: Flashback Drop – using Recyclebin
  • EE: Flashback Database – using Flashback Logs
  • SE: Flashback Data Archive – using special data archives

The interesting thing is that some powerful features come for free even with the Standard Edition of the database. Even Flashback Data Archive (without history table optimization) is included in all editions starting with 11.2.0.4.

Preparations

In order to use all the UNDO-related features we need to prepare some things. With the default configuration, the datafiles of UNDO tablespaces are created with "autoextend on" and the UNDO_RETENTION parameter is set to 900 seconds. That means Oracle will keep only these 15 minutes and will overwrite older committed data even if there might be enough space available. But if we configure the UNDO datafiles with a fixed size, Oracle ignores the value of UNDO_RETENTION, starts using all the space that is available and keeps committed transactions as long as there is space left in UNDO. That is visible in V$UNDOSTAT:

SQL> select * from (
  2  select TUNED_UNDORETENTION
  3  from   V$UNDOSTAT
  4  order by END_TIME desc
  5  )
  6  where ROWNUM = 1;

TUNED_UNDORETENTION
-------------------
              36356
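Switching an UNDO datafile to a fixed size is straightforward; the file name and target size below are just assumptions for illustration, adapt them to your environment:

SQL> alter database datafile
  2  '/u02/app/oracle/oradata/MMISI2/undotbs01.dbf' resize 10g;

SQL> alter database datafile
  2  '/u02/app/oracle/oradata/MMISI2/undotbs01.dbf' autoextend off;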

Flashback Query

This feature utilizes the read consistency that Oracle provides; it just does not use the current SCN but an SCN from the past. You simply use the "select … from … as of …" syntax to query data as it was at the given point in time.

SQL> select *
  2  from emp as of timestamp
  3       to_timestamp('2015-06-14 11:00:00',
  4       'yyyy-mm-dd hh24:mi:ss');

If this ends up with an "ORA-01555: snapshot too old", then the undo data has already been overwritten. But what can we do with this data? We should materialize it so that we no longer rely on UNDO.

SQL> create table emp_old
  2  as
  3  select *
  4  from emp as of timestamp
  5       to_timestamp('2015-06-14 11:00:00',
  6       'yyyy-mm-dd hh24:mi:ss');

Now we check for differences:

SQL> select * from emp_old
  2  minus
  3  select * from emp;


Oracle allows us to use the "as of" clause wherever a table is referenced and to mix it with current data, so we could also use it like this:


SQL> select * from emp as of timestamp
  2                to_timestamp('2015-06-14 11:00:00',
  3                'yyyy-mm-dd hh24:mi:ss')
  4  minus
  5  select * from emp;


But again, this only works as long as the UNDO information is not being overwritten.

Now, assume that some records were deleted by accident. We can re-insert these deleted records easily:


SQL> insert into emp
  2  select * from emp as of timestamp
  3                to_timestamp('2015-06-14 11:00:00',
  4                'yyyy-mm-dd hh24:mi:ss')
  5  minus
  6  select * from emp;

SQL> commit;

Or, if some records were modified unintentionally, we can revert these changes:

SQL> update emp e_live
  2  set sal = (select sal
  3             from emp as of timestamp
  4                      to_timestamp('2015-06-14 11:00:00',
  5                      'yyyy-mm-dd hh24:mi:ss') e_orig
  6             where e_orig.empno = e_live.empno
  7            )
  8  ;

SQL> commit;

These are only simple examples to outline the possibilities provided by flashback query. You may use all flavors of SQL DML to cope with your issue.

Flashback Export

A special use case of Flashback Query is Flashback Export. The legacy "exp" utility as well as the Data Pump export utility can create dumps for a given point in time. That is most useful if there are dependencies between tables that cannot be handled easily. We create an export representing the data from the past, import the data into another database or another schema in the same database, and give the user/developer access to it.

$ expdp system/Oracle-1 dumpfile=scott_flashback.dmpdp  \ 
        logfile=scott_flashback.expdp.log \
        directory=data_pump_dir \
        flashback_time='2015-06-14-11:00:00' schemas=scott
$ exp system/Oracle-1 file=/tmp/scott_flashback.dmp \
      log=/tmp/scott_flashback.exp.log \
      flashback_time='2015-06-14-11:00:00' owner=scott


Flashback Drop

What can we do if a table got dropped by accident? It is time for the Recyclebin.


SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME                OBJ. TYPE  DROP TIME
------------- ------------------------------ ---------- -------------------
EMP           BIN$yxLVQQOqC6rgQBAKtBUKlA==$0 TABLE      2015-06-14:13:29:19

Were there indexes defined on that table? To see that, we need to query the *_RECYCLEBIN views:

SQL> select ORIGINAL_NAME, OBJECT_NAME, OPERATION, DROPTIME 
  2  from USER_RECYCLEBIN;

ORIGINAL_NAME OBJECT_NAME                    OPERATION DROPTIME
------------- ------------------------------ --------- -------------------
EMP           BIN$yxLVQQOqC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19
PK_EMP        BIN$yxLVQQOpC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19
IX_ENAME      BIN$yxLVQQOoC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19
IX_MGR        BIN$yxLVQQOnC6rgQBAKtBUKlA==$0 DROP      2015-06-14:13:29:19


Let's restore the table:


SQL> flashback table emp to before drop;

Flashback complete.

SQL> select ORIGINAL_NAME, OBJECT_NAME, OPERATION, DROPTIME 
  2  from USER_RECYCLEBIN;

No rows selected.

Ok, no more indexes in the recyclebin. Let's crosscheck that:

SQL> select index_name from user_indexes
  2  where table_name='EMP';

INDEX_NAME
------------------------------
BIN$yxLVQQOoC6rgQBAKtBUKlA==$0
BIN$yxLVQQOnC6rgQBAKtBUKlA==$0
BIN$yxLVQQOpC6rgQBAKtBUKlA==$0

The index names were not restored to their original ones. We need to rename the indexes ourselves; good thing we queried the names before:

SQL> alter index "BIN$yxLVQQOoC6rgQBAKtBUKlA==$0" rename to IX_ENAME;

Note that we use double quotes for the recycle bin name since it contains special characters and mixed upper/lower case.

Also note that foreign key constraints are not restored. We need to re-create them manually.
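For example, assuming EMP had the usual foreign key to DEPT (constraint and column names taken from the well-known demo schema, adapt as needed), re-creating it would look like this:

SQL> alter table emp add constraint fk_deptno
  2  foreign key (deptno) references dept (deptno);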

Flashback Version Query

This is an Enterprise Edition feature, so make sure you are licensed properly before using it. Flashback Version Query provides some pseudo columns and a special FROM clause to show the evolution of rows over time. Let's see what happened to Mr. Miller's salary:

SQL> select
  2    versions_starttime, versions_endtime,
  3    versions_operation, versions_xid,
  4    sal
  5  from
  6    emp versions between timestamp
  7        systimestamp - interval '1' hour and
  8        systimestamp
  9  where
 10    ename = 'MILLER';

VERSIONS_STARTTIME VERSIONS_ENDTIME   V VERSIONS_XID           SAL
------------------ ------------------ - ---------------- ---------
14.06.15 14:35:41                     U 07000B00F7480000      1800
14.06.15 14:29:29  14.06.15 14:35:41  U                       1300
                   14.06.15 14:29:29                       1504,91

We see two updates of the row and the timestamps when the changes took place. Inserts and deletes can also be queried using this technique, as long as you know what to look for:

SQL> select
  2    versions_starttime, versions_endtime,
  3    versions_operation,
  4    sal
  5  from
  6    emp versions between timestamp
  7        systimestamp - interval '1' hour and
  8        systimestamp
  9  where
 10    ename = 'FOO';

VERSIONS_STARTTIME VERSIONS_ENDTIME   V        SAL
------------------ ------------------ - ----------
14.06.15 15:30:05                     D       5046
14.06.15 14:57:08  14.06.15 15:30:05  U       5046
14.06.15 14:45:05  14.06.15 14:57:08  I       4711

We see the insert followed by an update and at the end the delete. Nice.

Conclusion

The flashback features offer a nice and easy way to recover from human errors causing unintended changes to the data. My advice is to keep UNDO as long as possible; some day you will appreciate it. Your users or developers do not ask for help immediately. They try to recover the data on their own, and only if they are not successful do they come and ask. That may be hours or even days after the change happened...