Transportable Tablespaces, Deferred Segment Creation and SE

Right now I am in the middle of a project which is about moving an old 11.2.0.2 Oracle Enterprise Edition database of roughly 1.5TB from the US to new hardware in Germany, including an upgrade to 12.2.0.1 and a switch to Standard Edition 2. As you can see, there are a couple of things to do, and I am more than happy to do this challenging project. The basic plan is this:

  1. Create a new 12.2.0.1 database in Standard Edition 2
  2. Get an initial copy of the source database to Germany
  3. Restore that copy on the new hardware with an 11.2.0.2 Oracle Home
  4. Get the archivelogs on a regular basis from the source database in the US
  5. Recover the German copy of the database
  6. Repeat 4 and 5 until cut-over
  7. Open the German copy of the database with RESETLOGS
  8. Move the data to the new 12.2.0.1 database using transportable tablespace

According to the Licensing Guide, it is allowed to plug transportable tablespaces into an SE2 database. So I am completely happy with my approach.
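Just for context, the transport itself follows the usual pattern. Here is a minimal sketch of the export side, assuming a directory object named copy_dump as used in the import below:

SQL> alter tablespace SOME_TS read only;

$ expdp system directory=copy_dump dumpfile=some_ts.dmpdp logfile=some_ts.expdp.log transport_tablespaces=SOME_TS transport_full_check=y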

But during the test phase of this migration I encountered a nasty error when plugging in the tablespace. I obfuscated some identifiers, so don’t be too strict:

Import: Release 12.2.0.1.0 - Production on Thu May 31 15:32:01 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=copy_dump logfile=some_ts.impdp.log dumpfile=some_ts.dmpdp transport_datafiles=/u02/app/oracle/oradata/SOMEDB/SOMEDB/datafile/e*dbf 
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"SOME_USER"."SOME_TABLE" failed to create with error:
ORA-01647: tablespace 'SOME_TS' is read-only, cannot allocate space in it

Failing sql is:
CREATE TABLE "SOME_USER"."SOME_TABLE" (<Column list stripped>) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "SOME_TS" 

Ok, the tablespace is read-only. That is how it is supposed to be. A quick search on MOS revealed the note “ORA-39083 & ORA-01647 during TTS or EXPDP/IMPDP from Enterprise Edition to Standard Edition when deferred_segment_creation=True (Doc ID 2288535.1)”, which states that deferred segment creation is the root cause. That makes sense somehow. The note further states that the workaround is either to use the version parameter of “impdp” during the TTS import or to disable deferred segment creation at the source database before starting the TTS.
To cut a long story short, neither workaround works; both end up with the exact same error. In my opinion this makes total sense, because the segment for the table in question is simply not present in the transported tablespace. And it cannot be created during the TTS import since the tablespace is marked read-only. It is not created when I use the version parameter for impdp, nor is it created when I disable deferred segment creation.
The only feasible solution is to create the segment at the source database before setting the tablespace to read-only. A simple

SQL> alter table SOME_USER.SOME_TABLE allocate extent;

Table altered.

does the trick. And that’s it. After creating a segment for this table, the TTS import went through without any further errors.
To make sure all tables have a segment created, I use this query:

SQL>  select 'alter table ' || owner || '.' || table_name || ' allocate extent;'
  2   from dba_tables where SEGMENT_CREATED='NO'
  3*  and owner='&USERNAME.';
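To actually run the generated statements, one can spool them into a script first. A minimal sketch (the file name is just an example):

SQL> set pagesize 0 heading off feedback off
SQL> spool allocate_extents.sql
SQL> select 'alter table ' || owner || '.' || table_name || ' allocate extent;'
  2  from dba_tables where SEGMENT_CREATED='NO' and owner='&USERNAME.';
SQL> spool off
SQL> @allocate_extents.sql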

So when using Transportable Tablespaces to move to an SE2 database and the source is 11.2 or higher (deferred segment creation was introduced with 11.2), you had better check that in advance. Hope that helps.


Why “configure-firstnet” must not be run twice (ODA)

Recently I was deploying some Oracle Database Appliance X7-2M systems at a customer’s site. Everything went quite smoothly until we tried to configure the network using “configure-firstnet”. The documentation is quite clear about this: it shall not be run more than once. But in our case we first configured the network without VLANs, which ended up with a nice “ifcfg-btbond1” configuration containing the IP information.
But as we figured out that we had to use a VLAN, we simply ran “configure-firstnet” again. This generated an “ifcfg-btbond1.26” configuration file for our VLAN with ID 26, containing the same IP information as the one without VLANs. Up to that point everything still worked, since the script obviously creates the interfaces properly.
But after a

service network restart

the machine was not reachable anymore. As we investigated this, we saw that there were now two bond interfaces with the same configured IP, which obviously prevents any network communication.

So the solution was to remove all IP-related lines/parameters from the “ifcfg-btbond1” configuration file and restart the network again. In the end the issue was quite obvious and easy to remedy, but it took us some time which could have been used better. That emphasizes the importance of clear information and well-structured preparation when deploying an Oracle Database Appliance (and other appliances too).
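For illustration, the cleanup boiled down to something like this (parameter values elided; the exact set of lines depends on your initial configuration):

# edit /etc/sysconfig/network-scripts/ifcfg-btbond1 and remove the
# IP-related parameters (e.g. IPADDR, NETMASK, GATEWAY) so that only
# ifcfg-btbond1.26 carries the IP configuration, then restart:
service network restart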

Missing Disk / Dismounting Diskgroup after duplicate from ASM to ACFS

Last week I was asked to create a Data Guard environment. Quite a simple task, you may think. And actually it was, but with some funny side effects. The primary database is running on an Oracle Database Appliance X6-2M using ASM. The standby database was planned to run on another ODA, an X5-2HA. The X5 is using pure ACFS. Both are running the 12.1.0.2.170418 Bundlepatch. Be aware that the HA ODAs are using PSUs whilst the smaller ones are using Bundlepatches. You should not mix these up, so I created another DB home on the HA with the proper Bundlepatch. With the January ODA update for the HA models, Oracle moved to Bundlepatches too, but we are not there yet. So much for the sake of completeness.

So what I did obviously in the first place was duplicating the primary database to the HA ODA. Once that was finished, I wanted to clean up the controlfile and get rid of all those backup and archivelog records and keep just the ones that are really available.

[oracle@odax51 ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Mar 16 09:11:42 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: COMA (DBID=1562414168, not open)

RMAN> catalog db_recovery_file_dest;

Starting implicit crosscheck backup at 2018-03-16 09:11:44
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
allocated channel: ORA_DISK_2
allocated channel: ORA_DISK_3
allocated channel: ORA_DISK_4
allocated channel: ORA_DISK_5
allocated channel: ORA_DISK_6
allocated channel: ORA_DISK_7
allocated channel: ORA_DISK_8

At this point RMAN was stuck. A quick look into the alert.log revealed a whole bunch of messages like these:

2018-03-16 09:08:32.000000 +01:00
WARNING: ASMB force dismounting group 3 (RECO) due to missing disks
SUCCESS: diskgroup RECO was dismounted
NOTE: ASMB mounting group 3 (RECO)
NOTE: ASM background process initiating disk discovery for grp 3 (reqid:0)
WARNING: group 3 (RECO) has missing disks
ORA-15040: diskgroup is incomplete
WARNING: group 3 is being dismounted.

The ASM alert.log had corresponding messages:

2018-03-16 09:11:48.567000 +01:00
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)
NOTE: client COMA1:COMA:odax5-c dismounting group 3 (RECO)

Oh sh… you might think, and that was exactly what I thought at that time. So I checked the ASM diskgroups, disks etc. but did not find anything that could be a problem.

So after a while of thinking, the idea came up that it might be related to the backup metadata in the controlfile. So I checked that and tried to unregister a backup piece manually. I used the undocumented DBMS_BACKUP_RESTORE package for that, so do this at your own risk.

SQL> select RECID, STAMP, SET_STAMP, SET_COUNT, HANDLE, PIECE# from v$backup_piece
2 where handle like '+%' and rownum=1;


    RECID      STAMP  SET_STAMP  SET_COUNT PIECE# HANDLE
--------- ---------- ---------- ---------- ------ ----------------------------------------------------------------------------
   129941  969656433  969656431     130820      7 +RECO/COMAX6/BACKUPSET/2018_03_01/nnndn1_tag20180301t210006_0.2815.969656433

SQL> exec dbms_backup_restore.changebackuppiece( -
2      recid => 129941, -
3      stamp => 969656433, -
4      set_stamp => 969656431, -
5      set_count => 130820, -
6      pieceno => 7, -
7      handle => '+RECO/COMAX6/BACKUPSET/2018_03_01/nnndn1_tag20180301t210006_0.2815.969656433', -
8      status => 'D' -
9	);

During the PL/SQL call I saw exactly one message like the ones above in the alert.log. That explains the behaviour: during the “catalog” call from RMAN, an implicit crosscheck takes place. Since this tries to access the files in the RECO diskgroup, and there is really nothing in that diskgroup except an ACFS volume, this error is being thrown.

That means I needed to get rid of all these records. A simple PL/SQL block helped me do that.

SQL> set serveroutput on 
SQL> begin
2  for rec in (select RECID, STAMP, SET_STAMP, SET_COUNT, HANDLE, PIECE# 
3              from v$backup_piece 
4			  where HANDLE like '+%'
5  ) loop 
6    dbms_output.put_line('deleting ''' ||rec.handle);
7    dbms_backup_restore.changebackuppiece( 
8       recid => rec.recid,
9       stamp => rec.stamp, 
10      set_stamp => rec.set_stamp,
11      set_count => rec.set_count,
12      pieceno => rec.piece#,
13      handle => rec.handle,
14      status => 'D'
15	 );
16   end loop;
17 end;
18 /

It took a while and again caused a lot of messages in both the database and the ASM alert.log, but finally I was able to run RMAN commands successfully again.
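For example, the usual maintenance could then run again (standard RMAN commands):

RMAN> catalog db_recovery_file_dest;
RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;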

Maybe this helps you solve such issues, but be aware of the fact that using DBMS_BACKUP_RESTORE is not supported.

Grid Infrastructure Upgrade to 12.2 requires reboot

Last week I had to upgrade a 2-node Oracle cluster running Grid Infrastructure 12.1.0.2 with the April 2017 Bundlepatch on Oracle Linux 7. The interesting thing is that the cluster uses the ASM Filter Driver (AFD) to present the disks to the GI. Since there were some caveats, I will walk you through the steps that lead to a running 12.2 cluster. Unfortunately, I have no terminal output or screenshots, but I am sure you will get the idea.

First, we updated the nodes OS-wise. At the end we had OL 7.4 with the latest UEK kernel available at the time of patching. That went smoothly.

Second, we installed the new Grid Infrastructure 12.2 software. To do that, we extracted the ZIP to its final location as described in the documentation. Then we ran “gridSetup.sh” from this location, chose “Software only” and selected both nodes. This prepares the Oracle homes on both nodes but does nothing else.

The next step was to patch the GI software to the latest (170118) Bundlepatch. It is generally a good idea to fix as many issues as possible before setting up the cluster. The Bundlepatch provides newer versions of the kernel modules, which is important in our case since we updated the kernel in the first step. But since we did not have a running 12.2 cluster at the time of patching, we could not use the “opatchauto” functionality to apply the patch. Instead, we needed to update OPatch to the latest version on both nodes and then apply the patches that come with the GI Bundlepatch one by one like this:

oracle$ export ORACLE_HOME=/u01/app/grid/12.2.0.1
oracle$ export PATH=$ORACLE_HOME/OPatch:$PATH
oracle$ cd /tmp/27100009
oracle$ cd 26839277
oracle$ opatch apply .
oracle$ cd ../27105253
oracle$ opatch apply .
oracle$ cd ../27128906
oracle$ opatch apply .
oracle$ cd ../27144050
oracle$ opatch apply .
oracle$ cd ../27335416
oracle$ opatch apply .

Note that this was run as the owner of the GI home, “oracle” in our case.

Before running the upgrade, we need to check if there is sufficient space available for the GIMR. Unfortunately, the upgrade process creates the new GIMR in the same diskgroup that is used for storing the OCR and voting disk, even if the GIMR is currently stored in another diskgroup. In contrast, a fresh installation can use a separate diskgroup for the GIMR. So be aware of that.
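A quick way to check the free space up front is asmcmd from the existing environment; a sketch:

oracle$ . oraenv            # select the ASM instance, e.g. +ASM1
oracle$ asmcmd lsdg         # check Free_MB / Usable_file_MB of the OCR/voting diskgroup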

At this point we can start the upgrade process by running “gridSetup.sh” again and selecting the “Upgrade” option. Quickly we come to the point where the root scripts need to run. That is where the fun starts. In our case the “rootupgrade.sh” failed at the step where the AFD driver is being updated.

CLSRSC-400: A system reboot is required to continue installing.

The reason for that is that the “oracleafd” kernel module is in use and thus cannot be unloaded:

[root ~]# lsmod |grep afd
oracleafd             204227  1
[root ~]# modprobe -r oracleafd
modprobe: FATAL: Module oracleafd is in use.

There are similar issues documented on MOS, but none of them matched our scenario and/or patch level.

So a reboot is required, nice. That means our “gridSetup.sh” GUI, which still has some work to do, will go away. Fortunately, the documentation has a solution for that: reboot the node, then run “gridSetup.sh” again and provide a response file. What the documentation does not tell you is that this response file was already created in $ORACLE_HOME/install/response. You can identify the file by its name and timestamp.
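The resume invocation then looks roughly like this (the actual file name is generated by the installer, hence the placeholder):

oracle$ $ORACLE_HOME/gridSetup.sh -responseFile $ORACLE_HOME/install/response/grid_<timestamp>.rsp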

So we went ahead and rebooted the first node. After it was up again, we checked the kernel modules and found “oracleafd” loaded again, but this time we were able to unload it.

[root ~]# lsmod |grep afd
oracleafd             204227  1
[root ~]# modprobe -r oracleafd
[root ~]# lsmod |grep afd
[root ~]# 

Maybe this step is not necessary, but it helped us stay calm at this point. We started “rootupgrade.sh” again, and this time it ran fine without any errors.

The next step was to run “rootupgrade.sh” on the remaining node. It ran into the same issue, so we rebooted the node, unloaded the “oracleafd” kernel module and ran “rootupgrade.sh” again, which then completed fine.

We were now up and running with GI 12.2. The final step is to run “gridSetup.sh” again, as described in the documentation, to finalize the upgrade:

oracle$ $ORACLE_HOME/gridSetup.sh -executeConfigTools -responseFile $ORACLE_HOME/install/response/gridsetup.rsp

That went smoothly, and the cluster was finally upgraded to 12.2. As a last step we reconfigured the GIMR to use its dedicated diskgroup again. This is described in MOS Note 2065175.1 and is quite straightforward.

That’s it for today, I hope it will help you to stay calm during your cluster upgrades.

Data Guard warning: This file is unencrypted

Today I did some functional tests with a newly created Data Guard setup. The database is 12.1.0.2 with the latest (January) Bundlepatch installed and it does not use Oracle Managed Files (OMF).
During these tests I experienced a funny alert.log message at the standby site. I created a tablespace at the primary just to see that it is automatically created at the standby too. The system is German, so please excuse the German messages (“Tablespace wurde angelegt.” simply means “Tablespace created.”), but I think you get the idea.

SQL> create tablespace test datafile 'E:\DATABASE\P4\TABLESPACES\TEST01.dbf' size 1g;

Tablespace wurde angelegt.

The alert.log at the primary looks quite normal:

2018-02-01 09:55:55.679000 +01:00
create tablespace test datafile 'E:\DATABASE\P4\TABLESPACES\TEST01.dbf' size 1g
2018-02-01 09:56:02.806000 +01:00
Completed: create tablespace test datafile 'E:\DATABASE\P4\TABLESPACES\TEST01.dbf' size 1g

But at the standby site I read the following:

2018-02-01 09:56:00.856000 +01:00
WARNING: File being created with same name as in Primary
Existing file may be overwritten
2018-02-01 09:56:05.481000 +01:00
Recovery created file E:\DATABASE\P4\TABLESPACES\TEST01.DBF
WARNING: This file E:\DATABASE\P4\TABLESPACES\TEST01.DBF is created as unencrypted.Please consider encrypting this file!
Datafile 13 added to flashback set
Successfully added datafile 13 to media recovery
Datafile #13: 'E:\DATABASE\P4\TABLESPACES\TEST01.DBF'

None of my tablespaces at the primary site are encrypted or were ever encrypted. A quick search on MOS pointed me to a discussion where a similar situation was solved by simply setting the "db_create_file_dest" parameter. So I tried that at the standby:

SQL> alter system set db_create_file_dest='E:\DATABASE\P4\TABLESPACES';

System wurde geändert.

Then I dropped the tablespace and created it again:

SQL> drop tablespace test including contents and datafiles;

Tablespace wurde gelöscht.

SQL> create tablespace test datafile 'E:\DATABASE\P4\TABLESPACES\TEST01.dbf' size 1g;

Tablespace wurde angelegt.

Obviously there is no change at the primary, so I omit that. But at the standby it now looks like I would have expected it in the first place:

2018-02-01 10:11:23.423000 +01:00
Datafile 13 added to flashback set
Successfully added datafile 13 to media recovery
Datafile #13: 'E:\DATABASE\P4\TABLESPACES\P4B\DATAFILE\O1_MF_TEST_F75PFP2Y_.DBF'

So even though I am not using OMF, I need to set the "db_create_file_dest" parameter to prevent these strange warnings, which would otherwise confuse my monitoring.

I did another test by resetting that parameter and setting "db_file_name_convert" instead, but this did not help to prevent the warning.
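For reference, that second test looked roughly like this (paths as above; since primary and standby use identical paths in this setup, the convert pair maps the directory to itself, and "db_file_name_convert" is static, so the standby needs a restart afterwards):

SQL> alter system set db_create_file_dest='';
SQL> alter system set db_file_name_convert='E:\DATABASE\P4\TABLESPACES','E:\DATABASE\P4\TABLESPACES' scope=spfile;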

Oracle 12.2.0.1 and the UPDATE privilege

Preface

Some days ago we upgraded an application schema from 11.2.0.4 to 12.2.0.1 using a simple Data Pump export/import method. After the upgrade we found some errors inside the application. There are two users; the failing process connects as user A and tries to update data in a table owned by user B. User A has the UPDATE privilege on the table owned by user B, but only UPDATE, nothing else. This setup was fine in 11.2, but it is not anymore in 12.2. The reason is that the default for “sql92_security” has changed from FALSE to TRUE. If this parameter is set to TRUE, the SELECT privilege is required to UPDATE or DELETE rows in a table, as the documentation clearly states.

11.2.0.4 Setup

SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     FALSE


SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production



SQL> create user a identified by a quota unlimited on users;

User created.

SQL> create user b identified by b;

User created.

SQL> grant create session to a;

Grant succeeded.

SQL> grant create session to b;

Grant succeeded.

SQL> grant create table to a;

Grant succeeded.

11.2.0.4 Behaviour

Now I can create a table as user A, grant UPDATE privileges only to user B and thus B is able to change data.

SQL> conn a/a
Connected.
SQL> create table t as select * from all_tables;

Table created.

SQL> grant update on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

12.2.0.1 Setup

I use the same setup, now in a 12.2.0.1 database.

SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     TRUE
SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production              0
PL/SQL Release 12.2.0.1.0 - Production                                                    0
CORE    12.2.0.1.0      Production                                                                0
TNS for Linux: Version 12.2.0.1.0 - Production                                            0
NLSRTL Version 12.2.0.1.0 - Production                                                    0

SQL> create user a identified by a quota unlimited on users;

User created.

SQL> create user b identified by b;

User created.

SQL> grant create session to a;

Grant succeeded.

SQL> grant create session to b;

Grant succeeded.

SQL> grant create table to a;

Grant succeeded.

12.2.0.1 Behaviour

Now I try the same update as above:

SQL> conn a/a
Connected.
SQL> create table t as select * from all_tables;

Table created.

SQL> grant update on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';
update a.t set owner='CHANGED' where owner='SYSTEM'
         *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL92 requires the additional SELECT privilege to allow updates on another user’s tables. The READ privilege is not sufficient. This makes sense somehow, since with the SELECT privilege we are able to lock rows in a table with “SELECT … FOR UPDATE”, which the READ privilege does not allow. Let’s see:

SQL> conn a/a
Connected.
SQL> grant read on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';
update a.t set owner='CHANGED' where owner='SYSTEM'
         *
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> conn a/a
Connected.
SQL> revoke read on t from b;

Revoke succeeded.

SQL> grant select on t to b;

Grant succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

Changing the parameter

Instead of adding privileges, I can also revert the “sql92_security” parameter back to the 11.2 default of FALSE. The parameter is static, so I have to bounce the database.

SQL> conn / as sysdba
Connected.
SQL> alter system set sql92_security=false scope=spfile;

System altered.

SQL> startup force
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  8628936 bytes
Variable Size             658507064 bytes
Database Buffers          398458880 bytes
Redo Buffers                8146944 bytes
Database mounted.
Database opened.
SQL> show parameter sql92_security

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql92_security                       boolean     FALSE
SQL> conn a/a
Connected.
SQL> revoke select on t from b;

Revoke succeeded.

SQL> conn b/b
Connected.
SQL> update a.t set owner='CHANGED' where owner='SYSTEM';

4 rows updated.

SQL> rollback;

Rollback complete.

Conclusion

Be aware of this change. When upgrading to 12.2, check the table privileges and search for users that have UPDATE-only privileges on tables. I recommend adding the SELECT privilege rather than changing the parameter.
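A query like the following can serve as a starting point for that check. It is only a sketch: it looks at direct grants only, ignoring roles and column-level privileges, and DELETE is affected in the same way as UPDATE:

SQL> select p.grantee, p.owner, p.table_name, p.privilege
  2  from dba_tab_privs p
  3  where p.privilege in ('UPDATE', 'DELETE')
  4  and not exists (select 1 from dba_tab_privs s
  5                  where s.grantee = p.grantee
  6                  and s.owner = p.owner
  7                  and s.table_name = p.table_name
  8                  and s.privilege = 'SELECT');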

Recyclebin vs. Autoextend

At the end of last year I gave a presentation at the Nuremberg DOAG regional meetup. One of my talks was about Flashback and un-dropping a table. The question came up whether a tablespace will autoextend or overwrite objects in the recyclebin in case of space pressure. I did not have an answer to that, so I investigated it. And here’s the result.

First I created a user and a tablespace and granted some privileges to the user:

[oracle@oel6u4 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 5 12:02:44 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SYS@perf> create tablespace test datafile size 10m autoextend on next 1m;

Tablespace created.

SYS@perf> create user test identified by test default tablespace test quota unlimited on test;

User created.

SYS@perf> grant create session to test;

Grant succeeded.

SYS@perf> grant create table to test;

Grant succeeded.

SYS@perf> grant select any dictionary to test;

Grant succeeded.

This user can now be used to test the behaviour. First, I created a table that does not fit into the tablespace’s initial size.

SYS@perf> conn test/test
Connected.
TEST@perf> show parameter recyclebin

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
recyclebin                           string      on
TEST@perf> create table test as select * from all_source;

Table created.

The alert.log shows some file resize operations during this table creation:

2018-01-05 12:03:09.686000 +01:00
create tablespace test datafile size 10m autoextend on next 1m
Completed: create tablespace test datafile size 10m autoextend on next 1m
2018-01-05 12:05:08.112000 +01:00
Resize operation completed for file# 6, old size 10240K, new size 11264K
Resize operation completed for file# 6, old size 11264K, new size 12288K
2018-01-05 12:05:09.325000 +01:00
Resize operation completed for file# 6, old size 12288K, new size 13312K

Let’s check the current space usage from the database perspective:

TEST@perf> select sum(bytes),segment_name from user_segments group by segment_name;

SUM(BYTES) SEGMENT_NAME
---------- ----------------------------------------
  12582912 TEST

TEST@perf> select sum(bytes) from dba_free_space where tablespace_name='TEST';

SUM(BYTES)
----------
   1048576

TEST@perf> select sum(bytes) from dba_data_files where tablespace_name='TEST';

SUM(BYTES)
----------
  13631488

So I dropped the table and checked the situation again:

TEST@perf> drop table test;

Table dropped.

TEST@perf> show recyclebin
ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
TEST             BIN$YgZ7U68dFi7gU244qMDQ2Q==$0 TABLE        2018-01-05:12:09:26

TEST@perf> select sum(bytes),segment_name from user_segments group by segment_name;

SUM(BYTES) SEGMENT_NAME
---------- ----------------------------------------
  12582912 BIN$YgZ7U68dFi7gU244qMDQ2Q==$0

TEST@perf> select sum(bytes) from dba_free_space where tablespace_name='TEST';

SUM(BYTES)
----------
  13631488

Ok, the dropped segment is still there, but the whole tablespace is reported as free space.
Now for the funny part: what will happen if I create another table?

TEST@perf> create table test as select * from all_source;

Table created.

TEST@perf> select sum(bytes),segment_name from user_segments group by segment_name;

SUM(BYTES) SEGMENT_NAME
---------- ----------------------------------------
  12582912 TEST

TEST@perf> show recyclebin
TEST@perf>

That means Oracle prefers to overwrite the dropped segment instead of increasing the tablespace size. That makes sense in my opinion: the “flashback to before drop” feature is meant to help in case of emergency when something really bad has happened, not to keep old things around for as long as possible.
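For completeness: as long as the dropped segment is still in the recyclebin, the table can be brought back. A minimal example:

SQL> flashback table test to before drop;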
So if you want to keep your objects, just do not drop them 🙂