
Oracle11g: Zero Sized Unusable Indexes Part II (Nathan Adler) February 27, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes, Unusable Indexes.
12 comments

In my previous post, I discussed how Oracle from 11g R2 onwards will automatically drop the segment and associated storage from unusable index objects. In the comments section, Mohamed Houri asked the excellent question of just how useful this feature will be in real life, when typically indexes are not left in an unusable state for long, perhaps only during large data loads after which such indexes would ordinarily be rebuilt anyway.

I thought the question was worth a separate blog entry to provide a worthy answer.

The first point I would make is that we need to think a little outside the box and consider how such a change in behaviour opens up new possibilities and flexibility in how we index our tables.

For example, a Local Partitioned Index must have the same number of index partitions as the parent table. But what if an index is only useful for the “current” partition, where accessing newish data via an index makes sense, while historical data in “older” partitions is only ever accessed in batch processes via full partition scans? Why have a local index partition for the older partitions when it is never used? Previously we had no real choice: if one or some of the table partitions needed an index, then all the partitions had to be indexed, and even if we made the unnecessary index partitions unusable, we still had to allocate storage for their segments. Now, we can make any unnecessary index partition unusable and no storage at all is allocated to it.
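
As a minimal sketch of that idea (this code is not part of the original post; the table BOWIE_SALES, its partitions and the index name are all hypothetical), a single historical local index partition can simply be marked unusable:

SQL> create table bowie_sales (id number, sale_date date)
  2  partition by range (sale_date)
  3  (partition sales_2007 values less than (to_date('01-JAN-2008','DD-MON-YYYY')),
  4   partition sales_2011 values less than (maxvalue));

SQL> create index bowie_sales_i on bowie_sales(sale_date) local;

SQL> -- only the "current" index partition remains usable; the historical index
SQL> -- partition is made unusable and, from 11g R2, its segment is dropped
SQL> alter index bowie_sales_i modify partition sales_2007 unusable;
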

Taking this a step further, we now have a really nice method of indexing only the portions of a table that actually need indexing. Values that gain no benefit from being indexed (perhaps because they're too numerous to ever be accessed efficiently via an index) no longer need to be indexed at all.

Here’s a classic example. Below is a table with a status flag in which the vast majority of rows have been “processed”. However, we have a few rows, the current rows which are of interest to us, which have not yet been processed (they have a status of another value). We need an index in order to find these few unprocessed rows, but the index must also include all the processed values which are of no interest to us.

 
SQL> create table bowie_stuff (id number, processed varchar2(10));
Table created.

SQL> insert into bowie_stuff select rownum, 'YES' from dual connect by level <= 1000000;
1000000 rows created.

SQL> commit;
Commit complete.

SQL> update bowie_stuff set processed = 'NO' where id in (999990, 999992, 999994, 999996, 999998);
5 rows updated.

SQL> commit;
Commit complete.

SQL> create index bowie_stuff_i on bowie_stuff(processed) pctfree 0;
Index created.

SQL> select index_name, leaf_blocks from dba_indexes where index_name = 'BOWIE_STUFF_I';

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
BOWIE_STUFF_I                         1877

SQL> select segment_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME             BLOCKS
-------------------- ----------
BOWIE_STUFF_I              1920

 
 

Notice how the index is quite large (1,877 leaf blocks) as it needs to hold values for all 1M rows, even though only a relative handful of values within the index are ultimately of any use.

If we now gather stats (note we need to collect histograms as the column value distribution is very skewed) and run a query to select just the 5 rows that have not actually been processed:

 
SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE_STUFF', estimate_percent=>null, cascade=> true, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE_STUFF', estimate_percent=>null, method_opt=> 'FOR COLUMNS PROCESSED SIZE 5');

PL/SQL procedure successfully completed.

SQL> select * from bowie_stuff where processed = 'NO';

Execution Plan
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     5 |    40 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF   |     5 |    40 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_STUFF_I |     5 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
        540  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

 
 

Note the CBO uses the index but the query requires a total of 6 consistent gets.

Not bad, but we could do somewhat better, performing less I/O and significantly reducing both storage overheads and index maintenance operations, if only we didn't have to store the unnecessary index values within the index.

One method could be to create a function-based index based on the decode function and only store non-null values that are of interest. However, this requires the application to likewise use the decode function in order to make use of the index.
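
As a rough sketch of that alternative (this code isn't in the original post and the index name is made up), we could index only the unprocessed rows by mapping everything else to NULL, since entirely NULL keys are not stored in a B-tree index:

SQL> create index bowie_stuff_fbi on bowie_stuff(decode(processed, 'NO', 'NO', null));

SQL> -- the query must reference the exact same expression to use the index
SQL> select * from bowie_stuff where decode(processed, 'NO', 'NO', null) = 'NO';
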

Another method is to use a partitioned index and now with this new Oracle11g feature of zero sized unusable indexes, we don’t need any storage at all for the unwanted indexed values.

Let’s now re-create the index as a globally partitioned index, with one partition defined to contain all the values of interest and another partition defined to contain the vast number of processed values. Initially, the index is created in an unusable state, so no segments and no storage are allocated to any of the partitions:

 
SQL> drop index bowie_stuff_i;

Index dropped.

SQL> create index bowie_stuff_i on bowie_stuff(processed)
  2  global partition by range (processed)
  3  (partition not_processed_part values less than ('YES'),
  4   partition processed_part values less than (MAXVALUE))
  5  unusable;

Index created.

 
 

Next, we’re only going to rebuild the partition containing just the relatively few rows of interest. The partition containing the values that are not of interest is left in an unusable state and so continues to occupy no storage at all:

 
SQL> alter index bowie_stuff_i rebuild partition not_processed_part;

Index altered.

SQL> select index_name, partition_name, leaf_blocks from dba_ind_partitions where index_name = 'BOWIE_STUFF_I';

INDEX_NAME           PARTITION_NAME       LEAF_BLOCKS
-------------------- -------------------- -----------
BOWIE_STUFF_I        PROCESSED_PART                 0
BOWIE_STUFF_I        NOT_PROCESSED_PART             1

SQL> select segment_name, partition_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME         PARTITION_NAME           BLOCKS
-------------------- -------------------- ----------
BOWIE_STUFF_I        NOT_PROCESSED_PART            8

 
 

Note how the index is now tiny (reduced from 1,877 leaf blocks to just 1) as it now only stores the index entries that are of interest. We have just saved ourselves heaps of storage, as the other partition remains unusable and uses no storage at all.

If we now run our query again:

 
SQL> select * from bowie_stuff where processed = 'NO';

Execution Plan
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |     5 |    45 |     1   (0)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE      |               |     5 |    45 |     1   (0)| 00:00:01 |     1 |     1 |
|   2 |   TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF   |     5 |    45 |     1   (0)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN          | BOWIE_STUFF_I |     5 |       |     1   (0)| 00:00:01 |     1 |     1 |
--------------------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

 
 

We notice that the execution plan is just using the tiny index partition and as a result our consistent gets have dropped from 6 to just 4. Additionally, we have not had to change our application at all to use the improved index: it's the exact same query as executed previously.
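
As an aside (a hedged note rather than part of the original demo), queries on the processed values still behave gracefully, because SKIP_UNUSABLE_INDEXES defaults to TRUE and the CBO simply ignores the unusable partition:

SQL> show parameter skip_unusable_indexes

SQL> -- with skip_unusable_indexes = TRUE (the default), the CBO ignores the
SQL> -- unusable PROCESSED_PART partition and full scans the table instead
SQL> select count(*) from bowie_stuff where processed = 'YES';
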

This method can of course be used in Oracle prior to 11g R2, but now with zero sized unusable indexes we do not have to allocate any storage at all to those index partitions we wish to leave in an unusable state for extended or indefinite periods of time. So yes, zero sized unusable indexes can be extremely useful in many real life scenarios 🙂

Oracle11g: Zero Sized Unusable Indexes (Zeroes) February 25, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes, Unusable Indexes.
2 comments

Following on from my previous discussion on “Creation On Demand” segments, Oracle 11g R2 has also introduced storage saving initiatives in relation to unusable indexes.  Starting with a simple Oracle 10g example, we create a table and associated index:
 

 
SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

 
  

If we now make the index unusable:
 

 
SQL> alter index bowie_id_i unusable;
 
Index altered.
 
SQL> select index_name, blevel, leaf_blocks, num_rows, status, dropped from dba_indexes where index_name = 'BOWIE_ID_I';
 
INDEX_NAME     BLEVEL LEAF_BLOCKS   NUM_ROWS STATUS   DRO
---------- ---------- ----------- ---------- -------- ---
BOWIE_ID_I          2        2226    1000000 UNUSABLE NO
 

SQL> select segment_name, bytes, blocks, extents from dba_segments where segment_name = 'BOWIE_ID_I';
 
SEGMENT_NAME      BYTES     BLOCKS    EXTENTS
------------ ---------- ---------- ----------
BOWIE_ID_I     18874368       2304         18

 
  

We notice that the storage associated with the segment remains, even though the data within the index is now totally useless to us. The index definition is of course vital, but why bother continuing to assign 18 extents of storage (in this example) to the index? Oracle 11g Release 2 has by default changed this behaviour.
 
Using the same demo as before but running Oracle11g R2:
 

 
SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
 
SQL> alter index bowie_id_i unusable;
 
Index altered.
 
SQL> select index_name, blevel, leaf_blocks, num_rows, status, dropped from dba_indexes where index_name = 'BOWIE_ID_I';
 
INDEX_NAME     BLEVEL LEAF_BLOCKS   NUM_ROWS STATUS   DRO
---------- ---------- ----------- ---------- -------- ---
BOWIE_ID_I          2        2226    1000000 UNUSABLE NO
 
SQL> select segment_name, bytes, blocks, extents from dba_segments where segment_name = 'BOWIE_ID_I';
 
no rows selected

 
  

We notice that the storage associated with the object is all gone. The index object remains but the underlying segment and storage have been automatically dropped.

If we now look at a partitioning example, where we create 3 types of indexes:
 

 
SQL> CREATE TABLE big_album_sales(id number, album_id number, country_id number,
          release_date date, total_sales number)  PARTITION BY RANGE (release_date)
(PARTITION ALBUMS_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008', 'DD-MON-YYYY')),
 PARTITION ALBUMS_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009', 'DD-MON-YYYY')),
 PARTITION ALBUMS_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010', 'DD-MON-YYYY')),
 PARTITION ALBUMS_2010 VALUES LESS THAN (MAXVALUE));
 
Table created.
  

SQL> INSERT INTO big_album_sales SELECT rownum, mod(rownum,5000)+1, mod(rownum,100)+1, sysdate-mod(rownum,2000), ceil(dbms_random.value(1,500000)) FROM dual CONNECT BY LEVEL <= 1000000;
 
1000000 rows created.
 
SQL> commit;
 
Commit complete.

 
  

We first create a Non-Partitioned Index:
 

 
SQL> CREATE INDEX big_album_tot_sales_i ON big_album_sales(total_sales);
 
Index created.

 
  

Next a Global Partitioned Index:
 

 
SQL> CREATE INDEX big_album_country_id_i  ON big_album_sales(country_id)
     GLOBAL PARTITION BY RANGE (country_id)
     (PARTITION TS1 VALUES LESS THAN (26),
      PARTITION TS2 VALUES LESS THAN (51),
      PARTITION TS3 VALUES LESS THAN (76),
      PARTITION TS4 VALUES LESS THAN (MAXVALUE));
 
Index created.

  

Finally,  a Local Partitioned index:
  

 
SQL> CREATE INDEX big_album_album_id_i ON big_album_sales(album_id) local;
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=> 'BOWIE', tabname=> 'BIG_ALBUM_SALES', estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

 
  

If we now split the last table partition, this will effectively make:
 
1) the Non-Partitioned index unusable
2) all partitions of the Global Partitioned index unusable
3) just the last 2 partitions of the Local Partitioned index unusable
  

 
SQL> ALTER TABLE big_album_sales SPLIT PARTITION ALBUMS_2010
     AT (TO_DATE('01-JAN-2011', 'DD-MON-YYYY'))
     INTO (PARTITION ALBUMS_2010, PARTITION ALBUMS_2011);
 
Table altered.
 
SQL> select index_name, status from dba_indexes where table_name = 'BIG_ALBUM_SALES';
 
INDEX_NAME               STATUS
------------------------ --------
BIG_ALBUM_TOT_SALES_I    UNUSABLE
BIG_ALBUM_COUNTRY_ID_I   N/A
BIG_ALBUM_ALBUM_ID_I     N/A
 
SQL> select index_name, partition_name, status, leaf_blocks from dba_ind_partitions where index_name like 'BIG_ALBUM_%';
 
INDEX_NAME              PARTITION_NAME STATUS   LEAF_BLOCKS
----------------------- -------------- -------- -----------
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2007    USABLE           807
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2008    USABLE           381
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2009    USABLE           383
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2010    UNUSABLE
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2011    UNUSABLE
BIG_ALBUM_COUNTRY_ID_I  TS1            UNUSABLE         629
BIG_ALBUM_COUNTRY_ID_I  TS2            UNUSABLE         629
BIG_ALBUM_COUNTRY_ID_I  TS3            UNUSABLE         629
BIG_ALBUM_COUNTRY_ID_I  TS4            UNUSABLE         629
 
SQL> select segment_name, partition_name, bytes, blocks from dba_segments where segment_name like 'BIG_ALBUM_%' and segment_type like 'INDEX%';
 
SEGMENT_NAME          PARTITION_NAME    BYTES BLOCKS
--------------------- -------------- -------- ------
BIG_ALBUM_ALBUM_ID_I  ALBUMS_2007     7340032    896
BIG_ALBUM_ALBUM_ID_I  ALBUMS_2008     3145728    384
BIG_ALBUM_ALBUM_ID_I  ALBUMS_2009     4194304    512
BIG_ALBUM_TOT_SALES_I                23068672   2816

 
  

We notice that all segments associated with the now unusable Global Partitioned index have been dropped, as have both unusable partitions of the Local Partitioned index. However, the segment and storage associated with the unusable Non-Partitioned index still remain. Perhaps a missing feature for another time …
 
It’s a nice little touch that the unusable and somewhat useless index segments now get automatically cleaned out in Oracle11g R2, although they did previously act as “placeholders” in that nothing else within the tablespace could come along and use the occupied storage.
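
Should any of those unusable index partitions be needed again later, here is a quick sketch (not covered in the demo above) of how they could be rebuilt, at which point their segments get re-created:

SQL> -- rebuild an individual global index partition
SQL> alter index big_album_country_id_i rebuild partition ts1;

SQL> -- rebuild all unusable local index partitions for a given table partition
SQL> alter table big_album_sales modify partition albums_2011
  2  rebuild unusable local indexes;

SQL> -- rebuild the non-partitioned index in its entirety
SQL> alter index big_album_tot_sales_i rebuild;
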

Last Notice For Seminars Scheduled For Vienna and Tallinn Next Month February 18, 2011

Posted by Richard Foote in Oracle Index Seminar.
4 comments

Just received an email from Oracle University stating that numbers are very tight for my Index Internals and Best Practices Seminars scheduled next month for Vienna and Tallinn. Unfortunately, due to the travel time, it will be a case of them both running or neither of them going ahead.

With cancelled events in the past, it’s invariably been the case that people complain later that they were just about to enrol when Oracle University went and cancelled the seminar. These things will unfortunately happen if enrolments are left too late, as Oracle University obviously need time to arrange and confirm travel with the presenter, print materials, and give customers who have already enrolled sufficient notice of a cancellation as they often have made travel arrangements, etc.

So if you are interested in attending, may I suggest you contact your local Oracle University rep ASAP to register your intention before it’s too late.

Details are:

7-8 March 2011- Vienna

10-11 March 2011 – Tallinn

UPDATE: These seminars have now been confirmed and will definitely go ahead. To those attending, looking forward to seeing you all 🙂

Oracle11g Creation On Demand Indexes (Invisible Touch) February 17, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes.
11 comments

Prior to Oracle11g Release 2, the default and minimum size of a segment was one extent. So in the example below, where we create a table and five associated indexes:

 
SQL> create table empty (a number, b number, c number, d number, e number);
 
Table created.
 
SQL> create index empty_a_i on empty(a);
 
Index created.
 
SQL> create index empty_b_i on empty(b);
 
Index created.
 
SQL> create index empty_c_i on empty(c);
 
Index created.
 
SQL> create index empty_d_i on empty(d);
 
Index created.
 
SQL> create index empty_e_i on empty(e);
 
Index created.
 

SQL> select segment_name, blocks, bytes, extents from dba_segments where segment_name like 'EMPTY%';
 
SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
EMPTY               128    1048576          1
EMPTY_A_I           128    1048576          1
EMPTY_B_I           128    1048576          1
EMPTY_C_I           128    1048576          1
EMPTY_D_I           128    1048576          1
EMPTY_E_I           128    1048576          1
 
6 rows selected.

  

Each of the segments has been allocated an extent, including each of the indexes.
 
However, since Oracle11g Release 2, this default behaviour has changed. Running exactly the same demo:
 

 
SQL> create table empty (a number, b number, c number, d number, e number);
 
Table created.
 
SQL> create index empty_a_i on empty(a);
 
Index created.
 
SQL> create index empty_b_i on empty(b);
 
Index created.
 
SQL> create index empty_c_i on empty(c);
 
Index created.
 
SQL> create index empty_d_i on empty(d);
 
Index created.
 
SQL> create index empty_e_i on empty(e);
 
Index created.
 
SQL> select segment_name, blocks, bytes, extents from dba_segments where segment_name like 'EMPTY%';
 
no rows selected

 

We can see that no actual segments have been allocated. The default number of extents when creating an object is now effectively zero. Oracle now defers the creation of the segment and the actual allocation of extents and storage until the point in time when the first row gets inserted.
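
As an aside (and a sketch rather than part of the original demo), this deferral is governed by the DEFERRED_SEGMENT_CREATION initialisation parameter (TRUE by default in 11g R2) and can also be specified explicitly per table; the table names below are made up:

SQL> show parameter deferred_segment_creation

SQL> -- force the old behaviour and allocate the segment immediately
SQL> create table bowie_now (id number) segment creation immediate;

SQL> -- explicitly request the new deferred behaviour
SQL> create table bowie_later (id number) segment creation deferred;
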
 
This means that for packaged applications where a large number of objects get created but relatively few are ever actually used by a specific deployment of the application (eg. SAP), a substantial amount of storage can potentially be saved. It can also save a significant amount of time when deploying such applications, as the overheads associated with creating the never-to-be-used segments are avoided.
 
There are also some subtle performance implications when the application attempts to access some of these “empty” tables. I’m just going to create another table, this one populated with a bunch of rows, and run a query that joins it with the “empty” table:
 

 
SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created. 

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=>true, estimate_percent=>null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'EMPTY', cascade=>true, estimate_percent=>null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

SQL> select * from bowie, empty where bowie.id=empty.a and bowie.id = 42;
 
no rows selected
 

Execution Plan
----------------------------------------------------------
Plan hash value: 1098778158
 
--------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |     1 |    76 |     2   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN         |            |     1 |    76 |     2   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID | EMPTY      |     1 |    65 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | EMPTY_A_I  |     1 |       |     1   (0)| 00:00:01 |
|   4 |   BUFFER SORT                 |            |     1 |    11 |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID| BOWIE      |     1 |    11 |     1   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN          | BOWIE_ID_I |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 

Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("EMPTY"."A"=42)
   6 - access("BOWIE"."ID"=42)
 

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        303  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

 

With a table which has yet to be populated, the CBO knows there can be no rows associated with the table (and hence no rows returned from this query). You can’t get any less than 0 consistent gets. Note that previously the CBO would be forced to perform at the very least one read (if not more), as it couldn’t “know” there were indeed no rows, even with the table statistics set at zero (as they may no longer be accurate), until it actually physically accessed a segment.
 
The segment is not actually created and storage allocated to the object until the first row is inserted into the table. This means that this first insert will be a relatively expensive operation, as the work associated with creating the segment and allocating the initial extent needs to be performed implicitly in the background at this time, not only for the table itself but for any dependent objects as well (eg. indexes).

 
SQL> insert into empty (a, b) values (1,1);
 
1 row created.
 

Execution Plan
----------------------------------------------------------
 
--------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       |     1 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | EMPTY |       |            |          |
--------------------------------------------------------------------------
 

Statistics
----------------------------------------------------------
       3535  recursive calls
        178  db block gets
        830  consistent gets
         26  physical reads
      27272  redo size
        367  bytes sent via SQL*Net to client
        321  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         59  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL> commit;
 
Commit complete.

   

Although it’s only a single row insert, note the high number of recursive calls and logical I/Os due in large part to the segment(s) being implicitly created in the background at this time. Subsequent inserts are nowhere near as expensive:
 

 
SQL> insert into empty (a, b) values (1,1);
 
1 row created.
 

Execution Plan
----------------------------------------------------------
 
--------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       |     1 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | EMPTY |       |            |          |
--------------------------------------------------------------------------
 
Statistics
----------------------------------------------------------
          0  recursive calls
          5  db block gets
          1  consistent gets
          0  physical reads
        692  redo size
        381  bytes sent via SQL*Net to client
        321  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

 

Although the table was populated with values that only directly impacted 2 of the indexes, once the table segment is created, all dependent segments are created at the same time.

 
SQL> select segment_name, blocks, bytes, extents from dba_segments where segment_name like 'EMPTY%';
 
SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
EMPTY               128    1048576          1
EMPTY_A_I           128    1048576          1
EMPTY_B_I           128    1048576          1
EMPTY_C_I           128    1048576          1
EMPTY_D_I           128    1048576          1
EMPTY_E_I           128    1048576          1
 

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'EMPTY', cascade=>true, estimate_percent=>null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
 
SQL> select index_name, blevel, leaf_blocks, status from dba_indexes where index_name like 'EMPTY%';
 
INDEX_NAME     BLEVEL LEAF_BLOCKS STATUS
---------- ---------- ----------- --------
EMPTY_A_I           0           1 VALID
EMPTY_B_I           0           1 VALID
EMPTY_C_I           0           0 VALID
EMPTY_D_I           0           0 VALID
EMPTY_E_I           0           0 VALID

 

 
Note all the index segments have been created although only 2 of them actually contain data.

Finally, there are some implications with regard to how quotas are enforced that may potentially cause issues. Prior to Oracle11g Release 2, if a user tried to create an object in a tablespace on which they didn’t have sufficient privileges, the operation would fail during the creation of the object:

 
SQL> create user muse identified by muse default tablespace user_data temporary tablespace temp;
 
User created.
 
SQL> grant create session, create table to muse;
 
Grant succeeded.
 
SQL> connect muse/muse;
Connected.
SQL> create table fred (id number primary key using index (create index fred_pk on fred(id) tablespace user_data), name varchar2(20));
create table fred (id number primary key using index (create index fred_pk on fred(id) tablespace user_data), name varchar2(20))
*
ERROR at line 1:
ORA-01950: no privileges on tablespace 'USER_DATA'

 

However, with Oracle11g Release 2, as no segment is actually created and hence no storage allocated when an object is created, the creation of such objects will now initially succeed. No quotas can be violated if no storage is actually used during the creation of the object:
 

 
SQL> create user muse identified by muse default tablespace user_data temporary tablespace temp;
 
User created.
 
SQL> grant create session, create table to muse;
 
Grant succeeded.
 
SQL> connect muse/muse
Connected.
 
SQL> create table fred (id number primary key using index (create index fred_pk on fred(id) tablespace user_data), name varchar2(20));
 
Table created.

  

It’s only when the table is first populated, and hence when the segment is actually created and storage allocated, that quota related issues will generate errors:

 
SQL> insert into fred values (1, 'BOWIE');
insert into fred values (1, 'BOWIE')
            *
ERROR at line 1:
ORA-01950: no privileges on tablespace 'USER_DATA'
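
The fix is the same as it would have been previously; a quick sketch (not part of the original demo) of a suitably privileged user granting the missing quota so the insert can then succeed:

SQL> -- run as a privileged user, e.g. SYSTEM
SQL> alter user muse quota unlimited on user_data;
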

 

An interesting change in default behaviour introduced in Oracle11g Release 2 that can potentially be quite beneficial but perhaps also a little dangerous for the unwary.