
Oracle11g: Zero Sized Unusable Indexes Part II (Nathan Adler) February 27, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes, Unusable Indexes.

In my previous post, I discussed how Oracle from 11g R2 onwards will automatically drop the segment and associated storage from unusable index objects. Mohamed Houri asked an excellent question in the comments section: just how useful will this feature be in real life, when indexes are typically not left in an unusable state for long, perhaps only while performing large data loads, after which such indexes would ordinarily be rebuilt anyway?

Thought the question was worth a separate blog entry to provide a worthy answer.

The first point I would make is that we need to think a little outside the box and consider how such a change in behaviour can open up new possibilities and flexibility in how we index our tables.

For example, a Local Partitioned Index must have the same number of index partitions as its parent table. But what if an index is only useful for the “current” partition, where accessing newish data via an index makes sense, while historical data in “older” partitions is only ever accessed in batch processes via full partition scans? Why have a local index partition for older table partitions when such index partitions are never used? Previously we had no real choice: if one or some of the partitions needed an index, then all the partitions had to be indexed, and even if we made the unnecessary index partitions unusable, we still needed to allocate storage for their segments. Now, we can make any unnecessary index partition unusable and no storage at all is allocated to such index partitions.
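As a rough sketch of this local index scenario (the table, column and partition names below are made up purely for illustration and are not part of the original demo), we could create the local index unusable and rebuild only the partition we actually intend to use:

-- Hypothetical range partitioned table where only the "current" partition
-- benefits from an index; all names here are illustrative only.
create table sales (sale_id number, sale_date date, amount number)
partition by range (sale_date)
(partition sales_2009 values less than (to_date('01-JAN-2010','DD-MON-YYYY')),
 partition sales_2010 values less than (to_date('01-JAN-2011','DD-MON-YYYY')),
 partition sales_current values less than (maxvalue));

-- Create the local index unusable: from 11g R2, no segments (and hence
-- no storage) are allocated to any of the index partitions.
create index sales_date_i on sales(sale_date) local unusable;

-- Rebuild only the index partition we actually want to use. The older
-- index partitions remain unusable and continue to occupy no storage.
alter index sales_date_i rebuild partition sales_current;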

Taking this a step further, we now have a really nice method of potentially indexing only those portions of a table that actually need indexing. Values that gain no benefit from being indexed (perhaps because they’re too numerous to ever be accessed efficiently via an index) no longer need to be indexed at all.

Here’s a classic example. Following is a table with a processed flag, in which the vast majority of rows have been “processed”. However, we have a few rows, the current rows that are of interest to us, which have not yet been processed (they may have a status of another value). We need an index in order to find the few rows that have not yet been processed, but the index also has to include all the processed values that are of no interest to us.

 
SQL> create table bowie_stuff (id number, processed varchar2(10));
Table created.

SQL> insert into bowie_stuff select rownum, 'YES' from dual connect by level <= 1000000;
1000000 rows created.

SQL> commit;
Commit complete.

SQL> update bowie_stuff set processed = 'NO' where id in (999990, 999992, 999994, 999996, 999998);
5 rows updated.

SQL> commit;
Commit complete.

SQL> create index bowie_stuff_i on bowie_stuff(processed) pctfree 0;
Index created.

SQL> select index_name, leaf_blocks from dba_indexes where index_name = 'BOWIE_STUFF_I';

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
BOWIE_STUFF_I                         1877

SQL> select segment_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME             BLOCKS
-------------------- ----------
BOWIE_STUFF_I              1920


Notice how the index is quite large (1,877 leaf blocks) as it needs to hold values for all 1M rows, even though only a relative handful of values within the index are ultimately of any use.

If we now gather stats (note we need to collect histograms as the column value distribution is very skewed) and run a query to select just the 5 rows that have not actually been processed:

 
SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE_STUFF', estimate_percent=>null, cascade=> true, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE_STUFF', estimate_percent=>null, method_opt=> 'FOR COLUMNS PROCESSED SIZE 5');

PL/SQL procedure successfully completed.

SQL> select * from bowie_stuff where processed = 'NO';

Execution Plan
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     5 |    40 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF   |     5 |    40 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_STUFF_I |     5 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
        540  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed


Note the CBO uses the index, but the query requires a total of 6 consistent gets.

Not bad, but we could do somewhat better and perform less I/O, significantly reduce storage overheads and significantly reduce index maintenance operations, if only we didn’t store the unnecessary index values within the index.

One method would be to create a function-based index based on the decode function, so that only the (non-null) values of interest are actually stored within the index. However, this requires the application to likewise use the decode expression in order to make use of the index.
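As a rough sketch of that decode approach (the index name below is made up and this example is not part of the original demo), only the ‘NO’ values would generate index entries, with the processed rows mapping to null and so not being indexed at all:

-- Hypothetical function-based index: processed rows map to null and so
-- create no index entries; only the few 'NO' rows are stored.
create index bowie_stuff_decode_i on bowie_stuff(decode(processed, 'NO', 'NO', null));

-- The application must reference the same decode expression for the
-- index to be considered by the CBO.
select * from bowie_stuff where decode(processed, 'NO', 'NO', null) = 'NO';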

Another method is to use a partitioned index. With this new Oracle11g feature of zero sized unusable indexes, we now don’t need any storage at all for the unwanted indexed values.

Let’s now re-create the index as a globally partitioned index, with one partition defined to contain all the values of interest and another partition defined to contain the vast number of processed values. Initially, the index is created in an unusable state, so no segments and hence no storage are allocated to any of the partitions:

 
SQL> drop index bowie_stuff_i;

Index dropped.

SQL> create index bowie_stuff_i on bowie_stuff(processed)
  2  global partition by range (processed)
  3  (partition not_processed_part values less than ('YES'),
  4   partition processed_part values less than (MAXVALUE))
  5  unusable;

Index created.


Next, we’re only going to rebuild the partition containing just the relatively few rows of interest. The partition containing the values that are not of interest is left in an unusable state and so continues to occupy no storage at all:

 
SQL> alter index bowie_stuff_i rebuild partition not_processed_part;

Index altered.

SQL> select index_name, partition_name, leaf_blocks from dba_ind_partitions where index_name = 'BOWIE_STUFF_I';

INDEX_NAME           PARTITION_NAME       LEAF_BLOCKS
-------------------- -------------------- -----------
BOWIE_STUFF_I        PROCESSED_PART                 0
BOWIE_STUFF_I        NOT_PROCESSED_PART             1

SQL> select segment_name, partition_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME         PARTITION_NAME           BLOCKS
-------------------- -------------------- ----------
BOWIE_STUFF_I        NOT_PROCESSED_PART            8


Note how the index is now tiny (reduced from 1,877 leaf blocks to just 1) as it now stores only the index entries that are of interest. We have just saved ourselves heaps of storage as the other partition remains unusable and uses no storage at all.
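If we want to double-check which partitions remain unusable (an additional sanity check, not part of the original demo), the status column in dba_ind_partitions reports it directly:

-- Confirm that only NOT_PROCESSED_PART is usable; PROCESSED_PART should
-- still report as UNUSABLE and so have no associated segment.
select partition_name, status from dba_ind_partitions where index_name = 'BOWIE_STUFF_I';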

If we now run our query again:

 
SQL> select * from bowie_stuff where processed = 'NO';

Execution Plan
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |     5 |    45 |     1   (0)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE      |               |     5 |    45 |     1   (0)| 00:00:01 |     1 |     1 |
|   2 |   TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF   |     5 |    45 |     1   (0)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN          | BOWIE_STUFF_I |     5 |       |     1   (0)| 00:00:01 |     1 |     1 |
--------------------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed


We notice that the execution plan is just using the tiny index partition and as a result we have reduced our consistent gets down from 6 to just 4. Additionally, we have not had to change our application at all to use the improved index: it was the exact same query as executed previously.

This method can of course be used in Oracle prior to 11g R2, but now with zero sized unusable indexes we do not have to allocate any storage at all to those index partitions that we may wish to leave in an unusable state for extended or indefinite periods of time. So yes, zero sized unusable indexes can be extremely useful in many real life scenarios :)
