
Indexes And Small Tables Part IV (Treefingers) May 5, 2009

Posted by Richard Foote in Index Block Splits, Index Internals, Oracle Indexes, Root Index Block, Small Indexes.

As I asked in my previous post, the key question when comparing the costs of accessing a small table via a Full Table Scan (FTS) vs. an index scan is this: why does Oracle visit the segment header during a FTS but not during an index scan?

The answer all comes down to understanding why Oracle must visit the table segment header during a FTS and how Oracle can avoid visiting the index segment header during an index scan.

Oracle must visit the table segment header during a FTS because it contains vital information necessary to perform the FTS, namely the extent map and the High Water Mark (HWM) associated with the table. Even with a table that only contains 1 data block worth of rows, as in the SMALL table in my examples, Oracle has no way of automatically determining there’s actually only the one block worth of data. It has to look this up somewhere to determine exactly which table blocks need to be accessed during the FTS operation, and the table segment header contains this necessary metadata. During “most” FTS operations, which are generally speaking larger, “expensive” operations, these accesses to the table segment header constitute a relatively small overhead. However, for FTS operations on “small” tables, accessing the table segment header can actually be a significant proportion of the overall associated costs.
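To see this overhead for yourself, a quick sketch along the following lines will do (the SMALL table is the one from the earlier posts in this series, the predicate is purely illustrative and AUTOTRACE is simply a convenient way of viewing the session statistics):

SQL> set autotrace traceonly statistics

SQL> select /*+ full(s) */ * from small s where id = 42;

The consistent gets reported for the FTS include the visit(s) to the table segment header, over and above the single data block that actually contains the rows.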

During an index scan operation, there’s nothing of interest within the index segment header. The critical index block, the block from which all index scans must start, is the root block of the index (excepting Fast Full Index Scans, which are basically the FTS equivalent for indexes). There’s no need to access the index segment header because it’s the root block that actually contains all the necessary information with which to start the index scan operation. The root block contains the pointers to the subsequent index blocks (be they branch or leaf blocks) that Oracle needs to follow in order to find the index entry of interest. The key to starting an index scan is therefore determining the location of the index root block.

But how can Oracle determine the location of the index root block?

Well Oracle implements a little “trick”, a golden rule with regard to indexes that doesn’t change regardless of the Oracle version, regardless of the O/S version, regardless of the type of tablespace or tablespace option of the index and regardless of how the index is created or grows and block splits over time.

The index root block is always, always, always the block immediately after the index segment header.

Always.

Therefore, when the Oracle code issues the associated function calls to perform an index scan, the first index block that Oracle accesses is the index segment header plus an offset of 1. Whereas a FTS accesses the table segment header, an index scan accesses the index segment header block id plus 1.

With a tiny index that only has a level of 0 (or height of 1), note there is no “root” block as such, as all the index entries can fit within the one index leaf block. However, this block, the one and only leaf block within the index structure, is also always the block immediately after the index segment header.

Always.

When we add more index entries into this one and only leaf block, we’ll eventually reach a point when it’s full and Oracle must perform an index block split operation. Oracle will then allocate 2 new blocks to the index. Assuming a 50-50 block split, one of these new blocks is assigned the lower 1/2 of all the current index entry values and the other new block is assigned the other upper 1/2 of the current index entry values. The original index leaf block content is then cleaned out and reassigned with just relative block address pointers and value boundaries associated with the 2 new leaf blocks.

The original index leaf block has been “reborn” as the root block of the index.

It’s quite easy to demonstrate how the original index block in a level 0 index or the root block of an index never changes and is always the block that follows the index segment header.

First, just create a little table and associated index:

SQL> CREATE TABLE same_root (id NUMBER, name VARCHAR2(30));

Table created.

SQL> INSERT INTO same_root VALUES (1, 'The Thin White Duke');

1 row created.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX same_root_i ON same_root(name);

Index created.

If we dump the block immediately following the index segment header, we can confirm it’s our index block of interest, containing our one and only index entry:

SQL> select header_file, header_block from dba_segments where segment_name='SAME_ROOT_I';

HEADER_FILE HEADER_BLOCK
----------- ------------
          5       107441

SQL> alter system dump datafile 5 block 107442;

System altered.

Following is an extract of the index block dump:

Leaf block dump
===============
header address 98959964=0x5e6025c
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 1
kdxcofbo 38=0x26
kdxcofeo 8007=0x1f47
kdxcoavs 7969
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8036
row#0[8007] flag: ------, lock: 0, len=29
col 0; len 19; (19):  54 68 65 20 54 68 69 6e 20 57 68 69 74 65 20 44 75 6b 65
col 1; len 6; (6):  01 41 a3 aa 00 00
----- end of leaf block dump -----

Note that it is indeed our one and only index leaf block.

If we now take a treedump of the index:

SQL> SELECT object_id FROM dba_objects where object_name = 'SAME_ROOT_I';

 OBJECT_ID
----------
     67721

SQL> ALTER SESSION SET EVENTS 'immediate trace name treedump level 67721';

Session altered.

 

Following is the treedump output:

----- begin tree dump
leaf: 0x141a3b2 21078962 (0: nrow: 1 rrow: 1)
----- end tree dump

We note that the relative block address of our one and only index leaf block is 21078962.

If we now add a whole bunch of new rows to the table so that the leaf block can no longer hold all the index entries, thereby forcing the index to block split and grow:

SQL> insert into same_root select rownum+1, 'David Bowie' from dual connect by level <=100000;

100000 rows created.

SQL> commit;

Commit complete.

And now take a block dump of the same index block:

SQL> alter system dump datafile 5 block 107442;

System altered.

Branch block dump
=================
header address 98959940=0x5e60244
kdxcolev 2
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 2
kdxconro 2
kdxcofbo 32=0x20
kdxcofeo 8016=0x1f50
kdxcoavs 7984
kdxbrlmc 21238606=0x144134e
kdxbrsno 1
kdxbrbksz 8060
kdxbr2urrc 0
row#0[8038] dba: 21238607=0x144134f
col 0; len 11; (11):  44 61 76 69 64 20 42 6f 77 69 65
col 1; len 5; (5):  01 41 be 5f 01
row#1[8016] dba: 21238769=0x14413f1
col 0; len 11; (11):  44 61 76 69 64 20 42 6f 77 69 65
col 1; len 5; (5):  01 44 12 ba 01
----- end of branch block dump -----

We notice that the block is no longer a leaf block but has changed itself into an index branch block.

But not just any branch block. If we now take a new treedump:

SQL> ALTER SESSION SET EVENTS 'immediate trace name treedump level 67721';

Session altered.

branch: 0x141a3b2 21078962 (0: nrow: 3, level: 2)
   branch: 0x144134e 21238606 (-1: nrow: 161, level: 1)
      leaf: 0x141a3b3 21078963 (-1: nrow: 179 rrow: 179)
      leaf: 0x141a3b4 21078964 (0: nrow: 179 rrow: 179)
      leaf: 0x141a3b5 21078965 (1: nrow: 179 rrow: 179)
      leaf: 0x141a3b6 21078966 (2: nrow: 179 rrow: 179)
      leaf: 0x141a3b7 21078967 (3: nrow: 179 rrow: 179)

….

The above partial listing of the treedump clearly shows that the index has grown from a level 0 index to a level 2 index and that the root block is the very same block as the original leaf block, as it has the same relative block address as before (21078962).

Indeed the original leaf block is now the index root block which is still the same block that immediately follows the index segment header.

Because Oracle doesn’t have to visit the index segment header and can simply access the block immediately following it directly, this block being always the first index block of interest when performing an index scan, the index scan has that little advantage over the FTS. And it’s this little advantage that can give the index scan the edge over a FTS, even when we’re potentially accessing data from a very small table.

And you can’t get much smaller than a table that has all its rows in the one table block.

So far though, the example I’ve shown has been a “normal”, everyday, non-unique index that has a 1 consistent get advantage over the FTS when accessing a row of interest. I’ll next discuss how indexes can have an even bigger edge and a more significant advantage over a FTS of a tiny table than just the 1 consistent get …

Differences Between Unique and Non-Unique Indexes Part 4.5 (Fix You) March 30, 2009

Posted by Richard Foote in Fragmented Indexes, Index Internals, Non-Unique Indexes, Oracle Indexes, Unique Indexes.

In my last post, Part IV in this series, we looked at how a Unique Index can reuse space deleted within the same logical transaction, whereas a Non-Unique Index cannot. Deleted space within a Non-Unique index can only be reused by subsequent transactions.

It’s sometimes important to appreciate this distinction because as discussed in the various OTN and Ask Tom threads mentioned in Part IV, there are times when this can make a significant difference to the manageability and efficiency of the resultant index.

Now, it’s not every day someone might, for example, delete all rows in a table and repopulate it again within a single transaction (the TRUNCATE command was of course developed for a reason). However, perhaps an application was developed without your involvement, perhaps a large proportion but not all of the data is being deleted or, as someone mentioned on OTN, perhaps the table in question is a Materialized View being fully refreshed within a refresh group. There could therefore be occasions when a single transaction might indeed perform a large delete followed by a similarly sized insert.

In which case, whether an index is defined as Unique or Non-Unique might make a difference …

To begin with, let’s populate a table with 1M rows and create an associated Unique index:

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create unique index bowie_idx on bowie(id);

Index created.

 

Let’s look at the size of  this newly created Unique index:

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2176       2087           0

 

OK, let’s now delete the entire table and repopulate it again, within the same logical transaction:

SQL> delete bowie;

1000000 rows deleted.

SQL> insert into bowie select rownum, 'PINK FLOYD' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

 

Let’s look at the size difference for the Unique Index and see how many deleted index entries we have as a result:

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2176       2087           0

 

OK good, the index is actually identical in size and we have no deleted entries, not a one. All the deleted entries as a result of the delete command have been reused by the subsequent insert statement. This means of course that the index is just as efficient now after all this DML activity, as it was when the index was first created.

 

Let’s perform exactly the same demo, but this time with a Non-Unique index and see any differences …

SQL> drop table bowie;

Table dropped.

SQL> create table bowie (id number, name varchar2(20));

Table created.

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_idx on bowie(id);

Index created.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2304       2226           0

 

The first difference we notice is that the Non-Unique index, when it has just been created, is somewhat larger than the equivalent Unique index (2226 leaf blocks vs. 2087 leaf blocks). This is a direct result of the Non-Unique index having to store an extra length byte for the rowid, the rowid effectively being an additional index column, for each and every one of the 1M index entries.

SQL> delete bowie;

1000000 rows deleted.

SQL> insert into bowie select rownum, 'PINK FLOYD' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      4608       4518     1000000

OK not quite so good, big difference here. Previously, the Unique Index remained unchanged and had no deleted index entries. However, the Non-Unique index is now effectively double the size it was previously and has 1M deleted index entries still within the index structure. Not a one was recycled and reused within the logical transaction.

This index is now potentially problematic, especially if there are going to be few or no subsequent inserts until it next gets refreshed, at which point the existing deleted entries can be reused but the current entries may in turn remain in the index after they’ve been deleted.

Again, it’s important to understand what is going on here so one can take the appropriate administration steps. Perhaps it might be better to drop the index and recreate it after the transaction (if permitted). Perhaps the truncate command isn’t such a bad idea after all (if permitted). Perhaps it might be better to police the Unique constraint with a Unique rather than a Non-Unique index after all.
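As a rough sketch, rebuilding (or coalescing) the bloated index after such a transaction would tidy things up (index name as per the demo above; whether this is actually permitted in your environment is of course another matter):

SQL> alter index bowie_idx rebuild;

or, to clean out the deleted entries without a full rebuild:

SQL> alter index bowie_idx coalesce;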

Perhaps, it might be better to not perform the above within a single transaction and issue an intermediate commit after all (if permitted) …

 

SQL> drop table bowie;

Table dropped.

SQL> create table bowie (id number, name varchar2(20));

Table created.

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_idx on bowie(id);

Index created.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2304       2226           0

SQL> delete bowie;

1000000 rows deleted.

Because if we just issue the commit at this point in the process …

SQL> commit;

Commit complete.

SQL> insert into bowie select rownum, 'PINK FLOYD' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2304       2226           0

 

We would not have this problem, as the subsequent transaction that performs the insert can reuse all the deleted space associated with the first delete transaction.

If one understands how indexes work and understands how deleted space can be reused, one can prevent many potential issues and unnecessary maintenance tasks.

Prevention is always the best cure …

Differences Between Unique and Non-Unique Indexes Part IV (Take It Back) March 25, 2009

Posted by Richard Foote in Index Internals, Non-Unique Indexes, Oracle Indexes, Unique Indexes.

I’ve previously discussed various differences between Unique and Non-Unique indexes (Part I, Part II and Part III) and why I have a preference to implement Unique indexes whenever possible and practical.

Various recent discussions on the OTN forums and on Ask Tom reminded me that I hadn’t yet discussed on this blog another subtle, but potentially significant difference between Unique and Non-Unique indexes.

As I’ve previously discussed, there’s actually no such thing as a Non-Unique index entry as such, as Oracle ensures all index entries are effectively unique by adding the rowid to the index key for all Non-Unique indexes. Fundamentally, this is essential because Oracle needs some way of efficiently finding the precise index entry associated with an update or delete operation. Without having the rowid as part of the index key, Oracle would be forced to navigate to the first occurrence of an index value and search through all occurrences of that value until it finds the specific entry containing the rowid of interest. This could potentially result in visiting many leaf blocks if the index value spans multiple leaf blocks. By including the rowid as the last index key column, non-unique index values are further ordered based on the corresponding rowid within the same indexed values. Oracle can therefore always navigate directly to the leaf block containing the exact index entry of interest, as the rowid can be included in the branch blocks to determine both the index entry and rowid ranges found in specific leaf blocks.

If we look at a simple example by creating a one row table with a Non-unique index:

SQL> create table bowie (id number, name varchar2(20));

Table created.

SQL> insert into bowie values (1, 'BOWIE');

1 row created.

SQL> commit;

Commit complete.

SQL> create index bowie_idx on bowie(id);

Index created.

SQL> select header_file, header_block from dba_segments where segment_name = 'BOWIE_IDX';

HEADER_FILE HEADER_BLOCK
----------- ------------
          7       124937

Let’s dump the index block …

SQL> alter system dump datafile 7 block 124938;

System altered.
Leaf block dump
===============
header address 425713756=0x195fe05c
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 1
kdxcofbo 38=0x26
kdxcofeo 8024=0x1f58
kdxcoavs 7986
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8036
row#0[8024] flag: ------, lock: 0, len=12
col 0; len 2; (2):  c1 02
col 1; len 6; (6):  01 c1 e7 8a 00 00

Notice how the rowid is an additional index column within the index entry for the Non-Unique index.

Now if we were to delete and subsequently re-insert a row in the table with the same index value within a single transaction, note the rowid of the new row will by definition differ from that of the deleted row. Therefore, we would need a different index entry for the new row, because if the rowids differ, then the associated index entries must differ as well. Note also (and this is critical) that because we would have a different rowid, if we had multiple index entries with the same key, this new index entry might not be in the same logical order as the deleted index entry. In fact, it’s quite possible that the new index entry might actually need to be stored in a totally different leaf block if this specific index value spanned multiple index leaf blocks, because the index entries, including the rowids, must always be logically ordered.

Therefore, if we were to delete and re-insert the same index value within a single transaction, Oracle is forced to create a new index entry and will not reuse the existing, deleted index entry.

So continuing with the demo, let’s delete the row:

SQL> delete bowie;

1 row deleted.

and now re-insert a row with the same indexed value within the same transaction:

SQL> insert into bowie values (1, 'THIN WHITE DUKE');

1 row created.

SQL> commit;

Commit complete.

If we look at a block dump now …

Leaf block dump
===============
header address 425713756=0x195fe05c
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 2
kdxcofbo 40=0x28
kdxcofeo 8012=0x1f4c
kdxcoavs 7972
kdxlespl 0
kdxlende 1
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8036
row#0[8024] flag: ---D--, lock: 2, len=12
col 0; len 2; (2):  c1 02
col 1; len 6; (6):  01 c1 e7 8a 00 00
row#1[8012] flag: ------, lock: 2, len=12
col 0; len 2; (2):  c1 02
col 1; len 6; (6):  01 c1 e7 8a 00 01
----- end of leaf block dump -----

We notice the previous index entry has been logically deleted and Oracle has created a new index entry with the new associated rowid.

 

Let’s now run exactly the same demo again, but this time with a Unique index instead of the Non-Unique index …

SQL> drop table bowie;

Table dropped.

SQL> create table bowie (id number, name varchar2(20));

Table created.

SQL> insert into bowie values (1, 'BOWIE');

1 row created.

SQL> commit;

Commit complete.

SQL> create unique index bowie_idx on bowie(id);

Index created.

SQL> select header_file, header_block from dba_segments where segment_name = 'BOWIE_IDX';

HEADER_FILE HEADER_BLOCK
----------- ------------
          7       125193

SQL> alter system dump datafile 7 block 125194;

System altered.

Leaf block dump
===============
header address 371859548=0x162a205c
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 1
kdxcosdc 0
kdxconro 1
kdxcofbo 38=0x26
kdxcofeo 8025=0x1f59
kdxcoavs 7987
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 6
kdxlebksz 8036
row#0[8025] flag: ------, lock: 0, len=11, data:(6):  01 c1 e8 8a 00 00
col 0; len 2; (2):  c1 02
----- end of leaf block dump -----

 

Notice the big difference here. Because the index has been defined as Unique, all the associated index entries must be unique. It’s simply not possible to have duplicate index entries within a Unique index structure. Therefore, it’s not necessary to have the rowid as a separate column of the index entry as the index values themselves are sufficient to uniquely identify each and every index entry. The rowid is basically just another piece of overhead associated with the index entry rather than a separate index column. The length of this unique index entry is just 11 bytes, where it was previously 12 bytes, because we no longer need to store the length byte associated with the second index column necessary in the Non-unique index for the rowid.

And now comes the subtle difference …

If we were to now delete and re-insert the same index value within a single transaction, Oracle can now reuse the same, deleted index entry, because the index entry is effectively identical to the deleted one. The only possible difference is the rowid but the rowid is no longer a part of the index column list and so can just be updated as necessary. Note also (and this is the critical bit for Unique indexes), because the actual index value remains the same, the order of the index entry within the index must also remain the same. There is no need to move the re-inserted index entry to another part of the index structure because deleting and re-inserting the same index entry does not logically alter the order of where the index entry must reside.

Therefore, if we were to delete and re-insert the same index value within a single transaction, Oracle does not need to create a new index entry and can simply reuse the existing, deleted index entry.

So continuing with the demo, let’s delete the row:

SQL> delete bowie;

1 row deleted.

and now re-insert a row with the same indexed value within the same transaction:

SQL> insert into bowie values (1, 'THIN WHITE DUKE');

1 row created.

SQL> commit;

Commit complete.

If we look at a block dump now …

Leaf block dump
===============
header address 371859548=0x162a205c
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 1
kdxcosdc 0
kdxconro 1
kdxcofbo 38=0x26
kdxcofeo 8025=0x1f59
kdxcoavs 7987
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 6
kdxlebksz 8036
row#0[8025] flag: ------, lock: 2, len=11, data:(6):  01 c1 e8 8a 00 01
col 0; len 2; (2):  c1 02
----- end of leaf block dump -----

We note that Oracle has indeed reused the previously deleted index entry and has simply updated the rowid with the new rowid value. There is no deleted index entry, Oracle has simply changed the associated rowid to that of the new row in the table for the existing Unique index entry.

Whereas an update of an index entry is effectively a delete followed by an insert of an index entry, notice that by contrast, for Unique indexes, a delete and re-insert operation is effectively an update of an index entry !!

In my next post I’ll highlight how this difference can be critical to the behaviour and efficiency of an index and why it’s important to understand how indexes work to avoid and prevent potential issues.

Larger Block Tablespace For Indexes Revisited Part III (Prove Yourself) March 2, 2009

Posted by Richard Foote in Index Block Size, Index Height, Index Internals, Index Rebuild, Oracle Indexes, Oracle Myths.

Time to look a little at an Index Fast Full Scan (IFFS) with respect to moving indexes into a larger block tablespace.

To start, a few points about an IFFS. When performing an IFFS, Oracle will read each and every block in the index structure, including all branch and leaf blocks (up to the HWM), using multi-block read operations. So although an IFFS has its place, it’s a relatively expensive operation, especially if the index is large, as it needs to read all blocks in the current index structure. Therefore, it’s not generally considered as “common” an operation as the traditional index range scan, especially in OLTP environments. Effectively, an IFFS is the index equivalent of a Full Table Scan (FTS) and is performed in a similar fashion. During an IFFS, Oracle basically treats the index as if it were a smaller version of the table from which it can extract the necessary data.
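As an aside, forcing and confirming an IFFS is easy enough. A rough sketch, assuming the BOWIE table and BOWIE_IDX index from Robin’s demo:

SQL> explain plan for select /*+ index_ffs(bowie bowie_idx) */ count(*) from bowie;

SQL> select * from table(dbms_xplan.display);

The execution plan should report an INDEX FAST FULL SCAN of BOWIE_IDX rather than a FULL TABLE SCAN of the BOWIE table.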

So if an IFFS were indeed to be very common and could be performed so much more effectively in a larger block tablespace, this begs the question of why not move tables into a larger block tablespace as well, so a FTS can enjoy the same benefits as the index. And if you move both tables and indexes into a larger block tablespace, why not create the database with the larger block size and be done with it? Which brings us around to the question of selecting an appropriate block size for the database, which is a different discussion to the so-called benefits of just moving indexes into a larger block tablespace. For another day perhaps …

Now, although Robin’s demo clearly shows consistent gets are basically halved when an index is rebuilt with double the block size (which all sounds very impressive), what it doesn’t highlight is that during an IFFS, approximately the same amount of data is still actually being read. Although there are certainly some efficiencies in having fewer logical reads, if each consistent get is more expensive while the overall volume of data being read is similar, you’re simply not going to get the enormous performance boost that simply halving consistent gets might suggest. Claims that things will suddenly run twice as fast (or 120 times as fast) purely by moving indexes into a larger block tablespace are, shall we say, “exaggerated” to say the least. However, as all index blocks within an index are being read with no subsequent table accesses, an IFFS is going to be as good as it gets when we talk about any possible performance improvements of moving indexes into a larger block tablespace.

Again, it’s all very simple to “give it a go” and see for yourself exactly what improvements one might expect. Move an index into a larger block tablespace and see those IFFS fly. Using exactly the same setup from Robin’s demo as in Parts I and II, let’s see the actual differences …

First, get some current session stats:

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                             VALUE
---------------------------- ---------
CPU used by this session         35856
consistent gets               10685006
physical reads                    2582
index fast full scans (full)     33003

Now execute 1000 IFFS using the index in an 8K block tablespace …
SQL> declare
  2  v_count number;
  3  begin
  4    for i in 1..1000 loop
  5       select /*+ index_ffs(bowie) */ count(*) into v_count from bowie;
  6    end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:11.23

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                             VALUE
---------------------------- ---------
CPU used by this session         36952
consistent gets               11106006
physical reads                    2582
index fast full scans (full)     34003

 
Note that response times are roughly 11.23 secs, consisting mostly of CPU at 10.96 secs. Note also that consistent gets have increased by 421,000, which is 1000 x 421 as one would expect from Robin’s demo. Now let’s see how things differ when we move the index into a 16K block tablespace.

 
SQL> alter index bowie_idx rebuild tablespace ts_16k;

Index altered.

 
SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                             VALUE
---------------------------- ---------
CPU used by this session         43192
consistent gets               12372513
physical reads                    2789
index fast full scans (full)     40003

SQL> declare
  2  v_count number;
  3  begin
  4    for i in 1..1000 loop
  5       select /*+ index_ffs(bowie) */ count(*) into v_count from bowie;
  6    end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:11.14

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                               VALUE
----------------------------- ----------
CPU used by this session           44260
consistent gets                 12583513
physical reads                      2789
index fast full scans (full)       41003

 
Note there’s only a slight improvement in response times and CPU usage of a couple of percentage points, even though the number of consistent gets has halved. Now your mileage may vary a tad, but what you will not see is things suddenly using 1/2 the resources or running twice as fast (or running 120 times as fast). Because at the end of the day, both sets of IFFS are reading and processing approximately the same amount of data.

Note also that any such possible minor improvements in IFFS performance are likely going to be cancelled out, or more likely overtaken, by the increased costs associated with more typical index range scans due to the index not actually reducing in height, as discussed in Part II. Again, what’s more common, smaller index range scans or IFFS?

Robin’s demo and the demo above however assume the index is fully cached, with no physical I/O (PIO) operations being performed. Now, as an IFFS needs to read the entire index, it’s quite likely that an IFFS would require some physical I/O, especially if the index is large. Surely then, if an index is in a larger block tablespace, such physical I/Os would be more efficient because Oracle would be able to read more data with a multi-block read based on the larger block size?

The answer is unfortunately no.  Another (rather poor) analogy to get the point across.

You go to the bank to take some money out. You’re only allowed to take out a maximum of $1000 per day from your account. You ask for this in $50 notes and you get 20 x $50 = $1000 in total. Your mate wants to take out more money and asks for the money in $100 notes. However, as the limit is set to $1000 a day, regardless of the denomination of the bank notes, he simply gets 10 x $100 = $1000. You both end up with exactly the same amount of money, it’s just that you have more bank notes.

An Oracle multi-block read works in exactly the same manner. There’s a maximum amount of data you can read which is calculated by the default block size x db_file_multiblock_read_count. If the default block size is 8K and the db_file_multiblock_read_count = 8, then the maximum size of a multiblock read is 8K x 8 = 64K.

If you attempt to perform a multi-block read using a non-default block size, Oracle simply divides the 64K by the default block size to determine how many blocks to read. So if you now attempt to perform a multi-block read from a 16K tablespace, Oracle will only read 16K x 4 blocks = 64K. The I/O size is the same regardless of the blocksize.

And this makes perfect sense. If you’ve tuned the size of a multi-block read perfectly with the default block size in mind, why would you want to (say) suddenly double this perfect size when you double the block size in another tablespace? If doubling the multi-block read size were to suddenly improve things, why not also double the multi-block read size for the default block size as well …

Again, it’s very easy and simple to test how changing the block size in a specific tablespace makes no difference to the behaviour of an associated multi-block read.

First, flush the buffer cache and trace a session while performing a multi-block read via an IFFS, beginning with the default 8K block size index:

SQL> show parameter db_file_m

NAME                             TYPE VALUE
----------------------------- ------- -----
db_file_multiblock_read_count integer     8

SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> select /*+ index_ffs(bowie) */ count(*) from bowie;
 
The trace file will show that blocks=8 are being read during the IFFS multiblock read operation as the following extract from the trace file highlights:

 
WAIT #2: nam='db file scattered read' ela= 22897 file#=7 block#=38929 blocks=8 obj#=95742 tim=1217521768059
WAIT #2: nam='db file scattered read' ela= 47158 file#=7 block#=38937 blocks=8 obj#=95742 tim=1217521816003
WAIT #2: nam='db file scattered read' ela= 37776 file#=7 block#=38945 blocks=8 obj#=95742 tim=1217521854530

However, when you trace a session as it performs the IFFS with the 16K block size index:

WAIT #1: nam='db file scattered read' ela= 55181 file#=6 block#=1030 blocks=4 obj#=95742 tim=867071895617
WAIT #1: nam='db file scattered read' ela= 68932 file#=6 block#=1034 blocks=4 obj#=95742 tim=867071965238
WAIT #1: nam='db file scattered read' ela= 67136 file#=6 block#=1038 blocks=4 obj#=95742 tim=867072142114
 

We notice that Oracle is now only reading 4 blocks at a time, ensuring that the same 64K is read each time. So increasing the blocksize of an index doesn’t suddenly impact the manner or efficiency in which multi-block PIOs are performed.

Next, we’ll perform a similar test as before, comparing the difference between the 8K and 16K block index during an IFFS, but this time with PIOs introduced. There are various ways one could do this: make the buffer cache too small to fit the index, or make the index too big for the buffer cache. Keeping with Robin’s demo, I’ll use exactly the same index definitions but this time run the following procedure in another session that constantly flushes the buffer cache. So in one session:

SQL> begin
  2  for i in 1..10000 loop
  3    execute immediate ('alter system flush buffer_cache');
  4  end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

 
Meanwhile in another session, run a number of IFFS (200 this time) with an 8K block size index:

 
SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                             VALUE
---------------------------- ---------
CPU used by this session          4851
consistent gets                1536024
physical reads                  423992
index fast full scans (full)      3205

 
SQL> declare
  2  v_count number;
  3  begin
  4    for i in 1..200 loop
  5     select /*+ index_ffs(bowie) */ count(*) into v_count from bowie;
  6   end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:35.92

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                             VALUE
---------------------------- ---------
CPU used by this session          5248
consistent gets                1620224
physical reads                  507651
index fast full scans (full)      3405

 

Note the response time is 35.92 secs and that we used approximately 3.97 secs of CPU. This time we have performed a whole bunch of PIO operations.

Same thing, this time with the 16K index:

 
SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                             VALUE
---------------------------- ---------
CPU used by this session          6695
consistent gets                1789531
physical reads                  633216
index fast full scans (full)      4205

 
SQL> declare
  2  v_count number;
  3  begin
  4    for i in 1..200 loop
  5     select /*+ index_ffs(bowie) */ count(*) into v_count from bowie;
  6   end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:36.59

 
SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 137 and (n.name = 'CPU used by this session' or n.name = 'consistent gets' or n.name = 'physical reads' or n.name = 'index fast full scans (full)');

NAME                             VALUE
---------------------------- ---------
CPU used by this session          7118
consistent gets                1831731
physical reads                  674861
index fast full scans (full)      4405

 
We now note that the 16K block index has the slightly slower response time at 36.59 secs and uses a tad more CPU at 4.23 secs. As soon as we introduce PIOs, the PIOs contribute significantly to the overall costs and may well result in the larger block index being more costly depending on how the PIO is performed. So even in Robin’s example, depending on the nature of PIOs, even an IFFS can potentially run slower with an index in a larger block tablespace. Remembering of course that larger indexes are less likely to be fully cached and are more likely to incur PIOs during an IFFS.

Here’s a demo I’ve used before that uses a different approach to highlight the same issue (which doesn’t rely on a different session “interfering” with things). Another excellent example is this one by Greg Rahn on this OTN thread where he shows an IFFS performing a tad slower in a larger block tablespace. As I’ve been saying, just give it a go and see for yourself.

Perhaps, simply halving the consistent gets when doubling the index block size doesn’t tell the whole story …

The next time you see someone reference Robin’s demo as some sort of “proof” that by moving indexes into a larger block tablespace and (perhaps) reducing the number of consistent reads, the index will somehow be flatter and/or more efficient, just remember that the index may not necessarily be flatter, it might actually have the same height and that subsequent index operations, including index range scans and IFFS could very well be more expensive, not less expensive, to perform.

The next time you see someone claim that by moving indexes into a larger block tablespace, performance has suddenly improved by 100% or that things suddenly run 120 times faster, just ask one simple question. Why? Ask what else has changed, what else might be contributing to the incredible performance improvements, because when you actually test things out for yourself, you’ll notice results are nowhere near as “impressive”.

Store Indexes In A Larger Block Tablespace: Height Reduction 1/2 Myth (Five Foot One) March 26, 2008

Posted by Richard Foote in Index Block Size, Index Height, Index Internals, Oracle General, Oracle Indexes, Oracle Myths, Tablespace Management.

A common misconception with using a larger block tablespace specifically for indexes is that this will result in a reduction in the height of indexes and hence “flatten” index structures.

However, this is only partly true.

A few little generalisations to begin with.

First, most databases out there have a default block size of 8K. I won’t go into a big discussion on what the database block size should be set to (maybe another time), however I will say most databases these days have a default block size of 8K and that it’s certainly questionable to have the database block size set to 2K.

Note also in many common platforms (e.g. Windows, Linux) the maximum block size limit is 16K. So in many environments, when we talk about moving indexes into a bigger sized block, it specifically involves moving from an 8K to a 16K block size.

Yes, in theory you could move an index from (say) a 2K block size up as high as a 32K block size but you would need to question why the default is so low to begin with and whether the upper value is actually supported in your environment.

I make this point because the difference between block sizes makes a huge difference in the probability of the index height actually being reduced.

So let’s start with an example of moving indexes between an 8K default block size to a 16K block tablespace, not least because the arithmetic is easier and extrapolate out as we go along.

So in our example, the new block size is double or 2 times that of the default one. By doubling the block size, we effectively 1/2 the number of necessary leaf blocks in the index structure. By having fewer leaf blocks we also therefore reduce the overall associated block level overheads so the actual reduction in leaf blocks could be a tad more, but we’ll say a 1/2 reduction to keep the numbers nice and simple.

Note the reduction in leaf blocks is therefore simply 1 / the ratio of the block size increase (1/2 here). Moving from a 2K block to a 32K block is 16 times larger so we’ll have approximately 1/16 the number of leaf blocks.

So how does (say) halving the number of leaf blocks impact the overall height of the index ?

We obviously can’t reduce the height of an index with a height of just 1. The index consists of just the one block so a larger block would simply mean the block having more free space.

To reduce the height of an index with a height of 2 (back to 1), we therefore must be able to store all index entries within a single block. Therefore, in the 8K to 16K example, the index can only have 2 full leaf blocks for this to be possible. If an index has 3 or more “filled” leaf blocks, the index must remain at a height of 2 as we can’t fit all the index entries into the single larger index block.

Importantly therefore, all indexes with a height of 2 with more than 2 full leaf blocks would not reduce in height by simply doubling the block size. This could very well be the vast majority of indexes at this level.

For an index with a height of 2, the index must have fewer full leaf blocks than the ratio of block increase for a height reduction to be possible. In our best case scenario, 2K block to 32K block, any index with more than 16 full leaf blocks would not reduce in height.

To reduce the height of an index with a height of 3 (back to 2), we must therefore be able to store all intermediate branch blocks into the one branch (root) block. When we double the block size, we therefore 1/2 the leaf blocks and 1/2 again the necessary branch blocks. Therefore the necessary branch blocks is 1/(2×2) = 1/4 that of the default block size. Therefore any index with a height of 3 that has more than 4 full intermediate branch blocks will again not reduce in height as again all the necessary branch information would not fit in one root block.

Importantly therefore, all indexes with a height of 3 with more than 4 full intermediate branch blocks would not reduce in height by simply doubling the block size. Again, this could very well be a significant proportion of all indexes at this level. Note also in many databases, the vast majority of indexes have a height of 3 or less so by simply doubling the index block size, most indexes would not reduce in height …

For an index with a height of 3, the index must have fewer full intermediate branch blocks than the ratio of block increase to the power of 2 for a height reduction to be possible. In our best case scenario, the 2K block to 32K block, only those indexes with more than 16x16=256 full intermediate branch blocks would fail to reduce in height, so a far higher proportion of all such indexes is likely to qualify.

You see the pattern …

To reduce the height of an index with a height of 4 (back to 3), we must therefore store all first level intermediate branch blocks into the one branch (root) block. When we double the block size, we therefore 1/2 the leaf blocks, 1/2 again the second level intermediate branch blocks and 1/2 again the first level intermediate branch blocks. Therefore the necessary first level intermediate branch blocks is 1/(2x2x2) = 1/8 that of the default block size. Therefore any index with a height of 4 that has more than 8 full intermediate first level branch blocks would again not reduce in height as again all the necessary first level branch information would not fit in the one root block.

Importantly therefore, all indexes with a height of 4 with more than 8 full intermediate branch blocks will not reduce in height by simply doubling the block size. However, as the index height increases, the ratio of indexes where this is likely to be the case decreases.

For our best case scenario, 2K to 32K, we now start hitting very large numbers, 16x16x16=4096, so the likelihood of an index height reduction is very very high.

And so on …

The important point being that by simply doubling the index block size, in most databases, the vast majority of indexes are actually quite unlikely to reduce in height as the index needs to be within very limited size boundaries for the index height to reduce. The greater the index height however, the greater the index size boundaries whereby an index height reduction is possible.

Also, the greater the index block increase, proportionally the fewer the index blocks and so greater the likelihood of an index height reduction.
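If you want a quick feel for whether a specific index is even a candidate, the same ANALYZE ... VALIDATE STRUCTURE / INDEX_STATS approach used in the earlier demos shows the current height and block counts (index name purely illustrative):

SQL> analyze index bowie_idx validate structure;

SQL> select height, blocks, br_blks, lf_blks from index_stats;

Remember, when doubling the block size, a height 2 index can have no more than 2 full leaf blocks, and a height 3 index no more than 4 full intermediate branch blocks, before a height reduction is even possible.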

This demo on the Impact Of Block Size On Index Height illustrates that by simply doubling the index block size, the height of an index (in various sizes) rarely decreases.

One final point. With our height 4 index example, note the index can only have a maximum of 8 first level branch blocks for the height to reduce. Therefore, in effect, we’re replacing a maximum of 9 x 8K branch blocks with 1 x 16K block. If this index is frequently accessed, these 9 branch blocks are likely cached and we only need to read two of these blocks anyways for an index range scan (for a total of 16K). After the rebuild, we still need to read this block (16K again) anyways so from a purely performance perspective with regard to just simply reducing the index height, the so-called performance benefits are often very much exaggerated.

As we’ll see in the next epic episode of this series, performance can actually decrease :(

Next time someone claims moving indexes into a larger block size will decrease the height and flatten an index, remember it really does depend. In many databases, especially when the index block size is just doubled, it’s actually quite surprising just how unlikely it is for an index to actually decrease in height.

Index Compression Part IV (Packt Like Sardines In a Crushd Tin Box) February 29, 2008

Posted by Richard Foote in Index Compression, Index Internals, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Performance Tuning.

A great question by Brian Tkatch has convinced me there needs to be at least a Part IV in this particular mini-series. The basic question is what size does the column need to be for it to be effectively compressed ?

If a column is only just the one byte, then it can’t really be compressed as the prefix related overheads would likely make the exercise pointless, right ?

Well not quite …

True a larger column that is heavily repeated will yield a better return from compression than a smaller column, but even a single byte column could be effectively compressed, IF there are sufficient repeated values per index leaf block.

Previously, I illustrated a logical representation of how Oracle compresses a block. The following is more of a physical representation of a compressed block:

Prefix 0: David Bowie

0: 0  : ROWID
1: 0  : ROWID
2: 0  : ROWID
3: 0  : ROWID
4: 0  : ROWID

Prefix 1: David Jones

5: 1  : ROWID
6: 1  : ROWID
7: 1  : ROWID

Remember, only the leading columns in an index can be compressed and index row entries must be sorted by these leading columns. Therefore each prefix entry is referenced by one or more logically consecutive index row entries and each index row entry can only have the one index prefix. The corresponding prefix for a specific index row entry is therefore simply the prefix entry that precedes it within the leaf block.

If we look at sample prefix entries:

prefix row#0[8030] flag: -P----, lock: 0, len=6
col 0; len 3; (3):  c2 08 43
prc 1
prefix row#1[8015] flag: -P----, lock: 0, len=6
col 0; len 3; (3):  c2 08 44
prc 1

Note that the prefix row#0 starts at address 8030 while the next prefix entry row#1 starts at address 8015. That’s a total of 15 bytes and yet the length of the prefix entry is just 6. Where are the other 9 bytes?

Well note the prc value for prefix row#0 is 1 meaning this prefix entry is only referenced by the one index row, as is the next prefix entry as well.  The leading column in this particular example is actually unique meaning each and every prefix entry only has one corresponding index row.

If we look at the corresponding row entry, guess what its length will be …

row#0[8021] flag: ------, lock: 0, len=9
col 0; len 6; (6):  04 81 aa 8b 00 69
psno 0
row#1[8006] flag: ------, lock: 0, len=9
col 0; len 6; (6):  04 81 aa 8b 00 6a
psno 1

If you guessed 9, well done. Note that its starting address is 8021, which is the starting address of the previous entry, 8015, plus the 6 bytes for the prefix entry, which equals 8021 (as Oracle actually fills index blocks from the “bottom up”). Note that index entry row#0 has a psno of 0, meaning its corresponding prefix entry is prefix row#0.

Note that the index row length of 9 is made up of 2 bytes for lock and flag info, 6 bytes for the rowid and 1 byte for the rowid length (as this index is a non-unique index, albeit with unique index values).

Therefore the psno value actually uses 0 bytes, as its value can be purely derived from its position within the leaf block in relation to its parent prefix value.

Let’s now look at an index row entry with a leading column that is heavily repeated but only one byte in size.

row#0[8025] flag: ------, lock: 0, len=11
col 0; len 1; (1):  41
col 1; len 6; (6):  04 81 12 8a 02 67
row#1[8014] flag: ------, lock: 0, len=11
col 0; len 1; (1):  41
col 1; len 6; (6):  04 81 12 8a 02 68
row#2[8003] flag: ------, lock: 0, len=11
col 0; len 1; (1):  41
col 1; len 6; (6):  04 81 12 8a 02 69

Note the same leading column (hex 41) is repeated again and again within the leaf block and uses a total of 2 bytes of storage, 1 byte for the value and 1 byte for the column length. That means for each and every index row entry, this column uses 2 bytes. Note the length of the index row entries is currently 11 bytes.

Let’s look at the same index compressed.

prefix row#0[8032] flag: -P----, lock: 0, len=4
col 0; len 1; (1):  41
prc 726
row#0[8023] flag: ------, lock: 0, len=9
col 0; len 6; (6):  04 81 12 8b 00 42
psno 0
row#1[8014] flag: ------, lock: 0, len=9
col 0; len 6; (6):  04 81 12 8b 00 43
psno 0
row#2[8005] flag: ------, lock: 0, len=9
col 0; len 6; (6):  04 81 12 8b 00 44
psno 0

First thing to note is that there is only the one prefix row entry for this particular leaf block and that all 726 index row entries share that same prefix row entry.

The length of the prefix entry is 4 bytes: 2 bytes for the flags and locking info, 1 byte for the prefix column value and 1 byte for the prefix column length. That’s just 4 tiny little bytes.

Note that each index row entry is now only 9 bytes, reduced from 11, as we no longer need to store the 2 bytes for the repeated column entry. That means for a total overhead of 4 bytes, we are able to save 2 bytes in each and every index row entry.

Not bad at all and as such represents a useful saving. In this particular example, the index reduced from 163 leaf blocks down to 138 leaf blocks, just by compressing a single byte column as the column has very low cardinality.

Note however that if this leading column were unique for every row, we would increase the storage of this column from 2 bytes for every index row entry up to 4 bytes for every index row entry, as every index entry would have its own associated prefix entry. That’s why compression can potentially increase the size of an index if the compressed column(s) have too high a cardinality, due to the additional 2 byte overhead required to store all the prefix row entries.
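A quick way to get a feel for all this yourself, before compressing anything, is to let Oracle suggest the optimal prefix length via the OPT_CMPR_COUNT and OPT_CMPR_PCTSAVE columns of INDEX_STATS. A rough sketch (table, index and data are purely illustrative):

SQL> create table ziggy as select mod(rownum,3) code, rownum id from dual connect by level <= 100000;

SQL> create index ziggy_code_id_i on ziggy(code, id);

SQL> analyze index ziggy_code_id_i validate structure;

SQL> select lf_blks, opt_cmpr_count, opt_cmpr_pctsave from index_stats;

A low cardinality leading column such as CODE should show a worthwhile estimated saving for COMPRESS 1, whereas compressing the unique (CODE, ID) combination as a whole would not.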

The following Index Compression Part IV demo shows all this in a little more detail.

There’s more to index (and indeed table) compression than simply compressing an index (or table) for the sake of it …

BTW, the spelling mistakes in the post title are fully intentional ;)

Index Compression Part III (2+2=5) February 22, 2008

Posted by Richard Foote in Index Compression, Index Internals, Oracle General, Oracle Indexes, Performance Tuning.

As previously discussed, compression is only beneficial when the compressed columns have repeated values within an index leaf block. If a compressed column has repeated values, Oracle will basically store the one copy of the compressed column (or columns) as a prefix entry within a leaf block and each index row entry will then be associated with its corresponding prefix entry.

That being the case, it makes no sense to compress a single column Unique index. Each index entry must be unique; there can’t be any repeated, duplicate column values. Therefore compression would be not only totally ineffective, but would actually result in an index structure that increases rather than decreases in size, due to the extra overheads associated with having a prefix entry for each and every index row entry.

It also makes no sense to compress every column in a concatenated, multi-column Unique index, for exactly the same reasons. Each compressed index column combination must be unique and would result in a prefix entry for each and every index row entry. Compression would be worse than useless and result in an increased index structure.

However, Oracle does its best to protect us from ourselves.

For a start, it does not allow one to create a compressed index on a single column Unique index. Attempt to do so and Oracle generates an “ORA-25193: cannot use COMPRESS option for a single column key”. Hey, even the error message is nice and meaningful.

If the Unique index has multiple columns, the default prefix length value (number of compressed columns) is the number of indexed columns minus one, not all columns as it is for a Non-Unique index. See, Oracle is doing its best here to prevent a useless attempt at index compression.

If you specify a prefix length value equal to or greater than the number of columns in a Unique index, Oracle generates an “ORA-25194: invalid COMPRESS prefix length value”. These are not arbitrary restrictions; they’re designed to stop the creation of inefficient compressed indexes.
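
A quick sketch of these protections in action (the table and index names are made up for illustration):

CREATE UNIQUE INDEX bowie_id_uk ON bowie(id) COMPRESS;
-- ORA-25193: cannot use COMPRESS option for a single column key

CREATE UNIQUE INDEX bowie_code_id_uk ON bowie(code, id) COMPRESS 2;
-- ORA-25194: invalid COMPRESS prefix length value

CREATE UNIQUE INDEX bowie_code_id_uk ON bowie(code, id) COMPRESS 1;
-- Fine: only the leading, potentially repeating column is compressed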

Note however that Non-Unique indexes can be used to police Primary Key (PK) and Unique Key (UK) constraints. I’ve discussed all this previously. The constraints might be unique, the data might be unique, but the index is Non-Unique and so these “protections” fly out the window. There is nothing stopping one from creating a single column compressed Non-Unique index to police a PK or UK constraint. There’s nothing preventing you from creating a concatenated, Non-Unique index with all columns compressed to police a PK or UK constraint. In fact, you can even create a fully compressed Non-Unique index at the same time as the PK or UK constraint with Oracle’s extended constraint and index creation syntax.
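
For example, something along these lines would be accepted without complaint (names again purely illustrative):

ALTER TABLE bowie ADD CONSTRAINT bowie_pk PRIMARY KEY (code, id)
  USING INDEX (CREATE INDEX bowie_pk_idx ON bowie(code, id) COMPRESS 2);
-- A fully compressed Non-Unique index, created at the same time as the PK constraint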

Nothing stopping you, except perhaps the realisation that it would be a very bad and futile thing to implement as the resultant index will be guaranteed to be less efficient than an equivalent nocompress index.

Yes, it might make sense to compress just the leading columns of a Unique index or a Non-Unique index that’s policing a PK or UK constraint, if such leading columns have sufficient repeating values to make compression effective. But it would simply not make sense to compress all columns from such indexes.

See this demo Index Compression Part III on how compression works and doesn’t work for Unique Indexes.

In Part IV, we’ll look at the issue of what specific columns might benefit from compression and look a little closer at how storage is actually saved.

Whoever said there can’t possibly be enough things to discuss on a Blog that focuses mainly on Oracle indexes ;)

It’s Less Efficient To Have Low Cardinality Leading Columns In An Index (Right) ? February 13, 2008

Posted by Richard Foote in Concatenated Indexes, Index Internals, Oracle General, Oracle Indexes, Oracle Myths, Performance Tuning.
21 comments

A common myth or mis-perception is that when deciding how to order the columns in a concatenated, multi-column index, one should avoid placing low cardinality columns in front.

For example, if you want to create an index on two columns, column ID which has many many distinct values and column CODE which has very few distinct values, create the index as (ID, CODE) as it’ll be far more efficient than a corresponding index on (CODE, ID).

The reasoning goes that by creating the (CODE, ID) index, one decreases the performance and efficiency of using the index as Oracle will have to scan through multiple index leaf blocks containing the low cardinality column, until it eventually finds the specific index entries of interest.

Or so the theory goes …

In actual fact, there’s no real difference in navigating to the specific leaf block of interest for an index on (ID, CODE) compared to an index based on (CODE, ID), providing both indexed columns are known.

The important fact that’s missed is that the branch index entries contain column entries based on all the indexed columns, or at least on as much as is necessary to uniquely identify the required navigational path. Therefore, Oracle can directly navigate to the leaf block of interest, no matter the column order, providing all index column values are known.

The only slight overhead for an index based on (CODE, ID) is that its branch index entries are going to be somewhat larger, as they will likely require both columns, whereas the (ID, CODE) index will likely only need the one. However, branch blocks usually make up a small percentage of the overall index structure and this (usually) minor overhead is very unlikely to make a difference to the index height.

This demo on Index Column Cardinality Order shows how Oracle navigates to a specific leaf block of interest in the same manner and with the same costs, regardless of the ordering of low and high cardinality columns in the index. It also shows and describes a couple of index branch block dumps to highlight how Oracle uses the column values to define the necessary navigational path.
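
If you want to try something similar yourself, a minimal sketch might look like the following (table, column and index names are hypothetical; with both indexes on the one table you’d hint each in turn to compare the access paths):

CREATE INDEX bowie_id_code_idx ON bowie(id, code);
CREATE INDEX bowie_code_id_idx ON bowie(code, id);

SET AUTOTRACE ON
SELECT /*+ index(bowie bowie_id_code_idx) */ * FROM bowie WHERE id = 42 AND code = 'A';
SELECT /*+ index(bowie bowie_code_id_idx) */ * FROM bowie WHERE id = 42 AND code = 'A';
-- With both columns known, expect the same costs and consistent gets either way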

So the high cardinality column shouldn’t necessarily be the column given leading column status.

In actual fact there are a number of good reasons why the low cardinality column could be considered the better option for the leading column. For a start, the index can be compressed much more efficiently if the leading column has lower cardinality. Also, an Index Skip Scan can at least be considered if the leading column has lower cardinality.

Of course, the ordering of columns in an index can be very significant and can make a huge difference to the possible efficiency of an index for other key reasons as well. Whether the leading column is always going to be a known value is an important consideration, as is the clustering factor of the leading column.

All good discussions for another day :)

How Are NULLs Actually Indexed ? (Fascination) January 30, 2008

Posted by Richard Foote in Index Internals, Indexing NULLs, Oracle General, Oracle Indexes.
12 comments

A nice question by Jeff regarding how Oracle goes about indexing NULLs has prompted me to show how one could go about actually determining the answer. The basic question is: are NULLs treated as just another column value and grouped accordingly, or does Oracle have to somehow search through all the leaf blocks looking for all occurrences of these mysterious NULLs ?

The answer is that NULLs are basically considered to be potentially the largest value possible by Oracle and so are all grouped and sorted together at the “end” of the index structure (assuming the column is the leading column in the concatenated index, else they’ll be listed last for each distinct value of the columns that precede it in the index).

The fact that index range scans are just as efficient when searching for NULL values as for any other value strongly supports this assumption, but how does one actually prove it ?

The first obvious thing to check would be to create a little table and associated index with a few rows and a few NULL column values thrown in and see the results of a SELECT … ORDER BY. One would expect the order of an ascending index to match the order of the resulting output. Indeed, NULL values are by default listed last in ORDER BY ascending listings suggesting they would likewise be grouped and sorted last within an index.

The next thing to check would possibly be to use the DUMP function to again see what Oracle is likely to do with NULL values. The DUMP function displays the raw decimal representation of the specific character (depending of course on the character-set). For NULL values however, there’s actually nothing to display other than the text NULL to represent that there’s nothing actually there.
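
For example (a made up table and value, just to illustrate the point):

SELECT name, DUMP(name) FROM bowie WHERE id = 1;
-- A non-null value returns something like: Typ=1 Len=5: 66,79,87,73,69   ('BOWIE')
-- A NULL value simply returns NULL, there's nothing there to represent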

The best place to check of course is within the actual index itself. By determining the actual block that stores our example index, we can perform an index block dump and look at the resultant trace file that describes a representation of the index block to see precisely how Oracle deals with NULLs within indexes.

A quick check of the HEADER_FILE and HEADER_BLOCK in DBA_SEGMENTS will give us the index segment header location. To find the associated index root/leaf block, simply add 1 to the HEADER_BLOCK.

Dump the block via the

ALTER SYSTEM DUMP DATAFILE a BLOCK b

command and look at the trace file in USER_DUMP_DEST (where ‘a’ represents the datafile id and ‘b’ represents the block id determined from DBA_SEGMENTS).
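
Putting the two steps together, a little sketch might look like this (the index name, datafile id and block id are of course just illustrative):

SELECT header_file, header_block
FROM   dba_segments
WHERE  segment_name = 'BOWIE_STUFF_IDX';

-- The root/leaf block is the block immediately after the segment header,
-- so dump header_block + 1 from the header_file datafile
ALTER SYSTEM DUMP DATAFILE 7 BLOCK 118;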

The resultant output clearly shows that yes:

  1. Leading column NULL values are all grouped together
  2. They are all listed at the “end” of the index structure
  3. Any NULLs in the non-leading indexed columns are listed “last” for each distinct value in the leading columns in which they appear
  4. Any index entries consisting of nothing but NULLs are not actually stored within the index

This NULLs Index Dump demo goes through this entire process with a little working example and describes the relevant section of the index block dump.

I spend some time discussing block dumps in my seminar as it’s an extremely useful tool when determining and learning how Oracle actually works.

Introduction To Reverse Key Indexes: Part III (A Space Oddity) January 18, 2008

Posted by Richard Foote in Index Block Splits, Index Internals, Oracle Indexes, Performance Tuning, Reverse Key Indexes.
11 comments

A possibly significant difference between a Reverse and a Non-Reverse index is the manner in which space is used in each index and the type of block splitting that takes place.

Most Reverse Key Indexes are created to resolve contention issues as a result of monotonically increasing values. As monotonically increasing values get inserted, each value is greater than all previous values (providing there are no outlier values present) and so fills the “right-most” leaf block. If the “right-most” block is filled by the maximum current value in the index, Oracle performs 90-10 block splits, meaning that full index blocks are left behind in the index structure. Assuming no deletes or updates, the index should have virtually 100% used space.

However, its equivalent Reverse Key index will have the values reversed and dispersed evenly throughout the index structure. As index blocks fill, there will be a very remote chance of it being due to the maximum indexed value and so 50-50 block splits will result. The PCT_USED is therefore likely to be significantly less, averaging approximately 70-75% over time.
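
As a rough sketch of how to see this for yourself (two hypothetical tables loaded with the same monotonically increasing values, one indexed normally and one with a Reverse Key index):

CREATE INDEX bowie1_id_idx ON bowie1(id);
CREATE INDEX bowie2_id_idx ON bowie2(id) REVERSE;

-- After loading both tables with the same monotonically increasing values ...
ANALYZE INDEX bowie1_id_idx VALIDATE STRUCTURE;
SELECT lf_blks, pct_used FROM index_stats;   -- expect close to fully packed (90-10 splits)

ANALYZE INDEX bowie2_id_idx VALIDATE STRUCTURE;
SELECT lf_blks, pct_used FROM index_stats;   -- expect roughly 70-75% used (50-50 splits)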

Therefore, for indexes with no deletions, a Reverse Key index is likely to be less efficient from a space usage point of view.

However, if there are deletions, the story may differ.

Deleted space can be reused if an insert is subsequently made into an index block with deleted entries or if a leaf block is totally emptied. However, if a leaf block contains any non-deleted entries and subsequent inserts don’t hit the leaf block, then the deleted space cannot be reused. As monotonically increasing values in a non-reverse index only ever insert into the “right-most” leaf block, the index won’t be able to reuse deleted space in leaf blocks that are not totally emptied. Over time, the number of such “almost but not quite empty” index leaf blocks may in some scenarios increase to significant levels and the index may continue to grow at a greater proportional rate than the table (where the reuse of space is set and controlled by the PCTUSED physical property).

However, Reverse Key indexes will be able to reuse any deleted space as they evenly distribute inserts throughout the index structure. Over time, the index is likely to grow at a similar proportional rate to the table.

For indexes that have deletions resulting in many sparsely (but not totally emptied) leaf blocks, a Reverse Key index could be more efficient from a space usage point of view.

See this demo Differences in Space Usage Between a Reverse and a Non-Reverse Index for further details.

Differences between Unique and Non-Unique Indexes (Part III) December 30, 2007

Posted by Richard Foote in Constraints, Index Internals, Oracle Indexes, Performance Tuning, Unique Indexes.
5 comments

A comment by Robert in Part II of this series reminded me of another subtle difference between Unique and Non-Unique Indexes. Now this difference is likely to be of minimal consequence to most applications as most applications don’t generally have problems with Primary Key (PK) or Unique Key (UK) constraint violations (and if they do, this is likely to be the least of their worries). But it’s an interesting difference nonetheless, something to keep in the back of your mind and a little tit-bit to end the year on.

When a row is inserted into a table or when a PK or UK is modified, Oracle of course needs to ensure that either the PK or UK constraint is not violated. If the constraint is policed via a Unique index, as previously discussed, Oracle knows the value must and can only ever be unique and so performs the constraint validation before the Unique index is actually modified. If the PK or UK is violated, the Unique index can not possibly have been changed as all the associated index entries must always be unique and so only the undo (and redo) of the changes associated with the table data blocks are actually generated and need to be subsequently rolled back.

However, if the PK or UK constraint is policed via a Non-Unique index, the mechanism for applying the changes differs somewhat. As the index is Non-Unique, as previously discussed, Oracle is not quite so certain as to the state of play and performs the constraint validation after the associated changes are made to the Non Unique index. If the PK or UK constraint is violated, both undo and redo of the Non-Unique index has been generated and both changes to the table data blocks and the index blocks need to be rolled back.

This means there’s an extra cost associated with violating a constraint if the constraint is policed via a Non-Unique Index vs. a Unique index. When performing media recovery, it also means that there’s an additional cost associated with performing the recovery. Obviously the more frequent the constraint violations, the greater the overall penalties. Also, the larger the PK or UK values, the greater the penalties.
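
A simple way to see this for yourself is to note the session ‘redo size’ statistic either side of a deliberately violated insert (the table and values here are made up for illustration):

SELECT n.name, s.value
FROM   v$statname n, v$mystat s
WHERE  n.statistic# = s.statistic#
AND    n.name = 'redo size';

INSERT INTO bowie VALUES (42, 'ZIGGY');   -- 42 already exists, fails with ORA-00001

SELECT n.name, s.value
FROM   v$statname n, v$mystat s
WHERE  n.statistic# = s.statistic#
AND    n.name = 'redo size';
-- Expect the jump in redo to be noticeably larger when the constraint is policed by a Non-Unique index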

See this little demo to illustrate the differences between a Unique and a Non-Unique index in the redo and undo generated when a constraint is violated: Difference in redo and undo between a Unique and a Non-Unique Index.

As mentioned, this difference in behaviour between Unique and Non-Unique Indexes is unlikely to be an issue. However, in applications or environments where there may be a significant number of such violations, it may be something to keep in the back of your mind.

For a more detailed discussion and where it could be an issue, see Eric Emrick’s presentation.

Differences between Unique and Non-Unique Indexes (Part II) December 21, 2007

Posted by Richard Foote in Index Access Path, Index Internals, Indexing Tricks, Oracle Cost Based Optimizer, Oracle Indexes, Primary Key, Unique Indexes.
20 comments

The most significant difference between a Unique and a Non-Unique index is of course the simple fact that in one index, all index entries MUST be unique and in the other index there can be duplicates of an index entry.

Although an obvious distinction between the two, it’s also a crucial difference as well.

When Oracle uses a Unique Index to scan for a specific value (via an equality predicate on all indexed columns or when policing a constraint), there can only be one of two possible results. The value can exist, returning at the very most one row, or the value doesn’t exist, returning 0 rows. That’s it, 1 row or none. The value either exists or it doesn’t.

This fact means Oracle doesn’t have to worry about a whole bunch of things when dealing with Unique indexes during equality or unique checking processes. It doesn’t have to check the next index entry just in case there’s a second or more entries with the same value. It doesn’t have to worry about the potential of having to skip across to the next leaf page if the specific value it reads happens to be the maximum value in the current leaf page. It doesn’t have to worry about pointers to these “adjacent” leaf blocks changing on it due to block splits. It doesn’t have to concern itself with potentially visiting more than the one table data block during the index access operation.

Life is simple, it’s either 1 row or none.

Not so for Non-Unique indexes. With a Non-Unique index, there are no such guarantees. With a Non-Unique index, there are 3 categories of possibilities. An index scan could return 0 rows, it could return 1 row or it could return more than one row. It could potentially need to go and visit more than the current leaf block to return all the matching rows. It could potentially need to go and visit more than one table block.

Life’s not quite so “simple” for a Non-Unique index.

Note also and most importantly that life gets no easier for a Non-Unique index that polices a PK or Unique key constraint.

Even though there’s a PK or Unique constraint on a column, to Oracle, it’s just another Non-Unique index with the same “vague” possibilities. Remember that PK and Unique constraints can be enabled with NOVALIDATE meaning that even with a PK or Unique constraint, there’s still the possibility of duplicate index entries in the Non-Unique index. Remember constraints can be DEFERRABLE, meaning that even with a PK or Unique constraint, there’s still the possibility of duplicate index entries in the Non-Unique index.

This means that Oracle has to concern itself with a number of additional overheads, including having to “check” the next index entry, “just in case” it matches the required index value. It has to concern itself even with the possibility of having to visit the next index leaf block, “just in case”.

You will note when Oracle performs an equality search using a Unique Index, Oracle will perform an “INDEX UNIQUE SCAN” because the index entries MUST be unique.

You will note however when Oracle performs an equality search using a Non-Unique index, even if there’s a PK or Unique constraint on the column(s), Oracle will perform an INDEX RANGE SCAN, because it needs to scan multiple index entries “just in case”.

So are there any actual implications as a result of any of this ?

Yes.

When Oracle actually reads an index and processes the associated blocks in the buffer cache(s), Oracle uses a number of latches. These latches are used primarily to “protect” memory structures from concurrent activity. Very simplistically, by grabbing a latch, Oracle effectively takes a “lock” on the associated memory structure, performs whatever activity needs to be performed and releases the latch. These latches get grabbed and released (hopefully) extremely quickly (in the order of 1/10s of a ms), but it’s a non zero cost.

The issue with latches is that they’re a point of serialisation. If two (or more) processes want a specific latch, one (or more) has to wait. Latches also burn CPU. Only a teensy weeny bit at a time, but some CPU nonetheless. They burn CPU while acquiring the latch and, if they fail due to latch contention, while attempting again and again to acquire it. They also burn CPU while performing the specific operation necessary of the latch.

Basically, the more latches, the greater the potential for contention, the greater the potential for latch related wait activity and perhaps most important of all, more CPU is required. In busy systems, there can be massive numbers of latch events and the best way to tune these events is to reduce where possible the number of latches required by the database environment. It’s one of the key reasons we try and reduce LIOs in a database as much as possible, to reduce the latch and CPU load on the system.

Because of the differences highlighted between Unique and Non-Unique indexes, the number and manner of latches required between the two indexes differs. And it differs significantly …

In this little demo, Latch Differences Between Unique and Non-Unique Indexes Demo, we compare the latches required to read an identical table, using a 2 level index. The differences between the latch overheads of a Unique and a Non-Unique index are most interesting.
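
One way of peeking at this yourself is via the session statistics either side of the indexed lookup (the latch activity itself can be dug out of v$latch, but the split between ordinary consistent gets and the cheaper examinations tells much of the story; the table here is hypothetical):

SELECT n.name, s.value
FROM   v$statname n, v$mystat s
WHERE  n.statistic# = s.statistic#
AND    n.name IN ('consistent gets', 'consistent gets - examination');

SELECT * FROM bowie WHERE id = 42;   -- equality lookup on the PK index

SELECT n.name, s.value
FROM   v$statname n, v$mystat s
WHERE  n.statistic# = s.statistic#
AND    n.name IN ('consistent gets', 'consistent gets - examination');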

When using a Unique Index, Oracle required 3 consistent gets (one for the index root block, one for the leaf block and one for the table block). BUT, each consistent get was a consistent gets – examination, a special type of consistent get which only requires 1 latch (rather than the standard 2 latches).

So that’s a sum of 3 latches.

However, when using a Non-Unique index, Oracle required 4 consistent gets (one for the index root block, one for the leaf block, one for the table block and an additional one to recheck the leaf block for any duplicate index entries). BUT, only the 1 consistent read (for the index root block) was actually the “cheaper” consistent gets – examination, the other 3 were the more costly 2 latch variety.

So that’s a sum of 7 latches.

3 latches for the Unique index and 7 latches for the Non-Unique index.

That’s an increase of 133.3% in latches between the two types of indexes.

Now, the height of the index will change the ratio of latch difference between the two indexes. Also, in a busy system, there could potentially be differences in the types of latches used due to the current state or additional activity in a block.

However, the potential difference in latch requirements between a Unique or Non-Unique index can be very significant. But does a few additional latches here and there really make much of a difference ?

Well, of course it depends. On small scale systems with smaller loads, fewer indexes, fewer users and excess resources, the noticeable differences may be negligible.

However, in larger scale (especially OLTP) environments, a particular index may be accessed 100s or maybe 1000s of times a second. There may be 1000s of tables with 1000s of corresponding PK and Unique constraints policed by 1000s of Unique (or Non-Unique) indexes. It’s therefore not really of question of a few latches here or there. It’s a question of potentially a very significant proportion of overall latch related overheads.

Potentially when accessed, Non-Unique indexes could be generating double the latch related overheads for equality unique scan or unique checking index activity. Remember, the best way to tune latches and reduce latch contention is to simply reduce the requirement and load for latches.

The overall reduction in CPU and latch related wait activity could be significant between Unique and Non-Unique indexes, because by using Non-Unique indexes you require in the order of double the latches for such activities.

Note also this doesn’t require any special parameters to be set or special tuning or monitoring by the DBA. It simply requires using Unique indexes to police PK or Unique constraints when there are no requirements of Non-Unique indexes. You then potentially gain a benefit each and every time the index is used for unique scan accesses.

Guess what type of access is extremely common in large scale OLTP environments …

The next time you complain about high CPU consumption or high latch contention and you’ve tuned the application to death, just ask yourself how many Non-Unique indexes are policing your PK or Unique Key constraints …

Do ROWID Index Row Entry Columns Impact Index Block Splits ? December 20, 2007

Posted by Richard Foote in Concatenated Indexes, Index Block Splits, Index Internals, Oracle Indexes, Richard's Musings, ROWID.
11 comments

Based on a great question by Alberto Dell’Era in my “Differences Between Unique/Non-Unique” blog entry (comment 9), I thought it might be a useful exercise to show how I go about confirming my understanding of a specific concept by trying to develop a little test case or demo that can illustrate the concept. My “magic incarnation” if you like ;)

The basic question was: does the ROWID that constitutes an additional column in a Non-Unique index determine whether a particular row entry is the maximum (or equal highest) entry or not ? Because by implication, this can determine and influence whether Oracle performs the generally preferred 90-10 block splits rather than 50-50 block splits for indexed column values that at least equal the maximum value.

The answer is yes because the ROWID column is just another column in the index row entry and is simply treated the same. But how to actually “illustrate” and show this ?

I needed a way therefore to insert a ROWID that was always going to be the maximum ROWID value for a Non-Unique index. Then insert a whole bunch of subsequent ROWIDs of a lesser value than the maximum and inspect via index statistics whether the type of block splits changed from 90-10 to 50-50 block splits. Remember with the Object Number being equal (if it’s there at all), the next significant portion of the ROWID is the Relative File Number.

The plan was (reasonably) simple. Create a tablespace with one data file and fill it with something. Then add a second data file and use this to store the start of my table of interest (and of course create the index). This will create a whole bunch of rows with ROWIDs of a higher Relative File Number than those in the first data file. Then drop the first table and ensure the second table uses the free space created in the first data file. That way, a whole bunch of ROWIDs can be created that are less than existing ROWIDs because it would be using ROWIDs from the first data file, which has a lesser Relative File Number.
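
Roughly speaking, the skeleton of such a test case might look something like this (all names, paths and sizes are purely illustrative, not the actual demo):

-- 1) Tablespace with a single data file, filled with a throwaway segment
CREATE TABLESPACE bowie_ts DATAFILE '/u01/oradata/bowie_ts01.dbf' SIZE 5M;
CREATE TABLE filler TABLESPACE bowie_ts AS
  SELECT rownum id, rpad('x',100) padding FROM dual CONNECT BY level <= 40000;

-- 2) Add a second data file; with the first file full, the table of interest
--    (and its index) start life in the second, higher numbered file
ALTER TABLESPACE bowie_ts ADD DATAFILE '/u01/oradata/bowie_ts02.dbf' SIZE 5M;
CREATE TABLE bowie (id NUMBER) TABLESPACE bowie_ts;
CREATE INDEX bowie_id_idx ON bowie(id);
-- ... insert the initial rows here, giving us the "maximum" ROWIDs ...

-- 3) Drop the filler so subsequent inserts into BOWIE reuse space in the first
--    data file, producing ROWIDs less than those already in the index
DROP TABLE filler PURGE;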

It’s the usual process I go through with these things. Find something that’s of interest, have some idea on how I think things work, come up with plans or strategies that will illustrate whether or not what I think is true (ensuring that somewhere in the process I include at least one reference to David Bowie ;). I can then later take the initial strategies and expand them for all applicable database options and features. Then see if anything changes between database versions and platforms.

Hopefully this demo shows you how I went about proving this: Do ROWID Index Row Entry Columns Impact Index Block Splits Demo.

The benefit of then showing these demos is that others can see exactly how I came to a conclusion, potentially try them out for themselves and perhaps spot holes, flaws or shortfalls in the strategy, or expand and tailor them for individual requirements or environments.

Differences between Unique and Non-Unique Indexes (Part I) December 18, 2007

Posted by Richard Foote in Constraints, Deferrable Constraints, Index Internals, Indexing Tricks, Novalidate Constraints, Oracle Indexes, Primary Key, Unique Indexes.
29 comments

I’ve had a number of comments regarding my earlier blog entry where I recommended avoiding Deferrable and Novalidate constraints unless you need them and consider using Unique Indexes rather than Non-Unique Indexes where possible.

Why such a recommendation, aren’t Unique and Non-Unique indexes practically the same thing when it comes to policing constraints ?

Sure, one index explicitly prevents the insertion of duplicates while the other doesn’t. Yes, dropping/disabling a constraint policed by an automatically created Unique index causes the index to be dropped if you forget the KEEP INDEX clause.

But that’s about it, right ?

Well, if you need a constraint to be deferrable, then you must create (either implicitly or explicitly) a Non-Unique index. If you want to enable a constraint with novalidate, then again you can only do so with a Non-Unique index in place policing the constraint.

It does all rather sound like Non-Unique indexes have all the advantages and allow for all the flexibility one could want. Non-Unique indexes allow for both deferrable and novalidate constraints, they don’t get dropped when the associated constraint is dropped / disabled and they can actually police both PK and Unique constraints.

What possible benefits are there in Unique Indexes ?

Well, providing you don’t need your constraints to be deferrable, you validate your constraints when they get created/enabled and you don’t go around dropping PK and/or Unique constraints on too regular a basis (or remember the KEEP INDEX clause if you don’t want your index dropped when you do), then there are a number of reasons why you may just want to consider using Unique indexes over Non-Unique indexes.

There are actually a number of key differences between Unique and Non-Unique indexes, both in the manner in which they’re stored by Oracle and in the manner in which they get processed.

In Part I, I’m just going to focus on the differences in how Oracle physically stores index entries.

In actual fact, there’s really no such thing as a Non-Unique index in Oracle. In order for Oracle to be able to determine the location of any specific index row entry and for Oracle to be able to determine an appropriate “order” for each index row entry, internally, Oracle converts all Non-Unique indexes into a Unique index. It does this by using the associated ROWID of the index row entry as an additional “column”. As each ROWID is unique, this effectively makes all index entries in a Non-Unique index unique as well. Oracle uses the unique combination of the Non-Unique index value and the associated ROWID to then determine the appropriate order and hence appropriate location within the index structure in which to store the index row entry.

By Oracle making the ROWID an additional column, it also has to allocate an additional byte per index row entry in order to store the length of this column. That’s one teeny weeny little byte extra for each and every index row entry.

So what ?

Well, for indexes that don’t have a particularly large index key length, that one byte can be a significant proportion of the overall key length. Oracle needs to allocate 2 bytes per row entry for various flags and locking information, it requires 6 bytes for the rowid and 1 byte for the length of each column. That’s 9 bytes minimum plus the length of the indexed value itself.

Well, how large is a typical unique index entry? That of course all depends, and some PK (and especially Unique) values can be quite large. But many many PK values are simply sequence-based numerical values, created nice and small so as to reduce overheads when stored in dependent child tables.

But can it really make any noticeable difference ?

Well, this little demo shows two tables with 1 million numeric PK values: Compare internal index storage between Unique and Non-Unique Indexes

Table test1 is created with a Non-Unique Index, table test2 is created with a Unique Index. The demo shows a partial block dump of a leaf block from each index, highlighting how the Non-Unique index requires an additional byte per index row entry.

The Unique index manages to hold 533 leaf entries in the block while the Non-Unique index could only hold 500. Comparing the total sizes of the two indexes, the Unique index required 1875 leaf blocks while the Non-Unique index required 1999 leaf blocks.
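
If you’d like to reproduce something similar, a rough sketch (the exact table definitions in the demo may well differ) is simply:

CREATE TABLE test1 AS SELECT rownum id, 'BOWIE' name FROM dual CONNECT BY level <= 1000000;
CREATE INDEX test1_id_idx ON test1(id);          -- Non-Unique index

CREATE TABLE test2 AS SELECT rownum id, 'BOWIE' name FROM dual CONNECT BY level <= 1000000;
CREATE UNIQUE INDEX test2_id_idx ON test2(id);   -- Unique index

ANALYZE INDEX test1_id_idx VALIDATE STRUCTURE;
SELECT lf_rows, lf_blks FROM index_stats;

ANALYZE INDEX test2_id_idx VALIDATE STRUCTURE;
SELECT lf_rows, lf_blks FROM index_stats;
-- Expect the Non-Unique index to need noticeably more leaf blocks for the same rows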

That’s an increase of approximately 6.6% in leaf blocks required for the Non-Unique index to store exactly the same number of index entries as the Unique Index (in this particular example).

That’s 6.6% less storage, that’s a reduction of 6.6% in block splitting and block allocations, that’s a reduction of 6.6% in the cost of full index scans, that’s 6.6% less memory required to cache the index, etc. etc.

The point here is that these savings don’t require any expensive, periodic rebuilding of indexes. They don’t require any additional fancy scripts or additional monitoring and processing. The DBA doesn’t have to calculate irrelevant statistics or demand scheduled outages to claim these savings.

This is a “getting more dollars for your buck” freebie from Oracle, gained purely and simply by using a Unique index instead of a Non-Unique index.

Note also that not one or two but ALL of your numeric based PKs have the potential to get these types of savings. Obviously the larger the actual PK or Unique key values, the smaller a single byte is in proportion to the overall key length and the smaller the percentage saving.

But it’s not a bad payback for many many of your indexes, purely and simply by using Unique indexes instead of Non-unique indexes where possible …

This is but one of the benefits of using Unique Indexes. More (potentially significant) advantages to follow …

Constraints – Don’t make them DEFERRABLE or NOVALIDATE unless you need to. December 14, 2007

Posted by Richard Foote in Constraints, Deferrable Constraints, Index Internals, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Oracle Myths, Performance Tuning.
29 comments

Back when Oracle8 was released, Oracle introduced a number of new features with regard to constraints.

The first was the option of making a constraint DEFERRABLE, meaning the policing of a constraint can be deferred until the issuing of the COMMIT, rather than during the execution of an individual statement. This gave application developers more freedom in how they designed code, particularly with regard to the order in which parent – child data is inserted and manipulated.

The second new option was the ability to enable a constraint with NOVALIDATE, meaning Oracle would enable the constraint but not bother to check existing data to ensure nothing violated the constraint.

This could be useful in a number of scenarios. For example, you have data that currently violates the constraint but have urgent business requirements to enable the constraint ASAP preventing further violations, with the intention of cleaning up the existing violations at some future time.

Or you know the data is OK, so to reduce the overheads associated with enabling the constraint (eg. reading all the data to check for violations), you enable the constraint “immediately” with NOVALIDATE, bypassing the “redundant” checking.

Both deferrable and novalidate constraints therefore imply there “could” be data at any given point in time that violates the constraint. Therefore Oracle also introduced the ability to have non-unique indexes (rather than unique indexes) policing either PK or Unique constraints. For deferrable or novalidate constraints, the index must in fact be non-unique, as a unique index would prevent any such (temporary) violations of PK or Unique constraints.
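
A quick illustration (table and constraint names are made up for the example):

CREATE TABLE ziggy (id NUMBER, name VARCHAR2(20));
ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY (id) DEFERRABLE;

SELECT index_name, uniqueness
FROM   user_indexes
WHERE  table_name = 'ZIGGY';
-- The index automatically created to police the deferrable PK is NONUNIQUE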

Now, there are a number of interesting and subtle differences in the manner in which Oracle manages and processes a Unique vs. a Non-Unique index (eg. the amount of storage they use, the amount of redo they generate, the number of latches they acquire). This will be discussed in another Blog entry some other day.

Today, I just want to focus on a couple of interesting little side-effects with regard to how the CBO deals (or doesn’t deal) with NOT NULL and CHECK constraints that have been created as Deferrable or Novalidate.

In 9i, the CBO was clever enough to know that if someone searched for a NULL value but the column had a NOT NULL constraint, there couldn’t possibly be any data matching the criteria. Providing you had an index on the column, the CBO would generate an execution plan that used the index, found no NULL values and returned an empty row set accordingly. If you had no index, the CBO would be forced to use a Full Table Scan. So the CBO actually used an index in an efficient manner to search for non-existent nulls.

BUT, if the NOT NULL constraint was either deferrable or novalidated, then Oracle couldn’t know there were no nulls, there just might be. Therefore, Oracle was forced into the FTS regardless of the existence of the constraint or index, as null values are not indexed (unless part of a concatenated index).

See this demo for details: NOT NULLs demo with 9i

Since 10g, the CBO has become smarter. The NOT NULL example works in a very similar manner, except that the index is no longer required. If one searches for a NULL value on a column that has a NOT NULL constraint, the CBO automatically determines there can be no matching rows and returns the empty row set immediately with no LIOs. None, as accessing the data is simply not necessary.

BUT again, it can only do so if and only if the NOT NULL constraint is validated and nondeferrable, otherwise the CBO can’t guarantee no nulls.

See this little demo for details: NOT NULLs demo with 10g

Although we actually have applications that intentionally search for nulls on NOT NULL columns to return empty row sets, it’s not common that an application would perform such a search.

What is much more common is searching for a column value that simply doesn’t exist. If a column value doesn’t meet a business rule, it’s a good idea to police such business rules with Check constraints. 10g has extended the NOT NULL scenario to include Check constraints. If a search attempts to search for a column value that violates a check constraint, Oracle will immediately return an empty row set without performing any LIOs.

But once again, it can only do so if the check constraint has been validated and set as nondeferrable.
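
A little sketch of the difference (table, column and constraint names are hypothetical):

ALTER TABLE bowie ADD CONSTRAINT bowie_code_chk CHECK (code IN ('A','B','C'));

SET AUTOTRACE ON
SELECT * FROM bowie WHERE code = 'Z';
-- With the constraint validated and not deferrable, expect an immediate empty
-- result (a FILTER step that can never be true) and 0 consistent gets

-- Recreate the same constraint DEFERRABLE or ENABLE NOVALIDATE and the CBO
-- can no longer make that assumption, so the data must actually be accessed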

See this demo for a 10g check constraint example: Check Constraints with 10g

Making constraints deferrable or enabling them with novalidate can be useful. However, if possible, ensure constraints are validated and not deferrable, as this provides the CBO with additional information regarding the columns that it might just put to good use.

Additionally, unless there’s a requirement to the contrary, use unique indexes rather than non-unique indexes to police uniqueness. But that’s a discussion for another day …
