Differences Between Unique and Non-Unique Indexes Part 4.5 (Fix You) March 30, 2009

Posted by Richard Foote in Fragmented Indexes, Index Internals, Non-Unique Indexes, Oracle Indexes, Unique Indexes.

In my last post, Part IV in this series, we looked at how a Unique Index can reuse space deleted within the same logical transaction, whereas a Non-Unique Index cannot. Deleted space within a Non-Unique index can only be reused by subsequent transactions.

It’s sometimes important to appreciate this distinction because, as discussed in the various OTN and Ask Tom threads mentioned in Part IV, there are times when this can make a significant difference to the manageability and efficiency of the resultant index.

Now, it’s not every day that someone might, for example, delete all the rows in a table and repopulate it again within a single transaction (the TRUNCATE command was of course developed for a reason). However, perhaps an application was developed without your involvement, perhaps a large proportion but not all of the data is being deleted, or, as someone mentioned on OTN, perhaps the table in question is a Materialized View being fully refreshed within a refresh group. There could therefore be occasions when a single transaction might indeed perform a large delete followed by a similarly sized insert.

In which case, whether an index is defined as Unique or Non-Unique might make a difference …

To begin with, let’s create a table, populate it with 1M rows and create an associated Unique index:

SQL> create table bowie (id number, name varchar2(20));

Table created.

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create unique index bowie_idx on bowie(id);

Index created.

 

Let’s look at the size of this newly created Unique index:

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2176       2087           0

 

OK, let’s now delete the entire table and repopulate it again, all within the same logical transaction:

SQL> delete bowie;

1000000 rows deleted.

SQL> insert into bowie select rownum, 'PINK FLOYD' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

 

Let’s look at the size difference for the Unique Index and see how many deleted index entries we have as a result:

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2176       2087           0

 

OK good, the index is actually identical in size and we have no deleted entries, not a one. All the deleted entries resulting from the delete command have been reused by the subsequent insert statement. This means of course that the index is just as efficient now, after all this DML activity, as it was when the index was first created.

 

Let’s perform exactly the same demo, but this time with a Non-Unique index, and see what differences there might be …

SQL> drop table bowie;

Table dropped.

SQL> create table bowie (id number, name varchar2(20));

Table created.

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_idx on bowie(id);

Index created.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2304       2226           0

 

The first difference we notice is that the Non-Unique index, immediately after creation, is somewhat larger than the equivalent Unique index (2226 leaf blocks vs. 2087 leaf blocks). This is a direct result of the Non-Unique index having to store an extra length byte for the rowid, which is effectively an additional index column, for each and every one of the 1M index entries. That’s roughly an extra 1MB in total, consistent with the additional 139 leaf blocks.

SQL> delete bowie;

1000000 rows deleted.

SQL> insert into bowie select rownum, 'PINK FLOYD' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      4608       4518     1000000

OK, not quite so good; there’s a big difference here. Previously, the Unique Index remained unchanged and had no deleted index entries. However, the Non-Unique index is now effectively double its previous size and still has 1M deleted index entries within the index structure. Not a one was recycled and reused within the logical transaction.

This index is now potentially problematic, especially if there are going to be few or no subsequent inserts until it next gets refreshed. At that point the currently deleted entries can at last be reused, but the new entries may in turn remain in the index after they’ve been deleted, perpetuating the problem.

Again, it’s important to understand what is going on here so one can take the appropriate administration steps. Perhaps it might be better to drop the index and recreate it after the transaction (if permitted). Perhaps the TRUNCATE command isn’t such a bad idea after all (if permitted). Perhaps it might be better to police the Unique constraint with a Unique rather than a Non-Unique index after all.
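One further administrative option, not demonstrated here, is to clean out the deleted entries in place once the transaction completes. As a sketch (standard Oracle syntax, applied to the bowie_idx index from the demo):

```sql
-- Coalesce merges adjacent, sparsely populated leaf blocks and frees
-- the deleted entries for reuse, without recreating the whole index.
ALTER INDEX bowie_idx COALESCE;

-- Alternatively, a rebuild recreates the index compactly from scratch
-- (the ONLINE clause permits concurrent DML, at some resource cost).
ALTER INDEX bowie_idx REBUILD ONLINE;
```

Which of these is the better option depends on how much of the index is dead space and on availability requirements; both of course consume resources that simply avoiding the problem in the first place would not.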

Perhaps it might be better not to perform the above within a single transaction, but to issue an intermediate commit instead (if permitted) …

 

SQL> drop table bowie;

Table dropped.

SQL> create table bowie (id number, name varchar2(20));

Table created.

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_idx on bowie(id);

Index created.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2304       2226           0

SQL> delete bowie;

1000000 rows deleted.

Because if we just issue the commit at this point in the process …

SQL> commit;

Commit complete.

SQL> insert into bowie select rownum, ‘PINK FLOYD’ from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select blocks, lf_blks, del_lf_rows from index_stats;

    BLOCKS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
      2304       2226           0

 

We would not have this problem, as the subsequent transaction that performs the insert can reuse all the deleted space associated with the first delete transaction.

If one understands how indexes work and understands how deleted space can be reused, one can prevent many potential issues and unnecessary maintenance tasks.
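As a sketch of that kind of preventative check, using the same index_stats view as in the demos above, one can monitor the proportion of deleted entries an index is carrying before deciding whether any maintenance is actually warranted:

```sql
-- index_stats holds one row, visible only to the current session,
-- populated by the most recent ANALYZE ... VALIDATE STRUCTURE.
-- Note the validation locks the underlying table while it runs,
-- so this is best done at a quiet time.
ANALYZE INDEX bowie_idx VALIDATE STRUCTURE;

SELECT lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows * 100 / lf_rows, 1) AS pct_deleted
  FROM index_stats;
```

A consistently high percentage of deleted entries that never gets reused suggests the index is carrying dead space worth addressing; a low or transient one suggests leaving well enough alone.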

Prevention is always the best cure …

Comments»

1. Brian Tkatch - March 31, 2009

Richard, great stuff.

2. Asif Momen - March 31, 2009

Excellent !!!

Especially, I liked Part “4.5” rather than moving to Part “5”.

However, you forgot the Create Index step during the Non-Unique Index test:

“create index bowie_idx on bowie(id);”

3. Richard Foote - March 31, 2009

Thanks Brian 🙂

Thanks Asif, the problem with cutting ‘n’ pasting !! The create index statement (somewhat important !!) from the demo has been added in the post.

4. Franck Pachot - October 3, 2009

Hi Richard,
After reading the series about unique indexes, especially the fact that it is never possible to have duplicate entries in a unique index, a big question arises. Let’s take an example:

create table TEST ( C char , N number UNIQUE );
insert into TEST values ('A',1);
insert into TEST values ('B',2);
update TEST set N=decode(N, 1,2, 2,1);

At the end of the query, the numbers are still unique. So the constraint is not violated.
But how does Oracle manage the fact that there may be duplicates before the update is completed?

Regards,
Franck.

5. mahesh - June 13, 2011

It executes the statement and checks the uniqueness once it completes; it finds there is no issue, so it will not throw any sort of error.
update TEST set N=decode(N, 1,2, 2,1);
