
NOVALIDATE Constraints Part II – Does It Matter ? July 30, 2008

Posted by Richard Foote in Constraints, Novalidate Constraints, Oracle Cost Based Optimizer, Oracle Indexes, Primary Key, Unique Indexes.
13 comments

As promised, more on NOVALIDATE constraints.

As previously discussed, a Primary Key or a Unique Key constraint that’s not validated is by default policed by a Unique index regardless. If there are no duplicate values, then no worries (yes, I’m Australian), Oracle will create the Unique index and enable the constraint in a NOVALIDATE state. If there are duplicate values, Oracle will complain about creating the Unique Index to police the constraint and you must either explicitly create a Non-Unique index when creating the constraint or use an existing Non-Unique index.

So what are the implications, if any, of having a Primary Key constraint in a NOVALIDATE state, especially if a Unique index is used to police the constraint ? The data must really be unique else the Unique index could not have been created, right ? Also, Oracle can take advantage of all the benefits associated with having a Unique index, such as fewer consistent reads and reduced latching overheads, as previously discussed.

Following on from the demo in Part I, if we have a table with a Primary Key in a NOVALIDATE state, policed by a Unique Index:

SQL> ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id) ENABLE NOVALIDATE;

Table altered.

SQL> SELECT constraint_name, validated, uniqueness
FROM user_constraints c, user_indexes i
WHERE c.constraint_name = i.index_name AND c.table_name = 'ZIGGY';

CONSTRAINT_NAME VALIDATED     UNIQUENES
--------------- ------------- ---------
ZIGGY_PK        NOT VALIDATED UNIQUE

Oracle will be able to use the Unique index to perform a nice, efficient, low latching Unique Scan with the index:

SQL> SELECT * FROM ziggy WHERE id = 42;

------------------------------------------
|Id|Operation                   |Name    |
------------------------------------------
| 0|SELECT STATEMENT            |        |
| 1| TABLE ACCESS BY INDEX ROWID|ZIGGY   |
|*2|  INDEX UNIQUE SCAN         |ZIGGY_PK|
------------------------------------------

Everything’s perfect regardless of the PK constraint not being validated, right ?

Well, not exactly.

Remember, a PK constraint requires the data to be Unique AND Not Null. Now, the Unique index guarantees the data is indeed unique, but it does nothing to protect us from having possible NULL values in our data. The index will simply ignore and not index any entries that are fully NULL, therefore the PK column(s) could potentially, just maybe, contain NULLs. Brian Tkatch in a comment in Part I has a nice example of how this is possible.
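A minimal sketch of how such a NULL can slip in (my example, not from Brian’s comment, and assuming a ziggy table that doesn’t yet have the constraint):

```sql
-- A row with a NULL id exists BEFORE the constraint is added
INSERT INTO ziggy VALUES (NULL, 'NULL ROW');
COMMIT;

-- ENABLE NOVALIDATE skips checking the existing data, and the Unique index
-- simply doesn't index the fully NULL entry, so this succeeds ...
ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id) ENABLE NOVALIDATE;

-- ... leaving a NULL sitting in our "Primary Key" column
SELECT COUNT(*) FROM ziggy WHERE id IS NULL;
```

New rows with a NULL id would now be rejected, but the pre-existing NULL remains until the constraint is validated.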

This means Oracle cannot guarantee the index has index entries for every row in the table, as any rows with a NULL PK will simply not be indexed. This can have serious repercussions for the CBO when deciding on an optimal execution plan.

For example, a query such as the following COUNT(*) query, which could potentially be serviced via a “smaller” PK index segment, cannot use the Unique index and is forced to use either another index or a Full Table Scan:

SQL> select count(*) from ziggy;

---------------------------------
| Id| Operation          | Name |
---------------------------------
|  0| SELECT STATEMENT   |      |
|  1|  SORT AGGREGATE    |      |
|  2|   TABLE ACCESS FULL| ZIGGY|
---------------------------------

Another example, this query with an ORDER BY clause could potentially use the Unique index to retrieve the data and so avoid the sort operation as the Clustering Factor of the index is very good. However, it can’t as again, the CBO can’t guarantee all data will be retrieved via the index:

SQL> select * from ziggy order by id;

10000 rows selected.

---------------------------------
| Id| Operation          | Name |
---------------------------------
|  0| SELECT STATEMENT   |      |
|  1|  SORT ORDER BY     |      |
|  2|   TABLE ACCESS FULL| ZIGGY|
---------------------------------

However, if only we just validate the constraint, everything changes:

SQL> ALTER TABLE ziggy ENABLE VALIDATE PRIMARY KEY;

Table altered.

The COUNT(*) query suddenly starts using the index as a cheaper alternative as now, there can’t be any null values and so the index must reference all possible rows:

SQL> select count(*) from ziggy;

-------------------------------------
|Id|Operation             | Name    |
-------------------------------------
| 0|SELECT STATEMENT      |         |
| 1| SORT AGGREGATE       |         |
| 2|  INDEX FAST FULL SCAN| ZIGGY_PK|
-------------------------------------

The ORDER BY query suddenly starts using the index and avoids performing the sort operation as again, the index will now guarantee all rows are returned in a sorted order:

SQL> select * from ziggy order by id;

10000 rows selected.

------------------------------------------
|Id|Operation                   |Name    |
------------------------------------------
| 0|SELECT STATEMENT            |        |
| 1| TABLE ACCESS BY INDEX ROWID|ZIGGY   |
| 2|  INDEX FULL SCAN           |ZIGGY_PK|
------------------------------------------

The moral of the story: provide the CBO with as much information as possible, as it can potentially use this information to determine a more optimal execution plan. Having a NOVALIDATE constraint possibly hides valuable information from the CBO and so needs to be used with caution.

NOVALIDATE Constraints – No really … July 28, 2008

Posted by Richard Foote in Constraints, Indexing Tricks, Novalidate Constraints, Oracle Indexes, Primary Key, Unique Indexes.
20 comments

There have been a number of posts recently on the OTN database forum regarding the whole topic of NOVALIDATE of constraints and the associated indexes so I thought it might be worth going over a couple of interesting little quirks with all this.

A NOVALIDATE constraint is basically a constraint which can be enabled but for which Oracle will not check the existing data to determine whether there might be data that currently violates the constraint.

This is useful if we know there’s data that violates the constraint but we want to quickly put on a constraint to prevent further violations, with the intention to clean up any possible violations at some future point in time.

It’s also potentially useful if we know the data is clean and so want to prevent the potentially significant overheads of Oracle having to check all the data to ensure there are indeed no violations.

I previously discussed the use of Non-Unique Indexes for managing Primary Key and Unique Key constraints, but there are a few little traps one can easily fall into if one doesn’t understand these two very important fundamentals:

  1. By default, Oracle will attempt to create a Unique Index to police a PK or UK constraint
  2. A NOVALIDATE constraint requires a Non-Unique Index for the constraint to really be “Novalidated”

Get these two concepts confused and things can easily get a little difficult to follow …

Here’s a little example of how things can start to get confusing. First, let’s create a simple little table and populate it with a few rows.

SQL> CREATE TABLE ZIGGY (id NUMBER, text VARCHAR2(20));

Table created.

SQL> INSERT INTO ziggy SELECT rownum, 'Ziggy' FROM dual CONNECT BY LEVEL <= 10000;

10000 rows created.

Note that the ID column is populated with unique values. However, let’s now introduce a duplicate value, 42:

SQL> INSERT INTO ziggy VALUES (42, 'DUPLICATE');

1 row created.

SQL> COMMIT;

Commit complete.

OK, we now want to add a Primary Key to this table but because we suspect there might be some duplicate values which we intend to clean up at some future point in time, we want to create the constraint with NOVALIDATE:

SQL> ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id) ENABLE NOVALIDATE;
ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id) ENABLE NOVALIDATE
                                 *
ERROR at line 1:
ORA-02437: cannot validate (BOWIE.ZIGGY_PK) - primary key violated

Now what the hell is going on here ?

We clearly stated we want to create a NOVALIDATE constraint but Oracle appears to be ignoring this and is validating the constraint regardless and so generating an error because of the duplicate entry.

Why ?

Because by default Oracle will attempt to create a Unique index when creating a PK constraint. A Unique index MUST always contain unique values and so complains when it stumbles across our duplicate 42 ID value. The constraint is being effectively validated because the unique index will only be created providing there are indeed no duplicate values.

Not how I would have designed things but there you go …

However, if we either have an existing Non-Unique index which Oracle can use or we explicitly create a Non-Unique index, then we can proceed with creating the NOVALIDATE constraint as required:

SQL> ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id)
USING INDEX(CREATE INDEX ziggy_pk ON ziggy(id)) ENABLE NOVALIDATE;

Table altered.

If we look at the status of the constraint and the type of index used to police the constraint, we notice that the index is indeed a Non-Unique index and the constraint has not been validated:

SQL> SELECT constraint_name, validated, uniqueness
FROM user_constraints c, user_indexes i
WHERE c.constraint_name = i.index_name AND c.table_name = 'ZIGGY';

CONSTRAINT_NAME VALIDATED     UNIQUENES
--------------- ------------- ---------
ZIGGY_PK        NOT VALIDATED NONUNIQUE

We have a PK constraint even though there are currently duplicate values of the PK column in the data.

OK, let’s now drop the constraint and the Non-Unique index, and delete the duplicate row:

SQL> ALTER TABLE ziggy DROP PRIMARY KEY;

Table altered.

SQL> DROP INDEX ZIGGY_PK;

Index dropped.

SQL> DELETE ziggy WHERE id = 42 and rownum <= 1;

1 row deleted.

SQL> COMMIT;

Commit complete.

The data is now clean and we have no existing constraint or index on the ID column:

SQL> SELECT constraint_name, validated FROM user_constraints WHERE table_name = 'ZIGGY';

no rows selected

Let’s now do something that, based on our understanding, might appear to be a little odd: let’s try and recreate the constraint in a NOVALIDATE state but with a Unique index. This of course should now work as there are indeed no duplicates within the data:

SQL> ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id) ENABLE NOVALIDATE;

Table altered.

Success !! Let’s now look at the state of the constraint and the type of index created:

SQL> SELECT constraint_name, validated, uniqueness
FROM user_constraints c, user_indexes i
WHERE c.constraint_name = i.index_name AND c.table_name = 'ZIGGY';

CONSTRAINT_NAME VALIDATED     UNIQUENES
--------------- ------------- ---------
ZIGGY_PK        NOT VALIDATED UNIQUE 

As expected, we have a constraint that’s policed by a Unique index that has not been validated.

This might appear to be a little odd, because the question you might well now ask is why bother not validating a constraint that has effectively been validated anyway, as the use of the Unique index guarantees there cannot possibly be any duplicate values, else the creation of the Unique index would have failed ?

We effectively have a validated constraint which Oracle is still classifying as being not validated :)

Then again, maybe not …

More later.

Differences between Unique and Non-Unique Indexes (Part I) December 18, 2007

Posted by Richard Foote in Constraints, Deferrable Constraints, Index Internals, Indexing Tricks, Novalidate Constraints, Oracle Indexes, Primary Key, Unique Indexes.
30 comments

I’ve had a number of comments regarding my earlier blog entry where I recommended avoiding Deferrable and Novalidate constraints unless you need them and consider using Unique Indexes rather than Non-Unique Indexes where possible.

Why such a recommendation, aren’t Unique and Non-Unique indexes practically the same thing when it comes to policing constraints ?

Sure, one index explicitly prevents the insertion of duplicates while the other doesn’t. Yes, dropping/disabling a constraint policed by an automatically created Unique index causes the index to be dropped if you forget the KEEP INDEX clause.
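A quick sketch of the KEEP INDEX difference (assuming a ziggy table whose PK is policed by the automatically created Unique index ZIGGY_PK):

```sql
-- The automatically created Unique index is dropped along with the constraint ...
ALTER TABLE ziggy DROP CONSTRAINT ziggy_pk;

-- ... unless we remember to ask Oracle to keep it
ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id);
ALTER TABLE ziggy DROP CONSTRAINT ziggy_pk KEEP INDEX;

-- The ZIGGY_PK index survives this second drop as an ordinary index
SELECT index_name FROM user_indexes WHERE table_name = 'ZIGGY';
```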

But that’s about it, right ?

Well, if you need a constraint to be deferrable, then you must create (either implicitly or explicitly) a Non-Unique index. If you want to enable a constraint with novalidate, then again you can only do so with a Non-Unique index in place policing the constraint.
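For example (a sketch, assuming a ziggy table with a numeric ID column and no existing index on it), a deferrable PK will quietly be policed by a Non-Unique index even though we never asked for one:

```sql
-- No USING INDEX clause: Oracle creates the policing index itself
ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id) DEFERRABLE;

-- Because the constraint is deferrable, the index must be Non-Unique
SELECT uniqueness FROM user_indexes WHERE index_name = 'ZIGGY_PK';

-- UNIQUENES
-- ---------
-- NONUNIQUE
```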

It does all rather sound like Non-Unique indexes have all the advantages and allow for all the flexibility one could want. Non-Unique indexes allow for both deferrable and novalidate constraints, they don’t get dropped when the associated constraint is dropped/disabled and they can actually police both PK and Unique constraints.

What possible benefits are there in Unique Indexes ?

Well, providing you don’t need your constraints to be deferrable, you validate your constraints when they get created/enabled and you don’t go around dropping PK and/or Unique constraints on too regular a basis (or remember the KEEP INDEX clause if you don’t want your index dropped when you do), then there are a number of reasons why you may just want to consider using Unique indexes over Non-Unique indexes.

There are actually a number of key differences between Unique and Non-Unique indexes, both in the manner in which they’re stored by Oracle and in the manner in which they get processed.

In Part I, I’m just going to focus on the differences in how Oracle physically stores index entries.

In actual fact, there’s really no such thing as a Non-Unique index in Oracle. In order for Oracle to be able to determine the location of any specific index row entry and to determine an appropriate “order” for each index row entry, internally Oracle converts all Non-Unique indexes into Unique indexes. It does this by using the associated ROWID of the index row entry as an additional “column”. As each ROWID is unique, this effectively makes all index entries in a Non-Unique index unique as well. Oracle uses the unique combination of the Non-Unique index value and the associated ROWID to determine the appropriate order, and hence the appropriate location within the index structure in which to store the index row entry.

By Oracle making the ROWID an additional column, it also has to allocate an additional byte per index row entry in order to store the length of this column. That’s one teeny weeny little byte extra for each and every index row entry.

So what ?

Well, for indexes that don’t have a particularly large index key length, that one byte can be a significant proportion of the overall key length. Now, Oracle needs to allocate 2 bytes per row entry for various flags and locking information, it requires 6 bytes for the rowid and 1 byte for each column entry. That’s 9 bytes minimum plus the length of the indexed value itself.
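The arithmetic can be sketched directly (assuming a modest sequence-based key stored in roughly 4 bytes; real number lengths vary with the value):

```sql
-- Per-entry byte accounting: flags/lock + rowid + length byte(s) + value
SELECT 2 + 6 + 1 + 4     AS unique_entry_bytes,    -- 13: rowid held in the entry header
       2 + 6 + 1 + 1 + 4 AS nonunique_entry_bytes  -- 14: extra length byte, rowid as a "column"
FROM dual;
```

One byte out of thirteen is an overhead of roughly 7 or 8 percent for a small key like this.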

Well, how large is a typical unique index entry ? That of course all depends, and some PK and (especially) Unique Key values can be quite large. But many, many PK values are simply sequence-based numerical values, created nice and small so as to reduce overheads when stored in dependent child tables.

But can it really make any noticeable difference ?

Well, this little demo shows two tables with 1 million numeric PK values: Compare internal index storage between Unique and Non-Unique Indexes

Table test1 is created with a Non-Unique Index, table test2 is created with a Unique Index. The demo shows a partial block dump of a leaf block from each index, highlighting how the Non-Unique index requires an additional byte per index row entry.

The Unique index manages to hold 533 leaf entries in the block while the Non-Unique index could only hold 500. Comparing the total sizes of the two indexes, the Unique index required 1875 leaf blocks while the Non-Unique index required 1999 leaf blocks.

That’s an increase of approximately 6.6% in leaf blocks required for the Non-Unique index to store exactly the same number of index entries as the Unique Index (in this particular example).
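The percentages follow directly from the counts above (a quick sanity check):

```sql
-- 500 -> 533 entries per leaf block, and 1875 -> 1999 leaf blocks overall
SELECT ROUND((533 - 500) / 500 * 100, 1)    AS extra_entries_pct,  -- 6.6
       ROUND((1999 - 1875) / 1875 * 100, 1) AS extra_blocks_pct    -- 6.6
FROM dual;
```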

That’s 6.6% less storage, that’s a reduction of 6.6% in block splitting and block allocations, that’s a reduction of 6.6% in the cost of full index scans, that’s 6.6% less memory required to cache the index, etc. etc.

The point here is that these savings don’t require any expensive, periodic rebuilding of indexes. They don’t require any additional fancy scripts or additional monitoring and processing. The DBA doesn’t have to calculate irrelevant statistics or demand scheduled outages to claim these savings.

This is getting more “dollars for your buck”: a freebie from Oracle, purely and simply by using a Unique index instead of a Non-Unique index.

Note also that not one or two but ALL of your numeric based PKs have the potential to get these types of savings. Obviously, the larger the actual PK or Unique Key values, the smaller that one byte is in proportion to the overall key length and the smaller the percentage savings.

But it’s not a bad payback for many, many of your indexes, purely and simply by using Unique indexes instead of Non-Unique indexes where possible …

This is but one of the benefits of using Unique Indexes. More (potentially significant) advantages to follow …
