
How Does An Execution Plan Suddenly Change When The Statistics (And Everything Else) Remains The Same ? (In Limbo) February 16, 2010

Posted by Richard Foote in CBO, Index Access Path, Oracle Cost Based Optimizer, Oracle Indexes, Oracle Myths.

I’ve slipped this post in as there have been a number of discussions recently on how execution plans have changed while nothing else appears to have changed in the database. How can an execution plan suddenly change when no one has made any changes to the database ?
 
By no changes, it means that there have been no alterations to any segments, no new indexes have been added, no changes associated with bind peeking (indeed, there may not even be any bind variables), no parameter changes, no new patches or upgrades, no new outlines or profiles, no new system stats and perhaps most prevalent of all, no changes to any CBO statistics.
 
The DBA hasn’t touched a thing and yet, for no apparent reason, execution plans suddenly change and (say) an inappropriate index is suddenly used, causing performance degradation.
 
How can this be possible ?
 
There are two key points I want to emphasise.
 
The first is that there’s a common misperception that if no new statistics are gathered (and assuming nothing else is altered in the database), execution plans must always remain the same. That by not collecting statistics, one can somehow ensure and guarantee the database will simply perform in the same manner and generate the same execution plans.
 
This is fundamentally not true. In fact, quite the opposite can be true. One might need to collect fresh statistics to make sure vital execution plans don’t change. It’s the act of not refreshing statistics that can cause execution plans to suddenly change.
 
The second point is that when one goes through all the things that might have changed in the database, two important aspects are often overlooked.
 
The first thing that does usually change within most databases is the actual data within the database. Those damn users log on and keep adding new data and modifying data all the time. It might not be a database change as such but the fact the data changes within a database is a critical change that can directly influence CBO behaviour. When pointing the finger at what might have caused an execution plan to change, many simply ignore the fact the data is constantly changing in the background.
 
The other aspect that always changes is time. People have tried but it’s very difficult to stop time. When things worked well, it was at a different point in time than now when things have suddenly gone wrong.
 
So some things do change that are not in direct control of the DBA.
 
But if we don’t collect fresh statistics, even though the data might have changed, won’t those data changes be effectively invisible to the CBO? Won’t the statistics not reflect any possible data changes and if the CBO doesn’t think the data has changed, doesn’t that mean it can’t suddenly change how it determines an execution plan ?
 
Not true. It’s quite possible that because the statistics haven’t changed, the CBO is forced into making changes in how it costs and determines an execution plan.
 
A very simple example follows, a classic case of why not refreshing statistics has caused the CBO to suddenly change an execution plan for no apparent reason.
 
I’ll begin by creating a simple little table and populate it with approximately 5 years worth of data.

 
SQL> create table muse (id number, muse_date date, name varchar2(10));
 
Table created.
 
SQL> declare
  2  v_count  number;
  3  begin
  4  v_count:=0;
  5  for i in 1..1830 loop
  6     for j in 1..1000 loop
  7     v_count:= v_count+1;
  8     insert into muse values (v_count, sysdate-i, 'MUSE');
  9     end loop;
 10  end loop;
 11  commit;
 12  end;
 13  /
 
PL/SQL procedure successfully completed.
 
SQL> create index muse_i on muse(muse_date);
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'MUSE', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

So the procedure basically populates the table, setting the MUSE_DATE column with approximately 5 years worth of data, with 1000 rows for each day so the data is evenly distributed across those 5 years.

Note that I’ve collected statistics on the table and index and they’re fully up to date.

The following query is a typical query in our application, where we’re only interested in looking at the previous year’s worth of data. It simply selects all data that is only a year old. This is a query that’s run all the time and only looks at a “moving window” of data, that being just those rows that were inserted up to a year ago.


 
SQL> select * from muse where muse_date > sysdate - 365;
 
364000 rows selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 2738706195
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   363K|  6390K|  1330  (11)| 00:00:07 |
|*  1 |  TABLE ACCESS FULL| MUSE |   363K|  6390K|  1330  (11)| 00:00:07 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("MUSE_DATE">SYSDATE@!-365)
 
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       5992  consistent gets
       5912  physical reads
          0  redo size
    3638996  bytes sent via SQL*Net to client
       1188  bytes received via SQL*Net from client
         74  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     364000  rows processed
 

Notice how the CBO has decided to use a Full Table Scan (FTS) as a year is quite a chunk of the table and is more effectively accessed in this manner. Notice also how the CBO has got the cardinality spot on and has correctly predicted the number of rows to be returned. If the CBO gets the selectivity and so the cardinality of the query correct, we have some confidence that it has indeed come up with the most efficient execution plan. Indeed, the users are perfectly happy with the performance of the query, the DBAs are happy and because we don’t really want to risk the CBO suddenly changing things, we decide to not collect statistics on this table any more.

However, more data is pumped into the table each and every day by the end-users.

The following procedure will add another year’s worth of data into the table to simulate how the table will be populated in a year’s time …

SQL> declare
  2  v_count  number;
  3  begin
  4  v_count:=1830000;
  5  for i in 1..365 loop
  6     for j in 1..1000 loop
  7     v_count:= v_count+1;
  8     insert into muse values (v_count, sysdate+i, 'MUSE');
  9     end loop;
 10  end loop;
 11  commit;
 12  end;
 13  /
 
PL/SQL procedure successfully completed.

Note that we have NOT collected any new statistics.

OK, let’s now fast track ourselves one year into the future and run the same query again. Note in a year’s time, we will be 365 days past the current sysdate. So we’ll mimic running the identical query by simply adding 365 days to the sysdate and again querying for the latest year’s worth of data:


 
SQL> select * from muse where muse_date > (sysdate+365) - 365;
 
365000 rows selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1682432684
 
--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |   944 | 16992 |     9   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| MUSE   |   944 | 16992 |     9   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | MUSE_I |   944 |       |     5   (0)| 00:00:01 |
--------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("MUSE_DATE">SYSDATE@!+365-365)
 
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       4005  consistent gets
       1301  physical reads
     134192  redo size
    4024147  bytes sent via SQL*Net to client
       1188  bytes received via SQL*Net from client
         74  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     365000  rows processed
 

We notice that the execution plan has now changed !!

It’s now suddenly starting to use an index where previously it was using a FTS. Notice also that the CBO has got the cardinality estimate way wrong, predicting only 944 rows will be returned. Instead of estimating it will get a year’s worth of data, the CBO is estimating only approximately one day’s worth of data, or the selectivity based on one distinct value. If the CBO gets this so terribly wrong, there’s a good chance it has also got the execution plan terribly wrong as well.

The query is effectively the same query that would be run in a year’s time, the statistics have not been changed and yet the execution plan has indeed changed. The CBO suddenly using this index may be a terrible thing, resulting in a really inefficient execution plan and a massive increase in LIOs.

Why has the plan changed when the statistics have not ?

The key issue here is that the CBO thinks the maximum date in the table was from a year ago, when the statistics were last calculated. However, the query is attempting to select data that is beyond the range of values known to the CBO. How can it know the estimated cardinality of the query, the number of expected rows to be returned, when we’re only interested in rows that are beyond its maximum known range of data ?

The answer is that it can’t. Not accurately anyway.

The CBO has to guess and depending on the version of Oracle and the type of query being parsed, it can guess in a number of different ways. Because the query is effectively after data that is greater than the maximum known value, the CBO is basically “guessing” there will only be a day’s worth of data greater than its maximum known value, not the full year’s worth that’s really in the table. The CBO having to guess is not a good thing, especially when it’s more than likely to get the guess hopelessly wrong.

Note this change will have occurred suddenly, one day into the future, when the CBO starts to estimate so few rows will be returned that the index suddenly becomes the best and cheapest option.
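We can actually see the stale picture the CBO is working from by comparing the stored column statistics with the actual data. A quick sanity check along these lines (note the LOW_VALUE and HIGH_VALUE columns are stored in Oracle’s internal raw format, so they’re not directly readable, but LAST_ANALYZED alone is telling):

SQL> select column_name, num_distinct, high_value, last_analyzed
  2  from user_tab_col_statistics
  3  where table_name = 'MUSE' and column_name = 'MUSE_DATE';

SQL> select max(muse_date) from muse;

With the statistics now a year stale, the maximum MUSE_DATE known to the CBO sits a full year behind the actual maximum date in the table, which is precisely the gap that forces the CBO to guess.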

How do we fix this inefficient execution plan ?

Rather than having the CBO guess how many rows might be returned, let it actually know. Simply collect fresh statistics and let the CBO know that we actually have a full year’s worth of data since the statistics were previously collected:

SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'MUSE', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');
PL/SQL procedure successfully completed.

 

If we run the same query again now …


 
SQL> select * from muse where muse_date > (sysdate+365) - 365;
 
365000 rows selected.
 
 
Execution Plan
----------------------------------------------------------
Plan hash value: 2738706195
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   364K|  6413K|  1652  (14)| 00:00:09 |
|*  1 |  TABLE ACCESS FULL| MUSE |   364K|  6413K|  1652  (14)| 00:00:09 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("MUSE_DATE">SYSDATE@!+365-365)
 
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       7205  consistent gets
       6709  physical reads
          0  redo size
    4024147  bytes sent via SQL*Net to client
       1188  bytes received via SQL*Net from client
         74  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     365000  rows processed

 

We now notice the CBO has got the cardinality spot on again and chooses to use the efficient FTS.

So yes, an execution plan can change even if we don’t make any changes to the database, including not collecting fresh statistics. If you think by not collecting statistics, things will simply remain the same, one day when you least expect it, things might suddenly go terribly wrong.

Solving such issues can be extremely difficult if you try to do so by looking at what might have changed, especially if you don’t know what you’re looking for …

The CBO and Indexes: OPTIMIZER_INDEX_COST_ADJ Part III August 20, 2009

Posted by Richard Foote in Index Access Path, OPTIMIZER_INDEX_COST_ADJ, Oracle Indexes.

After a bit of a layoff to organise a few upcoming overseas trips, while watching plenty of Ashes Cricket and the brilliantly funny “Flight Of The Conchords” DVDs, it’s about time I got back to my humble little blog.

In Part II, we looked at a really bad way to set the optimizer_index_cost_adj parameter, by just setting it to a really low value and allowing indexes to blindly reign supreme in the database.

Remember, the purpose of the optimizer_index_cost_adj parameter is to accurately reflect differences and discrepancies in costs associated with single block I/Os when compared with corresponding multi-block I/Os so that the CBO considers and incorporates these discrepancies in its costings. 

A second method of setting the optimizer_index_cost_adj parameter is to set it to a value that attempts to accurately reflect these comparative costs. So if a single block I/O is typically only half as expensive and/or only takes half the time to complete when compared to a multi-block I/O, then a reasonable setting for the optimizer_index_cost_adj parameter would be 50.
 
So how to set the optimizer_index_cost_adj parameter “intelligently” ?
 
Well, Oracle has excellent instrumentation and the comparative wait times for each of these types of I/Os are automatically measured and captured by Oracle. A single block I/O as performed typically by an index range scan is measured via the “db file sequential read” wait event while the multi-block I/O as typically performed during a FTS is measured via the “db file scattered read” wait event. 

By determining the average wait times for each of these events and comparing the differences, one can determine how much longer it takes on average for one type of I/O to complete versus the other. This will then provide us with a reasonable starting point with which to set the optimizer_index_cost_adj parameter.
 
One can simply look at these average wait events for the database since startup by querying v$system_event:
 
SQL> select event, average_wait from v$system_event where event like 'db file s%read';

EVENT                   AVERAGE_WAIT
----------------------- ------------
db file sequential read          .59
db file scattered read           .78

 
In order to determine these wait events during a specific time period to perhaps better reflect typical loads during these times, one could also simply run a statspack or an AWR report and look at the wait event section of the report.

So in the above example, a “sequential” read only takes approximately 75% of the time when compared to a “scattered” read. As such, a value of 75 would be an appropriate starting value with which to set the optimizer_index_cost_adj parameter.
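If you’d rather let the database do the arithmetic, a simple sketch along these lines derives the comparative percentage directly from the same v$system_event figures:

SQL> select round(100 * seq.average_wait / scat.average_wait) as pct
  2  from (select average_wait from v$system_event
  3        where event = 'db file sequential read') seq,
  4       (select average_wait from v$system_event
  5        where event = 'db file scattered read') scat;

With the wait times above, this returns approximately 76, so a starting value of around 75 for the optimizer_index_cost_adj parameter is indeed in the right ballpark.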
 
With the I/O costing model, the CBO is basing its costs on the number of I/Os performed by each possible access path. If an index is only going to take 75% of the time to perform its associated I/Os when compared to the time it takes to typically perform I/Os during a FTS, it’s reasonable to adjust the associated costs of an index access down to 75% of its overall costs.
 
This will hopefully have the desired effect of making it a “level playing field” between an index based access path and a FTS when determining how long all the I/Os associated with each possible execution path might take.

If we plug a value of 75 into the optimizer_index_cost_adj parameter and re-run the demo in Part I where the CBO initially chose the more expensive FTS which had a cost of 65:
 

SQL> alter session set optimizer_index_cost_adj=75;
 
Session altered.
 
SQL> select * from bowie_stuff2 where id in (20, 30, 40, 50, 60);
 
10000 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 2964430066

-------------------------------------------------------------------------------
| Id  | Operation                    | Name           | Rows  | Bytes | Cost  |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                | 10000 |   175K|    60 |
|   1 |  INLIST ITERATOR             |                |       |       |       |
|   2 |   TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF2   | 10000 |   175K|    60 |
|*  3 |    INDEX RANGE SCAN          | BOWIE_STUFF2_I | 10000 |       |    27 |
-------------------------------------------------------------------------------
 

We note the CBO is now choosing to use the index, which is the more appropriate plan as it provides a somewhat faster response than the previous FTS.
 
However, if we also re-run the demo from Part II with the optimizer_index_cost_adj also set to 75, where previously Oracle quite correctly chose to use a FTS:
 

SQL> alter session set optimizer_index_cost_adj=75;
 
Session altered.
 
SQL> select * from bowie where id between 1 and 1000;
 
1000873 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

-----------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT  |       |  1000K|    69M| 16499 |
|*  1 |  TABLE ACCESS FULL| BOWIE |  1000K|    69M| 16499 |
-----------------------------------------------------------
 
 

We note that the FTS is still selected, as the change in the CBO index related costs was not significant enough to change the execution plan. A really low value of 2 for the optimizer_index_cost_adj parameter really stuffed things up previously, but a more appropriate value of 75 in this database has ensured that the FTS is still chosen when appropriate.
 
So in both scenarios, the CBO is now choosing an appropriate execution plan. By setting the optimizer_index_cost_adj parameter in a logical manner, consistent with the relative wait time differences between single and multi-block I/Os, the CBO is more likely to choose appropriate execution plans.

Of course, there are always likely to be some discrepancies when dealing with such “averages”. We only have the one parameter after all which impacts the costs of all index range scan access paths, so we can only deal with averages. Perhaps there are some specific indexes which take significantly more (or less) time to complete than the average, as their associated I/Os are impacted by where the blocks might physically sit on the disk arrays, or on contention issues due to other concurrent activity, or on index caching characteristics (Note: I’ll discuss the optimizer_index_caching parameter at another time), etc. etc.
 
The same goes for some specific FTS which have multi-block I/Os that take significantly less (or more) time to complete than the average, as their associated I/Os might also be impacted by similar factors. Perhaps some of these I/O characteristics and timings might change depending on the load on the system at different times of the day or week or month.

But that’s what an “average” value means right, some objects will have a higher (or slower) value while some have a lower (or faster) value.
 
So setting the optimizer_index_cost_adj parameter is not a precise science, although of course the CBO in general is not a precise science either and close enough is usually good enough for the vast majority of cases. The name of the game is ensuring that the parameter is set to a value that’s in the “ballpark”, and using the associated wait events to determine comparative wait times for single and multi-block I/Os is a reasonable way to do this.

However, despite being able to set the optimizer_index_cost_adj parameter in a reasonably “intelligent” manner, my preferred method of setting this parameter is still method number 3. That is to simply not set the optimizer_index_cost_adj parameter at all and leave it at the default value of 100 and use system statistics and the CBO CPU costing model instead.
 
By generating and maintaining accurate system statistics, you can effectively get the desired “level playing field” benefits of a well tuned optimizer_index_cost_adj parameter in a somewhat easier manner but with a few other added benefits as well. I would therefore strongly recommend the use and implementation of system statistics and leave the optimizer_index_cost_adj parameter well alone. IMHO, the optimizer_index_cost_adj parameter is there now only for backward compatibility reasons since the introduction of the CBO CPU costing model.
 
However, these discussions have not all been in vain because the optimizer_index_cost_adj parameter still has an impact even with system statistics in place. It’s just that the use of the optimizer_index_cost_adj parameter in conjunction with system statistics typically has the effect of screwing up the “level playing field” environment system statistics is meant to create.
 
Also, the costing formulas for indexes as previously discussed are still very much relevant as the CPU costing model often has little impact on the actual costs associated with using indexes. As I’ll discuss later, system statistics actually achieves a very similar outcome to the optimizer_index_cost_adj parameter. It’s just that it does so in a somewhat different manner by generally increasing the associated FTS costings to a more appropriate comparative value, rather than simply decreasing the index related costs, while taking both I/O and CPU overheads into consideration.

The CBO and Indexes: OPTIMIZER_INDEX_COST_ADJ Part I July 8, 2009

Posted by Richard Foote in CBO, Index Access Path, OPTIMIZER_INDEX_COST_ADJ, Oracle Cost Based Optimizer, Oracle Indexes.

In the previous entry regarding The CBO and Indexes, we saw how the CBO with a query that selected 5 distinct values in an IN list, representing 5% of the data, decided to use a FTS because the costs of doing so were less than that of using a corresponding index. These costs (using the I/O costing model) represented the number of expected I/Os and the FTS was basically going to perform fewer I/Os than the index. Less I/Os, less cost and so the FTS was selected as the preferred access path.
 
However, by default, the CBO when determining these costs via the I/O costing model makes two very important assumptions which may not necessarily be true.
 
Assumption one is that all I/Os are likely to be “physical I/Os” which all need to be costed and taken into account.
 
Assumption two is that all I/Os are costed equally, even though the size of a multiblock I/O performed typically during a FTS is larger and so potentially more costly than a single block I/O usually associated with an index access.
 
Today, I’m only going to focus on this second assumption. 

Now, when performing and processing data from a multiblock I/O as performed during a FTS operation, it’s typical for such operations to be more resource intensive than a single block I/O as performed during an index range scan, as the associated overheads are likely to be greater, such as having to read more actual data off the disk, having to transfer more data into the SGA, having to process more data in each associated block, etc.
 
Therefore, not all I/Os are equal. However, by default the CBO ignores all these possible differences and costs all I/Os associated with a FTS (multiblock) and an index (single block) as being equivalent or the same.
 
Now, this hardly seems fair or indeed accurate and desirable when determining the true cost differences between an index and a FTS. Shouldn’t the fact that a single block I/O is likely to be less resource intensive and take less elapsed time to process be taken into consideration when determining these relative costs ? 

Enter the optimizer_index_cost_adj parameter.
 
The purpose of this parameter is simply to “adjust” the corresponding costs associated with an index to (hopefully) more accurately reflect the relative I/O costs between using an index and a FTS. If for example a single block I/O only takes 1/2 the time and resources to perform compared to a multiblock I/O, shouldn’t these associated I/O cost differences be reflected when determining whether or not to use an index and perhaps reduce the index related costs by 1/2 as a result ?
 
This parameter has a very simple impact on how the CBO costs the use of an index based access path. It takes the value of the optimizer_index_cost_adj as a percentage and adjusts the cost of an index related range scan access path to only be the corresponding percentage of the total index cost. By default, it has a value of 100 meaning that a single block I/O is 100% when compared to that of a multiblock I/O which in turn means that the index related I/O costs are treated the same as that of a multiblock FTS I/O. A default value of 100 therefore has no effect on the overall cost of using an index related access path.
 
However, if the optimizer_index_cost_adj only has a value of (say) 25, it means that all single block I/O are only 25% as costly as that of a multiblock I/O and so index related range scan costs are adjusted to be only 25% of that of the total index access path cost.
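As a quick illustration, the parameter can be trialled at the session level without impacting the rest of the database:

SQL> alter session set optimizer_index_cost_adj=25;

Session altered.

SQL> show parameter optimizer_index_cost_adj

NAME                      TYPE     VALUE
------------------------- -------- -----
optimizer_index_cost_adj  integer  25

All index range scan access path costs computed for this session would now be scaled down to a quarter of their unadjusted values.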
 
Going back to the previous demo where the FTS was selected, I calculated the cost of using the index when retrieving the 5% of data to be:

index blevel + ceil(index selectivity x leaf blocks) + ceil(table selectivity x clustering factor)

with the ceil(index selectivity x leaf blocks) component costed once for each of the 5 values in the IN list:

2 + 5 x ceil(0.01 x 602) + ceil(0.05 x 854) = 2 + 5 x 7 + 43 = 37 + 43 = 80.
 
The cost of using a FTS was calculated as being only 65. A cost of 65 for the FTS is less than a cost of 80 for the index and so the FTS was selected.

This time, the linked demo sets the optimizer_index_cost_adj = 25 before running the exact same query again.

We notice a couple of key differences. The first obvious difference is that the plan has changed and that the CBO has now decided to use the index. The second difference is the associated cost relating to the use of the index. Previously, it was calculated as being 80 but now it only has a cost of 20. The maths is pretty simple as with an optimizer_index_cost_adj = 25, we need only multiply the previous total by 0.25:

(2 + 5 x ceil(0.01 x 602) + ceil(0.05 x 854)) x 0.25 = (2 + 5 x 7 + 43) x 0.25 = 80 x 0.25 = 20.

Note also that just the index range scan cost component was previously 2 + 5 x ceil(0.01 x 602) = 37, but is now also adjusted to 37 x 0.25 which rounds to 9.

Basically by setting the optimizer_index_cost_adj = 25, we have effectively reduced the overall cost of using the index based execution path down from 80 to just 20, to just 25% of the previous total index cost.
 
The cost of the FTS remains unchanged at 65. The index access path at just 20 is now less than the FTS alternative and so the index is now chosen by the CBO.

Yes, all these numbers and costs make sense when one understands how the CBO performs its calculations and the effect of setting the optimizer_index_cost_adj parameter to a non-default value.

The  optimizer_index_cost_adj parameter can therefore obviously have a very significant impact in the behaviour and subsequent performance of the database as the CBO will reduce (or maybe increase) the actual costs of index related access paths by the percentage denoted in the optimizer_index_cost_adj parameter. It can potentially dramatically increase (or decrease) the likelihood of an index access path being chosen over a FTS.
 
There are typically 3 very different ways in which this parameter is set, which I’ll list in increasing order of preference.
 
1) Set it arbitrarily to a very low figure such that indexes reign supreme as their associated costs get adjusted to such a low figure by the CBO that a FTS access path has little chance of being chosen (for example, here’s a suggestion to set it to a magical value of 12). Generally a very bad thing to do in any database …
 
2) Set it to a value that the DBA determines is an approximate percentage of the costs associated with a single block I/O when compared to a multiblock I/O. An improvement on option 1), but I still prefer option 3) …
 
3) Leave it at the default value of 100 such that it has no impact and the CBO does not use it to adjust the cost of an index access path.

 

I’ll explain in Part II a sensible approach in setting the optimizer_index_cost_adj parameter and why option 3 is the preferred option with any currently supported version of Oracle.

Indexes and NOT Equal (Not Now John) August 13, 2008

Posted by Richard Foote in Index Access Path, NOT Equal, Oracle Cost Based Optimizer, Oracle Indexes, Performance Tuning.

The Cost Based Optimizer (CBO) is a rather complex piece of code that has to deal with countless different possible scenarios when trying to determine the most optimal execution plan. It’s also a vitally important piece of code because not only do its decisions need to be reasonably accurate so that it doesn’t generate inefficient execution plans, but it needs to make these decisions in a reasonably efficient manner, else it wastes resources and most importantly time while performing its calculations.

So there’s a trade-off between ensuring the CBO makes reasonable decisions while ensuring it makes its decisions in a timely and resource efficient manner. Database performance could be directly impacted if these trade-offs are not managed effectively.

Therefore, there are all sorts of short cuts and assumptions that are coded into the CBO to make its life a little easier. However, these short cuts can sometimes be problematic if they’re not recognised and handled appropriately.

One of these little short cuts worth noting is how the CBO deals with NOT EQUAL (and NOT IN) conditions …

Typically when we have a condition where we just say NOT EQUAL, we’re basically suggesting we’re interested in the vast majority of possible values with the exception of the value specified in the NOT EQUAL condition. We want most values but not if it’s this particular value.

For example, a condition where we state something such as:

WHERE TEXT <> 'BOWIE'

means we want all the other possible values of TEXT, just not those with the specific value of ‘BOWIE’. In other words, we’re typically interested in the vast majority of possible values when we specify a NOT EQUAL condition.

However, we all know that Oracle will typically not use an index if a relatively “high” percentage of rows are to be selected. It would generally be more efficient and less costly to simply perform a Full Table Scan if most rows are going to be returned anyways.

Therefore the CBO simply ignores indexes when costing a NOT EQUAL condition. Why bother going to all the overhead of calculating the cost of using an index to retrieve the vast majority of rows when a Full Table Scan is going to be the cheaper alternative in the vast majority of such cases ? So the CBO doesn’t even bother trying and ignores all indexes that could potentially be used to retrieve the rows based on the NOT EQUAL condition.

But what if the data isn’t evenly distributed and the NOT EQUAL condition actually retrieves only a relatively small proportion of the rows ? What if most rows actually have the value specified in the NOT EQUAL condition and the rows to be retrieved constitute only a relatively small proportion of the total ?

When the CBO ignores indexes, it ignores indexes in all cases. Even if 99.99% of rows match the value in the NOT EQUAL condition and there’s only a handful of remaining rows to actually be retrieved, the code path in the CBO is still followed and indexes are ignored regardless. The reason possibly being such queries could be re-written to avoid the use of the NOT EQUAL condition and so its use is still suggesting a large selectivity.

The refusal of the CBO to consider an index with a NOT EQUAL condition can easily be illustrated.

First, let’s create a table and populate a TEXT column with the same value, 'BOWIE':

SQL> create table bowie as select rownum id, 'BOWIE' text from dual connect by level <= 1000000;

Table created.

Let’s make the TEXT column NOT NULL so the CBO knows all rows have a value for this column:

SQL> alter table bowie modify text not null;

Table altered.

Let’s now add a new row, one that has a different value for the TEXT column:

SQL> insert into bowie values (1000001, 'ZIGGY');

1 row created.

SQL> commit;

Commit complete.

So all rows have a TEXT value of ‘BOWIE’, except for just the one row which has a value of ‘ZIGGY’.

OK, let’s now create an index on this column:

SQL> create index bowie_i on bowie(text);

Index created.

Let’s now collect some statistics on this table, including a histogram on the TEXT column so that the CBO knows the data is not evenly distributed and that the vast number of values of TEXT are 'BOWIE':

SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=> 'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=> 'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR COLUMNS TEXT SIZE 5');

PL/SQL procedure successfully completed.

So only one row has a value that is NOT a ‘BOWIE’ which means an index to retrieve this one and only row would be an efficient and appropriate execution path, right ?

Well, let’s see what the CBO decides to do. First, let’s set a 10053 trace so we can see how the CBO has costed its possible options.

SQL> alter session set events '10053 trace name context forever';

Session altered.

Let’s now execute this simple and innocent looking statement:

SQL> select * from bowie where text <> 'BOWIE';

        ID TEXT
---------- -----
   1000001 ZIGGY

---------------------------------
| Id| Operation         | Name  | 
---------------------------------
|  0| SELECT STATEMENT  |       |
|* 1|  TABLE ACCESS FULL| BOWIE | 
---------------------------------

We note that Oracle has decided not to use the index but to use a FTS instead. If we look at the relevant parts of the 10053 trace, we can see that the CBO did not even cost or consider using the index. The index was basically ignored:

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: BOWIE  Alias: BOWIE
    #Rows: 1000001  #Blks:  2214  AvgRowLen:  10.00
Index Stats::
  Index: BOWIE_I  Col#: 2
    LVLS: 2  #LB: 2370  #DK: 2  LB/K: 1185.00  DB/K: 1105.00  CLUF: 2210.00
Access path analysis for BOWIE
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for BOWIE[BOWIE]
  Column (#2):
    NewDensity:0.000000, OldDensity:0.000000 BktCnt:1000001, PopBktCnt:1000000, PopValCnt:1, NDV:2
  Table: BOWIE  Alias: BOWIE
    Card: Original: 1000001.000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
  Access Path: TableScan
    Cost:  620.67  Resp: 620.67  Degree: 0
      Cost_io: 601.00  Cost_cpu: 435767288
      Resp_io: 601.00  Resp_cpu: 435767288
  Best:: AccessPath: TableScan
         Cost: 620.67  Degree: 1  Resp: 620.67  Card: 1.00  Bytes: 0

You can try to hint the query but the CBO will still ignore any RANGE SCAN operation because the CBO can’t know all the other potential values that are not 'BOWIE' (remembering the statistics may not necessarily be accurate). It can perform a FULL INDEX SCAN but this means reading all the leaf nodes that contain all the unwanted 'BOWIE' values and so it’s still an inefficient option:

SQL> select /*+ index (bowie bowie_i) */ * from bowie where text <> 'BOWIE';

-----------------------------------
| Id| Operation                   |
-----------------------------------
|  0| SELECT STATEMENT            |
|  1|  TABLE ACCESS BY INDEX ROWID|
|* 2|   INDEX FULL SCAN           |
-----------------------------------

The INDEX RANGE SCAN is simply not an option …

What is an option of course is to simply rewrite the query. One can just write the query in the “positive” sense and the index is now considered and used:

SQL> select * from bowie where text = 'ZIGGY';

-----------------------------------
| Id| Operation                   |
-----------------------------------
|  0| SELECT STATEMENT            |
|  1|  TABLE ACCESS BY INDEX ROWID|
|* 2|   INDEX RANGE SCAN          |
-----------------------------------

Or, if there are many different distinct values that are not 'BOWIE' but which in total still constitute a relatively small percentage of the total rows, then the query could be re-written as follows, making use of the index in an effective manner by concatenating two separate index range scans:

SQL> select * from bowie where text < 'BOWIE' or text > 'BOWIE';

        ID TEXT
---------- -----
   1000001 ZIGGY
------------------------------------
| Id| Operation                    |
------------------------------------
|  0| SELECT STATEMENT             |
|  1|  CONCATENATION               |
|  2|   TABLE ACCESS BY INDEX ROWID|
|* 3|    INDEX RANGE SCAN          |
|  4|   TABLE ACCESS BY INDEX ROWID|
|* 5|    INDEX RANGE SCAN          |
------------------------------------

Note this same issue applies to NOT IN conditions.
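For example, a quick sketch using the same table shows a logically equivalent NOT IN query is treated in just the same manner, with the index again ignored in favour of a FTS:

SQL> select * from bowie where text not in ('BOWIE');

        ID TEXT
---------- -----
   1000001 ZIGGY

---------------------------------
| Id| Operation         | Name  |
---------------------------------
|  0| SELECT STATEMENT  |       |
|* 1|  TABLE ACCESS FULL| BOWIE |
---------------------------------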

Be very careful when using NOT EQUAL conditions and be mindful of the impact they may have with your indexes.

Empty Leaf Blocks and Statistics (Sense Of Doubt) July 8, 2008

Posted by Richard Foote in Index Access Path, Index Block Splits, Index Delete Operations, Index statistics, Oracle Cost Based Optimizer, Oracle Indexes, Performance Tuning.

I’ve recently been discussing how empty index blocks or those blocks that contain nothing but deleted index entries are placed on the index freelist and can potentially be recycled during subsequent index block split operations.

A point that’s not so well known about such empty index blocks is how Oracle considers them when calculating index related statistics and the possible implications this may have on the CBO.

Let’s set the scene with an example I’ve used previously where we load a table/index with 10000 entries and then subsequently delete the vast majority of them.

SQL> create table rich as select rownum id, 'Bowie' text from dual connect by level <= 10000;
 
Table created.
 
SQL> create index rich_i on rich(id);
 
Index created.

OK, so we now have an index with 10000 entries. Let’s just check to see how many leaf blocks we currently have:

SQL> analyze index rich_i validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, del_lf_rows from index_stats;

   LF_ROWS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
     10000         21           0

So we currently have 10000 LF_ROWS and 21 LF_BLKS with no deleted index rows at this stage.

Let’s now delete the vast majority of rows from the table and hence index row entries from the index:
SQL> delete rich where id <= 9990;
 
9990 rows deleted.
 
SQL> commit;
 
Commit complete.

OK, so now we have an index with the vast majority of the index entries having been deleted and with all but one index leaf block effectively empty.

Let’s start by looking at how the ANALYZE INDEX … VALIDATE STRUCTURE deals with empty leaf blocks and index entries:

SQL> analyze index rich_i validate structure;
 
Index analyzed.
 
SQL> select lf_rows, lf_blks, del_lf_rows from index_stats;

   LF_ROWS    LF_BLKS DEL_LF_ROWS
---------- ---------- -----------
     10000         21        9990

The first thing we notice is that the LF_ROWS statistic still has a value of 10000. It still counts index entries, even if they’ve been deleted.

We also notice that the LF_BLKS value is 21 so those leaf blocks that are effectively empty are still counted as well.

Let’s now collect statistics using DBMS_STATS as currently recommended by Oracle:

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=> 'RICH', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

If we now look at the index statistics:

SQL> select index_name, num_rows, leaf_blocks from dba_indexes where index_name = 'RICH_I';

INDEX_NAME NUM_ROWS LEAF_BLOCKS
---------- -------- -----------
RICH_I           10           1

We notice a couple of important differences. Firstly, the NUM_ROWS value is 10, highlighting that only non-deleted index entries are counted. We also notice that the number of LEAF_BLOCKS is only 1, highlighting that only those index leaf blocks that contain non-deleted index entries are counted. Although there are 20 other leaf blocks within the index structure, these are not counted and considered by the CBO when statistics are calculated using DBMS_STATS.

If we run the following simple little query that effectively selects all remaining rows from the table, we notice the following execution plan:

SQL> select * from rich where id between 1 and 10000;

        ID TEXT
---------- -----
      9991 Bowie
      9992 Bowie
      9993 Bowie
      9994 Bowie
      9995 Bowie
      9996 Bowie
      9997 Bowie
      9998 Bowie
      9999 Bowie
     10000 Bowie

Execution Plan
--------------------------------------------
|Id | Operation                   | Name   |
--------------------------------------------
| 0 | SELECT STATEMENT            |        |
| 1 |  TABLE ACCESS BY INDEX ROWID| RICH   |
|*2 |   INDEX RANGE SCAN          | RICH_I |
--------------------------------------------

The index is actually used to select all the remaining 10 rows, in part because the index related costs are so low.

Let’s see what happens if we were to use the old ANALYZE command to calculate the index statistics:

SQL> analyze index rich_i compute statistics;

Index analyzed.

First, let’s see if the index statistics are any different …

SQL> select index_name, num_rows, leaf_blocks from dba_indexes where index_name = 'RICH_I';

INDEX_NAME NUM_ROWS LEAF_BLOCKS
---------- -------- -----------
RICH_I           10          21

OK, a big big difference here. Where previously DBMS_STATS didn’t include the empty leaf blocks in its statistics, we now notice that using the ANALYZE command does include such empty leaf blocks. The LEAF_BLOCKS value is now 21, not 1 as it was previously. Note though that NUM_ROWS is still 10, so it still doesn’t count the deleted index entries themselves, just the empty leaf blocks.

But leaf blocks is one of the key statistics used by the CBO when calculating the cost of using an index related access path. Could this all make a difference in how our previous query is costed by the CBO ?

SQL> select * from rich where id between 1 and 10000;

        ID TEXT
---------- -----
      9991 Bowie
      9992 Bowie
      9993 Bowie
      9994 Bowie
      9995 Bowie
      9996 Bowie
      9997 Bowie
      9998 Bowie
      9999 Bowie
     10000 Bowie

10 rows selected.

Execution Plan
----------------------------------
| Id  | Operation         | Name |
----------------------------------
|   0 | SELECT STATEMENT  |      |
|*  1 |  TABLE ACCESS FULL| RICH |
----------------------------------

Oh yes indeed. Now the CBO has decided to use a Full Table Scan, in large part because of the additional calculated costs associated with using the index.

Note these tests work the same on all supported versions of Oracle.

So empty leaf blocks can still have a large impact on not only how a query may perform but indeed on how the CBO calculates the associated costs, depending on how the statistics are generated.

Yes, there are differences between the ANALYZE command and DBMS_STATS. This is one of the more subtle differences …

Index Skip Scan – Does Index Column Order Matter Any More ? (Warning Sign) March 10, 2008

Posted by Richard Foote in Index Access Path, Index Skip Scan, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Performance Tuning.

I’ve already written a few posts regarding concatenated indexes and things to consider (and not consider) when deciding the column order of a concatenated (composite) index.

A comment I see from time to time is that the whole question of column order within an index is now somewhat redundant anyways as Oracle since 9i has introduced the Index Skip Scan access path. Prior to 9i, if the leading column of an index wasn’t specified in a predicate, the index was effectively ignored by the CBO. However, if the leading column isn’t referenced now, Oracle can use the index anyways via an Index Skip Scan access path.

So the column order in a concatenated index doesn’t matter that much now, right ?

Well, not quite ….

An Index Skip Scan can only actually be used and considered by the CBO in very specific scenarios and is often an indicator there’s either a missing index or an existing index has the columns in the wrong order.

If the leading column of an index is missing, it basically means the values in subsequently referenced columns in the index can potentially appear anywhere within the index structure as the index entries are sorted primarily on the leading indexed column. So if we have column A with 100,000 distinct values and column B with 100,000 distinct values and an index based on (A,B), all index entries are sorted primarily on column A and within a specific value of column A, sorted by column B. Therefore if we attempt a search on just Column B = 42, these values could potentially appear anywhere within the index structure and so the index can not generally be effectively used.

However, what if the leading column actually contained very few distinct values ? Yes, the subsequent column(s) values could appear anywhere within the index structure BUT if these subsequent columns have relatively high cardinality, once we’ve referenced the required index entries for a specific occurrence of a leading column value, we can ignore all subsequent index row entries with the same leading column value. If the leading column has few distinct values, this means we can potentially “skip” several/many leaf blocks until the leading column value changes again, where we can again “probe” the index looking for the subsequent indexed column values of interest.

So if we have a leading column with few distinct values, we may be able to use the index “relatively” efficiently by probing the index as many times as we have distinct leading column values.

On the other hand, if the leading column has relatively high cardinality then an Index Skip Scan is not a viable option. An index can generally store many hundreds of index entries per index leaf block, depending on the block size and the average size of the index row entries of course. So if the leading column were to change just once on average within the subsequent index leaf block, Oracle would be forced to scan the next index leaf block anyways as it may contain the required index row entry.

For Oracle to be able to “skip” an index leaf block, the leaf block must contain nothing but the same leading column value as last changed in a preceding leaf block. Hopefully, there are many leaf blocks where the leading column value doesn’t change and so hopefully there are many leaf blocks that can potentially be “skipped”.

Therefore, the cardinality of the leading column is crucial for an Index Skip Scan to be viable. In the example above where we had 100,000 distinct values for columns A and B, unless the table is massive, it’s unlikely an Index Skip Scan will be viable regardless of which column is the first column in the index. However, if column B only had 10 distinct values, then an index based on (B,A) may very well be able to use an Index Skip Scan whereas an index on (A,B) would not.
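As a simple hypothetical sketch of this difference (the table and index here are my own illustration, not from a linked demo), consider a leading column with just 10 distinct values in front of a high cardinality column:

SQL> create table ziggy as select mod(rownum,10) b, rownum a, 'ZIGGY' name from dual connect by level <= 100000;

Table created.

SQL> create index ziggy_i on ziggy(b, a);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> select * from ziggy where a = 42;

With only 10 distinct values of B to probe, the CBO may well opt for an INDEX SKIP SCAN of ZIGGY_I for this query, whereas an index on (A, B) could never be skip scanned effectively the other way around, given A’s 100,000 distinct values.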

Note though that an Index Skip Scan must probe the index at least as many times as there are distinct values of the leading column. This will not be as efficient as an index that only requires the one index probe. Therefore although a query with a predicate based on A=42 could use an Index Skip Scan with an index on (B,A) assuming column B had few distinct values, an index on (A,B) or (A) would be more efficient as it would only require the one index probe.

However, if the performance of index (B,A) were “good enough” and/or a search on just A=42 was uncommon, then the index on (B,A) may be quite adequate and an index on (A,B) may be unnecessary. The index on (B,A) would also be able to handle queries based on columns A and B and queries based on just column B (providing the CBO determined the selectivity acceptable, which it might for unevenly distributed rare values of column B).

See this Index Skip Scan demo to see when it all may prove useful.

No, an Index Skip Scan doesn’t mean we don’t need to consider the column order of an index. If anything, it’s something else that needs to be considered and along with index compression, is another reason why low cardinality leading index columns have advantages.

Introduction To Reverse Key Indexes: Part II (Another Myth Bites The Dust) January 16, 2008

Posted by Richard Foote in Index Access Path, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Oracle Myths, Performance Tuning, Reverse Key Indexes.

In Part I, we saw how with Reverse Key Indexes, Oracle will basically take the indexed value, reverse it and then index the reversed value. As a result, data that would ordinarily be logically sorted within an index structure will now be randomly distributed. This therefore negates the use of Reverse Key Indexes with range predicates, with the CBO not even considering them in its costings.

This is all the information we need to dispel a rather bizarre suggestion that has been doing the rounds regarding using Reverse Key Indexes to deal with LIKE predicates that have a leading wildcard. For example, such a suggestion can be found here and within an OTN discussion here.

Basically the suggestion is to:

1) Create a Reverse Key Index on the column to be searched with a LIKE predicate having a leading wildcard (such as % or _).

2) Instead of writing the query as usual, e.g.

SELECT * FROM bowie_table WHERE name LIKE '%BOWIE'

rewrite the query programmatically such as:

SELECT * FROM bowie_table WHERE name LIKE 'EIWOB%';

by reversing the required text and now having the wildcard at the end.

The Reverse Key Index stores the data in a reversed format identical to say ‘EIWOB’, so Oracle should be able to use the Reverse Key Index to efficiently find all rows that start with ‘EIWOB’ as they’re all grouped together within the index structure, right ?

Ummm, wrong.

Ignoring the fact the example in the above link is somewhat meaningless as it uses a leading and a trailing wildcard in both queries and so assuming the first query only has a leading wildcard and the second query only has a trailing wildcard, this suggested use of a Reverse Key Index can not possibly work on any current version of Oracle.

There are a few fundamental problems with this suggestion but in summary not only will it not work but worse, it will actually return the wrong results.

The suggestion is correct insofar as using a normal index to return data with a LIKE statement containing a leading wildcard will indeed negate the use of an index range scan; the CBO doesn’t even consider it. An index hint may push Oracle to use a Full Index Scan, but not an Index Range Scan.

However using a Reverse Index Key to solve this is unfortunately doomed to failure for two very simple reasons.

One, as we have already seen, Oracle also ignores Index Range Scans for Reverse Key Indexes with range predicates and unfortunately, a query such as WHERE name LIKE ‘EIWOB%’ is a range scan. The CBO simply doesn’t consider the Reverse Key Index in its deliberations.

Two, is of course that Oracle has no possible way of knowing that when you say LIKE ‘EIWOB%’, what you really mean is search for all records ending with BOWIE, LIKE ‘%BOWIE’. How can Oracle possibly know this ? If it could use the index (which it can’t) Oracle would only reverse the search string around anyways and use the index to look for indexed entries beginning with ‘BOWIE’ within the index structure, remembering everything is of course stored in reverse within the index.

So Oracle is actually searching for all records starting with ‘EIWOB’, not ending with ‘BOWIE’ which are two entirely different things.

The net result of using this suggested strategy is not good.

1) Oracle ignores the Reverse Key Index anyways as a LIKE ‘EIWOB%’ is a range predicate
2) Oracle therefore performs a Full Table Scan anyways
3) As the query is effectively searching for all records that start with ‘EIWOB’, not as expected all records that end with ‘BOWIE’, the two queries in the example will actually return completely different results

The Reverse Key Indexes Part II Demo shows how this suggested use of a Reverse Key Index is a very very bad idea …

However, if you want to solve the issue of efficiently finding the results of a LIKE ‘%BOWIE’, there are some possible approaches one could take that will use an index and return correct results.

One possible solution (as mentioned in the OTN link listed at the beginning) is to create a Function-Based Index using the REVERSE Function, (Warning: this function is undocumented and unsupported):

CREATE INDEX bowie_reverse_func_i ON bowie(REVERSE(name));

A query such as WHERE REVERSE(name) LIKE ‘EIWOB%’ or better still WHERE REVERSE(name) LIKE REVERSE(‘%BOWIE’) can now both potentially use the index.

The reverse function will reverse the name column (from say ‘DAVID BOWIE’ to ‘EIWOB DIVAD’) and the LIKE range predicate can work with the index as it’s a Function-Based index rather than a Reverse Key Index and it’s not using a LIKE with a leading wildcard. A column containing ‘DAVID BOWIE’, but stored as ‘EIWOB DIVAD’ within the index, can be found efficiently via an index range scan using this Function-Based Index.
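As a quick hedged sketch of what this then enables (plan shape only, assuming such a Function-Based Index is in place on the table being queried):

SQL> select * from bowie_table where REVERSE(name) like 'EIWOB%';

--------------------------------------------------------
| Id| Operation                   | Name                |
--------------------------------------------------------
|  0| SELECT STATEMENT            |                     |
|  1|  TABLE ACCESS BY INDEX ROWID| BOWIE_TABLE         |
|* 2|   INDEX RANGE SCAN          | BOWIE_REVERSE_FUNC_I|
--------------------------------------------------------

Note the predicate now has a trailing wildcard on the reversed string, so an Index Range Scan on the Function-Based Index becomes a legitimate option for the CBO.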

I’ve included an example on effectively using a Function-Based Index with the Reverse Function at the end of the above demo. There’s also a discussion and other alternatives at Gints Plivna’s Blog.

Another alternative is to use an Oracle Text Index, which also has the capability of dealing logically with queries such as %BOWIE% but as they say, that’s a topic for another day.

More on Reverse Key Indexes to come as well.

Introduction To Reverse Key Indexes: Part I January 14, 2008

Posted by Richard Foote in Index Access Path, Oracle Cost Based Optimizer, Oracle Indexes, Reverse Key Indexes.

Following on from the “8 things You May Not Know About Indexes”, #7 regarding Reverse Key Indexes requires a number of posts to do the subject justice. However, Part I will focus on the specific issue related to point #7, namely:

“A REVERSE index can quite happily be used by the CBO to perform index range scans within an execution plan”.

Reverse Key Indexes are designed to resolve a specific issue, that being index block contention. Many indexes in busy database environments with lots of concurrent inserts (and in some scenarios updates and deletes as well) can suffer from index block contention (as highlighted by high levels of “buffer busy waits” and “read by other session” wait events for the index segments). Monotonically increasing indexes, such as Primary Keys generated by a sequence, are especially prone to contention as all inserts need to access the maximum “right-most” leaf block.  This is of particular concern in RAC environments, where this “hot” index block needs to be accessed by all the instances and is being bounced around the various SGAs causing expensive block transfers between instances.

A solution is to make the index a Reverse Key Index:

CREATE INDEX bowie_reverse_idx ON bowie(id) REVERSE;

A Reverse Key Index simply takes the index column values and reverses them before inserting into the index. “Conceptually”, say the next generated ID is 123456, Oracle will reverse it to 654321 before inserting into the index. It will then take the next generated ID 123457 and reverse it to 754321 and insert it into the index and so on. By doing this, inserts are spread across the whole index structure, ensuring the right most block is no longer the only index leaf block being hammered. Index contention is dramatically reduced or eliminated entirely.

Reverse Key Indexes address a specific problem but may in turn introduce a number of problems themselves.

One problem is the simple fact index entries are no longer sorted in their natural order. Value 123456 is no longer adjacent to value 123457 in the index structure, they’re likely to be found in completely different leaf blocks. Therefore a range predicate (such as BETWEEN 123450 and 123460) can no longer be found by a single index probe, Oracle would be forced to search for each specific index value separately as each value in the range is likely to be in differing leaf blocks.

This makes it all just too difficult and troublesome for the Cost Based Optimizer (CBO). As a result, the CBO totally ignores Reverse Key Indexes when processing Range Predicates (eg. BETWEEN, <, >, <=, >=, LIKE etc.). Even innocent looking range predicates such as “BETWEEN 123456 and 123457”, with just the 2 values of interest, are ignored by the CBO. A 10053 trace shows how the CBO totally ignores Reverse Key Indexes and doesn’t even bother to cost such accesses when processing Range Predicate conditions.

In the above example and in scenarios where it’s possible and practical to convert range predicates, use an IN clause instead, e.g. “IN (123456, 123457)” as Oracle will convert this into an OR clause with each equality condition usable with the Reverse Key Index.
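
For example, a minimal sketch using the reverse key index created above (the rewrite comment is the expected behaviour, not verified output):

SELECT * FROM bowie WHERE id IN (123456, 123457);

-- Oracle can rewrite this as id = 123456 OR id = 123457, with each
-- equality condition able to use BOWIE_REVERSE_IDX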

Oracle is also clever enough to identify equality conditions that may be written as a range scan (e.g. BETWEEN 123456 and 123456) and use a Reverse Key Index accordingly.

Hints won’t work either. You may be able to force Oracle into performing a Full Index Scan but it will not perform an Index Range Scan with a Range Predicate.

But doesn’t all this mean I’m wrong when I suggested a Reverse Key Index can be used by the CBO to perform Index Range Scans ?

No :)

I’ve only described how Oracle ignores the use of a Reverse Key Index for Range Predicates, however Index Range Scans are quite possible.

Remember, a Reverse Key Index will reverse all values, and if two index entries happen to have the same value or the same leading column, then all such entries are indeed stored together and are logically adjacent to one another.

For example, if the Reverse Key Index is Non-Unique, Oracle must perform an Index Range Scan, even for equality predicates. I discussed this in some detail when discussing the differences between a Unique and a Non-Unique Index. Even if the column or columns have a PK or a Unique Key Constraint, Oracle will still check the next index entry “just in case” there are indeed duplicate values. Also, although usually used for monotonically increasing index columns, there’s nothing preventing you from creating a Reverse Key Index on a Non-Unique column, and all duplicate values must reside together in the index structure. Therefore an equality search that uses any Non-Unique Reverse Key Index will generate an Index Range Scan access.

But even Unique indexes can be used to perform an Index Range Scan.

If you have a multi-column Unique Index but not all columns are being searched (although the leading column must be known), then again, all index values with the same leading column (or columns) must be stored together in the Reverse Key Index and an Index Range Scan can be performed for such equality conditions.
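
As a hedged little sketch of this last scenario (table, index and values are all hypothetical):

CREATE TABLE bowie_stuff (id NUMBER, code NUMBER);

CREATE UNIQUE INDEX bowie_stuff_i ON bowie_stuff(id, code) REVERSE;

SELECT * FROM bowie_stuff WHERE id = 123456;

-- only the leading column is known: entries sharing the same (reversed) ID
-- are stored together, so an INDEX RANGE SCAN is possible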

For some examples of what I’ve discussed see this Reverse Key Part I Demo.

So yes, a Reverse Key Index can indeed be used by the CBO to perform Index Range Scans.

There are also a number of other issues (and indeed myths) associated with Reverse Key Indexes that will be discussed in the fullness of time.

Introduction to Fake / Virtual / NOSEGMENT Indexes January 11, 2008

Posted by Richard Foote in Fake Indexes, Index Access Path, NOSEGMENT Option, Oracle Cost Based Optimizer, Oracle Indexes, Virtual Indexes.
9 comments

OK, as promised, here’s the answer to index fact #5 you may not have known:

“It’s possible to make the CBO reference and use within an execution plan indexes that don’t in actual fact exist”.

Before I start, please note this feature is not officially documented other than the odd Metalink note and requires the setting of an undocumented parameter to work, so please exercise caution.

Fake Indexes (also known as Virtual or Nosegment Indexes) have been around for a long time, since 8i days. They’re used primarily by Oracle Enterprise Manager and its Tuning Pack which has various wizards that can do “what if” type analysis. One of these is the Index Wizard which can kinda “pretend” to create an index and see what the Cost Based Optimizer might do if such an index really existed.

It’s possible to create these Fake indexes manually by using the NOSEGMENT clause when creating an index:

CREATE INDEX Bowie_idx ON Bowie_Table(Ziggy) NOSEGMENT;

This will populate some (but not many) data dictionary related tables but will not actually create an index segment or consume any actual storage. It’s not maintained in any way by DML operations on the parent table and it can’t be altered or rebuilt as a conventional, “real” index can (it will generate an ORA-08114 error if you try to do so). You can analyze or run dbms_stats over the index but the index is not treated as analyzed as such (as can be seen via a 10053 trace).

It’s also only visible to the CBO if the session has the following parameter set:

ALTER SESSION SET “_use_nosegment_indexes” = true;

The CBO will now consider the index and potentially include it within an execution plan. However, at execution time Oracle of course can’t actually use the index and will revert to the next best alternative.
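
With the above session parameter set, a minimal sketch of how one might see this (assuming Ziggy is a numeric column; the DBMS_XPLAN usage is standard but the plan itself is just the hoped-for outcome):

EXPLAIN PLAN FOR SELECT * FROM bowie_table WHERE ziggy = 42;

SELECT * FROM TABLE(dbms_xplan.display);

-- the plan may now reference BOWIE_IDX, even though no index segment exists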

A Fake index is basically the index you have when you don’t really have an index, in order to see if it could be useful if it really existed.

This Fake Indexes Demo shows how they work and can be used.

1 down, 7 to go … ;)

Introduction To Linguistic Indexes – Part I January 3, 2008

Posted by Richard Foote in Index Access Path, Indexing Tricks, Linguistic Indexes, Oracle Indexes.
9 comments

Characters are sorted by default based on numeric values defined by the default character encoding scheme (known as Binary Sorting). For us Australians, this is fine as we (generally) speak English and the English alphabet is nicely sorted in ascending order by ASCII and EBCDIC standards. However, many other languages are not so fortunate, as the binary sort does not sort the data in many languages’ alphabetic sort order. Oracle has many Globalization Support features to help users of other languages get over these issues (all very interesting and topics for many a blog entry in the future).

However, even us Australians have issues when it comes to “case-insensitive” searches, where data may be stored in many different cases (eg. Ziggy, ZIGGY, ZiGgY, etc.) and we want to return all data that matches a character value, regardless of its case.

The issue of course is that by default, all text searches are case-sensitive. For example a search WHERE name=’ZIGGY’ will only return ‘ZIGGY’ but not ‘Ziggy’ or ‘ZiGgY’ etc.

The standard fix is for the application to convert the data to a consistent case when performing the search. For example a search WHERE UPPER(Name) = ‘ZIGGY’ will return all values of “ZIGGY” regardless of their case but this will negate the use of any standard index on the Name column.

Therefore, a Function-Based index is required, say based on UPPER(Name), to ensure an efficient index access is possible for case insensitive searches.
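
For example, a minimal sketch (table and index names hypothetical):

CREATE INDEX bowie_upper_name_i ON bowie(UPPER(name));

SELECT * FROM bowie WHERE UPPER(name) = 'ZIGGY';

-- the predicate now matches the indexed expression, so an efficient index access is possible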

However, this often requires an additional index to be created and for the application to be explicitly written to use the function on which the function-based index is defined.

Now the best cure for this problem is simply to ensure all data is stored in a consistent case (ZIGGY, ZIGGY, ZIGGY) but this may not always be practical or even desirable.

Another possible solution is the use of a Linguistic Index. This is an index that is created based on a specific case insensitive linguistic language or multilingual option that ensures the index entries are sorted in the linguistic language order, not on the default binary order of the database encoding scheme.

Basic steps are:

1) Create a Linguistic Index, eg.

CREATE INDEX case_search_ling_name_i ON case_search(NLSSORT(name,’NLS_SORT=GENERIC_M_CI’));

2) Set NLS_SORT in the session (or set parameter) to use the required Linguistic sort option , eg.

ALTER SESSION SET NLS_SORT=’GENERIC_M_CI’;

Simply append _CI to the Linguistic sort option to make it Case-Insensitive or _AI to make it Accent-Insensitive.

(Note: if binary ordering is generally adequate, NLS_SORT can simply be set to ‘BINARY_CI’ for Binary Case-Insensitive searches)

3) Set NLS_COMP in the session (or set parameter) to use Linguistic Sorts/Case Insensitive Searches, eg.

ALTER SESSION SET NLS_COMP=’LINGUISTIC’;

A search now based on WHERE name=’ZIGGY’ will automatically perform a case-insensitive search without the need to modify the application to use specific functions.
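
Putting the three steps together, a minimal sketch (the index usage is the expected outcome under the settings above, not verified output):

SELECT * FROM case_search WHERE name = 'ZIGGY';

-- with NLS_COMP=LINGUISTIC and NLS_SORT=GENERIC_M_CI, the comparison is
-- linguistic and can use CASE_SEARCH_LING_NAME_I, returning ZIGGY, Ziggy, ZiGgY etc.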

For a full demo, see Use Linguistic Indexes Demo.

However, before you rush out and start using Linguistic Indexes to possibly simplify the use of case insensitive searches, note there are various disadvantages to Linguistic Indexes, which can somewhat dampen their appeal. These will be covered in Part II of this series.

Differences between Unique and Non-Unique Indexes (Part II) December 21, 2007

Posted by Richard Foote in Index Access Path, Index Internals, Indexing Tricks, Oracle Cost Based Optimizer, Oracle Indexes, Primary Key, Unique Indexes.
22 comments

The most significant difference between a Unique and a Non-Unique index is of course the simple fact that in one index, all index entries MUST be unique and in the other index there can be duplicates of an index entry.

Although an obvious distinction between the two, it’s also a crucial difference as well.

When Oracle uses a Unique Index to scan for a specific value (via an equality predicate on all indexed columns or when policing a constraint), there can only be one of two possible results. Either the value exists, returning at most one row, or the value doesn’t exist, returning 0 rows. That’s it, 1 row or none. The value either exists or it doesn’t.

This fact means Oracle doesn’t have to worry about a whole bunch of things when dealing with Unique indexes during equality or unique checking processes. It doesn’t have to check the next index entry just in case there’s a second or more entries with the same value. It doesn’t have to worry about the potential of having to skip across to the next leaf block if the specific value it reads happens to be the maximum value in the current leaf block. It doesn’t have to worry about pointers to these “adjacent” leaf blocks changing on it due to block splits. It doesn’t have to concern itself with potentially visiting more than the one table data block during the index access operation.

Life is simple, it’s either 1 row or none.

Not so for Non-Unique indexes. With a Non-Unique index, there are no such guarantees. With a Non-Unique index, there are 3 categories of possibilities. An index scan could return 0 rows, it could return 1 row or it could return more than one row. It could potentially need to go and visit more than the current leaf block to return all the matching rows. It could potentially need to go and visit more than one table block.

Life’s not quite so “simple” for a Non-Unique index.

Note also and most importantly that life gets no easier for a Non-Unique index that polices a PK or Unique key constraint.

Even though there’s a PK or Unique constraint on a column, to Oracle, it’s just another Non-Unique index with the same “vague” possibilities. Remember that PK and Unique constraints can be enabled with NOVALIDATE and can be DEFERRABLE, meaning that even with a PK or Unique constraint in place, there’s still the possibility of duplicate index entries in the Non-Unique index.

This means that Oracle has to concern itself with a number of additional overheads, including having to “check” the next index entry, “just in case” it matches the required index value. It has to concern itself even with the possibility of having to visit the next index leaf block, “just in case”.

You will note when Oracle performs an equality search using a Unique Index, Oracle will perform an “INDEX UNIQUE SCAN” because the index entries MUST be unique.

You will note however when Oracle performs an equality search using a Non-Unique index, even if there’s a PK or Unique constraint on the column(s), Oracle will perform an INDEX RANGE SCAN, because it needs to scan multiple index entries “just in case”.
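
A hedged little sketch of the difference (hypothetical table; the plan comments are the expected outcomes, not verified output):

CREATE TABLE ziggy (id NUMBER, name VARCHAR2(10));

CREATE UNIQUE INDEX ziggy_i ON ziggy(id);

SELECT * FROM ziggy WHERE id = 42;
-- expected: INDEX UNIQUE SCAN of ZIGGY_I

DROP INDEX ziggy_i;
CREATE INDEX ziggy_i ON ziggy(id);
ALTER TABLE ziggy ADD CONSTRAINT ziggy_pk PRIMARY KEY(id);

SELECT * FROM ziggy WHERE id = 42;
-- expected: INDEX RANGE SCAN of ZIGGY_I, despite the PK constraint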

So are there any actual implications as a result of any of this ?

Yes.

When Oracle actually reads an index and processes the associated blocks in the buffer cache(s), Oracle uses a number of latches. These latches are used primarily to “protect” memory structures from concurrent activity. Very simplistically, by grabbing a latch, Oracle effectively performs a “lock” on the associated memory structure, performs whatever activity needs to be performed and then releases the latch. These latches get grabbed and released (hopefully) extremely quickly (in the order of 1/10s of ms), but the cost is non-zero.

The issue with latches is that they’re a point of serialisation. If two (or more) processes want a specific latch, one (or more) has to wait. Latches also burn CPU. Only a teensy weeny bit at a time but some CPU nonetheless. They burn CPU while acquiring the latch and, if they fail due to latch contention, while attempting again and again to acquire it. They also burn CPU while performing the specific operation necessary of the latch.

Basically, the more latches, the greater the potential for contention, the greater the potential for latch related wait activity and perhaps most important of all, more CPU is required. In busy systems, there can be massive numbers of latch events and the best way to tune these events is to reduce where possible the number of latches required by the database environment. It’s one of the key reasons we try and reduce LIOs in a database as much as possible, to reduce the latch and CPU load on the system.

Because of the differences highlighted between Unique and Non-Unique indexes, the number and manner of latches required between the two indexes differs. And it differs significantly …

In this little demo, Latch Differences Between Unique and Non-Unique Indexes Demo, we compare the latches required to read an identical table, using a 2 level index. The  differences between the latch overheads of a Unique and a Non-Unique index are most interesting.

When using a Unique Index, Oracle required 3 consistent gets (one for the index root block, one for the leaf block and one for the table block). BUT, each consistent get was a consistent gets – examination, a special type of consistent get which only requires 1 latch (rather than the standard 2 latches).

So that’s a sum of 3 latches.

However, when using a Non-Unique index, Oracle required 4 consistent gets (one for the index root block, one for the leaf block, one for the table block and an additional one to recheck the leaf block for any duplicate index entries). BUT, only the 1 consistent get (for the index root block) was actually the “cheaper” consistent gets – examination; the other 3 were the more costly 2-latch variety.

So that’s a sum of 7 latches.

3 latches for the Unique index and 7 latches for the Non-Unique index.

That’s an increase of 133.3% in latches between the two types of indexes.
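
One way to observe the consistent get types for yourself is via the session statistics (a sketch only; the latch activity itself can be examined via v$latch):

SELECT sn.name, ms.value
FROM v$mystat ms, v$statname sn
WHERE ms.statistic# = sn.statistic#
AND sn.name IN ('consistent gets', 'consistent gets - examination');

-- run before and after the index access of interest and compare the deltas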

Now, the height of the index will change the ratio of latch difference between the two indexes. Also, in a busy system, there could potentially be differences in the types of latches used due to the current state or additional activity in a block.

However, the potential difference in latch requirements between a Unique or Non-Unique index can be very significant. But does a few additional latches here and there really make much of a difference ?

Well, of course it depends. On small scale systems with smaller loads, fewer indexes, fewer users and excess resources, the noticeable differences may be negligible.

However, in larger scale (especially OLTP) environments, a particular index may be accessed 100s or maybe 1000s of times a second. There may be 1000s of tables with 1000s of corresponding PK and Unique constraints policed by 1000s of Unique (or Non-Unique) indexes. It’s therefore not really of question of a few latches here or there. It’s a question of potentially a very significant proportion of overall latch related overheads.

When accessed, Non-Unique indexes could potentially generate double the latch related overheads for equality unique scan or unique checking index activity. Remember, the best way to tune latches and reduce latch contention is to simply reduce the requirement and load for latches.

The overall reduction in CPU and latch related wait activity could be significant between Unique and Non-Unique indexes, because using Non-Unique indexes requires in the order of double the latches for such activities.

Note also this doesn’t require any special parameters to be set or special tuning or monitoring by the DBA. It simply requires using Unique indexes to police PK or Unique constraints when there’s no specific requirement for a Non-Unique index. You then potentially gain a benefit each and every time the index is used for unique scan accesses.

Guess what type of access is extremely common in large scale OLTP environments …

The next time you complain about high CPU consumption or high latch contention and you’ve tuned the application to death, just ask yourself how many Non-Unique indexes are policing your PK or Unique Key constraints …

Local Index Issue With Partitioned PK and Unique Key Constraints December 20, 2007

Posted by Richard Foote in Constraints, Index Access Path, Local Indexes, Oracle Indexes, Partitioning, Performance Tuning, Unique Indexes.
11 comments

Nuno Souto (Noons) also asked a really interesting question on my Differences between Unique and Non-Unique Indexes blog entry (comment 4) that I thought worthy of a separate blog entry to do the answer justice. The question was:

“Isn’t it still the case that unique indexes cannot be locally partitioned unless the partition key is part of the index key? Not sure if 11g removes this. If still so, that would weigh heavily in favour of non-unique indexing for PK on a table potentially requiring local index partitions.”

Simplistically, the answer to the first part is Yes it is still the case, even in 11g and the answer to the second part is No, it wouldn’t weigh heavily in favour of non-unique indexing for PK on a table requiring local index partitions. It wouldn’t actually be a consideration at all.

Let me explain why.

Firstly, there is a really really good reason why Oracle doesn’t allow us to create a Local Unique Index in which the partition key is not part of the index key. It’s called protecting us from ourselves !!

Let’s start by mentioning constraints again.

Remember, the main reason we have indexes policing PK and Unique constraints is so that Oracle can very quickly and efficiently determine whether or not a new value already exists. Do a quick index look-up, is the value there, yes or no, allow the insert (or update), yes or no.

Just imagine for one moment what would happen if Oracle actually allowed us to create a Unique Local index in which the index didn’t include the partitioned column(s).

Let’s say a table is Range Partitioned on column ‘A’ and we try and create a Unique Local index on just column ‘B’. Let’s assume we have (say) 500 table partitions, meaning we must therefore have 500 local index partitions as well. When we insert a new value for our unique index for value B, it will attempt to do so in the corresponding local index partition as governed by the value of A for the new row. However, Oracle can’t just check this one index partition for uniqueness to ensure the value of column B doesn’t already exist; Oracle would need to check all 500 index partitions because it would be possible for our new value of column B to potentially have previously been inserted into any of the other 499 partitions !!

Each and every insert into our partitioned table (partitioned by column A) would therefore require Oracle to check all (say) 500 index partitions each and every time to check for duplicates of column B. Again, it’s important to understand that any given value of column B could potentially be in any of the 500 partitions, IF Oracle allowed us to create a Local Partitioned Index on just column B.

Checking all 500 index partitions looking for a specific value of column B would obviously be impractical, inefficient and totally un-scalable. Therefore Oracle doesn’t allow us to do this. It doesn’t allow us to create a Local index in which the indexed columns don’t include the partitioning columns as well.

This is actually a good thing.

If you want to create a Unique index on a partitioned table, you MUST either add all the partitioned columns and make them part of the LOCAL Unique index (that way each and every insert only has to check the one local partition, as the partition key value is known since it’s part of the index), or you must create it as a GLOBAL index (in which case again, Oracle only has to check the one index structure).
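
A hedged little demo of the rule (hypothetical table; the exact error raised is typically ORA-14039, though don’t quote me on the wording):

CREATE TABLE big_bowie (a NUMBER, b NUMBER)
PARTITION BY RANGE (a)
(PARTITION p1 VALUES LESS THAN (1000),
 PARTITION p2 VALUES LESS THAN (MAXVALUE));

-- fails: the partitioning column A is not part of the unique index key
CREATE UNIQUE INDEX big_bowie_b_i ON big_bowie(b) LOCAL;

-- both of these are allowed
CREATE UNIQUE INDEX big_bowie_ab_i ON big_bowie(a, b) LOCAL;
CREATE UNIQUE INDEX big_bowie_b_gi ON big_bowie(b);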

It actually makes a lot of sense to do this.

Moving onto the second part of the question. Let’s just use a Local Non-Unique index to police our PK constraints then.

Fortunately this isn’t allowed either for exactly the same reasons. You can’t create a Local Non-unique index to police a PK (or Unique) constraint if the Constraint does not also include the partitioned columns. Otherwise again, Oracle would need to check each and every index partition to determine whether the constraint has been violated or not.

If you attempt to use an existing Local Non-Unique index to police a PK or Unique constraint that does not contain the partitioned columns, you will get an error saying Oracle can’t create the (by default Global) index, because the Local Non-Unique index (useless from a constraint-policing point of view) already exists.

Again, if you want to create a Non-Unique index to police a PK or Unique constraint, you must either ensure the constraint includes all the partitioned columns, in which case the index can be Local, or you must use a Global Non-Unique index.

In other words, the rules apply equally to both Unique and Non-Unique indexes.

So it’s not really a case of Oracle not allowing one to create a Local Unique index without including the partitioned columns (although that’s of course true) but really a case of Oracle not allowing a PK or Unique *constraint*  to be policed via *any* Local index (whether Unique or Non-Unique), unless the partitioned columns are also included.

Little demo to illustrate: Local Index Issue With Partitioned PK and Unique Key Constraints

Outlier Values – An Enemy Of The Index December 13, 2007

Posted by Richard Foote in Index Access Path, Indexing Tricks, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Outlier Values.
10 comments

Outlier values are basically values that sit way way outside a column’s normal range of values.

Data can be a funny thing and sometimes there are values that are naturally “exceptional”. However, very commonly, outlier values are used by applications to represent bizarre default values, to avoid confusion with legitimate values. For example, I look after an application that uses the American Date Of Independence as its “default” date.

Usually, these weird outlier values are used to avoid null values, as nulls can be problematic and cannot be indexed (well, actually you can index a null column but we’ll leave that for another blog entry).

However, outlier values, while (maybe) solving one problem, can introduce some very significant problems in return.

Firstly, the CBO “hates” outlier values as they can potentially totally screw up the CBO’s selectivity calculations. The selectivity of a range scan is basically calculated by the CBO to be the number of values in the range of interest divided by the full range of possible values (i.e. the max value minus the min value). Therefore if this calculation is invalidated by a massive and disproportionate “hole” in the full range of possible values, the CBO can get things horribly wrong.
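
For illustration (numbers hypothetical): if a date column legitimately spans the 8 years from 2000 to 2008, a query on one year’s worth of data is estimated at roughly 1/8 of the rows. Introduce a single “default” date back in 1776, however, and the full range suddenly spans well over 200 years, so the very same one-year query is now estimated at roughly 1/230 of the rows, a massive cardinality underestimate that can easily result in an inappropriate execution plan.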

See here for a simple demonstration:  Outlier Selectivity Problem

Additionally, indexes “hate” outlier values as they prevent Oracle from using 90-10 block splits to keep indexes nice and compact, forcing 50-50 block splits instead. Basically, a 90-10 block split is considered if and only if the index entry to be inserted is equal to or greater than the current maximum value. An outlier value that is also the maximum value usually means subsequent monotonically increasing values (such as sequences, dates, etc.) never actually insert a new maximum value. Therefore, not only do such indexes perform 50-50 splits, but the 50% of free space is never used, as all new values are almost, but not quite, maximum values.
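
One way to see the resulting space wastage is via INDEX_STATS (a sketch; index name hypothetical, and note VALIDATE STRUCTURE locks the table while it runs, so beware on busy systems):

ANALYZE INDEX bowie_date_i VALIDATE STRUCTURE;

SELECT lf_rows, lf_blks, pct_used FROM index_stats;

-- with an outlier-induced 50-50 split pattern, PCT_USED typically hovers around 50%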

Little demo to highlight this problem: Outlier Index Space Utilisation Problem 

In summary, avoid outlier values if at all possible.  They generally cause more problems than they solve !!

Invisible Indexes December 11, 2007

Posted by Richard Foote in 11g, Index Access Path, Invisible Indexes, Oracle Cost Based Optimizer, Oracle Indexes, Performance Tuning.
14 comments

New in 11g are “Invisible Indexes”, which are basically indexes that exist and are maintained by Oracle but are “invisible” to the CBO. Specific sessions can be set to see these invisible indexes as necessary.

Potentially useful if one has a problematic (and very large) index causing performance issues that you want to make invisible until the specific issue is addressed, without the expense of having to drop and later recreate the index. Also useful if you want to introduce a new index but want it to be invisible until it’s been given a workout first in a specific “test” session.
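
A minimal sketch of the 11g syntax involved (index and table names hypothetical):

CREATE INDEX bowie_i ON bowie(name) INVISIBLE;

ALTER INDEX bowie_i VISIBLE;
ALTER INDEX bowie_i INVISIBLE;

-- allow just this session to see (and cost) invisible indexes
ALTER SESSION SET optimizer_use_invisible_indexes = true;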

Here’s a bit of a demo: Invisible Indexes
