Why Is My Index Not Being Used Quiz (Brain Damage) September 28, 2011

Posted by Richard Foote in CBO, Oracle Indexes, Quiz.
9 comments

This one is a little different as it comes in the form of a demo (and about 1 minute to read) so you have to work a little 🙂
 
I create a table, index and sequence:
 

SQL> create table bowie (id number, name varchar2(30)) tablespace user_data;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created.
 
SQL> create sequence bowie_id order;
 
Sequence created.

 
I then create a little procedure that simply adds 100,000 rows to the table:
 

SQL> create or replace procedure add_bowie_rows as
  2  begin
  3  for i in 1..100000 loop
  4  insert into bowie values (bowie_id.nextval, 'DAVID BOWIE');
  5  commit;
  6  end loop;
  7  end;
  8  /
 
Procedure created.

 
I then have 3 different sessions that run the procedure simultaneously (e.g. exec add_bowie_rows).
 
I collect 100% accurate stats:
 

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', estimate_percent=>null, cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

 
I run a query that selects about 10,000 rows (out of the 300,000 the table now has):
 

SQL> select * from bowie where id between 42000 and 52000;
 
10001 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 131801496
 
------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)|Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            | 10002 |   166K|   125   (1)|00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE      | 10002 |   166K|   125   (1)|00:00:02 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_ID_I | 10002 |       |    51   (0)|00:00:01 |
------------------------------------------------------------------------------------------
 

Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ID">=42000 AND "ID"<=52000)
 

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        129  consistent gets
         44  physical reads
          0  redo size
     100270  bytes sent via SQL*Net to client
        264  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      10001  rows processed

 
 
Oracle uses the index and all is well.
 
OK, in exactly the same database, I repeat the demo again with the same 3 sessions populating the data in exactly the same way using exactly the same procedure with 100% accurate statistics, but there’s just one tiny little difference in the setup script:
 

SQL> drop table bowie;
 
Table dropped.
 
SQL> drop sequence bowie_id;
 
Sequence dropped.
 
SQL> create table bowie (id number, name varchar2(30)) tablespace user_data1;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created.
 
SQL> create sequence bowie_id order;
 
Sequence created.

 
 
I next populate the table in 3 different sessions concurrently and collect stats exactly as before…

However, now when I run my query:
 
 

SQL> select * from bowie where id between 42000 and 52000;
 
10001 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 4157583811
 
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       | 10002 |   166K|   285   (4)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| BOWIE | 10002 |   166K|   285   (4)| 00:00:04 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("ID"<=52000 AND "ID">=42000)
 

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        998  consistent gets
        693  physical reads
          0  redo size
     100270  bytes sent via SQL*Net to client
        264  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      10001  rows processed

 
It performs a FTS ??? Note the cardinality estimate of 10002 is practically spot on and identical to before, when the index was used by the CBO.

Instead of selecting 10,000 rows, if I now select say just 500 rows:
 

SQL> select * from bowie where id between 42000 and 42499;
 
500 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 4157583811
 
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |   501 |  8517 |   284   (4)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| BOWIE |   501 |  8517 |   284   (4)| 00:00:04 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("ID"<=42499 AND "ID">=42000)
 

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        997  consistent gets
          0  physical reads
          0  redo size
       5263  bytes sent via SQL*Net to client
        248  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        500  rows processed

 
It still performs a FTS !!
 
Only when I get down to a really low number of rows, for example 100 rows:
 

SQL> select * from bowie where id between 42000 and 42099;
 
100 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 131801496
 
------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)|Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |   101 |  1717 |    95   (0)|00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE      |   101 |  1717 |    95   (0)|00:00:02 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_ID_I |   101 |       |     3   (0)|00:00:01 |
------------------------------------------------------------------------------------------
 

Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("ID">=42000 AND "ID"<=42099)
 

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        103  consistent gets
          0  physical reads
          0  redo size
       1266  bytes sent via SQL*Net to client
        248  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        100  rows processed

 
… will Oracle use the index.
  
QUESTION: Why ? What is the tiny little difference that has made such a huge difference in behaviour ???
 
Now there are a couple of possible answers (at least) that come to mind …
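
If you want to dig into it yourself before the solution gets posted, comparing the index statistics between the two runs is a good place to start (a sketch only, no answers given away):

SQL> select index_name, leaf_blocks, clustering_factor from user_indexes where index_name = 'BOWIE_ID_I';

Run it after each setup and see what (if anything) differs.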

Enjoy !!

Rebuilding Indexes and the Clustering Factor Solution (Move On) September 25, 2011

Posted by Richard Foote in Clustering Factor, Index Rebuild, Indexing Myth, Oracle Indexes, Quiz, Reverse Key Indexes.
3 comments

Excellent !! This quiz created quite a bit of debate and it was nice to sit back and read some interesting discussions.

The Clustering Factor (CF) is basically a measurement of how well aligned the data in the underlying table is in relation to the index, and is the number of table-related I/Os required to read the entire table via a full index scan. A value of the CF approaching the number of blocks in the table suggests the data is reasonably well sorted/clustered in relation to the index (although the CF could in theory be somewhat less than the blocks in the table of course). A value of the CF approaching the number of index entries suggests the data is not particularly well sorted/clustered in relation to the index and means we may need to re-visit the same table block numerous times to get the required data, thus decreasing the efficiency of using the index, increasing the costs associated with using the index and therefore decreasing the likelihood of the CBO using the index.
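
If you want to see where an index sits on this spectrum, comparing CLUSTERING_FACTOR against both the table blocks and the index entries is straightforward. A quick sketch (assuming fresh statistics and reusing the BOWIE names from earlier demos):

SQL> select t.blocks, i.num_rows, i.clustering_factor
  2  from user_tables t, user_indexes i
  3  where t.table_name = i.table_name
  4  and i.index_name = 'BOWIE_ID_I';

A CF down near BLOCKS is about as good as it gets; a CF up near NUM_ROWS is about as bad.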

So for an index rebuild to actually have an impact on the CF of an index, either the order of the rows in the table needs to change or the order of the index entries needs to change.

However, when we typically rebuild an index, it has no impact at all on the table and so can’t possibly change the order of the rows there. Additionally, no matter how fragmented or inefficient the index structure might be, an index rebuild doesn’t change the order of the index entries either as they’re always sorted within the index in the order of the indexed columns.

Therefore an index rebuild typically has no impact at all on the CF of an index, no matter the current value of the CF.

However, there is an exception to this rule.

If we rebuild the index and change it from a NOREVERSE index to a REVERSE index, we now do change the order of the index. Significantly so, as the index entries are now in the order of the index column values when reversed. Therefore this can in turn significantly change the CF of an index.

Conversely, if we rebuild an index and change it from REVERSE to NOREVERSE, we likewise significantly change the order of the index entries and hence the value of the CF.
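
In sketch form (assuming a BOWIE table with a suitable BOWIE_ID_I index, with statistics gathered before and after):

SQL> select clustering_factor from user_indexes where index_name = 'BOWIE_ID_I';

SQL> alter index bowie_id_i rebuild reverse;

SQL> exec dbms_stats.gather_index_stats(ownname=>null, indname=>'BOWIE_ID_I');

SQL> select clustering_factor from user_indexes where index_name = 'BOWIE_ID_I';

With the indexed values effectively scrambled by the byte reversal, expect the second CF to head up towards the number of rows in the table.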

For a nice little demo, see David Aldridge’s comment or my previous discussion on Reverse Key Indexes.

Of course, it’s always nice to see new ideas and something I hadn’t considered was Gary Myer’s comment regarding changing the logic behind a Function-Based Index prior to a rebuild …

So the moral of this story is that no matter how poorly fragmented the index, how high or low the current CF of an index might be, rebuilding an index in order to improve the CF is a futile exercise and will change less than diddly-squat, except in the above mentioned special circumstances.

Now, back to another hearing of Pink Floyd’s masterpiece “The Dark Side of the Moon” in all its surround sound glory 🙂

Rebuilding Indexes and the Clustering Factor Quiz (One Of The Few) September 20, 2011

Posted by Richard Foote in Clustering Factor, Index Rebuild, Oracle Indexes, Quiz.
35 comments

Today’s question has been prompted by various recent comments regarding the Clustering Factor (CF) of an index and how changing the CF requires a reorg of the underlying table.

It used to be quite a common myth that if the CF of an index was greater than “X”, or based on some nonsensical formula the CF was greater than “Y”, then rebuilding the index somehow made everything better again. I believe it’s now much more commonly accepted that rebuilding an index does not change the CF of an index in any way. By rebuilding an index, the actual order of the index entries remains unchanged, as does the order of the rows within the table, and so the resultant CF can’t possibly change.

Pick any index, no matter how bad the CF or how badly fragmented the index or table: gather fresh statistics, rebuild the index (gathering fresh statistics again afterwards if the rebuild doesn’t compute statistics itself), and the CF will remain the same.

However, there are nearly always exceptions …

Give an example of when rebuilding an index will significantly change the actual CF of an index, assuming 100% accurate statistics are gathered before/after the index rebuild on a table with no DML.

There are actually two such examples that spring to mind 🙂

Big Tables, Sorts and Indexes Solution (Right On Mother) September 19, 2011

Posted by Richard Foote in CBO, Oracle Indexes, Quiz.
42 comments

My, what a clever lot we have reading this blog 🙂

Indeed, most of the work has already been done for me as we already have a couple of excellent demos in the list of comments.

The answer is Yes, the CBO will consider using the index and Yes, there may be a number of scenarios when the index will actually be selected by the CBO as it’s the genuinely cheaper option.

At the end of the day, it comes down to whether performing a Full Index Scan, one little block at a time but with the data coming out pre-sorted, is cheaper than performing a multiblock Full Table Scan AND a subsequent sort of the data. It of course depends on various factors such as:

  • The most crucial of all, the Clustering Factor of the index. If the data is very well clustered according to the index, then the cost of reading the entire table via the index can be vastly reduced and the index access path becomes viable. A poor (or average) CF, and using the index is just too expensive. Radoslav Golian has an excellent example in the comments on when an index with an excellent CF is chosen by the CBO.
  • The cost of the sort. Sorts are really expensive, especially if Oracle is forced to go to disk and even more so if it has to perform a multi pass sort, so the more costly the sort, the more likely the index is the cheaper option.

An important point for the index to be considered is that it must have a NOT NULL constraint on the column(s), else the index is ignored as the CBO can’t guarantee all rows can be referenced within the index.
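
As a quick sketch of how you might test this yourself (using the hypothetical REALLY_BIG_TABLE from the quiz and assuming accurate statistics):

SQL> alter table really_big_table modify id not null;

SQL> select * from really_big_table order by id;

With the NOT NULL constraint in place, an INDEX FULL SCAN becomes a legal alternative the CBO can cost against the Full Table Scan plus sort; without it, the FTS is the only game in town.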

The moral of this story is this. There is no selectivity at which an index stops being considered by the CBO. An index can potentially select 100% of all rows, if doing so is the cheapest option available to the CBO.

I’ve discussed using an index to select 100% of all data before if anyone is interested.

New question coming soon !!

Big Tables, Sorts and Indexes Quiz (Candidate) September 14, 2011

Posted by Richard Foote in Oracle Indexes, Quiz.
27 comments

Following on from the previous quiz on Descending indexes. Simple scenario.

You have a huge table, 10 Million plus rows.

You have an index on a column with a NOT NULL constraint but there are various other columns in the table not included in the index.

You want to select all columns and all rows from the table so you don’t even have a WHERE condition, but you want the data returned in the order of your indexed column, e.g.

SELECT * FROM really_big_table ORDER BY id;

So you have an index on the ID column.

Question: Is it possible for the CBO to use the index or for the CBO to even consider the index in the first place ? If so, how so ? If not, why not ?

Enjoy 🙂

 

UPDATE:

OK, all statistics, including system statistics need to be “accurate” and give the CBO correct details on the scenario (so no cheating with manipulating statistics).

No hints allowed and it’s the ALL_ROWS optimizer_mode, as we want the best solution to retrieve every row from the 10M+ table. Again note, there is no WHERE condition in the query.

If it helps, the index can be unique and there may be as little as just 1 additional column in the table.

Descending Indexes Solution (Yellow Submarine) September 9, 2011

Posted by Richard Foote in Descending Indexes, Oracle Indexes, Quiz.
28 comments

Answers to the quiz on Descending Indexes and hopefully some useful dangers and tips on using them.

The answer to the first question is Yes, a “normal” Ascending Index can be used by the CBO to retrieve data in descending order and hence possibly avoid a sort. The reason being that leaf blocks in the index structure have effectively two pointers, one that points to the next leaf block in the index structure (except for the very last leaf block) and one that points to the previous block (except for the first leaf block). So the data in an index can be retrieved in either order.

The answer to the second question is Yes as well: a Descending Index can likewise be used to retrieve data in either logical order, as again all the leaf blocks have the two sets of pointers.

That being the case, if an index consists of just the one column, does it therefore make any difference which index one creates, ascending or descending ?

Hence my last question. The answer is maybe, as there are a number of fundamental differences in how each type of index is implemented.

Naturally, a little demo to illustrate 🙂

Let’s begin by creating a simple little table and a normal B-Tree index on an ID column, which has monotonically increasing values:

SQL> create table bowie (id number, name varchar2(30));

Table created.

SQL> create index bowie_id_i on bowie(id);

Index created.

SQL> insert into bowie select rownum, 'DAVID BOWIE' from dual connect by level <= 100000;

100000 rows created.

SQL> commit;

Commit complete.

Note the index is indeed a “Normal” B-Tree index and because the indexed values monotonically increase, all index leaf block splits are 90-10 splits, resulting in a perfectly compact, 100% utilised index structure:

SQL> select index_type from dba_indexes where index_name = 'BOWIE_ID_I';

INDEX_TYPE
---------------------------
NORMAL

SQL> analyze index bowie_id_i validate structure;

Index analyzed.

SQL> select lf_rows, lf_blks, pct_used from index_stats;

   LF_ROWS    LF_BLKS   PCT_USED
---------- ---------- ----------
    100000        199        100

Let’s now run a query to ensure the index is indeed used and that the sort can indeed be avoided. Note I’ve not actually collected any CBO statistics at this stage but I’m definitely using the CBO:

SQL> alter system set optimizer_mode='ALL_ROWS' scope=both;

System altered.

SQL> select * from bowie where id between 42 and 84 order by id desc;

43 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2771731789

-------------------------------------------------------------------------------------------
| Id  | Operation                    | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |            |    43 |  1290 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | BOWIE      |    43 |  1290 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN DESCENDING| BOWIE_ID_I |    43 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID">=42 AND "ID"<=84)
       filter("ID">=42 AND "ID"<=84)

Note
-----
   - dynamic sampling used for this statement
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          9  consistent gets
          0  physical reads
          0  redo size
       1092  bytes sent via SQL*Net to client
        418  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         43  rows processed

So the execution plan clearly shows the use of the index via an index range scan descending and that there are indeed no sort operations performed. There were no statistics gathered, so the CBO performed some dynamic sampling to get a feel for the data.

Let’s now change the optimizer_mode to CHOOSE, a common default setting (especially pre 11g, this example is run on a 10.2.0.4 database) and re-run the query:

SQL> select * from bowie where id between 42 and 84 order by id desc;

43 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3062669298

---------------------------------------------------
| Id  | Operation                    | Name       |
---------------------------------------------------
|   0 | SELECT STATEMENT             |            |
|   1 |  SORT ORDER BY               |            |
|   2 |   TABLE ACCESS BY INDEX ROWID| BOWIE      |
|*  3 |    INDEX RANGE SCAN          | BOWIE_ID_I |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ID">=42 AND "ID"<=84)

Note
-----
   - rule based optimizer used (consider using cbo)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       1092  bytes sent via SQL*Net to client
        418  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         43  rows processed

No statistics on the table now means the Rule Based Optimizer kicks in and although a sort operation is performed (as there’s no descending scan), Oracle at least used the index.

OK, let’s now run the exact same sequence of events, but this time using a Descending Index.

SQL> drop table bowie;

Table dropped.

SQL> create table bowie (id number, name varchar2(30));

Table created.

SQL> create index bowie_id_i on bowie(id desc);

Index created.

SQL> insert into bowie select rownum, 'DAVID BOWIE' from dual connect by level <= 100000;

100000 rows created.

SQL> commit;

Commit complete.

So it’s the exact same table and set of data. Let’s now look at the type of index created:

SQL> select index_type from dba_indexes where index_name = 'BOWIE_ID_I';

INDEX_TYPE
---------------------------
FUNCTION-BASED NORMAL

OK, Difference Number 1. A Descending Index is no ordinary “Normal” index, but is implemented as a  “Function-Based Normal” index instead. This means there’ll be a new hidden virtual column created behind the scenes and that the Rule Based Optimizer is going to have an issue here as it can’t cope with Function-based Indexes.
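
You can see this hidden virtual column for yourself (a quick sketch, assuming the index just created):

SQL> select column_name, column_position, descend from user_ind_columns where index_name = 'BOWIE_ID_I';

Expect a system-generated column name along the lines of SYS_NC00003$ with DESCEND reported as DESC, rather than the ID column itself.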

Let’s look at some Index_Stats:

SQL> analyze index bowie_id_i validate structure;

Index analyzed.
SQL> select lf_rows, lf_blks, pct_used from index_stats;

   LF_ROWS    LF_BLKS   PCT_USED
---------- ---------- ----------
    100000        426         50

Difference Number 2: This index is approximately double the size of the previous index and only half as efficient with its storage. Why ? Because as the data is now inserted in reverse logical order, the last index leaf block no longer receives the largest current index value and so 90-10 splits are not performed. As only 50-50 splits are performed, the index structure is left with leaf blocks that are 50% empty, space which cannot be reused. Unfortunately, a possible candidate for periodic index rebuilds …
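
In sketch form (an aside, not actually run as part of this demo), a rebuild would compact it back again, reusing the INDEX_STATS check from above, at least until ongoing inserts hollow it out once more via 50-50 splits:

SQL> alter index bowie_id_i rebuild;

SQL> analyze index bowie_id_i validate structure;

SQL> select lf_rows, lf_blks, pct_used from index_stats;

Expect LF_BLKS to roughly halve and PCT_USED to head back up towards 90 (with the default PCTFREE of 10).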

Let’s now re-run the query using the CBO:

SQL> select * from bowie where id between 42 and 84 order by id desc;

43 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3472402785

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)|Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |   208 |  6240 |     1   (0)|00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE      |   208 |  6240 |     1   (0)|00:00:01 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_ID_I |     1 |       |     1   (0)|00:00:01 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(SYS_OP_DESCEND("ID")>=HEXTORAW('3EAAFF')  AND
              SYS_OP_DESCEND("ID")<=HEXTORAW('3ED4FF') )
       filter(SYS_OP_UNDESCEND(SYS_OP_DESCEND("ID"))>=42 AND
              SYS_OP_UNDESCEND(SYS_OP_DESCEND("ID"))<=84)

Note
-----
   - dynamic sampling used for this statement
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          9  consistent gets
          0  physical reads
          0  redo size
       1092  bytes sent via SQL*Net to client
        418  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         43  rows processed

Difference Number 3. Although the same execution plan with the same number of consistent gets is performed, the cardinality estimates are not as accurate and the SYS_OP_DESCEND and SYS_OP_UNDESCEND functions are used as access/filter conditions as they’re the functions implemented in the function-based index.

If we run the same query using the Rule Based Optimizer (remember, we “forgot” to collect statistics on the table):

SQL> alter system set optimizer_mode='CHOOSE' scope=both;

System altered.

SQL> select * from bowie where id between 42 and 84 order by id desc;

43 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2027917145

------------------------------------
| Id  | Operation          | Name  |
------------------------------------
|   0 | SELECT STATEMENT   |       |
|   1 |  SORT ORDER BY     |       |
|*  2 |   TABLE ACCESS FULL| BOWIE |
------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("ID"<=84 AND "ID">=42)

Note
-----
   - rule based optimizer used (consider using cbo)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        309  consistent gets
          0  physical reads
          0  redo size
       1092  bytes sent via SQL*Net to client
        418  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         43  rows processed

Difference Number 4. The Rule based Optimizer does not support Function-Based Indexes and so the index is now completely ignored. Oracle has no choice here but to perform the much more expensive Full Table Scan, when previously the ascending index was used.

A Descending Index can potentially be useful in a concatenated, multi-column index, in which the columns could be ordered in a combination of ascending/descending order that could in turn return the data in a required specific order, thereby negating the need for a potentially expensive sort operation.
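
For example, a minimal sketch (a hypothetical BOWIE_NAME_ID_I index, not part of the demo above):

SQL> create index bowie_name_id_i on bowie(name, id desc);

SQL> select * from bowie where name = 'DAVID BOWIE' order by name, id desc;

Here a range scan of the concatenated index can return the rows already in the required mixed order, with no sort operation necessary.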

However, with a single column index, one would need to question the need for making such an index descending …

Having fun 🙂 Enjoy your weekend !!

Descending Indexes Quiz (Up On The Ladder) September 8, 2011

Posted by Richard Foote in Descending Indexes, Oracle Indexes, Quiz.
13 comments

OK, you won’t find the answer to these questions on my blog, so using my search facility won’t be of any help 🙂

Actually, it’s quite an easy one this, honest 😉

If you have a query such as:

SELECT * FROM bowie WHERE id BETWEEN 42 and 84 ORDER BY id DESC;

1) Can a default B-Tree index on the single ID column that stores data in Ascending order be used by the CBO to retrieve the data automatically in the required Descending order, without the need for a sort operation ?

2) Can a Descending B-Tree index on the ID column be used by the CBO to retrieve the data automatically in Ascending order, without the need for a sort operation ?

3) Depending on your answers above, what are the differences (if any) between the implementation of an Ascending and Descending index ?

Enjoy !!

Best Method To Select One Row From Small Table – Solution (Revolution 1) September 7, 2011

Posted by Richard Foote in Oracle Indexes, Small Indexes.
4 comments

OK, time for some answers, although of course regular readers of this blog will already know the answer 🙂

When selecting one row from the small table as in the quiz in my previous post, the correct order is as follows:

1) PK access of an Index Organized Table.

This option requires just the 1 consistent get (as the IOT consists of just 1 block) and this consistent get is a “cheaper” consistent get examination, which in turn only requires the 1 latch. So 1 consistent get and 1 latch. For more info on this, see: https://richardfoote.wordpress.com/2009/05/27/indexes-and-small-tables-part-vii-cluster-one/.

2) Use of Unique Index With a Heap Table.

This option only requires 2 consistent gets (one to read the index leaf block and one to read the table block). There can only be a maximum of one row when a single equality predicate is specified as the index is Unique, which means that 2 consistent gets is the maximum necessary (there may only be the 1 consistent get if there are either no rows to be returned or if all required columns can be found in the index). Additionally, because the index is Unique, both consistent gets are the cheaper, 1 latch consistent get examinations. So that’s 2 consistent gets and 2 latches. For more information, see: https://richardfoote.wordpress.com/2009/05/13/indexes-and-small-tables-part-v-its-no-game/.

3) Use of Non-Unique Index With a Heap Table

This option requires at most 3 consistent gets (one to read the leaf block, one to access the table and possibly one more to perform an additional fetch operation and checking the leaf block again in case of more rows, possibly necessary as the index is Non-Unique and there could be more than one row that matches an equality predicate). Unfortunately, as the index is Non-Unique, the consistent gets are the full-blown consistent gets which requires the buffer block to be pinned/unpinned via 2 latch calls. If the last value is specified, then the additional fetch may be unnecessary and if the index contains all the necessary columns, then just 1 consistent get would be necessary. But in the example provided, we’re looking at typically 3 consistent gets and 6 latches. For more information, see: https://richardfoote.wordpress.com/2009/05/05/indexes-and-small-tables-part-iv-treefingers/.

4) Use of a Full Table Scan

This option requires a minimum of 4 consistent gets (as Oracle needs to not only access the one block containing the 42 rows, but additionally the table segment header multiple times in order to determine the extent map, segment HWM etc.). These additional consistent gets are not generally an issue as these overheads are negligible for a typical FTS of a typical table. But in this example, with a tiny table, these consistent gets make the difference. Note also that all 4 consistent gets require the block to be pinned/unpinned and so require 2 latch gets each. So that’s 4 consistent gets and 8 latches, minimum, even if all the rows can fit in 1 table block. On its own, no big deal, but if this small lookup table is accessed 10,000 times a minute, that’s potentially a lot of extra CPU and contention when performing a FTS over the above options. For more info, see: https://richardfoote.wordpress.com/2009/04/16/indexes-on-small-tables-part-i-one-of-the-few/ and the other articles on Indexes on Small Tables.
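
If you want to see the consistent gets for yourself, a minimal sketch along these lines (hypothetical BOWIE_SMALL names, 42 rows in one block):

SQL> create table bowie_small (id number constraint bowie_small_pk primary key, name varchar2(30)) organization index;

SQL> insert into bowie_small select rownum, 'DAVID BOWIE' from dual connect by level <= 42;

SQL> commit;

SQL> set autotrace traceonly statistics

SQL> select * from bowie_small where id = 42;

Repeat with a heap table and a unique index, then a non-unique index, then no index at all, and compare the consistent gets reported for each.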

To those that got it right, well done. To those that didn’t, hopefully you’ve learnt something useful.

The moral of the story: no table is too small to potentially benefit from an index, even if the table is only one block in size 🙂

New question and discussion tomorrow.

Best Method To Select One Row From Small Table Quiz (Each Small Candle) September 5, 2011

Posted by Richard Foote in Oracle Indexes, Quiz, Small Indexes.
22 comments

Assume you have a tiny little table with just 42 rows (naturally) that all fit in one table block. Order the following options in order of “efficiency” (most efficient option first) when accessing just one of these rows:

1) Full Table Scan of Heap Table

2) PK access of an Index Organised Table

3) Index access of Heap Table via a Unique Index

4) Index access of Heap Table via a Non-Unique Index

If you think any of the options are the same, then you can order them as follows (example only):

1) Option 1

2) Option 2

2) Option 3

4) Option 4

Answer in the next few days …

UPDATE: just to clarify based on comments already made.

Yes, any index must visit the table as there are required columns within the table that are not stored in the index (this is implied by the above options). The table has only ever contained 42 rows and there are no additional table blocks below the table HWM (not that this really makes a difference to the answer). To keep it simple, the column being queried has a NOT NULL constraint (although it doesn’t really matter, except for when you want to put a PK constraint on it such as with the IOT option).

The Dark Side Of The Moon Immersion Box Set September 4, 2011

Posted by Richard Foote in Pink Floyd, Richard's Musings, The Dark Side of the Moon.
14 comments

It’s Father’s Day here today in Australia and because I’ve naturally been a really really good Dad all year, my family have given me a real treat for my present this year, the Immersion Box Set of the Pink Floyd classic, The Dark Side Of The Moon (although unfortunately, I have to wait a couple of weeks for it to be released before I can get my hands on it).

As the days of actually having a physical format for music (be it record or tape or CD or whatever) to hold and hug are fast disappearing in this age of digital downloads, Pink Floyd have decided to re-release their back catalogue in physical format one last time with some style.

All their albums are being re-released in new digitally remastered formats, but three of their very best albums (The Dark Side Of The Moon in late September, Wish You Were Here in November and The Wall in February 2012) get the special treatment with the release of Immersion Box Sets.

So what do you get in TDSOTM Immersion Box Set ?

The answer is heaps !!

In a very large box (naturally), you get:

Disc 1, a CD containing a digitally remastered version of the album

Disc 2, a CD containing a previously unreleased live concert at Wembley dating back to 1974.

Disc 3, an audio DVD containing various 5.1 Surround Sound and Quadraphonic (as originally released in 1973) mixes of the album

Disc 4, a visual DVD containing various live performances, documentaries and all the original concert screen films (makes me want to go out and buy a circular TV !!)

Disc 5, a Blu-Ray containing both audio and video highlights of what I’ve listed already

Disc 6, a CD containing previously unreleased material, including demos and the various live sequences that didn’t quite make it onto the final album.

You also get a whole bunch of other goodies, including colour booklets, a photo book, Storm Thorgerson artwork and cards, a set of 9 coasters, a scarf (just in time for our Canberra summer), Pink Floyd marbles (of course) and replicas of various memorabilia.

I can’t wait !!

I’m almost pitying the neighbours already as I fully plan to sit in the middle of my surround sound system and play all this as it was intended. REALLY REALLY LOUD !!

If you want more details or you’re interested in buying this as well, simply click on the picture of the album artwork above.

This will definitely make my Recommendations Page 🙂

METHOD_OPT=> SIZE AUTO Quiz Solution (The Trickster) September 1, 2011

Posted by Richard Foote in CBO, Histograms, Oracle Indexes, Oracle Statistics.
16 comments

I was going to leave it for a few days but there have already been so many comments and discussions on all this, I thought I better write something up. In case anyone was wondering, yes I probably am driving my colleagues at work mad with my “Question of the Day” !!

Unfortunately, some might be disappointed at both Oracle and myself 🙂

Yes, I did kinda set things up to trick the unwary and yes, perhaps the answer isn’t what many are expecting.

The answer to my previous question, of which column is going to have a histogram when using the METHOD_OPT SIZE AUTO option, is in fact Column 2. Well done to everyone who got it right.

Why ?

The simplest answer is because it’s the only column of the three that has 254 or less distinct values.

Here’s the key point. When using METHOD_OPT SIZE AUTO, every column with 254 or less distinct values that has been referenced within a predicate, will have a Frequency-based histogram. Each and every one of them, regardless of whether the data is actually skewed or not. So Column 2 with only 254 distinct values AND having previously been referenced in a predicate was guaranteed to have a histogram.

If a column has more than 254 distinct values, whether it then has a Height-Based histogram depends on how the data is skewed. If the data is perfectly evenly distributed, then it won’t have a histogram. Column 1, having sequence-based unique values, will not meet the criteria and so will not have a histogram.

Column 3 is interesting. Having inserted the outlier value, it now has 255 distinct values and so no longer qualifies for an automatic frequency-based histogram. However, if all its values are evenly distributed, then it won’t qualify for a height-based histogram either, and Column 3 has just the one outlier value; all its other values are evenly distributed. Unfortunately, Oracle doesn’t pick up on rare outlier values (even if you collect 100% statistics and it’s one of the low/high points of the column) and so will not generate a height-based histogram.

The only column that qualifies is Column 2.

A demo to illustrate. First, let’s create and populate our table:

SQL> create table bowie (id number, code1 number, code2 number);

Table created.

SQL> insert into bowie select rownum, mod(rownum,254), mod(rownum,254) from dual  connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

Notice I’m using a MOD function to generate a perfectly even distribution of data. I’ve noticed a few examples (such as that by Charles Hooper in the comments of the Quiz posting), in which the DBMS_RANDOM function is used. Note this will almost certainly generate data with enough natural skewness on a 1M table with 254 random values that when the outlier 255th value is introduced, it will qualify for a height-based histogram. Very easy way to test and find out. Simply generate the 1M data with 255 random values and I suggest a height-based histogram is created regardless.
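
If you want to test that for yourself, a variation on the insert above using DBMS_RANDOM (an aside, not part of the main demo) might look like this:

SQL> insert into bowie select rownum, mod(rownum,254), trunc(dbms_random.value(0,255)) from dual connect by level <= 1000000;

This generates 255 random values in the third column and, I suggest, enough natural skew for a height-based histogram to appear regardless of any outlier.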

OK, I’ll run some SQL to generate sufficient workload to qualify the columns for automatic histograms:

SQL> select * from bowie where id = 42;
SQL> select * from bowie where code1 = 42;
SQL> select * from bowie where code2 = 42;

BTW, the difference between the SIZE AUTO and SIZE SKEWONLY options is that AUTO requires previous workload to suggest a histogram might be relevant, whereas SKEWONLY does not.
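
For reference, the two variants side by side (a sketch using this demo’s BOWIE table):

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE AUTO');

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE SKEWONLY');

AUTO consults the column usage Oracle has recorded from previous predicates; SKEWONLY looks only at the data itself.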

If we were to collect statistics at this stage, we would notice that the second and third columns both have a Frequency-Based histogram as both columns only have 254 distinct values and so automatically qualify:

SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE', estimate_percent=> null, cascade=>true);

PL/SQL procedure successfully completed.

SQL> select column_name, histogram from dba_tab_columns where table_name = 'BOWIE';

COLUMN_NAME                    HISTOGRAM
------------------------------ ---------------
ID                             NONE
CODE1                          FREQUENCY
CODE2                          FREQUENCY

If we were to run a query using the third column, notice how the cardinality estimates aren’t too bad in this example:

SQL> select * from bowie where code2 > 600;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    13 |   660   (2)| 00:00:08 |
|*  1 |  TABLE ACCESS FULL| BOWIE |     1 |    13 |   660   (2)| 00:00:08 |
---------------------------------------------------------------------------

There are no rows that are greater than 600 and so an estimate of 1 isn’t too bad at all.

OK, let’s add in this one, tiny little row and collect fresh, 100% accurate statistics (Note: the accurate statistics are very important, as Niall’s examples have demonstrated):


SQL> insert into bowie values (1000001, 42, 99999999);

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE', estimate_percent=> null, cascade=>true);

PL/SQL procedure successfully completed.

SQL> select column_name, histogram from dba_tab_columns where table_name = 'BOWIE';

COLUMN_NAME                    HISTOGRAM
------------------------------ ---------------
ID                             NONE
CODE1                          FREQUENCY
CODE2                          NONE

Note that the third column now has 255 distinct values and so no longer qualifies for the automatic Frequency-Based histogram. As most of its data is perfectly evenly distributed with just the one outlier value, the column doesn’t qualify for a Height-based histogram either and so now has no histogram at all.

Note as I collected 100% accurate statistics, Oracle is definitely aware of this outlier value:

SQL> select column_name, low_value, high_value from dba_tab_columns where table_name='BOWIE' and column_name='CODE2';

COLUMN_NAME  LOW_VALUE  HIGH_VALUE
------------ ---------- ------------
CODE2        80         C464646464

SQL> var high_num number
SQL> exec dbms_stats.convert_raw_value('C464646464',:high_num);

PL/SQL procedure successfully completed.

SQL> print high_num

  HIGH_NUM
----------
  99999999

But it’s not enough for Oracle to automatically generate a histogram. Which is a shame really, because now we can have all sorts of problems:

SQL> select * from bowie where code2 > 600;
Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |   999K|    12M|   660   (2)| 00:00:08 |
|*  1 |  TABLE ACCESS FULL| BOWIE |   999K|    12M|   660   (2)| 00:00:08 |
---------------------------------------------------------------------------

Where previously the cardinality estimates were spot on, now they’re terrible (expecting not 1 row but 999,000 rows !!), because without a histogram Oracle assumes an even distribution between the low and high point values.
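
One possible workaround (a sketch, not necessarily a recommendation) is to explicitly request a histogram on the problem column rather than relying on the automatic options:

SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE', estimate_percent=>null, method_opt=>'FOR COLUMNS CODE2 SIZE 254');

With a histogram describing where the data actually sits, the CBO at least has a fighting chance with ranges beyond the evenly distributed values.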

I’m not a great fan of either the SIZE AUTO or SIZE SKEWONLY options 😉

Hope you’re enjoying these little quizzes, I’ll have another one for you all soon.