Why Are My Indexes Still Valid Solution (A Second Face) October 20, 2011

Posted by Richard Foote in IOT, Oracle Indexes, Quiz, Secondary Indexes.

I’ve been so busy lately, I just haven’t had any spare time to post.

For now, the quick answer to the last quiz is that the second table was indeed an Index Organized Table (IOT).

One of the nice benefits of an IOT is that when it’s re-organised, unlike a Heap Table, all its indexes remain valid, even the Secondary Indexes. I’ll explain why in my next post, in the next few days. I’ll also explain why secondary indexes are one of the main disadvantages of IOTs.
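
By way of a quick illustration (a sketch only: the object names here are invented, and the behaviour shown is simply what the quiz’s BOWIE table demonstrated), an IOT version of the earlier PINK_FLOYD demo keeps all of its indexes valid across a MOVE:

SQL> create table pink_floyd_iot (id number constraint pf_iot_pk primary key, code number, name varchar2(30)) organization index;

Table created.

SQL> create index pf_iot_code_i on pink_floyd_iot(code);

Index created.

SQL> alter table pink_floyd_iot move;

Table altered.

SQL> select index_name, status from dba_indexes where table_name = 'PINK_FLOYD_IOT';

INDEX_NAME                     STATUS
------------------------------ --------
PF_IOT_PK                      VALID
PF_IOT_CODE_I                  VALID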

Stay tuned !!

Why Are My Indexes Still Valid Quiz ? (Move On) October 11, 2011

Posted by Richard Foote in Index Organized Tables, Oracle Indexes, Quiz.

OK, this quiz is a nice easy one; the lads at work got it without too much trouble.
 

Normally, when you MOVE (re-org) a table, all the associated indexes become Unusable, as below:
 

SQL> create table pink_floyd (id number constraint pf_pk primary key, code number, name varchar2(30));
 
Table created.
 
SQL> create index pf_code_i on pink_floyd(code);
 
Index created.
 
SQL> insert into pink_floyd select rownum, mod(rownum,100), 'The Dark Side Of The Moon' from dual connect by level <= 10000;
 
10000 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'PINK_FLOYD', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
 
SQL> select index_name, num_rows, status from dba_indexes where table_name = 'PINK_FLOYD';
 
INDEX_NAME                       NUM_ROWS STATUS
------------------------------ ---------- --------
PF_PK                               10000 VALID
PF_CODE_I                           10000 VALID
 
SQL> alter table pink_floyd move;
 
Table altered.
 
SQL> select index_name, num_rows, status from dba_indexes where table_name = 'PINK_FLOYD';
 
INDEX_NAME                       NUM_ROWS STATUS
------------------------------ ---------- --------
PF_PK                               10000 UNUSABLE
PF_CODE_I                           10000 UNUSABLE

So the indexes are now all unusable …

However, I previously created another table called BOWIE that has exactly the same columns, indexes and data. But when I MOVE this table:
 

SQL> select index_name, num_rows, status from dba_indexes where table_name = 'BOWIE';
 
INDEX_NAME                       NUM_ROWS STATUS
------------------------------ ---------- --------
BOWIE_PK                            10000 VALID
BOWIE_CODE_I                        10000 VALID
 
SQL> alter table bowie move;
 
Table altered.
 
SQL> select index_name, num_rows, status from dba_indexes where table_name = 'BOWIE';
 
INDEX_NAME                       NUM_ROWS STATUS
------------------------------ ---------- --------
BOWIE_PK                            10000 VALID
BOWIE_CODE_I                        10000 VALID

 
All the indexes remain VALID !!
 
What’s so different about this table and its indexes ??????

I plan to discuss this whole topic (finally) in more detail in the coming weeks …

Why Is My Index Not Being Used No. 2 Solution (The Narrow Way) October 6, 2011

Posted by Richard Foote in CBO, Oracle Indexes, Quiz.

As many have identified, the first thing to point out is that the two queries are not exactly equivalent.

The BETWEEN clause is equivalent to a ‘>= and <=’ predicate, whereas the original query only had a ‘> and <’ predicate. The additional equality conditions at each end are significant.

The selectivity of the original query is basically costed as  (max condition – min condition)/(max column value – min column value). As this equates to 1 day/2000 days, the selectivity of the first query is therefore 0.0005.

1M rows multiplied by 0.0005 = 500 rows, the CBO cardinality estimate.

Note that 0.0005 is also the selectivity of 1 day.

The calculation for the BETWEEN clause is different. It’s the above formula PLUS the selectivity of 1 day for each of the two equality conditions (as the CBO assumes you not only want the values within the range, but also the specific values at each end of the range).

So that’s a selectivity of 0.0005 for the date range as above, plus 0.0005 for each of the two equality conditions: 0.0005 + (2 x 0.0005) = 0.0015.

1M multiplied by 0.0015 = 1500, the CBO cardinality estimate of the second query.
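
The arithmetic is easy enough to sanity-check from SQL*Plus (a trivial sketch; the 2000 day span and the 1M row count come from the quiz setup):

SQL> select 1/2000 as day_sel,
  2         1/2000 + 2*(1/2000) as between_sel,
  3         1000000 * (1/2000 + 2*(1/2000)) as between_card
  4  from dual;

   DAY_SEL BETWEEN_SEL BETWEEN_CARD
---------- ----------- ------------
     .0005       .0015         1500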

As many have mentioned in the comments, the Clustering Factor of this index is awful, as the 500 occurrences of each distinct day are spread evenly throughout the whole table. It’s therefore very expensive to use this index to retrieve a “larger” number of rows.

The cost of the first query that used the index is 505, about as bad as it gets when retrieving 500 rows. If we were to retrieve not 500 but 1500 rows, then the cost would effectively triple and be approximately 1500.

However, the cost of the FTS, as highlighted in the second query, is 933. This is less than 1500 and so the FTS is preferred by the CBO for the second query.

It all comes down to the relative costs, and these in turn come down to the estimated selectivity of the query and the associated segment (and system) statistics, of which the Clustering Factor of the index is one of the most significant factors of all (no pun intended, of course). If we effectively triple the estimated selectivity, as we do with the second query, we effectively triple the estimated cost, and this can obviously have an impact on the CBO calculations and the resultant execution plan.
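
If you want to check the statistics driving these costings for yourself, they all sit in the data dictionary (a quick sketch; the actual figures will of course vary from run to run):

SQL> select blevel, leaf_blocks, clustering_factor, num_rows
  2  from dba_indexes where index_name = 'BOWIE_DATE_I';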

If we were to rewrite the first query to be equivalent to the BETWEEN version:

SQL> select * from bowie where hist_date >= '01-JAN-2011' and hist_date <= '02-JAN-2011';
 
500 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507
 
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |  1500 | 28500 |   933   (2)| 00:00:12 |
|*  1 |  TABLE ACCESS FULL| BOWIE |  1500 | 28500 |   933   (2)| 00:00:12 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("HIST_DATE">=TO_DATE(' 2011-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIST_DATE"<=TO_DATE(' 2011-01-02 00:00:00',
              'syyyy-mm-dd hh24:mi:ss'))
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       3344  consistent gets
          0  physical reads
          0  redo size
       5563  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        500  rows processed

then we now get the same FTS costings and execution plan.

The moral of this story is that using a BETWEEN for a small range can significantly increase the cost of the query. Something to be aware of if perhaps close enough is good enough.
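
If you want to watch the CBO derive these selectivities for itself, the well-known (if unsupported) 10053 trace event dumps its workings to a trace file. A sketch:

SQL> alter session set events '10053 trace name context forever, level 1';

Session altered.

SQL> explain plan for select * from bowie where hist_date between '01-JAN-2011' and '02-JAN-2011';

Explained.

SQL> alter session set events '10053 trace name context off';

Session altered.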

Why Is My Index Not Being Used No. 2 Quiz (Quicksand) October 3, 2011

Posted by Richard Foote in CBO, Oracle Indexes, Quiz.

I have a table that has 1M rows with dates that span 2000 days, all evenly distributed (so there are 500 rows per day for the mathematically challenged). All stats are 100% accurate and I have an index on the date column.
 

SQL> create table bowie (id number, hist_date date, text varchar2(30));
 
Table created.
 
SQL> insert into bowie select rownum, sysdate-mod(rownum, 2000), 'BOWIE' from dual connect by level <= 1000000;
 
1000000 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> create index bowie_date_i on bowie(hist_date);
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
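
Just to confirm the even distribution the quiz relies upon (an optional check, not part of the original demo; given the mod(rownum,2000) insert above, it should report 2000 distinct days at 500 rows apiece):

SQL> select count(distinct hist_date) as days, count(*)/count(distinct hist_date) as rows_per_day from bowie;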

OK, I now select 1 day’s worth of data:

SQL> select * from bowie where hist_date > '01-JAN-2011' and hist_date < '02-JAN-2011';
 
500 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 690852991
 
--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |   500 |  9500 |   505   (0)| 00:00:07 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE        |   500 |  9500 |   505   (0)| 00:00:07 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_DATE_I |   500 |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 

Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("HIST_DATE">TO_DATE(' 2011-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
              AND "HIST_DATE"<TO_DATE(' 2011-01-02 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        506  consistent gets
          0  physical reads
          0  redo size
       5563  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        500  rows processed

Everything is perfect. The index is used and the cardinality estimate is spot on, with the CBO correctly predicting that 500 rows will be returned.

OK, I now re-write the query with a BETWEEN clause:

SQL> select * from bowie where hist_date between '01-JAN-2011' and '02-JAN-2011';
 
500 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507
 
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |  1500 | 28500 |   933   (2)| 00:00:12 |
|*  1 |  TABLE ACCESS FULL| BOWIE |  1500 | 28500 |   933   (2)| 00:00:12 |
---------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("HIST_DATE">=TO_DATE(' 2011-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "HIST_DATE"<=TO_DATE(' 2011-01-02 00:00:00',
              'syyyy-mm-dd hh24:mi:ss'))
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       3344  consistent gets
          0  physical reads
          0  redo size
       5563  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        500  rows processed

And now everything has gone wrong. I’m still getting the same 500 rows, but the CBO is choosing an inefficient FTS. The estimates are now way, way off, with the CBO expecting 1500, not 500, rows to be returned.

QUESTION: Why has everything now gone so terribly wrong ?

Why Is My Index Not Being Used Solution (Eclipse) October 1, 2011

Posted by Richard Foote in ASSM, CBO, Clustering Factor, Oracle Indexes, Quiz.

Well done to everyone who got the correct answer 🙂

Indeed, the subtle but significant difference between the two demos was that demo one created the table in a tablespace called USER_DATA with manual segment space management (with freelists/freelist groups set to 1), while demo two created the table in a tablespace called USER_DATA1 with automatic segment space management.
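
For reference, the two tablespace definitions would have looked something like the following (a sketch only; the datafile names and sizes are invented, the segment space management clauses being the point):

SQL> create tablespace user_data datafile '/u01/app/oracle/oradata/db/user_data01.dbf' size 100m
  2  extent management local segment space management manual;

Tablespace created.

SQL> create tablespace user_data1 datafile '/u01/app/oracle/oradata/db/user_data101.dbf' size 100m
  2  extent management local segment space management auto;

Tablespace created.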

In the first demo, the 3 separate sessions all followed the same freelist and inserted their rows concurrently into the same table blocks, resulting in the table being effectively sorted in ID order.

If we look at the resultant Clustering Factor:

SQL> select num_rows, leaf_blocks, clustering_factor from dba_indexes where index_name = 'BOWIE_ID_I';

  NUM_ROWS LEAF_BLOCKS CLUSTERING_FACTOR
---------- ----------- -----------------
    300000        1452              2171

We notice the Clustering Factor of 2171 is relatively low for an index with 300000 rows, as indeed the order of the rows in the table almost exactly matches the order of the index entries.

In the second demo, ASSM ensures the 3 separate transactions don’t cause contention by having each insert its rows into a different set of blocks. This is good in that contention is reduced, but it has the nasty side-effect of scattering the resultant rows between 3 varying sets of blocks. The actual clustering of the data isn’t particularly bad, in that Oracle now has to visit just 3 different blocks for a range of values that might previously have been co-located within the 1 block. But because of the manner in which the Clustering Factor is calculated, whereby it increases even if Oracle is forced to revisit a block it had visited just a couple of index entries beforehand, the calculated Clustering Factor can be appalling.
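
One way to visualise this scatter (a sketch, assuming the demo’s BOWIE table with its indexed ID column): list the table block holding each of a handful of consecutive ID values. In the ASSM demo, adjacent IDs will be found rotating between 3 different blocks, rather than packed into the same block:

SQL> select id, dbms_rowid.rowid_block_number(rowid) as block_no
  2  from bowie
  3  where id between 1 and 9
  4  order by id;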

If we look at the Clustering Factor of the index from the second demo:

SQL> select num_rows, leaf_blocks, clustering_factor from dba_indexes where index_name = 'BOWIE_ID_I';
 
NUM_ROWS LEAF_BLOCKS CLUSTERING_FACTOR
---------- ----------- -----------------
    300000        1573            271936

We notice the Clustering Factor is now terrible at 271936. It’s a classic example of a table with data that is relatively well clustered but which has an appalling Clustering Factor. If Oracle didn’t increment the Clustering Factor for a block it had visited only a couple of index entries previously, then this index would likely have a similar Clustering Factor to that of the first demo.

But statistics collection doesn’t take this into consideration; it will increment the Clustering Factor even if the block has only just been visited (only if it’s the same table block as that of the previous index entry will the Clustering Factor not be incremented during stats collection). Hence the terrible Clustering Factor, and hence the dramatic difference in how the index is now considered, costed and used by the CBO.
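
Conceptually, the stats collection calculation is little more than a walk through the index in key order, incrementing a counter every time the table block changes from one index entry to the next. Something along these lines reproduces the idea (a rough sketch only; it ignores relative file numbers and assumes the BOWIE_ID_I index is on the ID column):

SQL> select count(*) as clustering_factor
  2  from (select dbms_rowid.rowid_block_number(rowid) as blk,
  3               lag(dbms_rowid.rowid_block_number(rowid))
  4                 over (order by id, rowid) as prev_blk
  5        from bowie)
  6  where prev_blk is null or blk <> prev_blk;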

The moral of this story is that if you use ASSM, or you use multiple Freelists/Freelist Groups to avoid contention, seriously consider the impact on the Clustering Factor of indexed columns that would ordinarily have a good Clustering Factor, and the impact this in turn may have on your resultant execution plans …