
12c Asynchronous Global Index Maintenance Part II (The Space Between) August 6, 2013

Posted by Richard Foote in 12c, Asynchronous Global Index Maintenance, Coalesce Cleanup, dbms_part.cleanup_gidx, Index Coalesce, Oracle Indexes, Partitioning.

In Part I, I discussed how global indexes can now be asynchronously maintained in Oracle 12c when a table partition is dropped or truncated. Basically, when a table partition is dropped/truncated with the UPDATE GLOBAL INDEXES clause, Oracle simply keeps track of the object numbers of those table partitions and ignores any corresponding rowids within the index during subsequent index scans. As such, these table partition operations are very fast and efficient as the global indexes are not actually maintained during the partition operation, but importantly, continue to remain in a usable state.

If we look at a partial 11g global index block dump after dropping a table partition (eg. the MUSE_ID_I in the previous demo):

Block header dump: 0x01028750
Object id on Block? Y
seg/obj: 0x130ac csc: 0x00.3c7323 itc: 2 flg: E typ: 2 - INDEX
brn: 1 bdba: 0x1028748 ver: 0x01 opc: 0
inc: 0 exflg: 0

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0000.000.00000000 0x00000000.0000.00 ---- 0 fsc 0x0000.00000000
0x02 0x0006.001.00000f91 0x00c03e16.0177.02 ---- 378 fsc 0x1c0b.00000000
Leaf block dump
===============
header address 130573412=0x7c86464
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 378
kdxcofbo 792=0x318
kdxcofeo 1613=0x64d
kdxcoavs 821
kdxlespl 0
kdxlende 378
kdxlenxt 16942929=0x1028751
kdxleprv 16942927=0x102874f
kdxledsz 0
kdxlebksz 8036
row#0[8019] flag: ---D--, lock: 2, len=17
col 0; len 3; (3): c2 10 13
col 1; len 10; (10): 00 01 30 a9 01 02 ab 34 00 3b
row#1[8002] flag: ---D--, lock: 2, len=17
col 0; len 3; (3): c2 10 14
col 1; len 10; (10): 00 01 30 a9 01 02 ab 34 00 3c
row#2[7985] flag: ---D--, lock: 2, len=17
col 0; len 3; (3): c2 10 15
col 1; len 10; (10): 00 01 30 a9 01 02 ab 34 00 3d
row#3[7968] flag: ---D--, lock: 2, len=17
col 0; len 3; (3): c2 10 16
col 1; len 10; (10): 00 01 30 a9 01 02 ab 34 00 3e
row#4[7951] flag: ---D--, lock: 2, len=17
col 0; len 3; (3): c2 10 17
col 1; len 10; (10): 00 01 30 a9 01 02 ab 34 00 3f

….

We notice that all index entries that reference the dropped table partition are marked as deleted. They all have a D (deleted) flag set and have been locked by the drop table partition transaction in ITL slot 2. So prior to Oracle 12c, updating global indexes on the fly was a relatively expensive operation, as it required all the associated index entries to be deleted from the global indexes.
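
For reference, index block dumps such as the above can be generated along the following lines (a sketch only; the object id, datafile and block numbers are hypothetical and would first need to be identified, for example via a treedump of the index):

SQL> select object_id from dba_objects where object_name = 'MUSE_ID_I';

SQL> alter session set events 'immediate trace name treedump level 78000';

SQL> alter system dump datafile 4 block 165712;

The treedump (written to the session trace file) lists the branch and leaf blocks of the index, from which a specific block of interest can be picked for a detailed block dump.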

However, if we look at a block dump of the same index in an Oracle 12c database following a table partition being dropped:

Block header dump: 0x018000e0
Object id on Block? Y
seg/obj: 0x16bbe csc: 0x00.26ae40 itc: 2 flg: E typ: 2 - INDEX
brn: 1 bdba: 0x18000d8 ver: 0x01 opc: 0
inc: 0 exflg: 0

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0000.000.00000000 0x00000000.0000.00 ---- 0 fsc 0x0000.00000000
0x02 0xffff.000.00000000 0x00000000.0000.00 C--- 0 scn 0x0000.0026ae40
Leaf block dump
===============
header address 364741220=0x15bd8264
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 378
kdxcofbo 792=0x318
kdxcofeo 1613=0x64d
kdxcoavs 821
kdxlespl 0
kdxlende 0
kdxlenxt 25166049=0x18000e1
kdxleprv 25166047=0x18000df
kdxledsz 0
kdxlebksz 8036
row#0[8019] flag: -------, lock: 0, len=17
col 0; len 3; (3): c2 10 13
col 1; len 10; (10): 00 01 6b bb 01 80 04 71 00 3b
row#1[8002] flag: -------, lock: 0, len=17
col 0; len 3; (3): c2 10 14
col 1; len 10; (10): 00 01 6b bb 01 80 04 71 00 3c
row#2[7985] flag: -------, lock: 0, len=17
col 0; len 3; (3): c2 10 15
col 1; len 10; (10): 00 01 6b bb 01 80 04 71 00 3d
row#3[7968] flag: -------, lock: 0, len=17
col 0; len 3; (3): c2 10 16
col 1; len 10; (10): 00 01 6b bb 01 80 04 71 00 3e
row#4[7951] flag: -------, lock: 0, len=17
col 0; len 3; (3): c2 10 17
col 1; len 10; (10): 00 01 6b bb 01 80 04 71 00 3f

We notice there are no deleted index entries; the index remains totally untouched by the drop table partition operation. So the good news is that dropping/truncating a table partition while updating global indexes is extremely fast and efficient, while the indexes remain hunky dory as subsequent index range scans can simply ignore any rowids that don’t reference existing table partitions of interest.

However, the bad news is that during subsequent index DML operations, Oracle does not know which index entries are valid and which are not, and so the space used by these “orphaned” index entries cannot be automatically reclaimed and reused as it can with conventionally deleted index entries. Therefore, we need some other way to clean out the orphaned index entries.

There are a number of possible ways to do this. One way is to simply rebuild the global index (or index partition):

SQL> alter index muse_code_i rebuild partition code_p1;

Index altered.

SQL> select index_name, null partition_name, orphaned_entries, num_rows, s.blocks, leaf_blocks, status
from dba_indexes i, dba_segments s where i.index_name = s.segment_name and table_name='MUSE' and partitioned = 'NO'
union
select index_name, i.partition_name, orphaned_entries, num_rows, s.blocks, leaf_blocks, status
from dba_ind_partitions i, dba_segments s where i.partition_name = s.partition_name and index_name like 'MUSE%';

INDEX_NAME   PARTITION_NAME  ORP   NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS
------------ --------------- --- ---------- ---------- ----------- --------
MUSE_CODE_I  CODE_P1         NO    1000000       2944        2758 USABLE
MUSE_CODE_I  CODE_P2         YES   1000000       4352        4177 USABLE
MUSE_ID_I                    YES   2000000       9216        5849 VALID

Effective, but relatively expensive as this requires the entire index structure to be rebuilt from scratch. Depending on the scale and distribution of the orphaned index entries, another possibly cheaper alternative is to use the new CLEANUP coalesce clause:

SQL> alter index muse_id_i coalesce cleanup;

Index altered.

SQL> exec dbms_stats.gather_index_stats(ownname=>user, indname=>'MUSE_ID_I', estimate_percent=>null);

PL/SQL procedure successfully completed.

SQL> select index_name, null partition_name, orphaned_entries, num_rows, s.blocks, leaf_blocks, status
from dba_indexes i, dba_segments s where i.index_name = s.segment_name and table_name='MUSE' and partitioned = 'NO'
union
select index_name, i.partition_name, orphaned_entries, num_rows, s.blocks, leaf_blocks, status
from dba_ind_partitions i, dba_segments s where i.partition_name = s.partition_name and index_name like 'MUSE%';

INDEX_NAME   PARTITION_NAME  ORP  NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS
------------ --------------- --- --------- ---------- ----------- --------
MUSE_CODE_I  CODE_P1         NO    1000000       2944        2758 USABLE
MUSE_CODE_I  CODE_P2         YES   1000000       4352        4177 USABLE
MUSE_ID_I                    NO    2000000       9216        5849 VALID

SQL> analyze index muse_id_i validate structure;

Index analyzed.

SQL> select name, lf_rows, del_lf_rows from index_stats;

NAME            LF_ROWS DEL_LF_ROWS
------------ ---------- -----------
MUSE_ID_I       2000000           0

This visits each index leaf block and removes all the orphaned index entries as part of the coalesce process. Note this is a more “powerful” version of coalesce, as a standard coalesce is not aware of orphaned index entries and will only coalesce the index without actually removing the orphaned index entries.
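
To illustrate the distinction (a simple sketch, not actually run as part of this demo):

SQL> alter index muse_id_i coalesce;         -- standard coalesce: merges sparse leaf blocks, orphaned entries remain
SQL> alter index muse_id_i coalesce cleanup; -- 12c variant: additionally deletes the orphaned index entries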

Yet another possible option is to simply wait for the PMO_DEFERRED_GIDX_MAINT_JOB job (scheduled by default during the 2am maintenance window) to clean out the orphaned index entries from all currently impacted global indexes. Alternatively, we can manually run the dbms_part.cleanup_gidx procedure, which is in turn called by this job:

SQL> exec dbms_part.cleanup_gidx;

PL/SQL procedure successfully completed.

SQL> select index_name, null partition_name, orphaned_entries, num_rows, s.blocks, leaf_blocks, status
from dba_indexes i, dba_segments s where i.index_name = s.segment_name and table_name='MUSE' and partitioned = 'NO'
union
select index_name, i.partition_name, orphaned_entries, num_rows, s.blocks, leaf_blocks, status
from dba_ind_partitions i, dba_segments s where i.partition_name = s.partition_name and index_name like 'MUSE%';

INDEX_NAME   PARTITION_NAME  ORP   NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS
------------ --------------- --- ---------- ---------- ----------- --------
MUSE_CODE_I  CODE_P1         NO     1000000       2944        2758 USABLE
MUSE_CODE_I  CODE_P2         NO     1000000       4352        4177 USABLE
MUSE_ID_I                    NO     2000000       9216        5849 VALID

We notice the last index partition has now been cleaned out and no longer has orphaned index entries.
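
As an aside, details of the scheduled cleanup job itself can be viewed via the scheduler dictionary views (a sketch; the exact schedule will depend on your maintenance window configuration):

SQL> select job_name, enabled, state, next_run_date
from dba_scheduler_jobs
where job_name = 'PMO_DEFERRED_GIDX_MAINT_JOB';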

So with the new asynchronous global index maintenance capabilities of the Oracle 12c database, we can perform a much faster and more efficient drop/truncate table partition operation while keeping our global indexes in a usable state and leave the tidying up of the resultant orphaned index entries to another time and method of our convenience.

12c Asynchronous Global Index Maintenance Part I (Where Are We Now ?) August 2, 2013

Posted by Richard Foote in 12c, Asynchronous Global Index Maintenance, Oracle Indexes, Partitioning.

I previously looked at how global index maintenance was performed when dropping a table partition prior to Oracle Database 12c. Let’s see how things have now changed since the introduction of 12c.

Let’s start by creating the same partitioned table and global indexes as previously:

SQL> create table muse (id number, code number, name varchar2(30)) partition by range (id) (partition muse1 values less than (1000001), partition muse2 values less than (2000001), partition muse3 values less than (maxvalue));

Table created.

SQL> insert into muse select rownum, mod(rownum,100000), 'DAVID BOWIE' from dual connect by level <= 3000000;

3000000 rows created.

SQL> commit;

Commit complete.

SQL> create index muse_id_i on muse(id);

Index created.

SQL> create index muse_code_i on muse(code) global partition by range(code)(partition code_p1 values less than (50000), partition code_p2 values less than (maxvalue));

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'MUSE', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

If we look at the current state of affairs, all is currently hunky dory:

SQL> select index_name, null partition_name, num_rows, s.blocks, leaf_blocks, status, orphaned_entries from dba_indexes i, dba_segments s where i.index_name = s.segment_name and table_name='MUSE' and partitioned = 'NO'
union select index_name, i.partition_name, num_rows, s.blocks, leaf_blocks, status, orphaned_entries from dba_ind_partitions i, dba_segments s where i.partition_name = s.partition_name and index_name like 'MUSE%';

INDEX_NAME   PARTITION_NAME    NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS   ORP
------------ --------------- ---------- ---------- ----------- -------- ---
MUSE_CODE_I  CODE_P1            1500000       4224        4137 USABLE   NO
MUSE_CODE_I  CODE_P2            1500000       4352        4177 USABLE   NO
MUSE_ID_I                       3000000       9216        8633 VALID    NO

However, a difference to note here is a new data dictionary column called ORPHANED_ENTRIES, which denotes whether the index currently has any orphaned index entries. What are these? We shall see …
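
Indeed, a handy way of identifying all currently impacted indexes in the database is to simply query this new column (a sketch based on the same dictionary views used throughout this demo):

SQL> select owner, index_name, null partition_name from dba_indexes where orphaned_entries = 'YES'
union all
select index_owner, index_name, partition_name from dba_ind_partitions where orphaned_entries = 'YES';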

Let’s see how expensive it is to now drop the same table partition while updating the global indexes:

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and (n.name = 'redo size' or n.name = 'db block gets') and s.sid=7;

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
db block gets                                                        129249
redo size                                                         105069544

SQL> alter table muse drop partition muse1 update global indexes;

Table altered.

Elapsed: 00:00:00.76

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and (n.name = 'redo size' or n.name = 'db block gets') and s.sid=7;

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
db block gets                                                        129314
redo size                                                         105083724

As we can see, this is significantly different from before, when this was a relatively slow and expensive exercise. At just 65 block gets and only 14180 bytes of redo, this is now about the same cost as dropping the partition without updating the global indexes. How can this be?
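
As a quick aside, when measuring your own session you can avoid having to look up the SID by querying V$MYSTAT instead (a sketch):

SQL> select n.name, s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic# and n.name in ('db block gets', 'redo size');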

If we now look at the status of our global indexes:

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'MUSE', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> select index_name, null partition_name, num_rows, s.blocks, leaf_blocks, status, orphaned_entries
from dba_indexes i, dba_segments s where i.index_name = s.segment_name and table_name='MUSE' and partitioned = 'NO'
union
select index_name, i.partition_name, num_rows, s.blocks, leaf_blocks, status, orphaned_entries
from dba_ind_partitions i, dba_segments s where i.partition_name = s.partition_name and index_name like 'MUSE%';

INDEX_NAME   PARTITION_NAME    NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS   ORP
------------ --------------- ---------- ---------- ----------- -------- ---
MUSE_CODE_I  CODE_P1            1000000       4224        4137 USABLE   YES
MUSE_CODE_I  CODE_P2            1000000       4352        4177 USABLE   YES
MUSE_ID_I                       2000000       9216        5849 VALID    YES

We notice that indeed, the index entries have been reduced (for example, only 2M index entries now in MUSE_ID_I instead of 3M) as if the indexes have been updated. However, we also notice that although the indexes are both marked as now having orphaned entries, they’re still in a USABLE state.

Basically, when dropping (or truncating) a table partition, Oracle in 12c now “postpones” the actual removal of the global index entries associated with the dropped/truncated partition. This can now be performed asynchronously at a time of our choosing. So it’s therefore now very quick and very cheap to update these global indexes on the fly.

However, most importantly, the indexes are still usable and can be guaranteed to return the correct results, ignoring any orphaned entries as required. These can be easily ignored because their index entry rowids contain the object number of the dropped table partition, rather than that of the table partition(s) of interest to the queries.
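
This can be observed by extracting the data object number embedded within the rowids of the current rows (a sketch using the DBMS_ROWID package; only object numbers belonging to the remaining, live table partitions are returned):

SQL> select distinct dbms_rowid.rowid_object(rowid) data_object_id from muse;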

So if we now select values via the ID column index that only spans data in the dropped table partition:

SQL> select * from muse where id between 42 and 420;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 2515419874
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |           |     1 |    23 |     4   (0)| 00:00:01 |       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| MUSE      |     1 |    23 |     4   (0)| 00:00:01 |     1 |     1 |
|*  2 |   INDEX RANGE SCAN                         | MUSE_ID_I |     1 |       |     3   (0)| 00:00:01 |       |       |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID">=42 AND "ID"<=420)

filter(TBL$OR$IDX$PART$NUM("MUSE",0,8,0,"MUSE".ROWID)=1)

Statistics
----------------------------------------------------------

0  recursive calls
0  db block gets
4  consistent gets
0  physical reads
0  redo size
470  bytes sent via SQL*Net to client
532  bytes received via SQL*Net from client
1  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
0  rows processed

We notice that quite correctly, no rows are now returned.

The partitioned global index on the CODE column likewise only returns valid data when accessed:

SQL> select * from muse where code=42;

        ID       CODE NAME
---------- ---------- ------------------------------
   1000042         42 DAVID BOWIE
   1100042         42 DAVID BOWIE
   1200042         42 DAVID BOWIE
   1300042         42 DAVID BOWIE
   1400042         42 DAVID BOWIE
   1500042         42 DAVID BOWIE
   1700042         42 DAVID BOWIE
   1600042         42 DAVID BOWIE
   1900042         42 DAVID BOWIE
   1800042         42 DAVID BOWIE
   2000042         42 DAVID BOWIE
   2100042         42 DAVID BOWIE
   2200042         42 DAVID BOWIE
   2300042         42 DAVID BOWIE
   2400042         42 DAVID BOWIE
   2500042         42 DAVID BOWIE
   2600042         42 DAVID BOWIE
   2700042         42 DAVID BOWIE
   2900042         42 DAVID BOWIE
   2800042         42 DAVID BOWIE

20 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4070098220
---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |             |    20 |   460 |    24   (0)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE                     |             |    20 |   460 |    24   (0)| 00:00:01 |     1 |     1 |
|   2 |   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| MUSE        |    20 |   460 |    24   (0)| 00:00:01 | ROWID | ROWID |
|*  3 |    INDEX RANGE SCAN                         | MUSE_CODE_I |    20 |       |     3   (0)| 00:00:01 |     1 |     1 |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("CODE"=42)

filter(TBL$OR$IDX$PART$NUM("MUSE",0,8,0,"MUSE".ROWID)=1)

Statistics
----------------------------------------------------------

1  recursive calls
0  db block gets
25  consistent gets
2  physical reads
0  redo size
1147  bytes sent via SQL*Net to client
554  bytes received via SQL*Net from client
3  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
20  rows processed

As we can see, only valid data belonging to the non-dropped partitions are returned via the index, even though the index has orphaned index entries that reference the dropped table partition.

If we look at the INDEX_STATS of these indexes, we notice at one level that the orphaned index entries are counted as if they’re deleted entries:

SQL> analyze index muse_id_i validate structure;

Index analyzed.

SQL> select name, lf_rows, del_lf_rows from index_stats;

NAME            LF_ROWS DEL_LF_ROWS
------------ ---------- -----------
MUSE_ID_I       3000000     1000000

We see that the index statistics are indicating that there are 1M so-called deleted index entries. The validation process ensures that the orphaned index entries only reference partitions that indeed no longer exist and counts such entries as deleted ones.

So it currently looks like we’ve got the best of both worlds here. We effectively get the same performance during the drop table partition operation as if we don’t maintain the global indexes, but get the same index availability and subsequent query performance as if we do. So what’s the catch?

Well, very importantly, unlike actual deleted index entries, they are not readily removed and their space reused by subsequent DML activities within the leaf blocks. In fact, these orphaned index entries can even “get in the way”, as we see here when we attempt to reinsert the same data back into the table:

SQL> insert into muse select rownum, mod(rownum,100000), 'DAVID BOWIE' from dual connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:03:56.52

Execution Plan
----------------------------------------------------------

Plan hash value: 1731520519
-------------------------------------------------------------------------------
| Id  | Operation                      | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | INSERT STATEMENT               |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL       | MUSE |       |            |          |
|   2 |   COUNT                        |      |       |            |          |
|*  3 |    CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   4 |     FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter(LEVEL<=1000000)

Statistics
----------------------------------------------------------

700  recursive calls
758362  db block gets
53062  consistent gets
5069  physical reads
355165692  redo size
869  bytes sent via SQL*Net to client
903  bytes received via SQL*Net from client
3  SQL*Net roundtrips to/from client
75  sorts (memory)
0  sorts (disk)
1000000  rows processed

SQL> commit;

Commit complete.

This is notably slower and more expensive than if the index entries had actually been deleted because Oracle is not able to simply identify and overwrite the orphaned index entries during DML operations as they’re not physically marked as deleted. If we look at the INDEX_STATS after inserting these new rows:

SQL> analyze index muse_id_i validate structure;

Index analyzed.

SQL> select name, blocks, lf_rows, del_lf_rows from index_stats;

NAME                BLOCKS    LF_ROWS DEL_LF_ROWS
--------------- ---------- ---------- -----------
MUSE_ID_I            12288    4000000     1000000

SQL> analyze index muse_code_i validate structure;

Index analyzed.

SQL> select name, blocks, lf_rows, del_lf_rows from index_stats;

NAME                BLOCKS    LF_ROWS DEL_LF_ROWS
--------------- ---------- ---------- -----------
MUSE_CODE_I           9216    2000000      500000

We notice that unlike actual deleted index entries, where all the deleted space would simply have been reused, none of the space occupied by the orphaned rows has been reused. This ultimately means accessing more index blocks, performing more block splits, generating more new blocks and overall performing more work than would have been necessary had these just been plain deleted index entries.

So how do we actually get rid of these orphaned index entries? I look at a number of different techniques we can use in Part II. And yes, good old block dumps are on their way as well :)

Reuse Of Empty Index Leaf Blocks (Free Four) August 1, 2013

Posted by Richard Foote in DBMS_SPACE, Leaf Blocks, Oracle Indexes.

A recent question by Stalin Subbiah has prompted me to write a quick post on the reuse of empty leaf blocks. In part, the question asked:

“Is there anyway I could monitor the effectiveness of empty blocks being reused from freelist of an index resulting from purge process that we are planning to start soon?”

I’ve previously discussed how Oracle can recycle index blocks that contain nothing but deleted index entries as such blocks are effectively added to the index freelist to be reused by subsequent index block splits. In my “Index Internals – Rebuilding The Truth” presentation, I mention a number of methods of how to see this reuse in operation, such as via block dumps, tree dumps and INDEX_STATS.
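
For example, a tree dump listing each index block along with its current row count (including any fully empty leaf blocks still linked within the index structure) can be generated as follows (a sketch; the level value is the OBJECT_ID of the index and is hypothetical here):

SQL> select object_id from dba_objects where object_name = 'RADIOHEAD_ID_I';

SQL> alter session set events 'immediate trace name treedump level 93065';

The output is written to the session trace file.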

However, another simple method which I don’t think I’ve discussed here before is the use of the DBMS_SPACE package. So to help answer Stalin’s question, a simple demo.

Let’s start by creating and populating a table/index in a non-ASSM tablespace:

SQL> create table radiohead (id number, name varchar2(30)) tablespace bowie_stuff;

Table created.

SQL> insert into radiohead select rownum, 'DAVID BOWIE' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index radiohead_id_i on radiohead(id) tablespace bowie_stuff;

Index created.

If we use DBMS_SPACE.FREE_BLOCKS to take a look at the number of free blocks currently in the index:

SQL> var free_blocks number
SQL> exec dbms_space.free_blocks(segment_owner=> user, segment_name=> 'RADIOHEAD_ID_I', segment_type=> 'INDEX', freelist_group_id=> 0, free_blks=> :free_blocks)

PL/SQL procedure successfully completed.

SQL> print free_blocks

FREE_BLOCKS
-----------
          0

We can see there are currently no free blocks.

OK, lets now delete a whole bunch of rows from the table/index:

SQL> delete from radiohead where id between 1 and 900000;

900000 rows deleted.

SQL> commit;

Commit complete.

If we now look at the number of free blocks:

SQL> exec dbms_space.free_blocks(segment_owner=> user, segment_name=> 'RADIOHEAD_ID_I', segment_type=> 'INDEX', freelist_group_id=> 0, free_blks=> :free_blocks)

PL/SQL procedure successfully completed.

SQL> print free_blocks

FREE_BLOCKS
-----------
       2003

We can see we now have some 2003 free blocks. Index blocks that are totally empty or contain nothing but deleted index entries are considered free blocks, which can potentially be reused/recycled by subsequent index block split operations.

We’ll now insert a whole bunch of new rows into the table, about half the number I deleted. Notice these new rows have ID values greater than all the current ID values within the table. As we’re effectively inserting monotonically increasing values, Oracle will perform 90-10 block splits, but the new index blocks required will simply reuse the empty blocks that previously contained the deleted (lower range) ID values:

SQL> insert into radiohead select rownum+1000000, 'ZIGGY STARDUST'
from dual connect by level <= 500000;

500000 rows created.

SQL> commit;

Commit complete.

We can confirm this by seeing how the number of free blocks has now reduced since the rows have been inserted:

SQL> var free_blocks number
SQL> exec dbms_space.free_blocks(segment_owner=> user, segment_name=> 'RADIOHEAD_ID_I',
segment_type=> 'INDEX', freelist_group_id=> 0, free_blks=> :free_blocks)

PL/SQL procedure successfully completed.

SQL> print free_blocks

FREE_BLOCKS
-----------
        938

We can see that the number of free blocks has now reduced to just 938, down from 2003.

So if you’ve previously deleted a batch of rows in a similar manner and you want to keep track of how many index blocks are still currently free (remembering they remain in the index structure in their original logical location until recycled or reused), you can simply use the DBMS_SPACE.FREE_BLOCKS procedure.

If your index resides in an Automatic Segment Space Management (ASSM) tablespace, where freelists don’t apply, the DBMS_SPACE.SPACE_USAGE procedure provides similar data.
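
A minimal sketch of how DBMS_SPACE.SPACE_USAGE might be called for such an index (for index segments, free blocks are generally reported in the FS2 bucket):

SQL> set serveroutput on
SQL> declare
       v_unf number; v_unfb number;
       v_fs1 number; v_fs1b number; v_fs2 number; v_fs2b number;
       v_fs3 number; v_fs3b number; v_fs4 number; v_fs4b number;
       v_full number; v_fullb number;
     begin
       dbms_space.space_usage(
         segment_owner => user, segment_name => 'RADIOHEAD_ID_I', segment_type => 'INDEX',
         unformatted_blocks => v_unf, unformatted_bytes => v_unfb,
         fs1_blocks => v_fs1, fs1_bytes => v_fs1b, fs2_blocks => v_fs2, fs2_bytes => v_fs2b,
         fs3_blocks => v_fs3, fs3_bytes => v_fs3b, fs4_blocks => v_fs4, fs4_bytes => v_fs4b,
         full_blocks => v_full, full_bytes => v_fullb);
       dbms_output.put_line('FS2 (free) blocks: '||v_fs2||'  Full blocks: '||v_full);
     end;
     /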

Next, back to Oracle Database 12c and Asynchronous Global Index Maintenance …

Global Index Maintenance – Pre 12c (Unwashed and Somewhat Slightly Dazed) July 26, 2013

Posted by Richard Foote in Global Indexes, Oracle Indexes, Partitioning.

Before I discuss another Oracle Database 12c new feature, Asynchronous Global Index Maintenance, thought it might be worthwhile discussing how Global Indexes were handled prior to 12c.

I’ll begin by creating and populating a simple range partitioned table:

SQL> create table muse (id number, code number, name varchar2(30)) partition by range (id)
(partition muse1 values less than (1000001), partition muse2 values less than (2000001), partition muse3 values less than (maxvalue));

Table created.

SQL> insert into muse select rownum, mod(rownum,100000), 'DAVID BOWIE' from dual connect by level <= 3000000;

3000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'MUSE', estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

I’ll now create two global indexes, one non-partitioned, the other partitioned:

SQL> create index muse_id_i on muse(id);
Index created.

SQL> create index muse_code_i on muse(code) global partition by range(code)
(partition code_p1 values less than (50000), partition code_p2 values less than (maxvalue));

Index created.

SQL> select index_name, null partition_name, num_rows, s.blocks, leaf_blocks, status
from dba_indexes i, dba_segments s where i.index_name = s.segment_name and table_name='MUSE' and partitioned = 'NO'
union select index_name, i.partition_name, num_rows, s.blocks, leaf_blocks, status
from dba_ind_partitions i, dba_segments s where i.partition_name = s.partition_name and index_name like 'MUSE%';

INDEX_NAME      PARTITION_NAME    NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS
--------------- --------------- ---------- ---------- ----------- --------
MUSE_CODE_I     CODE_P1            1500000       4224        4135 USABLE
MUSE_CODE_I     CODE_P2            1500000       4352        4177 USABLE
MUSE_ID_I                          3000000       9216        8633 VALID

So we currently have two happy chappy global indexes. I’m now however going to drop one of the table partitions without updating the global indexes and monitor both the db block gets and amount of redo that gets generated:

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and (n.name = 'redo size' or n.name = 'db block gets') and s.sid=129;

NAME               VALUE
------------- ----------
db block gets     457109
redo size      234309652

SQL> alter table muse drop partition muse1;

Table altered.

Elapsed: 00:00:00.66

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and (n.name = 'redo size' or n.name = 'db block gets') and s.sid=129;

NAME           VALUE
-------------- ----------
db block gets      457165
redo size       234320512

We notice the operation completed very quickly and generated minimal db block gets (just 56) and redo (just 10860 bytes). However, this of course comes at a price:

SQL> select index_name, null partition_name, num_rows, leaf_blocks, status
2  from dba_indexes i where table_name='MUSE' and partitioned = 'NO'
3  union select index_name, i.partition_name, num_rows, leaf_blocks, status
4  from dba_ind_partitions i where index_name like 'MUSE%';

INDEX_NAME      PARTITION_NAME    NUM_ROWS LEAF_BLOCKS STATUS
--------------- --------------- ---------- ----------- --------
MUSE_CODE_I     CODE_P1            1500000        4135 UNUSABLE
MUSE_CODE_I     CODE_P2            1500000        4177 UNUSABLE
MUSE_ID_I                          3000000        8633 UNUSABLE

Both global indexes are now unusable as they haven’t been maintained on the fly and so contain index entries pointing to the now non-existent table partition. So the operation was fast, but it left the global indexes in an unusable state and they now have to be rebuilt.
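
To get them back into a usable state, each impacted index or index partition has to be rebuilt, for example:

SQL> alter index muse_id_i rebuild;

SQL> alter index muse_code_i rebuild partition code_p1;

SQL> alter index muse_code_i rebuild partition code_p2;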

The other option would be to drop the table partition but to also update the global indexes at the same time as follows:

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and (n.name = 'redo size' or n.name = 'db block gets') and s.sid=129;
NAME               VALUE
------------  ----------
db block gets     129615
redo size      103978912

SQL> alter table muse drop partition muse1 update global indexes;

Table altered.

Elapsed: 00:00:13.08

SQL> select n.name, s.value from v$sesstat s, v$statname n where s.statistic# =n.statistic# and (n.name = 'redo size' or n.name = 'db block gets') and s.sid=129;

NAME                VALUE
-------------- ----------
db block gets      185758
redo size       148012132 

We notice this time, the operation has taken considerably longer and has generated many more db block gets (56,143, up from 56) and much more redo (44,033,220 bytes, up from 10,860). So updating the global indexes on the fly comes at a cost, but at least they remain usable at the end of the operation:

SQL> select index_name, null partition_name, num_rows, s.blocks, leaf_blocks, status from dba_indexes i, dba_segments s where i.index_name = s.segment_name and table_name='MUSE' and partitioned = 'NO'
2  union select index_name, i.partition_name, num_rows, s.blocks, leaf_blocks, status from dba_ind_partitions i, dba_segments s where i.partition_name = s.partition_name and index_name like 'MUSE%';

INDEX_NAME      PARTITION_NAME    NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS
--------------- --------------- ---------- ---------- ----------- --------
MUSE_CODE_I     CODE_P1            1000000       4224        4135 USABLE
MUSE_CODE_I     CODE_P2            1000000       4352        4177 USABLE
MUSE_ID_I                          2000000       9216        5849 VALID

Having updated the global indexes and having effectively deleted 1/3 of the table with the lowest ID values, if we were to now try and find the current minimum ID value:

SQL> select min(id) from muse;

   MIN(ID)
----------
   1000001

Execution Plan
----------------------------------------------------------
Plan hash value: 2104594370
----------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |     1 |     6 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE            |           |     1 |     6 |            |          |
|   2 |   INDEX FULL SCAN (MIN/MAX)| MUSE_ID_I |     1 |     6 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------

0  recursive calls
0  db block gets
2787  consistent gets
0  physical reads
0  redo size
528  bytes sent via SQL*Net to client
520  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
1  rows processed

The CBO is trying to use the index via the Index Full Min/Max Scan to quickly find this minimum ID. However, it keeps hitting leaf blocks with nothing but empty/deleted entries due to dropping the table partition, until it gets through roughly 1/3 of all the index leaf blocks before finally finding the first (and so minimum) non-deleted index value. As such, at 2,787 consistent gets, it’s a relatively expensive operation.
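
If no such new data were forthcoming, one option to tidy this up would be to simply coalesce the index, removing the deleted entries and placing the resultant empty leaf blocks on the freelist (a sketch, not actually run in this demo):

SQL> alter index muse_id_i coalesce;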

If however, we were to insert a whole bunch of new rows into the table (note these are rows with an ID value greater than existing rows) and then re-run the same query:

SQL> insert into muse select rownum+3000000, mod(rownum,100000), 'DAVID BOWIE'
from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> select min(id) from muse;

MIN(ID)
----------
1000001

Execution Plan
----------------------------------------------------------

Plan hash value: 2104594370
----------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |     1 |     6 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE            |           |     1 |     6 |            |          |
|   2 |   INDEX FULL SCAN (MIN/MAX)| MUSE_ID_I |     1 |     6 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------

0  recursive calls
0  db block gets
161  consistent gets
0  physical reads
0  redo size
528  bytes sent via SQL*Net to client
500  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
1  rows processed

We notice that at just 161 consistent gets (reduced from 2,787), the Index Full Min/Max Scan is much more efficient as most of the previously empty leaf blocks on the “left hand side of the index” have now been recycled due to inserting the new data into the “right hand side of the index”. As such, we now find the minimum ID value via the index much more efficiently.

So that was how things kinda worked in 11g and beforehand. However, with Oracle 12c, things have now changed as we’ll see in the next post …

12c Partial Indexes For Partitioned Tables Part II (Vanishing Act) July 12, 2013

Posted by Richard Foote in 12c, Local Indexes, Oracle Indexes, Partial Indexes, Partitioning.

In Partial Indexes Part I, we looked at how it was possible with the 12c database  to create a Partial Index based on data from only selected table partitions. The resultant Partial Index can be either a Global or Local Index.

In Part I, we only really looked at Global Indexes, so let’s look at a Local Index example. Using the same Partitioned Table example as before:

SQL> create table pink_floyd (id number, status varchar2(6), name varchar2(30))
indexing off
partition by range (id)
(partition pf1 values less than (1000001),
partition pf2 values less than (2000001) indexing off,
partition pf3 values less than (maxvalue) indexing on);
Table created.

This time, we’ll create a Local Partial Index:

SQL> create index pink_floyd_status_i on pink_floyd(status)
local indexing partial;

Index created.

If we look at the details of the resultant Local Index:

SQL> select index_name, partition_name, num_rows, status, leaf_blocks from dba_ind_partitions where index_name = 'PINK_FLOYD_STATUS_I';

INDEX_NAME           PARTITION_NAME    NUM_ROWS STATUS   LEAF_BLOCKS
-------------------- --------------- ---------- -------- -----------
PINK_FLOYD_STATUS_I  PF1                      0 UNUSABLE           0
PINK_FLOYD_STATUS_I  PF2                      0 UNUSABLE           0
PINK_FLOYD_STATUS_I  PF3                1000000 USABLE          2513

We can see that for those table partitions with INDEXING OFF, the associated Local Indexes have simply been made UNUSABLE. Since Unusable Indexes consume no storage, there is effectively no corresponding index segment for these index partitions.
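
This can be confirmed by looking for the corresponding index segments; only the partition with INDEXING ON should have one (a sketch):

SQL> select partition_name, blocks, bytes from dba_segments
where segment_name = 'PINK_FLOYD_STATUS_I';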

For the one and only PF3 table partition with INDEXING ON, its associated Local Index has been created as normal. So the end result is very similar to the previous Global Index example, in that only those rows from the table partitions with the INDEXING ON property are effectively being indexed.

There is one scenario in which the creation of a Partial Index is not permitted: the creation of a Unique Index, or of a Non-Unique Index used to police a Primary Key or Unique Key constraint. Some examples:

SQL> create unique index pink_floyd_id_i on pink_floyd(id)
indexing partial;
create unique index pink_floyd_id_i on pink_floyd(id) indexing partial
*
ERROR at line 1:

ORA-14226: unique index may not be PARTIAL

SQL> alter table pink_floyd add constraint pink_floyd_pk primary key(id)
using index (create index pink_floyd_id_i on pink_floyd(id) indexing partial);
alter table pink_floyd add constraint pink_floyd_pk primary key(id) using index
(create index pink_floyd_id_i on pink_floyd(id) indexing partial)
*
ERROR at line 1:
ORA-14196: Specified index cannot be used to enforce the constraint.

SQL> create index pink_floyd_id_i on pink_floyd(id) indexing partial;

Index created.

SQL> alter table pink_floyd add primary key(id);
alter table pink_floyd add primary key(id)
*
ERROR at line 1:

ORA-01408: such column list already indexed

It clearly doesn’t make sense to create a Partial Unique Index, or a Partial Non-Unique Index policing a PK or Unique Key constraint, as it would be impossible to use such an index to guarantee the required unique property. With index entries missing for the non-indexed partitions, how can Oracle determine whether a value from a new row already exists or not? It can’t, and hence Oracle doesn’t permit the creation of such a Partial Index.

Partial Indexes can potentially be extremely useful in reducing unnecessary storage requirements, reducing index maintenance overheads and in improving performance by reducing index block accesses.

But they’re only useful (possible) with Partitioned Tables.

I’ll next look at another cool index improvement introduced with the Oracle 12c Database that’s associated with Partitioning, Asynchronous Global Index Maintenance.

12c Partial Indexes For Partitioned Tables Part I (Ignoreland) July 8, 2013

Posted by Richard Foote in 12c, Oracle Indexes, Partial Indexes, Partitioning.

In my opinion, one of the unsung “Heroes” of the new 12c Oracle database is the set of improvements associated with Partitioning. There are lots of really nice capabilities introduced in the 12c database that significantly improve the manageability of partitioned environments.

One of those new capabilities is the ability to now create both local and (importantly) global indexes on only a subset of partitions within a partitioned table. This provides us with the flexibility to, say, only index partitions containing data that makes sense to index, or to not index current partitions where perhaps data insert performance is paramount, etc. Additionally and just as importantly, the CBO is aware of the indexing characteristics of individual partitions and can access partitions in differing manners accordingly.

To illustrate, a simple little demo as usual :) Firstly, I’ll create a partitioned table with the new INDEXING clause:

SQL> create table pink_floyd (id number, status varchar2(6), name varchar2(30))
INDEXING OFF
partition by range (id)
(partition pf1 values less than (1000001),
partition pf2 values less than (2000001) INDEXING OFF,
partition pf3 values less than (maxvalue) INDEXING ON);
Table created.

The INDEXING clause determines whether or not the partition is to be indexed. It can be set at the table level and so set the default behaviour for the table or at the individual partition/subpartition level.

In the above example, I’ve set INDEXING OFF at the table level, so indexing by default is not enabled for the table partitions. Therefore the PF1 partition is not indexed by default. The PF2 partition is explicitly set to also not be indexed, but the PF3 partition is explicitly set (INDEXING ON) to enable indexing, overriding the table level default.
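
Note also that the INDEXING property isn’t fixed at creation time; it can subsequently be changed at the partition level, for example (a sketch, not actually run as part of this demo):

SQL> alter table pink_floyd modify partition pf1 indexing on;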

Let’s now populate the table with some basic data:

SQL> insert into pink_floyd select rownum, 'CLOSED', 'DAVID BOWIE' from dual connect by level <= 3000000;

3000000 rows created.

SQL> commit;

Commit complete.

SQL> update pink_floyd set status = 'OPEN' where id > 2000000 and mod(id,10000)=0;

100 rows updated.

SQL> commit;

Commit complete.

Most of the data has a STATUS column value of ‘CLOSED’ but I’ve updated a few rows within just the last partition with a STATUS set to ‘OPEN’.

Let’s now create an index on this STATUS column and collect table statistics:

SQL> create index pink_floyd_status_i on pink_floyd(status);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'PINK_FLOYD', estimate_percent=>null, cascade=> true, method_opt=>'FOR ALL COLUMNS SIZE 1 FOR COLUMNS STATUS SIZE 5');

PL/SQL procedure successfully completed.

SQL> select index_name, num_rows, leaf_blocks, indexing from dba_indexes where index_name = 'PINK_FLOYD_STATUS_I';

INDEX_NAME             NUM_ROWS LEAF_BLOCKS INDEXIN
-------------------- ---------- ----------- -------
PINK_FLOYD_STATUS_I     3000000        9203 FULL

By default, an index will include all partitions in a table, regardless of the INDEXING table clause setting. So this index covers all 3M rows in the table and currently has 9203 leaf blocks. The new INDEXING column in DBA_INDEXES shows us that this index is a FULL (non-Partial) index.

We can of course get the data of interest (STATUS = ‘OPEN’) via this index now:

SQL> select * from pink_floyd where status = 'OPEN';

100 rows selected.

Execution Plan
----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                     |   964 | 24100 |     4   (0)| 00:00:01 |       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PINK_FLOYD          |   964 | 24100 |     4   (0)| 00:00:01 | ROWID | ROWID |
|*  2 |   INDEX RANGE SCAN                         | PINK_FLOYD_STATUS_I |   100 |       |     3   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"='OPEN')

Statistics
----------------------------------------------------------

0  recursive calls
0  db block gets
110  consistent gets
0  physical reads
0  redo size
4479  bytes sent via SQL*Net to client
609  bytes received via SQL*Net from client
8  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
100  rows processed

However, we can potentially also run a query based on just the last partition as all the ‘OPEN’ statuses of interest only reside in this last partition:

SQL> select * from pink_floyd where status = 'OPEN' and id > 2000001;

100 rows selected.

Execution Plan
----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                     |    99 |  2475 |     4   (0)| 00:00:01 |       |       |
|*  1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PINK_FLOYD          |    99 |  2475 |     4   (0)| 00:00:01 |     3 |     3 |
|*  2 |   INDEX RANGE SCAN                         | PINK_FLOYD_STATUS_I |   100 |       |     3   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ID">2000001)
2 - access("STATUS"='OPEN')

Statistics
----------------------------------------------------------

0  recursive calls
0  db block gets
110  consistent gets
0  physical reads
0  redo size
2787  bytes sent via SQL*Net to client
609  bytes received via SQL*Net from client
8  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
100  rows processed

Currently however, the index includes data from all table partitions, even though we’re only really interested in using the index to retrieve the less common ‘OPEN’ status that resides in only the last table partition. With 12c, there is now the capability to only index those partitions that are of interest to us, which with proper design can also be implemented such that only those column values of interest are included within an index.

I’m going to drop and create the index as a “Partial” Index:

SQL> drop index pink_floyd_status_i;

Index dropped.

SQL> create index pink_floyd_status_i on pink_floyd(status) indexing partial;

Index created.

SQL> select index_name, num_rows, leaf_blocks, indexing from dba_indexes where index_name = 'PINK_FLOYD_STATUS_I';

INDEX_NAME             NUM_ROWS LEAF_BLOCKS INDEXIN
-------------------- ---------- ----------- -------
PINK_FLOYD_STATUS_I     1000000        3068 PARTIAL

The new INDEXING PARTIAL clause means only those table partitions with INDEXING ON are to be included within the index.

Notice how the index, which is a Global, Non-Partitioned Index, now only has 1M entries (not all 3M as previously) and with 3068 leaf blocks is only 1/3 of what it was previously. The INDEXING column now denotes this as a “Partial” index.

If we run the query again that only explicitly references the last “active” table partition:

SQL> select * from pink_floyd where status = 'OPEN' and id >= 2000001;

100 rows selected.

Execution Plan
----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                     |   100 |  2500 |     4   (0)| 00:00:01 |       |       |
|*  1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PINK_FLOYD          |   100 |  2500 |     4   (0)| 00:00:01 |     3 |     3 |
|*  2 |   INDEX RANGE SCAN                         | PINK_FLOYD_STATUS_I |    33 |       |     3   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ID">=2000001)
2 - access("STATUS"='OPEN')

Statistics
----------------------------------------------------------

0  recursive calls
0  db block gets
110  consistent gets
0  physical reads
0  redo size
2787  bytes sent via SQL*Net to client
609  bytes received via SQL*Net from client
8  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
100  rows processed

We see that the index is used as it was previously. Because the ID >= 2000001 predicate means we’re only interested in data that can only reside in the last table partition (the partition with INDEXING ON), the CBO knows the index can be used to retrieve all the rows of interest. If we know the application will only extract data in this manner, all is well with our smaller, Partial index.

However, if it’s possible within the application to perhaps search for STATUS values from other partitions, that have INDEXING OFF:

SQL> select * from pink_floyd where status = 'OPEN';

100 rows selected.

Execution Plan

----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                     |   100 |  2500 |  2474   (1)| 00:00:01 |       |       |
|   1 |  VIEW                                        | VW_TE_2             |    99 |  3465 |  2474   (1)| 00:00:01 |       |       |
|   2 |   UNION-ALL                                  |                     |       |       |            |          |       |       |
|*  3 |    TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PINK_FLOYD          |    33 |   825 |     4   (0)| 00:00:01 | ROWID | ROWID |
|*  4 |     INDEX RANGE SCAN                         | PINK_FLOYD_STATUS_I |   100 |       |     3   (0)| 00:00:01 |       |       |
|   5 |    PARTITION RANGE ITERATOR                  |                     |    66 |  1650 |  2470   (1)| 00:00:01 |     1 |     2 |
|*  6 |     TABLE ACCESS FULL                        | PINK_FLOYD          |    66 |  1650 |  2470   (1)| 00:00:01 |     1 |     2 |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("PINK_FLOYD"."ID">=2000001 OR "PINK_FLOYD"."ID" IS NULL)
4 - access("STATUS"='OPEN')
6 - filter("STATUS"='OPEN')

Statistics
----------------------------------------------------------

0  recursive calls
0  db block gets
16341  consistent gets
8204  physical reads
0  redo size
2787  bytes sent via SQL*Net to client
609  bytes received via SQL*Net from client
8  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
100  rows processed

We notice the index is still used to efficiently find those rows of interest from the last partition, but a Full Table (Partition) Scan is performed to search for data from the other two partitions, which, having INDEXING OFF, means the index contains no entries that reference them. As a result, this query is now much more expensive than it was previously, as the index cannot be used to exclusively find the rows of interest. Within the one execution plan, the CBO uses the index where it can and a full scan of the other partitions where it can’t.

If however we were a little cleverer in how we designed our table and also created table subpartitions based on the STATUS column so that rows of interest resided in their own subpartitions and only set INDEXING ON for these subpartitions:

SQL> create table pink_floyd (id number, status varchar2(6), name varchar2(30))
indexing off
partition by range (id) subpartition by list(status)
subpartition template
(subpartition closed values ('CLOSED') indexing off, subpartition open values ('OPEN') indexing on)
(partition pf1 values less than (1000001),
partition pf2 values less than (2000001),
partition pf3 values less than (maxvalue))
enable row movement;

Table created.

Notice how only the subpartitions with a STATUS of ‘OPEN’ are now to be indexed. If we populate the table with the exact same data as before, we find the table and partial index have the following characteristics:

SQL> select subpartition_position, subpartition_name, num_rows, indexing from dba_tab_subpartitions where table_name = 'PINK_FLOYD';

SUBPARTITION_POSITION SUBPARTITION_NAME      NUM_ROWS IND
--------------------- -------------------- ---------- ---
                    1 PF1_CLOSED              1000000 OFF
                    2 PF1_OPEN                      0 ON
                    1 PF2_CLOSED              1000000 OFF
                    2 PF2_OPEN                      0 ON
                    1 PF3_CLOSED               999900 OFF
                    2 PF3_OPEN                    100 ON

SQL> select index_name, num_rows, leaf_blocks, indexing from dba_indexes where index_name = 'PINK_FLOYD_STATUS_I';

INDEX_NAME             NUM_ROWS LEAF_BLOCKS INDEXIN
-------------------- ---------- ----------- -------
PINK_FLOYD_STATUS_I         100           1 PARTIAL

We can see that only the subpartitions with data of interest are now indexed. The resultant Partial global non-partitioned index is now tiny, with just the 100 index entries of interest residing in a single leaf block.

Just as importantly, a query searching for this data across the whole table is now extremely efficient and can be fully serviced by this tiny Partial index:

SQL> select * from pink_floyd where status = 'OPEN';

100 rows selected.

Execution Plan
----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                     |   100 |  2500 |     2   (0)| 00:00:01 |       |       |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| PINK_FLOYD          |   100 |  2500 |     2   (0)| 00:00:01 | ROWID | ROWID |
|*  2 |   INDEX RANGE SCAN                         | PINK_FLOYD_STATUS_I |   100 |       |     1   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("STATUS"='OPEN')

Statistics
----------------------------------------------------------
0  recursive calls
0  db block gets
16  consistent gets
0  physical reads
0  redo size
4479  bytes sent via SQL*Net to client
609  bytes received via SQL*Net from client
8  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
100  rows processed

The query can now be fully serviced by the Partial index, as all subpartitions that could contain data of interest are indexed. And because the data of interest is all neatly clustered within its own table subpartitions, it can be retrieved with far fewer consistent gets than previously.

If we ever wanted to access those STATUS values of ‘CLOSED’, the CBO can only do so via a Full Table Scan, as such values are not indexed. However, as these values represent the vast majority of rows in the table, the Full Table Scan would be the most appropriate and efficient manner in which to access these rows anyway.
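
As an aside, this is why the table was created with ENABLE ROW MOVEMENT: updating STATUS changes a row’s list subpartition key, so Oracle must physically relocate the row (and, when it moves into an INDEXING ON subpartition, add it to the partial index). A sketch, with an illustrative ID value:

SQL> update pink_floyd set status = 'OPEN' where id = 42;

1 row updated.

SQL> commit;

Commit complete.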

The new Partial Index capabilities introduced in Oracle 12c enable us to easily use global (and local) indexes to index just the data of interest, without having to change the application. Such a capability has many potential uses.

More on Partial Indexes to come soon in Part II.

12c: Intro To Multiple Indexes On Same Column List (Repetition) July 2, 2013

Posted by Richard Foote in 12c, Multiple Indexes, Oracle Indexes.
6 comments

From an indexing perspective, one of the bigger ticket items introduced with Oracle Database 12c is the new capability to create multiple indexes on the same column list. It’s even a feature listed in the New Features Guide :) This can be particularly useful when you want to change the index definition in some way but want to reduce the time in which the corresponding columns might not be indexed.

Prior to 12c, you could not create an index if the same column list was already indexed; the attempt would generate an ORA-01408: such column list already indexed error. So, if you wanted to change an index from being say a B-Tree index to a Bitmap index, or from being Unique to Non-Unique, or from being Non-Partitioned to Partitioned in some manner, etc., then you had to first drop the index and re-create it again as required. This means that for the period in which the index is being re-created (which could be a considerable period for a larger index), the column list is not covered by an index, which might prove to be problematic.

Here, we create a partitioned table and create a Non-Unique, Non-Partitioned index on the ID column:

SQL> create table ziggy (id number, name varchar2(30)) partition by range (id) (partition ziggy1 values less than (1000), partition ziggy2 values less than (2000), partition ziggy3 values less than (maxvalue));

Table created.

SQL> insert into ziggy select rownum, 'DAVID BOWIE' from dual connect by level <=5000;

5000 rows created.

SQL> commit;

Commit complete.

SQL> create index ziggy_id_i1 on ziggy(id);              

Index created.

SQL> alter table ziggy add constraint ziggy_pk primary key(id);

Table altered.

If we decide for whatever reason we want to have say a Unique index instead:

SQL> create unique index ziggy_id_i2 on ziggy(id);
create unique index ziggy_id_i2 on ziggy(id)
                                         *
ERROR at line 1:
ORA-01408: such column list already indexed

No good, we can’t, even if we make it initially INVISIBLE:

SQL> create unique index ziggy_id_i2 on ziggy(id) invisible;
create unique index ziggy_id_i2 on ziggy(id) invisible
                                         *
ERROR at line 1:
ORA-01408: such column list already indexed

Because the index is used to police a PK constraint, we can’t even just drop the index:

SQL> drop index ziggy_id_i1;
drop index ziggy_id_i1
           *
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

We have to first drop or disable the PK constraint, then drop the index, then re-create the index.

SQL> alter table ziggy drop primary key;

Table altered.

SQL> drop index ziggy_id_i1;

Index dropped.

SQL> alter table ziggy add primary key(id) using index (create unique index ziggy_id_i2 on ziggy(id));

Table altered.

This means the constraint is not automatically enforced during this period (unless we disable it in VALIDATE state, which effectively locks the table) and the ID column is not covered by an index for the entire duration of creating the new index.

The 12c database has given us more flexibility in this regard.

Providing we create the index as INVISIBLE (meaning there is only ever the one visible index on the column list), we can now create multiple indexes on the same column list, provided the new index has a different characteristic from the existing indexes. So with a 12c database:

SQL> create unique index ziggy_id_i2 on ziggy(id) invisible;

Index created.

We now have two indexes on the same column list (the ID column). An attempt however to create or alter an index such that two visible indexes have the same column list will fail:

SQL> alter index ziggy_id_i2 visible;
alter index ziggy_id_i2 visible
*
ERROR at line 1:
ORA-14147: There is an existing VISIBLE index defined on the same set of
columns.

This means we can now (say) replace the index policing the PK constraint quicker (or convert a B-Tree to a Bitmap index or convert a Non-Partitioned index to a Partitioned index, etc.) as we don’t now have to wait for the new index to be created first:

SQL> alter table ziggy drop primary key;

Table altered.

SQL> drop index ziggy_id_i1;

Index dropped.

SQL> alter index ziggy_id_i2 visible;

Index altered.

SQL> alter table ziggy add constraint ziggy_pk primary key(id);

Table altered.

We can create as many indexes as we like on the ID column, providing only one index is ever visible at a time and providing all the indexes have some form of differing characteristic. For example, all these indexes could potentially co-exist:

SQL> create index ziggy_id_i3 on ziggy(id) local invisible;

Index created.

SQL> create bitmap index ziggy_id_i4 on ziggy(id) local invisible;

Index created.

SQL> create index ziggy_id_i5 on ziggy(id) reverse invisible;

Index created.

Of course, having multiple indexes on the same column list in this manner is very likely a very bad idea …
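
When juggling invisible indexes like this, a quick sanity check of which index is currently the visible one can be handy (a simple sketch via USER_INDEXES; output omitted):

SQL> select index_name, index_type, uniqueness, visibility, status
  2  from user_indexes where table_name = 'ZIGGY';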

Oracle Database 12c Released – Lots of Indexing Stuff To Talk About June 26, 2013

Posted by Richard Foote in 12c, Oracle Indexes.
10 comments

Finally, Oracle have released the long awaited 12c Database. You can now download it from OTN and other usual places. This is good news as my tongue was getting rather sore from having to bite on it for so long :)

From an indexing point of view, there are lots of fantastic new features, especially from a manageability perspective. The ability to create multiple indexes with the same column list, to partially index tables, enhanced capabilities regarding online operations, enhanced index monitoring capabilities, etc. etc. There’s plenty that I can now discuss here.

Unfortunately, all my presentations have been rejected for Oracle Openworld this year so I won’t be able to present them over there :(

Fortunately I’ll be able to discuss them all here in lots of detail in the coming weeks, so stay tuned :)

Clustering Factor Calculation Improvement Part II (Blocks On Blocks) May 14, 2013

Posted by Richard Foote in 11g, Clustering Factor, Index statistics, Oracle Cost Based Optimizer, Oracle Indexes.
6 comments

My previous post on the new TABLE_CACHED_BLOCKS statistics gathering preference certainly generated some interest :) My blog hits for the week have gone off the charts !!

One of the concerns raised by this new capability was that setting such a preference might result in really unrealistic and inaccurate Clustering Factor (CF) values, especially for those tables that truly have appalling CFs. Although there are certainly some dangers, Oracle has limited the possible “abuse” by ensuring TABLE_CACHED_BLOCKS can only be set to a maximum of 255. This means Oracle will only ignore a maximum of 255 table blocks that have recently been accessed during the CF calculation. For larger tables with truly randomised data patterns, even the maximum 255 setting will make no appreciable difference to the final CF.

A couple of examples to demonstrate.

The first table is a relatively “large” table that has a DOB column that is effectively randomised throughout the table. There are approximately 20,000 different DOB values in a 2 million row table (so each DOB occurs approximately 100 times, give or take).

SQL> create table major_tom (id number, DOB date, text varchar2(30));

Table created.

SQL> insert into major_tom select rownum, sysdate-trunc(dbms_random.value(0, 20000)), 'DAVID BOWIE' from dual connect by level <= 2000000;

2000000 rows created.

SQL> commit;

Commit complete.

Let’s now create an index on this DOB column and have a look at the CF:

SQL> create index major_tom_dob_i on major_tom(dob);

Index created.

SQL> EXEC dbms_stats.gather_table_stats(ownname=>user, tabname=>'MAJOR_TOM', estimate_percent=> null, cascade=> true, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
2  FROM user_tables t, user_indexes i
3  WHERE t.table_name = i.table_name AND i.index_name='MAJOR_TOM_DOB_I';

TABLE_NAME   INDEX_NAME          BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------ --------------- ---------- ---------- -----------------
MAJOR_TOM    MAJOR_TOM_DOB_I       9077    2000000           1988164

So at 1,988,164, the CF is terrible. This is as expected, as the DOB values are all randomised throughout the table. The index is not being used as we had (naively) hoped, so let’s use the new TABLE_CACHED_BLOCKS preference to improve the calculated CF by setting it to the maximum 255 setting and recalculating the index statistics:

SQL> exec dbms_stats.set_table_prefs(ownname=>user, tabname=>'MAJOR_TOM',
pname=>'TABLE_CACHED_BLOCKS', pvalue=>255);

PL/SQL procedure successfully completed.

SQL> EXEC dbms_stats.gather_index_stats(ownname=>user, indname=>'MAJOR_TOM_DOB_I', estimate_percent=> null);

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
2  FROM user_tables t, user_indexes i
3  WHERE t.table_name = i.table_name AND i.index_name='MAJOR_TOM_DOB_I';

TABLE_NAME   INDEX_NAME          BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------ --------------- ---------- ---------- -----------------
MAJOR_TOM    MAJOR_TOM_DOB_I       9077    2000000           1941946

We notice that although the CF has improved marginally, at a whopping 1,941,946 it’s still terrible and the change has made no real appreciable difference. Why ?

Well, let’s do some basic maths here. There are 9077 blocks in the table and the next DOB referenced in the index can potentially be in any one of them. Therefore, the chance of the next DOB being in one of the 255 previously accessed table blocks is only 255/9077 x 100 = approximately 2.8%. So only around 2.8% of the time is the CF likely not to be incremented, and so the CF is only likely to drop by around this 2.8% amount.

Let’s check. (1988164 – 1941946)/1988164 x 100 comes to approximately 2.3%, indeed in the same ballpark as predicted.
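
Both percentages can be computed directly as a quick sanity check (a trivial sketch using the figures from above):

SQL> select round(255/9077 * 100, 1) as pct_cached_window,
  2         round((1988164 - 1941946)/1988164 * 100, 1) as pct_cf_drop
  3  from dual;

PCT_CACHED_WINDOW PCT_CF_DROP
----------------- -----------
              2.8         2.3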

So statistically with such a poor CF on such a “large” table, to limit the CF calculation if any of the last 255 table blocks are referenced is only going to improve things by 2.8% on average. Effectively of no real use at all.

Another example now, but this time with a CODE column that has just 100 distinct values randomly distributed throughout another reasonably “large” 2 million row table. For those mathematically challenged, that means each value occurs approximately 20,000 times, give or take:

SQL> create table ziggy (id number, code number, text varchar2(30));

Table created.

SQL> insert into ziggy select rownum, trunc(dbms_random.value(0, 100)), 'DAVID BOWIE' from dual connect by level <= 2000000;

2000000 rows created.

SQL> commit;

Commit complete.

SQL> create index ziggy_code_i on ziggy(code);

Index created.

SQL> EXEC dbms_stats.gather_table_stats(ownname=>user, tabname=>'ZIGGY', estimate_percent=> null, cascade=> true,
method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
2  FROM user_tables t, user_indexes i
3  WHERE t.table_name = i.table_name AND i.index_name='ZIGGY_CODE_I';

TABLE_NAME   INDEX_NAME          BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------ --------------- ---------- ---------- -----------------
ZIGGY        ZIGGY_CODE_I          7048    2000000            662962

So at 662,962 it’s what I would describe as a “poor to average” CF. It’s not particularly great given there are just 7,048 table blocks, but it’s still some distance from the 2,000,000 row value.

The index is not being used in SQL statements as we (naively) wish, so let’s try and improve things by lowering the index CF by setting the new TABLE_CACHED_BLOCKS preference to the maximum 255 setting:

SQL> exec dbms_stats.set_table_prefs(ownname=>user, tabname=>'ZIGGY',
pname=>'TABLE_CACHED_BLOCKS', pvalue=>255);

PL/SQL procedure successfully completed.

SQL> EXEC dbms_stats.gather_index_stats(ownname=>user, indname=>'ZIGGY_CODE_I',
estimate_percent=>null);

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
2  FROM user_tables t, user_indexes i
3  WHERE t.table_name = i.table_name AND i.index_name='ZIGGY_CODE_I';

TABLE_NAME   INDEX_NAME          BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------ --------------- ---------- ---------- -----------------
ZIGGY        ZIGGY_CODE_I          7048    2000000            662962

We notice to our great disappointment (well, not really) that the CF remains completely unchanged at 662,962 !! Why ?

Again, let’s do some basic maths and consider the data distribution.

The table has some 7048 blocks but each distinct CODE value has some 20,000 occurrences on average. Therefore, each value is going to be found 20000/7048 = roughly 2 to 3 times per block. As the index is in CODE order and, for each CODE, in rowid order, the CF is going to increment for each CODE value for each distinct block we visit. We will therefore only go back to a previously visited table block (except for the 2 to 3 visits to the current block) when the CODE value changes, but this will take us all the way back to the first block, which is always going to be some 7047 blocks away from the current one. As 7047 is much greater than the 255 blocks the CF calculation will now cater for, the CF is going to remain unchanged from the default calculation as a result.

And this is all as it should be, as the fundamental CF is indeed poor for these scenarios, and even looking back the maximum 255 data blocks will not appreciably change the calculated CF.

Of course, if there were no limit, then a TABLE_CACHED_BLOCKS setting of say 7100 would enable the CF to be recalculated as near perfect in the above scenario, which would indeed be a concern. But 255 is the limit and so limits the potential “damage” that can be done.

More on all this to come :)

Important !! Clustering Factor Calculation Improvement (Fix You) May 8, 2013

Posted by Richard Foote in 11g, ASSM, CBO, Clustering Factor, Index statistics, Oracle Cost Based Optimizer, Oracle Indexes.
41 comments

Believe me, this article is worth reading :)

I’m currently not allowed to discuss Oracle 12c Database goodies but I am allowed to discuss things perhaps initially intended for 12c that are currently available and already back-ported to 11g. This includes a wonderful improvement in the manageability of how the Clustering Factor (CF) of an index can now be calculated. Many thanks to Martin Decker for pointing this out to me.

As anyone who has attended my Index Seminars will know, the CF of an index is one of the most important statistics used by the Cost Based Optimizer (CBO) in determining the most efficient execution plan. As such, it has always been an issue for me that the manner in which the CF is calculated has been so flawed.

Basically, the CF is calculated by performing a Full Index Scan and looking at the rowid of each index entry. If the table block being referenced differs from that of the previous index entry, the CF is incremented. If the table block being referenced is the same as the previous index entry, the CF is not incremented. So the CF gives an indication of how well ordered the data in the table is in relation to the index entries (which are always sorted and stored in the order of the indexed values). The better (lower) the CF, the more efficient it would be to use the index, as fewer table blocks would need to be accessed to retrieve the necessary data via the index.
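
To make the mechanics a little more concrete, the following is a rough SQL sketch of the same counting logic (this is not how Oracle implements it internally, and it’s simplified in that it ignores relative file numbers, so it assumes a hypothetical table T with indexed column COL in a single-file tablespace):

SQL> select sum(block_changed) + 1 as clustering_factor
  2  from (select case when dbms_rowid.rowid_block_number(rowid) <>
  3                         lag(dbms_rowid.rowid_block_number(rowid))
  4                           over (order by col, rowid)
  5                    then 1 else 0 end as block_changed
  6        from t
  7        where col is not null);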

However, there’s a basic flaw here. The CF calculation doesn’t take into consideration the fact that the referenced table block, although perhaps different from that of the previous index entry, might already have been recently accessed. As such, during an index scan, the table block being accessed is almost certainly still cached in the buffer cache from the previous access, thereby not reducing the effectiveness of the index in any appreciable manner. A classic example of this would be a table with a few freelists. Although the data being inserted is not ordered precisely within the same data blocks, the data might actually be very well clustered within only a few blocks of each other.

Picture a table with 100 rows being inserted by 2 sessions simultaneously, each inserting 50 rows based on an ordered sequence. With one freelist, the data is basically inserted into one block first and then, once full, into a second table block. The data is therefore perfectly ordered/clustered and the CF will evaluate to a value of 2 on such an indexed column. But with 2 freelists, one session could insert data into one block while the other session inserts into a second block, with the ordered sequenced values being randomly distributed between the 2 blocks. The CF could now potentially evaluate to a value of 100 as the rows are jumbled or “toggled” across the two blocks. This is a much, much worse value (2 vs. 100) that can adversely impact the CBO calculations, although the efficiency of such an index is really almost identical, as both table blocks are certain to be cached during an index scan regardless.

This is also a very common scenario with Automatic Segment Space Management (ASSM) tablespaces as I’ve discussed previously, which of course is now the default these days.
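
For those wanting to reproduce the freelist scenario described above, a sketch (the names are illustrative and MSSM_TS is assumed to be an existing tablespace with manual segment space management, as FREELISTS is ignored in ASSM):

SQL> create table toggle_demo (id number, text varchar2(30))
  2  storage (freelists 2) tablespace mssm_ts;

Table created.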

OK, let’s look at an example scenario. I’ll begin by creating a simple little table, an ordered sequence and a procedure that inserts 100,000 rows into the table:

SQL> create table bowie (id number, text varchar2(30));

Table created.

SQL> create sequence bowie_seq order;

Sequence created.

SQL> CREATE OR REPLACE PROCEDURE bowie_proc AS
  2  BEGIN
  3     FOR i IN 1..100000 LOOP
  4         INSERT INTO bowie VALUES (bowie_seq.NEXTVAL, 'ZIGGY STARDUST');
  5         COMMIT;
  6     END LOOP;
  7  END;
  8  /

Procedure created.

We note the table lives in an ASSM tablespace:

SQL> select table_name, i.tablespace_name, segment_space_management
  2  from dba_tables i, dba_tablespaces t
  3  where i.tablespace_name = t.tablespace_name and table_name='BOWIE';

TABLE_NAME   TABLESPACE_NAME                SEGMEN
------------ ------------------------------ ------
BOWIE        USERS                          AUTO

We next have 3 different sessions that simultaneously run the procedure to load the table. Note that an ordered sequence is used, which means the 3 sessions are randomly grabbing the next sequenced value to insert. The data is therefore basically inserted in order of the ID column; it’s just that the data is distributed across a few blocks as we go along the table, rather than strictly one block after the other.

SQL> exec bowie_proc

PL/SQL procedure successfully completed.

Let’s create an index on the ID (sequenced) column and collect fresh statistics:

SQL> create index bowie_id_i on bowie(id);

Index created.

SQL> EXEC dbms_stats.gather_table_stats(ownname=>user, tabname=>'BOWIE', estimate_percent=> null, cascade=> true, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
  2  FROM user_tables t, user_indexes i
  3  WHERE t.table_name = i.table_name AND i.index_name='BOWIE_ID_I';

TABLE_NAME   INDEX_NAME       BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------ ------------ ---------- ---------- -----------------
BOWIE        BOWIE_ID_I         1126     300000            241465

We notice that although the data in the table is in reality quite well clustered/ordered on the ID column, the actual CF of the index is not reflecting this. At a massive 241,465 it’s an extremely high (bad) CF, much closer in value to the number of rows in the table than to the number of table blocks, as the CF calculation keeps flipping back and forth between differing blocks. With such a high CF, the CBO is therefore going to cost an index scan accordingly:

SQL> select * from bowie where id between 42 and 429;

388 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |   389 |  7780 |   310   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| BOWIE |   389 |  7780 |   310   (1)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ID"<=429 AND "ID">=42)

Statistics
----------------------------------------------------------
0  recursive calls
1  db block gets
1093  consistent gets
0  physical reads
0  redo size
4084  bytes sent via SQL*Net to client
519  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
388  rows processed

Even though only approx. 0.13% of rows are being accessed and more importantly a similar low percentage of table blocks, the CBO has determined that a Full Table Scan (FTS) is the cheaper alternative. This is an all too familiar scenario, all down to the fact the CF is not accurately reflecting the true clustering of the data and subsequent efficiency of the index.

Finally, at long last, there’s now an official fix for this !!

Bug 13262857 (“Enh: provide some control over DBMS_STATS index clustering factor computation”) describes this scenario and currently has patches available that can be applied on both Exadata databases and Oracle versions 11.1.0.7, 11.2.0.2 and 11.2.0.3. The patches (eg. Patch ID 15830250) describe the fix as addressing “Index Clustering Factor Computation Is Pessimistic”. I couldn’t have described it better myself :)

Once applied (the following demo is on a patched 11.2.0.3 database), there is a new statistics collection preference that can be defined, called TABLE_CACHED_BLOCKS. This basically sets the number of table blocks we can assume would already be cached when performing an index scan and which can therefore be ignored when incrementing the CF during statistics gathering. The default is 1 (i.e. as performed presently) but it can be set to a value between 1 and 255, meaning that during the collection of index statistics, Oracle will not increment the CF if the table block being referenced by the current index entry has already been referenced by any of the prior 255 index entries (if set to 255). It basically sets the appropriate parameter in the sys_op_countchg function used to calculate the CF during statistics gathering, so as to not increment the CF if the current table block has already been accessed within the previous “x” index entries.

The TABLE_CACHED_BLOCKS preference can be set by either the DBMS_STATS.SET_TABLE_PREFS, DBMS_STATS.SET_SCHEMA_PREFS or DBMS_STATS.SET_DATABASE_PREFS procedures.
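
As a side note, the current value of the preference can be read back at any time via DBMS_STATS.GET_PREFS (a quick sketch; it returns the default of 1 until the preference is explicitly set):

SQL> select dbms_stats.get_prefs('TABLE_CACHED_BLOCKS', user, 'BOWIE') from dual;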

So let’s now change the TABLE_CACHED_BLOCKS preference for this table and re-calculate the index statistics:

SQL> exec dbms_stats.set_table_prefs(ownname=>user, tabname=>'BOWIE', pname=>'TABLE_CACHED_BLOCKS', pvalue=>42);

PL/SQL procedure successfully completed.

SQL> EXEC dbms_stats.gather_index_stats(ownname=>user, indname=>'BOWIE_ID_I', estimate_percent=> null);

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows, i.clustering_factor
  2  FROM user_tables t, user_indexes i
  3  WHERE t.table_name = i.table_name AND i.index_name='BOWIE_ID_I';

TABLE_NAME   INDEX_NAME       BLOCKS   NUM_ROWS CLUSTERING_FACTOR
------------ ------------ ---------- ---------- -----------------
BOWIE        BOWIE_ID_I         1126     300000              1035

We notice that the CF has now been significantly reduced (down from 241465 to just 1035), reflecting far more accurately the true clustering of the data when considering the actual effectiveness of using the index.

If we now run the same query as before:

SQL> select * from bowie where id between 42 and 429;

388 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3472402785

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |   389 |  7780 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE      |   389 |  7780 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_ID_I |   389 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID">=42 AND "ID"<=429)

Statistics
----------------------------------------------------------
0  recursive calls
0  db block gets
6  consistent gets
0  physical reads
0  redo size
9882  bytes sent via SQL*Net to client
519  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
388  rows processed

We notice the index is now being selected by the CBO. At a cost of 4 (previously the cost of the index-based plan was somewhat greater than the 310 cost of the FTS), this much more accurately reflects the true cost of using the index (notice only 6 consistent gets are performed).

Being able to now set the TABLE_CACHED_BLOCKS preference during statistics collection finally gives us a fully supported and easy method to collect more accurate CF statistics. This in turn can only lead to more informed and accurate decisions by the CBO and ultimately better performing applications. Although available right now via the back-ported patches, this will no doubt all be fully documented once the 12c database is finally released.

I can’t recommend enough the use of this new capability :)

Storage Indexes vs Database Indexes IV: 8 Column Limit (Eight Line Poem) May 1, 2013

Posted by Richard Foote in Exadata, Oracle Indexes, Smart Scans, Storage Indexes.
2 comments

As Exadata Storage Indexes (SI) are purely in-memory structures located on the Exadata storage servers, care needs to be taken in how much memory they can potentially consume. As a result, there is a limit of 8 columns (or 8 SIs) that can be defined for a given 1M storage region at any point in time, even though SIs are much smaller structures than equivalent database indexes. As database indexes are physical constructs, however, there is no such limit on the number of indexes that can be defined for a table. This can become another key difference between SIs and database indexes.

The following table has more than 8 columns, each with differing numbers of distinct values or distributions of data:

SQL> create table radiohead (id number, col1 number, col2 number, col3 number, col4 number, col5 number, col6 number, col7 number, col8 number, col9 number, col10 number, col11 number, col12 number, some_text varchar2(50));

Table created.

SQL> insert into radiohead select rownum, mod(rownum,10), mod(rownum,100), mod(rownum,1000), mod(rownum,10000), mod(rownum,100000), mod(rownum,1000000), ceil(dbms_random.value(0,10)), ceil(dbms_random.value(0,100)), ceil(dbms_random.value(0,1000)), ceil(dbms_random.value(0,10000)), ceil(dbms_random.value(0,100000)), ceil(dbms_random.value(0,1000000)), 'OK COMPUTER' from dual connect by level <=2000000;

2000000 rows created.

SQL> commit;

Commit complete.

SQL> insert/*+ append */ into radiohead select * from radiohead;

2000000 rows created.

SQL> commit;

Commit complete.

SQL> insert/*+ append */ into radiohead select * from radiohead;

4000000 rows created.

SQL> commit;

Commit complete.

SQL> insert/*+ append */ into radiohead select * from radiohead;

8000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'RADIOHEAD', estimate_percent=>null, method_opt=> 'FOR ALL COLUMNS SIZE 1');

Let’s start by running a highly selective query of the ID column:

SQL> select * from radiohead where id = 42;
8 rows selected.

Elapsed: 00:00:00.05

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 8 | 416 | 42425 (1)| 00:08:30 |
 |* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 8 | 416 | 42425 (1)| 00:08:30 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("ID"=42)
 filter("ID"=42)

If we look at the session statistics, we’ll notice that a SI has been created and saved us physical IOs. Note: If you follow the demo, you’ll need to keep track of these statistics after each query or simply reconnect as a new session to ensure a SI has or has not been used.

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 1333.99219
 cell physical IO interconnect bytes returned by smart scan .164878845

OK, let’s now run a selective query on the COL1 column (there are no values 42 in this case and so no rows are returned):

SQL> select * from radiohead where col1=42;
no rows selected

Elapsed: 00:00:00.01

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1 | 52 | 42440 (1)| 00:08:30 |
 |* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 1 | 52 | 42440 (1)| 00:08:30 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("COL1"=42)
 filter("COL1"=42)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 2546.90625
 cell physical IO interconnect bytes returned by smart scan .341438293

Again, a SI has been created and used here. The SI in this case has been extremely beneficial because no data exists for the value 42 (only the values 1 – 10 exist). However, if an existing value were to be selected, the SI would be next to useless, as such a value would exist throughout all 1M storage regions. With just 10 distinct randomly distributed values, this SI has the potential to be a waste of time. But while we search for values that don’t exist, it serves a very useful purpose. An important consideration in what’s to come.

If we now run a query using column COL4:

SQL> select * from radiohead where col4=42;
1600 rows selected.

Elapsed: 00:00:00.68

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1600 | 83200 | 42486 (1)| 00:08:30 |
 |* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 1600 | 83200 | 42486 (1)| 00:08:30 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("COL4"=42)
 filter("COL4"=42)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 2612.64063 <= + 66MB
 cell physical IO interconnect bytes returned by smart scan 32.3048401

OK, this is the really important one to note. Firstly, yes, again a SI has been created and used. Note though that this is not the first SI to be created or used on this table; SIs have already been created and used in the previous two queries on different columns. Nor will it be the most recent SI to be created; more will soon follow. However, this is by far the least effective use of a SI, because of both the selectivity of the query and the distribution of the data. Here, only some 66MB or so of physical IOs have been saved.

We now repeat this process, running a query against a different column, being very selective and ensuring a SI is created and used. We finally reach a point when 8 SIs have been created on the table:

SQL> select * from radiohead where col9=0;
no rows selected

Elapsed: 00:00:00.02

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 15984 | 811K| 42561 (1)| 00:08:31 |
 |* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 15984 | 811K| 42561 (1)| 00:08:31 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("COL9"=0)
 filter("COL9"=0)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 8792.94531
 cell physical IO interconnect bytes returned by smart scan 45.3872757

OK, we’ve now reached the point where  8 SIs have definitely been created on this table.

If we now run a query that could potentially use a SI but isn’t particularly effective, basically on a par to the one we saw created previously on COL4:

SQL> select * from radiohead where col10=42;
1536 rows selected.

Elapsed: 00:00:00.73

Execution Plan
----------------------------------------------------------
Plan hash value: 2516349655

---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1600 | 83200 | 42577 (1)| 00:08:31 |
|* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 1600 | 83200 | 42577 (1)| 00:08:31 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

1 - storage("COL10"=42)
 filter("COL10"=42)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index 8792.94531
cell physical IO interconnect bytes returned by smart scan 45.9288864

We notice that no bytes have been saved here via a SI. We can run this query repeatedly and the results will be the same. No SI is created and no bytes are saved. Although Oracle could potentially create a SI and save some work, the fact we already have 8 SIs created for this table means the limit on the number of SIs has already been reached. 8 is it.

Let’s run another query now using yet another different column (COL12), but this time it’s again a very selective query, much more selective and efficient than the previous query based on COL4 in which a SI had been created:

SQL> select * from radiohead where col12=42;
8 rows selected.

Elapsed: 00:00:00.39

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 19 | 988 | 42607 (1)| 00:08:32 |
 |* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 19 | 988 | 42607 (1)| 00:08:32 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("COL12"=42)
 filter("COL12"=42)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 9526.01563
 cell physical IO interconnect bytes returned by smart scan 53.5218124

This time, however, the number of bytes saved has again gone up, meaning that indeed a new SI has been created and used for column COL12. But this makes 9 SIs in total now for this table, where the limit should be a maximum of 8 ?

Does this mean that a previously created SI has been dropped and replaced by this new one ? If so, which SI is now gone ?

Well, let’s go back to the first SI we created, which is also the one that hasn’t been used for the longest period of time. If we re-run the first query:

SQL> select * from radiohead where id=42;
8 rows selected.

Elapsed: 00:00:00.05

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 8 | 416 | 42425 (1)| 00:08:30 |
 |* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 8 | 416 | 42425 (1)| 00:08:30 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("ID"=42)
 filter("ID"=42)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 10726.9844
 cell physical IO interconnect bytes returned by smart scan 53.5264816

We notice that not only has it used a SI, but it has saved the maximum amount of IO bytes possible, meaning there was no “warming up” process that would indicate a newly created SI. So not only did it use a SI, it clearly used a previously created one. This SI was therefore not impacted by the creation of the “9th” SI.

However, if we now run another query against column COL4, the column whose SI was previously by far the least effective in saving physical IOs:

SQL> select * from radiohead where col4 = 4242;
1600 rows selected.

Elapsed: 00:00:00.73

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1600 | 83200 | 42486 (1)| 00:08:30 |
 |* 1 | TABLE ACCESS STORAGE FULL| RADIOHEAD | 1600 | 83200 | 42486 (1)| 00:08:30 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("COL4"=4242)
 filter("COL4"=4242)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 10726.9844 No Change !!
 cell physical IO interconnect bytes returned by smart scan 54.1522598

This time we notice there’s no change to the number of bytes saved by a SI. No matter how often we run this query now, no SI is used. So the SI that was created previously is now gone as a result of creating the more effective SI on the COL12 column. Indeed there are still just the 8 SIs on this table.

So Oracle will indeed limit the number of SIs to 8 for each table/storage region. However, it’s not simply a case of “first in first served” or some such, with Oracle using (undocumented) performance metrics to determine which 8 SIs/columns to keep. This means it might be possible, in rarer scenarios where more than 8 columns get referenced in SQL statements, for SIs to come and go depending on changing workloads. Another example of where database indexes may yet play a role in Exadata environments, particularly where tables currently have more than 8 indexed columns.
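
To identify such candidate tables, a simple dictionary query can help (a sketch against the current schema; output omitted):

SQL> select table_name, count(distinct column_name) as indexed_cols
  2  from user_ind_columns
  3  group by table_name
  4  having count(distinct column_name) > 8
  5  order by 2 desc;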

Storage Indexes vs Database Indexes Part III: Hidden Values (Hide Away) February 7, 2013

Posted by Richard Foote in Exadata, Oracle Indexes, Smart Scans, Storage Indexes.
3 comments

OK, my holiday to Hawaii is now slowing fading away into distant memory. Time for a new post :)

In my previous post on the differences between Exadata Storage Indexes and Database Indexes Part II, I discussed how the clustering of data within the table is an important factor (pun fully intended !!) in the performance and efficiency of both types of indexes. In this post, I’ll expand a little more on this point and highlight scenarios where a normal database index can significantly improve performance but be totally ineffective as a Storage Index.

Storage Indexes can only determine where required data can’t exist within a (1M default) storage region via the associated Min/Max values of the index. If a required value can’t possibly exist within the Min/Max range boundary, the specific storage region can be ignored and not be accessed during the full scan operation. However, in some scenarios, there can be limitations in the usefulness of only having Min/Max values to work with if both Min and Max values are relatively common and randomly distributed throughout the table. This is not an uncommon scenario when dealing with columns that have data such as code status values.

In the BIG_BOWIE table I’ve been using in this series, I have a FORMAT_ID column that was initially populated with just values 2,4,6,8 and 10, evenly distributed throughout the table. I then updated a relatively small number of these values in the following manner:

SQL> update big_bowie set format_id = 3 where mod(id,10000)=0;
1000 rows updated.

SQL> commit;

Commit complete.

SQL> update big_bowie set format_id = 5 where id between 424242 and 425241;

1000 rows updated.

SQL> commit;

Commit complete.

So I have relatively few FORMAT_ID = 3 (1000 in 10M rows, 0.01% of data) littered throughout the table and a few FORMAT_ID = 5 located in a specific portion/location of the table.

I’ll now create a normal database index on this FORMAT_ID column and a histogram to let the CBO know there are indeed relatively few occurrences of the values 3 and 5 within the table:

SQL> create index big_bowie_format_id_i on big_bowie(format_id);
Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'BIG_BOWIE', method_opt=> 'FOR COLUMNS FORMAT_ID SIZE 10');

PL/SQL procedure successfully completed.
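
To confirm the CBO really does know about the skewed FORMAT_ID distribution, the resulting histogram can be checked in the dictionary (a quick sketch; output omitted):

SQL> select column_name, histogram, num_buckets
  2  from user_tab_col_statistics
  3  where table_name = 'BIG_BOWIE' and column_name = 'FORMAT_ID';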

OK, if we now run a query looking for all the FORMAT_ID = 3:

SQL> select album_id, total_sales from big_bowie where format_id = 3;
1000 rows selected.

Elapsed: 00:00:00.01

Execution Plan
 -----------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 -----------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1000 | 11000 | 72 (0)| 00:00:01 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 1000 | 11000 | 72 (0)| 00:00:01 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_FORMAT_ID_I | 1000 | | 4 (0)| 00:00:01 |
 -----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("FORMAT_ID"=3)

Statistics
 ----------------------------------------------------------
 0 recursive calls
 0 db block gets
 1072 consistent gets
 0 physical reads
 0 redo size
 22984 bytes sent via SQL*Net to client
 1250 bytes received via SQL*Net from client
 68 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1000 rows processed

We notice the CBO has used the index to efficiently retrieve just the 0.01% of required rows because of the high selectivity of the query, even though the overall Clustering Factor of the index is appalling.

It does an even better job when accessing FORMAT_ID = 5, as these specific values are all well clustered and found within a very tight portion of the table:

SQL> select album_id, total_sales from big_bowie where format_id = 5;
1000 rows selected.

Elapsed: 00:00:00.00

Execution Plan
 -----------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 -----------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1000 | 11000 | 72 (0)| 00:00:01 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 1000 | 11000 | 72 (0)| 00:00:01 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_FORMAT_ID_I | 1000 | | 4 (0)| 00:00:01 |
 -----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("FORMAT_ID"=5)

Statistics
 ----------------------------------------------------------
 0 recursive calls
 0 db block gets
 154 consistent gets
 0 physical reads
 0 redo size
 22685 bytes sent via SQL*Net to client
 1250 bytes received via SQL*Net from client
 68 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1000 rows processed

At just 154 consistent gets in this example, the results return almost instantaneously. The database index is extremely effective in these examples because it can point just to the locations within the table where these relatively few rows of interest are located.

Let’s see how Storage Indexes cope in this scenario. First, we make the database index invisible:

SQL> alter index big_bowie_format_id_i invisible;
Index altered.

We now run the same query again (this can be repeated as many times as you like):

SQL> select album_id, total_sales from big_bowie where format_id = 3;
1000 rows selected.

Elapsed: 00:00:00.80

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1000 | 11000 | 36682 (1)| 00:07:21 |
 |* 1 | TABLE ACCESS STORAGE FULL| BIG_BOWIE | 1000 | 11000 | 36682 (1)| 00:07:21 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("FORMAT_ID"=3)
 filter("FORMAT_ID"=3)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 0
 cell physical IO interconnect bytes returned by smart scan .397476196

We notice no matter how often we try, that Storage Indexes have managed to save us precisely nothing. The entire table has had to be scanned and read during the Full Table Scan operation.

SQL> select album_id, total_sales from big_bowie where format_id = 5;
1000 rows selected.

Elapsed: 00:00:00.89

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1000 | 11000 | 36682 (1)| 00:07:21 |
 |* 1 | TABLE ACCESS STORAGE FULL| BIG_BOWIE | 1000 | 11000 | 36682 (1)| 00:07:21 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("FORMAT_ID"=5)
 filter("FORMAT_ID"=5)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 0
 cell physical IO interconnect bytes returned by smart scan .565643311

Same result when accessing FORMAT_ID = 5. Any possible Storage Indexes have been totally ineffective and the Full Table Scan has had to read the entire table.

The result: performance is significantly slower than with the database index visible and in place.

Why ?

The data consists primarily of randomly distributed values 2,4,6,8 and 10, with the odd value 3 and 5 littered around. This means that if created, the Storage Index Min/Max values will effectively be 2 and 10 for all 1M storage regions throughout the entire table.

Therefore, both values 3 and 5 could possibly exist within all/any of the table storage regions as they sit inside the 2 – 10 Min/Max range boundaries. These 3 and 5 values are effectively “hidden” within the Storage Index Min/Max values making Storage Indexes totally ineffective in this scenario, as they can’t skip a thing.

A final point here. If one of these rarer values of interest were to be, say, the value 12, then a Storage Index could be useful when searching for this particular value, as any storage region with a Min/Max range of 2-10 (which therefore can’t contain a value of 12) can be skipped. However, if values 3 and 5 were also of interest, the Storage Index would still only be useful in a subset of the required cases. A database index would therefore still be required to cater for all necessary cases.
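
To illustrate, a search for such an out-of-range value would allow every 2-10 Min/Max storage region to be skipped (a sketch; no rows exist for the value 12 in this demo, so a SI here could save the full scan’s physical IO):

SQL> select album_id, total_sales from big_bowie where format_id = 12;

no rows selected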

As database indexes can directly point to the indexed values of interest, this is another example of where we would likely not want to simply drop a database index in Exadata.

Storage Indexes vs Database Indexes Part II: Clustering Factor (Fast Track) December 19, 2012

Posted by Richard Foote in Clustering Factor, Exadata, Oracle Indexes, Storage Indexes.
2 comments

Two posts in two days !! Well, with Christmas just around the corner, I thought I better finish off a couple of blog posts before I get fully immersed in the festive season :)

The Clustering Factor (CF) is the most important index related statistic, with the efficiency of an index performing multi-row range scans very much dependent on the CF of the index. If the data in the table is relatively well clustered in relation to the index (i.e. it has a “low” CF), then an index range scan can visit relatively few table blocks to obtain the necessary data. If the data is effectively randomised and not well clustered in relation to the index (i.e. has a “high” CF), then an index range scan has to visit many more table blocks and is therefore not as efficient/effective. The CBO will be less inclined to use such an index, depending on the overall selectivity of the query.

It’s something I’ve discussed here many times before.

It’s a very similar story for Exadata Storage Indexes (SI) as well. The better the table data is clustered in relation to the SIs, the more efficient and effective the SIs are likely to be in relation to being able to eliminate accessing storage regions that can’t possibly contain data of interest. By having the data more clustered (or ordered) in relation to a specific SI, the Min/Max ranges associated with the SI are more likely to be able to determine areas of the table where data can’t exist.

For the data I’ll be using in the following examples, I refer you to the previous post where I setup the necessary data.

The following query is on a 10 million row table, based on the ALBUM_ID column that has an excellent CF:

SQL> select * from big_bowie where album_id = 42;
100000 rows selected.

Elapsed: 00:00:01.07

Execution Plan
 ----------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ----------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 100K| 8984K| 1550 (1)| 00:00:19 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 100K| 8984K| 1550 (1)| 00:00:19 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_ALBUM_ID_I | 100K| | 199 (1)| 00:00:03 |
 ----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("ALBUM_ID"=42)

Statistics
 ----------------------------------------------------------
 0 recursive calls
 0 db block gets
 1590 consistent gets
 1550 physical reads
 0 redo size
 9689267 bytes sent via SQL*Net to client
 733 bytes received via SQL*Net from client
 21 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 100000 rows processed

An index range scan access path is selected by the CBO to retrieve the 100,000 rows. At just 1590 consistent gets, with such an excellent CF, the index can very efficiently access just the necessary 100,000 rows of data. Notice that most of these consistent gets are still physical reads (1550). If we re-run the query several times and are able to cache the corresponding index/table blocks in the database buffer cache:

SQL> select * from big_bowie where album_id = 42;
100000 rows selected.

Elapsed: 00:00:00.27

Execution Plan
 ----------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ----------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 100K| 8984K| 1550 (1)| 00:00:19 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 100K| 8984K| 1550 (1)| 00:00:19 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_ALBUM_ID_I | 100K| | 199 (1)| 00:00:03 |
 ----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("ALBUM_ID"=42)

Statistics
 ----------------------------------------------------------
 0 recursive calls
 0 db block gets
 1590 consistent gets
 0 physical reads
 0 redo size
 9689267 bytes sent via SQL*Net to client
 733 bytes received via SQL*Net from client
 21 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 100000 rows processed

The overall execution time reduces from 1.07 to just 0.27 seconds. Not bad at all considering we’re returning 100,000 rows.
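
For reference, keeping the database index out of the picture for the Exadata runs below is simply a matter of making it invisible, so it continues to be maintained but is ignored by the CBO:

SQL> alter index big_bowie_album_id_i invisible;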

However, if we run the same query on the same data with all the smarts turned on in Exadata (with the index made Invisible so that it doesn’t get used by the CBO):

SQL> select * from big_bowie where album_id = 42;
100000 rows selected.

Elapsed: 00:00:00.27

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 100K| 8984K| 36663 (1)| 00:07:20 |
 |* 1 | TABLE ACCESS STORAGE FULL| BIG_BOWIE | 100K| 8984K| 36663 (1)| 00:07:20 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("ALBUM_ID"=42)
 filter("ALBUM_ID"=42)

Statistics
 ----------------------------------------------------------
 1 recursive calls
 0 db block gets
 134834 consistent gets
 134809 physical reads
 0 redo size
 4345496 bytes sent via SQL*Net to client
 73850 bytes received via SQL*Net from client
 6668 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 100000 rows processed

 SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 1042.24219
 cell physical IO interconnect bytes returned by smart scan 9.56161499

We notice that although a Full Table Scan is being performed, the overall performance of the query is practically identical to that of using the index. That’s because the SIs are kicking in here: by saving 1042 MB (approximately 99% of the table — the full scan otherwise covers some 134,809 blocks, or roughly 1 GB at the default 8K block size), Oracle only has to physically access a tiny 1% of the table (basically matching the 1% selectivity of the query itself). SIs based on the well clustered ALBUM_ID column are therefore very effective at eliminating the access of unnecessary data.

If we now run a query based on the TOTAL_SALES column in which the data is randomly distributed all over the place and so the associated index has a very poor CF:

SQL> select album_id, artist_id from big_bowie where total_sales between 42 and 142;
2009 rows selected.

Elapsed: 00:00:01.45

Execution Plan
 -------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |Cost (%CPU)| Time |
 -------------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 2040 | 26520 | 2048 (1)| 00:00:25 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 2040 | 26520 | 2048 (1)| 00:00:25 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_TOTAL_SALES_I | 2040 | | 7 (0)| 00:00:01 |
 -------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("TOTAL_SALES">=42 AND "TOTAL_SALES"<=142)

Statistics
 ----------------------------------------------------------
 1 recursive calls
 0 db block gets
 2150 consistent gets
 2005 physical reads
 0 redo size
 43311 bytes sent via SQL*Net to client
 1987 bytes received via SQL*Net from client
 135 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 2009 rows processed

We notice that although only 2009 rows are retrieved with this query, 2150 consistent gets have been performed (practically one for each row returned). This is actually more than the 1590 consistent gets of the previous example, even though that query returned a full 100,000 rows. This index is therefore nowhere near as efficient or effective in retrieving its data as the index in the previous example.

If all this data can be cached in the buffer cache however, we can again improve overall execution times:

SQL> select album_id, artist_id from big_bowie where total_sales between 42 and 142;
2009 rows selected.

Elapsed: 00:00:00.02

Execution Plan
 ------------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes |Cost (%CPU)| Time |
 ------------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 2040 | 26520 | 2048 (1)| 00:00:25 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 2040 | 26520 | 2048 (1)| 00:00:25 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_TOTAL_SALES_I | 2040 | | 7 (0)| 00:00:01 |
 ------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("TOTAL_SALES">=42 AND "TOTAL_SALES"<=142)

Statistics
 ----------------------------------------------------------
 0 recursive calls
 0 db block gets
 2150 consistent gets
 0 physical reads
 0 redo size
 43308 bytes sent via SQL*Net to client
 1987 bytes received via SQL*Net from client
 135 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 2009 rows processed

So with all the index/table data now cached, we can return the 2,000-odd rows in just 0.02 seconds.

If we now run the same query with the same data in Exadata:

SQL> select album_id, artist_id from big_bowie where total_sales between 42 and 142;
2009 rows selected.

Elapsed: 00:00:01.25

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 2040 | 26520 | 36700 (1)| 00:07:21 |
 |* 1 | TABLE ACCESS STORAGE FULL| BIG_BOWIE | 2040 | 26520 | 36700 (1)| 00:07:21 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("TOTAL_SALES"<=142 AND "TOTAL_SALES">=42)
 filter("TOTAL_SALES"<=142 AND "TOTAL_SALES">=42)

Statistics
 ----------------------------------------------------------
 1 recursive calls
 0 db block gets
 134834 consistent gets
 134809 physical reads
 0 redo size
 47506 bytes sent via SQL*Net to client
 1987 bytes received via SQL*Net from client
 135 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 2009 rows processed

 SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 72.65625
 cell physical IO interconnect bytes returned by smart scan .383415222

We notice that the physical IO bytes saved by the SIs has reduced significantly from the previous example (just 72 MB, down from 1042 MB), even though at just 2,000-odd rows we require much less data than before. In this example, only approximately 7% of the table storage could be eliminated, meaning we still have to access a significant 93% of the table, as the required data could potentially exist throughout the majority of it. The poor clustering of the data in relation to the TOTAL_SALES column has effectively neutralised the associated SIs on the TOTAL_SALES column.

Note also that the 1.25 seconds is as good as it gets when performing a FTS with the fully generated SIs in place. In this case, using a fully cached database index access path can outperform the FTS/SI combination and provide a more efficient and ultimately more scalable method of accessing this data. As the required selectivity on this column is low enough to warrant the use of a database index despite the poor CF, this is again another example of an index we may not necessarily want to automatically drop when moving to Exadata.

Storage Indexes vs Database Indexes Part I: MIN/MAX (Maxwell’s Silver Hammer) December 18, 2012

Posted by Richard Foote in Exadata, Oracle Indexes, Storage Indexes.
add a comment

It’s often stated that in Exadata, you don’t need conventional database indexes anymore as everything runs so damn fast that indexes are simply a waste of time and space. Simply drop all database indexes and things will run just as fast.

Well, not quite …

There are many, many scenarios where database indexes are still critical for optimal performance and scalability in an Exadata environment.

Prompted by a question in an earlier post (thanks Paul), I thought I might start looking at some scenarios where dropping a database index in Exadata would not be the best of ideas.

The first example is the scenario where one wants either the minimum or maximum of a column value. As database index entries are always ordered, this can be found very efficiently by traversing down to either the first or last index leaf block of an appropriate index. There's an execution path, the INDEX FULL SCAN (MIN/MAX), by which the CBO can decide to access an index in this manner.

For example, if I create an index on the ID column of my large DWH_BOWIE table and select the MIN(ID):

SQL> create index dwh_bowie_id_i on dwh_bowie(id);
Index created.

SQL> select min(id) from dwh_bowie;

MIN(ID)
 ----------
 1

Elapsed: 00:00:00.01

Execution Plan
 ---------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1 | 6 | 4 (0)| 00:00:01 |
 | 1 | SORT AGGREGATE | | 1 | 6 | | |
 | 2 | INDEX FULL SCAN (MIN/MAX)| DWH_BOWIE_ID_I | 1 | 6 | 4 (0)| 00:00:01 |
 ---------------------------------------------------------------------------------------------

Statistics
 ----------------------------------------------------------
 1 recursive calls
 0 db block gets
 4 consistent gets
 3 physical reads
 0 redo size
 525 bytes sent via SQL*Net to client
 524 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1 rows processed

With only a handful of consistent gets, we have managed to get the minimum ID value extremely quickly and efficiently.
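
As an aside (my addition, not part of the original demo), note that the INDEX FULL SCAN (MIN/MAX) path caters for one extreme per scan; asking for both MIN and MAX of the column in a single query block typically forces a far more expensive full scan instead. The usual workaround is to split the aggregates into scalar subqueries so that each can use its own MIN/MAX scan:

SQL> select (select min(id) from dwh_bowie) min_id,
            (select max(id) from dwh_bowie) max_id
     from dual;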

However, if we drop this index and re-run the same query in Exadata numerous times:

SQL> select min(id) from dwh_bowie;
MIN(ID)
 ----------
 1

Elapsed: 00:00:41.31

Execution Plan
 ----------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ----------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1 | 6 | 2345K (1)| 07:49:12 |
 | 1 | SORT AGGREGATE | | 1 | 6 | | |
 | 2 | TABLE ACCESS STORAGE FULL| DWH_BOWIE | 640M| 3662M| 2345K (1)| 07:49:12 |
 ----------------------------------------------------------------------------------------

Statistics
 ----------------------------------------------------------
 0 recursive calls
 0 db block gets
 8626963 consistent gets
 8625479 physical reads
 0 redo size
 525 bytes sent via SQL*Net to client
 524 bytes received via SQL*Net from client
 2 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1 rows processed

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 0
 cell physical IO interconnect bytes returned by smart scan 7644.38194

We notice the query takes considerably longer and that the Storage Indexes have been unable to be of any help, with not a single byte saved.

This is because all aggregation-type operations are performed at the database level, not within the storage servers. Even though in theory Storage Indexes could be seen as being of some value here, they are totally ignored by Oracle when retrieving either the minimum or maximum of a column value. Remember also that a key difference between a Storage Index and a Database Index is that a Storage Index does not necessarily have to exist fully for all regions of a given table.

Drop database indexes used in this manner at your peril …

Many more examples to come :)

Exadata Storage Indexes Part V: Warming Up (Here Come The Warm Jets) December 3, 2012

Posted by Richard Foote in Exadata, Oracle Indexes, Storage Indexes.
add a comment

As I mentioned in a previous post, there are a number of Similarities between Storage Indexes and Database Indexes.

One of these similarities is the “warming up” process that needs to take place before indexes become “optimal” after either the Storage Server (in the case of Storage Indexes) or the Database Server (in the case of Database Indexes) is restarted.

When a database server is restarted, the contents of the database buffer cache are lost and have to be reloaded (as well as other components of the SGA, of course). This means that for a period of time, while the instance "warms up", there is a spike in physical IO (PIO) activity. Index range scans, which generally benefit greatly from re-using cached data, therefore need to perform additional PIOs for both the index blocks and the table blocks they reference. The overhead associated with these additional PIOs in turn slows the overall performance of (especially) index-related execution plans.
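
For anyone wanting to reproduce this, a freshly restarted instance can be simulated on a test system (never on production) by flushing the buffer cache:

SQL> alter system flush buffer_cache;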

As a simple illustration, the following query is executed after a flushed buffer cache:

SQL> select * from big_bowie where album_id = 42;
1000 rows selected.

Elapsed: 00:00:01.99

Execution Plan
 ----------------------------------------------------------
 Plan hash value: 2703824528

----------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ----------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1000 | 75000 | 101 (0)| 00:00:01 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 1000 | 75000 | 101 (0)| 00:00:01 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_ALBUM_ID_I | 1000 | | 1 (0)| 00:00:01 |
 ----------------------------------------------------------------------------------------------------
 Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("ALBUM_ID"=42)
 Statistics
 ----------------------------------------------------------
 1 recursive calls
 0 db block gets
 1072 consistent gets
 1005 physical reads
 0 redo size
 91231 bytes sent via SQL*Net to client
 1245 bytes received via SQL*Net from client
 68 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1000 rows processed

There are a considerable number of physical reads associated with having to re-load the contents of the database buffer cache. If we now re-run the query:

SQL> select * from big_bowie where album_id = 42;
1000 rows selected.

Elapsed: 00:00:00.01

Execution Plan
 ----------------------------------------------------------
 Plan hash value: 2703824528

----------------------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ----------------------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 1000 | 75000 | 101 (0)| 00:00:01 |
 | 1 | TABLE ACCESS BY INDEX ROWID| BIG_BOWIE | 1000 | 75000 | 101 (0)| 00:00:01 |
 |* 2 | INDEX RANGE SCAN | BIG_BOWIE_ALBUM_ID_I | 1000 | | 1 (0)| 00:00:01 |
 ----------------------------------------------------------------------------------------------------
 Predicate Information (identified by operation id):
 ---------------------------------------------------

2 - access("ALBUM_ID"=42)
 Statistics
 ----------------------------------------------------------
 0 recursive calls
 0 db block gets
 1072 consistent gets
 0 physical reads
 0 redo size
 91231 bytes sent via SQL*Net to client
 1245 bytes received via SQL*Net from client
 68 SQL*Net roundtrips to/from client
 0 sorts (memory)
 0 sorts (disk)
 1000 rows processed

All the data is now cached and the overall query response time improves considerably. Once a database instance has warmed up and much of the database's hot data-set of interest has been cached, PIOs are minimised and performance improves accordingly.
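
If you're curious just how much of a segment is cached at a given point, a rough sketch (assuming the demo table and sufficient privileges on v$bh) is:

SQL> select count(*) cached_blocks
     from v$bh b, user_objects o
     where b.objd = o.data_object_id
     and o.object_name = 'BIG_BOWIE'
     and b.status <> 'free';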

Much has been made of the fact that Storage Indexes (SIs) need to be created on the fly, but in a similar manner to database indexes, they just need a period of time to warm up and be created/stored in memory to become optimal. The difference is that SIs are actually created on the fly as they're cached in memory, unlike database indexes, which physically pre-exist on disk.

So when an Exadata storage server is re-started, the performance associated with the SIs on the server is initially sluggish as well, while Oracle performs the additional PIOs off disk to create the necessary SIs in memory. Additionally, new queries using new column predicates will also be initially sluggish while any necessary SIs are created and cached in memory. Note that SIs are created on a 1M storage region basis, so it could well take a period of time for an SI to be created across all the associated storage regions of the table. There is also a limit of 8 SIs per 1M storage region, which might limit whether an SI is actually created or preserved (more on this point in a later post).

The following query is executed on a relatively new table and in a new session to capture fresh session statistics:

SQL> select artist_id, total_sales from big_bowie where album_id = 42;
100000 rows selected.

Elapsed: 00:00:00.89

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 100K| 1269K| 36663 (1)| 00:07:20 |
 |* 1 | TABLE ACCESS STORAGE FULL| BIG_BOWIE | 100K| 1269K| 36663 (1)| 00:07:20 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("ALBUM_ID"=42)
 filter("ALBUM_ID"=42)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 114.601563
 cell physical IO interconnect bytes returned by smart scan 1.77637482

The amount of physical IO bytes saved by an SI is initially relatively small, at about 114 MB.

However, when the query is re-executed and the associated SI on the ALBUM_ID column has been fully created:

SQL> select artist_id, total_sales from big_bowie where album_id = 42;
100000 rows selected.

Elapsed: 00:00:00.27

Execution Plan
 ---------------------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | 100K| 1269K| 36663 (1)| 00:07:20 |
 |* 1 | TABLE ACCESS STORAGE FULL| BIG_BOWIE | 100K| 1269K| 36663 (1)| 00:07:20 |
 ---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - storage("ALBUM_ID"=42)
 filter("ALBUM_ID"=42)

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in
 ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME MB
 ---------------------------------------------------------------- ----------
 cell physical IO bytes saved by storage index 1156.84375
 cell physical IO interconnect bytes returned by smart scan 3.41764832

The amount of physical IO bytes saved by SIs increases considerably and overall performance improves as a result.

Restarting a database server will result in sluggish performance for a period until most of the database's hot data is cached. This is also the case when an Exadata Storage server is re-started, in part because the necessary SIs must be recreated and cached.

I’ll next discuss another similarity, that being the importance of the clustering of data within the table to the efficiency of these indexes.
