
Larger Block Tablespace For Indexes Revisited: Part I (The Tourist) February 18, 2009

Posted by Richard Foote in Index Block Size, Index Height, Oracle Indexes, Oracle Myths.

I’ve previously discussed the various issues and myths relating to the so-called benefits of creating a separate, larger block tablespace for indexes and why it’s not recommended and generally a bad idea:

Store Indexes In A Larger Block Tablespace:  Some Thoughts (Big Brother)

Store Indexes In A Larger Block Tablespace: The Multiblock Read Myth (Karma Police)

Store Indexes In A Larger Block Tablespace: The Multiblock Read Myth Part II (The Fly)

Store Indexes In A Larger Block Tablespace: Height Reduction 1/2 Myth (Five Foot One)

Larger Block Tablespace and Small Index Scans – Performance Improvement ? (Let Down)

However, some myths have a habit of lingering 😉

A recent question on reverse indexes (which could so easily have been answered by the person asking the question if they had only just “given it a go”) had me thinking that so many of these myths and misconceptions can be easily challenged and disproven.

Perhaps the most repeated example I’ve seen where the “so-called” benefits of moving indexes into a larger block size tablespace are misunderstood is this one by Robin Schumacher. He demonstrates how by moving an index from an 8K block size to a 16K block size, “the amount of logical reads has been reduced in half”, with consistent gets reducing from 421 to 211, during an Index Fast Full Scan operation.

Sounds impressive, but only if one doesn’t understand what the numbers actually represent and if one doesn’t understand what Oracle actually does under the covers. In actual fact, the reduction in consistent gets in this specific example is somewhat meaningless …

Some folks even go on to say that “When can we “prove” a benefit from an index rebuild?  Here, Robin Schumacher proves that an index that is rebuilt in a larger tablespace will contain more index entries per block, and have a flatter structure”.

So I thought I might demonstrate just how easy it is to “give it a go” and see for yourself whether or not these sorts of claims are actually true.

In Part I, I’m just going to focus on the specific claim that this example somehow proves indexes have a “flatter structure” when rebuilt in a larger block tablespace. That by recreating an index in a larger block tablespace and halving the consistent gets from 421 to 211, the index will somehow have a “flatter structure” as a result.

The key message I want to convey however is how easy it is to actually determine the accuracy of these sorts of claims yourself, simply by “giving it a go”. Trust but verify (or in some cases just verify). I’ll revisit some of the other misconceptions with these claims, such as why the reduction in consistent gets is not as impressive as it sounds, in later posts.

The first thing we need to do is reproduce the example and test the results for ourselves. Unfortunately we don’t have a script to reproduce the data used by Robin, but we have enough clues at hand to reproduce the same demonstration. We basically need to create an index in an 8K block tablespace that performs 421 consistent gets during an Index Fast Full Scan when performing a count(*) SQL operation. So with a little bit of experimenting with different volumes of data, inserting 187,200 numbers into an index produced the necessary volume of data to replicate the scenario. Note: your mileage may vary slightly depending on database version, tablespace options, the specific query you use to test the results, etc.

SQL> create table bowie (id number not null, value varchar2(10));

Table created.

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=187200;

187200 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_idx on bowie(id);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=>true, estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

As in Robin’s example, running the following simple count(*) SQL statement a couple of times to cache the data produced the following results:

SQL> select /*+ index_ffs(bowie) */ count(*) from bowie;

Execution Plan
-------------------------------------------
Plan hash value: 1410776261
-------------------------------------------
| Id  | Operation             | Name      |
-------------------------------------------
|   0 | SELECT STATEMENT      |           |
|   1 |  SORT AGGREGATE       |           |
|   2 |   INDEX FAST FULL SCAN| BOWIE_IDX |
-------------------------------------------
Statistics
----------------------------------------------------------
  0  recursive calls
  0  db block gets
421  consistent gets
  0  physical reads
  0  redo size
412  bytes sent via SQL*Net to client
396  bytes received via SQL*Net from client
  2  SQL*Net roundtrips to/from client
  0  sorts (memory)
  0  sorts (disk)
  1  rows processed

 

OK, so now we have an index that produces 421 consistent gets when performing an Index Fast Full Scan as part of a count(*) SQL operation.

Let’s now look at the height and size of such an index …

SQL> select index_name, blevel from user_indexes where index_name = 'BOWIE_IDX';

INDEX_NAME BLEVEL
---------- ------
BOWIE_IDX       1

SQL> analyze index bowie_idx validate structure;

Index analyzed.

SQL> select height, btree_space, used_space from index_stats;

HEIGHT BTREE_SPACE USED_SPACE
------ ----------- ----------
     2     3336032    2988168

OK, so the index has a height of 2 (or a blevel of 1).

So would rebuilding such an index in a 16K block tablespace really give the index a “flatter structure” ? Will it really reduce the height of the index ? Well, let’s give it a go and see …
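
Note that the rebuild below assumes a 16K block tablespace (TS_16K) and an associated 16K buffer cache already exist. If you’re following along on a database with a default 8K block size, a minimal sketch of the prerequisite setup might look like this (the cache size, datafile name and sizes are purely illustrative assumptions, adjust them to suit your environment):

SQL> alter system set db_16k_cache_size = 32m;

System altered.

SQL> create tablespace ts_16k datafile 'ts_16k01.dbf' size 100m blocksize 16k;

Tablespace created.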

SQL> alter index bowie_idx rebuild tablespace ts_16k;

Index altered.

Let’s ensure our test case matches the one used by Robin and see if the number of consistent gets drops as expected by running the same select count(*) statement a number of times:

SQL> select /*+ index_ffs(bowie) */ count(*) from bowie;

Execution Plan
----------------------------------------------------------
Plan hash value: 1410776261
-------------------------------------------
| Id  | Operation             | Name      |
-------------------------------------------
|   0 | SELECT STATEMENT      |           |
|   1 |  SORT AGGREGATE       |           |
|   2 |   INDEX FAST FULL SCAN| BOWIE_IDX |
-------------------------------------------
Statistics
----------------------------------------------------------
  0  recursive calls
  0  db block gets
211  consistent gets
  0  physical reads
  0  redo size
412  bytes sent via SQL*Net to client
396  bytes received via SQL*Net from client
  2  SQL*Net roundtrips to/from client
  0  sorts (memory)
  0  sorts (disk)
  1  rows processed

Indeed it does: consistent gets have reduced from 421 down to 211, exactly as in Robin’s example. Exciting stuff !! Well, not really. I’ll demonstrate later why these numbers don’t actually mean as much as they appear …

Let’s see if indeed the index does have a “flatter structure” …

SQL> select index_name, blevel from user_indexes where index_name = 'BOWIE_IDX';

INDEX_NAME BLEVEL
---------- ------
BOWIE_IDX       1
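
Remember that INDEX_STATS only reflects the most recent ANALYZE ... VALIDATE STRUCTURE, so the rebuilt index needs to be validated again (exactly as before) prior to re-querying the view:

SQL> analyze index bowie_idx validate structure;

Index analyzed.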

SQL> select height, btree_space, used_space from index_stats;

     HEIGHT BTREE_SPACE USED_SPACE
---------- ----------- ----------
         2     3351776    2985662

No !!

The index in the 16K block tablespace does not have a flatter structure. In fact the index has exactly the same height of 2 as it did previously and practically the same amount of index space.

As you can see, it’s very easy to give it a go, to test and validate these types of claims yourself. So no, even Robin’s “infamous” test case does not in fact “prove” indexes will have a “flatter structure” if rebuilt in a larger block tablespace. This is the first of the key points I want to get across. Just because you rebuild an index in a larger block tablespace, it doesn’t necessarily mean the index height will reduce or that the resultant index will have a flatter structure. In many many cases, depending on the size of the index and the increase in block size, the height of an index will not reduce at all. I’ve explained why this is the case in some detail in this previous post.
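
It’s also easy to see why the height could never have dropped in this particular example. INDEX_STATS (populated by the ANALYZE commands above) breaks the index down into leaf and branch blocks, for example:

SQL> select lf_blks, br_blks, br_rows from index_stats;

With a blevel of just 1, the entire branch “level” is the single root block, and an 8K root block can comfortably hold pointers to several hundred leaf blocks. Halving the number of leaf blocks therefore leaves nothing to flatten; a height of 2 is already the minimum possible for an index with this many entries.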

In fact, in some ways, the index height has now INCREASED, not decreased as a result of moving the index to a larger block size. The index had a “flatter structure” when it was in the 8K block tablespace than it does after it was rebuilt in the 16K block tablespace. Robin’s demo is actually a perfect example of this !!

Why ?

Well, previously we had an index structure that had a height of 2, with each “level” being 8K. Now we have an index structure that also has a height of 2, but each level now consists of 16K index blocks. Previously, to perform an index range scan, we had to read at least 2 x 8K index blocks, or 16K in total. Now we have to read at least 2 x 16K, or 32K in total, when performing an index range scan.

We’ve just rebuilt the index with larger sized blocks. Imagine a new building that has the same number of floors as the older building, but each floor is now double the “size” (height) it was previously. Although the new building is still a 2 storey building, the actual physical height of the building has just doubled … 

And guess what ? For many many common queries and processes, there’s now potentially an additional overhead associated with having to always read in an index block that is double the size.

To be discussed next …

Comments»

1. raj - February 18, 2009

If index stats remained the same even after rebuilding, then how did consistent gets go down? Can you explain that as well?
I enjoy reading your blog and thank you for helping us in understanding the mechanism on working of Oracle indexes.

2. Brian Tkatch - February 19, 2009

If Height * Block Size has such an effect, does that mean a Height of 3 with a Block Size of 8 (24) would be less costly than a Height of 2 with a Block Size of 16 (32)?

3. jeff - February 19, 2009

Hey Richard, just thought I’d let you know the link 5 paragraphs up is broken.

As always, thanks for the great info.

4. joel garry - February 19, 2009

Consistent gets is “Number of times a consistent read was requested for a block from the buffer cache.” If the same stuff is in half as many larger blocks, and the blocks are all sitting there in the buffers ready to go, there ought to be half as many consistent gets.

I guess this just shows that you can’t compare consistent gets across blocksizes, unless you factor the blocksize in. And as the first Robin link intimates, things may be different if you actually start doing physical reads.

Plugging these numbers into the performance guide hit ratio formula shows another way the ratio is useless.

I think Don should be given some small credit for saying in so many words “For most databases, creating multiple blocksizes is not going to make a measurable difference, and the deployment of multiple blocksizes must be carefully evaluated on a case-by-case basis,” whatever other crazyfoxiness he’s done.

5. Frederick Tang - February 19, 2009

I think “consistent gets” represent number of blocks read from buffer cache, where the 8k blocksize index is stored in the default cache, and the 16k blocksize index is stored in the 16k cache?

So…

421 blocks * 8k = 3368k
211 blocks * 16k = 3376k

The amount of bytes read is roughly the same…

Richard, I think I have read somewhere that reducing consistent gets will reduce CPU usage?

6. raj - February 19, 2009

Thanks Frederick and Joel for explaining the reason behind the reduced consistent gets. I will now check for myself whether the response/elapsed time also remains the same in both cases.

7. Richard Foote - February 20, 2009

Hi Brian

There’s a slight penalty one way if Oracle needs to perform more I/O requests, even if it’s only processing more distinct blocks in memory. However, there’s also a slight penalty the other way if Oracle needs to read in, process and search through a larger block in memory.

So in the scenario you suggest, 3 x 8K vs. 2 x 16K, these penalties will basically cancel each other out, with the net effect being a negligible difference in performance.

8. Richard Foote - February 20, 2009

Hi Jeff, thanks, fixed the link 🙂

9. Richard Foote - February 21, 2009

Hi Raj

Others have answered the question. If you double the size of the block, you reduce the number of blocks by roughly 1/2 and so roughly 1/2 the number of CRs. BUT, in Robin’s example, you still end up reading a similar amount of data (via a fast full index scan) so it would be entirely wrong to suggest things would therefore run twice as fast (or 150 times as fast as I’ve seen suggested) IF the only difference were the block size and the reduction of CRs.

I’ll expand on this point later.

10. Richard Foote - February 21, 2009

Hi Joel

Correct, one needs to be careful comparing the CRs across different block sizes. Yes reducing CRs is a good thing but if the cost of each CR increases, then this needs to be considered as well. Also if you don’t reduce the number of CRs but still double the cost of a CR, then this also needs to be factored in as well.

As I’ve discussed, doubling the size of an index block does not necessarily reduce the number of CRs in many scenarios. And when it does, other factors need to be considered as well, which I’ll expand on.

11. Richard Foote - February 21, 2009

Hi Frederick

Yes, decreasing the number of CRs reduces CPU and this is especially the case when the block size doesn’t change (although not always if for example increasing CRs means preventing a sort operation).

But increasing the size associated with a CR also increases the associated CPU, so the benefits are not anywhere near as great. Therefore, reducing the number of CRs by 50% does not reduce the associated overheads by 50%, because we are no longer comparing apples with apples, and so the “savings” are not really as dramatic as they may at first appear.

12. Log Buffer #136: A Carnival of the Vanities for DBAs - February 21, 2009

[…] Richard Foote continues his myth-busting ways in Larger Block Tablespace For Indexes Revisited. […]

13. coskan - September 24, 2009

Richard

Apart from excellent information in this post I have a question nobody asked before

“So with a little bit of experimenting with different volumes of data, inserting 187,200 numbers into an index produced the necessary volume of data to replicate the scenario. ”

How many times did you experiment till you reached this exact number?

Is there a math behind the initial try ?

Richard Foote - October 12, 2009

Hi Coskan

Good question.

To be honest, I can’t remember the exact number of attempts but I basically knew the block size, I knew how many CRs I was after and I knew the row size of my data so it was basically a question of multiplying approx 8000 by approx 420 and dividing by my row size to get very close to the actual size of the table I was after with the number of rows necessary. I think I just missed but got there in my 2nd or 3rd attempt.
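
As a rough back-of-the-envelope version of that calculation (assuming an effective footprint of around 18 bytes per index entry, including block overheads and free space): 8000 bytes x 420 blocks is roughly 3,360,000 bytes, and 3,360,000 / 18 comes to roughly 186,700 entries, which is indeed very close to the 187,200 rows used in the demo.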

Nothing too clever really 😉

14. Book Review: Oracle Tuning: The Definitive Reference Second Edition « Charles Hooper's Oracle Notes - November 10, 2010

[…] Page 149 states, “The amount of logical reads has been cut in half simply by using the new 16K tablespace and accompanying 16K data cache.”  It has been demonstrated to the book author multiple times why this test case is flawed (reference reference2 reference3 reference4). […]
