
Store Indexes In A Larger Block Tablespace: The Multiblock Read Myth (Karma Police) March 18, 2008

Posted by Richard Foote in Index Block Size, Index statistics, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Oracle Myths, Performance Tuning, Tablespace Management.

One of the great myths surrounding the use of differing block sizes is that storing indexes in larger block sizes somehow dramatically improves the performance of index related multiblock reads.

Oracle performs index multiblock reads when performing an Index Fast Full Scan, where it basically treats the index structure as a skinny version of the table. It reads the entire index structure, multiple blocks at a time, “throwing away” any non-leaf blocks as it stumbles across them.

The theory goes that by storing indexes in larger size blocks, we would obviously have fewer index related blocks. If we need to read the entire index, as we do with an Index Fast Full Scan, surely it must be more efficient if we have fewer, larger index blocks.

The evidence looks convincing. Here’s a link to an extract from a book by Robin Schumacher where he clearly shows a dramatic “improvement” by using an index tablespace with double the block size. In one query using an 8K block index, the number of consistent reads during an Index Fast Full Scan was 421. However, when the same index was recreated as a 16K block index, the same query used only 211 consistent gets, roughly half the previous figure.

Conclusive “proof” that the 16K block index improved performance, wouldn’t you say?

Well actually, it’s only conclusive proof that the number of consistent gets has dropped; whether it actually improves “performance” is another thing entirely.

There are a couple of little “details” that many don’t quite appreciate. The devil is always in the details …

The first point to note is that when Oracle performs a multiblock read, it uses the value in the db_file_multiblock_read_count parameter to determine how many blocks to read per multiblock read (with system statistics, Oracle itself can determine how best to set this value).

So if the db_file_multiblock_read_count value is set to, say, 16, Oracle will attempt to read as many as 16 blocks at a time during a multiblock read operation.

Note this value is based on the default block size of the database. So if the default block size is 8K and the db_file_multiblock_read_count is 16, Oracle will try and read 16 x 8K blocks at a time during a multiblock read operation.
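
If you want to check or experiment with this yourself, the parameter can be viewed and set at the session level from SQL*Plus, along these lines:

-- Display the current multiblock read count (SQL*Plus)
SHOW PARAMETER db_file_multiblock_read_count

-- Set it just for the current session, here to the value used in the examples below
ALTER SESSION SET db_file_multiblock_read_count = 16;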

However, if there’s a non-default block size segment (say 16K), Oracle will adjust the number of blocks actually read during a multiblock read operation so that the overall size of the multiblock read remains the same as it would be with the default block size.

So if the db_file_multiblock_read_count is 16 and the default block size is 8K, a multiblock read of an object in a 16K tablespace will only read 8 blocks at a time (and not 16). A multiblock read of an object in a 2K tablespace will read 64 blocks at a time.

The actual size of a multiblock read therefore is identical regardless of the block size of an object within a database.
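
As an aside, to even have a non-default block size segment to test with, the matching buffer cache must be configured first. A quick sketch (the tablespace name, datafile and sizes here are purely illustrative):

-- A 16K block tablespace needs its own 16K buffer cache configured
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Create the 16K tablespace and rebuild the example index into it
CREATE TABLESPACE idx_16k DATAFILE '/u01/oradata/idx_16k01.dbf' SIZE 100M BLOCKSIZE 16K;

ALTER INDEX bowie_i REBUILD TABLESPACE idx_16k;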

An easy way to highlight this is to simply trace a session and see the specific size of corresponding multiblock read operations.
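
For example, enabling extended SQL trace with wait events before running the query will record each scattered read and its block count:

-- Enable extended SQL trace, including wait events, for the current session
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- The Index Fast Full Scan query from the example
SELECT /*+ index_ffs(bowie bowie_i) */ count(*) FROM bowie WHERE date_field > '01-JAN-1980';

-- Turn tracing off again
ALTER SESSION SET events '10046 trace name context off';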

A sample from a trace of an Index Fast Full Scan on an 8K block index (with db_file_multiblock_read_count set to 16) looks like this:

WAIT #1: nam='db file scattered read' ela= 1487 file#=8 block#=1050 blocks=16 obj#=78294 tim=615409554677
WAIT #1: nam='db file scattered read' ela= 1377 file#=8 block#=1066 blocks=16 obj#=78294 tim=615409557777
WAIT #1: nam='db file scattered read' ela= 1143 file#=8 block#=1082 blocks=16 obj#=78294 tim=615409561563

Note that Oracle is reading 16 x 8K blocks (128K) per multiblock read operation.

However, when the index is recreated in a 16K block size tablespace, the Index Fast Full Scan looks like this:

WAIT #1: nam='db file scattered read' ela= 1413 file#=6 block#=14 blocks=8 obj#=78296 tim=626802128684
WAIT #1: nam='db file scattered read' ela= 1447 file#=6 block#=22 blocks=8 obj#=78296 tim=626802131649
WAIT #1: nam='db file scattered read' ela= 2014 file#=6 block#=30 blocks=8 obj#=78296 tim=626802135222

Note that Oracle is now only reading 8 x 16K blocks (128K) per multiblock operation.

Both indexes are effectively doing exactly the same work; both are reading up to 128K of data per multiblock read …

It’s like paying someone $50 per 1/2 hour of work and then deciding to make things more “efficient” by paying them $100 per hour of work instead. In the end, you’re still just paying them $800 for an 8 hour day’s work regardless …

Note that a larger block size does carry less block overhead overall, as there are fewer actual blocks, so the overall size of an index may reduce a little, depending on the index size and the difference in block sizes. Any possible improvement is therefore restricted to the potential saving in the overall index size. With many databases having a default block size of 8K and a maximum block size restricted to 16K, these savings may be minimal or non-existent.
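
To see how much (if anything) an index actually shrinks in a larger block tablespace, simply compare its leaf block count and space usage before and after the rebuild, for example:

-- Populates the session INDEX_STATS view (note this locks the underlying table while it runs)
ANALYZE INDEX bowie_i VALIDATE STRUCTURE;

-- Leaf blocks, branch blocks, total B-tree space and space actually used
SELECT lf_blks, br_blks, btree_space, used_space FROM index_stats;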

This demo on the impact of different block sizes on multiblock read operations shows how Oracle actually performs the same sized reads when performing multiblock reads from differing block sized tablespaces, with the performance of the index in the larger block size tablespace being somewhat worse in this specific example.

With Oracle effectively performing identical work behind the scenes, the performance between different block size tablespaces is likely to be similar. You’re still paying $800 a day regardless …

Although it’s often claimed that multiblock reads are one of the key areas where larger index block sizes are beneficial, a claim generally based on the simplistic fact that the number of consistent reads is reduced, the reality of the situation is somewhat different …

Comments»

1. Brian Tkatch - March 19, 2008

Thanx for the informative article Richard.

BTW, why do you use a comma in the hint?
>SELECT /*+ index_ffs (bowie, bowie_i) */ count(*) FROM bowie where date_field > '01-JAN-1980';

Doesn’t it work just fine without it?


2. Jonathan Lewis - March 20, 2008

Richard,

Since the Schumacher demo caches its data so that it can ignore the issue of disk I/O times, it’s worth pointing out that even the ‘memory only’ argument is still suspect given the fact that the query selects “count(*)”.

As you pointed out, all we see is the count of consistent reads. But count(*) has a special mechanical optimisation that allows it to use the ‘rowcount’ value in the block if the block is clean – so the test avoids any CPU costs of locating data in the block.

When you do “typical” OLTP work, a large fraction of the CPU time for a “logical I/O” goes into the code path for latching, pinning and unpinning the block. But when you start doing block scans, the amount of CPU time spent actually using the block can become the important bit.

Change count(*) to (e.g.) sum(second_column) and you may find that whilst the buffer gets are higher with the smaller blocks the CPU consumption is lower – because tracking through a large block to find rows can be more CPU intensive than tracking through small blocks – CPU data cache sizes become significant.
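
Something along these lines (just a rough sketch; second_column simply stands in for a column that is actually part of the index, otherwise the fast full scan path no longer applies):

-- Compare consistent gets and elapsed time for the two aggregates (SQL*Plus)
SET AUTOTRACE TRACEONLY STATISTICS
SET TIMING ON

-- count(*) can use the cached row count when the block is clean
SELECT /*+ index_ffs(bowie bowie_i) */ count(*) FROM bowie WHERE date_field > '01-JAN-1980';

-- Summing a real column forces Oracle to locate and read each index entry
SELECT /*+ index_ffs(bowie bowie_i) */ sum(second_column) FROM bowie WHERE date_field > '01-JAN-1980';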

Somewhere I have a script that emulates the Schumacher test, but shows better timing from smaller block sizes. I wrote it a few years ago, but I’ll see if I can dig it out some time, test if it’s still true, and put it on my blog.

Regards
Jonathan Lewis

http://www.jlcomp.demon.co.uk


3. Richard Foote - March 20, 2008

Hi Jonathan

Yes, all very good points, thank you.

I plan to discuss sometime the CPU, disk and memory footprints associated with using larger index block sizes, especially in OLTP environments.

I find the Schumacher demo to be ultimately meaningless as reducing CRs doesn’t really mean anything if it results in no change or even an increase in CPU and response times.

I really don’t care if I’m billed at $50 per 1/2 hour or at $100 per hour if I end up getting paid the same regardless, although if it requires extra administration costs to be billed by the hour, why bother?

I might not be so agreeable however if I’m forced to work the full hour to get paid as a result 😉


4. Val - March 20, 2008

Jonathan,

I am not sure I agree with this bit:


Change count(*) to (e.g.) sum(second_column) and you may find that whilst the buffer gets are higher with the smaller blocks the CPU consumption is lower – because tracking through a large block to find rows can be more CPU intensive than tracking through small blocks – CPU data cache sizes become significant.

Do you mean L1/L2 CPU caches?

According to some recent research, L2 D-cache misses are indeed the major component of CPU cache delays in DSS queries/sequential scans. However, poor L2 D-cache usage is already caused by insufficient data locality at the row level; the summed column bits are already too far apart inside the page.

Besides, it appears that page traversal overhead is a rather substantial component too, now due to the instruction cache misses.

Therefore, one would expect that fewer pages, or larger blocks, should incur *less* cache related overhead with row-level non-locality cache misses being the same.

I do not doubt your experimental results, but the non-intuitive large block performance deterioration must have some other explanation (perhaps increased virtual memory management related overhead with larger pages; in this case, however, the time should be accounted as system time rather than user time).

Thanks.

Val


5. Log Buffer #89: a Carnival of the Vanities for DBAs - March 21, 2008

[…] Richard Foote’s Oracle Blog debunks the myth of increased performance when building indexes in larger block size tablespaces. […]


6. L’approccio scientifico, pensieri sparsi « Oracle and other - March 25, 2008

[…] a recent post by Richard Foote, I felt very much in tune with the BAAG party and found the discussion around the topic (one must […]


7. Book Review: Oracle Tuning: The Definitive Reference Second Edition « Charles Hooper's Oracle Notes - November 10, 2010

[…] Page 149 states, “The amount of logical reads has been cut in half simply by using the new 16K tablespace and accompanying 16K data cache.”  It has been demonstrated to the book author multiple times why this test case is flawed (reference reference2 reference3 reference4). […]


Richard Foote - November 10, 2010

I highly recommend reading this detailed review by Charles Hooper of the first pages of Don Burleson’s so-called Oracle Tuning Definitive Reference. Charles raises the questions and issues that, of course, any decent book on the subject should be answering …

At least pages 31 and 32 might be error free 🙂


