The CPU Costing Model: A Few Thoughts Part III (Bang Bang) December 21, 2009

Posted by Richard Foote in CBO, Multiblock Reads, System Statistics.

One of the advantages of system statistics and the CPU costing model lies in how the CBO deals with multiblock reads and the impact of the db_file_multiblock_read_count parameter.

When performing a full table (or fast full index) scan, the CBO needs to determine just how many multiblock read operations are likely to be performed so the associated operation can be costed correctly. The CBO simply takes the number of blocks to be read (e.g., for a table, BLOCKS in dba_tables) and divides this by the “effective” multiblock read count.
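
To illustrate with some purely hypothetical numbers, a 10,000 block table scanned with an effective multiblock read count of 8 would be costed at roughly 1,250 multiblock read operations:

SELECT CEIL(10000/8) AS multiblock_reads FROM dual;   -- returns 1250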

With the I/O costing model, the effective multiblock read count value is derived directly from the db_file_multiblock_read_count parameter. However, the CBO doesn’t use the actual db_file_multiblock_read_count value, as it knows that most multiblock read operations are unlikely to read the maximum possible number of blocks.

This is due to a number of factors. For example, a multiblock read operation can’t span extent boundaries. The more common reason, however, is that Oracle will not read a block into the buffer cache that is already cached and so will split a multiblock read operation at the point where a block is already in the cache. Therefore, if the db_file_multiblock_read_count is set to (say) 16 blocks, many of the actual multiblock read operations may read fewer than the maximum 16 blocks if some of the blocks in the table are already cached. So if, within the next 16 possible blocks to be read, the 6th block is already in the cache, the multiblock read will only consist of the 5 uncached blocks and the next multiblock read will commence from the 7th block.

The CBO takes this into consideration and therefore doesn’t simply divide the blocks in the table by the full db_file_multiblock_read_count value. It uses a “fudged” or “adjusted” multiblock read value in its calculations. Jonathan Lewis discusses this very well in his Cost-Based Oracle Fundamentals book and lists the adjusted multiblock read values for the following examples:

db_file_multiblock_read_count of 8 is adjusted to 6.59
db_file_multiblock_read_count of 16 is adjusted to 10.40
db_file_multiblock_read_count of 32 is adjusted to 16.39
db_file_multiblock_read_count of 64 is adjusted to 25.84 
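
Using the same hypothetical 10,000 block table, here’s a quick sketch of how each adjusted value above translates into an estimated number of multiblock read operations, and hence into a progressively lower FTS cost as the parameter increases:

SELECT CEIL(10000/6.59)  AS adj_mbrc_8,
       CEIL(10000/10.40) AS adj_mbrc_16,
       CEIL(10000/16.39) AS adj_mbrc_32,
       CEIL(10000/25.84) AS adj_mbrc_64
FROM   dual;

-- returns 1518, 962, 611 and 387 respectively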

Now these adjusted values are nothing more than globally implemented “guesses”, adjusted with the assumption that the larger the multiblock read operation, the more likely it is that a block will actually be cached and so the less likely it is that the maximum possible multiblock read will actually be performed. Not only are these adjusted values simply guesses, but they’re derived directly from the db_file_multiblock_read_count parameter. By increasing this parameter, you directly reduce the costs associated with an FTS, as you increase the value by which the number of blocks in the segment is divided. Adjusting this parameter therefore needs to be very carefully considered, as it changes not only the number of blocks that might be read per multiblock read but the associated CBO costs as well. Blindly increasing the db_file_multiblock_read_count might lead Oracle to start favouring FTS operations inappropriately, as the increase might not actually result in fewer, more efficient multiblock reads but it will certainly result in lower FTS costs.

System statistics and the CPU costing model simplify and address some of these issues.

The first point is that the CBO no longer uses a blind guess based on the size of the db_file_multiblock_read_count parameter, but rather an “educated” guess based on the actual average size of multiblock read operations in the specific database environment. The MBRC system statistic is the actual average size of a multiblock read operation during the period in which system statistics are collected.
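
For example, the captured MBRC can be viewed directly in the data dictionary, and workload system statistics gathered over a representative period, with something such as the following (assuming appropriate privileges; the 60 minute interval is just an example):

-- View the current workload system statistics, including MBRC
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN';

-- Gather workload system statistics over a representative 60 minute period
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('INTERVAL', interval => 60);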

Like any average or single generic figure, it will not always be right, but at least it’s based on the actual average multiblock read size in the specific database environment. You can’t really ask more of an average figure than for it to be the actual average in your environment.

However, the other important point is that simply increasing the value of the db_file_multiblock_read_count parameter is no longer quite so dangerous, as it no longer directly impacts the cost of multiblock read operations, even if increasing the parameter has little or no actual effect on the reads themselves. Only if the average MBRC actually changes, and is reflected in updated system statistics, will the subsequent CBO costs of an FTS be adjusted.
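
Purely as an illustration (you would normally let DBMS_STATS.GATHER_SYSTEM_STATS capture the real workload average), the MBRC system statistic can also be set manually, and it’s this value, not the db_file_multiblock_read_count parameter, that the CBO now uses when costing an FTS:

-- Manually set the MBRC system statistic (illustration only)
EXEC DBMS_STATS.SET_SYSTEM_STATS('MBRC', 12);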

Therefore, you can potentially apply one of two strategies when setting the db_file_multiblock_read_count parameter with the CPU costing model:

1) Simply set it to the highest supported value in your environment and get the “biggest bang for your buck”, while the actual average MBRC is automatically captured by the system statistics and used for costing purposes by the CBO, or

2) Simply leave the parameter unset and let Oracle automatically determine and set it to the largest physical read size supported by your environment (recommended), as sketched below.
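
A sketch of the second (recommended) strategy, noting that the exact RESET syntax can vary between versions:

-- Remove any explicit setting so Oracle can derive the optimal value itself
ALTER SYSTEM RESET db_file_multiblock_read_count SCOPE=SPFILE SID='*';

-- After a restart, check the automatically derived value
SHOW PARAMETER db_file_multiblock_read_count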

Either way, you allow the multiblock reads, and hence the actual average multiblock read size, to be as large and efficient as possible, but the CBO will only use the actual average multiblock read size achieved in your database environment when determining the cost of FTS operations. This means multiblock reads will be as large and as efficient as possible while still being costed appropriately by the CBO.

Not only are the I/O costs relating to multiblock reads likely to be more accurate in general as a result of using system statistics but, just as importantly, the CBO also takes into consideration the actual CPU related costs of such operations and automatically incorporates these into the final costs as well.

To be discussed next.

So in summary, when using the CPU costing model, don’t set the db_file_multiblock_read_count parameter; let Oracle determine the maximum optimal size for you and ensure the system statistics are accurate and reflect the actual average MBRC size in your database environment.
