
Unconference – Highlight of Oracle OpenWorld 2009 September 23, 2009

Posted by Richard Foote in Richard's Musings.
1 comment so far

I’m about to embark on another overseas adventure, this one taking me right around the world. I’ll first visit the country of my birth, England, for a few weeks before making my way across the Atlantic Ocean to attend Oracle OpenWorld in San Francisco.

This will be my second Oracle OpenWorld. I already have a packed schedule with something planned each evening, so it’s going to be a really busy period but hopefully a lot of fun as well. With 11g Release 2 and the new Sun OLTP Oracle Database Machine to investigate, I’ll try to target the more technical, less marketing-hype presentations and discussion forums if I can.

However this year, I believe one of the real highlights of the whole conference will be the somewhat low key Oracle Unconference (note the photo in the link !!) that will be running for the 3rd time. Held on the 3rd floor of Moscone West, it’s basically a series of presentations that anyone interested in discussing or presenting a topic can simply sign up for and participate in. It’s designed to be a far less “formal” or structured event than the usual conference presentations, with more emphasis on discussion and interaction than PowerPoint slides and the like. That said, anyone can present on any topic in any manner they like and, if it’s of interest, will hopefully attract like-minded folk.

With a whiteboard and laptop in hand, this year I’ll be presenting a Q & A session on Oracle Indexes on Tuesday 13 October at 10am, where anyone can ask me whatever questions relating to Oracle Indexes might be on their minds. If the Oracle questions run out, we can always discuss which David Bowie album is the most influential 🙂

This year, there is a truly excellent list of people who are also presenting at the Unconference, including many Oracle ACE Directors and renowned experts such as Cary Millsap, Greg Rahn, Alex Gorbachev, Kevin Closson and Chris Muir to name but a few, who will be covering a wealth of really interesting, technically focused topics. It’s a fantastic opportunity to see these folks in a small, informal setting where you can actually ask questions and interact in discussions, rather than just be one of the crowd in a typical “big room” OOW presentation.

For those of you lucky enough to attend the conference, I would strongly recommend spending some of your precious time checking out the various presentations on offer at the Unconference.

Hope to meet some of you at Oracle OpenWorld in a couple of weeks !! If you see me wandering around or at the Unconference, please come over and say hello.

The CBO CPU Costing Model and Indexes – Another Introduction September 16, 2009

Posted by Richard Foote in CBO, Index statistics, Oracle Indexes, System Statistics.
11 comments

I’ve previously discussed some basic concepts and formulas regarding how the CBO derives index related costings via the I/O costing model. Time to look at system statistics and the CPU costing model with specific regard to indexes.
 
The first point I would make is that the CPU costing model has some significant improvements over the older I/O costing method and I would strongly recommend adopting the CPU costing model where possible. I’ll explain some of these improvements and advantages over the coming posts.
 
The I/O costing model basically looks at the cost of a specific execution plan in terms of the estimated number of physical I/Os. The fewer the I/Os, the less costly and more efficient the execution plan and the faster the expected response times. There are however a number of shortfalls with this basic I/O costing strategy, in that it doesn’t automatically differentiate between the costs associated with different types of I/Os (eg. between single block and multiblock reads), it doesn’t automatically determine a typical or average size of a multiblock I/O and it doesn’t cost and take into consideration the time and overheads associated with likely CPU resources.
 
The CPU costing model attempts to address these limitations. It automatically takes into consideration discrepancies between the time to complete an average single block I/O versus a multiblock I/O, automatically determines the average size of a multiblock I/O so it can more accurately determine the likely number of multiblock I/Os in a FTS, and automatically determines the expected CPU time for a specific task.
 
To use the CBO CPU costing model, one needs to collect system statistics so that the CBO has this additional information, based on the actual system hardware characteristics (note: since 10g, the hidden parameter _optimizer_cost_model defaults to ‘cpu’ and so the CPU costing model is used by default). You do this with the dbms_stats.gather_system_stats procedure. You can collect “Noworkload” statistics, in which Oracle basically randomly reads the database data files to determine base statistics such as the average I/O seek time, the average I/O transfer speed and the CPU speed. However, I would instead recommend the collection of “Workload” statistics, which are based on the real workload characteristics of your system during the period in which the statistics are gathered (in which case the Noworkload statistics are simply ignored).
 
You can gather Workload system statistics by either running:
 
dbms_stats.gather_system_stats('START') to start the system stats collection process, followed by dbms_stats.gather_system_stats('STOP') to stop the collection process after a typical workload period, or
 
dbms_stats.gather_system_stats('INTERVAL', interval=>120) to collect system workload stats over (say) a 120 minute period.
 
To view the collected system statistics, query SYS.AUX_STATS$.
 
 
SQL> SELECT pname, pval1 FROM SYS.AUX_STATS$
     WHERE pname IN ('SREADTIM', 'MREADTIM', 'MBRC', 'CPUSPEED');
 

PNAME             PVAL1
------------ ----------
SREADTIM              5
MREADTIM             10
CPUSPEED           1745
MBRC                 10

 
The four system statistics that I’ll focus on for now are:
 
SREADTIM – time in milliseconds for a single block I/O
MREADTIM – time in milliseconds for a multiblock I/O
CPUSPEED – millions of CPU cycles per second
MBRC – average number of blocks actually read during multiblock read operations
 
In the above figures, note that a multiblock read on average takes approximately double the time of a single block read and that, on average, 10 blocks are read during a multiblock read operation. This provides the CBO with vital information on how to cost and compare potential execution plans.
 
 
The CBO CPU costing model basically looks at the total time required to complete an execution plan by summing:
 
total time to complete all single block I/O activity +
total time to complete all multiblock I/O activity +
total time to complete all the CPU activity

 
This can basically be calculated by:
 
sum of all the single block I/Os x average wait time for a single block I/O +
sum of all the multiblock I/Os x average wait time for a multiblock I/O +
sum of all the required CPU cycles / CPU cycles per second
 
In theory, this should provide the total response time to service an execution plan. However, to keep the actual “cost” figures calculated by the CBO consistent with the I/O costing model, the CBO divides this total time by the average time for a single block I/O, such that the full formula becomes:
 
(sum of all the single block I/Os x average wait time for a single block I/O + 
 sum of all the multiblock I/Os x average wait time for a multiblock I/O +
 sum of all the required CPU cycles / CPU cycles per second)
/
average wait time for a single block I/O
 
The final “cost” figure, even with the CPU costing model, is therefore still expressed in units of single block I/Os. This is an important point …
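As a rough sketch only (not the CBO’s exact internal arithmetic), this formula can be expressed in a few lines of Python, using the system statistics listed above; the I/O and CPU figures passed in are purely hypothetical:

```python
from math import ceil

# System statistics as listed above
SREADTIM = 5       # average single block read time (ms)
MREADTIM = 10      # average multiblock read time (ms)
CPUSPEED = 1745    # millions of CPU cycles per second

def cpu_model_cost(single_block_ios, multiblock_ios, cpu_cycles):
    """Estimated total plan time, expressed in units of single block I/Os."""
    total_time_ms = (single_block_ios * SREADTIM
                     + multiblock_ios * MREADTIM
                     + cpu_cycles / (CPUSPEED * 1_000_000) * 1000)
    return total_time_ms / SREADTIM
```

A plan consisting of 18 single block I/Os and trivial CPU, for example, still costs 18, just as it would under the I/O costing model.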
 
So how does the CBO determine the value of the various figures within this formula ? Well as we’ll see, the CBO gets the required information both from the system statistics and from the costing formulas previously discussed with the I/O costing model.

However, for index related access paths, there’s some good news regarding being able to simplify matters somewhat.
 
The first bit of good news is that from the perspective of an index access path, there are no multiblock I/Os (except for a Fast Full Index Scan) and so the CPU costing formula can be simplified for indexes to remove the multiblock read component and be just:
 
(sum of all the single block I/Os x average wait time for a single block I/O +
 sum of all the required CPU cycles / CPU cycles per second)
/
average wait time for a single block I/O
 
Secondly, if the CPU component is relatively trivial, it may not be significant enough to register in the final cost. As smaller index scans are likely to consume little CPU, the CPU component can also generally be ignored. This reduces the formula for such index scans to just:
 
(sum of all the single block I/Os x average wait time for a single block I/O)
/
average wait time for a single block I/O
 
However, the average wait time for a single block I/O now becomes redundant in this simplified equation, reducing the cost to now be just:
 
sum of all the single block I/Os
 
Well, the next bit of good news for those who have followed my previous blog entries with regard to the CBO and Indexes is that the previous formulas regarding the I/O costing model are still applicable when determining the sum of all expected I/Os. The sum of all the single block I/Os associated with an index scan is still basically:
 
sum of all the single block I/Os = index blevel + ceil(index selectivity x leaf blocks) + ceil(table selectivity x clustering factor)
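As a quick sketch, this simplified costing can be wrapped in a small function; fed the statistics from the single predicate demo discussed previously (blevel 2, 602 leaf blocks, clustering factor 852, selectivities of 0.01), it reproduces the index scan cost of 9 and total cost of 18:

```python
from math import ceil

def index_scan_cost(blevel, leaf_blocks, clustering_factor,
                    index_sel, table_sel):
    # Index access component: blevel plus leaf blocks visited
    index_cost = blevel + ceil(index_sel * leaf_blocks)
    # Add the table access component (clustering factor portion)
    total_cost = index_cost + ceil(table_sel * clustering_factor)
    return index_cost, total_cost

print(index_scan_cost(2, 602, 852, 0.01, 0.01))   # (9, 18)
```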
 
In other words, for smaller index scan execution plans, the cost calculated by CBO using the CPU costing model is the same as with the I/O costing model. So no, I wasn’t wasting everyone’s time discussing the various formulas using the older I/O costing model 🙂
 
If we run the same demo as I ran previously in my initial post regarding the CBO and Indexes where the total cost of the index access plan was 18, but this time using the system statistics listed above:
 
SQL> alter session set "_optimizer_cost_model" = cpu;
 
Session altered.
 
SQL> SELECT * FROM bowie_stuff2 WHERE id = 420;
 
2000 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 134336835
 
--------------------------------------------------------------------------------
| Id | Operation                   | Name           | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                | 2000 | 36000 |   18   (0) | 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF2   | 2000 | 36000 |   18   (0) | 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | BOWIE_STUFF2_I | 2000 |       |    9   (0) | 00:00:01 |
--------------------------------------------------------------------------------
 

We notice that the cost remains exactly the same at 9 for the index range scan component and exactly the same at 18 for the total execution plan when comparing the I/O costing model with the CPU costing model. Introducing system statistics hasn’t changed things for this particular index related execution plan.
 
And this is a very common observation. As indexes use single block I/Os, as the CBO cost remains as a unit of single block I/Os and as CPU consumption for an index scan is often trivial, the resultant costs for index access paths often remain unchanged with the CPU costing model. 

Previously, we looked at how changing parameters such as the optimizer_index_cost_adj impacts the costings of index related execution plans to create a level playing field between index and FTS execution plans.
 
The key point to make with regard to system statistics and the CPU costing model is that, in general, the system statistics and the associated formula will automatically ensure a level playing field. However, unlike the optimizer parameters, they do so by typically adjusting the associated costs of the FTS rather than the index accesses, as the true costs and wait times associated with multiblock FTS reads are calculated, but are divided by and expressed in units of single block reads.

So rather than decreasing the associated costs of an index access path, system statistics and the CPU costing model will typically create a level playing field by automatically increasing the associated costs of a FTS as appropriate.
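As a rough illustration of why the FTS cost typically increases under the CPU costing model, consider a hypothetical table of 10,000 blocks with the system statistics gathered above (the CPU component is ignored here for simplicity):

```python
from math import ceil

SREADTIM, MREADTIM, MBRC = 5, 10, 10   # system statistics from above
table_blocks = 10000                   # hypothetical table size

# Number of multiblock reads needed to scan the whole table
multiblock_reads = ceil(table_blocks / MBRC)

# Total multiblock read time, expressed in units of single block reads
fts_cost = ceil(multiblock_reads * MREADTIM / SREADTIM)
print(fts_cost)   # 2000
```

With MREADTIM double SREADTIM, each multiblock read counts as two single block read units, so the FTS costs more than a naive count of 1,000 I/Os would suggest.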

To be discussed further …

Lanyon United Masters Div 3 Football Premiers 2009 (We Are The Champions) September 15, 2009

Posted by Richard Foote in Richard's Musings.
7 comments

On Sunday mornings, I play for the mighty Lanyon United Football Club, in the Canberra Masters Division 3 competition. You need to be over 35 to compete, meaning it’s a league specifically for those who are highly skilled, have years of previous experience, but aren’t handicapped by youthful exuberance 🙂

In the previous 2 years, we managed to get to the final series but lost out in the semi-finals. This year we managed to get to the Grand Final where we met our local rivals, the Brindabella Blues.

This was the first Grand Final I’ve ever played in so I was pretty excited, managing only a couple of hours of sleep the night before. I’ve been struggling with injuries for much of the season but a few sore leg muscles weren’t going to stop me from giving it my all. It was a perfect Canberra Spring morning and I just had a good feeling about the day. There was a decent crowd on hand to support us oldies in action when the whistle blew to start the game.

The first half was a pretty close affair with perhaps Brindabella having the better run of play. Within a few minutes of starting, my left calf was feeling pretty tight and sore and I had their speedster on the left wing to contend with (note that “speed” is a relative word in Masters Div 3). It was with some relief when half-time arrived with the game still tied at 0-0. The second half was equally close although we started to gain more control and have the better of the chances. With about 20 minutes to go, I just couldn’t get the final touch on a ball that slipped through the penalty box.

Then with 15 minutes to go, disaster struck. I was off chasing a ball down the wing when my legs just gave out, first tearing my right hamstring and then with the very next stride, tearing my left calf. Ouch !! Down I went like a sack of lumpy potatoes and off I slowly hobbled with the support of a few teammates. My contribution was over but with the game still locked at 0-0, the game was far from over.

Watching helplessly from the sidelines, in a lot of pain, wasn’t easy. We had a number of good chances to win late in the game but full-time arrived with still no score. So it was 10 minutes of extra-time each way then. 5 minutes into the first period, the deadlock was finally broken when a break on the right resulted in a nice cross for Robin, our ever reliable striker, to brilliantly put the ball into the back of the net.

Wooohooo !! We were up 1-0. I was so excited that I forgot I was injured and hurt my legs some more as I jumped up off the chair. Time appeared to just drag for the rest of the game but our superb defence was just impenetrable and we managed to hold on for victory.

Lanyon United 1 Brindabella Blues 0. We had won the game and were Premiers for season 2009 !! 

It was such a great feeling, all the hard work and pain was worth it. And a beer had never tasted so good.

Lanyon 2009 Champions

 

Yes indeed, we’re one hell of a good looking team 🙂 More photos here.

Thanks to all the guys and all our supporters for making it such a fun and rewarding year. I’m already looking forward to defending our title next season.

I visited the physio the next day who was really impressed that I managed to injure 2 different muscles in both legs simultaneously. I should thankfully be OK for my upcoming trips to England and then Oracle OpenWorld in the coming weeks.

 

Back to Oracle indexes and the CPU based CBO in the next couple of days …

Malaysia and Tall Buildings September 7, 2009

Posted by Richard Foote in Richard's Musings, Travel.
3 comments

Canberra is not only the capital of Australia, a fact not particularly well known outside our shores, but it’s also one of the most fabulous cities in the world in which to live. This last fact is not particularly well known within Australia either 😉

It’s a beautiful, peaceful city that offers a wonderful standard of living, with generally lovely weather wrapped within four distinct seasons. Yes, the winters can be a little cool and the summers a touch warm, but having lived in Manchester, England for quite a number of years, the weather isn’t too far from perfect most of the year round.

Another “charm” with Canberra is that it isn’t a big city “Metropolis” with large crowds, big traffic problems and tall buildings. In fact, because of Canberra’s strict building restrictions,  the tallest actual building in Canberra is the somewhat unimpressive Lovett Tower, at just 26 stories and 93 metres in height.

When I visit a “big” city, I therefore get easily impressed by tall buildings and usually walk around staring upwards, walking into passersby and lampposts on a continual basis. I love New York for example and can spend the whole day just walking the streets, staring up at all the fantastic tall buildings.

Last week, I had the pleasure of visiting Kuala Lumpur in Malaysia for the first time, presenting my Index Internals seminar. I only had a few days but I made sure I had time to walk around and have a good look at the very impressive and oh so tall Petronas Twin Towers. At an incredible 88 floors and at almost 453 metres, they’re the tallest twin towers in the world and were the tallest buildings until surpassed by the Taipei 101 in 2004.

 

Kuala Lumpur 2009 047

 

What’s so spectacular though is the skybridge that connects the two buildings on the 41st and 42nd floors. It just seems to hang there suspended in the middle, with what looks like a few fragile legs holding the bridge in place. Although impressive during the day, the buildings seem to sparkle to life in the evenings as they light up in spectacular fashion, like a pair of huge exotic Christmas trees.

I resisted the temptation however to actually go up the towers. Having been up the World Trade Centre in New York, I still get a little nervous about such things.

Shopaholics would also I’m sure be impressed by the 6 levels of shopping available in the massive Suria Shopping Centre, at the bottom of the Petronas Twin Towers. Fortunately, I already had all the David Bowie albums on offer and resisted the temptation to spend too much 😉 I particularly loved this view of the towers from within the shopping centre:

 

Kuala Lumpur 2009 033

 

I had a great time during my brief visit, ate some “interesting” food and met some lovely folks during the seminar. Kuala Lumpur is well worth a visit if you ever get the chance.

OPTIMIZER_INDEX_CACHING Parameter September 1, 2009

Posted by Richard Foote in Oracle Cost Based Optimizer, Oracle Indexes.
10 comments

As previously discussed, the CBO assumes the actual costs and overheads associated with all I/Os to be the same, regardless of the type of I/O, unless told otherwise via the optimizer_index_cost_adj parameter.
 
Another key assumption the CBO makes by default is that all I/Os will be physical I/Os (PIO) and so be relatively expensive to perform and “worthy” of being costed.
 
However, this of course is not always the case, with many of the blocks being requested and accessed by Oracle already cached in the buffer cache. In specific scenarios, the CBO can take the likely caching characteristics of indexes into consideration and reduce the cost of an index related execution path accordingly. Note however this only applies to I/Os associated with index specific blocks, in specific scenarios where the same index is repeatedly accessed.
 
For example, in the case of a nested loop join where the inner table is typically accessed via an index look-up, the same index may be used to access the table many times within the loop. Many of the blocks associated with this index are therefore quite likely to be cached as the index structure is being continually accessed. It’s the same scenario for an index look-up process performed as a result of an IN list condition. For each element in the IN list, an index is often used to look-up the corresponding value in the table, thereby accessing the specific index again and again for each element in the IN list. As the index is continually being accessed, many of its associated blocks are likely to already be cached in memory.
 
The purpose of the optimizer_index_caching parameter is to tell the CBO what percentage of index related blocks are likely to already be cached in the buffer cache during these types of operations and so should not be considered in the overall costings associated with the index related execution path. The default is 0, which means that by default Oracle doesn’t consider any index blocks to ever be cached and all I/Os associated with an index during an index access path need to be treated as PIOs and costed accordingly. If however the optimizer_index_caching parameter is set to say 25, the CBO will consider 25% of all I/Os associated directly with index blocks as likely to already be cached and will therefore reduce the overall cost of index block I/Os by 25%.
 
As discussed previously, the CBO I/O based costing formula is:
 
basic index range scan cost = index blevel + ceil(index selectivity x leaf blocks) + ceil(table selectivity x clustering factor)

The optimizer_index_caching parameter adjusts the formula in the following manner by reducing just the index accesses portion of the formula:

basic index range scan cost = ceil((index blevel + ceil(index selectivity x leaf blocks)) x (1- optimizer_index_caching)) + ceil(table selectivity x clustering factor)
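As an illustrative sketch of this adjusted formula (a simplification of the actual CBO arithmetic):

```python
from math import ceil

def cached_range_scan_cost(blevel, leaf_blocks, clustering_factor,
                           index_sel, table_sel, caching_pct):
    # Only the index access portion is discounted by the caching percentage
    index_part = ceil((blevel + ceil(index_sel * leaf_blocks))
                      * (1 - caching_pct / 100))
    # The table access portion is always costed in full
    return index_part + ceil(table_sel * clustering_factor)
```

With caching_pct set to 0, the formula collapses back to the basic index range scan cost.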

but only for specific index scans such as nested loop joins and IN list conditions where an index is likely to be continually accessed within the same execution path.

 

So if we were to go back to the example I covered in the single predicate demo in the first CBO and Indexes Introduction post as shown below:

SQL> select * from bowie_stuff2 where id = 420;

2000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 134336835

------------------------------------------------------------------------------
| Id | Operation                   | Name           | Rows | Bytes | Cost |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                | 2000 | 36000 |   18 |
|  1 |  TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF2   | 2000 | 36000 |   18 |
|* 2 |   INDEX RANGE SCAN          | BOWIE_STUFF2_I | 2000 |       |    9 |
------------------------------------------------------------------------------

 

we notice that the cost of the execution plan is 18.

If we now change the optimizer_index_caching parameter to say 75, meaning that 75% of all index blocks are now likely to be cached and rerun the query:
 
SQL> alter system set optimizer_index_caching=75;
 
System altered.
 
SQL> select * from bowie_stuff2 where id = 420;
 
2000 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 134336835
 
------------------------------------------------------------------------------
| Id | Operation                   | Name           | Rows | Bytes | Cost |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                | 2000 | 36000 |   18 |
|  1 |  TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF2   | 2000 | 36000 |   18 |
|* 2 |   INDEX RANGE SCAN          | BOWIE_STUFF2_I | 2000 |       |    9 |
------------------------------------------------------------------------------

 

we notice that the cost remains unchanged at 18 and the parameter has had no effect, as the query was based on a single table equality predicate and did not involve either a nested loop or an IN list condition.
 
 
However, if we run the second demo involving an IN list condition as shown below (first resetting the optimizer_index_caching parameter back to 0):
 
 
SQL> alter system set optimizer_index_caching=0;
 
System altered.
 
SQL> SELECT * FROM bowie_stuff2 WHERE id in (20, 30, 420);
 
6000 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 2964430066
 
-------------------------------------------------------------------------------
| Id | Operation                    | Name           | Rows | Bytes | Cost |
-------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |                | 6000 |  105K |   49 |
|  1 |  INLIST ITERATOR             |                |      |       |      |
|  2 |   TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF2   | 6000 |  105K |   49 |
|* 3 |    INDEX RANGE SCAN          | BOWIE_STUFF2_I | 6000 |       |   23 |
-------------------------------------------------------------------------------
 

We note we had a cost of 49. Remember, the cost of 49 was calculated in the following manner as we have 3 elements in the IN list condition:

cost = index blevel + ceil(index selectivity x leaf blocks) + ceil(table selectivity x clustering factor)

= 2 + 3 x ceil(0.01 x 602) + ceil(0.03 x 852)

= 2 + (3 x 7) + 26

= 2 + 21 + 26

= 23 + 26 = 49
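This arithmetic is easily checked; the multiplier of 3 reflects the three IN list elements, each of which probes the index separately:

```python
from math import ceil

blevel, leaf_blocks, clustering_factor = 2, 602, 852
index_sel, table_sel, inlist_elements = 0.01, 0.03, 3

index_cost = blevel + inlist_elements * ceil(index_sel * leaf_blocks)
total_cost = index_cost + ceil(table_sel * clustering_factor)
print(index_cost, total_cost)   # 23 49
```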

 

If we now alter the optimizer_index_caching parameter to 75 and rerun the same IN list query:

SQL> alter system set optimizer_index_caching=75;
 
System altered.
 
SQL> SELECT * FROM bowie_stuff2 WHERE id in (20, 30, 420);
 
6000 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 2964430066
 
-------------------------------------------------------------------------------
| Id | Operation                    | Name           | Rows | Bytes | Cost |
-------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |                | 6000 |  105K |   32 |
|  1 |  INLIST ITERATOR             |                |      |       |      |
|  2 |   TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF2   | 6000 |  105K |   32 |
|* 3 |    INDEX RANGE SCAN          | BOWIE_STUFF2_I | 6000 |       |    6 |
-------------------------------------------------------------------------------
 

we notice the cost has been reduced from 49 down to 32. How has setting optimizer_index_caching to 75 changed the costs ?
 

basic index range scan cost = ceil((index blevel + ceil(index selectivity x leaf blocks)) x (1- optimizer_index_caching)) + ceil(table selectivity x clustering factor)

= ceil((2 + 3 x ceil(0.01 x 602)) x (1 - 0.75)) + ceil(0.03 x 852)
 
= ceil((2 + (3 x 7)) x 0.25) + 26
 
= ceil((2 + 21) x 0.25) + 26
 
= 6 + 26 = 32
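Again, the adjusted arithmetic can be checked directly:

```python
from math import ceil

caching = 75 / 100   # optimizer_index_caching = 75

# Only the index access portion is discounted by the caching percentage
index_cost = ceil((2 + 3 * ceil(0.01 * 602)) * (1 - caching))
total_cost = index_cost + ceil(0.03 * 852)
print(index_cost, total_cost)   # 6 32
```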

 
So whereas previously there were 23 index block I/Os, this has been reduced down to just 6. Note that the I/Os and associated costs of accessing the actual table blocks within the index scan remain unaltered.

So how to set this parameter in a database ?  Well, there are a number of issues with it all.

Firstly, as with the optimizer_index_cost_adj parameter, there’s only the one “global” parameter (for the system or session), which means any value needs to be averaged out for all indexes and for all situations in which this parameter can have an effect. However, some indexes may for example be quite small and heavily accessed and as such quite likely to have most index blocks cached at any point in time (including leaf blocks), whereas other indexes may be quite huge and rarely and randomly accessed, which means perhaps only the branch level blocks are likely to be cached, even during a (say) IN list operation. As discussed previously, with all averages there will likely be examples where the value is appropriate, too high or too low, depending on the characteristics of specific indexes.

Secondly, the poor table related blocks don’t have an equivalent parameter, so Oracle not only assumes the table blocks within an index scan will be PIOs, but that a FTS will only ever consist of PIOs, which might not always be the case either. So while we might make a reasonable guesstimate of the likelihood of an index block being cached (say via the buffer cache hit ratio, a study of the v$bh view, etc.), the CBO makes no such allowances for the possible caching characteristics of table related blocks. Yes, index blocks are more likely to be cached, especially during the specific scenarios in which the optimizer_index_caching parameter has an effect, but that doesn’t mean table blocks will always be PIOs. Therefore, simply setting this parameter to what might appear a reasonable generalised index caching value might still run the risk of favouring indexes unduly, even though it only impacts the index accessed blocks in the costing formula, as the CBO doesn’t make any such cost allowances for table blocks that might be cached in a FTS.

In the ideal world, we would have some idea of the caching characteristics of all individual indexes and tables and, based on the segments being accessed and their associated caching characteristics, have the CBO make the necessary adjustments to its costing estimates in an execution path. Until we reach such an ideal world (which might not be that far away BTW), I basically recommend not setting this parameter at all and again simply ensuring you use accurate system statistics and have accurate enough segment statistics.

I would only recommend setting this parameter if and when you find the CBO is commonly not choosing appropriate indexes for the above mentioned scenarios when perhaps it should, and a slight “nudge” of costs in the right direction is sufficient to address the issues. The optimizer_index_caching parameter is not quite as “dangerous” if set incorrectly as the optimizer_index_cost_adj parameter can be, as it only impacts the “half” of the formula relating directly to index block I/Os and not the table block I/Os, which often constitute the greater proportion of overall I/Os in many index range scan operations (although as my example above shows, this depends as well).

However, with both of the optimizer_index parameters set, they can both have a hand in reducing the overall costs of an index related execution plan. The optimizer_index_caching parameter first impacts the cost of just the half of the formula relating to index block I/Os as shown above, and then the optimizer_index_cost_adj parameter further impacts the overall resultant cost. So if we were to run the IN list query again, but this time also set the optimizer_index_cost_adj to say 25, as well as leaving optimizer_index_caching at 75:

SQL> alter system set optimizer_index_cost_adj=25;

System altered.

SQL> SELECT * FROM bowie_stuff2 WHERE id in (20, 30, 420);

6000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2964430066

-------------------------------------------------------------------------------
| Id | Operation                    | Name           | Rows | Bytes | Cost |
-------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |                | 6000 |  105K |    8 |
|  1 |  INLIST ITERATOR             |                |      |       |      |
|  2 |   TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF2   | 6000 |  105K |    8 |
|* 3 |    INDEX RANGE SCAN          | BOWIE_STUFF2_I | 6000 |       |    2 |
-------------------------------------------------------------------------------

 

We note that the cost of the execution plan has been further reduced from 32 down to just 8. Basically, it’s just the previous cost of 32 x optimizer_index_cost_adj = 32 x 0.25 = 8.
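A quick check of the arithmetic confirms how the two parameters compound (using ceil for the rounding, a simplifying assumption on my part):

```python
from math import ceil

cost_adj = 25 / 100   # optimizer_index_cost_adj = 25

# Costs already discounted by optimizer_index_caching = 75 (from above)
index_cost = ceil(6 * cost_adj)    # matches the index range scan line
total_cost = ceil(32 * cost_adj)   # matches the total plan cost
print(index_cost, total_cost)   # 2 8
```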

However, rather than setting either of these parameters, I would simply recommend the appropriate use of system statistics and the CPU costing model as I’ll discuss later.