
3 Steps To Performance Tuning (Working Class Hero) July 3, 2008

Posted by Richard Foote in Oracle Opinion, Performance Tuning, Richard's Musings.
8 comments

Last night, I answered a question on the OTN Database forum regarding Database Re-Orgs and Performance Tuning. I thought it might be worthwhile sharing my response here as it’s something I feel quite strongly about.

Basically my response to the question of what basic steps one should follow when performing performance tuning was:

1) Identify an actual problem that needs addressing, one that’s problematic to the business, not one that only exists in some statistic or in one’s imagination

2) Determine what’s actually causing the problem as identified in Step 1.

3) Address the specific issue as identified in Step 2.

It all sounds rather obvious but it’s amazing how many don’t follow these 3 basic steps and attempt to jump straight to Step 3.

Unless you perform Step 1, you can’t accurately perform Step 2 which means you’ll only be guessing when performing Step 3.

The secret to performance tuning is not to guess …

I’ve lost count of the number of times I see people guessing at what a problem might be and hence get it all wrong …

The number of times people waste time and resources on problems that aren’t really problems and hence make no measurable difference …

The number of times people throw hardware at a problem without fully considering whether additional hardware will actually resolve the problem and hence waste money and resources for no measurable benefit …

The number of times people jump straight to applying a solution to a problem that they haven’t properly or correctly diagnosed and hence don’t actually solve the issue …

The number of times people attempt to resolve a problem by focusing on the symptoms rather than the root cause, only to fail dismally …

The number of times people are lucky and fix a problem by guesswork and by fumbling around in the dark without understanding why it fixed the problem, only to attempt the same thing again at another time and for it to fail dismally …

Like I said, the secret to performance tuning is not to guess.

John Lennon was once quoted as saying his secret to writing music was to:

1) Say what you want to say

2) Make it rhyme

3) Put a back beat to it

Three basic, fundamentally important steps. He would have made a good DBA :)

Extended SQL Tracing Presentation (Your Possible Pasts) May 30, 2008

Posted by Richard Foote in Oracle General, Oracle Myths, Oracle Opinion, Performance Tuning.
3 comments

I’ve recently dug up an old presentation I did for the local Oracle User Group a number of years ago:

Yet Another Presentation On Extended Tracing

It’s slightly dated but has some useful general information on the subject of diagnosing performance issues and Extended SQL Tracing in Oracle that some may find useful, so I thought it might be worth posting it here. I stripped out most of the formatting so that the resultant file will be smaller to download.

Extended SQL Tracing has been absolutely invaluable over the years in diagnosing and troubleshooting performance related issues. I had the very good fortune of attending a Hotsos Diagnosing Oracle Performance course in Sydney around 2003 (at around the same time Guy Sebastian won the first Australian Idol, oops, probably shouldn’t have admitted remembering that) with Cary Millsap and Gary Goodman. It was a really fantastic training course which focused on the whole subject of “Method R” and using Extended SQL Tracing to diagnose performance issues. I also remember Steve Adams attending as well and thinking to myself, this guy really knows his Oracle …

If you haven’t already, I would strongly recommend checking out the Optimizing Oracle Performance book by Cary Millsap with Jeff Holt, an excellent read.
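
For anyone who just wants the punchline, this is the classic way to switch on extended SQL tracing in your own session (level 12 captures both bind and wait details); tkprof or a Method R style profiler can then be let loose on the resulting trace file:

SQL> alter session set tracefile_identifier = 'bowie_10046';

Session altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

-- run the problematic SQL of interest here, then switch tracing off:

SQL> alter session set events '10046 trace name context off';

Session altered.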

And guess what ? Using Extended SQL Tracing actually works !!

Read-Only Table Before 11g (A Day In The Life) May 15, 2008

Posted by Richard Foote in Oracle General, Oracle Indexes, Oracle Opinion, Read Only.
9 comments

An excellent question by fellow Aussie Chris Muir on this OTN Thread reminded me of a little trick I picked up in my travels.

Basically the question is how can one make a table read-only before 11g ?

The thread mentions a number of possibilities, some better than others. I thought I might just mention this possible solution involving Materialized Views. There are various alternatives based on this basic idea, this is just a simple example.

First create and populate a table we want to convert to Read-Only.

SQL> create table bowie_ro (id number, name varchar2(20));

Table created.

SQL> alter table bowie_ro add primary key (id);

Table altered.

SQL> insert into bowie_ro values (1, 'Bowie');

1 row created.

SQL> insert into bowie_ro values (2, 'Ziggy');

1 row created.

SQL> insert into bowie_ro values (3, 'Major Tom');

1 row created.

SQL> commit;

Commit complete.

Next, rename the table to another name.

SQL> rename bowie_ro to bowie_temp;

Table renamed.

Next, create a materialized view with the original name, based on the renamed table.

SQL> create materialized view bowie_ro
2 refresh on demand complete
3 as select * from bowie_temp;

Materialized view created.

Next, drop the original table.

SQL> drop table bowie_temp;

Table dropped.

We can now see and select the table as we could previously.

SQL> select * from bowie_ro;

        ID NAME
---------- --------------------
         1 Bowie
         2 Ziggy
         3 Major Tom

However, you now can’t perform DML on the table, making it effectively read-only …

SQL> insert into bowie_ro values (4, 'Thin White Duke');
insert into bowie_ro values (4, 'Thin White Duke')
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

You may of course need to add a few grants, constraints or indexes here or there but the table is now effectively read-only, without the need for a read-only tablespace or the other tricks mentioned in the OTN thread.
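
As an aside, should you later need to make the table updatable again, one option (from 10g onwards, if memory serves) is to drop the materialized view while preserving its underlying container table:

SQL> drop materialized view bowie_ro preserve table;

Materialized view dropped.

The bowie_ro container table remains with its data intact and is once again just an ordinary table, open for DML.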

Back to indexes accessing all rows in a table soon :)

Index Scan or Full Table Scan: The “Magic” Number (Magic Dance) May 12, 2008

Posted by Richard Foote in Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Oracle Myths, Oracle Opinion.
30 comments

What seems like ages ago, I listed 8 things you may not have known about indexes. Although I’ve since written about many of the 8 items, I’ve yet to address the last item listed:

8. An index can potentially be the most efficient and effective way to retrieve anything between 0% and 100% of the data from a table.

A few recent posts on OTN reminded me that perhaps it’s about time I wrote something on this topic.

Generally, the question that’s commonly asked is at what point or at what percentage of data does Oracle no longer consider the use of an index and judges the Full Table Scan (FTS) as the most efficient method to retrieve the data from a table.

Basically, what’s the “magic number”, is it 1% of data, 2%, 5%, 7.5%, 15%, 42%, 50% ???

The answer unfortunately is that there is no such magic number or percentage, it all entirely depends. The way I often answer this question is by simply stating I can very easily come up with a scenario where a FTS is the most cost effective method to retrieve 1% of the data. Equally, I can very easily come up with a scenario where an index is the most cost effective method to retrieve 99% of the data.

Like I said, there is no magic number, it entirely depends on a whole list of different factors and variables.

To start, I thought I might go through the example of how a 1% cardinality result is best achieved via a FTS, highlighting why and how the Cost Based Optimizer comes to such a decision.

I’ll use a simple little scenario with nice simple numbers to make the mathematics nice and easy to follow :)

OK, let’s assume we have a table that has 10,000,000 rows. The table uses 100,000 table blocks to store these rows and so we have on average 100 rows per block. With an 8K block size, we’re basically looking at a table with an average row size of about 80 bytes.

Let’s say this table has an associated index with approximately 20,000 leaf blocks required to store the index entries for a particular column and the index has a blevel of 2 (or a height of 3). This basically means we can store approximately 500 index entries per block and the average index entry is about 16 bytes or so in length.

The indexed column has 100 distinct values which are evenly distributed such that each distinct value has approximately 100,000 occurrences each. The column has no NULL values.
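
As a rough sketch only, such a scenario could be mocked up along the following lines (the table, column and index names here are all mine; the mod(rownum,100) trick deliberately scatters each value evenly throughout the table, giving the appalling Clustering Factor we’ll get to shortly):

SQL> create table bowie_table (id number, code varchar2(5), pad varchar2(60));

Table created.

SQL> insert into bowie_table
  2  select rownum, lpad(mod(rownum,100), 5, '0'), rpad('x', 60)
  3  from dual connect by level <= 10000000;

10000000 rows created.

SQL> create index bowie_code_i on bowie_table(code);

Index created.

SQL> exec dbms_stats.gather_table_stats(user, 'BOWIE_TABLE', cascade=>true);

PL/SQL procedure successfully completed.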

Let’s say we write a query based on the indexed column and we’re interested in just one of the possible 100 values or approximately 1% of the data in total. For example:

SELECT * FROM bowie_table WHERE code = 'ABCDE';

Does the CBO choose the index or does it choose the FTS ?

Well, let’s first cost the index access path.

We begin by reading the root block and the intermediate branch block for a cost of 2.

We also need to read approximately 1% of all the index leaf blocks in order to access all the index entries of interest. So that’s 20,000 (leaf blocks) x 0.01 = 200 leaf blocks in total.

So the total cost of reading just the index is 202.

Next comes the interesting bit. How many of the 100,000 table blocks do we need to access in order to read just 1% of the data (i.e. 100,000 rows) ?

Well, the answer depends entirely on the Clustering Factor of the index or to put it another way, in how well ordered the rows in the table are in relation to the index. If the index column values of interest are all very well clustered together in the table, then we can access the required rows by visiting fewer blocks than if the index column values are evenly and randomly distributed throughout the table.

In fact, in the worst possible case scenario, if the Clustering Factor is appalling and has a value close to the number of rows in the table (10,000,000), we may actually need to visit each and every block in the table, as each block has an average of 100 rows per block and we want on average 1% (i.e. one) of these rows from each and every table block.

In the best possible case scenario, with the column values perfectly clustered together and with a Clustering Factor approaching the number of blocks in the table (100,000), we may get away with only having to visit 1% of all the table blocks or just 1,000 of them.

So the Clustering Factor is a crucial variable in how costly it would be to read the table via the index. The actual table access costs therefore are simply calculated as being the selectivity of the query (0.01 in our case) multiplied by the Clustering Factor of the associated index. 

In this example, the Clustering Factor is indeed appalling with a value of 10,000,000 and the table access costs are therefore calculated as 0.01 x 10,000,000 = 100,000.

So the total cost of using the index is 202 (for the index related costs) + 100,000 (to access the rows from the table) = 100,202 in total.
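
Expressed as a formula, this is just the well known I/O based index costing formula (as described by the likes of Wolfgang Breitling and Jonathan Lewis), ignoring CPU costing:

\[
\text{index cost} = \text{blevel} + \lceil \text{leaf blocks} \times \text{sel} \rceil + \lceil \text{clustering factor} \times \text{sel} \rceil = 2 + 200 + 100000 = 100202
\]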

So what are the costs associated with the FTS ?

Well, the FTS has a number of advantages over the index scan. Firstly, as Oracle needs to process all the blocks, it can retrieve all the necessary rows by reading a specific table block just the once. However, with the index scan, Oracle may possibly need to access a specific table block multiple times in some scenarios. 

Secondly, as Oracle knows it has to read each and every block, Oracle can do so with a larger “bite of the pie” each time via multiblock reads, knowing it’s not wasting resources as all blocks need to be processed anyways. Index access reads perform single block I/Os whereas a FTS can perform multiblock I/Os at a time. In this specific example, let’s assume the effective multiblock read value is 10; remember, we want to keep the arithmetic nice and simple …

Finally, a FTS can be performed in parallel, even if the table itself isn’t partitioned, which means the overall response times can be further improved and the CBO can reduce its “costs” accordingly. In this example, we won’t worry about parallel query.

So the cost of a FTS in our example is basically 1 (for the segment header) + 100,000 (table blocks) / 10 (the effective multiblock read value) = 1 + 10,000 = 10,001.
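
Or again as a formula (ignoring CPU costing and any adjustment for the longer elapsed time of a multiblock read):

\[
\text{FTS cost} = 1 + \left\lceil \frac{\text{table blocks}}{\text{effective multiblock read}} \right\rceil = 1 + \left\lceil \frac{100000}{10} \right\rceil = 10001
\]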

So that’s roughly an overall cost of 100,202 for the index vs. 10,001 for the FTS.

The results are not even close with the FTS winning hands down and that’s for just 1% of the data …

A couple of final little points for now.

Firstly, the cost of just reading 1 block (for the single block index reads) vs. 10 blocks (for the multiblock FTS reads) may actually differ somewhat, as multiblock reads are doing more “work” with their associated I/Os. By default, with no parameters set and with no system statistics, the CBO will cost each I/O as being the same. More about how to possibly adjust this another time.

Also, by default the CBO will assume all associated I/Os are physical I/Os and will cost them accordingly, even if the BCHR (Buffer Cache Hit Ratio) is nice and high and the index access path in question might be accessed within (say) a nested loop join, where the likelihood of many of the index related I/Os in particular being cached is very high. More on this at another time as well.

But for now, just note how in this relatively trivial example, the following factors came into play when determining the potential costs of this query:

  • Selectivity of the query
  • Data distribution with regard to the actual occurrences of the required data
  • Number of table blocks (below the high water mark)
  • Number of leaf blocks
  • Index Height
  • Average number of rows per table block
  • Average number of leaf entries per leaf block
  • Clustering Factor
  • Caching characteristics of index and table
  • Effective multiblock read count
  • Relative cost of single vs. multiblock I/Os
  • Parallelism

All of which contribute to making any single “magic number”, by which Oracle will supposedly no longer consider using an index, just another fairy tale in the Oracle book of myths and folklore …

Indexes In Their Own Tablespace: Recoverability Advantages (Get Back) May 2, 2008

Posted by Richard Foote in Backup and Recovery, Oracle General, Oracle Indexes, Oracle Myths, Oracle Opinion, Tablespace Management.
5 comments

Thought I might share some thoughts regarding recoverability issues with regard to having indexes separate and stored in their own tablespace.

I’ve already discussed here how the loss of an index only tablespace would be a catastrophic event, with the database in dire straits until the indexes are recovered. Therefore the faster we can recover from the situation, the faster we can make DML statements work again, the faster we can prevent Full Table Scans from crippling database performance, the faster we can return the database to a functional state again, the better for our users and for our sanity.

One of the advantages of having indexes separate from tables and stored in their own tablespace is that we have a number of different recovery options available to us. Rather than having to perform a full tablespace or data file recovery, we can potentially just rebuild all the impacted indexes. Providing the base tables are available and have not been impacted by whatever catastrophic event has befallen the index tablespace, we can rebuild the indexes (in another tablespace if necessary). This will hopefully be a simpler, more efficient and, most importantly, faster method of recovering all our impacted indexes than performing an actual database recovery.

But will it really be simpler, more efficient and faster ? The recovery advantages of having indexes in their own tablespace are often exaggerated. Let’s first take a look at an example scenario.

Let’s assume we have an index only tablespace that stores all the indexes for our application. Let’s say we have 100G worth of indexes. In a physically separate table only tablespace, let’s say we have a total of 200G worth of table data which is approximately double that of the index tablespace. Generally speaking, it’s common for indexes to not use the same amount of storage as the tables as typically not all columns are indexed. Of course it’s possible for a specific column to be indexed several times and for the index storage to exceed table storage in some cases, but not typically. There may of course be some free space in these tablespaces but let’s assume free space is minimal.

So we have 100G of indexes and 200G of tables.

Let’s also assume there’s on average 2 indexes per table, if only to keep the following arithmetic nice and simple :) Of course some tables may have many more indexes, some may just have the one index and in some rare examples there may be no indexes at all.

Now, it’s important to note that building a new index is actually a very expensive exercise. Oracle has to read all the data blocks in the base table, it has to sort the data in the order of the index entries, it has to create the index segment and write the index data, while generating undo and redo in the process.

Now that’s a lot of work …

However, in this scenario, we need to do this work for each and every index that’s in our stuffed index tablespace. Not only that, but we also need a script that can identify each of our impacted indexes, that generates the necessary index rebuild scripts (to another tablespace if necessary) and that handles any necessary constraint related issues.
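
The guts of such a script might look something like this rough sketch (INDEX_TS and NEW_INDEX_TS are hypothetical tablespace names, and constraint, partitioning and IOT related complications are conveniently ignored):

SQL> select 'alter index ' || owner || '.' || index_name ||
  2         ' rebuild tablespace new_index_ts;'
  3  from   dba_indexes
  4  where  tablespace_name = 'INDEX_TS';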

In this specific scenario, we have to make Oracle and the database processes basically perform the following amount of work:

  • Read approximately 400G of table related data. As we have an average of 2 indexes per table, we have to basically read each and every table an average of 2 times to build their related indexes. That’s 2 x 200G = 400G.
  • Sort approximately 100G worth of index related data. Sorting is a really expensive, relatively slow process and we have 100G worth of index data that needs to be sorted.
  • Write and create approximately 100G of index related segments

Note we also have to generate Data Dictionary related changes, we have to generate a bunch of undo related changes and we also (although optionally) generate lots and lots of redo.

In short, the database is being absolutely hammered during this whole process and it will take a loooong time to complete.

And this is meant to be the easy, efficient and above all fast method of recovering our indexes ?

So what is the alternative recovery strategy that this method of “simply” rebuilding all indexes is meant to protect us from ?

Well, with a damaged tablespace, we basically need to perform a tablespace level recovery, restoring “just” the 100G worth of data files and applying any associated redo logs since our last backup. Depending on our backup and recovery strategy, we may actually reduce the redo logs being applied by applying incremental or cumulative backups as well.

Instead of the database slowly and laboriously having to read, process and write 6 or 7 times the amount of data (in our scenario), we can use the OS to much more efficiently copy across the index related data files.

Instead of having to script the rebuilding of all impacted indexes, literally a couple of RMAN commands will basically automatically completely restore and recover the impacted index tablespace for us.
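
Something along these lines, where INDEX_TS is a hypothetical name for our damaged index tablespace:

RMAN> sql 'alter tablespace index_ts offline immediate';
RMAN> restore tablespace index_ts;
RMAN> recover tablespace index_ts;
RMAN> sql 'alter tablespace index_ts online';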

In the scenario when only a specific data file or mount point within the tablespace has been problematic, the implications of attempting to recover the situation by simply rebuilding the indexes gets worse, much worse.

Firstly, if we are so inclined, we need to identify which indexes have at least one extent within the damaged portion of the index tablespace. We then need to entirely rebuild all these indexes, regardless of how much of the index may actually remain undamaged with other extents in undamaged portions of the index tablespace. This all takes resources, resources, resources and time, time, time.

We can’t just rebuild a part of an index (unless it’s partitioned of course) but we can recover a part of a tablespace. We can simply recover the damaged part of the tablespace, restoring and recovering just the specific data file or files, again potentially with just a few simple RMAN commands.
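
Again as a sketch, this time restoring just the one damaged file (the file number 42 is made up purely for illustration):

RMAN> sql 'alter database datafile 42 offline';
RMAN> restore datafile 42;
RMAN> recover datafile 42;
RMAN> sql 'alter database datafile 42 online';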

With small databases with small amounts of data, the time it takes to rebuild all indexes in an application may be acceptable for the business. However, in larger database environments, the extra time and resources required to rebuild large amounts of index data compared to other recovery strategies would be totally and completely unacceptable.

An exercise for those who store indexes in a separate tablespace, in large part because of the recoverability advantages. On a QA system or equivalent copy of your production database environment, go through a real exercise of attempting to recover your indexes by rebuilding them and actually time how long such a recovery process takes. Then repeat the exercise by recovering the database using a conventional database recovery technique and time the differences.

You may just come to the conclusion that rebuilding indexes may not be such a fast and efficient recovery process in many scenarios after all …

Indexes In Their Own Tablespace: Availability Advantages (Is There Anybody Out There?) April 28, 2008

Posted by Richard Foote in Backup and Recovery, Oracle General, Oracle Indexes, Oracle Myths, Oracle Opinion, Tablespace Management.
17 comments

I’ve already discussed here some reasons why performance is not particularly improved by simply separating indexes from tables and storing them in a different tablespace. It sounds like it might be helpful from a performance perspective but when one digs down a little, it turns out any so-called performance benefits are questionable to say the least.

However, performance is only one reason why it’s claimed storing indexes in their own tablespace is beneficial. There are supposedly wondrous advantages for database availability and database recovery options if indexes are stored in their own, index only tablespaces. The loss of all indexes due to a catastrophic disaster in the index tablespace means that the database tables themselves are all still potentially available.

This sounds as if there might be a number of advantages with this strategy, right ?

Well it means for a start that none of the “real” data has been lost. If we store indexes away from the parent tables and we only lose the index tablespace, the table tablespace could possibly be totally unaffected by this loss. This potentially suggests a number of things:

  1. The Database will still be functional. Yes it might run a little slower without indexes but at least with the tables still available, we can still perform our business critical operations until the indexes have been fixed as the actual tables are unaffected
  2. We don’t actually have to perform a database recovery to get us out of this mess. So long as all the tables are still available, we can simply recover the situation by rebuilding all the indexes from the problematic tablespace. This will hopefully be simpler, more efficient and most importantly faster than having to perform an actual database recovery

This all sounds perfectly reasonable …

Today, I’m just going to briefly mention some thoughts on the first point; the second point I’ll discuss another day.

I guess the key question here (pun fully intended) is just how important and vital are indexes to the “normal” operation of a database ? Is a database effectively operational if we were to lose all our indexes ? Is an application still effective and operational if we were to lose all the indexes belonging to the application ? And by storing indexes in their own tablespace, do we actually get availability benefits if we were to lose only the index related tablespace ?

All good questions to ask and ponder about every now and then.

Let’s be clear I’m not discussing the loss or corruption of a single (or handful) of indexes. If a specific index gets corrupted for whatever reason, yes we could recover the index by (say) making the index unusable and rebuilding the index. However, we can do this whether the specific problematic index in question was stored with or separate from the parent table so the scenario doesn’t really differ much.

No, one of the (so-called) benefits of storing indexes in their own tablespace is that if we have a catastrophic issue with the index tablespace, we only lose a whole bunch of indexes. No tables are impacted, just all the indexes stored in the tablespace. However, just how well will business critical operations function without indexes in our database …

The suggestion is that things will just be a lot slower. We’ll have lots of Full Table Scans where previously we had nice efficient index related scans, but at least data can be viewed and manipulated as the actual tables themselves will still be available. Yes things will be slower and less than ideal but better than if we had stored tables and indexes together because in this scenario we would have lost both indexes and tables making the database effectively useless until recovered.

Well let’s set up a really simple scenario and see how things fare without indexes …

First, we create a simple little “parent” test table and populate it with a few rows:

SQL> create table bowie_1 (id number, name varchar2(20));

Table created.

SQL> insert into bowie_1 values (1, 'Bowie');

1 row created.

SQL> insert into bowie_1 values (2, 'Ziggy');

1 row created.

SQL> insert into bowie_1 values (3, 'Floyd');

1 row created.

SQL> commit;

Commit complete.

Next, we create a simple little “child” table and populate it with a few rows:

SQL> create table bowie_2 (id number, fk_value number);

Table created.

SQL> insert into bowie_2 values (1,1);

1 row created.

SQL> insert into bowie_2 values (2,1);

1 row created.

SQL> insert into bowie_2 values (3,2);

1 row created.

SQL> insert into bowie_2 values (4,3);

1 row created.

SQL> insert into bowie_2 values (5,3);

1 row created.

SQL> commit;

Commit complete.

We now add a Primary Key to the parent table which will create for us an index. Note this is the only index in this demonstration which is stored in a separate tablespace to the table:

SQL> alter table bowie_1 add constraint bowie_1_pk primary key(id) using index tablespace users;

Table altered.

Next we create a Foreign Key in our child table. Note this table doesn’t actually have a Primary Key (rare, not recommended but possible) and the Foreign Key has no associated index:

SQL> alter table bowie_2 add constraint bowie_2_fk foreign key(fk_value) references bowie_1(id);

Table altered.

Finally, we take the index tablespace offline to simulate a problematic index related tablespace:

SQL> alter tablespace users offline;

Tablespace altered.

OK, the setup is now complete. Let’s see what life is like without our poor little index. First, let’s perform a simple query on our parent table. I’ve hinted the query to make the CBO use the index which the CBO is of course likely to do with most of our queries on most of our tables (and if the CBO doesn’t want to use the index for a specific query, the loss of an index is not going to be an issue then anyways):

SQL> select /*+ index */ * from bowie_1 where id = 1;
select /*+ index */ * from bowie_1 where id = 1
                           *
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\FLOYD\USERS01.DBF'

Now, the error one gets if the index were simply corrupted, or if there were a problem or corruption at the hardware level, may differ, but the overall ramification will be the same. Queries that the CBO deems should use a “problematic” index will simply fall over. This is not exactly a good thing from an availability perspective …

How about inserting a new row in the parent table:

SQL> insert into bowie_1 values (4, 'IGGY');
insert into bowie_1 values (4, 'IGGY')
            *
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\FLOYD\USERS01.DBF'

Oh yeah, that’s right. We have an index that also needs to be inserted as well. Not just any index mind you, but an index that is used to police the uniqueness of the associated PK constraint. Yes, if the problem was at the hardware level, the error message will differ but the ramifications will be the same. We will not be able to insert into the table unless the index is dropped and we can’t drop the index unless the PK constraint is dropped as well.

How about an insert into the other table that doesn’t even have an index:

SQL> insert into bowie_2 values (6, 1);
insert into bowie_2 values (6, 1)
            *
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: 'C:\ORACLE\ORADATA\FLOYD\USERS01.DBF'

Oh for goodness sake, what now !! Well, the table has a FK that points to the table with the problematic index and we need to check to ensure the FK value actually exists in the parent table. How do we perform such a check ? Why, by using the index on the PK column of course and yep, the index can’t currently be used. So unless we drop the FK constraint, we’re stuffed here as well …

Perhaps life isn’t so sweet without these indexes after all …

What if we make the index unusable first rather than it be simply “unavailable” or “damaged” for whatever reason:

SQL> alter index bowie_1_pk unusable;

Index altered.

Well, providing we’re setup to skip unusable indexes:

SQL> show parameter skip

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes                boolean     TRUE

We can at least now make our queries run without the use of any problematic indexes:

SQL> select /*+ index */ * from bowie_1 where id = 1;

        ID NAME
---------- --------------------
         1 Bowie

If this table contained 100M rows, it might of course take a long long long time and if we had too many users performing too many Full Table Scans, the entire database might of course scream to a thudding halt, but yes at least we’ve now got our queries working to the point of ignoring unusable indexes.

But is a database (or application or part thereof) that performs nothing but Full Table Scans really a scenario we want to be in? Does this really help to justify the separating of indexes from our tables ? Hummm, not sure about that one …

What about our DML operations now the index is unusable, do these at least function to some degree ?

SQL> insert into bowie_1 values (4, 'IGGY');
insert into bowie_1 values (4, 'IGGY')
*
ERROR at line 1:
ORA-01502: index 'BOWIE.BOWIE_1_PK' or partition of such index is in unusable state

That’s a no for our first parent table example …

SQL> insert into bowie_2 values (6, 1);
insert into bowie_2 values (6, 1)
*
ERROR at line 1:
ORA-01502: index 'BOWIE.BOWIE_1_PK' or partition of such index is in unusable state

And that’s also a no for our child, FK table example. Oracle still needs to use the problematic PK related index to police the value in our FK column.

So what kind of database environment are we left with when the indexes from our index only tablespace become problematic, even with all our tables totally intact ?

Well, until we make the indexes unusable, all index related queries will be falling over all over the place with database related errors. Once we go through a process of identifying all problematic indexes and making them all unusable, we’re left with a database environment that’s performing Full Table Scans all over the place. Just imagine how long it’ll now take to find the customer details of that 10G table. Just imagine the user experience on the database when that 6 table join query can only be performed with Full Table Scans. Just imagine your user concurrent activity with no associated indexes available …
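
Identifying and disabling them might look something like this little sketch (USERS being the problematic index tablespace from the earlier demo, and conveniently ignoring the constraint related complications already discussed):

SQL> select 'alter index ' || owner || '.' || index_name || ' unusable;'
  2  from   dba_indexes
  3  where  tablespace_name = 'USERS';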

The good news of course is that the tables will at least get no bigger as all inserts will fail, all deletes will fail and many of the updates will fail, except on all those tables that have no Primary Key and no Unique Key and no Foreign Key. Ummm, just how many tables do you have that have no PK or UK or FK constraint ? Not many right …

Losing an index only tablespace would be a catastrophic event, one that would ruin the day of not only the poor DBA having to recover from such a scenario but also any poor user needing to access an impacted application.

One might even argue things could actually be better if we lost a tablespace containing both tables and indexes, provided another tablespace containing other tables and indexes were still available, as at least some tables/indexes would remain accessible and usable in a viable manner.

Regardless, in either scenario, the database/tablespace/datafile would need to be recovered ASAP to stop user complaints flooding the help desk.

Of course having indexes in their own tablespace will help us recover from such a catastrophic scenario in a simpler, more efficient and ultimately faster manner, right ?

Well, unfortunately, maybe not. I’ll get around to discussing this issue sometime soon …

Separate Indexes From Tables, Some Thoughts Part II (There There) April 18, 2008

Posted by Richard Foote in Oracle General, Oracle Indexes, Oracle Myths, Oracle Opinion, Tablespace Management.
10 comments

In Part I, I discussed how separating indexes from tables won’t likely improve performance as:

  • Oracle moves from reading index blocks to table blocks in a sequential manner
  • Most of the associated I/Os are likely to be random anyways
  • Multi-User environments would result in disk contention regardless

That being said, why is it then that some sites claim performance improvements after separating indexes from tables ? Previously, performance was sluggish; however, after moving indexes into a separate tablespace, performance appears to have picked up. Clearly then, moving indexes into a separate tablespace does improve performance, even if common sense might suggest otherwise.

Well, not quite. Here’s a scenario that’s not entirely uncommon …

Currently, an application has both tables and indexes in the same tablespace. The tablespace consists of various datafiles distributed across (say) 4 physical devices. Most database waits are I/O related with both db file sequential reads and db file scattered reads featuring heavily in performance metric reports. However, I/O performance is somewhat average with slow I/O related wait times and performance is generally suffering as a result.

Maybe, just maybe, the problem is due to having tables and indexes in the same tablespace. Perhaps if we separate the indexes away from the tables, contention will reduce, I/O wait times will decrease and database performance might improve as a result.

So we create a shining new, index only tablespace spread across (say) 4 additional physical disks and rebuild all our indexes in this new tablespace. To our relief, thankfully, performance has indeed improved. Average I/O wait times have been reduced and overall database performance has improved as a result. Despite what folks like that Richard Foote dude claim, here is clear proof and evidence of performance indeed improving, purely and simply by just separating indexes from their tables.

All we did was pull the wings off the fly and now it won’t take off after clapping our hands. Clear proof that flies go deaf when you pull off their wings …

There are of course two additional, potentially significant events that have also occurred other than just the indexes being separated from the tables.

The first one is that not only have all the indexes been moved to another tablespace, but all indexes have also been rebuilt as a consequence. Now, I’m the last person to get all excited about indexes being rebuilt, however as I’ve gone to great lengths to document, there are rare scenarios when indexes can get fragmented and may benefit from a rebuild. By moving indexes into a new tablespace, we’ve effectively rebuilt all the indexes, the (say) 99% where it wouldn’t have mattered but also the (say) 1% where it may have improved things. We have also rebuilt those indexes where there may be some temporary improvement until the index starts to flesh itself out again.

As a result, there could be all manner of related changes to execution plans and performance generally, especially related to larger index range scans and index fast full scans.

It’s not the indexes being separate from the tables that’s making some difference here, it’s the fact all the indexes have been rebuilt (especially those that were badly fragmented and accessed by large index scans).

The fly isn’t really deaf …

However, the far more significant difference we’ve also made is that we have of course just introduced 4 new physical devices into our database infrastructure. As a result, we may have significantly enhanced our I/O bandwidth and possibly reduced I/O related contention issues. All the index related I/O activity that was occurring on our initial 4 disk table/index tablespace has been removed and is now occurring on our new, separate 4 disk index only tablespace.

But that’s a good thing right, that’s what we wanted to achieve ?

Not quite.

In the index range scan scenario I discussed in Part I, just note how few of the overall I/Os were related to the index. In larger index range scans, where in theory separating indexes might improve performance, very little of the related physical I/O activity is actually attributable to the index. The index would have to have an extremely low (and rare) clustering factor for index costs to be significant. In most “randomly” distributed index scans, there’s significantly more table related physical I/O activity than index activity.

By moving just the indexes into these new physical devices, we’ve just moved a whole bunch of segments that as a group incur relatively low physical I/O related activity while leaving together all those that result in the majority of physical I/Os.

Yes, we’ve reduced contention and I/O demands on the initial tablespace but as a whole, we haven’t done it very well at all. Yes, we’ve reduced contention and perhaps improved performance, but we could have done it so much better. Yes, it appears separating indexes from tables has improved performance, but has it really …

It’s not the separating of indexes from tables that’s improved performance, it’s the fact we’ve introduced 4 new disks and we’ve shifted some of the I/O activity away from the initial tablespace.

The fly isn’t deaf after all …

As an example, previously we had 100% of related I/O activity in the initial table/index, 4 disk tablespace. However, only (say) 20% of the activity was actually related to the indexes; 80% was attributed to all the tables. By moving all the indexes into the new, 4 disk index only tablespace, we therefore reduce the load on the initial tablespace by 20%. We now have 80% of the I/O load on 4 disks and just 20% on the other 4 disks. Yes, performance might improve as a result but we could do so much better. Currently, each of the original 4 disks carries 20% of all segment related load while each of the other 4 disks carries just 5%.

Instead, if we had either added the 4 new disks to the original 4 disk set and striped both tables and indexes across all 8 disks, or moved and distributed both indexes and tables into the new 4 disk set, we might have been able to distribute load much more evenly across all 8 disks, with approximately 12.5% of the load on each one.

By doing so, we may have improved performance by an even better and more significant amount. Conversely, by separating indexes into their own tablespace, we may actually be hurting general database performance due to the resulting uneven distribution of I/O related activities.

Of course, there’s a very easy way to confirm this. Look at the statistics in V$FILESTAT or look at a Statspack report and carefully study the physical I/O activity in the table only and index only tablespaces and compare the results. Just how evenly distributed are the I/O related workloads …
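
For example, a quick sketch along these lines gives a per file picture of where the physical I/Os are really going:

SQL> select f.tablespace_name, f.file_name, s.phyrds, s.phywrts
  2  from   v$filestat s, dba_data_files f
  3  where  s.file# = f.file_id
  4  order by s.phyrds desc;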

Yes, there are scenarios where distributing individual segments here or there may be beneficial but the overall objective is generally to try and even out disk/spindle workloads as much as possible. Separating all indexes blindly is typically a very poor method of trying to achieve this.

If an individual query is not likely to improve by having an index in a separate tablespace and if separating indexes results in a non-uniform distribution of physical I/O activity, then you may want to start questioning whether it’s all really worth it.

Of course, database recoveries will be simplified by having indexes in their own tablespace, right ? Ummm, I’ll tackle that myth next …

Separate Indexes From Tables, Some Thoughts Part I (Everything In Its Right Place) April 16, 2008

Posted by Richard Foote in Oracle General, Oracle Indexes, Oracle Myths, Oracle Opinion, Tablespace Management.
19 comments

Although by no means as common as it once was, there are still some who believe separating indexes in a different tablespace from their parent tables somehow improves performance.

The theory goes that by having indexes in their own tablespace, we can reduce overall contention issues and thereby improve the overall performance of the database.

Here are some thoughts for consideration for those who might be so inclined …

First, let’s just have a little look at the behaviour of a “typical” large scale index range scan, using an index with a height of say 3.

We first begin by accessing the root block of the index. This is a single block read which for many heavily accessed indexes would typically be cached and not result in a physical I/O. Next we read an intermediate branch block. This is also a single block read and is also likely to be cached if the index is heavily accessed. Regardless, it’s another index related I/O. Next we finally reach and read the first index leaf block containing the start of the index entries of interest. Again, it’s a single block I/O and again it’s index related.

So far we’ve performed 3 “random”, single block I/Os of index related blocks. If the index were in a separate tablespace, all the action would only be on the index tablespace thus far.

We next read our first table block containing the first row referenced by the first index entry of interest. This yet again is a single block I/O that could potentially be any block within the table. If the table were in a separate tablespace from the index, we would still need to perform a physical I/O (assuming the block isn’t already cached) on a “random” block within the table tablespace. If the table were in the same tablespace as the index, we again need to perform a physical I/O on a random table block. Still no difference thus far.

We next (very likely) reference the same index leaf block to determine the second row of interest. Note this block will almost certainly still be cached as it’s just been accessed. Therefore, if the index were in the same or different tablespace to the table, still no difference as there’s no associated physical I/O.

We then read the second table block of interest via a single block I/O. Unless this index has a very good clustering factor, we’re likely to read a totally different table block that could be any other block within the table. It’s extremely unlikely therefore to be the block that is physically contiguous to the block previously read. Only if the index were very well clustered, could it possibly be the same block as previously read or possibly the next logical block in the table.

However, in all these scenarios, having the table in a separate tablespace still makes no difference at this stage. We either need to perform another physical I/O on the table or we perform just a logical I/O. Even in the extremely unlikely case the next block read is physically contiguous to the previous block read, it would still be contiguous whether the index was separate or not and not be impacted by the index read activity thus far. Again, thus far it makes no real difference having the index in a separate tablespace.

We go back to the same index leaf block to determine the next row of interest and then access the next table block, which for a large randomly distributed table is again likely to be another different block. The point being we’re accessing the index and the table in a sequential fashion, reading the index, then reading the table. Reading the index and then reading the table again.

For small index scans, the index leaf block in question is likely to be the same single leaf block as a leaf block can potentially store hundreds of index entries (depending of course on block size, index row size and where within the index leaf block we logically begin to read the index entries of interest). So for small scans, it’s not going to have any real impact having indexes in a separate tablespace as we’re basically reading a few index related blocks followed by the table related blocks.

The table blocks are likely to be different blocks in a randomly distributed, poorly clustered index or possibly (although more rarely) a small sample of blocks in a well clustered index. However, in either scenario, if we need to access just the one leaf block, it makes no difference whether the index is in a separate tablespace or not; the I/Os and so-called contention are the same regardless.

In some scenarios, Oracle can perform a prefetch step whereby it orders the index entries based on the rowids to first determine which table blocks need to be accessed, thus preventing the same table block having to be re-read several times. However, again, it makes no difference thus far if the index is in a separate tablespace or not as the I/O requirements are the same regardless.

In larger index range scans however, we might need to visit the next logical index leaf block or indeed subsequently many such index leaf blocks. Note each leaf block contains a pointer (referred to as kdxlenxt in a block dump) so Oracle can directly access the next index leaf block. If our index were in a separate tablespace and making the HUGE assumption that there’s thus far been no other activity in the index tablespace, the disk head may not have moved from where it left off after reading the last leaf block. With the indexes and tables coexisting in the same tablespace, we have very likely moved on from this location with any subsequent table related I/O activity.

Maybe now, at last, we finally have a benefit in having indexes in their own tablespace …

However, reading the next index leaf block is again a single block read and most importantly is not necessarily “physically” contiguous to the previous leaf block. Remember, index leaf blocks split as part of their natural growth and the new block allocated is simply the next block available in the index freelist. Therefore the next logical index leaf block in an index structure could physically be virtually anywhere within the extents allocated to the index. When we read the next “logical” index leaf block, it does not necessarily mean it’s the next “physical” block within the index segment. It’s likely just another random, single block I/O.
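
Incidentally, one way to see this logical (rather than physical) leaf block ordering for yourself is an index treedump, which lists every branch and leaf block in logical key order together with its block address (the index name and object_id of 75123 below are made up purely for illustration):

SQL> select object_id from dba_objects where object_name = 'BOWIE_IDX';

 OBJECT_ID
----------
     75123

SQL> alter session set events 'immediate trace name treedump level 75123';

Session altered.

The block addresses in the resulting trace file typically jump all over the index segment rather than ascending neatly.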

That being the case, again we have no benefit in the index being in a separate tablespace. In both scenarios, we have to go scanning the disk looking for the physical location of the next index leaf block (again assuming the index leaf block isn’t already cached). This activity needs to be performed whether the index is in its own tablespace or not.

When we move back to read the next table block based on the first index entry from the newly accessed index leaf block, again, it’s extremely unlikely the next table block accessed will be the next contiguous block from the previously read table block. So again, we very likely need to go a hunting for the next table block on disk, regardless of it being in a separate tablespace from the index. Again, separating indexes from tables makes no real difference.

So not only do we move between index and table in a sequential manner but the actual blocks read within both the index and the table are likely to be totally random, non contiguous, single block reads.

That being the case, what are the performance benefits of storing indexes and tables separately ? How does storing indexes and tables separately actually reduce contention when most physical I/Os in both index and table segments are effectively random, single block reads ?

Now this example has just been a single index scan, performed by one user on just one index and table. The benefits therefore of separating indexes and tables even in a single user environment are somewhat “dubious”.

However, how many environments only have the one user ? Not many. Most environments have lots of users, some with many hundreds, some with many thousands of concurrent users. All these users are potentially performing concurrent I/O operations, not only potentially on these very same tables and indexes but on lots of different tables and lots of different indexes within our table and index tablespaces. Even if index leaf blocks were to be physically contiguous in some cases (such as monotonically increasing indexes where this is more likely), by the time we’ve read the index leaf block, processed and read all the associated table blocks referenced by the index leaf block, the chances of there being no subsequent physical activity in the index tablespace due to another user session is virtually nil. We would still need to re-scan the disk to physically access the next index leaf block (or table block) anyways.

Add to the mix the fact many sites now use SANs, NAS, ASM etc. and what might appear to be one contiguous file could actually be physically split and spread all over the place. The whole notion of what is actually physically contiguous and what isn’t is blurred anyways.

The next time someone suggests separating indexes from tables improves performance, you may just want to ask a couple of little questions: why and how ?

However, I’ll next discuss how indeed performance can improve by storing indexes in a separate tablespace. But just like our mad scientist thinking flies with no wings go deaf, I’ll explain how the performance improvement is not actually directly related to the indexes being separate from the tables.

I’ll also discuss how database recoveries are not typically helped by having indexes in a separate tablespace as often suggested.

Most Influential Person In My Career (I Am…I Said) April 10, 2008

Posted by Richard Foote in Oracle Blog, Oracle Opinion, Richard's Musings.
14 comments

I previously listed four of the people who have probably had the biggest impact on me as a DBA.

However, the single most important person in my career as a DBA, the person who’s had the biggest impact in all my various successes and failures throughout my career, is undoubtedly the one and only Richard Foote.

Congratulations Slater :)

Now I’m not suggesting for one minute I’m as capable or knowledgeable as the four previously listed, indeed I can say with some confidence that I’m not, but there’s no question that at the end of the day, I’m ultimately responsible for being the DBA (and indeed the person generally) I am today.

Influences are of course very important, but it’s up to the individual to ensure all influences (good and bad) become positive experiences. It’s entirely up to the individual to take those influences and to find the drive, the energy, the motivation and the enthusiasm to be as successful, as capable, as knowledgeable and as competent as they can be.

Or indeed as “successful” as one ultimately wants to be because all these things are measured and mean something different to each individual. The scale that really matters, the best measurement to determine the level or standard or confidence that one has achieved is ultimately happiness. When you walk into the office each morning, how do you feel about yourself? You really don’t need to be world’s best Oracle expert (or in any subject matter or profession) to feel good about yourself, to feel you’re heading in the right direction and that you’re at a stage in your career, your work-life journey, where you want to be. To feel like you’re a bloody good and successful DBA.

If you’re “happy” with where you are, congratulations, because you’re the one that’s had the biggest impact and influence in your “success”. If you’re not happy, if you’re not satisfied with where you’re at, if you feel you’re behind where you really want to be, if you’re not the Oracle DBA you want to be (or developer, or pilot or porn star, or whatever), the good news, the really exciting and positive news, is that it’s entirely in your own hands to turn things around.

I spend a good portion of my life at work. I probably spend as much time talking to my work colleagues as I do talking to my own family. I certainly spend more time working on Oracle databases than I do working in my garden or playing football or losing at computer games against the kids or watching David Bowie DVDs. Therefore, it’s really important to me that I enjoy what I do at work and that I’m as good at my job as I can reasonably hope to be. How much I enjoy my work is very much related to how confident I feel in my capabilities and in how much I continue to learn and grow in my abilities. Ultimately, I’m directly responsible for it all…

After I do a presentation or talk, people often ask how do I know all that stuff, how come I know so much about bloody indexes, where did I pick up all that 10g/11g stuff. It’s no secret, I spend a lot of energy researching, experimenting and investing time into learning more and more about that which I’m responsible for; which is lots of Oracle databases that have lots of important information for lots of people.

The four people I mentioned as influences certainly have had a big impact in how I approach my learning and my work generally, in how I attempt to better myself, in my drive to test things for myself, in how I view what’s possible and what’s important. However, they can’t actually put things in my brain; they can’t force me to spend hours determining how the behaviour of bitmap indexes changed in 10g, or make me spend hours practicing different types of database recoveries with and without RMAN, or make me start this Blog or make me research and write a 2 day index internals seminar, etc.

That’s all up to me.

How I deal with failure, how I learn (or not) from mistakes, how I determine right from wrong (at so many levels), how I handle criticism, how I admit and respond to errors, how I judge and police values and how I actually absorb and turn influences and feedback into positive experiences is also totally and entirely up to me as an individual as well.

The key point I want to make is that when discussing influences and who has contributed and had an impact in your successes and in your career, ultimately the person who has had the biggest impact is you.

Top 5 Most Influential DBAs In My Oracle Career (“Heroes”) April 8, 2008

Posted by Richard Foote in Oracle General, Oracle Opinion, Richard's Musings.
17 comments

When I was at OpenWorld last year, I was asked by a couple of people a question that’s been asked of me quite a number of times before. Who has been the biggest influence in my career as an Oracle DBA, just who has had the biggest impact in shaping the Oracle DBA I am today.

It’s actually a really difficult question to answer because it first assumes I actually know exactly what sort of DBA I am, which I’m not sure is entirely the case. It also assumes that “this shape” is fixed, which it isn’t. I literally learn new things about Oracle on a daily basis so I’m continually evolving and developing and “growing” as a DBA.

The answer I generally give surprises most when I give it but when I explain my reasoning, it generally makes sense and they accept where I’m coming from. So I thought I might share the top 5 Oracle DBAs who have most shaped and influenced this Oracle DBA I am today.

Reducing what is overall quite a massive list of influences to just 5 is a really really difficult process, but these 5 are probably the most influential in not just what I actually “know” about Oracle, but more importantly, how I actually go about continually learning and growing and developing as a DBA.

Four of them in no particular order are:

Steve Adams. I’ve had the pleasure of meeting Steve a number of times and the most important thing he taught me was just how much I actually didn’t know about Oracle !! Initially, I looked at Oracle as simply being this “car” if you like, that had an “engine” and had a thing you did to switch it on and a thing you turned to make it go where you want and if you did these things every now and then, this “car” ran that little bit better (or so it seemed). However Steve made me realise that Oracle was actually made of lots and lots of little parts and that an “engine” was actually made of lots of different components that worked together and the more you knew how these components actually worked and interacted, the easier and more effective one would be in tuning and finding what might be at fault. However, he didn’t just know that this bit was a “starting motor”, he went way way down into knowing what all the little bits ‘n’ pieces were that made up the “starting motor” and the “distributor” and the “CD player” and pretty well every part of the whole “car” !! And not just for this model of Oracle, but for pretty well all models dating back to almost when Oracle began.

Steve really opened my eyes to appreciating all that there really was to potentially learn about Oracle, the importance of having some understanding of the nuts ‘n’ bolts to be effective, and the fact that no matter what, I will never, ever stop learning and relearning how Oracle actually works. I will never have the knowledge Steve has about Oracle and will likely never get close, but he gave me the drive and ambition to at least try. He’s also a fellow Aussie so deserves additional bonus points ;)

Tom Kyte. I’ve only met Tom once, very briefly, at OpenWorld last year. However, I feel like I know him so well thanks to his fabulous Ask Tom website. Tom has probably taught me more about Oracle itself than just about anyone, but he’s also taught me something far more important. Tom taught me the importance of “proof”, how to demonstrate and actually “show” how Oracle works and functions. Rather than just saying 1+1=2, he can actually demonstrate that 1+1=2, explain why it’s so and give me a script that I can run and test and modify so I can learn why and how 1+1=2. Most things in Oracle can be illustrated in this manner and these skills have been a huge influence on not only what to believe, but on how to determine and investigate things for myself. If someone claims 1+1=3 but doesn’t have the capability to show why it’s so, then in my experience there’s a very good chance that 1+1 doesn’t actually equal 3 after all.

Also, Tom’s books are among the best Oracle books I’ve read and really showed me what a good Oracle book actually looks like. Basically Tom taught me “how to fish” and I’ve been catching fish in the Oracle Ocean for most of my career thanks in large part to Tom.

Jonathan Lewis. I had the pleasure of meeting Jonathan and showing him the wonders that is sunny Canberra a few years ago. Jonathan, like Tom and Steve, knows more about Oracle and the internal workings of Oracle than I will ever hope to know. The Oracle knowledge this man has is amazing.

The core, the brain even, of the Oracle database is the CBO, and very often when Oracle is “sick”, it’s directly related to the CBO not doing what it should be doing, or not doing what you think it should be doing. Jonathan reminds me very much of a really really good doctor or surgeon who is not only able to quickly diagnose a specific problem with one quick glance at a “medical chart”, but is also able to get in and successfully perform the necessary surgical procedure with no fuss, ensuring the patient makes a quick and successful recovery.

Jonathan taught me the importance of correctly diagnosing a problem in order to apply an appropriate solution. He also highlighted just how complex the CBO really is, how important it is to actually understand how the CBO (and Oracle in general) works and why it’s vital to correctly understand and interpret the various costs and behaviours in order to apply an appropriate solution. Unless you understand the hows, the whats and the whys, unless you really understand the problem, you’re not really in a position to apply an appropriate solution. Jonathan never guesses, rarely assumes and if he does, it’s an educated guess and he’ll explain his reasoning for making any such assumptions.

This “discipline” of his and his process in diagnosing a problem have been extremely important in determining how I look at a problem. If I don’t know what’s actually going on, if I don’t understand the root cause of a problem, then how can I expect to solve it successfully?

Cary Millsap. A few years ago, I attended a class of Cary’s in Sydney where I had the opportunity in turn to explain the wonders that is Rugby League over a few beers. He’s also notable as being one of the very few people to fully appreciate my (somewhat infamous) “Rupert The Rat” joke …

During my time at Oracle, one class I never particularly enjoyed teaching was the Performance Tuning course as I felt really uncomfortable with the contents and the manner in which the topic of database tuning was addressed. Method C wasn’t a process I ever felt comfortable teaching. Although I had been focusing on the wait interface for quite some time, for me, Cary’s (and Jeff Holt’s) book Optimizing Oracle Performance was the first time I read a book specifically on performance tuning where I said, “Yes, yes that’s it”. Cary’s book and his teachings so perfectly articulate the importance of knowing exactly where time is being spent when poor response times are problematic, so one can focus on a solution that will actually make a difference. Again, understanding “what” the problem is by knowing where all the time is being spent. Again, the “don’t guess when you can know” principle. It’s a process I successfully apply again and again in diagnosing database problems and Cary’s focus on Method R has been very influential. Besides, anyone who appreciates Rupert The Rat deserves a special mention :)

The final person in my Top 5 list is most certainly the most important and significant in determining how I’ve evolved over the years into the DBA I am today. It’s possibly a somewhat controversial choice and there may be some who would disagree. However, in my opinion, this person should likely be in everyone’s Top 5 list (big call, I know), so I’ll leave the identity of this last person for my next post.

Buffer Cache Hit Ratios: Useful or Not ? December 16, 2007

Posted by Richard Foote in Buffer Cache Hit Ratio, Oracle General, Oracle Myths, Oracle Opinion, Richard's Musings.
6 comments

The question of whether the Buffer Cache Hit Ratio (BCHR) is a useful metric or not is one of those discussions in Oracle that seems to crop up on a regular basis. I thought it might be worth briefly mentioning the topic here.

The BCHR represents the percentage of LIOs in which the required block is already loaded in the buffer cache(s). The higher the BCHR value, the greater the percentage of blocks accessed directly from memory and the lower the subsequent ratio of physical reads. A “higher” BCHR is generally considered a good thing as it’s a good thing to find required blocks in memory (right ?), a “lower” BCHR is generally considered not such a good thing as it’s bad to perform a higher ratio of physical reads (right ?).
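To make the arithmetic concrete, the classic calculation derives the ratio from v$sysstat counters. The following is a sketch only, assuming the usual statistic names; exact formulations vary between authors and versions:

-- A naive BCHR calculation from v$sysstat (sketch only)
SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2) AS naive_bchr_pct
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';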

The first problem with the BCHR of course is that it’s often miscalculated with many not appreciating for example the impact of direct reads on the actual physical reads used in BCHR calculations.
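Direct reads bypass the buffer cache entirely, so arguably they shouldn’t count against it. A commonly suggested adjustment, again a sketch only with statistic names that may differ across versions, subtracts them from the physical reads figure:

-- BCHR adjusted for direct reads, which never touch the buffer cache
SELECT ROUND((1 - (phy.value - dir.value - lob.value)
                  / (cur.value + con.value)) * 100, 2) AS adjusted_bchr_pct
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy,
       v$sysstat dir, v$sysstat lob
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads'
AND    dir.name = 'physical reads direct'
AND    lob.name = 'physical reads direct (lob)';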

Assuming the BCHR is correctly calculated, the second problem with the BCHR is that it represents a database-wide average. Averages are always dangerous things as they generally completely hide specific details and anomalies. Unless the average represents and can guarantee some consistent database metric or behaviour, it quickly becomes just a number, without any actual or inherent meaning.

A specific BCHR value at any specific point in time doesn’t actually tell us anything meaningful about the performance of specific tasks within the database. A database generally performs 100s or 1000s or 10000s of tasks at any given point in time. Unless all tasks or a significant percentage of tasks exhibit the same performance issue, a single database-wide metric will be unable to determine issues with these specific tasks. The “average” figure hides details of the specific issue.

Therefore, at what point or at what value does an average figure provide meaning ?

The third problem with the BCHR is that these specific tasks within the database sometimes perform activities that are “good” and “efficient” but can result in the BCHR going up, going down or remaining unchanged. Sometimes these activities can be “bad” and “inefficient” but can likewise result in the BCHR going up, going down or remaining unchanged. Therefore, without understanding what these specific activities might be, it’s simply impossible to know whether a specific change in the BCHR is good or bad.
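To illustrate with some purely hypothetical numbers: suppose the database performs 1,000 LIOs and 900 of them find their block already in cache, a BCHR of 90%. Now a badly written process starts performing an extra 9,000 LIOs, repeatedly re-reading the same handful of cached blocks. We now have 9,900 hits out of 10,000 LIOs, a BCHR of 99%. The metric has “improved” while the workload has become measurably less efficient.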

Let’s assume we have a BCHR of 90%. Is this a good thing or is this a bad thing ? Is database performance an issue with a BCHR at 90% or is database performance good ? Is database performance good or bad generally, or are there specific processes within the database that are problematic ? The answer of course is that it entirely “depends”, and a value of 90% or 95% or 50% can’t in and of itself answer any of these questions.

We can have a BCHR at 90% and performance can be terrible. It can be terrible at a database wide level due to any number of problems or issues or it can be terrible for specific users using specific applications or processes.

We can have a BCHR at 90% and performance can be perfect, with all users achieving optimal response times.

A BCHR of 90% is useless on its own. We need to go and perform all manner of additional checks to ensure the database is “healthy”.

However, even those who claim the BCHR is a meaningful and useful tuning metric generally agree and admit that the BCHR on its own has no inherent usefulness and that it needs to be used in combination with other database “checks”. They generally claim that it’s monitoring the BCHR historically, noting fluctuations in its value over time, that makes the BCHR useful.

Really ?

Let’s again assume the BCHR has been at 90% for the past few weeks (or whatever time-frame) and it has now dropped to 85%. The Quest Spotlight monitor is flashing red and obviously something has “changed”. However, has it changed for the better, for the worse or has it had no noticeable impact on the “health” of the database (perhaps simply the specific workload has changed) ?

Well again, without looking at other specific metrics, one can’t possibly answer these questions. Perhaps we have an important process (or processes) that has suddenly started performing expensive, inefficient Full Table Scans. That’s not good, as the extra logical and physical IOs have impacted response times detrimentally. Things have indeed got worse. Or perhaps we have a process that was performing an extremely inefficient nested loop operation, reading the same cached table numerous times, that is now performing the same function much more efficiently, reducing LIOs significantly. Response times may have improved and things are indeed better. Or perhaps there’s a large batch program or report that needs to be urgently run during normal business hours that’s resulting in lots of physical IOs to the database, but is not actually impacting the more important online transactional systems. Actually, the health of the database may not have changed at all.

Therefore, having a BCHR that has dropped to 85% (or whatever value) doesn’t actually tell us much other than something may have changed. But it may have changed for the worse, for the better, or not significantly at all. There’s no way of knowing without performing further database checks.

Let’s assume the BCHR has gone from 90% to 95% (or whatever higher value). The Quest Spotlight monitor is flashing nice and green but something appears to have changed. However, has it changed for the better, for the worse or has it had no noticeable impact on the “health” of the database ?

Well again, without looking at other specific metrics, one can’t possibly answer these questions. Perhaps we have a key process (or processes) that was previously performing expensive, inefficient Full Table Scans and is now performing efficient index scans. That’s good, as the reduction in logical and physical IOs has impacted response times positively. Things have indeed got better. Or perhaps we have a process that was previously performing efficiently that has suddenly started to perform extremely inefficient nested loop operations, reading the same cached table numerous times, increasing LIOs significantly and causing the overall BCHR to increase as well. Response times may have plummeted and things are much worse. Or perhaps there’s a series of large batch programs or reports that usually run during normal business hours, but the reporting section are at their Xmas lunch and haven’t bothered to run them today, resulting in a reduction of physical IOs to the database without actually impacting the more important online transactional systems. Actually, the health of the database may not have changed.

Therefore, having a BCHR that has increased to 95% (or whatever value) doesn’t actually tell us much other than something may have changed. But it may have changed for the worse, for the better, or not significantly at all. There’s no way of knowing without performing further database checks.

Let’s assume the BCHR has not changed at all and is still sitting at 90% (or whatever value). The Quest Spotlight monitor is flashing nice and green but nothing appears to have changed. However, has nothing really changed, or could things now be seriously wrong with the database ?

Well again, without looking at other specific metrics, one can’t possibly answer these questions. Perhaps we have a key process (or processes) that was previously performing expensive, inefficient Full Table Scans and is now performing efficient index scans. That’s good, as the reduction in logical and physical IOs has impacted response times positively. Things have indeed got better, but because the BCHR is a database-wide metric, this improvement may have gone unnoticed. Or perhaps at the same time we have a process that was previously performing efficiently that has suddenly started to perform extremely inefficient nested loop operations, reading the same cached table numerous times, increasing LIOs and causing response times to plummet, making key business processes much worse. But again, because the BCHR is a database-wide metric, the overall BCHR may not have been impacted.

Or of course, any one of hundreds of things could have caused serious database performance issues while the BCHR remains totally unchanged …

Therefore, having a BCHR that remains unchanged doesn’t actually tell us much either. The database may be running better than previously, the database may be having serious problems, or the “health” of the database may remain unaltered.

So, the BCHR at any specific value doesn’t tell us much without having to check other database metrics as well.

The BCHR increasing doesn’t tell us much without having to check other database metrics as well.

The BCHR decreasing doesn’t tell us much without having to check other database metrics as well.

The BCHR remaining unchanged doesn’t tell us much without having to check other database metrics as well.

Note the database metrics we would need to check (for example, those queries using the most LIOs, those queries using the most PIOs, those queries using the most CPU, those queries being executed the most, those queries with excessive response times, causes of the most significant waits in the database, specific application/process response times, etc. etc. etc…) are exactly the same in all the above scenarios.
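As an illustration of just one of these checks, finding the SQL consuming the most LIOs might look something like the following. It’s a sketch only; the v$sql columns shown are standard in 10g, and in practice Statspack/AWR or Extended SQL Tracing would likely be the better tool:

-- Top 10 SQL statements by logical I/O (sketch only)
SELECT *
FROM  (SELECT sql_id, buffer_gets, disk_reads, executions,
              ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec
       FROM   v$sql
       ORDER  BY buffer_gets DESC)
WHERE  rownum <= 10;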

The biggest problem of all with the BCHR is that regardless of its value, and regardless of whether it goes up or down or remains unchanged, we need to perform precisely the same database checks, as the BCHR doesn’t tell us whether the “health” of the database has improved, got worse or remained unchanged.

If we need to perform the same actions regardless of the BCHR, then I suggest the answer to whether the BCHR is useful or not is a big and resounding no.

Index Internals – Rebuilding The Truth December 11, 2007

Posted by Richard Foote in Oracle Indexes, Oracle Myths, Oracle Opinion, Index statistics, Richard's Musings, Index Height, Index Internals, Index Rebuild, Index Shrink, Index Coalesce.
7 comments

The issue of when to rebuild indexes crops up again and again and again. Unfortunately, so do the same incorrect, myth-filled, uninspired responses, which, considering how important and “key” (no pun intended) indexes are to database design and performance generally, is very puzzling indeed.

In the coming days I’ll post why I believe these index related myths simply refuse to go away …

This presentation was originally written several years ago but is still as relevant today as it’s always been.

Recently updated version: Index Internals – Rebuilding The Truth
