Why are there so many Oracle related myths? The Inconvenient Truth (December 12, 2007)

Posted by Richard Foote in Buffer Cache Hit Ratio, Oracle General, Oracle Myths, Richard's Musings.

NOTE: If you’ve linked here from Burleson’s truly awful “Oracle Index Rebuilding” article (no, I’m not Ken Adkins!!), I suggest you read this post very carefully, as it’s most applicable considering where you’ve linked from 😉

Hundreds of years ago, most people thought the world was flat and that if you travelled far enough, you’d simply fall off the edge of the world. It seemed like a reasonable thing to believe; I mean, the world kinda looks flat, doesn’t it? It just didn’t make sense to think otherwise. Also, as most people didn’t really travel very far in their lives, it wasn’t a big deal to them. The chances of falling off the world were remote. Very few actually pushed the boundaries to find out the truth; whether the world was flat or not just wasn’t relevant or important to most people.

Well, about 10-15 years or so ago, most people thought Oracle was “flat”. OK, not flat exactly, but certainly that a segment with one extent was better for performance than, say, 20 extents, that having a PCTINCREASE of 1 reduced fragmentation, that a high BCHR meant all was likely well with the database, that indexes should be rebuilt on a regular basis, etc. etc. I mean, it all sounds like it makes sense, doesn’t it? Many new to Oracle came from other databases and technologies where these sorts of things were perfectly acceptable practices in their old environments, so it made sense that they were necessary in Oracle too. They were also all based or founded on some level of fact or half truth, but half truths have a habit of becoming full truths and, over time, get set in stone as cold, hard facts.

In those days, it really wasn’t much of an issue anyway. Databases were relatively small; anything over a few GB was considered enormous, so rebuilding and reorganising things, although inconvenient, was all very plausible. Very few databases were truly 24 x 7, with most having those long, dark, lonely nights fully available to play around with the database without disrupting business processes.

There was no Internet in those days, or it was just emerging, so communication in all things Oracle was mainly through books and conferences, where relatively few “experts” dominated. There were very, very few who pushed the boundaries; most people were content simply to read and hear that they were spending those dark nights doing something constructive. Everyone suggested rebuilding indexes on a regular basis was a good thing and improved performance, so it must be right, right?

However, things changed. Over time, databases kept getting bigger and bigger with hardware not keeping up with shrinking maintenance windows. Organisations demanded more and more availability and those lonely nights became as mission critical as the traditional 9am-5pm.

The Internet became a hugely important communication vehicle with people able to ask questions and get instant answers from the other side of the world. Suddenly someone could make an erroneous comment and instantly someone could correct them.

This enabled a whole new wave of Oracle professionals to make an impact. The previously few recognised authors and noted experts were swamped by a whole new breed of Oracle expert, who were able to easily communicate and share their knowledge through websites and a mass of Oracle related forums. They weren’t satisfied with simply rehashing Oracle manuals and books, but set about truly understanding Oracle, as a scientist would try to truly understand their specialised field. Jonathan Lewis, Steve Adams, Tom Kyte, Anjo Kolk, Mogens Norgaard etc. became well known in the Oracle sphere and began changing how the majority saw and understood Oracle. People questioned authority and began testing and proving things for themselves.

Rebuilding indexes on a regular basis not only became questionable (as did all the other myths), it became unfeasible and impracticable. A recent survey by the IOUG suggested 35% of Oracle shops surveyed had at least one database over 1TB. DBAs started looking after more and more databases, looking after bigger and bigger databases, looking after more and more complex database environments.

Do we really need to rebuild these damn indexes? As these myths were investigated, tested, analysed and pulled apart, the answer became more and more obvious.


It simply required a clear and accurate understanding of how indexes actually behaved for the truth to be known, as researched and documented by the new breed. It also required enough Oracle people needing a solution to their maintenance issues, needing to actually justify business unavailability, for actual facts to start replacing myths.

Most indexes never need to be rebuilt. Never, ever.


So is the answer above really no? Is the world truly not flat and safe to walk on for as long as we could want?

If you were to blindly walk to (for example) the Grand Canyon in the USA, you could indeed plunge off the “end of the world” to your doom. Likewise, some indexes, if not rebuilt, could cause performance issues.

Here lies one of the main reasons why these Oracle myths never seem to die: they’re often based on half truths, and it’s determining which half is which that can be problematic (although the ratio is more accurately 999 to 1 than 50-50). Determining that 0.1% is not necessarily a straightforward process; it can’t always be determined by a simple formula or rule of thumb.

Also, human nature can be a funny thing. People are often either too lazy or too busy (or both) to want a complex answer to a complex question. People love easy solutions, even if they’re not actually the right solutions. The path of least resistance is often the most travelled path, even if it leads to a sudden drop at a cliff’s edge …

But why, in this information age, is finding the answer of what is truth and what is myth so problematic? With so much good, clear, accurate information out there, with so many respected and knowledgeable experts out there spreading a true understanding of Oracle, why can the truth still be so hard to determine?

The answer is “Information Pollution”.

Historically, there has been so much inaccurate information written and published about Oracle, there have been so many “wrongs” written as “rights”, that technical accuracy can often be “clouded” by this pollution. Also, some of the older breed simply refuse to accept today’s realities. They desperately cling to old habits for fear of losing credibility.

Oracle Corporation itself must accept some of the blame. It has historically documented things incorrectly, it still has Metalink notes that are inaccurate, and it hosts forums that spread misinformation. Although Oracle produces incredibly detailed documentation and resource-rich web sources, Oracle itself is one of the big “Information Polluters”.

So, there are still many people who spread myths. There are still many people who think all indexes should be rebuilt regularly if they experience lots of DML, or that indexes should be rebuilt if they have a height greater than some level.

There are literally thousands of Oracle newbies coming onto the scene each year. How can they tell what is right from wrong (or half right from half wrong) when they have no prior knowledge or experience with Oracle?

Not easily.

And if they pick the wrong source and learn inaccurate information and adopt inefficient processes, guess what ? They in turn become information polluters and so the clouds potentially thicken …

And so these Oracle myths continue to persist and never seem to quite disappear. So is Oracle in danger of “Global Warming”?

No, because thankfully, the amount of good information far outweighs the bad. Oracle as a community is producing far more “Oxygen” than “Carbon Dioxide”. The number of people who think one extent is better, or that indexes need constant rebuilding, as a percentage of the whole, is dwindling over time. More and more people appreciate that the world really is a lovely round planet because they have to know: just as they have to travel to the other side of the world, they have to maintain larger and larger databases with less and less disruption to their businesses.

Importantly, there’s also a very easy and generally very accurate way to determine right from wrong, the wheat from the chaff.

Simply prove it.

If something within Oracle (or anything really) is right and accurate and correct, then it can generally be proven. The Jonathan Lewis and Steve Adams and Tom Kytes of this world not only accurately document how an Oracle process works and functions, they generally prove it with clear, consistent, repeatable test cases. They don’t simply say a=b they actually demonstrate and prove it.
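In that spirit, here’s a minimal sketch of what such a repeatable test case might look like (all object names are invented for illustration; run it only in a scratch schema):

```sql
-- Hypothetical, repeatable test case: create a table and index,
-- delete most of the rows, then measure what's actually left in the index.
CREATE TABLE myth_test (id NUMBER, name VARCHAR2(30));

INSERT INTO myth_test
  SELECT rownum, 'BOWIE'
  FROM   dual
  CONNECT BY level <= 100000;
COMMIT;

CREATE INDEX myth_test_id_i ON myth_test (id);

-- Simulate a purge that empties out most of the older leaf blocks
DELETE FROM myth_test WHERE id <= 90000;
COMMIT;

-- VALIDATE STRUCTURE populates the INDEX_STATS view (note: it locks
-- the table while it runs, so only do this on a test system)
ANALYZE INDEX myth_test_id_i VALIDATE STRUCTURE;

SELECT lf_rows, del_lf_rows,
       ROUND(del_lf_rows * 100 / lf_rows, 1) AS pct_deleted
FROM   index_stats;
```

Anyone can rerun a script like this, vary the delete/insert pattern, and see for themselves whether and when the index actually degrades.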

Whereas those that deal in myths and inaccuracies are, by definition, unable to prove it. They make claims and suggestions and generalisations, but lack that one essential ingredient that differentiates them from those that deal in facts and technical accuracies.


And without proof, one can clearly pick out the “Information Polluters”. The next time you read something from somewhere, just ask yourself this one simple question: are they simply stating something as fact, or are they actually attempting to prove something as fact? Can that proof be copied across into your particular environment? Is it still accurate? Can you extend and expand on the claims, and is it still accurate?

An answer in the positive is a hell of a lot better than an answer that consists of nothing more than a “because it just is” …

Why are there so many Oracle related myths?

Global Warming !!

Is it too late?

Not if we meet the Oracle Kyoto Treaty and demand clean oxygen-based proofs 😉


1. Yas - December 12, 2007

Excellent post Richard.

As we all know, some people claim that you cannot depend on simple test cases to prove or disprove something; they claim that you need complex real-life scenarios. On that basis, they say they cannot document such a complex scenario, and that they have seen cases where, for example, rebuilding an index boosted performance.

The claim that you need real-life scenarios is surely true for some cases, but I think rebuilding indexes is not one of them. When or why to rebuild an index can be clearly explained with test cases, as you mentioned. But mostly we do not see those test cases. Just claims.


2. Brian Tkatch - December 12, 2007

Great article.

“If something within Oracle (or anything really) is right and accurate and correct, then it can generally be proved. The Jonathan Lewis and Steve Adams and Tom Kytes of this world not only accurately document how an Oracle process works and functions, they generally prove it with clear, consistent, repeatable test cases. They don’t simply say a=b they actually demonstrate and prove it.”

I have one issue with this. The people you mentioned not only test, but they also know:

1) what to test – they know what the core issues are
2) where to test – they know where to look (system tables)
3) how to test – that comes from experience

If the average person would try to test, even if they want to document everything, they may just not know the internals well enough to know what and where to test, which could end up “proving” the wrong thing, simply because they tested or modified the wrong things.


A couple side comments

Well over two thousand years ago Pythagoras measured the circumference of the earth, Plato called it such, and Aristotle brought three proofs to its sphericity. There are probably more people today that believe the Earth is flat than there ever were.

The Internet has been around since the fifties (called ARPAnet then), and was available to many in the eighties, especially the newsgroups (which predate most other things). It became more popular in the ’90s with the introduction of HTML, and became attainable in 93-94 when GNN and other ISPs started providing residential phone access.


3. joel garry - December 13, 2007

Minor correction: see wikipedia for dates of ARPAnet. http://en.wikipedia.org/wiki/Image:First-arpanet-imp-log.jpg

I also think the limitations of testing aren’t often explicated as well as they should be to be called “science.” But you (Richard), Jonathan and Tom sure are moving in the right direction. Brian, you have a good point about vulgar testing 🙂 but I think the how doesn’t have to come from experience, I think a good methodology can be taught. Whether anyone has one may be the issue… but isn’t the basic scientific method taught in general these days? Yes, you need the what where and how for bigtime mythbusting, but reading through the “good” guru’s publications, they show how anyone can test for specific situations.

We should consider that the more complex issues are where the “art” is. In the real world, we have to make decisions without sufficient information for a mathematically provable solution. We need to define acceptable actions for that. Just because certain terms (like ROT) have become anathema doesn’t mean they aren’t useful in the real world, we can take them back by giving them a proper meaning and usage.

Great blog Richard, keep it up.


4. dizwell - December 13, 2007

I’m going to demur a bit, but only a bit.

As you said yourself, a lot of people, a lot of the time, simply want an answer that works, even if it’s not “right”.

You also said (and I again agree) that the trouble with the ‘99.9% of indexes don’t need rebuilding’ argument is in working out how you identify the 0.1% cases that do.

Put those two points together and you have a plausible case, in some circumstances, for continuing to do flat-earth type things because (a) they work and (b) it isn’t hard to get them to work whereas (c) getting it ‘right’ is too hard.

Which brings me back to: if you have a large maintenance window; and you truly have nothing else to be doing with your CPU; and you don’t mind giving it a go, just in case, why not rebuild every index in sight? If things go horribly wrong once the maintenance window is closed, you’ll know not to take this approach again. Hard labour it has to be for you! But if nothing happens, or if people say, ‘things seem a bit snappier today’, and if that maintenance window, CPU abundance and lack of better things to do with your time and resources persists, then doing the “wrong” thing would seem to be pragmatically OK by me.

Practically, I believe this approach will still mean that 99.9% of users will not rebuild 99.9% of their indexes. But it saves most people having to identify the 0.1% of “true” cases when that is hard to do and there may be better things to do with your time anyway.

I’m not sure that flat-earth analogy really holds, in any case. We know the Earth isn’t flat; even the ancient Greeks knew that. Flat-Earthers are therefore just wrong, plain and simple. But consider Newtonian versus Einsteinian gravitation: turns out that for any low-velocity frame of reference, Newton works fine. But in a high-velocity frame of reference, he becomes increasingly inaccurate and only Einstein gets it correct. Point is, Newton was ‘wrong’, but he’s still good enough to be used, even today, in most ‘human-scale’ calculations. You turn to Einstein only when you truly need to take account of relativistic effects. Somewhere in there is an analogy, therefore: “rebuild your indexes if you’ve got abundant time, cpu and nothing better to do and you haven’t determined before that it causes grief” is Newtonian, human-scale advice. The Einsteinian approach, based on internals, block dumps and a profound understanding of Oracle internals is undoubtedly more accurate and based on firmer foundations, but is a level of precision simply not needed for most people most of the time.


5. Richard Foote - December 13, 2007

Thanks for all the comments.

Brian, I wasn’t suggesting the web only started 10-15 years ago, only that it didn’t exist in most people’s homes and workplaces then.
I totally agree that it’s not easy for us “average” folk to have the same level of understanding or knowledge as a Jonathan Lewis. However, as mentioned by Joel, it’s their processes, their methodology that’s important and that we can all adopt. If we can see via (say) a test case or a test harness how a conclusion was reached, we then all have the capability to see and determine whether a conclusion is “wrong” or misses a point or is not appropriate in one’s own environment.

Howard, a point I was trying to get across is that many/most environments simply don’t have a large enough maintenance window to rebuild all indexes (or run a validate structure command). It’s just not an option, so they have to come up with a better strategy. That’s one of the reasons these myths are dying a slow death: DBAs need to know a better strategy. And no, it doesn’t require block dumps or particularly profound database internals; it needs a good knowledge of the application, the data and an accurate understanding of how indexes are maintained.

Unfortunately, I know of sites which force scheduled downtimes on the business, to create “artificial” maintenance windows, artificial in the sense that business processes are impacted for no good effect.

And that’s another point that’s often overlooked. Rebuilding all indexes is likely to cause as many issues as it solves. The impact of subsequent block splits when the pctfree is not carefully considered can be significant. The risk of archive areas filling up, standby databases having to keep up, etc all need to be considered as well.

The point with the flat-earth analogy is that there was a time when a significant number (majority) thought the earth was flat and were wrong. It wasn’t until people “cared”, that the majority saw things correctly. There was a time when a significant number of Oracle folk (the majority even) believed many Oracle myths to be true. It wasn’t until they “cared” (or business processes cared) that the majority saw things correctly. And 1 extent is better than 20 is simply wrong (perhaps even more wrong now with LMT, but still wrong). And the periodic rebuilding of all indexes to improve performance is wrong (perhaps more wrong now due to reasons I’ve explained, but still wrong).


6. joel garry - December 13, 2007

Ah, Richard, you finally almost made a mistake… 🙂

When you don’t qualify the 1 extent thing with “for performance,” you disallow one of the proper reasons for the old 1 extent thing: There was a time when accepting defaults meant you would be limited to 121 extents…

Sometimes the artificial maintenance window is a good thing: you can do maintenance that needs to be done without going through all the political rigamarole associated with bringing the db down – to me it appears common for management to require 7/24, when they really don’t need it. Especially with space issues that don’t much affect perceived performance, but simply waste space and push backup times into production times and make standby rebuilds that much longer, and lengthen scan times…

Amazing how many tail wagging the dog issues standbys create.


7. Brian Tkatch - December 13, 2007

How to test. Good points. Indeed, the methodology can be taught. It definitely has a lot in common with applying the scientific method to other areas.

The What and the Where, however, are still important. I may think i know where to look, but i do not, because i didn’t know about this or that. I would love to see an article that explains exactly this point. (And it would show the “How” in action.)

As for “The point with the flat-earth analogy is that there was a time when a significant number (majority) thought the earth was flat and were wrong”: I understand what the point of the analogy was, and you applied it very well. The anecdote itself, however, is incorrect. There is no proof that a majority of the people ever believed in the flat Earth. However, there is much evidence of the opposite belief.


8. Richard Foote - December 13, 2007

Joel, believe me, I make mistakes all the time 🙂

In the initial blog entry, I specifically stated “a segment with one extent was better for *performance* than say 20 extents”. In my follow-up, I intentionally said “1 extent is better than 20” rather than say 1000 for exactly the reason you describe. It’s also one of those half truths I mentioned, the half truth where the origins gets lost in the mist of time. Thanks for clarifying the issue though, it’s certainly a point well worth making.


9. dizwell - December 13, 2007

Quote: “Howard, a point I was trying to get across is that many/most environments simply don’t have a large enough maintenance window to rebuild all indexes (or run a validate structure command).”

Then we don’t have a problem, for if there isn’t the maintenance window available, a pragmatic ‘do it anyway, just in case’ rebuild isn’t an option and that line of attack is ruled out, force majeure, by mere circumstance.

Quote: “And that’s another point that’s often overlooked. Rebuilding all indexes is likely to cause as many issues as it solves. The impact of subsequent block splits when the pctfree is not carefully considered can be significant.”

Then we don’t have a problem again, for as I said first time round, you will see this as a major problem the day after your maintenance window, assuming you have one, closes. And, assuming for the moment that we are dealing with rational people, feeling the pain is probably good enough to ensure you won’t do the things again which made you feel it once.

Once again, the ‘do it anyway, just in case’ rebuild ceases to be an option after one wild experiment and that line of attack is ruled out for the future by mere experience.

In either case, where *circumstances* make rebuilds practically and obviously “wrong”, the outcome is not to do them. But where the maintenance window is available and the after-effects are non-measurable, what harm has doing the “wrong” thing done? Sure, it has offended your sense of what is technically ‘correct’, but who cares if, measurably, no databases -or businesses- were harmed in the carrying out of this process?

Define ‘wrong’, in other words. If it fits in with an appropriate business strategy; if it doesn’t actually do measurable harm; if it’s quicker and simpler just to do it instead of trying to do it ‘intelligently’, what you declare “wrong” might actually be the right, proper and intelligent thing to do. That was the point of the Newton/Einstein analogy: if you don’t need the higher precision of “doing it right”, the old, pragmatic and “wrong” way is maybe good enough.

Besides, it’s no good saying, “And no, it doesn’t require block dumps or particularly profound database internals” if you then go on, in the next breath, to say, “it needs a good knowledge of the application, the data and an accurate understanding of how indexes are maintained”. What’s “an accurate understanding of how indexes are maintained” if it isn’t “database internals”?! And in any case, I may have 96 databases to maintain and 42 applications and I’m not going to be able to have a “good knowledge” of all of them -so where does your advice leave me then?

A poor rule of thumb which might do some good and no noticeable or measurable harm (always a crucial qualification) is, I would suggest, sometimes a more useful approach than a technically excellent bit of advice which nevertheless makes unrealistic demands on a DBA’s time or abilities.


10. Richard Foote - December 15, 2007

Hi Howard

Thanks for sharing your thoughts and opinions. I don’t agree with them all but thanks for your comments nonetheless.


11. Oracle Myth Busting: Show, Don’t Tell | Structured Data - December 17, 2007

[…] Foote has recently started blogging (as of December 11th) and one of his recent posts discusses Oracle Myths and Information Pollution. I find this topic very interesting as I’m always amazed at the number of people who make […]


12. Ray - December 17, 2007

Your observations on human behavior are accurate. Your blog ranks with Machiavelli’s The Prince for its accurate description (“… People are often either too lazy or too busy or both to want a complex answer to a complex question…. “) of how and what people really do. The blog is only partially technical; it deals with the ‘wet ware’ also.


13. Doug's Oracle Blog - December 17, 2007

[…] I need to tread carefully, in case I get ticked off by Dave Edwards, but there’s some seriously good stuff floating around this week. The more the merrier, I say, particularly when they’re pushing up […]


14. Ken Atkins - December 18, 2007

We just got bit by the 0.1%….

There is a lot of ranting about how everyone believes the myth and how the rules should be proven…. I would like to see more about how to detect the 0.1% than about how we should not rebuild indexes for the 99.9%…. I thought I would post one case where the indexes *did* need to be rebuilt.

We have a true 24×7 database with NO scheduled downtime…. There are some processing/audit tables that keep a rolling 6 days worth of data (not tons of data, something like 100,000 rows/day in the worst tables). A purge job runs every day to purge the oldest day, etc.

Our shop does *not* automatically reorg indexes, and the DBAs know it is not normally needed, so do not even really think about it…. We started having some *major* performance problems with purging one of the tables (which was a parent to some other tables). It was taking hours to purge when it took minutes before. Because this table was a parent to many other tables being purged, and different children had different retention periods, the delete statement had multiple WHERE NOT EXISTS clauses to only purge data where the children had already been purged…..

When the DELETE statement that purges the data was converted to a SELECT COUNT(*), it returned in seconds…. We were wondering if some sort of contention was occurring (the table being purged is constantly updated by an ongoing process), but the child tables are also updated and did not have the problem. We rewrote the purge code to commit more often, and made other changes, without affecting the performance. When we ran the purge in a test database, it ran fine…. The DBAs were pulling out their hair until they noticed that the size of the indexes was too large for the amount of data in the tables, and remembered this old “myth”, and decided to try rebuilding the indexes on these tables. The DELETE with the multiple NOT EXISTS went from running for 2 hours to delete 30,000 records, to deleting over 100,000 records in minutes. Simply by rebuilding the indexes…. Now actually, we still have no idea WHY this happened! The table being purged has 10 indexes, so maybe the delete on these indexes was the problem… Or maybe the way the NOT EXISTS clauses read indexes in the child tables (though why then would the SELECT COUNT(*) be fast?), or maybe the combination of the two…. If I had time, I might do some test cases… But the DBAs have fixed the problem, we are setting up a job to simply rebuild these indexes (not all the tables, just these few) and we will have learned a lesson.
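For readers following along, the purge pattern Ken describes would look roughly like this (the table and column names are my invention, not his actual schema):

```sql
-- Hypothetical sketch of a rolling-window purge, where a parent row is
-- only deleted once all longer-lived child rows have already been purged.
DELETE FROM audit_parent p
WHERE  p.audit_date < SYSDATE - 6
AND    NOT EXISTS (SELECT NULL
                   FROM   audit_child_a a
                   WHERE  a.parent_id = p.parent_id)
AND    NOT EXISTS (SELECT NULL
                   FROM   audit_child_b b
                   WHERE  b.parent_id = p.parent_id);
COMMIT;
```

Deletes like this, repeated daily against the trailing edge of the data while new rows arrive at the leading edge, are exactly the kind of pattern that can leave index leaf blocks sparsely populated.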

I am not trying to debunk the debunking…. I agree that the default state should *not* be to rebuild indexes automatically, and I love Tom Kyte’s work because he *always* proves his claims with code. I just wanted to

PS. This was in Oracle 10 R2, on a linux box, and we do not have the table partitioning option on that database (the easy way to purge).


15. Pradip - December 18, 2007

A similar one would be Tuning/SQL Tuning. I wouldn’t say that these are all myths, but each TIP works in one way or another depending on… blah blah.

Observation No 1: All the myths and tips are (somewhat of) a starting point (if you are wondering where to start, both in terms of reading the literature and/or doing the experiments/trial and error, or climbing the learning curve). They are not the only step, nor the last. This is true even if highly paid/guru-looking Oracle experts etc. say “do this and you are done”.
(I don’t blame them, as clients expect them to have the solution up their sleeves the moment they step in, and expect them to mouth pearls of wisdom.)

2nd Observation: In the dog’s life of a consultant, he is not expected to TEST/OBSERVE/CONCLUDE (and repeat the process!!!). He is expected to solve the problem. And he ALREADY knows the STUFF!

But certainly it is a good thing to put all the myths and tips and Oracle recommendations through some “periodic” grind and test, and publish the test data and results as CASE STUDIES. Maybe Oracle and the Oracle user groups should do that.

On a second issue: I find FORUMS useless to some extent. Also, Metalink should separate the good docs from the bad ones. It has lots of good stuff, but it should be cleaned of the notes that are not relevant or have expired.


16. Richard Foote - December 18, 2007

Ray, thanks for your comments, appreciated.

Ken, study this presentation:

index-internals-rebuilding-the-truth.pdf

It discusses how you can determine the 0.1%, the characteristics of the indexes that actually cause fragmentation. Note you’re possibly meeting the criteria listed on page 145, because your delete operations may not be fully cleaning out leaf blocks. The min select example on page 132 shows what can happen if the delete rate doesn’t match the subsequent insert rate.

However, as you’re not exactly sure what’s actually going on and what the cause of these excessive waits might be, I would strongly recommend running a level 12 10046 trace (or equivalent) and see exactly what everything is waiting on and what actual bind values (if any) are being used by the delete process.

Find out exactly what the issue is and if it’s due to excessive index related I/Os (or due to an index not being used as it’s now deemed too expensive by the CBO), then at least you’ll know for sure.
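For anyone unsure how to capture such a trace, two common ways of enabling it are sketched below (the SID and serial# values are placeholders you would look up in V$SESSION):

```sql
-- Classic event syntax, for your own session: level 12 captures both
-- wait events and bind variable values.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the problem DELETE here, then switch tracing off:
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Or (10g onwards) the supported API for tracing another session:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);
```

The resulting trace file (written to user_dump_dest) can then be summarised with tkprof to see exactly where the time is going.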

Pradip, I agree with your general comments that you need to be careful with what (and who) one reads and believes. I also agree that it’s important to know your stuff when responsible for databases which in turn goes back to knowing what to read and believe.

But importantly, one also needs to dedicate time and energy to “sharpening the saw”, to investigating and determining what is practical and appropriate in your environment(s) and what isn’t, what works and what doesn’t. That again goes back to knowing what to read and believe, and how to sort fact from impracticality.

I personally find most forums useful both in learning and picking ideas and initiatives and also into getting an insight into what’s important to other folk in Oracle and what issues others experience.

I also agree that Metalink could be improved but it’s a lot better resource than what other technologies offer.


17. Ken Atkins - December 18, 2007

Richard, thanks for the link to the article…. I will look it over and give it to the DBAs (I am an application architect). It is very useful to “really” understand what is going on.


18. Richard Foote - December 18, 2007

Hi Ken

No worries. If they (or you of course) have any questions, please don’t hesitate to ask 🙂


19. dizwell - December 19, 2007

If you could just confine yourself to technical answers to technical questions for this one, I’d be grateful.

I like the idea of having a checklist to identify the 0.1%, so could you please just clarify what “Large selectivity” means (on your page 145 of the presentation you referred to above)?

I would take “having large selectivity” to mean ‘being highly selective’. I’d say a unique index was highly selective: one key value, one table row. But elsewhere in your presentation you talk about one leaf node acquiring lots of AAA entries, so I’m wondering if you really meant ‘low selectivity’: one key value, lots of table rows.

I’m just wondering in either case what the rebuild is going to change, since the rebuild isn’t going to affect whether the table has lots of AAA entries or not. Presumably, therefore, it’s a leaf entry redistribution effect that would be achieved.


20. Richard Foote - December 19, 2007

Howard, “large selectivity” is attempting to make reference (and I admit not particularly clearly on the slide) to the overall percentage of rows within a table that is processed by index access paths within execution plans. This then corresponds to the cardinality or the actual number of rows processed in the step.

The higher the cardinality within an index access path, the greater the number of index row entries that need to be accessed, the greater the number of index blocks that may need to be read and the greater therefore the potential impact of the index in the overall costings if the index is poorly fragmented.

If only one row is read via an index access path in an execution plan, it’s extremely unlikely that a rebuild of the index would make any difference at all to the performance of the corresponding SQL.

However, if 100,000 rows are read via the index, the index has been heavily utilised, and if it were poorly fragmented, a rebuild might well make a difference.

The rebuild may help in the scenario with lots of AAAs because due to the manner of the 50-50 block splits, the associated leaf blocks are only 1/2 full. After the rebuild, the number of AAAs will obviously be the same, but the leaf blocks they’re stored in will be as full as specified by the pctfree value of the index, hence reducing the associated index fragmentation.

So for an index rebuild to actually make a performance improvement, we need both the index to be very poorly fragmented AND we need to have execution plans that use the index to read lots of rows.

This is best represented therefore by a subsequent reduction in the response times of such activities.
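The arithmetic behind this is easy to sketch. The following is a toy back-of-the-envelope in Python, where the leaf block capacity and fill percentages are assumed figures for illustration only, not Oracle internals:

```python
import math

# Assumed figures for illustration: a leaf block holding ~400 index entries
# when full, and a range scan processing 100,000 rows as in the example above.
ENTRIES_PER_FULL_BLOCK = 400
ROWS_READ = 100_000

def leaf_blocks_read(avg_fill, rows):
    """Leaf blocks that must be visited to fetch `rows` index entries
    when leaf blocks are, on average, `avg_fill` full."""
    return math.ceil(rows / (ENTRIES_PER_FULL_BLOCK * avg_fill))

fragmented = leaf_blocks_read(0.50, ROWS_READ)  # blocks left ~half full by 50-50 splits
rebuilt    = leaf_blocks_read(0.90, ROWS_READ)  # repacked to ~90% full (pctfree 10)
one_row    = leaf_blocks_read(0.50, 1)          # a single-row lookup

print(fragmented, rebuilt, one_row)  # 500 278 1
```

A half-full index roughly doubles the leaf blocks the 100,000-row scan must visit (500 vs 278 here), while the single-row lookup reads one leaf block either way, which is why only the poorly fragmented, high-cardinality case stands to benefit from a rebuild.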


21. dizwell - December 19, 2007

Thank you for the clarification.

I personally would have called the reading of lots of entries from the index ‘low selectivity’, and hence I’d personally change your slide to read ‘little selectivity’, but provided we’re clear it simply means ‘read lots of entries’, I guess it doesn’t matter much.


22. Richard Foote - December 20, 2007

Howard, glad the clarification made some sense.


23. Joseph Schwertner - December 28, 2007

This is very durty article of an unexperienced Oracle user.
1. Everybody who has studied “Data Structures and Algorithms” at university knows what a “lazy delete” of entries in a tree means. Indexes are trees, and “lazy delete” means marking the deleted entries as “deleted” but not deleting them physically from the hard disk (balancing takes time!). This phenomenon is described as experience in comments on Metalink – suddenly the DBA observes an index 2-3 times bigger than expected …

2. If you had more experience in maintaining Oracle instances (especially on Unix/Linux) you would know that due to many changes on disk drives and Oracle bugs (yes – there are maaaaaaany Oracle bugs!!!!) indexes very often get corrupted (logically, and seldom physically). This is a sad fact, but this is reality and there is no excuse for the DBA when the system refuses to work ….

In both cases regular rebuilding of indexes helps.
Of course we must also measure the time and find the best period in which we can rebuild the indexes.
But if you like the risk (or have quality disks and perfect Oracle support) – please, do not rebuild … This is the beauty of Oracle – the choice is yours …


24. Richard Foote - December 28, 2007

Hi Joseph

I’m sorry but what’s a “durty article” ?

You may just want to check out this presentation and see how Oracle generally automatically deals with and cleans out deleted data without the need for periodic rebuilds:

Click to access index-internals-rebuilding-the-truth.pdf

Balancing takes no time, none, as Oracle B-tree indexes are automatically balanced, all of the time.

This is Oracle, not some University algorithm we’re talking about here, right?

And how precisely does periodic rebuilding protect one from corruptions? Do these “quality disks” you mention somehow improve with overuse 😉

BTW, I’ve been managing mission critical Oracle/Unix environments for many years, and the number of corrupted indexes we get is, ummm, take a guess …


25. Howard Rogers - December 30, 2007

I wasn’t going to say anything, but I will: the idea that you have to rebuild regularly because there are so many bugs in Oracle that indexes are forever getting corrupted is one of the silliest I’ve encountered. It’s complete crapola, in fact, and Richard’s count of zero corrupted indexes in his DBA career would match mine, byte for byte.

I can’t stand this “the choice is yours” crap, either, as if Choice A and Choice B were equally valid and meritorious. Yes, Oracle gives you freedom: freedom to do things well or freedom to do things really badly. Regular rebuilding of indexes (unless you genuinely have the downtime to do it and the proof that it makes no difference to subsequent performance) is BAD, unambiguously and utterly BAD. You have the choice to do it, yes, but you would be foolish, silly and insane to make that choice.

The technical evidence is in, all 166 pages of it. You don’t actually have to read all 166 pages of Richard’s Powerpoint to get the headlines: don’t regularly rebuild indexes for it makes absolutely no sense to do so, lazy deletes and fantasy bugs and corruptions notwithstanding.

That just happens to be the fact of the matter. The rest is down to your idiosyncratic view of what constitutes free choice.


26. Oracle myths and information pollution revisited « H.Tonguç YILMAZ Oracle Blog - January 6, 2008

[…] Oracle myths and information pollution revisited Filed under: Oracle How To, Oracle Other — H.Tonguç Yılmaz @ 9:25 am This post is parallel to these two very valuable recent posts; http://structureddata.org/2007/12/16/oracle-myth-busting-show-dont-tell/ https://richardfoote.wordpress.com/2007/12/12/why-are-there-so-many-oracle-related-myths-the-inconven… […]


27. mattyb - January 11, 2008

Not sure how it works in other companies, but we are now billing our clients for CPU use, network usage and disk space used, among other metrics. One cocky client in particular has done some research and has suggested index rebuilds followed by usage of the shrink command (we have Oracle 10g). So far my little experiments haven’t shown large space savings, but maybe for the bigger databases (thinking Siebel or SAP stuff) there might be quite a benefit.


28. Richard Foote - January 11, 2008

Matty, rebuilding indexes can often save some space, but for just how long is the question.

This can all become a little bit like a boxer or jockey before an event, when they do everything possible to be as light as possible for a period of time (starve, sauna, etc.) only to eventually increase back to their “normal” weight in the following days.

From your client’s point of view, it might be worth timing “putting the database in a sauna” just before you guys arrive with your storage scripts.

Of course, if I were you, I would just make sure you place all segments in a Locally Managed Tablespace with say a 10G Uniform extent size and just watch the monies roll in.

Would work a treat on a SAP database 😉


29. mattyb - January 11, 2008

The DBA team actually talked about this a bit after I posted. It seems that the client is going to ask that we run the rebuild/shrink/resize type operations *JUST* before we calculate their space usage for the month. Not all clients are as dumb as we think they are.

Reckon that I’ll take up carpentry.


30. I Didn’t Know That 3 – What is Wrong with this Quote? « Charles Hooper's Oracle Notes - December 12, 2010

[…] https://richardfoote.wordpress.com/2007/12/12/why-are-there-so-many-oracle-related-myths-the-inconven… (excellent article, includes a link to Richard Foote’s Index Internals – Rebuilding the Truth PDF) […]

