
Quiz Time. Why Do Deletes Cause An Index To Grow? (Up The Hill Backwards) June 25, 2015

Posted by Richard Foote in Oracle Indexes.

OK, time for a little quiz.

One of the things I’ve seen at a number of sites is the almost fanatical drive to make indexes as small as possible because indexes that are larger than necessary both waste storage and hurt performance.

Or so the theory goes …   🙂

In many cases, this drives DBAs to create or rebuild indexes with a PCTFREE set to 0 as this will make the index as compact and small as possible.

Of course, this is often the very worst setting for keeping an index small, because the insert of a new index entry into a full leaf block is likely to cause a 50-50 block split, resulting in two 1/2 empty leaf blocks (unless the new entry is the current maximum value). Before very long, the index is back to a bloated state and in some sad scenarios, the process is repeated again and again.
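The mechanics can be sketched with a toy model (purely illustrative Python, not Oracle internals): a leaf block packed to PCTFREE 0 has no room for the next entry, so the very next insert into it must split the block.

```python
# Toy model of a B-tree leaf block split (illustrative only, not Oracle code).
# A leaf filled with pctfree 0 has no free space, so any insert into that
# leaf forces a split.

def insert_into_full_leaf(keys, new_key, capacity):
    """Insert new_key into a full leaf of the given capacity, splitting it."""
    assert len(keys) == capacity, "leaf is full: any insert forces a split"
    keys = sorted(keys + [new_key])
    if new_key == keys[-1]:
        # 90-10 split: a new maximum key goes into a fresh, nearly empty block,
        # leaving the original block untouched and full
        return keys[:-1], keys[-1:]
    # 50-50 split: entries are divided roughly in half, leaving BOTH
    # resulting blocks about half empty
    mid = len(keys) // 2
    return keys[:mid], keys[mid:]

full_leaf = list(range(533))                 # 533 entries per full leaf
left, right = insert_into_full_leaf(full_leaf, 250, capacity=533)
print(len(left), len(right))                 # → 267 267: two half-empty leaves
```

So a single non-maximum insert into a PCTFREE 0 leaf instantly gives you back the ~50% free space you were trying to avoid, now spread over two blocks.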

A point that is often missed, though, is that it doesn't even take an insert to cause the index to expand out. A few delete statements are all that's required.

To illustrate, I create my favorite little table and populate it with a few rows:

SQL> create table bowie (id number, name varchar2(42));

Table created.

SQL> insert into bowie select rownum, 'DAVID BOWIE' from dual connect by level <=10000;

10000 rows created.

SQL> commit;

Commit complete.

OK, time to create an index but because I’m paranoid about having indexes larger than necessary, I’ll create it with a pctfree of 0:

SQL> create index bowie_id_i on bowie(id) pctfree 0;

Index created.

If we look at a tree dump of the index:

SQL> select object_id from dba_objects where object_name='BOWIE_ID_I';

 OBJECT_ID
----------
     98884

SQL> alter session set events 'immediate trace name treedump level 98884';

Session altered.

----- begin tree dump
branch: 0x180050b 25167115 (0: nrow: 19, level: 1)
leaf: 0x180050c 25167116 (-1: row:540.540 avs:4)
leaf: 0x180050d 25167117 (0: row:533.533 avs:10)
leaf: 0x180050e 25167118 (1: row:533.533 avs:11)
leaf: 0x180050f 25167119 (2: row:533.533 avs:10)
leaf: 0x1800510 25167120 (3: row:533.533 avs:10)
leaf: 0x1800511 25167121 (4: row:533.533 avs:11)
leaf: 0x1800512 25167122 (5: row:533.533 avs:10)
leaf: 0x1800513 25167123 (6: row:533.533 avs:10)
leaf: 0x1800514 25167124 (7: row:533.533 avs:11)
leaf: 0x1800515 25167125 (8: row:533.533 avs:10)
leaf: 0x1800516 25167126 (9: row:533.533 avs:10)
leaf: 0x1800517 25167127 (10: row:533.533 avs:11)
leaf: 0x1800519 25167129 (11: row:533.533 avs:10)
leaf: 0x180051a 25167130 (12: row:533.533 avs:10)
leaf: 0x180051b 25167131 (13: row:533.533 avs:11)
leaf: 0x180051c 25167132 (14: row:533.533 avs:10)
leaf: 0x180051d 25167133 (15: row:533.533 avs:10)
leaf: 0x180051e 25167134 (16: row:533.533 avs:11)
leaf: 0x180051f 25167135 (17: row:399.399 avs:2019)
----- end tree dump

We note the index has just 19 leaf blocks and that most leaf blocks hold 533 index entries, with an avs (available free space) of only some 10 or 11 bytes. Only the last leaf block is partly full, with some 2019 free bytes.

That's fantastic: the index really is as small as can be. Trying to use index compression would be futile, as the indexed values are effectively unique.
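A quick back-of-envelope check of the tree dump figures (the ~15 bytes per entry and ~8000 usable bytes per 8K block are my rough assumptions, not Oracle-documented numbers):

```python
# Leaf-entry counts taken straight from the tree dump above.
leaf_entries = [540] + [533] * 17 + [399]

assert len(leaf_entries) == 19      # 19 leaf blocks in total
assert sum(leaf_entries) == 10000   # one index entry per table row

# Rough capacity sanity check (assumed figures): a small numeric key plus a
# 6-byte rowid plus per-entry overhead comes to roughly 15 bytes, so an 8K
# block at pctfree 0 fits on the order of 530+ entries.
approx_entry_bytes = 15
approx_usable_block_bytes = 8000
print(approx_usable_block_bytes // approx_entry_bytes)  # → 533
```

The estimate lines up nicely with the 533 entries per leaf block seen in the dump.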

I’m now going to delete just a few rows. Surely deleting rows from the table (and hence entries from the index) can only have a positive impact (if any) on the index structure.

In one session, I delete some 20 odd rows:

SQL> delete bowie where id in (select rownum*499 from dual connect by level<=20);

20 rows deleted.

Meanwhile, in another session, I delete another 20 different rows:

SQL> delete bowie where id in (select rownum*500 from dual connect by level<=20); 

20 rows deleted.

I then commit the delete statement in both sessions.

So I've just deleted some 40 rows in total out of the 10000 rows in the table.
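As an aside, a little arithmetic (outside the database, using the entry counts from the first tree dump) shows how those 40 deleted ids are spread across the original leaf blocks:

```python
# The two sessions' deleted ids, from the delete statements above.
session1 = {i * 499 for i in range(1, 21)}   # 499, 998, ..., 9980
session2 = {i * 500 for i in range(1, 21)}   # 500, 1000, ..., 10000
assert len(session1 | session2) == 40

# Map each deleted id to its original leaf block: the first leaf held
# ids 1-540, the following leaves 533 ids each (per the first tree dump).
leaf_sizes = [540] + [533] * 17 + [399]
leaves, lo = [], 1
for n in leaf_sizes:
    leaves.append(range(lo, lo + n))
    lo += n

touched = {next(i for i, r in enumerate(leaves) if d in r)
           for d in session1 | session2}
print(len(touched))   # → 19: every single leaf block saw at least one delete
```

A mere 40 deletes, but spread so evenly that no leaf block is left untouched. Remember that when looking at the next tree dump.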

If I look at a tree dump of the index now:

----- begin tree dump
branch: 0x180050b 25167115 (0: nrow: 37, level: 1)
leaf: 0x180050c 25167116 (-1: row:278.278 avs:3931)
leaf: 0x1800524 25167140 (0: row:262.260 avs:4049)
leaf: 0x180050d 25167117 (1: row:271.271 avs:3938)
leaf: 0x1800525 25167141 (2: row:262.260 avs:4048)
leaf: 0x180050e 25167118 (3: row:271.271 avs:3938)
leaf: 0x1800526 25167142 (4: row:262.260 avs:4049)
leaf: 0x180050f 25167119 (5: row:271.271 avs:3937)
leaf: 0x1800527 25167143 (6: row:262.260 avs:4049)
leaf: 0x1800510 25167120 (7: row:271.271 avs:3938)
leaf: 0x1800520 25167136 (8: row:262.260 avs:4048)
leaf: 0x1800511 25167121 (9: row:271.271 avs:3938)
leaf: 0x1800521 25167137 (10: row:262.260 avs:4049)
leaf: 0x1800512 25167122 (11: row:271.271 avs:3937)
leaf: 0x1800522 25167138 (12: row:262.260 avs:4049)
leaf: 0x1800513 25167123 (13: row:271.269 avs:3914)
leaf: 0x1800523 25167139 (14: row:262.262 avs:4072)
leaf: 0x1800514 25167124 (15: row:271.269 avs:3914)
leaf: 0x1800529 25167145 (16: row:262.262 avs:4073)
leaf: 0x1800515 25167125 (17: row:271.269 avs:3913)
leaf: 0x180052a 25167146 (18: row:262.262 avs:4073)
leaf: 0x1800516 25167126 (19: row:271.269 avs:3914)
leaf: 0x180052e 25167150 (20: row:262.262 avs:4072)
leaf: 0x1800517 25167127 (21: row:271.269 avs:3914)
leaf: 0x180052f 25167151 (22: row:262.262 avs:4073)
leaf: 0x1800519 25167129 (23: row:271.269 avs:3913)
leaf: 0x180052b 25167147 (24: row:262.262 avs:4073)
leaf: 0x180051a 25167130 (25: row:271.269 avs:3914)
leaf: 0x180052c 25167148 (26: row:262.262 avs:4072)
leaf: 0x180051b 25167131 (27: row:271.269 avs:3914)
leaf: 0x180052d 25167149 (28: row:262.260 avs:4049)
leaf: 0x180051c 25167132 (29: row:271.271 avs:3937)
leaf: 0x1800534 25167156 (30: row:262.260 avs:4049)
leaf: 0x180051d 25167133 (31: row:264.264 avs:4042)
leaf: 0x1800535 25167157 (32: row:269.267 avs:3944)
leaf: 0x180051e 25167134 (33: row:271.271 avs:3938)
leaf: 0x1800536 25167158 (34: row:262.260 avs:4049)
leaf: 0x180051f 25167135 (35: row:399.397 avs:1995)
----- end tree dump

I notice the index has basically doubled in size (37 leaf blocks, up from the original 19), with most index leaf blocks now 1/2 empty.
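We can confirm the doubling directly from the two tree dumps. Reading the new dump in key order, each consecutive pair of leaf blocks accounts for exactly the contents of one original leaf block, split in two; only the final, already partly-empty leaf never split. Checking the total entry counts (the X in each row:X.Y figure):

```python
# Total entry counts (the X in row:X.Y) from the two tree dumps above.
before = [540] + [533] * 17 + [399]                                  # 19 leaves
after = [278, 262] + [271, 262] * 15 + [264, 269, 271, 262] + [399]  # 37 leaves

# Each consecutive pair of new leaves should hold one old leaf's entries;
# the last (partly empty) leaf never split.
pair_sums = [after[i] + after[i + 1] for i in range(0, len(after) - 1, 2)]
pair_sums.append(after[-1])

assert len(after) == 37
assert pair_sums == before        # every full leaf split exactly in two
assert sum(after) == 10000        # no entries lost, just spread twice as wide
```

So every one of the 18 full leaf blocks underwent a 50-50 style split, despite no inserts being performed at all.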

So, the key question: how could deleting just a handful of rows result in my beautifully compact index bloating to double the size???

I’ve given away one clue within the post. I’ll post the answer in the coming days 🙂