
An Index or Disaster, You Choose (It’s The End Of The World As We Know It) October 30, 2015

Posted by Richard Foote in Oracle Indexes, skip_unusable_indexes.

This came up recently with a customer, and I saw a related question on OTN, which made me realize I haven’t previously discussed the SKIP_UNUSABLE_INDEXES parameter.

Let me start by making a very simple point. All it potentially takes to make your entire database effectively inoperable or indeed to maybe bring down a RAC instance is for one teeny weeny index to become unusable. That’s it.

If the unusable/missing index is used often enough on a table that is big enough, the resultant explosion of load on the database generated by 10,000s of (say) massive Full Table Scans can be catastrophic for all users of the database (or indeed for all users of the disk subsystem or database servers, as in the case of my customer).

Sometimes the lesser evil would be to simply ensure the statement(s) that were reliant on the now unusable index are prevented from executing and causing general havoc, so that all other database activities can continue unabated.

Hence the SKIP_UNUSABLE_INDEXES parameter. When set to TRUE (the default), the CBO simply ignores the fact that an index it would have used is in an unusable state and chooses the next best available plan (which might be a horrendous plan, such as a FTS on a massive table).

If however SKIP_UNUSABLE_INDEXES is set to FALSE, then an error is generated whenever an index the CBO would have used is in an unusable state. Yes, such statements no longer run, causing possible frustration for some end users, but at least the “damage” is limited to just those users, with the rest of the database community blissfully unaware of and unaffected by the unusable index.
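As a quick aside, if you want to confirm how a given database is currently configured (this check isn’t part of the demo that follows, just a standard SQL*Plus command), the following will do the trick and should report TRUE by default:

SQL> show parameter skip_unusable_indexes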

To illustrate, a simple little example. Let’s just create a table and populate it with 1M rows. Note that the CODE column has 100,000 distinct values, so 10 rows are returned for each value.

SQL> create table ziggy (id number, code number, name varchar2(42), description varchar(42));

Table created.

SQL> insert into ziggy select rownum, mod(rownum,100000), 'ZIGGY STARDUST', 'AND THE SPIDERS FROM MARS' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY');

PL/SQL procedure successfully completed.
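As a quick sanity check (not part of the original run), the following should confirm the table does indeed contain 1M rows with 100,000 distinct CODE values, i.e. 10 rows per value:

SQL> select count(*), count(distinct code) from ziggy;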

Let’s create an index on this column as this would be a really cheap way to find a particular CODE of interest:

SQL> create index ziggy_code_i on ziggy(code);

Index created.

So a query that just selects a CODE of interest is going to be nice and efficient:

SQL> select * from ziggy where code=42;

10 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3294205578

----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |    10 |   510 |    13   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| ZIGGY        |    10 |   510 |    13   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | ZIGGY_CODE_I |    10 |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
0  recursive calls
0  db block gets
14  consistent gets
0  physical reads
0  redo size
1325  bytes sent via SQL*Net to client
551  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
10  rows processed

The index is used as expected and at just 14 consistent gets, it burns little CPU and performs no physical IOs.

If however the index were to become unusable …

SQL> alter index ziggy_code_i unusable;

Index altered.
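To confirm how the index now looks in the data dictionary (again, just a quick check, not from the original run), a query on USER_INDEXES should now report a STATUS of UNUSABLE:

SQL> select index_name, status from user_indexes where index_name = 'ZIGGY_CODE_I';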

And we now re-run the query:

SQL> select * from ziggy where code=42;

10 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2421001569

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |    10 |   510 |  2122   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| ZIGGY |    10 |   510 |  2122   (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("CODE"=42)

Statistics
----------------------------------------------------------
0  recursive calls
0  db block gets
7765  consistent gets
7760  physical reads
0  redo size
965  bytes sent via SQL*Net to client
551  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
0  sorts (memory)
0  sorts (disk)
10  rows processed

We can see the query is now using the next best execution plan, involving a FTS on our “large” table, which performs many more consistent gets and physical IOs. If this query (or others also dependent on the now unusable index) were executed frequently enough and if the table were big enough, the resultant increase in both CPU and IO could be crippling to the entire database environment.

Rather than continually running this now extremely costly query, perhaps the better alternative is to prevent this query from running at all.

We can do this by changing the SKIP_UNUSABLE_INDEXES parameter to FALSE:

SQL> alter system set skip_unusable_indexes=false;

System altered.
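Note that if changing this system-wide is considered too heavy-handed, the parameter can also be set at the session level, restricting the new behaviour to just the sessions you nominate:

SQL> alter session set skip_unusable_indexes=false;

Session altered.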

If we try now to execute this query:

SQL> select * from ziggy where code=42;
select * from ziggy where code=42
*
ERROR at line 1:
ORA-01502: index 'BOWIE.ZIGGY_CODE_I' or partition of such index is in unusable
state

We get our beloved (in this scenario) ORA-01502 error. We now prevent 10,000s of expensive FTSs and the rest of the database remains unaffected.
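And of course, once the root cause has been addressed, the longer term fix is simply to rebuild the index and (if it was changed system-wide) set the parameter back again, along the lines of:

SQL> alter index ziggy_code_i rebuild;

Index altered.

SQL> alter system set skip_unusable_indexes=true;

System altered.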

So you have a choice in what to do here. In the event of an unusable index, is there a sufficiently “good enough” alternative plan that makes executing the statements impacted by the unusable index still viable, or is the resultant impact on the database just too great, making preventing such statements from executing at all the better option?

It’s potentially a quick fix for a dire database emergency.

Richard Foote: Upcoming Presentation Events (David Live) October 20, 2015

Posted by Richard Foote in Oracle Indexes.

I’ll be doing the rounds in Australia/NZ in the coming weeks, so there’s plenty of opportunity to catch up and see me in action 🙂

I’ll be doing a “National Tour” of my Let’s Talk Oracle Database sessions that I’ve been running locally in Canberra for a number of years. All events have limited places available so please register early to avoid disappointment.

All Let’s Talk Oracle Database events are free, start with registration and coffee from 8:30am to 9:00am, conclude at 12:30pm and are followed by lunch, networking and an informal Q&A. Details of events and dates are:

11 November Brisbane: Let’s Talk Oracle Database: Oracle 12c Database New Features (click here to register). Note: This event has only a few places remaining.

17 November Canberra: Let’s Talk Oracle Database: Performance Diagnostics – How To Correctly Use AWR Reports (email richard.foote@oracle.com to register). Note: This event is now officially FULL.

19 November Sydney: Let’s Talk Oracle Database: Oracle 12c Database New Features (email mitch.foster@oracle.com or click here to register)

25 November Adelaide: Let’s Talk Oracle Database: Performance Diagnostics – How To Correctly Use AWR Reports (email brodie.james@oracle.com to register).

I’ll also be participating in the OTN APAC 2015 Tour with a number of other great presenters. I’ll be presenting at the following events:

30 November Sydney (click here to register)

2 December Perth (click here to register)

4 December Wellington (click here to register)

Summary of the Let’s Talk Oracle Database sessions:

Oracle 12c Database New Features

An in-depth technical look at many of the exciting enhancements and new capabilities introduced in the Oracle 12c Database that help make life easier for the DBA and Oracle Developer. Among a number of topics, we’ll have a look at changes with respect to the Cost Based Optimizer and new statistics gathering options, discuss many of the new indexing-related capabilities such as Partial Indexes, Advanced Index Compression, Asynchronous Global Index Maintenance, etc. and look at many of those cool little things such as In-Database Archiving, Automatic Data Optimization, Identity Columns, Row-Limiting Clauses, Temporary Undo, Online Move operations, Clustering Attributes, Temporal and Improved Flashback Queries etc. to name but a few that can sometimes make all the difference. Might even squeeze in a bit of Database In-Memory. So plenty to talk about !!!

Performance Diagnostics – How To Correctly Use AWR Reports

I often get called in by customers to determine and address the root cause of database performance issues. Depending on the issue, a request for a simple Automatic Workload Repository (AWR) report is often sufficient to accurately diagnose the root problem(s). However, many DBAs find AWR reports daunting, as they contain so much database diagnostic information that they don’t even know where to begin to find the truly useful information.

In this session, I begin by discussing the basic concepts of response times, DB times, the Oracle Wait Interface and how to focus on what is truly important. I then look at how to best read an AWR report to quickly go to the most relevant sections that detail any specific issues. I also discuss a general tuning and diagnostic methodology that ensures one can quickly determine whether an AWR report will indeed be sufficient and how to accurately and consistently use the AWR report to pinpoint and determine root causes for global database performance issues. We’ll go through a number of “real-life” examples that highlight various performance issues and how one accurately determines the root issues through an appropriate AWR report.

Time permitting, we’ll go through some other useful diagnostic tools such as the AWR Warehouse, Automatic Database Diagnostic Monitor (ADDM), Active Session History (ASH) and SQL Extended Tracing and scenarios when they can prove to be invaluable in addressing database performance issues.

Hope to see you at one of these events 🙂