
2019 Public Appearances (What In The World) August 15, 2019

Posted by Richard Foote in Oracle Indexes.

I’ll be presenting at a number of Oracle events over the remainder of the year. Details as follows:

Oracle Open World – San Francisco (16-19 September 2019)

  • Session Type: Conference
  • Session ID: CON1432
  • Session Title: Oracle Database 19c: In-Depth Look into the New Automatic Indexing Feature
  • Room: Moscone South – Room 152A
  • Date: 09/17/19
  • Start Time: 11:15:00 AM
  • End Time: 12:00:00 PM

Link: https://events.rainfocus.com/widget/oracle/oow19/catalogow19?

Trivadis Performance Days 2019 (26-27 September 2019)

I have the following 2 presentations:

  • Indexing With Partitioning
  • Oracle Database 18c and 19c New Indexing Related Features

Link: https://www.trivadis.com/en/training/performance-days-2019-tvdpdays

AUSOUG Connect 2019 Series (Melbourne 14 October, Perth 16 October)

I have the following presentation:

  • 10 Things You Might Not Know But Really Should About Oracle Indexes

Link: https://www.ausoug.org.au/whats-on/connect2019/

ANZ Let’s Talk Database Series

I have the following 2 presentations:

  • Oracle Database 19c New Features
  • Oracle Exadata X8 New Features

Following are the confirmed dates:

  • Canberra: 22 October 2019
  • Sydney: 23 October 2019
  • Melbourne: 24 October 2019
  • Brisbane: 29 October 2019
  • Auckland: 30 October 2019
  • Wellington: 31 October 2019

Registration links coming soon.

UKOUG Techfest19 Brighton, UK (1-4 December 2019)

I have the following 3 presentations:

  • Oracle Database 19c New Features (Tuesday, 3 December 11:00-11:45am)
  • Oracle Indexing Q&A with Richard Foote (Tuesday, 3 December 2:45-3:30pm)
  • 10 Things You Might Not Know But Really Should About Oracle Indexes (Wednesday, 2:45-3:30pm)

Link: https://ukoug.org/page/techfest19

Hopefully I can catch you at one of these events. Don't be shy, please say hi 🙂


Announcement: “Oracle Performance Diagnostics and Tuning” Webinar – 20-23 August 2019. July 30, 2019

Posted by Richard Foote in Oracle Performance Diagnostics and Tuning Webinar.


I have just scheduled a new "Oracle Performance Diagnostics and Tuning" webinar to run between 20-23 August 2019, specifically for my friends in New Zealand, but it is open to anyone.

It will run between 7am-11am AEST each day, which is perfect for those in NZ, but also for those in other parts of the world such as the western USA.

This is a must-attend webinar aimed at Oracle professionals (both DBAs and Developers) who are interested in Performance Tuning. It details how to maximise the performance of both Oracle databases and associated applications, and how to diagnose and address any performance issues as quickly and effectively as possible.

A unique feature of this training is that you can submit YOUR own AWR reports, so that the tuning methodology and techniques discussed in the webinar can be applied directly to determining and addressing performance issues with YOUR own databases.

For all webinar content and how to register, please visit: https://richardfooteconsulting.com/performance-tuning-seminar/

As always, places are strictly limited and can be purchased below:

Webinar Series 20-23 August 2019 (start 7am AEST, end 11am AEST)

If you have any questions at all, please contact me at richard@richardfooteconsulting.com

Oracle 19c Automatic Indexing: Configuration (All I Need) July 29, 2019

Posted by Richard Foote in Automatic Indexing, AUTO_INDEX_COMPRESSION, AUTO_INDEX_DEFAULT_TABLESPACE, AUTO_INDEX_MODE, AUTO_INDEX_REPORT_RETENTION, AUTO_INDEX_RETENTION_FOR_AUTO, AUTO_INDEX_RETENTION_FOR_MANUAL, AUTO_INDEX_SCHEMA, AUTO_INDEX_SPACE_BUDGET, DBA_AUTO_INDEX_CONFIG, DBMS_AUTO_INDEX.CONFIGURE, Oracle Indexes, Oracle19c, SMB$CONFIG.


In this post, I’ll discuss how to configure the new Automatic Indexing capability introduced in Oracle Database 19c.

The intent of Oracle here is to make the configuration of Automatic Indexing as simple as possible, with as few levers as possible for DBAs to potentially stuff up. The ultimate goal would be a single switch that just turns the feature on, with all necessary indexes then simply created/modified/dropped as required. It's not quite there yet, but it'll no doubt get closer with each new release.

By default, Automatic Indexing is turned OFF. To turn on these capabilities, you simply run the following using the DBMS_AUTO_INDEX.CONFIGURE procedure:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

PL/SQL procedure successfully completed.

That's it. Automatic Indexing is now enabled and, as discussed in the previous blog post on the Automatic Indexing Methodology, every 15 minutes the SYS_AUTO_INDEX_TASK background task will kick off and automatically create/replace/drop any database indexes as necessary.
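To confirm the current mode, a quick sanity check can be made against the documented DBA_AUTO_INDEX_CONFIG view (discussed in more detail below):

SQL> select parameter_name, parameter_value
     from dba_auto_index_config
     where parameter_name = 'AUTO_INDEX_MODE';

PARAMETER_NAME       PARAMETER_VALUE
-------------------- ---------------
AUTO_INDEX_MODE      IMPLEMENT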

Another option is to enable Automatic Indexing in 'REPORT ONLY' mode:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

PL/SQL procedure successfully completed.

This will only create new indexes as INVISIBLE indexes, which by default are not considered by the CBO. The intent here is that the DBA can investigate the newly created Automatic Indexes and decide whether turning on this feature for real would be a good idea. You need to exercise some caution with this option though, as the limited options for subsequently administering the created Invisible Automatic Indexes can be problematic. I'll discuss all this in more detail in a future post.
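A simple way to see what REPORT ONLY mode has been up to is to list any Invisible Automatic Indexes via the new AUTO column in USER_INDEXES. A minimal sketch, assuming the Automatic Indexes were created on tables in your own schema:

SQL> select index_name, visibility, status
     from user_indexes
     where auto = 'YES'
     and visibility = 'INVISIBLE';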

To turn off Automatic Indexing, simply set the AUTO_INDEX_MODE to ‘OFF’:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

PL/SQL procedure successfully completed.

Note here the documentation states “the existing auto indexes are disabled” which is incorrect. New Automatic Indexes will no longer be created, but existing Automatic Indexes will still be both Visible and Valid and available to the CBO for continued use.

By default, Automatic Indexing considers all tables in all “user created” schemas. However, this can be controlled with the AUTO_INDEX_SCHEMA option within the DBMS_AUTO_INDEX.CONFIGURE procedure. You can control which schemas to either explicitly include or exclude from Automatic Indexing considerations.

To add the BOWIE schema to an “Inclusion” list of schemas, you run the following:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA','BOWIE', TRUE);

PL/SQL procedure successfully completed.


To add the BOWIE schema to an “Exclusion” list of schemas, you run the following:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA','BOWIE', FALSE);

PL/SQL procedure successfully completed.


To remove the BOWIE schema from whichever list it belongs, you run the following:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'BOWIE', NULL);

PL/SQL procedure successfully completed.


To remove all schemas from the “Inclusion” list, you run the following:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', NULL, TRUE);

PL/SQL procedure successfully completed.


You can nominate the tablespace in which all Automatic Indexes are to be created by running the following:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE','INDEX_TS');

PL/SQL procedure successfully completed.

The INDEX_TS tablespace is now the location of all newly created Automatic Indexes.

You can also control how much of the configured Automatic Indexing tablespace is to be reserved for use by Automatic Indexes (default 50%) by running the following:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET', '42');

PL/SQL procedure successfully completed.

Now, only 42% of the INDEX_TS tablespace can be used by Automatic Indexes.


You can control the number of days (the default is 373) after which an Automatic Index deemed NOT to have been used is automatically dropped. The following command:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '42');

PL/SQL procedure successfully completed.

will automatically drop any Automatic Index that is deemed not to have been used in the last 42 days.

Note: Oracle uses the new Indexing Tracking feature introduced in Oracle 12.2 to determine if an index has/has not been used, which has limitations that could potentially result in an Automatic Index that has been “lightly” used during the retention period being dropped. This will be discussed in more detail in a future post.
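For reference, the following is a minimal sketch of how to eyeball what this Indexing Tracking feature has recorded, via the DBA_INDEX_USAGE view (noting this view is only flushed periodically, so very recent index usage may not yet appear):

SQL> select name, total_access_count, last_used
     from dba_index_usage
     where owner = 'BOWIE';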

A similar retention configuration can be implemented for manually created indexes. The following command:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL','42');

PL/SQL procedure successfully completed.

will automatically drop any manually created index that is deemed not to have been used in the last 42 days.

Note: the same Indexing Tracking limitations mean that manually created indexes lightly used during the last 42 days could also be automatically dropped.

I will discuss various undocumented implications of automatically dropping both Automatic and Manual Indexes in future posts.


Very importantly, the Automatic Indexing logs on which the Automatic Indexing Reports are based are by default only retained for just 31 days. To change the time before the Automatic Indexing logs are deleted, you run the following:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_REPORT_RETENTION','342');

PL/SQL procedure successfully completed.

The Automatic Indexing logs are now retained for 342 days. These Automatic Indexing reports (and hence logs) are critical for understanding what the Automatic Indexing feature is doing within the database and will be discussed extensively in future posts. I would recommend increasing the retention period from the default 31 days.
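To actually generate one of these reports, you can call the documented DBMS_AUTO_INDEX.REPORT_ACTIVITY function. A simple sketch, requesting a plain text report covering the last day of activity (the parameter values shown here are the documented defaults):

SQL> select dbms_auto_index.report_activity(
            activity_start => systimestamp - 1,
            activity_end   => systimestamp,
            type           => 'TEXT') report
     from dual;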


The final configuration option is not officially documented (yet) and controls whether or not Advanced Compression is used for Automatic Indexes. The default is that Advanced Compression is disabled, but this can be changed as follows:

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_COMPRESSION','ON');

PL/SQL procedure successfully completed.

All Automatic Indexes are now automatically compressed using Advanced Low Compression.

I believe this was at some stage going to be the default behaviour, but due to licensing considerations and the fact that many sites don't have the Advanced Compression Option, this was subsequently changed. If you do have Advanced Compression, I would strongly recommend turning this ON, as implementing Advanced Low Compression is a relative no-brainer of a beneficial decision.

I will however discuss the various undocumented implications of Advanced Compression in relation to Automatic Indexing in future posts.

The documented DBA_AUTO_INDEX_CONFIG view can be used to view the current setting for all of these configuration options:

SQL> select * from dba_auto_index_config;

PARAMETER_NAME                           PARAMETER_VALUE      LAST_MODIFIED                  MODIFIED_BY
---------------------------------------- -------------------- ------------------------------ --------------------
AUTO_INDEX_COMPRESSION                   OFF
AUTO_INDEX_DEFAULT_TABLESPACE
AUTO_INDEX_MODE                          REPORT ONLY          03-JUL-19 05.43.28.000000 AM   BOWIE
AUTO_INDEX_REPORT_RETENTION              31
AUTO_INDEX_RETENTION_FOR_AUTO            42                   20-JUN-19 06.32.06.000000 AM   BOWIE
AUTO_INDEX_RETENTION_FOR_MANUAL                               02-JUL-19 12.12.21.000000 AM   BOWIE
AUTO_INDEX_SCHEMA                        schema IN (BOWIE)    20-JUN-19 06.27.26.000000 AM   BOWIE
AUTO_INDEX_SPACE_BUDGET                  50


Additionally, you can view both the documented and undocumented settings regarding Automatic Indexing by looking at the SMB$CONFIG table:


SQL> select parameter_name, parameter_value
from sys.SMB$CONFIG;

PARAMETER_NAME                           PARAMETER_VALUE
---------------------------------------- ---------------
SPACE_BUDGET_PERCENT                                  10
PLAN_RETENTION_WEEKS                                  53
SPM_TRACING                                            0
AUTO_CAPTURE_PARSING_SCHEMA_NAME                       0
AUTO_CAPTURE_MODULE                                    0
AUTO_CAPTURE_ACTION                                    0
AUTO_CAPTURE_SQL_TEXT                                  0
AUTO_INDEX_SCHEMA                                      0
AUTO_INDEX_DEFAULT_TABLESPACE                          0
AUTO_INDEX_SPACE_BUDGET                               50
AUTO_INDEX_REPORT_RETENTION                           31
AUTO_INDEX_RETENTION_FOR_AUTO                          0
AUTO_INDEX_RETENTION_FOR_MANUAL                        0
AUTO_INDEX_MODE                                        0
_AUTO_INDEX_TRACE                                      0
_AUTO_INDEX_TASK_INTERVAL                            900
_AUTO_INDEX_TASK_MAX_RUNTIME                        3600
_AUTO_INDEX_IMPROVEMENT_THRESHOLD                     20
_AUTO_INDEX_REGRESSION_THRESHOLD                      10
_AUTO_INDEX_ABSDIFF_THRESHOLD                        100
_AUTO_INDEX_STS_CAPTURE_TASK                           0
_AUTO_INDEX_CONTROL                                    0
_AUTO_INDEX_DERIVE_STATISTICS                          0
_AUTO_INDEX_CONCURRENCY                                1
_AUTO_INDEX_SPA_CONCURRENCY                            1
_AUTO_INDEX_REBUILD_TIME_LIMIT                        30
_AUTO_INDEX_REBUILD_COUNT_LIMIT                        5
_AUTO_INDEX_REVERIFY_TIME                             30
AUTO_INDEX_COMPRESSION                                 0
AUTO_SPM_EVOLVE_TASK                                   0
AUTO_SPM_EVOLVE_TASK_INTERVAL                       3600
AUTO_SPM_EVOLVE_TASK_MAX_RUNTIME                    1800


I’ll discuss a number of these undocumented configuration options in future posts.

In my next post however, we’ll look at the Automatic Indexing feature in action with a very simple example to start with…

Oracle 19c Automatic Indexing: Methodology Introduction (After Today) July 24, 2019

Posted by Richard Foote in Automatic Indexing, Oracle Indexes, Oracle19c.


For the past month or so I’ve been playing around extensively with the new Oracle 19c “Automatic Indexing” feature, so I thought it was time to start blogging about it.

Considering it's only at "Version 1" status, my initial impression is very positive, in that it works extremely well at what it's initially designed to do. There are certainly "interesting" bits of behaviour here and there and some missing functionality, but it's a whole lot better than a database running sub-optimally due to missing indexes.

This post is purely an introduction in which I’ll just discuss the general methodology behind the Automatic Indexing (AI) capability, initially in a somewhat simplistic manner in order to convey the more important concepts. I’ll expand and discuss many of the actual complexities behind this cool feature in future posts.

The basic concept behind AI is that the DBA, via a simple "switch", can turn on the AI feature and have the Oracle Database automatically create/drop/modify any necessary database indexes as required.

The Oracle database continually captures SQL workloads and keeps track of any new column usages within SQL predicates (as well as associated plans and execution statistics) that might warrant the need for a new index. Note that currently, only equality predicates are considered when determining potential candidate Automatic Indexes.

By default, a background task is executed every 15 minutes, which by default runs for up to an hour, to determine if the database can identify any new indexes that "might" be warranted. There are a number of reasons why the task might actually take considerably more time than the default 1 hour (I'll expand on the reasons why in future posts).
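Both of these defaults can be seen, expressed in seconds, in the SMB$CONFIG table shown in the post above on configuring Automatic Indexing:

SQL> select parameter_name, parameter_value
     from sys.SMB$CONFIG
     where parameter_name in ('_AUTO_INDEX_TASK_INTERVAL', '_AUTO_INDEX_TASK_MAX_RUNTIME');

PARAMETER_NAME                           PARAMETER_VALUE
---------------------------------------- ---------------
_AUTO_INDEX_TASK_INTERVAL                            900
_AUTO_INDEX_TASK_MAX_RUNTIME                        3600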

Candidate indexes that have been detected based on the previous 15 minutes of database workload are initially created as INVISIBLE/UNUSABLE indexes and hard parsed with the captured SQLs to determine if the index could be considered by the CBO.

If the candidate indexes are indeed viable, the indexes are then created as INVISIBLE/USABLE indexes and verified via the SQL Performance Analyzer using SQL Tuning Sets to determine if the performance of the captured SQLs have indeed improved from the existing plans when using the newly created Automatic Indexes.

If performance actually improves for all captured SQLs using a new Automatic Index, the Automatic Index is made VISIBLE and is now available for general database use. If performance is worse for all captured SQLs, the Automatic Index is made UNUSABLE again and hence not available to the CBO. If performance is better for some SQLs but worse for others, well the story gets a little complicated. In "theory", the Automatic Index is made USABLE but SQL baselines are created for the SQLs that suffer performance degradation to not use the Automatic Index. Any such SQLs are effectively "blacklisted" and are not (easily) considered for future "new" AI deliberations. As I'll discuss in future posts, things are not actually quite as straightforward as that.

So depending on the scenario, Automatic Indexes can end up being in any of the following states:

  • Invisible and Unusable
  • Invisible and Valid
  • Visible and Unusable
  • Visible and Valid

When Visible and Valid, Automatic Indexes can ultimately be:

  • Used by the CBO
  • Not used by the CBO (even by the SQL that caused its creation)
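A quick way to see which of these states your Automatic Indexes currently sit in is via the new AUTO column in DBA_INDEXES. A minimal sketch:

SQL> select owner, index_name, visibility, status
     from dba_indexes
     where auto = 'YES';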

As part of the candidate index identification process, Oracle will consider if a new index can be logically "merged" with an existing index, effectively replacing the existing index by dropping it and creating a new Automatic Index in its place.

AI will also monitor whether existing (either Automatic or Manual) indexes are currently being used within the database. If an index is deemed not to have been used within a configurable period of time, it will be automatically dropped during the 15 minute AI task.

This is the basic AI story. The actual story is a tad more complicated, which I'll expand upon in many, many future posts, so keep your questions until then 🙂


Importantly, AI is ONLY available on the Exadata platform or on Oracle Cloud environments. It is NOT available on standard On-Premises Oracle 19c deployments. Attempts to turn the feature on where it's not supported will only result in disappointment:


SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
BEGIN DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT'); END;

*
ERROR at line 1:
ORA-40216: feature not supported
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_AUTO_INDEX_INTERNAL", line 9180
ORA-06512: at "SYS.DBMS_AUTO_INDEX", line 283
ORA-06512: at line 1


I’ll next discuss the AI configuration options available to the DBA…


Bitmap Index On Column With 212552698 Distinct Values, What Gives? (I’d Rather Be High) April 23, 2019

Posted by Richard Foote in Advanced Index Compression, Autonomous Data Warehouse, Autonomous Database, Index Compression, Oracle Indexes.


In my previous post on Indexing The Oracle Autonomous Data Warehouse, I highlighted how it might be necessary to create indexes to improve the performance and scalability of highly selective queries, as it might be on any Data Warehouse running on an Exadata platform. In that post, I created a Bitmap Index and showed how SQL performance improved as a result.

I've had two subsequent correspondences asking why I decided to use a Bitmap Index in my example, when the column in question had a (relatively) massive number of distinct values, some 212552698. What gives? Wouldn't it have been far more efficient to have used a standard B-Tree index instead, as Bitmap Indexes are only suitable for columns that have low-medium numbers of distinct values? There's nothing particularly low-medium cardinality about 212552698 distinct values.

It's a good question, and as 2 people asked basically the same question, I thought it worthy of a separate post to try to address it.

Firstly, I must admit the only calculation I performed mentally when deciding which type of index to use was to look at the number of distinct values (212552698) and the number of rows in the table (1 billion), and determine that there are approximately 5 rows per distinct value. With around 5 repeated values on average, this was sufficient for the Bitmap Index to likely be the more efficient type of index, and so I created a Bitmap Index in my demo, considering this was a Data Warehouse environment where potential locking issues relating to concurrent DMLs were not an issue.
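Putting that mental arithmetic into a query, a simple sketch based on the dictionary statistics (1000000000 / 212552698 comes out at approximately 4.7 rows per distinct value):

SQL> select t.num_rows, c.num_distinct,
            round(t.num_rows / c.num_distinct, 1) avg_rows_per_value
     from user_tables t, user_tab_col_statistics c
     where t.table_name = c.table_name
     and t.table_name = 'BIG_ZIGGY'
     and c.column_name = 'LO_ORDERKEY';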

As I've discussed previously, as with this classic post from way back in 2010, "So What Is A Good Cardinality Estimate For A Bitmap Index Column", it's not the number of distinct values that's important, it's the column cardinality and the average number of repeated column values that's important when deciding if a Bitmap Index might be appropriate in a Data Warehouse environment.

As the above post discusses, a Bitmap Index might only need the one index entry per column value, and as the resultant index bitmap string can be very, very highly compressed if there are few actual occurrences per value, a column with around 5 repeated values on average would likely be more efficient than a corresponding B-Tree Index. A B-Tree Index with 5 repeated values would require the value to be stored 5 times, with its 5 corresponding rowids, whereas the Bitmap Index might only need to store the indexed value once, with its 2 corresponding rowids and a trivial amount of space for the highly compressed bitmap string.

If we create an equivalent B-Tree Index to the previously created Bitmap Index:

SQL> create index big_ziggy_lo_orderkey_i2 on big_ziggy(lo_orderkey) invisible;

SQL> select index_name, index_type, leaf_blocks, compression from user_indexes
     where table_name = 'BIG_ZIGGY';

INDEX_NAME                     INDEX_TYPE LEAF_BLOCKS COMPRESSION
------------------------------ ---------- ----------- -------------
BIG_ZIGGY_LO_ORDERKEY_I        BITMAP          843144 DISABLED
BIG_ZIGGY_LO_ORDERKEY_I2       NORMAL         2506552 DISABLED

We notice the Bitmap Index at 843144 leaf blocks is indeed substantially smaller than the equivalent B-Tree Index at 2506552 leaf blocks.

However, this is the Oracle Autonomous Data Warehouse environment, where I have access to the Advanced Compression option and the capability to potentially create highly compressed B-Tree index structures. See my various previous posts on Index Advanced Compression.

Perhaps I can create a structure smaller than the Bitmap Index by using Index Advanced Compression. Let's try initially with Advanced Compression Low:

SQL> drop index big_ziggy_lo_orderkey_i2;

Index dropped.

SQL> create index big_ziggy_lo_orderkey_i3 on big_ziggy(lo_orderkey) compress advanced low invisible;
...

SQL> select index_name, index_type, leaf_blocks, compression from user_indexes
     where table_name = 'BIG_ZIGGY';

INDEX_NAME                     INDEX_TYPE LEAF_BLOCKS COMPRESSION
------------------------------ ---------- ----------- -------------
BIG_ZIGGY_LO_ORDERKEY_I        BITMAP          843144 DISABLED
BIG_ZIGGY_LO_ORDERKEY_I3       NORMAL         1921296 ADVANCED LOW

We notice the B-Tree Index is indeed now smaller at just 1921296 leaf blocks, down from 2506552 leaf blocks, but still not as small as the 843144 leaf blocks of the Bitmap Index.

However, this is a Data Warehouse environment where the DML overheads associated with the maintenance of Advanced Compression High indexes may not be of such concern. Additionally, I might have ample CPU resources to cater for any additional CPU requirements of accessing such Advanced Compression High indexes. Therefore, let’s create a B-Tree Index with Advanced Compression High:

SQL> drop index big_ziggy_lo_orderkey_i3;

Index dropped.

SQL> create index big_ziggy_lo_orderkey_i4 on big_ziggy(lo_orderkey) compress advanced high invisible;
...

SQL> select index_name, index_type, leaf_blocks, compression from user_indexes
     where table_name = 'BIG_ZIGGY';

INDEX_NAME                     INDEX_TYPE LEAF_BLOCKS COMPRESSION
------------------------------ ---------- ----------- -------------
BIG_ZIGGY_LO_ORDERKEY_I        BITMAP          843144 DISABLED
BIG_ZIGGY_LO_ORDERKEY_I4       NORMAL          786537 ADVANCED HIGH

We notice the index has significantly reduced further in size and, at just 786537 leaf blocks, is now indeed smaller than the corresponding Bitmap Index.

Note that although it might be beneficial to use Advanced Compression High on Bitmap Indexes, such an option is not currently supported.

Of course your mileage will vary on which Index Type and Compression Option will be most appropriate depending on the characteristics of your data; however, it might be worth considering these options in environments where you have access to them.

But yes, a Bitmap Index might indeed be the better option, even if there are billions of distinct values (if there are say 10s of billions of rows) for the columns to be indexed…

Indexing The Oracle Autonomous Data Warehouse (Autobahn) April 2, 2019

Posted by Richard Foote in Autonomous Data Warehouse, Autonomous Database, Oracle Indexes.


When the Autonomous Data Warehouse was initially released in March 2018, one of the notable restrictions was that it did not support typical Data Warehouse performance related structures such as Indexes, Partitioning or Materialized Views. The platform was deemed efficient enough to not require these features, which would unnecessarily complicate the autonomous environment.

In September 2018, these restrictions were finally lifted, with the (for now) manual creation of Indexes, Partitioning and Materialized Views now all supported. Oracle however states that "Oracle recommends that you do not manually create these structures, and leave performance optimizations to the Autonomous Data Warehouse. If you're a highly skilled Oracle Database tuning expert and decide to manually create these access structures, please be advised to test the impact of your manual tuning efforts on your full workload."

If you’re not a “highly skilled Oracle Database tuning expert”, let me give you some advice on when to create indexes in the Autonomous Data Warehouse (on Oracle Database 18c where these features are NOT automatically created and maintained by autonomous processes).

I’ll initially focus on two of the key capabilities available in the Autonomous Data Warehouse environment that might indeed make some index structures unnecessary, although neither is unique to autonomous databases.

Let’s begin by creating a relatively large table based on the huge LINEORDER table found in the provided SSB schema. The following BIG_ZIGGY table is based on 1 billion rows from the LINEORDER table:

SQL> create table big_ziggy as select * from ssb.lineorder where rownum<=1000000000;

Frustratingly, if you have a session that performs a database operation for an extended period, the session loses its connection and hangs indefinitely. You need to start a separate session to confirm when the task actually completes, and track progress by say looking at v$session_longops or querying dba_segments.
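For example, something such as the following, run from a second session, can be used to keep an eye on progress (a minimal sketch):

SQL> select sofar, totalwork, time_remaining, message
     from v$session_longops
     where sofar <> totalwork;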

So we have a decently sized BIG_ZIGGY table at around 27GB:

SQL> select table_name, num_rows, blocks, round(blocks/128) MB
from user_tables where table_name='BIG_ZIGGY';

TABLE_NAME     NUM_ROWS     BLOCKS         MB
------------ ---------- ---------- ----------
BIG_ZIGGY    1000000000    3534714      27615

Here are a couple of key points.

Firstly, if you run a query on a table using filtering columns for the first time, then none of the smarts in the autonomous database environment are going to help you here. Things will initially run slowly, because Oracle will not yet have detected that there is anything that needs to be tuned. The tuning process is "reactive", in that Oracle needs to find an issue first before it can potentially address it.

The second key point is that if only a small fraction of the overall data is required or need be accessed (that is, if the data being accessed is very highly filtered), then a database index is likely going to be highly effective, even within the Autonomous Data Warehouse. This is precisely why indexes are typically required in databases.

Although very often, large volumes of data are indeed accessed in typical Data Warehouse workloads, this is not always the case. It’s certainly not uncommon for ad-hoc queries and the such to only access a small fraction of an available data set.

The following query on the large BIG_ZIGGY table only returns 9 rows:

SQL> select * from big_ziggy
where lo_orderkey = 2294059393;

9 rows selected.

Elapsed: 00:06:23.99

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                            |    5 |   505 |  67250 (12)|  00:00:03 |
|  1 |  RESULT CACHE               | 2u5qwsq68ghc5bqn5spqhyqjyj |      |       |            |           |
|* 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   505 |  67250 (12)|  00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059393)
     filter("LO_ORDERKEY"=2294059393)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=17; dependencies=(BOWIE.BIG_ZIGGY); name="select * from big_
ziggy where lo_orderkey = 2294059393"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530785 consistent gets
3530770 physical reads
      0 redo size
   2116 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      9 rows processed

The query takes nearly 6.5 minutes to complete, which is clearly a long time to return just a few rows.

Note that the Autonomous Data Warehouse platform is Exadata, and Exadata has a number of key capabilities, such as Hybrid Columnar Compression (HCC), Smart Scans, Storage Indexes and the such, that can be extremely useful in Data Warehouse environments.

For example, the table is smaller and can be accessed more efficiently thanks to HCC:

SQL> select table_name, compression, compress_for
from user_tables where table_name='BIG_ZIGGY';

TABLE_NAME   COMPRESS COMPRESS_FOR
------------ -------- ------------------------------
BIG_ZIGGY     ENABLED QUERY HIGH ROW LEVEL LOCKING

Exadata Smart Scans and associated Storage Indexes can result in extremely efficient Full Table Scans by potentially skipping large sections of the physical data from having to be accessed and by returning just the required data set back to the database. I’ve blogged previously a number of times on the power of Storage Indexes and Exadata related smarts.

However, if a query is executed using specific filtering columns for the first time, then Storage Indexes will not have yet been created. If we look at the effectiveness of Storage Indexes when running the previous query:

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s
where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan       7.79956818

Note that initially, no bytes are saved by storage indexes as they don’t initially exist. Although only a relatively small amount of data is actually returned to the database server, having to read the entire table is expensive and why the query is taking so long to complete.

If we run a number of different queries with a filter predicate on the same LO_ORDERKEY column and eventually run the following query:

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity
from big_ziggy where lo_orderkey = 2294059491;

6 rows selected.

Elapsed: 00:01:41.95

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)|  00:00:03 |
|  1 |  RESULT CACHE               | 9ycjfft9fx7gy506tr9d98djdj |      |       |            |           |
|* 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)|  00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059491)
     filter("LO_ORDERKEY"=2294059491)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orde
rkey = 2294059491"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530785 consistent gets
3530770 physical reads
      0 redo size
    760 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      6 rows processed

We note that the overall elapsed time has now decreased, taking only around 1 minute, 42 seconds to complete.

If we now look at the effectiveness of storage indexes:

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s
where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                     731.78125
cell physical IO interconnect bytes returned by smart scan       7.59265137

The storage index is now providing some benefit but, as I've discussed previously, there are limitations to their effectiveness depending on data distribution and the physical clustering of the data. So your mileage will vary on the effectiveness of smart scans, and no, Oracle does not (currently) automatically re-cluster data to improve smart scan performance where it would be most effective.

If you re-run the same query multiple times, another performance feature kicks in:

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059491;

6 rows selected.

Elapsed: 00:00:01.74

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
|  Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)| 00:00:03 |
|   1 |  RESULT CACHE               | 9ycjfft9fx7gy506tr9d98djdj |      |       |            |          |
| * 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)| 00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059491)
     filter("LO_ORDERKEY"=2294059491)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059491"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
  0 recursive calls
  0 db block gets
  0 consistent gets
  0 physical reads
  0 redo size
760 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  6 rows processed

Response times are now under 2 seconds (Australia is a long way from Ashburn, USA), with Oracle automatically using the Result Cache to directly access the necessary result and hence not requiring any table related consistent reads. This is as efficient as it gets, but it's dependent on similar result sets being returned and on no data changes that would invalidate the result cache, such as a new data load into the data warehouse:

SQL> insert into big_ziggy select * from big_ziggy where rownum <= ...;

SQL> commit;

Commit complete.

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059491;

6 rows selected.

Elapsed: 00:01:44.79

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
|  Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)| 00:00:03 |
|   1 |  RESULT CACHE               | 9ycjfft9fx7gy506tr9d98djdj |      |       |            |          |
| * 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)| 00:00:03 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059491)
     filter("LO_ORDERKEY"=2294059491)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orde
rkey = 2294059491"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530808 consistent gets
3530786 physical reads
      0 redo size
    760 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      6 rows processed

After new data gets loaded into the table, we’re back to being reliant on the effectiveness of the Exadata smart scan.

So we have a scenario here where we either have excellent performance via the result cache, or just average performance when we want specific data, based on a moderately effective storage index and resultant smart scan (in a different session, the following query is run):

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059492;

Elapsed: 00:02:03.99

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
|  Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)| 00:00:03 |
|   1 |  RESULT CACHE               | 4s0z6mkqpufc33mxb9ddvuc69r |      |       |            |          |
| * 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)| 00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059492)
     filter("LO_ORDERKEY"=2294059492)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orde
rkey = 2294059492"
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1
Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530785 consistent gets
3530770 physical reads
      0 redo size
    695 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      2 rows processed

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                     731.78125
cell physical IO interconnect bytes returned by smart scan       7.59265137

To get both consistent and excellent performance when running a query that performs a high level of filtering, we need to create an appropriate index as we would in any other regular data warehouse database environment:

SQL> create bitmap index big_ziggy_lo_orderkey_i on big_ziggy(lo_orderkey);
SQL> select index_name, leaf_blocks, blevel, distinct_keys, status from user_indexes where table_name='BIG_ZIGGY';

INDEX_NAME                LEAF_BLOCKS     BLEVEL DISTINCT_KEYS STATUS
------------------------- ----------- ---------- ------------- --------
BIG_ZIGGY_LO_ORDERKEY_I        843144          3     212552698 VALID

If we now run random queries based on the highly selective LO_ORDERKEY column:

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059489;

Elapsed: 00:00:01.67

Execution Plan
----------------------------------------------------------
Plan hash value: 1677294303

------------------------------------------------------------------------------------------------------------
| Id | Operation                      | Name                       | Rows | Bytes | Cost (%CPU)| Time      |
------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |                            |    5 |   125 |       5 (0)|  00:00:01 |
|  1 |  RESULT CACHE                  | 7tn121mujfw3kfg39z5pg4duwy |      |       |            |           |
|  2 |   TABLE ACCESS BY INDEX ROWID  | BIG_ZIGGY                  |    5 |   125 |       5 (0)|  00:00:01 |
|  3 |    BITMAP CONVERSION TO ROWIDS |                            |      |       |            |           |
|* 4 |     BITMAP INDEX SINGLE VALUE  | BIG_ZIGGY_LO_ORDERKEY_I    |      |       |            |           |
------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("LO_ORDERKEY"=2294059489)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); attributes=(ordered); nam
e="select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from b
ig_ziggy where lo_orderkey = 2294059489"
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1 because of no expensive
parallel operation

Statistics
----------------------------------------------------------
  1 recursive calls
  0 db block gets
  8 consistent gets
  0 physical reads
  0 redo size
647 bytes sent via SQL*Net to client
553 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

We now get the consistent, high performance that we require, without having to resort to consuming additional CPU resources by say increasing the number of available CPUs and the degree of parallelism to reduce response times.

In short, you index the Autonomous Data Warehouse as you would any database running on an Exadata platform…

Remember, this is all effectively version one of the Autonomous Data Warehouse. The capability of automatically creating necessary indexes is coming very soon…

Intro: Initial Thoughts On Oracle Autonomous Database Cloud Services (Automatic For The People) March 22, 2019

Posted by Richard Foote in 18c, 19c, 19c New Features, Autonomous Data Warehouse, Autonomous Database, Autonomous Transaction Processing, Oracle Indexes.


I'm currently writing up a few blog pieces on indexing in the Oracle Autonomous Database environments, but I thought I'd begin by introducing what exactly Oracle Autonomous Database Cloud Services are, along with some of my initial thoughts, as there's still some confusion around all this.

Introduced by Uncle Larry at Oracle OpenWorld 2017, Oracle Autonomous Databases are positioned as "self-driving", cloud-based database environments that automatically address many of the tasks usually performed by DBAs, such as patching, backups, security and, of most interest to me, database tuning.

The first of these cloud environments, the Oracle Autonomous Data Warehouse Cloud Service (ADW), was released in March 2018, with the Oracle Autonomous Transaction Processing Cloud Service (ATP) released in August 2018.

So key point number one. These are all Oracle Autonomous Database Cloud Services; there are no actual Autonomous Databases as such. These environments currently consist of the following key components:

  • Oracle Database 18c (and above)
  • Oracle Cloud Environment
  • Oracle Exadata Infrastructure
  • Oracle Policy-Driven Automation

So the Oracle Database alone is not an autonomous database. An Oracle 18c database running on your On-Premises Exadata is still not enough to be an autonomous database platform. It requires all the above components, which is why this is only available on the Oracle Cloud environment (and will likely be soon available on the Oracle Cloud at Customer environment, where all these components can also be replicated).

Now, having a database environment in which many of the mundane DBA tasks (such as database provisioning, patching, taking backups, etc.) can be automated can only be a good thing, and the Autonomous Database Cloud services deliver on all of these. Creating an Autonomous Database environment truly takes just a few minutes, and I could easily connect via SQL*Plus and SQL Developer on my laptop a few minutes later.

However, my key interest here is in database tuning (hey, I’m a database tuning guy) and the capability of the Autonomous Database Cloud Services in being able to self-tune and self-repair itself when database performance issues and inefficiencies are encountered.

So, are we there yet?

So my second key point is that in many ways, we already have a "self-tuning" database with Oracle and have had so for many, many years. I'm old enough to remember the Rule-Based Optimizer, when the structure of the SQL was critical to performance. I remember tuning Rollback Segments and manually setting extent sizes to make sure no segment got too close to 121 extents, else it couldn't grow any further. I remember manually setting Freelists, I remember having to write my own jobs to run maintenance tasks and setting tablespaces to be in backup mode, etc. etc. etc.

The Oracle Database has evolved over the years, to the point where Oracle DBAs no longer have to worry about these things. If you wish, you can now configure the Oracle database to automatically adjust memory components, automatically collect necessary schema statistics, automatically create baselines and tune SQL queries, etc. etc. etc.

All of these Oracle database capabilities are crucial in the new Oracle autonomous database environments, as are new Oracle 18c features and new Oracle 19c features (especially in relation to self-tuning the Autonomous Transaction Processing Cloud Service). The newer autonomous database capabilities are just part of this Oracle database self-tuning evolution, with some new policy-driven automation thrown into the mix in the future.

So are we indeed there yet with a completely self-tuning database with these new autonomous database services? In a word, no.

Which brings me to my next key point. This is all very, very new. All these autonomous database services are effectively at "Edition One" status. This will all take time to evolve into truly, fully self-tuning database environments. There'll be some new cool capabilities and features which will assist in some areas but be initially deficient in other areas. But clearly this is the future, and clearly in future releases more and more self-tuning capabilities will be added that will make it easier to both manage and tune Oracle database environments.

Note Oracle Corporation itself (depending on who you talk to) is quite clear that it isn't there yet, with the web page on the Autonomous Transaction Processing Cloud service clearly stating "Workload Optimization* (coming soon)", but with lots of clues on what's to come with features such as "Database tunes itself (indexes, memory, partitions, SQL plans, parallelism, optimizer stats) for the incoming workload as data changes over time".

Do many of these upcoming features sound familiar? If you’re not sure, check out the Oracle Database 19c New Features manual.

Which brings me to my final key point here. Even if you're not particularly interested in the Cloud, and you view managing On-Premises environments as your foreseeable future, some of the best things that have happened to you in relation to the Oracle Database come courtesy of Oracle's strategic direction with the cloud. Many of the best new features introduced in the past few Oracle Database releases, especially in relation to the CBO, and much of the online stuff such as moving tables and partitions online, moving data files online, converting tables to be partitioned online, converting partitioned tables differently online, merging/splitting partitions online, etc., are clearly going to be critical in a self-managing/tuning database. As a DBA of an On-Premises database environment, you can also take advantage of these new capabilities.

It will enable Oracle in its autonomous environments, for example, to automatically convert a table to be partitioned in a specific manner to improve overall performance, all behind the scenes, without anyone necessarily knowing it has done so.

Is it there yet? No. Is it coming? You bet.

That said, some newer Oracle Database 19c features that will clearly be critical to self-tuning autonomous databases moving forward, such as Real-Time Statistics, SQL Quarantine and Automatic Indexing, will only be available on the Oracle Cloud and Exadata platforms. So take note…

Which brings me to indexing.

When the first Oracle Autonomous Data Warehouse cloud service was announced in March 2018, one of the key “features” of the new autonomous platform was that indexing was disabled (as were other traditional Data Warehouse database capabilities such as Partitioning, Materialized Views, etc.). But how can you effectively “tune” a database when you take away some of the key tuning capabilities? The short answer is that you can’t, unless the database has somewhat simplistic data workloads and data structures.

Remember, this was "Version One" of the first autonomous database environment, and the initial strategy was to make the Oracle database not smarter, but dumber. Simplify it such that DBAs can't easily "break" things, and simply make the Exadata platform do all the heavy lifting with regard to database tuning. A more simplified environment makes things a little easier to "self-tune", as there are fewer moving parts to worry about.

For more simplistic Data Warehouse environments, this might all be adequate. But for those of you who follow my blog and my previous discussions on indexing on Exadata, dropping all indexes on Exadata, even in Data Warehouse environments, was generally a bad idea. So providing an Oracle database platform, even an autonomous one running on Exadata, where all indexing was effectively disabled, was always going to have performance shortfalls in some scenarios. Try running ad-hoc queries on the supplied SSB LINEORDER table for example. Smart Scans, Storage Indexes, HCC, Result Caches, etc. will only take you so far.

So as I said, Oracle evolves and improves and now allows database indexes to be created in the Autonomous Data Warehouse cloud service.

Which will be the focus on upcoming blog posts, so stay tuned.

Announcement: “Oracle Performance Diagnostics and Tuning” Webinar – 9-12 July 2019 !! March 21, 2019

Posted by Richard Foote in Oracle Indexes, Oracle Performance Diagnostics and Tuning Webinar, Performance Tuning, Performance Tuning Webinar.


I'm very excited to announce that the first public running of my new "Oracle Performance Diagnostics and Tuning" Webinar will take place between 9-12 July 2019 (6pm-10pm AEST):

Webinar Series 9-12 July 2019 (start 6pm AEST, end 10pm AEST)

This is a must-attend seminar aimed at Oracle professionals (both DBAs and Developers) who are interested in Performance Tuning. The seminar will detail how to maximise the performance of both Oracle databases and associated applications, and how to diagnose and address any performance issues as quickly and effectively as possible.

When an application suddenly runs "slow" or when people start complaining about the "poor performance" of the database, there's often some uncertainty in how to most quickly and most accurately determine the "root" cause of any such slowdown and effectively address any associated issues. In this seminar, we explore a Tuning Methodology that helps Oracle professionals to both quickly and reliably determine the actual causes of performance issues and so ensure the effectiveness of any applied resolutions.

Looking at a number of real world scenarios and numerous actual examples and test cases, this seminar will show participants how to confidently and reliably diagnose performance issues. The seminar explores in much detail the various diagnostics tools and reports available in Oracle to assist in determining any database performance issue and importantly WHEN and HOW to effectively use each approach.

One of the more common reasons for poor Oracle performance is inefficient or poorly running SQL. This seminar explores in much detail how SQL is executed within the Oracle database, the various issues and related concepts important in understanding why SQL might be inefficient and the many capabilities and features Oracle has in helping to both resolve SQL performance issues and to maintain the stability and reliability of SQL execution.

It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts).

For a detailed look at seminar content, please visit my “Oracle Performance Diagnostics and Tuning” page.

The webinar will run over four days, 9-12 July 2019, between 6pm-10pm AEST each day.

For the following webinar series, you can purchase your registration here via credit card (if NOT based in Australia). Otherwise, please contact me if you wish to pay via direct bank transfer, or if you’re based in Australia (as GST must be added). Note: if you’re based in Australia, there are also actual in-person seminars running throughout Winter 2019 that you can attend.

Webinar Series 9-12 July 2019 (start 6pm AEST, end 10pm AEST): Buy Now Button

I’ve had heaps of correspondence regarding this, so please book early to avoid disappointment, as places will be strictly limited to ensure a quality training experience for all attendees, with plenty of opportunity for questions.

Further webinars will be scheduled for later in the year, so stay tuned.

If you have any questions, don’t hesitate to contact me at richard@richardfooteconsulting.com.

Oracle Database 19c Automatic Indexing: Predicted Back In 2008 (A Better Future) March 19, 2019

Posted by Richard Foote in Automatic Indexing, Autonomous Database, Oracle Indexes.
add a comment


I’ve recently received a fair bit of correspondence regarding one of my most popular blog posts, dating back to February 2008: Index Rebuild vs. Coalesce vs. Shrink Space (Pigs – 3 Different Ones).

In the comments section, there’s an interesting discussion where I mention:

If Oracle19 does everything for you and all the various indexes structures get automatically created, used, applied, tuned, maintained, partitioned, etc. etc., then I’ll have nothing to write about 🙂.

Actually, when I think about it, it may not take as long as Oracle19 for Oracle to get there.

The fact this was stated some 10 years before the release of Oracle Database 19c, which does indeed feature the new Automatic Indexing capability, is kinda funny.

I would however like to state a few points.

Firstly, congratulations for the impressive, timely prediction should go to Robert, who made the initial reference to Oracle 19c in the comments.

Although the change in Oracle database versioning helped in getting the predicted Oracle version correct, I did at least make the comment that I thought some form of Automatic Indexing was likely before Oracle 19c.

Although not all the predicted capabilities are there yet (no auto index maintenance, no auto index partitioning, not all index structures are yet supported with auto indexing, etc.), I do predict that these will all be implemented within a few Oracle database releases (early 20s).

I stated there would be nothing to blog about if automatic indexing became a reality, which of course is not true either. The more one understands how Oracle actually works, the more one can proactively identify and address any potential issues and deficiencies. Automatic Indexing is ONLY going to be available on Exadata and the Oracle Cloud platform, so the majority of Oracle On-Premises database users will still need to manually manage indexes. Understanding how the Automatic Indexing capability works means one can potentially highlight and learn how to more appropriately manage indexes manually.
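As an aside, for those on a supported platform who want to experiment, Automatic Indexing is controlled via the DBMS_AUTO_INDEX package. A minimal sketch of enabling it and checking the current configuration (assuming an Oracle Database 19c environment where the feature is available):

SQL> exec dbms_auto_index.configure('AUTO_INDEX_MODE', 'IMPLEMENT');

PL/SQL procedure successfully completed.

SQL> select parameter_name, parameter_value from dba_auto_index_config;

Setting AUTO_INDEX_MODE to ‘REPORT ONLY’ would instead have any candidate indexes created as invisible indexes, for reporting purposes only.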

So finally, yes I will of course be blogging about Automatic Indexing and about indexing in general in the new Autonomous Database cloud environments.

I’ll start with some initial thoughts and examples of indexing in the Autonomous Data Warehouse database.

Stay tuned.

Announcement: “Oracle Performance Diagnostics and Tuning” Seminar – Australia/NZ Winter Dates March 14, 2019

Posted by Richard Foote in Oracle Indexes, Oracle Performance Diagnostics and Tuning Seminar.
1 comment so far


I’m very excited to announce the first public running of my new “Oracle Performance Diagnostics and Tuning” Seminar throughout Australia and New Zealand this coming winter 2019. (See my Oracle Performance Diagnostics and Tuning Seminar page for all the seminar content and details).

This is a must attend seminar aimed at Oracle professionals (both DBAs and Developers) who are interested in Performance Tuning.  The seminar will detail how to maximise the performance of both Oracle databases and associated applications and how to diagnose and address any performance issues as quickly and effectively as possible.

When an application suddenly runs “slow” or when people start complaining about the “poor performance” of the database, there’s often some uncertainty in how to most quickly and most accurately determine the “root” cause of any such slowdown and effectively address any associated issues. In this seminar, we explore a Tuning Methodology that helps Oracle professionals to both quickly and reliably determine the actual causes of performance issues and so ensure the effectiveness of any applied resolutions.

Looking at a number of real world scenarios and numerous actual examples and test cases, this seminar will show participants how to confidently and reliably diagnose performance issues. The seminar explores in much detail the various diagnostics tools and reports available in Oracle to assist in determining any database performance issue and importantly WHEN and HOW to effectively use each approach. Additionally, participants are also invited to share their own database/SQL reports where we can apply the principles learnt in diagnosing the performance of their actual databases/applications (we can communicate prior to the event on how to best get any reports of interest).

One of the more common reasons for poor Oracle performance is inefficient or poorly running SQL. This seminar explores in much detail how SQL is executed within the Oracle database, the various issues and related concepts important in understanding why SQL might be inefficient and the many capabilities and features Oracle has in helping to both resolve SQL performance issues and to maintain the stability and reliability of SQL execution.

It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle professionals).

As usual, class sizes will be intentionally small to ensure all attendees get the maximum educational benefit from the seminar. Early bird rates are available, so please book early to avoid disappointment as I don’t get the opportunity to run these events very often.

Dates for the upcoming seminars in Australia/New Zealand:

Sydney: 22-23 July (Training Choice, 60 Clarence St): Registration Link

Canberra: 24-25 July (Training Choice, 54 Marcus Clark St): Registration Link

Brisbane: 29-30 July (Training Choice, 88 Creek St): Registration Link

Melbourne: 31 July – 1 August (Training Choice, 50 Queen St): Registration Link

Perth: 5-6 August (Training Choice, 2/220 St Georges Terrace): Registration Link

Adelaide: 7-8 August (Training Choice, 19 Young St): Registration Link

Wellington: 19-20 August (Auldhouse, Jackson Stone House 11 Hunter St): Registration Link

Auckland: 21-22 August (Karstens, Level 4, 205 Queen Street): Registration Link

 

I’ll soon announce dates for a seminar in my native London, UK and the first series of webinars for this training. Stay tuned !!

If you have any questions at all, please contact me at richard@richardfooteconsulting.com

Unique Indexes Force Hints To Be “Ignored” Part II (One Of The Few) February 19, 2019

Posted by Richard Foote in CBO, Hash Join, Hints, Oracle Indexes, Transitive Closure, Unique Indexes.
2 comments


In Part I, I showed a demo of how the introduction of a Unique Index appears to force a hint to be “ignored”. This is a classic example of the difference a Unique Index can make in the CBO’s deliberations.

So what’s going on here?

When I run the first, un-hinted query:

SQL> select * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

we notice something a little odd in the Predicate Information section of the execution plan:

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("BOWIE1"."ID"=1)
6 - access("BOWIE2"."CODE"=1)

Where the hell is the join condition? It’s not listed. Additionally, where does the BOWIE2.CODE=1 condition come from? It’s not a predicate within the above SQL statement.

The answer is “Transitive Closure”, whereby the CBO can automatically infer that BOWIE2.CODE must equal 1 if BOWIE2.CODE=BOWIE1.ID and BOWIE1.ID=1. This is something that the CBO master Jonathan Lewis has blogged about a number of times, including this post on Cartesian Merge Join.

Because the CBO picks up the fact (based on the column statistics) that BOWIE1.ID is basically a unique column, it can effectively drop the join condition and simply use a Merge Join Cartesian to combine all rows from BOWIE1 that match the BOWIE1.ID=1 predicate (just 1 row) with all those rows from BOWIE2 that match the BOWIE2.CODE=1 predicate (an estimated 50 rows).
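In effect, the CBO treats the query as if it had been written as follows (a conceptual sketch of the transformation, not actual optimizer output):

SQL> select * from bowie1, bowie2
where bowie1.id=1 and bowie2.code=1;

which explains the Merge Join Cartesian plan that follows: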
-------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |   Time     |
-------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  1 | MERGE JOIN CARTESIAN                |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |   00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |   00:00:01 |
|  4 | BUFFER SORT                         |               |   50 |   350 |       3 (0) |   00:00:01 |
|  5 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |   00:00:01 |
|* 6 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |   00:00:01 |
-------------------------------------------------------------------------------------------------------
The CBO is picking up the fact that a Merge Join Cartesian is not as bad as it might sound if only 1 row is likely to be returned from one side of the process, which can then be combined with just the rows of interest from the other table, with no unnecessary row throwaways.

However, the CBO might not get this right; the efficiency of this plan depends somewhat on there really only being the one row returned by the BOWIE1.ID=1 condition. If there were many more than one row possible, then this could become relatively inefficient.

The second hinted query is therefore directing Oracle to perform a Hash Join, as I potentially know my data better than Oracle and think it a better option in this case:
SQL> select /*+ use_hash(bowie2) */ * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;
The hint is directing Oracle to use the BOWIE2 table as the probe table in a Hash Join operation.

We notice that the join predicate is now listed in the Predicate Information section of the execution plan:

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("BOWIE1"."ID"="BOWIE2"."CODE")
3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)

The key point is that there could be a many-to-many join operation to be performed, and Oracle can’t perform a Hash Join unless a join predicate is listed.
As such, the CBO uses a Hash Join as requested by the hint:
------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |  Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |  00:00:01 |
|* 1 | HASH JOIN                           |               |   50 |  1150 |       5 (0) |  00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |  00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |  00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |  00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |  00:00:01 |
------------------------------------------------------------------------------------------------------
Note that currently there is no Unique constraint on the BOWIE1.ID column and the index on BOWIE1.ID is non-unique. Although the column statistics suggest the ID column is basically unique (the number of distinct values matches the number of rows in the table), there is no certainty that this is correct. The statistics might not be accurate and there could be a bunch of duplicate IDs that would result in a many-to-many join operation, whereby a Hash Join might indeed be preferable.
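We can sanity check what the CBO is basing this on via the standard dictionary views (a quick sketch against my demo table):

SQL> select table_name, num_rows from user_tables
     where table_name = 'BOWIE1';

SQL> select column_name, num_distinct, num_nulls
     from user_tab_col_statistics
     where table_name = 'BOWIE1' and column_name = 'ID';

If NUM_DISTINCT matches NUM_ROWS, the CBO will treat the column as effectively unique, even though nothing actually enforces this uniqueness.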
But by replacing the non-unique index on BOWIE1.ID with a unique index, the CBO now knows there is indeed just the one viable row from the BOWIE1 side of the join with the BOWIE1.ID=1 predicate. As such, the CBO goes back to using Transitive Closure to again effectively eliminate the join predicate.

 

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)
The CBO can now safely get just the required row from the BOWIE1 table via the BOWIE1.ID=1 predicate and the required data from BOWIE2 directly via the BOWIE2.CODE=1 predicate. The CBO makes this decision before considering the most appropriate join strategy, which now cannot possibly be the Hash Join, as the Hash Join is only possible with a join predicate in place.
------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  1 | NESTED LOOPS                        |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID         | BOWIE1        |    1 |    16 |       2 (0) | 00:00:01 |
|* 3 | INDEX UNIQUE SCAN                   | BOWIE1_ID_I   |    1 |       |       1 (0) | 00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) | 00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------------
As such, the USE_HASH hint is now “ignored”, because it’s simply no longer a viable option for the CBO. A Nested Loops join is performed instead, in which the row for the Outer Table (BOWIE1) is retrieved and all “corresponding” rows for the Inner Table (BOWIE2) are likewise accessed via just the BOWIE2.CODE=1 predicate.
A Nested Loop is the join type you can have when you’re not necessarily performing a join…

“Oracle Indexing Internals and Best Practices” Seminar – Berlin 8-9 May: DOAG Website February 8, 2019

Posted by Richard Foote in Oracle Indexes.
add a comment


Just a short note to say that DOAG now have a registration page for my upcoming “Oracle Indexing Internals and Best Practices” seminar running in Berlin, Germany on 8-9 May 2019.

For all the details regarding this acclaimed educational experience and how to book your place, please visit:

https://www.doag.org/de/eventdetails?tx_doagevents_single[id]=577320

Please mention you heard about this seminar from me when booking, and note that places are strictly limited for these events, so please book early to avoid disappointment.

This is likely to be the only time I run this seminar in mainland Europe this year, so don’t miss this unique opportunity to learn in person all that’s worth learning about how to use and maintain indexes judiciously to improve Oracle database/application performance.

If you have any questions, don’t hesitate to contact me at richard@richardfooteconsulting.com.

Unique Indexes Force Hints To Be “Ignored” Part I (What’s Really Happening) February 5, 2019

Posted by Richard Foote in CBO, Hash Join, Hints, Non-Unique Indexes, Oracle Indexes, Unique Indexes.
2 comments


As I was compiling my new “Oracle Diagnostics and Performance Tuning” seminar, I realised there were quite a number of indexing performance issues I hadn’t discussed here previously.

The following is a classic example of the difference a Unique Index can make over a Non-Unique index, while also addressing the classic myth that Oracle sometimes chooses to “ignore” hints that might be present in your SQL.

To set the scene, I’m going to create two simple little tables, but importantly initially create only non-unique indexes on the columns of interest:

SQL> create table bowie1 as
select rownum id, 'David Bowie' name from dual connect by level <= 1000;

Table created.

SQL> create table bowie2 as
select rownum id, mod(rownum,20)+1 code from dual connect by level <= 1000;

Table created.
SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=>'BOWIE1', method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=>'BOWIE2', method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> create index bowie1_id_i on bowie1(id);

Index created.

SQL> create index bowie2_id_i on bowie2(id);

Index created.

SQL> create index bowie2_code_i on bowie2(code);

Index created.

I’m now going to run the following query which does a simple join between the two tables and filters on the ID column from the BOWIE1 table:

 

SQL> select * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

50 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4266778954

-------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |   Time     |
-------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  1 | MERGE JOIN CARTESIAN                |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |   00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |   00:00:01 |
|  4 | BUFFER SORT                         |               |   50 |   350 |       3 (0) |   00:00:01 |
|  5 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |   00:00:01 |
|* 6 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |   00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("BOWIE1"."ID"=1)
6 - access("BOWIE2"."CODE"=1)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   8 consistent gets
   0 physical reads
   0 redo size
1815 bytes sent via SQL*Net to client
 641 bytes received via SQL*Net from client
   5 SQL*Net roundtrips to/from client
   1 sorts (memory)
   0 sorts (disk)
  50 rows processed

 

The query uses a MERGE JOIN CARTESIAN, which I (incorrectly) think is a concern, and I decide that a HASH JOIN would be a better option. So I now put in a basic USE_HASH hint:

SQL> select /*+ use_hash(bowie2) */ * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

50 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1413846643

------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |  Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |  00:00:01 |
|* 1 | HASH JOIN                           |               |   50 |  1150 |       5 (0) |  00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |  00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |  00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |  00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |  00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("BOWIE1"."ID"="BOWIE2"."CODE")
3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
  15 consistent gets
   0 physical reads
   0 redo size
1815 bytes sent via SQL*Net to client
 641 bytes received via SQL*Net from client
   5 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
  50 rows processed

And the hint has worked as I had hoped.

I then decide that perhaps a Unique Index on the ID column might be a good idea (perhaps because I read up on all the advantages of Unique Indexes in this blog). So I drop and recreate the index as a Unique Index:

SQL> drop index bowie1_id_i;

Index dropped.

SQL> create unique index bowie1_id_i on bowie1(id);

Index created.

I now re-run my hinted query to again use the Hash Join:

SQL> select /*+ use_hash(bowie2) */ * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

50 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4272245076

------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  1 | NESTED LOOPS                        |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID         | BOWIE1        |    1 |    16 |       2 (0) | 00:00:01 |
|* 3 | INDEX UNIQUE SCAN                   | BOWIE1_ID_I   |    1 |       |       1 (0) | 00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) | 00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
  15 consistent gets
   0 physical reads
   0 redo size
1815 bytes sent via SQL*Net to client
 641 bytes received via SQL*Net from client
   5 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
  50 rows processed

And we notice the hint is now being “ignored”. The hint isn’t really ignored; it’s just no longer relevant to how the CBO has now constructed the query and associated plan, with the Unique Index now making a key difference (no pun intended).
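As an aside, on Oracle Database 19c and later you don’t have to guess at hint usage: DBMS_XPLAN can include a Hint Report. A minimal sketch, run immediately after executing the hinted query (the HINT_REPORT format option assumes 19c or later):

SQL> select * from table(dbms_xplan.display_cursor(format => 'TYPICAL +HINT_REPORT'));

The Hint Report section at the bottom of the output lists each hint in the statement and flags any that were not used, which is what we would expect to see for the USE_HASH hint in this example.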

I’ll discuss in Part II why the Unique Index has made such a difference and why the hint is no longer viable.

Of course, to learn all this and a lot, lot more, you can always attend my new “Oracle Diagnostics and Performance Tuning” seminar one day 🙂

Announcement: “Oracle Indexing Internals and Best Practices Seminar” – Berlin 8-9 May 2019 !! January 30, 2019

Posted by Richard Foote in Oracle Indexes.
add a comment

I’m very excited to announce I’ll be running my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar in conjunction with the German Oracle User Group (DOAG) in Berlin, Germany on 8-9 May 2019.

The venue will be the DOAG Offices in Berlin (DOAG Office, 6th Floor, Tempelhofer Weg 64, 12347 Berlin).

Because of venue size and for a quality educational experience, numbers will be strictly limited. This will be one of the very few opportunities I’ll have to run this seminar in Europe this year, so please book early to avoid disappointment.

For all the details on how to maximise Database/Application performance with judicious use of indexes and why this seminar has been such a popular and well received educational experience, please visit my Indexing Seminar Page.

The cost of the seminar is as follows:

DOAG Members: 980 Euro Buy Now Button

Other Participants: 1400 Euro Buy Now Button

DOAG membership will need to be validated, otherwise the fee will be refunded and the place lost.

Note: If you can’t make it to this seminar in Berlin, there are still some places available for both webinar series running on 4-8 March 2019 and on 26-30 March 2019. Full details on my Indexing Internals Webinar Page.

Hope to see some of you in Berlin on 8-9 May 2019 for what I’m sure will be a fantastic event !!

If you have any questions, please contact me: richard@richardfooteconsulting.com.

Announcement: New “Oracle Diagnostics and Performance Tuning” Seminar Now Available !! January 14, 2019

Posted by Richard Foote in Performance Tuning Seminar, Performance Tuning Webinar.
1 comment so far


It’s been a work in progress for quite some time, with many of my customers asking when will it be completed. Well, I’m very excited to announce that I have finally completed my new 2 day “Oracle Diagnostics and Performance Tuning” seminar and that it’s ready to be presented.

I already have a number of private bookings for this in the coming months, with some public events and webinars to be scheduled shortly. If you’re interested in having this run at your site, contact me as I can customise it and run it exclusively at your premises for a great price. No waiting for the seminar to come near you and no travel expenses for your employees. Just the seminar/webinar for your staff, with just the topics that are of most interest to your organisation. We can even look at some of your specific performance issues, collectively analyse your associated diagnostic reports and actually diagnose and resolve your performance issues as part of the seminar.

For more information or to request a seminar near you, please email me at richard@richardfooteconsulting.com.

For all the details of this new seminar, see below or visit my “Oracle Diagnostics and Performance Tuning Seminar” page.

 

This is a must attend seminar aimed at Oracle professionals (both DBAs and Developers) who are interested in Performance Tuning.  The seminar will detail how to maximise the performance of both Oracle databases and associated applications and how to diagnose and address any performance issues as quickly and effectively as possible.

When an application suddenly runs “slow” or when people start complaining about the “poor performance” of the database, there’s often some uncertainty in how to most quickly and most accurately determine the “root” cause of any such slowdown and effectively address any associated issues. In this seminar, we explore a Tuning Methodology that helps Oracle professionals to both quickly and reliably determine the actual causes of performance issues and so ensure the effectiveness of any applied resolutions.

Looking at a number of real world scenarios and numerous actual examples and test cases, this seminar will show participants how to confidently and reliably diagnose performance issues. The seminar explores in much detail the various diagnostics tools and reports available in Oracle to assist in determining any database performance issue and importantly WHEN and HOW to effectively use each approach. Additionally, participants are also invited to share their own database/SQL reports where we can apply the principles learnt in diagnosing the performance of their actual databases/applications (we can communicate prior to the event on how to best get any reports of interest).

One of the more common reasons for poor Oracle performance is inefficient or poorly running SQL. This seminar explores in much detail how SQL is executed within the Oracle database, the various issues and related concepts important in understanding why SQL might be inefficient and the many capabilities and features Oracle has in helping to both resolve SQL performance issues and to maintain the stability and reliability of SQL execution.

It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts).

 

Note: Numbers are strictly limited due to the small class nature of the seminars. Please book early to avoid disappointment as seminars are not scheduled regularly.

For a list of all upcoming seminars, see the Seminar Dates page.

 

Seminar Content (subject to minor change)

Section One: Basic Tuning Concepts

  • Common (but flawed) Tuning Methodology
  • THE Correct Tuning Methodology
  • Importance of Response Times
  • Importance of Database Time and Concurrent Active Sessions
  • Introduction to Oracle Wait Interface
  • The Correct Mentality To Performance Tuning
  • Importance Of Instrumentation

 

Section Two: Database Tuning

In Section Two, we explore how to best diagnose any performance issues as quickly as possible to accurately determine any root issue(s) and so apply the most appropriate resolutions. We explore a number of real world examples as well as providing the opportunity to go through any reports participants may wish to share.

As part of determining how to best resolve a number of real world examples, we get the opportunity to explore the internals of the database and the importance of understanding a number of crucial Oracle database concepts (e.g. latch contention, redo log mechanism, RAC cache fusion, etc.).

Because of the abundance of Oracle database instrumentation and the wealth of data available in (say) AWR Reports, many Oracle professionals feel swamped and are uncertain how to correctly read these types of reports when trying to diagnose a performance issue. In this section, we look at both WHEN and HOW to read the various reports to very quickly and reliably determine any performance issues and so know with certainty how to apply the most appropriate solution.

We explore the following Oracle supplied database diagnostic methods and when each of them is most appropriate for determining and resolving performance issues using a methodical tuning methodology:

  • Statspack Reports
  • Automatic Workload Repository (AWR) Reports
  • Automatic Database Diagnostic Monitor (ADDM) Reports
  • Active Session History (ASH) Reports
  • (Advanced) SQL Trace Files

 

Section Three: SQL Tuning

One of the most common reasons for poor performance is related to poorly running, inefficient SQL statements. In this section we explore in detail a number of important concepts and Oracle capabilities (up to and including Oracle Database 18c) in relation to maximising the performance and stability of SQL statements. Topics covered include:

  • CBO Internals And How The CBO Costs SQL Statements
  • Why Is The More “Expensive” Costed SQL Often The Faster
  • Differences between ALL_ROWS and FIRST_ROWS(n) optimization
  • Setting Optimizer Based Parameters
  • CPU Cost Model and System Statistics
  • Schema Statistics Gathering Tips
  • Histogram Internals
  • Extended Statistics and Virtual Columns
  • Dynamic Sampling
  • Displaying and Reading Execution Plans
  • GATHER_PLAN_STATISTICS Hint
  • Managing Outlier Data
  • Join Processing, Join Types and Join Methods
  • Adaptive Query Optimization
  • Adaptive Plans
  • Cardinality Feedback
  • Adaptive Statistics
  • SQL Plan Directives
  • SQL Blocks
  • SQL Transformations
  • 10053 Trace Report
  • SQL Monitor Reports
  • SQL Hints (and when/how to use appropriately)
  • SQL Patching
  • SQL Plan Stability
  • SQL Profiles
  • SQL Stored Outlines
  • Migrating Stored Outlines to SQL Plan Management (Baselines)
  • SQL Plan Management (Baselines)
  • Baselines Use Cases

 

For more information or to request a seminar near you, please email richard@richardfooteconsulting.com.