
Bitmap Index On Column With 212552698 Distinct Values, What Gives? (I’d Rather Be High) April 23, 2019

Posted by Richard Foote in Advanced Index Compression, Autonomous Data Warehouse, Autonomous Database, Index Compression, Oracle Indexes.


In my previous post on Indexing The Autonomous Warehouse, I highlighted how it might be necessary to create indexes to improve the performance and scalability of highly selective queries, as it might on any Data Warehouse running on an Exadata platform. In that post, I created a Bitmap Index and showed how SQL performance improved as a result.

I’ve had two subsequent correspondences asking why I decided to use a Bitmap Index in my example, when the column in question had a (relatively) massive number of distinct values, some 212552698. What gives, wouldn’t it have been far more efficient to have used a standard B-Tree index instead, as Bitmap Indexes are only suitable for columns that have low-medium numbers of distinct values? There’s nothing particularly low-medium cardinality about 212552698 distinct values.

It’s a good question and as two people asked basically the same question, I thought it worthy of a separate post to try to address it.

Firstly, I must admit the only calculation I performed mentally when deciding which type of index to use was to look at the number of distinct values (212552698) and the number of rows in the table (1 billion) and determine that there’s approximately 5 rows per distinct value. With 5 repeated values on average, this was sufficient for the Bitmap Index to likely be the more efficient type of index, and so I created a Bitmap Index in my demo, considering this was a Data Warehouse environment where potential locking issues relating to concurrent DMLs were not a concern.
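For anyone wanting to perform the same quick sanity check, something along these lines (a sketch only, assuming statistics have already been gathered) returns the average number of rows per distinct value:

SQL> select t.num_rows, c.num_distinct,
            round(t.num_rows/c.num_distinct) avg_rows_per_key
     from user_tables t, user_tab_columns c
     where t.table_name = c.table_name
     and t.table_name = 'BIG_ZIGGY'
     and c.column_name = 'LO_ORDERKEY';

With 1000000000 rows and 212552698 distinct values, that works out at roughly 5 rows per key.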

As I’ve discussed previously, such as in this classic post from way back in 2010, “So What Is A Good Cardinality Estimate For A Bitmap Index Column“, it’s not the number of distinct values that’s important when deciding if a Bitmap Index might be appropriate in a Data Warehouse environment, but the column cardinality and the average number of repeated column values.

As the above post discusses, a Bitmap Index might only need the one index entry per column value, and as the resultant index bitmap string can be very highly compressed if there are few actual occurrences per value, a column with as few as 5 repeated values on average would likely be more efficient than a corresponding B-Tree Index. A B-Tree Index with 5 repeated values would require the value to be stored 5 times, with their 5 corresponding rowids, whereas the Bitmap Index might only need to store the indexed value once, with its 2 corresponding (start and end) rowids and a trivial amount for the highly compressed bitmap string.
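As a rough back-of-envelope only (the byte figures below are ballpark assumptions, not measurements), the difference per distinct value looks something like this:

-- B-Tree: 5 x (key + rowid + entry overhead), say 5 x (8 + 6 + 2) = ~80 bytes
-- Bitmap: 1 x (key + start/end rowids + tiny compressed bitmap)   = ~25 bytes

The exact numbers will of course vary, but the structural advantage of storing the key just the once is clear.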

If we create an equivalent B-Tree Index to the previously created Bitmap Index:

SQL> create index big_ziggy_lo_orderkey_i2 on big_ziggy(lo_orderkey) invisible;

SQL> select index_name, index_type, leaf_blocks, compression from user_indexes
     where table_name = 'BIG_ZIGGY';

INDEX_NAME                     INDEX_TYPE LEAF_BLOCKS COMPRESSION
------------------------------ ---------- ----------- -------------
BIG_ZIGGY_LO_ORDERKEY_I        BITMAP          843144 DISABLED
BIG_ZIGGY_LO_ORDERKEY_I2       NORMAL         2506552 DISABLED

We notice the Bitmap Index at 843144 leaf blocks is indeed substantially smaller than the equivalent B-Tree Index at 2506552 leaf blocks.

However, this is the Oracle Autonomous Data Warehouse environment, where I have access to the Advanced Compression option and hence the capability to potentially create highly compressed B-Tree index structures. See various previous posts on Index Advanced Compression.

Perhaps I can create a smaller structure than the Bitmap Index by using Index Advanced Compression. Let’s try initially with Advanced Compression Low:

SQL> drop index big_ziggy_lo_orderkey_i2;

Index dropped.

SQL> create index big_ziggy_lo_orderkey_i3 on big_ziggy(lo_orderkey) compress advanced low invisible;
...

SQL> select index_name, index_type, leaf_blocks, compression from user_indexes
     where table_name = 'BIG_ZIGGY';

INDEX_NAME                     INDEX_TYPE LEAF_BLOCKS COMPRESSION
------------------------------ ---------- ----------- -------------
BIG_ZIGGY_LO_ORDERKEY_I        BITMAP          843144 DISABLED
BIG_ZIGGY_LO_ORDERKEY_I3       NORMAL         1921296 ADVANCED LOW

We notice the B-Tree Index is indeed now smaller at just 1921296 leaf blocks, down from 2506552 leaf blocks, but still not as small as the 843144 leaf blocks of the Bitmap Index.

However, this is a Data Warehouse environment where the DML overheads associated with the maintenance of Advanced Compression High indexes may not be of such concern. Additionally, I might have ample CPU resources to cater for any additional CPU requirements of accessing such Advanced Compression High indexes. Therefore, let’s create a B-Tree Index with Advanced Compression High:

SQL> drop index big_ziggy_lo_orderkey_i3;

Index dropped.

SQL> create index big_ziggy_lo_orderkey_i4 on big_ziggy(lo_orderkey) compress advanced high invisible;
...

SQL> select index_name, index_type, leaf_blocks, compression from user_indexes
     where table_name = 'BIG_ZIGGY';

INDEX_NAME                     INDEX_TYPE LEAF_BLOCKS COMPRESSION
------------------------------ ---------- ----------- -------------
BIG_ZIGGY_LO_ORDERKEY_I        BITMAP          843144 DISABLED
BIG_ZIGGY_LO_ORDERKEY_I4       NORMAL          786537 ADVANCED HIGH

We notice the index has significantly reduced further in size and, at just 786537 leaf blocks, is now indeed smaller than the corresponding Bitmap Index.

Note that although it might be beneficial to use Advanced Compression High on Bitmap Indexes, such an option is not currently supported.
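To illustrate, an attempt along the following lines (a hypothetical statement with an invented index name) would simply fail with an error rather than create a compressed Bitmap Index:

SQL> create bitmap index big_ziggy_lo_orderkey_i5 on big_ziggy(lo_orderkey) compress advanced high;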

Of course your mileage will vary on which Index Type and Compression Option is most appropriate, depending on the characteristics of your data; however, it might be worth considering all these options in environments where you have access to them.

But yes, a Bitmap Index might indeed be the better option, even if there are billions of distinct values (if there are say 10s of billions of rows) for the columns to be indexed…


Indexing The Oracle Autonomous Data Warehouse (Autobahn) April 2, 2019

Posted by Richard Foote in Autonomous Data Warehouse, Autonomous Database, Oracle Indexes.


When the Autonomous Data Warehouse was initially released in March 2018, one of the notable restrictions was that it did not support typical Data Warehouse performance related structures such as Indexes, Partitioning or Materialized Views. The platform was deemed efficient enough to not require these features, which would unnecessarily complicate the autonomous environment.

In September 2018, these restrictions were finally lifted, with the (for now) manual creation of Indexes, Partitioning and Materialized Views now all supported. Oracle however states that “Oracle recommends that you do not manually create these structures, and leave performance optimizations to the Autonomous Data Warehouse. If you’re a highly skilled Oracle Database tuning expert and decide to manually create these access structures, please be advised to test the impact of your manual tuning efforts on your full workload.”

If you’re not a “highly skilled Oracle Database tuning expert”, let me give you some advice on when to create indexes in the Autonomous Data Warehouse (on Oracle Database 18c where these features are NOT automatically created and maintained by autonomous processes).

I’ll initially focus on two of the key capabilities available in the Autonomous Data Warehouse environment that might indeed make some index structures unnecessary, although neither is unique to autonomous databases.

Let’s begin by creating a relatively large table based on the huge LINEORDER table found in the provided SSB schema. The following BIG_ZIGGY table is based on 1 billion rows from the LINEORDER table:

SQL> create table big_ziggy as select * from ssb.lineorder where rownum<=1000000000;

Frustratingly, if you have a session that performs a database operation for an extended period, the session loses its connection and hangs indefinitely. You need to start a separate session to confirm when the task actually completes, tracking progress by say looking at v$session_longops or querying dba_segments.
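For example, something like the following from a second session does the job (a sketch only; adjust names and filters to suit):

SQL> select sid, opname, sofar, totalwork,
            round(sofar/totalwork*100,1) pct_done
     from v$session_longops
     where totalwork > 0 and sofar <> totalwork;

SQL> select segment_name, round(bytes/1024/1024/1024,1) GB
     from dba_segments
     where segment_name = 'BIG_ZIGGY';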

So we have a decently sized BIG_ZIGGY table at around 27GB:

SQL> select table_name, num_rows, blocks, round(blocks/128) MB
from user_tables where table_name='BIG_ZIGGY';

TABLE_NAME     NUM_ROWS     BLOCKS         MB
------------ ---------- ---------- ----------
BIG_ZIGGY    1000000000    3534714      27615

Here are a couple of key points.

Firstly, if you run a query on a table using filtering columns for the first time, then none of the smarts in the autonomous database environment are going to help you here. Things will initially run slowly, because Oracle will not yet have detected that there is anything that needs to be tuned. The tuning process is “reactive”, in that Oracle needs to find an issue first before it can potentially address it.

The second key point is that if only a small fraction of the overall data is required or need be accessed, that is, if the data being accessed is very highly filtered, then a database index is likely going to be highly effective, even within the Autonomous Data Warehouse. This is precisely why indexes are typically required in databases.

Although large volumes of data are very often accessed in typical Data Warehouse workloads, this is not always the case. It’s certainly not uncommon for ad-hoc queries and the like to only access a small fraction of an available data set.

The following query on the large BIG_ZIGGY table only returns 9 rows:

SQL> select * from big_ziggy
where lo_orderkey = 2294059393;

9 rows selected.

Elapsed: 00:06:23.99

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                            |    5 |   505 |  67250 (12)|  00:00:03 |
|  1 |  RESULT CACHE               | 2u5qwsq68ghc5bqn5spqhyqjyj |      |       |            |           |
|* 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   505 |  67250 (12)|  00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059393)
     filter("LO_ORDERKEY"=2294059393)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=17; dependencies=(BOWIE.BIG_ZIGGY); name="select * from big_
ziggy where lo_orderkey = 2294059393"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530785 consistent gets
3530770 physical reads
      0 redo size
   2116 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      9 rows processed

The query takes nearly 6.5 minutes to complete, which is clearly a long time to return just a few rows.

Note that the Autonomous Data Warehouse platform is Exadata, and Exadata has a number of key capabilities, such as Hybrid Columnar Compression (HCC), Smart Scans, Storage Indexes and the like, that can be extremely useful in Data Warehouse environments.

For example, the table is smaller and can be accessed more efficiently thanks to HCC:

SQL> select table_name, compression, compress_for
from user_tables where table_name='BIG_ZIGGY';

TABLE_NAME   COMPRESS COMPRESS_FOR
------------ -------- ------------------------------
BIG_ZIGGY     ENABLED QUERY HIGH ROW LEVEL LOCKING

Exadata Smart Scans and associated Storage Indexes can result in extremely efficient Full Table Scans, by potentially allowing large sections of the physical data to be skipped and by returning just the required data set back to the database. I’ve blogged previously a number of times on the power of Storage Indexes and Exadata related smarts.

However, if a query is executed using specific filtering columns for the first time, then Storage Indexes will not have yet been created. If we look at the effectiveness of Storage Indexes when running the previous query:

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s
where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                             0
cell physical IO interconnect bytes returned by smart scan       7.79956818

Note that no bytes are saved by storage indexes, as they don’t initially exist. Although only a relatively small amount of data is actually returned to the database server, having to read the entire table is expensive and is why the query is taking so long to complete.

If we run a number of different queries with a filter predicate on the same LO_ORDERKEY column and eventually run the following query:

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity
from big_ziggy where lo_orderkey = 2294059491;

6 rows selected.

Elapsed: 00:01:41.95

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time      |
---------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)|  00:00:03 |
|  1 |  RESULT CACHE               | 9ycjfft9fx7gy506tr9d98djdj |      |       |            |           |
|* 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)|  00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059491)
     filter("LO_ORDERKEY"=2294059491)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orde
rkey = 2294059491"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530785 consistent gets
3530770 physical reads
      0 redo size
    760 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      6 rows processed

We note that the overall elapsed time has now decreased, taking only 1 minute, 42 seconds to complete.

If we now look at the effectiveness of storage indexes:

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s
where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                     731.78125
cell physical IO interconnect bytes returned by smart scan       7.59265137

The storage index is now providing some benefit but, as I’ve discussed previously, there are limitations to their effectiveness depending on data distribution and the physical clustering of the data. So your mileage will vary on the effectiveness of smart scans and no, Oracle does not (currently) automatically re-cluster data to improve smart scan performance where doing so would be most effective.
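If the physical clustering of the data is the limiting factor, one manual option potentially worth investigating is attribute clustering (a sketch only; whether it’s available or appropriate in your particular environment, and whether re-organising a table of this size is viable, is another matter):

SQL> alter table big_ziggy add clustering by linear order (lo_orderkey);

SQL> alter table big_ziggy move;

The clustering directive only takes effect for direct path loads and data movement operations, hence the subsequent move.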

If you re-run the same query multiple times, another performance feature kicks in:

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059491;

6 rows selected.

Elapsed: 00:00:01.74

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
|  Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)| 00:00:03 |
|   1 |  RESULT CACHE               | 9ycjfft9fx7gy506tr9d98djdj |      |       |            |          |
| * 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)| 00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059491)
     filter("LO_ORDERKEY"=2294059491)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059491"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
  0 recursive calls
  0 db block gets
  0 consistent gets
  0 physical reads
  0 redo size
760 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  6 rows processed

Response times are now under 2 seconds (Australia is a long way from Ashburn, USA), with Oracle automatically using the Result Cache to directly access the necessary result and hence not requiring any table related consistent reads. This is as efficient as it gets, but it’s dependent on similar result sets being returned and on no data changes that would invalidate the result cache, such as a new data load into the data warehouse:

SQL> insert into big_ziggy select * from big_ziggy where rownum <= ...;

SQL> commit;

Commit complete.

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059491;

6 rows selected.

Elapsed: 00:01:44.79

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
|  Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)| 00:00:03 |
|   1 |  RESULT CACHE               | 9ycjfft9fx7gy506tr9d98djdj |      |       |            |          |
| * 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)| 00:00:03 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059491)
     filter("LO_ORDERKEY"=2294059491)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orde
rkey = 2294059491"

Note
-----

- automatic DOP: Computed Degree of Parallelism is 1

Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530808 consistent gets
3530786 physical reads
      0 redo size
    760 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      6 rows processed

After new data gets loaded into the table, we’re back to being reliant on the effectiveness of the Exadata smart scan.
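If you want to confirm the invalidation for yourself, one option is to inspect the cached result objects (a sketch, assuming you have access to the relevant v$ view); invalidated results are listed with a STATUS of Invalid:

SQL> select id, name, status
     from v$result_cache_objects
     where type = 'Result';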

So we have a scenario here where we either have excellent performance via the result cache, or just average performance if we want specific data, based on a moderately effective storage index and resultant smart scan (in a different session, the following query is run):

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059492;

Elapsed: 00:02:03.99

Execution Plan
----------------------------------------------------------
Plan hash value: 2097953955

---------------------------------------------------------------------------------------------------------
|  Id | Operation                   | Name                       | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                            |    5 |   125 |  67250 (12)| 00:00:03 |
|   1 |  RESULT CACHE               | 4s0z6mkqpufc33mxb9ddvuc69r |      |       |            |          |
| * 2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY                  |    5 |   125 |  67250 (12)| 00:00:03 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - storage("LO_ORDERKEY"=2294059492)
     filter("LO_ORDERKEY"=2294059492)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); name="select lo_orderkey,
lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orde
rkey = 2294059492"
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1
Statistics
----------------------------------------------------------
      1 recursive calls
      0 db block gets
3530785 consistent gets
3530770 physical reads
      0 redo size
    695 bytes sent via SQL*Net to client
    552 bytes received via SQL*Net from client
      2 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
      2 rows processed

SQL> select name , value/1024/1024 MB from v$statname n, v$mystat s where n.statistic# = s.statistic# and n.name in ('cell physical IO interconnect bytes returned by smart scan', 'cell physical IO bytes saved by storage index');

NAME                                                                     MB
---------------------------------------------------------------- ----------
cell physical IO bytes saved by storage index                     731.78125
cell physical IO interconnect bytes returned by smart scan       7.59265137

To get both consistent and excellent performance when running a query that performs a high level of filtering, we need to create an appropriate index as we would in any other regular data warehouse database environment:

SQL> create bitmap index big_ziggy_lo_orderkey_i on big_ziggy(lo_orderkey);
SQL> select index_name, leaf_blocks, blevel, distinct_keys, status from user_indexes where table_name='BIG_ZIGGY';

INDEX_NAME                LEAF_BLOCKS     BLEVEL DISTINCT_KEYS STATUS
------------------------- ----------- ---------- ------------- --------
BIG_ZIGGY_LO_ORDERKEY_I        843144          3     212552698 VALID

If we now run random queries based on the highly selective LO_ORDERKEY column:

SQL> select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from big_ziggy where lo_orderkey = 2294059489;

Elapsed: 00:00:01.67

Execution Plan
----------------------------------------------------------
Plan hash value: 1677294303

------------------------------------------------------------------------------------------------------------
| Id | Operation                      | Name                       | Rows | Bytes | Cost (%CPU)| Time      |
------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |                            |    5 |   125 |       5 (0)|  00:00:01 |
|  1 |  RESULT CACHE                  | 7tn121mujfw3kfg39z5pg4duwy |      |       |            |           |
|  2 |   TABLE ACCESS BY INDEX ROWID  | BIG_ZIGGY                  |    5 |   125 |       5 (0)|  00:00:01 |
|  3 |    BITMAP CONVERSION TO ROWIDS |                            |      |       |            |           |
|* 4 |     BITMAP INDEX SINGLE VALUE  | BIG_ZIGGY_LO_ORDERKEY_I    |      |       |            |           |
------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("LO_ORDERKEY"=2294059489)

Result Cache Information (identified by operation id):
------------------------------------------------------

1 - column-count=5; dependencies=(BOWIE.BIG_ZIGGY); attributes=(ordered); nam
e="select lo_orderkey, lo_linenumber, lo_custkey, lo_partkey, lo_quantity from b
ig_ziggy where lo_orderkey = 2294059489"
Note
-----
- automatic DOP: Computed Degree of Parallelism is 1 because of no expensive
parallel operation

Statistics
----------------------------------------------------------
  1 recursive calls
  0 db block gets
  8 consistent gets
  0 physical reads
  0 redo size
647 bytes sent via SQL*Net to client
553 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

We now get the required, consistent, high performance, without having to resort to consuming additional CPU resources by say increasing the number of available CPUs and the degree of parallelism to reduce response times.

In short, you index the Autonomous Data Warehouse as you would any database running on an Exadata platform…

Remember, this is all effectively version one of the Autonomous Data Warehouse. The capability of automatically creating necessary indexes is coming very soon…

Intro: Initial Thoughts On Oracle Autonomous Database Cloud Services (Automatic For The People) March 22, 2019

Posted by Richard Foote in 18c, 19c, 19c New Features, Autonomous Data Warehouse, Autonomous Database, Autonomous Transaction Processing, Oracle Indexes.


I’m currently writing up a few blog pieces on indexing in the Oracle Autonomous Database environments, but I thought I’d begin by introducing what exactly the Oracle Autonomous Database Cloud Services are, along with some of my initial thoughts, as there’s still some confusion around all this.

Introduced by Uncle Larry at Oracle OpenWorld 2017, Oracle Autonomous Databases are positioned as “self-driving” Cloud-based database environments that automatically address many of the tasks usually performed by DBAs, such as patching, backups, security and, of most interest to me, database tuning.

The first of these cloud environments, the Oracle Autonomous Data Warehouse Cloud Service (ADW), was released in March 2018, with the Oracle Autonomous Transaction Processing Cloud Service (ATP) following in August 2018.

So key point number one: these are all Oracle Autonomous Database Cloud Services; there are no actual Autonomous Databases as such. These environments currently consist of the following key components:

  • Oracle Database 18c (and above)
  • Oracle Cloud Environment
  • Oracle Exadata Infrastructure
  • Oracle Policy-Driven Automation

So the Oracle Database alone is not an autonomous database. An Oracle 18c database running on your On-Premises Exadata is still not enough to be an autonomous database platform. It requires all the above components, which is why this is only available on the Oracle Cloud environment (and will likely soon be available on the Oracle Cloud at Customer environment, where all these components can also be replicated).

Now, having a database environment in which many of the mundane DBA tasks (such as database provisioning, patching, taking backups, etc.) can be automated can only be a good thing, and the Autonomous Database Cloud services deliver on all these. Creating an Autonomous Database environment truly takes just a few minutes, and I could easily connect via SQL*Plus and SQL Developer from my laptop a few minutes later.

However, my key interest here is in database tuning (hey, I’m a database tuning guy) and the capability of the Autonomous Database Cloud Services to self-tune and self-repair when database performance issues and inefficiencies are encountered.

So, are we there yet?

So my second key point is that in many ways, we already have a “self-tuning” database with Oracle and have had so for many, many years. I’m old enough to remember the Rule-Based Optimizer, when the structure of the SQL was critical to performance. I remember tuning Rollback Segments and manually setting extent sizes to make sure no segment got too close to 121 extents, else it couldn’t grow any further. I remember manually setting Freelists, I remember having to write my own jobs to run maintenance tasks and setting tablespaces to be in backup mode etc. etc. etc.

The Oracle Database has evolved over the years to the point where Oracle DBAs don’t have to worry about these things. If you wish, you can now configure the Oracle database to automatically adjust memory components, automatically collect necessary schema statistics, automatically create baselines and tune SQL queries, etc. etc. etc.

All of these Oracle database capabilities are crucial in the new Oracle autonomous database environments, as are new Oracle 18c features, as will be new Oracle 19c features (especially in relation to self-tuning the Autonomous Transaction Processing Cloud Service). The newer autonomous database capabilities are just part of this Oracle database self-tuning evolution, with some new policy-driven automation thrown into the mix in the future.

So are we indeed there yet with a completely self-tuning database with these new autonomous database services? In a word, no.

Which brings me to my next key point. This is all very, very new. All these autonomous database services are effectively at “Edition One” status. This will all take time to eventually evolve into truly, fully self-tuning database environments. There’ll be some new cool capabilities and features which will assist in some areas but be initially deficient in other areas. But clearly this is the future, and clearly in future releases more and more self-tuning capabilities will be added that will make Oracle database environments easier to both manage and tune.

Note Oracle Corporation itself (depending on who you talk to) is quite clear that it isn’t there yet, with the web page on the Autonomous Transaction Processing Cloud service clearly stating “Workload Optimization* (coming soon)“, but with lots of clues on what’s to come, with features such as “Database tunes itself (indexes, memory, partitions, SQL plans, parallelism, optimizer stats) for the incoming workload as data changes over time“.

Do many of these upcoming features sound familiar? If you’re not sure, check out the Oracle Database 19c New Features manual.

Which brings me to my final key point here. Even if you’re not particularly interested in the Cloud, even if you view managing On-Premises environments as your foreseeable future, some of the best things that have happened to you in relation to the Oracle Database come courtesy of Oracle’s strategic direction with the cloud. Many of the best new features introduced in the past few Oracle Database releases, especially in relation to the CBO and much of the online stuff, such as moving tables and partitions online, moving data files online, converting tables to be partitioned online, converting partitioned tables differently online, merging/splitting partitions online, etc. etc., are clearly going to be critical in a self-managing/tuning database. As a DBA of an On-Premises database environment, you can also take advantage of these new capabilities.

It will enable Oracle in its autonomous environments, for example, to automatically convert a table to be partitioned in a specific manner to improve overall performance, all behind the scenes, without anyone necessarily knowing it has done so.

Is it there yet? No. Is it coming? You bet.

That said, some newer Oracle Database 19c features that will clearly be critical to a self-tuning autonomous database moving forward, such as Real-Time Statistics, SQL Quarantine and Automatic Indexing, will only be available on the Oracle Cloud and Exadata platforms. So take note…

Which brings me to indexing.

When the first Oracle Autonomous Data Warehouse cloud service was announced in March 2018, one of the key “features” of the new autonomous platform was that indexing was disabled (as were other traditional Data Warehouse database capabilities such as Partitioning, Materialized Views, etc.). But how can you effectively “tune” a database when you take away some of the key tuning capabilities? The short answer is that you can’t, unless the database has somewhat simplistic data workloads and data structures.

Remember, this was “Version One” of the first autonomous database environment, and the initial strategy was to make the Oracle database not smarter, but dumber. Simplify it such that DBAs can’t easily “break” things, and simply make the Exadata platform do all the heavy lifting with regards to database tuning. A more simplified environment makes things a little easier to “self-tune” as there are fewer moving parts to worry about.

For more simplistic Data Warehouse environments, this might all be adequate. But as those of you who follow my blog and my previous discussions on indexing on Exadata will know, dropping all indexes on Exadata, even in Data Warehouse environments, was generally a bad idea. So providing an Oracle database platform, even an autonomous one running on Exadata, where all indexing was effectively disabled, was always going to have performance shortfalls in some scenarios. Try running ad-hoc queries on the supplied SSB LINEORDER table for example; Smart Scans, Storage Indexes, HCC, Result Caches, etc. will only take you so far.
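Something as simple as the following (an illustrative sketch with an arbitrary key value) would have no choice but to read the entire multi-billion row table:

SQL> select lo_orderkey, lo_linenumber, lo_quantity, lo_revenue
     from ssb.lineorder
     where lo_orderkey = 2294059393;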

So as I said, Oracle evolves and improves and now allows database indexes to be created in the Autonomous Data Warehouse cloud service.

Which will be the focus of upcoming blog posts, so stay tuned.

Announcement: “Oracle Performance Diagnostics and Tuning” Webinar – 9-12 July 2019 !! March 21, 2019

Posted by Richard Foote in Oracle Indexes, Oracle Performance Diagnostics and Tuning Webinar, Performance Tuning, Performance Tuning Webinar.


I’m very excited to announce that the first public running of my new “Oracle Performance Diagnostics and Tuning” Webinar will run between 9-12 July 2019 (6pm-10pm AEST):

Webinar Series 9-12 July 2019 (start 6pm AEST, end 10pm AEST).

This is a must-attend seminar aimed at Oracle professionals (both DBAs and Developers) who are interested in Performance Tuning. The seminar will detail how to maximise the performance of both Oracle databases and associated applications and how to diagnose and address any performance issues as quickly and effectively as possible.

When an application suddenly runs “slow” or when people start complaining about the “poor performance” of the database, there’s often some uncertainty in how to most quickly and most accurately determine the “root” cause of any such slowdown and effectively address any associated issues. In this seminar, we explore a Tuning Methodology that helps Oracle professionals to both quickly and reliably determine the actual causes of performance issues and so ensure the effectiveness of any applied resolutions.

Looking at a number of real world scenarios and numerous actual examples and test cases, this seminar will show participants how to confidently and reliably diagnose performance issues. The seminar explores in much detail the various diagnostics tools and reports available in Oracle to assist in determining any database performance issue and importantly WHEN and HOW to effectively use each approach.

One of the more common reasons for poor Oracle performance is inefficient or poorly running SQL. This seminar explores in much detail how SQL is executed within the Oracle database, the various issues and related concepts important in understanding why SQL might be inefficient and the many capabilities and features Oracle has in helping to both resolve SQL performance issues and to maintain the stability and reliability of SQL execution.

It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts).

For a detailed look at seminar content, please visit my “Oracle Performance Diagnostics and Tuning” page.

The webinar will run over four days between 9-12 July 2019 between 6pm-10pm AEST.

For the following webinar series, you can purchase your registration here via credit card (if NOT based in Australia), else contact me if you wish to pay via direct bank transfer or if you’re based in Australia (as GST must be added). Note: if you’re based in Australia, there are actual in-person seminars running throughout winter 2019 that you can attend.

Webinar Series 9-12 July 2019 (start 6pm AEST, end 10pm AEST).

I’ve had heaps of correspondence regarding this, so please book early to avoid disappointment, as places will be strictly limited to ensure a quality training experience for all attendees, with plenty of opportunity for questions.

Further webinars will be scheduled for later in the year, so stay tuned.

If you have any questions, don’t hesitate to contact me at richard@richardfooteconsulting.com.

Oracle Database 19c Automatic Indexing: Predicted Back In 2008 (A Better Future) March 19, 2019

Posted by Richard Foote in Automatic Indexing, Autonomous Database, Oracle Indexes.


I’ve recently received a number of correspondences regarding one of my most popular blog posts, dating back to February 2008: Index Rebuild vs. Coalesce vs. Shrink Space (Pigs – 3 Different Ones).

In the comments section, there’s an interesting discussion where I mention:

If Oracle19 does everything for you and all the various indexes structures get automatically created, used, applied, tuned, maintained, partitioned, etc. etc., then I’ll have nothing to write about 🙂.

Actually, when I think about it, it may not take as long as Oracle19 for Oracle to get there.

The fact this was stated some 10 years before the release of Oracle Database 19c, with indeed its new Automatic Indexing capability, is kinda funny.

I would however like to state a few points.

Firstly, congratulations for the impressively timely prediction should go to Robert, who made the initial reference to Oracle 19c in the comments.

Although the change in Oracle database versioning helped in getting the predicted Oracle version correct, I did at least make the comment that I thought some form of Automatic Indexing was likely before Oracle 19c.

Although not all the predicted capabilities are there yet (no auto index maintenance, no auto index partitioning, not all index structures are yet supported with auto indexing, etc.), I do predict that these will all be implemented within a few Oracle database releases (early 20s).

I stated there would be nothing to blog about if automatic indexing became a reality, which of course is not true either. The more one understands how Oracle actually works, the more one can proactively identify and address any potential issues and deficiencies. Automatic Indexing is ONLY going to be available on Exadata and the Oracle Cloud platform, so the majority of Oracle On-Premises database users will still need to manually manage indexes. Understanding how the Automatic Indexing capability works means one can potentially highlight and learn how to more appropriately manage indexes manually.

So finally, yes I will of course be blogging about Automatic Indexing and about indexing in general in the new Autonomous Database cloud environments.

I’ll start with some initial thoughts and examples of indexing the Autonomous Data Warehouse Database.

Stay tuned.

Announcement: “Oracle Performance Diagnostics and Tuning” Seminar – Australia/NZ Winter Dates March 14, 2019

Posted by Richard Foote in Oracle Indexes, Oracle Performance Diagnostics and Tuning Seminar.


I’m very excited to announce the first public running of my new “Oracle Performance Diagnostics and Tuning” Seminar throughout Australia and New Zealand this coming winter 2019. (See my Oracle Performance Diagnostics and Tuning Seminar page for all the seminar content and details).

This is a must-attend seminar aimed at Oracle professionals (both DBAs and Developers) who are interested in Performance Tuning. The seminar will detail how to maximise the performance of both Oracle databases and associated applications and how to diagnose and address any performance issues as quickly and effectively as possible.

When an application suddenly runs “slow” or when people start complaining about the “poor performance” of the database, there’s often some uncertainty in how to most quickly and most accurately determine the “root” cause of any such slowdown and effectively address any associated issues. In this seminar, we explore a Tuning Methodology that helps Oracle professionals to both quickly and reliably determine the actual causes of performance issues and so ensure the effectiveness of any applied resolutions.

Looking at a number of real world scenarios and numerous actual examples and test cases, this seminar will show participants how to confidently and reliably diagnose performance issues. The seminar explores in much detail the various diagnostics tools and reports available in Oracle to assist in determining any database performance issue and importantly WHEN and HOW to effectively use each approach. Additionally, participants are also invited to share their own database/SQL reports where we can apply the principles learnt in diagnosing the performance of their actual databases/applications (we can communicate prior to the event on how to best get any reports of interest).

One of the more common reasons for poor Oracle performance is inefficient or poorly running SQL. This seminar explores in much detail how SQL is executed within the Oracle database, the various issues and related concepts important in understanding why SQL might be inefficient and the many capabilities and features Oracle has in helping to both resolve SQL performance issues and to maintain the stability and reliability of SQL execution.

It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle professionals).

As usual, class sizes will be intentionally small to ensure all attendees get the maximum educational benefit from the seminar. Early bird rates are available, so please book early to avoid disappointment, as I don’t get the opportunity to run these events very often.

Dates for the upcoming seminars in Australia/New Zealand:

Sydney: 22-23 July (Training Choice, 60 Clarence St): Registration Link

Canberra: 24-25 July (Training Choice, 54 Marcus Clark St): Registration Link

Brisbane: 29-30 July (Training Choice, 88 Creek St): Registration Link

Melbourne: 31 July – 1 August (Training Choice, 50 Queen St): Registration Link

Perth: 5-6 August (Training Choice, 2/220 St Georges Terrace): Registration Link

Adelaide: 7-8 August (Training Choice, 19 Young St): Registration Link

Wellington: 19-20 August (Auldhouse, Jackson Stone House 11 Hunter St): Registration Link

Auckland: 21-22 August (Karstens, Level 4, 205 Queen Street): Registration Link

 

I’ll soon announce dates for a seminar in my native London, UK and the first series of webinars for this training. Stay tuned !!

If you have any questions at all, please contact me at richard@richardfooteconsulting.com

Unique Indexes Force Hints To Be “Ignored” Part II (One Of The Few) February 19, 2019

Posted by Richard Foote in CBO, Hash Join, Hints, Oracle Indexes, Transitive Closure, Unique Indexes.


In Part I, I showed a demo of how the introduction of a Unique Index appears to force a hint to be “ignored”. This is a classic case of the difference a Unique Index can make in the CBO’s deliberations.

So what’s going on here?

When I run the first, un-hinted query:

SQL> select * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

we notice something a little odd in the Predicate Information section of the execution plan:

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("BOWIE1"."ID"=1)
6 - access("BOWIE2"."CODE"=1)

Where the hell is the join condition? It’s not listed. Additionally, where does the BOWIE2.CODE=1 condition come from? It’s not a predicate within the above SQL statement.

The answer is “Transitive Closure“, whereby the CBO can automatically infer that BOWIE2.CODE must equal 1, if BOWIE2.CODE=BOWIE1.ID and BOWIE1.ID=1. This is something that the CBO master Jonathan Lewis has blogged about a number of times, including this post on Cartesian Merge Join.

Because the CBO is picking up the fact (based on the column statistics) that BOWIE1.ID is basically a unique column, it can effectively drop the join condition and simply use a Merge Join Cartesian to get all rows from BOWIE1 that match the BOWIE1.ID=1 predicate (just 1 row), with all those rows from BOWIE2 that match the BOWIE2.CODE=1 predicate (estimated 50 rows).
-------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |   Time     |
-------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  1 | MERGE JOIN CARTESIAN                |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |   00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |   00:00:01 |
|  4 | BUFFER SORT                         |               |   50 |   350 |       3 (0) |   00:00:01 |
|  5 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |   00:00:01 |
|* 6 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |   00:00:01 |
-------------------------------------------------------------------------------------------------------
The CBO is picking up the fact that a Merge Join Cartesian is not as bad as it might sound if only 1 row is likely to be returned from one side of this process, a row that can be combined with just the rows of interest from the other table, with no unnecessary row throwaways.

However, the CBO might not get this right; the efficiency of this depends somewhat on there really only being the one row returned from the BOWIE1.ID=1 condition. If many more than one row were possible, then this could become relatively inefficient.

The second hinted query is therefore directing Oracle to perform a Hash Join, as I potentially know my data better than Oracle and think it a better option in this case:

SQL> select /*+ use_hash(bowie2) */ * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

The hint is directing Oracle to use the BOWIE2 table as the probe table in a Hash Join operation.

We notice that the join predicate is now listed in the Predicate Information of the execution plan:

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("BOWIE1"."ID"="BOWIE2"."CODE")
3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)

The key point being that there could be a many-to-many join operation that needs to be performed, and Oracle can’t perform a Hash Join unless there is a join predicate listed.

As such, the CBO uses a Hash Join as requested by the hint:
------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |  Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |  00:00:01 |
|* 1 | HASH JOIN                           |               |   50 |  1150 |       5 (0) |  00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |  00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |  00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |  00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |  00:00:01 |
------------------------------------------------------------------------------------------------------
Note that currently there is no Unique constraint on the BOWIE1.ID column and the index on BOWIE1.ID is non-unique. Although the column statistics suggest the ID is basically unique (the number of distinct values matches the number of rows in the table), there is no certainty that this is correct. The statistics might not be accurate, and there could be a bunch of duplicate IDs that would result in a many-to-many join operation, for which a Hash Join might be preferable.

But by replacing the non-unique index on BOWIE1.ID with a unique index, the CBO now knows there is indeed just the one viable row from the BOWIE1 side of the join with the BOWIE1.ID=1 predicate. As such, the CBO goes back to using Transitive Closure to again effectively eliminate the join predicate.
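The index swap being described looks something like this (a sketch based on the naming used in Part I):

SQL> drop index bowie1_id_i;

SQL> create unique index bowie1_id_i on bowie1(id);

With the unique index in place, the Predicate Information becomes: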

 

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)

The CBO can now safely get just the required row from the BOWIE1 table via the BOWIE1.ID=1 predicate and the required data from BOWIE2 directly via the BOWIE2.CODE=1 predicate. The CBO makes this decision before considering the most appropriate join strategy, which can now not possibly be the Hash Join, as the Hash Join is only possible with a join predicate in place.
------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  1 | NESTED LOOPS                        |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID         | BOWIE1        |    1 |    16 |       2 (0) | 00:00:01 |
|* 3 | INDEX UNIQUE SCAN                   | BOWIE1_ID_I   |    1 |       |       1 (0) | 00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) | 00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------------

As such, the USE_HASH hint is now “ignored”, because it’s simply now not a viable option for the CBO. A Nested Loop is now performed instead, in which the row for the Outer Table (BOWIE1) can be retrieved and all “corresponding” rows for the Inner Table (BOWIE2) can be likewise accessed via just the BOWIE2.CODE=1 predicate.
A Nested Loop is the join type you can have when you’re not necessarily performing a join…

“Oracle Indexing Internals and Best Practices” Seminar – Berlin 8-9 May: DOAG Website February 8, 2019

Posted by Richard Foote in Oracle Indexes.


Just a short note to say that DOAG now have a registration page for my upcoming “Oracle Indexing Internals and Best Practices” seminar running in Berlin, Germany on 8-9 May 2019.

For all the details regarding this acclaimed educational experience and how to book your place, please visit:

https://www.doag.org/de/eventdetails?tx_doagevents_single[id]=577320

Please mention you heard about this seminar from me when booking, and note that places are strictly limited for these events, so please book early to avoid disappointment.

This is likely to be the only time I run this seminar in mainland Europe this year, so don’t miss this unique opportunity to learn in person all that’s worth learning about how to use and maintain indexes judiciously to improve Oracle database/application performance.

If you have any questions, don’t hesitate to contact me at richard@richardfooteconsulting.com.

Unique Indexes Force Hints To Be “Ignored” Part I (What’s Really Happening) February 5, 2019

Posted by Richard Foote in CBO, Hash Join, Hints, Non-Unique Indexes, Oracle Indexes, Unique Indexes.


As I was compiling my new “Oracle Performance Diagnostics and Tuning” seminar, I realised there were quite a number of indexing performance issues I hadn’t discussed here previously.

The following is a classic example of the difference a Unique Index can make compared with a Non-Unique index, while also covering the classic myth that Oracle sometimes chooses to “ignore” hints that might be present in your SQL.

To set the scene, I’m going to create two simple little tables but, importantly, initially create only non-unique indexes for the columns of interest:

SQL> create table bowie1 as
select rownum id, 'David Bowie' name from dual connect by level <= 1000;

Table created.

SQL> create table bowie2 as
select rownum id, mod(rownum,20)+1 code from dual connect by level <= 1000;

Table created.

SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=>'BOWIE1', method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=>'BOWIE2', method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> create index bowie1_id_i on bowie1(id);

Index created.

SQL> create index bowie2_id_i on bowie2(id);

Index created.

SQL> create index bowie2_code_i on bowie2(code);

Index created.

I’m now going to run the following query which does a simple join between the two tables and filters on the ID column from the BOWIE1 table:

 

SQL> select * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

50 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4266778954

-------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |   Time     |
-------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  1 | MERGE JOIN CARTESIAN                |               |   50 |  1150 |       5 (0) |   00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |   00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |   00:00:01 |
|  4 | BUFFER SORT                         |               |   50 |   350 |       3 (0) |   00:00:01 |
|  5 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |   00:00:01 |
|* 6 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |   00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("BOWIE1"."ID"=1)
6 - access("BOWIE2"."CODE"=1)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   8 consistent gets
   0 physical reads
   0 redo size
1815 bytes sent via SQL*Net to client
 641 bytes received via SQL*Net from client
   5 SQL*Net roundtrips to/from client
   1 sorts (memory)
   0 sorts (disk)
  50 rows processed

The query uses a MERGE JOIN CARTESIAN, which I (incorrectly) think is a concern, and I decide that a HASH JOIN would be a better option. So I now add a basic USE_HASH hint:

SQL> select /*+ use_hash(bowie2) */ * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

50 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1413846643

------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) |  Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) |  00:00:01 |
|* 1 | HASH JOIN                           |               |   50 |  1150 |       5 (0) |  00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE1        |    1 |    16 |       2 (0) |  00:00:01 |
|* 3 | INDEX RANGE SCAN                    | BOWIE1_ID_I   |    1 |       |       1 (0) |  00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) |  00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) |  00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("BOWIE1"."ID"="BOWIE2"."CODE")
3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
  15 consistent gets
   0 physical reads
   0 redo size
1815 bytes sent via SQL*Net to client
 641 bytes received via SQL*Net from client
   5 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
  50 rows processed

And the hint has worked as I had hoped.

I then decide that perhaps a Unique Index on the ID column might be a good idea (perhaps because I read up on all the advantages of Unique Indexes on this blog). So I drop and recreate the index as a Unique Index:

SQL> drop index bowie1_id_i;

Index dropped.

SQL> create unique index bowie1_id_i on bowie1(id);

Index created.
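
If you wish to confirm the index is indeed now unique, a quick check against the standard USER_INDEXES dictionary view does the trick (a simple verification sketch):

SQL> select index_name, uniqueness from user_indexes
     where table_name = 'BOWIE1';

INDEX_NAME                     UNIQUENESS
------------------------------ ----------
BOWIE1_ID_I                    UNIQUE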

I now re-run my hinted query to again use the Hash Join:

SQL> select /*+ use_hash(bowie2) */ * from bowie1, bowie2
where bowie1.id=bowie2.code and bowie1.id=1;

50 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4272245076

------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  1 | NESTED LOOPS                        |               |   50 |  1150 |       5 (0) | 00:00:01 |
|  2 | TABLE ACCESS BY INDEX ROWID         | BOWIE1        |    1 |    16 |       2 (0) | 00:00:01 |
|* 3 | INDEX UNIQUE SCAN                   | BOWIE1_ID_I   |    1 |       |       1 (0) | 00:00:01 |
|  4 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE2        |   50 |   350 |       3 (0) | 00:00:01 |
|* 5 | INDEX RANGE SCAN                    | BOWIE2_CODE_I |   50 |       |       1 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("BOWIE1"."ID"=1)
5 - access("BOWIE2"."CODE"=1)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
  15 consistent gets
   0 physical reads
   0 redo size
1815 bytes sent via SQL*Net to client
 641 bytes received via SQL*Net from client
   5 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
  50 rows processed

And we notice the hint is now being “ignored”. The hint isn’t really ignored; it’s just no longer relevant to how the CBO has constructed the query and associated plan, with the Unique Index now making a key difference (no pun intended).
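
One way to confirm the hint hasn’t simply been discarded is to look at the outline of the plan the CBO actually generated. A quick sketch, assuming the hinted query was the statement most recently executed in the session:

SQL> select * from table(dbms_xplan.display_cursor(format => '+OUTLINE'));

The Outline Data section lists the full set of hints that describe the chosen plan; USE_HASH is nowhere to be found among them, as a Hash Join was simply never a viable candidate.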

I’ll discuss in Part II why the Unique Index has made such a difference and why the hint is no longer viable.

Of course, to learn all this and a lot lot more, you can always attend my new “Oracle Diagnostics and Performance Tuning” seminar one day 🙂

Announcement: “Oracle Indexing Internals and Best Practices Seminar” – Berlin 8-9 May 2019 !! January 30, 2019

Posted by Richard Foote in Oracle Indexes.
add a comment

I’m very excited to announce I’ll be running my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar in conjunction with the German Oracle User Group (DOAG) in Berlin, Germany on 8-9 May 2019.

The venue will be the DOAG Offices in Berlin (DOAG Office, 6th Floor, Tempelhofer Weg 64, 12347 Berlin).

Because of the venue size and to ensure a quality educational experience, numbers will be strictly limited. This will be one of the very few times I’ll get to run this seminar in Europe this year, so please book early to avoid disappointment.

For all the details on how to maximise Database/Application performance with judicious use of indexes and why this seminar has been such a popular and well received educational experience, please visit my Indexing Seminar Page.

The cost of the seminar is as follows:

DOAG Members: 980 Euro

Other Participants: 1400 Euro

DOAG membership will need to be validated, else the fee will be refunded and the place lost.

Note: If you can’t make it to this seminar in Berlin, there are still some places available for both webinar series, running on 4-8 March 2019 and on 26-30 March 2019. Full details on my Indexing Internals Webinar Page.

Hope to see some of you in Berlin on 8-9 May 2019 for what I’m sure will be a fantastic event !!

If you have any questions, please contact me: richard@richardfooteconsulting.com.

Announcement: New “Oracle Diagnostics and Performance Tuning” Seminar Now Available !! January 14, 2019

Posted by Richard Foote in Performance Tuning Seminar, Performance Tuning Webinar.
1 comment so far

It’s been a work in progress for quite some time, with many of my customers asking when it will be completed. Well, I’m very excited to announce that I have finally completed my new 2 day “Oracle Diagnostics and Performance Tuning” seminar and that it’s ready to be presented.

I already have a number of private bookings for this in the coming months, with some public events and webinars to be scheduled shortly. If you’re interested in having this run at your site, contact me, as I can customise it and run it exclusively at your premises for a great price. There’s no waiting for the seminar to come near you and no travel expenses for your employees: just the seminar/webinar for your staff, covering the topics of most interest to your organisation. We can even look at some of your specific performance issues, collectively analyse your associated diagnostic reports and actually diagnose and resolve them as part of the seminar.

For more information or to request a seminar near you, please email me at richard@richardfooteconsulting.com.

For all the details of this new seminar, see below or visit my “Oracle Diagnostics and Performance Tuning Seminar” page.

This is a must-attend seminar aimed at Oracle professionals (both DBAs and Developers) who are interested in Performance Tuning. The seminar will detail how to maximise the performance of both Oracle databases and associated applications, and how to diagnose and address any performance issues as quickly and effectively as possible.

When an application suddenly runs “slow” or when people start complaining about the “poor performance” of the database, there’s often some uncertainty in how to most quickly and most accurately determine the “root” cause of any such slowdown and effectively address any associated issues. In this seminar, we explore a Tuning Methodology that helps Oracle professionals to both quickly and reliably determine the actual causes of performance issues and so ensure the effectiveness of any applied resolutions.

Looking at a number of real world scenarios and numerous actual examples and test cases, this seminar will show participants how to confidently and reliably diagnose performance issues. The seminar explores in much detail the various diagnostics tools and reports available in Oracle to assist in determining any database performance issue and importantly WHEN and HOW to effectively use each approach. Additionally, participants are also invited to share their own database/SQL reports where we can apply the principles learnt in diagnosing the performance of their actual databases/applications (we can communicate prior to the event on how to best get any reports of interest).

One of the more common reasons for poor Oracle performance is inefficient or poorly running SQL. This seminar explores in much detail how SQL is executed within the Oracle database, the various issues and related concepts important in understanding why SQL might be inefficient and the many capabilities and features Oracle has in helping to both resolve SQL performance issues and to maintain the stability and reliability of SQL execution.

It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts).

Note: Numbers are strictly limited due to the small class nature of the seminars. Please book early to avoid disappointment as seminars are not scheduled regularly.

For a list of all upcoming seminars, see the Seminar Dates page.

Seminar Content (subject to minor change)

Section One: Basic Tuning Concepts

  • Common (but flawed) Tuning Methodology
  • THE Correct Tuning Methodology
  • Importance of Response Times
  • Importance of Database Time and Concurrent Active Sessions
  • Introduction to Oracle Wait Interface
  • The Correct Mentality To Performance Tuning
  • Importance Of Instrumentation

Section Two: Database Tuning

In Section Two, we explore how to best diagnose any performance issues as quickly as possible to accurately determine any root issue(s) and so apply the most appropriate resolutions. We explore a number of real world examples as well as providing the opportunity to go through any reports participants may wish to share.

As part of determining how to best resolve a number of real world examples, we get the opportunity to explore the internals of the database and the importance of understanding a number of crucial Oracle database concepts (e.g. latch contention, redo log mechanism, RAC cache fusion, etc.).

Because of the abundance of Oracle database instrumentation and the wealth of data available in (say) AWR Reports, many Oracle professionals feel swamped and are uncertain how to correctly read these types of reports when trying to diagnose a performance issue. In this section, we look at both WHEN and HOW to read the various reports to very quickly and reliably determine any performance issues and so know with certainty how to apply the most appropriate solution.

We explore the following Oracle supplied database diagnostics methods and when each of them is most appropriate for determining and resolving performance issues using a methodical tuning methodology (see the small sketch after this list):

  • Statspack Reports
  • Automatic Workload Repository (AWR) Reports
  • Automatic Database Diagnostic Monitor (ADDM) Reports
  • Active Session History (ASH) Reports
  • (Advanced) SQL Trace Files
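
As a small taste of what’s covered, an AWR report can be generated directly from SQL without the usual awrrpt.sql script, via the DBMS_WORKLOAD_REPOSITORY package (a sketch only; the dbid and snapshot ids below are hypothetical and would come from V$DATABASE and DBA_HIST_SNAPSHOT):

SQL> select output
     from table(dbms_workload_repository.awr_report_text(
            l_dbid     => 1234567890,  -- dbid from v$database (hypothetical)
            l_inst_num => 1,           -- instance number
            l_bid      => 100,         -- begin snapshot id (hypothetical)
            l_eid      => 101));       -- end snapshot id (hypothetical)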

 

Section Three: SQL Tuning

One of the most common reasons for poor performance is related to poorly running, inefficient SQL statements. In this section we explore in detail a number of important concepts and Oracle capabilities (up to and including Oracle Database 18c) in relation to maximising the performance and stability of SQL statements. Topics covered include:

  • CBO Internals And How The CBO Costs SQL Statements
  • Why Is The More “Expensive” Costed SQL Often The Faster One
  • Differences between ALL_ROWS and FIRST_ROWS(n) optimization
  • Setting Optimizer Based Parameters
  • CPU Cost Model and System Statistics
  • Schema Statistics Gathering Tips
  • Histogram Internals
  • Extended Statistics and Virtual Columns
  • Dynamic Sampling
  • Displaying and Reading Execution Plans
  • GATHER_PLAN_STATISTICS Hint
  • Managing Outlier Data
  • Join Processing, Join Types and Join Methods
  • Adaptive Query Optimization
  • Adaptive Plans
  • Cardinality Feedback
  • Adaptive Statistics
  • SQL Plan Directives
  • SQL Blocks
  • SQL Transformations
  • 10053 Trace Report
  • SQL Monitor Reports
  • SQL Hints (and when/how to use appropriately)
  • SQL Patching
  • SQL Plan Stability
  • SQL Profiles
  • SQL Stored Outlines
  • Migrating Stored Outlines to SQL Plan Management (Baselines)
  • SQL Plan Management (Baselines)
  • Baselines Use Cases

For more information or to request a seminar near you, please email richard@richardfooteconsulting.com.

New “Let’s Talk Database” events in Australia/NZ in February/March 2019 !! January 8, 2019

Posted by Richard Foote in Oracle Indexes.
add a comment

Very excited to announce some new “Let’s Talk Database” events scheduled for February 2019 in various locations in Australia/NZ:

Canberra: Wednesday, 20 February 2019: Registration Link

Sydney: Thursday, 21 February 2019: Registration Link

Brisbane: Friday, 22 February 2019: Registration Link

Melbourne: Wednesday, 27 February 2019: Registration Link

Wellington: Friday, 1 March 2019: Registration Link

I’ll be covering two interesting topics at these events:

Oracle Database Appliance: During my days at Oracle Corporation, I had quite a bit to do with helping customers on the ODA and always felt it was a great solution that often got drowned out by Exadata and other Oracle engineered solutions. In this session, I’ll be discussing the various unique advantages of running Oracle Databases on the Oracle Database Appliance (ODA), some of the new features available on the ODA (now including Hybrid Columnar Compression), some use cases for deploying Oracle Databases on the ODA and, importantly, some of the more common mistakes and misunderstandings regarding how to best maintain an ODA environment.

Indexing and Partitioning: As databases get ever larger in size and complexity, the Oracle Partitioning option has become a more common and more important feature in Oracle database deployments. This presentation will look at some of the important indexing strategies that can be applied in a Partitioned Database environment in order to effectively maximise both database performance and database management practices. It will also discuss some newer features introduced in 12c R2 and 18c that simplify the migration to an effective partitioning strategy, one of which is sketched below.
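
For example, one of those newer capabilities introduced in 12c Release 2 is the ability to convert an existing non-partitioned table to a partitioned table online, with indexes maintained as part of the operation. A quick sketch, with hypothetical table and column names:

SQL> alter table bowie_sales
     modify partition by range (sale_date)
     (partition bowie_2018 values less than (date '2019-01-01'),
      partition bowie_2019 values less than (maxvalue))
     online;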

The agenda on the day will be as follows:

8:30 a.m.  Registration & Coffee

9:00 a.m.  Oracle Database Appliance

10:15 a.m. Networking break

10:30 a.m. Indexing and Partitioning

12:00 p.m. Networking lunch

1:00 p.m. Event Close

So lots of database stuff to talk about, and the best part is that it’s all free !!

Simply follow the registration link to book your place. But please book early to avoid disappointment as places are strictly limited (all venues are the local Oracle offices).

Hope to see you at one of these events !!

UKOUG “Lifetime Achievement Award” Speaker Award December 19, 2018

Posted by Richard Foote in Lifetime Achievement Award, Oracle Indexes, UKOUG.
add a comment

I was recently very honoured and flattered to have received the “Lifetime Achievement Award” Speaker Award from the UKOUG. I have only managed to get to the excellent UKOUG Tech conferences on a couple of occasions, so it was both a thrill and a surprise to be so honoured.

Unfortunately, I wasn’t able to make it for my third visit this year and so was unable to be there to accept this award.

However, in a sign of true dedication from the UKOUG President, Martin Widlake has come all the way over to beautiful Canberra, Australia just to present the award to me personally (OK, he might have plans to do a couple of other things during his visit to Australia).

With plans to run the UKOUG Tech conference next year in Brighton, I’ll have to try and make it for my third visit, if only to annoy Mark Rittman when Crystal Palace no doubt beat Brighton when we next meet…

AUSOUG “Oracle Master” Award November 22, 2018

Posted by Richard Foote in AUSOUG, Oracle Indexes, Oracle Master.
3 comments


I was very flattered and honoured to be named one of the first 4 “Oracle Masters” by the Australian Oracle User Group at the recent AUSOUG Connect 2018 conference in Melbourne.

Thank you to all the AUSOUG members involved for this award. As always, it’s a pleasure to help fellow Oracle Database professionals in Australia in whatever capacity I can, especially at cool events such as Connect 2018.

Congratulations to the other members of the “Famous First Four”, Connor McDonald, Penny Cookson and Dennis Remmer. Exalted company indeed.

Presenting at AUSOUG Connect 2018 Conference in Melbourne, 21 November 2018. November 18, 2018

Posted by Richard Foote in Connect 2018, Oracle Indexes.
add a comment


After initially not being in a position to make it this year, I will now be presenting at the AUSOUG Connect 2018 Conference in Melbourne this coming Wednesday, 21 November 2018.

My presentation will be:

12c Release 2 and 18c – New Indexing Related Features

Oracle Database 12.2 and 18.3 releases have introduced a number of extremely useful new indexing features and enhancements. These include cool capabilities such as automatically tracking index usage, advanced index compression enhancements, deferred invalidation of cursors during index creation/rebuild, automatic index maintenance during new online operations (such as online table moves and conversion to partitioned objects), JSON indexing enhancements, Memoptimized Rowstore and Scalable Sequences. There might even be an Oracle Database 19c surprise thrown in.

These will all be discussed in detail with practical examples on how they can be usefully deployed to improve overall database performance.
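
As a small taste, the automatic index usage tracking introduced in 12.2 can be examined via the DBA_INDEX_USAGE view (a sketch only; the index name is hypothetical):

SQL> select name, total_access_count, total_rows_returned, last_used
     from dba_index_usage
     where name = 'BOWIE_ID_I';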

 

For all the conference details, see: https://www.ausoug.org.au/whats-on/connect-2018/melbourne/

This conference has a fabulous lineup and I’m really excited at now being able to take part in it all. A special thank you to the AUSOUG team for catering for me at such late notice.

Looking forward to catching up with many of my Aussie Oracle mates there 🙂