
12.2 Online Conversion of a Non-Partitioned Table to a Partitioned Table (A Small Plot Of Land) March 27, 2017

Posted by Richard Foote in 12c Release 2 New Features, Attribute Clustering, Clustering Factor, Online DDL, Oracle, Oracle Indexes, Partitioning.


In my previous post, I discussed how you can now move heap tables online with Oracle Database 12.2 and how this can be very beneficial in helping to address issues with the Clustering Factor of key indexes.

A problem with this technique is that it requires the entire table to be effectively reorganised, when most of the data might already be well clustered. It would be much more efficient if we could somehow move and reorganise just the portion of the table containing the poorly clustered data introduced since the last reorg.

Partitioning the table appropriately would help to address this disadvantage, but converting a non-partitioned table to a partitioned one can be a pain. To do this online with as little complication as possible, one could previously use the dbms_redefinition package, which has improved with later releases.
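For context, here's a minimal sketch of that pre-12.2 dbms_redefinition approach. Note this is illustrative only: the interim table ZIGGY_INTERIM is a hypothetical name, and the exact steps vary depending on dependent objects and whether a primary key exists (hence cons_use_rowid below):

SQL> create table ziggy_interim (id number, code number, date_created date, name varchar2(42))
  2  partition by range (date_created)
  3  (partition p1 values less than (TO_DATE('01-JAN-2015', 'DD-MON-YYYY')),
  4  partition p2 values less than (TO_DATE('01-JAN-2016', 'DD-MON-YYYY')),
  5  partition p3 values less than (maxvalue));

SQL> exec dbms_redefinition.can_redef_table(user, 'ZIGGY', dbms_redefinition.cons_use_rowid);

SQL> exec dbms_redefinition.start_redef_table(user, 'ZIGGY', 'ZIGGY_INTERIM', options_flag=>dbms_redefinition.cons_use_rowid);

SQL> variable num_errors number
SQL> exec dbms_redefinition.copy_table_dependents(user, 'ZIGGY', 'ZIGGY_INTERIM', num_errors=>:num_errors);

SQL> exec dbms_redefinition.finish_redef_table(user, 'ZIGGY', 'ZIGGY_INTERIM');

That's quite a few moving parts compared to what follows.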

However, with Oracle Database 12.2, there is now an even easier, more flexible method of performing such a conversion.

Using the same table definition and data as from my previous post, I’m going to first create a couple of additional indexes (on the ID column and on the DATE_CREATED column) :


SQL> create unique index ziggy_id_i on ziggy(id);

Index created.

SQL> create index ziggy_date_created_i on ziggy(date_created);

Index created.

To convert a non-partitioned table to a partitioned table online, we can now use this new extension to the ALTER TABLE syntax:


SQL> alter table ziggy
2 modify partition by range (date_created)
3 (partition p1 values less than (TO_DATE('01-JAN-2015', 'DD-MON-YYYY')),
4 partition p2 values less than (TO_DATE('01-JAN-2016', 'DD-MON-YYYY')),
5 partition p3 values less than (maxvalue)) online;

Table altered.

How simple is that !! We now have a table that is range partitioned based on the DATE_CREATED column and this conversion was performed online.

We notice that not only is the table now partitioned, with all the indexes remaining valid, but the index based on the partitioning key (DATE_CREATED) has also been implicitly converted to a Local partitioned index:


SQL> select table_name, status, partitioned from dba_tables
where table_name='ZIGGY';

TABLE_NAME   STATUS   PAR
------------ -------- ---
ZIGGY        VALID    YES

SQL> select index_name, status, partitioned, num_rows
from dba_indexes where table_name='ZIGGY';

INDEX_NAME           STATUS   PAR   NUM_ROWS
-------------------- -------- --- ----------
ZIGGY_DATE_CREATED_I N/A      YES    2000000
ZIGGY_CODE_I         VALID    NO     2000000
ZIGGY_ID_I           VALID    NO     2000000

SQL> select index_name, partition_name, status, leaf_blocks from dba_ind_partitions
     where index_name like 'ZIGGY%';

INDEX_NAME           PARTITION_NAME  STATUS   LEAF_BLOCKS
-------------------- --------------- -------- -----------
ZIGGY_DATE_CREATED_I P1              USABLE           865
ZIGGY_DATE_CREATED_I P2              USABLE          1123
ZIGGY_DATE_CREATED_I P3              USABLE          1089

SQL> select index_name, partitioning_type, partition_count, locality
from dba_part_indexes where table_name='ZIGGY';

INDEX_NAME           PARTITION PARTITION_COUNT LOCALI
-------------------- --------- --------------- ------
ZIGGY_DATE_CREATED_I     RANGE               3 LOCAL

As part of the table conversion syntax, we have the option to also update all the associated indexes and partition them in any manner we may want. For example:


SQL> alter table ziggy
2 modify partition by range (date_created)
3 (partition p1 values less than (TO_DATE('01-JAN-2015', 'DD-MON-YYYY')),
4 partition p2 values less than (TO_DATE('01-JAN-2016', 'DD-MON-YYYY')),
5 partition p3 values less than (maxvalue)) online
6 update indexes
7 (ziggy_code_i local,
8 ziggy_id_i global partition by range (id)
9 (partition ip1 values less than (maxvalue)));

Table altered.

In this example, not only are we converting the non-partitioned table to be partitioned, but we’re also explicitly converting the index on the CODE column to be a Locally partitioned index and the index on the ID column to be Globally partitioned in its own manner.

If we look at the definition of these indexes, we see that they also have all been converted to partitioned indexes online along with the table:


SQL> select table_name, status, partitioned from dba_tables
where table_name='ZIGGY';

TABLE_NAME   STATUS   PAR
------------ -------- ---
ZIGGY        VALID    YES

SQL> select index_name, status, partitioned from dba_indexes
where table_name = 'ZIGGY';

INDEX_NAME           STATUS   PAR
-------------------- -------- ---
ZIGGY_CODE_I         N/A      YES
ZIGGY_ID_I           N/A      YES
ZIGGY_DATE_CREATED_I N/A      YES

SQL> select index_name, partitioning_type, partition_count, locality
from dba_part_indexes where table_name='ZIGGY';

INDEX_NAME           PARTITION PARTITION_COUNT LOCALI
-------------------- --------- --------------- ------
ZIGGY_CODE_I             RANGE               3 LOCAL
ZIGGY_ID_I               RANGE               1 GLOBAL
ZIGGY_DATE_CREATED_I     RANGE               3 LOCAL

If we look at the Clustering Factor of the important CODE column index, we see that each partition has an excellent Clustering Factor, as the partitions have all only just been created.


SQL> select partition_name, num_rows, clustering_factor from dba_ind_partitions
where index_name='ZIGGY_CODE_I';

PARTITION_NAME         NUM_ROWS CLUSTERING_FACTOR
-------------------- ---------- -----------------
P1                       490000              2275
P2                       730000              3388
P3                       780000              3620

However, if we now add new rows to the table, as would occur with a real application, the Clustering Factor of the "current" partition receiving that new data "erodes" over time.


SQL> insert into ziggy select 2000000+rownum, mod(rownum,100), sysdate, 'DAVID BOWIE'
from dual connect by level <= 500000;

500000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_index_stats(ownname=>null,indname=>'ZIGGY_CODE_I');

PL/SQL procedure successfully completed.

SQL> select partition_name, num_rows, clustering_factor from dba_ind_partitions
     where index_name='ZIGGY_CODE_I';

PARTITION_NAME         NUM_ROWS CLUSTERING_FACTOR
-------------------- ---------- -----------------
P1                       490000              2275
P2                       730000              3388
P3                      1280000            238505

As discussed previously, the Clustering Attribute has no effect with standard DML operations. Therefore, the efficiency of the CODE index reduces over time in the partition where new data is being introduced. The Clustering Factor of this partition has substantially increased, from 3620 to 238505. Note that for all the other partitions, where the data is unmodified, the Clustering Factor remains excellent.

Having the table/index partitioned means we can therefore periodically reorg just the problematic partition:


SQL> alter table ziggy move partition p3 update indexes online;

Table altered.

SQL> select partition_name, num_rows, clustering_factor from dba_ind_partitions
     where index_name='ZIGGY_CODE_I';

PARTITION_NAME         NUM_ROWS CLUSTERING_FACTOR
-------------------- ---------- -----------------
P1                       490000              2275
P2                       730000              3388
P3                      1280000              5978

The Clustering Factor for this partition has now reduced substantially from 238505 to just 5978.

For those of you with the Partitioning database option, the ability in 12.2 to so easily convert a non-partitioned table to a partitioned one, along with its associated indexes, is just brilliant 🙂

Let’s Talk Database: New Dates Released (You Can’t Talk) March 16, 2017

Posted by Richard Foote in Let's Talk Database, Oracle Indexes.


Good news for all you Oracle Database folk in ANZ. I'll be running a new series of "Let's Talk Database" sessions at a venue near you in the coming months. I'll be discussing two interesting topics, "Introduction to the Oracle Cloud for DBAs" and "Introduction to Oracle Database 12c Release 2". The dates for this series and registration links are as follows:

Agenda

8:30 – 9:00am Registration and coffee
9:00 – 10:00am Introduction to Oracle Cloud for DBAs
10:00 – 10:15am Break
10:15 – 11:15am Introduction to Oracle Database 12c Release 2–Part I
11:15 – 11:30am Break
11:30 – 12:30pm Introduction to Oracle Database 12c Release 2–Part II
12:30 – 1:30pm Lunch, Networking and informal Q&A

Session Abstracts

Introduction to the Oracle Cloud for DBAs

The Oracle Cloud has many key features and benefits that would be of much interest to Oracle DBAs including increased flexibility and elasticity, better security and compliance, reduced complexity with regard to maintenance and patching, improved access to database features and faster database deployments. This session will introduce the various options and capabilities available when deploying Oracle Databases within the Oracle Public Cloud, such as database services and deployment options, database option bundling, backup/recovery and DR services, hybrid manageability features and database accessing options. Oracle Exadata Express Cloud Service will be used as an example of how to quickly deploy and access an Oracle Database in the cloud. We will also look at some options with regard to a Hybrid Cloud model, including deploying Cloud based infrastructure within your own data centre. Finally, we introduce the Oracle Cloud data centre which has recently been expanded in Sydney, Australia.

Introduction to Oracle Database 12c Release 2

Oracle Database 12c Release 2 continues Oracle’s innovation with its leading database with the introduction of many new exciting features and enhancements. This session will discuss in some detail many of these new 12.2 capabilities that makes the Oracle Database the cloud database of choice, including improvements with the Database In-Memory option, new indexing compression and tracking features, new online maintenance operations such as online tablespace encryption, online table move, online table/index partition conversion, new pluggable database capabilities such as hot cloning and application containers, Oracle Sharding, Active Data Guard and RAC improvements, Big Data and Analytics innovations as well as several new developer features such as JSON support improvements, SQL enhancements and case insensitive support.

So lots to talk about and lots of opportunity to ask questions and network with your fellow Oracle professionals.

As usual, places are unfortunately limited, so please enroll ASAP to avoid disappointment.

Hope to see many of you at these upcoming events 🙂

12.2 Moving Tables Online (Real Cool World) March 15, 2017

Posted by Richard Foote in Oracle Indexes.


One of the really cool new features introduced in Oracle Database 12c Release 2 is the capability to MOVE tables ONLINE, while maintaining all the associated index structures. This was possible for table partitions in Oracle Database 12c Release 1, but before then only for Index Organized Tables.

Rather than show a demo that just moves a table, I thought I might also provide a classic example of when moving a table might be extremely beneficial.

I’ll start by creating a table and specifically populate a CODE column with 100 distinct values that are distributed throughout the entire table structure:


SQL> create table ziggy (id number, code number, date_created date, name varchar2(42));

Table created.

SQL> insert into ziggy select rownum, mod(rownum,100), sysdate - mod(rownum,1000), 'DAVID BOWIE'
from dual connect by level <= 2000000;

2000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'ZIGGY', estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

If I create an index on the CODE column, we’ll see the index has a terrible Clustering Factor (a massive 923,900), as the values basically appear throughout the entire table in each of the table blocks:


SQL> create index ziggy_code_i on ziggy(code);

Index created.

SQL> select index_name, clustering_factor, num_rows from user_indexes
where index_name='ZIGGY_CODE_I';

INDEX_NAME           CLUSTERING_FACTOR   NUM_ROWS
-------------------- ----------------- ----------
ZIGGY_CODE_I                    923900    2000000

If we now run a query that returns rows from just one CODE value (i.e. just 1% of the table):


SQL> set arraysize 5000
SQL> select * from ziggy where code = 42;

20000 rows selected.

Execution Plan
-----------------------------------------------------------------------------------
| Id | Operation                 | Name  | Rows  | Bytes | Cost (%CPU) | Time     |
-----------------------------------------------------------------------------------
| 0  | SELECT STATEMENT          |       | 20000 |  546K |    2750 (1) | 00:00:01 |
|* 1 | TABLE ACCESS STORAGE FULL | ZIGGY | 20000 |  546K |    2750 (1) | 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - storage("CODE"=42)
filter("CODE"=42)

Statistics
----------------------------------------------------------
     0 recursive calls
     0 db block gets
  9354 consistent gets
     0 physical reads
     0 redo size
367165 bytes sent via SQL*Net to client
   533 bytes received via SQL*Net from client
     5 SQL*Net roundtrips to/from client
     0 sorts (memory)
     0 sorts (disk)
 20000 rows processed

We see the Cost Based Optimizer (CBO) ignores the index and opts for a Full Table Scan. With such a terrible Clustering Factor and with each block in the table having to be accessed, the CBO is making the correct decision here. The index is effectively useless, even though we’re only interested in just 1% of the table. At 9,354 consistent gets, it’s the best we can do.

If this query were important to us and executed frequently, we might want to look at improving the Clustering Factor and hence the efficiency of this index. Despite rumors to the contrary (which unfortunately still persist, believe me), the key here is not to rebuild the index, but to reorganise the table such that the data in the table is sorted/clustered in CODE order.

Oracle Database 12.1.0.2 introduced the concept of the table Clustering Attribute, whereby after a reorg or bulk load the table will store its data based on this attribute (as I've discussed previously). For example:


SQL> alter table ziggy add clustering by linear order(code) without materialized zonemap;

Table altered.

This now ensures that the data within the table will be in CODE order after a table re-org, but we need to re-org the table for this to take effect. However, prior to 12.2, it wasn’t possible to subsequently MOVE the table ONLINE:


SQL> alter table ziggy move online;
alter table ziggy move online
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option

Moving a heap table previously caused locking issues and resulted in all associated indexes becoming unusable and having to be rebuilt, further adding to the effective unavailability of the table:


SQL> alter table ziggy move;
alter table ziggy move
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

SQL> alter table ziggy move;

Table altered.

SQL> select index_name, leaf_blocks, status from dba_indexes where table_name = 'ZIGGY';

INDEX_NAME      LEAF_BLOCKS STATUS
--------------- ----------- --------
ZIGGY_CODE_I           1936 UNUSABLE

That’s now changed with Oracle Database 12c Release 2, where we can now move heap tables online while fully maintaining all associated indexes:


SQL> alter table ziggy move online;

Table altered.

SQL> select index_name, clustering_factor, num_rows, status from user_indexes
where index_name='ZIGGY_CODE_I';

INDEX_NAME      CLUSTERING_FACTOR   NUM_ROWS STATUS
--------------- ----------------- ---------- ------
ZIGGY_CODE_I                 9277    2000000 VALID

After the table re-org, we note that not only is the index still in a valid state, but because the re-org effectively sorted the data in CODE order due to the Clustering Attribute, it has resulted in a massively reduced Clustering Factor (just 9,277, down from 923,900).

If we now re-run our original query:


SQL> select * from ziggy where code=42;

20000 rows selected.

Execution Plan
----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows  | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |              | 20000 |  546K |     126 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY        | 20000 |  546K |     126 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_CODE_I | 20000 |       |      33 (0) | 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
     0 recursive calls
     0 db block gets
   136 consistent gets
     0 physical reads
     0 redo size
670725 bytes sent via SQL*Net to client
   533 bytes received via SQL*Net from client
     5 SQL*Net roundtrips to/from client
     0 sorts (memory)
     0 sorts (disk)
 20000 rows processed

We notice the index is now used and has resulted in a significant reduction in consistent gets (just 136, down from 9,354). The query is now much, much more efficient than it was previously.

An issue here though is that we have to re-org the entire table in order to address the clustering of any new rows that are subsequently added with standard DML (as the Clustering Attribute is ineffective in this case).

Wouldn't it be nice if we could easily convert this table (and associated indexes) to be partitioned and hence only have to concern ourselves with subsequently reorganising just those partitions that have new/changed data?
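As a taste of what's coming (a hedged sketch, assuming ZIGGY has by then been converted to range partitions with P3 holding the newly added rows), the targeted reorg then becomes a single online partition move:

SQL> alter table ziggy move partition p3 update indexes online;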

Topic of my next post 🙂

12.2 Index Advanced Compression “High” Part IV (The Width of a Circle) March 10, 2017

Posted by Richard Foote in 12c Rel 2, Advanced Index Compression, Index Compression, Oracle Indexes.


A quick post (for me) with a long weekend beckoning…

In Part I, Part II and Part III of looking at the new Index Advanced Compression level of “High”, we discussed how it can significantly decrease the size of your indexes in a manner not previously possible. This can result in significant reductions of index storage and the potential subsequent reduction in IO and memory related overheads.

This is of course good.

However, if you have applications with tables/indexes that are very sensitive to DML performance, you need to exercise caution before compressing indexes in this manner. This is due to the extra CPU overheads and file IO wait times that can result from maintaining the highly compressed index structures.

To quickly illustrate this point, let’s first look at the timings when performing DML with an associated index that has no compression:

SQL> create table bowie (id number, code number, name varchar2(42));

Table created.

SQL> create index bowie_code_idx on bowie(code) nocompress;

Index created.

SQL> insert into bowie select rownum, rownum, 'ZIGGY STARDUST'
from dual connect by level <= 1000000;

1000000 rows created.
Elapsed: 00:00:06.61

SQL> commit;

Commit complete.

SQL> update bowie set code = 42 where id between 250000 and 499999;

250000 rows updated.
Elapsed: 00:00:12.91

If we now repeat the same demo, but this time with an index that’s compressed with advanced compression set to “HIGH”:


SQL> drop table bowie;

Table dropped.

SQL> create table bowie (id number, code number, name varchar2(42));

Table created.

SQL> create index bowie_code_idx on bowie(code) compress advanced high;

Index created.

SQL> insert into bowie select rownum, rownum, 'ZIGGY STARDUST' from dual connect by level <= 1000000;

1000000 rows created.
Elapsed: 00:00:39.08

SQL> commit;

Commit complete.

SQL> update bowie set code = 42 where id between 250000 and 499999;

250000 rows updated.
Elapsed: 00:01:09.83

We see there’s a significant increase in timings when both inserting into the table and when updating the highly compressed indexed column.

Therefore you need to consider the detrimental impact on DML performance due to the additional resources required in maintaining the highly compressed indexes, as it might offset the potential benefits of having the smaller index structures. Your mileage may vary.
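If the DML overhead does prove too costly for a specific index, one option (a sketch using the same rebuild syntax shown earlier in this series) is to simply rebuild it with a lighter compression level, or with none at all:

SQL> alter index bowie_code_idx rebuild compress advanced low;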

More to come 🙂

Hotsos Symposium 2017: Feb 27 – Mar 2 Dallas, Texas (I’m Afraid of Americans) February 6, 2017

Posted by Richard Foote in Oracle Indexes.


In just 3 weeks' time, I'll again have the pleasure of presenting at the awesome Hotsos Symposium in Dallas, Texas. It's simply the best conference anywhere with regard to Oracle performance tuning.

This year, I’ll be presenting on a couple of topics:

New Index Features Introduced in Oracle Database 12c Release 2

Oracle Database 12c Release 2 is now available in both Oracle DBaaS and Exadata Express Cloud Services and will soon be generally available. Oracle 12.2 has introduced a number of extremely useful new indexing features and enhancements. These include cool capabilities such as tracking index usage, advanced index compression enhancements, deferred invalidation of cursors during index creation/rebuild, automatic index maintenance during new online operations and JSON indexing enhancements. These will all be discussed in much detail with examples on how they can be usefully deployed.

Real World Examples on how to use AWR Reports to Solve Performance Issues

I often get called in by customers to determine and address the root cause of database performance issues. Depending on the issue, a request for a simple Automatic Workload Repository (AWR) report is often sufficient to accurately diagnose the root problem(s). However, many DBAs find AWR reports daunting, as they contain so much database diagnostic information that they don't even know where to begin to find the truly useful information.

In this session, I begin by discussing the basic concepts of response times, DB times, the Oracle Wait Interface and how to focus on what is truly important. I then look at how to best read an AWR report to quickly go to the most relevant sections that detail any specific issues. I also discuss a general tuning and diagnostic methodology that ensures one can quickly determine whether an AWR report will indeed be sufficient and how to accurately and consistently use the AWR report to pinpoint and determine root causes for global database performance issues. We’ll go through a number of actual “real-life” examples that highlight various performance issues and how one accurately determines the actual root issues through an appropriate AWR report.

 

Looking forward to catching up with a number of my Oracle friends. Hopefully you can make it; learning lots of really useful stuff in relation to Oracle Performance Tuning is guaranteed 🙂

12.2 Index Advanced Compression “High” Part III (I Can’t Give Everything Away) January 25, 2017

Posted by Richard Foote in 12c Rel 2, 12c Release 2 New Features, Advanced Index Compression, Oracle Indexes.

If you like the look of the new Index Advanced Compression setting of “High” available with Oracle Database 12c Release 2 as I’ve discussed previously in Part I and Part II, well there’s a way to make this the default index compression method in your 12.2 database.

Let’s begin by creating a table and explicitly creating a NOCOMPRESS index in the BOWIE tablespace:


SQL> create table bowie (id number, code number, name varchar2(42));

Table created.

SQL> insert into bowie select rownum, mod(rownum,10), 'DAVID BOWIE' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_code_idx on bowie(code) tablespace bowie nocompress;

Index created.

If we look at the size and compression type of this index:


SQL> select index_name, tablespace_name, leaf_blocks, compression
from dba_indexes where index_name='BOWIE_CODE_IDX';

INDEX_NAME      TABLESPACE_NAME      LEAF_BLOCKS COMPRESSION
--------------- -------------------- ----------- -------------
BOWIE_CODE_IDX  BOWIE                       1939 DISABLED

We notice the index has 1939 leaf blocks and that index compression is indeed disabled as expected.

Let's now drop the index and recreate it in the BOWIE tablespace, but this time without explicitly stating any compression option:


SQL> drop index bowie_code_idx;

Index dropped.

SQL> create index bowie_code_idx on bowie(code) tablespace bowie;

Index created.

SQL> exec dbms_stats.gather_index_stats(ownname=>null, indname=>'BOWIE_CODE_IDX');

PL/SQL procedure successfully completed.

If we look at the index now:


SQL> select index_name, tablespace_name, leaf_blocks, compression from dba_indexes
where index_name='BOWIE_CODE_IDX';

INDEX_NAME      TABLESPACE_NAME      LEAF_BLOCKS COMPRESSION
--------------- -------------------- ----------- -------------
BOWIE_CODE_IDX  BOWIE                        355 ADVANCED HIGH

We notice the index now only has 355 leaf blocks (down from 1939 leaf blocks) and that it has automatically used the new index advanced compression option of “HIGH”.

The secret lies with the following new settings.

Firstly, with the new db_index_compression_inheritance parameter, you can specify whether, during index creation, an index inherits its compression attributes from the tablespace, the table, or not at all:


SQL> alter system set db_index_compression_inheritance=tablespace scope=both;

System altered.

Secondly, you can give a tablespace a default index compression attribute that determines how indexes within it are compressed by default:


SQL> alter tablespace bowie default index compress advanced high;

Tablespace altered.

SQL> select tablespace_name, def_index_compression, index_compress_for
from dba_tablespaces where tablespace_name = 'BOWIE';

TABLESPACE_NAME      DEF_INDE INDEX_COMPRES
-------------------- -------- -------------
BOWIE                ENABLED  ADVANCED HIGH

So in this database, all indexes created within the BOWIE tablespace are automatically created with index advanced compression set to HIGH.

There are however some disadvantages with high index advanced compression that need to be considered.

More to come.

12.2 Index Advanced Compression “High” Part II (One Of My Turns) December 12, 2016

Posted by Richard Foote in 12c Rel 2, 12c Release 2 New Features, Advanced Index Compression, Oracle Indexes.

In Part I, I introduced the new Index Advanced Compression default value of “HIGH”, which has the potential to significantly compress indexes much more than previously possible. This is due to new index compression algorithms that do more than simply de-duplicate indexed values within a leaf block.

Previously, any attempt to completely compress a Unique Index was doomed to failure, as a Unique Index by definition only has unique values and so has nothing to de-duplicate. As such, you were previously restricted (quite rightly) to only being able to compress n-1 columns within a Unique Index. An attempt to compress all columns in a Unique Index would only result in a larger index structure due to the associated overheads of the prefix table within the leaf blocks.

But what happens if we now use Index Advanced Compression set to “HIGH” on a Unique Index ?

Let’s see.

Let’s first create a simple table with a unique ID column:

SQL> create table bowie (id number, code number, name varchar2(42));

Table created.

SQL> insert into bowie select rownum, rownum, 'ZIGGY STARDUST' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

Let’s start by creating an uncompressed unique index on the ID column:

SQL> create unique index bowie_id_i on bowie(id);

Index created.

SQL> select index_name, leaf_blocks, compression from user_indexes where index_name='BOWIE_ID_I';

INDEX_NAME   LEAF_BLOCKS COMPRESSION
------------ ----------- -------------
BOWIE_ID_I          2088 DISABLED

So the uncompressed unique index has 2088 leaf blocks.

If we try and use normal compression on the index:

SQL> alter index bowie_id_i rebuild compress;
alter index bowie_id_i rebuild compress
*
ERROR at line 1:
ORA-25193: cannot use COMPRESS option for a single column key

We get an error saying we’re not allowed to compress a single column unique index. Doing so makes no sense, as there’s no benefit in de-duplicating such an index.

If we attempt to use advanced index compression with a value of “LOW”:

SQL> alter index bowie_id_i rebuild compress advanced low;
alter index bowie_id_i rebuild compress advanced low
*
ERROR at line 1:
ORA-25193: cannot use COMPRESS option for a single column key

We get the same error. Although advanced index compression of LOW is clever enough to automatically compress only those leaf blocks where there is a benefit in compression, there can be no such index leaf block that benefits from compression via the de-duplication method. Therefore, the error is really there to just let you know that you’re wasting your time in attempting to do this on a unique index.

If however we use the new HIGH option with index advanced compression:

SQL> alter index bowie_id_i rebuild compress advanced high;

Index altered.

SQL> exec dbms_stats.gather_index_stats(ownname=>null, indname=>'BOWIE_ID_I');

PL/SQL procedure successfully completed.

SQL> select index_name, leaf_blocks, compression from user_indexes where index_name='BOWIE_ID_I';

INDEX_NAME   LEAF_BLOCKS COMPRESSION
------------ ----------- -------------
BOWIE_ID_I           965 ADVANCED HIGH

Not only does it not give us an error, but it has actually managed to successfully compress such a unique index containing nothing but a bunch of unique numbers to just 965 leaf blocks, down from 2088. The index is now less than half its previous size.

So any Oracle B-tree index, even if it’s a single column unique index, is a possible candidate to be compressed with “High” advanced index compression.
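As a small aside, you don't need to rebuild an existing index to get this; a sketch using the same syntax as earlier in this series shows the compression can simply be specified at creation time:

SQL> create unique index bowie_id_i on bowie(id) compress advanced high;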

More to come.

12.2 Index Advanced Compression “High” – Part I (High Hopes) December 6, 2016

Posted by Richard Foote in 12c Rel 2, 12c Release 2 New Features, Advanced Index Compression, Oracle Indexes.

Oracle first introduced Advanced Compression for Indexes in 12.1 as I’ve discussed here a number of times.

With Oracle Database 12c Release 2, you can now use Index Advanced Compression "High" to further (and potentially dramatically) improve the index compression ratio. Instead of simply de-duplicating the index entries within an index leaf block, High Index Compression uses more complex compression algorithms and stores the index entries in a Compression Unit (similar to that used with Hybrid Columnar Compression). The net result is generally a much better level of compression, but at the potential cost of more CPU resources to both access and maintain the index structures.

To give you an idea on the possible compression improvements, let’s re-run the demo I used previously when I first discussed Advanced Index Compression.

So I first create a table where the CODE column has many distinct values, but with a portion (25%) of the data replicated:

SQL> create table bowie (id number, code number, name varchar2(30));

Table created.

SQL> insert into bowie select rownum, rownum, 'ZIGGY STARDUST' from dual connect by level <= 1000000;

1000000 rows created.

SQL> update bowie set code = 42 where id between 250000 and 499999;

250000 rows updated.

SQL> commit;

Commit complete.

I then create an index on the CODE column and check out its initial size:

SQL> create index bowie_code_i on bowie(code);

Index created.

SQL> select index_name, leaf_blocks, compression from user_indexes where table_name='BOWIE';

INDEX_NAME   LEAF_BLOCKS COMPRESSION
------------ ----------- -------------
BOWIE_CODE_I        2158 DISABLED

 

If I just use normal compression on this index:

SQL> alter index bowie_code_i rebuild compress;

Index altered.

SQL> select index_name, leaf_blocks, compression from user_indexes where table_name='BOWIE';

INDEX_NAME   LEAF_BLOCKS COMPRESSION
------------ ----------- -------------
BOWIE_CODE_I        2684 ENABLED

 

We notice the index actually increases in size (2684, up from 2158), as most (75%) of the CODE values are unique, and so the overhead of the resultant prefix table in the leaf blocks used with normal index compression outweighs the savings of compression on the 25% of the index where compression is beneficial.

If we use “Low” advanced index compression as introduced in 12.1:

SQL> alter index bowie_code_i rebuild compress advanced low;

Index altered.

SQL> select index_name, leaf_blocks, compression from user_indexes where table_name='BOWIE';

INDEX_NAME   LEAF_BLOCKS COMPRESSION
------------ ----------- -------------
BOWIE_CODE_I        2057 ADVANCED LOW

 

We notice the index has now indeed decreased in size (2057 down from 2158), as Oracle has automatically compressed just the 25% of the index where compression was beneficial and not touched the 75% of the index where compression wasn’t possible when de-duplicating values.

If we now however use the new 12.2 Advanced Index Compression “High” option:

SQL> alter index bowie_code_i rebuild compress advanced high;

Index altered.

SQL> select index_name, leaf_blocks, compression from user_indexes where table_name='BOWIE';

INDEX_NAME   LEAF_BLOCKS COMPRESSION
------------ ----------- -------------
BOWIE_CODE_I           0 ADVANCED HIGH

Wow, an index with no leaf blocks, that's unbelievably small. Actually, I don't believe it, as this is due to bug 22094934. We need to gather index statistics to see the new index size:

 

SQL> exec dbms_stats.gather_index_stats(ownname=>null, indname=>'BOWIE_CODE_I');

PL/SQL procedure successfully completed.

SQL> select index_name, leaf_blocks, compression from user_indexes where table_name='BOWIE';

INDEX_NAME   LEAF_BLOCKS COMPRESSION
------------ ----------- -------------
BOWIE_CODE_I         815 ADVANCED HIGH

 

We notice that the index hasn't just shrunk a tad, but is now substantially smaller than before (down to just 815 leaf blocks, rather than the modest 2158 to 2057 reduction we previously achieved with low index advanced compression).

So Index Advanced Compression, with the now default “HIGH” option can substantially reduce index sizes. Note this new capability of course requires the Advanced Compression Option.
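If you want to check what compression-related features your database has actually detected in use (always wise from a licensing perspective), one hedged approach is to query the DBA_FEATURE_USAGE_STATISTICS view:

SQL> select name, detected_usages, currently_used
  2  from dba_feature_usage_statistics
  3  where name like '%Compression%';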

More to come.

Oracle Database 12c Release 2: New Indexing Features – Long Identifiers (Several Species of Small Furry Animals Gathered Together in a Cave and Grooving With a Pict) November 24, 2016

Posted by Richard Foote in 12c Rel 2, 12c Release 2 New Features, Long Identifiers, Oracle Indexes.

Oracle Database 12c Release 2 has now been officially released (at least on the Oracle Database Cloud and Oracle Database Exadata Express Cloud Services).

In the coming weeks, I’ll be blogging about quite a number of new indexing features/capabilities/improvements that have been introduced in Oracle Database 12c Release 2. These include:

  • Advanced Index Compression
  • Tracking Index Usage
  • Online Table Move (and automatic Index maintenance)
  • Online Conversion to Partitioned Table (and Partitioned Indexes)
  • Deferred Invalidation of Cursors During Index Creation/Rebuild
  • Indexing JSON Improvements
  • Text Index Improvements

To start things off, how many times have you wanted to name an index something such as follows:

SQL> create table bowie (id number, code number, name varchar2(42));

Table created.

SQL> insert into bowie select rownum, mod(rownum,10), 'DAVID BOWIE' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index this_index_will_be_used_to_get_data_from_the_bowie_table_in_scenarios_when_the_code_column_is_used_in_predicates_idx on bowie(code);
create index this_index_will_be_used_to_get_data_from_the_bowie_table_in_scenarios_when_the_code_column_is_used_in_predicates_idx on bowie(code)
*
ERROR at line 1:
ORA-00972: identifier is too long

Only to get an error that you've exceeded the 30-character identifier limit.

Well, with 12.2, the maximum length of an identifier has been increased to 128 characters, so now creating such an index works:


SQL> create index this_index_will_be_used_to_get_data_from_the_bowie_table_in_scenarios_when_the_code_column_is_used_in_predicates_idx on bowie(code);

Index created.

SQL> select index_name, leaf_blocks, status from dba_indexes where table_name='BOWIE';

INDEX_NAME                                         LEAF_BLOCKS STATUS
-------------------------------------------------- ----------- --------
THIS_INDEX_WILL_BE_USED_TO_GET_DATA_FROM_THE_BOWIE        1939 VALID
_TABLE_IN_SCENARIOS_WHEN_THE_CODE_COLUMN_IS_USED_I
N_PREDICATES_IDX

 

The possibilities are endless:

SQL> alter table bowie add constraint the_primary_key_of_the_bowie_table_is_the_id_column_so_please_stop_trying_to_insert_a_duplicate_id_value_dumbo primary key (id);

Table altered.

SQL> insert into bowie values (42, 42, 'David Bowie');
insert into bowie values (42, 42, 'David Bowie')
*
ERROR at line 1:
ORA-00001: unique constraint
(BOWIE.THE_PRIMARY_KEY_OF_THE_BOWIE_TABLE_IS_THE_ID_COLUMN_SO_PLEASE_STOP_TRYING_TO_INSERT_A_DUPLICATE_ID_VALUE_DUMBO) violated

Developers can now be, shall we say, more “expressive” with the database objects they create …

Oracle 12c: Indexing JSON in the Database Part III (Paperback Writer) September 2, 2016

Posted by Richard Foote in 12c, JSON, JSON Text Index, Oracle Indexes.

In Part I and Part II, we looked at how to index specific attributes within JSON documents stored within an Oracle 12c database.

But what if we're not sure which specific attributes might benefit from an index or indeed, as JSON is by its nature a schema-less way to store data, what if we're not entirely sure which attributes might be present now or in the future?

On a JSON document store within the Oracle Database, you can create a special JSON aware Text Index that can automatically index any field/attribute within a JSON document and use a Text based function to then search efficiently for data from any attribute.

Using the same table created in Part I, you can create a JSON Text index as follows:

SQL> CREATE INDEX ziggy_search_idx ON ziggy_json (ziggy_order)
2 INDEXTYPE IS CTXSYS.CONTEXT
3 PARAMETERS ('section group CTXSYS.JSON_SECTION_GROUP SYNC (ON COMMIT)');
Index created.

Note this Text index is (optionally) defined to be automatically synchronised when data in the ZIGGY_JSON table is committed.
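If on-commit synchronisation proves too aggressive for a busy table, Oracle Text also supports manual and interval-based syncing. A sketch (assuming the standard Oracle Text SYNC EVERY syntax with a scheduler calendaring string; check the documentation for your release):

SQL> CREATE INDEX ziggy_search_idx ON ziggy_json (ziggy_order)
  2  INDEXTYPE IS CTXSYS.CONTEXT
  3  PARAMETERS ('section group CTXSYS.JSON_SECTION_GROUP SYNC (EVERY "freq=hourly;interval=1")');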

We can use the JSON_TEXTCONTAINS Oracle Text function to efficiently access data for any data within the JSON defined column. For example:

SQL> SELECT * FROM ziggy_json WHERE json_textcontains(ziggy_order, '$.Reference', 'DBOWIE-201642');

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 3069169778

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time      |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  |    1 |  1534 |    2550 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | ZIGGY_JSON       |    1 |  1534 |    2550 (0) |  00:00:01 |
|* 2 | DOMAIN INDEX                | ZIGGY_SEARCH_IDX |      |       |    2549 (0) |  00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CTXSYS"."CONTAINS"("ZIGGY_JSON"."ZIGGY_ORDER",'{DBOWIE-201642}
 INPATH(/Reference)')>0)

Statistics
----------------------------------------------------------
  65 recursive calls
   0 db block gets
 118 consistent gets
   0 physical reads
   0 redo size
1863 bytes sent via SQL*Net to client
1088 bytes received via SQL*Net from client
   6 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   2 rows processed

Note the JSON-based Text index is used to retrieve data efficiently.

The Text index can also be used to search data efficiently from within an array set:

SQL> SELECT * FROM ziggy_json WHERE json_textcontains(ziggy_order, '$.LineItems.Part.Description', 'Low');

no rows selected

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 3069169778

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  |    1 |  1534 |    5927 (0) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | ZIGGY_JSON       |    1 |  1534 |    5927 (0) | 00:00:01 |
|* 2 | DOMAIN INDEX                | ZIGGY_SEARCH_IDX |      |       |    5927 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CTXSYS"."CONTAINS"("ZIGGY_JSON"."ZIGGY_ORDER",'{Low}
 INPATH(/LineItems/Part/Description)')>0)

Statistics
----------------------------------------------------------
132 recursive calls
  0 db block gets
182 consistent gets
  0 physical reads
  0 redo size
627 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  0 rows processed

We can also search for a specific data value across any attribute within the JSON document store:

SQL> SELECT * FROM ziggy_json WHERE json_textcontains(ziggy_order, '$', '4242');

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 3069169778

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  |    1 |  1534 |       2 (0) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | ZIGGY_JSON       |    1 |  1534 |       2 (0) | 00:00:01 |
|* 2 | DOMAIN INDEX                | ZIGGY_SEARCH_IDX |      |       |       1 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

 2 - access("CTXSYS"."CONTAINS"("ZIGGY_JSON"."ZIGGY_ORDER",'{4242}')>0)
     
Statistics
----------------------------------------------------------
  10 recursive calls
   0 db block gets
  32 consistent gets
   0 physical reads
   0 redo size
1865 bytes sent via SQL*Net to client
1088 bytes received via SQL*Net from client
   6 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   2 rows processed

Let’s now add more data to the JSON column, but this time introducing a few new attributes (such as AlbumName):

SQL> insert into ziggy_json
 2 select
 3 rownum,
 4 SYSdate,
 5 '{"AlbumId" : ' || rownum || ',
 6 "AlbumName" : "HUNKY DORY",
 7 "ArtistName" : "David Bowie"}'
 8 from dual connect by level <= 10;
10 rows created.

SQL> commit;

Commit complete.

As the JSON-based Text index was defined to be automatically synchronised when we commit data in the table, these new attributes can be immediately searched and accessed via the index:

SQL> SELECT * FROM ziggy_json WHERE json_textcontains(ziggy_order, '$.AlbumName', 'HUNKY DORY');

10 rows selected.

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 3069169778

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  |  198 |  296K |    1948 (0) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | ZIGGY_JSON       |  198 |  296K |    1948 (0) | 00:00:01 |
|* 2 | DOMAIN INDEX                | ZIGGY_SEARCH_IDX |      |       |    1780 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CTXSYS"."CONTAINS"("ZIGGY_JSON"."ZIGGY_ORDER",'{HUNKY DORY}
 INPATH(/AlbumName)')>0)
     
Statistics
----------------------------------------------------------
  48 recursive calls
   0 db block gets
 103 consistent gets
   0 physical reads
   0 redo size
6751 bytes sent via SQL*Net to client
3232 bytes received via SQL*Net from client
  22 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
  10 rows processed

Not only can JSON data be stored within the Oracle 12c database, but we have a number of index strategies available to search such data efficiently.

Oracle 12c: Indexing JSON in the Database Part II (Find A Little Wood) August 5, 2016

Posted by Richard Foote in 12c, JSON, JSON Text Index, Oracle Indexes.

In Part I, we looked at how you can now store JSON documents within the Oracle 12c Database. For efficient access to JSON documents stored in the Oracle database, we can create a function-based index based on either the JSON_VALUE function or JSON .dot notation.
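As a quick refresher on the Part I approach (a sketch; the index name ZIGGY_PO_IDX is hypothetical), a single-attribute function-based index looks something like:

SQL> create index ziggy_po_idx on ziggy_json
  2  (json_value(ziggy_order, '$.PONumber' returning number));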

These indexes are useful for indexing specific JSON attributes, but what if we want to index multiple JSON attributes within a single index structure?

To start, I’m just going to add an extra row to increase the selectivity of other columns.

SQL> insert into ziggy_json
2 select
3 100001,
4 SYSdate,
5 '{"PONumber" : 1000001,
6 "Reference" : "MTOM-20161",
7 "Requestor" : "Major Tom",
8 "User" : "MTOM",
9 "CostCenter" : "B42",
10 "ShippingInstructions" : {"name" : "Major Tom",
11 "Address": {"street" : "42 Ziggy Street",
12 "city" : "Canberra",
13 "state" : "ACT",
14 "zipCode" : 2601,
15 "country" : "Australia"},
16 "Phone" : [{"type" : "Office", "number" : "417-555-7777"},
17 {"type" : "Mobile", "number" : "417-555-1234"}]},
18 "Special Instructions" : null,
19 "AllowPartialShipment" : true,
20 "LineItems" : [{"ItemNumber" : 1,
21 "Part" : {"Description" : "Hunky Dory",
22 "UnitPrice" : 10.95},
23 "Quantity" : 5.0},
24 {"ItemNumber" : 2,
25 "Part" : {"Description" : "Pin-Ups",
26 "UnitPrice" : 10.95},
27 "Quantity" : 3.0}]}'
28 from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY_JSON');

PL/SQL procedure successfully completed.

We can still create composite indexes based on the JSON_VALUE function as we can with conventional columns:

SQL> create index ziggy_json_idx3 on 
     ziggy_json(json_value(ziggy_order, '$.User' RETURNING VARCHAR2(20)), 
                json_value(ziggy_order, '$.CostCenter' RETURNING VARCHAR2(6)));

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY_JSON');

PL/SQL procedure successfully completed.

If we now run a query with a couple of JSON_VALUE based predicates:

SQL> select * from ziggy_json
     where json_value(ziggy_order, '$.User' RETURNING VARCHAR2(20))='MTOM' and 
           json_value(ziggy_order, '$.CostCenter' RETURNING VARCHAR2(6)) = 'B42';

Execution Plan
----------------------------------------------------------
Plan hash value: 3402615542

-------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name            | Rows | Bytes | Cost (%CPU) | Time     |
-------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                 |    1 |  1533 |       4 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY_JSON      |    1 |  1533 |       4 (0) | 00:00:01 |
|*2  | INDEX RANGE SCAN                    | ZIGGY_JSON_IDX3 |    1 |       |       3 (0) | 00:00:01 |
-------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(JSON_VALUE("ZIGGY_ORDER" FORMAT JSON , '$.User' RETURNING VARCHAR2(20) NULL ON
ERROR)='MTOM' AND JSON_VALUE("ZIGGY_ORDER" FORMAT JSON , '$.CostCenter' RETURNING VARCHAR2(6) NULL ON ERROR)='B42')
 
Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   5 consistent gets
   0 physical reads
   0 redo size
1248 bytes sent via SQL*Net to client
 820 bytes received via SQL*Net from client
   4 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   1 rows processed

The composite index is effectively used by the CBO as expected.

It does though make our SQL a little cumbersome to write. To simplify things a tad, we could create a couple of virtual columns based on these functions, create the JSON function-based indexes on these virtual columns and simplify the SQL accordingly.

First, we create the virtual columns (note they’re virtual columns and so consume no storage):

 

SQL> ALTER TABLE ziggy_json ADD (userid VARCHAR2(20)
2 GENERATED ALWAYS AS (json_value(ziggy_order, '$.User' RETURNING VARCHAR2(20))));

Table altered.

SQL> ALTER TABLE ziggy_json ADD (costcenter VARCHAR2(6)
2 GENERATED ALWAYS AS (json_value(ziggy_order, '$.CostCenter' RETURNING VARCHAR2(6))));

Table altered.

 

Next, create the index based on these newly created virtual columns:

SQL> CREATE INDEX ziggy_user_costctr_idx on ziggy_json(userid, costcenter);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY_JSON');

PL/SQL procedure successfully completed.

And then write a simplified version of the SQL to reference the virtual columns:

SQL> select * from ziggy_json where userid='MTOM' and costcenter='B42';

ID ZIGGY_DAT
---------- ---------
ZIGGY_ORDER
--------------------------------------------------------------------------------

USERID COSTCE
-------------------- ------
100001 24-JUN-16
{"PONumber" : 1000001,
"Reference" : "MTOM-20161",
MTOM B42
  
Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 5717455

--------------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name                   | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                        |    1 |  1535 |       4 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY_JSON             |    1 |  1535 |       4 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_USER_COSTCTR_IDX |    1 |       |       3 (0) | 00:00:01 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("USERID"='MTOM' AND "COSTCENTER"='B42')
   
Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   5 consistent gets
   0 physical reads
   0 redo size
1396 bytes sent via SQL*Net to client
 820 bytes received via SQL*Net from client
   4 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   1 rows processed

The index is again used as expected.

Of course, if we still want to reference the JSON functions directly within the SQL, the query can still be written as previously:

SQL> select * from ziggy_json 
     where json_value(ziggy_order, '$.User' returning varchar2(20))='MTOM' and
           json_value(ziggy_order, '$.CostCenter' returning varchar2(6))='B42';

ID ZIGGY_DAT
---------- ---------
ZIGGY_ORDER
--------------------------------------------------------------------------------

USERID COSTCE
-------------------- ------
100001 24-JUN-16
{"PONumber" : 1000001,
"Reference" : "MTOM-20161",
MTOM B42
   
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 5717455

--------------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name                   | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                        |    1 |  1535 |       4 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY_JSON             |    1 |  1535 |       4 (0) | 00:00:01 |
|*2  | INDEX RANGE SCAN                    | ZIGGY_USER_COSTCTR_IDX |    1 |       |       3 (0) | 00:00:01 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ZIGGY_JSON"."USERID"='MTOM' AND "ZIGGY_JSON"."COSTCENTER"='B42')

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   5 consistent gets
   0 physical reads
   0 redo size
1396 bytes sent via SQL*Net to client
 820 bytes received via SQL*Net from client
   4 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   1 rows processed

The index is again used as expected.

I’ll next look at using a JSON Text based index to effectively index the entire JSON document.

Oracle 12c: Indexing JSON In The Database Part I (Lazarus) August 2, 2016

Posted by Richard Foote in 12c, JSON, Oracle Indexes.

One of the very cool new features introduced in Oracle Database 12c Rel 1 is the ability to store JavaScript Object Notation (JSON) documents within the database. Unlike XML, which has its own data type, JSON data can be stored as VARCHAR2, CLOB or BLOB data types, but with a JSON check constraint to ensure the stored parsed document meets JSON document standards.

This enables ‘No-SQL’ schema-less type development within the Oracle Database for “next generation” applications. Although developers can work with the JSON store without using SQL (via say RESTful APIs directly within the database), you can still leverage the power of SQL for reporting and analytics type purposes. Of course, reading JSON data efficiently then becomes important, and that’s where indexing the JSON document store kicks in.

Let’s look at a simple example.

Firstly, let’s create a table with a column called ZIGGY_ORDER which stores JSON documents.

SQL> CREATE TABLE ziggy_json
  2  (id number,
  3   ziggy_date date,
  4   ziggy_order CLOB
  5   CONSTRAINT ensure_ziggy_json CHECK (ziggy_order IS JSON));

Table created.

The ZIGGY_ORDER column has a JSON check constraint which ensures only valid JSON documents can be stored. An attempt to insert a row with invalid JSON data will fail:

SQL> insert into ziggy_json values (1, sysdate, '{"This is not legal JSON"}');
insert into ziggy_json values (1, sysdate, '{"This is not legal JSON"}')
*
ERROR at line 1:
ORA-02290: check constraint (BOWIE.ENSURE_ZIGGY_JSON) violated

Let’s insert some JSON data into the table. Note I’m using ROWNUM to generate mostly unique JSON documents and then re-inserting the same data so there are a couple of rows per PONumber worth accessing via an index:

SQL> insert into ziggy_json
2 select
3 rownum,
4 SYSdate,
5 '{"PONumber" : ' || rownum || ',
6 "Reference" : "DBOWIE-2016' || rownum || '",
7 "Requestor" : "David Bowie",
8 "User" : "DBOWIE",
9 "CostCenter" : "A42",
10 "ShippingInstructions" : {"name" : "David Bowie",
11 "Address": {"street" : "42 Ziggy Street",
12 "city" : "Canberra",
13 "state" : "ACT",
14 "zipCode" : 2601,
15 "country" : "Australia"},
16 "Phone" : [{"type" : "Office", "number" : "417-555-7777"},
17 {"type" : "Mobile", "number" : "417-555-1234"}]},
18 "Special Instructions" : null,
19 "AllowPartialShipment" : true,
20 "LineItems" : [{"ItemNumber" : 1,
21 "Part" : {"Description" : "Hunky Dory",
22 "UnitPrice" : 10.95},
23 "Quantity" : 5.0},
24 {"ItemNumber" : 2,
25 "Part" : {"Description" : "Pin-Ups",
26 "UnitPrice" : 10.95},
27 "Quantity" : 3.0}]}'
28 from dual connect by level <= 1000000;

1000000 rows created.

SQL> insert into ziggy_json select * from ziggy_json;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY_JSON');

PL/SQL procedure successfully completed.

There are a number of ways we can reference and access data from within JSON. One method is to use .dot notation to access specific JSON data elements of interest:

SQL> SELECT z.ziggy_order.PONumber FROM ziggy_json z where z.ziggy_order.PONumber=42;

PONUMBER
--------------------------------------------------------------------------------

42
42

We can also use the JSON_VALUE function to access specific JSON data of interest:

SQL> select json_value(ziggy_order, '$.Reference') from ziggy_json  
     where json_value(ziggy_order, '$.PONumber' returning number)=42;

JSON_VALUE(ZIGGY_ORDER,'$.REFERENCE')
--------------------------------------------------------------------------------

DBOWIE-201642
DBOWIE-201642

Without an index, the CBO has no choice but to use an expensive Full Table Scan:

SQL> SELECT z.ziggy_order.PONumber FROM ziggy_json z where z.ziggy_order.PONumber=42;

Elapsed: 00:00:34.04

Execution Plan
----------------------------------------------------------
Plan hash value: 1413303849

--------------------------------------------------------------------------------
| Id | Operation         | Name       | Rows  | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------
| 0  | SELECT STATEMENT  |            | 20000 |   28M |    129K (3) | 00:00:06 |
|* 1 | TABLE ACCESS FULL | ZIGGY_JSON | 20000 |   28M |    129K (3) | 00:00:06 |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(TO_NUMBER(JSON_QUERY("Z"."ZIGGY_ORDER" FORMAT JSON ,
'$.PONumber' RETURNING VARCHAR2(4000) ASIS WITHOUT ARRAY WRAPPER NULL
ON ERROR))=42)

Statistics
----------------------------------------------------------
     0 recursive calls
     0 db block gets
500057 consistent gets
299745 physical reads
     0 redo size
   596 bytes sent via SQL*Net to client
   552 bytes received via SQL*Net from client
     2 SQL*Net roundtrips to/from client
     0 sorts (memory)
     0 sorts (disk)
     2 rows processed


At some 500,057 consistent gets and an elapsed time of 34.04 seconds, the above query is “slow” and expensive.

So one option to speed things up is to create a function-based index using the JSON_VALUE function. This can then be used to quickly access data that matches specific JSON name values of interest:

SQL> CREATE INDEX ziggy_po_num_idx ON 
     ziggy_json (json_value(ziggy_order, '$.PONumber' RETURNING NUMBER ERROR ON ERROR));

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY_JSON');

PL/SQL procedure successfully completed.

As it’s a function-based index, collecting statistics on the implicitly created hidden virtual column is advisable.
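We can confirm the hidden virtual column is now in place and has statistics with a quick dictionary query (a sketch; the SYS_NC-style column name is system-generated, so yours may differ):

SQL> select column_name, num_distinct, density, hidden_column, virtual_column
     from dba_tab_cols
     where table_name='ZIGGY_JSON' and hidden_column='YES';

If we now re-run the query: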

SQL> select * from ziggy_json 
     where json_value(ziggy_order, '$.PONumber' returning number)=42;

ID ZIGGY_DAT
---------- ---------
ZIGGY_ORDER
------------------------------------------------------------
42 24-JUN-16
{"PONumber" : 42,
"Reference" : "DBOWIE-201642",
42 24-JUN-16
{"PONumber" : 42,
"Reference" : "DBOWIE-201642",

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1939019025

--------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name             | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                  |    2 |  3058 |       5 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY_JSON       |    2 |  3058 |       5 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_PO_NUM_IDX |    2 |       |       3 (0) | 00:00:01 |
--------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(JSON_VALUE("ZIGGY_ORDER" FORMAT JSON , '$.PONumber' RETURNING NUMBER ERROR ON
ERROR)=42)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   9 consistent gets
   0 physical reads
   0 redo size
1863 bytes sent via SQL*Net to client
1088 bytes received via SQL*Net from client
   6 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   2 rows processed

The query now uses the JSON function-based index, performs just 9 consistent gets and completes in 0.01 second.

The index can be used for any data accesses in which the CBO considers the index the cheaper alternative:

SQL> select * from ziggy_json where json_value(ziggy_order, '$.PONumber' returning number)<42;

82 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1939019025

--------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name             | Rows | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                  |   82 |  122K |      86 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY_JSON       |   82 |  122K |      86 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_PO_NUM_IDX |   82 |       |       3 (0) | 00:00:01 |
--------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(JSON_VALUE("ZIGGY_ORDER" FORMAT JSON , '$.PONumber' RETURNING NUMBER ERROR ON
ERROR)<42)

Statistics
----------------------------------------------------------
    0 recursive calls
    0 db block gets
  249 consistent gets
    0 physical reads
    0 redo size
50623 bytes sent via SQL*Net to client
22528 bytes received via SQL*Net from client
  166 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
   82 rows processed

Indexes can also be created based on the .dot JSON notation:

SQL> CREATE INDEX ziggy_po_num_idx2 ON ziggy_json z (z.ziggy_order.PONumber);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY_JSON');

PL/SQL procedure successfully completed.

A query based on the associated JSON .dot notation can now run efficiently via the index:

SQL> SELECT * FROM ziggy_json z where z.ziggy_order.PONumber='42';

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 4224387816

---------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name              | Rows | Bytes| Cost (%CPU) | Time     |
---------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                   |    2 | 3062 |       6 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY_JSON        |    2 | 3062 |       6 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_PO_NUM_IDX2 |    2 |      |       3 (0) | 00:00:01 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(JSON_QUERY("ZIGGY_ORDER" FORMAT JSON , '$.PONumber' RETURNING VARCHAR2(4000) ASIS
WITHOUT ARRAY WRAPPER NULL ON ERROR)='42')

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   9 consistent gets
   0 physical reads
   0 redo size
1863 bytes sent via SQL*Net to client
1088 bytes received via SQL*Net from client
   6 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   2 rows processed

The query again uses the index and is just as efficient, with almost immediate response times at just 9 consistent gets.

JSON indexes can also be used to help police data constraints and integrity (which can otherwise be problematic with JSON documents). The following numeric index example also implicitly adds data constraint capabilities:

SQL> CREATE INDEX ziggy_po_num_idx3 ON ziggy_json z (to_number(z.ziggy_order.PONumber));

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY_JSON');

PL/SQL procedure successfully completed.

SQL> insert into ziggy_json
2 select
3 rownum,
4 SYSdate,
5 '{"PONumber" : "200000A",
6 "Reference" : "DBOWIE-2016' || rownum || '",
7 "Requestor" : "David Bowie",
8 "User" : "DBOWIE",
9 "CostCenter" : "A42",
10 "ShippingInstructions" : {"name" : "David Bowie",
11 "Address": {"street" : "42 Ziggy Street",
12 "city" : "Canberra",
13 "state" : "ACT",
14 "zipCode" : 2601,
15 "country" : "Australia"},
16 "Phone" : [{"type" : "Office", "number" :"417-555-7777"},
17 {"type" : "Mobile", "number" :"417-555-1234"}]},
18 "Special Instructions" : null,
19 "AllowPartialShipment" : true,
20 "LineItems" : [{"ItemNumber" : 1,
21 "Part" : {"Description" : "Hunky Dory",
22 "UnitPrice" : 10.95},
23 "Quantity" : 5.0},
24 {"ItemNumber" : 2,
25 "Part" : {"Description" : "Pin-Ups",
26 "UnitPrice" : 10.95},
27 "Quantity" : 3.0}]}'
28 from dual;
insert into ziggy_json
*
ERROR at line 1:
ORA-01722: invalid number

The PONumber value now has to be numeric for the document to be successfully added to the JSON document store. The index of course can also be used for efficient data access:

SQL> SELECT * FROM ziggy_json z where to_number(z.ziggy_order.PONumber)=42;

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 692052820

---------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name              | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                   |    2 |  3048 |      5 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY_JSON        |    2 |  3048 |      5 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_PO_NUM_IDX3 |    2 |       |      3 (0) | 00:00:01 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(TO_NUMBER(JSON_QUERY("ZIGGY_ORDER" FORMAT JSON , '$.PONumber' RETURNING
VARCHAR2(4000) ASIS WITHOUT ARRAY WRAPPER NULL ON ERROR))=42)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
   9 consistent gets
   0 physical reads
   0 redo size
1863 bytes sent via SQL*Net to client
1088 bytes received via SQL*Net from client
   6 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   2 rows processed

In Part II, I’ll look at how to create composite JSON indexes and how to use a text index to automatically index all name fields within a JSON document.
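As a rough flavour of the composite case (a sketch under assumptions only; the virtual column names, sizes and index name here are illustrative and not necessarily what Part II will use), one option is to expose the JSON attributes of interest as virtual columns and then index them together:

SQL> alter table ziggy_json add (
  2  userid varchar2(20) generated always as
  3  (json_value(ziggy_order, '$.User' returning varchar2(20))) virtual,
  4  costcenter varchar2(6) generated always as
  5  (json_value(ziggy_order, '$.CostCenter' returning varchar2(6))) virtual);

SQL> create index ziggy_user_costctr_idx on ziggy_json(userid, costcenter);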

Next Round Of ANZ “Let’s Talk Database” Events (July/August 2016) June 20, 2016

Posted by Richard Foote in Let's Talk Database, Oracle Indexes.
11 comments

I’ll be presenting the next round of “Let’s Talk Database” events around Australia and NZ this winter in July/August 2016. These are free events but due to limited places have often “sold out” in the past, so booking early is recommended to avoid disappointment.

All events run between 9:00am – 12:30pm and are followed by a networking lunch.

We always have a fun time at these events and feedback suggests everyone learns something, so I hope if you’re lucky enough to be in the ANZ region, you can make it to your local event 🙂

Currently, the confirmed events and topics are as follows:

  • Sydney – Wednesday, 20 July 2016: Let’s Talk Oracle Database: Intro To CBO Costings, Database Futures. To register, email: mitch.foster@oracle.com
  • Melbourne – Thursday, 21 July 2016: Let’s Talk Oracle Database: Performance Diagnostics – How To Correctly Use AWR Reports. To register, email: johanne.sergeant@oracle.com SOLD OUT !!
  • Canberra – Thursday, 28 July 2016: Let’s Talk Database: Oracle Database 12c – Built for Data Warehousing. To register, email: richard.foote@oracle.com
  • Wellington –  Wednesday, 10 August 2016: Let’s Talk Oracle Database: Performance Diagnostics – How To Correctly Use AWR Reports. To register, email: tanya.fea@oracle.com
  • Auckland – Thursday, 11 August 2016: Let’s Talk Oracle Database: Performance Diagnostics – How To Correctly Use AWR Reports. To register, email: tanya.fea@oracle.com
  • Perth – Wednesday, 17 August 2016: Let’s Talk Database: Intro To CBO Costings, Database Futures. To register, email: dennis.ward@oracle.com
  • Adelaide – Thursday, 18 August 2016: Let’s Talk Database: Oracle Database 12c – Built for Data Warehousing. To register, email: brodie.james@oracle.com
  • Brisbane: Wednesday, 24 August 2016: Let’s Talk Oracle Database: Intro To CBO Costings, Database Futures. To register, email: gordon.laing@oracle.com


Details of the sessions are as follows.

Performance Diagnostics – How To Correctly Use AWR Reports

I often get called in by customers to determine and address the root cause of database performance issues. Depending on the issue, a request for a simple Automatic Workload Repository (AWR) report is often sufficient to accurately diagnose the root problem(s). However, many DBAs find AWR reports daunting as they contain so much database diagnostic information that they don’t even know where to begin to find the truly useful information.

In this session, I begin by discussing the basic concepts of response times, DB times, the Oracle Wait Interface and how to focus on what is truly important. I then look at how to best read an AWR report to quickly go to the most relevant sections that detail any specific issues. I also discuss a general tuning and diagnostic methodology that ensures one can quickly determine whether an AWR report will indeed be sufficient and how to accurately and consistently use the AWR report to pinpoint and determine root causes for global database performance issues. We’ll go through a number of actual “real-life” examples that highlight various performance issues and how one accurately determines the actual root issues through an appropriate AWR report.

Time permitting, we’ll go through some other useful diagnostic tools such as the AWR Warehouse, Automatic Database Diagnostic Monitor (ADDM), Active Session History (ASH)  and SQL Extended Tracing and scenarios when they can prove to be invaluable in addressing database performance issues.


Introduction Into The CBO (Why Is The CBO doing a Full Table Scan?)

A common issue is confusion as to why the Oracle Cost Based Optimizer (CBO) has made the decision to use a full table scan, or to use a particular index and not the one that appears more appropriate. The costing figures that the CBO derives and displays in execution plans are often viewed as cryptic, which is unfortunate as these costings are key to CBO decisions. Often the attempt to resolve such issues is to set inappropriate database parameters, which introduces as many issues as it tries to resolve. This session looks under the covers of the CBO to see how it precisely derives its various costings, and hence how to correctly troubleshoot and resolve inefficient queries when they arise. Bring a calculator, it might come in handy 🙂

Database Futures – Exadata Secret Sauce and Software on Silicon

Oracle has recently released the new generation of the Exadata machine, the X6-2. Not only is it bigger and faster, but it also has a host of features and capabilities that are unique to the Exadata storage platform, including Smart Scans, Storage Indexes, Exafusion, Smart Fusion Block Transfer, Dual Format Columnar Flash Cache, I/O Latency Capping, Hybrid Columnar Compression, Sparse Snapshots and In-Memory Database Fault Tolerance. All these unique features and their enhancements with the new X6-2 will be discussed. Additionally, Oracle’s Software-in-Silicon features extend the concepts of co-engineering hardware and software inherent in engineered systems such as the Exadata. Embedding software that has traditionally sat in the database layer into the hardware layer can yield both performance and security beyond what is available from a traditional CPU. This presentation also covers how Oracle Database 12c can leverage Software-in-Silicon features, where the Oracle Database is heading in the future and why you should care.


Oracle Database 12c – Built for Data Warehousing

The Oracle Database is the leading database in the market, but it might come as a surprise to some that it is also the leading database with respect to Data Warehousing, in terms of both sales and analyst rankings. The focus today is a detailed look at all the special database capabilities that make the Oracle Database the perfect platform for Data Warehouse type applications.

We explore the overall Data Warehousing Information Management architecture and how it relates to both data warehouses and data reservoirs. We then deep dive into standard database features such as Materialized Views, Dimensions, Bitmap Indexes, Query Re-Write, JSON in the database, Partitioning, Parallel Processing, Compression, Attribute Clustering and SQL for analysis, reporting, aggregation and pattern matching.

We then discuss more advanced Oracle database capabilities such as Data Mining, Predictive Analytics, R in the database, Database In-Memory, Big Data SQL, Spatial & Graph and the specific engineered systems capabilities that makes Oracle such an effective Data Warehouse platform, especially when combined with the enterprise capabilities already associated with the Oracle Database such as security, high availability, scalability and manageability.

Storing Date Values As Numbers (The Numbers) June 1, 2016

Posted by Richard Foote in 12c, CBO, Histograms, Oracle Indexes, Storing Dates As Numbers.
7 comments

In my last couple of posts, I’ve been discussing how storing date data in a character based column is a really really bad idea.

In a follow-up question, I was asked if storing dates in NUMBER format was a better option. The answer is that it’s probably an improvement over storing dates as strings, but it’s still a really really bad idea. Storing dates in DATE format is easily the best option, as is storing any data in its native data type.

In this post, I’ll highlight a few of the classic issues with storing dates in basic number format as well as showing you some of the calculations on the CBO cardinality estimates.

As usual, the demo starts with a basic little table that I’ll populate with date data stored in a NUMBER column (ZIGGY_DATE):

SQL> create table ziggy (id number, code number, ziggy_date number);
    
Table created.

SQL> insert into ziggy select rownum, mod(rownum,1000), 
to_number(to_char(sysdate-mod(rownum,10000), 'YYYYMMDD')) 
from dual connect by level <=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

We’ll now collect statistics on the table:

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY');

PL/SQL procedure successfully completed.

SQL> select column_name, num_distinct, density, histogram, hidden_column, virtual_column from dba_tab_cols where table_name='ZIGGY';

COLUMN_NAME NUM_DISTINCT    DENSITY HISTOGRAM       HID VIR
----------- ------------ ---------- --------------- --- ---
ZIGGY_DATE         10000      .0001 NONE            NO  NO
CODE                1000       .001 NONE            NO  NO
ID               1000000    .000001 NONE            NO  NO

So the ZIGGY_DATE column has 10,000 distinct dates (with 100 rows per distinct date), with a column density of 1/10000 = 0.0001.

Let’s now create a standard B-Tree index on the ZIGGY_DATE column:

SQL> create index ziggy_date_i on ziggy(ziggy_date);
                  
Index created.

If we look at a sample of the data in the column and the min/max date range:

SQL> select * from ziggy where rownum <11;
        
        ID       CODE ZIGGY_DATE
---------- ---------- ----------
       776        776   20140412
       777        777   20140411
       778        778   20140410
       779        779   20140409
       780        780   20140408
       781        781   20140407
       782        782   20140406
       783        783   20140405
       784        784   20140404
       785        785   20140403

SQL> select min(ziggy_date) min, max(ziggy_date) max from ziggy;

       MIN        MAX
---------- ----------
  19890110   20160527

We see that the data in the ZIGGY_DATE column consists of just number representations of dates, ranging between 10 Jan 1989 and 27 May 2016.

Note there are actually 10,000 days between the dates but the CBO would estimate a range of 270,417 possible days (20160527 – 19890110 = 270,417). The CBO has no idea that the “numbers” within the column are all dates and that there are ranges of values in which data is relatively popular (e.g. between say 20160101 and 20160131) and ranges of values in which data is relatively unpopular (e.g. between say 20154242 and 20159999).

Although not as bad as the range of possible unpopular values found within a character data type as I discussed previously when storing date data as a string, there is still enough data skew when storing dates as numbers to be problematic to the CBO.

If we select just one date with an equality predicate:

SQL> select * from ziggy where ziggy_date = 20150613;
                 
100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2700236208

----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |              |  100 |  1500 |     103 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY        |  100 |  1500 |     103 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_DATE_I |  100 |       |       3 (0) | 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ZIGGY_DATE"=20150613)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
 110 consistent gets
   0 physical reads
   0 redo size
3883 bytes sent via SQL*Net to client
 618 bytes received via SQL*Net from client
   8 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
 100 rows processed

The CBO gets things spot on, correctly estimating 100 rows to be returned, as the CBO knows there are only 10,000 distinct values of which only one of those values is being selected.

Selectivity is basically the density of the column = 1/10000 = 0.0001, so the estimated cardinality is 0.0001 x 1M rows = 100 rows. Perfect.

However, if we perform a range based query as follows:

SQL> select * from ziggy where ziggy_date between 20151010 and 20151111;
     
3300 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2700236208

----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |              |  573 |  8595 |     580 (1) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY        |  573 |  8595 |     580 (1) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_DATE_I |  573 |       |       4 (0) | 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ZIGGY_DATE">=20151010 AND "ZIGGY_DATE"<=20151111)

Statistics
----------------------------------------------------------
     0 recursive calls
     0 db block gets
  3531 consistent gets
     0 physical reads
     0 redo size
108973 bytes sent via SQL*Net to client
  2961 bytes received via SQL*Net from client
   221 SQL*Net roundtrips to/from client
     0 sorts (memory)
     0 sorts (disk)
  3300 rows processed

The CBO has got things somewhat incorrect in this example and has underestimated the expected number of rows (573 rows vs. the 3,300 rows actually returned).

The actual number of days between these dates is 33 so the actual ratio of data returned is 33/10000 x 1M rows = 3,300 rows. This is a range of “numbers” that overall covers a relatively “popular” range of date values.

However, Oracle is estimating a range of some 20151111 – 20151010 = 101 days between these dates. As the total range of possible days is 20160527 – 19890110 = 270,417, the estimated ratio of returned rows is 101/270417 plus 2 x the selectivity of a single day for the two implicit equality conditions (as a BETWEEN is effectively >= and <=). The selectivity of one day is just the density of the column, 0.0001, as illustrated in the previous query.

Therefore, the query selectivity is derived as being (101/270417) + (2 x 0.0001) = 0.000573 when multiplied by 1M rows = 573 rows as estimated by the CBO.
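That arithmetic is easy to sanity check straight from SQL (just plugging the numbers into the formula above, nothing CBO-internal here):

SQL> select round(((20151111-20151010)/(20160527-19890110) + 2*0.0001)*1000000) est_rows from dual;

  EST_ROWS
----------
       573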

So the CBO is rather significantly *under* estimating the rows to be returned which could result in a sub-optimal execution plan (such as the inappropriate use of an index range scan as in this example, noting the poor clustering of the data).

If we now look at another range scan below:

SQL> select * from ziggy where ziggy_date between 20151225 and 20160101;
    
800 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2421001569

---------------------------------------------------------------------------
| Id | Operation         | Name  | Rows  | Bytes | Cost (%CPU) | Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT  |       | 33023 |  483K |    810 (15) | 00:00:01 |
|* 1 | TABLE ACCESS FULL | ZIGGY | 33023 |  483K |    810 (15) | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ZIGGY_DATE">=20151225 AND "ZIGGY_DATE"<=20160101)

Statistics
----------------------------------------------------------
    1 recursive calls
    0 db block gets
 2824 consistent gets
    0 physical reads
    0 redo size
23850 bytes sent via SQL*Net to client
 1135 bytes received via SQL*Net from client
   55 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
  800 rows processed

The actual number of days between these dates is only 8 so the actual ratio of data returned is 8/10000 x 1M rows = 800 rows. This is a range of “numbers” that overall covers a relatively “unpopular” range of date values.

However Oracle is estimating a range of some 20160101 – 20151225 = 8876 days between these dates. As the total range of possible days is 20160527-19890110 = 270,417, the estimated ratio of returned rows is 8876/270417 plus 2 x the selectivity of a single day again for the 2 implicit equality conditions.

Therefore, the query selectivity is derived as being (8876/270417) + (2 x 0.0001) = 0.033023 when multiplied by 1M rows = 33,023 rows as estimated by the CBO.
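Again, plugging the numbers into the same formula reproduces the CBO estimate:

SQL> select round((8876/270417 + 2*0.0001)*1000000) est_rows from dual;

  EST_ROWS
----------
     33023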

So the CBO is rather significantly *over* estimating the rows to be returned which could again result in a sub-optimal execution plan (or the inappropriate use of a Full Table Scan in this example). The CBO is simply not picking up the fact that most of the possible values between the “number” ranges aren’t valid dates and can’t possibly exist.

Of course, having dates stored as simple numbers means Oracle has no way of ensuring data integrity and can allow “invalid” dates to be inserted:

SQL> insert into ziggy values (1000001, 42, 20160599);
            
1 row created.

SQL> rollback;

Rollback complete.

As with dates stored as strings, we can again address these issues by either collecting histograms for such columns and/or by creating a function-based date index on the column:

SQL> create index ziggy_date_fn_i on ziggy(to_date(ziggy_date,'YYYYMMDD'));

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY');

PL/SQL procedure successfully completed.

SQL> select column_name, num_distinct, density, histogram, hidden_column, virtual_column from dba_tab_cols where table_name='ZIGGY';

COLUMN_NAME  NUM_DISTINCT    DENSITY HISTOGRAM       HID VIR
------------ ------------ ---------- --------------- --- ---
SYS_NC00004$        10000      .0001 NONE            YES YES
ZIGGY_DATE          10000      .0001 HYBRID          NO  NO
CODE                 1000       .001 NONE            NO  NO
ID                1000000    .000001 NONE            NO  NO

The associated query with the equality predicate has accurate estimates as it did previously:

SQL> select * from ziggy where to_date(ziggy_date, 'YYYYMMDD') = '13-JUN-2015';

100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 945728471

-------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name            | Rows | Bytes  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                 |  100 |   2300 |     103 (0)| 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY           |  100 |   2300 |     103 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_DATE_FN_I |  100 |        |       3 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(TO_DATE(TO_CHAR("ZIGGY_DATE"),'YYYYMMDD')=TO_DATE(' 2015-06-13 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
 111 consistent gets
   0 physical reads
   0 redo size
2877 bytes sent via SQL*Net to client
 618 bytes received via SQL*Net from client
   8 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
 100 rows processed

As the virtual column created for the function-based index also has 10,000 distinct values and a corresponding density of 0.0001, the CBO is getting the cardinality estimate of 100 rows spot on.

But importantly, both associated range based queries are now also being accurately costed by the CBO as it now knows the data being searched is date based and hence can more accurately determine the actual expected dates to be returned within the specified “date” ranges.

SQL> select * from ziggy where to_date(ziggy_date, 'YYYYMMDD') between '10-OCT-2015' and '11-NOV-2015';

3300 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2421001569

---------------------------------------------------------------------------
| Id | Operation         | Name  | Rows | Bytes | Cost (%CPU) | Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT  |       | 3400 | 78200 |   1061 (35) | 00:00:01 |
|* 1 | TABLE ACCESS FULL | ZIGGY | 3400 | 78200 |   1061 (35) | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(TO_DATE(TO_CHAR("ZIGGY_DATE"),'YYYYMMDD')>=TO_DATE('
2015-10-10 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
TO_DATE(TO_CHAR("ZIGGY_DATE"),'YYYYMMDD')<=TO_DATE(' 2015-11-11
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Statistics
----------------------------------------------------------
    8 recursive calls
    0 db block gets
 2991 consistent gets
    0 physical reads
    0 redo size
95829 bytes sent via SQL*Net to client
 2961 bytes received via SQL*Net from client
  221 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
 3300 rows processed

The CBO is now estimating not 573 rows, but 3,400 rows which is much closer to the actual 3,300 rows being returned. As a result, the CBO is now performing a more efficient Full Table Scan (due to the poor Clustering Factor of the index) than the Index Range Scan performed previously.

If we look at the other range scan query:

SQL> select * from ziggy where to_date(ziggy_date, 'YYYYMMDD') between '25-DEC-2015' and '01-JAN-2016';

800 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 945728471

-------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name            | Rows | Bytes | Cost (%CPU) | Time     |
-------------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |                 |  900 | 20700 |      909 (1)| 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY           |  900 | 20700 |      909 (1)| 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_DATE_FN_I |  900 |       |        5 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access(TO_DATE(TO_CHAR("ZIGGY_DATE"),'YYYYMMDD')>=TO_DATE(' 2015-12-25 00:00:00',
'syyyy-mm-dd hh24:mi:ss') AND TO_DATE(TO_CHAR("ZIGGY_DATE"),'YYYYMMDD')<=TO_DATE(' 2016-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Statistics
----------------------------------------------------------
    8 recursive calls
    0 db block gets
  861 consistent gets
    7 physical reads
    0 redo size
18917 bytes sent via SQL*Net to client
 1135 bytes received via SQL*Net from client
   55 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
  800 rows processed

The CBO is now estimating not 33,023 rows, but 900 rows, which is again much closer to the actual 800 rows being returned. As a result, the CBO is now performing a more efficient Index Range Scan than the Full Table Scan it was performing previously.

And of course, via the function-based date index, the database now has a manner in which to protect the integrity of the date data:

SQL> insert into ziggy values (1000001, 42, 20160599);
insert into ziggy values (1000001, 42, 20160599)
*
ERROR at line 1:
ORA-01847: day of month must be between 1 and last day of month

However, the best way in Oracle to store “Date” data is within a Date data type column …
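For completeness, the native approach requires no tricks at all (a minimal sketch, with illustrative table and index names):

SQL> create table ziggy_dates (id number, code number, ziggy_date date);

SQL> create index ziggy_dates_date_i on ziggy_dates(ziggy_date);

A DATE column gives the CBO accurate range arithmetic, rejects invalid dates on insert, and supports all the native date functions and formats.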

Storing Date Values As Characters Part II (A Better Future) May 30, 2016

Posted by Richard Foote in 12c, CBO, Function Based Indexes, Oracle Indexes, Storing Dates as Characters.
2 comments

In the previous post, I discussed how storing date values within a character data type is a really really bad idea and illustrated how the CBO can easily get its costings totally wrong as a result. A function-based date index helped the CBO get the correct costings and protect the integrity of the date data.

During the demo, I re-collected statistics on the table because the hidden virtual column associated with the newly created function-based index initially has no statistics.

Before re-collecting statistics:

SQL> select column_name, num_distinct, density, histogram, hidden_column, virtual_column
     from dba_tab_cols where table_name='BOWIE';
 
COLUMN_NAME  NUM_DISTINCT DENSITY     HISTOGRAM      HID VIR
------------ ------------ ---------- --------------- --- ---
SYS_NC00004$                         NONE            YES YES
BOWIE_DATE          10000      .0001 NONE            NO  NO
CODE                 1000       .001 NONE            NO  NO
ID                1000000    .000001 NONE            NO  NO

And afterwards:

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE');

PL/SQL procedure successfully completed. 

SQL> select column_name, num_distinct, density, histogram, hidden_column, virtual_column 
from dba_tab_cols where table_name='BOWIE'; 

COLUMN_NAME  NUM_DISTINCT DENSITY    HISTOGRAM       HID VIR 
------------ ------------ ---------- --------------- --- --- 
SYS_NC00004$        10000      .0001 NONE            YES YES 
BOWIE_DATE          10000      .0001 HYBRID          NO  NO 
CODE                 1000       .001 NONE            NO  NO 
ID                1000000    .000001 NONE            NO  NO


We can see that the hidden virtual column now has statistics.

But we also notice another difference, that being the BOWIE_DATE column now has a histogram (of type Hybrid).

As discussed in the previous post, the issue here is that the date data within the character column covers only a very specific subset of all the potential character values that could reside within the column. Therefore the CBO is getting the range scan selectivity hopelessly incorrect.

Now that we’ve run a few queries featuring the BOWIE_DATE column in the predicates and as there’s effectively data skew within the column, the column becomes a candidate for a histogram with the default SIZE AUTO collection method.
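If you’d rather not rely on column usage tracking and SIZE AUTO, the histogram can also be requested explicitly (a sketch; the bucket count is illustrative):

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', method_opt=>'FOR COLUMNS BOWIE_DATE SIZE 254');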

The histogram now provides the CBO with a much more accurate picture of the distribution of the data within the BOWIE_DATE column, and of the fact that between discrete “date” column values, only so many rows qualify.

As a result of the histogram, the CBO can now make much more accurate cardinality estimates.

If we now re-run the query that actually returns 8,300 rows but for which the CBO previously estimated only 100 rows would be returned:

SQL> select * from bowie where bowie_date between '2015 10 10' and '2015 12 31';
                                             
8300 rows selected.
                                                            
Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

---------------------------------------------------------------------------
| Id | Operation         | Name  | Rows | Bytes | Cost (%CPU) | Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT  |       | 4152 | 83040 |   1000 (12) | 00:00:01 |
|* 1 | TABLE ACCESS FULL | BOWIE | 4152 | 83040 |   1000 (12) | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("BOWIE_DATE">='2015 10 10' AND "BOWIE_DATE"<='2015 12 31')

Statistics
----------------------------------------------------------
    14 recursive calls
     0 db block gets
  4063 consistent gets
     0 physical reads
     0 redo size
282075 bytes sent via SQL*Net to client
  6635 bytes received via SQL*Net from client
   555 SQL*Net roundtrips to/from client
     1 sorts (memory)
     0 sorts (disk)
  8300 rows processed

We see that at an estimated 4,152 rows, it’s a much better estimate. Not perfect, but maybe good enough to now get the more efficient Full Table Scan execution plan.

If we re-run the query that returned over 1/2 the table at some 570,000 rows but with the CBO previously estimating only 116 rows:

SQL> select * from bowie where bowie_date between '2000 10 10' and '2016 12 31';
  
570800 rows selected.
    
Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

---------------------------------------------------------------------------
| Id | Operation         | Name  | Rows | Bytes | Cost (%CPU) | Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT  |       | 572K |   10M |   1012 (13) | 00:00:01 |
|* 1 | TABLE ACCESS FULL | BOWIE | 572K |   10M |   1012 (13) | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("BOWIE_DATE">='2000 10 10' AND "BOWIE_DATE"<='2016 12 31')
   
Statistics
----------------------------------------------------------
      14 recursive calls
       0 db block gets
   41456 consistent gets
       4 physical reads
       0 redo size
19292352 bytes sent via SQL*Net to client
  419135 bytes received via SQL*Net from client
   38055 SQL*Net roundtrips to/from client
       1 sorts (memory)
       0 sorts (disk)
  570800 rows processed


We see that at an estimate of 572K rows, it’s now got this just about right and again has made the right decision with the Full Table Scan execution plan.

Storing date data in character based columns is still a really really bad idea and limits the manner in which date data can be analysed, protected and accessed, but with appropriate histograms in place, at least the CBO has some chance of making a reasonable fist of things with some range based queries.

As a follow-up, I was asked if storing dates in NUMBER format is a better option than as a string. I’ll discuss that next.