
European Indexing Internals Seminar Events: Please Help Me Select (Station To Station) February 20, 2018

Posted by Richard Foote in Oracle Indexes.


I’m currently determining which European countries to initially target for a series of my 2 day Oracle Indexing Internals and Best Practices seminars tentatively scheduled for the late May / early June time frame.

The cost of the 2 day seminar is to be confirmed but will be approximately 1200 Euros (plus any local taxes), which includes a guaranteed small class setting, lunch, and both hard and soft copies of the exclusive 800+ page seminar materials.

To help me decide which 4 countries to initially target, can you please leave me a comment or contact me directly (richard@richardfooteconsulting.com) if you would be interested in attending a seminar in a particular country.

This is a must-attend seminar of benefit not only to DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high-performance Oracle-based applications. It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts).

I’ve now run this seminar in some 20 countries to critical acclaim. The last 30 people who have attended have scored the seminar a straight 5 out of 5.

For full details on seminar content, see my Index Seminar page.

The more people who express an interest in attending the seminar in a particular country, the more likely I’ll select the country as part of the initial May/June series.

Don’t hesitate to contact me if you have any questions regarding the seminar.

“The European cannon is here” !!


Index Skip Scan: Potential Use Case or Maybe Not ? (Shine On You Crazy Diamond) January 30, 2018

Posted by Richard Foote in Oracle Indexes.


A recent question I answered on a LinkedIn forum got me thinking about whether there’s a potential use case for an INDEX SKIP SCAN that I hadn’t previously considered.

I’ve discussed Index Skip Scans previously (as I did here), a feature introduced in Oracle9i that allows an index to be considered by the CBO even if the leading column of the index is not included in a query predicate. However, the index is only going to be used by the CBO if there are relatively few distinct values in the missing leading column, as Oracle effectively has to scan the index separately for each potential leading column value. If there are too many distinct values, each scan might not result in sufficient index leaf blocks being “skipped”, thereby making the INDEX SKIP SCAN access path too inefficient.

But it occurred to me that the strategy of using an index skip scan could also potentially be applied when performing a sort-based aggregate function. Or maybe not.

So a little test case to find out.

I begin by creating a table with 1M rows. The key here though is that the CODE column only has 3 distinct values, so any aggregation based on CODE will only return 3 or fewer rows.

SQL> create table bowie (id number not null, code number not null, sales number not null, name varchar2(42));

Table created.

SQL> insert into bowie select rownum, mod(rownum,3), ceil(dbms_random.value(0,10000)), 'David Bowie' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

I now create an index based on the CODE and SALES columns, with CODE the leading column:

SQL> create index bowie_i on bowie(code, sales);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

As CODE only has very few distinct values, the index is a candidate for an INDEX SKIP SCAN if CODE is not specified in a predicate. For example:

SQL> select code, sales from bowie where sales=1;

100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3861840421

----------------------------------------------------------------------------
| Id | Operation        | Name    | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |         |  100 |   700 |       5 (0) | 00:00:01 |
|* 1 | INDEX SKIP SCAN  | BOWIE_I |  100 |   700 |       5 (0) | 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("SALES"=1)
    filter("SALES"=1)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
  31 consistent gets
   0 physical reads
   0 redo size
2305 bytes sent via SQL*Net to client
 674 bytes received via SQL*Net from client
   8 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
 100 rows processed

So an INDEX SKIP SCAN has indeed been used by the CBO and, at just 31 consistent gets, it’s not a bad result when fetching the 100 rows of interest. With so few distinct values of CODE, Oracle only has to perform a relatively small number of scans of the index to retrieve all possible SALES values of interest across each of the (3) CODE values. At a cost of just 5, the CBO has estimated that relatively few index leaf blocks indeed need to be accessed here.

It’s also worth mentioning at this point that Oracle can also use the index very effectively to return just the MIN (or potentially MAX) SALES column for each CODE value of interest, as it only has to read the first CODE index entry to subsequently determine the associated minimum SALES value:

SQL> select min(sales) from bowie where code=1;

MIN(SALES)
----------
1

Execution Plan
----------------------------------------------------------
Plan hash value: 2634611566

----------------------------------------------------------------------------------------
| Id | Operation                  | Name    | Rows | Bytes | Cost (%CPU) | Time      |
----------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT           |         |    1 |     7 |       3 (0) |  00:00:01 |
| 1  | SORT AGGREGATE             |         |    1 |     7 |             |           |
| 2  | FIRST ROW                  |         |    1 |     7 |       3 (0) |  00:00:01 |
|* 3 | INDEX RANGE SCAN (MIN/MAX) | BOWIE_I |    1 |     7 |       3 (0) |  00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("CODE"=1)

Statistics
----------------------------------------------------------
  0 recursive calls
  0 db block gets
  3 consistent gets
  0 physical reads
  0 redo size
544 bytes sent via SQL*Net to client
608 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

At just 3 consistent gets, Oracle has only had to read the first CODE=1 index entry to immediately determine the minimum associated SALES value.

In theory, Oracle could use the same strategies when processing a GROUP BY aggregate query to very quickly and efficiently determine the minimum SALES value for each distinct CODE in my data. Oracle knows there are only 3 distinct CODE values, and with a combination of using the index to quickly access the minimum (first) SALES of each CODE value and an INDEX SKIP SCAN to quickly re-scan the index to get to the next CODE index entry, an index could be used to very quickly and efficiently find and retrieve the necessary data set in the following query:

SQL> select code, min(sales) from bowie group by code;

CODE MIN(SALES)
---------- ----------
1 1
2 1
0 1

Execution Plan
----------------------------------------------------------
Plan hash value: 2387048003

---------------------------------------------------------------------------------
| Id | Operation            | Name    | Rows  | Bytes | Cost (%CPU) | Time     |
---------------------------------------------------------------------------------
| 0  | SELECT STATEMENT     |         |     3 |    21 |   1079 (43) | 00:00:01 |
| 1  | HASH GROUP BY        |         |     3 |    21 |   1079 (43) | 00:00:01 |
| 2  | INDEX FAST FULL SCAN | BOWIE_I | 1000K | 6835K |    677 (10) | 00:00:01 |
---------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
2477 consistent gets
   0 physical reads
   0 redo size
 677 bytes sent via SQL*Net to client
 608 bytes received via SQL*Net from client
   2 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   3 rows processed

However, only an INDEX FAST FULL SCAN is used, at a relatively expensive 2477 consistent gets. Note that Oracle is using a HASH GROUP BY for its aggregation by default (and not some form of sort/group type aggregation). As a result, an ordered index access is not considered here, and note also that the final result set is NOT in CODE order (the data is returned in 1,2,0 CODE order).

We could try to force the use of an INDEX SKIP SCAN via a hint:

SQL> select /*+ index_ss (bowie, bowie_i) */ code, min(sales) from bowie group by code;

CODE MIN(SALES)
---------- ----------
0 1
1 1
2 1

Execution Plan
----------------------------------------------------------
Plan hash value: 944722262

--------------------------------------------------------------------------------
| Id | Operation            | Name    | Rows  | Bytes | Cost (%CPU) | Time     |
--------------------------------------------------------------------------------
| 0  | SELECT STATEMENT     |         |     3 |    21 |    2558 (4) | 00:00:01 |
| 1  | SORT GROUP BY NOSORT |         |     3 |    21 |    2558 (4) | 00:00:01 |
| 2  | INDEX SKIP SCAN      | BOWIE_I | 1000K | 6835K |    2558 (4) | 00:00:01 |
--------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
2469 consistent gets
   0 physical reads
   0 redo size
 677 bytes sent via SQL*Net to client
 608 bytes received via SQL*Net from client
   2 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   3 rows processed

It’s now using an INDEX SKIP SCAN in the plan. But it’s not being “clever” in its use of the INDEX SKIP SCAN: the 2469 consistent gets suggest Oracle is not making effective use of the (MIN/MAX) scan capability for each distinct CODE value and is unnecessarily reading most of the index leaf blocks.

Oracle however is now using a sort/group based process when performing its aggregation (as evidenced in the second step of the plan), resulting in the data now being returned in CODE order.

Even if we take the aggregation out of the equation with a simple MIN based query accessing more than one CODE value:

SQL> select min(sales) from bowie where code in (1,2);

Execution Plan
----------------------------------------------------------
Plan hash value: 1467404054

------------------------------------------------------------------------------
| Id | Operation        | Name    | Rows | Bytes | Cost (%CPU) | Time      |
------------------------------------------------------------------------------
| 0  | SELECT STATEMENT |         |    1 |     7 |       3 (0) |  00:00:01 |
| 1  | SORT AGGREGATE   |         |    1 |     7 |             |           |
| 2  | INLIST ITERATOR  |         |      |       |             |           |
|* 3 | INDEX RANGE SCAN | BOWIE_I |    1 |     7 |       3 (0) |  00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("CODE"=1 OR "CODE"=2)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
1676 consistent gets
   0 physical reads
   0 redo size
 544 bytes sent via SQL*Net to client
 608 bytes received via SQL*Net from client
   2 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   1 rows processed

The CBO still doesn’t consider INDEX SKIP SCAN style processing in this scenario and uses a relatively inefficient INDEX RANGE SCAN instead.
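One possible manual workaround (just a sketch of a rewrite, not something the CBO derives itself) is to break the IN-list into separate single-value MIN scalar subqueries. Each subquery then has an equality predicate on the leading CODE column, making each eligible for its own INDEX RANGE SCAN (MIN/MAX) at roughly 3 consistent gets apiece:

```sql
-- Hypothetical rewrite: each scalar subquery has an equality predicate
-- on CODE, so each can potentially use an INDEX RANGE SCAN (MIN/MAX)
-- rather than one expensive range scan across both CODE values.
select least(
         (select min(sales) from bowie where code = 1),
         (select min(sales) from bowie where code = 2)
       ) as min_sales
from dual;
```

The obvious downside is that the query has to be hand-written per CODE value, which only scales if the set of values of interest is small and known in advance.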

So unfortunately, although I have an existing index in place that, if used effectively, could potentially return the GROUP BY result set very efficiently, the CBO is not using it that way. The CBO doesn’t appear to be able to use an INDEX SKIP SCAN in combination with multiple MIN/MAX scans in scenarios where it perhaps could.
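For completeness, the classic manual “loose index scan” emulation (a sketch only, assuming Oracle 11g Release 2 or later for recursive subquery factoring) can walk the distinct CODE values one at a time, with each step a candidate for a MIN/MAX style index probe rather than a full scan of the index:

```sql
-- Hypothetical rewrite: recursively fetch the next distinct CODE value,
-- then probe the minimum SALES for each via the (CODE, SALES) index.
with c (code) as (
  select min(code) from bowie                                   -- first CODE
  union all
  select (select min(code) from bowie where code > c.code)      -- next CODE
  from c
  where c.code is not null                                      -- stop when exhausted
)
select code,
       (select min(sales) from bowie where bowie.code = c.code) as min_sales
from c
where code is not null;
```

With only 3 distinct CODE values, this style of rewrite can in principle resolve the query in a handful of consistent gets rather than the thousands seen above, though as always the actual plans and costs should be verified in your own environment.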

Announcement: Oracle Indexing Internals Seminars Coming to New Zealand in March 2018 (Shipyards of New Zealand) January 22, 2018

Posted by Richard Foote in Oracle Indexes.


I’m very pleased to announce I’ve now finalised some dates in New Zealand for my popular and highly acclaimed “Oracle Indexing Internals and Best Practices” seminar. They are:

Wellington 12-13 March 2018 (Auldhouse: Level 8 Lumley House, 11 Hunter Street): Tickets and Registration Link

Auckland 15-16 March 2018 (Karstens: Level 4, 205 Queen Street): Tickets and Registration Link

As usual, numbers will be strictly limited due to the small class nature of the seminars, so if interested I advise booking early to avoid disappointment as seminars will obviously not be scheduled too regularly in NZ.

Early bird rates are available until 23 March or until the seminars are full. Venues have now been finalised and are listed above.

I hope to catch up with some of my many NZ friends at one of these events 🙂

Early Bird Extension – UK February Dates: “Oracle Indexing Internals and Best Practices” Seminar December 20, 2017

Posted by Richard Foote in Oracle Indexes.


As a Christmas present to those in the UK looking at attending my “Oracle Indexing Internals and Best Practices” seminar in February next year, the Early Bird rates are now available until 19th January 2018.

Take this opportunity to attend this highly acclaimed seminar that is packed full of information designed to significantly improve the performance and maintainability of your applications and Oracle databases.

Location, dates and registration for these events are as follows:

London 12-13 February 2018: (Hilton London Kensington). Tickets & Registration Link.

Birmingham 15-16 February 2018: (Novotel Birmingham Centre). Tickets & Registration Link.

Manchester 19-20 February 2018: (Novotel Manchester City Hotel). Tickets & Registration Link.

Glasgow 22-23 February 2018: (Glasgow Training Rooms). Tickets and Registration Link.

Belfast 26-27 February 2018: (Ormeau Business Park). Tickets and Registration Link.

 

May I take this opportunity to wish everyone a wonderful Christmas and a prosperous, happy and David Bowie filled New Year !!

Great Britain and Northern Ireland February 2018 Dates: “Oracle Indexing Internals and Best Practices” Seminar (Battle For Britain) November 17, 2017

Posted by Richard Foote in Index Internals Seminar, Oracle Indexes, Richard Foote Consulting, Richard Presentations.


UPDATE: ALL TICKETS ARE NOW AVAILABLE FOR PURCHASE !!

Attention Oracle Professionals in the United Kingdom !!

I have now finalised all the dates and venues for the series of my popular and critically acclaimed “Oracle Indexing Internals and Best Practices” seminars I’ll be running in the UK in February 2018.

I’m extremely excited as this will be the first time I’ve delivered this seminar in my old homeland, despite having now delivered versions of this seminar in some 18 countries.

This is a must-attend seminar of benefit not only to DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high-performance Oracle-based applications/databases. It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts). Full details of the 2 day seminar content can be found here.

Location, dates and registration for these events have been finalised as follows:

London 12-13 February 2018: (Hilton London Kensington). Tickets & Registration Link.

Birmingham 15-16 February 2018: (Novotel Birmingham Centre). Tickets & Registration Link.

Manchester 19-20 February 2018: (Novotel Manchester City Hotel). Tickets & Registration Link.

Glasgow 22-23 February 2018: (Glasgow Training Rooms). Tickets and Registration Link.

Belfast 26-27 February 2018: (Ormeau Business Park). Tickets and Registration Link.

 


Early Bird Rate (enrolments prior to 12 January 2018) £990.00 (+ VAT)

General Rate  (enrolments post 12 January 2018) £1200 (+VAT)

Prices include attendance to the seminar, both soft and hard copy of the extensive seminar materials, lunch and morning/afternoon tea/coffee.

All seminars will have strictly limited places to ensure a quality event for all attendees and all venues are booked with only small classes in mind. So I recommend booking early (as it’s cheaper) and to avoid possible disappointment. I won’t get to run these kind of events in the UK very often so take advantage of attending what will be a unique training opportunity while you can.

If you have any questions, please leave a comment or contact me at richard@richardfooteconsulting.com.

Hope to see you at one of these events next year !!

 

Indexing Seminar: Why Small Classes? (A Small Plot Of Land) October 5, 2017

Posted by Richard Foote in Index Internals, Index Internals Seminar, Oracle Indexes, Richard Foote Consulting, Richard's Musings.


Firstly, a huge thank-you to everyone who has enrolled in one of my “Indexing Internals and Best Practices” seminars running soon in Australia. The feedback and overall reaction has been extremely positive and I can’t wait to present all the material I’ve been putting together these past few weeks.

There are currently limited places still available at each of the 6 venues, but places are selling out fast. All enrolment details can be found here: https://www.eventbrite.com.au/o/richard-foote-consulting-pty-ltd-15076769762

A question I’ve been asked a couple of times is why the insistence on only running small classes? I’ve previously run similar events overseas to large audiences: I’ve had classes of 40+ a number of times in Europe, and I ran a one day version of this at Hotsos last year to more people than I could count (I’m guessing 80+). While they were all highly successful, I will be restricting classes to much smaller sizes on this Australian tour and into the future. There are a number of reasons for this, with the key points being:

  • The seminar has been fully revised and updated and is packed with heaps of new content. I want to be able to cover as much material as is practical and this is only possible with small classes. The more I cover, the more attendees have the opportunity to learn.
  • I really want these seminars to be interactive events, with attendees having plenty of opportunity to ask questions. This makes it a more valuable experience, as any ambiguities, doubts, environment-specific questions, important bits that people don’t get first time, etc. can be clarified and addressed.
  • The seminar runs for 2 days, but if I had 50 people and each person asked 15 minutes’ worth of questions, that’s almost the entire time gone. So larger class sizes make it impractical to have any meaningful Q&A.
  • Attendees bring vast amounts of experience and solutions to Oracle indexing related scenarios, as all Oracle databases out there utilise indexes. Even on Exadata. Even in future autonomous databases. Small classes give me the opportunity to tap into this.
  • I want to learn just as much as everybody else and if I do all the talking, I’ll learn nothing new. Small classes give me the opportunity to learn, based on the questions asked and experiences shared.
  • All attendees have their own requirements and expectations when attending the seminar. If someone has specific issues in their environment, what a wonderful opportunity to learn and perhaps get those issues addressed by attending the seminar. Again, this is only practical in small class environments.
  • I will be contacting all attendees BEFORE the seminars to determine what their individual expectations are from the seminars. I will then ensure that any appropriate material is fully covered to meet those expectations. This again is only practical with small class sizes.
  • Finally, you’re spending 2 days with the one and only Richard Foote 🙂 As well as learning more about indexes than you thought possible, learning how an appropriate indexing strategy can dramatically improve database/application performance, and discovering an appreciation of the works of David Bowie, it’s also an opportunity to have a sense of one-on-one with me, knowing I’m there to directly assist “you” with truly understanding how Oracle indexes should be properly utilised. There is little sense of one-on-one with large class sizes.

 

So these seminars are designed very intentionally to be “premium” events, offering a training experience that really is second to none. These seminars have always been very well received, so I’m truly excited to be running these events with the best content I’ve ever compiled. Small class settings give all attendees the very best opportunity to absorb it all and directly apply what they’ve learnt to improving the performance of their database/application environments.

Like I said, there are only limited places now remaining for the Australian Tour so get in early or else risk missing out.

Events in other parts of our beautiful planet are currently being planned and will be announced in the near future. If you have any questions or requests on where to run these seminars, just ask: richard@richardfooteconsulting.com

Oracle Indexing Myths (Telling Lies) September 26, 2017

Posted by Richard Foote in Oracle Indexes.


I’m currently hard at work fully revising and updating my “Indexing Internals and Best Practices” seminar in time for a series of events I’ll be running in Australia in October/November. This has been a highly popular and acclaimed 2 day seminar that is a must for any DBA, Developer, Solutions Architect or anyone else interested in designing, developing or maintaining high performance Oracle-based databases/applications.

For a number of reasons, there are many myths in relation to how Oracle Indexes function, how they should be maintained and how and when indexes should be implemented. One of the goals of my seminar is to go through and dispel many of these myths and demonstrate how these misconceptions can unfortunately often result in inappropriate and inefficient indexing strategies being deployed. I have investigated numerous customer databases (including large scale Exadata sites) where the database is running in a severely sub-optimal manner as a direct result of  inappropriate indexing implementations based on these myths.

Some of these misconceptions are just plain wrong, some are true in other databases but not Oracle, some might have once been true but not anymore, some might be mostly true but not always, some might be mostly false but not always and it’s often these exceptions that can make such an important difference in how indexes could/should be deployed in some environments. Words such as “never” ,”always”, “all” and “none” are dangerous to use when discussing technology as exceptions, caveats and limitations can impact/distort the truth and how such technology should be implemented.

My use of the term “Best Practices” in the seminar title is actually a deliberate, somewhat cynical reuse of a term that gets many customers in strife, as what might be a so-called “Best Practice” in one environment might be totally inappropriate in another.

In no particular order, the following Oracle Indexing “myths” (or misleading generalisations) are discussed throughout my seminar and are still prevalent out there in the Oracle universe:

  • Oracle B-Tree indexes become “unbalanced” over time and need to be rebuilt
  • Deleted space in index is “deadwood” and over time requires index to be rebuilt
  • If index height greater than ‘x’, it becomes inefficient and needs to be rebuilt
  • If index grows to 2x its height, it is 2x more costly to use
  • PCTFREE enables space for index entries to grow within current leaf block
  • If index has a poor (very high) Clustering Factor, rebuild the index
  • To improve the Clustering Factor, you have to rebuild the underlying table
  • Clustering Factor “as good as it gets” after gathering 100% estimate index statistics
  • To improve performance, regularly rebuild indexes
  • You never have to rebuild an index to improve performance
  • Statistics from INDEX_STATS provides reliable metrics on when to rebuild an index
  • If delete rows as reported in INDEX_STATS > x%, rebuild
  • If delete rows as reported in INDEX_STATS < x%, don’t rebuild
  • Analyze Index Validate Structure is a safe method of collecting index metrics
  • Index rebuilds are inexpensive and unobtrusive
  • Primary/Unique Key constraints require a unique index
  • If you drop/disable a constraint, the unique index policing it is automatically dropped
  • All Foreign Key constraints must be indexed
  • Indexes should eliminate sorting
  • Only indexed columns require statistics
  • Bitmap Indexes only useful with low cardinality columns
  • A Bitmap Index will be used by the CBO for low cardinality columns when a B-Tree index would not be
  • Null values are not indexed
  • Small tables (say < 100 rows) don’t benefit from indexing
  • Separating indexes from tables in tablespaces improves performance
  • Range scans not possible with Reverse Key indexes
  • Local indexes improve performance
  • Put most discriminating column first in concatenated indexes
  • If SQL references all columns in index, index column order is irrelevant
  • If leading column of index is not referenced in SQL, index not considered by CBO
  • Monitoring Indexes will highlight which indexes can be safely dropped
  • Indexing Tracking will highlight which indexes can be safely dropped
  • Index Compression make indexes smaller
  • B-Tree Index only useful with high cardinality columns
  • Pointless indexing a column with one distinct value
  • If more than x% rows returned, index is inappropriate, where x% between 0 & 100
  • Full Table Scan more efficient than index range scan with table access, when returning 100% of data
  • The CBO cost is an internal value of no practical use for tuning/comparison purposes
  • Index is best solution to return 0% of data
  • You don’t need indexes in Exadata
  • Indexes less important in Exadata, as Storage Indexes can take over if database index is missing
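To illustrate just one of these (a quick sketch, not an extract from the seminar materials): the “null values are not indexed” generalisation only holds when every column in a potential index entry is null. Appending a constant to the index guarantees every row is indexed, making IS NULL predicates indexable:

```sql
-- Hypothetical example on the BOWIE table from the earlier post.
-- An index entry is only excluded when ALL its columns are null, so
-- the trailing constant ensures every row, including those with a
-- null NAME, has an index entry.
create index bowie_name_idx on bowie(name, 0);

-- This predicate can now be serviced by an index access path:
select * from bowie where name is null;
```

It’s exactly these kinds of exceptions and caveats that make the blanket statements above so dangerous.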

It’s important to note that it can potentially take just one bad row inserted into a table, one inaccurate statistic, one index used inappropriately, or one missing index not created during an application upgrade, to bring down a whole Oracle database (RAC nodes running on Exadata included)…

If you want to learn why the above statements are misleading and lots, lots, lots more, come and attend my Indexing Seminar. Tickets for the Australian events are on sale here: https://www.eventbrite.com.au/o/richard-foote-consulting-pty-ltd-15076769762

Dates for other regions will be announced in the near future: https://richardfooteconsulting.com/seminardates/

If you wish to express an interest in attending a seminar near you, please email: richard@richardfooteconsulting.com.

 

 

 

“Oracle Indexing Internals & Best Practices” Seminar: Australia 2017 Tour (Stage) September 19, 2017

Posted by Richard Foote in Index Internals Seminar, Oracle Indexes, Richard Foote Consulting.


I’ll be running a fully revised and updated version of my acclaimed “Oracle Indexing Internals and Best Practices” seminar throughout Australia in Oct/Nov 2017. Previous versions of this seminar have been hugely popular and run in some 18 countries. This updated version will feature lots of new material including 12c related indexing capabilities and recommendations.

Indexes are fundamental to every Oracle database and are crucial for optimal performance. However, there’s an incredible amount of misconception, misunderstanding and pure myth regarding how Oracle indexes function and should be maintained. Many applications and databases are suboptimal and run inefficiently primarily because an inappropriate indexing strategy has been implemented.

This seminar examines most available Oracle index structures/options and discusses in considerable detail how indexes function, how/when they should be used and how they should be maintained. A key component of the seminar is how indexes are costed and evaluated by the Cost Based Optimizer (CBO) and how appropriate data management practices are vital for an effective indexing strategy.  It also covers many useful tips and strategies to maximise the benefits of indexes on application/database performance and scalability, as well as in maximising Oracle database investments. Much of the material is exclusive to this seminar and is not generally available in Oracle documentation or in Oracle University courses.

This is a must-attend seminar of benefit to DBAs, Developers, Solution Architects and anyone else interested in designing, developing or maintaining high-performance Oracle-based applications. It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts). All seminars are small class environments, with plenty of opportunity for attendees to ask questions specific to their particular environment.

Full seminar details can be found on my new website: https://richardfooteconsulting.com/seminardates

The full Australian itinerary and enrolment details are as follows:

Perth 30-31 October 2017: For full details and how to enrol (enrolments close on 22 October or when event is full): https://www.eventbrite.com.au/e/oracle-indexing-internals-and-best-practices-seminar-with-richard-foote-tickets-37783577698

Adelaide 1-2 November 2017: For full details and how to enrol (enrolments close on 22 October or when event is full): https://www.eventbrite.com.au/e/oracle-indexing-internals-and-best-practices-seminar-with-richard-foote-tickets-37793217531

Canberra 6-7 November 2017: For full details and how to enrol (enrolments close on 28 October or when event is full): https://www.eventbrite.com.au/e/oracle-indexing-internals-and-best-practices-seminar-with-richard-foote-tickets-37793625752

Brisbane 8-9 November 2017: For full details and how to enrol (enrolments close on 1 November or when event is full): https://www.eventbrite.com.au/e/oracle-indexing-internals-and-best-practices-seminar-with-richard-foote-tickets-37793828358

Melbourne 13-14 November 2017: For full details and how to enrol (enrolments close on 4 November or when event is full): https://www.eventbrite.com.au/e/oracle-indexing-internals-and-best-practices-seminar-with-richard-foote-tickets-37794100171

Sydney 15-16 November 2017: For full details and how to enrol (enrolments close on 7 November or when event is full): https://www.eventbrite.com.au/e/oracle-indexing-internals-and-best-practices-seminar-with-richard-foote-tickets-37794680908

Note: Numbers are strictly limited due to the small class nature of the seminars. Please book early to avoid disappointment as seminars are not scheduled regularly.

For additional information on seminar content, upcoming seminars or training requests, contact me at richard@richardfooteconsulting.com.

Richard Foote Consulting Has Arrived (New Angels Of Promise) September 18, 2017

Posted by Richard Foote in Index Internals Seminar, Oracle Indexes, Richard Foote Consulting.
6 comments

Today is the official launch of Richard Foote Consulting, my new independent company through which I’ll be providing specialist Oracle Database consulting and training services, focusing on database performance tuning, database problem resolution and database health check assessments (at least until everyone moves across to the new self-tuning Oracle Cloud database). Based on all the interest I’ve already received, this journey should be a blast.

For all the details, please visit the new website: richardfooteconsulting.com.

I will also be running training seminar events, starting with a fully revised and updated version of the very popular “Oracle Indexing Internals and Best Practices” seminar. The first series will be an Australian Tour in October/November. For all currently scheduled seminars, see my Seminar Dates Page or just check out the Blog sidebar for a list of upcoming seminars.

And most important of all, I might now have time to post blog articles more regularly 🙂

For information on upcoming seminars, consulting services, rates or availability, please email richard@richardfooteconsulting.com.

Presenting at UKOUG Tech17 Conference in December (Here Come The Warm Jets) August 30, 2017

Posted by Richard Foote in Tech17.
1 comment so far

I presented at the UKOUG Tech14 conference in Liverpool and thoroughly enjoyed the experience. It was one of the better User Group conferences I’ve attended and winning both the Best New Oracle Speaker and Best Oracle Speaker awards made it that bit more special.

So it’s with real excitement that I have the opportunity to again present at the UKOUG Tech17 conference in Birmingham 4-6 December 2017.

I have 4 sessions scheduled:

Real World Examples Of Using AWR Reports To Solve Performance Issues (Double Session): Monday, 4th December 16:55-18:40

CBO Panel: Tuesday, 5th December 9:00-9:50

New Indexing Features Introduced in Oracle Database 12c Release 2: Tuesday, 5th December 12:25-13:15

Indexing Q&A With Richard Foote: Tuesday, 5th December 15:40-16:30

Looking forward to catching up with lots of my Oracle friends at what is shaping up to be another fabulous conference. See you in sunny and extremely warm Birmingham, I hope !!

 

 

Goodbye Oracle Corporation (Modern Love) August 14, 2017

Posted by Richard Foote in Richard's Musings.
8 comments

 

It’s with a mixture of excitement, trepidation and a touch of sorrow that I today handed in my resignation after 6 years at Oracle Corporation.

My plans at this stage are to take a bit of time-off initially, before going back to providing Oracle Database consulting and training services as I’ve done in the past. Stay tuned for some exciting offerings !!

It’s truly been a great pleasure and honour to have worked with such a fantastic team at Oracle Australia. I sincerely wish Oracle continued success into the future.

Basic Index Compression Made Simple (It Ain’t Easy) August 2, 2017

Posted by Richard Foote in Index Compression, Index statistics, Oracle Indexes, Validate Structure.
add a comment


I’ve discussed Index Compression a number of times as it’s an excellent way of minimizing the size of indexes without unnecessary, expensive index rebuilds.

One of the nice features of Advanced Index Compression is that not only does it potentially compress indexes more effectively than is possible with Basic Index Compression, but it also eliminates the need to understand the indexed data in order to configure the appropriate prefix column count.

As I’ve discussed previously, index compression works by effectively deduplicating repeated indexed values within a leaf block by storing each unique indexed value in a prefix-table in the leaf block and referencing the prefixed value within the actual index entry. This means repeated values need only be stored once per leaf block, hence saving space. However, if there are few or no repeated values, there are no deduplication benefits and the overheads associated with the prefix table can exceed the potential savings (if any), making indexes potentially larger not smaller.

It’s possible however to only compress the leading portion of an index entry, such that only indexed columns that are actually replicated are compressed, leaving less replicated indexed columns uncompressed within the index entry. The decision therefore on how many columns within the index to actually compress is crucial to the compression effectiveness.

It’s this uncertainty and fear of actually making indexes worse with basic compression that puts off a lot of DBAs from implementing index compression, and indeed why the “no-brainer” capabilities of Advanced Index Compression are so appealing.

For those that do not have access to the Advanced Compression database option (or, in the new Oracle Cloud world, to at least the “High Performance” database package), there is a method that can assist in determining the best manner in which to use basic compression to compress your indexes.

To illustrate, a simple example. We begin by creating a little table that has two columns of interest, an ID column that is effectively unique and a CODE column that only has 10 distinct values and so plenty of duplication:

SQL> create table bowie (id number, code number, name varchar2(42));

Table created.

SQL> insert into bowie select rownum, mod(rownum,10), 'ZIGGY STARDUST'
     from dual connect by level <= 2000000;

2000000 rows created.

SQL> commit;

Commit complete.

We next create a concatenated index with the CODE column leading, followed by the unique ID column. The index entries as a whole are therefore effectively unique and so compressing the whole index would be ineffective. However, as the leading CODE column has many replicated values, there would be benefit in compressing just this first leading column. But we need to fully understand the data within the index to correctly determine that compressing just the first column is what effectively compresses this index.

SQL> create index bowie_code_id_i on bowie(code, id) pctfree 0;

Index created.

SQL> select num_rows, blevel, leaf_blocks from user_indexes
     where index_name='BOWIE_CODE_ID_I';

  NUM_ROWS     BLEVEL LEAF_BLOCKS
---------- ---------- -----------
   2000000          2        4848

Currently the uncompressed index has 4848 leaf blocks.

But how to effectively compress this index, especially if we don’t really understand the data profile of the indexed columns ?

One possible method is to ANALYZE the index with VALIDATE STRUCTURE and explore a couple of useful columns within INDEX_STATS:

SQL> analyze index bowie_code_id_i validate structure;

Index analyzed.

SQL> select name, height, lf_blks, opt_cmpr_count, opt_cmpr_pctsave
     from index_stats;

NAME             HEIGHT    LF_BLKS OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
--------------- ------- ---------- -------------- ----------------
BOWIE_CODE_ID_I       3       4848              1               14

OPT_CMPR_COUNT tells us how many columns to compress to get optimal benefit from basic index compression. In this example, we should only compress 1 column.

OPT_CMPR_PCTSAVE tells us how much benefit we would likely achieve if we were to compress just this 1 column of the index. In this example, the index will reduce by some 14%.

So let’s go ahead and implement this recommendation:

SQL> alter index bowie_code_id_i rebuild compress 1;

Index altered.

SQL> select num_rows, blevel, leaf_blocks
     from user_indexes where index_name='BOWIE_CODE_ID_I';

  NUM_ROWS     BLEVEL LEAF_BLOCKS
---------- ---------- -----------
   2000000          2        4133

We notice the index is now just 4133 leaf blocks and has actually reduced in size by some 14.75%, not bad compared to the 14% estimate.
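As a quick sanity check, the actual saving can be calculated directly from the before and after leaf block counts (a trivial calculation, shown here SQL*Plus style for consistency):

```sql
-- Actual percentage saving from compressing just the leading column:
-- (uncompressed leaf blocks - compressed leaf blocks) / uncompressed leaf blocks
SQL> select round((4848 - 4133) / 4848 * 100, 2) as pct_saved from dual;

 PCT_SAVED
----------
     14.75
```

Close enough to the OPT_CMPR_PCTSAVE estimate of 14% to give us confidence in the INDEX_STATS recommendations.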

If we disregard this advice and just compress the entire index:

SQL> alter index bowie_code_id_i rebuild compress;

Index altered.

SQL> select num_rows, blevel, leaf_blocks from user_indexes
     where index_name='BOWIE_CODE_ID_I';

  NUM_ROWS     BLEVEL LEAF_BLOCKS
---------- ---------- -----------
   2000000          2        6363

We notice the index is now substantially larger at 6363 leaf blocks than it was previously (4848 leaf blocks) when the index was uncompressed.

If we create another index, but this time with ID as the leading column:

SQL> create index bowie_id_code_i on bowie(id, code) pctfree 0;

Index created.

SQL> select num_rows, blevel, leaf_blocks from user_indexes
     where index_name='BOWIE_ID_CODE_I';

  NUM_ROWS     BLEVEL LEAF_BLOCKS
---------- ---------- -----------
   2000000          2        4851

With the leading column effectively unique, there wouldn’t be any benefit in using basic compression on this index as there are no replicated values from the leading column onwards to deduplicate:

SQL> analyze index bowie_id_code_i validate structure;

Index analyzed.

SQL> select name, height, lf_blks, opt_cmpr_count, opt_cmpr_pctsave
     from index_stats;

NAME             HEIGHT    LF_BLKS OPT_CMPR_COUNT OPT_CMPR_PCTSAVE
--------------- ------- ---------- -------------- ----------------
BOWIE_ID_CODE_I       3       4851              0                0

Analyzing the index with VALIDATE STRUCTURE confirms that 0 columns are worth compressing with this index.

An attempt to just compress the leading column would indeed be counter-productive:

SQL> alter index bowie_id_code_i rebuild compress 1;

Index altered.

SQL> select num_rows, blevel, leaf_blocks from user_indexes
     where index_name='BOWIE_ID_CODE_I';

  NUM_ROWS     BLEVEL LEAF_BLOCKS
---------- ---------- -----------
   2000000          2        6361

The index is again much larger at 6361 leaf blocks than it was previously (4851 leaf blocks) when uncompressed.

So the order of the columns within the index is crucial in determining the potential benefit of index compression.

I don’t particularly like using ANALYZE VALIDATE STRUCTURE, not least because it locks the parent table during the analyze operation, but if there’s available downtime or a full copy of the database where locking is not an issue, then this is an effective way to determine how to best compress indexes with basic compression.
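Note that there is an ONLINE option for VALIDATE STRUCTURE that avoids locking the parent table, but it comes with an important catch that rules it out for this exercise: it does not populate INDEX_STATS. A sketch (assuming INDEX_STATS has not already been populated in this session):

```sql
-- ONLINE avoids the table lock, but does NOT populate INDEX_STATS,
-- so OPT_CMPR_COUNT and OPT_CMPR_PCTSAVE are not available afterwards
SQL> analyze index bowie_code_id_i validate structure online;

Index analyzed.

SQL> select count(*) from index_stats;

  COUNT(*)
----------
         0
```

Hence the need for downtime or a copy of the database when using this technique.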

12.2 Introduction to Real-Time Materialized Views (The View) July 10, 2017

Posted by Richard Foote in 12c, 12c Rel 2, 12c Release 2 New Features, Oracle Indexes, Real-Time Materialized Views.
2 comments


Although I usually focus on index related topics, I’ve always kinda considered Materialized Views (MVs) as an index like structure, which Oracle can automatically update and from which Oracle can efficiently retrieve data. The cost of maintaining a Materialized View Log is not unlike the cost of maintaining an index structure, the benefits of which can potentially far outweigh the overheads.

I just want to introduce a really cool new feature introduced in Oracle Database 12c Release 2 called Real-Time Materialized Views.

To best illustrate, a simple little demo. I first create a table and populate it with 1M rows.

SQL> create table bowie (id number primary key, name varchar2(42), sales number, text varchar2(42));

Table created.

SQL> insert into bowie select rownum, 'BOWIE' || to_char(mod(rownum,100)+1),
trunc(dbms_random.value(0,10000)), 'ZIGGY STARDUST' from dual connect by level<=1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE');

PL/SQL procedure successfully completed.

I then run the following query which returns only those summary records where the total SALES exceeds some limit:

SQL> select name, sum(sales) from bowie group by name having sum(sales) > 50500000;

NAME                                       SUM(SALES)
------------------------------------------ ----------
BOWIE7                                       50570391
BOWIE55                                      50586083
BOWIE15                                      50636084

Execution Plan
----------------------------------------------------------
Plan hash value: 298288086

-----------------------------------------------------------------------------
| Id | Operation         | Name  | Rows  | Bytes | Cost (%CPU) | Time       |
-----------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |       |    50 |   600 |    1454 (4) | 00:00:01   |
|* 1 | FILTER            |       |       |       |             |            |
|  2 | HASH GROUP BY     |       |    50 |   600 |    1454 (4) | 00:00:01   |
|  3 | TABLE ACCESS FULL | BOWIE | 1000K |   11M |    1410 (1) | 00:00:01   |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(SUM("SALES")>50500000)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
5138 consistent gets
   0 physical reads
   0 redo size
 704 bytes sent via SQL*Net to client
 608 bytes received via SQL*Net from client
   2 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
   3 rows processed

I don’t have any filtering predicates before the summarisation of the table, meaning I’m currently forced to read the entire table first, before I can filter any summarisations that aren’t of interest.

As such, a full table scan is an expensive operation here (5138 consistent gets).

Now a method to reduce these FTS overheads is to create a Materialized View which has all the summary details pre-defined. Depending on the QUERY_REWRITE_INTEGRITY parameter, I can potentially use Query Rewrite to automatically use the MV to access the pre-summarised data rather than perform the FTS on the base Bowie table.

The MV could be kept fully up to date by performing a FAST REFRESH ON COMMIT but this adds additional overheads on the DMLs on the base table as they have to apply the actual changes to the MVs as part of the transaction. I could reduce these overheads by performing a FAST REFRESH ON DEMAND, but this means the MV may be stale and not fully up to date.
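For reference, once such an MV exists, a FAST REFRESH ON DEMAND is typically driven manually or via a scheduled job along these lines (a sketch; 'F' denotes a fast refresh from the MV log):

```sql
-- Manually fast refresh the MV from the MV log ('F' = fast refresh).
-- Until this is run, an ON DEMAND MV can return stale data.
SQL> exec dbms_mview.refresh(list => 'BOWIE_MV', method => 'F');

PL/SQL procedure successfully completed.
```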

In Oracle Database 12.2, we get the best of both worlds with Real-Time Materialized Views, where we don’t have the additional overheads of a ON COMMIT refresh, but still guarantee fully up to date data by still (hopefully) accessing the MV rather than performing the expensive FTS.

We first create the Materialized View Log (necessary for MV fast refreshes):

SQL> create materialized view log on bowie with sequence, rowid (id, name, sales) including new values;

Materialized view log created.

We now create the MV with the required summary SQL definition, but with the new ENABLE ON QUERY COMPUTATION clause:

SQL> create materialized view bowie_mv
2 refresh fast on demand
3 enable query rewrite
4 enable on query computation
5 as
6 select name, sum(sales) from bowie group by name;

Materialized view created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE_MV');

PL/SQL procedure successfully completed.

If we re-run the summary query again:

SQL> select name, sum(sales) from bowie group by name having sum(sales) > 50500000;

NAME                                       SUM(SALES)
------------------------------------------ ----------
BOWIE7                                       50570391
BOWIE55                                      50586083
BOWIE15                                      50636084

Execution Plan
----------------------------------------------------------
Plan hash value: 593592962

-----------------------------------------------------------------------------------------
| Id | Operation                    | Name     | Rows | Bytes | Cost (%CPU) | Time      |
-----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |          |   10 |   140 |       3 (0) | 00:00:01  |
|* 1 | MAT_VIEW REWRITE ACCESS FULL | BOWIE_MV |   10 |   140 |       3 (0) | 00:00:01  |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("BOWIE_MV"."SUM(SALES)">50500000)

Statistics
----------------------------------------------------------
  0 recursive calls
  2 db block gets
  8 consistent gets
  0 physical reads
  0 redo size
704 bytes sent via SQL*Net to client
608 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  3 rows processed

We notice that Query Rewrite has taken place and the CBO has automatically used the MV (consisting of just 100 rows) to very efficiently access the required summary data (8 consistent gets).

If we look at the current Query Rewrite parameters:

SQL> show parameter query

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_query                       string      ENABLE
query_rewrite_enabled                string      TRUE
query_rewrite_integrity              string      enforced

We notice that QUERY_REWRITE_INTEGRITY is set to ENFORCED, meaning that Oracle enforces and guarantees consistency and integrity. So no stale accesses to the MV will be tolerated here.

If we now add and commit a new row (that effectively adds 1000 to the BOWIE7 summary):

SQL> insert into bowie values (1000001, 'BOWIE7', 1000, 'HUNKY DORY');

1 row created.

SQL> commit;

Commit complete.

And now again re-run the summary query:

SQL> select name, sum(sales) from bowie group by name having sum(sales) > 50500000;

NAME                                       SUM(SALES)
------------------------------------------ ----------
BOWIE55                                      50586083
BOWIE15                                      50636084
BOWIE7                                       50571391

We notice the returned data is fully up to date (the total for BOWIE7 has indeed increased by the 1000 added).

And it did so efficiently without having to perform a massive FTS on the base table. A look at the execution plan reveals how:

Execution Plan
----------------------------------------------------------
Plan hash value: 3454774452

-----------------------------------------------------------------------------------------------------------------------
| Id | Operation                                  | Name                       | Rows | Bytes | Cost (%CPU)|     Time |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                            |  203 |  7308 |     22 (28)| 00:00:01 |
|   1 |  VIEW                                     |                            |  203 |  7308 |     22 (28)| 00:00:01 |
|   2 |   UNION-ALL                               |                            |      |       |            |          |
| * 3 |    VIEW                                   | VW_FOJ_0                   |  100 |  3900 |      8 (25)| 00:00:01 |
| * 4 |     HASH JOIN OUTER                       |                            |  100 |  2500 |      8 (25)| 00:00:01 |
|   5 |      VIEW                                 |                            |  100 |  1700 |       3 (0)| 00:00:01 |
|   6 |       MAT_VIEW ACCESS FULL                | BOWIE_MV                   |  100 |  1400 |       3 (0)| 00:00:01 |
|   7 |      VIEW                                 |                            |    1 |     8 |      5 (40)| 00:00:01 |
|   8 |       HASH GROUP BY                       |                            |    1 |    36 |      5 (40)| 00:00:01 |
|   9 |        VIEW                               |                            |    1 |    36 |      4 (25)| 00:00:01 |
|  10 |         RESULT CACHE                      | csfyggq82gxrn757upr194x2g2 |      |       |            |          |
|* 11 |          VIEW                             |                            |    1 |   100 |      4 (25)| 00:00:01 |
|  12 |           WINDOW SORT                     |                            |    1 |   191 |      4 (25)| 00:00:01 |
|* 13 |            TABLE ACCESS FULL              | MLOG$_BOWIE                |    1 |   191 |       3 (0)| 00:00:01 |
|* 14 |   VIEW                                    | VW_FOJ_1                   |  102 |  5304 |      8 (25)| 00:00:01 |
|* 15 |    HASH JOIN FULL OUTER                   |                            |  102 |  3774 |      8 (25)| 00:00:01 |
|  16 |     VIEW                                  |                            |    1 |    30 |      5 (40)| 00:00:01 |
|  17 |      HASH GROUP BY                        |                            |    1 |    36 |      5 (40)| 00:00:01 |
|  18 |       VIEW                                |                            |    1 |    36 |      4 (25)| 00:00:01 |
|  19 |        RESULT CACHE                       | csfyggq82gxrn757upr194x2g2 |      |       |            |          |
|* 20 |         VIEW                              |                            |    1 |   100 |      4 (25)| 00:00:01 |
|  21 |          WINDOW SORT                      |                            |    1 |   191 |      4 (25)| 00:00:01 |
|* 22 |           TABLE ACCESS FULL               | MLOG$_BOWIE                |    1 |   191 |       3 (0)| 00:00:01 |
|  23 |   VIEW                                    |                            |  100 |   700 |       3 (0)| 00:00:01 |
|  24 |    MAT_VIEW ACCESS FULL                   | BOWIE_MV                   |  100 |  1400 |       3 (0)| 00:00:01 |
|  25 | NESTED LOOPS                              |                            |    1 |    75 |      6 (34)| 00:00:01 |
|  26 |  VIEW                                     |                            |    1 |    52 |      5 (40)| 00:00:01 |
|  27 |   HASH GROUP BY                           |                            |    1 |    36 |      5 (40)| 00:00:01 |
|  28 |    VIEW                                   |                            |    1 |    36 |      4 (25)| 00:00:01 |
|  29 |     RESULT CACHE                          | csfyggq82gxrn757upr194x2g2 |      |       |            |          |
|* 30 |      VIEW                                 |                            |    1 |   100 |      4 (25)| 00:00:01 |
|  31 |       WINDOW SORT                         |                            |    1 |   191 |      4 (25)| 00:00:01 |
|* 32 |        TABLE ACCESS FULL                  | MLOG$_BOWIE                |    1 |   191 |       3 (0)| 00:00:01 |
|* 33 |  MAT_VIEW ACCESS BY INDEX ROWID           | BOWIE_MV                   |    1 |    23 |       1 (0)| 00:00:01 |
|* 34 |   INDEX UNIQUE SCAN                       | I_SNAP$_BOWIE_MV           |    1 |       |       0 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("AV$0"."OJ_MARK" IS NULL AND "SNA$0"."SUM(SALES)">50500000)
4 - access(SYS_OP_MAP_NONNULL("SNA$0"."NAME")=SYS_OP_MAP_NONNULL("AV$0"."GB0"(+)))
11 - filter(SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=HEXTORAW('6E00') AND
"MAS$"."SEQ$$"="MAS$"."MAXSEQ$$" OR (SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=H
EXTORAW('6F00') OR SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=HEXTORAW('7500'))
AND "MAS$"."SEQ$$"="MAS$"."MINSEQ$$")
13 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-03-31 15:47:28', 'syyyy-mm-dd hh24:mi:ss'))
14 - filter("SNA$0"."SNA_OJ_MARK" IS NULL AND DECODE("AV$0"."H0",0,TO_NUMBER(NULL),"AV$0"."D0")>50500000)
15 - access(SYS_OP_MAP_NONNULL("SNA$0"."NAME")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
20 - filter(SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=HEXTORAW('6E00') AND
"MAS$"."SEQ$$"="MAS$"."MAXSEQ$$" OR (SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=H
EXTORAW('6F00') OR SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=HEXTORAW('7500'))
AND "MAS$"."SEQ$$"="MAS$"."MINSEQ$$")
22 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-03-31 15:47:28', 'syyyy-mm-dd hh24:mi:ss'))
30 - filter(SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=HEXTORAW('6E00') AND
"MAS$"."SEQ$$"="MAS$"."MAXSEQ$$" OR (SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=H
EXTORAW('6F00') OR SYS_OP_CSEE(NLSSORT("MAS$"."OLD_NEW$$",'nls_sort=''BINARY_CI'''))=HEXTORAW('7500'))
AND "MAS$"."SEQ$$"="MAS$"."MINSEQ$$")
32 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2017-03-31 15:47:28', 'syyyy-mm-dd hh24:mi:ss'))
33 - filter(DECODE(TO_CHAR("BOWIE_MV"."SUM(SALES)"),NULL,DECODE("AV$0"."H0",0,TO_NUMBER(NULL),"AV$0"."
D0"),"BOWIE_MV"."SUM(SALES)"+"AV$0"."D0")>50500000)
34 - access(SYS_OP_MAP_NONNULL("NAME")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))

Result Cache Information (identified by operation id):
------------------------------------------------------

10 - column-count=7; dependencies=(BOWIE.MLOG$_BOWIE); attributes=(ordered, session-lifetime); parameters=(nls); name="DMLTYPES:MLOG$_BOWIE"
19 - column-count=7; dependencies=(BOWIE.MLOG$_BOWIE); attributes=(ordered, session-lifetime); parameters=(nls); name="DMLTYPES:MLOG$_BOWIE"
29 - column-count=7; dependencies=(BOWIE.MLOG$_BOWIE); attributes=(ordered, session-lifetime); parameters=(nls); name="DMLTYPES:MLOG$_BOWIE"

Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- this is an adaptive plan

Statistics
----------------------------------------------------------
  0 recursive calls
  4 db block gets
 16 consistent gets
  0 physical reads
  0 redo size
704 bytes sent via SQL*Net to client
608 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  3 rows processed

Oracle can now merge the data from the MV with the data still within the MV Log to generate the final, fully up to date result. At just 16 consistent gets, this is more expensive than the fully refreshed MV (8 consistent gets) but much less than the 5138 consistent gets when accessing the base BOWIE table via a FTS.

And provided the CBO calculates the cost of doing so to be less than performing the FTS (or otherwise) on the base table, Oracle will transparently perform this new smart processing when accessing data from such created MVs.

Very nice 🙂

12.2 Index Deferred Invalidation (Atomica) May 30, 2017

Posted by Richard Foote in 12c Rel 2, 12c Release 2 New Features, Deferred Invalidation, Oracle Indexes.
add a comment

One of the useful little features quietly introduced with Oracle Database 12c Release 2 is the ability to now defer the invalidation of dependent SQL cursors when an index is created or modified. This can be useful when you say create a new index which you know will have no impact on any existing SQL statements. Rather than automatically invalidating all dependent SQL cursors, you can choose to defer such invalidations and avoid the expense of unnecessarily re-parsing dependent cursors.

A simple demo. We first create a little table and index on the ID column:

SQL> create table bowie (id number, code number, name varchar2(42));

Table created.

SQL> insert into bowie select rownum, mod(rownum,10000), 'DAVID BOWIE' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE');

PL/SQL procedure successfully completed.

SQL> create index bowie_id_i on bowie(id);

Index created.

If we execute the following statement a number of times, we’ll see there are no recursive calls, which confirms no hard parsing or SQL invalidations have occurred. One can also look at the associated SQL within V$SQL to confirm there are no additional invalidations.
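For example, something along these lines (a sketch; the exact SQL_TEXT filtering will vary) can be used to monitor the cursor in V$SQL:

```sql
-- LOADS increments on each hard parse of the cursor, while
-- INVALIDATIONS records how often the cursor has been invalidated
SQL> select sql_id, executions, loads, invalidations, parse_calls
     from v$sql
     where sql_text like 'select * from bowie where id = 42%';
```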

SQL> select * from bowie where id = 42;

Execution Plan
----------------------------------------------------------
Plan hash value: 1405654398

--------------------------------------------------------------------------------------------------
| Id | Operation                           | Name       | Rows | Bytes | Cost(%CPU) | Time      |
--------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |            |    1 |    21 |      4 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE      |    1 |    21 |      4 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_ID_I |    1 |       |       3 (0)|  00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=42)

Statistics
----------------------------------------------------------
  0 recursive calls
  0 db block gets
  5 consistent gets
  0 physical reads
  0 redo size
479 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

If we say rebuild the index, we can see that we now have recursive calls due to the invalidation of this SQL. The new index structure “might” facilitate a change in the plan of this query and so Oracle will re-parse the statement:

SQL> alter index bowie_id_i rebuild;

Index altered.

SQL> select * from bowie where id=42;

Execution Plan
----------------------------------------------------------
Plan hash value: 1405654398

--------------------------------------------------------------------------------------------------
| Id | Operation                           | Name       | Rows | Bytes | Cost(%CPU) | Time      |
--------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |            |    1 |    21 |      4 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE      |    1 |    21 |      4 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_ID_I |    1 |       |      3 (0) |  00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=42)

Statistics
----------------------------------------------------------
  3 recursive calls
  0 db block gets
 12 consistent gets
  2 physical reads
  0 redo size
479 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

Oracle Database 12c Release 2 now gives us the option to defer these invalidations (with the DEFERRED INVALIDATION clause), as we might know for example that such a rebuild is not going to impact the plans of any existing SQL:

SQL> alter index bowie_id_i rebuild deferred invalidation;

Index altered.

SQL> select * from bowie where id=42;

Execution Plan
----------------------------------------------------------
Plan hash value: 1405654398

--------------------------------------------------------------------------------------------------
| Id | Operation                           | Name       | Rows | Bytes | Cost(%CPU) | Time      |
--------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |            |    1 |    21 |      4 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE      |    1 |    21 |      4 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_ID_I |    1 |       |      3 (0) |  00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=42)

Statistics
----------------------------------------------------------
  0 recursive calls
  0 db block gets
  5 consistent gets
  2 physical reads
  0 redo size
479 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

We see that this time the index rebuild did not result in any recursive calls, as the cursor was not invalidated and hence did not require re-parsing.
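
If we want to confirm this for ourselves, we can check the cursor statistics directly in the library cache. The following is an illustrative sketch (V$SQL and its INVALIDATIONS, PARSE_CALLS and OBJECT_STATUS columns are standard, though the SQL_TEXT filter here is just an assumption for this demo):

```sql
-- Illustrative check: has the cursor for our query been invalidated?
-- An unchanged INVALIDATIONS count after the rebuild suggests the
-- deferred invalidation took effect.
SQL> select sql_text, invalidations, parse_calls, executions, object_status
  2  from v$sql
  3  where sql_text like 'select * from bowie where id=42%';
```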

If we run an SQL with a predicate on the non-indexed CODE column:

SQL> select * from bowie where code=42;
100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

-----------------------------------------------------------------------------------
| Id | Operation                 | Name  | Rows | Bytes | Cost (%CPU) | Time      |
-----------------------------------------------------------------------------------
|  0 | SELECT STATEMENT          |       |  100 |  2100 |     996 (1) |  00:00:01 |
|* 1 | TABLE ACCESS STORAGE FULL | BOWIE |  100 |  2100 |     996 (1) |  00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - storage("CODE"=42)
    filter("CODE"=42)

Statistics
----------------------------------------------------------
   1 recursive calls
   0 db block gets
3603 consistent gets
   0 physical reads
   0 redo size
2050 bytes sent via SQL*Net to client
 566 bytes received via SQL*Net from client
   8 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
 100 rows processed

The CBO has no choice but to perform a Full Table Scan. If we now create an index on the CODE column:

SQL> create index bowie_code_i on bowie(code);
Index created.

We notice the SQL that only has a predicate on the ID column is impacted by default (the recursive call below indicates the cursor was invalidated and re-parsed):

SQL> select * from bowie where id=42;
Execution Plan
----------------------------------------------------------
Plan hash value: 1405654398

--------------------------------------------------------------------------------------------------
| Id | Operation                           | Name       | Rows | Bytes | Cost(%CPU) | Time      |
--------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |            |    1 |    21 |      4 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE      |    1 |    21 |      4 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_ID_I |    1 |       |      3 (0) |  00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=42)

Statistics
----------------------------------------------------------
  1 recursive calls
  0 db block gets
  5 consistent gets
 12 physical reads
  0 redo size
479 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

The new SQL with a predicate on the CODE column now uses the new index as we would hope, the cursor having been invalidated and re-parsed:

SQL> select * from bowie where code=42;
100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 853003755

----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows | Bytes | Cost (%CPU) | Time      |
----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |              |  100 |  2100 |     103 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE        |  100 |  2100 |     103 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_CODE_I |  100 |       |       3 (0) |  00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
   1 recursive calls
   0 db block gets
 110 consistent gets
   2 physical reads
   0 redo size
3540 bytes sent via SQL*Net to client
 566 bytes received via SQL*Net from client
   8 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
 100 rows processed

If we now drop this index on the CODE column, but with the new deferred invalidation clause:

SQL> drop index bowie_code_i deferred invalidation;

Index dropped.

Even though deferred invalidation was specified, the clause has no impact on the SQL with the predicate on the CODE column. It previously referenced the now-dropped index and so has no choice but to be invalidated and re-parsed. The new execution plan performs a Full Table Scan, as it once again has no other viable option:

SQL> select * from bowie where code=42;
100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1845943507

-----------------------------------------------------------------------------------
| Id | Operation                 | Name  | Rows | Bytes | Cost (%CPU) | Time      |
-----------------------------------------------------------------------------------
|  0 | SELECT STATEMENT          |       |  100 |  2100 |     996 (1) |  00:00:01 |
|* 1 | TABLE ACCESS STORAGE FULL | BOWIE |  100 |  2100 |     996 (1) |  00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - storage("CODE"=42)
    filter("CODE"=42)

Statistics
----------------------------------------------------------
  46 recursive calls
   0 db block gets
3641 consistent gets
   0 physical reads
   0 redo size
2050 bytes sent via SQL*Net to client
 566 bytes received via SQL*Net from client
   8 SQL*Net roundtrips to/from client
   6 sorts (memory)
   0 sorts (disk)
 100 rows processed

If we now re-create the index on the CODE column, again with deferred invalidation:

SQL> create index bowie_code_i on bowie(code) deferred invalidation;

Index created.

We notice the SQL with the predicate on the ID column has not been impacted by either the drop or the subsequent re-creation of the index on the CODE column. It still records no recursive calls. Any invalidation of this SQL would have been unnecessary, as its execution plan can't be impacted by an index on the CODE column:

SQL> select * from bowie where id=42;
Execution Plan
----------------------------------------------------------
Plan hash value: 1405654398

--------------------------------------------------------------------------------------------------
| Id | Operation                           | Name       | Rows | Bytes | Cost(%CPU) | Time      |
--------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |            |    1 |    21 |      4 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE      |    1 |    21 |      4 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_ID_I |    1 |       |      3 (0) |  00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=42)

Statistics
----------------------------------------------------------
  0 recursive calls
  0 db block gets
  5 consistent gets
  0 physical reads
  0 redo size
479 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
  2 SQL*Net roundtrips to/from client
  0 sorts (memory)
  0 sorts (disk)
  1 rows processed

But if we look at the SQL that does have a predicate on the CODE column, we notice that it likewise has not been invalidated (as we explicitly specified deferred invalidation when we re-created the index). The SQL still performs a Full Table Scan, as evidenced by the number of consistent gets (3603, the same as with the earlier Full Table Scan) and the absence of recursive calls:

SQL> select * from bowie where code=42;
100 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 853003755

----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows | Bytes | Cost (%CPU) | Time      |
----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |              |  100 |  2100 |     103 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE        |  100 |  2100 |     103 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_CODE_I |  100 |       |       3 (0) |  00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
   0 recursive calls
   0 db block gets
3603 consistent gets
   0 physical reads
   0 redo size
2050 bytes sent via SQL*Net to client
 566 bytes received via SQL*Net from client
   8 SQL*Net roundtrips to/from client
   0 sorts (memory)
   0 sorts (disk)
 100 rows processed

 

SQL*Plus autotrace, which displays the current SQL plan (but not necessarily the actual execution plan), shows that the CBO would have chosen to use the new index had it been allowed to invalidate this cursor.
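
To see the plan that was actually used at execution time, rather than the plan autotrace predicts, we can pull the cursor's plan from the library cache. A minimal sketch using the standard DBMS_XPLAN.DISPLAY_CURSOR interface (the NULL arguments mean "the last statement executed in this session"):

```sql
SQL> select * from bowie where code=42;
SQL> select * from table(dbms_xplan.display_cursor(null, null));
```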

So like many features, deferred invalidation can be useful if used well, but problematic if implemented inappropriately 🙂

12.2 Some Cool Partitioning New Features (Big Wheels) April 5, 2017

Posted by Richard Foote in 12c Rel 2, Oracle Indexes, Partitioning.
2 comments


I previously discussed just how easy it is to convert online a non-partitioned table to be partitioned with Oracle Database 12.2.

Thought I might run through a number of really cool new partitioning features and capabilities that were also introduced in 12.2.

To start, I’m just going to create a basic range-partitioning table and populate it with a year’s worth of data:

 

SQL> create table ziggy
2 (prod_id NUMBER,
3 cust_id NUMBER,
4 time_id DATE,
5 quantity NUMBER)
6 PARTITION BY RANGE (time_id)
7 (PARTITION z_2016_q1 VALUES LESS THAN (TO_DATE('01-APR-2016','dd-MON-yyyy')),
8 PARTITION z_2016_q2 VALUES LESS THAN (TO_DATE('01-JUL-2016','dd-MON-yyyy')),
9 PARTITION z_2016_q3 VALUES LESS THAN (TO_DATE('01-OCT-2016','dd-MON-yyyy')),
10 PARTITION z_2016_q4 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy')));

Table created.

SQL> insert into ziggy select mod(rownum,10), mod(rownum,100), sysdate-dbms_random.value(94, 454), 100
2 from dual connect by level <= 100000;

100000 rows created.

SQL> commit;

Commit complete.
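
Before going further, it can be worth sanity-checking how the rows have been spread across the partitions. A quick illustrative check via the data dictionary (after gathering statistics so NUM_ROWS is populated):

```sql
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY');
SQL> select partition_name, num_rows from dba_tab_partitions
  2  where table_name='ZIGGY';
```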

I’ll then create both a global, non-partitioned index and a locally partitioned index:

SQL> create index ziggy_prod_id_i on ziggy(prod_id);

Index created.

SQL> select index_name, num_rows, status from dba_indexes
where index_name='ZIGGY_PROD_ID_I';

INDEX_NAME             NUM_ROWS STATUS
-------------------- ---------- --------
ZIGGY_PROD_ID_I          100000 VALID

SQL> create index ziggy_time_id_i on ziggy(time_id) local;

Index created.

SQL> select index_name, partition_name, num_rows, status from
dba_ind_partitions
where index_name='ZIGGY_TIME_ID_I';

INDEX_NAME           PARTITION_NAME         NUM_ROWS STATUS
-------------------- -------------------- ---------- --------
ZIGGY_TIME_ID_I      Z_2016_Q1                 23941 USABLE
ZIGGY_TIME_ID_I      Z_2016_Q2                 25276 USABLE
ZIGGY_TIME_ID_I      Z_2016_Q3                 25522 USABLE
ZIGGY_TIME_ID_I      Z_2016_Q4                 25261 USABLE

 

OK, the first 12.2 new feature is the capability to now "Split" a partition online (previously this was an offline-only operation that left global indexes and the corresponding local index partitions unusable):

SQL> alter table ziggy
2 split PARTITION z_2016_q4 into
3 (PARTITION z_2016_oct VALUES LESS THAN (TO_DATE('01-NOV-2016','dd-MON-yyyy')),
4 PARTITION z_2016_nov VALUES LESS THAN (TO_DATE('01-DEC-2016','dd-MON-yyyy')),
5 PARTITION z_2016_dec) online;

Table altered.

SQL> select index_name, num_rows, status from dba_indexes
where index_name='ZIGGY_PROD_ID_I';

INDEX_NAME             NUM_ROWS STATUS
-------------------- ---------- --------
ZIGGY_PROD_ID_I          100000 VALID

SQL> select index_name, partition_name, num_rows, status from dba_ind_partitions where index_name='ZIGGY_TIME_ID_I';

INDEX_NAME           PARTITION_NAME         NUM_ROWS STATUS
-------------------- -------------------- ---------- --------
ZIGGY_TIME_ID_I      Z_2016_DEC                 8276 USABLE
ZIGGY_TIME_ID_I      Z_2016_NOV                 8298 USABLE
ZIGGY_TIME_ID_I      Z_2016_OCT                 8687 USABLE
ZIGGY_TIME_ID_I      Z_2016_Q1                 23941 USABLE
ZIGGY_TIME_ID_I      Z_2016_Q2                 25276 USABLE
ZIGGY_TIME_ID_I      Z_2016_Q3                 25522 USABLE

6 rows selected.

 

Nice !!

OK, let’s quickly check how many rows I have for each PROD_ID value that belongs within the Q1 partition:

SQL> select prod_id, count(*) from ziggy where time_id
between '01-JAN-2016' and '31-MAR-2016'
group by prod_id order by prod_id;

   PROD_ID   COUNT(*)
---------- ----------
         0       2391
         1       2334
         2       2324
         3       2372
         4       2284
         5       2462
         6       2348
         7       2399
         8       2380
         9       2388

 

So we have PROD_ID values between 0 and 9, each with roughly the same number of rows.

Let’s now check the size of each table partition in blocks:

SQL> select partition_name, blocks from dba_tab_partitions
where table_name='ZIGGY';

PARTITION_NAME           BLOCKS
-------------------- ----------
Z_2016_DEC                   44
Z_2016_NOV                   44
Z_2016_OCT                   45
Z_2016_Q1                  1006
Z_2016_Q2                  1006
Z_2016_Q3                  1006

 

Note that the Q1 partition currently has 1006 blocks allocated.

OK, the next cool new feature is the capability to select which rows we may want to keep during a subsequent re-org operation. In the following example, I'm going to re-org the Q1 partition and compress the data, but I'm only going to keep those rows where the PROD_ID is between 1 and 8 (hence getting rid of all rows with a PROD_ID of 0 or 9):

SQL> ALTER TABLE ziggy
2 MOVE PARTITION z_2016_q1 TABLESPACE users COMPRESS ONLINE
3 INCLUDING ROWS WHERE prod_id between 1 and 8;

Table altered.

 

The new INCLUDING ROWS clause explicitly states that I’m only going to include those rows where the PROD_ID is between 1 and 8.

If we now check the size of the partition and its contents:

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'ZIGGY');

PL/SQL procedure successfully completed.

SQL> select partition_name, blocks from dba_tab_partitions
where table_name='ZIGGY';

PARTITION_NAME           BLOCKS
-------------------- ----------
Z_2016_DEC                   44
Z_2016_NOV                   44
Z_2016_OCT                   45
Z_2016_Q1                    57
Z_2016_Q2                  1006
Z_2016_Q3                  1006

SQL> select prod_id, count(*) from ziggy where time_id
between '01-JAN-2016' and '31-MAR-2016'
group by prod_id order by prod_id;

   PROD_ID    COUNT(*)
---------- ----------
         1       2334
         2       2324
         3       2372
         4       2284
         5       2462
         6       2348
         7       2399
         8       2380

 

We see the Q1 partition has indeed decreased in size (down to just 57 blocks from 1006 blocks) because it has now been compressed AND because it now only has rows where the PROD_ID is between 1 and 8.

Nice !!

The next cool new feature is that we now have new syntax (FOR EXCHANGE WITH TABLE) to more easily create a table with which we wish to subsequently perform a partition exchange. This ensures that the new table is entirely compatible for such an exchange, although note that associated indexes are NOT created as part of this process:

SQL> CREATE TABLE ziggy_exchange
2 TABLESPACE users
3 FOR EXCHANGE WITH TABLE ziggy;

Table created.
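
Before performing the exchange, we can illustrate that the new table really does match. A sketch comparing the column definitions via the data dictionary (DBA_TAB_COLS also exposes any hidden columns, which must likewise match for the exchange to succeed):

```sql
SQL> select column_name, data_type, data_length
  2  from dba_tab_cols
  3  where table_name='ZIGGY_EXCHANGE'
  4  order by column_id;
```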

SQL> ALTER TABLE ziggy
2 EXCHANGE PARTITION z_2016_q1 WITH TABLE ziggy_exchange;

Table altered.

 

If we look at the contents of each object, we can see the partition exchange has been successful:

SQL> select prod_id, count(*) from ziggy where time_id
between '01-JAN-2016' and '31-MAR-2016'
group by prod_id order by prod_id;

no rows selected

SQL> select prod_id, count(*) from ziggy_exchange
where time_id between '01-JAN-2016' and '31-MAR-2016'
group by prod_id order by prod_id;

   PROD_ID   COUNT(*)
---------- ----------
         1       2334
         2       2324
         3       2372
         4       2284
         5       2462
         6       2348
         7       2399
         8       2380

Nice !!

The final new 12.2 partitioning feature I want to introduce is the ability to now make a particular partition (or sub-partition) read only:

SQL> alter table ziggy modify partition z_2016_q1 read only;

Table altered.

SQL> select table_name, partition_name, read_only
from dba_tab_partitions where table_name='ZIGGY';

TABLE_NAME           PARTITION_NAME       READ
-------------------- -------------------- ----
ZIGGY                Z_2016_DEC           NO
ZIGGY                Z_2016_NOV           NO
ZIGGY                Z_2016_OCT           NO
ZIGGY                Z_2016_Q1            YES
ZIGGY                Z_2016_Q2            NO
ZIGGY                Z_2016_Q3            NO

SQL> insert into ziggy values (1,1,'13-JAN-2016', 1);
insert into ziggy values (1,1,'13-JAN-2016', 1)
*
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.

 

Nice !!

There are lots of great new features introduced with Oracle Database 12c R2, but sometimes it's these lesser known features that can be so terribly useful.