“Let’s Talk Database” is Back !! Canberra/Sydney/Melbourne May 1, 2018

Posted by Richard Foote in 18c New Features, Data Clustering, Let's Talk Database, Oracle Indexes.
add a comment


Due to popular demand, I’ve been asked by Oracle to again run some “Let’s Talk Database” events this month. Dates and venues are as follows:

Wednesday, 23 May – Canberra (Cliftons Canberra, 10 Moore St): Registration Link.

Tuesday, 29 May – Melbourne (Oracle Melbourne Office, 417 St Kilda Road): Registration Link.

Wednesday, 30 May – Sydney (Oracle Sydney Office, North Ryde): Registration Link.

 

Agenda:

8:30 – 9:00am – Registration and coffee

9:00 – 10:30am – Data Clustering

10:30 – 11:00am – Break

11:00 – 12:30pm – Oracle Database 18c – New Features

12:30 – 1:30pm – Lunch, Networking and Informal Q&A

 

“Data Clustering: A Key To Developing High Performance & Scalable Apps”

Today’s agile applications have to deal with ever increasing data volumes; rich varieties of data types with their associated intricate/flexibility requirements; and complex hybrid cloud-based environments, where critical high volume transactional-based applications have to function in combination with equally important real-time advanced data analytics reporting solutions. As such, having an innovative data clustering strategy in combination with appropriate data-aware deployments is vital to ensure today’s complex applications are high-performing, scalable, and robust. Many of today’s applications struggle to perform or scale because they lack the necessary flexible indexing and data management strategies at the database layer. This session will demonstrate various innovative data clustering and indexing-based tricks and tactics that will ensure applications run as efficiently as possible, regardless of the size or complexity of the underlying data management layer.

“Oracle Database 18c New Features”

This session will look at some of the key new features and capabilities introduced in Oracle Database 18c. New features discussed include Memory Optimized Row Store for OLTP workloads, Database In-Memory for External Tables, Inline External Tables, In-Memory Database improvements, Zero Impact Grid Infrastructure Patching, Alter Partitioned Table Merge Online, Alter Table Modify Partitioned Table to Partitioned Table, Approximate Query improvements, Private Temporary Tables and Polymorphic Table Functions. The session will also discuss how to play with some of these new features now without the need for an Oracle Cloud account.


18c Scalable Sequences Part I (Saviour Machine) April 30, 2018

Posted by Richard Foote in 18c New Features, Oracle Indexes, Scalable Sequences.
7 comments


One of the problems with how we populate the value of a Primary Key (or some such) is that it can cause contention, especially in relation to indexes. By having an indexed value that monotonically increases, the right-hand most index leaf block is continually being accessed, which results in a “hot block”, which in turn results in buffer block contention. This can be extremely problematic in RAC environments, with this leaf block continually bouncing between RAC instances, resulting in excessive cluster wait events.

There are a number of possible methods to address this contention, including the use of Reverse Key Indexes, Hash Partitioned Indexes, the caching of Sequence values through to RAC aware Sequence implementations.
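
By way of illustration only, here’s a minimal sketch of a couple of these traditional alternatives (all object names below are hypothetical and not part of the demos that follow); each comes with its own trade-offs, for example Reverse Key Indexes no longer supporting index range scans on the key:

-- Cache a generous range of sequence values to reduce sequence maintenance overheads
create sequence radio_seq cache 10000;

-- Police the PK with a Reverse Key index, spreading monotonically increasing
-- key values across many leaf blocks instead of one "hot" right-hand block
create table radio (id number, name varchar2(42));
create unique index radio_pk_i on radio(id) reverse;
alter table radio add constraint radio_pk primary key(id) using index radio_pk_i;

-- Alternatively (instead of the Reverse Key index above), a globally Hash
-- Partitioned index spreads the insertion point across multiple index partitions:
--   create unique index radio_id_i on radio(id) global partition by hash(id) partitions 8;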

Oracle Database 18c has introduced a new option to resolve this issue, Scalable Sequences. These are sequences that have an automatically applied 6 digit prefix value that consists of 3 digits for the Instance offset and 3 digits for a Session offset value. This ensures that different Instances no longer contend for the same index leaf block and that sessions within a particular instance also no longer contend for the same index leaf block.

Reduced contention therefore means a far more scalable environment enabling faster inserts of data, although it can come with some disadvantages worth noting.

If we first look at just a standard Sequence:

SQL> create sequence bowie_seq;

Sequence created.

SQL> select sequence_name, scale_flag, extend_flag from user_sequences
where sequence_name='BOWIE_SEQ';

SEQUENCE_NAME   SCALE_FLAG EXTEND_FLAG
--------------- ---------- ------------
BOWIE_SEQ       N          N

SQL> select bowie_seq.nextval from dual;

NEXTVAL
----------
         1

We notice that just a standard number is generated from the sequence. This number can be controlled somewhat by whether we want to guarantee ordering or not, whether we want to cache a series of sequence values, how values get incremented, etc.
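
For example, a purely illustrative sketch of how some of these options might be declared (the sequence name here is hypothetical and not used in the demos below):

create sequence bowie_seq2
  start with 1
  increment by 1
  cache 1000   -- pre-allocate a range of values in memory to reduce dictionary updates
  noorder;     -- don't guarantee strict cross-instance ordering (helps RAC scalability)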

If we create a table with a Primary Key ID column:

SQL> create table bowie (id number constraint bowie_id_i primary key, name varchar2(42));

Table created.

Then next a procedure to populate the table with a bunch of rows, using the normal sequence for the ID column:

SQL> create or replace procedure pop_bowie as
begin
for i in 1..100000 loop
insert into bowie values (bowie_seq.nextval, 'DAVID BOWIE');
commit;
end loop;
end;
/

Procedure created.

We then run this procedure from 3 different sessions:

SQL> exec pop_bowie --- 3 sessions

PL/SQL procedure successfully completed.

If we look at the data from INDEX_STATS:

SQL> analyze index bowie_id_i validate structure;

Index analyzed.

SQL> select name, lf_blks, pct_used from index_stats;

NAME          LF_BLKS   PCT_USED
---------- ---------- ----------
BOWIE_ID_I        672         93

We notice the ID index has 672 leaf blocks and has a relatively high 93% PCT_USED, primarily because the majority of the index block splits have been 90-10 due to the largest indexed value more often than not causing the right most index leaf block to split.

Let’s now perform the same demo, but this time with an 18c Scalable Sequence. A Scalable Sequence is simply defined with the new SCALE clause:

SQL> create sequence ziggy_seq scale;

Sequence created.

SQL> select sequence_name, scale_flag, extend_flag from user_sequences
where sequence_name='ZIGGY_SEQ';

SEQUENCE_NAME   SCALE_FLAG EXTEND_FLAG
--------------- ---------- ------------
ZIGGY_SEQ       Y          N

If we look at a default sequence value now:

SQL> select ziggy_seq.nextval from dual;

NEXTVAL
----------------------------------
      1013890000000000000000000001

We notice the value is now much larger by default (28 digits), with the first 6 digits being the assigned prefix (as previously mentioned, the first 3 digits are an Instance offset value and the next 3 digits a Session offset value). The remaining digits represent the actual incremented sequence value.
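
As a quick illustrative check (a sketch only, and assuming the same session in which ZIGGY_SEQ.NEXTVAL was just referenced), the prefix components can be picked apart with simple SUBSTR calls, which for the value above would give an Instance offset of 101 and a Session offset of 389:

select to_char(ziggy_seq.currval) seq_value,
       substr(to_char(ziggy_seq.currval), 1, 3) instance_offset,
       substr(to_char(ziggy_seq.currval), 4, 3) session_offset
from dual;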

If we create a similar table and procedure as previously:

SQL> create table ziggy (id number constraint ziggy_id_i primary key, name varchar2(42));

Table created.

SQL> create or replace procedure pop_ziggy as
begin
for i in 1..100000 loop
insert into ziggy values (ziggy_seq.nextval, 'DAVID BOWIE');
commit;
end loop;
end;
/

Procedure created.

And again run the procedure in 3 different sessions, this time using the Scalable Sequence:

SQL> exec pop_ziggy --- 3 sessions

PL/SQL procedure successfully completed.

SQL> analyze index ziggy_id_i validate structure;

Index analyzed.

SQL> select name, lf_blks, pct_used from index_stats;

NAME          LF_BLKS   PCT_USED
---------- ---------- ----------
ZIGGY_ID_I       1858         71

We notice a couple of key differences. Firstly, the number of leaf blocks is much greater at 1858 from the previous 672 leaf blocks. This is due in large part to the larger ID values being generated via the Scalable Sequence,  with fewer ID values now fitting in each leaf block. The second difference is the somewhat worse PCT_USED of 71% (previously 93%) due to the fewer 90-10 index block splits with the index entries now less likely to be the greatest value currently within the index. Only one of the 3 sessions is likely to now be inserting into the right hand most leaf block on the index. This also contributes to the increased number of leaf blocks as each leaf block has more free space on average than previously.

More on Scalable Sequences in Part II.

Europe June 2018 Seminars: Confirmed Venues for Munich and Zürich April 26, 2018

Posted by Richard Foote in Index Internals Seminar, Oracle Indexes.
add a comment


I’m very excited to confirm all venues for the European June 2018 dates of my popular “Oracle Indexing Internals and Best Practices” seminar.

This is a must-attend seminar of benefit not only to DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high performance Oracle-based applications. It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts). Full seminar content.

The last 44 people to have attended this seminar have all rated the seminar as a 5 out of 5 training experience !!

Places as usual are strictly limited with venues selected with small classes in mind for that intimate training experience, with plenty of time for questions and discussions.

When was the last time you attended an instructor-led training event with world class content you can immediately apply back at work to potentially significantly improve the performance of your Oracle databases/applications? If it’s been too long, here now is your chance to do something about it !!

Early Bird Rates are available until 25 May but I would recommend booking early to avoid disappointment. I don’t get the opportunity to present in these countries very often.

The European June 2018 locations and confirmed venues are as follows:

Munich, Germany 25-26 June 2018: (IntercityHotel München). Tickets and Registration Link

Zürich, Switzerland 28-29 June 2018: (Crowne Plaza Zürich). Tickets and Registration Link

 

If you have any questions, please contact me at richard@richardfooteconsulting.com.

Answer: Anything Wrong With Query Performance? (Red Right Hand) April 11, 2018

Posted by Richard Foote in 12c, Attribute Clustering, Clustering Factor, Oracle Indexes.
add a comment


I of course attract a highly clever readership :). As some have commented, for a query on a single table to require 1000+ consistent gets to retrieve just 1000 rows implies that each row needs to be accessed from a different table block. This in turn implies the Clustering Factor of this index is relatively bad and the associated index relatively inefficient.

If this query is very infrequently executed, then no real damage done and the index is likely a better alternative than a Full Table Scan.

However, if this query were executed very frequently (maybe 100’s of times per second) and featured as one of the top CPU-consuming queries in an AWR report, then you could be burning more CPU than necessary. Maybe a lot more CPU…

Improving database performance is of course desirable, but reducing a significant amount of CPU usage is always a good thing. For a start, you usually pay for database licenses and cloud subscriptions based on CPU consumption. The less CPU your systems use, the more head-room you have in case anything goes wrong, as running out of CPU usually means performance hell for your database systems. Less CPU means more time until you need to update your infrastructure, more database systems you can run in your current environment, more time until you need to pay for more database licenses, more time until you have to increase your cloud subscriptions, etc.

I have assisted many customers in significantly improving performance and in delaying IT investment costs by significantly reducing CPU wastage. Often this is based on improving queries that individually perform adequately, often when the ratio of rows returned to consistent gets/logical reads appears OK.

So in this particular example, although things are currently deemed hunky dory,  this query can potentially be significantly improved. The root issue here is an index that has a terrible Clustering Factor being used to retrieve a significant number of rows, while being executed a significant number of times.

If we look at the current Clustering Factor:

SQL> select index_name, clustering_factor from user_indexes
where table_name='MAJOR_TOM';

INDEX_NAME           CLUSTERING_FACTOR
-------------------- -----------------
MAJOR_TOM_CODE_I               2000000

At 2000000, it’s about as bad as it can get.

As I’ve discussed previously, Oracle now has a nice way of changing the clustering of a table, by adding a Clustering Attribute to the table (12.1) and then reorganising the table online (12.2):

SQL> alter table major_tom add clustering by linear order(code);

Table altered.

SQL> alter table major_tom move online;

Table altered.

If we look at the Clustering Factor of the index now:

SQL> select index_name, clustering_factor from user_indexes where table_name='MAJOR_TOM';

INDEX_NAME           CLUSTERING_FACTOR
-------------------- -----------------
MAJOR_TOM_CODE_I                  7322

It’s now about as good as it can get at just 7322.

If we now re-run the “problematic” query:

SQL> select * from major_tom where code=42;

1000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4132562429

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  | 1000 | 21000 |       9 (0) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | MAJOR_TOM        | 1000 | 21000 |       9 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN            | MAJOR_TOM_CODE_I | 1000 |       |       5 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
    0 recursive calls
    0 db block gets
   12 consistent gets
    0 physical reads
    0 redo size
26208 bytes sent via SQL*Net to client
  608 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
 1000 rows processed

The number of consistent gets has plummeted from 1006 to just 12, which is about as good as it gets when retrieving 1000 rows.

Of course the impact this change has on other queries on the table based on other columns needs to be carefully considered. But we have now potentially significantly reduced the overall CPU consumption of our database (especially if we tackle other problem queries in a similar manner).
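
A simple sanity check here (a sketch only) is to compare the Clustering Factor of every index on the table before and after the reorganisation, not just the index being targeted:

select index_name, clustering_factor, num_rows from user_indexes
where table_name='MAJOR_TOM';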

If you have attended my “Oracle Indexing Internals and Best Practices” seminar, you already know all this, as this is one of many key messages from the seminar 🙂

Question: Anything Wrong With Query Performance? (Straight To You) April 5, 2018

Posted by Richard Foote in Oracle Indexes, Performance Tuning.
8 comments


I have a query that runs pretty darn efficiently, here’s the setup:

SQL> create table major_tom (id number, code number, name varchar2(42));

Table created.

SQL> insert into major_tom select rownum, mod(rownum,2000)+1, 'DAVID BOWIE'
from dual connect by level <= 2000000;

2000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'MAJOR_TOM');

PL/SQL procedure successfully completed.

SQL> create index major_tom_code_i on major_tom(code);

Index created.

SQL> select * from major_tom where code=42;

1000 rows selected.

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 4132562429

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  | 1000 | 21000 |    1005 (0) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | MAJOR_TOM        | 1000 | 21000 |    1005 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN            | MAJOR_TOM_CODE_I | 1000 |       |       5 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
    0 recursive calls
    0 db block gets
 1006 consistent gets
    0 physical reads
    0 redo size
26208 bytes sent via SQL*Net to client
  607 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
1000 rows processed

So the query basically returns 1000 rows based on the CODE column and it does so using an index on CODE. The CBO has got the costings for this just about spot on. For 1000 rows returned, it does so with just 1006 consistent gets, which if you consider the index blocks that need to be accessed and the 1000 rows accessed, all seems quite reasonable.
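
As a rough sanity check of that line of thinking (a sketch only; the exact numbers will vary with your data and block size), you could look at how many index blocks are likely to be involved in such a range scan:

select blevel, leaf_blocks from user_indexes where index_name='MAJOR_TOM_CODE_I';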

If you look at the elapsed time of just 00:00:00.00, well you can’t really beat that.

None of the user base is complaining, users are more than happy with this performance.

So the question I have is why on earth would a DBA complain about the performance of this query?

Note: if you’ve attended my “Oracle Indexing Internals and Best Practices” seminar, you’re not allowed to answer 🙂

BIG NEWS: I’ve Been Selected On “Journey To Mars” Program !! April 1, 2018

Posted by Richard Foote in April Fools.
7 comments


HAPPY APRIL FOOLS !!

After many months of nervous waiting, knowing that my chances were extremely slim at best, I have just received my confirmation letter from NASA saying that I’ve been one of the lucky ones selected to participate in the “Journey To Mars” program !! I’m soooo damn excited !!!

Planning is now under way for what will be one of the most extraordinary experiences of my life, a one-way trip to Mars. I now have to stay in peak physical condition and be prepared to follow NASA’s very strict diet program. Lots of fruit and veg, a high protein intake and no more than 5 whiskies per week (If Crystal Palace ever win a game, that limit might stretch a tad). It’ll be tough, but I’m sure the motivation of the adventures that lie ahead will ensure I keep peak physical condition.

Really, I will only miss my beautiful wife, my 2 wonderful children and all my many friends. I’m allowed to take with me a few photos that I will cherish forever. On the plus side, my wife is rather excited about selling off my extensive David Bowie collection and spending the proceeds on some nice clothes to wear at the launch.

I have always dreamed of flying in space and viewing the Earth from above. But to be given this opportunity to be one of the first humans to actually set foot on another planet, well that’s just going to be truly incredible. Foote one of the first to set foot on Mars (doesn’t that sound apt), what an incredible piece of history and what an honour to be immortalised in this manner.

Life will of course be very tough on Mars. Not much human interaction (oh God, please make my fellow space explorers be semi attractive and intelligent), no more whiskey (except the few bottles I smuggle on board in water bottles), no more Crystal Palace (if they make Europe while I’m away, I’ll be really pissed), no more twitter (I’ll miss baiting all the US gun loving nut jobs) and no more Game of Thrones (who am I kidding, there’s no way George R. R. Martin will finish book 7 within the next 20 years). The thought of drinking my recycled urine for the rest of my life is rather daunting, but not too different to Johnnie Walker Reds really.

For those of you that know of my hatred of guns, this is one of the huge advantages of leaving Earth and moving to Mars. It has been mandated that there are to be no weapons of any kind taken on board. If anyone wants to repel any future corrupt government via an armed militia, then they can do so with sharpened pencils and ballpoint pens. Of course, if there really are any hostile aliens on Mars, we’re all stuffed.

Oracle is another advantage to moving to Mars. They don’t have a Cloud Data Centre on Mars and will likely not have one for quite some time. So no concerns about migrating to the cloud, autonomous databases and all that stuff. The mission database I’ll be responsible for will be on-prem (well, on spaceship and then on Mars but you get my point). So no versions of Oracle Database that disappear on me with little notice, but I do worry about having to raise an SR from Mars. Then again, considering the timeliness of Oracle Support these days, I’m hopeful it won’t make too much of a difference.

Of course, life just hasn’t been the same since the passing of the great David Bowie. I’ll be allowed to take my solar powered iPod with me so his music will still be with me. But wouldn’t it be wonderful if he really has moved on to Mars and I get the opportunity to see him play live again with the Spiders From Mars. I have my fingers crossed on that one.

Let me just say there’s a huge advantage to living on Mars that many don’t appreciate. A year on Mars is approximately 687 days, so that means it’s roughly twice as long as a year here on Earth. Therefore, while you poor buggers on Earth age say 30 years, I will only age about 15 Mars years. If I can just hold on for another 50 odd years, it means that I will be theoretically younger than my own children and will be able to ask them for pocket money and if I can borrow their car (tricky I know from Mars, but it’s the principle that matters).

So in a few brief years, I will be leaving this planet for good and setting up a new home and future on the planet Mars. Yes it will be a little scary, yes it will be extremely difficult and dangerous and yes I will miss many family members and friends who I will never see again. I will though be posting on my blog here with similar regularity, so don’t worry too much 🙂

HAPPY APRIL FOOLS !!

Richard Foote Consulting Seminars At Your Workplace !! March 28, 2018

Posted by Richard Foote in Index Internals Seminar, Oracle Indexes.
add a comment


If you have 6 or more people in your organisation who would be interested in attending a seminar, please contact me at richard@richardfooteconsulting.com.

I currently run my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar as public events, which has now been run in some 18 countries. I’m also at work on developing an exciting new seminar on the topic of Oracle Database Troubleshooting and Performance Tuning, which I hope to complete and present later in the year.

I also offer the opportunity for larger organisations to have my seminar run exclusively at their workplace. This offers the following benefits:

  • Potentially significant cost savings over public events
  • Possible for all interested parties within the organisation to attend and learn from the seminar, not just a selected few
  • No need to wait for a public event to come close to you, the seminar comes directly to you
  • No need for attendee travel expenses to go to the closest seminar event
  • Run at a convenient time when most members of the organisation can attend and when training budgets are available
  • Can be customised to focus on topics of most importance and interest to an organisation
  • Can potentially look at actual issues and specific concerns that currently impact an organisation
  • All attendees are in close proximity to their workplace in case they need to attend to any emergencies
  • Class synergies are maximised with all class discussions directly relevant to the organisation

The only on-site requirements are a room capable of holding all the attendees, with a PC projector and white board. All attendees receive both a hard and soft copy of all the course materials.

At a recent on-site customer event, all 20 attendees gave the seminar an overall rating of 5 out of 5, in part because of some of the advantages listed.

If you have 6 or more people in your organisation who would be interested in attending a seminar, please contact me at richard@richardfooteconsulting.com.

The following public seminars are currently scheduled to run in various European locations throughout June, with a customer on-site seminar already scheduled during this time:

Munich, Germany 25-26 June 2018: (IntercityHotel München). Tickets and Registration Link

Zürich, Switzerland 28-29 June 2018: (Crowne Plaza Zürich). Tickets and Registration Link

Announcement: Europe June 2018 Dates – Oracle Indexing Internals Seminar March 21, 2018

Posted by Richard Foote in Index Internals Seminar, Oracle Indexes.
5 comments


I’m very excited to announce some European June 2018 dates for my popular “Oracle Indexing Internals and Best Practices” seminar.

This is a must-attend seminar of benefit not only to DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high performance Oracle-based applications. It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts). Full seminar content.

The last 44 people to have attended this seminar have all rated the seminar as a 5 out of 5 training experience !!

Places as usual are strictly limited, with venues selected with small classes in mind for that intimate training experience and plenty of time for questions and discussions.

Early Bird Rates are currently available but I would recommend booking early to avoid disappointment.

The European June 2018 locations are as follows:

Munich, Germany 25-26 June 2018: (IntercityHotel München). Tickets and Registration Link

Zürich, Switzerland 28-29 June 2018: (Crowne Plaza Zürich). Tickets and Registration Link

ALL TICKETS ARE NOW AVAILABLE FOR PURCHASE !!

Please note all seminars are conducted in English.

If you have any questions, please contact me at richard@richardfooteconsulting.com.

Presenting At ODTUG Kscope18 Conference in Orlando, Florida 10-14 June 2018 March 20, 2018

Posted by Richard Foote in Oracle Indexes.
add a comment


I’m very excited to have a couple of papers accepted at this year’s ODTUG Kscope18 Conference in sunny and likely very hot Orlando, Florida between 10-14 June 2018.

I’m excited because I’ve been to a few of these conferences before and they have always been excellent events. As a mainly Oracle Database kinda guy, it’s always good to get a feel for what’s happening in the developer world, and this conference has always managed to get that perfect mix between DBA and Developer topics. In the future, I see understanding the requirements of the developer community as being even more crucial to the DBA role as cloud technologies advance.

However, I’m also excited as the conference this year is being held at the Walt Disney World Dolphin Resort, so it’s also an opportunity to catch up with my friends Mickey, Donald, Goofy and the rest of the gang !!

I have two presentations:

New Indexing Features Introduced in Oracle 12c Release 2 (and 18c): 11 June, Session 2 between 1:15pm and 2:15pm

Indexing and Clustering Data – Key to Developing High Performance and Scalable Apps: 14 June, Deep Dive Session between 9:30am and 11:00am

The rest of the time I’ll be attending some of the other amazing presentations, or going down Splash Mountain again 🙂

Hope to see many of you there.


Improve Data Clustering on Multiple Columns Concurrently (Two Suns in the Sunset) March 12, 2018

Posted by Richard Foote in 12c, Attribute Clustering, Clustering Factor, Online DDL, Oracle Indexes.
2 comments

I’ve had a couple of recent discussions around clustering and how if you attempt to improve the clustering of a table based on a column, you thereby ruin the current clustering that might exist for a different column. The common wisdom being you can only order the data one way and if you change the order, you might improve things for one column but totally stuff things up for another.

However, that’s not strictly correct. Depending on the characteristics of your data, you can potentially order (or interleave) data based on multiple columns concurrently. It’s quite possible to have good or good enough clustering on multiple columns and this is extremely important for indexes, as the efficiency of an index can be directly impacted by the clustering of data in the underlying table.

So to illustrate, I’m going to create a table that initially has terrible clustering on two unrelated columns (CODE and GRADE):

SQL> create table ziggy (id number, code number, grade number, name varchar2(42));

Table created.

SQL> insert into ziggy select rownum, mod(rownum, 100)+1, ceil(dbms_random.value(0,100)), 'ZIGGY STARDUST'
from dual connect by level <= 4000000;

4000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=> 'ZIGGY', method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> create index ziggy_code_i on ziggy(code);

Index created.

SQL> create index ziggy_grade_i on ziggy(grade);

Index created.

SQL> select index_name, clustering_factor, num_rows from user_indexes
where table_name='ZIGGY';

INDEX_NAME           CLUSTERING_FACTOR   NUM_ROWS
-------------------- ----------------- ----------
ZIGGY_CODE_I                   1748800    4000000
ZIGGY_GRADE_I                  1572829    4000000

So with values for both columns distributed all throughout the table, the Clustering Factor of both the CODE and GRADE indexes are both quite poor (values of 1748800 and 1572829 respectively). Even though both columns have 100 distinct values (and so a selectivity of 1%), the CBO will likely consider the indexes too inefficient to use:

SQL> select * from ziggy where code=42;

40000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2421001569

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU) | Time    |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       | 40000 | 1054K |   4985 (10) | 00:00:01|
| * 1 | TABLE ACCESS FULL | ZIGGY | 40000 | 1054K |   4985 (10) | 00:00:01|
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("CODE"=42)

Statistics
----------------------------------------------------------
       0 recursive calls
       0 db block gets
   20292 consistent gets
       0 physical reads
       0 redo size
 1058750 bytes sent via SQL*Net to client
   29934 bytes received via SQL*Net from client
    2668 SQL*Net roundtrips to/from client
       0 sorts (memory)
       0 sorts (disk)
  40000 rows processed

SQL> select * from ziggy where grade=42;

40257 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2421001569

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       | 40000 | 1054K |  5021 (10) | 00:00:01 |
| * 1 | TABLE ACCESS FULL | ZIGGY | 40000 | 1054K |  5021 (10) | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("GRADE"=42)

Statistics
----------------------------------------------------------
       0 recursive calls
       0 db block gets
   20307 consistent gets
       0 physical reads
       0 redo size
 1065641 bytes sent via SQL*Net to client
   30121 bytes received via SQL*Net from client
    2685 SQL*Net roundtrips to/from client
       0 sorts (memory)
       0 sorts (disk)
   40257 rows processed

So even though the CBO has got the row estimates just about spot on, in both cases a Full Table Scan was chosen.

Let’s create another table based on the table above but this time order the data in CODE column order:

SQL> create table ziggy2 as select * from ziggy order by code;

Table created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=> 'ZIGGY2', method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> create index ziggy2_code_i on ziggy2(code);

Index created.

SQL> create index ziggy2_grade_i on ziggy2(grade);

Index created.

SQL> select index_name, clustering_factor, num_rows from user_indexes
where table_name='ZIGGY2';

INDEX_NAME           CLUSTERING_FACTOR   NUM_ROWS
-------------------- ----------------- ----------
ZIGGY2_CODE_I                    17561    4000000
ZIGGY2_GRADE_I                 1577809    4000000

We can see that by doing so, we have significantly reduced the Clustering Factor of the CODE index (down from 1748800 to just 17561). The GRADE index though has changed little, as there’s little correlation between the CODE and GRADE columns.

If we now run the same query with the CODE based predicate:

SQL> select * from ziggy2 where code=42;

40000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 16801974

-----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows  | Bytes | Cost (%CPU) | Time     |
-----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               | 40000 | 1054K |     264 (4) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY2        | 40000 | 1054K |     264 (4) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY2_CODE_I | 40000 |       |      84 (5) | 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
       0 recursive calls
       0 db block gets
     273 consistent gets
       0 physical reads
       0 redo size
 1272038 bytes sent via SQL*Net to client
     685 bytes received via SQL*Net from client
       9 SQL*Net roundtrips to/from client
       0 sorts (memory)
       0 sorts (disk)
   40000 rows processed

The CBO has not only used the index, but the query is much more efficient as a result, with just 273 consistent gets required to retrieve 40000 rows.

However the query based on the GRADE predicate still uses a FTS:

SQL> select * from ziggy2 where grade=42;

40257 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1810052534

----------------------------------------------------------------------------
| Id | Operation         | Name   | Rows  | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |        | 40000 | 1054K |   4920 (10) | 00:00:01 |
|* 1 | TABLE ACCESS FULL | ZIGGY2 | 40000 | 1054K |   4920 (10) | 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("GRADE"=42)

Statistics
----------------------------------------------------------
      0 recursive calls
     11 db block gets
  17602 consistent gets
      0 physical reads
      0 redo size
 434947 bytes sent via SQL*Net to client
    696 bytes received via SQL*Net from client
     10 SQL*Net roundtrips to/from client
      0 sorts (memory)
      0 sorts (disk)
  40257 rows processed

Now if we decide that actually the query based on GRADE is far more important to the business, we could of course reorder the data again. The following is yet another table, this time based on the CODE sorted ZIGGY2 table, but inserted in GRADE column order:

SQL> create table ziggy3 as select * from ziggy2 order by grade;

Table created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=> 'ZIGGY3', method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> create index ziggy3_code_i on ziggy3(code);

Index created.

SQL> create index ziggy3_grade_i on ziggy3(grade);

Index created.

SQL> select index_name, clustering_factor, num_rows from user_indexes
where table_name='ZIGGY3';

INDEX_NAME           CLUSTERING_FACTOR   NUM_ROWS
-------------------- ----------------- ----------
ZIGGY3_CODE_I                    30231    4000000
ZIGGY3_GRADE_I                   17582    4000000

We notice we now have an excellent, very low Clustering Factor for the GRADE index (down to just 17582). But notice also the Clustering Factor for CODE. Although it has increased from 17561 to 30231, it’s nowhere near as bad as it was initially when it was a massive 1748800.

The point being that with the data already ordered on CODE, when Oracle inserted the data in GRADE order it effectively had the data already sub-ordered on CODE within each GRADE value. So we end up with perfect clustering on the GRADE column and “good enough” clustering on CODE as well.

If we now run the same queries again:

SQL> select * from ziggy3 where code=42;

40000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1004048030

-----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name          | Rows  | Bytes | Cost (%CPU) | Time     |
-----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |               | 40000 | 1054K |     392 (3) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY3        | 40000 | 1054K |     392 (3) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY3_CODE_I | 40000 |       |      84 (5) | 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
       0 recursive calls
       0 db block gets
     401 consistent gets
       0 physical reads
       0 redo size
 1272038 bytes sent via SQL*Net to client
     685 bytes received via SQL*Net from client
       9 SQL*Net roundtrips to/from client
       0 sorts (memory)
       0 sorts (disk)
   40000 rows processed

With the CODE based query, the CBO still uses the index and performance is still quite good with consistent gets having  gone up a tad (401 up from 273). However, we now have the scenario where the GRADE based query is also efficient with the index access also selected by the CBO:

SQL> select * from ziggy3 where grade=42;

40257 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 844233985

------------------------------------------------------------------------------------------------------
| Id | Operation                           | Name           | Rows  | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |                | 40000 | 1054K |     264 (4) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY3         | 40000 | 1054K |     264 (4) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY3_GRADE_I | 40000 |       |      84 (5) | 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("GRADE"=42)

Statistics
----------------------------------------------------------
       0 recursive calls
       0 db block gets
     278 consistent gets
       0 physical reads
       0 redo size
 1280037 bytes sent via SQL*Net to client
     696 bytes received via SQL*Net from client
      10 SQL*Net roundtrips to/from client
       0 sorts (memory)
       0 sorts (disk)
   40257 rows processed

We are relying here however on how Oracle actually loads the data for the non-sorted column, but we can guarantee good clustering on both of these columns by simply ordering the data on both columns. Here’s table number 4, with data explicitly sorted on both columns (the values of CODE sub-sorted within the ordering of GRADE):

SQL> create table ziggy4 as select * from ziggy3 order by grade, code;

Table created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=> 'ZIGGY4', method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> create index ziggy4_code_i on ziggy4(code);

Index created.

SQL> create index ziggy4_grade_i on ziggy4(grade);

Index created.

SQL> select index_name, clustering_factor, num_rows from user_indexes
where table_name='ZIGGY4';

INDEX_NAME           CLUSTERING_FACTOR   NUM_ROWS
-------------------- ----------------- ----------
ZIGGY4_CODE_I                    27540    4000000
ZIGGY4_GRADE_I                   17583    4000000

We notice we have a near perfect Clustering Factor on the GRADE column (just 17583) and a “good enough” Clustering Factor on the CODE column (27540).

With 12c Rel 2, we can effectively “fix” the original poorly clustered table online on both columns by adding an appropriate Clustering Attribute to the table (new in 12.1) and performing a subsequent Online table reorg (new in 12.2):

SQL> alter table ziggy add clustering by linear order (grade, code);

Table altered.

SQL> alter table ziggy move online;

Table altered.

SQL> select index_name, clustering_factor, num_rows from user_indexes
where table_name='ZIGGY';

INDEX_NAME           CLUSTERING_FACTOR   NUM_ROWS
-------------------- ----------------- ----------
ZIGGY_CODE_I                     27525    4000000
ZIGGY_GRADE_I                    17578    4000000

We now have the same excellent Clustering Factor values as we had in the previous example.

Depending on data characteristics, you could potentially use the Interleave Clustering Attribute for good enough Clustering Factor values on your multiple columns, rather than perfect clustering on specific columns.
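
A minimal sketch of what that might look like on the ZIGGY table (assuming the existing linear clustering attribute is dropped first; syntax as I understand it, and the resulting Clustering Factor values will depend entirely on your data):

alter table ziggy drop clustering;

alter table ziggy add clustering by interleaved order (grade, code);

alter table ziggy move online;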

So it is entirely possible to have the necessary data ordering you need for effective data accesses on multiple columns concurrently.

Trivadis Performance Days 2018 March 8, 2018

Posted by Richard Foote in Oracle Indexes.
add a comment


I’m very excited and privileged to be asked to present the opening and closing sessions at this year’s Trivadis Performance Training Days in Zurich, Switzerland on 19-20 September 2018. It’s one of the premier Oracle training events in Europe, one which I’ve always wanted to attend. This year has a fantastic lineup, so I’m really looking forward to it all. Other speakers include:

  • Christian Antognini
  • Jonathan Lewis
  • Kamil Stawiarski
  • Mauro Pagano
  • Roger MacNicol
  • Nigel Bayliss
  • Ludovico Caldara

For details of all the presentations and on how to enrol: https://www.trivadis.com/en/training/performance-days-2018-tvdpdays2018

Hope to see some of you there !!

My New David Bowie Blog March 7, 2018

Posted by Richard Foote in David Bowie.
add a comment


Just a short note to say I’ve started a new blog that focuses on the one and only David Bowie: https://davidbowiewordpresscom.wordpress.com/

Those of you that only vaguely know me, know I’m a huge David Bowie fan, so I thought it might be a bit of fun to write about his extraordinary body of work.

I’m starting off with a series that will look at each of his studio albums, where I’ll rank them from worst to best. It was a really tough job, picking out which superb album is better than others, but I think I’ve finally settled on my final order.

Join me in what will no doubt be a controversial journey as I initially discuss and explore his amazing 28 studio albums. I’ll publish my views on the first album, that which I consider to be his worst, very soon.

Oracle Database 18c: Introduction to Memoptimized RowStore (Memory of a Free Festival) March 5, 2018

Posted by Richard Foote in 18c, 18c New Features, Memoptimized RowStore, Oracle Indexes.
2 comments


One of the potentially cool new features introduced in Oracle Database 18c is the Memoptimized RowStore, designed to improve the performance and scalability of key-value based queries.

A new pool in the SGA called the Memoptimize Pool can be configured to store specific heap tables that you may wish to optimize, in a manner not dissimilar to how you may wish to store specific tables in the Keep Pool. The difference with this new pool however is that not only are the table blocks stored in the memory pool, but 25% of the pool is configured to automatically create and store a hash index based on the Primary Key (PK) of the table.

Accesses to the table via an equality predicate on the PK can then use the in-memory hash index to very efficiently access the required row, eliminating any physical IOs. In addition, Exadata Exadirect Oracle Net Protocol can leverage Remote Direct Memory Access (RDMA) to allow clients to directly access the required memory regions over the network, avoiding both CPU and OS overheads. The result is potentially very fast and very scalable key-value based queries.

A new parameter, MEMOPTIMIZE_POOL_SIZE, is used to size the pool as required. Note this pool is part of the SGA and so needs to be carefully considered when sizing the overall SGA. Currently, with the first release of 18c, MEMOPTIMIZE_POOL_SIZE cannot be dynamically changed and is not automatically sized or tuned.
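
For example, a hypothetical sizing might look something like the following (the 256M value is purely illustrative); as the parameter is static, an instance restart is required for it to take effect:

alter system set memoptimize_pool_size = 256M scope=spfile;

-- restart the instance for the new (static) parameter value to take effect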

Once you’ve set the MEMOPTIMIZE_POOL_SIZE accordingly, you can then create or alter tables to take advantage of the new pool.

The following demo was run on Oracle Live SQL:

I first attempt to create a new table using the new MEMOPTIMIZE FOR READ clause to allocate the table to this new pool:

create table bowie (id number, code number, name varchar2(42))
memoptimize for read;

ORA-62156: MEMOPTIMIZE FOR READ feature not allowed on segment with deferred storage

So a new table cannot be created with the default deferred segment creation setting. Let’s try again:

create table bowie (id number, code number, name varchar2(42))
segment creation immediate memoptimize for read;

ORA-62142: MEMOPTIMIZE FOR READ feature requires NOT DEFERRABLE PRIMARY KEY constraint on the table

So to use the Memoptimized RowStore, a table must have a PK. These are the columns on which the automatic in-memory hash index is based.

create table bowie (id number constraint bowie_pk primary key, code number, name varchar2(42))
segment creation immediate memoptimize for read;

Table created.

Success. So this table has been created to use the Memoptimized RowStore.

You can also alter existing tables to make use of the Memoptimized RowStore:

create table bowie2 (id number constraint bowie2_pk primary key, code number, name varchar2(42));

Table created.
insert into bowie2 select rownum, mod(rownum,1000), 'DAVID BOWIE' from dual connect by level <=100000;

100000 row(s) inserted.

commit;

Statement processed.

alter table bowie2 memoptimize for read;

Table altered.

For tables that have just been altered to use the Memoptimized RowStore, you can load the table data into the pool via the new DBMS_MEMOPTIMIZE package:

exec dbms_memoptimize.populate(schema_name=>'SQL_GLGTFSMUBLHEUOFHPRXBRHITF', table_name=>'BOWIE');

ORA-62138: MEMOPTIMIZE memory area does not exist. ORA-06512: at "SYS.DBMS_MEMOPTIMIZE", line 10
ORA-06512: at "SYS.DBMS_MEMOPTIMIZE", line 113
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_SQL", line 1721

So it appears from the error message that the Memoptimized RowStore is not currently enabled on Oracle Live SQL.

Which makes some sense as it would be very difficult  to size the MEMOPTIMIZE_POOL_SIZE parameter appropriately if any old bugger can create objects in the pool.

So it means we can’t use the Oracle Live SQL environment to test and play around with this new feature, but it doesn’t stop us from exploring a little in how this feature would work.

Let’s now populate the original table:

insert into bowie select rownum, mod(rownum,1000), 'DAVID BOWIE' from dual connect by level <=100000;

100000 row(s) inserted.

commit;

Statement processed.

 

And collect statistics:

exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE');

Statement processed.

In much the same way as the Oracle Database In-Memory option can’t guarantee that all objects assigned to the In-Memory pool will fit in the allocated pool set by INMEMORY_SIZE, the same goes for the Memoptimized RowStore. Therefore the CBO can only assume the tables reside in the memory structures to which they’re allocated, and only at run time decide if it needs to access the required data elsewhere (e.g. via the Buffer Cache, via a database index rather than the in-memory hash index, etc.).

Note both the commands to create a table and to alter a table to use the Memoptimized RowStore worked, even though the necessary pool doesn’t appear to have been allocated in the database.

So let’s investigate what the CBO does with a table assigned to the Memoptimized RowStore.

As the PK will automatically have an in-memory hash index created within the Memoptimized RowStore, does Oracle still automatically create a database index as well:

select index_name, blevel, leaf_blocks from user_indexes where table_name='BOWIE';

INDEX_NAME     BLEVEL LEAF_BLOCKS
---------- ---------- -----------
BOWIE_PK            1         187

So yes, an index is still automatically created to police the PK constraint if a table is assigned to the Memoptimized RowStore.

So let’s have a look at the execution plan for a query based on a PK equality predicate:

select * from bowie where id=42;

---------------------------------------------------------------------------------------------------
| Id | Operation                              | Name     | Rows | Bytes | Cost (%CPU) | Time      |
---------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                       |          |    1 |    20 |       2 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID READ OPTIM | BOWIE    |    1 |    20 |       2 (0) |  00:00:01 |
|* 2 | INDEX UNIQUE SCAN READ OPTIM           | BOWIE_PK |    1 |       |       1 (0) |  00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=42)

So we have a couple of new execution plan steps here: INDEX UNIQUE SCAN READ OPTIM, which states the in-memory Hash Index will be accessed to find the location of the required row, and TABLE ACCESS BY INDEX ROWID READ OPTIM, which states the table will then be accessed via the rowid obtained from the in-memory Hash Index.

If we run a query that uses a BETWEEN predicate on the PK column:

select * from bowie where id between 42 and 50;

------------------------------------------------------------------------------------------------
| Id | Operation                           | Name     | Rows | Bytes | Cost (%CPU) | Time      |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |          |   10 |   200 |       3 (0) |  00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE    |   10 |   200 |       3 (0) |  00:00:01 |
|* 2 | INDEX RANGE SCAN                    | BOWIE_PK |   10 |       |       2 (0) |  00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID">=42 AND "ID"<=50)

We notice the Memoptimized RowStore is not used, just the database PK index via a standard Index Range Scan.

If we run a query that the CBO knows will only retrieve 2 rows via an IN clause:

select * from bowie where id in (42, 442);

-----------------------------------------------------------------------------------------
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU) | Time      |
-----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |          |    2 |    40 |       3 (0) |  00:00:01 |
|  1 | INLIST ITERATOR             |          |      |       |             |           |
|  2 | TABLE ACCESS BY INDEX ROWID | BOWIE    |    2 |    40 |       3 (0) |  00:00:01 |
|* 3 | INDEX UNIQUE SCAN           | BOWIE_PK |    2 |       |       2 (0) |  00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("ID"=42 OR "ID"=442)

Again, the Memoptimized RowStore is not used, just the database PK index.

The Memoptimized RowStore is only used on queries with a single equality predicate on the table. Even a query with an equality predicate in combination with another predicate on the same table (odd as that may be) will result in the Memoptimized RowStore not being used:

select * from bowie where id=42 and name='DAVID BOWIE';

----------------------------------------------------------------------------------------
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |          |    1 |    21 |       2 (0) | 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID | BOWIE    |    1 |    21 |       2 (0) | 00:00:01 |
|* 2 | INDEX UNIQUE SCAN           | BOWIE_PK |    1 |       |       1 (0) | 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("NAME"='DAVID BOWIE')
2 - access("ID"=42)

 

So the Memoptimized RowStore is only considered by the CBO, for queries based on a singular PK=value predicate on a table.

More to come once I have my own database to play in 🙂

Oracle Database 18c Indexing Related New Features (New Angels of Promise) February 27, 2018

Posted by Richard Foote in Oracle Indexes.
2 comments


Although the recently released Oracle 18c Database is really just 12.2.0.2 under the covers, there are a few little features and enhancements that are of interest from an indexing perspective. These include:

  • Memory Optimized Rowstore
  • Scalable Sequences
  • Oracle Text indexing enhancements, such as automatic background index maintenance and new optimize index options
  • JSON Search Index key name limit increased from 64 to 255 characters
  • Spatial Index support with Oracle Database Sharding
  • Golden Gate automatic conflict detect and resolution (CDR) support on tables with just Unique Constraints and/or Unique Indexes (previously required a Primary Key)
  • Modifying the partitioning strategy of a table using a single online DDL operation, which can be extremely useful with regard to modifying related indexing strategies
  • Online Merging of partitions and subpartitions (and the impact on associated indexes), as sketched below
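
As a quick sketch of that last point (hypothetical table and partition names; check the 18c documentation for the exact syntax options), a partition merge can now be performed online with the associated indexes maintained throughout:

alter table sales merge partitions sales_q1_2018, sales_q2_2018
  into partition sales_h1_2018 online;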

 

For the complete list of all that’s new in the Oracle 18c Database, check out the online manuals.

If you don’t have an Oracle Cloud account or an Oracle Engineered System to play around with the 18c database, you can have a play on Oracle Live SQL.

I’ll be blogging about these new features and enhancements (and various others) in the coming weeks, so keep an eye out.

European Indexing Internals Seminar Events: Please Help Me Select (Station To Station) February 20, 2018

Posted by Richard Foote in Oracle Indexes.
add a comment


I’m currently determining which European countries to initially target for a series of my 2 day Oracle Indexing Internals and Best Practices seminars tentatively scheduled for the late May / early June time frame.

The cost for the 2 day seminar is to be confirmed but will be approximately 1200 Euros (plus any local taxes), which includes a guaranteed small class setting, lunch and both hard and soft copy of the exclusive 800+ page seminar materials.

To help me decide which 4 countries to initially target, can you please leave me a comment or contact me directly (richard@richardfooteconsulting.com) if you would be interested in attending a seminar in a particular country.

This is a must-attend seminar of benefit not only to DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high performance Oracle-based applications. It’s a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts).

I’ve now run this seminar in some 20 countries to critical acclaim. The last 30 people that have attended the seminar have scored the seminar a straight 5 out of 5.

For full details on seminar content, see my Index Seminar page.

The more people who express an interest in attending the seminar in a particular country, the more likely I’ll select the country as part of the initial May/June series.

Don’t hesitate to contact me if you have any questions regarding the seminar.

“The European cannon is here” !!