18c Scalable Sequences Part I (Saviour Machine) April 30, 2018
Posted by Richard Foote in 18c New Features, Oracle Indexes, Scalable Sequences.
One of the problems with sequentially populating the value of a Primary Key (or some such) is that it can cause contention, especially in relation to indexes. With an indexed value that monotonically increases, the right-hand most index leaf block is continually being accessed, which results in a "hot block", which in turn results in buffer block contention. This can be extremely problematic in RAC environments, with this leaf block continually bouncing between RAC instances, resulting in excessive cluster wait events.
There are a number of possible methods to address this contention, including the use of Reverse Key Indexes, Hash Partitioned Indexes and the caching of Sequence values, through to RAC-aware Sequence implementations.
Oracle Database 18c has introduced a new option to resolve this issue, Scalable Sequences. These are sequences that have an automatically applied 6-digit prefix, consisting of a 3-digit Instance offset and a 3-digit Session offset. This ensures that different Instances no longer contend for the same index leaf block and that sessions within a particular instance also no longer contend for the same index leaf block.
Reduced contention therefore means a far more scalable environment enabling faster inserts of data, although it can come with some disadvantages worth noting.
If we first look at just a standard Sequence:
SQL> create sequence bowie_seq;

Sequence created.

SQL> select sequence_name, scale_flag, extend_flag from user_sequences where sequence_name='BOWIE_SEQ';

SEQUENCE_NAME   SCALE_FLAG EXTEND_FLAG
--------------- ---------- ------------
BOWIE_SEQ       N          N

SQL> select bowie_seq.nextval from dual;

   NEXTVAL
----------
         1
We notice that just a standard number is generated from the sequence. This number can be controlled somewhat by whether we want to guarantee ordering, whether we want to cache a series of sequence values, how they get incremented, etc.
If we create a table with a Primary Key ID column:
SQL> create table bowie (id number constraint bowie_id_i primary key, name varchar2(42));

Table created.
We next create a procedure to populate the table with a bunch of rows, using the normal sequence for the ID column:
SQL> create or replace procedure pop_bowie as
begin
  for i in 1..100000 loop
    insert into bowie values (bowie_seq.nextval, 'DAVID BOWIE');
    commit;
  end loop;
end;
/

Procedure created.
We then run this procedure from 3 different sessions:
SQL> exec pop_bowie    -- run concurrently from 3 sessions

PL/SQL procedure successfully completed.
If we look at the data from INDEX_STATS:
SQL> analyze index bowie_id_i validate structure;

Index analyzed.

SQL> select name, lf_blks, pct_used from index_stats;

NAME          LF_BLKS   PCT_USED
---------- ---------- ----------
BOWIE_ID_I        672         93
We notice the ID index has 672 leaf blocks and a relatively high 93% PCT_USED, primarily because the majority of index block splits have been 90-10 splits, the newly inserted value more often than not being the largest in the index and so causing the right-most index leaf block to split.
Let’s now perform the same demo, but this time with an 18c Scalable Sequence. A Scalable Sequence is simply defined with the new SCALE clause:
SQL> create sequence ziggy_seq scale;

Sequence created.

SQL> select sequence_name, scale_flag, extend_flag from user_sequences where sequence_name='ZIGGY_SEQ';

SEQUENCE_NAME   SCALE_FLAG EXTEND_FLAG
--------------- ---------- ------------
ZIGGY_SEQ       Y          N
If we look at a default sequence value now:
SQL> select ziggy_seq.nextval from dual;

                           NEXTVAL
----------------------------------
      1013890000000000000000000001
We notice the value is now much larger by default (28 digits), with the first 6 digits being the assigned prefix (as previously mentioned, the first 3 digits are an Instance offset value and the next 3 digits a Session offset value). The remaining digits hold the actual incrementing sequence value, zero-padded (here ending in 1).
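To get a feel for how such a 28-digit value is composed, here's a rough Python sketch (my own illustration, not Oracle code) that rebuilds the demo value above, assuming an instance number of 1 and a session offset of 389 (inferred from the output):

```python
def scalable_nextval(instance_id, session_id, seq_value, max_digits=28):
    """Compose a scalable sequence value from its three parts:
    a 3-digit instance offset, a 3-digit session offset, and the
    underlying sequence number zero-padded to the remaining digits."""
    iid = (instance_id % 100) + 100   # 3-digit instance offset (100-199)
    sid = session_id % 1000           # 3-digit session offset (000-999)
    seq_digits = max_digits - 6       # digits left for the sequence number
    return int(f"{iid:03d}{sid:03d}{seq_value:0{seq_digits}d}")

# Instance 1, session offset 389, first sequence value
print(scalable_nextval(1, 389, 1))  # 1013890000000000000000000001
```

Note how the prefix guarantees that values generated by different instances and sessions fall into entirely different numeric ranges, and hence different parts of the index.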
If we create a similar table and procedure as previously:
SQL> create table ziggy (id number constraint ziggy_id_i primary key, name varchar2(42));

Table created.

SQL> create or replace procedure pop_ziggy as
begin
  for i in 1..100000 loop
    insert into ziggy values (ziggy_seq.nextval, 'DAVID BOWIE');
    commit;
  end loop;
end;
/

Procedure created.
And again run the procedure in 3 different sessions, this time using the Scalable Sequence:
SQL> exec pop_ziggy    -- run concurrently from 3 sessions

PL/SQL procedure successfully completed.

SQL> analyze index ziggy_id_i validate structure;

Index analyzed.

SQL> select name, lf_blks, pct_used from index_stats;

NAME          LF_BLKS   PCT_USED
---------- ---------- ----------
ZIGGY_ID_I       1858         71
We notice a couple of key differences. Firstly, the number of leaf blocks is much greater at 1858, up from the previous 672 leaf blocks. This is due in large part to the larger ID values being generated via the Scalable Sequence, with fewer ID values now fitting in each leaf block. The second difference is the somewhat worse PCT_USED of 71% (previously 93%), due to fewer 90-10 index block splits, with new index entries now less likely to be the greatest value currently within the index. Only one of the 3 sessions is now likely to be inserting into the right-hand most leaf block of the index. This also contributes to the increased number of leaf blocks, as each leaf block has more free space on average than previously.
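The effect of the split behaviour on PCT_USED can be illustrated with a toy Python model (my own gross simplification, not Oracle's actual split algorithm): monotonic inserts split the right-most block 90-10 style, leaving full blocks behind, while scattered inserts split blocks 50-50, leaving them roughly two-thirds full on average:

```python
import bisect, random

def simulate(keys, cap=100):
    """Toy leaf-block model: each block holds at most `cap` sorted keys.
    A block overflowing on its maximum key splits 90-10 style (the new
    block takes just the new key); any other overflow splits 50-50."""
    blocks = [[]]            # each block is a sorted list of keys
    lows = [float('-inf')]   # low key of each block, for routing inserts
    for k in keys:
        i = bisect.bisect_right(lows, k) - 1
        bisect.insort(blocks[i], k)
        if len(blocks[i]) > cap:
            if k == blocks[i][-1]:              # right-most insert: 90-10
                new = [blocks[i].pop()]
            else:                               # mid-block insert: 50-50
                new = blocks[i][cap // 2:]
                blocks[i] = blocks[i][:cap // 2]
            blocks.insert(i + 1, new)
            lows.insert(i + 1, new[0])
    pct = 100 * sum(len(b) for b in blocks) / (cap * len(blocks))
    return len(blocks), round(pct)

random.seed(0)
mono = simulate(range(10000))                        # monotonic, like BOWIE_SEQ
rand = simulate(random.sample(range(10**9), 10000))  # scattered, like prefixed values
print(mono)  # (100, 100): minimal blocks, ~100% used
print(rand)  # more blocks, noticeably lower pct_used
```

The exact numbers are artificial, but the direction matches the demo: scattered inserts produce more leaf blocks at a lower average utilisation.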
More on Scalable Sequences in Part II.
Europe June 2018 Seminars: Confirmed Venues for Munich and Zürich April 26, 2018
Posted by Richard Foote in Index Internals Seminar, Oracle Indexes.
I’m very excited to confirm all venues for the European June 2018 dates of my popular “Oracle Indexing Internals and Best Practices” seminar.
This is a must attend seminar of benefit not only to DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high performance Oracle-based applications. It's a fun, but intense, content-rich seminar that is suitable for people of all experience levels (from beginners to seasoned Oracle experts). Full seminar content.
The last 44 people to have attended this seminar have all rated the seminar as a 5 out of 5 training experience !!
Places as usual are strictly limited with venues selected with small classes in mind for that intimate training experience, with plenty of time for questions and discussions.
When was the last time you attended an instructor-led training event with world class content you can immediately apply back at work to potentially significantly improve the performance of your Oracle databases/applications? If it’s been too long, here now is your chance to do something about it !!
Early Bird Rates are available until 25 May but I would recommend booking early to avoid disappointment. I don’t get the opportunity to present in these countries very often.
The European June 2018 locations and confirmed venues are as follows:
Munich, Germany 25-26 June 2018: (IntercityHotel München). Tickets and Registration Link
Zürich, Switzerland 28-29 June 2018: (Crowne Plaza Zürich). Tickets and Registration Link
If you have any questions, please contact me at richard@richardfooteconsulting.com.
Answer: Anything Wrong With Query Performance? (Red Right Hand) April 11, 2018
Posted by Richard Foote in 12c, Attribute Clustering, Clustering Factor, Oracle Indexes.
I of course attract a highly clever readership :). As some have commented, for a single table to require 1000+ consistent gets to retrieve 1000 rows implies that each row needs to be accessed from a different block. This in turn implies that the Clustering Factor of this index is relatively bad and the associated index relatively inefficient.
If this query is very infrequently executed, then no real damage done and the index is likely a better alternative than a Full Table Scan.
However, if this query was executed very frequently (maybe 100's of times per second) and featured as one of the top CPU-consuming queries in an AWR report, then you could be burning more CPU than necessary. Maybe a lot more CPU…
Improving database performance is of course desirable, and reducing a significant amount of CPU usage is always a good thing. For a start, you usually pay database licenses and cloud subscriptions based on CPU consumption. The less CPU your systems use, the more headroom you have in case anything goes wrong, as running out of CPU usually means performance hell for your database systems. Less CPU means more time until you need to upgrade your infrastructure, more database systems you can run in your current environment, more time until you need to pay for additional database licenses, more time until you have to increase your cloud subscriptions, etc.
I have assisted many customers in significantly improving performance and in delaying IT investment costs by significantly reducing CPU wastage. Often this is based on improving queries that individually perform adequately, even when the ratio of rows returned to consistent gets/logical reads appears OK.
So in this particular example, although things are currently deemed hunky dory, this query can potentially be significantly improved. The root issue here is an index that has a terrible Clustering Factor being used to retrieve a significant number of rows, while being executed a significant number of times.
If we look at the current Clustering Factor:
SQL> select index_name, clustering_factor from user_indexes where table_name='MAJOR_TOM';

INDEX_NAME           CLUSTERING_FACTOR
-------------------- -----------------
MAJOR_TOM_CODE_I               2000000
At 2000000, it’s about as bad as it can get.
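For those unfamiliar with how the Clustering Factor is derived: it's effectively a count of how often the table block changes as you walk the index in key order. A small Python sketch (using made-up, scaled-down numbers rather than the actual MAJOR_TOM data) shows why a cyclic CODE column produces the worst case, and why re-clustering fixes it:

```python
def clustering_factor(index_entries):
    """Walk (key, table_block) entries in index-key order and count
    how often the table block changes from one entry to the next."""
    cf, prev_block = 0, None
    for key, table_block in sorted(index_entries):
        if table_block != prev_block:
            cf += 1
            prev_block = table_block
    return cf

ROWS, PER_BLOCK = 20000, 273   # scaled-down example: ~273 rows per block

# As originally loaded: row i has code i % 2000 and sits in block i // PER_BLOCK,
# so consecutive index keys point to widely separated table blocks.
original = [(i % 2000, i // PER_BLOCK) for i in range(ROWS)]

# After CLUSTERING BY LINEAR ORDER (code) and an online move: rows are
# physically re-stored in code order.
reordered = [(code, pos // PER_BLOCK)
             for pos, (code, _) in enumerate(sorted(original))]

print(clustering_factor(original))   # 20000: every next key hops to a new block
print(clustering_factor(reordered))  # 74: roughly the number of table blocks
```

The worst case equals the number of rows and the best case approaches the number of table blocks, exactly the pattern seen in the 2000000 vs 7322 figures.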
As I've discussed previously, Oracle now has a nice way of changing the clustering of a table, by adding a Clustering Attribute to the table (12.1) and then reorganising the table online (12.2):
SQL> alter table major_tom add clustering by linear order(code);

Table altered.

SQL> alter table major_tom move online;

Table altered.
If we look at the Clustering Factor of the index now:
SQL> select index_name, clustering_factor from user_indexes where table_name='MAJOR_TOM';

INDEX_NAME           CLUSTERING_FACTOR
-------------------- -----------------
MAJOR_TOM_CODE_I                  7322
It’s now about as good as it can get at just 7322.
If we now re-run the “problematic” query:
SQL> select * from major_tom where code=42;

1000 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 4132562429

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  | 1000 | 21000 |       9 (0) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | MAJOR_TOM        | 1000 | 21000 |       9 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN            | MAJOR_TOM_CODE_I | 1000 |       |       5 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CODE"=42)

Statistics
----------------------------------------------------------
          0 recursive calls
          0 db block gets
         12 consistent gets
          0 physical reads
          0 redo size
      26208 bytes sent via SQL*Net to client
        608 bytes received via SQL*Net from client
          2 SQL*Net roundtrips to/from client
          0 sorts (memory)
          0 sorts (disk)
       1000 rows processed
The number of consistent gets has plummeted from 1006 to just 12, which is about as good as it gets when retrieving 1000 rows.
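As a rough back-of-the-envelope check (my own approximation; the half-dozen index block gets is an assumed constant, not a measured figure), both the before and after consistent gets line up with the Clustering Factor:

```python
def approx_consistent_gets(rows_fetched, clustering_factor, table_rows, index_gets=6):
    """Very rough estimate: table block gets scale with the share of the
    Clustering Factor attributable to the rows fetched, plus a handful
    of index block gets (root/branch/leaf)."""
    return index_gets + round(rows_fetched * clustering_factor / table_rows)

# Before re-clustering: CF of 2,000,000 over 2,000,000 rows -> ~1 block get per row
print(approx_consistent_gets(1000, 2_000_000, 2_000_000))  # 1006, as observed

# After re-clustering: CF of 7,322 -> the 1000 rows share a handful of blocks
print(approx_consistent_gets(1000, 7322, 2_000_000))       # 10, close to the 12 observed
```

The model is crude, but it shows why the Clustering Factor, not the row count, dominates the cost of this range scan.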
Of course the impact this change has on other queries on the table based on other columns needs to be carefully considered. But we have now potentially significantly reduced the overall CPU consumption of our database (especially if we tackle other problem queries in a similar manner).
If you have attended my "Oracle Indexing Internals and Best Practices" seminar, you already know all this, as it's one of many key messages from the seminar 🙂
Question: Anything Wrong With Query Performance? (Straight To You) April 5, 2018
Posted by Richard Foote in Oracle Indexes, Performance Tuning.
I have a query that runs pretty darn efficiently, here’s the setup:
SQL> create table major_tom (id number, code number, name varchar2(42));

Table created.

SQL> insert into major_tom select rownum, mod(rownum,2000)+1, 'DAVID BOWIE' from dual connect by level <= 2000000;

2000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'MAJOR_TOM');

PL/SQL procedure successfully completed.

SQL> create index major_tom_code_i on major_tom(code);

Index created.

SQL> select * from major_tom where code=42;

1000 rows selected.

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 4132562429

------------------------------------------------------------------------------------------------
| Id | Operation                   | Name             | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                  | 1000 | 21000 |    1005 (0) | 00:00:01 |
|  1 | TABLE ACCESS BY INDEX ROWID | MAJOR_TOM        | 1000 | 21000 |    1005 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN            | MAJOR_TOM_CODE_I | 1000 |       |       5 (0) | 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CODE"=42)

Statistics
----------------------------------------------------------
          0 recursive calls
          0 db block gets
       1006 consistent gets
          0 physical reads
          0 redo size
      26208 bytes sent via SQL*Net to client
        607 bytes received via SQL*Net from client
          2 SQL*Net roundtrips to/from client
          0 sorts (memory)
          0 sorts (disk)
       1000 rows processed
So the query basically returns 1000 rows based on the CODE column and it does so using an index on CODE. The CBO has got the costings for this just about spot on. For 1000 rows returned, it does so with just 1006 consistent gets, which if you consider the index blocks that need to be accessed and the 1000 rows accessed, all seems quite reasonable.
If you look at the elapsed time of just 00:00:00.00, well you can’t really beat that.
None of the user base is complaining, users are more than happy with this performance.
So the question I have is why on earth would a DBA complain about the performance of this query?
Note: if you’ve attended my “Oracle Indexing Internals and Best Practices” seminar, you’re not allowed to answer 🙂
BIG NEWS: I’ve Been Selected On “Journey To Mars” Program !! April 1, 2018
Posted by Richard Foote in April Fools.
HAPPY APRIL FOOLS !!
After many months of nervous waiting, knowing that my chances were extremely slim at best, I have just received my confirmation letter from NASA saying that I’ve been one of the lucky ones selected to participate in the “Journey To Mars” program !! I’m soooo damn excited !!!
Planning is now under way for what will be one of the most extraordinary experiences of my life, a one-way trip to Mars. I now have to stay in peak physical condition and be prepared to follow NASA's very strict diet program. Lots of fruit and veg, a high protein intake and no more than 5 whiskies per week (if Crystal Palace ever win a game, that limit might stretch a tad). It'll be tough, but I'm sure the motivation of the adventures that lie ahead will ensure I maintain peak physical condition.
Really, I will only miss my beautiful wife, my 2 wonderful children and all my many friends. I’m allowed to take with me a few photos that I will cherish forever. On the plus side, my wife is rather excited about selling off my extensive David Bowie collection and spending the proceeds on some nice clothes to wear at the launch.
I have always dreamed of flying in space and viewing the Earth from above. But to be given this opportunity to be one of the first humans to actually set foot on another planet, well that’s just going to be truly incredible. Foote one of the first to set foot on Mars (doesn’t that sound apt), what an incredible piece of history and what an honour to be immortalised in this manner.
Life will of course be very tough on Mars. Not much human interaction (oh God, please make my fellow space explorers be semi attractive and intelligent), no more whiskey (except the few bottles I smuggle on board in water bottles), no more Crystal Palace (if they make Europe while I’m away, I’ll be really pissed), no more twitter (I’ll miss baiting all the US gun loving nut jobs) and no more Game of Thrones (who am I kidding, there’s no way George R. R. Martin will finish book 7 within the next 20 years). The thought of drinking my recycled urine for the rest of my life is rather daunting, but not too different to Johnnie Walker Reds really.
For those of you that know of my hatred of guns, this is one of the huge advantages of leaving Earth and moving to Mars. It has been mandated that no weapons of any kind are to be taken on board. If anyone wants to repel any future corrupt government via an armed militia, then they can do so with sharpened pencils and ballpoint pens. Of course, if there really are any hostile aliens on Mars, we're all stuffed.
Oracle provides another advantage to moving to Mars. They don't have a Cloud Data Centre on Mars and likely won't have one for quite some time. So no concerns about migrating to the cloud, autonomous databases and all that stuff. The mission database I'll be responsible for will be on-prem (well, on-spaceship and then on-Mars, but you get my point). So no versions of Oracle Database that disappear on me with little notice, though I do worry about having to raise an SR from Mars. Then again, considering the timeliness of Oracle Support these days, I'm hopeful it won't make too much of a difference.
Of course, life just hasn’t been the same since the passing of the great David Bowie. I’ll be allowed to take my solar powered iPod with me so his music will still be with me. But wouldn’t it be wonderful if he really has moved on to Mars and I get the opportunity to see him play live again with the Spiders From Mars. I have my fingers crossed on that one.
Let me just say there's a huge advantage to living on Mars that many don't appreciate. A year on Mars is approximately 687 days, so it's roughly twice as long as a year here on Earth. Therefore, while you poor buggers on Earth age say 30 years, I will only age about 15 Mars years. If I can just hold on for another 50 odd years, it means that I will be theoretically younger than my own children and will be able to ask them for pocket money and if I can borrow their car (tricky I know from Mars, but it's the principle that matters).
So in a few brief years, I will be leaving this planet for good and setting up a new home and future on the planet Mars. Yes it will be a little scary, yes it will be extremely difficult and dangerous and yes I will miss many family members and friends who I will never see again. I will though be posting on my blog here with similar regularity, so don’t worry too much 🙂