
Oracle 19c Automatic Indexing: Data Skew Fixed By Baselines Part II (Sound And Vision) September 28, 2020

Posted by Richard Foote in 19c, 19c New Features, Automatic Indexing, Autonomous Data Warehouse, Autonomous Database, Autonomous Transaction Processing, Baselines, CBO, Data Skew, Exadata, Explain Plan For Index, Full Table Scans, Histograms, Index Access Path, Index statistics, Oracle, Oracle Blog, Oracle Cloud, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Oracle Statistics, Oracle19c, Performance Tuning.

 

In my previous post, I discussed how the Automatic Indexing task, by using Dynamic Sampling Level=11, can determine the correct query cardinality estimates and so assumes the CBO will likewise determine the correct cardinality estimates and NOT use an index if it would cause performance to regress.

However, if other database sessions DON’T use Dynamic Sampling at the same Level=11 and hence do NOT determine correct cardinality estimates, newly created Automatic Indexes might get used inappropriately by the CBO and result in inefficient execution plans.

Likewise, with incorrect CBO cardinality estimates, it might also be possible for newly created Automatic Indexes to NOT be used when they should be (as I’ve discussed previously).

These are potential issues if the Dynamic Sampling value differs between the Automatic Indexing task and other database sessions.
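
As a point of reference, the Dynamic Sampling level used by ordinary sessions is governed by the optimizer_dynamic_sampling parameter (which defaults to 2), whereas the Automatic Indexing task runs with Level=11. A minimal sketch of how you might check the current setting and, for a single test session, align it with the Automatic Indexing task (assuming you have the necessary privileges):

SQL> show parameter optimizer_dynamic_sampling

SQL> alter session set optimizer_dynamic_sampling = 11;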

One potential way to make things more consistent, and to see how Automatic Indexing behaves if it detects an execution plan where the CBO would use an Automatic Index that causes performance regression, is to disable Dynamic Sampling within the Automatic Indexing task.

This can be easily achieved by using the following hint, which effectively disables Dynamic Sampling for the previously problematic query:

SQL> select /*+ dynamic_sampling(0) */ * from space_oddity where code in (190000, 170000, 150000, 130000, 110000, 90000, 70000, 50000, 30000, 10000);

1000011 rows selected.

Execution Plan
----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |              |  1005K|   135M| 11411   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| SPACE_ODDITY |  1005K|   135M| 11411   (1)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("CODE"=10000 OR "CODE"=30000 OR "CODE"=50000 OR
           "CODE"=70000 OR "CODE"=90000 OR "CODE"=110000 OR "CODE"=130000 OR
           "CODE"=150000 OR "CODE"=170000 OR "CODE"=190000)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      41169  consistent gets
          0  physical reads
          0  redo size
   13535504  bytes sent via SQL*Net to client
       2705  bytes received via SQL*Net from client
        202  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1000011  rows processed

 

The query currently has accurate cardinality estimates (1005K vs. the 1000011 rows actually returned), but only because we currently have a histogram in place on the CODE column. As such, the query correctly uses a FTS.

However, if we now remove the histogram on the CODE column:

SQL> exec dbms_stats.gather_table_stats(null, 'SPACE_ODDITY', method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

 

There is no way for the CBO to now determine the correct cardinality estimate because of the skewed data and missing histograms.
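
If you want to confirm the histogram really is gone, a quick check against USER_TAB_COLS should now report a single bucket and HISTOGRAM = NONE for the CODE column:

SQL> select column_name, num_buckets, histogram
     from user_tab_cols where table_name = 'SPACE_ODDITY';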

So what does the Automatic Indexing task make of things now? If we look at the next activity report:

 

SQL> select dbms_auto_index.report_last_activity() report from dual;

REPORT
--------------------------------------------------------------------------------
GENERAL INFORMATION
-------------------------------------------------------------------------------
Activity start               : 18-AUG-2020 16:42:33
Activity end                 : 18-AUG-2020 16:43:06
Executions completed         : 1
Executions interrupted       : 0
Executions with fatal error  : 0
-------------------------------------------------------------------------------

SUMMARY (AUTO INDEXES)
-------------------------------------------------------------------------------
Index candidates                             : 0
Indexes created                              : 0
Space used                                   : 0 B
Indexes dropped                              : 0
SQL statements verified                      : 1
SQL statements improved                      : 0
SQL plan baselines created (SQL statements)  : 1 (1)
Overall improvement factor                   : 0x
-------------------------------------------------------------------------------

SUMMARY (MANUAL INDEXES)
-------------------------------------------------------------------------------
Unused indexes    : 0
Space used        : 0 B
Unusable indexes  : 0

We can see that it has verified this one new statement and has created 1 new SQL Plan Baseline as a result.

If we look at the Verification Details part of this report:

 

VERIFICATION DETAILS
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
The following SQL plan baselines were created:
-------------------------------------------------------------------------------
Parsing Schema Name     : BOWIE
SQL ID                  : 3yz8unzhhvnuz
SQL Text                : select /*+ dynamic_sampling(0) */ * from
space_oddity where code in (190000, 170000, 150000,
130000, 110000, 90000, 70000, 50000, 30000, 10000)
SQL Signature           : 3910785437403172730
SQL Handle              : SQL_3645e6a2952fcf7a
SQL Plan Baselines (1)  : SQL_PLAN_3cjg6naakzmvu198c05b9

We can see Automatic Indexing has created a new SQL Plan Baseline for our query with Dynamic Sampling set to 0 thanks to the hint.
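
If you want to look at the baseline itself, it should also be visible via the standard SQL Plan Management dictionary view (a sketch only; the exact ORIGIN value reported for baselines created by the Automatic Indexing task may vary by release):

SQL> select sql_handle, plan_name, origin, enabled, accepted
     from dba_sql_plan_baselines
     where sql_handle = 'SQL_3645e6a2952fcf7a';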

Basically, the Automatic Indexing task has found a new query and determined the CBO would be inclined to use the index, because the CBO now incorrectly assumes few rows are to be returned. It makes this poor cardinality estimate because there are currently no histograms in place AND because it can no longer use Dynamic Sampling to get a more accurate picture of things on the fly, that having been disabled by the dynamic_sampling(0) hint.

Using an Automatic Index over the current FTS plan would make the performance of the SQL regress.

Therefore, to protect the current FTS plan, Automatic Indexing has created a SQL Plan Baseline that effectively forces the CBO to use the current, more efficient FTS plan.

This can be confirmed by looking at the DBA_AUTO_INDEX_VERIFICATIONS view:

 

SQL> select execution_name, original_buffer_gets, auto_index_buffer_gets, status
from dba_auto_index_verifications where sql_id = '3yz8unzhhvnuz';

EXECUTION_NAME             ORIGINAL_BUFFER_GETS AUTO_INDEX_BUFFER_GETS STATUS
-------------------------- -------------------- ---------------------- ---------
SYS_AI_2020-08-18/16:42:33                41169                 410291 REGRESSED

 

If we now re-run the SQL again (noting we still don’t have histograms on the CODE column):

SQL> select /*+ dynamic_sampling(0) */ * from space_oddity where code in (190000, 170000, 150000, 130000, 110000, 90000, 70000, 50000, 30000, 10000);

1000011 rows selected.

Execution Plan
----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |              |    32 |  4512 | 11425   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| SPACE_ODDITY |    32 |  4512 | 11425   (2)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("CODE"=10000 OR "CODE"=30000 OR "CODE"=50000 OR
           "CODE"=70000 OR "CODE"=90000 OR "CODE"=110000 OR "CODE"=130000 OR
           "CODE"=150000 OR "CODE"=170000 OR "CODE"=190000)

Hint Report (identified by operation id / Query Block Name / Object Alias):

Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
1 -  SEL$1
U -  dynamic_sampling(0) / rejected by IGNORE_OPTIM_EMBEDDED_HINTS

Note
-----

- SQL plan baseline "SQL_PLAN_3cjg6naakzmvu198c05b9" used for this statement

Statistics
----------------------------------------------------------
          9  recursive calls
          4  db block gets
      41170  consistent gets
          0  physical reads
          0  redo size
   13535504  bytes sent via SQL*Net to client
       2705  bytes received via SQL*Net from client
        202  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1000011  rows processed

 

We can see the CBO is forced to use the SQL Plan Baseline “SQL_PLAN_3cjg6naakzmvu198c05b9” as created by the Automatic Indexing task to ensure the more efficient FTS is used and not the available Automatic Index.
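
As an aside, if you ever decided such an automatically created baseline was no longer appropriate (say, after reinstating the histogram on CODE), it could be dropped manually via the standard DBMS_SPM API. A hedged sketch only; this is not something the Automatic Indexing task does for you:

SQL> declare
       l_plans pls_integer;
     begin
       l_plans := dbms_spm.drop_sql_plan_baseline(
                    sql_handle => 'SQL_3645e6a2952fcf7a',
                    plan_name  => 'SQL_PLAN_3cjg6naakzmvu198c05b9');
     end;
     /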

So Automatic Indexing CAN create SQL Plan Baselines to protect SQL from performance regressions caused by the inappropriate use of Automatic Indexes, BUT it’s really difficult for it to do this effectively if the Automatic Indexing task and other database sessions have differing Dynamic Sampling settings, as they do by default…

Oracle 19c Automatic Indexing: Data Skew Part I (A Saucerful of Secrets) September 10, 2020

Posted by Richard Foote in 19c, 19c New Features, Automatic Indexing, Autonomous Data Warehouse, Autonomous Database, Autonomous Transaction Processing, Data Skew, Full Table Scans, Histograms, Index Access Path, Index statistics, Low Cardinality, Oracle Blog, Oracle Indexes, Oracle19c, Performance Tuning.

When it comes to Automatic Indexes, things can become particularly interesting when dealing with data skew (meaning that some column values are much less common than other column values). The next series of blog posts will look at a number of different scenarios in relation to how Automatic Indexing works with data that is skewed and not uniformly distributed.

I’ll start with a simple little example, that has an interesting little twist at the end.

The following table has a CODE column with 10 distinct values that are widely skewed, some values being much less common than others:

SQL> create table bowie_skew (id number, code number, name varchar2(42));

Table created.

SQL> insert into bowie_skew select rownum, 10, 'DAVID BOWIE' from dual connect by level <=1000000;

1000000 rows created.

SQL> update bowie_skew set code = 9 where mod(id,3) = 0;

333333 rows updated.

SQL> update bowie_skew set code = 1 where mod(id,2) = 0 and id between 1 and 20000;

10000 rows updated.

SQL> update bowie_skew set code = 2 where mod(id,2) = 0 and id between 30001 and 40000;

5000 rows updated.

SQL> update bowie_skew set code = 3 where mod(id,100) = 0 and id between 300001 and 400000;

1000 rows updated.

SQL> update bowie_skew set code = 4 where mod(id,100) = 0 and id between 400001 and 500000;

1000 rows updated.

SQL> update bowie_skew set code = 5 where mod(id,100) = 0 and id between 600001 and 700000;

1000 rows updated.

SQL> update bowie_skew set code = 6 where mod(id,1000) = 0 and id between 700001 and 800000;

100 rows updated.

SQL> update bowie_skew set code = 7 where mod(id,1000) = 0 and id between 800001 and 900000;

100 rows updated.

SQL> update bowie_skew set code = 8 where mod(id,1000) = 0 and id between 900001 and 1000000;

100 rows updated.

SQL> commit;

Commit complete.

 

I’ll collect statistics on this table, but explicitly NOT collect histograms, so that the CBO will have no idea that the data is actually skewed. Note that even if I collected statistics with the default SIZE AUTO, there would still be no histograms, as the column has yet to be used within a SQL predicate and so has no column usage recorded.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE_SKEW', estimate_percent=>100, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.
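
If you want to confirm that no column usage has yet been recorded for the table (and hence why the default SIZE AUTO would not create histograms at this point), DBMS_STATS can report it. A minimal sketch, assuming the standard column usage tracking is in place:

SQL> select dbms_stats.report_col_usage(user, 'BOWIE_SKEW') from dual;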

We can clearly see that some CODE values (such as “6”) are relatively rare, with only 100 occurrences:

SQL> select code, count(*) from bowie_skew group by code order by code;

      CODE   COUNT(*)
---------- ----------
         1      10000
         2       5000
         3       1000
         4       1000
         5       1000
         6        100
         7        100
         8        100
         9     327235
        10     654465

 

As I explicitly collected statistics with SIZE 1, we currently have NO histograms in the table:

SQL> select column_name, num_buckets, histogram from user_tab_cols
where table_name='BOWIE_SKEW';

COLUMN_NAME     NUM_BUCKETS HISTOGRAM
--------------- ----------- ---------------
ID                        1 NONE
CODE                      1 NONE
NAME                      1 NONE

 

Let’s now run the following query with a predicate on CODE=6, returning just 100 rows:

SQL> select * from bowie_skew where code=6;

100 rows selected.

Execution Plan
-----------------------------------------------------------------------------------------------
| Id  | Operation                        | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |            |   100K|  1953K|   570   (7)| 00:00:01 |
|   1 |  PX COORDINATOR                  |            |       |       |            |          |
|   2 |   PX SEND QC (RANDOM)            | :TQ10000   |   100K|  1953K|   570   (7)| 00:00:01 |
|   3 |    PX BLOCK ITERATOR             |            |   100K|  1953K|   570   (7)| 00:00:01 |
|*  4 |     TABLE ACCESS STORAGE FULL    | BOWIE_SKEW |   100K|  1953K|   570   (7)| 00:00:01 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - storage("CODE"=6)
    filter("CODE"=6)

Statistics
----------------------------------------------------------
         6  recursive calls
         0  db block gets
      3781  consistent gets
         0  physical reads
         0  redo size
      2796  bytes sent via SQL*Net to client
       654  bytes received via SQL*Net from client
         8  SQL*Net roundtrips to/from client
         0  sorts (memory)
         0  sorts (disk)
       100  rows processed

 

The CBO has no choice but to use a FTS as I currently have no indexes on the CODE column. Note also that the CBO has got its cardinality estimates way wrong, expecting 100,000 rows and not the actual 100 rows, as I have no histograms on the CODE column.
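
The 100K estimate is simply the standard 1/NDV selectivity the CBO falls back on without a histogram: 1,000,000 rows / 10 distinct CODE values = 100,000 rows. A quick sketch against the standard dictionary views to see those inputs for yourself:

SQL> select t.num_rows, c.num_distinct,
            round(t.num_rows / c.num_distinct) est_rows
     from   user_tables t, user_tab_col_statistics c
     where  t.table_name = 'BOWIE_SKEW'
     and    c.table_name = t.table_name
     and    c.column_name = 'CODE';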

So let’s now wait 15 minutes or so and see what the Automatic Indexing process decides to do. Following are portions of the next Auto Indexing report:
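
For reference, the report excerpts below can be extracted with the DBMS_AUTO_INDEX API, for example:

SQL> select dbms_auto_index.report_last_activity() report from dual;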

INDEX DETAILS
-------------------------------------------------------------------------------
The following indexes were created:
--------------------------------------------------------------------------
| Owner | Table      | Index                | Key  | Type   | Properties |
--------------------------------------------------------------------------
| BOWIE | BOWIE_SKEW | SYS_AI_7psvzc164vbng | CODE | B-TREE | NONE       |
--------------------------------------------------------------------------

VERIFICATION DETAILS
-------------------------------------------------------------------------------
The performance of the following statements improved:
-------------------------------------------------------------------------------

Parsing Schema Name  : BOWIE
SQL ID               : fn4shnphu4bvj
SQL Text             : select * from bowie_skew where code=6
Improvement Factor   : 41.1x

Execution Statistics:
-----------------------------

                   Original Plan                 Auto Index Plan
                   ----------------------------  ----------------------------
Elapsed Time (s):  119596                        322
CPU Time (s):      100781                        322
Buffer Gets:       11347                         103
Optimizer Cost:    570                           4
Disk Reads:        0                             0
Direct Writes:     0                             0
Rows Processed:    100                           100
Executions:        1                             1

 

So we can see that yes, Auto Indexing has decided to create a new index here on the CODE column (“SYS_AI_7psvzc164vbng“) as it improves the performance of the query by a factor of 41.1x.
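
If you want to see the new Automatic Index in the data dictionary, USER_INDEXES in 19c includes an AUTO column that flags auto-created indexes. A simple sketch (run as the BOWIE schema owner):

SQL> select index_name, auto, visibility, status
     from user_indexes where table_name = 'BOWIE_SKEW';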

If we look further down the Auto Indexing report and compare the execution plans:

 

PLANS SECTION
---------------------------------------------------------------------------------------------
- Original
-----------------------------
Plan Hash Value  : 3374004665
-----------------------------------------------------------------------------------------
| Id | Operation                      | Name       | Rows   | Bytes   | Cost | Time     |
-----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |            |        |         |  570 |          |
|  1 |  PX COORDINATOR                |            |        |         |      |          |
|  2 |    PX SEND QC (RANDOM)         | :TQ10000   | 100000 | 2000000 |  570 | 00:00:01 |
|  3 |     PX BLOCK ITERATOR          |            | 100000 | 2000000 |  570 | 00:00:01 |
|  4 |      TABLE ACCESS STORAGE FULL | BOWIE_SKEW | 100000 | 2000000 |  570 | 00:00:01 |
-----------------------------------------------------------------------------------------

- With Auto Indexes
-----------------------------
Plan Hash Value  : 140816325
-------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                 | Rows | Bytes | Cost | Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                      |  100 |  2000 |    4 | 00:00:01 |
|   1 |   TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE_SKEW           |  100 |  2000 |    4 | 00:00:01 |
| * 2 |    INDEX RANGE SCAN                   | SYS_AI_7psvzc164vbng |  100 |       |    3 | 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------

* 2 - access("CODE"=6)

Notes
-----

- Dynamic sampling used for this statement ( level = 11 )

 

We can see that the new execution plan indeed uses the index BUT, interestingly, it has a correct cardinality estimate of 100 and not the 100,000 of the original plan.

Now this can be explained by the fact that the Automatic Indexing process uses a Dynamic Sampling level of 11, meaning it can calculate the correct cardinality on the fly. This can cause discrepancies between what the Automatic Indexing process thinks the CBO costs will be vs. the CBO costs in a default database session, which uses the (usually default) Dynamic Sampling level of 2 (as I’ve discussed previously).
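
One way to see the difference for yourself is to force an ordinary session to use the same Dynamic Sampling level as the Automatic Indexing task; with the hint below, the CBO should come up with the same accurate 100 row estimate on the fly. A sketch only:

SQL> select /*+ dynamic_sampling(11) */ * from bowie_skew where code=6;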

BUT when I now re-run the SQL query:

SQL> select * from bowie_skew where code=6;

100 rows selected.

Execution Plan
---------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                 | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                      |   100 |  2000 |     4   (0)|
|   1 |  PX COORDINATOR                       |                      |       |       |            |
|   2 |   PX SEND QC (RANDOM)                 | :TQ10001             |   100 |  2000 |     4   (0)|
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| BOWIE_SKEW           |   100 |  2000 |     4   (0)|
|   4 |     BUFFER SORT                       |                      |       |       |            |
|   5 |      PX RECEIVE                       |                      |   100 |       |     3   (0)|
|   6 |       PX SEND HASH (BLOCK ADDRESS)    | :TQ10000             |   100 |       |     3   (0)|
|   7 |        PX SELECTOR                    |                      |       |       |            |
|*  8 |           INDEX RANGE SCAN            | SYS_AI_7psvzc164vbng |   100 |       |     3   (0)|
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

8 - access("CODE"=6)

Statistics
----------------------------------------------------------
        12  recursive calls
         0  db block gets
       103  consistent gets
         0  physical reads
         0  redo size
      2796  bytes sent via SQL*Net to client
       654  bytes received via SQL*Net from client
         8  SQL*Net roundtrips to/from client
         2  sorts (memory)
         0  sorts (disk)
       100  rows processed

 

We notice the new Automatic Index is now used BUT also that the CBO has now determined the correct cardinality estimate of 100. But how is this possible when I haven’t recalculated the table statistics?

I’ll explain in my next post.

100,000 Hits, Olympic Games and OOW (Golden Years) August 26, 2008

Posted by Richard Foote in Olympics, OOW, Oracle Blog.

Time for a bit of a catchup on things.

It doesn’t seem that long ago when I was celebrating 10,000 hits. Well over the last day or so, I’ve hit the next significant milestone, 100,000 hits. Considering the somewhat limited appeal of my humble writings (Oracle Indexes doesn’t generally generate the same interest as say Madonna’s new world tour or the world’s best apple pie recipe), it’s a nice little number to reach nonetheless. So a big thank-you to everyone who has contributed over the past 8 months or so and I hope some of those 100,000 hits were worth the effort.

Over the past two weeks, I’ve been stuck in front of the TV watching the spectacle that is the Beijing 2008 Olympic Games. I’m an Olympic Games tragic and just love watching the best athletes in the world compete and amaze us with their talents, while of course watching Australia win medals. One of the happiest and most exciting periods of my life was the whole Sydney 2000 Olympics experience, for example being there poolside as Ian Thorpe won the 400M freestyle gold medal. Magical times indeed.

For the BeiJing 2008 Olympics, my highlights and lowlights were:

Australian Highlight: Toss up between Sally McLennan’s celebration when she won silver in the 100M hurdles and Matthew Mitcham’s amazing last dive to take gold in the 10M platform. To finish 6th overall was another impressive performance for a country of only 20 odd million although beating the poms in 2012 must surely be a national priority 🙂

Overall Highlight: Michael Phelps winning 8 gold medals was an incredible achievement that will likely never be repeated, but for me the highlight was the performance of Usain Bolt, who managed to crush the best in the world and break world records while playing with the crowd. He would be booked for speeding if he ran that fast in a school zone !!

Biggest Disappointment: The TV coverage dished out by Channel 7 in Australia, it was simply appalling. Featuring more ads than actual coverage, what coverage we had was often delayed and featured repeat after repeat after repeat of the same things. Many sports were simply not shown, including many blue ribbon events like many of the athletic field events. SBS, a second TV station with secondary rights managed to salvage some respectability with their coverage but overall, it was a huge disappointment. Thankfully, Channel 7 no longer have the TV rights for the Olympics, hopefully lessons have been learnt and things will improve in time for London 2012.

I’ve had a number of emails regarding where the hell I’m listed in this year’s Oracle OpenWorld schedule after my presentation was one of the winners in the Oracle Mix “competition”. Firstly, a big thank-you to everyone who voted; my “Indexing Secrets” presentation ended up finishing 6th and was selected. Unfortunately, times are somewhat tight at work and I wasn’t able to get my travel costs approved, so I will not be able to attend OOW this year. Sorry to all those who were hoping to see me in action, maybe next year.

Finally, in the next day or two, I’ll get around to answering Brian Tkatch’s question on when partitioning is a better option than just an index. So stay tuned !!

Most Influential Person In My Career (I Am…I Said) April 10, 2008

Posted by Richard Foote in Oracle Blog, Oracle Opinion, Richard's Musings.

I previously listed four of the people who have probably had the biggest impact on me as a DBA.

However, the single most important person in my career as a DBA, the person who’s had the biggest impact in all my various successes and failures throughout my career, is undoubtedly the one and only Richard Foote.

Congratulations Slater 🙂

Now I’m not suggesting for one minute I’m as capable or knowledgeable as the four previously listed, indeed I can say with some confidence that I’m not, but there’s no question that at the end of the day, I’m ultimately responsible for being the DBA (and indeed the person generally) I am today.

Influences are of course very important, but it’s up to the individual to ensure all influences (good and bad) become positive experiences. It’s entirely up to the individual to take those influences and to find the drive, the energy, the motivation and the enthusiasm to be as successful, as capable, as knowledgeable and as competent as they can be.

Or indeed as “successful” as one ultimately wants to be, because all these things are measured and mean something different to each individual. The scale that really matters, the best measurement to determine the level or standard or confidence that one has achieved, is ultimately happiness. When you walk into the office each morning, how do you feel about yourself? You really don’t need to be the world’s best Oracle expert (or in any subject matter or profession) to feel good about yourself, to feel you’re heading in the right direction and that you’re at a stage in your career, your work-life journey, where you want to be. To feel like you’re a bloody good and successful DBA.

If you’re “happy” with where you are, congratulations, because you’re the one that’s had the biggest impact and influence in your “success”. If you’re not happy, if you’re not satisfied with where you’re at, if you feel you’re behind where you really want to be, if you’re not the Oracle DBA you want to be (or developer, or pilot or porn star, or whatever), the good news, the really exciting and positive news, is that it’s entirely in your own hands to turn things around.

I spend a good portion of my life at work. I probably spend as much time talking to my work colleagues as I do talking to my own family. I certainly spend more time working on Oracle databases than I do working in my garden or playing football or losing at computer games against the kids or watching David Bowie DVDs. Therefore, it’s really important to me that I enjoy what I do at work and that I’m as good at my job as I can reasonably hope to be. How much I enjoy my work is very much related to how confident I feel in my capabilities and in how much I continue to learn and grow in my abilities. Ultimately, I’m directly responsible for it all…

After I do a presentation or talk, people often ask how do I know all that stuff, how come I know so much about bloody indexes, where did I pick up all that 10g/ 11g stuff. It’s no secret, I spend a lot of energy researching, experimenting and investing time into learning more and more about that which I’m responsible for; which is lots of Oracle databases that have lots of important information for lots of people.

The four people I mentioned as influences certainly have had a big impact in how I approach my learning and my work generally, in how I attempt to better myself, in my drive to test things for myself, in how I view what’s possible and what’s important. However, they can’t actually put things in my brain; they can’t force me to spend hours determining how the behaviour of bitmap indexes changed in 10g, or make me spend hours practicing different types of database recoveries with and without RMAN, or make me start this Blog or make me research and write a 2 day index internals seminar, etc.

That’s all up to me.

How I deal with failure, how I learn (or not) from mistakes, how I determine right from wrong (at so many levels), how I handle criticism, how I admit and respond to errors, how I judge and police values, and how I actually absorb and turn influences and feedback into positive experiences is also totally and entirely up to me as an individual.

The key point I want to make is that when discussing influences and who has contributed and had an impact in your successes and in your career, ultimately the person who has had the biggest impact is you.

April Fools (I Got You Babe) !! April 1, 2008

Posted by Richard Foote in Humour, Oracle Blog, Richard's Musings.

Firstly, thank you for all the comments and emails, I’m very touched !!

Those of you that know me realise I have a naughty little streak in me that likes to get out and have some fun every now and then. April 1st is usually one of those days.

I feel a little guilty in posting my “Announcement” on 31 March GMT; however, it was well and truly the morning of April 1 here in sunny Canberra, Australia, Australian Eastern Summer Time. So it’s well within the official April Fools rules of conduct. Time zones are a funny thing.

The first subtle clue that this “Announcement” was not quite what it might appear was in the title, Announcing Finale, as it has the same initials as April Fools. However, the far more obvious clue was within the post itself, with the first letter in every paragraph spelling April Fools. I modified the post to highlight how it was sitting there all along. Hee hee 😉

I must say I had thought the reference to my future career plans in the adult film industry might be stretching things a tad (pun fully intended), that and the fact I must have sounded totally demented, but it seems that many saw this as some kind of confirmation that I was really moving on !! I guess with a name like Dick Foote, anything is possible.

Congratulations to those who picked it and sorry to those of you who I caught in the net 🙂

However, let me clearly state I have absolutely no intention of retiring and that this Blog will be around for quite some time yet.

I guess it only leaves me with one more thing to say, “April Fools” everyone !!

Marcel Kratochvil: New Oracle Multimedia Blog (Good DBA / Bad DBA) March 19, 2008

Posted by Richard Foote in Oracle Blog, Oracle General.

Just wanted to quickly mention an excellent new blog, the Oracle Multimedia Blog, that might be of interest to some of you.

It’s run by fellow Canberra resident, Marcel Kratochvil, a well known Oracle identity, who is almost as well known as myself but nowhere near as good looking 😉

Marcel and I go way back, having worked together at Oracle Corporation in the mid 1990s. Marcel was also my partner in crime when we won the runners-up award for best paper at Oracle OpenWorld in Brisbane in 1999 with our paper / theatre production called “Good DBA / Bad DBA”. For those who might remember that fateful day, he was the over-acting “Cowboy”, I was the understated and somewhat professional looking “Airline Pilot” 🙂

Marcel is a most knowledgeable and clever fellow who among his many achievements won the Oracle PL/SQL Developer Of the Year award in 2004 and has recently been made an Oracle Ace.

I wish Marcel the very best with his new Blog and encourage everyone to check it out.

10,000 Hits Already !! January 9, 2008

Posted by Richard Foote in Oracle Blog, Richard's Musings.

I’ve just noticed that the hits counter has just reached 10,000 hits.

I’m quite excited as I actually saw it displaying the 10000, being one of those weird people who would drive around the block a couple of times just to see the mileometer reading in the car reach some “special” number. I screamed with frustration once when I just missed 12345 scroll past, 123456 just seemed such a long way away ….

I must admit I had absolutely no idea how many people would read this Blog when I started it almost a month ago. Starting a Blog was something that was mentioned and suggested to me a few times when I attended and presented at the Unconference at Oracle OpenWorld last year.

Now I must admit, I don’t really know whether 10,000 visits in less than a month, including the Christmas Holidays, is actually a lot or not, but if someone asked me a month ago how long I thought it would take me to reach 10,000 hits, I would have guessed (and hoped somewhat) in 6 months or so. So it’s certainly a lot more hits than I ever thought I would get at this early stage.

So to have generated so much interest, so many comments (there’s over 150, although lots are mine I know 🙂) and so many emails is really, really, umm, what’s the right word, I guess “nice”.

So thank you all for your interest and involvement !!

If someone can give me a bit of warning just before 20,000 is reached, I would appreciate it !!

Richard Foote’s Oracle Blog Has Arrived !! December 11, 2007

Posted by Richard Foote in Oracle Blog, Richard's Musings, Uncategorized.

Finally decided it might be worth giving this blogging lark a go.

Occasionally, as I come across interesting Oracle Database related issues, I’ll post my thoughts and opinions and who knows what else and perhaps, just maybe, others may find it interesting or useful as well.

However, I’ll try to focus on the specific topic of Oracle Indexes as it’s such a huge and important area in any Oracle database design. It’s also an area in which there is much confusion, with more than its fair share of myths and popular misconceptions, and one in which DBAs, Developers, Database Designers and the like all have a key role to play.

Let the fun begin …