
Indexes: Oracle11g New Features Presentation (Get Back) September 19, 2012

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes.
5 comments

I’m in the early stages of compiling an Oracle12c Indexes New Features presentation so I thought I might make available the 11g version I’ve presented previously at Oracle OpenWorld and InSync conferences:

https://richardfoote.files.wordpress.com/2012/09/oracle-indexing-new-features-oracle-11g-release-1-and.pdf

Enjoy :)

Oracle11g: Analyze Table Validate Structure Cascade “FAST” (Slow Burn) March 31, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes, Validate Structure.
6 comments

I always take notice when Oracle introduces a new “FAST” option, so it was with some excitement that I first noticed in Oracle 11g Release 1 a new FAST option when running the ANALYZE TABLE VALIDATE STRUCTURE CASCADE command.
 
This was described in the manuals as introducing a hashing scheme that was significantly faster than the traditional cascade validation method.
 
However, when I tested this new feature, the results were rather disappointing to say the least (I’ve tested this on various 11g versions, both R1 and R2 and on various platforms, AIX and Windows). In the example below, the PERSONS table is a rather large table that has a number of associated indexes:

 
SQL> set timing on
SQL> analyze table persons validate structure cascade;
 
Table analyzed.
Elapsed: 00:06:01.84
 
SQL> analyze table persons validate structure cascade fast;
 
Table analyzed.
Elapsed: 00:15:20.27
 
SQL> analyze table persons validate structure cascade;
 
Table analyzed.
Elapsed: 00:02:22.27
 
SQL> analyze table persons validate structure cascade fast;
 
Table analyzed.
Elapsed: 00:15:46.28
 
SQL> analyze table persons validate structure cascade;
 
Table analyzed.
Elapsed: 00:02:23.78
 
SQL> analyze table persons validate structure cascade fast;
 
Table analyzed.
Elapsed: 00:14:58.00

 
When using the so-called “FAST” option, performance was consistently and significantly slower, not faster, than with the default method. Perhaps the default option should be called “FASTER-STILL”? (The default is actually “COMPLETE”.) Additionally, subsequent executions of the default method often resulted in improved times (e.g. elapsed times reduced from 6 minutes to around 2.5 minutes), whereas the FAST option resulted in uniformly slower elapsed times.
 
I thought this warranted further investigation and so decided to trace what was actually going on behind the scenes and see where the extra time was being spent.
 
With the default COMPLETE method, a significant number of I/Os were being performed, as Oracle had to effectively perform an Index Full Scan on each index, reading the table primarily via single block reads. However, CPU related overheads were relatively low, with most of the elapsed time attributed to the related I/Os. Following is an extract from a formatted trace file:
  

 
analyze table promis.persons validate structure cascade
 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.11         17        915          0           0
Execute      1    145.62     352.22     134061   58166955          3           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    145.63     352.33     134078   58167870          3           0

 

The total results for the trace were as follows:
 

 
 
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.11         17        915          0           0
Execute      1    145.62     352.22     134061   58166955          3           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    145.63     352.33     134078   58167870          3           0
 

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       87      0.03       0.02          0          0          0           0
Execute    243      0.04       0.07          0          0          0         100
Fetch      577      0.10       0.31        110       1114          0         766
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      907      0.17       0.41        110       1114          0         866

 
  

We’ll compare these results later, but for now note that elapsed times were 352 seconds in total, with CPU contributing only about 146 seconds of that time.

If we now look behind the scenes of a “FAST” VALIDATE STRUCTURE CASCADE, we notice that statements such as the following are now recursively executed:
 

 
select /*+ full(PROMIS.PERSONS) */ ORA_HASH(ID || rowid)
from
 PROMIS.PERSONS MINUS select /*+ index_ffs(PROMIS.PER_PK) */ ORA_HASH(ID ||
  rowid) from PROMIS.PERSONS
 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1     35.23      37.83       7055      10402          6           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     35.23      37.83       7055      10402          6           0

  
 
We notice that Oracle now uses a new method, basically performing a Full Table Scan of the table and passing each indexed column value and associated rowid through the ORA_HASH function, which in turn returns a computed hash value. The full table scan is a very efficient method of reading all related values from a table, but putting each column value and rowid through the ORA_HASH function is a relatively expensive, CPU intensive operation. Oracle then does the same for the index, performing an Index Fast Full Scan and again passing each indexed column value and associated rowid through the ORA_HASH function. Again, this is a very efficient method of reading all values from an index, but again relatively CPU intensive, with every indexed value and associated rowid put through the ORA_HASH function.
 
Having performed this for both table and index, in theory, if the data structures are indeed valid and non-corrupt, both should return exactly the same results. So by performing a MINUS of the two row sets, if no data is returned, we can be sure all is indeed well with both table and index. If however differences are detected, then there’s some disparity between the data in the index and in the table and hence something is wrong. We’ll have no idea exactly what the differences might be, as we only have the resultant hash values to go by, but the analyze validate structure command can at least raise an error and say something is indeed wrong when validating the table and its associated indexes.
 
We would then need to perform an Analyze Validate Structure again (without the FAST option this time) to determine the exact cause of the issue, but assuming detecting an error is a very rare event, it’s not an additional step we would ordinarily need to perform. So if reading both table and index via multi-block full scans and processing the data through the ORA_HASH function is indeed “faster” at determining that nothing is actually wrong, then it’s a reasonable strategy to take.
 
This processing is then repeated for every index associated with the table we’re analyzing. So in theory, because we’re only performing Full Table Scans and Index Fast Full Scans, we end up performing far fewer I/O calls (as they’re multi-block) and so can complete the whole Validate Structure processing “faster”.
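The comparison logic behind these recursive statements can be sketched in Python. This is purely an illustrative analogy: ORA_HASH is stood in for by a generic hash function, and the table and index data below are hypothetical.

```python
import hashlib


def row_hash(value, rowid):
    """Stand-in for ORA_HASH(col || rowid): any deterministic hash of
    the indexed column value concatenated with its rowid will do here."""
    return hashlib.sha256(f"{value}||{rowid}".encode()).hexdigest()


def validate(table_rows, index_entries):
    """Mimic the FAST validation: hash every (value, rowid) pair seen by
    the 'full table scan' and by the 'index fast full scan', then MINUS
    the two sets. An empty difference means table and index agree; any
    leftover hash signals a mismatch (though not which row is wrong)."""
    table_hashes = {row_hash(v, rid) for rid, v in table_rows}
    index_hashes = {row_hash(v, rid) for v, rid in index_entries}
    return len(table_hashes - index_hashes) == 0


# Consistent table and index: (rowid, value) vs (value, rowid) pairs.
table_rows = [("AAA1", 10), ("AAA2", 20), ("AAA3", 30)]
good_index = [(10, "AAA1"), (20, "AAA2"), (30, "AAA3")]
bad_index  = [(10, "AAA1"), (99, "AAA2"), (30, "AAA3")]   # corrupt entry

print(validate(table_rows, good_index))   # True  - structures agree
print(validate(table_rows, bad_index))    # False - corruption detected
```

The sketch also makes the trade-off visible: reading the data is cheap (one pass over each structure), but every single row pays the cost of a hash computation.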
 
However, Oracle needs to perform the ORA_HASH operation for every indexed column value, in both the table and each associated index. So although we end up doing fewer I/Os, we end up burning much more CPU due to having to call the ORA_HASH function so often. If we look at the total resources when performing the “FAST” validate structure on exactly the same table:

 
 
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          1        145          0           0
Execute      1      0.03       0.02     190294    3086489          2           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.04       0.03     190295    3086634          2           0
 
 
 
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
 
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       49      0.06       0.07          0          0          0           0
Execute     49      0.00       0.00          0          0          0           2
Fetch       46    830.63     878.62     190295    3086514        138          21
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      144    830.69     878.69     190295    3086514        138          23

 
 
We notice the query values are significantly reduced (down to just 3M from 58M). This of course is a good thing as this translates to reduced I/O related overheads.
 
However, if we look at the corresponding CPU related costs, they have increased significantly (up to 830 seconds from just 146 seconds). It’s really expensive putting all the indexed column values through the ORA_HASH function. Additionally, because the FAST option relies on multi-block full scans, which aren’t generally cached, the results remain consistently poor with subsequent executions.

The net result is that the final elapsed times are consistently greater with the FAST option than without (878 seconds up from 352 seconds). Using the FAST option has resulted in slower response times, not faster ones.

Now your mileage may vary based on your specific environments, but my results have not been particularly impressive to say the least. Substantially increased elapsed times (during which the structures remain locked, let’s not forget) are not exactly my idea of a “FAST” option.

Oracle11g: Zero Sized Unusable Indexes Part II (Nathan Adler) February 27, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes, Unusable Indexes.
12 comments

In my previous post, I discussed how Oracle from 11g R2 onwards will automatically drop the segment and associated storage from unusable index objects. Mohamed Houri asked in the comments section the excellent question of just how useful this feature will be in real life cases, when typically indexes are not left in an unusable state for long, perhaps only during large data loads when such indexes would ordinarily be rebuilt anyway.

I thought the question was worth a separate blog entry to provide a worthy answer.

The first point I would make is that we need to think a little outside the box and consider how such a change in behaviour can open up new possibilities and flexibility in how we index our tables.

For example, a Local Partitioned Index must have the same number of index partitions as the parent table. But what if an index is only useful for the “current” partition, where accessing newish data via an index makes sense, while historical data in “older” partitions is only ever accessed in batch processes via full partition scans? Why have a local index partition for older partitions when such index partitions are never used? Previously, we had no choice: if one or some of the partitions needed an index, then all the partitions had to be indexed, and if we made such unnecessary index partitions unusable, we still needed to allocate storage for the index segments. Now, we can make any unnecessary index partition unusable and no storage at all is allocated to such index partitions.

Taking this a step further, we now have a really nice method of potentially indexing only those portions of a table that need indexing. Values that gain no benefit from being indexed (perhaps because they are too numerous to ever be accessed efficiently via an index) no longer need to be indexed at all.

Here’s a classic example. Following is a table with a flag in which the vast majority of rows have been “processed”. However, we have a few rows, the current rows of interest to us, which have not yet been processed (they may have a status of another value). We need an index in order to find the few rows that have not yet been processed, but a conventional index must also include all the processed values that are of no interest.

 
SQL> create table bowie_stuff (id number, processed varchar2(10));
Table created.

SQL> insert into bowie_stuff select rownum, 'YES' from dual connect by level <= 1000000;
1000000 rows created.

SQL> commit;
Commit complete.

SQL> update bowie_stuff set processed = 'NO' where id in (999990, 999992, 999994, 999996, 999998);
5 rows updated.

SQL> commit;
Commit complete.

SQL> create index bowie_stuff_i on bowie_stuff(processed) pctfree 0;
Index created.

SQL> select index_name, leaf_blocks from dba_indexes where index_name = 'BOWIE_STUFF_I';

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
BOWIE_STUFF_I                         1877

SQL> select segment_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME             BLOCKS
-------------------- ----------
BOWIE_STUFF_I              1920

 
 

Notice how the index is quite large (1,877 leaf blocks) as it needs to hold values for all 1M rows, even though only a relative handful of values within the index are ultimately of any use.

If we now gather stats (note we need to collect histograms as the column value distribution is very skewed) and run a query to select just the 5 rows that have not actually been processed:

 
SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE_STUFF', estimate_percent=>null, cascade=> true, method_opt=> 'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname=>'BOWIE', tabname=>'BOWIE_STUFF', estimate_percent=>null, method_opt=> 'FOR COLUMNS PROCESSED SIZE 5');

PL/SQL procedure successfully completed.

SQL> select * from bowie_stuff where processed = 'NO';

Execution Plan
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               |     5 |    40 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF   |     5 |    40 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | BOWIE_STUFF_I |     5 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
        540  bytes sent via SQL*Net to client
        396  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

 
 

Note the CBO uses the index but it requires a total of 6 consistent reads.

Not bad, but we can do somewhat better and perform less I/O, significantly reduce storage overheads and significantly reduce index maintenance operations, if only we didn’t store the unnecessary index values within the index.

One method could be to create a function-based index based on the decode function and only store non-null values that are of interest. However, this requires the application to likewise use the decode function in order to make use of the index.

Another method is to use a partitioned index and now with this new Oracle11g feature of zero sized unusable indexes, we don’t need any storage at all for the unwanted indexed values.
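Both alternatives rest on the same underlying idea: keep the uninteresting (processed) values out of the index entirely. A minimal Python analogy of such a sparse index (purely illustrative; the function and row names are hypothetical):

```python
def build_sparse_index(rows, interesting=("NO",)):
    """Build a toy 'index' that stores rowids only for values of
    interest, skipping the vast majority of processed rows entirely
    (analogous to the decode-based function-based index or the
    unusable, zero-sized partition approaches)."""
    index = {}
    for rowid, processed in rows.items():
        if processed in interesting:      # 'YES' rows are never indexed
            index.setdefault(processed, []).append(rowid)
    return index


# A handful of rows standing in for the 1M row BOWIE_STUFF table.
rows = {"r1": "YES", "r2": "NO", "r3": "YES", "r4": "NO", "r5": "YES"}
print(build_sparse_index(rows))   # only the two 'NO' rowids are stored
```

The storage and maintenance savings follow directly: the structure scales with the handful of unprocessed rows, not with the million processed ones.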

Let’s now re-create the index as a globally partitioned index, with one partition defined to contain all values of interest and another partition defined to contain the vast number of processed values. Initially, the index is created in an unusable state so no segments and no storage is allocated to any of the partitions:

 
SQL> drop index bowie_stuff_i;

Index dropped.

SQL> create index bowie_stuff_i on bowie_stuff(processed)
  2  global partition by range (processed)
  3  (partition not_processed_part values less than ('YES'),
  4   partition processed_part values less than (MAXVALUE))
  5  unusable;

Index created.

 
 

Next, we’re only going to rebuild the partition containing just the relatively few rows of interest. The partition containing the values that are not of interest is left in an unusable state and so continues to occupy no storage at all:

 
SQL> alter index bowie_stuff_i rebuild partition not_processed_part;

Index altered.

SQL> select index_name, partition_name, leaf_blocks from dba_ind_partitions where index_name = 'BOWIE_STUFF_I';

INDEX_NAME           PARTITION_NAME       LEAF_BLOCKS
-------------------- -------------------- -----------
BOWIE_STUFF_I        PROCESSED_PART                 0
BOWIE_STUFF_I        NOT_PROCESSED_PART             1

SQL> select segment_name, partition_name, blocks from dba_segments where segment_name = 'BOWIE_STUFF_I';

SEGMENT_NAME         PARTITION_NAME           BLOCKS
-------------------- -------------------- ----------
BOWIE_STUFF_I        NOT_PROCESSED_PART            8

 
 

Note how the index is now tiny (reduced from 1,877 leaf blocks to just 1), as it now stores only the index entries that are of interest. We have just saved ourselves heaps of storage, as the other partition remains unusable and uses no storage at all.

If we now run our query again:

 
SQL> select * from bowie_stuff where processed = 'NO';

Execution Plan
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |               |     5 |    45 |     1   (0)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE      |               |     5 |    45 |     1   (0)| 00:00:01 |     1 |     1 |
|   2 |   TABLE ACCESS BY INDEX ROWID| BOWIE_STUFF   |     5 |    45 |     1   (0)| 00:00:01 |       |       |
|*  3 |    INDEX RANGE SCAN          | BOWIE_STUFF_I |     5 |       |     1   (0)| 00:00:01 |     1 |     1 |
--------------------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        542  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

 
 

We notice that the execution plan is just using the tiny index partition and as a result we have reduced our consistent gets down from 6 to just 4. Additionally, we have not had to change our application at all to use the improved index, it was the exact same query as executed previously.

This method can of course be used in Oracle prior to 11g R2, but now with zero sized unusable indexes, we do not have to allocate any storage at all to those indexes that we may wish to remain in an unusable state for extended or indefinite periods of time. So yes, zero sized unusable indexes can be extremely useful in many real life scenarios :)

Oracle11g: Zero Sized Unusable Indexes (Zeroes) February 25, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes, Unusable Indexes.
2 comments

Following on from my previous discussion on “Create On Demand” segments, Oracle 11g R2 has also introduced storage saving initiatives in relation to unusable indexes. Starting with a simple Oracle 10g example, we create a table and associated index:
 

 
SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

 
  

If we now make the index unusable:
 

 
SQL> alter index bowie_id_i unusable;
 
Index altered.
 
SQL> select index_name, blevel, leaf_blocks, num_rows, status, dropped from dba_indexes where index_name = 'BOWIE_ID_I';
 
INDEX_NAME     BLEVEL LEAF_BLOCKS   NUM_ROWS STATUS   DRO
---------- ---------- ----------- ---------- -------- ---
BOWIE_ID_I          2        2226    1000000 UNUSABLE NO
 

SQL> select segment_name, bytes, blocks, extents from dba_segments where segment_name = 'BOWIE_ID_I';
 
SEGMENT_NAME      BYTES     BLOCKS    EXTENTS
------------ ---------- ---------- ----------
BOWIE_ID_I     18874368       2304         18

 
  

We notice that the storage associated with the segment remains, even though the data within the index is now totally useless to us. The index definition is of course vital, but why bother continuing to assign 18 extents of storage (in this example) to the index? Oracle 11g Release 2 has now changed this default behaviour.
 
Using the same demo as before but running Oracle11g R2:
 

 
SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=> true, estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
 
SQL> alter index bowie_id_i unusable;
 
Index altered.
 
SQL> select index_name, blevel, leaf_blocks, num_rows, status, dropped from dba_indexes where index_name = 'BOWIE_ID_I';
 
INDEX_NAME     BLEVEL LEAF_BLOCKS   NUM_ROWS STATUS   DRO
---------- ---------- ----------- ---------- -------- ---
BOWIE_ID_I          2        2226    1000000 UNUSABLE NO
 
SQL> select segment_name, bytes, blocks, extents from dba_segments where segment_name = 'BOWIE_ID_I';
 
no rows selected

 
  

We notice that the storage associated with the object is all gone. The index object remains, but the underlying segment and storage have been automatically dropped.

If we now look at a partitioning example, where we create 3 types of indexes:
 

 
SQL> CREATE TABLE big_album_sales(id number, album_id number, country_id number,
          release_date date, total_sales number)  PARTITION BY RANGE (release_date)
(PARTITION ALBUMS_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008', 'DD-MON-YYYY')),
 PARTITION ALBUMS_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009', 'DD-MON-YYYY')),
 PARTITION ALBUMS_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010', 'DD-MON-YYYY')),
 PARTITION ALBUMS_2010 VALUES LESS THAN (MAXVALUE));
 
Table created.
  

SQL> INSERT INTO big_album_sales SELECT rownum, mod(rownum,5000)+1, mod(rownum,100)+1, sysdate-mod(rownum,2000), ceil(dbms_random.value(1,500000)) FROM dual CONNECT BY LEVEL <= 1000000;
 
1000000 rows created.
 
SQL> commit;
 
Commit complete.

 
  

We first create a Non-Partitioned Index:
 

 
SQL> CREATE INDEX big_album_tot_sales_i ON big_album_sales(total_sales);
 
Index created.

 
  

Next a Global Partitioned Index:
 

 
SQL> CREATE INDEX big_album_country_id_i  ON big_album_sales(country_id)
     GLOBAL PARTITION BY RANGE (country_id)
     (PARTITION TS1 VALUES LESS THAN (26),
      PARTITION TS2 VALUES LESS THAN (51),
      PARTITION TS3 VALUES LESS THAN (76),
      PARTITION TS4 VALUES LESS THAN (MAXVALUE));
 
Index created.

  

Finally,  a Local Partitioned index:
  

 
SQL> CREATE INDEX big_album_album_id_i ON big_album_sales(album_id) local;
 
Index created.
 
SQL> exec dbms_stats.gather_table_stats(ownname=> 'BOWIE', tabname=> 'BIG_ALBUM_SALES', estimate_percent=> null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

 
  

If we now split the last table partition, this will effectively make:
 
1) the Non-Partitioned Index unusable
2) all partitions of the Global Partitioned Index unusable
3) just the last 2 partitions of the Local Partitioned Index unusable
  

 
SQL> ALTER TABLE big_album_sales SPLIT PARTITION ALBUMS_2010
     AT (TO_DATE('01-JAN-2011', 'DD-MON-YYYY'))
     INTO (PARTITION ALBUMS_2010, PARTITION ALBUMS_2011);
 
Table altered.
 
SQL> select index_name, status from dba_indexes where table_name = 'BIG_ALBUM_SALES';
 
INDEX_NAME               STATUS
------------------------ --------
BIG_ALBUM_TOT_SALES_I    UNUSABLE
BIG_ALBUM_COUNTRY_ID_I   N/A
BIG_ALBUM_ALBUM_ID_I     N/A
 
SQL> select index_name, partition_name, status, leaf_blocks from dba_ind_partitions where index_name like 'BIG_ALBUM_%';
 
INDEX_NAME              PARTITION_NAME STATUS   LEAF_BLOCKS
----------------------- -------------- -------- -----------
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2007    USABLE           807
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2008    USABLE           381
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2009    USABLE           383
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2010    UNUSABLE
BIG_ALBUM_ALBUM_ID_I    ALBUMS_2011    UNUSABLE
BIG_ALBUM_COUNTRY_ID_I  TS1            UNUSABLE         629
BIG_ALBUM_COUNTRY_ID_I  TS2            UNUSABLE         629
BIG_ALBUM_COUNTRY_ID_I  TS3            UNUSABLE         629
BIG_ALBUM_COUNTRY_ID_I  TS4            UNUSABLE         629
 
SQL> select segment_name, partition_name, bytes, blocks from dba_segments where segment_name like 'BIG_ALBUM_%' and segment_type like 'INDEX%';
 
SEGMENT_NAME          PARTITION_NAME    BYTES BLOCKS
--------------------- -------------- -------- ------
BIG_ALBUM_ALBUM_ID_I  ALBUMS_2007     7340032    896
BIG_ALBUM_ALBUM_ID_I  ALBUMS_2008     3145728    384
BIG_ALBUM_ALBUM_ID_I  ALBUMS_2009     4194304    512
BIG_ALBUM_TOT_SALES_I                23068672   2816

 
  

We notice that all segments associated with the now unusable Global Partitioned Index have been dropped, as have both unusable partitions of the Local Partitioned Index. However, the segment and storage associated with the unusable Non-Partitioned Index still remain. Perhaps a missing feature for another time …
 
It’s a nice little touch that these unusable and somewhat useless index segments now get automatically cleaned out in Oracle11g R2, although they did previously act as “placeholders”, in that nothing else within the tablespace could come along and use the occupied storage.

Oracle11g Creation On Demand Indexes (Invisible Touch) February 17, 2011

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes.
11 comments

Prior to Oracle11g Release 2, the default and minimum size of a segment was one extent. So in the example below, we create a table and five associated indexes:

 
SQL> create table empty (a number, b number, c number, d number, e number);
 
Table created.
 
SQL> create index empty_a_i on empty(a);
 
Index created.
 
SQL> create index empty_b_i on empty(b);
 
Index created.
 
SQL> create index empty_c_i on empty(c);
 
Index created.
 
SQL> create index empty_d_i on empty(d);
 
Index created.
 
SQL> create index empty_e_i on empty(e);
 
Index created.
 

SQL> select segment_name, blocks, bytes, extents from dba_segments where segment_name like 'EMPTY%';
 
SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
EMPTY               128    1048576          1
EMPTY_A_I           128    1048576          1
EMPTY_B_I           128    1048576          1
EMPTY_C_I           128    1048576          1
EMPTY_D_I           128    1048576          1
EMPTY_E_I           128    1048576          1
 
6 rows selected.

  

Each of the segments has been allocated an extent, including each of the indexes.
 
However, since Oracle11g Release 2, this default behaviour has changed. Running exactly the same demo:
 

 
SQL> create table empty (a number, b number, c number, d number, e number);
 
Table created.
 
SQL> create index empty_a_i on empty(a);
 
Index created.
 
SQL> create index empty_b_i on empty(b);
 
Index created.
 
SQL> create index empty_c_i on empty(c);
 
Index created.
 
SQL> create index empty_d_i on empty(d);
 
Index created.
 
SQL> create index empty_e_i on empty(e);
 
Index created.
 
SQL> select segment_name, blocks, bytes, extents from dba_segments where segment_name like 'EMPTY%';
 
no rows selected

 

We can see that no actual segments have been allocated. The default number of extents when creating an object is now effectively zero. Oracle now defers the creation of the segment and the actual allocation of extents and storage until the point in time when the first row gets inserted.
 
This means that for those packaged applications where a large number of objects get created, of which relatively few are actually ever used by the specific deployment of the application (e.g. SAP), a substantial amount of storage can potentially be saved. It can also save a significant amount of time when deploying such applications, as the overheads associated with creating the never-to-be-used segments are avoided.
 
There are also some subtle performance implications when the application attempts to access some of these “empty” tables. I’m just going to create another table, this one populated with a bunch of rows, and run a query that joins it with the “empty” table:
 

 
SQL> create table bowie as select rownum id, 'BOWIE' name from dual connect by level <= 1000000;
 
Table created.
 
SQL> create index bowie_id_i on bowie(id);
 
Index created. 

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', cascade=>true, estimate_percent=>null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'EMPTY', cascade=>true, estimate_percent=>null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.

SQL> select * from bowie, empty where bowie.id=empty.a and bowie.id = 42;
 
no rows selected
 

Execution Plan
----------------------------------------------------------
Plan hash value: 1098778158
 
--------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |     1 |    76 |     2   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN         |            |     1 |    76 |     2   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID | EMPTY      |     1 |    65 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | EMPTY_A_I  |     1 |       |     1   (0)| 00:00:01 |
|   4 |   BUFFER SORT                 |            |     1 |    11 |     1   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID| BOWIE      |     1 |    11 |     1   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN          | BOWIE_ID_I |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 

Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("EMPTY"."A"=42)
   6 - access("BOWIE"."ID"=42)
 

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        303  bytes sent via SQL*Net to client
        239  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

 

With a table that has yet to be populated, the CBO knows there can be no rows associated with this table (and hence no rows returned from this query). You can’t get any less than 0 consistent gets. Note that previously, the CBO would be forced to perform at the very least one read (if not more), as it couldn’t “know” there were indeed no rows, even with the table statistics set at zero (as they may no longer be accurate), until it actually physically accessed the segment.
 
The segment is not actually created, and storage allocated to the object, until the first row is inserted into the table. This means this first insert will be a relatively expensive operation, as the work associated with creating the segment and allocating the initial extent needs to be performed implicitly in the background at this time. Not only for the table itself, but for any dependent objects as well (eg. indexes).

 
SQL> insert into empty (a, b) values (1,1);
 
1 row created.
 

Execution Plan
----------------------------------------------------------
 
--------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       |     1 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | EMPTY |       |            |          |
--------------------------------------------------------------------------
 

Statistics
----------------------------------------------------------
       3535  recursive calls
        178  db block gets
        830  consistent gets
         26  physical reads
      27272  redo size
        367  bytes sent via SQL*Net to client
        321  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         59  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL> commit;
 
Commit complete.

   

Although it’s only a single row insert, note the high number of recursive calls and logical I/Os due in large part to the segment(s) being implicitly created in the background at this time. Subsequent inserts are nowhere near as expensive:
 

 
SQL> insert into empty (a, b) values (1,1);
 
1 row created.
 

Execution Plan
----------------------------------------------------------
 
--------------------------------------------------------------------------
| Id  | Operation                | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |       |     1 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | EMPTY |       |            |          |
--------------------------------------------------------------------------
 
Statistics
----------------------------------------------------------
          0  recursive calls
          5  db block gets
          1  consistent gets
          0  physical reads
        692  redo size
        381  bytes sent via SQL*Net to client
        321  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

 

Although the table was populated with values that only directly impacted 2 of the indexes, once the table segment is created, all dependent segments are created at the same time.

 
SQL> select segment_name, blocks, bytes, extents from dba_segments where segment_name like 'EMPTY%';
 
SEGMENT_NAME     BLOCKS      BYTES    EXTENTS
------------ ---------- ---------- ----------
EMPTY               128    1048576          1
EMPTY_A_I           128    1048576          1
EMPTY_B_I           128    1048576          1
EMPTY_C_I           128    1048576          1
EMPTY_D_I           128    1048576          1
EMPTY_E_I           128    1048576          1
 

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'EMPTY', cascade=>true, estimate_percent=>null, method_opt=> 'FOR ALL COLUMNS SIZE 1');
 
PL/SQL procedure successfully completed.
 
SQL> select index_name, blevel, leaf_blocks, status from dba_indexes where index_name like 'EMPTY%';
 
INDEX_NAME     BLEVEL LEAF_BLOCKS STATUS
---------- ---------- ----------- --------
EMPTY_A_I           0           1 VALID
EMPTY_B_I           0           1 VALID
EMPTY_C_I           0           0 VALID
EMPTY_D_I           0           0 VALID
EMPTY_E_I           0           0 VALID

 

 
Note all the index segments have been created although only 2 of them actually contain data.
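 
If the one-off cost of this implicit segment creation is a concern (say, just before a large data load), then from 11.2.0.2 onwards the deferred segments can be pre-created in bulk via the DBMS_SPACE_ADMIN package (a sketch, assuming you’re on a release where this procedure is available):

 
SQL> exec dbms_space_admin.materialize_deferred_segments(schema_name=>'BOWIE');
 
PL/SQL procedure successfully completed.
 

This creates the segments for all the deferred tables (and their dependent objects) in the nominated schema in one hit, rather than paying the price piecemeal as each table receives its first insert.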

Finally, there are some implications with regard to how quotas are enforced that may potentially cause issues. Prior to Oracle11g Release 2, if a user tried to create a segment in a tablespace for which they didn’t have sufficient privileges, the operation would fail during the creation of the object:

 
SQL> create user muse identified by muse default tablespace user_data temporary tablespace temp;
 
User created.
 
SQL> grant create session, create table to muse;
 
Grant succeeded.
 
SQL> connect muse/muse;
Connected.
SQL> create table fred (id number primary key using index (create index fred_pk on fred(id) tablespace user_data), name varchar2(20));
create table fred (id number primary key using index (create index fred_pk on fred(id) tablespace user_data), name varchar2(20))
*
ERROR at line 1:
ORA-01950: no privileges on tablespace 'USER_DATA'

 

However with Oracle11g Release 2, as no segment is actually created and hence storage allocated when an object is created, such creation of objects will now initially succeed. No quotas can be violated if no storage is actually used during the creation of the object:
 

 
SQL> create user muse identified by muse default tablespace user_data temporary tablespace temp;
 
User created.
 
SQL> grant create session, create table to muse;
 
Grant succeeded.
 
SQL> connect muse/muse
Connected.
 
SQL> create table fred (id number primary key using index (create index fred_pk on fred(id) tablespace user_data), name varchar2(20));
 
Table created.

  

It’s only when the table is first populated, and hence when the segment is actually created and storage allocated, that quota-related issues will generate errors:

 
SQL> insert into fred values (1, 'BOWIE');
insert into fred values (1, 'BOWIE')
            *
ERROR at line 1:
ORA-01950: no privileges on tablespace 'USER_DATA'

 

An interesting change in default behaviour introduced in Oracle11g Release 2 that can potentially be quite beneficial but perhaps also a little dangerous for the unwary.

Oracle11g IGNORE_ROW_ON_DUPKEY_INDEX Hint (Micro Cuts) December 20, 2010

Posted by Richard Foote in 11g, 11g New features, Oracle Indexes.
17 comments

An interesting new hint was introduced in Oracle11g which provides an alternative approach when inserting data where duplicate values might be an issue.
 
To illustrate, I’m going to create a little table with just 10 rows, with a unique ID column containing the values 1 – 10 policed by a Unique index:

 

SQL> create table radiohead (id number constraint radiohead_pk_i primary key using index (create unique index radiohead_pk_i on radiohead(id)), name varchar2(20));
 
Table created.
 
SQL> select index_name, uniqueness, table_name from dba_indexes where index_name='RADIOHEAD_PK_I';
 
INDEX_NAME                     UNIQUENES TABLE_NAME
------------------------------ --------- ------------------------------
RADIOHEAD_PK_I                 UNIQUE    RADIOHEAD
 
SQL> insert into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 10;
 
10 rows created.
 
SQL> commit;
 
Commit complete.

 
 

If we now attempt to add 12 more rows, but including the values 1 – 10:

 
SQL> insert into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 12;
insert into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 12
*
ERROR at line 1:
ORA-00001: unique constraint (BOWIE.RADIOHEAD_PK_I) violated

  
We obviously get a unique constraint violation error.
 
However, Oracle11g allows us to use the IGNORE_ROW_ON_DUPKEY_INDEX hint, which silently deals with the unique constraint violation errors by simply ignoring, and not inserting, any row whose unique values already exist in the table.
 
The hint comes in 2 formats, the first allows us to specify the unique index which contains the unique values to be ignored:
 

 
SQL> insert /*+ ignore_row_on_dupkey_index(radiohead,radiohead_pk_i) */ into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 12;
 
2 rows created.
 
SQL> commit;
 
Commit complete.

 
 

  
Note the 10 duplicate values (1 – 10) have been ignored and have not been inserted into the table, but the values 11 and 12, which didn’t previously exist, have successfully been inserted into the table.
 
Here we attempt to insert the values 1 – 13 into the table, although the values 1 – 12 now already exist. This time, we’ll use the second format of the hint, which allows us to stipulate the column containing the unique values to be ignored if they already exist:
 

 
SQL> insert /*+ ignore_row_on_dupkey_index(radiohead(id)) */ into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 13;
 
1 row created.
 
SQL> commit;
 
Commit complete.
 

SQL> select * from radiohead;
 
        ID NAME
---------- --------------------
         1 OK COMPUTER
         2 OK COMPUTER
         3 OK COMPUTER
         4 OK COMPUTER
         5 OK COMPUTER
         6 OK COMPUTER
         7 OK COMPUTER
         8 OK COMPUTER
         9 OK COMPUTER
        10 OK COMPUTER
        11 OK COMPUTER
        12 OK COMPUTER
        13 OK COMPUTER
 
13 rows selected.

 
 

Note in this case, the values 1 – 12 have all been silently ignored with just the value 13 inserted this time into the table.
 
You can’t however use this hint within an update statement …

 
SQL> update /*+ ignore_row_on_dupkey_index(radiohead,radiohead_pk_i) */ radiohead set id = 13 where id = 3;
update /*+ ignore_row_on_dupkey_index(radiohead,radiohead_pk_i) */ radiohead set id = 13 where id = 3
*
ERROR at line 1:
ORA-38917: IGNORE_ROW_ON_DUPKEY_INDEX hint disallowed for this operation

 
 

An interesting difference in the behaviour of this hint. Usually, “invalid” hints are simply ignored and treated as comments, but here, if an illegal operation is attempted with the use of this “hint”, an error is raised.
 
Let’s now set up the same demo again, but this time police the Primary Key constraint via a Non-Unique index:

 
SQL> drop table radiohead;
 
Table dropped.
 
SQL> create table radiohead (id number constraint radiohead_pk_i primary key using index (create index radiohead_pk_i on radiohead(id)), name varchar2(20));
 
Table created.
 
SQL> insert into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 10;
 
10 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> select index_name, uniqueness, table_name from dba_indexes where index_name='RADIOHEAD_PK_I';
 
INDEX_NAME                     UNIQUENES TABLE_NAME
------------------------------ --------- ------------------------------
RADIOHEAD_PK_I                 NONUNIQUE RADIOHEAD
 

SQL> insert /*+ ignore_row_on_dupkey_index(radiohead,radiohead_pk_i) */ into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 12;
insert /*+ ignore_row_on_dupkey_index(radiohead,radiohead_pk_i) */ into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 12
                                                                        *
ERROR at line 1:
ORA-38913: Index specified in the index hint is invalid
 

SQL> insert /*+ ignore_row_on_dupkey_index(radiohead(id)) */ into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 13;
insert /*+ ignore_row_on_dupkey_index(radiohead(id)) */ into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 13
                                                             *
ERROR at line 1:
ORA-38913: Index specified in the index hint is invalid

 
 

Note again an error is raised here as well, as this hint can only be applied to a constraint policed via a Unique index. A Non-Unique index, even with a Unique or PK constraint in place, is not sufficient and will generate the above error.
 
Yet another reason to use Unique indexes to police constraints whenever possible and practical …
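 
As an aside, IGNORE_ROW_ON_DUPKEY_INDEX has a sibling hint introduced at the same time, CHANGE_DUPKEY_ERROR_INDEX, which rather than silently skipping duplicates simply changes the error raised from the generic ORA-00001 to ORA-38911 (per the 11g documentation), making it easier for application code to trap duplicate violations against one specific index. A sketch, reusing the Unique index version of the demo table:

 
SQL> insert /*+ change_dupkey_error_index(radiohead,radiohead_pk_i) */ into radiohead select rownum, 'OK COMPUTER' from dual connect by level <= 5;
 

Rather than succeeding with the duplicates ignored, this statement fails with ORA-38911 instead of the usual ORA-00001. Note the same Unique index restriction discussed above applies to this hint as well.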

11g Virtual Columns and Fast Refreshable Materialized Views (What In The World) November 24, 2010

Posted by Richard Foote in 11g, 11g New features, Function Based Indexes, Oracle Bugs, Virtual Columns.
9 comments

Prior to Oracle 11g Rel 2, two very common and useful features worked well together: fast refreshable materialized views, and the virtual columns introduced via the creation of function-based indexes.
 
To illustrate, we create and populate a little demo table: 

 
SQL> create table bowie (a number, b number, c number);
 
Table created.
 
SQL> insert into bowie select rownum, mod(rownum,10), mod(rownum,100) from dual connect by level <= 100000;
 
100000 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> alter table bowie add primary key (a);
 
Table altered.

 
We now create a simple little function-based index:
 

 
SQL> create index bowie_func_i on bowie(b+c);
 
Index created.

 
 
If we look at the columns in the table via DBA_TAB_COLS:
 

 
SQL> select column_name, data_default, virtual_column, hidden_column from dba_tab_cols where table_name = 'BOWIE';
 
COLUMN_NAME  DATA_DEFAULT VIR HID
------------ ------------ --- ---
SYS_NC00004$ "B"+"C"      YES YES
C                         NO  NO
B                         NO  NO
A                         NO  NO

 

We notice Oracle has introduced a new, hidden virtual column (SYS_NC00004$), required to store statistics for use by the Cost Based Optimizer.
 
Next we create a materialized view log on this table and a fast refreshable materialized view:
 

 
SQL> create materialized view log on bowie WITH PRIMARY KEY,SEQUENCE, ROWID (b,c) INCLUDING NEW VALUES;
 
Materialized view log created.
 
SQL> create materialized view bowie_mv
  2  build immediate
  3  refresh fast
  4  with primary key
  5  enable query rewrite
  6  as
  7  select b, count(*) from bowie group by b;
 
Materialized view created.

 
 
Collect a few statistics and we note the Materialized View does indeed get used during a query rewrite scenario:
 

 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', estimate_percent=>null, cascade=> true, method_opt=> 'FOR ALL COLUMNS SIZE 1')
 
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE_MV', estimate_percent=>null, cascade=> true, method_opt=> 'FOR ALL COLUMNS SIZE 1')
 
PL/SQL procedure successfully completed. 

SQL> select b, count(*) from bowie having b > 3 group by b;
 
         B   COUNT(*)
---------- ----------
         6      10000
         4      10000
         5      10000
         8      10000
         7      10000
         9      10000
 
6 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 593592962
 
-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |     7 |    42 |     2   (0)| 00:00:01 |
|*  1 |  MAT_VIEW REWRITE ACCESS FULL| BOWIE_MV |     7 |    42 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
 

Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("BOWIE_MV"."B">3)
 

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        538  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          6  rows processed

 
 

And indeed the materialized view is fast refreshable:
 

 
SQL> insert into bowie values (100001, 5, 42);
 
1 row created.
 
SQL> commit;
 
Commit complete.
 
SQL> exec dbms_mview.refresh('BOWIE_MV', 'F');
 
PL/SQL procedure successfully completed.
 
 
SQL> select b, count(*) from bowie having b > 3 group by b;
 
         B   COUNT(*)
---------- ----------
         6      10000
         4      10000
         5      10001
         8      10000
         7      10000
         9      10000
 
6 rows selected.
 

Execution Plan
----------------------------------------------------------
Plan hash value: 593592962
 
-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |     7 |    42 |     2   (0)| 00:00:01 |
|*  1 |  MAT_VIEW REWRITE ACCESS FULL| BOWIE_MV |     7 |    42 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
 

Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("BOWIE_MV"."B">3)
 

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        546  bytes sent via SQL*Net to client
        395  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          6  rows processed

 
Notice how the materialized view does indeed display the correct, updated information via the query rewrite operation. So the materialized view behaved and worked as expected, even though the underlying master table has a virtual column due to the creation of the function-based index (note that QUERY_REWRITE_INTEGRITY is set to STALE_TOLERATED).
 
Unfortunately, since Oracle 11g Rel 2, things go off the rails somewhat when a virtual column is introduced via one of the 11g new features. For example, I now collect some Extended Statistics on this table:
 

 
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', method_opt=> 'FOR COLUMNS (A,B,C) SIZE 254');
 
PL/SQL procedure successfully completed.
 
SQL> select column_name, data_default, virtual_column, hidden_column from dba_tab_cols where table_name = 'BOWIE';
 
COLUMN_NAME                    DATA_DEFAULT                      VIR HID
------------------------------ --------------------------------- --- ---
SYS_STUM4KJU$CCICS9C1UJ6UWC4YP SYS_OP_COMBINED_HASH("A","B","C") YES YES
SYS_NC00004$                   "B"+"C"                           YES YES
C                                                                NO  NO
B                                                                NO  NO
A                                                                NO  NO

 
 
Notice how extended statistics has resulted in another hidden virtual column (SYS_STUM4KJU$CCICS9C1UJ6UWC4YP) being created to store the resultant statistics.
 
However, if we now attempt to perform a fast refresh on the Materialized View:

 
SQL> insert into bowie values (100002, 5, 42);
 
1 row created.
 
SQL> commit;
 
Commit complete.
 
SQL> exec dbms_mview.refresh('BOWIE_MV', 'F');
BEGIN dbms_mview.refresh('BOWIE_MV', 'F'); END;
 
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-00904: "MAS$"."SYS_STUM4KJU$CCICS9C1UJ6UWC4YP": invalid identifier
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2558
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2771
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2740
ORA-06512: at line 1

 

We get an error, complaining about the existence of this new virtual column.
 
If we attempted to drop and re-create the materialized view:

 
SQL> drop materialized view bowie_mv;
 
Materialized view dropped.
 
SQL> create materialized view bowie_mv
  2  build immediate
  3  refresh fast
  4  with primary key
  5  enable query rewrite
  6  as
  7  select b, count(*) from bowie group by b;
select b, count(*) from bowie group by b
                        *
ERROR at line 7:
ORA-12033: cannot use filter columns from materialized view log on "BOWIE"."BOWIE"

 

It fails, complaining that the materialized view log is somehow missing a filter column (which it isn’t). We get exactly the same set of issues if we add a visible virtual column via this new 11g capability:
 
 

 
SQL> create table bowie2 (a number, b number, c number, d as (a+b+c));
 
Table created.
 
SQL> select column_name, data_default, virtual_column, hidden_column from dba_tab_cols where table_name = 'BOWIE2';
 
COLUMN_NAME  DATA_DEFAULT VIR HID
------------ ------------ --- ---
D            "A"+"B"+"C"  YES NO
C                         NO  NO
B                         NO  NO
A                         NO  NO
 
SQL> insert into bowie2 (a,b,c) select rownum, mod(rownum,10), mod(rownum,100) from dual connect by level <= 100000;
 
100000 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> alter table bowie2 add primary key (a);
 
Table altered.
 
SQL> create materialized view log on bowie2 WITH PRIMARY KEY,SEQUENCE, ROWID (b,c) INCLUDING NEW VALUES;
 
Materialized view log created.
 
SQL> create materialized view bowie2_mv
  2  build immediate
  3  refresh fast
  4  with primary key
  5  enable query rewrite
  6  as
  7  select b, count(*) from bowie2 group by b;
select b, count(*) from bowie2 group by b
                        *
ERROR at line 7:
ORA-12033: cannot use filter columns from materialized view log on "BOWIE"."BOWIE2"

 
Extended statistics and visible virtual columns are both potentially extremely useful new features introduced in 11g, but unfortunately neither can be implemented on any table that needs to be fast refreshable within a complex materialized view.
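
Until the bug is addressed, one possible workaround if you’ve hit the extended statistics variant of the problem is to simply drop the offending column group, which removes the hidden virtual column and allows the materialized view to be created and fast refreshed again, at the cost of course of losing the extended statistics (a sketch, based on the demo above):

 
SQL> exec dbms_stats.drop_extended_stats(ownname=>null, tabname=>'BOWIE', extension=>'(A,B,C)');
 
PL/SQL procedure successfully completed.
 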

I raised this issue with Oracle Support, who have raised bug 10281402 as a result, as it occurs in both 11.2.0.1 and 11.2.0.2 on the various platforms I’ve tested.

ACT Oracle User Group Seminar Session: 2 June 2010 May 27, 2010

Posted by Richard Foote in 11g New features, Advert, Oracle Indexes.
add a comment

Just a short note for anyone near the Canberra region that the next ACT Oracle User Group Seminar Session will be held next Wednesday, 2 June 2010 at the Oracle Offices in Turner.

I’ll be presenting “Indexing New Features and Improvements Introduced in Oracle 11g Release 1 & 2″. Follow the above link for the full agenda.

If that isn’t enough to make you want to attend, there will be free beer and pizza available afterwards :)
