
12.2 Moving Tables Online (Real Cool World) March 15, 2017

Posted by Richard Foote in Oracle Indexes.


One of the really cool new features introduced in Oracle Database 12c Release 2 is the capability to MOVE tables ONLINE, while maintaining all the associated index structures. This was possible for table partitions in Oracle Database 12c Release 1, but before then only for Index Organized Tables.

Rather than show a demo that just moves a table, I thought I might also provide a classic example of when moving a table can be extremely beneficial.

I’ll start by creating a table and specifically populate a CODE column with 100 distinct values that are distributed throughout the entire table structure:


SQL> create table ziggy (id number, code number, date_created date, name varchar2(42));

Table created.

SQL> insert into ziggy select rownum, mod(rownum,100), sysdate - mod(rownum,1000), 'DAVID BOWIE' from dual connect by level <= 2000000;

2000000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=>user, tabname=>'ZIGGY', estimate_percent=>null, method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.

If I create an index on the CODE column, we’ll see the index has a terrible Clustering Factor (a massive 923,900), as the values basically appear throughout the entire table in each of the table blocks:


SQL> create index ziggy_code_i on ziggy(code);

Index created.

SQL> select index_name, clustering_factor, num_rows from user_indexes
where index_name='ZIGGY_CODE_I';

INDEX_NAME           CLUSTERING_FACTOR   NUM_ROWS
-------------------- ----------------- ----------
ZIGGY_CODE_I                    923900    2000000
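As an aside, the Clustering Factor is essentially derived by walking the index in key order and incrementing a counter each time the table block changes between consecutive index entries. The following is just a simplified illustrative sketch of that idea (not Oracle's actual implementation, which for example can also consider the TABLE_CACHED_BLOCKS statistics preference), showing why a MOD-style scattered column gets a Clustering Factor close to the number of rows, while the same data clustered in key order gets one close to the number of table blocks:

```python
# Toy Clustering Factor: count how often the table block changes as
# we visit each row's block in index key order.
def clustering_factor(table_blocks_in_key_order):
    cf, prev = 0, None
    for blk in table_blocks_in_key_order:
        if blk != prev:
            cf += 1
            prev = blk
    return cf

NROWS, ROWS_PER_BLOCK, NCODES = 200, 10, 20

# CODE assigned via MOD(ROWNUM, NCODES), so each value is scattered
# across every table block (as in the ZIGGY table above).
rows = [(r % NCODES, r // ROWS_PER_BLOCK) for r in range(NROWS)]  # (code, block)
cf_scattered = clustering_factor([blk for code, blk in sorted(rows)])

# After a re-org storing the rows in CODE order, the i-th row in key
# order simply sits in block i // ROWS_PER_BLOCK.
cf_clustered = clustering_factor([i // ROWS_PER_BLOCK for i in range(NROWS)])

print(cf_scattered, cf_clustered)  # 200 20
```

With scattered data the block changes on virtually every index entry (CF ≈ number of rows); with clustered data it only changes at block boundaries (CF ≈ number of table blocks), which is exactly the contrast we're about to see with ZIGGY.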

If we now run a query that returns rows from just one CODE value (i.e. just 1% of the table):


SQL> set arraysize 5000
SQL> select * from ziggy where code = 42;

20000 rows selected.

Execution Plan
-----------------------------------------------------------------------------------
| Id | Operation                 | Name  | Rows  | Bytes | Cost (%CPU) | Time     |
-----------------------------------------------------------------------------------
| 0  | SELECT STATEMENT          |       | 20000 |  546K |    2750 (1) | 00:00:01 |
|* 1 | TABLE ACCESS STORAGE FULL | ZIGGY | 20000 |  546K |    2750 (1) | 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - storage("CODE"=42)
filter("CODE"=42)

Statistics
----------------------------------------------------------
     0 recursive calls
     0 db block gets
  9354 consistent gets
     0 physical reads
     0 redo size
367165 bytes sent via SQL*Net to client
   533 bytes received via SQL*Net from client
     5 SQL*Net roundtrips to/from client
     0 sorts (memory)
     0 sorts (disk)
 20000 rows processed

We see the Cost Based Optimizer (CBO) ignores the index and opts for a Full Table Scan. With such a terrible Clustering Factor and with each block in the table having to be accessed, the CBO is making the correct decision here. The index is effectively useless, even though we’re only interested in just 1% of the table. At 9,354 consistent gets, it’s the best we can do.
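We can sanity-check that decision with the classic simplified I/O-only costing formula for an index range scan: blevel + ceil(leaf_blocks × selectivity) + ceil(clustering_factor × selectivity). A rough Python sketch using the statistics above (note the blevel of 2 is an assumption as it isn't shown in the demo output, and CPU costing is ignored):

```python
import math

# Simplified I/O-only index range scan cost (the classic formula):
#   blevel + ceil(leaf_blocks * sel) + ceil(clustering_factor * sel)
def index_range_scan_cost(blevel, leaf_blocks, clustering_factor, sel):
    return (blevel
            + math.ceil(leaf_blocks * sel)
            + math.ceil(clustering_factor * sel))

sel = 1 / 100           # one of 100 evenly distributed CODE values
full_scan_cost = 2750   # cost reported in the execution plan above

# leaf_blocks (1936) and clustering_factor (923900) from the stats above;
# a blevel of 2 is an assumption.
cost_scattered = index_range_scan_cost(2, 1936, 923900, sel)
print(cost_scattered, cost_scattered > full_scan_cost)  # 9261 True
```

With a Clustering Factor of 923,900, the index costing is dominated by the clustering_factor × selectivity term and comes out several times more expensive than the Full Table Scan, which is precisely why the CBO ignores the index.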

If this query were important to us and executed frequently, we might want to look at improving the Clustering Factor and hence the efficiency of this index. Despite rumors to the contrary (which unfortunately still persist, believe it or not), the key here is not to rebuild the index, but to reorganise the table such that the data in the table is sorted/clustered in CODE order.

Oracle Database 12.1.0.2 introduced the concept of the table Clustering Attribute, by which the table after a re-org or bulk load will store the data based on this Clustering Attribute (as I’ve discussed previously). For example:


SQL> alter table ziggy add clustering by linear order(code) without materialized zonemap;

Table altered.

This now ensures that the data within the table will be in CODE order after a table re-org, but we need to re-org the table for this to take effect. However, prior to 12.2, it wasn’t possible to subsequently MOVE the table ONLINE:


SQL> alter table ziggy move online;
alter table ziggy move online
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option

Previously, moving a heap table caused locking issues and resulted in all associated indexes becoming unusable and having to be rebuilt, further adding to the effective unavailability of the table:


SQL> alter table ziggy move;
alter table ziggy move
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

SQL> alter table ziggy move;

Table altered.

SQL> select index_name, leaf_blocks, status from dba_indexes where table_name = 'ZIGGY';

INDEX_NAME      LEAF_BLOCKS STATUS
--------------- ----------- --------
ZIGGY_CODE_I           1936 UNUSABLE

That’s now changed with Oracle Database 12c Release 2, where we can now move heap tables online while fully maintaining all associated indexes:


SQL> alter table ziggy move online;

Table altered.

SQL> select index_name, clustering_factor, num_rows, status from user_indexes
where index_name='ZIGGY_CODE_I';

INDEX_NAME      CLUSTERING_FACTOR   NUM_ROWS STATUS
--------------- ----------------- ---------- ------
ZIGGY_CODE_I                 9277    2000000 VALID

After the table re-org, we note that not only is the index still in a valid state, but because the table re-org effectively sorted the data in CODE order due to the Clustering Attribute, the index now has a vastly reduced Clustering Factor (just 9,277, down from 923,900).

If we now re-run our original query:


SQL> select * from ziggy where code=42;

20000 rows selected.

Execution Plan
----------------------------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows  | Bytes | Cost (%CPU) | Time     |
----------------------------------------------------------------------------------------------------
| 0  | SELECT STATEMENT                    |              | 20000 |  546K |     126 (0) | 00:00:01 |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | ZIGGY        | 20000 |  546K |     126 (0) | 00:00:01 |
|* 2 | INDEX RANGE SCAN                    | ZIGGY_CODE_I | 20000 |       |      33 (0) | 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("CODE"=42)

Statistics
----------------------------------------------------------
     0 recursive calls
     0 db block gets
   136 consistent gets
     0 physical reads
     0 redo size
670725 bytes sent via SQL*Net to client
   533 bytes received via SQL*Net from client
     5 SQL*Net roundtrips to/from client
     0 sorts (memory)
     0 sorts (disk)
 20000 rows processed

We notice the index is now used and has resulted in a significant reduction of consistent gets (just 136, down from 9,354). The query is now much much more efficient than it was previously.

An issue here though is that we have to re-org the entire table in order to address the clustering of any new rows that are subsequently added with standard DML (as the Clustering Attribute is ineffective in this case).

Wouldn’t it be nice if we could easily convert this table (and associated indexes) to be partitioned and hence only have to concern ourselves with subsequently reorganising just those partitions that have new/changed data?

Topic of my next post 🙂


Good Singapore Maths Students Would Likely Make Good Oracle DBAs (Problems) April 15, 2015

Posted by Richard Foote in Oracle Indexes.

An interesting mathematics-based question from a Singapore high school exam has been doing the internet rounds in the past few days. Considering it’s aimed at high school students, it’s a tricky one and obviously designed to filter out the better students, in a country with a very good reputation for churning out mathematically gifted students.

I enjoy solving problems and this one took me a few minutes to work out. However, at the end of the process, it occurred to me that I had used a similar process to how I’ve often solved performance issues with Oracle databases. In fact, this question kinda reminded me of a performance issue that I had only just recently been asked by a customer to help resolve.

One needs to clearly understand the question being asked. One also needs to focus and understand the data at hand. Then use a process of elimination to both rule out and just as importantly rule in possible answers (or root causes to performance issues). Eventually, one can then narrow down and pinpoint things down to a specific solution (or set of answers).

So for example, the database is running exceptionally slow globally at certain times, why ?  So it looks like it’s because there are lots of cluster related waits at these times, why ? So it looks like it’s because writing to the redo logs is really slow at these times, why ? And so on and so on.

If you can work out the solution to this problem in a reasonably timely manner, then in all likelihood you have good problem solving skills and the makings of a good Oracle DBA. You just need to also like donuts and good whiskies 🙂

I’ve reproduced the question here, changing the names to protect the innocent.

“Bowie and Ziggy just became friends with me, and they want to know when my birthday is. I give them 10 possible dates:

May 13   May 15   May 19

June 13   June 14

July 16   July 18

August 14   August 15   August 16

I then tell Bowie and Ziggy separately the month and the day of my birthday respectively.

Bowie: I don’t know when your birthday is, but I know Ziggy does not know too.

Ziggy: At first I don’t know when your birthday is, but now I know.

Bowie: Then I also know when your birthday is.

So when is my birthday ?”

Feel free to comment on what you think the answer is but please don’t give away how you might have worked it out. For those interested (or for those that don’t check out the solution on the web first 🙂 ), I’ll explain how to get to the answer in a few days time.

Like I said, if you get it right, you should consider a career as an Oracle DBA !! And here’s a link to an excellent whisky: Sullivans Cove Whisky 🙂
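And if (spoiler alert!) you’d rather let a machine do the eliminating for you, the three statements can be brute-forced as successive filters over the candidate dates. A quick Python sketch, just one of many ways to encode it, so look away now if you want to solve it yourself:

```python
# Brute-force the puzzle: encode each statement as a filter over the
# candidate dates, exactly the process of elimination described above.
DATES = [("May", 13), ("May", 15), ("May", 19),
         ("June", 13), ("June", 14),
         ("July", 16), ("July", 18),
         ("August", 14), ("August", 15), ("August", 16)]

def day_count(dates, d):
    return sum(1 for (_, dd) in dates if dd == d)

def month_count(dates, m):
    return sum(1 for (mm, _) in dates if mm == m)

# Bowie (knows the month): "I don't know, but I know Ziggy doesn't either"
# -> no date in Bowie's month has a day that is unique overall.
step1 = [(m, d) for (m, d) in DATES
         if all(day_count(DATES, d2) > 1
                for (m2, d2) in DATES if m2 == m)]

# Ziggy (knows the day): "At first I didn't know, but now I do"
# -> the day wasn't unique overall, but is unique among step1.
step2 = [(m, d) for (m, d) in step1
         if day_count(DATES, d) > 1 and day_count(step1, d) == 1]

# Bowie: "Then I also know" -> the month is unique among step2.
step3 = [(m, d) for (m, d) in step2 if month_count(step2, m) == 1]

print(step3)  # the single surviving date is the birthday
```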

Presenting at UKOUG Tech14 Conference (Ian Fish, U K Heir) July 15, 2014

Posted by Richard Foote in Richard Presentations, UKOUG Tech14.

I’ve been lucky enough to present at various conferences, seminars and user group events over the years in some 20 odd countries. But somewhere I’ve never quite managed to present before is the place of my birth, the UK. Well this year, I’ve decided to end my drought and submitted a number of papers for the UKOUG Tech14 Conference and the parallel OakTable World UK14 Conference, which were all thankfully accepted. So the land of David Bowie, Coronation Street, Bruce Forsyth, televised darts and a touch of drizzle, here I come.

The conference this year is being held in balmy Liverpool on 8-10 December 2014.

I’ll be presenting (times subject to change):

Oracle Indexes Q&A Session With Richard Foote (at OakTable World UK14)
(08/12/2014 09:00 – 09:50)

Oracle Database 12c New Indexing Features
(09/12/2014 12:00 – 12:50)

Indexing In Exadata
(10/12/2014 11:30 – 12:20)


Hopefully I’ll get to meet some of you there 🙂

12c Online Partitioned Table Reorganisation Part II (Move On) January 15, 2014

Posted by Richard Foote in 12c, Move Partitions, Oracle Indexes, Update Indexes Online.

In Part I, we looked at some of the issues associated with locking and unusable indexes when Moving both tables and table partitions.

The Oracle 12c Database has introduced a number of great new capabilities associated with online DDL activities. One of these really useful features is the capability to now move table partitions online while maintaining associated indexes.

If we look at the same demo as with the previous post:

SQL> create table muse (id number, status varchar2(6), name varchar2(30))
2 partition by range (id)
3 (partition p1 values less than (1000001),
4 partition p2 values less than (2000001),
5 partition p3 values less than (maxvalue));

Table created.

SQL> insert into muse select rownum, 'CLOSED', 'DAVID BOWIE' from dual connect by level <= 3000000;

3000000 rows created.

SQL> commit;

Commit complete.

SQL> create index muse_id_pk on muse(id);

Index created.

SQL> alter table muse add constraint muse_id_pk primary key(id);

Table altered.

SQL> create index muse_status_i on muse(status) local;

Index created.

If in one session we have an active transaction (i.e. not yet committed):

SQL> insert into muse values (3000001, 'OPEN', 'ZIGGY STARDUST');

1 row created.

while we move a table partition in another session as we did previously:

SQL> alter table muse move partition p3 tablespace users update indexes;

alter table muse move partition p3 tablespace users update indexes

ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

We get the same resource busy error.

However, if we now use the new 12c UPDATE INDEXES ONLINE clause:

SQL> alter table muse move partition p3 tablespace users update indexes online;

The session now simply hangs until the transaction in session one completes, in a similar manner to that of an index rebuild online operation.

Once we commit in session one:

SQL> commit;

Commit complete.

The Move Partition operation can obtain the necessary table partition lock and complete successfully:

SQL> alter table muse move partition p3 tablespace users update indexes online;

Table altered.

The indexes remain in a USABLE state throughout:

SQL> select index_name, partition_name, status from dba_ind_partitions
where index_name='MUSE_STATUS_I'
union
select index_name, null, status from dba_indexes where index_name='MUSE_ID_PK';

INDEX_NAME      PARTITION_NAME       STATUS
--------------- -------------------- --------
MUSE_ID_PK                           VALID
MUSE_STATUS_I   P1                   USABLE
MUSE_STATUS_I   P2                   USABLE
MUSE_STATUS_I   P3                   USABLE

If we perform these operations the other way around with the Move Partition first:

SQL> alter table muse move partition p3 tablespace users update indexes online;

While in session two we now insert a new row into the partition being moved:

SQL> insert into muse values (3000004, 'OPEN', 'ZIGGY STARDUST');

1 row created.

The insert operation is now not locked out by the Move Partition operation and can proceed. The Move Partition operation in turn requires a brief lock at the end of the process (in much the same way as a normal online index rebuild) and so will hang if there are any outstanding transactions on the partition being moved at that time. And again like an online index rebuild, this will not in turn lock out other DML transactions.
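Conceptually, you can think of the online operation as a copy-plus-journal: bulk-copy the segment without blocking writers, record any concurrent changes in a journal, then take just a brief lock at the end to catch up and switch over. A toy sketch of that general pattern (in no way Oracle’s actual implementation):

```python
# Toy sketch of an "online" segment move in phases:
#   1. bulk copy without blocking writers
#   2. concurrent DML lands in a journal instead of being locked out
#   3. brief exclusive lock at the end to replay the journal and swap
segment = [1, 2, 3]
journal = []

# Phase 1: bulk copy; a concurrent session inserts mid-copy and its
# change is recorded in the journal rather than blocking.
new_segment = list(segment)
segment.append(4)
journal.append(("insert", 4))

# Phases 2/3: brief exclusive lock, replay the journal, swap segments.
for op, value in journal:
    if op == "insert":
        new_segment.append(value)
segment = new_segment

print(segment)  # [1, 2, 3, 4]
```

The key point the sketch illustrates is that only the final catch-up-and-swap step needs exclusive access, which is why the move only hangs briefly on outstanding transactions rather than locking writers out for the duration.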

Once the commit is performed in session two:

SQL> commit;

Commit complete.

The Move Partition can complete:

SQL> alter table muse move partition p3 tablespace users update indexes online;

Table altered.

And importantly, all the associated indexes again remain in a USABLE state:

SQL> select index_name, partition_name, status
from dba_ind_partitions where index_name='MUSE_STATUS_I'
union select index_name, null, status from dba_indexes
where index_name='MUSE_ID_PK';

INDEX_NAME      PARTITION_NAME       STATUS
--------------- -------------------- --------
MUSE_ID_PK                           VALID
MUSE_STATUS_I   P1                   USABLE
MUSE_STATUS_I   P2                   USABLE
MUSE_STATUS_I   P3                   USABLE

Now this is only applicable to the online move of table partitions. Moving a non-partitioned table still has the same issues and restrictions as before (as discussed in Part I).

Therefore, for those with the Partitioning option, serious consideration could be given to converting tables to partitioned tables, even if the table only has the one partition:

SQL> create table muse3 (id number, status varchar2(6), name varchar2(30))
2 partition by range (id)
3 (partition p1 values less than (maxvalue));

Table created.

Such a table is effectively the equivalent of a non-partitioned table, but can now take advantage of the flexibility that the new online move partition capability provides 🙂

Exadata Storage Indexes Part III – Similarities With Database Indexes (Same Old Scene) October 15, 2012

Posted by Richard Foote in Exadata, Oracle Indexes, Storage Indexes.

As discussed previously, there are quite a number of differences between Storage Indexes (SIs) and Database Indexes (DIs). However, there are also a number of similarities between them.

The obvious one is that they’re both designed specifically to reduce the overheads associated with retrieving the required data out of the database. Both index structures provide a method by which Oracle can avoid having to read every row/block in a table when searching for data of interest. It’s just the actual implementation of this mechanism that differs between the two general index types, as I’ve previously discussed.

The efficiency of both index types is largely dependent upon the clustering of the indexed data within the table (i.e. the index Clustering Factor). The better the clustering of the related indexed data within the table, the more tightly coupled the required data is likely to be and so the more efficient both index types will be in avoiding access to unnecessary data. Conversely, if the required data is poorly clustered and randomly distributed throughout the table, both index types will be less efficient in retrieving the necessary data. Some actual examples of this to come in future posts.

Both index types have a period of “warming up” before being fully effective. It’s simply the manner in which this warming up process occurs that differs between the two. Database indexes on a freshly bounced database server initially incur substantial physical I/Os (PIOs) until the related index/table data is cached within the database buffer cache (and indeed in the flash cache). These PIOs can significantly reduce the performance of the SQL plans utilising database indexes. Storage Indexes on a freshly bounced storage server need to be recreated and can’t immediately eliminate accesses to unnecessary storage regions. This can significantly reduce the performance of Full Table Scans until the associated SIs are fully created. Again, some actual demos on all this to come in future posts.

Both index types can use “Index Combine” like logic and use multiple indexes in combination with each other to further reduce the actual number of table blocks that need to be accessed to retrieve the data of interest. Storage and Bitmap database indexes are especially suited to these types of index combine operations, although B-Tree indexes can also be used in this manner.

Both Oracle index types are really quite interesting and often misunderstood and so meets the general theme of this blog, meaning I can quite happily blog about them without shocking too many people in the process 🙂

Like I said, more to come …

Index Rebuild – Does it use the Index or the Table ? (Nothing Touches Me) May 15, 2012

Posted by Richard Foote in Index Rebuild, Oracle Indexes, Secondary Indexes.

A common question that gets asked is whether Oracle accesses the index itself or the parent table during an index rebuild to extract the necessary data for the index. I thought it might be worth a blog post to discuss.

Now if the index is currently in an UNUSABLE state, then Oracle clearly can’t use the existing index during the index rebuild operation. So we’ll assume both table and index are hunky dory.

OK, to setup the first demo (using 11.2.0.1), we create and populate a table and index with the index being somewhat smaller than the parent table as is most common:

SQL> create table bowie (id number, code number, name1 varchar2(30), name2 varchar2(30), name3 varchar2(30), name4 varchar2(30), name5 varchar2(30), name6 varchar2(30), name7 varchar2(30), name8 varchar2(30), name9 varchar2(30), name10 varchar2(30));

Table created.

SQL> insert into bowie select rownum, mod(rownum, 100), 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE','DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_code_i on bowie(code);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE', estimate_percent=>null, cascade=> true);

PL/SQL procedure successfully completed.

If we look at the corresponding size of table and index:

SQL> select table_name, blocks from dba_tables where table_name = 'BOWIE';

TABLE_NAME                         BLOCKS
------------------------------ ----------
BOWIE                               19277

SQL> select index_name, leaf_blocks from dba_indexes where index_name = 'BOWIE_CODE_I';

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
BOWIE_CODE_I                          1948

As is common, the table is somewhat larger than the corresponding index.

Now in my first demo, I’m just going to perform a normal offline Index Rebuild. I’ll however trace the session to see what might be happening behind the scenes (the good old alter session set events '10046 trace name context forever, level 12'; still does the job). I’ll also flush the buffer cache as well to ensure the trace file shows me which blocks from which object get accessed.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> alter index bowie_code_i rebuild;

Index altered.

SQL> alter session set events '10046 trace name context off';

Session altered.

There’s lots of information of interest in the resultant trace file, well, for someone with an unhealthy interest in Oracle indexes anyways 🙂 However, the portion that’s of direct interest in this discussion is to see which object Oracle accesses in order to read the necessary data for the index rebuild. The trace file will contain a relatively extensive section with the following wait events (the following is just a short sample):

WAIT #6: nam='db file scattered read' ela= 933 file#=4 block#=79339 blocks=5 obj#=75737 tim=20402099526
WAIT #6: nam='db file scattered read' ela= 1016 file#=4 block#=79344 blocks=8 obj#=75737 tim=20402102334
WAIT #6: nam='db file scattered read' ela= 978 file#=4 block#=79353 blocks=7 obj#=75737 tim=20402106904
WAIT #6: nam='db file scattered read' ela= 9519 file#=4 block#=80000 blocks=8 obj#=75737 tim=20402119605
WAIT #6: nam='db file scattered read' ela= 2800 file#=4 block#=80009 blocks=7 obj#=75737 tim=20402131869

….

If we query the database for the identity of object 75737:

SQL> select object_name from dba_objects where object_id = 75737;

OBJECT_NAME
-----------------------

BOWIE_CODE_I

We can see that Oracle has accessed the data from the Index itself, using multi-block reads. As the index is the smallest segment that contains the necessary data, Oracle can very efficiently read all the required data (the expensive bit) from the index itself, perform a sort of all the data (as a multi-block read will not return the data in a sorted format) and complete the rebuild process relatively quickly. Note the table is locked throughout the entire index rebuild operation preventing DML operations on the table/index and so for an offline index rebuild, Oracle can access the Index segment without complication.

I’m going to repeat the same process but this time perform an Online index rebuild operation:

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> alter index bowie_code_i rebuild online;

Index altered.

SQL> alter session set events '10046 trace name context off';

Session altered.

We notice this time there are many more wait events than previously and that another object is referenced:

WAIT #5: nam='db file scattered read' ela= 8259 file#=4 block#=5635 blocks=5 obj#=75736 tim=4520179453
WAIT #5: nam='db file scattered read' ela= 1656 file#=4 block#=5640 blocks=8 obj#=75736 tim=4520181368
WAIT #5: nam='db file scattered read' ela= 891 file#=4 block#=5649 blocks=7 obj#=75736 tim=4520182459
WAIT #5: nam='db file scattered read' ela= 886 file#=4 block#=5656 blocks=8 obj#=75736 tim=4520183544
WAIT #5: nam='db file scattered read' ela= 827 file#=4 block#=5665 blocks=7 obj#=75736 tim=4520184579

SQL> select object_name from dba_objects where object_id = 75736;

OBJECT_NAME
-------------------------

BOWIE

This time, the much larger BOWIE parent table has been accessed. So with an Online rebuild, Oracle is forced to use the parent table to access the data for the rebuild operation, due to the concurrency issues associated with changes being permitted to the underlying table/index during the rebuild process. So although an online index rebuild has availability advantages, it comes at the cost of having to access the parent table, which can result in considerably more I/O. So if you don’t have availability concerns, an offline index rebuild is probably going to be the more efficient option.

In fact, Oracle can be quite clever in deciding which object to access with an offline rebuild …

In this next example, I’m going to create another table/index, only this time the index is somewhat larger than the parent table. This scenario is less common but certainly possible depending on circumstances:

SQL> create table bowie2 (id number, code number, name varchar2(30));

Table created.

SQL> insert into bowie2 select rownum, mod(rownum,100), 'DAVID BOWIE' from dual connect by level<= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie2_code_i on bowie2(code) pctfree 90;

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE2', estimate_percent=>null, cascade=> true);

PL/SQL procedure successfully completed.

SQL> select table_name, blocks from dba_tables where table_name = 'BOWIE2';

TABLE_NAME                         BLOCKS
------------------------------ ----------
BOWIE2                               3520

SQL> select index_name, leaf_blocks from dba_indexes where index_name = 'BOWIE2_CODE_I';

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
BOWIE2_CODE_I                        21726

So the index is indeed much larger than the table. Which object will Oracle access now during an offline rebuild ?

SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> alter index bowie2_code_i rebuild;

Index altered.

SQL> alter session set events '10046 trace name context off';

Session altered.

A look at the trace file reveals:

WAIT #15: nam='db file scattered read' ela= 2278 file#=4 block#=81723 blocks=5 obj#=75744 tim=8570990574
WAIT #15: nam='db file scattered read' ela= 2733 file#=4 block#=81728 blocks=8 obj#=75744 tim=8570994765
WAIT #15: nam='db file scattered read' ela= 2398 file#=4 block#=81737 blocks=7 obj#=75744 tim=8570999057
WAIT #15: nam='db file scattered read' ela= 2661 file#=4 block#=81744 blocks=8 obj#=75744 tim=8571003369
WAIT #15: nam='db file scattered read' ela= 1918 file#=4 block#=81753 blocks=7 obj#=75744 tim=8571006709

SQL> select object_name from dba_objects where object_id = 75744;

OBJECT_NAME
----------------------------

BOWIE2

In this case, the smaller table segment is accessed. So during an offline rebuild, Oracle will access either the table or index, depending on which one is smaller and cheaper to read.

What if we now create another index that also contains the CODE column and is smaller than both the table and the existing index?

SQL> create index bowie2_code_id_i on bowie2(code, id);

Index created.

SQL> select index_name, leaf_blocks from dba_indexes where index_name = 'BOWIE2_CODE_ID_I';

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
BOWIE2_CODE_ID_I                      2642

SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> alter index bowie2_code_i rebuild;

Index altered.

SQL> alter session set events '10046 trace name context off';

Session altered.

A look at the trace file reveals:

WAIT #6: nam='db file scattered read' ela= 2070 file#=4 block#=85179 blocks=5 obj#=75747 tim=8925949081
WAIT #6: nam='db file scattered read' ela= 2864 file#=4 block#=85184 blocks=8 obj#=75747 tim=8925957161
WAIT #6: nam='db file scattered read' ela= 2605 file#=4 block#=85193 blocks=7 obj#=75747 tim=8925969901
WAIT #6: nam='db file scattered read' ela= 10636 file#=4 block#=85536 blocks=8 obj#=75747 tim=8925989726
WAIT #6: nam='db file scattered read' ela= 2188 file#=4 block#=85545 blocks=7 obj#=75747 tim=8925996890

SQL> select object_name from dba_objects where object_id = 75747;

OBJECT_NAME
------------------------------

BOWIE2_CODE_ID_I

In this case, the smaller alternative index is actually accessed. So it might not be the table or the index being rebuilt that gets accessed, but the smallest segment that contains the data of interest, which in this case is another index entirely.
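The underlying logic effectively boils down to: of all the segments that contain the required data, read the cheapest one. A hypothetical sketch of that selection (segment names and sizes taken from the demos above; the column/rowid modelling is purely illustrative):

```python
# Toy model of rebuild-source selection: pick the smallest segment that
# contains everything needed to rebuild BOWIE2_CODE_I (CODE + rowid).
segments = [
    {"name": "BOWIE2",           "blocks": 3520,  "columns": {"id", "code", "name", "rowid"}},
    {"name": "BOWIE2_CODE_I",    "blocks": 21726, "columns": {"code", "rowid"}},
    {"name": "BOWIE2_CODE_ID_I", "blocks": 2642,  "columns": {"code", "id", "rowid"}},
]

needed = {"code", "rowid"}  # data required to rebuild BOWIE2_CODE_I
candidates = [s for s in segments if needed <= s["columns"]]
best = min(candidates, key=lambda s: s["blocks"])

print(best["name"])  # BOWIE2_CODE_ID_I
```

All three segments can supply the (CODE, rowid) pairs, so the smallest one wins, matching what the trace file showed.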

My final little demo brings me back to the subject of secondary indexes on Index Organized Tables (IOTs) I’ve been recently discussing. In this example, I create an IOT and a much smaller secondary index:

SQL> create table bowie3 (id number constraint bowie_pk primary key, code number, name1 varchar2(30), name2 varchar2(30), name3 varchar2(30), name4 varchar2(30), name5 varchar2 (30), name6 varchar2(30), name7 varchar2(30), name8 varchar2(30), name9 varchar2(30), name10 varchar2(30)) organization index;

Table created.

SQL> insert into bowie3 select rownum, mod(rownum, 100), 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE', 'DAVID BOWIE','DAVID BOWIE','DAVID BOWIE', 'DAVID BOWIE' from dual connect by level <= 1000000;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie3_code_i on bowie3(code);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE3', estimate_percent=>null, cascade=> true);

PL/SQL procedure successfully completed.

SQL> select index_name, leaf_blocks from dba_indexes where table_name = 'BOWIE3';

INDEX_NAME                     LEAF_BLOCKS
------------------------------ -----------
BOWIE_PK                             16950
BOWIE3_CODE_I                         2782

So the secondary index is much smaller. However, if I rebuild it offline:

SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> alter index bowie3_code_i rebuild;

Index altered.

SQL> alter session set events '10046 trace name context off';

Session altered.

A look at the trace file reveals:

WAIT #5: nam='db file scattered read' ela= 13019 file#=4 block#=217856 blocks=4 obj#=75733 tim=8949436015
WAIT #5: nam='db file scattered read' ela= 1869 file#=4 block#=72915 blocks=5 obj#=75733 tim=8949438360
WAIT #5: nam='db file scattered read' ela= 3023 file#=4 block#=72920 blocks=8 obj#=75733 tim=8949442877
WAIT #5: nam='db file scattered read' ela= 2381 file#=4 block#=72929 blocks=7 obj#=75733 tim=8949448410
WAIT #5: nam='db file scattered read' ela= 2613 file#=4 block#=72936 blocks=8 obj#=75733 tim=8949453521

SQL> select object_name from dba_objects where object_id = 75733;

OBJECT_NAME
---------------------------

BOWIE_PK

In this case, we see that it's the much larger IOT PK segment that is accessed, not the smaller secondary index. When rebuilding a secondary index on an IOT, Oracle has no choice but to visit the parent IOT PK segment itself, as the secondary index of course doesn't contain all the information required for the rebuild operation. The physical guess component within the secondary index might be stale and the only way for Oracle to determine the correct current address of all the rows is to access the IOT PK segment. This is another disadvantage of secondary indexes associated with IOTs: even offline index rebuilds must access the potentially much larger IOT PK segment in order to ensure the correctness of the physical guess components of the logical rowids.

So the general answer to whether an index rebuild accesses the table or the index is that it depends, and it could very well be neither of them …

New Indexing Seminars Scheduled For Vienna and Tallinn (New Europeans) January 15, 2011

Posted by Richard Foote in Oracle Index Seminar.
6 comments

I have two Oracle Index Internals and Best Practices seminars scheduled in the coming months, in Austria (Vienna) and Estonia (Tallinn). These could very well be my last seminars for quite a while as I’m unsure whether I’ll be able to travel again this year, so this could be your last opportunity to see me in action 🙂

Details are:

7-8 March 2011 – Vienna

10-11 March 2011 – Tallinn

You need to book early to ensure your place and to ensure the seminars go ahead. Last year, a seminar in Prague was nearly cancelled due to lack of numbers only for 20 people to enroll in the last couple of weeks.

Hopefully, I’ll get to meet some of you at these events, assuming of course the Australian airports aren’t flooded, a distinct possibility considering the amount of rain that has been falling lately in this continent !!

InSync10 Conference Melbourne 16-17 August 2010 August 15, 2010

Posted by Richard Foote in InSync10.
2 comments

Looking forward to catching up with everyone at the InSync10 Conference in Melbourne tomorrow. Should be a great event with lots of interesting presentations. I’ll be flying in from (not so) sunny Canberra tomorrow morning in time for most of the morning sessions.

My presentation on “Indexing New Features in Oracle11g R1 and R2” is on the Tuesday morning at 10am in Room 106. As usual, I’ve packed in about 2 hours worth of material in the 45 minute session but hopefully not too many people will mind missing out on the following morning tea 😉

Be sure to tap my shoulder and say hello if you see me wandering around 🙂

Collaborate 2010: Here I Come (Red Money) April 15, 2010

Posted by Richard Foote in Collaborate 2010, Richard's Musings.
5 comments

Just a short note to say I'll be attending and presenting at next week's Collaborate 2010 Conference in (hopefully) sunny Las Vegas.

I’ll be presenting my latest version of Oracle Indexing Tips, Tricks and Traps which was a big hit when I presented it recently at the Hotsos Symposium.  Details are:

Session ID: 302

Date: Monday, April 19

Time: 10:45am-11:45am

Location: Surf D

As with all good presentations, the room is filling up fast so make sure you book your seat early 🙂  Hopefully, I get the opportunity to meet some of you at the conference. Please stop and say hello.

I’m really looking forward to spending some time again in Las Vegas, meeting up with some other Oracle ACEs and taking the opportunity to catch a few shows.

If plans go well and Black 26 finally pulls through and gives me a break, who knows, I might yet retire afterwards … 🙂

Oracle OpenWorld 2009 – Day 3 Highlights October 15, 2009

Posted by Richard Foote in Oracle OpenWorld, Richard's Musings.
6 comments

The first thing to note about day 3 is that it’s wet. I mean really really wet and really windy as well. Considering my home town of Canberra has been in drought now for many years with water restrictions in place, I haven’t seen this much rain for a very long time. That said, I believe it’s been raining quite a bit in Canberra since I’ve been traveling so perhaps things are beginning to change for the better at last.

Started the day by attending Dan Morgan's Unconference session on 11g Rel 2 new features that don't feature in the marketing material. Some good stuff is in there, with basic little things, such as now being able to easily alter database links, having the potential to make life a little easier.

I then presented my own Unconference Q & A session on Oracle indexes. The turn out was actually quite good and there were some really good discussions, including Index Skip Scans, Locally Partitioned Indexes, dangers of Bitmap indexes, coalescing rather than rebuilding and how difficult it is to drop an index once it's been created. Thank you to everyone who attended and for making the session a successful exercise, including Jonathan Lewis who started the ball rolling with the first question and some excellent comments and Tim Hall who managed to ask the first David Bowie related question 🙂

The heavy rain meant I didn't have the patience or a strong enough umbrella to wander too far from the main Moscone complex, but I managed to catch a nice presentation by Karen Morton on how to effectively collect database statistics. She presented really well and I really enjoyed it, although she perhaps didn't highlight enough the perils of some of the auto statistics options, especially with regard to method_opt.

I then listened in to Dan Norris in the OTN lounge on the new Exadata V2 offering. Certainly the columnar compression capabilities sound interesting. Of course I also heard that everything runs so fast that you might not need indexes on such a powerful beast, so perhaps my days of covering indexes on this blog are numbered 😉

I next had a look around at the other exhibition hall, practicing my golfing skills and picking up yet more t-shirts and yoyos that light up in funny ways. I’m a bit concerned that my luggage just won’t be able to hold all this stuff I’ve picked up !!

I have an interest in the Total Recall capabilities of 11g (and its lack of capabilities) and so attended a presentation on the improvements of Total Recall and flashback archive in 11g Rel 2. It looks like most of my issues have been addressed, with subsequent DDL changes on flashback archive enabled tables now being supported one way or the other. However, I still have concerns with the potential "corruptability" of the archived historical data and the subsequent recoverability implications. It would also be a nice improvement if "who" made a particular change was more readily available without having to purchase yet more expensive database options or have to modify the table structure.

The final presentation of the day was Tom Kyte's top 11 new 11g Release 2 features. His top 11 were basically:

– Do It Yourself Parallelism (with new dbms_parallel_execute feature)

– Analytic Functions (new ones such as LISTAGG and NTH_VALUE)

– Execute on a directory (and the preprocessor capabilities)

– Recursive Subquery Factoring (self-referencing WITH clause capabilities)

– Improved Time Travel (Total Recall improvements as mentioned above)

– You’ve got mail (new file watchers capabilities that can automatically invoke some code when a file arrives in a directory)

– Deferred Segment Creation (only allocated storage when the first row is inserted, not when segment is created)

– Flash Cache (write blocks back to disk AND an SSD like device so that subsequent reads from the SSD are faster)

– Parallel Improvements (such as better control of parallel process execution and in-memory parallel execution capabilities)

and, occupying both the number 10 and number 11 spots, the best feature, the killer one …

– Edition Based Redefinition (ability to have different versions of the same stored code running at the same time within the database)

As usual, Tom covered these topics with nice examples, in a very clear and concise manner.

On the way back to the hotel, I popped into the Blogger get-together function, had a few drinks and had the opportunity to meet and chat with a number of fellow Oracle bloggers. Thanks to Alex Gorbachev, Justin and the OTN team for putting the event together.


Aussie Oracle Bloggers, although one of us an Aussie no more 😉

It’s been a looong day …

Lanyon United Masters Div 3 Football Premiers 2009 (We Are The Champions) September 15, 2009

Posted by Richard Foote in Richard's Musings.
7 comments

On Sunday mornings, I play for the mighty Lanyon United Football Club, in the Canberra Masters Division 3 competition. You need to be over 35 to compete, meaning it's a league specifically for those who are highly skilled, have years of previous experience but aren't handicapped with youthful exuberance 🙂

In the previous 2 years, we managed to get to the final series but lost out in the semi-finals. This year we managed to get to the Grand Final where we met our local rivals, the Brindabella Blues.

This was the first Grand Final I’ve ever played in so I was pretty excited, managing only a couple of hours of sleep the night before. I’ve been struggling with injuries for much of the season but a few sore leg muscles weren’t going to stop me from giving it my all. It was a perfect Canberra Spring morning and I just had a good feeling about the day. There was a decent crowd on hand to support us oldies in action when the whistle blew to start the game.

The first half was a pretty close affair with perhaps Brindabella having the better run of play. Within a few minutes of starting, my left calf was feeling pretty tight and sore and I had their speedster on the left wing to contend with (note that "speed" is a relative word in Masters Div 3). It was with some relief that half-time arrived with the game still tied at 0-0. The second half was equally close although we started to gain more control and have the better of the chances. With about 20 minutes to go, I just couldn't get the final touch on a ball that slipped through the penalty box.

Then with 15 minutes to go, disaster struck. I was off chasing a ball down the wing when my legs just gave out, first tearing my right hamstring and then with the very next stride, tearing my left calf. Ouch !! Down I went like a sack of lumpy potatoes and off I slowly hobbled with the support of a few teammates. My contribution was over but with the game still locked at 0-0, the game was far from over.

Watching helplessly from the sidelines, in a lot of pain, wasn't easy. We had a number of good chances to win late in the game but full-time arrived with still no score. So 10 minutes of extra-time each way it was then. 5 minutes into the first period, the deadlock was finally broken when a break on the right resulted in a nice cross for Robin, our ever reliable striker, to brilliantly put the ball into the back of the net.

Wooohooo !! We were up 1-0. I was so excited that I forgot I was injured and hurt my legs some more as I jumped up off the chair. Time appeared to just drag for the rest of the game but our superb defence was just impenetrable and we managed to hold on for victory.

Lanyon United 1 Brindabella Blues 0. We had won the game and were Premiers for season 2009 !! 

It was such a great feeling, all the hard work and pain was worth it. And a beer had never tasted so good.

Lanyon 2009 Champions

 

Yes indeed, we’re one hell of a good looking team 🙂 More photos here.

Thanks to all the guys and all our supporters for making it such a fun and rewarding year. I’m already looking forward to defending our title next season.

I visited the physio the next day who was really impressed that I managed to injure 2 different muscles in both legs simultaneously. I should thankfully be OK for my upcoming trips to England and then Oracle OpenWorld in the coming weeks.

 

Back to Oracle indexes and the CPU based CBO in the next couple of days …

The CBO and Indexes: Introduction Continues … June 15, 2009

Posted by Richard Foote in CBO, Oracle Indexes.
11 comments

OK, I previously briefly covered how the CBO calculates the basic cost of an index range scan. Yes, those cardinality/rows values in the execution plans are vitally important as they highlight whether or not the CBO has used the appropriate selectivity values in the index costing formula. And yes, the associated cost values are meaningful and potentially useful as they determine the actual costs associated with the execution plan in terms of the expected number of I/Os the CBO estimates will be required (when using the IO costing model and often the CPU costing model as well).

I’m just going to look at another example now using the same table setup as before, but this time running an SQL query that has 5 distinct values in an IN list predicate on our demo table (again, follow the link to see the query and formatted execution plan).

The first thing we notice in this example, is that Oracle has decided to use a FTS rather than use the index on the ID column. Considering we’re only after 5 values out of the possible 100 values, some may not see this as expected behaviour, especially considering the index has such a good Clustering Factor. Basically Oracle is deciding to access each and every block below the HWM of the table, retrieving all 100% of the rows in the table, only to ultimately discard 95% of them.

It certainly appears at first glance to be a more “costly” option than using the index to directly access just the 5% of rows we’re interested in …

The first thing to check is the estimated cardinality figures, to see if the CBO has miscalculated the expected number of rows it needs to retrieve. However, as the statistics have just been fully computed and the ID column has perfectly evenly distributed values, we notice the cardinality figures are again spot on. The query returns 10,000 rows and indeed the rows estimate in the execution plan is exactly 10,000 rows. The calculation is simply 0.01 (density of the column) x 200,000 (rows) x 5 (values in the IN list) = 10,000.
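As a quick back-of-the-envelope check, this cardinality arithmetic can be sketched as follows (illustrative arithmetic only, not how the CBO is actually implemented):

```python
# Sketch of the CBO cardinality estimate for an IN-list predicate,
# using the statistics quoted above. Purely illustrative.
def in_list_cardinality(density, num_rows, in_list_values):
    # selectivity of one value (density) x table rows x number of IN-list values
    return round(density * num_rows * in_list_values)

print(in_list_cardinality(0.01, 200000, 5))  # 10000
```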

Let’s now calculate the cost of using the index using our index costing formula, using the CEIL function this time 😉

index blevel + ceil(index selectivity x leaf blocks) + ceil(table selectivity x clustering factor)

2 + 5 x ceil(0.01 x 602) + ceil(0.05 x 854) = 2 + 5 x 7 + 43 = 37 + 43 = 80.

So the cost of using an index range scan to retrieve 5% of the rows in our example comes to a total of 80.
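The manual working above can be expressed as a small sketch (again illustrative only, mirroring the formula as discussed here, with the index selectivity term repeated once per IN-list value):

```python
import math

# Sketch of the classic index range scan costing formula:
# blevel + ceil(index sel x leaf blocks) + ceil(table sel x clustering factor),
# with the leaf block component costed once per IN-list value.
def index_range_scan_cost(blevel, leaf_blocks, clustering_factor,
                          index_sel, table_sel, in_list_values=1):
    return (blevel
            + in_list_values * math.ceil(index_sel * leaf_blocks)
            + math.ceil(table_sel * clustering_factor))

# blevel=2, leaf_blocks=602, clustering_factor=854, 5 IN-list values
print(index_range_scan_cost(2, 602, 854, 0.01, 0.05, in_list_values=5))  # 80
```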

If we look at the cost of the FTS in our explain plan in the above link, we notice the cost is just 65. 65 is less than 80, so the FTS wins.

So how did the CBO come to a cost of just 65 when it has to read all 659 blocks in the table ?

Well, the first missing piece of information is the value of the db_file_multiblock_read_count parameter, because this governs how many blocks Oracle will attempt to read within a single multiblock I/O call. Remember, when performing a FTS, Oracle knows it has to read all the table related blocks below the HWM and so, rather than reading one tiny little block at a time, it does so more efficiently by reading multiple blocks at a time. This is the fundamental advantage of the FTS over the index range scan, which can only ever access one block at a time.

SQL> show parameter db_file_multi

NAME                          TYPE    VALUE
----------------------------- ------- -----
db_file_multiblock_read_count integer    16

 

So the db_file_multiblock_read_count is 16.

The next thing to note is that it's very unlikely that Oracle will actually read the full 16 blocks at a time, as there are a number of factors that prevent this from occurring. Extent boundaries are one classic example (a multiblock read can not span extent boundaries) but the more common issue is a block within the table already being stored in the buffer cache. Rather than storing the same block at the same consistent point twice in memory, Oracle breaks up the multiblock read and only reads up to the block that is already cached. Therefore, for Oracle to read the entire table using full 16 block multiblock I/Os, there could be no cached blocks from the table currently in the buffer cache, an unlikely event.

Therefore, Oracle doesn't use the full 16 value when determining the number of expected multiblock I/Os, but a modified "fudge" value which equates to approximately 10.4 for an MBRC of 16. Again, Jonathan Lewis in his excellent "Cost-Based Oracle Fundamentals" book discusses all this in some detail.

Remember also that Oracle needs to access the segment header as part of a FTS, as I explained in some detail in my "Indexes and Small Tables" series. So that's an additional single block I/O on top of the multiblock I/Os.

Therefore the cost of performing a FTS is:

segment header I/O + ceil(table blocks/fudged mbrc value) = 1 + ceil(659/10.4) = 1 + 64 = 65.

The 65 cost for the FTS does make sense when one understands a little how this value is derived by the CBO …
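The FTS arithmetic can likewise be sketched (with the 10.4 adjusted MBRC taken as an assumption from the discussion above, not something the sketch derives):

```python
import math

# Sketch of the IO costing model FTS cost: one single block I/O for the
# segment header, plus the table blocks divided by the adjusted MBRC value.
def fts_cost(table_blocks, adjusted_mbrc):
    return 1 + math.ceil(table_blocks / adjusted_mbrc)

print(fts_cost(659, 10.4))  # 65
```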

As the FTS can read big chunks of the table at a time whereas the index range scan can only read each necessary block one at a time, the FTS can indeed read the table and retrieve the required 5% of data in fewer LIOs and so has the lesser associated cost than the index.

Now there are a few issues with all of this. Firstly, is the db_file_multiblock_read_count actually a valid and correct setting as this directly impacts not only the actual size of the multiblock read operations but critically, the associated costs relating to FTS operations (and indeed Fast Full Index Scans as well) ?

Also, is it really correct and valid to assume the cost of a multiblock I/O to be the same and equal to the cost of a single block I/O ? Surely, the process of performing a single block I/O is likely to be “cheaper” than that of a multiblock I/O and yet the CBO treats both types of I/Os as having the same fundamental “cost”.

Also the CPU overheads of having to access each and every row in each and every block is likely going to be more significant than the CPU required to access just specific data from specific blocks when using an index.

Perhaps, the more “expensive” index range scan might actually be a better alternative than the FTS if these factors were taken into consideration ?

Now this may indeed be true, if these factors were correctly taken into consideration. However, this may also indeed be quite false and the FTS may really truly be the better and more efficient alternative and attempts to force the use of the index may be inappropriate and ultimately more expensive.

I’ll next discuss some really bad (although still very common) methods of making the CBO favour indexes, using generally inappropriate so-called “Silver Bullets” …

Read-Only Table Before 11g (A Day In The Life) May 15, 2008

Posted by Richard Foote in Oracle General, Oracle Indexes, Oracle Opinion, Read Only.
9 comments

An excellent question by fellow Aussie Chris Muir on this OTN Thread reminded me of a little trick I picked up in my travels.

Basically the question is how can one make a table read-only before 11g ?

The thread mentions a number of possibilities, some better than others. I thought I might just mention this possible solution involving Materialized Views. There are various alternatives based on this basic idea, this is just a simple example.

First create and populate a table we want to convert to Read-Only.

SQL> create table bowie_ro (id number, name varchar2(20));

Table created.

SQL> alter table bowie_ro add primary key (id);

Table altered.

SQL> insert into bowie_ro values (1, 'Bowie');

1 row created.

SQL> insert into bowie_ro values (2, 'Ziggy');

1 row created.

SQL> insert into bowie_ro values (3, 'Major Tom');

1 row created.

SQL> commit;

Commit complete.

Next, rename the table to another name.

SQL> rename bowie_ro to bowie_temp;

Table renamed.

Next, create a materialized view called the original name based on the renamed table.

SQL> create materialized view bowie_ro
2 refresh on demand complete
3 as select * from bowie_temp;

Materialized view created.

Next, drop the original table.

SQL> drop table bowie_temp;

Table dropped.

We can now see and select the table as we could previously.

SQL> select * from bowie_ro;

        ID NAME
---------- --------------------
         1 Bowie
         2 Ziggy
         3 Major Tom

However, you now can’t perform DML on the table, making it effectively read-only …

SQL> insert into bowie_ro values (4, 'Thin White Duke');
insert into bowie_ro values (4, 'Thin White Duke')
*
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view

You may of course need to add a few grants, constraints or indexes here or there but the table is now effectively read-only without the need of a read-only tablespace or other trick as mentioned in the OTN thread.

Back to indexes accessing all rows in a table soon 🙂

Clustering Factor: A Consideration in Concatenated Index Leading Column Decision (Sweet Thing) February 15, 2008

Posted by Richard Foote in Clustering Factor, Concatenated Indexes, Oracle Cost Based Optimizer, Oracle General, Oracle Indexes, Performance Tuning.
14 comments

Short but sweet today.

I last discussed how high cardinality columns shouldn't necessarily be in the leading column of a concatenated index, as they don't provide the performance benefit sometimes claimed.

If all things are equal and the columns in the concatenated index are all likely to be referenced, a simple consideration that is often forgotten when deciding which column to have as the leading index column is the Clustering Factor of the corresponding columns.

As previously discussed, the Clustering Factor determines how well aligned or ordered the index entries are in relation to the rows in the parent table. So if the rows are ordered within the table on a particular column or columns (such as a sequential ID column, a monotonically increasing date or time-stamp, etc.), then an index on these columns is likely to have a very good Clustering Factor. Consequently, fewer I/Os will be required to retrieve all the required rows via the index, as all the required rows will be housed in relatively few, well clustered data blocks.

It therefore makes sense to at least consider the Clustering Factor of the various columns in a concatenated index. Why ? Because if the leading column has a very good Clustering Factor, the concatenated index by definition must also have a very good Clustering Factor, as all indexes are primarily ordered by the leading indexed column. A concatenated index with a good Clustering Factor is going to be more efficient in retrieving rows from the table and, more importantly, will be considered more favourably by the CBO when costing access path options.

Of course, the opposite is also true. Having a leading column with a poor Clustering Factor means the concatenated index will also have a poor Clustering Factor, making it less efficient and less likely to be considered by the CBO.
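To illustrate the point, here's a small sketch (hypothetical data, not Oracle's actual code) of how the Clustering Factor is derived: walk the index entries in key order and count how often the table block changes. A well clustered leading column yields a CF close to the number of table blocks, while a scattered leading column yields a CF close to the number of rows:

```python
def clustering_factor(entries):
    """entries: (key, table_block) pairs; CF = number of table block
    changes while scanning the entries in index key order."""
    cf, prev_block = 0, None
    for _key, block in sorted(entries):
        if block != prev_block:
            cf += 1
            prev_block = block
    return cf

# 1,000 rows, 10 rows per table block (all values hypothetical)
block = lambda i: i // 10
well = [(i, block(i)) for i in range(1000)]             # sequential ID column
poor = [((i % 100, i), block(i)) for i in range(1000)]  # scattered column

print(clustering_factor(well))  # 100  (= number of table blocks)
print(clustering_factor(poor))  # 1000 (= number of rows)
```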

As such, the Clustering Factor of each corresponding column in a concatenated index is at least worthy of some consideration when making the decision on how best to order the indexed columns.

This demo on Index Column Order and Clustering Factor  shows how the order of columns in a concatenated index has a big impact on the Clustering Factor of the resultant index.

UPDATE: However, as Tom Kyte has stated in the comments, in virtually all cases the Clustering Factor is not really a factor (yes, pun fully intended), as subsequent columns are generally going to impact the CF anyway, or the selectivity of the index is such that the improved CF is not relevant.

More relevant considerations regarding the ordering of columns in an index coming I promise 🙂

Radiohead: In Rainbows December 15, 2007

Posted by Richard Foote in Music, Radiohead.
13 comments

One of my favourite bands is Radiohead, from Oxford in the UK.

It’s been a full 10 years since they released what is probably my favourite album of all time, the classic “OK Computer”. Since then, they’ve become one of the most influential and innovative bands around, with Thom Yorke’s distinctive vocals and Jonny Greenwood’s guitar work creating their uniquely beautiful “sound”.

A new Radiohead album is always a big event in my life and it's been a long wait since their last album, "Hail To The Thief", was released in 2003. For the past few months I've been enjoying their new release "In Rainbows".

It created quite a bit of press and controversy when Radiohead decided to release the new album as a download only release with no record label affiliation, with the buyer choosing how much to pay for it. You could pay as little (0.45p was the minimum I think) or as much as you liked. As a marketing move, it was a very very clever initiative.

It did however distract attention somewhat from the music itself, which is quite simply a stunning collection of songs. From the bouncy opening "15 Steps", to the haunting beauty and tragedy of the closing "Videotape", it's arguably their best album since "OK Computer". A wonderful surprise is the release, finally, of the 1997-era "Nude", a live favourite for years which Thom has at last got right in the studio. I just love "Jigsaw Falling Into Place", it has so many twists and turns, but the highlight for me is "Weird Fishes/Arpeggi", which sends shivers down my spine every time I hear it. It's incredibly emotional, with a profound sense of sadness but at the same time an incredible energy that perfectly encapsulates what music is all about.

As my Christmas treat (as I’ve been a really really good boy this year), I’ve got on order the “In Rainbows” box-set which includes a second CD of unreleased tracks. I can’t wait.

However, the album is released commercially in CD and vinyl formats via the usual channels on the 31st December.  Do yourself a favour, buy “In Rainbows” and enjoy what I’m sure will be considered a truly classic album in the years ahead.

Remember where you heard it first.