Some Things To Consider Now ROWIDs Are Updated When Rows Migrate Part I (“More”) February 22, 2023
Posted by Richard Foote in 19c, Autonomous Database, Autonomous Transaction Processing, Changing ROWID, Migrated Rows, Oracle, Oracle 21c, Oracle Blog, Oracle Cloud, Oracle General, Oracle Indexes, Oracle19c, Pink Floyd, Richard's Blog.
In my previous post, I discussed the obvious advantage of ROWIDs now being updated when rows migrate in an Oracle Autonomous Database: subsequent accesses to these rows via an index are more efficient.
However, there are likely good reasons why Oracle has not historically updated ROWIDs on the fly, so it's worth exploring some of the possible side-effects of this new behaviour.
The most obvious issue will be for those applications that currently store ROWIDs explicitly, to enable the direct and very fast retrieval of rows without having to read additional index blocks. If the ROWID can now suddenly change when a row is simply migrated, then of course these applications will no longer be guaranteed to access those rows via the stored ROWIDs. Worse, such applications may now unknowingly fetch the wrong row, with the stored ROWID value potentially associated with an entirely different row.
If this is a legitimate concern, then the remedy is simply to NOT assign such tables the ENABLE ROW MOVEMENT attribute (which is disabled by default on a table), and the behaviour of migrated rows in association with ROWIDs will remain unchanged in Oracle Autonomous Databases. The risks here can be clearly and easily limited.
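If you're not sure which tables in your schema currently have this attribute set, it's easy enough to check and to switch it off where necessary. The following is just a simple sketch against the standard USER_TABLES dictionary view (the table name in the ALTER is purely a placeholder):

select table_name, row_movement
from   user_tables
where  row_movement = 'ENABLED';

alter table my_rowid_dependent_table disable row movement;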
The other obvious disadvantage with ROWIDs being updated on the fly when a row migrates is in the additional costs associated with such Update statements in maintaining all the corresponding indexes.
As I discussed previously, these additional costs can be significant, especially if we have many indexes on a table.
To illustrate these extra costs, here's a simple example.
I’ll start by creating and populating a table called BIG_ZIGGY (which at 100,000 rows is actually quite tiny, but it does have a number of columns) that does NOT have ENABLE ROW MOVEMENT set. The PCTFREE is set to 0 to ensure the rows are nicely packed in each block:
SQL> CREATE TABLE big_ziggy(id number, code1 number, code2 number, code3 number, code4 number,
     code5 number, code6 number, code7 number, code8 number, code9 number, code10 number,
     code11 number, code12 number, code13 number, code14 number, code15 number, code16 number,
     code17 number, code18 number, code19 number, code20 number, name varchar2(142)) PCTFREE 0;

Table BIG_ZIGGY created.

SQL> INSERT INTO big_ziggy SELECT rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum,
     rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum,
     rownum, rownum, 'BOWIE' FROM dual CONNECT BY LEVEL <= 100000;

100,000 rows inserted.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=> 'BIG_ZIGGY');

PL/SQL procedure successfully completed.
As we’ve only inserted rows, there are currently no migrated rows:
SQL> analyze table big_ziggy compute statistics;

Table BIG_ZIGGY analyzed.

SQL> select table_name, num_rows, chain_cnt from user_tables where table_name='BIG_ZIGGY';

   TABLE_NAME    NUM_ROWS    CHAIN_CNT
_____________ ___________ ____________
BIG_ZIGGY          100000            0
I’ll next create a whole bunch of indexes on many of these columns:
SQL> create index big_ziggy_id_i on big_ziggy(id);
Index BIG_ZIGGY_ID_I created.

SQL> create index big_ziggy_code1_i on big_ziggy(code1);
Index BIG_ZIGGY_CODE1_I created.

SQL> create index big_ziggy_code2_i on big_ziggy(code2);
Index BIG_ZIGGY_CODE2_I created.

SQL> create index big_ziggy_code3_i on big_ziggy(code3);
Index BIG_ZIGGY_CODE3_I created.

SQL> create index big_ziggy_code4_i on big_ziggy(code4);
Index BIG_ZIGGY_CODE4_I created.

SQL> create index big_ziggy_code5_i on big_ziggy(code5);
Index BIG_ZIGGY_CODE5_I created.

SQL> create index big_ziggy_code6_i on big_ziggy(code6);
Index BIG_ZIGGY_CODE6_I created.

SQL> create index big_ziggy_code7_i on big_ziggy(code7);
Index BIG_ZIGGY_CODE7_I created.

SQL> create index big_ziggy_code8_i on big_ziggy(code8);
Index BIG_ZIGGY_CODE8_I created.

SQL> create index big_ziggy_code9_i on big_ziggy(code9);
Index BIG_ZIGGY_CODE9_I created.

SQL> create index big_ziggy_code10_i on big_ziggy(code10);
Index BIG_ZIGGY_CODE10_I created.

SQL> create index big_ziggy_code11_i on big_ziggy(code11);
Index BIG_ZIGGY_CODE11_I created.

SQL> create index big_ziggy_code12_i on big_ziggy(code12);
Index BIG_ZIGGY_CODE12_I created.

SQL> create index big_ziggy_code13_i on big_ziggy(code13);
Index BIG_ZIGGY_CODE13_I created.

SQL> create index big_ziggy_code14_i on big_ziggy(code14);
Index BIG_ZIGGY_CODE14_I created.

SQL> create index big_ziggy_code15_i on big_ziggy(code15);
Index BIG_ZIGGY_CODE15_I created.

SQL> create index big_ziggy_code16_i on big_ziggy(code16);
Index BIG_ZIGGY_CODE16_I created.

SQL> create index big_ziggy_code17_i on big_ziggy(code17);
Index BIG_ZIGGY_CODE17_I created.

SQL> create index big_ziggy_code18_i on big_ziggy(code18);
Index BIG_ZIGGY_CODE18_I created.

SQL> create index big_ziggy_code19_i on big_ziggy(code19);
Index BIG_ZIGGY_CODE19_I created.

SQL> create index big_ziggy_code20_i on big_ziggy(code20);
Index BIG_ZIGGY_CODE20_I created.
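(As an aside, if you'd rather not type out 21 near-identical CREATE INDEX statements, a touch of dynamic SQL will do the same job. This is just a sketch that assumes the same BIG_ZIGGY naming convention used above:)

begin
  execute immediate 'create index big_ziggy_id_i on big_ziggy(id)';
  for i in 1 .. 20 loop
    execute immediate 'create index big_ziggy_code' || i ||
                      '_i on big_ziggy(code' || i || ')';
  end loop;
end;
/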
I’ll now run an UPDATE statement that will increase the row size and result in a number of row migrations:
SQL> update big_ziggy set name='THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS';

100,000 rows updated.

PLAN_TABLE_OUTPUT
____________________________________________________________________________________
SQL_ID  53xtnn8mmtwj5, child number 0
-------------------------------------
update big_ziggy set name='THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS'

Plan hash value: 1689330390

------------------------------------------------------------
| Id  | Operation                   | Name      | E-Rows   |
------------------------------------------------------------
|   0 | UPDATE STATEMENT            |           |          |
|   1 |  UPDATE                     | BIG_ZIGGY |          |
|   2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY |     100K |
------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1
   - PDML disabled because object is not decorated with parallel clause
   - Warning: basic plan statistics not available. These are only collected when:
       * hint 'gather_plan_statistics' is used for the statement or
       * parameter 'statistics_level' is set to 'ALL', at session or system level

Statistics
-----------------------------------------------------------
       345  CPU used by this session
       347  CPU used when call started
       399  DB time
   3442830  RM usage
         5  Requests to/from client
       491  Session total flash IO requests
  25403392  cell physical IO interconnect bytes
     48814  consistent gets
     10111  consistent gets examination
     10111  consistent gets examination (fastpath)
     48814  consistent gets from cache
     38703  consistent gets pin
     38702  consistent gets pin (fastpath)
    544587  db block gets
    544587  db block gets from cache
    538582  db block gets from cache (fastpath)
       127  enqueue releases
       129  enqueue requests
      3086  gcs affinity lock grants
       803  gcs data block access records
         3  ges messages sent
     33574  global enqueue gets sync
     33573  global enqueue releases
        43  messages sent
       483  non-idle wait count
        44  non-idle wait time
         8  opened cursors cumulative
         1  opened cursors current
        71  physical read requests optimized
       420  physical read total IO requests
  25403392  physical read total bytes
   3219456  physical read total bytes optimized
         1  pinned cursors current
         4  process last non-idle time
        55  recursive calls
         1  recursive cpu usage
    593401  session logical reads
        42  user I/O wait time
         6  user calls

Elapsed: 00:00:04.532

SQL> commit;

Commit complete.
Note that the CPU used by this session is 345, the number of db block gets is 544587 and the raw elapsed time is 00:00:04.532. We’ll shortly compare these values with those of the same demo run on a table with ENABLE ROW MOVEMENT set.
If we now check for migrated (chained) rows:
SQL> analyze table big_ziggy compute statistics;

Table BIG_ZIGGY analyzed.

SQL> select table_name, num_rows, chain_cnt from user_tables where table_name='BIG_ZIGGY';

   TABLE_NAME    NUM_ROWS    CHAIN_CNT
_____________ ___________ ____________
BIG_ZIGGY          100000        28323
We notice we indeed have 28323 migrated rows.
We’ll now repeat exactly the same demo, but this time on a BIG_ZIGGY2 table that has ENABLE ROW MOVEMENT set:
SQL> CREATE TABLE big_ziggy2(id number, code1 number, code2 number, code3 number, code4 number,
     code5 number, code6 number, code7 number, code8 number, code9 number, code10 number,
     code11 number, code12 number, code13 number, code14 number, code15 number, code16 number,
     code17 number, code18 number, code19 number, code20 number, name varchar2(142))
     PCTFREE 0 ENABLE ROW MOVEMENT;

Table BIG_ZIGGY2 created.

SQL> INSERT INTO big_ziggy2 SELECT rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum,
     rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum, rownum,
     rownum, rownum, 'BOWIE' FROM dual CONNECT BY LEVEL <= 100000;

100,000 rows inserted.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(ownname=> null, tabname=> 'BIG_ZIGGY2');

PL/SQL procedure successfully completed.

SQL> analyze table big_ziggy2 compute statistics;

Table BIG_ZIGGY2 analyzed.

SQL> select table_name, num_rows, chain_cnt from user_tables where table_name='BIG_ZIGGY2';

   TABLE_NAME    NUM_ROWS    CHAIN_CNT
_____________ ___________ ____________
BIG_ZIGGY2         100000            0

SQL> create index big_ziggy2_id_i on big_ziggy2(id);
Index BIG_ZIGGY2_ID_I created.

SQL> create index big_ziggy2_code1_i on big_ziggy2(code1);
Index BIG_ZIGGY2_CODE1_I created.

SQL> create index big_ziggy2_code2_i on big_ziggy2(code2);
Index BIG_ZIGGY2_CODE2_I created.

SQL> create index big_ziggy2_code3_i on big_ziggy2(code3);
Index BIG_ZIGGY2_CODE3_I created.

SQL> create index big_ziggy2_code4_i on big_ziggy2(code4);
Index BIG_ZIGGY2_CODE4_I created.

SQL> create index big_ziggy2_code5_i on big_ziggy2(code5);
Index BIG_ZIGGY2_CODE5_I created.

SQL> create index big_ziggy2_code6_i on big_ziggy2(code6);
Index BIG_ZIGGY2_CODE6_I created.

SQL> create index big_ziggy2_code7_i on big_ziggy2(code7);
Index BIG_ZIGGY2_CODE7_I created.

SQL> create index big_ziggy2_code8_i on big_ziggy2(code8);
Index BIG_ZIGGY2_CODE8_I created.

SQL> create index big_ziggy2_code9_i on big_ziggy2(code9);
Index BIG_ZIGGY2_CODE9_I created.

SQL> create index big_ziggy2_code10_i on big_ziggy2(code10);
Index BIG_ZIGGY2_CODE10_I created.

SQL> create index big_ziggy2_code11_i on big_ziggy2(code11);
Index BIG_ZIGGY2_CODE11_I created.

SQL> create index big_ziggy2_code12_i on big_ziggy2(code12);
Index BIG_ZIGGY2_CODE12_I created.

SQL> create index big_ziggy2_code13_i on big_ziggy2(code13);
Index BIG_ZIGGY2_CODE13_I created.

SQL> create index big_ziggy2_code14_i on big_ziggy2(code14);
Index BIG_ZIGGY2_CODE14_I created.

SQL> create index big_ziggy2_code15_i on big_ziggy2(code15);
Index BIG_ZIGGY2_CODE15_I created.

SQL> create index big_ziggy2_code16_i on big_ziggy2(code16);
Index BIG_ZIGGY2_CODE16_I created.

SQL> create index big_ziggy2_code17_i on big_ziggy2(code17);
Index BIG_ZIGGY2_CODE17_I created.

SQL> create index big_ziggy2_code18_i on big_ziggy2(code18);
Index BIG_ZIGGY2_CODE18_I created.

SQL> create index big_ziggy2_code19_i on big_ziggy2(code19);
Index BIG_ZIGGY2_CODE19_I created.

SQL> create index big_ziggy2_code20_i on big_ziggy2(code20);
Index BIG_ZIGGY2_CODE20_I created.
If we now repeat the same UPDATE statement:
SQL> update big_ziggy2 set name='THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS';

100,000 rows updated.

PLAN_TABLE_OUTPUT
____________________________________________________________________________________
SQL_ID  gupa6k30c341n, child number 0
-------------------------------------
update big_ziggy2 set name='THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS'

Plan hash value: 3856369697

-------------------------------------------------------------
| Id  | Operation                   | Name       | E-Rows   |
-------------------------------------------------------------
|   0 | UPDATE STATEMENT            |            |          |
|   1 |  UPDATE                     | BIG_ZIGGY2 |          |
|   2 |   TABLE ACCESS STORAGE FULL | BIG_ZIGGY2 |     100K |
-------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1
   - PDML disabled because object is not decorated with parallel clause
   - Warning: basic plan statistics not available. These are only collected when:
       * hint 'gather_plan_statistics' is used for the statement or
       * parameter 'statistics_level' is set to 'ALL', at session or system level

Statistics
-----------------------------------------------------------
      1310  CPU used by this session
      1310  CPU used when call started
      1732  DB time
  13104856  RM usage
         5  Requests to/from client
        12  Session IORM flash wait time
     13343  Session total flash IO requests
 235888640  cell physical IO interconnect bytes
     36437  consistent gets
       994  consistent gets examination
       994  consistent gets examination (fastpath)
     36437  consistent gets from cache
     35443  consistent gets pin
     35275  consistent gets pin (fastpath)
   2574278  db block gets
   2574278  db block gets from cache
   1418826  db block gets from cache (fastpath)
      5729  enqueue releases
      5731  enqueue requests
     23745  gcs affinity lock grants
     11119  gcs data block access records
        25  ges messages sent
      1165  global enqueue gets sync
      1164  global enqueue releases
       215  messages sent
      8254  non-idle wait count
       476  non-idle wait time
        31  opened cursors cumulative
      5324  physical read requests optimized
      8019  physical read total IO requests
 235888640  physical read total bytes
  63856640  physical read total bytes optimized
        17  process last non-idle time
       160  recursive calls
         7  recursive cpu usage
   2610715  session logical reads
       475  user I/O wait time
         6  user calls

Elapsed: 00:00:17.600

SQL> commit;

Commit complete.
We notice that this update consumed more resources than the previous example.
Note that the CPU used by this session is now 1310 (previously 345), the number of db block gets is now 2574278 (previously 544587) and the raw elapsed time has increased to 00:00:17.600 (previously 00:00:04.532).
SQLcl doesn’t automatically display redo statistics (which is a shame and something I’ve only just noticed), but the redo generated will also have increased significantly, as I discussed previously.
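If you do want to see the redo for yourself, you can always capture your own session's 'redo size' statistic immediately before and after the UPDATE and take the difference. A simple sketch against the standard V$MYSTAT and V$STATNAME views:

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'redo size';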
However, if we look at the number of migrated rows on the BIG_ZIGGY2 table:
SQL> analyze table big_ziggy2 compute statistics; Table BIG_ZIGGY2 analyzed. SQL> select table_name, num_rows, chain_cnt from user_tables where table_name='BIG_ZIGGY2'; TABLE_NAME NUM_ROWS CHAIN_CNT _____________ ___________ ____________ BIG_ZIGGY2 100000 0
We notice there are no rows considered chained (migrated). In this scenario on Oracle Autonomous Databases, all rows that moved to a different block had their associated ROWIDs updated on the fly in all the corresponding indexes, and so there was no need for a pointer in the original block to denote each row’s new location.
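If you want to convince yourself that the ROWIDs really were changed on the fly, you can snapshot them before running the UPDATE and compare afterwards. Just a sketch, with ROWID_SNAPSHOT being a hypothetical scratch table:

-- before the UPDATE, capture the current ROWIDs
create table rowid_snapshot as
select id, rowid rid from big_ziggy2;

-- after the UPDATE, count how many rows now have a different ROWID
select count(*) as changed_rowids
from   big_ziggy2 b, rowid_snapshot s
where  b.id = s.id
and    b.rowid <> s.rid;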
So the choice is entirely yours.
If you have applications that rely on stored ROWIDs not changing in the background when a row happens to migrate OR you have applications in which the performance of the UPDATE DML is absolutely paramount and you wish to avoid the overheads associated with updating ROWIDs on the fly (which in an Exadata environment is less likely to be an issue), then do NOT set ENABLE ROW MOVEMENT on the table.
Generally, the improvements associated with more efficient index-based accesses outweigh the overheads associated with (usually) one-off and uncommon row migrations (which might themselves be mitigated with more appropriate settings of PCTFREE).
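And if you do end up with an excessive number of migrated rows, they can generally be cleaned up after the fact by giving the table a more generous PCTFREE and reorganising it. Just a sketch (noting that a table move changes all the ROWIDs anyway, although an ONLINE move in 12.2 and above will maintain the indexes for you):

alter table big_ziggy pctfree 20;

-- online move repacks the rows and keeps the indexes usable
alter table big_ziggy move online;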
That said, I’ll discuss a few other areas of potential concern associated with this change of behaviour in my next post…
When Does A ROWID Change? Part I (“Fearless”) December 7, 2022
Posted by Richard Foote in Autonomous Database, Block Dumps, Changing ROWID, Oracle, Oracle Blog, Oracle Cloud, Oracle General, Oracle Indexes, Oracle Table Internals, Oracle19c, Pink Floyd, Richard's Blog, ROWID.
Recently, my mate Connor McDonald caused a tad of a storm when he disclosed that the once sacred, (almost) unchangeable ROWID can now indeed potentially easily change, without the DBA doing a thing.
You can watch his excellent video on the subject here.
As the humble ROWID is a critical component of any index, I thought it worthwhile to have a deep dive discussion on when a ROWID can and can’t change and some of the key changes that have been introduced within Oracle’s Autonomous database environments.
If you’re a developer who explicitly uses the ROWID in your applications, you might want to pay extra attention to these changes.
I thought I’d begin, though, by discussing how the ROWID generally doesn’t currently change…
The ROWID is basically just a pointer stored in indexes that points to the physical location of the row associated with the specific indexed key. It consists of the Data Object ID (included if it’s a Global Index on a Partitioned Table, as the Relative File ID alone is no longer unique and the Data Object ID is needed to determine the appropriate tablespace), a Relative File ID, a Data Block ID within the Relative File and the Row Location ID within the Data Block.
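If you ever want to pull a ROWID apart and see these components for yourself, the supplied DBMS_ROWID package will decode them for you. A simple sketch, run here against a hypothetical table T:

select rowid,
       dbms_rowid.rowid_object(rowid)       as data_object_id,
       dbms_rowid.rowid_relative_fno(rowid) as relative_fno,
       dbms_rowid.rowid_block_number(rowid) as block_no,
       dbms_rowid.rowid_row_number(rowid)   as row_in_block
from   t
where  rownum <= 5;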
Now, although it has never been a completely risk-free approach to manually store and directly use these ROWIDs to fetch a row from a table, it’s a technique that has been frequently used by developers, for 3 very good reasons.
The first reason is that it’s one of the most efficient ways to fetch a required row, because the database can go directly to the physical location and directly access the required row, without having to even read a single index block.
The second reason is because it has always been fully supported by Oracle to do so, with the required syntax well documented. Indeed, Oracle APEX includes the base functionality to store and access rows directly via their associated ROWIDs.
The third reason is because it’s generally well understood that the ROWID doesn’t change, except for a very few (generally) well known scenarios, and so using the ROWID to access data within an application is viewed as being reasonably safe.
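In practice, the classic pattern looks something like the following sketch (the table and bind variable names are purely illustrative), with the application first fetching and storing the ROWID and then going straight back to the row via that stored ROWID for any subsequent DML:

-- initial fetch: grab the ROWID along with the data
select rowid, empno, sal
from   emp
where  empno = :empno;

-- later DML: go directly back to the row via the stored ROWID
update emp
set    sal = :new_sal
where  rowid = :stored_rowid;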
When a row is inserted into a table, each corresponding index on the table (generally) has a new index entry also inserted, including its associated ROWID. Now, if the row never moves from its current physical location, there is no need to ever worry about the ROWID subsequently changing.
So the big question is, when can a row physically move AND the associated ROWID change?
Logically, a scenario that springs to mind is when a row is updated and made bigger and there’s no longer room within the current data block to store the larger row. Does the row move to another block with sufficient free space, resulting in the ROWID changing?
To check out this scenario, I’ll create and populate a basic table, with PCTFREE set to 0, so once the blocks within the table are filled, there is precious little space for rows to subsequently grow:
SQL> create table bowie (id number, name varchar2(142)) pctfree 0 enable row movement;

Table created.

SQL> insert into bowie select rownum, 'BOWIE' from dual connect by level <=10000;

10000 rows created.

SQL> commit;

Commit complete.

SQL> create index bowie_id_i on bowie(id);

Index created.

SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'BOWIE');

PL/SQL procedure successfully completed.
For good measure, I’ve also included the ENABLE ROW MOVEMENT clause, which is usually associated with Partitioned Tables (as I’ll discuss in Part II in this series).
Let’s have a look at the ROWIDs of a few random rows:
SQL> select id, rowid from bowie where id in (42, 424, 4242) order by id;

        ID ROWID
---------- ------------------
        42 AAASe4AAMAAAACHAAp
       424 AAASe4AAMAAAACHAGn
      4242 AAASe4AAMAAAACNAHV
If we look at a partial block dump of one of the table blocks:
tab 0, row 0, @0x1312
tl: 12 fb: --H-FL-- lb: 0x1  cc: 2
col  0: [ 2]  c1 02
col  1: [ 5]  42 4f 57 49 45
tab 0, row 1, @0x131e
tl: 12 fb: --H-FL-- lb: 0x1  cc: 2
col  0: [ 2]  c1 03
col  1: [ 5]  42 4f 57 49 45
tab 0, row 2, @0x132a
tl: 12 fb: --H-FL-- lb: 0x1  cc: 2
col  0: [ 2]  c1 04
col  1: [ 5]  42 4f 57 49 45
tab 0, row 3, @0x1336
tl: 12 fb: --H-FL-- lb: 0x1  cc: 2
col  0: [ 2]  c1 05
col  1: [ 5]  42 4f 57 49 45
tab 0, row 4, @0x1342
tl: 12 fb: --H-FL-- lb: 0x1  cc: 2
col  0: [ 2]  c1 06
col  1: [ 5]  42 4f 57 49 45
The above partial block dump shows the first 5 rows within the block, with the contents of the 2 table columns listed (in hex format).
If we access a row via an index:
SQL> select * from bowie where id=42;

        ID NAME
---------- --------------------------------------------------------------
        42 THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS

Execution Plan
--------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name       | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |            |    1 |    66 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED | BOWIE      |    1 |    66 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                   | BOWIE_ID_I |    1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=42)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        702  bytes sent via SQL*Net to client
         52  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
We can see in this example that Oracle requires 4 consistent gets, 3 of which are accesses to the index.
If we access this table via its ROWID:
SQL> select * from bowie where rowid='AAASe4AAMAAAACHAAp';

        ID NAME
---------- --------------------------------------------------------------
        42 THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS

Execution Plan
------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       |    1 |    10 |      1  (0) | 00:00:01 |
|   1 |  TABLE ACCESS BY USER ROWID| BOWIE |    1 |    10 |      1  (0) | 00:00:01 |
------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        698  bytes sent via SQL*Net to client
         52  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
We can see that this requires just 1 consistent get (vs. 4 when using the index) to access the row.
So we can see the appeal of using the ROWID to access a row.
If we now update the rows within the table and make them substantially larger so they can no longer fit within the currently filled blocks:
SQL> update bowie set name='THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS';

10000 rows updated.

SQL> commit;

Commit complete.
And now look again at the ROWIDs of these selected rows:
SQL> select id, rowid from bowie where id in (42, 424, 4242) order by id;

        ID ROWID
---------- ------------------
        42 AAASe4AAMAAAACHAAp
       424 AAASe4AAMAAAACHAGn
      4242 AAASe4AAMAAAACNAHV
We notice that the ROWIDs all remain the same.
This has always historically been the behaviour here. If we update a row and the updated row can no longer fit within the current block, the row “migrates” to another table block with sufficient free space. BUT the associated ROWIDs do NOT change and the associated indexes are NOT updated.
Rather, the row data within the updated table is replaced with a “pointer” that points to the new physical location of the migrated row. The advantage here is that Oracle only has to update the table with this new pointer, rather than having to update the associated ROWIDs in all the (possibly many) associated indexes (noting that such an update would actually result in a delete followed by a re-insert of each index entry).
The disadvantage of course is that accessing this migrated row via an index now requires an extra hop: first reading the initial table block and then following the pointer to access the block that actually contains the row. Note that if this row is forced to migrate again (because it grows again and can’t be housed in its current block), the pointer in the initial block is simply updated to reflect the newer location, so at least there is only ever the one extra hop.
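If you want to see exactly which rows have been migrated in this fashion, the old-school method still works fine: create the CHAINED_ROWS table via the supplied utlchain.sql script and list them out. A quick sketch (the script lives under $ORACLE_HOME/rdbms/admin):

@?/rdbms/admin/utlchain.sql

analyze table bowie list chained rows into chained_rows;

select head_rowid
from   chained_rows
where  table_name = 'BOWIE';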
If we look at a new partial block dump of the previously accessed block:
tab 0, row 0, @0x1f8f
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x030000bf.4b
tab 0, row 1, @0x1f86
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x030000bf.4c
tab 0, row 2, @0x1f7d
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x030000bf.4d
tab 0, row 3, @0x1f74
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x030000bf.4e
tab 0, row 4, @0x1f6b
tl: 9 fb: --H----- lb: 0x2  cc: 0
nrid:  0x030000b8.0
We notice that the rows with their 2 columns have been replaced with a logical nrid pointer (consisting of a relative block address and row location within the block), that effectively points to the new physical location of the row.
Note we can still use the same, unchanged ROWID to access the same table rows:
SQL> select * from bowie where rowid='AAASe4AAMAAAACHAAp';

        ID NAME
---------- --------------------------------------------------------------
        42 THE RISE AND FALL OF ZIGGY STARDUST AND THE SPIDERS FROM MARS

Execution Plan
------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       |    1 |    10 |      1  (0) | 00:00:01 |
|   1 |  TABLE ACCESS BY USER ROWID| BOWIE |    1 |    10 |      1  (0) | 00:00:01 |
------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        698  bytes sent via SQL*Net to client
         52  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
This ROWID access still works fine, except it now requires 2 consistent gets: one to access the initial block referenced by the ROWID, and an extra consistent get to follow the pointer and access the new physical location of the row.
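This extra hop is also reflected in the 'table fetch continued row' session statistic, which is a handy way of gauging how often your workload is actually paying this migrated-row penalty. Again, just a sketch against V$MYSTAT and V$STATNAME:

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'table fetch continued row';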
So historically, we haven’t had to worry about updates changing the ROWID of a row (except perhaps to look at reducing the number of these migrated rows).
Well, except for one clear example as I’ll discuss in Part II…
The Dark Side Of The Moon Immersion Box Set September 4, 2011
Posted by Richard Foote in Pink Floyd, Richard's Musings, The Dark Side of the Moon.
It’s Father’s Day here today in Australia and because I’ve naturally been a really really good Dad all year, my family have given me a real treat for my present this year: the Immersion Box Set of the Pink Floyd classic, The Dark Side Of The Moon (although unfortunately, I have to wait a couple of weeks until it gets released before I can get my hands on it).
As the days of actually having a physical format for music (be it record or tape or CD or whatever) to hold and hug are fast disappearing in this age of digital downloads, Pink Floyd have decided to re-release their back catalogue in physical format one last time with some style.
All their albums are being re-released in new digitally remastered formats, but three of their very best albums (The Dark Side Of The Moon in late September, Wish You Were Here in November and The Wall in February 2012) get the special treatment with the release of Immersion Box Sets.
So what do you get in the TDSOTM Immersion Box Set?
The answer is heaps !!
In a very large box (naturally), you get:
Disc 1, a CD containing a digitally remastered version of the album.
Disc 2, a CD containing a previously unreleased live concert at Wembley dating back to 1974.
Disc 3, an audio DVD containing various 5.1 Surround Sound and Quadraphonic (as originally released in 1973) mixes of the album
Disc 4, a visual DVD containing various live performances, documentaries and all the original concert screen films (makes me want to go out and buy a circular TV !!)
Disc 5, a Blu-Ray containing both audio and video highlights of what I’ve listed already
Disc 6, a CD containing previously unreleased material, including demos and the various live sequences that didn’t quite make it onto the final album.
You also get a whole bunch of other goodies, including colour booklets, a photo book, Storm Thorgerson artwork and cards, a set of 9 coasters, a scarf (just in time for our Canberra summer), Pink Floyd marbles (of course) and replicas of various memorabilia.
I can’t wait !!
I’m almost pitying the neighbours already as I fully plan to sit in the middle of my surround sound system and play all this as it was intended. REALLY REALLY LOUD !!
If you want more details or you’re interested in buying this as well, simply click on the picture of the album artwork above.
This will definitely make my Recommendations Page 🙂