OTN APAC tour 2014 stop 1 – Perth, Australia

I arrived in Perth a day before the conference and used this bit of quiet time to catch up on work rather than go sightseeing. We had a nice ACE dinner downtown even though Tim did not make it in time. So it was Craig from the US and the local heroes (really, a very high density of excellent Oracle presenters and ACEs) Penny, Gavin, Connor and Scott. To me, one of the biggest benefits of the ACE program is getting to meet all these bright people and being able to call a lot of them great friends.

Connor took us to the beach for a quick swim on both days since – again – Tim did not make it on the first morning even though he said he was up and ready. The swell was a tad gnarly, with pretty big waves on Friday. It was good fun and a refreshing way to start both days.

For me and the other speakers on the tour this was probably the hardest work of any of the conferences, since we each gave two 2-hour ACE masterclasses. But we started with an OpenWorld recap in which seven of us talked about one aspect of OOW for five minutes each, a format inspired by the 12 on 12c presentation that EOUC did in San Francisco. It was good fun, and I explained how you can have an excellent time and get good value out of the conference even without going to a single session. Others shared their own perspectives on what to do or not do and summarised the hot topics of the conference.

All of my sessions went great and were very well attended. The organisers made me feel like a real VIP at lunch when I got my own (vegetarian) meal complete with a big sign with my name on it. Since this was a local event for most attendees there was only a small networking do with a few tame beers after the sessions, and everybody went home at a decent time. So after a quick dinner from the local convenience store this was another chance to get some decent rest.

The second day had the same strong, content-rich agenda as the first, maybe even more so. Connor did a talk about 12c new features for developers. Initially we were afraid that this would clash with my presentation the day before (which was also on 12c new features) but they complemented each other very well. He mentioned a lot of things that I had not even thought about, and also a lot of PL/SQL material that I barely ever get to play with. Penny Cookson did a brilliant session on how the optimizer works by comparing it to a girl trying to find a partner, working out the best plan while making estimates based on statistics (as in how many single guys of the right age and height would be at any of Perth's bars at a given time). It was hilarious and a great learning experience as well. It is pretty scary to realize that an analogy like this works so well. I was so impressed that I tried to work the analogy into my own presentation on baselines after lunch. But it did not feel right to compare SPM to evaluating the quality of a new partner by trying out both the old and the new one. Or maybe that is an excellent analogy.

There were a few drinks and a dinner after the conference, after which Tim and I went straight to the airport to catch the first of two flights to Shanghai at 2am – no point in going back to the hotel.

OTN APAC tour 2014 stop 0 – Singapore

Wait, stop 0? How does that work? One reason may be the lack of a universal understanding of what to call the first element of a finite set. Like floors in a building: one thing you learn on tours like these is that 1 is not actually the ground floor in all countries (which is why I exited the lift on the wrong one). But the real reason Singapore was stop 0 for me is that it is not actually part of the OTN tour (though in my opinion it should be next year). I had planned a quick 3-day stopover to stay with Doug Burns, and he decided this was a good time to start a local Oracle thing in Singapore. So with less than a week's notice he booked a room, organized food and drinks and managed to get over 30 people to show up (Hemant Chitale helped spread the news) for the first installment of #SOS, the Singapore Oracle Sessions. Excellent! Portrix, Oracle and Doug chipped in as sponsors for the few expenses we had.

The speakers were blown away by the enthusiasm and passion of the attendees, fielded many great questions and chatted away in the breaks. There was also great networking: everybody seemed to at least remotely know everybody else, or introductions were quickly made. It was very clear that there is demand for a format like this, and while people were unsure whether they needed a "proper" user group, it was evident that everybody would like to see this continue.

I started by talking about SQL Plan Management. I have given this talk a few times now and am really happy with how it is progressing. I managed to make the most important points very clear, answered a few great questions and still finished ahead of time because Doug was signalling from the back of the room that the pizza was getting cold.

Doug talked about SQL Monitor (one of the most useful performance tools in Oracle) and how to use it from the command line. What I did not know was that you can create the very same Flash-based report that the full-blown Cloud Control generates, and even more. Some of the tabs are grayed out in certain versions of EM12c but they work in the CLI-generated report.
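I do not have Doug's exact scripts, but a minimal sketch of the idea using the documented DBMS_SQLTUNE API would look something like this (TYPE => 'ACTIVE' is what produces the interactive report; the SQL_ID is a substitution variable here):

set long 1000000 longchunksize 1000000 pagesize 0 linesize 32767 trimspool on
spool sql_monitor_report.html

-- generate the same "active" report EM12c displays, straight from SQL*Plus
select dbms_sqltune.report_sql_monitor(
         sql_id       => '&sql_id',
         type         => 'ACTIVE',
         report_level => 'ALL') as report
from   dual;

spool off

Open the spooled HTML file in a browser; the interactive components are fetched from an Oracle-hosted URL, so it needs an internet connection to render.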

Morten did a Big Data primer, introducing people to what big data is and what it is not, complete with funny but concise quotes from the Oak Table mailing list. He gave an introduction to Hadoop and to what the Hadoop connector in Oracle can do for you: mostly access files on HDFS, so you can either export big tables straight to HDFS for MapReduce processing or import the results of a MapReduce job that were written to HDFS. He finished with some funny correlation-type analysis that actually helped a customer identify how to do their job better and got them to invest in big data infrastructure.

The only bad thing I can say about the Sessions is that they cost me a few hours of precious sightseeing time in Singapore. People had been telling me to come for a while but I never thought it would be as exciting, interesting and cool as it actually was. I wonder why we "dream" about some destinations (some of which turn out not to be that great once you are there) and turn down the idea of visiting others. Lesson learned: keep an open mind and good things will happen to you. To everybody else: Singapore is well worth a visit, with or without Oracle Sessions.

The picture on the left was taken from the bathroom of a hawker centre (street food court). There was no wall behind the sinks, only the awesome view of tropical plants, supertrees and the iconic Marina Bay Sands hotel.

As I write this I am already in Perth for the annual Western Australia Oracle User Group conference, enjoying a rare moment of peace and quiet before the conference starts tomorrow. I have already experienced a new world, and images of tropical plants, buildings old and new, and exotic smells (both good and bad) fill my mind, but this is merely the start of a tour that lasts two more weeks.

PS1: #SOS is a really bad hashtag
PS2: B-Tree indexes would be awesome if they were called Supertree-Indexes

Singapore: sending out an SOS

The Oracle community rocks! I mentioned previously that the beginning of the OTN APAC tour would take me to Singapore to meet Doug Burns again. Well, Doug took the opportunity to organize a little meetup of the local Singapore Oracle user community with Free Pizza, Free Beer and Free Oracle Presentations next Monday, November 3rd. If you are in Singapore or can get there with reasonable effort, I would love to meet you there. See the flyer and RSVP to the email address if you want to come, as seats are limited.
The speaker lineup sounds like a European takeover: you will have to listen to me talk about SQL Plan Management first, only to be rewarded with presentations by Doug Burns (Scotland) on Real-Time SQL Monitoring and Morten Egan (Denmark) on Big Data.

on the road again: OTN APAC tour and rest of 2014

I should be packing right now. But as much as I love travelling, I have yet to grow fond of packing. So instead I am updating my blog, which I have neglected a bit over the past months. Excuses are plentiful: I have been quite busy, and it will not get any better anytime soon. But this is a good kind of busy since I get to present at a bunch of events and meet a lot of good friends around the world. I was lucky enough to be accepted on the OTN APAC tour again. Last year I was "only" able to attend two stops in New Zealand and Australia, but this year I will manage to visit China (twice) and Japan in addition to Australia. The trip starts this week with a client visit, after which I will fly straight to Singapore for a long weekend to meet up with Doug Burns. After that, these will be the tour stops:

After Beijing I am leaving directly for Nuremberg for the annual DOAG conference, with one more customer meeting the day after, before finally heading home for a weekend. But that will not be the last trip of the year: the Oracle Midlands user group is having a "German night" with two of my talks and a stroll over the German Christmas market in Birmingham on November 25th. And the final event of the year will be the UKOUG Tech14 conference in Liverpool from December 8th to 10th, where I will get to co-present with Philippe Fierens.

Spreading Out Data With MINIMIZE RECORDS_PER_BLOCK

This post is part of a series on why sometimes we don't need to pack data in, we need to spread it out. In the previous post I discussed some old-school methods of achieving this. This post covers a technique that has been available for many Oracle versions, so it is also pretty old school, but I thought it was worth a post all to itself. The technique uses the ALTER TABLE clause "MINIMIZE RECORDS_PER_BLOCK".

Documentation

MINIMIZE RECORDS_PER_BLOCK

I have included a link to the Oracle documentation above, but one of the first things to highlight is that this feature is light on official documentation. In fact the page above and a brief reference in the SQL Language Quick Reference are just about all there is. Here is a quote from that link; the emphasis is mine.

“The records_per_block_clause lets you specify whether Oracle Database restricts the number of records that can be stored in a block. This clause ensures that any bitmap indexes subsequently created on the table will be as compressed as possible.”

There is no example in the official documentation but there is a very nice example on Richard Foote’s blog titled “Bitmap Indexes & Minimize Records_Per_Block (Little Wonder)”, and he works for Oracle so perhaps it counts as official documentation!

The documentation above mentions "restricts the number of records that can be stored in a block" but gives no clear guidance that the clause can be used for the reason we desire: to spread out data. When I can't find what I want in the Oracle documentation my next port of call is My Oracle Support, and there is a note describing how we might use this clause for our purpose.

Oracle Performance Diagnostic Guide (OPDG) [ID 390374.1]

In essence the note describes a "hot block" issue and a solution, which is to spread the data out. The suggested methods are to utilise PCTFREE or the table option MINIMIZE RECORDS_PER_BLOCK, and the note also walks through the steps needed to achieve our goal. Which leads us on to the next section.

How To [Mis]use MINIMIZE RECORDS_PER_BLOCK To Spread Out Data

The basic set of steps when using this clause to spread out data is:

  1. Temporarily remove the data from the table, typically with CTAS or Data Pump/export followed by a truncate or delete
  2. Insert the number of dummy records that any one data block should hold
  3. Restrict the number of records that can be stored in any block to the maximum number currently held
  4. Delete the dummy rows
  5. Reinstate the application data

Here is an example.

In the first post in the series I introduced the PROCSTATE table with all rows of data in a single block.

select dbms_rowid.rowid_block_number(rowid) blockno
,      count(*)
from procstate
group by dbms_rowid.rowid_block_number(rowid);
 
   BLOCKNO   COUNT(*)
---------- ----------
    239004         12

First let’s copy the data elsewhere and truncate the table.

SQL> create table procstate_store
  2  as select * from procstate;

Table created.

SQL> truncate table procstate;

Table truncated.

Next we insert a dummy row and restrict the number of rows any block will hold.

SQL> insert into procstate
  2  select * from procstate_store
  3  where rownum = 1;

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from procstate;

  COUNT(*)
----------
         1

SQL> alter table procstate minimize records_per_block;

Table altered.

Finally we delete the dummy row and re-insert the original data.

SQL> truncate table procstate;

Table truncated.

SQL> insert into procstate
  2  select * from procstate_store;

12 rows created.

SQL> commit;

Commit complete.

And we should have only one row per block, right? Wrong!

SQL> select dbms_rowid.ROWID_BLOCK_NUMBER(rowid) blockno, count(*)
  2  from procstate group by dbms_rowid.ROWID_BLOCK_NUMBER(rowid);

   BLOCKNO   COUNT(*)
---------- ----------
    278668          2
    278669          2
    278670          2
    278667          2
    278688          2
    278671          2

6 rows selected.

Two rows in each block… but that is still a good result, and using the same test as in the previous post I can see a huge reduction in contention when running my primitive test case. The original results are on the left and the new results, with two rows per block, on the right.

Contention: 11g normal heap table vs. minimize records_per_block

Under The Covers

In this section we'll dig a bit deeper into how this works and perhaps get some insight into why the example above resulted in two rows per block.

When MINIMIZE RECORDS_PER_BLOCK is used it manipulates a property stored in the SPARE1 column of SYS.TAB$. This property is known as the Hakan Factor (no, I don't know why either, but I do notice a Mr Hakan Jacobsson listed as an author of the Performance Tuning Guide and Data Warehousing Guide… who knows). Below is a query showing the Hakan Factor for a simple table stored in a tablespace using 8KB blocks.

select spare1 from sys.tab$ where obj# = 18013;

    SPARE1
----------
       736

The Hakan Factor is set by default for all heap tables or, more correctly, for all table segments of heap tables. Below is a table showing how it changes with block size. It makes sense that larger blocks can hold more rows.

BLOCK_SIZE  SPARE1
----------  ------
       4KB     364
       8KB     736
      16KB    1481
      32KB    2971

After a minimize operation with only a single row in a table it would be reasonable to expect SPARE1 to be set to “1”. So let’s check the value stored after a MINIMIZE operation on the PROCSTATE table.

    SPARE1
----------
     32769

This is because the MINIMIZE operation sets a flag in the 16th bit of the Hakan factor. We can see this using the BITAND SQL function in a query like the one below, which checks whether each specific bit is set by passing in increasing powers of 2. I have then used the LEAST() and GREATEST() functions to convert the results to a 1 or a 0.

select spare1
,      least(1,BITAND(spare1,32768)) c32k
,      least(1,BITAND(spare1,16384)) c16k
,      least(1,BITAND(spare1,8192)) c8k
,      least(1,BITAND(spare1,4096)) c4k
,      least(1,BITAND(spare1,2048)) c2k
,      least(1,BITAND(spare1,1024)) c1k
,      least(1,BITAND(spare1,512)) c512
,      least(1,BITAND(spare1,256)) c256
,      least(1,BITAND(spare1,128)) c128
,      least(1,BITAND(spare1,64)) c64
,      least(1,BITAND(spare1,32)) c32
,      least(1,BITAND(spare1,16)) c16
,      least(1,BITAND(spare1,8)) c8
,      least(1,BITAND(spare1,4)) c4
,      least(1,BITAND(spare1,2)) c2
,      greatest(0,BITAND(spare1,1)) c1
from sys.tab$
where obj# = (select obj# from sys.obj$
              where name = 'PROCSTATE');

-- After MINIMIZE with 1 row in PROCSTATE                                 ( decimal 1 )
SPARE1 C32K C16K  C8K  C4K  C2K  C1K C512 C256 C128  C64  C32  C16   C8   C4   C2   C1
------ ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
 32769    1    0    0    0    0    0    0    0    0    0    0    0    0    0    0    1

-- After MINIMIZE with 2 rows in PROCSTATE                                ( decimal 1 )
 32769    1    0    0    0    0    0    0    0    0    0    0    0    0    0    0    1

-- After MINIMIZE with 3 rows in PROCSTATE                                ( decimal 2 )
 32770    1    0    0    0    0    0    0    0    0    0    0    0    0    0    1    0

-- After MINIMIZE with 4 rows in PROCSTATE                                ( decimal 3 )
 32771    1    0    0    0    0    0    0    0    0    0    0    0    0    0    1    1

-- After MINIMIZE with 5 rows in PROCSTATE                                ( decimal 4 )
 32772    1    0    0    0    0    0    0    0    0    0    0    0    0    1    0    0

-- etc etc

Notice that the 32K flag is set in every case and the lower bits specify the row limit. There is an anomaly for a single row, and from there the stored values lag one behind the desired row limit. For example, with three rows in the table the Hakan factor has the MINIMIZE flag in the 16th bit and binary "10" (decimal 2) stored. So with a default Hakan factor of 736 the limit is actually set at 737 rows. I can only think that the anomaly where 1 row and 2 rows store the same value comes down to this feature being designed to optimise bitmap indexes, where the difference between one and two rows is irrelevant. Or perhaps storing a zero was ugly and a compromise was made; we'll never know.
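Putting that together, a small sketch of a query to decode the Hakan factor into something readable might look like this (bearing in mind the single-row anomaly above, the computed limit is only reliable for two or more rows):

select t.spare1
,      least(1, bitand(t.spare1, 32768)) minimized   -- MINIMIZE flag in the 16th bit
,      bitand(t.spare1, 32767) + 1      max_rows     -- lower bits store (limit - 1)
from   sys.tab$ t
where  t.obj# = (select obj# from sys.obj$
                 where name = 'PROCSTATE');

For a table that has never been minimized this reports the default limit, e.g. 736 + 1 = 737 rows for an 8KB block.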

It is worth noting at this point that SPARE1 is a multi-use column: a COMPRESS operation, for example, sets a flag in the 18th bit. I have not seen a case where the 13th to 15th bits are used; perhaps they are left to future-proof MINIMIZE for block sizes above 32KB (should Oracle ever decide to implement them). Anyway, back on track…

When "MINIMIZE RECORDS_PER_BLOCK" is executed the Oracle process full scans the table and uses the undocumented SYS_OP_RPB() function to retrieve the row number of every row within its block, e.g.

select max(sys_op_rpb(rowid)) from procstate;

The SYS_OP_RPB() function returns the same result as the documented DBMS_ROWID.ROWID_ROW_NUMBER() function. Output from a table containing four records is below:

select rownum
, dbms_rowid.rowid_relative_fno(rowid) file_no
, dbms_rowid.rowid_block_number(rowid) blk_no
, dbms_rowid.rowid_row_number(rowid) row_no
, sys_op_rpb(rowid) row_rno2
from haktest;

    ROWNUM    FILE_NO     BLK_NO     ROW_NO   ROW_RNO2
---------- ---------- ---------- ---------- ----------
         1         11      18676          0          0
         2         11      18676          1          1
         3         11      18676          2          2
         4         11      18676          3          3

Notice that the numbering starts at 0, which explains why the value stored in TAB$.SPARE1 is one less than the actual limit to our eyes. The same is true of a block dump: the numbering of rows in a data block starts at 0.

Finally, to return a table to its default setting, the ALTER TABLE NOMINIMIZE clause is used. For example:
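SQL> alter table procstate nominimize records_per_block;

Table altered.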

Summary

In summary, the MINIMIZE clause is not very well documented and definitely not well exposed in the data dictionary. It is intended for use with bitmap indexes but can also be used to spread data out, in a similar way to PCTFREE but by managing the number of rows rather than the amount of free space. I have used MINIMIZE in the past, but now that I am older and less excitable I would probably stick to other methods of achieving my goals (e.g. PCTFREE or partitioning).

In the next post in the series we’ll look at some newer ways we can spread out data.

OpenWorld 2014 activities

In just a few days I will be sitting on a plane to San Francisco for this year's geek festivities at Oracle OpenWorld, and the schedule is already pretty full. This is the second time I have been invited to the ACE Director briefing at HQ on the Thursday and Friday before the conference. It is not only a great opportunity to hear about all the new and upcoming stuff directly from product managers and executives but also two days of seeing a ton of familiar old and new faces.

Sunday will be super packed and probably has enough content and activities to fill a whole week. I am presenting in an early slot at 9am, which means I have to ditch the Golden Gate Bridge run organized by Jeff Smith and the SQL Developer gang. ODTUG has asked me to talk about how developers can implement a feature like flashback data archives in their own applications, and I am happy to be co-presenting with Henning, who can cover the developer view much better than I can.

By the time we are done with the presentation, RACattack will already be in full swing. You can find the ninjas at the OTN lounge. Join us to install a running 12c RAC cluster right on your laptop, stop by to chat with experts, or join the ninja ranks and help out fellow DBAs yourself. There is also a very limited supply of RACattack ninja t-shirts, so get there early. And if you already have a cluster on your laptop, you can still join the OTN kickoff party from 3 to 5 and meet like-minded people, including many ACEs.

The EOUC came up with a unique and fresh format for this conference and will host a session spanning two timeslots called 12 on 12c, in which a dirty dozen techies from the EMEA region will talk about 12 different topics, each speaker getting only 5 minutes. My tiny slice is second to last, and I will talk about what 12c can do for existing applications.

The night will wrap up with the ACE dinner, which is always a bittersweet affair: on one hand you get to hang out with so many smart friends, on the other there are always a few you wish were there but who are not part of the club (yet). It also means that I will only be able to attend the OPN partner reception towards the very end.

If there is one event that you should not miss, and that you cannot make up for at any other Oracle conference in the world, it is the Swim in the Bay on Monday morning. Chet "oraclenerd" Justice started making this official two years ago and we have grown ever since. Meet the crazy bunch at Aquatic Park at 7:30am and secure your own oraclenerd beach ball!

Following the success of RACattack at previous conferences, this year you can experience an attack of the Attacks: the OTN lounge will also be hosting RepAttack and CloneAttack, where you can talk to experts and gain hands-on experience with replication and cloning on your own laptop.

I have always loved the OTN night parties and was a bit sad that there was none last year. Well, it is back on. Get your best geek on and join the party on Monday night.

The now traditional OpenWorld bloggers meetup is going to happen once more on Wednesday evening, a perfect opportunity to meet fellow bloggers, authors and tweeps before the big party with Aerosmith on Treasure Island. Check out this video from last year's event.

Navigating the session catalog has been a big challenge these past years because of the sheer volume of content, and finding the sessions you were interested in was a bit like chasing needles in a haystack. This year there are "focusOn" documents that feature a best-of list of sessions per area of interest. Check them out! Especially this list of all 149 sessions by Oracle ACEs.

And then there is also Oak Table World right next door, with even deeper technical sessions by some of the smartest people in the Oracle technical community. It's free, it is only a minute from Moscone and you should not miss it.

Spreading Out Data – Old School Methods

In the previous post in this series we introduced a crude test case in which we had data packed into a single block, causing severe buffer contention. To recap, here is a chart of ASH data demonstrating the issue; the large grey stripe is time wasted on global cache contention.

Contention  11g normal heap table

This post covers some "old school" methods that can be used to spread out the data and hopefully reduce contention. I have purposefully ignored the possibility of a design flaw in the code; for the purposes of this series we are assuming the code is in good shape.

Node Affinity

This has nothing to do with spreading data out, but it is an obvious remedy for global cache contention: manage where the workload runs. I'm not going to discuss this too much as it doesn't protect us from "buffer busy waits" within a single instance, although in some cases node affinity would be the correct solution. I have included a chart from the same test case as before but utilising only a single RAC node. You can see that the global cache contention (grey) has gone but we still have a little "buffer busy wait" contention (red).

Contention 11g nodeaffinity

PCTFREE

Oracle documentation: PCTFREE

This option doesn't really need any explanation but here we create the test table with the clause "PCTFREE 99" and reload the data.

create table procstate
(    
...
) pctfree 99;

And the data is now spread out.

select dbms_rowid.rowid_block_number(rowid) blockno
, count(*)
from procstate
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO   COUNT(*)
---------- ----------
    239021          2
    239040          2
    239022          2
    239020          2
    239019          2
    239023          2

The test case row size is quite small, approximately 0.5% of the free space in a block. With PCTFREE 99 only about 1% of each 8KB block is available for inserts, so we are left with two rows in each block. Looking at ASH data from the test case shows that contention is much reduced: a nice solution.

Contention 11g pctfree

Single Table Hash Cluster

Oracle documentation: Single Table Hash Cluster

This option didn't occur to me initially as I tend to avoid hash clusters, but when I presented on this topic at UKOUG Tech13 a leading light in the Oracle community suggested it would also be a good solution. So here it is.

The test table has been created in a Hash Cluster stating the size of a cluster of records as 8000 bytes. This is a cheat to ensure each cluster of records ends up in its own block. We are clustering on the primary key column so each block will contain only a single row.

create cluster procstate_cluster
(proc_id number(3)) 
size 8000 single table hashkeys 100;

create table procstate
(
...
,	
)
cluster procstate_cluster(proc_id);

Before loading the data we already have 110 empty blocks. This is because we have configured the cluster to have 100 hash keys and, as stated above, ensured each key maps to a different block. Oracle then rounds up the number of cluster keys to the next prime number (109) and the extra block is the table header.
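The block count below came from a simple segment-level query; the segment takes its name from the cluster, so it was something like this:

select blocks
from   user_segments
where  segment_name = 'PROCSTATE_CLUSTER';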

    BLOCKS
----------
       110

After loading the test data we see there is only a single row in each block.

   BLOCKNO   COUNT(*)
---------- ----------
    172398          1
    172361          1
    239067          1
    172402          1
    172365          1
    239079          1
    172389          1
    172352          1
    239075          1
    172356          1
    239071          1
    172394          1

Running the test case again shows the contention has vanished completely. All sessions spend their time on CPU doing useful work.

Contention 11g Hash Cluster

Another nice if somewhat obscure solution.

Minimize Records Per Block

Speaking of obscure solutions, we come to the "MINIMIZE RECORDS_PER_BLOCK" clause. We'll save that for the next installment as I have quite a lot of material to cover.

Securing Oracle DB Accounts With Default Passwords

One view I didn't know about until recently is DBA_USERS_WITH_DEFPWD. This view appeared in 11g but it obviously passed me by. The reason it cropped up recently was a requirement to ensure that the default accounts in an Oracle database were not left with default passwords, regardless of their account status. To achieve this I knocked up a quick snippet of PL/SQL that can be added to automation scripts, ticking another box on the audit checklist. The code deliberately does not output the passwords, to avoid leaving them in log files. I thought it was worth sharing here.

set serveroutput on
begin
  for i in (	select 'alter user '||u.username||' identified by '
                     ||dbms_random.string('a', 10)||'_'||trunc(dbms_random.value(1,99)) cmd
                     , username
                from sys.dba_users_with_defpwd u
                where username <> 'XS$NULL')
  loop
    dbms_output.put_line('Securing '||i.username||'...');
    execute immediate i.cmd;
  end loop;
end;
/

And the output

SQL> set serveroutput on
SQL> begin
  2    for i in (    select 'alter user '||u.username||' identified by '
  3                       ||dbms_random.string('a', 10)||'_'||trunc(dbms_random.value(1,99)) cmd
  4                       , username
  5                  from sys.dba_users_with_defpwd u
  6                  where username <> 'XS$NULL')
  7    loop
  8      dbms_output.put_line('Securing '||i.username||'...');
  9      execute immediate i.cmd;
 10    end loop;
 11  end;
 12  /
Securing GSMUSER...
Securing MDSYS...
Securing OLAPSYS...
Securing LBACSYS...
Securing ORDDATA...
Securing ORDSYS...
Securing DVF...
Securing SYSDG...
Securing APPQOSSYS...
Securing WMSYS...
Securing GSMCATUSER...
Securing OJVMSYS...
Securing SYSTEM...
Securing XDB...
Securing SI_INFORMTN_SCHEMA...
Securing CTXSYS...
Securing ORACLE_OCM...
Securing MDDATA...
Securing ORDPLUGINS...
Securing SPATIAL_CSW_ADMIN_USR...
Securing SPATIAL_WFS_ADMIN_USR...
Securing DVSYS...
Securing DBSNMP...
Securing SYS...
Securing SYSKM...
Securing DIP...
Securing ANONYMOUS...
Securing AUDSYS...
Securing GSMADMIN_INTERNAL...
Securing SYSBACKUP...
Securing OUTLN...

PL/SQL procedure successfully completed.

And a second time, there is nothing to do

SQL> /

PL/SQL procedure successfully completed.

The snippet could be changed to add "ACCOUNT LOCK" if required, though beware locking SYS on 11.2.0.4 and above:

ORA-28000: The Account Is Locked When Log In As SYS User Remotely While SYS User Was Locked (Doc ID 1601360.1)
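A rough sketch of that variation, with SYS excluded to sidestep the issue in the note above:

set serveroutput on
begin
  for i in ( select 'alter user '||u.username||' identified by '
                    ||dbms_random.string('a', 10)||'_'||trunc(dbms_random.value(1,99))
                    ||' account lock' cmd
                  , username
             from sys.dba_users_with_defpwd u
             where username not in ('XS$NULL','SYS'))
  loop
    dbms_output.put_line('Securing and locking '||i.username||'...');
    execute immediate i.cmd;
  end loop;
end;
/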

Sometimes we don’t need to pack data in, we need to spread it out

I've gotten away with giving a presentation called "Contentious Small Tables" (download here) for the UKOUG three times now, so I think it's time to retire it from duty and serialise it here.

The planned instalments are below; I'll change the items to links as the posts are published. The methods discussed in these posts are not exhaustive but hopefully cover most of the useful and interesting options.

  1. Sometimes we don’t need to pack data in, we need to spread it out (this post)
  2. Spreading out data – old school methods
  3. Spreading out data – with minimize records per block (also old school)
  4. Spreading out data – some partitioning methods
  5. Spreading out data – some modern methods

This post is the first instalment.

Sometimes we don’t need to pack data in, we need to spread it out

As data sets continue to grow there is more and more focus on packing data in as tightly as we can. But for performance reasons there are cases where we DBAs and developers need to turn this on its head and try to spread data out. An example is a small table with frequently updated and locked rows: there is no TX blocking, but instead contention on the buffers containing the rows. This contention can manifest itself as "buffer busy waits", as one of the "gc" variants such as "gc buffer busy acquire" or "gc buffer busy release", or maybe as "latch: cache buffers chains".

You'll probably come at a problem of this nature from a session perspective via OEM, ASH data, v$session_event or SQL*Trace data. Taking ASH data as an example, you can see below that for my exaggerated test global cache busy waits dominate the time taken and, key for this series of posts, the "P1" and "P2" columns contain only a small number of distinct values.

column event format a35
select * from (
 select NVL(event,'CPU') event
 , count(*) waits
 , p1, p2
 from gv$active_session_history
 where sample_time between 
       to_date(to_char(sysdate,'DD-MON-YYYY')||' &from_time.','DD-MON-YYYY HH24:MI')
   and to_date(to_char(sysdate,'DD-MON-YYYY')||' &to_time.','DD-MON-YYYY HH24:MI')
 and module = 'NJTEST'
 group by NVL(event,'CPU'), p1, p2
 order by 2 desc
) where rownum <= 5;

EVENT                   WAITS    P1      P2
---------------------- ------ ----- -------
gc buffer busy acquire   1012     5  239004
gc buffer busy release    755     5  239004
gc current block busy     373     5  239004
CPU                        65     0       0

For the wait events above, "P1" and "P2" have the following meanings:

select distinct parameter1, parameter2
from v$event_name
where name in ('gc buffer busy acquire'
              ,'gc buffer busy release'
              ,'gc current block busy');

PARAMETER1     PARAMETER2
-------------- --------------
file#          block#

So above we have contention on a single buffer. From ASH data we can also get the "SQL_ID" or "CURRENT_OBJ#" to identify the problem table.
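Alternatively, the file# and block# from P1/P2 can be mapped straight back to a segment. A sketch using DBA_EXTENTS with the values from the ASH output above (beware, this can be slow on databases with many extents):

select owner, segment_name, segment_type
from   dba_extents
where  file_id = 5
and    239004 between block_id and block_id + blocks - 1;

Either way we arrive at the test table. Below we show that all rows in the table are in a single block – 239004.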

select dbms_rowid.rowid_block_number(rowid) blockno
, count(*)
from procstate
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO   COUNT(*)
---------- ----------
    239004         12

After discovering this information the first port of call is to look at the SQL statement, execution plan and supporting code, but assuming the application is written as well as it can be, perhaps it is time to look at the segment and data organisation.

To help illustrate the point of this series, here is a chart showing contention in ASH data from a crude test case: the huge grey stripe is "gc" busy waits and the thin red stripe is standard "buffer busy waits".

Contention  11g normal heap table

The next post in this series will look at some “Old School” methods of reducing this contention.