Tuesday, May 24, 2011

Extended RAC vs Golden Gate


Every so often this question comes up: "Why don't we do extended RAC?"

 

 
Well, here is some of my answer as to what we need to consider.

 
First, Extended RAC is setting up a RAC cluster across datacenters. Each datacenter has its own independent storage array. Some nodes are placed in one datacenter (A) and some are placed in the other (B). To keep it simple, and for redundancy, let's say there are two nodes in each datacenter (four total). These four nodes are all part of the same cluster, and they all share the same global cache. Disk writes are mirrored across datacenters. This means that all writes are synchronous, and each write must be acknowledged on both arrays (in both datacenters). Dark fiber is a must to accomplish this.
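
On the storage side, ASM failure groups are one common way to get that cross-site mirroring. Here is a hedged sketch (the diskgroup and disk names are mine, invented for illustration):

-- Normal redundancy with one failure group per site's array means every
-- extent is mirrored, and every write is acknowledged by both datacenters.
create diskgroup data normal redundancy
  failgroup site_a disk '/dev/mapper/site_a_disk1', '/dev/mapper/site_a_disk2'
  failgroup site_b disk '/dev/mapper/site_b_disk1', '/dev/mapper/site_b_disk2';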

  
Issues? There are a number of concerns this brings up.

 

 
1) The global cache is shared across the datacenters. Any latency is felt by any processing that requires sharing data between nodes in opposite datacenters. This can cause some performance degradation.

 
2) Writes are synchronous across datacenters. This increases the time for every disk write.

 
3) You are not protected against logical corruption, upgrade outages, etc.

 
Best practice is to also have a physical standby for HA, and to allow rolling maintenance such as patching (e.g. via a transient logical standby).

 

 
In the end you end up with twice as much equipment, twice as much storage, and a far more complex system than just keeping a physical standby database. It is even recommended that your quorum be kept in a third datacenter.

 
The cost doubles, and the complexity doubles. The question you need to ask is: is it worth all this?

 
Another option is to utilize Golden Gate. Golden Gate gives you similar flexibility by going active-active across datacenters. You still need to have a physical standby, but the advantage of GG over Extended RAC is that the physical standby does not have to be the same class of server or storage ($$). The whole point of the physical standby is to keep the changes so that they can be sent to the other active cluster; the standby cluster never becomes a client-available primary.

 

 
I would summarize the two choices as follows.

 

 
Extended RAC

PROS
  • Guaranteed synchronous writes across datacenters.

CONS
  • Less availability for any database change requiring a shutdown (parameter changes or patching).
  • Greater latency whenever data is shared between datacenters.
  • BCP requires a full-size footprint.
 
Golden Gate

PROS
  • Database remains available during changes that would otherwise require downtime.
  • Application releases can be applied in a rolling fashion.
  • Smaller footprint for BCP servers.
  • Less latency for reads and commits (though there is latency before data is visible on the other side).

CONS
  • Database updates are asynchronous. There will be a delay (< 2 seconds) before an update is visible in the other datacenter.

 

 
Neither one of these will guarantee 99.999% uptime, but GG comes a lot closer. With Extended RAC there is more planned downtime, because it is a single database.
As you can see, the decision should be driven by how important it is for the application to have the data immediately available for reads across datacenters. If you can tolerate the < 2 second latency, GG is the better choice. If the application can't tolerate the latency, Extended RAC is the only viable solution.

Oracle and networking latency with small packets

Recently I ran into some performance issues, and tracking down the cause was a challenge.  I wanted to share a valuable lesson that I learned.

It started like any other performance issue.  The phone rings, and the database is slow.  Why do they think the database is slow?  Because the app isn't very busy, and the processing isn't moving along at the expected pace.  This is a new app, and they are in beta.  The process goes along as it always does.  Check the database for any bottlenecks or performance issues: the database all looks good, and the queries are all executing in milliseconds on the new hardware.  The application looks good too: each step executes in milliseconds, then calls out to the database for the next query.  Everything looks good; the database is fast, the application is fast.  The issue goes to the networking group, and the network group concludes that we have a nice 1GbE link between the two servers, and the network isn't breaking a sweat.  The issue continues.

So what was the issue?  It was the number of hops, and the distance of those hops, between the application and the database server.  It turns out that this is a chatty app, and it sends a lot of small packets back and forth between the database server and the app server.
To make matters worse, the application was just uplifted to new hardware for both the database and the application.  This uplift made the application less of a bottleneck, pushing the bottleneck to the network.

The issue turned out to be network latency.  Not a lot of latency (just a couple of milliseconds), but enough to be noticeable for a very chatty application.  It becomes more and more noticeable as servers become faster.  Now that queries run in fractions of a millisecond, the network pops up as more of a bottleneck.

The lesson I want to pass on is that I would highly suggest you measure the network latency and know how much of an impact it has, especially if you are going across datacenters, or across many networks in the same datacenter.

My test was a simple one (a sketch of the script follows these steps).

1) Create a script full of "select 'x' from dual;" statements.

2) Put a !date call at the beginning and end of the script.

3) Run this from a client on your network, and compare the difference between the begin and end times.

4) Run this for multiple scenarios. I even ran it over IPC, to establish a baseline without the network and find out what the network overhead really is.
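
Here is a minimal sketch of such a script (the file name, repeat count, and generator one-liner are mine; it assumes SQL*Plus on a Unix client):

-- latency_test.sql : each select is one round trip, so
-- (end time - start time) / number of selects ~= per-round-trip latency
set termout off feedback off
!date
select 'x' from dual;
select 'x' from dual;
-- ... repeat the select a few thousand times, e.g. generated with:
-- $ for i in $(seq 1 5000); do echo "select 'x' from dual;" >> latency_test.sql; done
!date
exit

Run it from each location you want to compare (and once over IPC on the database server itself); the two date stamps bracket the elapsed time.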


Knowing the expected latency as you go across your datacenter(s) is useful for finding out where that missing time is going. So often it is blamed on the DBA (database), and doing this kind of check will let you show that it isn't the database.

Thursday, April 28, 2011

Cardinality Feedback recommendations

Here are the recommendations I have for working nicely with cardinality feedback.

1) Make sure your statistics are accurate.

2) Try to use histograms where necessary for cardinality.

3) Avoid functions on columns, or anything else that forces Oracle to estimate cardinality.

4) Test a query more than once.  Cardinality feedback doesn't show up until the second execution.

5) Don't necessarily trust "explain plan for", or any tool that gives you the explain plan based on the first execution.  The first plan may change.

6) Use function-based indexes on expressions (like the decode expression in my previous post), and gather statistics on the function-based index.

7) Create a histogram for indexes with composite keys, especially when the cardinality isn't as expected. The perfect example of this is a composite index on the columns "Birth Month" and "Astrological Sign": there are only so many valid combinations. Without a histogram, Oracle assumes a Cartesian product and estimates cardinality based on that. (A sketch of items 6 and 7 follows this list.)
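
Here is a hedged sketch of items 6 and 7. On 11g, extended (column group) statistics are one way to capture the real combination count; the PEOPLE table and every object name below are mine, invented for illustration:

-- Item 6: index the expression, then gather stats so the hidden virtual
-- column behind the function-based index gets real cardinality figures.
create index testcard.idx2_object_sample1 on testcard.object_sample1 (decode(owner,null,1,2));

EXECUTE DBMS_STATS.GATHER_TABLE_STATS( ownname => 'TESTCARD', tabname => 'OBJECT_SAMPLE1', method_opt => 'FOR ALL HIDDEN COLUMNS SIZE AUTO', cascade => TRUE);

-- Item 7: a column group tells Oracle how many month/sign pairs really
-- occur (roughly 24) instead of multiplying 12 months x 12 signs = 144.
select dbms_stats.create_extended_stats( ownname => user, tabname => 'PEOPLE', extension => '(birth_month, astro_sign)') from dual;

EXECUTE DBMS_STATS.GATHER_TABLE_STATS( ownname => user, tabname => 'PEOPLE', method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY');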

I think it all comes down to giving Oracle as much information as possible, and then making sure you have tested the query a couple of times.

Often the development folks don't realize these things.  They find "cheats" that only work the first time, and they often rely on the explain plan from the first execution.

It is up to us in the DBA realm to educate developers on how things have changed with this feature.

Wednesday, April 27, 2011

Cardinality Feedback Example

I have been fighting cardinality feedback issues all week, and I wanted to document an example of cardinality feedback in action.

First, let's build my test case:

drop user testcard cascade;
create user testcard identified by testcard;
grant dba to testcard;
create TABLE testcard.object_sample1 as select * from dba_objects where 1=2;
create TABLE testcard.object_sample2 as select * from dba_objects where 1=2;

Insert into testcard.object_sample1 (select * from dba_objects where owner in ('SYS','TESTCARD') );
Insert into testcard.object_sample1 (select * from testcard.object_sample1 );
Insert into testcard.object_sample1 (select * from testcard.object_sample1 );
Insert into testcard.object_sample2 (select * from dba_objects where owner in ('SYS','TESTCARD') );

create index testcard.idx1_object_sample2 on testcard.object_sample2(owner);
create index testcard.idx1_object_sample1 on testcard.object_sample1(owner);

EXECUTE DBMS_STATS.GATHER_TABLE_STATS( ownname => 'TESTCARD', tabname => 'OBJECT_SAMPLE1', method_opt => 'FOR ALL COLUMNS SIZE 1',granularity => 'ALL',cascade => TRUE,degree => DBMS_STATS.DEFAULT_DEGREE);

EXECUTE DBMS_STATS.GATHER_TABLE_STATS( ownname => 'TESTCARD', tabname => 'OBJECT_SAMPLE2', method_opt => 'FOR ALL COLUMNS SIZE 1',granularity => 'ALL',cascade => TRUE,degree => DBMS_STATS.DEFAULT_DEGREE);


Now let's take a look at what we have.

select owner,count(1) from testcard.object_sample1 group by owner;

OWNER                            COUNT(1)
------------------------------ ----------
TESTCARD                                8
SYS                                126620

select owner,count(1) from testcard.object_sample2 group by owner;

OWNER                            COUNT(1)
------------------------------ ----------
TESTCARD                                2
SYS                                 31655


Now here is the query

select * from testcard.object_sample1 a,
testcard.object_sample2 b
where a.owner='SYS' and a.owner is not null and a.object_type='SSSSS'
and b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner
 
Notice the "is not null" clause, and notice the cardinality.  Finally, notice I did not create a histogram on the columns.  If you run this query for an owner of 'TESTCARD' it should return 2 rows, and if you run it for 'SYS' it should scan 126,000+ rows.
 
Well, the developers are running it for 'TESTCARD', and it is doing a full table scan. They want it to use the index on OWNER.  So they find a cheat and rewrite the query. This fools the optimizer into thinking the index is a better bet, because a function (the decode) is applied to the column.
 
Below is the query with the "is not null" clause and with the "decode", so you can see the 2 different paths.
 

"A.OWNER IS NOT NULL"
 
EXPLAIN PLAN
SET STATEMENT_ID = 'abc' FOR
select * from testcard.object_sample1 a,
testcard.object_sample2 b
where a.owner='SYS' and a.owner is not null and a.object_type='SSSSS'
and b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner;
set pagesize 25
set linesize 121
set long 1000000
col xplan format a100
SELECT dbms_xplan.display_plan(statement_id => 'abc') AS XPLAN
FROM dual;
 


 Plan Hash Value  : 3876965040

-------------------------------------------------------------------------------------
| Id  | Operation            | Name           | Rows   | Bytes    | Cost | Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                | 294519 | 56547648 |  650 | 00:00:08 |
| * 1 |   HASH JOIN          |                | 294519 | 56547648 |  650 | 00:00:08 |
| * 2 |    TABLE ACCESS FULL | OBJECT_SAMPLE2 |    384 |    36864 |  136 | 00:00:02 |
| * 3 |    TABLE ACCESS FULL | OBJECT_SAMPLE1 |   1535 |   147360 |  513 | 00:00:07 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 1 - access("B"."OBJECT_TYPE"="A"."OBJECT_TYPE" AND "A"."OWNER"="B"."OWNER")
* 2 - filter("B"."OBJECT_TYPE"='SSSSS' AND "B"."OWNER"='SYS')
* 3 - filter("A"."OBJECT_TYPE"='SSSSS' AND "A"."OWNER"='SYS')



"DECODE(A.OWNER,NULL,1,2)=2"
EXPLAIN PLAN

SET STATEMENT_ID = 'cde' FOR
select * from testcard.object_sample1 a,
testcard.object_sample2 b
where a.owner='SYS' and decode(a.owner,null,1,2)=2 and a.object_type='SSSSS'
and b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner ;
set pagesize 25
set linesize 121
set long 1000000
col xplan format a100
SELECT dbms_xplan.display_plan(statement_id => 'cde') AS XPLAN
FROM dual;



----------------------------------------------------------------------------------------------------
 Plan Hash Value  : 2829654189

------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                | Rows | Bytes  | Cost | Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                     | 2945 | 565440 |  279 | 00:00:04 |
| * 1 |   HASH JOIN                    |                     | 2945 | 565440 |  279 | 00:00:04 |
| * 2 |    TABLE ACCESS BY INDEX ROWID | OBJECT_SAMPLE1      |   15 |   1440 |  142 | 00:00:02 |
| * 3 |     INDEX RANGE SCAN           | IDX1_OBJECT_SAMPLE1 |  629 |        |  133 | 00:00:02 |
| * 4 |    TABLE ACCESS FULL           | OBJECT_SAMPLE2      |  384 |  36864 |  136 | 00:00:02 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
------------------------------------------
* 1 - access("B"."OBJECT_TYPE"="A"."OBJECT_TYPE" AND "A"."OWNER"="B"."OWNER")
* 2 - filter("A"."OBJECT_TYPE"='SSSSS')
* 3 - access("A"."OWNER"='SYS')
* 3 - filter(DECODE("A"."OWNER",NULL,1,2)=2)
* 4 - filter("B"."OBJECT_TYPE"='SSSSS' AND "B"."OWNER"='SYS')


So, as you can see, the decode plan is a better plan for an owner of 'TESTCARD'.  The development folks then rewrite their query with this "cheat", thinking they beat the optimizer...


But cardinality feedback kicks in when the query is executed with an owner of 'SYS'.

Below is the output of display_cursor.  Notice the 2 plans, and notice that cardinality feedback was used to change the plan from an index lookup back to a full table scan.


select * from testcard.object_sample1 a,testcard.object_sample2 b where a.owner='SYS' and decode(a.owner,null,1,2)=2 and a.object_type='SSSSS' and b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner;


PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  91ugxuy8jc3g4, child number 0
-------------------------------------
 select * from testcard.object_sample1 a,
testcard.object_sample2 b  where a.owner='SYS' and
decode(a.owner,null,1,2)=2  and a.object_type='SSSSS'       and
b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner

Plan hash value: 2829654189

----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                     |       |       |   279 (100)|          |
|*  1 |  HASH JOIN                   |                     |  2945 |   552K|   279   (1)| 00:00:04 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| OBJECT_SAMPLE1      |    15 |  1440 |   142   (1)| 00:00:02 |
|*  3 |    INDEX RANGE SCAN          | IDX1_OBJECT_SAMPLE1 |   629 |       |   133   (1)| 00:00:02 |
|*  4 |   TABLE ACCESS FULL          | OBJECT_SAMPLE2      |   384 | 36864 |   136   (0)| 00:00:02 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
   1 - access("B"."OBJECT_TYPE"="A"."OBJECT_TYPE" AND "A"."OWNER"="B"."OWNER")
   2 - filter("A"."OBJECT_TYPE"='SSSSS')
   3 - access("A"."OWNER"='SYS')
       filter(DECODE("A"."OWNER",NULL,1,2)=2)
   4 - filter(("B"."OBJECT_TYPE"='SSSSS' AND "B"."OWNER"='SYS'))






select * from testcard.object_sample1 a,testcard.object_sample2 b where a.owner='SYS' and decode(a.owner,null,1,2)=2 and a.object_type='SSSSS' and b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner;

select * from table(dbms_xplan.display_cursor('91ugxuy8jc3g4',null,'typical +peeked_binds'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  91ugxuy8jc3g4, child number 0
-------------------------------------
 select * from testcard.object_sample1 a,
testcard.object_sample2 b  where a.owner='SYS' and
decode(a.owner,null,1,2)=2  and a.object_type='SSSSS'       and
b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner

Plan hash value: 2829654189

----------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                     |       |       |   279 (100)|          |
|*  1 |  HASH JOIN                   |                     |  2945 |   552K|   279   (1)| 00:00:04 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| OBJECT_SAMPLE1      |    15 |  1440 |   142   (1)| 00:00:02 |
|*  3 |    INDEX RANGE SCAN          | IDX1_OBJECT_SAMPLE1 |   629 |       |   133   (1)| 00:00:02 |
|*  4 |   TABLE ACCESS FULL          | OBJECT_SAMPLE2      |   384 | 36864 |   136   (0)| 00:00:02 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
   1 - access("B"."OBJECT_TYPE"="A"."OBJECT_TYPE" AND "A"."OWNER"="B"."OWNER")
   2 - filter("A"."OBJECT_TYPE"='SSSSS')
   3 - access("A"."OWNER"='SYS')
       filter(DECODE("A"."OWNER",NULL,1,2)=2)
   4 - filter(("B"."OBJECT_TYPE"='SSSSS' AND "B"."OWNER"='SYS'))

SQL_ID  91ugxuy8jc3g4, child number 1
-------------------------------------
 select * from testcard.object_sample1 a,
testcard.object_sample2 b  where a.owner='SYS' and
decode(a.owner,null,1,2)=2  and a.object_type='SSSSS'       and
b.owner='SYS' and b.object_type=a.object_type and a.owner=b.owner

Plan hash value: 1738482415

-------------------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |       |       |   650 (100)|          |
|*  1 |  HASH JOIN         |                |  2945 |   552K|   650   (1)| 00:00:08 |
|*  2 |   TABLE ACCESS FULL| OBJECT_SAMPLE1 |    15 |  1440 |   513   (1)| 00:00:07 |
|*  3 |   TABLE ACCESS FULL| OBJECT_SAMPLE2 |   384 | 36864 |   136   (0)| 00:00:02 |

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("B"."OBJECT_TYPE"="A"."OBJECT_TYPE" AND
              "A"."OWNER"="B"."OWNER")
   2 - filter(("A"."OBJECT_TYPE"='SSSSS' AND "A"."OWNER"='SYS' AND
              DECODE("A"."OWNER",NULL,1,2)=2))
   3 - filter(("B"."OBJECT_TYPE"='SSSSS' AND "B"."OWNER"='SYS'))

Note
-----
   - cardinality feedback used for this statement


This should be enough to play with and see how cardinality feedback can kick in to dynamically change the plan on you.

NOTE : I've played with this further, and this is what I could find.  Anyone correct me if any of these are wrong.

1) The v$sqlstats view only showed one plan_hash_value, even though the plan changed (a sketch comparing v$sql and v$sqlstats follows this list).


2) The plan didn't change immediately when I executed the query one run after another.  When I put in a sleep, the plan changed (there must be some sort of delay before cardinality feedback kicks in).

3) Once locked into the new plan, it stays with it.

4) Adaptive cursor sharing only kicks in if a histogram is used, or if the bind values fall outside the high/low values.

5) Oracle avoids using plans that are suspect (i.e. in my example with decode).

6) Oracle tends to use cardinality feedback more when the plan is "suspect" because of some sort of rough estimation.

7) Using dynamic sampling causes Oracle to use cardinality feedback more.
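
For number 1, here is a hedged sketch of how to compare the two views (the sql_id is the one from the example above):

-- v$sql has one row per child cursor, each with its own plan_hash_value;
-- v$sqlstats rolls the children up, so a plan change is easy to miss there.
select sql_id, child_number, plan_hash_value, executions from v$sql where sql_id = '91ugxuy8jc3g4';

select sql_id, plan_hash_value, executions from v$sqlstats where sql_id = '91ugxuy8jc3g4';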



Cardinality feedback seems to be more of a weighting thing. Oracle tries to look at the "situation", and the less reliable the estimate is likely to be, the more likely Oracle is to use cardinality feedback to choose another plan.






Friday, April 22, 2011

cardinality feedback

My big challenge this week was an 11g database that wouldn't behave.  I'm sure you've been there: getting frustrated by a database that doesn't seem to perform right.

I can tell you that challenge has gotten even bigger in 11g.  A new feature was added called "cardinality feedback".  The idea is simple: Oracle comes up with a plan, including a cardinality estimate for each step, and then executes the plan.  If, after executing the plan, the optimizer believes the actual cardinality is out of line with the estimated cardinality, it will pick a new plan for the next execution.

Sounds like a great feature, right?  But what if you have issues with histograms, and your actual data is lopsided?  Oracle may choose a plan, run it against the lopsided data, and decide the plan is no good.  Oracle morphs the plan for you.  No charge, no control.  It gets frustrating.

The only quick fix I've found is the COE_PROFILE script that is part of the SQLT tuning kit available from MOS.
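
For reference, a hedged sketch of how that script is typically driven; the sql_id and plan hash below are from my cardinality feedback example post, and the exact file names inside the SQLT bundle may differ:

-- generate a profile script for a known-good sql_id / plan_hash_value pair,
-- then run the generated script to pin that plan
SQL> @coe_xfr_sql_profile.sql 91ugxuy8jc3g4 1738482415
SQL> @coe_xfr_sql_profile_91ugxuy8jc3g4_1738482415.sql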

There is a quick way to test this behavior (if you think it's occurring): use this undocumented parameter.

-- Turn on (enable) cardinality feedback
alter session set "_optimizer_use_feedback" = true;

-- Turn off (disable) cardinality feedback
alter session set "_optimizer_use_feedback" = false;
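
To confirm that feedback is what rebuilt a cursor, this hedged sketch checks the shared-cursor mismatch flags (the column exists as of 11.2):

-- child cursors created because cardinality feedback invalidated the old plan
select sql_id, child_number, use_feedback_stats from v$sql_shared_cursor where use_feedback_stats = 'Y';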

Tuesday, April 19, 2011

HCC compression

In my free time this week I have been playing on the Exadata.  The first thing I wanted to determine was the compression ratio for my data, and the select performance.

One of the first observations I made was how much HCC saves you on selects. I took a 158GB table from another system and loaded it up in HCC query mode (a direct-path load).  The table is now 6GB in size.  WOW, but that didn't impress me as much as my select.
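
For anyone who wants to try the same thing, here is a hedged sketch of that kind of load (the table names are mine; QUERY HIGH is one of the HCC query-mode settings, and rows only get HCC-compressed through a direct-path operation like the APPEND insert below):

create table big_copy compress for query high as select * from source_table where 1=0;

insert /*+ APPEND */ into big_copy select * from source_table;
commit;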

First I selected from the table with storage indexes off: 2.02 seconds.  2.02 seconds?  That was 1.7 billion rows, and 158GB of data.  Wow.

Next I selected from the table with storage indexes on: it took 9.02 seconds.  The extra 7 seconds was the Exadata building the storage indexes.  Then I selected again: 0.30 seconds.  No indexes anywhere.
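
One way to toggle storage indexes for this kind of off/on comparison is the undocumented cell parameter below (a hedged sketch; treat it as test-system-only):

-- disable storage indexes for this session, time the query, then re-enable
alter session set "_kcfis_storageidx_disabled" = true;
select /*+ full(t) */ count(*) from big_copy t;

alter session set "_kcfis_storageidx_disabled" = false;
select /*+ full(t) */ count(*) from big_copy t;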

I am going to do some more testing, but this certainly is impressive!!

Saturday, April 16, 2011

pin x wait on s wait event

Chances are you've stumbled onto my website because you are searching for the keywords "pin x wait on s", "wait", "event", and "parsing".

Well, I have been fighting a fire for the last 2 weeks in our performance environment on just these "keywords".

We have been doing our testing by sending through a set workload with very similar processing needs.  This workload is date sensitive, so it has been a challenge to keep moving dates and retesting. Recently we have begun utilizing "flashback database" for our testing.  This has great promise for getting consistent testing done!
You run through your test case scenario and capture your performance data.  Then you flash back to the start point, make the change, and run through the same exact scenario. Pretty cool, huh?

Well, all this was going as planned.  We ran our processing, saved the AWR data, and reran.  Suddenly we started seeing these "cursor: pin S wait on X" events holding up processing for minutes at a time. Where did these come from?  Was it the release??  We had to dig in and find out.

Well here is some background.
  • Our queries are very, very complex.  They often take ~1000ms (1 second) to parse, and 4ms to execute.
  • Flashback will flush your cache; kind of obvious, but pertinent to my situation.
  • Our processing is very structured, and sends multiple processing streams through the same steps in the same order.
So what was happening?  Well, it turns out this is caused by hard parsing multiple copies of the same sql (probably 50+ in our case) simultaneously.  This wait event is one session hard parsing the sql statement while all the others wait for it.  This parse queuing doesn't do well with sql of this complexity, and it throws off the parsing, causing these wait events.
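
If you are chasing the same event, this hedged sketch is how I'd find the parse blocker (on 11g the blocking session's SID is packed into the high-order half of P2 for this event):

-- sessions stuck behind a hard parse, and the session holding the mutex
select sid, sql_id, p1 mutex_hash, trunc(p2/4294967296) blocking_sid from v$session where event = 'cursor: pin S wait on X';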

Just wanted to pass this on: if you see this event and you are doing similar testing, look at the complexity of the queries, and consider simultaneous hard parsing of the same sql as the culprit.