Tuesday, April 19, 2011

HCC compression

In my free time this week I have been playing on the Exadata.  The first thing I wanted to determine was the compression ratio for my data, and how selects perform against it.

One of the first observations I made was how much HCC saves you on selects. I took a 158 GB table from another system and loaded it up in HCC query mode (with a direct-path load). The table is now 6 GB in size. WOW, but that didn't impress me as much as my select.
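
Roughly, the load looked like the sketch below. The table names are made up, but COMPRESS FOR QUERY and the direct-path (APPEND) load are the pieces that matter, since HCC only compresses data that arrives via direct path.

    -- Create the target table with HCC query compression (names are hypothetical)
    CREATE TABLE big_fact_hcc
      COMPRESS FOR QUERY HIGH
      AS SELECT * FROM big_fact WHERE 1 = 0;

    -- HCC only applies to direct-path loads, hence the APPEND hint
    INSERT /*+ APPEND */ INTO big_fact_hcc SELECT * FROM big_fact;
    COMMIT;

    -- Compare segment sizes to get the compression ratio
    SELECT segment_name, ROUND(bytes/1024/1024/1024, 1) AS gb
    FROM   user_segments
    WHERE  segment_name IN ('BIG_FACT', 'BIG_FACT_HCC');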

First I selected from the table with storage indexes off: 2.02 seconds. 2.02 seconds? That was 1.7 billion rows that used to be 158 GB of data. Wow.

Next I selected from the table with storage indexes on: it took 9.02 seconds. The extra 7 seconds was the Exadata building the storage indexes. Then I selected again: 0.30 seconds. With no conventional indexes.
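
If you want to reproduce this kind of test, the sketch below shows the two knobs I mean. Turning storage indexes off for a baseline is usually done with an undocumented parameter, so treat that part as at-your-own-risk on a test system; the statistic at the end shows how much physical I/O the cells skipped thanks to storage indexes.

    -- Undocumented parameter commonly used to disable storage indexes for a
    -- baseline; test systems only
    ALTER SESSION SET "_kcfis_storageidx_disabled" = TRUE;
    -- ... run the baseline query ...
    ALTER SESSION SET "_kcfis_storageidx_disabled" = FALSE;
    -- ... run the query again with storage indexes back on ...

    -- How much physical I/O did storage indexes let the cells skip in this session?
    SELECT n.name, s.value
    FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name = 'cell physical IO bytes saved by storage index';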

I am going to do some more testing, but this certainly is impressive!

Saturday, April 16, 2011

pin x wait on s wait event

Chances are you've stumbled onto my website because you are searching for the keywords "pin x wait on s", "wait", "event", and "parsing".

Well, I have been fighting a fire for the last 2 weeks in our performance environment on just these "keywords".

We have been doing our testing by sending through a set workload with very similar processing needs.  This workload is date sensitive, so it has been a challenge to keep moving dates and retesting. Recently, we have begun utilizing "flashback database" for our testing. This has great promise for getting consistent testing done!
You run through your test case scenario and capture your performance data.  You flash back to the start point, make the change, then run through the exact same scenario. Pretty cool, huh?
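
The flow is basically a guaranteed restore point plus a flashback between runs; something along these lines (the restore point name is just an example):

    -- Before the first run: mark the starting point
    CREATE RESTORE POINT before_perf_run GUARANTEE FLASHBACK DATABASE;

    -- ... run the workload, capture AWR, make your change ...

    -- Rewind so the next run starts from the identical state
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_perf_run;
    ALTER DATABASE OPEN RESETLOGS;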

Well, all this was going as planned.  We ran our processing, saved AWR data, and reran.  Suddenly we started seeing these "cursor: pin S wait on X" events holding up processing for minutes at a time. Where did these come from? Was it the release? We had to dig in and find out.

Well here is some background.
  • Our queries are very, very complex.  They often take ~1000 ms to parse (a full second), and only 4 ms to execute.
  • Flashing back (and the instance restart that goes with it) empties your caches, including the shared pool; kind of obvious, but pertinent to my situation.
  • Our processing is very structured, and sends multiple processing streams through the same steps in the same order.
So what was happening?  Well, it turns out that this is caused by hard parsing multiple copies of the same SQL (probably 50+ in our case) simultaneously.  This wait event is Oracle trying to parse the first SQL statement and making all the others wait.  This parse queuing doesn't do well with SQL of this complexity, and it throws off the parsing, causing these wait events.
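
If you want to watch it happen, a quick look at v$session shows everyone stacked up behind the one session doing the hard parse; this is just a sketch using the standard 11g columns:

    -- Who is stuck on the mutex, what SQL they want, and who is blocking them
    SELECT sid, blocking_session, sql_id, seconds_in_wait
    FROM   v$session
    WHERE  event = 'cursor: pin S wait on X'
    ORDER  BY seconds_in_wait DESC;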

Just wanted to pass this on: if you see this event, and you are doing similar testing, look at the complexity of the queries, and consider simultaneous hard parsing of the same SQL as the culprit.


Friday, April 8, 2011

Exadata class

This week I attended the "Exadata and Database Machine Administration" class.  It was a good class, but classes from OU usually are.

This class covers a lot of material in 3 days, and it is a good intro to what it's going to be like managing an Exadata.  Take heed of the prerequisites.  They aren't mandatory, but knowing about RAC and grid is necessary to understand how Exadata implements those pieces.

For those who might attend, my advice is to do more with dbcontrol.  The class doesn't say much about dbcontrol or grid control.

Well, to start at the beginning, the class is in a virtual environment.  Each student has their own virtual 1/4 rack with 1 CPU per node.  This works out well because everyone has their own isolated environment.  This environment has a database with the default SH schema.

In order to get the most out of the class:
1) I changed the database passwords for sys, system, dbsnmp, and sysman.
2) I recreated the password file.
3) I installed dbcontrol (via dbca).
4) I set up ssh keys for celladmin and cellmonitor to go to each cell from the oracle account.  Be aware that to set up ssh for cellmonitor, you need to copy the keys as root and change the permissions; the cellmonitor account has very few privileges.
5) I created my own account in ASM and granted SYSASM to it (see the sketch after this list).
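
For items 1 and 5, the SQL side boils down to something like this (the passwords and the account name are obviously placeholders):

    -- Item 1: change the default passwords (run as SYS on the database)
    ALTER USER sys    IDENTIFIED BY "Sys_Passw0rd";
    ALTER USER system IDENTIFIED BY "System_Passw0rd";
    ALTER USER dbsnmp IDENTIFIED BY "Dbsnmp_Passw0rd";
    ALTER USER sysman IDENTIFIED BY "Sysman_Passw0rd";

    -- Item 5: connected to the ASM instance as SYSASM, create a named account
    CREATE USER my_asm_admin IDENTIFIED BY "Asm_Passw0rd";
    GRANT SYSASM TO my_asm_admin;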

After doing all this I was able to start up dbcontrol and connect to my virtual environment.  This gave me a great view into how the disks were configured, and I was able to add the Exadata cells to dbcontrol.

I think having dbcontrol set up made the class much more useful, since I could visually see all the items newly available on the Exadata.

Tuesday, April 5, 2011

Hadoop

I know after all this Exadata posting, you are probably wondering why my blood doesn't run black and red.

Well, I've been hearing about Hadoop for years, and I think it's finally about time that I set up a Hadoop environment.

If you have never heard of Hadoop, here is a starting point.

There are also 2 very interesting related projects.

Cassandra - a distributed database that provides HA through clustering, with no single point of failure
Hive          - a data warehouse layer that lets you run SQL-style queries over data stored in Hadoop

The thing that grabbed my attention is the ability to handle large amounts of log data.  Like most major companies, we produce unwieldy amounts of log files from our application server farms with no way to sift through them.  It seems like a lot of people are utilizing Hadoop to organize and search log files.
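
To give a taste of what that looks like, Hive lets you lay a table definition over raw log files already sitting in HDFS and query them with familiar SQL. The layout and path below are made up; it's just a sketch of the idea:

    -- HiveQL: expose tab-delimited application logs in HDFS as a table
    CREATE EXTERNAL TABLE app_logs (
      log_time  STRING,
      severity  STRING,
      message   STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/logs/appservers/';

    -- Then sift through the pile with an ordinary aggregate
    SELECT severity, COUNT(*) FROM app_logs GROUP BY severity;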

Interesting subject that I will be pursuing further. If anyone has any experience or words of wisdom on this, pass it on.

Sunday, April 3, 2011

Configuring an Exadata (lessons learned)

Well, the time is finally here. 

Twas the night before Exadata,
and all through the data warehouse,
Not a keyboard was stirring,
Not even a mouse

ACS was all snug in their hotel beds,
while visions of "ONE" scripts danced in their heads.
And DBA's with laptops, and I with my smart phone,
had just settled in to see the hype all get blown.

When out in the server room there arose such a clatter,

I sprang from the bed to see what was the matter.
Away to the laptop I flew like a flash,
Tore open the lid and started up bash.

The nodes on the cabinet of  the new server appliance
Gave the heat of mid-day to everything near the servers in the alliance.
When, what to my wondering eyes should appear,
But a man with a sweater drinking a beer.
He was a sailing pro with the magic of a fairy,

I knew in a moment it must be Larry.

More rapid than eagles his crew they came,
And he whistled, and shouted, and called them by name!

Now Dan Norris! now, Kerry Osborne! now, .....

Well you get the picture..

Anyway, here are my lessons learned.  These are mostly the things that would have helped us get from purchase to ACS coming on site a little faster.

1) IP addresses. Yes, the Exadata needs lots of IP addresses.  Here is what you need for an X2-2 full rack.


Ethernet Subnet 1 - 51 IP addresses
  ILOM for database servers: 8
  ILOM for Exadata cells: 14
  Eth0 for database servers: 8
  Eth0 for Exadata cells: 14
  Mgmt ports for IB switches: 3
  KVM: 1
  Ethernet switch: 1
  PDUs: 2
Ethernet Subnet 2 - 19 IP addresses
  Eth1 for database servers: 8
  VIPs for database servers: 8
  SCAN addresses (per cluster): 3
Total: 70

2) Naming - The naming convention for a server name is used for all the components within the Exadata.  Even the disks themselves include the name of the server, so you can track down any issues.  That's not saying that your standard host name isn't usable. It just means that you give Oracle one name, and it is the building block for everything else.

3) Default database - Surprisingly, the default database is probably going to be useless to you (unless you happen to want exactly the configuration Oracle provides). Oracle creates a default database with default parameters and a default characterset.  Anything other than that, and you are on your own.
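
A quick sanity check is to look at what characterset the delivered database actually got, for example:

    -- What characterset did the default database get created with?
    SELECT parameter, value
    FROM   nls_database_parameters
    WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');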

4) Backup.  If you've read my previous posts you have probably found that this is the most confusing area.  Each database server comes with four 1GbE ports and two 10GbE ports.  If you are using 10GbE you are all set: bond and aggregate, and you are in business. For InfiniBand you just use the two InfiniBand ports.
If you are still on 1GbE, like a lot of datacenters, you can't easily make the backup network redundant.  Eth0 is reserved for management.  Eth1 and eth2 are usually bonded for active-passive redundancy. This leaves one port for your tape backup, with no redundancy.  This is one of the most important things if you are planning on using 1GbE: make sure you understand how you are going to configure the Exadata, and that you most likely will not have redundancy for backup.

Finally, I'll pass on two of my favorite comments to make on all this.

"Buying an exadata is like putting a window airconditioner in your 1920's house" -- This is from a time when I had a house with original wiring.. My wife would try to blow dry her air with the airconditioner on, and the fuse would blow.. If you put an exadata in your datacenter running 1ge , you will blow a fuse.

"Buying an Exadata doesn't make things easier.. It is easier to be told we can't afford it, it's more difficult if they buy it.  It's like telling your mangement that you need a ferarri to go fast, and they say yes.. now drive it 320MPH without crashing." -This should be pretty self explainitory.  There are strong expectations from managment when you buy one of these.

Friday, April 1, 2011

New free Oracle products are out

Oracle XE 11g beta just came out today, SQL Developer 3.0 came out, and Oracle SQL Developer Data Modeler came out a couple of months ago. All free.




Wednesday, March 30, 2011

Configuring an Exadata (part III)

Well, the time has come to finally get the Exadata configured.   We are coming down to the end, and we are still figuring out the network connections.  The problem is the lack of 1GbE ports.

The Exadata comes with four 1GbE ports, one of which is reserved for management (patching, monitoring, etc.).  It also comes with two 10GbE ports.  This is where the fun begins.  Our standard is to bond 2 ports together for public (active-passive) for redundancy, then bond 2 ports for backup, aggregating them to get 2 Gb speed and have redundancy for the backup.  How do we do this with 3 ports?  This leaves us with 3 choices.

1) Take the two 10GbE ports and funnel them down to two 1GbE ports.  Bond and aggregate these 2 ports together, and we have our 2 TAN ports.  We would be non-standard, and the only ones doing this as far as I know.

2) Disable the management services, and utilize the 2 other 1GbE ports for TAN.  This means two 1GbE ports bonded for public, and 2 ports for TAN bonded and aggregated.  Again non-standard.

3) Utilize two of the 1GbE ports bonded for public, 1 management port, and only 1 TAN port.  This would be standard but is the least desirable.

In looking at the documentation, it states:

When connecting the media servers to the Database Machine through Ethernet, connect the eth3 interfaces from each database server directly into the data center network. For high availability, multiple network interfaces on the database servers and multiple network interfaces on the media server can be bonded together. In this configuration, configure the eth3 interface as the preferred or primary interface and configure eth2 as the redundant interface.


If throughput is of a concern then connect both eth2 and eth3 interfaces from each database server directly into the data center’s redundant network. The two interfaces can then be bonded together in a redundant and aggregated way to provide increased throughput and redundancy.
But this certainly doesn't explain what it means to bond eth2 and eth3. Is Oracle suggesting not bonding public and utilizing 2 of the 3 available ports for TAN, or are they suggesting backing up over the LAN?

In any case, this whole network configuration of the Exadata has been very confusing.