Sunday, April 3, 2011

Configuring an Exadata (lessons learned)

Well, the time is finally here. 

Twas the night before Exadata,
and all through the data warehouse,
Not a keyboard was stirring,
Not even a mouse

ACS was all snug in their hotel beds,
While visions of "ONE" scripts danced in their heads.
And DBAs with laptops, and I with my smart phone,
had just settled in to see the hype all get blown.

When out in the server room there arose such a clatter,

I sprang from the bed to see what was the matter.
Away to the laptop I flew like a flash,
Tore open the lid and started up bash.

The nodes in the cabinet of the new server appliance
Gave the heat of mid-day to everything near the servers in the alliance.
When, what to my wondering eyes should appear,
But a man in a sweater drinking a beer.
He was a sailing pro with the magic of a fairy,

I knew in a moment it must be Larry.

More rapid than eagles his crew they came,
And he whistled, and shouted, and called them by name!

Now Dan Norris! now, Kerry Osborne! now, .....

Well, you get the picture.

Anyway, here are my lessons learned. These are mostly the things that would have helped us get from purchase to ACS coming on site a little faster.

1) IP addresses. Yes, the Exadata needs lots of IP addresses. Here is what you need for an X2-2 full rack:


Ethernet Subnet 1 - 51 IP addresses:
  - ILOM for Database Servers: 8
  - ILOM for Exadata Cells: 14
  - Eth0 for Database Servers: 8
  - Eth0 for Exadata Cells: 14
  - Mgmt ports for IB Switches: 3
  - IP address for KVM: 1
  - IP address for Ethernet Switch: 1
  - IP addresses for PDUs: 2

Ethernet Subnet 2 - 19 IP addresses:
  - Eth1 for Database Servers: 8
  - VIPs for Database Servers: 8
  - SCAN Addresses (per Cluster): 3

Total: 70

2) Naming - The naming convention for a server name is used for all the components within the Exadata. Even the disks themselves include the name of the server, so you can track down any issues. That's not to say your standard host name isn't usable; it just means that you give Oracle one name, and it is the building block for everything else.
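For illustration only (this prefix is made up; your real names come out of the configuration worksheet you fill in for Oracle): hand over the name dm01, and you end up with database servers dm01db01 through dm01db08, storage cells dm01cel01 through dm01cel14, and matching ILOM names such as dm01db01-ilom, all derived from that single prefix.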

3) Default database - Surprisingly, the default database is probably going to be useless to you (unless you happen to want exactly the configuration Oracle provides). Oracle creates a default database with default parameters and the default character set. Anything other than that and you are on your own.
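If you want a quick sanity check before deciding whether to keep or drop the default database, the character set is easy to verify (a plain data dictionary query, nothing Exadata-specific):

select parameter, value
from nls_database_parameters
where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');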

4) Backup - If you've read my previous posts, you have probably found that this is the most confusing area. Each database server comes with 4 1GbE ports and 2 10GbE ports. If you are using 10GbE you are all set: bond and aggregate, and you are in business. For InfiniBand you just use the 2 InfiniBand ports.
If you are still on 1GbE, like a lot of data centers, you can't easily make the backup network redundant. Eth0 is reserved for management, and eth1 and eth2 are usually bonded for active-passive redundancy. That leaves one port for your tape backup, with no redundancy. This is one of the most important things to understand if you are planning on using 1GbE: make sure you know how you are going to configure the Exadata, and that you most likely will not have redundancy for backups.

Finally, I'll pass on two of my favorite comments to make on all this.

"Buying an exadata is like putting a window airconditioner in your 1920's house" -- This is from a time when I had a house with original wiring.. My wife would try to blow dry her air with the airconditioner on, and the fuse would blow.. If you put an exadata in your datacenter running 1ge , you will blow a fuse.

"Buying an Exadata doesn't make things easier.. It is easier to be told we can't afford it, it's more difficult if they buy it.  It's like telling your mangement that you need a ferarri to go fast, and they say yes.. now drive it 320MPH without crashing." -This should be pretty self explainitory.  There are strong expectations from managment when you buy one of these.








Friday, April 1, 2011

New free Oracle products are out

Oracle XE 11g beta just came out today, SQL Developer 3.0 came out, and Oracle SQL Developer Data Modeler came out a couple of months ago. All free.




Wednesday, March 30, 2011

Configuring an Exadata (part III)

Well, the time has come to finally get the Exadata configured. We are coming down to the end, and we are still figuring out the network connections. The problem is the lack of 1GbE ports.

Each database server in the Exadata comes with 4 1GbE ports, 1 of which is reserved for management (patching, monitoring, etc.). It also comes with 2 10GbE ports. This is where the fun begins. Our standard is to bond 2 ports together for public (active-passive) redundancy, then bond 2 more ports for backup, aggregating them to get 2GbE of throughput with redundancy for the backup network. How do we do this with 3 available ports? This leaves us with 3 choices.

1) Take the 2 10GbE ports and funnel them down to 2 1GbE ports. Bond and aggregate these 2 ports together, and we have our 2 TAN (backup network) ports. We would be non-standard, and the only ones doing this as far as I know.

2) Disable the management services and use that port plus the remaining 1GbE port for TAN. This means 2 1GbE ports bonded for public, and 2 ports for TAN, bonded and aggregated. Again, non-standard.

3) Use 2 1GbE ports bonded for public, 1 management port, and only 1 TAN port. This would be standard, but it is the least desirable option.

Looking at the documentation, it states:

When connecting the media servers to the Database Machine through Ethernet, connect the eth3 interfaces from each database server directly into the data center network. For high availability, multiple network interfaces on the database servers and multiple network interfaces on the media server can be bonded together. In this configuration, configure the eth3 interface as the preferred or primary interface and configure eth2 as the redundant interface.


If throughput is of a concern then connect both eth2 and eth3 interfaces from each database server directly into the data center’s redundant network. The two interfaces can then be bonded together in a redundant and aggregated way to provide increased throughput and redundancy.
But this certainly doesn't explain what it means to bond eth2 and eth3. Is Oracle suggesting not bonding public and using 2 of the 3 available ports for TAN, or are they suggesting backing up over the LAN?

In any case, this whole network configuration of the Exadata has been very confusing.





Thursday, March 24, 2011

Duplicating an ODI interface module

Here I am on day 4 of my ODI class, and I am on my quest to copy all the wrh$ performance data to a central repository. I think after this day in class I have all the tools to create jobs to do this.

Of course, being a curious, reuse-conscious individual, I tried to recycle some of the code. Specifically, I tried to export an interface to an XML file, do a replace-all of the table name to the next table, then import the interface with the new name. Everything looked good with the mapping until I looked at the name of the primary key: it still had the primary key name from the original interface. This means there must be some "hooks" from the interface XML document to other related objects in the repository database. Oh well.

It looks like for now I will be creating interfaces for the objects I need to pull into my repository.

I have been very impressed with the flexibility of the product, and the way I can easily reuse it to add another source system. Since I'm going to be pulling from 15+ sources, flexibility is important.

I'm also going to be using APEX as the front end for all this data. With some simple tools like ODI and APEX, a DBA type can do some serious reporting!

Wednesday, March 23, 2011

ADG with ODI and Exadata

Recently I've been taking a class on ODI. It is really a very interesting ELT tool (notice I didn't say ETL). I am planning on using it to take data from my ADG (Active Data Guard) copy of production to another server. Perfect, right? Pull from a read-only copy of an Oracle database into another database. I picked my LKM (Load Knowledge Module) of Oracle to Oracle. Unfortunately, the current knowledge module creates a view on the source. As you can imagine, with ADG this is impossible. The only way to get ODI working against ADG is to create your own knowledge module, so I've been spending my evenings creating my very own. I am hoping this can help others who are running into the same issue. First, this is a great site explaining HOW to create your very own knowledge module:
http://www.oracle.com/technetwork/articles/bethke-odi-090881.html
This is a great site to find all the syntax you need.
http://gerardnico.com/doc/odi/webhelp/en/index.htm#ref_api/
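For context, this is roughly the failure the stock DBLINK module runs into on an ADG standby (the view and table names here are made up for illustration):

-- Run against the standby, which is open read-only:
create view C$_0WRH_SQLSTAT as select * from sys.wrh$_sqlstat;
-- ORA-16000: database open for read-only access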

Finally, these are the steps I took to make my own knowledge module.

1) Copy the Oracle to Oracle (DBLINK) module.
2) Give it a new name, like Oracle to Oracle (ADG).
3) Remove the following steps:

- 70 create view/table on source
- 80 Create temp indexes on source
- 150 drop view on source
- 160 Drop temp indexes

4) Change the "drop synonym on target" step to "drop view on target":

drop synonym <%=odiRef.getTable("L", "COLL_NAME", "W")%>

becomes

drop view <%=odiRef.getTable("L", "COLL_NAME", "W")%>

5) Last change: "create synonym on target" becomes "create view on target".

create synonym <%=odiRef.getTable("L", "COLL_NAME", "W")%>
for <%=odiRef.getTable("R", "COLL_NAME", "W")%>

becomes

<% if ((odiRef.getOption("ENABLE_EDITION_SUPPORT")).equals("0")) { %>
create or replace view <%=odiRef.getTable("L", "COLL_NAME", "W")%>
(
<%=odiRef.getColList("", "[CX_COL_NAME]", ",\n\t", "", "")%>
)
as select <%=odiRef.getPop("DISTINCT_ROWS")%>
<%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "")%>
from <%=odiRef.getSrcTablesList("", "[SCHEMA].[TABLE_NAME]@", ", ", "")%><%=odiRef.getInfo("SRC_DSERV_NAME")%>
where (1=1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>
<% } else { %>
create table <%=odiRef.getTable("L", "COLL_NAME", "W")%> as
select <%=odiRef.getPop("DISTINCT_ROWS")%>
<%=odiRef.getColList("", "[COL_NAME]\t[CX_COL_NAME]", ",\n\t", "", "")%>
from <%=odiRef.getSrcTablesList("", "[SCHEMA].[TABLE_NAME]@", ", ", "")%><%=odiRef.getInfo("SRC_DSERV_NAME")%>
where (1=1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>
<%}%>



As you can see, the idea is to remove any updates to the source and switch the synonym on the target to a view pointing at the source over the database link.
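To make it concrete, here is roughly what the first branch generates at run time once the odiRef variables are filled in (the work schema, column, table, and database link names below are all made up for illustration):

create or replace view ODI_WORK.C$_0WRH_SQLSTAT
(
    C1_SNAP_ID,
    C2_SQL_ID,
    C3_EXECUTIONS_DELTA
)
as select
    SNAP_ID,
    SQL_ID,
    EXECUTIONS_DELTA
from SYS.WRH$_SQLSTAT@PROD_ADG
where (1=1)

The target now simply selects from a local view that reads the source tables over the database link, so nothing is ever created on the read-only ADG side.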

And some advice: if you are using the simulation button to test the new knowledge module, the getInfo calls only contain data at run time. The simulation will show nothing for them, and you will only see the real values when you actually execute (I lost about an hour on that one).

Enjoy. I am posting my actual XML knowledge module here.

The usual disclaimers apply: test well. I also want to point out that I only changed the Oracle to Oracle knowledge module. If you are going from Oracle to Netezza, for example, you need to make the appropriate changes to that knowledge module.

I am including another article I found on a knowledge module that doesn't create the link:
http://www.business-intelligence-quotient.com/?p=1187

Tuesday, March 22, 2011

My quest to consolidate AWR data

I am still embarking on my quest to consolidate all the AWR data from all the databases into a central performance database.

My first thought, to use a simple CDC tool (GoldenGate), failed. GoldenGate will not replicate SYS objects. Boo.

I am in class for ODI this week, so my current plan is to use ODI to replicate the data from all my sources to a single target.
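If the ODI route works out, each per-source load should boil down to something like this (the central schema, table, and database link names here are made up for illustration; the real statements will come out of the knowledge module):

-- Pull any new SQL statistics snapshots from one source into the central repository
insert into perf_repo.awr_sqlstat_hist
  (snap_id, dbid, instance_number, sql_id, plan_hash_value,
   executions_delta, elapsed_time_delta)
select snap_id, dbid, instance_number, sql_id, plan_hash_value,
       executions_delta, elapsed_time_delta
from   dba_hist_sqlstat@src_db1
where  snap_id > (select nvl(max(snap_id), 0)
                  from   perf_repo.awr_sqlstat_hist
                  where  dbid = (select dbid from v$database@src_db1));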

So far so good, and I will update on how things go with my quest to consolidate reporting data.

If this goes well with ODI, I will also use it to consolidate tablespace sizing data, etc., from all my databases. Wahoo!

Sunday, March 20, 2011

Large SGAs and what they mean for the future of databases

Well, first off, I don't have the answer to this one, just some musings.

I've noticed that memory on servers has gotten bigger lately, or cheaper, depending on how you look at it. Case in point: the X2-8 Exadata, with 1TB of memory per database node. Add the new 11gR2 ability to parallelize across nodes without having to keep passing blocks, and you have close to 2TB of available memory.

So what does this mean for a database? What does this mean for disk I/O? What is a database doing if the blocks are all in memory? Essentially you are writing out changed blocks and logging, and that's it.

So what do you need a big disk array for?

Then, with all the really awesome IP-based disk arrays out there (like the Isilon), what is the future of disk? Like many companies, we are still running 4Gb Fibre Channel for all our servers, connecting to a SAN array. Should we go to 8Gb Fibre Channel or 10Gb IP?

I would be interested in opinions on what people see as the future of disk: IP, Fibre Channel, or FCoE? How important is the speed of a disk array going to be? Just put your redo logs on SSD (or flash cache?).

Update:

I just saw that Arup Nanda posted some writing on this topic.  You can read it here.  He basically said that because of consistent read and other mechanisms, you might find that your database objects are in the cache multiple times, using much more of your buffer cache than you probably realize.
He recommends using a specialized in-memory database (like TimesTen) to make sure everything is in memory.
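If you want to see the effect he describes on your own system, a rough query like this against V$BH (assuming you have access to it) counts how many copies of the same block are sitting in the buffer cache:

select objd, file#, block#, count(*) as copies
from   v$bh
where  status != 'free'
group  by objd, file#, block#
having count(*) > 1
order  by copies desc;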