Wednesday, March 30, 2011

Configuring an Exadata (part III)

Well, the time has come to finally get the Exadata configured. We are coming down to the end, and we are still figuring out the network connections. The problem is the lack of 1ge ports.

The Exadata comes with 4 1ge ports, one of which is reserved for management (patching, monitoring, etc.). It also comes with 2 10ge ports. This is where the fun begins. Our standard is to bond 2 ports together for public (active-passive) for redundancy, then bond 2 ports for backup, aggregating them to get 2ge speed while keeping redundancy for the backup. How do we do this with 3 ports? This leaves us with 3 choices.

1) Take the 2 10ge ports and funnel them down to 2 1ge ports. Bond and aggregate these 2 ports together, and we have our 2 TAN ports. We would be non-standard, and the only ones doing this as far as I know.

2) Disable the management services and utilize the other 2 1ge ports for TAN. This means 2 1ge ports bonded for public, and 2 ports bonded and aggregated for TAN. Again non-standard.

3) Utilize 2 1ge ports bonded for public, 1 management port, and only 1 TAN port. This would be standard, but the least desirable.
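For reference, the active-passive bonding described above is standard Linux bonding. A public bond per option 3 would look roughly like the fragment below (device names, addresses, and file contents are made up for illustration; on a real Exadata the configuration scripts generate these files):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  -- hypothetical public bond
DEVICE=bond0
IPADDR=10.1.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth2  -- first slave
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth3  -- second slave
DEVICE=eth3
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```

The backup (TAN) bond would be the same idea but with mode=802.3ad (LACP) to get aggregation as well as redundancy.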

In looking at the documentation, it states:

When connecting the media servers to the Database Machine through Ethernet, connect the eth3 interfaces from each database server directly into the data center network. For high availability, multiple network interfaces on the database servers and multiple network interfaces on the media server can be bonded together. In this configuration, configure the eth3 interface as the preferred or primary interface and configure eth2 as the redundant interface.


If throughput is of a concern then connect both eth2 and eth3 interfaces from each database server directly into the data center’s redundant network. The two interfaces can then be bonded together in a redundant and aggregated way to provide increased throughput and redundancy.
But this certainly doesn't explain what it means to bond eth2 and eth3. Is Oracle suggesting not bonding public and utilizing 2 of the 3 available ports for TAN, or are they suggesting backing up over LAN?

In any case this whole network configuration of the Exadata has been very confusing.





Thursday, March 24, 2011

Duplicating an ODI interface module

Here I am, day 4 of my ODI class, still on my quest to copy all the WRH$ performance data to a central repository. I think after this day in class I have all the tools to create jobs to do this.

Of course, being a curious, recycling-conscious individual, I tried to reuse some of the code. Specifically, I tried to export an interface to an XML file, do a replace-all of the table name to the next table, then import the interface with the new name! Everything looked good with the mapping, until I looked at the name of the primary key. It still had the primary key name from the original interface. This means there must be some "hooks" from the interface XML document to other related objects in the database. Oh well.

It looks like for now I will be creating interfaces for the objects I need to pull into my repository.

I have been very impressed with the flexibility of the product, and the way I can easily reuse it to add another source system. Since I'm going to be pulling from 15+ sources, flexibility is important.

I'm also going to be using APEX as the front end for all this data. With some simple tools like ODI and APEX, a DBA type can do some serious reporting!

Wednesday, March 23, 2011

ADG with ODI and Exadata

Recently I've been taking a class on ODI. It is really a very interesting ELT tool (notice I didn't say ETL). I am planning on using it to take data from my ADG (Active Data Guard) copy of production to another server. Perfect, right? Pull from a read-only copy of an Oracle database into another database. I pick my LKM (Load Knowledge Module) of Oracle to Oracle. Unfortunately, the current knowledge module creates a view on the source. As you can imagine, with ADG this is impossible, since the source database is open read-only. The only way to get ODI working against ADG is to create your own knowledge module, so I've been spending my evenings creating my very own. I am hoping this can help others who are running into the same issue. First, this is a great site explaining HOW to create your very own knowledge module...
http://www.oracle.com/technetwork/articles/bethke-odi-090881.html
This is a great site to find all the syntax you need.
http://gerardnico.com/doc/odi/webhelp/en/index.htm#ref_api/

Finally, these are the steps I took to make my own knowledge module.

1) Copy the Oracle to Oracle (DBLINK) module.
2) Give it a new name, like Oracle to Oracle (ADG).
3) Remove the following steps:

- 70 create view/table on source
- 80 Create temp indexes on source
- 150 drop view on source
- 160 Drop temp indexes

4) Change the "drop synonym on target" step to "drop view on target"

drop synonym <%=odiRef.getTable("L", "COLL_NAME", "W")%>

becomes

drop view <%=odiRef.getTable("L", "COLL_NAME", "W")%>

4) Change the "drop synonym on target" to "drop view on target"

drop synonym <%=odiRef.getTable("L", "COLL_NAME", "W")%>becomes

drop view <%=odiRef.getTable("L", "COLL_NAME", "W")%>

5) Last change: "create synonym on target" becomes "create view on target"

create synonym <%=odiRef.getTable("L", "COLL_NAME", "W")%>
for <%=odiRef.getTable("R", "COLL_NAME", "W")%>

becomes

<% if ((odiRef.getOption("ENABLE_EDITION_SUPPORT")).equals("0")) { %>
create or replace view <%=odiRef.getTable("L", "COLL_NAME", "W")%>
(
<%=odiRef.getColList("", "[CX_COL_NAME]", ",\n\t", "", "")%>
)
as select <%=odiRef.getPop("DISTINCT_ROWS")%>
<%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "")%>
from <%=odiRef.getSrcTablesList("", "[SCHEMA].[TABLE_NAME]@", ", ", "")%><%=odiRef.getInfo("SRC_DSERV_NAME")%>
where (1=1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>
<% } else { %>
create table <%=odiRef.getTable("L", "COLL_NAME", "W")%> as
select <%=odiRef.getPop("DISTINCT_ROWS")%>
<%=odiRef.getColList("", "[COL_NAME]\t[CX_COL_NAME]", ",\n\t", "", "")%>
from <%=odiRef.getSrcTablesList("", "[SCHEMA].[TABLE_NAME]@", ", ", "")%><%=odiRef.getInfo("SRC_DSERV_NAME")%>
where (1=1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>
<%}%>



As you can see, the idea is to remove any updates to the source, and switch the synonym on the target to a view pointing at the source.
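To make the end result concrete, here is roughly the SQL the modified "create view on target" step would generate for a simple two-column mapping (the schema, table, and DB link names are made up for illustration):

```sql
-- Generated on the TARGET work schema: a view selecting straight
-- over the DB link, so nothing at all is created on the
-- read-only ADG source (hypothetical names throughout).
create or replace view ODI_WORK.C$_0EMPLOYEES
( C1_EMP_ID,
  C2_EMP_NAME )
as select
  EMP_ID,
  EMP_NAME
from HR.EMPLOYEES@ADG_SOURCE
where (1=1);
```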

And some advice: if you are using the simulation button to test the new knowledge module, the getInfo commands only contain data at runtime. The simulation will show nothing, and you will only see data when you actually execute (I lost about an hour on that one).

Enjoy.. I am posting my actual XML knowledge module here.

The usual disclaimers apply here... test well. I also want to point out that I only changed the Oracle to Oracle knowledge module. If you are going from Oracle to Netezza, for example, you need to make the appropriate changes to that knowledge module.

I am including another article I found on a knowledge module that doesn't create the link...
http://www.business-intelligence-quotient.com/?p=1187

Tuesday, March 22, 2011

My quest to consolidate AWR data

I am still embarking on my quest to consolidate all the AWR data from all the databases into a central performance database.

My first thought, to use a simple CDC tool (GoldenGate), failed. GoldenGate will not replicate SYS objects. Boo.

I am in class for ODI this week, so my current plan is to use ODI to replicate the data from all my sources to a single target.

So far so good, and I will update on how things go with my quest to consolidate reporting data.

If this goes well with ODI, I will use it to also consolidate tablespace sizing data, etc. from all my databases. Wahoo!

Sunday, March 20, 2011

Large SGA's and what it means to the future for databases

Well, first off, I don't have the answer to this one, just some musings.

I've noticed that memory on servers has gotten bigger lately, or cheaper, depending on how you look at it. Case in point: the X2-8 Exadata, with 1TB of memory per database node. Add the new 11gR2 ability to parallelize across nodes without having to keep passing blocks, and you have close to 2TB of available memory.

So what does this mean for a database? What does this mean for disk I/O? What is a database doing if the blocks are all in memory? Essentially you are writing out changed blocks and logging, that's it.

So what do you need a big disk array for?

Then, with all the really awesome IP-based disk arrays out there (like the Isilon), what is the future of disk? Like many companies, we are still running 4Gb fibre for all our servers, connecting to a SAN array. Should we go to 8Gb fibre or 10Gb IP?

I would be interested in opinions on what people see as the future of disk: IP, fibre, or FCoE? How important is the speed of a disk array going to be? Just put your redo logs on SSD (or flash cache?).

Update :

I just saw that Arup Nanda posted some writing on this topic. You can read it here. He basically said that because of consistent read and other mechanisms, you might find that your database objects are in the cache multiple times, using much more of your buffer cache than you probably realize.
He recommends using a specialized in-memory database (like TimesTen) to make sure everything is in memory.

Sunday, March 13, 2011

Social media and being an oracle professional

First, I have to admit I'm old. I went to college at a time when card decks were still being used. Those were simpler times. You would submit your program on a deck of punch cards, go out for the night, and get the output the next day. There were numerous advantages to this.

The professors didn't expect you to code and debug through the night. It just wasn't possible.

You couldn't be sloppy. If you didn't syntax-check and debug your code in your head, you would get lots of output back that required rework. Rework with a 24-hour turnaround time gets old fast.

There were very few resources and very few people to turn to.

Times have changed since then. There are a lot of places to find great information. Here are just a few.

LinkedIn groups. There is a great Exadata group.

Blogs. Most people who are famous in our industry blog.

Twitter - Follow the people you like on twitter.

All this social media really helps you keep up to date on what is happening. You don't have to feel all alone in figuring out whether HugePages are worth it, or how to deal with the problems of a new feature.

I have turned to these media a couple of times lately and been amazed that some of the top people have answered my questions. Most recently, Tanel Poder answered my question on HugePages, and tweeted to me that he is happy to answer any question I tweet him. Here is his answer.

Wednesday, March 9, 2011

Configuration of an Exadata Part II

After seeing how popular Configuration of an Exadata Part I was, I figured I would follow up with a Part II.

First off, there are a couple of things I'd like to point out. Along with an Exadata waiting to be configured at this point, I have a Netezza TwinFin waiting to be configured. Many of the reasons it is taking so long aren't specifically because it is an Exadata.

We did do a POC on an Exadata, and Oracle was able to bring it in quickly. It took a day or so to fill out the configuration sheet, and Oracle had the machine up within a week. One onsite person for a couple of days, and we were off and running. Although the Netezza video http://www.youtube.com/watch?v=jB7rlChCG-E is very humorous, it doesn't tell the whole picture. I would guess that if you are coming from SQL Server, which some people are, then the video is probably a little more realistic.

Anyway, as many of you can guess, a POC is nothing like bringing a new production box into your production datacenter, to be supported by your production support staff. In the POC, there was no Data Guard to instantiate, no backups to take, everyone on the POC knew the root and oracle passwords, and the machine sat in the middle of a lab.

Bringing the box into a production environment is where all the questions begin. In a company that needs approval from support teams, the Exadata brings up a lot of questions. An appliance like the Netezza brings up fewer questions, but there are still some hurdles.

At this point we are still coming up with a backup strategy that gives everyone some degree of comfort without constantly being in hot backup mode. Utilizing 1ge for backup can be very painful. The configuration worksheets are finally completed, and ACS will be scheduled to come in once we get some new switches.

In case you missed Part I in the Netezza-slams-Exadata series, it's here.

Tuesday, March 8, 2011

How to utilize AWR beyond what grid gives you

Like many of you, I use AWR reports all the time to see what is happening in the database, and especially to compare between different points in time. Whenever the users tell me the system is "running slow," I always try to find a comparison time ("what other day recently has a similar workload to today?") and run an AWR comparison between the time periods. Through Grid (or dbconsole), this can be frustrating for a RAC database. Sometimes you want to compare between nodes for the same period, or different periods across nodes. And this is only the beginning: I also want to compare across databases.

As you can guess, the reports available through Grid don't give you these choices. But guess what? DB Replay does. In order to support DB Replay, Oracle has added some pretty nifty features to AWR.

First, a non-DB-Replay feature: "colored SQL."

As many of you know, Oracle only saves the top 50 SQL of interest. Of course, you can change the number of top SQL saved with the MODIFY_SNAPSHOT_SETTINGS procedure.

dbms_workload_repository.modify_snapshot_settings(
retention IN NUMBER DEFAULT NULL,
interval IN NUMBER DEFAULT NULL,
topnsql IN NUMBER DEFAULT NULL,
dbid IN NUMBER DEFAULT NULL);
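For example, to bump the number of captured statements, a call might look like this (the retention and interval values below are just one reasonable choice, not a recommendation):

```sql
-- Keep 30 days of snapshots, snapshot hourly, capture the top 100 SQL
begin
  dbms_workload_repository.modify_snapshot_settings(
    retention => 43200,  -- minutes (30 days)
    interval  => 60,     -- minutes
    topnsql   => 100);
end;
/
```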


The problem is, what happens if you are interested in a SQL statement that isn't in the top SQL? If you have a very active system, you might find that the SQL isn't in the top 50, or even the top 100. So what is there to do? "Color the SQL." Use this procedure to mark a SQL_ID so that information about it is gathered at every snapshot, even if it isn't a top SQL. Here is the description:

Routine to add a colored SQL ID. If an SQL ID is colored, it will always be captured in every snapshot, independent of its level of activities (i.e. does not have to be a TOP SQL). Capturing will occur if the SQL is found in the cursor cache at snapshot time

dbms_workload_repository.add_colored_sql(
sql_id IN VARCHAR2,
dbid IN NUMBER DEFAULT NULL);
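So coloring a statement you care about is a one-liner (the sql_id below is just a placeholder):

```sql
-- Capture this statement in every snapshot, top SQL or not
begin
  dbms_workload_repository.add_colored_sql(sql_id => 'ab1cd2ef3gh45');
end;
/
-- and remove_colored_sql with the same sql_id undoes it later
```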

Now onto my original problem.

How do I do comparisons across instances, or even databases? Well, when I took a good look at the AWR_DIFF_REPORT_HTML function, I noticed that you can call it with multiple parameters (like instance ID and dbid).

dbms_workload_repository.awr_diff_report_html(
dbid1 IN NUMBER,
inst_num1 IN NUMBER,
bid1 IN NUMBER,
eid1 IN NUMBER,
dbid2 IN NUMBER,
inst_num2 IN NUMBER,
bid2 IN NUMBER,
eid2 IN NUMBER)
RETURN awrrpt_html_type_table PIPELINED;

If you run AWR_DIFF_REPORT_HTML with 2 different instances, you compare them against each other. Now how do you do databases? Look at Metalink note 872733.1 (yes, I still call it Metalink). This note tells you how to export the AWR information into a file and import it into a new database.
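Since it's a pipelined function, you can select the comparison right out of SQL*Plus and spool it to a file (the dbid and snapshot IDs below are placeholders):

```sql
-- Compare instance 1 vs instance 2 of the same RAC database
-- over the same snapshot range (snaps 100-110)
select output
  from table(dbms_workload_repository.awr_diff_report_html(
         1234567890, 1, 100, 110,    -- dbid1, inst1, begin/end snap
         1234567890, 2, 100, 110));  -- dbid2, inst2, begin/end snap
```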

Here is my plan to handle all this.

1) Create a centralized repository of all AWR information for all databases in the infrastructure.

2) Follow the Metalink note to export/import data into the centralized repository.

3) In the centralized repository, create a table mapping DBID to source database.

4) Configure APEX (Application Express) to pick DBIDs, instances, and snapshot ranges to run a comparison report. Display the report in APEX as a web page.
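The mapping table in step 3 could be as simple as the sketch below (table and column names are made up, just to show the idea):

```sql
-- Hypothetical DBID-to-database mapping for the central repository
create table awr_dbid_map (
  dbid        number        primary key,
  db_name     varchar2(30)  not null,
  source_host varchar2(64),
  environment varchar2(10)  -- e.g. PROD, QA
);
```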

This is my summer project, and I will let everyone know how it goes.