Wednesday, March 23, 2011

ADG with ODI and Exadata

Recently I've been taking a class on ODI. It is really a very interesting ELT tool (notice I didn't say ETL). I am planning on using it to take data from my ADG (Active Data Guard) copy of production to another server. Perfect, right ? Pull from a read-only copy of an Oracle database into another database. I picked my LKM (Loading Knowledge Module): Oracle to Oracle (DBLINK). Unfortunately, that knowledge module creates a view on the source. As you can imagine, with ADG being read-only, this is impossible. The only way to get ODI working against ADG is to create your own Knowledge Module, so I've been spending my evenings creating my very own. I am hoping this can help others who are running into the same issue. First, this is a great site explaining HOW to create your very own knowledge module...
http://www.oracle.com/technetwork/articles/bethke-odi-090881.html
This is a great site to find all the syntax you need.
http://gerardnico.com/doc/odi/webhelp/en/index.htm#ref_api/

Finally, these are the steps I took to make my own knowledge module.

1) Copy the Oracle to Oracle (DBLINK) module
2) Give it a new name, like Oracle to Oracle (ADG)
3) Remove the following steps

- 70 create view/table on source
- 80 Create temp indexes on source
- 150 drop view on source
- 160 Drop temp indexes

4) Change the "drop synonym on target" step to "drop view on target"

drop synonym <%=odiRef.getTable("L", "COLL_NAME", "W")%>

becomes

drop view <%=odiRef.getTable("L", "COLL_NAME", "W")%>

5) Last change: "create synonym on target" becomes "create view on target"

create synonym <%=odiRef.getTable("L", "COLL_NAME", "W")%>
for <%=odiRef.getTable("R", "COLL_NAME", "W")%>

becomes

<% if ((odiRef.getOption("ENABLE_EDITION_SUPPORT")).equals("0")) { %>
create or replace view <%=odiRef.getTable("L", "COLL_NAME", "W")%>
(
<%=odiRef.getColList("", "[CX_COL_NAME]", ",\n\t", "", "")%>
)
as select <%=odiRef.getPop("DISTINCT_ROWS")%>
<%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "")%>
from <%=odiRef.getSrcTablesList("", "[SCHEMA].[TABLE_NAME]@", ", ", "")%><%=odiRef.getInfo("SRC_DSERV_NAME")%>
where (1=1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>
<% } else { %>
create table <%=odiRef.getTable("L", "COLL_NAME", "W")%> as
select <%=odiRef.getPop("DISTINCT_ROWS")%>
<%=odiRef.getColList("", "[COL_NAME]\t[CX_COL_NAME]", ",\n\t", "", "")%>
from <%=odiRef.getSrcTablesList("", "[SCHEMA].[TABLE_NAME]@", ", ", "")%><%=odiRef.getInfo("SRC_DSERV_NAME")%>
where (1=1)
<%=odiRef.getFilter()%>
<%=odiRef.getJrnFilter()%>
<%=odiRef.getJoin()%>
<%=odiRef.getGrpBy()%>
<%=odiRef.getHaving()%>
<%}%>



As you can see, the idea is to remove any updates to the source, and switch the synonym on the target to a view pointing at the source.
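To make that concrete, here is roughly what the new "create view on target" step generates at run time (the work schema, table, and DB link names below are made up for illustration):

create or replace view ODI_WORK.C$_0EMPLOYEES
(
  EMP_NO,
  EMP_NAME
)
as select
  EMPNO,
  ENAME
from HR.EMPLOYEES@ADG_SOURCE_LINK
where (1=1)

No DDL ever touches the ADG standby; the target simply selects from it across the DB link.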

And some advice. If you are using the simulation button to test the new Knowledge Module, note that the "getInfo" calls only contain data at runtime.. The simulation will show nothing for them, and you will only see data when you actually execute (lost about an hour on that one).

Enjoy.. I am posting my actual XML knowledge module here.

The usual disclaimers here.. test well.. I also want to point out that I only changed the Oracle to Oracle knowledge module. If you are going from Oracle to Netezza, for example, you need to make the appropriate changes to that knowledge module.

I am including another article I found on a knowledge module that doesn't create the link..
http://www.business-intelligence-quotient.com/?p=1187

Tuesday, March 22, 2011

My quest to consolidate AWR data

I am still embarking on my quest to consolidate all the AWR data from all the databases into a central performance database.

My first thought, to use a simple CDC tool (GoldenGate), failed. GoldenGate will not replicate SYS objects. Boo.

I am in class for ODI this week, so my current plan is to use ODI to replicate the data from all my sources to a single target.

So far so good, and I will update on how things go with my quest to consolidate reporting data.

If this goes well with ODI, I will use it to also consolidate tablespace sizing data, etc. from all my databases. Wahoo!

Sunday, March 20, 2011

Large SGAs and what they mean for the future of databases

Well, first off, I don't have the answer to this one, just some musings.

I've noticed that memory on servers has gotten bigger lately, or cheaper, depending on how you look at it. Case in point: the Exadata X2-8, with 1TB of memory per database node. Then you add the new 11gR2 ability to parallelize across nodes without having to keep passing blocks, and you have close to 2TB of available memory.

So what does this mean for a database ? What does this mean for disk I/O ? What is a database doing if the blocks are all in memory ? Essentially, you are writing out changed blocks and logging, that's it.

So what do you need a big disk array for ?

Then, with all the really awesome IP-based disk arrays out there (like the Isilon), what is the future of disk ? Like many companies, we are still running 4Gb Fibre Channel for all our servers, connecting to a SAN array. Should we go to 8Gb Fibre Channel or 10Gb IP ?

I would be interested in opinions on what people see as the future of disk. IP, Fibre Channel, or FCoE ? How important is the speed of the disk array going to be ? Just put your redo logs on SSD (or flash cache ?).

Update :

I just saw that Arup Nanda posted some writing on this topic. You can read it here. He basically said that because of consistent reads and other mechanisms, you might find that your database objects are in the cache multiple times, utilizing much more of your buffer cache than you probably realize.
He recommends using a special database (like TimesTen) to make sure everything is in memory.

Sunday, March 13, 2011

Social media and being an oracle professional

First I have to admit I'm old. I went to college at a time when card decks were still being used. Those were simpler times. You would submit your program on a deck of punch cards, go out for the night, and get the output the next day. There were numerous advantages to this.

The professors didn't expect you to code and debug through the night.. It just wasn't possible.

You couldn't be sloppy. If you didn't syntax-check and debug your code in your head, you would get lots of output back that required rework. Rework with a 24-hour turnaround time gets old fast.

There were very few resources and very few people to turn to.

Times have changed since then. There are a lot of places to find great information. Here are just a few.

LinkedIn groups.. There is a great Exadata group.

Blogs. Most people who are famous in our industry blog.

Twitter - Follow the people you like on Twitter.

All this social media really helps you keep up to date on what is happening. You don't have to feel all alone in figuring out whether HugePages are worth it, or how to deal with the problems of a new feature.

I have turned to these media a couple of times lately and been amazed that some of the top people have answered my questions.. Most recently, Tanel Poder answered my question on HugePages, and tweeted that he is happy to answer any question I tweet him. Here is his answer

Wednesday, March 9, 2011

Configuration of an Exadata Part II

After seeing how popular Configuration of an Exadata Part I was, I figured I would follow up with a Part II.

First off, there are a couple of things I'd like to point out. Along with an Exadata waiting to be configured, I also have a Netezza TwinFin waiting to be configured. Many of the reasons it is taking so long aren't specific to the Exadata.

We did do a POC on an Exadata, and Oracle was able to bring it in quickly. It took a day or so to fill out the configuration sheet, and Oracle had the machine up within a week. One onsite person for a couple of days, and we were off and running. Although the Netezza video http://www.youtube.com/watch?v=jB7rlChCG-E is very humorous, it doesn't tell the whole picture. I would guess that if you are coming from SQL Server, which some people are, the video is probably a little more realistic.

Anyway, as many of you can guess, a POC is nothing like bringing a new production box into your production datacenter, to be supported by your production support staff. In the POC, there was no Data Guard to instantiate, no backups to take, everyone on the POC knew the root and oracle passwords, and the machine sat in the middle of a lab.

Bringing the box into a production environment is where all the questions begin. In a company that needs approval from support teams, the Exadata brings up a lot of questions. An appliance like the Netezza brings up fewer questions, but still there are some hurdles.

At this point we are still coming up with a backup strategy that gives everyone some degree of comfort without constantly being in hot backup mode. Utilizing 1GbE for backups can be very painful. The configuration worksheets are finally completed, and ACS will be scheduled to come in once we get some new switches.

In case you missed Part I in the Netezza-slams-Exadata series, it's here.

Tuesday, March 8, 2011

How to utilize AWR beyond what grid gives you

Like many of you, I use AWR reports all the time to see what is happening in the database, and especially to compare between different points in time.. Whenever the users tell me the system is "running slow", I always try to find a comparison time: "what other recent day has a workload similar to today's ?", and run an AWR comparison between the two periods. Through Grid (or dbconsole), this can be frustrating for a RAC database. Sometimes you want to compare between nodes for the same period, or different periods across nodes. And this is only the beginning.. I also want to compare across databases.

As you can guess, the reports available through Grid don't give you these choices. But guess what ? DBReplay does. In order to support DBReplay, Oracle has added some pretty nifty features to AWR.

First, a non-DBReplay feature: "colored SQL".

As many of you know, Oracle only saves the top 50 SQL of interest. Of course, you can change the number of top SQL saved with the MODIFY_SNAPSHOT_SETTINGS procedure.

dbms_workload_repository.modify_snapshot_settings(
retention IN NUMBER DEFAULT NULL,
interval IN NUMBER DEFAULT NULL,
topnsql IN NUMBER DEFAULT NULL,
dbid IN NUMBER DEFAULT NULL);
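
For example, to keep 30 days of snapshots at 30-minute intervals and capture the top 200 statements (the numbers here are just for illustration):

begin
  -- retention and interval are expressed in minutes: 43200 minutes = 30 days
  dbms_workload_repository.modify_snapshot_settings(
    retention => 43200,
    interval  => 30,
    topnsql   => 200);
end;
/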


The problem is, what happens if you are interested in a SQL that isn't in the top SQL statements ? If you have a very active system, you might find that the SQL isn't in the top 50, or even the top 100. So what is there to do ? "Color the SQL". Use this procedure to mark a SQL_ID so that it is captured every snapshot, even if it isn't a top SQL. Here is the description:

Routine to add a colored SQL ID. If an SQL ID is colored, it will always be captured in every snapshot, independent of its level of activities (i.e. does not have to be a TOP SQL). Capturing will occur if the SQL is found in the cursor cache at snapshot time

dbms_workload_repository.add_colored_sql(
sql_id IN VARCHAR2,
dbid IN NUMBER DEFAULT NULL);
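
Calling it is simple; the sql_id below is just a made-up example:

-- capture this statement in every snapshot, top SQL or not
exec dbms_workload_repository.add_colored_sql(sql_id => '0w2qpuc6u2zsp');

-- and when you no longer need it tracked
exec dbms_workload_repository.remove_colored_sql(sql_id => '0w2qpuc6u2zsp');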

Now onto my original problem.

How do I do comparisons across instances or even databases ? Well, when I took a good look at the AWR_DIFF_REPORT_HTML function, I noticed that you can call it with two full sets of parameters (instance number and dbid included).

dbms_workload_repository.awr_diff_report_html(
dbid1 IN NUMBER,
inst_num1 IN NUMBER,
bid1 IN NUMBER,
eid1 IN NUMBER,
dbid2 IN NUMBER,
inst_num2 IN NUMBER,
bid2 IN NUMBER,
eid2 IN NUMBER)
RETURN awrrpt_html_type_table PIPELINED;

If you run awr_diff_report_html with 2 different instances, you compare them against each other. Now how do you do databases ? Look at Metalink note 872733.1 (yes, I still call it metalink). This note tells you how to export the AWR information into a file and import it into a new database.
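
Since the function is pipelined, you can pull the comparison report straight out of SQL*Plus and spool it to a file. The dbids and snapshot ids below are made up; look yours up in DBA_HIST_SNAPSHOT:

-- compare instance 1 of one database against instance 2 of another
select output
from table(dbms_workload_repository.awr_diff_report_html(
  1111111111, 1, 100, 110,    -- dbid1, inst_num1, begin snap, end snap
  2222222222, 2, 200, 210));  -- dbid2, inst_num2, begin snap, end snap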

Here is my plan to handle all this.

1) Create a centralized repository of all AWR information for all databases in the infrastructure..

2) Follow the Metalink note to export/import data into the centralized repository.

3) In the centralized repository, create a table mapping DBID to source database (a sketch follows after this list).

4) Configure APEX (Application Express) to pick DBIDs, instances, and snapshot ranges to run a comparison report. Display the report in APEX as a webpage.
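
For step 3, a minimal version of the mapping table might look something like this (the table and column names are my own sketch, not from the Metalink note):

-- one row per source database loaded into the central repository
create table awr_dbid_map
(
  dbid      number       primary key,
  db_name   varchar2(30) not null,
  host_name varchar2(64)
);

-- example: see what snapshot ranges are available per source database
select m.db_name, s.instance_number,
       min(s.snap_id) first_snap, max(s.snap_id) last_snap
from awr_dbid_map m, dba_hist_snapshot s
where s.dbid = m.dbid
group by m.db_name, s.instance_number;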

This is my summer project, and I will let everyone know how it goes.

Tuesday, February 22, 2011

Configuring an Exadata (part I)

I am getting ready to do a presentation on "real world experiences" with the Exadata, and I am still in the process of getting the Exadata set up. I figured I would share some of the pain we are having getting the configuration checklist ready for ACS (Advanced Customer Support).. This service comes with your Exadata, and the ACS folks configure the box by installing the OS and the database software. They even set up a starter database. This is where the fun begins.

The Exadata is a beast of a box. Remember, in a full rack there are 8 database nodes and 14 storage servers, plus switches and power supplies.. And to support all this it needs lots and lots of IPs, and lots and lots of "names" for all these pieces. When setting up an Exadata you get 4 characters for a name. For many datacenters it is hard to get descriptive in those 4 characters, and fitting it all into 4 characters might not follow your standards. This is very rigid because of the "one" script.. If you've never heard that term, it is the "one" script that installs everything. Think of it as the silent install for the whole Exadata box.. Everything is driven off of it.. server names, cell disk names.. EVERYTHING. If you add it up, there are probably 200+ items in an Exadata that need to be named through this script (thus the reason the 4-character name is so important).

Same with the database. Oracle creates a simple UTF8 database, nothing more. And they create dbcontrol within the Exadata, nothing more.

If you are prepared it should be pretty simple to set up.

I think this is where the difference lies between the Exadata and a true appliance. With an appliance, you wheel it in, hook it up, and run.. With an Exadata, you need to give some thought to how to set it up. The Exadata gives you a lot of flexibility (you can run multiple applications and section them off across nodes, and you can create indexes for OLTP-type transactions), but with that flexibility comes more complexity. It's a trade-off.

Continued in part 2