Friday, December 23, 2011

Monitoring GoldenGate through SQL

I have been working on implementing GoldenGate.  GoldenGate works with Oracle, but I wanted to be able to monitor its status from within an Oracle session, and then use a database link to query that status remotely. Once I have all this in place, I want to create a web page through APEX that unions all my GG environments onto a single screen, in real time! Nifty, eh?

This is how I went about doing it..

1)  The basis for it all is the "info all" command executed within GGSCI.  The output looks like this.

GGSCI (orclbox) 10> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     S_EXTR      00:00:00      00:10:38



In this example you can see that the manager is running but the extract is stopped.  I took the output of this command and created my own command script:

ggsci_status.sh


#!/bin/bash
# Set up the Oracle environment (ORA_SID, DB_UNIQUE_NAME and PATH_ORIG
# are expected to be set by the calling environment)
export ORACLE_SID=${ORA_SID}
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ADR_HOME=/u01/app/oracle/diag/rdbms/${DB_UNIQUE_NAME}/${ORACLE_SID}
export PATH=${ORACLE_HOME}/bin:${PATH_ORIG}
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:/dbfs/gg/ggs11

# Run "info all" in ggsci and capture the output
cd /dbfs/gg/ggs11
ggsci <<EOF >/tmp/ggsci.log 2>&1
info all
EOF

# Return only the process status lines
cat /tmp/ggsci.log | grep EXTRACT
cat /tmp/ggsci.log | grep MANAGER
cat /tmp/ggsci.log | grep REPLICAT
 


The output of this script is the two lines from above that show the statuses of those processes.

MANAGER     RUNNING
EXTRACT     STOPPED     S_EXTR      00:00:00      00:10:38




So now for step 2. We have the statuses we are looking for as the output of a script. What do we do with it? External tables with a preprocessor (I believe the preprocessor clause is an 11.2 feature).

First, create a directory object within Oracle that points to the OS directory holding your script. I called mine SP_COPY.
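A minimal sketch of that step; the OS path here is just a placeholder for wherever you actually put ggsci_status.sh:

create or replace directory SP_COPY as '/dbfs/gg/scripts';
grant read, execute on directory SP_COPY to bgrenn;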

Here is how I defined the external table to read the output of the above script.



create table bgrenn.ggsci_status
( gg_type    varchar2(20),
  status     varchar2(20),
  lag_time   varchar2(20),
  checkpoint varchar2(20)
)
organization external
( type oracle_loader
  default directory SP_COPY
  access parameters
  (
    records delimited by newline
    preprocessor SP_COPY:'ggsci_status.sh'
    badfile SP_COPY:'ggsci_status.bad'
    logfile SP_COPY:'ggsci_status.log'
    fields terminated by whitespace
    missing field values are null
    ( gg_type,
      status,
      lag_time,
      checkpoint
    )
  )
  location ('ggsci_status.sh')
)
reject limit unlimited;

 



Now select against this table, and each line of the script's output appears as a row, with the fields parsed into the columns defined above (two rows for this example).
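For example, a quick local check might look like this (the output will simply reflect whatever ggsci reports at that moment):

select gg_type, status, lag_time, checkpoint
  from bgrenn.ggsci_status;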


Finally, step 3: create a database link to this database and do a select * from ggsci_status@mylink.
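A sketch of that last step, run from the monitoring database; the link name, credentials, and connect string below are placeholders for your own:

create database link mylink
  connect to bgrenn identified by mypassword
  using 'orclbox:1521/mydb';

select * from ggsci_status@mylink;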

There you go: how to dynamically show the status of GoldenGate processes through a database link from a remote database.


NOTE : If the database is a RAC cluster with GG running on only one node, you need to specify the SID to ensure you are looking at the correct node running GG.

Enjoy...

Sunday, December 18, 2011

Hadoop (again)

I know I've blogged in the past that I am working on implementing Hadoop. Here are 3 articles that should explain why.

 First
http://www.nytimes.com/2011/12/18/sunday-review/the-internet-gets-physical.html?_r=1&pagewanted=all

This is a great article from the NY Times explaining that sensors are turning up everywhere. Of course, more sensors mean more data! Lots more data. So how do we collect all this data and process it? http://www.calxeda.com/ Calxeda! These are 5-watt processors that can be scaled up to thousands of nodes (yes, I said THOUSANDS). And I know what you are saying: so what do we do with this data? So what?

Here is a great article on how to tie it all together.

http://blogs.oracle.com/datawarehousing/entry/understanding_a_big_data_implementation

So there you are. I think this is the vision of the future, and if you are not looking at these technologies and how they tie together, you are missing the next big leap happening in the IT field. Our jobs as IT specialists will become even more important as we become an integral part of our companies' business processes, whatever those businesses may be.

Saturday, December 17, 2011

FTS vs Index scan on Exadata

I loaded up some data on my Exadata and created a primary key on the table. The table has about 1.8 billion rows and takes up about 186 GB of space.  I decided to do a select count on the table to see how many rows were really there.

select count(1) from sp.wxyz_detl;

The query took about 5 minutes and 36 seconds (336 seconds) .

I was astonished!  186 GB took almost 6 minutes? On an Exadata that seemed really slow, so I took a closer look at the plan.

-----------------------------------------------------------------------------------------------------
| Id  | Operation              | Name               | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                    |       |   653K(100)|          |       |       |
|   1 |  SORT AGGREGATE        |                    |     1 |            |          |       |       |
|   2 |   PARTITION RANGE ALL  |                    |  1850M|   653K  (6)| 02:10:42 |     1 |    29 |
|   3 |    INDEX FAST FULL SCAN| PIDX_WXYZ_WRKR_ID  |  1850M|   653K  (6)| 02:10:42 |     1 |    29 |
-----------------------------------------------------------------------------------------------------




Well, that looked good.  An INDEX FAST FULL SCAN should offload to the cells, right? It says "full scan"; what could be simpler?

I looked deeper at V$SQLSTATS, but it didn't show what I expected.
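This is roughly the query behind the numbers below; it is a sketch using the 11.2 V$SQLSTATS column names, which are abbreviated in the output for width:

select disk_reads, buffer_gets, executions, cpu_time, elapsed_time,
       io_cell_offload_eligible_bytes, physical_read_bytes,
       io_cell_uncompressed_bytes, io_cell_offload_returned_bytes
  from v$sqlstats
 where sql_id = '&sql_id';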

DISK_READS BUFFER_GETS EXECUTIONS      CPU_TIME  ELAPSED_TIME CELL_OFFLOAD_ELIG_BYTES PHYSICAL_READ_BYTES IO_CELL_UNC_BYTES OFFLOAD_RETURNED
---------- ----------- ---------- ------------- ------------- ----------------------- ------------------- ----------------- ----------------
 3,165,962   3,172,055          1   112,050,965   366,230,300                       0      25,935,560,704                 0                0


The offload-eligible bytes are 0, and IO_CELL_UNCOMPRESSED_BYTES is 0.

Huh ?

Now I figured I would force a full table scan and see what happens:


select /*+ full(t) */ count(1) from spr.wxyz_detl t;


-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |  1958K(100)|          |       |       |
|   1 |  SORT AGGREGATE             |           |     1 |            |          |       |       |
|   2 |   PARTITION RANGE ALL       |           |  1850M|  1958K  (3)| 06:31:40 |     1 |    29 |
|   3 |    TABLE ACCESS STORAGE FULL| WXYZ_DETL |  1850M|  1958K  (3)| 06:31:40 |     1 |    29 |
-------------------------------------------------------------------------------------------------



It looks like the cost went up and the expected elapsed time went up, so this looks like a worse plan to the optimizer. But here are the stats from v$sqlstats:

DISK_READS BUFFER_GETS EXECUTIONS      CPU_TIME  ELAPSED_TIME CELL_OFFLOAD_ELIG_BYTES PHYSICAL_READ_BYTES IO_CELL_UNC_BYTES OFFLOAD_RETURNED
---------- ----------- ---------- ------------- ------------- ----------------------- ------------------- ----------------- ----------------
 8,684,661   8,685,010          1    36,904,390    52,623,383          71,144,235,008      71,144,742,912    71,324,041,216   25,825,191,648




Wow, look at that. Using the FULL hint caused the cell offload eligibility to change, and the elapsed time is now 52 seconds.  A big change, and exactly what I wanted.

I did some searching and came across this blog post about a similar issue, though in that case the key factor was a reverse key index.

http://martincarstenbach.wordpress.com/2011/08/22/why-is-my-exadata-smart-scan-not-offloading/


This is a very simple schema (one table, one PK). The PK was necessary because we are using GoldenGate to insert the data and we don't want duplicates.

Thursday, December 15, 2011

Are my stats old for my query?

If you work someplace like I do, you hear this all the time:
"my query is running long, can you gather stats?"

Of course the person saying this brings it up because somewhere, somehow, it worked when things were reanalyzed long ago... so it's going to work again, right? It's not like any of the users are graduate students majoring in statistical analysis at a prestigious college like my wonderful wife (hi Jo).

Well, as we all know, that isn't always the answer, and I was looking for a faster way to tell: I have query X and I know the SQL_ID, but are any of the underlying statistics stale?
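(If you don't already have the SQL_ID in hand, one quick way to find it is to search v$sql on a distinctive fragment of the statement text; the LIKE pattern below is just an example.)

select sql_id, substr(sql_text,1,60) sql_text
  from v$sql
 where sql_text like '%some distinctive text%';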

Here is a great query I came up with:

set linesize 170
set pagesize 150
select table_name object_name,object_type,last_analyzed,
to_char(row_count,'999,999,999') row_count,
to_char(inserts,'999,999,999') inserts,to_char(updates,'999,999,999') updates,to_char(deletes,'999,999,999') deletes,
case when (nvl(row_count,0)/10 < nvl(inserts,0)+nvl(updates,0)+nvl(deletes,0))
then 'Y'
else 'N'
end "stale?",
case when nvl(row_count,0) = 0 then null
   else to_char(((nvl(inserts,0)+nvl(updates,0)+nvl(deletes,0))/row_count *100),'99.99') || '%'
end "%change"
from
(
select  distinct object_name table_name, 'TABLE                                  ' object_type,
        (select last_analyzed from dba_tables where table_name=object_name and owner=object_owner) last_analyzed,
        (select num_rows from dba_tables where table_name=object_name and owner=object_owner) row_count,
        (select inserts from dba_tab_modifications where table_name=object_name and table_owner=object_owner) inserts,
         (select updates from dba_tab_modifications where table_name=object_name and table_owner=object_owner) updates,
        (select deletes from dba_tab_modifications where table_name=object_name and table_owner=object_owner) deletes,
        object_name sort_column
 from sys.dba_hist_sql_plan 
where sql_id='31v8553cuza05'
and object_name in (select table_name from dba_tables where table_name=object_name)
union
select  distinct object_name ,  'INDEX on ' || (select table_name from dba_indexes where index_name=object_name and owner=object_owner)  index_name,
        (select last_analyzed from dba_indexes where index_name=object_name and owner=object_owner) last_analyzed,
        (select num_rows from dba_indexes where index_name=object_name and owner=object_owner) row_count,
        null inserts,
        null updates,
        null deletes,
        (select table_name from dba_indexes where index_name=object_name and owner=object_owner)  sort_column
 from sys.dba_hist_sql_plan 
where sql_id='31v8553cuza05'
and object_name in (select index_name from dba_indexes where index_name=object_name)
)
order by sort_column,object_type desc;



And here is the output.  You can see I organized it by object in alphabetical order.   Indexes are sorted with the tables they are on, so they get grouped together.


Here is what the output looks like.

OBJECT_NAME                     OBJECT_TYPE                             LAST_ANALYZED       ROW_COUNT    INSERTS      UPDATES      DELETES      s %change
------------------------------- --------------------------------------- ------------------- ------------ ------------ ------------ ------------ - -------
CAR_TAB                         TABLE                                   2011-11-16 03:00:12           77                                        N    .00%
PK_CAR_TAB                      INDEX on CAR_TAB                        2011-11-16 03:00:13           77                                        N    .00%
CAR_DEFD_WORK_TAB               TABLE                                   2011-11-16 03:00:13           61                                        N    .00%
PK_CAR_DEFD_WORK_TAB            INDEX on CAR_DEFD_WORK_TAB              2011-11-16 03:00:13           61                                        N    .00%
CO_CAR_TAB                      TABLE                                   2011-12-01 11:19:11       27,998           94          739            0 N   2.98%
CO_EXEC_TAB                     TABLE                                   2011-11-16 03:00:57       32,679          187            2           21 N    .64%
D$VANR_TAB                      TABLE                                   2011-12-15 15:40:53            0                                        N
DIM_CLIENT                      TABLE                                   2011-12-13 22:11:51       27,203            3           22            0 N    .09%
ELEC_CMMN_ADDR                  TABLE                                   2011-11-16 03:01:06      375,095        1,949            0          171 N    .57%
PK_ELEC_CMMN_ADDR               INDEX on ELEC_CMMN_ADDR                 2011-11-16 03:01:06      375,095                                        N    .00%
ENTY_CMMN_ADDR                  TABLE                                   2011-11-16 03:01:11    2,234,749        9,221        7,755          722 N    .79%
ENTY_CMMN_ADDR_VT               TABLE                                   2011-11-16 03:01:14    2,249,575        9,395          648          722 N    .48%
IDX_ECAV_ENCA_ID                INDEX on ENTY_CMMN_ADDR_VT              2011-11-21 16:20:10    2,252,376                                        N    .00%
MAP_AREA                        TABLE                                   2011-11-16 03:01:24        4,835           11          342            0 N   7.30%
PK_MAP_AREA                     INDEX on MAP_AREA                       2011-11-16 03:01:24        4,835                                        N    .00%
INDEP_CNTRC                     TABLE                                   2011-11-16 03:01:31       17,879          241            0           32 N   1.53%
INSR_ROST_ELIG_CLSF             TABLE                                   2011-11-16 03:01:31            0                                        N
PK_INSR_ROST_ELIG_CLSF          INDEX on INSR_ROST_ELIG_CLSF            2011-11-16 03:01:31            0                                        N
J$VANR                          TABLE                                   2011-12-15 22:03:51          212                                        N    .00%
SLVR_LKUP_VAL                   TABLE                                   2011-11-16 03:01:41        2,536           24           19            2 N   1.77%
PK_SLVR_LKUP_VAL                INDEX on SLVR_LKUP_VAL                  2011-11-16 03:01:41        2,536                                        N    .00%
OPT_VAL                         TABLE                                   2011-11-16 03:01:45          628           43           16            0 N   9.39%
PK_OPT_VAL                      INDEX on OPT_VAL                        2011-11-16 03:01:45          628                                        N    .00%
REG_NM                          TABLE                                   2011-11-16 03:02:00      257,597        2,436        2,501           44 N   1.93%
REG_NM_VT                       TABLE                                   2011-11-16 03:02:02      260,111        2,513          630           44 N   1.23%
REG_ROLE                        TABLE                                   2011-11-16 03:02:05       87,808          526          239           18 N    .89%
PK_REG_ROLE                     INDEX on REG_ROLE                       2011-11-16 03:02:05       87,808                                        N    .00%
WOMN                            TABLE                                   2011-11-16 03:02:40      642,408        1,854           52           66 N    .31%
PK_WOMN                         INDEX on WOMN                           2011-11-16 03:02:41      642,408                                        N    .00%
WOMN_ETHN_CLSS                  TABLE                                   2011-11-16 03:02:42       90,622          900            4           32 N   1.03%
WOMN_NM                         TABLE                                   2011-11-16 03:02:43      678,775        1,901           84           66 N    .30%
PROD_CPNT                       TABLE                                   2011-12-02 22:05:00        2,104                                        N    .00%
PK_PROD_CPNT                    INDEX on PROD_CPNT                      2011-12-02 22:05:00        2,104                                        N    .00%
PSTL_CMMN_ADDR                  TABLE                                   2011-11-16 03:03:03      489,200        1,868          283           62 N    .45%
PK_PSTL_CMMN_ADDR               INDEX on PSTL_CMMN_ADDR                 2011-11-16 03:03:04      489,200                                        N    .00%
REF_CTRY                        TABLE                                   2011-10-25 22:02:19          260            0           21            0 N   8.08%
REF_CONV_FREQ_TYPE              TABLE                                   2011-10-26 22:01:53            8                                        N    .00%
REF_ST                          TABLE                                   2011-12-13 22:14:10           72                                        N    .00%
SNP_CDC_SUBS                    TABLE                                   2011-12-15 22:01:23            2                                        N    .00%
PK_SNP_CDC_SBS                  INDEX on SNP_CDC_SUBS                   2011-12-15 22:01:23            2                                        N    .00%
TCMN_ADDR                       TABLE                                   2011-11-16 03:03:30      628,266        4,826          219          284 N    .85%
PK_TCMN_ADDR                    INDEX on TCMN_ADDR                      2011-11-16 03:03:30      628,266                                        N    .00%
TRUC_IDFN                       TABLE                                   2011-11-16 03:03:38      471,413        3,392        4,050           84 N   1.60%
TRUC_IDFN_VT                    TABLE                                   2011-11-16 03:03:40      548,277        4,471        1,458           96 N   1.10%
VANR                            TABLE                                   2011-12-15 10:43:22      309,110           47          101            0 N    .05%
PK_VANR                         INDEX on VANR                           2011-12-15 10:43:23      309,110                                        N    .00%
VANR_EMPT_STUS                  TABLE                                   2011-11-16 03:04:43      689,725        3,098           23           54 N    .46%
VANR_EMPT_STUS_RESN_DT          TABLE                                   2011-11-16 03:04:44      477,062        2,414           21           40 N    .52%
VANR_PROD_CFG                   TABLE                                   2011-11-16 03:05:38      292,458        1,564          279           24 N    .64%
VANR_VT                         TABLE                                   2011-11-16 03:05:52      335,413        2,476          303           42 N    .84%
WV_VANR_ID_IDX                  INDEX on VANR_VT                        2011-12-13 13:05:54      337,452                                        N    .00%
VANR_WORK_CATG                  TABLE                                   2011-11-16 03:05:52      159,673        1,356            2           19 N    .86%




I'm sure I'm going to find this very useful the next time I get that question. It also nicely pinpoints any objects that you should immediately consider analyzing.

Of course, you need to understand your application to really read this completely, especially with updates. Are they updating an indexed column? Did a massive update just change the number of distinct values, or the range of values, for an indexed column? Or were the updates just to an "update date" column that isn't used for anything except audits?

Lastly, it doesn't tell you anything about how the statistics were gathered (histograms or not, which columns, etc.).

Do not use this as the absolute truth, but at least it will help point you in the right direction.
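One last tip: the insert/update/delete counts come from dba_tab_modifications, which is only flushed from memory periodically. If you want those counts to be current before running the query above, you can flush the monitoring info first (assuming you have the privilege to do so):

exec dbms_stats.flush_database_monitoring_info;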

Monday, December 5, 2011

ORA-600 [kck_rls_check must use (11,0,0,0,0) or lower] during db duplicate

I just wanted to share this, since when I searched for this error I didn't find anything on how to get around it.

BACKGROUND - I am running a database duplicate through Grid Control for an 11.2.0.2 database.  Every time it goes to create the standby database, the copy fails with the following errors in the log:

  connected to auxiliary database (not started)
  released channel: tgt1
  released channel: tgt2
  RMAN-00571: ===========================================================
  RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
  RMAN-00571: ===========================================================
  RMAN-03002: failure of Duplicate Db command at 12/03/2011 23:49:25
  RMAN-05501: aborting duplication of target database
  RMAN-03015: error occurred in stored script Memory Script
  RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
  RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
  


I was finally able to get around it. The bug has to do with compression compatibility, and the Metalink note on this bug says to run:

alter system  set "_compression_compatibility"="11.2.0";

The problem is that this needs to be done on the NEW copy of your database. I was able to do this by setting up a script that repeatedly runs:

export ORACLE_SID=<>
sqlplus "/ as sysdba" @setparm
sleep 1
sqlplus "/ as sysdba" @setparm

sleep 1
..
..




cat setparm.sql
alter system set "_compression_compatibility"="11.2.0";
exit



I let this run with a couple of thousand of those sqlplus/sleep entries on the new database server.  Within about 5 minutes it managed to set the init parameter at the right point during the duplicate process, and the duplicate continued on normally.
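If you want to confirm the hidden parameter actually took on the new instance, a quick check (run as SYS) against the usual x$ksppi/x$ksppcv join looks something like this; treat it as a sketch:

select ksppinm parameter, ksppstvl current_value
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and ksppinm = '_compression_compatibility';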

If you see this error during your database duplicate, that is the best way I have found to handle it.








Monday, November 28, 2011

Calxeda

Have you heard of Calxeda? If not, here is a little information on it:

http://www.calxeda.com/technology/architecture

When talking about big data and the future of data centers, Calxeda has to be in the center of the conversation.  Calxeda represents the idea of true parallelism in a small form factor.  HP is going forward with this product in their project "Moonshot".
http://h17007.www1.hp.com/us/en/iss/110111.aspx
With these low-power modules you can fit 2,800 servers in a single rack.  Amazing, but these numbers are even more amazing:

•89% less energy
•94% less space
•63% less cost


So why am I talking about this?  This type of processor will cause a revolution in our field (again). This architecture will allow processing to scale out at a very reasonable cost.  Going wide and going parallel in a true MPP environment will be possible, at low power and low cost.  WOW.

So what is this going to do to databases, especially for a company like Oracle? Well, I would guess a 2,800-node RAC cluster would be beyond what most people would consider reasonable.  I believe OLTP is going to remain the way it is now: RAC clusters, usually scaled more horizontally than vertically.

But what about everything else, the non-OLTP systems that may not need to be ACID compliant?  Parallelism on another kind of database.  Hadoop is really gathering steam, and Oracle has stepped onto the bus.
Here is Oracle's latest white paper on data warehousing.

http://www.oracle.com/us/products/database/big-data-for-enterprise-519135.pdf



Oracle definitely has its sights on Hadoop being part of its "data warehouse appliance".

Keep your eyes open and follow what is happening with this trend of lots of small, low-power servers.  It certainly is affecting my planning (I am a technical planner, after all).  I am making sure all the systems I design are ready to go fully parallel when the time comes.






Saturday, November 19, 2011

Big Data



I wanted to write a little bit about big data, since that is such a common topic these days. 

First, the definition. What is considered big data?  I had a conversation with a very smart thinker who defined it as "when the data is so big you have to put a lot of thought into how to deal with it".  In other words, it is data so large that your normal techniques no longer work on it.  Large could be 20 TB, 200 TB, or petabytes; it's about how the data relates to your infrastructure.

The next question is why? Why now?

I think the big thing that has happened in the last couple of years is the explosion of smart phones, faster technology, and cellular data communications.  There is a huge amount of data generated from these sources, and probably one of the biggest ingredients in this realm is location. Think about the GPS built into your smart phone (or the ability to triangulate your location from cell phone towers). This adds a whole new dimension to the equation.  From your smart phone, data is generated in more detail than was ever possible before.  You are in a store shopping for an item, and you check the price on Amazon through your phone.  There is a ton of data in that "transaction". It is now possible to mine data about where you were when you looked at the item online. Did you buy it online? When did you go shopping for it?
There is also a desire to mine the data that you already have.  You might have order data, but that data is growing, and the desire to derive value from it is growing too. At some point this forces you to look at solutions to handle the problem.
There is a huge proliferation of data from both traditional sources and all these new sources of information.  There are so many new devices out there that are internet aware.  Look at the new Coke vending machines that let you customize your flavor: data about your choices and the machine's health is available to be monitored.

So the next topic is to define the two types of "big data".


Structured Data - a set schema for the data.

Unstructured Data – No set schema for the data.

Structured Data

First let's talk about structured data.  This data typically comes from your current SQL database (Oracle, DB2, MySQL, SQL Server, etc.).  Data is created against a set schema and is then often joined together. This is the most common way to store traditional data like customer/order information. I think we can all relate to the data model.




  
This is a very simple diagram, but you get the picture: there are multiple tables, and there are relationships between them. You most likely have historical data collected over time, and you might want to take all this data and find patterns.

Unstructured Data

A great example of this is resumes.  Let's say you are LinkedIn and you have millions of resumes.  The resumes are not in a specific format, and some key words may be difficult to interpret.  This is especially true in the computer field, where so many names are derived from other things. Look at the skill set of someone who is versed in a lot of current technology like Red Hat, Python, and Java: if you go searching through resumes naively, you might get a snake keeper who likes specific colored hats, and coffee.  You need to figure out how to take this data and make it useful using natural language processing or other such means.

So how do you handle these two types of data?

Structured Data Solutions

I think there are many solutions that have come onto the market in this area, and I'm sure you have heard of them: Exadata, Teradata, Netezza, Greenplum, Vertica, and Aster Data.  Some of these solutions have been around for years and are still at the forefront of the data revolution, and some are newcomers.  These are appliances that take large amounts of data and parallelize the processing to get to the answer quickly.  Teradata has been at this for years with their solutions.  These are usually MPP solutions that use local disk; they break the query workload down into pieces and then bring back the result set.  They work very well for this.

Unstructured Data Solutions

Undoubtedly the biggest player in this space is Hadoop.  Hadoop is an open-source "NoSQL" framework that can process large amounts of data.  So let's get back to our resume example. You have a million resumes, and you are looking for a specific individual who has experience with GoldenGate.  Hadoop runs across multiple servers using local disk, splitting the workload into pieces (the data distribution is sometimes called sharding).  Commodity hardware is used to accomplish this, and the solution scales because you keep adding nodes to get more performance. You would take your 1 million resumes and send them out to your "cluster". Let's say you have a 10-node cluster.  Hadoop would spread the resumes across all 10 nodes, while maintaining redundancy by putting each resume on 2 nodes in the system; after all, these are commodity servers that may break.  When you run your Hadoop "query", all 10 nodes start looking for "goldengate" in the resumes and return the results.  Breaking the work down across multiple machines is a huge advantage for scalability: if you need more processing power, you add more nodes.

Summary

This is a very interesting time for data.  The amount of data generated is skyrocketing, and the equipment used to parse through it is getting faster.  In-memory databases are becoming a reality.  All of this is causing a lot of change in the market.

There are many opportunities for companies to take advantage of this change in the market.  Most large companies are looking at solutions for both of these issues (structured and unstructured data).  Oracle has been advertising its interest in Hadoop and its intention to enter this market with a product that will handle data warehousing. This was announced at OpenWorld, but the details haven't been unveiled yet.

The one item I didn't mention in this post is the use of an in-memory database.  This type of technology is becoming more common with the advent of SAP HANA, and Oracle has now announced Exalytics to fill this space.

Analytics and big data is definitely the wave of the future.