Tuesday, November 15, 2011

configurations for multiple instances on 4 nodes


How to handle multiple databases without enough memory.


Let's say we have 2 environments that need to use the same 4-node cluster, and each environment runs the same 3 applications.  For simplicity let's call the apps


  • DBFS
  • MSTDB
  • DWDB




Now, to separate the 2 environments, let's give each environment its own set of databases: an I suffix for the Imp environment and a P suffix for the Perf environment.


  • DBFSI
  • MSTDBI
  • DWDBI
  • DBFSP
  • MSTDBP
  • DWDBP


We now have 6 databases from 2 environments that all need to be running on 4 nodes with 96 GB of memory apiece.


RECOMMENDATION


1)  Split the 4-node cluster in half.  Put the Imp systems on the first 2 nodes, and the Perf systems on the second 2 nodes.
2)  Create 3 different sets of “databases” and “instances” through srvctl.  These 3 sets all use the same set of datafiles, just with different configurations, and only 1 of the 3 will be up at any time.  This is possible by overriding the memory settings in the init file and keeping 3 sets of SIDs in the SPFILE (see the sketch after this list).
3)  Start up the appropriate databases (and instances) for the desired configuration.
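
To make step 2 concrete, here is a rough sketch for the DBFS Imp database, assuming 11gR2 srvctl syntax.  The Oracle home and SPFILE paths are made up for illustration; the database and instance names come from the table below.  Each of the 3 configurations is registered as its own srvctl resource, all pointing at the same datafiles:

    # Normal 2-instance configuration
    srvctl add database -d DBFSI -o /u01/app/oracle/product/11.2.0/db_1 -p +DATA/dbfsi/spfiledbfsi.ora
    srvctl add instance -d DBFSI -i DBFSI1 -n dbnode1
    srvctl add instance -d DBFSI -i DBFSI2 -n dbnode2

    # Large 4-instance configuration (same datafiles, bigger SGA)
    srvctl add database -d LDBFSI -o /u01/app/oracle/product/11.2.0/db_1 -p +DATA/dbfsi/spfiledbfsi.ora
    srvctl add instance -d LDBFSI -i LDBFSI1 -n dbnode1
    srvctl add instance -d LDBFSI -i LDBFSI2 -n dbnode2
    srvctl add instance -d LDBFSI -i LDBFSI3 -n dbnode3
    srvctl add instance -d LDBFSI -i LDBFSI4 -n dbnode4

    # Small 1-instance configuration
    srvctl add database -d SDBFSI -o /u01/app/oracle/product/11.2.0/db_1 -p +DATA/dbfsi/spfiledbfsi.ora
    srvctl add instance -d SDBFSI -i SDBFSI1 -n dbnode1

The shared SPFILE then carries SID-prefixed memory settings, so each set of instances comes up with the right SGA.  Something like:

    ALTER SYSTEM SET sga_target=20G SCOPE=SPFILE SID='DBFSI1';
    ALTER SYSTEM SET sga_target=20G SCOPE=SPFILE SID='DBFSI2';
    ALTER SYSTEM SET sga_target=70G SCOPE=SPFILE SID='LDBFSI1';
    ALTER SYSTEM SET sga_target=70G SCOPE=SPFILE SID='LDBFSI2';
    ALTER SYSTEM SET sga_target=70G SCOPE=SPFILE SID='LDBFSI3';
    ALTER SYSTEM SET sga_target=70G SCOPE=SPFILE SID='LDBFSI4';
    ALTER SYSTEM SET sga_target=4G SCOPE=SPFILE SID='SDBFSI1';

The full set of databases, instances, and SGA sizes: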


Database    SGA    Instances            Nodes
DBFSI       20g    DBFSI1-DBFSI2        dbnode1/dbnode2
LDBFSI      70g    LDBFSI1-LDBFSI4      dbnode1-dbnode4
SDBFSI      4g     SDBFSI1              dbnode1
MSTDBI      20g    MSTDBI1-MSTDBI2      dbnode1/dbnode2
LMSTDBI     70g    LMSTDBI1-LMSTDBI4    dbnode1-dbnode4
SMSTDBI     4g     SMSTDBI1             dbnode2
DWDBI       20g    DWDBI1-DWDBI2        dbnode1/dbnode2
LDWDBI      70g    LDWDBI1-LDWDBI4      dbnode1-dbnode4
SDWDBI      4g     SDWDBI1              dbnode2
DBFSP       20g    DBFSP1-DBFSP2        dbnode3/dbnode4
LDBFSP      70g    LDBFSP1-LDBFSP4      dbnode1-dbnode4
SDBFSP      4g     SDBFSP1              dbnode3
MSTDBP      20g    MSTDBP1-MSTDBP2      dbnode3/dbnode4
LMSTDBP     70g    LMSTDBP1-LMSTDBP4    dbnode1-dbnode4
SMSTDBP     4g     SMSTDBP1             dbnode4
DWDBP       20g    DWDBP1-DWDBP2        dbnode3/dbnode4
LDWDBP      70g    LDWDBP1-LDWDBP4      dbnode1-dbnode4
SDWDBP      4g     SDWDBP1              dbnode4




OK, now that I have 3 sets of the 6 databases registered, what do the actual configuration choices look like?


Normal configuration showing memory usage (SGA per node, in GB)


Database    dbnode1    dbnode2    dbnode3    dbnode4
DBFSI       20         20         -          -
MSTDBI      20         20         -          -
DWDBI       20         20         -          -
DBFSP       -          -          20         20
MSTDBP      -          -          20         20
DWDBP       -          -          20         20
Total       60         60         60         60
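
Bringing up the normal configuration is then just a matter of starting the 6 base databases (hypothetical commands, assuming the srvctl registrations sketched earlier):

    srvctl start database -d DBFSI
    srvctl start database -d MSTDBI
    srvctl start database -d DWDBI
    srvctl start database -d DBFSP
    srvctl start database -d MSTDBP
    srvctl start database -d DWDBP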


Perf Isolated testing of DWDB (SGA per node, in GB)


Database    dbnode1    dbnode2    dbnode3    dbnode4
DBFSI       20         20         -          -
MSTDBI      20         20         -          -
DWDBI       20         20         -          -
SDBFSP      -          -          4          -
SMSTDBP     -          -          -          4
LDWDBP      -          -          70         70
Total       60         60         74         74


Perf Full testing of DWDB (SGA per node, in GB)


Database    dbnode1    dbnode2    dbnode3    dbnode4
SDBFSI      4          -          -          -
SMSTDBI     -          4          -          -
SDWDBI      -          4          -          -
SDBFSP      -          -          4          -
SMSTDBP     -          -          -          4
LDWDBP      70         70         70         70
Total       74         78         74         74




You can see that with this configuration, it is possible to carefully manage the memory usage of each database.  The above examples can be used to make any one of the databases span the whole cluster, while the others sit on one node in a small configuration.
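
For example, moving from the normal configuration to the full Perf test of DWDBP in the last table could look something like this (again just a sketch, assuming the srvctl registrations from earlier):

    # Stop all 6 normal-configuration databases
    srvctl stop database -d DBFSI
    srvctl stop database -d MSTDBI
    srvctl stop database -d DWDBI
    srvctl stop database -d DBFSP
    srvctl stop database -d MSTDBP
    srvctl stop database -d DWDBP

    # Restart everything except DWDBP in the small 4g configuration
    srvctl start database -d SDBFSI
    srvctl start database -d SMSTDBI
    srvctl start database -d SDWDBI
    srvctl start database -d SDBFSP
    srvctl start database -d SMSTDBP

    # Give DWDBP the whole cluster in its large 70g configuration
    srvctl start database -d LDWDBP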

Sunday, November 6, 2011

Fancy new Disk array technology

Well first off, I need to put a big disclaimer down.  These are my opinions, and my opinions only.  They do not reflect the opinions of my employer, my spouse, or my dog.

I was watching some Twitter updates go by and this blog post caught my eye.
http://chucksblog.emc.com/chucks_blog/2011/10/shifts-happen.html

This blog was talking about new disk technology, and part of it covered the idea of FAST technology.  If you haven't heard of FAST (this is the EMC name; I'm sure other vendors have their own flavors), it is disk technology that moves blocks to the best tier of storage automagically.  Really!  The idea is that you buy an array with 3 different tiers of disk: Flash, Fibre Channel, and SATA.  The disk array learns the data access patterns and moves the data to the appropriate tiers.  Sounds great, right?  It does make sense.
Let's take an example.  Let's say that you are a supplier and you supply parts to 100,000 small businesses.  You keep 5 years of historical data on their orders for reference, and whenever they place a new order you look back at their latest orders to find patterns.

Following this workload, you can guess what happens.  The current data for your customers stays on Fibre Channel (everything starts on Fibre Channel), the old data gets migrated to SATA, and your customer master data will most likely go to Flash.  All well and good.  Even though customers only order every month, their recent activity gets moved to a higher tier of disk, and all that old history gets moved to SATA.

Now let's throw in a physical standby using Data Guard.

With Data Guard, the standby is constantly writing new blocks, but those blocks are never read (this is a cold standby).  If you mix this data with other applications that are busy, all of the standby database's data is surely going to end up on SATA over time.  This makes perfect sense to the array's algorithms: the data is never accessed, so as far as the array is concerned, SATA is exactly where your standby belongs.

Bang... sinking feeling... wham... You do a failover.

Now let's see what happens.  All your data is on SATA.  You are now accessing it, trying to give your customers the same performance they are used to, and your system is slow.  You have 100,000 businesses that access data over the course of the month.  How long do you think it takes to move all that data from SATA to Flash or Fibre Channel?  It could take quite a while for the array to learn the new patterns, and during this time your old primary (now the standby) has its data pattern changing too: its data is getting migrated to SATA.  You stay at your alternate site for a month, fail back, and guess what... WHAM again.  The disk array has to learn the pattern all over again.

As I said, this is all conjecture, and solely my opinion.