Sunday, March 20, 2011

Large SGAs and what they mean for the future of databases

Well, first off, I don't have the answer to this one, just some musings.

I've noticed that memory on servers has gotten bigger lately, or cheaper, depending on how you look at it. Case in point: the Exadata X2-8, with 1 TB of memory per database node. Add the new 11gR2 ability to parallelize across nodes without having to keep passing blocks between them, and you have close to 2 TB of usable memory.
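For what it's worth, the cross-node piece is 11gR2's in-memory parallel execution, which kicks in when you set PARALLEL_DEGREE_POLICY to AUTO. A minimal sketch, assuming a test 11gR2 instance (big_table is just a made-up name):

    -- Enable automatic DOP; in 11gR2 this also turns on in-memory
    -- parallel execution, where PX slaves read from the buffer caches
    -- across the RAC nodes instead of doing direct path reads
    ALTER SYSTEM SET parallel_degree_policy = AUTO;

    -- A full scan of a large table can now be served from memory
    -- spread over the cluster (big_table is hypothetical)
    SELECT /*+ PARALLEL */ COUNT(*) FROM big_table;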

So what does this mean for a database? What does it mean for disk I/O? What is a database doing if the blocks are all in memory? Essentially you are writing out changed blocks and logging redo, and that's it.
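One way to sanity-check that on your own system is to compare logical I/O to physical I/O; if the working set really fits in the SGA, physical reads should be a tiny fraction of the gets. A quick look at the instance-wide numbers since startup:

    -- Logical I/O (db block gets + consistent gets) vs. physical I/O
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('db block gets', 'consistent gets',
                    'physical reads', 'physical writes', 'redo size');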

So what do you need a big disk array for?

Then, with all the really awesome IP-based disk arrays out there (like the Isilon), what is the future of disk? Like many companies, we are still running 4Gb Fibre Channel for all our servers, connecting to a SAN array. Should we go to 8Gb Fibre Channel or 10Gb IP?

I would be interested in opinions on what people see as the future of disk: IP, Fibre Channel, or FCoE? How important is the speed of a disk array going to be? Just put your redo logs on SSD (or flash cache?).
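If you do try redo on SSD, the mechanics are just adding new log groups on the fast device and dropping the old ones once they go inactive. A rough sketch; the paths and sizes here are made up, so try it on a scratch instance first:

    -- Add new redo log groups on the SSD volume (paths are hypothetical)
    ALTER DATABASE ADD LOGFILE GROUP 4 ('/ssd/redo04a.log') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/ssd/redo05a.log') SIZE 1G;

    -- Switch out of the old groups, then drop each one once it is INACTIVE
    ALTER SYSTEM SWITCH LOGFILE;
    SELECT group#, status FROM v$log;
    ALTER DATABASE DROP LOGFILE GROUP 1;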

Update:

I just saw that Arup Nanda posted some writing on this topic. You can read it here. He basically said that because of consistent read and other mechanisms, you might find that your database objects are in the cache multiple times, using much more of your buffer cache than you probably realize.
He recommends using a specialized in-memory database (like TimesTen) if you need to make sure everything is in memory.
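His point about multiple copies is easy to see for yourself, since the buffer cache can hold several versions (consistent-read clones) of the same block. Something like this against v$bh shows it, if you have DBA access on a busy instance:

    -- Blocks with more than one copy (e.g. CR clones) in the buffer cache
    SELECT objd, file#, block#, COUNT(*) AS copies
    FROM   v$bh
    WHERE  status != 'free'
    GROUP  BY objd, file#, block#
    HAVING COUNT(*) > 1
    ORDER  BY copies DESC;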
