Wednesday, September 28, 2011

Partitioning: Local vs Global

In my last post I talked about creating a function-based index on a GTT after my query plans went to hell following partitioning.  Someone asked me to elaborate on why the plans went south.

Well, to start with, I deal with very large tables.. well, not terribly large (250 GB, 2 billion rows).  We are in the process of partitioning this data so we can purge it.. the performance on the data is very good, but we keep eating up disk space.

Seems simple enough, right? Partition by date range, with some hash subpartitions thrown in on the column used most for lookups.  Nice and neat. At this point we have 116 segments (29 range partitions x 4 hash subpartitions).  Smaller is better, right?
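For reference, the layout was conceptually something like this. The table and column names here are made up for illustration; only the shape (range on the date column, hash on the lookup column) matches what we actually built:

-- Hypothetical sketch of the layout described above.
-- 29 date-range partitions x 4 hash subpartitions = 116 segments.
CREATE TABLE wrkr_hist
( hist_id    NUMBER         NOT NULL   -- surrogate primary key
, wrkr_id    NUMBER         NOT NULL   -- the column used most for lookups (hash key)
, cust_id    NUMBER         NOT NULL   -- nearly unique, indexed separately
, create_dt  DATE           NOT NULL   -- the purge / range-partitioning key
, payload    VARCHAR2(4000)
)
PARTITION BY RANGE (create_dt)
SUBPARTITION BY HASH (wrkr_id) SUBPARTITIONS 4
( PARTITION p_2010_q1 VALUES LESS THAN (TO_DATE('2010-04-01','YYYY-MM-DD'))
, PARTITION p_2010_q2 VALUES LESS THAN (TO_DATE('2010-07-01','YYYY-MM-DD'))
  -- ... and so on, one partition per purge window
);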

Since the whole reason for doing this is being able to purge, we created local indexes on almost all the indexed columns, everything except the primary key.  Being able to maintain the partitions is critical.
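The index layout, again with hypothetical names, looked roughly like this; everything except the primary key is LOCAL so that dropping a partition leaves the indexes usable:

-- Local indexes: each gets 116 index subpartitions, one per table subpartition,
-- and they stay valid when a table partition is dropped.
CREATE INDEX pidx_wrkr_id ON wrkr_hist (wrkr_id) LOCAL;
CREATE INDEX pidx_cust_id ON wrkr_hist (cust_id) LOCAL;  -- the index that shows up in the trace below

-- The primary key is the one global (non-partitioned) index, so it is the
-- only index that has to be maintained or rebuilt at purge time.
ALTER TABLE wrkr_hist ADD CONSTRAINT wrkr_hist_pk PRIMARY KEY (hist_id);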

Doing all this, I assumed we would be OK with local indexes. The application does index lookups, and the lookups that don't use the primary key (or the hash partition key) are on columns that are close to unique.

How long can a lookup take with an index where the number of distinct values = num_rows?  Easy..
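(The "close to unique" part is easy to sanity-check from the optimizer statistics; a quick query along these lines, assuming current stats and, once again, hypothetical owner/table names:)

-- Compare the column's distinct values against the table's row count;
-- num_distinct close to num_rows means roughly one row per lookup.
SELECT c.column_name, c.num_distinct, t.num_rows
  FROM dba_tab_col_statistics c
  JOIN dba_tables t
    ON  t.owner      = c.owner
    AND t.table_name = c.table_name
 WHERE c.owner       = 'APP_OWNER'     -- hypothetical schema
   AND c.table_name  = 'WRKR_HIST'     -- hypothetical table
   AND c.column_name = 'CUST_ID';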

Then the dbreplay run came, and the queries were slower.. much slower.. the plan was similar, but the buffer gets were way off.
SQL Id d76xhcfh5dsrs  (SELECT vpcyd_wrkr_id, vpcyd_cl...)   #Plans 1st/2nd/Both: 1/1/2

                              1st run       2nd run
 % Total Gets                    0.71         25.46   (1st Total 1.42, 2nd Total 50.96, Diff 24.75)
 Gets per Exec              12,401.18    875,159.54
 #Executions                    2,389         2,518
 Exec Time (ms) per Exec          347         2,819
 CPU Time (ms) per Exec           135         2,130
 I/O Time (ms) per Exec           203           351
 Physical Reads per Exec        23.43         82.44
 #Rows Processed per Exec        9.34          9.48

The two critical values are 12,401 buffer gets per exec vs 875,159 buffer gets.. and 347 ms/exec vs 2,819 ms/exec.

Buffer gets were making a huge difference with my partitioned tables.. now to dig into the trace file.
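(For anyone who wants to do the same digging: the row source stats below came out of a standard SQL trace; one way to capture one is roughly this, level 8 to include wait events, then format it with tkprof:)

ALTER SESSION SET tracefile_identifier = 'local_idx_test';
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- ... run the slow statement here ...
ALTER SESSION SET events '10046 trace name context off';
-- then, on the database server:
--   tkprof <trace_file>.trc local_idx_test.txt sys=no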

Here is the part that really stood out: 147,157 cr (consistent read buffer gets) to get 545 rows of data.. wow..
       545        545        545                     PARTITION RANGE ALL PARTITION: 1 29 (cr=147157 pr=19 pw=0 time=1142973 us cost=232 size=0 card=1)
       545        545        545                      PARTITION HASH ALL PARTITION: 1 4 (cr=147157 pr=19 pw=0 time=1126073 us cost=232 size=0 card=1)
       545        545        545                       INDEX RANGE SCAN PIDX_CUST_ID PARTITION: 1 116 (cr=147157 pr=19 pw=0 time=1032020 us cost=232 size=0 card=

I isolated this lookup and found that it was on a "unique" key (it had no duplicate values). Why would 545 rows of data take all that time? (This was where the time was going.)

I created a small query that did an index lookup for one row, and compared partitioned vs non-partitioned.
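Conceptually the test was nothing more than this (hypothetical names again), with the numbers below pulled from V$SQLSTATS after a handful of executions against each copy of the data:

-- Single-value lookup on the nearly unique column
SELECT /* local_vs_global_test */ *
  FROM wrkr_hist
 WHERE cust_id = :b1;

-- Compare the cursor statistics for the two runs
SELECT sql_id, plan_hash_value, buffer_gets, executions,
       cpu_time, elapsed_time, avg_hard_parse_time,
       application_wait_time, concurrency_wait_time
  FROM v$sqlstats
 WHERE sql_id = 'gz67xt981w53p';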

                        Partitioned       Non-Partitioned
 SQL_ID                  gz67xt981w53p     gz67xt981w53p
 PLAN_HASH_VALUE         3,540,849,128     791,655,517
 BUFFER_GETS             7,323             32
 EXECUTIONS              6                 6
 CPU_TIME                472,928           3,999
 ELAPSED_TIME            642,455           4,473
 AVG_HARD_PARSE_TIME     625,819           2,847
 APPLICATION_WAIT_TIME   0                 0
 CONCURRENCY_WAIT_TIME   168,862           0


 

Comparing the partitioned index (116 local subpartitions) against the non-partitioned one, you can see the difference: 3,999 vs 472,928 in CPU time (the V$ time columns are in microseconds). Since the predicate doesn't include the partitioning keys, every probe of the local index has to visit all 116 subpartitions; that's the PARTITION RANGE ALL / PARTITION HASH ALL steps in the plan above. What caused me the biggest issue is that I didn't realize the application was doing this lookup in a nested loop, 54 times.. which makes the difference 36,000 vs 4,256,352. Roughly 116x longer with a local partitioned index vs a global one.

Lesson learned: with partitioning you need to balance performance against maintainability.. local indexes can be very expensive, especially on the inner side of a nested loop.
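If a column like this really has to support fast single-value lookups, the index probably needs to be global, and the purge job has to pay the maintenance cost; something along these lines (hypothetical names once more):

-- Replace the hot local index with a global one
DROP INDEX pidx_cust_id;
CREATE INDEX gidx_cust_id ON wrkr_hist (cust_id)
  GLOBAL PARTITION BY HASH (cust_id) PARTITIONS 4;  -- or simply leave it non-partitioned

-- The trade-off: partition maintenance now has to keep the global index usable
ALTER TABLE wrkr_hist DROP PARTITION p_2010_q1 UPDATE GLOBAL INDEXES;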
