I have been playing with what happens when you index HCC-compressed data. If you follow the recommended strategy, you will partition your history tables and end up with some data uncompressed, some OLTP compressed, and some in different stages of HCC compression. What happens when an index lookup spans all these levels of compression?
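To make that concrete, here is a minimal sketch of what such a mixed-compression history table might look like. The table, column, and partition names are illustrative, not from my test:

CREATE TABLE sales_history (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  -- oldest data: deepest HCC compression
  PARTITION p_2010 VALUES LESS THAN (DATE '2011-01-01') COMPRESS FOR ARCHIVE HIGH,
  -- warm data: HCC query compression
  PARTITION p_2011 VALUES LESS THAN (DATE '2012-01-01') COMPRESS FOR QUERY HIGH,
  -- recent data: OLTP compression
  PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01') COMPRESS FOR OLTP,
  -- active data: uncompressed
  PARTITION p_now  VALUES LESS THAN (MAXVALUE) NOCOMPRESS
);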
First I took a table that started at 176 GB of data (uncompressed).
OLTP compression took it down to 76 GB.
HCC query compression took it down to 6 GB.
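For reference, the compressed copies can be built with CTAS and the sizes checked from the dictionary; the table names below are placeholders, not the ones I used:

CREATE TABLE hist_oltp COMPRESS FOR OLTP
  AS SELECT * FROM hist_uncompressed;

CREATE TABLE hist_hcc COMPRESS FOR QUERY HIGH
  AS SELECT * FROM hist_uncompressed;

-- compare segment sizes in GB
SELECT segment_name, ROUND(bytes/1024/1024/1024) AS gb
FROM   user_segments
WHERE  segment_name IN ('HIST_UNCOMPRESSED', 'HIST_OLTP', 'HIST_HCC');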
Next I indexed the HCC copy. My index (on a single column) ended up being 40 GB.
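The index itself is a plain B-tree (the column name here is hypothetical). Note that B-tree indexes do not get HCC compression, which is why the index can end up dwarfing the 6 GB table:

CREATE INDEX hist_hcc_ix ON hist_hcc (cust_id);

SELECT segment_name, ROUND(bytes/1024/1024/1024) AS gb
FROM   user_segments
WHERE  segment_name = 'HIST_HCC_IX';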
Then I queried the data using the index. The first execution took longer than querying the unindexed data (with storage indexes working their magic). The second execution was much, much faster.
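If you want to reproduce the comparison, something like the following should do it; the hints and predicate are illustrative. Run the indexed version twice: the first run pays for the physical reads, while the second is served from the buffer cache in the SGA:

-- full scan: smart scan plus storage indexes do the filtering
SELECT /*+ FULL(h) */ COUNT(*)
FROM   hist_hcc h
WHERE  cust_id = 42;

-- index access: slow on first execution, fast once cached
SELECT /*+ INDEX(h hist_hcc_ix) */ COUNT(*)
FROM   hist_hcc h
WHERE  cust_id = 42;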
The lesson I learned: HCC really helps with storage, but start indexing the HCC data and you end up giving a lot of that savings back. Also, unindexed lookups can be faster than indexed ones (until the index blocks are cached in the SGA).
Interesting information to help plan what happens when I go to query across all flavors of compression.