Wednesday, January 9, 2013

Performance and Indexes with Oracle

It's been a while since I've written a post, but twice this week the same issue came up.

The story goes like this: "I have a query that is using indexes, not full table scans (FTS), but it is much slower than expected."  It seems most folks have it drummed into their heads that indexes are always fastest.

This is the statement I've made twice this week:

"The only thing worse than a FTS is an index lookup of the whole table."  I figured I would show you what I mean.

First I created two tables:

TEST_TABLE, with 76,989 rows of data and a primary key.
DRIVER, with the same rows as TEST_TABLE.

These are the two steps of the test:

1) I created DRIVER with one row and analyzed it, then deleted the row and inserted all the rows from TEST_TABLE.
2) I ran the following query, which should return every row.

select  /*+ GATHER_PLAN_STATISTICS */  count(distinct a.capture_time)
from test_table a,
     driver b
where a.pval=b.pval
and b.instance_number < 20;
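
To recreate the setup from step 1, it would look roughly like this (a sketch; the table names are from my test schema, so adjust for yours):

create table driver as select * from test_table where rownum = 1;

exec dbms_stats.gather_table_stats(user,'DRIVER');    -- stats now say DRIVER has 1 row

delete from driver;
insert into driver select * from test_table;          -- now the full row count, but stale stats
commit;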

So what happens?  Since the statistics on DRIVER say there is only one row, you can see the plan, and the actual vs. estimated rows, below.

----------------------------------------------------------------------------------------------
| Id  | Operation                       | Name       | Starts | E-Rows | A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |            |      1 |        |      1 |00:00:00.86 |
|   1 |  SORT AGGREGATE                 |            |      1 |      1 |      1 |00:00:00.86 |
|   2 |   VIEW                          | VW_DAG_0   |      1 |      1 |  77961 |00:00:01.28 |
|   3 |    HASH GROUP BY                |            |      1 |      1 |  77961 |00:00:01.01 |
|   4 |     NESTED LOOPS                |            |      1 |        |  77989 |00:00:01.00 |
|   5 |      NESTED LOOPS               |            |      1 |      1 |  77989 |00:00:00.58 |
|*  6 |       TABLE ACCESS FULL         | DRIVER     |      1 |      1 |  77989 |00:00:00.07 |
|*  7 |       INDEX UNIQUE SCAN         | PVAL_KEY   |  77989 |      1 |  77989 |00:00:00.21 |
|   8 |      TABLE ACCESS BY INDEX ROWID| TEST_TABLE |  77989 |      1 |  77989 |00:00:00.22 |
----------------------------------------------------------------------------------------------

Since the optimizer is only expecting one row to come back from DRIVER, it does an index lookup on TEST_TABLE for every row. Notice that the actual number of rows (A-Rows) is the full table.
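
The Starts / E-Rows / A-Rows output above comes from the GATHER_PLAN_STATISTICS hint. If you want to see it yourself, run this right after the query in the same session (a minimal sketch):

select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));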

Now let's look at the cost of this index lookup.


select  /*+ GATHER_PLAN_STATISTICS */  count(distinct a.capture_time)
from test_table a,
     driver b
where a.pval=b.pval
and b.instance_number < 20

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.84       0.85          0      83932          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.84       0.85          0      83932          0           1
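
The run-time stats above are in tkprof format. If you want to collect the same thing yourself, one way is a 10046 trace, roughly like this:

alter session set tracefile_identifier = 'idx_test';
alter session set events '10046 trace name context forever, level 8';

-- run the query here --

alter session set events '10046 trace name context off';

-- then, on the database server:
-- tkprof <your_trace_file>.trc idx_test.txt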


OK, so the cost of the row-by-row index lookup is 0.85 seconds of elapsed time and 83,932 buffer gets.

Now, after analyzing the DRIVER table, you can see the plan changed to an FTS.


------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Starts | E-Rows | A-Rows |   A-Time   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |            |      1 |        |      1 |00:00:00.16 |
|   1 |  SORT AGGREGATE       |            |      1 |      1 |      1 |00:00:00.16 |
|   2 |   VIEW                | VW_DAG_0   |      1 |  76723 |  77961 |00:00:00.40 |
|   3 |    HASH GROUP BY      |            |      1 |  76723 |  77961 |00:00:00.23 |
|*  4 |     HASH JOIN         |            |      1 |  76723 |  77989 |00:00:01.23 |
|*  5 |      TABLE ACCESS FULL| DRIVER     |      1 |  76723 |  77989 |00:00:00.22 |
|   6 |      TABLE ACCESS FULL| TEST_TABLE |      1 |  79110 |  77989 |00:00:00.22 |
------------------------------------------------------------------------------------


Notice that the actual rows and the estimates now match, and both tables are accessed with an FTS.

Now for the run time stats with the FTS.

select  /*+ GATHER_PLAN_STATISTICS */  count(distinct a.capture_time)
from test_table a,
     driver b
where a.pval=b.pval
and b.instance_number < 20

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.15       0.15          0       1390          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.15       0.15          0       1390          0           1

     

Wow, look at that: the FTS took 0.15 seconds compared to 0.85 seconds, and 1,390 buffer gets compared to 83,932.

The bottom line: the next time you talk to developers who think their query should be fast because it is using an index, look deeper.  The biggest clue is the estimated vs. actual rows in the plan.  The index might not be what you want; an FTS can be good.




Monday, November 12, 2012

Big Data and Privacy

I am writing an editorial (which is unusual for me). It was prompted by a conversation I had with my dad about how the world is changing with big data, and how much retirees (like him) should know about it.  So here goes.

Dear Dad,

  I know you are involved in college education for older folks who have a passion to learn.  An interesting topic would be "Big Data".  For someone outside the IT field, Big Data describes the plethora of new data that is generated in today's society.
Where does it come from?


  • Logs from web servers
  • Cell phones (including location data)
  • Search data
  • Medical data
  • Machine-generated data (like the computer in your car)
  • Sales data
All this can be tied together in new ways that most people didn't think were possible a few years ago.  My favorite example: you are walking in the mall past a store, and you get a text on your phone.  The store's computer system has texted you a 30% off offer for a new sweater, good for 2 hours.  The store knows ...
  • Your location, from your phone
  • That you have a ski trip planned for next week, from your search and purchase history
  • That you are looking for a new sweater, from your Facebook and Twitter posts
Amazing, huh?

All these things can open up miraculous possibilities. The day may come when your doctor calls you to schedule a preventive test, because medical history gathered from a large group of people has shown that others with a history similar to yours have had an issue that can be tested for and prevented. Wow, really amazing things.

Carmakers can predict something that may go wrong with your car, given all the information its computer has gathered.

Target (the department store) has even predicted when a woman is pregnant based on her purchase history.

All these things are amazing possibilities, but they are also scary.

I find this topic very exciting, but I'm sure for a lot of people these ideas are very scary. Where do you cross the line into privacy issues?

This is going to be an interesting battle. Who owns all this data? What data is public, and what data is private?

But the most interesting thing is what can be done with the data....

Me, I'm an optimist, so I can see all the good that can come out of all this new data.

At the very least, the students would have a better idea of what is going on behind the scenes in the world, and they would understand why Target thinks "grandma" is pregnant.

Saturday, November 3, 2012

ODI Monitoring scripts

I have included some useful ODI monitoring scripts below (if you want the abridged version of this blog post).

I haven't been blogging in a while (it's been crazy), but I wanted to share some information on ODI.

I have been working on trying to monitor ODI (Oracle Data Integrator).  ODI is a somewhat recent Oracle purchase, and it has a client GUI that is used by the developers.

I (like many of you) am a DBA, and we want to go into the database to see what is happening.  We either don't have access to the GUI, or we don't want access.

ODI is a great tool for transforming data.  It can be used to build SQL statements that are executed directly in the database.  This makes it a bit different from a tool like DataStage, which runs SQL remotely.

Here is the first SQL I was able to come up with.  It will tell you about the load plans that have been run. You need to qualify the tables with the owner of the ODI repository (see the note after the query).

SELECT SLI.I_LP_INST AS "Load Plan Instance #"
     , SLR.NB_RUN AS "Load Plan Run #"
     , SLI.LOAD_PLAN_NAME AS "Load Plan Name"
     , SLR.CONTEXT_CODE AS "Source System"
     , SLR.STATUS AS "Load Plan Status"
     , SLR.RETURN_CODE AS "error code"
     , CASE WHEN SLR.END_DATE IS NULL 
            THEN TRUNC(ROUND((NVL(SLR.END_DATE , SYSDATE) - SLR.START_DATE)*86400) / 3600) || ':' || 
                 LPAD(TRUNC(MOD(ROUND((NVL(SLR.END_DATE , SYSDATE) - SLR.START_DATE)*86400), 3600) / 60), 2, 0) || ':' || 
                 LPAD(MOD(ROUND((NVL(SLR.END_DATE , SYSDATE) - SLR.START_DATE)*86400), 60), 2, 0)
            ELSE TRUNC(SLR.DURATION / 3600) || ':' || LPAD(TRUNC(MOD(SLR.DURATION, 3600) / 60), 2, 0) || ':' || LPAD(MOD(SLR.DURATION, 60), 2, 0) 
       END AS "Load Time"
     , SLR.START_DATE
     , SLR.END_DATE
     , substr(to_char(SLR.START_DATE,'mm/dd/yy:hh24'),1,11) start_date_hour
  FROM SNP_LP_INST SLI
  JOIN SNP_LPI_RUN SLR ON SLI.I_LP_INST = SLR.I_LP_INST
where 'JRNL_LOAD'=sli.load_plan_name
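
For example, if the work repository is owned by a schema called ODI_REPO (a made-up name; use your own), the FROM clause would look like this:

  FROM ODI_REPO.SNP_LP_INST SLI
  JOIN ODI_REPO.SNP_LPI_RUN SLR ON SLI.I_LP_INST = SLR.I_LP_INST

If you don't know which schema owns the repository tables, this will find it:

select owner from dba_tables where table_name = 'SNP_LP_INST';

Or you can set the current schema for your session and leave the queries as written:

alter session set current_schema = ODI_REPO;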

I was able to include this in an APEX report, and it displays some of the history.

The next SQL gives you the detail of the scenarios for a load plan. The input (the :P6_SCENARIO bind variable) is the load plan instance number.



SELECT SLI.Load_plan_name as "Load Plan Name",
      SUBSTR(SLR.CONTEXT_CODE, 9, 5) AS "Source System",
      SLS.LP_STEP_NAME AS "Target Table",
      SLS.scen_name as "scenario name",
     TRUNC(SUM(SSTL.TASK_DUR) / 3600) || ':' ||
             LPAD(TRUNC(MOD(SUM(SSTL.TASK_DUR), 3600) / 60), 2, 0) || 
            ':' || LPAD(MOD(SUM(SSTL.TASK_DUR), 60), 2, 0) AS "Load Time"
     , SST.SESS_NO AS "Session Number"
     , SLSL.start_date as "Start Time"
     , SLSL.End_date as "End Time"
     , sum(sstl.nb_ins) as "Rows Inserted"
     , sum(sstl.nb_upd) as "Rows Updated"
     , sum(sstl.nb_del) as "Rows Deleted"
     , sum(sstl.nb_err) as "Rows Errors"
     , case 
        when (sum(sstl.nb_ins) + sum(sstl.nb_upd)) > 0 then trunc(sum(sstl.task_dur)/(sum(sstl.nb_ins) + sum(sstl.nb_upd)) ,4)
        else 0
       end as "Throughput"
  FROM SNP_LP_INST SLI
  JOIN SNP_LPI_STEP SLS 
                  ON SLI.I_LP_INST = SLS.I_LP_INST
  JOIN SNP_LPI_STEP_LOG SLSL
        ON SLS.I_LP_STEP = SLSL.I_LP_STEP
      AND SLS.I_LP_INST = SLSL.I_LP_INST
  JOIN SNP_SESS_TASK SST
            ON SST.SESS_NO = SLSL.SESS_NO
  JOIN SNP_SESS_TASK_LOG SSTL
            ON SSTL.SCEN_TASK_NO = SST.SCEN_TASK_NO 
           AND SST.SESS_NO = SSTL.SESS_NO
  JOIN SNP_LPI_RUN  SLR 
            on SLI.I_LP_INST = SLR.I_LP_INST
 WHERE (1=1)
  AND SLSL.I_LP_INST = :P6_SCENARIO
   AND SLS.LP_STEP_TYPE = 'RS'
--   AND SLSL.STATUS IN ('M','D')
 GROUP BY SUBSTR(SLR.CONTEXT_CODE, 9, 5),
           SLSL.start_date,SLSL.end_date,SLI.load_plan_name,
          SLS.scen_name,SLS.LP_STEP_NAME, SST.SESS_NO

Finally, the last query. It takes the session number as input (the :P7_TASK bind variable) and displays the detail for all the tasks contained in a scenario.
SELECT SST.TASK_NAME2 AS "Session Name"
     , SST.TASK_NAME3 AS "Task Name"
     , CASE WHEN SSTL.TASK_END IS NULL 
            THEN TRUNC(ROUND((NVL(SSTL.TASK_END , SYSDATE) - SSTL.TASK_BEG)*86400) / 3600) || ':' || 
                 LPAD(TRUNC(MOD(ROUND((NVL(SSTL.TASK_END , SYSDATE) - SSTL.TASK_BEG)*86400), 3600) / 60), 2, 0) || ':' || 
                 LPAD(MOD(ROUND((NVL(SSTL.TASK_END , SYSDATE) - SSTL.TASK_BEG)*86400), 60), 2, 0)
            ELSE TRUNC(TASK_DUR / 3600) || ':' || LPAD(TRUNC(MOD(TASK_DUR, 3600) / 60), 2, 0) || ':' || LPAD(MOD(TASK_DUR, 60), 2, 0) 
       END AS "Load Time"
     , substr(sst.def_context_code,9,5) "Context"
     , SSTL.TASK_BEG AS "Start Time"
     , SSTL.TASK_END AS "End Time"
     , SSTL.NB_DEL AS "Rows Deleted"
     , SSTL.NB_UPD AS "Rows Updated"
     , SSTL.NB_INS AS "Rows Inserted"
     , SSTL.NB_ERR AS "# Of Errors"
     , SST.SESS_NO 
     , sst.scen_task_no
/* UNCOMMENT TO GET SQL EXECUTED FOR THIS STEP */
  FROM SNP_SESS_TASK SST,
      SNP_SESS_TASK_LOG SSTL 
 WHERE (1=1)
   AND SST.SESS_NO =:P7_TASK   
   AND SSTL.TASK_STATUS IN ('D','M','R')
   AND SSTL.SCEN_TASK_NO = SST.SCEN_TASK_NO 
   AND SST.SESS_NO = SSTL.SESS_NO
 

Friday, October 5, 2012

Analyzing query cost example

Well,
  Here is the issue I've been dealing with.  The cost of a query doesn't stay consistent, and I was wondering if a SQL profile would help keep it stable, and whether that would make things better or worse.

  I'm sure you have all run into this.  For some reason your cardinality can't be well estimated, the optimizer changes plans on you, and you want to know if a profile will help or hurt.

  The good news is I had the perfect test case.  The query was part of a data load, and there was a driving table with a varying number of rows in it.  This is how I went about analyzing it.

First I created a table to store the results.

create table bgrenn.mytable
 (ROW_COUNT                                          NUMBER,
  PLAN_HASH_VALUE                                    VARCHAR2(15),
  COST                                               NUMBER);




Next I took a full-size copy of my driving table (TMP_DRIVERB below), and used it to create successively larger subsets (TMP_DRIVERT) inside the loop that follows.
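
Creating the full-size copy would be something like this (a sketch; DRIVER_TABLE is a placeholder for the real driving table's name):

create table BGRENN.TMP_DRIVERB as select * from BGRENN.DRIVER_TABLE;
exec dbms_stats.gather_table_stats('BGRENN','TMP_DRIVERB');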



declare


row_count number:= 0;
v_statement_id varchar2(10);
v_plan_hash_value varchar2(15);
v_cost number;
v_statement varchar2(4000);

begin

for row_count in 0..100000  loop


execute immediate 'drop table BGRENN.TMP_DRIVERT purge';
execute immediate 'create table BGRENN.TMP_DRIVERT as select * from BGRENN.TMP_DRIVERB where rownum<' ||  to_char(row_count,'9999999');

dbms_stats.gather_table_stats('BGRENN','TMP_DRIVERT');

v_statement_id := to_char(row_count,'99999');

v_statement := 'explain plan SET STATEMENT_ID = ''' || v_statement_id || '''  for ' ||
'select   * ' ||
' from BGRENN.TMP_DRIVERT TMP_DRIVERA  ' ||
' INNER JOIN BGRENN.TABA TABA ON TMP_DRIVERA.ID=TABA.ID  ' ||
' INNER JOIN BGRENN.TABB ON TABA.GCC_ID=TABA.ID AND TABA.LOCN <> 8031431  ' ||
' INNER JOIN BGRENN.TABC TABC ON TABA.ID=TABC.ID AND TABC.id = 1168583  ' ||
' INNER JOIN BGRENN.TABD TABD ON TABA.ID=TABD.ID  ' ||
' INNER JOIN BGRENN.TABE TABE ON TABD.ID=TABE.ID  ' ||
' INNER JOIN BGRENN.TABF TABF ON TABD.ID=TABF.ID  ' ||
' INNER JOIN BGRENN.TABG TABG ON TABF.ID=TABG.ID  ' ||
' INNER JOIN BGRENN.TABH TABH ON TABG.ID=TABH.ID AND TABH.SEQ_NBR < 500  ' ||
' INNER JOIN BGRENN.TABI ON TABC.ID=ID  ' ||
' INNER JOIN BGRENN.TABJ TABJ ON TABC.ID=TABJ.ID  ' ||
' INNER JOIN BGRENN.TABK TABK ON TABJ.ID=TABK.ID and TABK.id in ( 1221589, 1219009, 1191882, 1221590, 1171956)  ' ||
' LEFT OUTER JOIN ERD.TABL TABL ON TABH.ID=TABL.ID  ' ||
' LEFT OUTER JOIN ERD.TABM TABM ON TABE.ID=TABM.ID  ' ||
' where (1=1)';

dbms_output.put_line(v_statement);

execute immediate v_statement;



SELECT substr(plan_table_output,18,12) into v_plan_hash_value  FROM TABLE(dbms_xplan.display(statement_id => v_statement_id)) where rownum <2;
select cost into v_cost from plan_table where id=0 and rownum<2 and statement_id=v_statement_id;

insert into bgrenn.mytable values(row_count,v_plan_hash_value,v_cost);

delete from plan_table where statement_id=v_statement_id;
commit;
end loop;

end;
/




This produced a set of rows in BGRENN.MYTABLE with the plan hash value and cost for each driving-table row count.

I then copied the results table, installed a SQL profile on the query, and reran the loop.
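
One common way to generate and accept a profile is through the SQL Tuning Advisor; a sketch with a made-up sql_id and task name (not necessarily how I did it in this case):

declare
  l_task varchar2(64);
begin
  -- 'abcd1234efgh5' is a placeholder sql_id for the problem query
  l_task := dbms_sqltune.create_tuning_task(sql_id => 'abcd1234efgh5', task_name => 'driver_tune_1');
  dbms_sqltune.execute_tuning_task(task_name => 'driver_tune_1');
end;
/

-- review the findings, then accept the recommended profile
select dbms_sqltune.report_tuning_task('driver_tune_1') from dual;

exec dbms_sqltune.accept_sql_profile(task_name => 'driver_tune_1', name => 'driver_profile_1');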

After joining the two result tables on row count and spooling the output to a file, I created an R program and analyzed the results.
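
A join like the following would produce query_output.txt; MYTABLE_PROFILE is a placeholder name for the copy of the results table from the profiled run:

select a.row_count,
       a.cost as orig_cost,
       b.cost as new_cost
  from bgrenn.mytable a
  join bgrenn.mytable_profile b on a.row_count = b.row_count
 order by a.row_count;
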
Here is the program.


# load the psych library (for the describe function)
library(psych)

#  open file
query_data <- read.table("c:/r/data/query_output.txt", header=T)

#what are the variables
describe (query_data)


pdf("c:/r/data/querydata.pdf")
plot(query_data$ROW_COUNT,query_data$orig_cost,type='l',col="red")
lines(query_data$ROW_COUNT,query_data$new_cost,type='l',col="green")
dev.off()


And here is the output (the plot written to querydata.pdf).  The red line is the original plan, and the green line is the plan with the profile. I can see that the cost of the profiled plan remains more consistent, and is probably the better choice.

Wednesday, September 5, 2012

"enq: CR - block range reuse ckpt" and the recycle bin

I decided to write this blog entry because we ran into an issue over the weekend.  A process that normally runs fine was running slow; "like molasses" was the comment from one of the folks.  After digging into an AWR report, I found that one of the top waits for the SQL was:


enq: CR - block range reuse ckpt

OK, what is that, and why?  I didn't find a whole lot, except that it indicated contention on blocks from multiple processes trying to update them. Hmmm.

I looked further in the AWR report and saw that the top segment for logical reads was RECYCLEBIN$,

and one of the top queries was:

"delete from RecycleBin$ where bo=:1"


Well, since the process was finished, I did my own test.  I created a tablespace, created a table in it, added rows to the table, then dropped the table, over and over again, until the number of objects in the tablespace remained constant.  I then created the table again and let it grow (so the database would have to free up space from the recycle bin), and what I saw in the top 5 wait events was, again:

enq: CR - block range reuse ckpt

I wanted to document this for others who may hit it.  If a search for this wait event brought you to my blog, please check your recycle bin and make sure it isn't purging itself to make room, causing this wait event.
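
A quick way to check (a sketch; run as a DBA) is to see how much dropped-object space is sitting in the recycle bin per tablespace:

select ts_name,
       count(*)                                 as objects,
       round(sum(space) * 8192 / 1024 / 1024)   as approx_mb   -- SPACE is in blocks; assumes an 8K block size
  from dba_recyclebin
 group by ts_name;

-- If needed, the recycle bin can be emptied manually:
-- purge dba_recyclebin;       (all users, as SYSDBA)
-- purge tablespace users;     (one tablespace)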




Tuesday, September 4, 2012

Recognize the magic optimizer numbers

Well, I figured I would document some of the magic numbers that the optimizer uses, to help me remember them and to help others. The background is simple.

I was looking through a query that was running for a long, long time, and the cardinality looked wrong.  I knew the developers were using a TABLE operation (looping over a LOB that was treated like a table).

The cardinality estimate for the step was 8168, and I thought, hmmm, I've seen that before when dynamic sampling didn't happen.  Well, after some digging I came across this page: Cardinality.

The page contained the handy chart below.  These are important numbers to remember, because when you see a cardinality matching this chart it is probably because the optimizer couldn't estimate the correct cardinality and couldn't dynamically sample.  First, here is a snippet of the row source stats from the query I was investigating; notice the cardinality (card=8168) on the first line.


0  0  0  COLLECTION ITERATOR PICKLER FETCH PARSE_DYNAMIC_COLS (cr=0 pr=0 pw=0 time=0 us cost=29 size=16336 card=8168)
0  0  0  HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us cost=8757 size=233200 card=100)
0  0  0  VIEW (cr=0 pr=0 pw=0 time=0 us cost=8614 size=14 card=1)
0  0  0  HASH UNIQUE (cr=0 pr=0 pw=0 time=0 us cost=8614 size=2069 card=1)
0  0  0  FILTER (cr=0 pr=0 pw=0 time=0 us)
0  0  0  NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0  0  0  NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us cost=8613 size=2069 card=1)
0  0  0  HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=8612 size=2044 card=1)


Default cardinality for database objects

The following table shows the estimated cardinalities (using an 8K block size) of various objects that have had no statistics generated for them:

Object Type                                     Estimated Cardinality
Heap Table                                      82
Global Temporary Table                          8168
Index-Organized Table                           1
System Generated Materialized View              8168
(such as the output of the TABLE operator)
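
If you want to see the 8168 estimate for yourself, here is a quick sketch (assuming an 8K block size; sys.odcinumberlist is a collection type that ships with the database). The CARDINALITY hint shown at the end is undocumented but widely used as a workaround when dynamic sampling doesn't kick in:

explain plan for
select * from table(sys.odcinumberlist(1,2,3)) t;

select * from table(dbms_xplan.display);     -- the Rows column shows 8168

explain plan for
select /*+ cardinality(t 3) */ *
  from table(sys.odcinumberlist(1,2,3)) t;

select * from table(dbms_xplan.display);     -- the Rows column now shows 3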

Saturday, September 1, 2012

Exadata sizing updated for 3tb drives 1/2 rack SATA

OK, now I have a new Exadata coming in that has 3tb drives, and the first question asked is: how much disk do I have to configure on it?  Well, I'm going to expand on a previous entry I did on sizing.

1/2 rack, SATA drives, normal redundancy.

This means we have
  • 7 storage cells
  • Each storage cell contains 12 disks
  • Each disk is 3tb (about 2.794 tb usable; this is calculated using base 1024)
  • The first 2 disks in each storage cell have 29.103g already partitioned for the OS (which is mirrored)
  • The matching 29g slice on the remaining 10 disks in each cell is used for DBFS
Given this, I am going to calculate the total disk available, then subtract out the 29.103g slices (for the OS and DBFS).

First: 12 disks * 7 cells * 2.794tb = 234.696 tb of total raw storage.
Subtract out 29g * 2 disks * 7 cells = 406g     ----- OS
Subtract out 29g * 10 disks * 7 cells = 2.03tb  ----- DBFS
Available raw is 234.696 - 2.436 = 232.26 tb

Now, I said we were running normal redundancy. This means that we lose half:

DBFS = 1.015tb
OS   = 29g
Remaining for Data and Reco = 116.13 tb

But of course we need to account for a cell being offline.  This takes out 1/7 of the storage.

DBFS = 0.870 tb            ((29g * 10 disks * 6 cells) / 2)
Everything else = 99.54 tb ((2.765tb * 12 disks * 6 cells) / 2, where 2.765tb is the 2.794tb per disk minus the 29g slice)

So now we have 99.54 tb of storage available for Data and Reco.

This is easy to figure out now: you really have roughly 100tb (with normal redundancy already accounted for) to split up between Data and Reco.

Now a full rack is easy to do.

(2.765tb * 12 disks * 13 cells) / 2 = 215.67tb   (13 = 14 cells, minus one for a cell being offline)
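
If you want to sanity check the arithmetic, a quick query reproduces the half rack and full rack numbers above:

select round(2.765 * 12 * 6 / 2, 2)  as half_rack_data_reco_tb,   -- 99.54
       round(2.765 * 12 * 13 / 2, 2) as full_rack_data_reco_tb    -- 215.67
  from dual;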