Thursday, May 10, 2012

Analytical functions

I have been working on a query that uses an analytical function.
I'm not familiar with them, and using my usual query-tuning approach got me into some trouble.

This is the query.

WITH
SACOMMON71991 AS (select T17189.SUR_REF_TIME_PERD_ID as SUR_REF_TIME_PERD_ID,
    T17189.RTP_CAL_DT as c4,
    T17189.RTP_CAL_YR as c5,
    ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR ORDER BY
T17189.RTP_CAL_YR DESC) as c6,
    ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR,
T17189.SUR_REF_TIME_PERD_ID ORDER BY T17189.RTP_CAL_YR DESC,
T17189.SUR_REF_TIME_PERD_ID DESC) as c7
from
    REF_TIME_PERD T17189 /* Dim_TIME_PERD_D_Check_Date */ )
select * from SACOMMON71991 where SUR_REF_TIME_PERD_ID=20240213


The data in the table looks like this.

SUR_REF_TIME_PERD_ID    varchar(8)
RTP_CAL_DT              date
RTP_CAL_YR              varchar(4)

Some example values are:

SUR_REF_TIME_PERD_ID    RTP_CAL_DT    RTP_CAL_YR
19000101                01/01/1900    1900
20120101                01/01/2012    2012
...
20120125                01/25/2012    2012
...
20120430                04/30/2012    2012
...
20240213                02/13/2024    2024
...
47121231                12/31/4712    4712


Now back to my query.

WITH
SACOMMON71991 AS (select T17189.SUR_REF_TIME_PERD_ID as SUR_REF_TIME_PERD_ID,
    T17189.RTP_CAL_DT as c4,
    T17189.RTP_CAL_YR as c5,
    ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR ORDER BY T17189.RTP_CAL_YR DESC) as c6,
    ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR,T17189.SUR_REF_TIME_PERD_ID ORDER BY T17189.RTP_CAL_YR DESC,T17189.SUR_REF_TIME_PERD_ID DESC) as c7
from
    REF_TIME_PERD T17189 /* Dim_TIME_PERD_D_Check_Date */ )
select * from SACOMMON71991 where SUR_REF_TIME_PERD_ID=20240213



I noticed that the query did a full table scan of 219,000 rows.

I figured, well, I only wanted 1 row, so why can't I just change the query?

select T17189.SUR_REF_TIME_PERD_ID as SUR_REF_TIME_PERD_ID,
     T17189.RTP_CAL_DT as c4,
     T17189.RTP_CAL_YR as c5,
     ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR ORDER BY T17189.RTP_CAL_YR DESC) as c6,
     ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR, T17189.SUR_REF_TIME_PERD_ID ORDER BY T17189.RTP_CAL_YR DESC, T17189.SUR_REF_TIME_PERD_ID DESC) as c7
from 
     V_REF_TIME_PERD T17189  where SUR_REF_TIME_PERD_ID=20240213


My plan looks much better.

But the answer is different!

The first query returns:


SUR_REF_TIME_PERD_ID C4                            C5         C6         C7
-------------------- --------------------- ---------- ---------- ----------
            20240213 2024-02-13 00:00:00         2024         44          1

The second query returns:



SUR_REF_TIME_PERD_ID C4                            C5         C6         C7
-------------------- --------------------- ---------- ---------- ----------
            20240213 2024-02-13 00:00:00         2024          1          1


So what exactly is the query doing?

The query is actually analyzing more data than just my row. After examining the output, I noticed the 4th column (C6) is actually the day of the year: 2/13 is the 44th day of the year (31 days in January plus 13 in February).

So what exactly is the query doing to get this?

It starts with all rows in the table, then sorts them by RTP_CAL_YR and SUR_REF_TIME_PERD_ID. That is column C7 in the output. If we look back, we see this column is

ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR, T17189.SUR_REF_TIME_PERD_ID ORDER BY T17189.RTP_CAL_YR DESC, T17189.SUR_REF_TIME_PERD_ID DESC) as c7

This column does the sorting that sets up the result. Now if you look at C6, you see that it is a ROW_NUMBER over just the one column RTP_CAL_YR. Since the rows are already in date order (thanks to C7's sort), C6 is simply the row's position within its year, which for the value passed to the query is the day of the year.

I know it gets kind of complicated. The lesson learned is that analytical functions sometimes need to do a full table scan (FTS) to do their work. By giving it the key value within the select, I removed the query's ability to look at the full table's data, so the window functions had only my one row to work with.
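To see the mechanism in isolation, here is a minimal sketch. It builds a throwaway calendar with CONNECT BY instead of using REF_TIME_PERD, and it uses a deterministic ORDER BY on the date (the original relies on the incidental row order left behind by the C7 sort). Filtering outside the window still returns the day of the year; pushing the same predicate inside leaves the window just one row to number.

WITH days AS (
  -- hypothetical stand-in for the time dimension: every day of 2024
  SELECT DATE '2024-01-01' + LEVEL - 1 AS cal_dt
  FROM dual
  CONNECT BY LEVEL <= 366
),
numbered AS (
  SELECT cal_dt,
         ROW_NUMBER() OVER (PARTITION BY EXTRACT(YEAR FROM cal_dt)
                            ORDER BY cal_dt) AS day_of_year
  FROM days
)
SELECT * FROM numbered WHERE cal_dt = DATE '2024-02-13';

-- day_of_year = 44: the window numbered all 366 rows before the filter ran.
-- Move the WHERE clause inside "numbered" and day_of_year comes back as 1.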

I also learned that by limiting the values in the right spot, you can still eliminate rows. Here is a better query: I am limiting the analytical function to look at just the current year.


   WITH
SACOMMON71991 AS (select T17189.SUR_REF_TIME_PERD_ID as SUR_REF_TIME_PERD_ID,
    T17189.RTP_CAL_DT as c4,
    T17189.RTP_CAL_YR as c5,
    ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR ORDER BY
T17189.RTP_CAL_YR DESC) as c6,
    ROW_NUMBER() OVER (PARTITION BY T17189.RTP_CAL_YR,
T17189.SUR_REF_TIME_PERD_ID ORDER BY T17189.RTP_CAL_YR DESC,
T17189.SUR_REF_TIME_PERD_ID DESC) as c7
from
    REF_TIME_PERD T17189 /* Dim_TIME_PERD_D_Check_Date */ 
    where RTP_CAL_YR='2024')
select * from SACOMMON71991 where SUR_REF_TIME_PERD_ID=20240213

The plan goes through less data, but still the appropriate data to get the right answer.



Friday, May 4, 2012

Hyperthreading with Oracle (update)

After my first post on hyperthreading, and all the hits I've been getting, I've decided to update it.

My first post was based on testing with a DL980, an 8-socket server with Xeon X7560 @ 2.27GHz processors.

This updated post is based on a new 2-socket server with E7-2870 @ 2.40GHz processors.

The server I tested on was:

2 sockets
10 cores per socket (dual threaded)
Oracle 11.2.0.3
Linux RHEL 2.6.18-274.17.1.el5
132 GB of RAM


I tested using Kevin Closson's SLOB test, which can be found here.

I tested with multiple process settings, and you can see how the server scaled as the process count increased.

I warmed up with 15 processes. Looking at the LIO count, we are doing 7.2 million per second (output below).



Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               14.4               24.9       0.00       4.60
       DB CPU(s):               14.3               24.7       0.00       4.56
       Redo size:           13,974.5           24,137.1
   Logical reads:        7,259,470.5       12,538,708.3
   Block changes:               32.9               56.8
  Physical reads:               69.9              120.8
 Physical writes:               15.8               27.2
      User calls:                3.1                5.4
          Parses:                3.8                6.6
     Hard parses:                0.0                0.0
W/A MB processed:                0.4                0.7
          Logons:                0.1                0.2
        Executes:           28,206.8           48,719.3
       Rollbacks:                0.0                0.0
    Transactions:                0.6


Then 20 processes: 8.7 million, still looking good (output below).


Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               18.9              127.0       0.00      10.83
       DB CPU(s):               18.9              126.8       0.00      10.82
       Redo size:           15,073.6          101,207.6
   Logical reads:        8,714,693.4       58,512,431.8
   Block changes:               39.2              263.4
  Physical reads:                2.1               13.9
 Physical writes:               13.0               87.0
      User calls:                1.8               11.7
          Parses:                3.8               25.6
     Hard parses:                0.0                0.1
W/A MB processed:                0.4                2.5
          Logons:                0.1                0.4
        Executes:           33,859.7          227,341.5
       Rollbacks:                0.0                0.0
    Transactions:                0.2

Now let's try 25 and go past the number of cores: 9.4 million. Hyperthreading is scaling nicely (output below).


Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               23.1               91.7       0.00       9.26
       DB CPU(s):               23.0               91.6       0.00       9.24
       Redo size:           21,634.2           86,063.7
   Logical reads:        9,406,658.2       37,420,998.8
   Block changes:               68.2              271.4
  Physical reads:                3.7               14.8
 Physical writes:                5.7               22.5
      User calls:                2.5                9.9
          Parses:                3.3               13.3
     Hard parses:                0.1                0.3
W/A MB processed:                0.4                1.6
          Logons:                0.1                0.5
        Executes:           36,544.7          145,380.0
       Rollbacks:                0.0                0.0
    Transactions:                0.3


Next, 35. Still scaling (output below).


Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               33.5              310.9       0.00      18.74
       DB CPU(s):               33.5              310.3       0.00      18.70
       Redo size:            9,379.2           86,997.8
   Logical reads:       11,039,158.6      102,395,221.3
   Block changes:               17.7              164.4
  Physical reads:                1.3               12.0
 Physical writes:                4.0               36.9
      User calls:                1.8               16.6
          Parses:                2.0               18.9
     Hard parses:                0.1                1.0
W/A MB processed:                0.2                2.2
          Logons:                0.1                0.6
        Executes:           42,882.8          397,765.4
       Rollbacks:                0.0                0.0
    Transactions:                0.1

Finally 40, the number of threads. This appears to be the peak (output below).


Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               38.3              387.0       0.00      17.48
       DB CPU(s):               37.6              380.4       0.00      17.18
       Redo size:            9,207.1           93,060.2
   Logical reads:       11,577,951.1      117,023,088.1
   Block changes:               16.4              166.1
  Physical reads:                1.1               11.5
 Physical writes:                4.5               45.1
      User calls:                2.2               22.1
          Parses:                2.0               19.9
     Hard parses:                0.1                0.6
W/A MB processed:                0.2                2.5
          Logons:                0.1                0.9
        Executes:           44,975.6          454,587.0
       Rollbacks:                0.0                0.0
    Transactions:                0.1

Now let's go up to 45: things start dropping off (output below).


Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               40.7              482.9       0.00      16.78
       DB CPU(s):               36.0              427.5       0.00      14.86
       Redo size:            8,205.8           97,318.2
   Logical reads:       11,100,719.6      131,651,002.6
   Block changes:               14.0              165.9
  Physical reads:                0.9               10.3
 Physical writes:                3.2               38.3
      User calls:                2.4               28.8
          Parses:                2.3               27.5
     Hard parses:                0.1                0.6
W/A MB processed:                0.3                3.6
          Logons:                0.1                1.4
        Executes:           43,122.5          511,418.6
       Rollbacks:                0.0                0.0
    Transactions:                0.1




So if you look at the throughput:

20 processes    8.7 million LIOs
40 processes   11.6 million LIOs

Not quite a linear progression, but things don't really top out until I get to the number of threads. This server scaled much better, and it seems that hyperthreading is helping push more workload through.
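If you'd rather pull these LIO numbers straight out of the AWR repository than read each Load Profile by hand, something like this sketch works (standard DBA_HIST views, Diagnostics Pack required; I did not run this on the test box, so treat it as a starting point):

select s.snap_id,
       to_char(s.end_interval_time,'mm/dd hh24:mi') snap_end,
       -- delta of 'session logical reads' across each snapshot pair,
       -- divided by the snapshot interval in seconds
       -- (single dbid assumed; add dbid joins on a shared repository)
       round( (e.value - b.value) /
              ( (cast(s.end_interval_time as date)
                 - cast(s.begin_interval_time as date)) * 86400 ) ) lio_per_sec
from dba_hist_sysstat b,
     dba_hist_sysstat e,
     dba_hist_snapshot s
where b.stat_name = 'session logical reads'
and   e.stat_name = b.stat_name
and   e.snap_id   = s.snap_id
and   b.snap_id   = s.snap_id - 1
and   b.instance_number = e.instance_number
and   e.instance_number = s.instance_number
order by s.snap_id;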

Saturday, April 14, 2012

Analyzing the last query run

I don't know how many times I find myself trying to tune a query, and going through the same tasks.

I run the sql in my session (most likely in sqlplus on the unix server), then I try to track down the sql_id, and finally (if I am lucky) I find the SQL Monitor report (tuning pack license required).

So I came up with a script that I am putting on all my servers: post_execute.sql

This script can be executed in your shell as the next step after running the query in question. But first:

If you want to use the SQL Monitor output, you are best off adding the /*+ monitor */ hint to your query, just to be sure it is captured.
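In a session, the flow looks something like this (the query here is just a stand-in; post_execute is the script linked above):

SQL> select /*+ monitor */ count(*) from some_big_table;
SQL> @post_execute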

Well, this is what the script does.

1) Run your query. I used this query for my example.


select /*+ monitor */ a.owner, b.table_name, sum(bytes)
from dba_segments a,
     dba_tables b
where a.owner = b.owner
and   a.segment_name = b.table_name
group by a.owner, b.table_name
order by a.owner, b.table_name;


2) It captures the dbms_xplan.display_cursor output for the last sql run. Here is a snippet of what the output looked like.

Plan information for current query




SQL_ID  208ttrm2nbt0u, child number 0
-------------------------------------
select /*+monitor */ a.owner,b.table_name ,sum(bytes) from dba_segments 
a,                                      dba_tables b where 
a.owner=b.owner and       a.segment_name=b.table_name group by 
a.owner,b.table_name order by a.owner,b.table_name
 
Plan hash value: 1601540958
 
--------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |                |       |       | 14425 (100)|          |
|   1 |  SORT GROUP BY                              |                |     6 |  2184 | 14425   (1)| 00:02:54 |
|*  2 |   HASH JOIN OUTER                           |                |     6 |  2184 | 14424   (1)| 00:02:54 |
|   3 |    NESTED LOOPS OUTER                       |                |     6 |  2160 | 14422   (1)| 00:02:54 |
|   4 |     NESTED LOOPS OUTER                      |                |     6 |  2106 | 14410   (1)| 00:02:53 |
|   5 |      NESTED LOOPS OUTER                     |                |     6 |  2070 | 14408   (1)| 00:02:53 |
|   6 |       NESTED LOOPS                          |                |     6 |  1998 | 14396   (1)| 00:02:53 |
|   7 |        NESTED LOOPS                         |                |     6 |  1974 | 14390   (1)| 00:02:53 |
|*  8 |         HASH JOIN                           |                |    67 | 19899 | 14318   (1)| 00:02:52 |
|*  9 |          HASH JOIN                          |                |  5095 |  1298K| 13638   (1)| 00:02:44 |
|  10 |           MERGE JOIN CARTESIAN              |                |   131 | 11135 |     5  (20)| 00:00:01 |
|* 11 |            HASH JOIN                        |                |     1 |    68 |     1 (100)| 00:00:01 |
|* 12 |             FIXED TABLE FULL                | X$KSPPI        |     1 |    55 |     0   (0)|          |
|  13 |             FIXED TABLE FULL                | X$KSPPCV       |   100 |  1300 |     0   (0)|          |
|  14 |            BUFFER SORT                      |                |   131 |  2227 |     5  (20)| 00:00:01 |
|  15 |             TABLE ACCESS STORAGE FULL       | USER$          |   131 |  2227 |     4   (0)| 00:00:01 |
|  16 |           VIEW                              | SYS_DBA_SEGS   |  5095 |   875K| 13633   (1)| 00:02:44 |
|  17 |            UNION-ALL                        |                |       |       |            |          |
|* 18 |             HASH JOIN RIGHT OUTER           |                |   672 |   103K| 12305   (1)| 00:02:28 |
|  19 |              TABLE ACCESS STORAGE FULL      | USER$          |   131 |  2227 |     4   (0)| 00:00:01 |
|* 20 |              HASH JOIN                      |                |   672 | 94752 | 12301   (1)| 00:02:28 |
|  21 |               NESTED LOOPS                  |                |   672 | 68544 | 11623   (1)| 00:02:20 |
|* 22 |                HASH JOIN                    |                |   672 | 63168 | 11623   (1)| 00:02:20 |
|  23 |                 TABLE ACCESS STORAGE FULL   | TS$            |   167 |  1336 |    48   (0)| 00:00:01 |
|* 24 |                 HASH JOIN                   |                |   672 | 57792 | 11574   (1)| 00:02:19 |
|  25 |                  TABLE ACCESS STORAGE FULL  | SEG$           | 16421 |   449K|   821   (1)| 00:00:10 |
|  26 |                  VIEW                       | SYS_OBJECTS    | 21369 |  1210K| 10752   (1)| 00:02:10 |
|  27 |                   UNION-ALL                 |                |       |       |            |          |
|* 28 |                    TABLE ACCESS STORAGE FULL| TAB$           |  7284 |   177K|  2681   (1)| 00:00:33 |
|  29 |                    TABLE ACCESS STORAGE FULL| TABPART$       |  1525 | 28975 |    12   (0)| 00:00:01 |
|  30 |                    TABLE ACCESS STORAGE FULL| CLU$           |    10 |   140 |  2680   (1)| 00:00:33 |
|* 31 |                    TABLE ACCESS STORAGE FULL| IND$           |  9647 |   188K|  2681   (1)| 00:00:33 |
|  32 |                    TABLE ACCESS STORAGE FULL| INDPART$       |  1322 | 25118 |    12   (0)| 00:00:01 |
|* 33 |                    TABLE ACCESS STORAGE FULL| LOB$           |  1547 | 32487 |  2680   (1)| 00:00:33 |
|  34 |                    TABLE ACCESS STORAGE FULL| TABSUBPART$    |    32 |   448 |     2   (0)| 00:00:01 |
|  35 |                    TABLE ACCESS STORAGE FULL| INDSUBPART$    |     1 |    52 |     2   (0)| 00:00:01 |
|  36 |                    TABLE ACCESS STORAGE FULL| LOBFRAG$       |     1 |    17 |     2   (0)| 00:00:01 |
|* 37 |                INDEX UNIQUE SCAN            | I_FILE2        |     1 |     8 |     0   (0)|          |
|  38 |               TABLE ACCESS STORAGE FULL     | OBJ$           | 88138 |  3356K|   677   (1)| 00:00:09 |
|  39 |             NESTED LOOPS                    |                |    34 |  3638 |   450   (1)| 00:00:06 |
|  40 |              NESTED LOOPS                   |                |    34 |  3366 |   450   (1)| 00:00:06 |
|* 41 |               HASH JOIN OUTER               |                |    34 |  3094 |   416   (1)| 00:00:05 |
|  42 |                NESTED LOOPS                 |                |    34 |  2516 |   411   (0)| 00:00:05 |
|* 43 |                 TABLE ACCESS STORAGE FULL   | UNDO$          |   204 |  8568 |     3   (0)| 00:00:01 |
|* 44 |                 TABLE ACCESS CLUSTER        | SEG$           |     1 |    32 |     2   (0)| 00:00:01 |
|* 45 |                  INDEX UNIQUE SCAN          | I_FILE#_BLOCK# |     1 |       |     1   (0)| 00:00:01 |
|  46 |                TABLE ACCESS STORAGE FULL    | USER$          |   131 |  2227 |     4   (0)| 00:00:01 |
|  47 |               TABLE ACCESS CLUSTER          | TS$            |     1 |     8 |     1   (0)| 00:00:01 |
|* 48 |                INDEX UNIQUE SCAN            | I_TS#          |     1 |       |     0   (0)|          |
|* 49 |              INDEX UNIQUE SCAN              | I_FILE2        |     1 |     8 |     0   (0)|          |
|* 50 |             HASH JOIN                       |                |  4389 |   321K|   878   (1)| 00:00:11 |
|  51 |              TABLE ACCESS STORAGE FULL      | FILE$          |   569 |  6828 |     3   (0)| 00:00:01 |
|* 52 |              HASH JOIN RIGHT OUTER          |                |  4389 |   270K|   874   (1)| 00:00:11 |
|  53 |               TABLE ACCESS STORAGE FULL     | USER$          |   131 |  2227 |     4   (0)| 00:00:01 |
|* 54 |               HASH JOIN                     |                |  4389 |   197K|   870   (1)| 00:00:11 |
|  55 |                TABLE ACCESS STORAGE FULL    | TS$            |   167 |  1336 |    48   (0)| 00:00:01 |
|* 56 |                TABLE ACCESS STORAGE FULL    | SEG$           |  4389 |   162K|   821   (1)| 00:00:10 |
|* 57 |          TABLE ACCESS STORAGE FULL          | OBJ$           | 88138 |  3098K|   679   (1)| 00:00:09 |
|* 58 |         TABLE ACCESS CLUSTER                | TAB$           |     1 |    32 |     2   (0)| 00:00:01 |
|* 59 |          INDEX UNIQUE SCAN                  | I_OBJ#         |     1 |       |     1   (0)| 00:00:01 |
|  60 |        TABLE ACCESS CLUSTER                 | TS$            |     1 |     4 |     1   (0)| 00:00:01 |
|* 61 |         INDEX UNIQUE SCAN                   | I_TS#          |     1 |       |     0   (0)|          |
|  62 |       TABLE ACCESS CLUSTER                  | SEG$           |     1 |    12 |     2   (0)| 00:00:01 |
|* 63 |        INDEX UNIQUE SCAN                    | I_FILE#_BLOCK# |     1 |       |     1   (0)| 00:00:01 |
|* 64 |      INDEX RANGE SCAN                       | I_OBJ1         |     1 |     6 |     2   (0)| 00:00:01 |
|* 65 |     INDEX RANGE SCAN                        | I_OBJ1         |     1 |     9 |     2   (0)| 00:00:01 |
|  66 |    INDEX FULL SCAN                          | I_USER2        |   131 |   524 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("CX"."OWNER#"="CU"."USER#")
   8 - access("SEGMENT_NAME"="O"."NAME" AND "O"."OWNER#"="U"."USER#")
   9 - access("OWNER"="U"."NAME")
  11 - access("KSPPI"."INDX"="KSPPCV"."INDX")
  12 - filter("KSPPI"."KSPPINM"='_dml_monitoring_enabled')
  18 - access("O"."OWNER#"="U"."USER#")
  20 - access("O"."OBJ#"="SO"."OBJECT_ID" AND "O"."TYPE#"="SO"."OBJECT_TYPE_ID")
  22 - access("S"."TS#"="TS"."TS#")
  24 - access("S"."FILE#"="SO"."HEADER_FILE" AND "S"."BLOCK#"="SO"."HEADER_BLOCK" AND 
              "S"."TS#"="SO"."TS_NUMBER" AND "S"."TYPE#"="SO"."SEGMENT_TYPE_ID")
  28 - filter(BITAND("T"."PROPERTY",1024)=0)
  31 - filter(("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR 
              "I"."TYPE#"=7 OR "I"."TYPE#"=8 OR "I"."TYPE#"=9))
  33 - filter((BITAND("L"."PROPERTY",64)=0 OR BITAND("L"."PROPERTY",128)=128))
  37 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
  41 - access("S"."USER#"="U"."USER#")
  43 - storage("UN"."STATUS$"<>1)
       filter("UN"."STATUS$"<>1)
  44 - filter(("S"."TYPE#"=1 OR "S"."TYPE#"=10))
  45 - access("S"."TS#"="UN"."TS#" AND "S"."FILE#"="UN"."FILE#" AND "S"."BLOCK#"="UN"."BLOCK#")
  48 - access("S"."TS#"="TS"."TS#")
  49 - access("UN"."TS#"="F"."TS#" AND "UN"."FILE#"="F"."RELFILE#")
  50 - access("S"."TS#"="F"."TS#" AND "S"."FILE#"="F"."RELFILE#")
  52 - access("S"."USER#"="U"."USER#")
  54 - access("S"."TS#"="TS"."TS#")
  56 - filter(("S"."TYPE#"<>6 AND "S"."TYPE#"<>5 AND "S"."TYPE#"<>8 AND "S"."TYPE#"<>10 AND 
              "S"."TYPE#"<>1))
  57 - storage(BITAND("O"."FLAGS",128)=0)
       filter(BITAND("O"."FLAGS",128)=0)
  58 - filter(BITAND("T"."PROPERTY",1)=0)
  59 - access("O"."OBJ#"="T"."OBJ#")
  61 - access("T"."TS#"="TS"."TS#")
  63 - access("T"."TS#"="S"."TS#" AND "T"."FILE#"="S"."FILE#" AND "T"."BLOCK#"="S"."BLOCK#")
  64 - access("T"."BOBJ#"="CO"."OBJ#")
  65 - access("T"."DATAOBJ#"="CX"."OBJ#")

3) It captures some execution statistics that I like to look at. Here is the output below.





Summary of query stats





 ************** plan_hash_value       = 1188305021
 ************** sql_id               = 208ttrm2nbt0u

avg_elapsed_time      = 2.59
total_executions      = 2
avg_rows_processed    = 5372
avg_disk_reads        = 202
avg_buffer_gets       = 189970
avg_cpu_time          = 2426631
avg_iowait            = 99965
avg_cluster_wait      = 245947
avg_direct_writes     = 0
avg_plssql_exec_time  = 162398
avg_cell_offload      = 0



And finally, it creates the beautiful SQL Monitor output we are all used to.
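The real post_execute.sql is linked above; as a rough sketch of what those three capture steps can look like (my reconstruction using standard calls, not the author's actual script), the trick is to grab the session's prev_sql_id before this script's own queries overwrite it:

-- capture the last statement's sql_id before anything else replaces it
column sql_id new_value m_sql_id
select prev_sql_id sql_id
from   v$session
where  sid = sys_context('userenv','sid');

-- 1) the plan of that statement
select * from table(dbms_xplan.display_cursor('&m_sql_id', null, 'TYPICAL'));

-- 2) per-execution statistics from v$sql
select plan_hash_value,
       round(elapsed_time / executions / 1e6, 2) avg_elapsed_time,
       executions                                total_executions,
       round(buffer_gets / executions)           avg_buffer_gets,
       round(disk_reads  / executions)           avg_disk_reads
from   v$sql
where  sql_id = '&m_sql_id'
and    executions > 0;

-- 3) the SQL Monitor report (tuning pack license required)
select dbms_sqltune.report_sql_monitor(sql_id => '&m_sql_id', type => 'HTML')
from   dual;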


Pretty cool, huh? It creates these 3 pieces based on the last sql executed in the session. Now this is the best part: at the end of the script is this code.

!echo "From: bryan.grenn@oracle.com"  > /tmp/file
!echo "To: bryan.grenn@oracle.com"   >> /tmp/file
!echo "Subject: analyze output sql_monitor"   >> /tmp/file
!echo "Mime-Version: 1.0"      >> /tmp/file
!echo 'Content-Type: multipart/mixed; boundary="DMW.Boundary.605592468"'   >> /tmp/file
!echo "--DMW.Boundary.605592468" >> /tmp/file
!echo " " >> /tmp/file
!echo " analyze report sql_monitor " >> /tmp/file
!echo " " >> /tmp/file
!echo "--DMW.Boundary.605592468" >> /tmp/file
!echo 'Content-Disposition: inline; filename="analyze_dbms_xplan.txt"' >> /tmp/file
!echo "Content-Transfer-Encoding: 7bit" >> /tmp/file
!cat /tmp/analyze_dbms_xplan.txt >> /tmp/file
!echo "--DMW.Boundary.605592468" >> /tmp/file

!echo 'Content-Disposition: inline; filename="analyze_query_stats.txt"' >> /tmp/file
!echo "Content-Transfer-Encoding: 7bit" >> /tmp/file
!cat /tmp/analyze_query_stats.txt >> /tmp/file
!echo "--DMW.Boundary.605592468" >> /tmp/file

!echo 'Content-Disposition: inline; filename="analyze_sql_monitor.htm"' >> /tmp/file
!echo "Content-Transfer-Encoding: 7bit" >> /tmp/file
!cat /tmp/analyze_html.htm >> /tmp/file
!echo "--DMW.Boundary.605592468" >> /tmp/file

!/usr/sbin/sendmail bryan.grenn@oracle.com < /tmp/file



This code takes the 3 output files and mails them to you as attachments. Notice my e-mail address is in the script, so change it to what you need to get it going.

I'm going to use this often, by putting it on all my servers and running it whenever any stubborn sql shows up. I will instantly be e-mailed the pieces I need to figure out what is going on with the sql.

Saturday, March 31, 2012

R and visualizing your execution times

Well, I think I'm a little late to the party. I know Greg Rahn did a great post on utilizing R to visualize your ASH data. I figured I would do a simple example of how to build something myself, to show how easy it is to use R to visualize query execution times.

First I started by downloading R from cran.r-project.org.

Once I downloaded R, I went to one of my databases and found a query with varying execution times that I wanted to play with. I created an output file from the query. Here is the script I used.


set pagesize 10000
set feedback off
spool rtest.txt

select trunc((elapsed_time_delta/executions_delta)/1000000,4) "AVG_EXECUTION_TIME",
       plan_hash_value "PLAN_HASH_VALUE",
       execution_date "EXECUTION_DATE"
from
(
select sum(elapsed_time_delta) elapsed_time_delta,
       sum(executions_delta) executions_delta,
       plan_hash_value,
       to_char(trunc(end_interval_time),'mm/dd/yy') execution_date
from dba_hist_sqlstat a,
     dba_hist_snapshot b
where sql_id='19sqmxkc58wqm'
and a.snap_id=b.snap_id
and a.instance_number=b.instance_number
--and executions_delta>0
group by plan_hash_value, to_char(trunc(end_interval_time),'mm/dd/yy')
)
where executions_delta > 0
order by execution_date;
spool off


This script created a file that I brought over to my PC and cleaned up the format. Here is part of the file.

AVG_EXECUTION_TIME PLAN_HASH_VALUE execution_date                                     
           20.4368       566875892 01/01/12                                     
           50.3253      4009342004 01/01/12                                     
           21.4655       566875892 01/02/12                                     
           19.8312      4009342004 01/02/12                                     
           69.9299      4009342004 01/03/12                                     
          135.7153      4009342004 01/04/12                                     
           39.3972      4009342004 01/05/12                                     
           65.2833      4009342004 01/06/12                                     
           39.8093      4009342004 01/07/12                                     
           35.8615      4009342004 01/08/12                                     
           18.7553       566875892 01/09/12                                     
          134.7431      4009342004 01/09/12                                     
           76.2954      4009342004 01/10/12                                     
          115.8707      4009342004 01/11/12                                     
           60.0754      4009342004 01/12/12                                     
          102.6432      4009342004 01/13/12                                     
           22.2528       566875892 01/14/12                                     
          119.8541      4009342004 01/14/12                                     
           21.8552       566875892 01/15/12                                     
           18.5785      4009342004 01/15/12                                     
           19.3179       566875892 01/16/12                                     
            80.794      4009342004 01/16/12                                     
           67.0872      4009342004 01/17/12                                     
          107.1604      4009342004 01/18/12                                     
           28.9797      4009342004 01/19/12                                     
          


I put this file into c:\r and named it query_performance.txt.

I then went into R and ran the following commands.


setwd("c:\\r")
query_data <- read.table("query_performance.txt",header=T)


max_num <- max(query_data$AVG_EXECUTION_TIME)

hist(query_data$AVG_EXECUTION_TIME,col=heat.colors(max_num),breaks=max_num,xlim=c(0,max_num),
     right=F,main="Execution Time Histogram",las=1)






You can see I just ran a few simple commands:

setwd --- sets the working directory to c:\r
read.table --- reads in my space-delimited table (there is a read.csv for a comma-separated file)
max_num --- is set to the maximum execution time in the file
hist --- creates a histogram of the execution times. Check out below what comes out. Sweet!


This was easy, and it gives me a great picture of the variance in execution times.

I am going to work more with this file, since it has 2 different plans and I want to visualize the differences.

Thursday, March 22, 2012

Cooking for Geeks (shameless plug)

I am part of the O'Reilly Blogger Review program, and long, long ago I picked up the book

"Cooking for Geeks" by Jeff Potter for review.

This was a fantastic book. I am both a geek and a cook, and this book does a great job of tying the 2 together. It not only talks about spices and ingredients, but explains in geekspeak why they work together.

I thoroughly enjoyed this book, and I would recommend it to anyone interested in cooking, or at all interested in the chemistry and biology behind how and why we enjoy foods.

Here is a link to the book.
http://www.amazon.com/Cooking-Geeks-Science-Great-Hacks/dp/0596805888

Sunday, March 18, 2012

Benchmarks for CPUs

I have been doing some benchmarks on a couple of different systems for LIOs, using Kevin Closson's great SLOB toolkit. You can find more information about it on his blog here.

I have been looking at 2 different systems, and here are my results. These 2 systems are both HP.

The first is an AMD 6276 server: 2 sockets x 16 cores (465 G7).

./runit.sh 0 40
The awr is posted.

Here is the summary of throughput.
Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               37.9               46.6       0.00       5.96
       DB CPU(s):               30.3               37.2       0.00       4.76
       Redo size:           15,791.3           19,408.2
   Logical reads:       10,119,426.4       12,437,215.0
   Block changes:               83.4              102.5
  Physical reads:                0.4                0.6
 Physical writes:               11.5               14.1
      User calls:                6.4                7.8
          Parses:                3.0                3.7
     Hard parses:                0.1                0.1
W/A MB processed:                0.2                0.2
          Logons:                0.1                0.1
        Executes:           39,333.0           48,342.0
       Rollbacks:                0.0                0.0

I then looked at the new Intel E7-2870 I got: 2 sockets x 10 cores, dual threaded (BL620 E7).

./runit.sh 0 43

The awr is here.
Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               40.9              108.6       0.00       6.93
       DB CPU(s):               37.8              100.4       0.00       6.41
       Redo size:           10,053.3           26,674.3
   Logical reads:       13,203,419.8       35,032,516.8
   Block changes:               36.9               97.9
  Physical reads:                0.0                0.0
 Physical writes:                9.6               25.4
      User calls:                5.9               15.7
          Parses:                4.0               10.7
     Hard parses:                0.0                0.0
W/A MB processed:                0.2                0.6
          Logons:                0.3                0.7
        Executes:           51,300.2          136,114.4
       Rollbacks:                0.0                0.1
    Transactions:                0.4


Look at that throughput: the 43-process run looks to be the best, at over 13 million LIOs/second.

WOW. The new AMD Bulldozer has great numbers, but the Intel really rocks!

Saturday, March 10, 2012

IOUG Real World Performance tour

Last Thursday the Real World Performance Tour came to Rochester, NY.

I know what you're probably thinking. One of 2 things:

1) Why did it come there? Aren't you a suburb of NYC? (We are actually about a 7 hour drive from NYC.)
                   or
2) Why there? Did the cows enjoy it?

Well, we had a huge turnout: about 90 people in attendance. For this area, that is one of the biggest turnouts I have ever seen, especially since it was a paid event and the lunch was boxed.

The Tour consists of three individuals:

1) Tom Kyte. He needs no explanation.

2) Andrew Holdsworth - Head of the Real World Performance team. As a point of full disclosure, I've had a couple of meetings with Andrew in the past, so I had already discussed some of these topics with him.

3) Graham Wood - Oracle Database Architect in database development. He is the person responsible for AWR reports.

The day was broken up into 2 halves. The morning concentrated on how to manage a data warehouse, and the afternoon concentrated on OLTP. Of course the approach to each of these areas is different.

The morning covered a number of topics, concentrating especially on the challenges of a data warehouse:

Parallelization
Hash joins vs. nested loops
Indexing vs. FTS

Some of the presentation talked about HCC and Exadata, but I would say only about 10-20% was Exadata specific. No sales pitch, just reasons why it helps.

The afternoon was dedicated to the issues revolving around an OLTP system. A lot of it covered the material in the YouTube videos narrated by Andrew on connection pooling and how it affects performance.

It was a great day, and there was a lot of great material. I have talked to Andrew before, and I've seen his videos, but I still got a lot out of the day.

If it is coming to your city, it is definitely worth going.

Here are some links to check out.

Here is Tom's presentation, but like most good presentations, the slides miss a lot.

Here are the YouTube videos from Andrew. Thanks, Andrew, for creating these!


And finally, here is the upcoming schedule of events.