Tuesday, July 31, 2012

What extended stats do I have on my database?

I've been starting to work with extended statistics to help the optimizer find the best plan. This is a great feature that is outlined by @sqlmaria (Maria Colgan) here.
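For context, here is roughly how one of these extensions gets created in the first place. This is just a sketch using dbms_stats.create_extended_stats; the owner, table, and column names are taken from my example output below, so adjust them for your own schema.

select dbms_stats.create_extended_stats('BGRENN','TAB_SCHR_PERD','(COL1,COL2)') from dual;

-- expression statistics are created the same way
select dbms_stats.create_extended_stats('BGRENN','TAB_SCHR_PERD','(COALESCE(COL1,COL2))') from dual;

-- regather stats so the new extension gets populated
exec dbms_stats.gather_table_stats('BGRENN','TAB_SCHR_PERD');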

But once you create extended statistics, how do you know what's there? I wrote this query to find out which function-based indexes and extended statistics exist, and what their definitions are.

Here is my script.

column table_owner heading "owner" format a15
column table_name heading "Table Name" format a30
column function_index heading "F Index" format a8
column index_name heading "Index Name" format a30
column data_default heading "Definition" format a50
set pagesize 1000
select table_owner,
       table_name,
       nvl2(index_name, 'YES', 'NO') function_index,
       index_name,
       data_default
from (
      select owner table_owner,
             table_name,
             (select distinct index_name
                from dba_ind_columns b
               where a.column_name = b.column_name
                 and a.owner = b.index_owner
                 and a.table_name = b.table_name) index_name,
             data_default
             -- , DBMS_LOB.SUBSTR(to_lob(data_default),100,1)
        from dba_tab_cols a
       where virtual_column = 'YES'
         and hidden_column = 'YES'
         and owner not in ('SYS','WMSYS','XDB','SYSMAN','MDSYS','EXFSYS','PR_MDS')
         and owner not like 'APEX_%'
     )
order by table_owner, table_name;


And this is what the output looks like:

TABLE_OWNER     TABLE_NAME                     FUNCTION INDEX_NAME                     DATA_DEFINITION
--------------- ------------------------------ -------- ------------------------------ --------------------------------------------------
BGRENN          TAB_SCHR_PERD                      NO                                  COALESCE("COL1","COL2")
BGRENN          TAB2                              YES   IDX_TAB2                       "COL1"||' '||"COL2"
BGRENN          TAB3                               NO                                  COALESCE("COL1","COL2")
BGRENN          TAB4                              YES   IDX_TAB4                       COALESCE("COL1","COL2",0)
BGRENN          TAB4                              YES   IDX_TAB4                       COALESCE("COL3",0)
BGRENN          TAB5                              YES   IDX_TAB5                       COALESCE("COL1","COL2",0)
BGRENN          TAB6                              YES   IDX_TAB6                       NVL("COL1",(-1))
BGRENN          TAB6                              YES   IDX_TAB6                       NVL("COL2",(-1))
BGRENN          TAB6                              YES   IDX_TAB6                       NVL("COL3",(-1))
BGRENN          TAB6                              YES   IDX_TAB6                       NVL("COL4",'x')
BGRENN          TAB7                              YES   IDX_COMPOSITE                  "COL1"
BGRENN          TAB7                              YES   IDX_COMPOSITE                  "COL3"

Notice the FUNCTION column. It is "YES" or "NO" depending on whether the row is for a function-based index or just extended statistics.

This should help you find where the extended statistics are in your database.
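If you are on 11g, a quick cross-check is the dictionary view that tracks extensions directly. It won't show function-based indexes, which is why I prefer the query above, but it is handy for a fast look (schema name is from my example):

select owner, table_name, extension_name, extension
from dba_stat_extensions
where owner = 'BGRENN';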

Sunday, July 15, 2012

Exadata tips

I wanted to write up an Exadata tip that I learned.

Background: I wanted to do a simple "select count(1) from mytable". mytable has a primary key on it. The count seemed to be taking a long time for an Exadata machine.
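The output below is tkprof-formatted trace data. If you want to reproduce this kind of test yourself, a typical way to capture it (a sketch; the raw trace file lands in the diag trace directory and gets formatted with tkprof) is:

alter session set events '10046 trace name context forever, level 8';
select count(1) from MYTABLE;
alter session set events '10046 trace name context off';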

First, the plain "select count(1) from mytable". You can see that it uses an INDEX STORAGE FAST FULL SCAN. The top wait event is "cell multiblock physical read". The query does 2 million disk reads in 3 minutes 28 seconds.

But this seems slow...



select count(1)
from
     MYTABLE MYTABLE

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2     56.46     208.27    2030378    2031797         20           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     56.46     208.27    2030378    2031797         20           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=2031797 pr=2030378 pw=0 time=208276577 us)
 592848893  592848893  592848893   INDEX STORAGE FAST FULL SCAN PK_MYTABLE (cr=2031797 pr=2030378 pw=0 time=245927651 us cost=523897 size=0 card=572441788)(object id 312310)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                              4        0.00          0.00
  Disk file operations I/O                       42        0.00          0.00
  library cache pin                               2        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  cell single block physical read                15        0.00          0.00
  cell list of blocks physical read               2        0.00          0.00
  cell multiblock physical read               15959        0.26        152.20
  latch: object queue header operation            1        0.00          0.00
  SQL*Net message from client                     2       10.38         10.38
********************************************************************************



Next I forced a full table scan (FTS).

"select /*+full(MYTABLE) */ count(1) from  MYTABLE MYTABLE ;

You can see this did 2.2 million disk reads (more than the index scan), yet it finished in 66 seconds instead of 208. The top remaining wait event is SQL*Net. With the "cell smart table scan", there were far fewer waits, and each wait was much shorter.



select /*+full(MYTABLE) */ count(1)
from
     MYTABLE MYTABLE  

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2     65.11      66.48    2224642    2225028         21           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     65.11      66.48    2224642    2225028         21           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=2225028 pr=2224642 pw=0 time=66486729 us)
 592848893  592848893  592848893   PARTITION RANGE ALL PARTITION: 1 33 (cr=2225028 pr=2224642 pw=0 time=140325566 us cost=533066 size=0 card=572441788)
 592848893  592848893  592848893    TABLE ACCESS STORAGE FULL MYTABLE PARTITION: 1 33 (cr=2225028 pr=2224642 pw=0 time=54479242 us cost=533066 size=0 card=572441788)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  Disk file operations I/O                      129        0.00          0.00
  gc current block 2-way                         65        0.00          0.00
  enq: KO - fast object checkpoint              101        0.00          0.02
  reliable message                               33        0.09          0.30
  gc current block 3-way                         13        0.00          0.00
  cell smart table scan                        1403        0.03          1.01
  gc cr block 3-way                               1        0.00          0.00
  gc current grant busy                          18        0.00          0.00
  gc cr block 2-way                              17        0.00          0.00
  gc cr multi block request                       5        0.00          0.00
  cell single block physical read                11        0.00          0.00
  cell list of blocks physical read               2        0.00          0.00
  gc cr grant 2-way                               3        0.00          0.00
  SQL*Net message from client                     2        9.62          9.62

Bottom line: if you want to count the rows in a table, use the "FULL" hint. Exadata is built for table scans, and this example shows it.

It should also make you rethink when to use indexes in an application; as you can see, they can hurt you in some cases.
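If you want to verify that a scan really was offloaded, one quick check is the smart scan statistics (these are instance-wide numbers from v$sysstat, so run this on a quiet system; the statistic names are the 11.2 ones):

select name, round(value/1024/1024) mb
from v$sysstat
where name in ('cell physical IO bytes eligible for predicate offload',
               'cell physical IO interconnect bytes returned by smart scan');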

Friday, July 6, 2012

What happened to my sql (sql_id) ?


While finishing up a few things, I ran across a query that wasn't playing nicely. It had 4 different plans over the course of the last couple of days, and I wanted to see what happened. I came up with the nifty query below. If you plug in a sql_id, it will go through the AWR history and return, ordered by the date last executed, the plans grouped by plan_hash_value. Within each plan_hash_value it will give you the objects in the plan and when they were last analyzed. From this you should be able to see which plans are good, when they were last executed, and whether anything was analyzed in a way that changed the plan.
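If you don't have the sql_id handy, you can usually dig it out of the AWR first with something like this (the LIKE pattern is just a placeholder):

select sql_id, dbms_lob.substr(sql_text, 80, 1) sql_text
from dba_hist_sqltext
where sql_text like '%MY_TABLE%';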


set linesize 160
set pagesize 1000
break on plan_hash_value skip 1 nodup on last_executed skip 1 nodup on avg_exec_time skip 1
select object_owner || '.' || object_name object_name,
       object_type,
       a.plan_hash_value,
       case object_type
         when 'INDEX' then (select last_analyzed from dba_indexes b
                             where owner = object_owner and index_name = object_name)
         when 'TABLE' then (select last_analyzed from dba_tables b
                             where owner = object_owner and table_name = object_name)
         else null
       end last_analyzed,
       to_char((select max(end_interval_time)
                  from dba_hist_snapshot b,
                       dba_hist_sqlstat c
                 where c.sql_id = a.sql_id
                   and c.plan_hash_value = a.plan_hash_value
                   and b.snap_id = c.snap_id), 'mm/dd/yy hh24:mi') last_executed,
       -- elapsed_time_delta is in microseconds, so divide by 1,000,000 for seconds
       to_char((select sum(elapsed_time_delta) / nullif(sum(executions_delta), 0)
                  from dba_hist_sqlstat d
                 where d.sql_id = a.sql_id
                   and d.plan_hash_value = a.plan_hash_value) / 1000000, '999.99') avg_exec_time
  from dba_hist_sql_plan a
 where a.sql_id = 'gbug7dg8adhgh'
   and object_type in ('INDEX', 'TABLE')
order by last_executed desc, a.plan_hash_value, last_analyzed desc;


Here is an example of the output


              OBJECT_NAME                                 OBJECT_TYPE          PLAN_HASH_VALUE LAST_ANALYZED       LAST_EXECUTED  AVG_EXE
-------------------------------------------------------------- -------------------- --------------- ------------------- -------------- -------
MY_SCHEMA.SNP_CDC_SUBS                                     TABLE                     2518369181 2012-07-06 09:25:15 07/06/12 10:00  791.37
MY_SCHEMA.SNP_CDC_SUBS                                     TABLE                                2012-07-06 09:25:15
MY_SCHEMA.D$TAB_REG                                        TABLE                                2012-07-06 09:25:06
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
ERD.DIM_TABS_COMP_PLCY_AGMT                                TABLE                                2012-07-06 00:39:33
MY_SCHEMA.TAB_COMP_PLCY_AGMT                               TABLE                                2012-05-25 09:20:39
MY_SCHEMA.IDX_WCPA_AGMT_ID                                 INDEX                                2012-05-25 09:20:39
MY_SCHEMA.TAB_COMP_PLCY_ST_CLSF_VT                         TABLE                                2012-05-15 18:49:43
MY_SCHEMA.TAB_COMP_PLCY_ST_CLSF                            TABLE                                2012-05-15 18:49:43
MY_SCHEMA.WPTD_COMP_PER_TAX                                TABLE                                2012-05-15 18:39:30
MY_SCHEMA.TAB_REG                                          TABLE                                2012-05-15 18:31:18
MY_SCHEMA.CO_TAB                                           TABLE                                2012-05-15 18:27:50
MY_SCHEMA.TAB_PAYR                                         TABLE                                2012-05-15 18:26:47
MY_SCHEMA.AGMT_REG                                         TABLE                                2012-05-15 18:21:09



MY_SCHEMA.SNP_CDC_SUBS                                     TABLE                     1903861587 2012-07-06 09:25:15 07/06/12 09:00  882.94
MY_SCHEMA.SNP_CDC_SUBS                                     TABLE                                2012-07-06 09:25:15
MY_SCHEMA.D$TAB_REG                                        TABLE                                2012-07-06 09:25:06
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
MY_SCHEMA.J$TAB_REG                                        TABLE                                2012-07-06 09:24:50
ERD.DIM_TABS_COMP_PLCY_AGMT                                TABLE                                2012-07-06 00:39:33
MY_SCHEMA.TAB_COMP_PLCY_AGMT                               TABLE                                2012-05-25 09:20:39
MY_SCHEMA.IDX_WCPA_AGMT_ID                                 INDEX                                2012-05-25 09:20:39
MY_SCHEMA.TAB_COMP_PLCY_ST_CLSF_VT                         TABLE                                2012-05-15 18:49:43
MY_SCHEMA.TAB_COMP_PLCY_ST_CLSF                            TABLE                                2012-05-15 18:49:43
MY_SCHEMA.WPTD_COMP_PER_TAX                                TABLE                                2012-05-15 18:39:30
MY_SCHEMA.TAB_REG                                          TABLE                                2012-05-15 18:31:18
MY_SCHEMA.CO_TAB                                           TABLE                                2012-05-15 18:27:50
MY_SCHEMA.TAB_PAYR                                         TABLE                                2012-05-15 18:26:47
MY_SCHEMA.AGMT_REG                                         TABLE                                2012-05-15 18:21:09

Friday, June 29, 2012

AWR compare report

I came across this while doing some dbreplay runs, and found it very useful.

First, let's say you have a RAC cluster, and you want to do some performance comparisons. What's one of the issues you run into? For me it is trying to figure out which nodes I care about and running the AWR report for each of those nodes. This is exacerbated with a full rack Exadata: 8 nodes to compare. This is what I use to compare two time periods across all nodes. I also increase some of the reporting thresholds.

First, the script to gather the report (here).

To get this to work, change the following:

dbid  - dbid for the first time period
begin_snap - begin snap first time period
end_snap - end snap first time period

dbid2 - dbid for the second time period
begin_snap2 - begin snap second time period
end_snap2 - end snap second time period

Also notice that I changed the top_n_* values to give me more data.
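If you need to look up the dbid and snap ids to plug in, these two queries will do it (the dbid shown is just the one from my script):

select dbid, name from v$database;

select snap_id, to_char(end_interval_time,'mm/dd/yy hh24:mi') end_time
from dba_hist_snapshot
where dbid = 2415508472
order by snap_id;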

Rem    NAME
Rem      awr_full.sql - Workload Repository Global Compare Periods Report
Rem
Rem    DESCRIPTION
Rem      RAC Version of Compare Period Report
Rem
Rem
Rem    NOTES
Rem      Run as SYSDBA.  Generally this script should be invoked by awrgdrpt,
Rem      unless you want to pick a database and/or specific instances
Rem      other than the default.
Rem
Rem      If you want to use this script in an non-interactive fashion,
Rem      without executing the script through awrgdrpt, then
Rem      do something similar to the following:
Rem
      define  num_days                  = 0;
      define  dbid                      = 2415508472;
      define  instance_numbers_or_ALL   = 'ALL';
      define  begin_snap                = 35727;
      define  end_snap                  = 35728;
      define  num_days2                 = 0;
      define  dbid2                     = 2415508472;
      define  instance_numbers_or_ALL2  = 'ALL';
      define  begin_snap2               = 35728;
      define  end_snap2                 = 35729;
      define  report_type               = 'html';
      define  report_name               = /tmp/awr_report.html
      define  top_n_files               = 50;
      define  top_n_segments            = 50;
      define  top_n_services            = 50;
      define  top_n_sql                 = 100;
      @@?/rdbms/admin/awrgdrpi
!./mail_full.sql
exit



The second-to-last line of the script mails the report; that script is here.

echo "From: replay_report@oracle.com"  > /tmp/mailfilebsg
echo "To: raddba@oracle.com"   >> /tmp/mailfilebsg
echo "Subject: DBREPLAY output "   >> /tmp/mailfilebsg
echo "Mime-Version: 1.0"      >> /tmp/mailfilebsg
echo 'Content-Type: multipart/mixed; boundary="DMW.Boundary.605592468"'   >> /tmp/mailfilebsg
echo "--DMW.Boundary.605592468" >> /tmp/mailfilebsg
echo " " >> /tmp/mailfilebsg
echo " dbreplay output " >> /tmp/mailfilebsg
echo " " >> /tmp/mailfilebsg
echo "--DMW.Boundary.605592468" >> /tmp/mailfilebsg
echo 'Content-Disposition: inline; filename="dbreplay.html"' >> /tmp/mailfilebsg
echo "Content-Transfer-Encoding: 7bit" >> /tmp/mailfilebsg
cat /tmp/awr_report.html >> /tmp/mailfilebsg
echo "--DMW.Boundary.605592468" >> /tmp/mailfilebsg
/usr/sbin/sendmail bryan.grenn@oracle.com < /tmp/mailfilebsg


The second script will mail you the output as an attachment. When using it, be sure to change the e-mail addresses to yours, and set the subject and filename to what you want.

Enjoy. 

Sunday, June 24, 2012

Taking a career change

Well, I have decided to make a change and take a job with Oracle. I am very excited about this move, and I look forward to being more involved in big data. As any of you who have read my blog posts know, I have taken a strong interest in this area. I know I'm not the only one. You have probably heard the terms "Data Scientist"... Hadoop... R... These are all areas that I'm going to be delving into in my new position.
  I will continue to blog, probably mostly about the same topics I blog about now. I am looking forward to this change and to becoming part of this evolution. Many people are saying that Big Data is the next big change (like the internet); whether this is true or not, we shall see.

Wednesday, May 23, 2012

Problem debugging for DBFS

In trying to find out the cause of a DBFS issue, I learned what is expected (or helpful) when working with Support on DBFS issues.

1) A logon trigger for the DBFS user to create a trace file:
CREATE OR REPLACE TRIGGER DBFS_LOGON
AFTER LOGON
ON DATABASE
declare
  username VARCHAR2(30);
BEGIN
  username := SYS_CONTEXT('USERENV', 'SESSION_USER');
  -- FOO is the DBFS repository owner; change it to your DBFS user
  IF username = 'FOO' THEN
    dbms_dbfs_content.setTrace(3);
    execute immediate 'alter session set events ''45050 trace name context forever, level 0xfffff'' ';
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END;
/



2) Start the DBFS client with tracing turned on (-otrace_file=,trace_level=2,trace_size=0). See "How to trace DBFS when any failure happens" [ID 1320683.1].

Remember, if you are running DBFS you are probably in a multi-node clustered environment, so you only need to do these steps on one of the nodes to gather the data. I turned the logon trigger on, remounted the filesystem with tracing, reproduced the issue, and verified the trace files were created; then I disabled the trigger and remounted without tracing. I did this on only one node and gathered what I needed with minimal disruption.
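One sanity check I found useful: confirm the trigger state from SQL*Plus before and after the test, and disable it as soon as you have what you need.

select trigger_name, status from dba_triggers where trigger_name = 'DBFS_LOGON';

alter trigger DBFS_LOGON disable;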


Friday, May 18, 2012

Where is my space on DBFS


I just ran into an issue on DBFS where I ran out of space.

First, here is the df -k output:

Filesystem           1K-blocks      Used Available Use% Mounted on
dbfs-dbfs_admin2@:/   20983808  11443696   9540112  55% /dbfs/dba


OK, everything looks good. I am using about 11 GB, and I have about 9.5 GB available.

I go to copy a file on the OS (you can see it is about 240 MB). Lots of room.

 ls -al bsg.out
-rw-r--r-- 1 oracle oinstall 240794862 May 18 11:37 bsg.out


cp bsg.out bsg.out1
cp: writing `bsg.out1': No space left on device
cp: closing `bsg.out1': No space left on device


So where is my space?? I found this query:

set serveroutput on;
declare
  v_segment_size_blocks number;
  v_segment_size_bytes  number;
  v_used_blocks         number;
  v_used_bytes          number;
  v_expired_blocks      number;
  v_expired_bytes       number;
  v_unexpired_blocks    number;
  v_unexpired_bytes     number;
begin
  -- report space usage for the LOB segment backing the DBFS filesystem
  dbms_space.space_usage('DBFS_OWNER', 'LOB_SFS$_FST_12345', 'LOB',
                         v_segment_size_blocks, v_segment_size_bytes,
                         v_used_blocks, v_used_bytes,
                         v_expired_blocks, v_expired_bytes,
                         v_unexpired_blocks, v_unexpired_bytes);
  dbms_output.put_line('Segment Size blocks = '||v_segment_size_blocks);
  dbms_output.put_line('Segment Size bytes = '||v_segment_size_bytes);
  dbms_output.put_line('Used blocks = '||v_used_blocks);
  dbms_output.put_line('Used bytes = '||v_used_bytes);
  dbms_output.put_line('Expired Blocks = '||v_expired_blocks);
  dbms_output.put_line('Expired Bytes = '||v_expired_bytes);
  dbms_output.put_line('UNExpired Blocks = '||v_unexpired_blocks);
  dbms_output.put_line('UNExpired Bytes = '||v_unexpired_bytes);
end;
/



And I see this output:

Segment Size blocks = 2619024
Segment Size bytes = 21455044608
Used blocks = 1425916
Used bytes = 11681103872
Expired Blocks = 1190111
Expired Bytes = 9749389312
UNExpired Blocks = 0
UNExpired Bytes = 0


So, according to this, the segment is 21.4 GB:

11.7 GB is used space
 9.7 GB is expired space
   0 GB is unexpired space

So if I have 9.7 GB of expired space, why can't I use it? My file is only about 240 MB, and I should have 9.7 GB available.

So my questions out of this are (if anyone knows the answers):

1) How does this happen, and how do I avoid it?

2) How do I size tablespaces for DBFS? They need more space available than I need for the filesystem.

3) How do I monitor the sizing, since df -k reports space as available that apparently cannot be used?

4) How does the "retention" clause fit into this? Retention defaults to "auto" rather than "none". Can I set it to "none", and if so, what happens, and does that solve my problem? (See the sketch below.)
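For question 4, the retention I am referring to is the SecureFiles LOB retention clause. On an ordinary table it is declared like this (illustrative only; for DBFS the underlying table is created by the dbfs_create_filesystem scripts, so check with Support before changing it there):

create table t_example
( id   number,
  data blob
)
lob (data) store as securefile
( retention none );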


Oh, and I wanted to make sure that I included the output of what's in the tablespace.
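The output came from a query along these lines against dba_segments (the tablespace name is illustrative; sizes are in MB):

select segment_name, segment_type, round(bytes/1024/1024) segment_size
from dba_segments
where tablespace_name = 'DBFS_TS'
order by bytes desc;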

SEGMENT_NAME                                             SEGMENT_TYPE       SEGMENT_SIZE
-------------------------------------------------------- ------------------ ------------
LOB_SFS$_FST_12345                                       LOBSEGMENT                20461
T_ADMIN                                                  TABLE                        17
IP_SFS$_FST_12345                                        INDEX                         4
IPG_SFS$_FST_12345                                       INDEX                         3
IG_SFS$_FST_12345                                        INDEX                         2
SYS_IL0000095835C00007$$                                 LOBINDEX                      0




UPDATE (6/13/12)  --

After working with Support on this, it was filed as a bug. This occurred because I was using DBFS as a filesystem for my dbreplay capture. Thinking about it, the db capture is probably the most intensive workload I could throw at DBFS. Not only does it simultaneously write to multiple files, it writes to those files from multiple nodes. In my capture there were 4 nodes writing to hundreds of files at the same time.
   I will be testing the patch to see if it corrects the problem. Support is telling me that the writing across multiple nodes is causing some of the issues.