I ran into an interesting problem last week with a customer who was performance testing a new system against their current system.
The script was pretty simple: a PL/SQL package that inserted 10M rows into a table, committing every 1,000 rows. To make the data look more "normal", the customer used DBMS_RANDOM.
The basic insert looked like this.
INSERT INTO TEST_TABLE1
  (ID, DTVAL, DTSTAMP, COL1, NUM)
VALUES
  (:B1,
   SYSDATE,
   SYSTIMESTAMP,
   DBMS_RANDOM.STRING('A', 100),
   DBMS_RANDOM.RANDOM);
To me it seemed like a simple test. Unfortunately, the performance results were not as expected. To step back for a minute: the current system was running 11.1.0.7, and the new system they were benchmarking against was 11.2.0.4.
I even had them check the data in the table to make sure nothing had changed in the output. Everything looked the same.
You wouldn't think that would matter, but the differences in DBMS_RANDOM between versions turned out to be the issue. DBMS_RANDOM periodically has logic changes, so its performance cannot be compared between versions in a performance benchmark.
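A quick way to confirm that, independent of the insert, is to time the DBMS_RANDOM calls on their own. Here is a minimal sketch (not part of the customer's script) that exercises the same two calls the insert makes:

SET SERVEROUTPUT ON
DECLARE
   t0 PLS_INTEGER;
   s  VARCHAR2(100);
   n  NUMBER;
BEGIN
   t0 := DBMS_UTILITY.GET_TIME;            -- elapsed time in centiseconds
   FOR i IN 1 .. 100000
   LOOP
      s := DBMS_RANDOM.STRING('A', 100);   -- same two calls the insert makes
      n := DBMS_RANDOM.RANDOM;
   END LOOP;
   DBMS_OUTPUT.PUT_LINE('DBMS_RANDOM only: ' ||
      (DBMS_UTILITY.GET_TIME - t0)/100 || ' seconds');
END;
/

Run side by side on each version, a loop like this shows whether the package itself is what got slower, with no insert path involved.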
I had the customer re-run the tests with constants instead of calls to DBMS_RANDOM, and the results were much better.
To reproduce what they saw, I tested against 11.2.0.2 and 12.1.0.1 (on the same machine); I could not get copies of 11.1.0.7 and 11.2.0.4 to test. These two versions were enough to show the difference that affected the customer's benchmark.
Below I've included the TKPROF-formatted output from the trace file on 11.2.0.2.
SQL ID: fg7gf0m6a2ca4 Plan Hash: 0
INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, DBMS_RANDOM.STRING('A', 100),
DBMS_RANDOM.RANDOM)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 100000 25.81 39.51 0 2464 119161 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 100001 25.81 39.51 0 2464 119161 100000
Notice the CPU time: 25.81 seconds of CPU on 11.2.0.2.
Below is the TKPROF-formatted output from the trace file on 12.1.0.1.
INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, DBMS_RANDOM.STRING('A', 100),
DBMS_RANDOM.RANDOM)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 100000 74.01 90.31 1 3722 111116 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 100001 74.01 90.31 1 3722 111116 100000
This time it took 74.01 seconds of CPU for the same statement executed the same number of times.
That is almost 3x more CPU in 12.1.0.1 than in 11.2.0.2.
Next, I re-ran it with constants.
11.2.0.2 Test
INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, 'a', 1)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 100000 4.62 6.02 0 536 108087 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 100001 4.62 6.02 0 536 108087 100000
12.1.0.1 Test
INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, 'a', 1)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 100000 4.78 7.09 1 586 105731 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 100001 4.78 7.09 1 586 105731 100000
Wow. Now that I used constants, the CPU time was almost identical between the two versions.
There are absolutely some major performance differences with DBMS_RANDOM between versions.
The moral of the story is: don't use internal packages for benchmarking (unless they are critical to your application).
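If you need random-looking data, one alternative is to generate it once, outside the timed run, so both systems insert identical rows and DBMS_RANDOM never shows up in the measurement. A rough sketch (the staging table name test_data_stage is just an example):

create table test_data_stage as
   select level                        as id,
          dbms_random.string('A', 100) as col1,
          dbms_random.random           as num
     from dual
  connect by level <= 100000;

-- timed portion: no DBMS_RANDOM calls at all
insert into test_table1 (id, dtval, dtstamp, col1, num)
   select id, sysdate, systimestamp, col1, num
     from test_data_stage;
commit;

The staging rows can also be fetched and inserted row by row if the loop-and-commit pattern is what you are measuring.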
Finally, this is the test script I used.
SET TIME ON
SET TIMING ON
SET ECHO ON
SET SERVEROUTPUT ON
SET TERMOUT ON
SET VERIFY ON
SET FEEDBACK ON
WHENEVER SQLERROR CONTINUE
select to_char(sysdate,'mm/dd/yyyy hh:mi:ss AM') from dual;
prompt create the test_table1
drop table test_table1;
create table test_table1
(
id NUMBER,
dtval DATE,
dtstamp TIMESTAMP,
col1 varchar2(100),
num NUMBER
);
prompt Insert 100K with commit every 100 records
alter session set tracefile_identifier = 'test_sess_1';
exec dbms_monitor.session_trace_enable( waits => true );
DECLARE
   x  PLS_INTEGER;
   rn NUMBER(20);
BEGIN
   -- seed the random generator from the timer
   SELECT hsecs
     INTO rn
     FROM v$timer;
   dbms_random.initialize(rn);
   FOR i IN 1 .. 100000
   LOOP
      x  := dbms_random.random;
      rn := x;
      insert into test_table1 (id, dtval, dtstamp, col1, num)
      values (x, sysdate, systimestamp, dbms_random.string('A', 100), dbms_random.random);
      -- commit every 100 rows
      if ( mod(i,100) = 0 ) then
         commit;
      end if;
   END LOOP;
   dbms_random.terminate;
END;
/
EXEC DBMS_MONITOR.session_trace_disable;
prompt Count of all records
select count(*) from test_table1;
select count(distinct col1) from test_table1 ;
select count(distinct num) from test_table1 ;
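Once the script completes, the raw trace file can be formatted with TKPROF to produce the reports shown above. Something along these lines does it; the trace file name here is only an example, and the real name comes from the diagnostic destination plus the tracefile_identifier set earlier:

tkprof orcl_ora_12345_test_sess_1.trc test_sess_1.prf sys=no sort=exeela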