28 November, 2008

Database Event Trigger and SYSOPER

Responding to a forums.oracle.com query on SYSOPER and ALTER SYSTEM, I quickly wrote a Database Event trigger.
Here I demonstrate a Database Event Trigger written to execute certain commands on shutdown, created as a BEFORE SHUTDOWN trigger.
I also demonstrate how the SYSOPER privilege is to be used.

This is the Trigger code :

SQL> drop trigger shutdown_trigger;

Trigger dropped.

SQL> create trigger shutdown_trigger before shutdown on database
2 begin
3 dbms_system.ksdwrt(2,'Database Shutdown manually initiated');
4 execute immediate 'alter system switch logfile ';
5 end;
6 /

Trigger created.

SQL>


This is how I create an account with SYSOPER privileges so that the account can be used for shutdowns.
Note how the account had to explicitly connect AS SYSOPER in order to be able to effect a shutdown.

SQL> create user shutdown_user identified by shutdown_user;

User created.

SQL> grant create session, sysoper to shutdown_user;

Grant succeeded.

SQL> connect shutdown_user/shutdown_user
Connected.
SQL> shutdown immediate
ORA-01031: insufficient privileges
SQL> shutdown
ORA-01031: insufficient privileges
SQL> connect shutdown_user/shutdown_user as sysoper
Connected.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
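
As an aside (a quick check, not part of the run above), the SYSOPER grant can be verified in the password file view :

SQL> select username, sysdba, sysoper from v$pwfile_users;

SHUTDOWN_USER should be listed with SYSOPER showing TRUE.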


And here, in the alert.log, we see evidence of the Trigger :

Shutting down instance: further logons disabled
Fri Nov 28 22:05:53 2008
Stopping background process CJQ0
Fri Nov 28 22:05:53 2008
Stopping background process QMNC
Fri Nov 28 22:05:56 2008
Stopping background process MMNL
Fri Nov 28 22:05:57 2008
Stopping background process MMON
Database Shutdown manually initiated
Fri Nov 28 22:05:58 2008
Shutting down instance (immediate)
License high water mark = 8
Fri Nov 28 22:05:58 2008
Stopping Job queue slave processes, flags = 7
Fri Nov 28 22:05:58 2008
Job queue slave processes stopped
Waiting for dispatcher 'D000' to shutdown
Fri Nov 28 22:05:58 2008
Thread 1 advanced to log sequence 9 (LGWR switch)
Current log# 3 seq# 9 mem# 0: /oracle_fs/Databases/ORT24FS/redo03.dbf
Fri Nov 28 22:06:00 2008
All dispatchers and shared servers shutdown
Fri Nov 28 22:06:00 2008
ALTER DATABASE CLOSE NORMAL


DBMS_SYSTEM.KSDWRT is not a supported call, so use it at your own risk.

27 November, 2008

Tracing a Process -- Tracing DBWR

-Do- try this at home. DON'T try this at work !

Here, I see my DBWR process does not have tracing enabled currently :

SQL> !ps -ef |grep dbw
2001 4051 1 0 22:11 ? 00:00:00 asm_dbw0_+ASM
ora10204 4101 1 0 22:12 ? 00:00:00 ora_dbw0_ORT24FS
ora10204 4157 4154 0 22:13 pts/1 00:00:00 /bin/sh -c ps -ef |grep dbw
ora10204 4159 4157 0 22:13 pts/1 00:00:00 grep dbw

SQL> select s.sid, s.serial#, s.sql_trace from v$session s, v$process p
2 where s.paddr=p.addr
3 and p.spid=4101;

SID SERIAL# SQL_TRAC
---------- ---------- --------
167 1 DISABLED

SQL>


Now, I enable tracing on DBWR :

SQL> exec dbms_monitor.session_trace_enable(167,1);

PL/SQL procedure successfully completed.

SQL> select s.sid, s.serial#, s.sql_trace from v$session s, v$process p
2 where s.paddr=p.addr
3 and p.spid=4101;

SID SERIAL# SQL_TRAC
---------- ---------- --------
167 1 ENABLED

SQL>


I can see a trace file being written to :

ora10204>pwd
/oracle_fs/ora10204/admin/ORT24FS/bdump
ora10204>ls -ltr|tail -3
-rw-r----- 1 ora10204 dba 900 Nov 27 22:12 ort24fs_lgwr_4103.trc
-rw-r----- 1 ora10204 dba 1659410 Nov 27 22:13 alert_ORT24FS.log
-rw-r----- 1 ora10204 dba 6457 Nov 27 22:18 ort24fs_dbw0_4101.trc
ora10204>cat ort24fs_dbw0_4101.trc |head -20
/oracle_fs/ora10204/admin/ORT24FS/bdump/ort24fs_dbw0_4101.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /oracle_fs/ora10204/product/10.2.0.4
System name: Linux
Node name: linux64
Release: 2.6.18-53.el5
Version: #1 SMP Sat Nov 10 19:37:22 EST 2007
Machine: x86_64
Instance name: ORT24FS
Redo thread mounted by this instance: 1
Oracle process number: 5
Unix process pid: 4101, image: oracle@linux64 (DBW0)

*** 2008-11-27 22:15:20.886
*** SERVICE NAME:(SYS$BACKGROUND) 2008-11-27 22:15:20.866
*** SESSION ID:(167.1) 2008-11-27 22:15:20.866
WAIT #0: nam='rdbms ipc message' ela= 625573 timeout=64 p2=0 p3=0 obj#=-1 tim=1199018868033615
WAIT #0: nam='rdbms ipc message' ela= 2930717 timeout=300 p2=0 p3=0 obj#=-1 tim=1199018870984135
WAIT #0: nam='rdbms ipc message' ela= 2930648 timeout=300 p2=0 p3=0 obj#=-1 tim=1199018873915147
ora10204>


I disable tracing :

SQL> exec dbms_monitor.session_trace_disable(167,1);

PL/SQL procedure successfully completed.

SQL> select s.sid, s.serial#, s.sql_trace from v$session s, v$process p
2 where s.paddr=p.addr
3 and p.spid=4101;

SID SERIAL# SQL_TRAC
---------- ---------- --------
167 1 DISABLED

SQL>


I then see that the trace file is no longer being updated. Unfortunately, stopping tracing on a process does NOT write a message like "tracing stopped" to the trace file; you have to watch the trace file or run an "fuser" command on Unix/Linux to see if it is still being written to. (On Windows, I guess the file would be locked by the process if tracing hasn't been disabled.)
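
As an aside, an alternative way to trace a background process (a sketch only, requiring a SYSDBA connection, and not part of the run above) is to attach to it with oradebug :

SQL> connect / as sysdba
SQL> oradebug setospid 4101
SQL> oradebug event 10046 trace name context forever, level 8
SQL> REM ... let it run for a while, then :
SQL> oradebug event 10046 trace name context off
SQL> oradebug tracefile_name

The ORADEBUG TRACEFILE_NAME command conveniently reports the full path of the trace file that the attached process writes to.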

24 November, 2008

expdp to the default directory without the DBA role

Here I show how EXP_FULL_DATABASE (and not the DBA Role) is sufficient for an account to be able to export to the default directory.



ora10204>pwd
/oracle_fs/ora10204/admin/ORT24FS/dpdump
ora10204>ls -l
total 0
ora10204>sqlplus hemant/hemant

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Nov 24 22:15:31 2008

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user expdp_test identified by expdp_test;

User created.

SQL> alter user expdp_test default tablespace users;

User altered.

SQL> alter user expdp_test quota unlimited on users;

User altered.

SQL> grant create session, create table to expdp_test;

Grant succeeded.

SQL> connect expdp_test/expdp_test
Connected.
SQL> create table my_exp_test_tab (col_1 varchar2(5));

Table created.

SQL> insert into my_exp_test_tab values ('a');

1 row created.

SQL> commit;

Commit complete.

SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ora10204>
ora10204>expdp expdp_test/expdp_test

Export: Release 10.2.0.4.0 - 64bit Production on Monday, 24 November, 2008 22:17:19

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39145: directory object parameter must be specified and non-null


ora10204>sqlplus

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Nov 24 22:18:04 2008

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> grant exp_full_database to expdp_test;

Grant succeeded.

SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ora10204>
ora10204>expdp expdp_test/expdp_test

Export: Release 10.2.0.4.0 - 64bit Production on Monday, 24 November, 2008 22:18:54

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "EXPDP_TEST"."SYS_EXPORT_SCHEMA_01": expdp_test/********
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
. . exported "EXPDP_TEST"."MY_EXP_TEST_TAB" 4.937 KB 1 rows
Master table "EXPDP_TEST"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for EXPDP_TEST.SYS_EXPORT_SCHEMA_01 is:
/oracle_fs/ora10204/admin/ORT24FS/dpdump/expdat.dmp
Job "EXPDP_TEST"."SYS_EXPORT_SCHEMA_01" successfully completed at 22:19:22

ora10204>pwd
/oracle_fs/ora10204/admin/ORT24FS/dpdump
ora10204>ls -l
total 164
-rw-r----- 1 ora10204 dba 159744 Nov 24 22:19 expdat.dmp
-rw-r--r-- 1 ora10204 dba 1553 Nov 24 22:19 export.log
ora10204>



Without the EXP_FULL_DATABASE privileges, the account could not do an export to the default DATA_PUMP_DIR.
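
Alternatively (a sketch, not demonstrated above), instead of granting EXP_FULL_DATABASE, the account can be granted access to a specific directory object and then name it explicitly :

SQL> grant read, write on directory data_pump_dir to expdp_test;

ora10204>expdp expdp_test/expdp_test directory=DATA_PUMP_DIR

Relying on the default DATA_PUMP_DIR without a DIRECTORY parameter, as shown above, remains reserved for privileged users such as those with EXP_FULL_DATABASE.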

14 November, 2008

Data Pump using default directory

A default "DATA_PUMP_DIR" is available for "privileged users" (Not necessarily having the DBA Role. See the example with a non DBA account here).


SQL> select owner, directory_name, directory_path from dba_directories where directory_name like 'DATA_PU%';

OWNER DIRECTORY_NAME
------------------------------ ------------------------------
DIRECTORY_PATH
--------------------------------------------------------------------------------
SYS DATA_PUMP_DIR
/oracle_fs/ora10204/admin/ORT24FS/dpdump/


SQL>

However, since Oracle uses a default filename (expdat.dmp), it will not overwrite an existing datapump export.


ora10204>expdp hemant/hemant

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 14 November, 2008 23:29:51

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/oracle_fs/ora10204/admin/ORT24FS/dpdump/expdat.dmp"
ORA-27038: created file already exists
Additional information: 1


ora10204>

Thus, the pre-existing dumpfile has to be removed :

ora10204>cd $ORACLE_BASE/admin/ORT24FS/dpdump
ora10204>ls -ltr|tail -2
-rw-r----- 1 ora10204 dba 483500032 Nov 13 22:38 expdat.dmp
-rw-r--r-- 1 ora10204 dba 516 Nov 14 23:30 export.log
ora10204>cat export.log
;;;
Export: Release 10.2.0.4.0 - 64bit Production on Friday, 14 November, 2008 23:29:51

Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/oracle_fs/ora10204/admin/ORT24FS/dpdump/expdat.dmp"
ORA-27038: created file already exists
Additional information: 1
ora10204>rm expdat.dmp
ora10204>
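
Alternatively (not shown in this run), a non-default dump file name can be specified so that the new export does not collide with the existing expdat.dmp. The file names below are only illustrative :

ora10204>expdp hemant/hemant dumpfile=expdat_2.dmp logfile=exp_2.log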


I am now able to run a datapump export :

ora10204>cd
ora10204>expdp hemant/hemant

Export: Release 10.2.0.4.0 - 64bit Production on Friday, 14 November, 2008 23:32:42

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "HEMANT"."SYS_EXPORT_SCHEMA_01": hemant/********
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 534.0 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HEMANT"."SOURCE_TABLE" 150.9 MB 1620512 rows
. . exported "HEMANT"."TARGET_ASM" 150.9 MB 1620512 rows
. . exported "HEMANT"."TARGET_FS" 150.9 MB 1620512 rows
. . exported "HEMANT"."TARGET_TABLE" 4.721 MB 50612 rows
. . exported "HEMANT"."T_AH_O_G_U" 3.328 MB 105329 rows
. . exported "HEMANT"."TEST_SUM_NUMBERS" 5.265 KB 4 rows
Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is:
/oracle_fs/ora10204/admin/ORT24FS/dpdump/expdat.dmp
Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at 23:33:43

ora10204>
ora10204>cd $ORACLE_BASE/admin/ORT24FS/dpdump
ora10204>ls -ltr|tail -2
-rw-r--r-- 1 ora10204 dba 1948 Nov 14 23:33 export.log
-rw-r----- 1 ora10204 dba 483500032 Nov 14 23:33 expdat.dmp
ora10204>ls -ltr
total 472644
-rw-r--r-- 1 ora10204 dba 116 Jun 14 23:30 dp.log
-rw-r--r-- 1 ora10204 dba 1948 Nov 14 23:33 export.log
-rw-r----- 1 ora10204 dba 483500032 Nov 14 23:33 expdat.dmp
ora10204>
ora10204>cat export.log
;;;
Export: Release 10.2.0.4.0 - 64bit Production on Friday, 14 November, 2008 23:32:42

Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "HEMANT"."SYS_EXPORT_SCHEMA_01": hemant/********
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 534.0 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HEMANT"."SOURCE_TABLE" 150.9 MB 1620512 rows
. . exported "HEMANT"."TARGET_ASM" 150.9 MB 1620512 rows
. . exported "HEMANT"."TARGET_FS" 150.9 MB 1620512 rows
. . exported "HEMANT"."TARGET_TABLE" 4.721 MB 50612 rows
. . exported "HEMANT"."T_AH_O_G_U" 3.328 MB 105329 rows
. . exported "HEMANT"."TEST_SUM_NUMBERS" 5.265 KB 4 rows
Master table "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HEMANT.SYS_EXPORT_SCHEMA_01 is:
/oracle_fs/ora10204/admin/ORT24FS/dpdump/expdat.dmp
Job "HEMANT"."SYS_EXPORT_SCHEMA_01" successfully completed at 23:33:43
ora10204>


I now drop some of the tables :

ora10204>sqlplus

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Nov 14 23:35:41 2008

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Enter user-name: hemant/hemant

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table source_table purge;

Table dropped.

SQL> drop table target_asm purge;

Table dropped.

SQL> drop table target_fs purge;

Table dropped.

SQL> drop table target_table purge;

Table dropped.

SQL> quit


I am now able to import, again using the default DATA_PUMP_DIR and expdat.dmp :


ora10204>impdp hemant/hemant

Import: Release 10.2.0.4.0 - 64bit Production on Friday, 14 November, 2008 23:37:02

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "HEMANT"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "HEMANT"."SYS_IMPORT_FULL_01": hemant/********
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"HEMANT" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39151: Table "HEMANT"."TEST_SUM_NUMBERS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "HEMANT"."T_AH_O_G_U" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HEMANT"."SOURCE_TABLE" 150.9 MB 1620512 rows
. . imported "HEMANT"."TARGET_ASM" 150.9 MB 1620512 rows
. . imported "HEMANT"."TARGET_FS" 150.9 MB 1620512 rows
. . imported "HEMANT"."TARGET_TABLE" 4.721 MB 50612 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "HEMANT"."SYS_IMPORT_FULL_01" completed with 3 error(s) at 23:37:48

ora10204>


So, expdp and impdp are easy to use !

10 November, 2008

Histogram (skew) on Unique Values

Till a few days ago, I thought that, since a NOT NULL column holding only Unique Values has no data skew (no value is repeated) and therefore needs no Histogram, Oracle would not generate a Histogram on such a column.

However, Randolf Geist, in a recent forums.oracle.com thread, suggested that it is possible for Oracle's "auto" behaviour to generate a Histogram on such data where there is a noticeable gap in the data. That seems to be a result of the way the skewness detection algorithm works.

So I set out to test it. This quick test is on 10.2.0.4 (64bit Linux). I have 105,329 unique values ranging from "2" to "259290". I have some gaps in the values, but I ensure that Uniqueness is maintained.


SQL>
SQL> drop table t_ah_o_g_u ;

Table dropped.

SQL> create table t_ah_o_g_u (gapped_uniq_number number not null, a_string varchar2(128));

Table created.

SQL>
SQL> create unique index t_ah_o_g_u_uk on t_ah_o_g_u (gapped_uniq_number);

Index created.

SQL>
SQL> REM Insert Unique Values but with significant gaps between Ranges
SQL> insert into t_ah_o_g_u select object_id, object_name from dba_objects where object_id between 2 and 20;

19 rows created.

SQL> insert into t_ah_o_g_u select object_id, object_name from dba_objects where object_id between 30000 and 32000;

2001 rows created.

SQL> insert into t_ah_o_g_u select object_id, object_name from dba_objects where object_id > 50000;

2087 rows created.

SQL> insert into t_ah_o_g_u select object_id+100000, object_name from dba_objects ;

50611 rows created.

SQL> insert into t_ah_o_g_u select object_id+200000, object_name from dba_objects ;

50611 rows created.

SQL>
SQL>
SQL> select count(*) from t_ah_o_g_u;

COUNT(*)
----------
105329

SQL> REM let's check the skew
SQL> select min(gapped_uniq_number), max(gapped_uniq_number) from t_ah_o_g_u;

MIN(GAPPED_UNIQ_NUMBER) MAX(GAPPED_UNIQ_NUMBER)
----------------------- -----------------------
2 259290

SQL> select count(*) from t_ah_o_g_u where gapped_uniq_number < 50000 ;

COUNT(*)
----------
2020

SQL> select count(*) from t_ah_o_g_u where gapped_uniq_number between 50000 and 100000;

COUNT(*)
----------
2087

SQL> select count(*) from t_ah_o_g_u where gapped_uniq_number between 100000 and 200000;

COUNT(*)
----------
50611

SQL> select count(*) from t_ah_o_g_u where gapped_uniq_number > 200000;

COUNT(*)
----------
50611

SQL>
SQL> REM =============================================================================================
SQL> REM =============================================================================================
SQL> REM We know that where the column is a Unique Key, there can be no data skew -- and we need no histogram
SQL> REM Does Oracle's autogather create a histogram ?
SQL>
SQL> REM =============================================================================================
SQL> REM with SIZE AUTO
SQL> exec dbms_stats.gather_table_stats(user,'T_AH_O_G_U',method_opt=>'FOR ALL COLUMNS SIZE AUTO', estimate_percent=>100, cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>
SQL> REM Let's look for histograms
SQL> select num_rows, sample_size
2 from user_tables where table_name = 'T_AH_O_G_U';

NUM_ROWS SAMPLE_SIZE
---------- -----------
105329 105329

SQL> select low_value, high_value, num_distinct,num_buckets,sample_size, histogram
2 from user_tab_col_statistics where table_name = 'T_AH_O_G_U' and column_name = 'GAPPED_UNIQ_NUMBER';

LOW_VALUE HIGH_VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
NUM_DISTINCT NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
------------ ----------- ----------- ---------------
C103 C31A5D5B
105329 254 105329 HEIGHT BALANCED


SQL> select count(*)
2 from user_tab_histograms where table_name = 'T_AH_O_G_U' and column_name = 'GAPPED_UNIQ_NUMBER';

COUNT(*)
----------
255

SQL> REM =============================================================================================
SQL> REM =============================================================================================
SQL> REM !!!! Yes, the SIZE AUTO did create a Height Balanced Histogram with 254 buckets !
SQL> REM =============================================================================================
SQL> REM =============================================================================================
SQL>
SQL> REM =============================================================================================
SQL> REM with SIZE SKEWONLY
SQL> exec dbms_stats.delete_table_stats(user,'T_AH_O_G_U');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'T_AH_O_G_U',method_opt=>'FOR ALL COLUMNS SIZE SKEWONLY', estimate_percent=>100, cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>
SQL> REM Let's look for histograms
SQL> select num_rows, sample_size
2 from user_tables where table_name = 'T_AH_O_G_U';

NUM_ROWS SAMPLE_SIZE
---------- -----------
105329 105329

SQL> select low_value, high_value, num_distinct,num_buckets,sample_size, histogram
2 from user_tab_col_statistics where table_name = 'T_AH_O_G_U' and column_name = 'GAPPED_UNIQ_NUMBER';

LOW_VALUE HIGH_VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
NUM_DISTINCT NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
------------ ----------- ----------- ---------------
C103 C31A5D5B
105329 254 105329 HEIGHT BALANCED


SQL> select count(*)
2 from user_tab_histograms where table_name = 'T_AH_O_G_U' and column_name = 'GAPPED_UNIQ_NUMBER';

COUNT(*)
----------
255

SQL> REM =============================================================================================
SQL> REM =============================================================================================
SQL> REM !!!! Yes, the SKEWONLY did create a Height Balanced Histogram with 254 buckets !
SQL> REM =============================================================================================
SQL> REM =============================================================================================
SQL>
SQL>
SQL>
SQL> REM =============================================================================================
SQL> REM with SIZE 1 -- reverting to the 9i default behaviour
SQL> exec dbms_stats.delete_table_stats(user,'T_AH_O_G_U');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'T_AH_O_G_U',method_opt=>'FOR ALL COLUMNS SIZE 1', estimate_percent=>100, cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>
SQL> REM Let's look for histograms
SQL> select num_rows, sample_size
2 from user_tables where table_name = 'T_AH_O_G_U';

NUM_ROWS SAMPLE_SIZE
---------- -----------
105329 105329

SQL> select low_value, high_value, num_distinct,num_buckets,sample_size, histogram
2 from user_tab_col_statistics where table_name = 'T_AH_O_G_U' and column_name = 'GAPPED_UNIQ_NUMBER';

LOW_VALUE HIGH_VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
NUM_DISTINCT NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
------------ ----------- ----------- ---------------
C103 C31A5D5B
105329 1 105329 NONE


SQL> select count(*)
2 from user_tab_histograms where table_name = 'T_AH_O_G_U' and column_name = 'GAPPED_UNIQ_NUMBER';

COUNT(*)
----------
2

SQL> REM =============================================================================================
SQL> REM This is the correct (ie as I expect it to be) behaviour ! -- No Histogram on a Unique Key
SQL> REM =============================================================================================
SQL>
SQL> spool off


So we do have a case where SIZE AUTO and SIZE SKEWONLY create a Histogram on Unique Values.

Would we have such data in real life ? Certainly.
These numbers could be, say, Identifier numbers for a Rewards Program membership, which are reset every 6 months or every year such that a large gap is created between each set of numbers (the Sequence generating the numbers is recreated with a large, gapped, START WITH value over the previous incarnation's LAST_NUMBER).
These could be Product Codes for different sets of products, with a range (0 to 50,000 for one Business Unit or Subsidiary, 50,000 to 100,000 for another Business Unit or Subsidiary, etc) allocated to each Business Unit/Subsidiary. These could be Bar Code numbers generated in batches with a deliberately created gap to prevent overlapping.
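
If such a Histogram is unwanted, one way to remove it (a sketch, on 10.2) is to re-gather statistics for just that column with SIZE 1 :

SQL> exec dbms_stats.gather_table_stats(user,'T_AH_O_G_U',method_opt=>'FOR COLUMNS GAPPED_UNIQ_NUMBER SIZE 1', estimate_percent=>100);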

08 November, 2008

OPEN RESETLOGS without really doing a Recovery

The OPEN RESETLOGS command checks whether it has been called after an Incomplete Recovery and refuses to execute if the database has been shut down NORMAL (no recovery pending).


Here is how the "Incomplete Recovery" could be simulated :
(I learnt this trick from HJR (dizwell) on forums.oracle.com !)

First I confirm that the database is a NORMAL shutdown :

ora10204>sqlplus

SQL*Plus: Release 10.2.0.4.0 - Production on Sat Nov 8 23:23:43 2008

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2087672 bytes
Variable Size 213910792 bytes
Database Buffers 616562688 bytes
Redo Buffers 6299648 bytes
Database mounted.
Database opened.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit



Now I simulate a Recovery and then do an OPEN RESETLOGS :

ora10204>sqlplus

SQL*Plus: Release 10.2.0.4.0 - Production on Sat Nov 8 23:25:43 2008

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 2087672 bytes
Variable Size 213910792 bytes
Database Buffers 616562688 bytes
Redo Buffers 6299648 bytes
Database mounted.
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery


SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required


SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery


SQL> recover database until cancel;
Media recovery complete.
SQL> alter database open resetlogs;

Database altered.

SQL>



The first "RECOVER DATABASE" command failed with an ORA-264 error. This is explained as :


00264, 00000, "no recovery required"
// *Cause: An attempt was made to perform media recovery on files that do
//         not need any type of recovery.
// *Action: Do not attempt to perform media recovery on the selected
// files. Check to see that the filenames were entered properly.
// If not, retry the command with the proper filenames.


The "UNTIL CANCEL" in the second RECOVER DATABASE was an instruction to Oracle that I was attempting an Incomplete Recovery.
Therefore, the OPEN RESETLOGS after that allowed me to proceed !
These are the messages in the alert.log relating to the second (simulated Incomplete Recovery) :


Sat Nov 8 23:26:37 2008
ALTER DATABASE RECOVER database until cancel
Media Recovery Start
Sat Nov 8 23:26:37 2008
SUCCESS: diskgroup DG_1_2G was dismounted
SUCCESS: diskgroup DG_1_2G was mounted
Sat Nov 8 23:26:38 2008
parallel recovery started with 2 processes
Media Recovery Not Required
Sat Nov 8 23:26:38 2008
SUCCESS: diskgroup DG_1_2G was dismounted
SUCCESS: diskgroup DG_1_2G was mounted
SUCCESS: diskgroup DG_1_2G was dismounted
Sat Nov 8 23:26:38 2008
Completed: ALTER DATABASE RECOVER database until cancel
Sat Nov 8 23:26:43 2008
alter database open resetlogs
Sat Nov 8 23:26:46 2008
SUCCESS: diskgroup DG_1_2G was mounted
Sat Nov 8 23:26:48 2008
RESETLOGS after complete recovery through change 99552446
Resetting resetlogs activation ID 4148073936 (0xf73e95d0)
Sat Nov 8 23:26:53 2008
Setting recovery target incarnation to 11
Sat Nov 8 23:26:53 2008
SUCCESS: diskgroup DG_1_2G was dismounted
SUCCESS: diskgroup DG_1_2G was mounted
Sat Nov 8 23:26:54 2008
Assigning activation ID 4150071358 (0xf75d103e)
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=23, OS id=4290
Sat Nov 8 23:26:54 2008
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=24, OS id=4292
Sat Nov 8 23:26:54 2008
Thread 1 opened at log sequence 1
Current log# 1 seq# 1 mem# 0: /oracle_fs/Databases/ORT24FS/redo01.dbf
Successful open of redo thread 1


Although no Redo actually needed to be applied, Oracle did do an OPEN RESETLOGS, changed the activation ID and reset the Log Sequence Number to 1.

Here we can see that this was the 11th incarnation of this database :


SQL> select dbid, name, created, resetlogs_change#, resetlogs_time, activation#, recovery_target_incarnation#, last_open_incarnation#, current_scn
2 from v$database;

DBID NAME CREATED RESETLOGS_CHANGE# RESETLOGS_TIME
---------- --------- ------------------ ----------------- ------------------
ACTIVATION# RECOVERY_TARGET_INCARNATION# LAST_OPEN_INCARNATION# CURRENT_SCN
----------- ---------------------------- ---------------------- -----------
4137213278 ORT24FS 14-JUN-08 23:28:30 99552447 08-NOV-08 23:26:46
4150071358 11 11 99553429


SQL>
SQL> select incarnation#, resetlogs_change#, resetlogs_time, status, resetlogs_id from v$database_incarnation where incarnation#=11;

INCARNATION# RESETLOGS_CHANGE# RESETLOGS_TIME STATUS RESETLOGS_ID
------------ ----------------- ------------------ ------- ------------
11 99552447 08-NOV-08 23:26:46 CURRENT 670289206

SQL>



Therefore, it is possible to simulate an Incomplete Recovery to fool the ALTER DATABASE OPEN RESETLOGS into thinking that Recovery is Incomplete -- even though we know we have not lost any transactions as the last shutdown was a NORMAL shutdown.

01 November, 2008

Numbers and NULLs

In a recent forums.oracle.com thread I had mistakenly stated that "Therefore, even if 1 row out of a million rows has a NULL, you'd get a SUM result as a NULL" after correctly asserting that "Adding a value to a NULL returns a NULL." I did point out later that although SUM (and AVG) ignores NULLs, a COUNT(*) doesn't.

Here is a test case about
a) how Average != Sum/Count if any of the rows contains a NULL.
b) how adding a NULL to a number returns a NULL (and adding a number to a NULL still returns a NULL !)


SQL>
SQL> drop table test_sum_numbers purge;

Table dropped.

SQL>
SQL> REM COL_2 has a NULL in one of 4 rows
SQL> create table test_sum_numbers (col_1 number, col_2 number);

Table created.

SQL> insert into test_sum_numbers values (10, 10);

1 row created.

SQL> insert into test_sum_numbers values (20, 20);

1 row created.

SQL> insert into test_sum_numbers values (30, NULL);

1 row created.

SQL> insert into test_sum_numbers values (40, 40);

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> REM Here we see that SUM/COUNT != AVG because the SUM excluded NULLs but the COUNT of all rows includes NULLs
SQL> select count(*) Num_Rows, sum(col_1) Total_1, avg(col_1) Avg_1, sum(col_2) Total_2, avg(col_2) Avg_2 from test_sum_numbers ;

NUM_ROWS TOTAL_1 AVG_1 TOTAL_2 AVG_2
---------- ---------- ---------- ---------- ----------
4 100 25 70 23.3333333

SQL>
SQL> REM Here we see that we can "adjust" NULLs to 0 so that SUM/COUNT == AVG
SQL> select count(*) Num_Rows, sum(nvl(col_1,0)) Total_1, avg(nvl(col_1,0)) Avg_1, sum(nvl(col_2,0)) Total_2, avg(nvl(col_2,0)) Avg_2 from test_sum_numbers ;

NUM_ROWS TOTAL_1 AVG_1 TOTAL_2 AVG_2
---------- ---------- ---------- ---------- ----------
4 100 25 70 17.5

SQL>
SQL>
SQL> REM What happens if we were to add two sets of columns from two tables ?
SQL> select CNT_1 + CNT_2 Total_Rows, SC_1 + SC_2 Total_Sum, (SC_1+SC_2)/(CNT_1+CNT_2) Is_this_the_Average
2 from
3 (select count(*) CNT_1, sum(col_1) SC_1 from test_sum_numbers table_a),
4 (select count(*) CNT_2, sum(col_2) SC_2 from test_sum_numbers table_b)
5 /

TOTAL_ROWS TOTAL_SUM IS_THIS_THE_AVERAGE
---------- ---------- -------------------
8 170 21.25

SQL>
SQL> REM What happens if we ADD the values for each of the rows
SQL> set serveroutput on size 10000
SQL> variable i number;
SQL>
SQL> declare
2 cursor fetch_cursor is select col_1 from test_sum_numbers;
3 c1 number;
4 agg_1 number;
5 cnv_agg_1 number;
6 begin
7 agg_1 := 0;
8 open fetch_cursor ;
9 loop
10 fetch fetch_cursor into c1;
11 exit when fetch_cursor%NOTFOUND;
12 agg_1 := agg_1 + c1;
13 end loop;
14 dbms_output.put_line('Sum of col_1 is ' || agg_1);
15 cnv_agg_1 := nvl(agg_1,0);
16 dbms_output.put_line('Testing agg_1 for NULL : ' || cnv_agg_1);
17 end;
18 /
Sum of col_1 is 100
Testing agg_1 for NULL : 100

PL/SQL procedure successfully completed.

SQL>
SQL> declare
2 cursor fetch_cursor is select col_2 from test_sum_numbers;
3 c2 number;
4 agg_2 number;
5 cnv_agg_2 number;
6 begin
7 agg_2 := 0;
8 open fetch_cursor ;
9 loop
10 fetch fetch_cursor into c2;
11 exit when fetch_cursor%NOTFOUND;
12 agg_2 := agg_2 + c2;
13 end loop;
14 dbms_output.put_line('Sum of col_2 is ' || agg_2);
15 cnv_agg_2 := nvl(agg_2,0);
16 dbms_output.put_line('Testing agg_2 for NULL : ' || cnv_agg_2);
17 end;
18 /
Sum of col_2 is
Testing agg_2 for NULL : 0

PL/SQL procedure successfully completed.

SQL>
SQL> REM So the addition in col_2 returned a NULL ? Let's check it again
SQL>
SQL> declare
2 cursor fetch_cursor is select col_2 from test_sum_numbers order by col_1;
3 c2 number;
4 agg_2 number;
5 cnv_agg_2 number;
6 begin
7 agg_2 := 0;
8 dbms_output.put_line ('Begin aggregation at :' || agg_2);
9 open fetch_cursor ;
10 loop
11 fetch fetch_cursor into c2;
12 exit when fetch_cursor%NOTFOUND;
13 dbms_output.put_line('Have read value ' || c2);
14 agg_2 := agg_2 + c2;
15 dbms_output.put_line('The aggregation is now : ' || agg_2);
16 end loop;
17 dbms_output.put_line('Sum of col_2 is ' || agg_2);
18 cnv_agg_2 := nvl(agg_2,0);
19 dbms_output.put_line('Testing agg_2 for NULL : ' || cnv_agg_2);
20 end;
21 /
Begin aggregation at :0
Have read value 10
The aggregation is now : 10
Have read value 20
The aggregation is now : 30
Have read value
The aggregation is now :
Have read value 40
The aggregation is now :
Sum of col_2 is
Testing agg_2 for NULL : 0

PL/SQL procedure successfully completed.

SQL>
SQL> REM Addition of the col_2 value was running fine till it hit a NULL
SQL> REM The result of adding a number to a NULL was a NULL
SQL> REM Adding numbers to it after that still kept returning NULL
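
The obvious fix, if such a running aggregation must tolerate NULLs (a sketch, not part of the run above), is to apply NVL to the fetched value inside the loop :

agg_2 := agg_2 + nvl(c2,0);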


As robert has clarified in his comment, if I did a select count(col_2) it would ignore the NULL. Therefore, while a count(*) returns 4, a count(col_2) returns 3 (and count(col_1) -- all being NOT NULL values -- returns 4).
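
For completeness, a query along these lines (a sketch, not captured in the original run) would demonstrate the difference in a single pass :

SQL> select count(*), count(col_1), count(col_2) from test_sum_numbers;

With the four rows above, COUNT(*) and COUNT(col_1) would return 4 while COUNT(col_2) would return 3.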


19 October, 2008

Using STATISTICS_LEVEL='ALL' and 10046, level 8

Although I have, on occasion, used event 10046, level 8 to trace database operations in production, I am taken aback when a vendor wants to use both STATISTICS_LEVEL='ALL' and 10046, level 8 together for a long-running job, ostensibly to collect more detailed information.

I can understand STATISTICS_LEVEL='ALL' providing better information in V$ views. But does it provide "better" information when reviewing only AWR reports and tkprof outputs of the trace files (gathered with event 10046, level 8) ? I am not convinced of the need for this in production.

I have decided to run a simple test with an UPDATE statement on a 10 million row table.
One run will be with STATISTICS_LEVEL='TYPICAL' and tracing with event 10046 set at level 1.
The second run will be with STATISTICS_LEVEL='ALL' and tracing with event 10046 set at level 8.
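
The session-level settings for the second run would be along these lines (a sketch; the first run uses 'TYPICAL' and level 1 instead) :

alter session set statistics_level='ALL';
alter session set events '10046 trace name context forever, level 8';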

FIRST RUN, with STATISTICS_LEVEL='TYPICAL' and Level 1 Tracing :

update my_transactions mt
set mt.converted_amount =
(select st.transaction_amount*sf.conv_rate
from my_transactions st, my_fx_rates sf
where
st.transaction_c_c=sf.currency_code
and
trunc(st.transaction_date)=sf.conv_rate_date
and
st.transaction_id=mt.transaction_id),
mt.updated_at=sysdate

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.03 0 0 0 0
Execute 1 1074.94 1184.89 753724 62611107 10743805 10000000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1074.98 1184.93 753724 62611107 10743805 10000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE MY_TRANSACTIONS (cr=60928473 pr=742011 pw=0 time=1157314862 us)
20338218 TABLE ACCESS FULL MY_TRANSACTIONS (cr=10979090 pr=732914 pw=0 time=183093956 us)
10278974 NESTED LOOPS (cr=51631852 pr=20663 pw=0 time=715168290 us)
10338218 TABLE ACCESS BY INDEX ROWID MY_TRANSACTIONS (cr=31014657 pr=20469 pw=0 time=305940011 us)
10338218 INDEX UNIQUE SCAN MY_TRANSACTIONS_PK (cr=20676439 pr=20469 pw=0 time=163468271 us)(object id 58495)
10278974 TABLE ACCESS BY INDEX ROWID MY_FX_RATES (cr=20617195 pr=194 pw=0 time=256486318 us)
10278974 INDEX UNIQUE SCAN MY_FX_RATES_PK (cr=10338221 pr=119 pw=0 time=124931227 us)(object id 58464)

********************************************************************************


SECOND RUN, with STATISTICS_LEVEL='ALL' and Level 8 Tracing :


update my_transactions mt
set mt.converted_amount =
(select st.transaction_amount*sf.conv_rate
from my_transactions st, my_fx_rates sf
where
st.transaction_c_c=sf.currency_code
and
trunc(st.transaction_date)=sf.conv_rate_date
and
st.transaction_id=mt.transaction_id),
mt.updated_at=sysdate

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.03 0 0 0 0
Execute 1 1165.17 1261.06 753135 62114208 10724083 10000000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1165.20 1261.10 753135 62114208 10724083 10000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE MY_TRANSACTIONS (cr=60933001 pr=744851 pw=0 time=1240297420 us)
20239080 TABLE ACCESS FULL MY_TRANSACTIONS (cr=10975891 pr=732493 pw=0 time=222679174 us)
10179836 NESTED LOOPS (cr=51136162 pr=20449 pw=0 time=716122504 us)
10239080 TABLE ACCESS BY INDEX ROWID MY_TRANSACTIONS (cr=30717243 pr=20259 pw=0 time=308789524 us)
10239080 INDEX UNIQUE SCAN MY_TRANSACTIONS_PK (cr=20478163 pr=20259 pw=0 time=167672057 us)(object id 58495)
10179836 TABLE ACCESS BY INDEX ROWID MY_FX_RATES (cr=20418919 pr=190 pw=0 time=255736454 us)
10179836 INDEX UNIQUE SCAN MY_FX_RATES_PK (cr=10239083 pr=115 pw=0 time=125362994 us)(object id 58464)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 107600 0.07 56.46
db file scattered read 40467 0.26 65.70
log file switch completion 171 0.17 3.42
log buffer space 3 0.04 0.12
log file switch (checkpoint incomplete) 5 0.97 1.06
latch: cache buffers lru chain 152 0.00 0.01
rdbms ipc reply 221 0.03 0.11
latch: object queue header operation 11 0.10 0.10
latch: checkpoint queue latch 1 0.00 0.00
buffer busy waits 2 0.00 0.01
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.03 0.03
********************************************************************************


The execution time was only slightly higher at 1,261 seconds versus 1,185 seconds. However, I also note that, other than the listing of Wait Events and the duration of each event, I do not see any additional information in the tkprof output of the trace file when I review it post-facto.
STATISTICS_LEVEL='ALL' does provide additional information in V$ views but that is unlikely to be visible in a tkprof of an SQL Trace file (whether level 1 or 8) that is generated and reviewed later.

12 October, 2008

Delayed Block Cleanout -- through Instance Restart

Continuing my previous post on Delayed Block Cleanout, in this post, I show how a database shutdown doesn't guarantee that all the modified blocks in a table are "cleaned out" before the shutdown.

I run a large update and commit it. Next, I shutdown the database. The first query against the table after the subsequent restart still has to examine modified blocks, confirm if the updates to rows in the block have been committed and then execute a cleanout of these modified blocks. Thus, we see "consistent gets - examination" and "cleanout%" statistics being incremented for the first query after the restart :
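
The Report_select_Statistics script itself is not reproduced in this post. A minimal sketch of such a script, assuming it simply reports the current session's statistics from V$MYSTAT, might be :

select n.name Statistic, to_char(s.value,'999,999,999,999') Value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and (n.name like 'cleanout%'
  or n.name = 'consistent changes'
  or n.name like 'consistent gets%'
  or n.name like 'db block%'
  or n.name like 'redo %'
  or n.name = 'undo change vector size')
order by n.name;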


SQL> select blocks,num_rows from user_tables where table_name = upper('test_d_b_c');

BLOCKS NUM_ROWS
---------- ----------
47069 3242752

SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 0
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 0
consistent changes 0
consistent gets 47,513
consistent gets - examination 207
db block changes 0
db block gets 0
redo entries 0
redo size 0
undo change vector size 0
SQL>
SQL> REM Run an update on 60% (not all) of the rows
SQL> UPDATE TEST_D_B_C SET COL_1 = 'HKC_'||SUBSTR(COL_1,4,11) WHERE MOD(COL_4,10) > 3 ;

1945344 rows updated.

SQL> COMMIT;

Commit complete.

SQL>
SQL> REM We will now shutdown and restart the database
SQL> connect / as sysdba
Connected.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 734003200 bytes
Fixed Size 2023688 bytes
Variable Size 264244984 bytes
Database Buffers 461373440 bytes
Redo Buffers 6361088 bytes
Database mounted.
Database opened.
SQL>
SQL> connect hemant/hemant
Connected.
SQL> REM Now let's query the table
SQL> REM Will the first execution suffer delayed block cleanout ?
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 46,969
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 46,969
consistent changes 0
consistent gets 94,910
consistent gets - examination 47,367
db block changes 46,973
db block gets 3
redo entries 46,971
redo size 3,383,012
undo change vector size 204
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 46,969
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 46,969
consistent changes 0
consistent gets 141,993
consistent gets - examination 47,367
db block changes 46,973
db block gets 3
redo entries 46,971
redo size 3,383,012
undo change vector size 204


It is the first full table scan query that shows the "consistent gets - examination" and "cleanout%" statistics. It did 46,969 examinations and cleanouts and generated 3.383MB of redo (for the corresponding 46,971 redo entries).

04 October, 2008

Delayed Block Cleanout

A query that reads datablocks that have been updated by a large DML (INSERT/UPDATE/DELETE) may well incur "delayed block cleanout". The commit operation of the DML does not involve clearing the ITL entries in each modified datablock but only marking the transaction in the undo slot entry as "COMMITTED" (plus, possibly, a few blocks being updated). Therefore, the next query (whether from the same database session OR from another database session) to read the datablock has to verify the ITL entries against the transaction in the undo segment and then "clean out" the ITL entry and the row marker in each datablock. This process of updating the datablock also generates redo.
The presence of the "Penalty" of delayed block cleanout is visible in the "cleanout %" statistics and also in the "consistent gets - examination" count [which is a subset of the "consistent gets" count].

This has implications in that large batch jobs that update very many datablocks and are able to execute a COMMIT quickly actually cause subsequent jobs or queries against the same table to suffer this overhead.

You may not see "delayed block cleanout" when only a small set of datablocks are updated by a transaction.


The set of SQL commands and outputs below shows how the *first* query against the table undergoing DML suffers this Penalty: the "cleanout %", "consistent gets - examination" and "redo size" statistics are incremented (ie incurred) for this query but not for the next query on the same datablocks.

I use a table with 3.24million rows and run UPDATE, DELETE and INSERT operations against the table, identifying the changes to these statistics which indicate delayed block cleanout.





SQL> select blocks,num_rows from user_tables where table_name = upper('test_d_b_c');

BLOCKS NUM_ROWS
---------- ----------
47197 3242752

SQL> REM Get the number of consistent gets -- the query has been run once ahead to parse it
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 0
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 0
consistent changes 0
consistent gets 48,174
consistent gets - examination 498
db block changes 4
db block gets 3
redo entries 2
redo size 672
undo change vector size 204
SQL>
SQL> REM REM ########## Test for an UPDATE ################ REM REM ####################
SQL> REM Run an update on 60% (not all) of the rows
SQL> UPDATE TEST_D_B_C SET COL_1 = 'HKC_'||SUBSTR(COL_1,1,15) WHERE MOD(COL_4,10) > 3 ;

1945344 rows updated.

SQL> COMMIT;

Commit complete.

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 0
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 0
consistent changes 0
consistent gets 95,395
consistent gets - examination 503
db block changes 1,471,008
db block gets 732,521
redo entries 726,323
redo size 340,420,260
undo change vector size 141,809,380
SQL>
SQL>
SQL> REM Now let's query the table
SQL> REM The first execution will suffer delayed block cleanout
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 46,969
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 46,969
consistent changes 0
consistent gets 189,575
consistent gets - examination 47,472
db block changes 1,517,977
db block gets 732,521
redo entries 773,293
redo size 343,803,772
undo change vector size 141,809,380
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 46,969
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 46,969
consistent changes 0
consistent gets 236,786
consistent gets - examination 47,472
db block changes 1,517,977
db block gets 732,521
redo entries 773,293
redo size 343,803,772
undo change vector size 141,809,380
SQL>
SQL>
SQL> REM REM ########## Test for a DELETE ################ REM REM ####################
SQL> REM Test for a DELETE
SQL> DELETE TEST_D_B_C WHERE MOD(COL_4,5) > 3;

648640 rows deleted.

SQL> COMMIT;

Commit complete.

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 46,969
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 46,969
consistent changes 0
consistent gets 284,451
consistent gets - examination 47,486
db block changes 2,870,813
db block gets 1,465,003
redo entries 1,458,221
redo size 586,670,260
undo change vector size 292,754,836
SQL>
SQL> REM Now let's query the table
SQL> REM The first execution will suffer delayed block cleanout
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
2594112

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 56,180
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 56,180
consistent changes 0
consistent gets 340,873
consistent gets - examination 56,697
db block changes 2,880,024
db block gets 1,465,003
redo entries 1,467,432
redo size 587,334,024
undo change vector size 292,754,836
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
2594112

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 56,180
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 56,180
consistent changes 0
consistent gets 388,084
consistent gets - examination 56,697
db block changes 2,880,024
db block gets 1,465,003
redo entries 1,467,432
redo size 587,334,024
undo change vector size 292,754,836
SQL>
SQL>
SQL> REM REM ########## Test for a INSERT ################ REM REM ####################
SQL> REM Test for an INSERT
SQL> INSERT INTO TEST_D_B_C SELECT * FROM TEST_D_B_C_BAK WHERE MOD(COL_4,5) > 3;

648640 rows created.

SQL> COMMIT;

Commit complete.

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 65,255
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 56,180
consistent changes 0
consistent gets 453,419
consistent gets - examination 65,810
db block changes 2,936,433
db block gets 1,532,712
redo entries 1,506,633
redo size 659,399,228
undo change vector size 295,378,592
SQL>
SQL> REM Now let's query the table
SQL> REM The first execution will suffer delayed block cleanout
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 72,839
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 63,764
consistent changes 0
consistent gets 508,214
consistent gets - examination 73,394
db block changes 2,944,017
db block gets 1,532,712
redo entries 1,514,217
redo size 659,945,320
undo change vector size 295,378,592
SQL>
SQL> SELECT COUNT(*) FROM TEST_D_B_C;

COUNT(*)
----------
3242752

SQL> @Report_select_Statistics
SQL> set echo off

Statistic Value
---------------------------------------------------------- ----------------
cleanout - number of ktugct calls 72,839
cleanouts and rollbacks - consistent read gets 0
cleanouts only - consistent read gets 63,764
consistent changes 0
consistent gets 555,425
consistent gets - examination 73,394
db block changes 2,944,017
db block gets 1,532,712
redo entries 1,514,217
redo size 659,945,320
undo change vector size 295,378,592
SQL>


As you can see "cleanout - number of ktugct calls" and "cleanouts only - consistent read gets" are incremented as with "consistent gets - examination". [Remember that "consistent gets - examination" count is *included* in "consistent gets" count]. Also, "redo entries" and "redo size" are incremented. These occur for the first query against the modified datablocks.
NOTE : This table [TEST_D_B_C] does NOT have any indexes so "consistent gets - examination" does NOT reflect reads on index blocks [root blocks or otherwise]. {google might show a few references indicating the "consistent gets - examination" occurs for index blocks but that isn't the only case for this statistic}.


The above test was on 10.2.0.1. I have had similar results on 10.2.0.2.

However, on 10.2.0.4 I have had a surprisingly large number of "consistent gets - examination" {exceeding 1.9million} for the UPDATE itself [testing with both ASSM and MSSM], which I cannot explain yet. The "consistent gets - examination" count for the "delayed block cleanout" effect on the first query is still consistent. However, I haven't published the 10.2.0.4 numbers because of the 1.9million "consistent gets - examination" in the UPDATE. That is something I have to investigate.

12 September, 2008

Relational Theory and SQL

Here's an interesting thread on Oracle-L discussing how well SQL practitioners should know Relational Theory, Relational Calculus etc.

30 August, 2008

ASSM or MSSM ? -- DELETE and INSERT

Continuing from my first set of tests, I now look at Bug#4475314, which shows that an INSERT that follows a DELETE without an intervening COMMIT can suffer much higher 'current gets'. The test case in that Bug actually shows a very significant difference between the DELETE-COMMIT-INSERT and DELETE-INSERT runs.
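
As a quick sanity check (not part of the scripts below), the space management mode of each tablespace can be confirmed from the dictionary :

select tablespace_name, extent_management, segment_space_management
from dba_tablespaces
where tablespace_name in ('MSSM_TBS','ASSM_TBS');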

So, here I ran similar SQLs as in my first set of tests :

MSSM :


drop tablespace mssm_tbs including contents and datafiles;
create tablespace mssm_tbs
datafile '/oracle_fs/Databases/ORT24FS/mssm01.dbf' size 1000M
extent management local autoallocate segment space management manual;

undefine sid
col mysid new_value sid
select distinct sid mysid from V$mystat ;


alter session set tracefile_identifier='run_delete_insert_tests_mssm';
alter session set events '10046 trace name context forever, level 8';

alter table source_table cache;
select /*+ FULL (s) */ count(*) from source_table s;

create table target_table_mssm tablespace mssm_tbs as select * from source_table where 1=2;

insert /* first_mssm_insert */ into target_table_mssm select * from source_table;
commit;

delete /* first_mssm_delete */ target_table_mssm;

insert /* reinsert_mssm_insert */ into target_table_mssm select * from source_table;
commit;

create index target_table_mssm_ndx on target_table_mssm(object_id) tablespace mssm_tbs;

delete /* indexed_mssm_delete */ target_table_mssm;

insert /* indexed_mssm_insert */ into target_table_mssm select * from source_table;
commit;


ASSM :


drop tablespace assm_tbs including contents and datafiles;
create tablespace assm_tbs
datafile '/oracle_fs/Databases/ORT24FS/assm01.dbf' size 1000M
extent management local autoallocate segment space management auto;

undefine sid
col mysid new_value sid
select distinct sid mysid from V$mystat ;


alter session set tracefile_identifier='run_delete_insert_tests_assm';
alter session set events '10046 trace name context forever, level 8';

alter table source_table cache;
select /*+ FULL (s) */ count(*) from source_table s;

create table target_table_assm tablespace assm_tbs as select * from source_table where 1=2;

insert /* first_assm_insert */ into target_table_assm select * from source_table;
commit;

delete /* first_assm_delete */ target_table_assm;

insert /* reinsert_assm_insert */ into target_table_assm select * from source_table;
commit;

create index target_table_assm_ndx on target_table_assm(object_id) tablespace assm_tbs;

delete /* indexed_assm_delete */ target_table_assm;

insert /* indexed_assm_insert */ into target_table_assm select * from source_table;
commit;



Rather than repeating the normal INSERT statements (which you can also see in my first set of tests), I will emphasise the INSERTs that followed DELETEs (without intervening COMMITs) :

MSSM :

delete /* first_mssm_delete */ target_table_mssm


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 6.27 12.47 0 5206 326203 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 6.27 12.48 0 5207 326203 303666

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE TARGET_TABLE_MSSM (cr=5206 pr=0 pw=0 time=12478853 us)
303666 TABLE ACCESS FULL TARGET_TABLE_MSSM (cr=4857 pr=0 pw=0 time=911106 us)

insert /* reinsert_mssm_insert */ into target_table_mssm select * from
source_table


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 1.00 1.65 0 12367 17122 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.01 1.65 0 12367 17122 303666

Rows Row Source Operation
------- ---------------------------------------------------
303666 TABLE ACCESS FULL SOURCE_TABLE (cr=4193 pr=0 pw=0 time=607452 us)

delete /* indexed_mssm_delete */ target_table_mssm


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 2 0 0
Execute 1 11.01 16.14 671 4325 1243651 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 11.02 16.16 671 4327 1243651 303666

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE TARGET_TABLE_MSSM (cr=4325 pr=671 pw=0 time=16140802 us)
303666 TABLE ACCESS FULL TARGET_TABLE_MSSM (cr=4325 pr=0 pw=0 time=1518587 us)

insert /* indexed_mssm_insert */ into target_table_mssm select * from
source_table


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 3.09 7.54 63 16018 78398 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 3.09 7.54 63 16018 78398 303666

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64

Rows Row Source Operation
------- ---------------------------------------------------
303666 TABLE ACCESS FULL SOURCE_TABLE (cr=4193 pr=0 pw=0 time=911112 us)



ASSM :

delete /* first_assm_delete */ target_table_assm


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 1 7.04 14.30 0 4880 337910 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 7.04 14.30 0 4881 337910 303666

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE TARGET_TABLE_ASSM (cr=4880 pr=0 pw=0 time=14305853 us)
303666 TABLE ACCESS FULL TARGET_TABLE_ASSM (cr=4880 pr=0 pw=0 time=934334 us)

insert /* reinsert_assm_insert */ into target_table_assm select * from
source_table


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 1.50 2.02 0 21396 52236 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.50 2.02 0 21396 52236 303666

Rows Row Source Operation
------- ---------------------------------------------------
303666 TABLE ACCESS FULL SOURCE_TABLE (cr=4193 pr=0 pw=0 time=911120 us)

delete /* indexed_assm_delete */ target_table_assm


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 2 0 0
Execute 1 14.84 22.03 672 4502 1258653 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 14.84 22.06 672 4504 1258653 303666

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE TARGET_TABLE_ASSM (cr=4988 pr=673 pw=0 time=22090153 us)
303666 TABLE ACCESS FULL TARGET_TABLE_ASSM (cr=4364 pr=0 pw=0 time=1214785 us)

insert /* indexed_assm_insert */ into target_table_assm select * from
source_table


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 2.92 5.45 1 26743 117158 303666
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 2.92 5.45 1 26743 117158 303666

Rows Row Source Operation
------- ---------------------------------------------------
303666 TABLE ACCESS FULL SOURCE_TABLE (cr=4193 pr=0 pw=0 time=911153 us)


The INSERT operations in the ASSM tablespace are more expensive.


                                 MSSM        ASSM
Delete
  Consistent Gets               5,206       4,880
  Current Gets                326,203     337,910
  CPU Used (seconds)             6.27        7.04
Insert after Delete
  Consistent Gets              12,367      21,396
  Current Gets                 17,122      52,236
  CPU Used (seconds)             1.00        1.50

Delete with Index present
  Consistent Gets               4,325       4,502
  Current Gets              1,243,651   1,258,653
  CPU Used (seconds)            11.01       14.84
Insert after this Delete
  Consistent Gets              16,018      26,743
  Current Gets                 78,398     117,158
  CPU Used (seconds)             3.09        2.92


Think carefully before using ASSM for segments that
a) undergo large Inserts, or Deletes followed by Inserts, and
b) are manipulated by batch jobs rather than by multiple, concurrent, online users.