25 September, 2014

The ADMINISTER SQL MANAGEMENT OBJECT Privilege

In 11.2.0.2

Having seen, in the previous post "EXECUTE Privilege on DBMS_SPM not sufficient", that EXECUTE alone is not enough, let's see whether the ADMINISTER SQL MANAGEMENT OBJECT privilege carries a risk of its own.

First, recreating the SQL Plan Baseline (it was dropped at the end of the previous post's test) :

SQL> connect spm_test/spm_test
Connected.
SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;

Session altered.

SQL> variable qrn number ;
SQL> exec :qrn := 5;

PL/SQL procedure successfully completed.

SQL> select * from spm_test_table where id_column=:qrn;
 ID_COLUMN DATA_COL
---------- ---------------
         5 5

SQL>select * from spm_test_table where id_column=:qrn;

 ID_COLUMN DATA_COL
---------- ---------------
         5 5

SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;

Session altered.

SQL> 
SQL> connect hemant/hemant
Connected.
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
  2  from dba_sql_plan_baselines
  3  where creator='SPM_TEST'
  4  /

SQL_HANDLE                     PLAN_NAME
------------------------------ ------------------------------
SQL_TEXT
--------------------------------------------------------------------------------
ENA ACC FIX
--- --- ---
SQL_6ceee9b24e9fd50a           SQL_PLAN_6tvr9q979zp8a1e198e55
select * from spm_test_table where id_column=:qrn
YES YES NO


SQL> 

Next, set up the BREAK !

SQL> create user spm_break identified by spm_break;

User created.

SQL> grant create session, administer sql management object to spm_break;

Grant succeeded.

SQL> connect spm_break;
Enter password: 
Connected.
SQL> 
SQL> set serveroutput on
SQL> declare
  2  ret_value pls_integer;
  3  begin
  4  ret_value := dbms_spm.drop_sql_plan_baseline(
  5  sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
  6  dbms_output.put_line('Return : ' || ret_value);
  7  end;
  8  /
Return : 1

PL/SQL procedure successfully completed.

SQL> 
SQL> connect hemant/hemant
Connected.
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
  2  from dba_sql_plan_baselines
  3  where creator = 'SPM_TEST'
  4  /

no rows selected

SQL> 

I was able to use the SPM_BREAK account to *DROP* an SQL Plan Baseline that had been created by the SPM_TEST account, without even having to identify which account the Baseline belonged to -- i.e. which account was the creator !  See Oracle Support Doc 1469099.1 and its reference to Bug 12932784.  Isn't that a bug, or a security loophole ?
Apparently, this privilege is meant to be used only by Administrators.  But a non-Administrator cannot manage and evolve his own SQL Plan Baselines without it.  So does that mean that only an Administrator should capture, evolve and manage SQL Plan Baselines ?

If you have a shared environment with different development teams developing different applications in different schemas, how do you provide them the facility to manage their own SQL Plan Baselines ?  The EXECUTE privilege on DBMS_SPM is not sufficient.  Yet the ADMINISTER SQL MANAGEMENT OBJECT privilege is excessive, because one development team could drop the SQL Plan Baselines of another development team (i.e. another application).
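
One partial workaround -- not a fix for the underlying privilege model -- is to never call DROP_SQL_PLAN_BASELINE directly under this privilege, but to wrap it in a check of the CREATOR column of DBA_SQL_PLAN_BASELINES.  Here is a minimal sketch, assuming the block is run by (or owned by) an account that can query that view :

declare
  -- hypothetical "safe drop" : verify the creator before dropping
  l_creator  dba_sql_plan_baselines.creator%type;
  ret_value  pls_integer;
begin
  select creator
    into l_creator
    from dba_sql_plan_baselines
   where sql_handle = 'SQL_6ceee9b24e9fd50a'
     and plan_name  = 'SQL_PLAN_6tvr9q979zp8a1e198e55';

  if l_creator = 'SPM_TEST' then   -- the account we expect to own this Baseline
    ret_value := dbms_spm.drop_sql_plan_baseline(
                   sql_handle => 'SQL_6ceee9b24e9fd50a',
                   plan_name  => 'SQL_PLAN_6tvr9q979zp8a1e198e55');
    dbms_output.put_line('Return : ' || ret_value);
  else
    dbms_output.put_line('Baseline was created by ' || l_creator || ' -- not dropping it');
  end if;
end;
/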


Can anyone test whether the ADMINISTER SQL MANAGEMENT OBJECT privilege is required, in addition to EXECUTE on DBMS_SPM, in order to simply manage / evolve one's own SQL Plan Baselines in 11.2.0.4 / 12.1.0.1 / 12.1.0.2 ?
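
If you do want to repeat the test on one of those versions, here is a minimal sketch (reusing the pattern from these two posts ; SPM_RETEST and the substitution variables are placeholders, adjust as needed) :

-- as a DBA : a test user with EXECUTE on DBMS_SPM but *without* ADMINISTER SQL MANAGEMENT OBJECT
create user spm_retest identified by spm_retest default tablespace users quota unlimited on users;
grant create session, create table to spm_retest;
grant execute on dbms_spm to spm_retest;

-- as SPM_RETEST : capture a Baseline for one of its own queries
alter session set optimizer_capture_sql_plan_baselines=TRUE;
-- ... run the same query (with a bind variable) twice here ...
alter session set optimizer_capture_sql_plan_baselines=FALSE;

-- as SPM_RETEST : try to manage its *own* Baseline
declare
  ret_value pls_integer;
begin
  ret_value := dbms_spm.alter_sql_plan_baseline(
                 sql_handle      => '&own_sql_handle',
                 plan_name       => '&own_plan_name',
                 attribute_name  => 'ENABLED',
                 attribute_value => 'NO');
  dbms_output.put_line('Return : ' || ret_value);
end;
/

If the last block raises ORA-38171, then EXECUTE alone is still not sufficient in that version.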
.
.
.

24 September, 2014

EXECUTE Privilege on DBMS_SPM not sufficient

In 11.2.0.2

Here is a quick demo to show that the "ADMINISTER SQL MANAGEMENT OBJECT"  privilege is required for a non-DBA user to use DBMS_SPM even if EXECUTE has been granted on DBMS_SPM.

SQL> create user spm_test identified by spm_test quota unlimited on users;

User created.

SQL> alter user spm_test default tablespace users;   

User altered.

SQL> grant create session, create table to spm_test;

Grant succeeded.

SQL> connect spm_test/spm_test
Connected.
SQL> create table spm_test_table (id_column number primary key, data_col varchar2(15));

Table created.

SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;

Session altered.

SQL> insert into spm_test_table select rownum, to_char(rownum) from dual  connect by level < 10001;

10000 rows created.

SQL> commit;

Commit complete.

SQL> variable qrn number;
SQL> exec :qrn := 5;

PL/SQL procedure successfully completed.

SQL> select * from spm_test_table where id_column=:qrn;

 ID_COLUMN DATA_COL
---------- ---------------
         5 5

SQL> select * from spm_test_table where id_column=:qrn;

 ID_COLUMN DATA_COL
---------- ---------------
         5 5

SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;

Session altered.

SQL> 
SQL> connect hemant/hemant
Connected.
SQL>  
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
  2  from dba_sql_plan_baselines
  3  where creator='SPM_TEST'
  4  /

SQL_HANDLE                     PLAN_NAME
------------------------------ ------------------------------
SQL_TEXT
--------------------------------------------------------------------------------
ENA ACC FIX
--- --- ---
SQL_6ceee9b24e9fd50a           SQL_PLAN_6tvr9q979zp8a1e198e55
select * from spm_test_table where id_column=:qrn
YES YES NO


SQL> 
SQL> connect spm_test/spm_test
Connected.
SQL> declare 
  2  ret_value pls_integer;
  3  begin
  4  ret_value := dbms_spm.drop_sql_plan_baseline(
  5  sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-38171: Insufficient privileges for SQL management object operation
ORA-06512: at "SYS.DBMS_SPM", line 2532
ORA-06512: at line 4


SQL> select table_name, grantee, privilege
  2  from all_tab_privs
  3  where table_name='DBMS_SPM' 
  4  order by 2,3;

TABLE_NAME                     GRANTEE
------------------------------ ------------------------------
PRIVILEGE
----------------------------------------
DBMS_SPM                       PUBLIC
EXECUTE


SQL> 
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_spm to spm_test;

Grant succeeded.

SQL> connect spm_test/spm_test
Connected.
SQL> declare
  2  ret_value pls_integer;
  3  begin 
  4  ret_value := dbms_spm.drop_sql_plan_baseline(
  5  sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
  6  dbms_output.put_line(ret_value);
  7  end;
  8  /
declare
*
ERROR at line 1:
ORA-38171: Insufficient privileges for SQL management object operation
ORA-06512: at "SYS.DBMS_SPM", line 2532
ORA-06512: at line 4


SQL> 
SQL> connect / as sysdba
Connected.
SQL> 
SQL> grant administer sql management object to spm_test;

Grant succeeded.

SQL> 
SQL> connect spm_test/spm_test
Connected.
SQL> declare
  2  ret_value pls_integer;
  3  begin
  4  ret_value := dbms_spm.drop_sql_plan_baseline(
  5  sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
  6  dbms_output.put_line(ret_value);
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> 
SQL> connect hemant/hemant
Connected.
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
  2  from dba_sql_plan_baselines
  3  where creator = 'SPM_TEST'
  4  /

no rows selected

SQL> 

Thus, although EXECUTE on DBMS_SPM had been granted to PUBLIC and even explicitly to this ordinary user, the user couldn't execute DROP_SQL_PLAN_BASELINE.  The ADMINISTER SQL MANAGEMENT OBJECT privilege was required.
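
As a quick follow-up, a DBA can check which accounts (or roles) already hold that privilege with a simple dictionary query -- nothing here is specific to this demo :

select grantee, privilege, admin_option
from   dba_sys_privs
where  privilege = 'ADMINISTER SQL MANAGEMENT OBJECT'
order  by grantee
/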
.
.
.


10 September, 2014

Index Growing Larger Than The Table

Here is a very simple demonstration of a case where an Index can grow larger than the table.  This happens because the pattern of DELETEs and INSERTs doesn't allow deleted index entries to be reused.  For every 10 rows that are inserted, 7 rows are subsequently deleted after their status is changed to "Processed" -- but the space freed by the deleted index entries cannot be reused.

SQL>
SQL>REM Demo Index growth larger than table !
SQL>
SQL>drop table hkc_process_list purge;

Table dropped.

SQL>
SQL>create table hkc_process_list
  2  (transaction_id number,
  3  status_flag varchar2(1),
  4  last_update_date date,
  5  transaction_type number,
  6  details varchar2(25))
  7  /

Table created.

SQL>
SQL>create index hkc_process_list_ndx
  2  on hkc_process_list
  3  (transaction_id, status_flag)
  4  /

Index created.

SQL>
SQL>
SQL>REM Cycle 1 -------------------------------------
> -- create first 1000 transactions
SQL>insert into hkc_process_list
  2  select rownum, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                    3                                 
Table HKC_PROCESS_LIST                        5                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>REM Cycle 2 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
  2  select rownum+1000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                    7                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 3 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
  2  select rownum+2000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                   11                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 4 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
  2  select rownum+3000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
  3  from dual
  4  connect by level < 1001
  5  /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                   15                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
  2  set status_flag='P'
  3  where mod(transaction_id,10) < 7
  4  /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
  2  where status_flag='P'
  3  /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM  Latest State size -------------------------
> -- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
  2  from user_tables
  3  where table_name like 'HKC_PROCE%'
  4  union
  5  select 'Index', index_name, leaf_blocks
  6  from user_indexes
  7  where index_name like 'HKC_PROCE%'
  8  order by 1
  9  /

OBJ_T TABLE_NAME                         BLOCKS                                 
----- ------------------------------ ----------                                 
Index HKC_PROCESS_LIST_NDX                   17                                 
Table HKC_PROCESS_LIST                       13                                 

2 rows selected.

SQL>
SQL>

Note how the Index grew from 3 leaf blocks to 17 blocks, larger than the table, which grew to 13 blocks and seemed to have reached a "steady state" at that size.

The Index is built on only 2 of the 5 columns of the table, and these two columns are "narrow" : a number and a single character.  Yet the Index grows faster than the table through the INSERT - DELETE - INSERT cycles.

Note the difference between the Index definition (built on TRANSACTION_ID as the leading column) and the pattern of DELETEs (which is on STATUS_FLAG).

Deleted rows leave "holes" in the Index, but these entries cannot be reused by subsequent INSERTs.  The Index is ordered on TRANSACTION_ID, so if an Index entry for TRANSACTION_ID = n is deleted, that slot can be reused only by the same (or a very close) TRANSACTION_ID.

Assume that an Index Leaf Block contains entries for TRANSACTION_IDs 1 to 10.  If the rows for TRANSACTION_IDs 2, 3, 5, 6, 8 and 9 are deleted but those for 1, 4, 7 and 10 are not, then the Leaf Block has "free" space only for new rows with TRANSACTION_IDs 2, 3, 5, 6, 8 and 9.  New rows with TRANSACTION_IDs 11 and above go into a new Index Leaf Block and do not reuse the "free" space in the first Leaf Block, which is left holding deleted entries that are never reused.
On the other hand, when rows are deleted from a Table Block, new rows can be re-inserted into the same Table Block.  The Table is Heap Organised, not ordered like the Index, so new rows may be inserted into any Block(s) that have space for them -- e.g. blocks from which rows have been deleted.  Therefore, after deleting TRANSACTION_IDs 2, 3, 5 and 6 from a Table Block, new TRANSACTION_IDs 11, 12, 13 and 14 can be re-inserted into the *same* Block.
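
If you want to see those deleted-but-unreclaimed entries for yourself, one way (a sketch to run in the demo schema after the last cycle ; note that VALIDATE STRUCTURE locks the table against DML while it runs) is :

analyze index hkc_process_list_ndx validate structure;

select name, height, lf_blks, lf_rows, del_lf_rows,
       round(del_lf_rows*100/lf_rows,1) pct_deleted
from   index_stats
/

INDEX_STATS holds only the result of the most recent VALIDATE STRUCTURE in the current session, so the query must be run in the same session, immediately after the ANALYZE.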

.
.
.

07 September, 2014

RAC Database Backups

In 11gR2 Grid Infrastructure and RAC


UPDATE : 13-Sep-14 : How to run the RMAN Backup using server sessions concurrently on each node.  Please scroll down to the update.


In a RAC environment, the database backups can be executed from any one node or distributed across multiple nodes of the cluster.

In my two-node environment, I have backups configured to go to the FRA.  This is defined by the instance parameter "db_recovery_file_dest" (together with "db_recovery_file_dest_size").  It can be a shared location -- e.g. an ASM DiskGroup or a Cluster FileSystem.  The parameter should therefore, ideally, be the same across all nodes so that backups can be executed from any node, or from multiple nodes, without changing the backup location.
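
If the parameters are not yet uniform across the instances, a minimal sketch of setting them identically (assuming an SPFILE and the +FRA DiskGroup used in this environment) is :

-- run once, from any instance ; SID='*' applies the value to every instance
alter system set db_recovery_file_dest_size=4000M scope=both sid='*';
alter system set db_recovery_file_dest='+FRA' scope=both sid='*';

(The size parameter must be set before, or together with, the destination.)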

Running the RMAN commands from node1 :
[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 21:56:46 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FRA
db_recovery_file_dest_size           big integer 4000M
SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 21:57:49 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
12      B  F  A DISK        26-NOV-11       1       1       YES        TAG20111126T224849
13      B  A  A DISK        26-NOV-11       1       1       YES        TAG20111126T230108
16      B  A  A DISK        16-JUN-14       1       1       YES        TAG20140616T222340
18      B  A  A DISK        16-JUN-14       1       1       YES        TAG20140616T222738
19      B  F  A DISK        16-JUN-14       1       1       NO         TAG20140616T222742
20      B  F  A DISK        05-JUL-14       1       1       NO         TAG20140705T173046
21      B  F  A DISK        16-AUG-14       1       1       NO         TAG20140816T231412
22      B  F  A DISK        17-AUG-14       1       1       NO         TAG20140817T002340

RMAN> 
RMAN> backup as compressed backupset database plus archivelog delete input;


Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=111 RECID=77 STAMP=857685630
input archived log thread=2 sequence=37 RECID=76 STAMP=857685626
input archived log thread=2 sequence=38 RECID=79 STAMP=857685684
input archived log thread=1 sequence=112 RECID=78 STAMP=857685681
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220131_0.288.857685699 tag=TAG20140907T220131 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:09
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_111.307.857685623 RECID=77 STAMP=857685630
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_37.309.857685623 RECID=76 STAMP=857685626
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_38.277.857685685 RECID=79 STAMP=857685684
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_112.270.857685681 RECID=78 STAMP=857685681
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709 tag=TAG20140907T220145 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:06:15
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=113 RECID=81 STAMP=857686085
input archived log thread=2 sequence=39 RECID=80 STAMP=857686083
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220807_0.307.857686087 tag=TAG20140907T220807 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_113.309.857686085 RECID=81 STAMP=857686085
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_39.277.857686083 RECID=80 STAMP=857686083
Finished backup at 07-SEP-14

Starting Control File and SPFILE Autobackup at 07-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_07/s_857686089.277.857686097 comment=NONE
Finished Control File and SPFILE Autobackup at 07-SEP-14

RMAN> 

Note how the "PLUS ARCHIVELOG" specification also included archivelogs from both threads (instances) of the database.
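
To confirm from the database itself that archivelogs from both threads were backed up (and then deleted), a quick sketch against V$ARCHIVED_LOG would be :

select thread#, sequence#, backup_count, deleted, completion_time
from   v$archived_log
where  completion_time > sysdate - 1
order  by thread#, sequence#
/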

Let's verify these details from the instance on node2 :

[root@node2 ~]# su - oracle
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 22:11:00 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> 

RMAN> list backup of database completed after 'trunc(sysdate)-1';

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
24      Full    258.21M    DISK        00:06:12     07-SEP-14      
        BP Key: 24   Status: AVAILABLE  Compressed: YES  Tag: TAG20140907T220145
        Piece Name: +FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709
  List of Datafiles in backup set 24
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/system.257.765499365
  2       Full 1160228    07-SEP-14 +DATA2/racdb/datafile/sysaux.256.765502307
  3       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/undotbs1.259.765500033
  4       Full 1160228    07-SEP-14 +DATA2/racdb/datafile/undotbs2.257.765503281
  5       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/users.261.765500215
  6       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/partition_test.265.809628399
  7       Full 1160228    07-SEP-14 +DATA1/racdb/datafile/hemant_tbs.266.852139375
  8       Full 1160228    07-SEP-14 +DATA3/racdb/datafile/new_tbs.256.855792859

RMAN> 

Yes, today's backup is visible from node2 because RMAN retrieves the information from the controlfile, which is common across all the instances of the database.

How are the archivelogs configured ?

RMAN> exit


Recovery Manager complete.
-sh-3.2$
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 22:15:51 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     39
Next log sequence to archive   40
Current log sequence           40
SQL> 
SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FRA
db_recovery_file_dest_size           big integer 4000M
SQL> 

Both instances have the same destination configured for archivelogs and backups.
.
.
.
=======================================================
UPDATE : 13-Sep-14 :  Running the backup concurrently from both nodes 

There are two ways to have the RMAN Backup run from both nodes.
A.  Issue a separate RMAN BACKUP DATAFILE or BACKUP TABLESPACE command from each node, such that the two nodes work on independent lists of Datafiles / Tablespaces

B.  Issue a BACKUP DATABASE command from one node but with two channels open, one against each node.

Here, Method A is easy to do but difficult to control as you add Tablespaces and Datafiles (a sketch of what it might look like is shown below, for completeness).  So, I will demonstrate Method B.
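
A sketch of Method A -- the split of Tablespaces between the two nodes is arbitrary here (the names are assumed from the datafiles listed earlier) and would have to be maintained manually as Tablespaces are added :

on node1 :
RMAN> backup as compressed backupset tablespace SYSTEM, UNDOTBS1, HEMANT_TBS, NEW_TBS;

on node2 :
RMAN> backup as compressed backupset tablespace SYSAUX, UNDOTBS2, USERS, PARTITION_TEST;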

I begin by ensuring that :
a.  I have REMOTE_LOGIN_PASSWORDFILE configured so that I can make a SQL*Net connection from node1 to node2  (RMAN requires connecting AS SYSDBA in 11g)
b.  I have a TNSNAMES.ORA entry configured for the instance on node2 (note that the service name is common across all [both] instances in the Cluster)

-sh-3.2$ hostname
node1.mydomain.com
-sh-3.2$ id
uid=800(oracle) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba),1021(dba)
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sat Sep 13 23:22:09 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter remote_login_passwordfile;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora.node1 Network Configuration File: /u01/app/oracle/rdbms/11.2.0/network/admin/tnsnames.ora.node1
# Generated by Oracle configuration tools.

RACDB_1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
    )
  )

RACDB_2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
    )
  )

-sh-3.2$ 

Next, I start RMAN and allocate two Channels, one for each Instance (on each Node in the Cluster) and issue a BACKUP DATABASE that is automatically executed across both Channels.

-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Sep 13 23:23:24 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> run
2> {allocate channel  ch1 device type disk connect 'sys/manager@RACDB_1';
3> allocate channel ch2 device type disk connect 'sys/manager@RACDB_2';
4> backup as compressed backupset database plus archivelog delete input;
5> }

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=61 instance=RACDB_1 device type=DISK

allocated channel: ch2
channel ch2: SID=61 instance=RACDB_2 device type=DISK


Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=2 sequence=40 RECID=82 STAMP=857687640
input archived log thread=1 sequence=114 RECID=84 STAMP=858204801
input archived log thread=2 sequence=41 RECID=83 STAMP=857687641
input archived log thread=1 sequence=115 RECID=86 STAMP=858208025
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=42 RECID=85 STAMP=858208000
input archived log thread=1 sequence=116 RECID=87 STAMP=858209078
input archived log thread=2 sequence=43 RECID=88 STAMP=858209079
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.279.858209109 tag=TAG20140913T232445 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:26
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_42.296.858207997 RECID=85 STAMP=858208000
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_116.263.858209079 RECID=87 STAMP=858209078
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_43.265.858209079 RECID=88 STAMP=858209079
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.275.858209099 tag=TAG20140913T232445 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:56
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_40.309.857687641 RECID=82 STAMP=857687640
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_114.295.858204777 RECID=84 STAMP=858204801
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_41.293.857687641 RECID=83 STAMP=857687641
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_115.305.858208001 RECID=86 STAMP=858208025
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
channel ch1: starting compressed full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed full datafile backup set
channel ch2: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.293.858209175 tag=TAG20140913T232557 comment=NONE
channel ch2: backup set complete, elapsed time: 00:12:02
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.305.858209163 tag=TAG20140913T232557 comment=NONE
channel ch1: backup set complete, elapsed time: 00:13:06
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=1 sequence=117 RECID=90 STAMP=858209954
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=44 RECID=89 STAMP=858209952
channel ch2: starting piece 1 at 13-SEP-14
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.265.858209957 tag=TAG20140913T233915 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:03
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_117.309.858209953 RECID=90 STAMP=858209954
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.263.858209957 tag=TAG20140913T233915 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:03
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_44.295.858209951 RECID=89 STAMP=858209952
Finished backup at 13-SEP-14

Starting Control File and SPFILE Autobackup at 13-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_13/s_858209961.295.858209967 comment=NONE
Finished Control File and SPFILE Autobackup at 13-SEP-14
released channel: ch1
released channel: ch2

RMAN> 

We can see that Channel ch1 was connected to Instance RACDB_1 and ch2 was connected to RACDB_2.  The messages also indicate that both channels were running concurrently.
I also verified, at the OS level, that the Channels did connect to each instance :

[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle   11205     1  1 23:24 ?        00:00:00 oracleRACDB_1 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node1 ~]# ps -ef  |grep RACDB_1 |grep LOCAL=NO
oracle   11205     1  3 23:24 ?        00:00:04 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle   11205     1  4 23:24 ?        00:00:49 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]# 
[root@node2 ~]# ps -ef |grep RACDB_2 | grep LOCAL=NO
oracle    6233     1  0 23:24 ?        00:00:00 oracleRACDB_2 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle    6233     1  0 23:24 ?        00:00:00 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle    6233     1  2 23:24 ?        00:00:24 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]# 

As soon as I closed the RMAN (client) session, the two server processes also terminated.
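
The same verification can also be done from inside the database, without going to the OS, with a query like this sketch against GV$SESSION (the PROGRAM filter may need adjusting to however your RMAN channels identify themselves) :

select inst_id, sid, serial#, program, status
from   gv$session
where  program like 'rman%'
order  by inst_id, sid
/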

This method (Method B) allows me to run an RMAN client session from any node in the Cluster and have RMAN server sessions running concurrently across all (or some) nodes of the Cluster, even when I have not designated a single, specific node as my RMAN Backups node.

Edit : I have demonstrated using ALLOCATE CHANNEL to run an ad-hoc, interactive backup.  If you want to create a persistent script, you might prefer to use CONFIGURE CHANNEL and have the SYS password persisted in the configuration (saved in the controlfile) so that it is not in "plain text" in a script.
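
A sketch of what that persistent configuration might look like (run once from RMAN ; the connect strings, including the password, are then stored with the RMAN configuration rather than in the script) :

RMAN> configure device type disk parallelism 2;
RMAN> configure channel 1 device type disk connect 'sys/manager@RACDB_1';
RMAN> configure channel 2 device type disk connect 'sys/manager@RACDB_2';

After this, a plain "backup as compressed backupset database plus archivelog delete input;" should automatically open one channel against each instance.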

.
.
.