Saturday, March 31, 2012

RMAN redirect restore

Redirecting a restore to a different client

With NetBackup for Oracle you have the option to restore a database to a client other than the one that originally performed the backup. The process of restoring data to another client is called a redirected restore.





The user on client A cannot initiate a redirected restore to client B. Only the user on client B, which is the client receiving the backup image, can initiate the redirected restore.

The following sections describe how to perform a redirected restore using Oracle RMAN.

Server configuration


Ensure that the NetBackup server is configured to allow a redirected restore. The administrator can remove restrictions for all clients by creating the following file on the NetBackup master server:

/usr/openv/netbackup/db/altnames/No.Restrictions


Or, to restrict clients to restore only from certain other clients, create the following file:


/usr/openv/netbackup/db/altnames/client_name



Where client_name is the name of the client allowed to perform the redirected restore (the destination client). Then add the name of the source client (the NetBackup for Oracle client that owns the backup images) to that file.
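For example, the setup on the master server might look like this (a sketch using hypothetical host names; the worked example further down uses this system's real names):

% touch /usr/openv/netbackup/db/altnames/dest_client
% echo "source_client" >> /usr/openv/netbackup/db/altnames/dest_client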

Using RMAN to perform a redirected restore

Perform the following procedure on the destination client host if you want to restore any RMAN backups that another client owns.



To perform a redirected restore


1. Enable a network connection to the RMAN catalog database that the source client used.

Note: If the RMAN catalog database has been lost, restore the catalog database first before continuing with the redirected restore.
2. Set the NB_ORA_CLIENT environment variable to the source client.
3. Check the bp.conf files on the source client. Make sure that the CLIENT_NAME variable either is not set or is set to the hostname of the source client.
4. Make the init.ora file of the source client available to the destination client. Copy the file to the destination client, or modify the file on the destination client. Change all location-specific parameters.
5. Grant write permission to the directory to which you want to restore the data files.
6. Set up a password file for the destination client database.
7. Start up the database in the nomount state.
8. Start RMAN, connecting to the catalog.
9. Set dbid to be the DBID of the source client database.
10. Connect to the target database without using a userid and password.
11. Run an RMAN restore script or type the RMAN commands for the restore.

Example
For example, assume the following:

■ Source client is Linux-MDDATA-mddsuwldb01
■ Destination client is Linux-MDDATA-mddsuwldb02
■ Master server is nbms001
■ RMAN catalog database is mddrcat
■ ORACLE_SID is mddrest
■ UNIX user is oracle on both mddsuwldb01 and mddsuwldb02





1> Create the following file on server nbms001 and edit it to contain the name Linux-MDDATA-mddsuwldb01:

% touch /usr/openv/netbackup/db/altnames/Linux-MDDATA-mddsuwldb02
Or
%touch /usr/openv/netbackup/db/altnames/No.Restrictions

2> Log in to mddsuwldb02 as oracle
3> Set SERVER= nbms001 in /usr/openv/netbackup/bp.conf
This server must be the first server that is listed in the bp.conf file.
4> Modify the network tnsnames.ora file to enable RMAN catalog connection.
5> Set the environment variables ORACLE_SID to mddrest and NB_ORA_CLIENT to Linux-MDDATA-mddsuwldb01.
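For example, in a Bourne-type shell this might look like the following (a sketch):

% ORACLE_SID=mddrest; export ORACLE_SID
% NB_ORA_CLIENT=Linux-MDDATA-mddsuwldb01; export NB_ORA_CLIENT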
6> Make sure the destination database directory exists and has appropriate access permissions. The data files are restored to the directory path with the same name they had when they were backed up.
7> Create an initmddrest.ora file.
8> Start up the database in a nomount state.

SQL>startup nomount pfile=$ORACLE_HOME/dbs/initmddrest.ora

% rman rcvcat rman/rman@mddrcat
RMAN> set dbid=
RMAN> connect target/
RMAN> run {
RMAN> ALLOCATE CHANNEL CH00 TYPE 'SBT_TAPE';
RMAN> SEND 'NB_ORA_SERV=nbms001, NB_ORA_CLIENT=Linux-MDDATA-mddsuwldb01';
RMAN> restore controlfile;
RMAN> }
SQL> alter database mount;
%orapwd file=$ORACLE_HOME/dbs/orapwmddrest password=
%rman rcvcat rman/rman@mddrcat
RMAN>set dbid=
RMAN>connect target/
RMAN>run {
RMAN> ALLOCATE CHANNEL CH00 TYPE 'SBT_TAPE';
RMAN> ALLOCATE CHANNEL CH01 TYPE 'SBT_TAPE';
RMAN> SEND 'NB_ORA_SERV= nbms001, NB_ORA_CLIENT= Linux-MDDATA-mddsuwldb01';
RMAN> restore database;
RMAN> restore archivelog all;
RMAN> }
SQL>recover database until cancel using backup controlfile;

Now apply the archived logs. Type cancel when you decide to stop recovery.
SQL>alter database open resetlogs;

Reference: NetBackup for Oracle Administrator's Guide for UNIX and Linux

Friday, February 3, 2012

Part 3 : DBMS_REPAIR scenario

The DBMS_REPAIR package is used to work with corruption in the transaction layer and the data layer only (software-corrupt blocks). Blocks with physical corruption (for example, a fractured block) are marked corrupt as they are read into the buffer cache, and DBMS_REPAIR ignores all blocks marked corrupt.
Step 1: A corrupt block exists in table T1.

 SQL> desc scott.t1

Name                          Null?    Type
----------------------------- -------- ----------------------------
COL1                          NOT NULL NUMBER(38)
COL2                                   CHAR(512)


SQL> analyze table t1 validate structure;
analyze table t1 validate structure
*
ERROR at line 1:
ORA-01498: block check failure - see trace file


Step 2 : Dumpfile Details

dump file /u01/app/oracle/admin/mddtest/udump/mddtest_ora_2835.trc

kdbchk: row locked by non-existent transaction
table=0 slot=0
lockid=32 ktbbhitc=1
Block header dump: 0x01800003
Object id on Block? Y
seg/obj: 0xb6d csc: 0x00.1cf5f itc: 1 flg: - typ: 1 - DATA
fsl: 0 fnx: 0x0 ver: 0x01
Itl Xid Uba Flag Lck Scn/Fsc
0x01 xid: 0x0002.011.00000121 uba: 0x008018fb.0345.0d --U- 3 fsc
0x0000.0001cf60



data_block_dump
=============
tsiz: 0x7b8
hsiz: 0x18
pbl: 0x28088044
bdba: 0x01800003
flag=-----------
ntab=1
nrow=3
frre=-1
fsbo=0x18
fseo=0x19d
avsp=0x185
tosp=0x185


0xe:pti[0] nrow=3 offs=0
0x12:pri[0] offs=0x5ff
0x14:pri[1] offs=0x3a6
0x16:pri[2] offs=0x19d
block_row_dump:
[... remainder of file not included]
end_of_block_dump






Step 3: Create repair table and orphan table

SQL>
Declare
begin
dbms_repair.admin_tables (
table_name => 'REPAIR_TABLE',
table_type => dbms_repair.repair_table,
action => dbms_repair.create_action,
tablespace => 'USERS');
end;
/


SQL> select owner, object_name, object_type from dba_objects where object_name like '%REPAIR_TABLE';


OWNER OBJECT_NAME OBJECT_TYPE
-----------------------------
SYS DBA_REPAIR_TABLE VIEW
SYS REPAIR_TABLE TABLE


SQL> declare
begin
dbms_repair.admin_tables (
table_type => dbms_repair.orphan_table,
action => dbms_repair.create_action,
tablespace => 'USERS'); -- default TS of SYS if not specified
end;
/

PL/SQL procedure successfully completed.

SQL> select owner, object_name, object_type from dba_objects where object_name like '%ORPHAN_KEY_TABLE';




OWNER OBJECT_NAME OBJECT_TYPE
------------------------------
SYS DBA_ORPHAN_KEY_TABLE VIEW
SYS ORPHAN_KEY_TABLE TABLE


Step 4: Start check object

set serveroutput on
SQL> declare
rpr_count int;
begin
rpr_count := 0;
dbms_repair.check_object (
schema_name => 'SCOTT',
object_name => 'T1',
repair_table_name => 'REPAIR_TABLE',
corrupt_count => rpr_count);
dbms_output.put_line('repair count: ' || to_char(rpr_count));
end;
/
repair count: 1

PL/SQL procedure successfully completed.


SQL> select object_name, block_id, corrupt_type, marked_corrupt,corrupt_description, repair_description from repair_table;


OBJECT_NAME : T1
BLOCK_ID : 3
CORRUPT_TYPE : 1
MARKED_CORRUPT : FALSE
CORRUPT_DESCRIPTION : kdbchk: row locked by non-existent transaction
REPAIR_DESCRIPTION : mark block software corrupt


Step 5: Fix corrupted block


SQL> declare
fix_count int;
begin
fix_count := 0;
dbms_repair.fix_corrupt_blocks (
schema_name => 'SCOTT',
object_name => 'T1',
object_type => dbms_repair.table_object,
repair_table_name => 'REPAIR_TABLE',
fix_count => fix_count);
dbms_output.put_line('fix count: ' || to_char(fix_count));
end;
/


fix count: 1


PL/SQL procedure successfully completed.



SQL> select object_name, block_id, marked_corrupt from repair_table;


OBJECT_NAME BLOCK_ID MARKED_COR
------------------------------ ---------- ----------
T1 3 TRUE





SQL> select * from scott.t1;
select * from scott.t1
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 6, block # 3)
ORA-01110: data file 6: '/tmp/ts_corrupt.dbf'


Step 6: Fix orphan keys
SQL> select index_name from dba_indexes where table_name in (select distinct object_name from repair_table);



INDEX_NAME
------------------------------
T1_PK




SQL> set serveroutput on
SQL> declare
key_count int;
begin
key_count := 0;
dbms_repair.dump_orphan_keys (
schema_name => 'SCOTT',
object_name => 'T1_PK',
object_type => dbms_repair.index_object,
repair_table_name => 'REPAIR_TABLE',
orphan_table_name => 'ORPHAN_KEY_TABLE',
key_count => key_count);
dbms_output.put_line('orphan key count: ' || to_char(key_count));
end;
/

orphan key count: 3
PL/SQL procedure successfully completed.


SQL> select index_name, count(*) from orphan_key_table group by index_name;


INDEX_NAME COUNT(*)
------------------------------ ----------
T1_PK 3


Step 7: Skip corrupt blocks
SQL> declare
begin
dbms_repair.skip_corrupt_blocks (
schema_name => 'SCOTT',
object_name => 'T1',
object_type => dbms_repair.table_object,
flags => dbms_repair.skip_flag);
end;
/

PL/SQL procedure successfully completed.


SQL> select table_name, skip_corrupt from dba_tables where table_name = 'T1';

TABLE_NAME SKIP_COR
------------------------------ --------
T1 ENABLED


SQL> select * from scott.t1;


COL1 COL2
----------
4 dddd
5 eeee

SQL> insert into scott.t1 values (1,'aaaa');
SQL> select * from scott.t1 where col1 = 1;


no rows selected

Step 8: Rebuild freelists

SQL > declare
begin
dbms_repair.rebuild_freelists (
schema_name => 'SCOTT',
object_name => 'T1',
object_type => dbms_repair.table_object);
end;
/

PL/SQL procedure successfully completed.


Step 9: Rebuild Index

SQL> alter index scott.t1_pk rebuild online;
Index altered.



SQL> insert into scott.t1 values (1, 'aaaa');
1 row created.


SQL> select * from scott.t1;


COL1 COL2
-----------
4 dddd
5 eeee
1 aaaa


Note - The insert statement above is just a simple example; this is the perfect world, where we know exactly which data was lost. In practice, a temporary table (for example, temp_t1) should be used to hold all rows that can still be extracted from the corrupt block, as sketched below.
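A minimal sketch of that salvage step (assuming SKIP_CORRUPT is already enabled on SCOTT.T1 as in the step above; temp_t1 is a hypothetical holding table):

SQL> create table scott.temp_t1 as select * from scott.t1;

The rows in the corrupt block are skipped, so temp_t1 preserves everything that is still readable before the table is repaired or rebuilt.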


















Thursday, December 15, 2011

Part 2 : Identify Corrupted Database Object


After collecting the corruption report in V$DATABASE_BLOCK_CORRUPTION, I ran the following query:


SQL> select * from v$database_block_corruption;

FILE#   BLOCK#   BLOCKS   CORRUPTION_CHANGE#   CORRUPTION_TYPE
-----   ------   ------   ------------------   ---------------
   64    57306        1          42451322513   CORRUPT
   64    57148        1          41767312306   CORRUPT
   64    57132        1          41767312306   CORRUPT

Step 1 : Create base table


create table pradipta.extent_report
as
select owner,segment_name,partition_name,segment_type,tablespace_name,file_id,block_id
from
dba_extents where rownum < 0;

Step 2 :  Run Segment Identifier


BEGIN
  for c1 in (select file#, block# from v$database_block_corruption)
  loop
    insert into pradipta.extent_report
      (select owner, segment_name, partition_name, segment_type, tablespace_name, file_id, block_id
       from dba_extents
       where file_id = c1.file#
         and c1.block# between block_id and block_id + blocks - 1);
  end loop;
  commit;
END;
/


Step 3: Check corrupted segments
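The corrupted segments can then be listed from the report table; a query along these lines (the exact statement is not shown in the original post) returns the rows below:

SQL> select owner, segment_name, partition_name, segment_type, tablespace_name, file_id, block_id
     from pradipta.extent_report;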

OWNER   SEGMENT_NAME   PARTITION_NAME   SEGMENT_TYPE      TABLESPACE_NAME   FILE_ID   BLOCK_ID
-----   ------------   --------------   ---------------   ---------------   -------   --------
PDD     IDX_PAT_01     ORG0330          INDEX PARTITION   IDX_PDD                64      57093
PDD     IDX_PAT_01     ORG0330          INDEX PARTITION   IDX_PDD                64      57093
PDD     IDX_PAT_01     ORG0330          INDEX PARTITION   IDX_PDD                64      57093

Tuesday, November 8, 2011

Part 1: Logical validation of the database

After we restored our production database from a tape backup, we found several logical corruptions in the database, for example:

ORA-01578: ORACLE data block corrupted (file # 11, block # 4262)

ORA-01110: data file 11: '/db01/oracle/PROD/datafile//data01.dbf'

So I decided to run a full database validation in the following way.


Step 1: Create status table for datafile  

SQL> create table pradipta.rman_data_file_chk as
     select file_id, cast(null as varchar2(10)) status
     from dba_data_files;

 (This table will be used to track and monitor the status of each datafile check.)

Step 2 : run_data_chk.sh  (Pilot Script)

#!/bin/sh

. ~oracle/.oenv

ORACLE_SID=mdd11g
export ORACLE_SID

sqlplus -s "/ as sysdba" <<!EOF
declare
  v_input varchar2(4000);
begin
  for c1 in (select file_id from pradipta.rman_data_file_chk order by file_id)
  loop
    v_input := '/u02/bin/RMAN/script/logical_chk.sh ' || c1.file_id;
    sys.host(v_input);  -- sys.host: my custom procedure that runs an OS command (my magic OS setup)
  end loop;
end;
/
exit;
!EOF
exit



Step 3: Unix Script : logical_chk.sh (Slave Script)

#!/bin/sh

. ~oracle/.oenv
ORACLE_SID=mdd11g
export ORACLE_SID
datafile=$1

rman target / nocatalog << EOF
RUN
{
allocate channel d1 type disk;
backup check logical validate datafile $datafile;
release channel d1;
}
EOF
RSTAT=$?
if [ "$RSTAT" = "0" ]
then
  LOGMSG="SUCCESS"
else
  LOGMSG="FAILED"
fi

sqlplus -s "/ as sysdba" <<!EOF
update pradipta.rman_data_file_chk set status='${LOGMSG}' where file_id=${datafile};
commit;
exit;
!EOF
exit $RSTAT


 Step 4: Run pilot script to identify logical corruption

$ ./run_data_chk.sh &


 Step 5: Check Status

SQL> select * from pradipta.rman_data_file_chk;

FILE_ID    STATUS
-------    -------
     64    FAILED
     68    SUCCESS
     69    SUCCESS
     79    SUCCESS
     81    SUCCESS
     82    SUCCESS
     84    SUCCESS

Wednesday, October 12, 2011

ORA-15196 After ASM Was Upgraded From 10gR2 To 11gR2

After upgrading a 10gR2 ASM instance to 11.2, ASM is able to mount the disk groups; however, they quickly dismount as soon as v$asm_file is queried:

SUCCESS: diskgroup DATA_DG01 was mounted
SUCCESS: ALTER DISKGROUP DATA_DG01 MOUNT /* asm agent */
Thu Jun 24 15:13:05 2010
NOTE: diskgroup resource ora.DATA_DG01.dg is online
Thu Jun 24 15:18:31 2010
WARNNING: cache read a corrupted block group=DATA_DG01 fn=1 blk=512 from disk 0
NOTE: a corrupted block from group DATA_DG01 was dumped to
/u000/app/grid/diag/asm/+asm/+ASM/trace/+ASM_ora_348464.trc
WARNNING: cache read(retry) a corrupted block group=DATA_DG01 fn=1 blk=512
from disk 0
ERROR: cache failed to read group=DATA_DG01 fn=1 blk=512 from disk(s): 0
DATA_DG01_0000
ORA-15196: invalid ASM block header [kfc.c:23925] [hard_kfbh] [1] [512] [0 !=130]




Solution

-----------

1) Run the following from 11.2 ASM on each disk group a few times (say 3 times):

SQL> ALTER DISKGROUP DATA_DG01 CHECK ALL REPAIR;



The first time this command is executed, the ASM alert log will show entries like:
"ERROR: file 1 extent 0: blocks XX to XX are unformatted"
On subsequent runs, the ASM alert log will show something like:
"ERROR: file 1 extent 0: blocks XX to XX are unformatted"
"SUCCESS: file 1 extent 0 repaired"



2) Check if ASM in 11.2 still dismounts the disk group when querying v$asm_file.
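A quick way to trigger that check is simply to query v$asm_file and watch the ASM alert log, for example (a sketch):

SQL> select group_number, count(*) from v$asm_file group by group_number;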

3) If the disk group continues to be dismounted with the above errors, then restore the 10gR2 environment (i.e., as it was prior to the ASM upgrade) and apply Patch 5100163 to the 10.2.0.4 ASM home.


4) After the patch is installed, run ALTER DISKGROUP ... CHECK ALL REPAIR on all the disk groups, and then retry the ASM upgrade.

Thursday, September 22, 2011

Add a New Table/New Schema to an Existing Streams Setup



Step 1: Stop Apply process in Target

SQL> conn strmadmin/strmadmin

-- Check apply process name
select apply_name from dba_apply where apply_user=USER;


-- Stop apply process

SQL>
begin
dbms_apply_adm.stop_apply('TARGET_APPLY');
end;
/


Step 2: Creating Schema Apply rule in Target system

SQL> conn strmadmin/strmadmin
SQL>


BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'STUDENT',
streams_type => 'APPLY',
streams_name => 'TARGET_APPLY',
queue_name => 'STRMADMIN.TARGET_Q',
include_dml => TRUE,
include_ddl => TRUE,
source_database => 'MDDRCAT.MDDATACOR.ORG');
END;
/



Step 3: Creating propagation process on Downstream for new schema

SQL> conn strmadmin/strmadmin
SQL>

begin
dbms_streams_adm.add_schema_propagation_rules (
schema_name => 'STUDENT',
streams_name => 'DOWNSTREAM_PROPAGATE',
source_queue_name => 'STRMADMIN.DOWNSTREAM_Q',
destination_queue_name=> 'STRMADMIN.TARGET_Q@MDDPROD.MDDATACOR.ORG',
include_dml => TRUE,
include_ddl => TRUE,
source_database => 'MDDRCAT.MDDATACOR.ORG',
inclusion_rule => TRUE,
queue_to_queue => TRUE);
end;
/



Step 4: Creating Schema CAPTURE rules on Downstream for new schema

SQL > conn strmadmin/strmadmin

SQL>


BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'STUDENT',
streams_type => 'CAPTURE',
streams_name => 'DOWNSTREAM_CAPTURE',
queue_name => 'STRMADMIN.DOWNSTREAM_Q',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'MDDRCAT.MDDATACOR.ORG',
inclusion_rule => TRUE);
end;
/


Step 5: Instantiate the Schema at Target



SQL > conn strmadmin/strmadmin
SQL>

DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@MDDRCAT.MDDATACOR.ORG ();
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'STUDENT',
source_database_name => 'MDDRCAT.MDDATACOR.ORG',
instantiation_scn => iscn,
recursive => true);
END;
/


-- Check Schema INSTANTIATION

SQL> select * from DBA_APPLY_INSTANTIATED_SCHEMAS;


Step 6: Start Apply process in Target

SQL> conn strmadmin/strmadmin

-- Check apply process name

select apply_name from dba_apply where apply_user=USER;


-- Start apply process



SQL>
begin
dbms_apply_adm.start_apply('TARGET_APPLY');
end;

/
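To confirm that the apply process is running again, a quick status check might look like this (a sketch):

SQL> select apply_name, status from dba_apply where apply_name = 'TARGET_APPLY';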







Wednesday, June 8, 2011

ASM foot notes


ASM exists to manage file storage for the RDBMS
  • ASM does NOT perform I/O on behalf of the RDBMS
  • I/O is performed by the RDBMS processes, just as it is with other storage types
  • Thus, ASM is not an intermediary for I/O (would be a bottleneck)
  • I/O can occur synchronously or asynchronously depending on the value of the DISK_ASYNCH_IO parameter
  • Disks are RAW devices to ASM
  • Files that can be stored in ASM: typical database data files, control files, redo logs, archive logs, flashback logs, spfiles, RMAN backups and incremental tracking bitmaps, and Data Pump dump sets

In 11gR2, ASM has been extended to allow storing any kind of file using Oracle ACFS capability (it appears as another filesystem to clients). Note that database files are not supported within ACFS


  
ASM Basics
  • The smallest unit of storage written to disk is called an "allocation unit" (AU) and is usually 1MB (4MB is recommended for Exadata); see the query sketch after this list
  • Very simply, ASM is organized around storing files
  • Files are divided into pieces called "extents"
  • Extent sizes are typically equal to 1 AU, except in 11g where it will use variable extent sizes that can be 1, 8, or 64 AUs
  • File extent locations are maintained by ASM using file extent maps.
  • ASM maintains file metadata in headers on the disks rather than in a data dictionary
  • The file extent maps are cached in the RDBMS shared pool; these are consulted when an RDBMS process does I/O
  • ASM is very crash resilient since it uses instance / crash recovery similar to a normal RDBMS (similar to using undo and redo logging)
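The AU size in use can be confirmed per disk group, for example (a sketch):

SQL> select name, allocation_unit_size/1024/1024 as au_mb from v$asm_diskgroup;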
Storage is organized into "diskgroups" (DGs)
  • A DG has a name like "DATA" in ASM which is visible to the RDBMS as a file beginning with "+DATA"; when tablespaces are created, they refer to a DG for storage such as "+DATA/.../..."
  • Beneath a diskgroup are one or more failure groups (FGs)
  • FGs are defined over a set of "disks"
  • "Disks" can be based on raw physical volumes, a disk partition, a LUN presenting a disk array, or even an LVM or NAS device
  • FGs should have disks defined that have a common failure component, otherwise ASM redundancy will not be effective
High availability
  • ASM can perform mirroring to recover from device failures
  • You have a choice of EXTERNAL, NORMAL, or HIGH redundancy mirroring (see the CREATE DISKGROUP sketch after this list)
  • EXTERNAL means allow the underlying physical disk array do the mirroring
  • NORMAL means ASM will create one additional copy of an extent for redundancy 
  • HIGH means ASM will create two additional copies of an extent for redundancy
  • Mirroring is implemented via "failure groups" and extent partnering; ASM can tolerate the complete loss of all disks in a failure group when NORMAL or HIGH redundancy is implemented
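As an illustration, NORMAL redundancy with two failure groups might be created like this (a sketch; the disk group name and disk paths are hypothetical):

SQL> create diskgroup data normal redundancy
       failgroup fg1 disk '/dev/mapper/asmdisk1'
       failgroup fg2 disk '/dev/mapper/asmdisk2';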
 FG mirroring implementation 
  • Mirroring is not implemented like RAID 1 arrays (where a disk is partnered with another disk)
  • Mirroring occurs at the file extent level and these extents are distributed among several disks known as "partners"
  • Partner disks will reside in one or more separate failure groups (otherwise mirror copies would be vulnerable)
  • ASM automatically chooses partners and limits the number of them to less than 10 (varies by RDBMS version) in order to contain the overall impact of multiple disk failures
  • If a disk fails, then ASM updates its extent mapping such that reads will now occur on the surviving partners
  • This is one example when ASM and the RDBMS communicate with each other


Rebalancing 
  • "Rebalancing" is the process of moving file extents onto or off of disks for the purpose of evenly distributing the I/O load of the diskgroup
  • It occurs asynchronously in the background and can be monitored
  • In a clustered environment, rebalancing for a disk group is done within a single ASM instance only and cannot be distributed across multiple cluster nodes to speed it up
  • ASM will automatically rebalance data on disks when disks are added or removed
  • The speed and effort placed on rebalancing can be controlled via a POWER LIMIT setting (see the sketch after this list)
  • POWER LIMIT controls the number of background processes involved in the rebalancing effort and is limited to 11. Level 0 means no rebalancing will occur
  • I/O performance is impacted during rebalancing, but the amount of impact varies on which disks are being rebalanced and how much they are part of the I/O workload. The default power limit was chosen so as not to impact application performance
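For example, a manual rebalance with an explicit power limit, and the view used to monitor its progress (a sketch; the disk group name is hypothetical):

SQL> alter diskgroup data rebalance power 4;
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;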

Performance
  • ASM will maximize the available bandwidth of disks by striping file extents across all disks in a DG
  • Two stripe widths are available: coarse which has a stripe size of 1 AU, and fine with stripe size of 128K
  • Fine striping still uses normally-sized file extents, but the striping occurs in small pieces across these extents in a round-robin fashion
  • ASM does not read from alternating mirror copies since disks contain primary and mirror extents and I/O is already balanced
  • By default the RDBMS will read from a primary extent; in 11.1 this can be changed via the ASM_PREFERRED_READ_FAILURE_GROUPS parameter setting for cases where reading extents from a local node results in lower latency. Note: This is a special case applicable to "stretch clusters" and not applicable in the general usage of ASM
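On the ASM instance of the local node, that setting might look like this (a sketch; the disk group and failure group names are hypothetical):

SQL> alter system set asm_preferred_read_failure_groups = 'DATA.FG1';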
Miscellaneous

  • ASM can work for RAC and non-RAC databases
  • One ASM instance on a node will service any number of instances on that node
  • If using ASM for RAC, ASM must also be clustered to allow instances to update each other when file mapping changes occur
  • From 11.2 onwards, ASM is installed in a Grid Infrastructure home along with the clusterware, as opposed to an RDBMS home in prior versions.