Wednesday, June 8, 2011

ASM foot notes


ASM exists to manage file storage for the RDBMS
  • ASM does NOT perform I/O on behalf of the RDBMS
  • I/O is performed by the RDBMS processes as it does with other storage types
  • Thus, ASM is not an intermediary for I/O (would be a bottleneck)
  • I/O can occur synchronously or asynchronously depending on the value of the DISK_ASYNCH_IO parameter
  • Disks are RAW devices to ASM
  • Files that can be stored in ASM: typical database data files, control files, redo logs, archive logs, flashback logs, spfiles, RMAN backups and incremental tracking bitmaps, and Data Pump dump sets

In 11gR2, ASM has been extended to allow storing any kind of file using Oracle ACFS capability (it appears as another filesystem to clients). Note that database files are not supported within ACFS
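A quick way to see what ASM is managing is to query the V$ASM views from the ASM or database instance; a minimal sketch (column lists abbreviated):

select name, state, type, total_mb, free_mb
from v$asm_diskgroup;

select group_number, file_number, type, round(bytes/1024/1024) mb
from v$asm_file;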


  
ASM Basics
  • The smallest unit of storage written to disk is called an "allocation unit" (AU) and is usually 1MB (4MB recommended for Exadata)
  • Very simply, ASM is organized around storing files
  • Files are divided into pieces called "extents"
  • Extent sizes are typically equal to 1 AU, except in 11g where it will use variable extent sizes that can be 1, 8, or 64 AUs
  • File extent locations are maintained by ASM using file extent maps.
  • ASM maintains file metadata in headers on the disks rather than in a data dictionary
  • The file extent maps are cached in the RDBMS shared pool; these are consulted when an RDBMS process does I/O
  • ASM is very crash resilient since it uses instance / crash recovery similar to a normal RDBMS (similar to using undo and redo logging)
Storage is organized into "diskgroups" (DGs)
  • A DG has a name like "DATA" in ASM, which is visible to the RDBMS as a path beginning with "+DATA"; when tablespaces are created, they refer to a DG for storage such as "+DATA/.../..."
  • Beneath a diskgroup are one or more failure groups (FGs)
  • FGs are defined over a set of "disks"
  • "Disks" can be based on raw physical volumes, a disk partition, a LUN presenting a disk array, or even an LVM or NAS device
  • Disks that share a common failure component (the same controller, enclosure, or array, for example) should be placed in the same FG; otherwise ASM redundancy will not be effective
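As an illustration, a NORMAL redundancy diskgroup with two failure groups could be created as follows (the device paths are hypothetical):

create diskgroup data normal redundancy
failgroup fg1 disk '/dev/rdsk/c1t1d0s4', '/dev/rdsk/c1t2d0s4'
failgroup fg2 disk '/dev/rdsk/c2t1d0s4', '/dev/rdsk/c2t2d0s4';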
High availability
  • ASM can perform mirroring to recover from device failures
  • You have a choice of EXTERNAL, NORMAL, or HIGH redundancy mirroring
  • EXTERNAL means the underlying physical disk array performs the mirroring
  • NORMAL means ASM will create one additional copy of an extent for redundancy 
  • HIGH means ASM will create two additional copies of an extent for redundancy
  • Mirroring is implemented via "failure groups" and extent partnering; ASM can tolerate the complete loss of all disks in a failure group when NORMAL or HIGH redundancy is implemented
 FG mirroring implementation 
  • Mirroring is not implemented like RAID 1 arrays (where a disk is partnered with another disk)
  • Mirroring occurs at the file extent level and these extents are distributed among several disks known as "partners"
  • Partner disks will reside in one or more separate failure groups (otherwise mirror copies would be vulnerable)
  • ASM automatically chooses partners and limits the number of them to less than 10 (varies by RDBMS version) in order to contain the overall impact of multiple disk failures
  • If a disk fails, then ASM updates its extent mapping such that reads will now occur on the surviving partners
  • This is one example when ASM and the RDBMS communicate with each other
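The disk-to-failure-group mapping can be checked in V$ASM_DISK; a minimal sketch:

select group_number, disk_number, name, failgroup, mode_status, state
from v$asm_disk
order by group_number, failgroup, disk_number;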


Rebalancing 
  • "Rebalancing" is the process of moving file extents onto or off of disks for the purpose of evenly distributing the I/O load of the diskgroup
  • It occurs asynchronously in the background and can be monitored
  • In a clustered environment, rebalancing for a disk group is done within a single ASM instance only and cannot be distributed across multiple cluster nodes to speed it up
  • ASM will automatically rebalance data on disks when disks are added or removed
  • The speed and effort placed on rebalancing can be controlled via a POWER LIMIT setting
  • POWER LIMIT controls the number of background processes involved in the rebalancing effort and is limited to 11. Level 0 means no rebalancing will occur
  • I/O performance is impacted during rebalancing, but the amount of impact varies depending on which disks are being rebalanced and how much they contribute to the I/O workload. The default power limit was chosen so as not to impact application performance
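A rebalance can be triggered and throttled manually, and its progress watched in V$ASM_OPERATION; for example:

alter diskgroup data rebalance power 5;

select group_number, operation, state, power, sofar, est_work, est_minutes
from v$asm_operation;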

Performance
  • ASM will maximize the available bandwidth of disks by striping file extents across all disks in a DG
  • Two stripe widths are available: coarse which has a stripe size of 1 AU, and fine with stripe size of 128K
  • Fine striping still uses normally-sized file extents, but the striping occurs in small pieces across these extents in a round-robin fashion
  • ASM does not read from alternating mirror copies since disks contain primary and mirror extents and I/O is already balanced
  • By default the RDBMS will read from a primary extent; in 11.1 this can be changed via the ASM_PREFERRED_READ_FAILURE_GROUPS parameter setting for cases where reading extents from a local failure group results in lower latency. Note: This is a special case applicable to "stretch clusters" and not applicable in the general usage of ASM
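In a stretch cluster, each ASM instance can be pointed at its local failure group; the parameter takes a comma-separated list of diskgroup.failgroup names, e.g.:

alter system set asm_preferred_read_failure_groups = 'DATA.FG1' scope=both;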
Miscellaneous

  • ASM can work for RAC and non-RAC databases
  • One ASM instance on a node will service any number of instances on that node
  • If using ASM for RAC, ASM must also be clustered to allow instances to update each other when file mapping changes occur
  • In 11.2 onwards, ASM is installed in a grid home along with the clusterware as opposed to an RDBMS home in prior versions.



Thursday, May 19, 2011


This script calculates the size of all databases in the oratab file if no arguments are passed, or the size of a specific database if an argument is passed. The first argument $1 is taken as the instance name; all other arguments are ignored.


#! /bin/ksh

PROGRAM_NAME=`basename $0`
PROG_BIN_DIR=/users/oracle/local/prod/sh
PROG_LOG_DIR=/users/oracle/local/prod/logs
PATH=${PROG_BIN_DIR}:/users/oracle/product/10.2.0/db/bin:/usr/bin:/users/oracle/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/ccs/bin:/usr/ucb:/usr/bin/X11:/usr/bin:/usr/proc/bin:/usr/ucb:/usr/local/bin
ORATABFILE=/etc/oratab
# If the first argument to this script is null, then calculate the size of all databases in the oratab file
if [[ -z $1 ]]; then
# Get all SIDs from the oratab file (skip blank lines, comments, '*' entries and +ASM)
for dbname in `sed -e '/^[ ]*$/d' -e '/^[ ]*\#/d' -e '/^[ ]*\*/d' ${ORATABFILE} | cut -d":" -f1 | sed 's/ //' | grep -v '\+ASM'` ; do
RUNDATE=`date '+%m%d%Y_%H%M%S'`
# Continue with the next SID when the instance is not running
if [[ `ps -ef | grep "ora_smon_${dbname}" | grep -v 'grep' | wc -l | awk '{ print $1 }'` -eq 0 ]]; then
echo "Database Name ${dbname} is not up and running"
continue
fi
# Change the Oracle environment for the current dbname
ORACLE_SID=${dbname}
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES
PATH=${ORACLE_HOME}/bin:${PATH}
LIBPATH=${ORACLE_HOME}/lib


echo "Database Size of ${dbname} = \c"
sqlplus -s / <<-EOF
WHENEVER SQLERROR EXIT FAILURE
SET PAGES 0 HEA OFF FEED OFF VER OFF
--
SELECT TO_CHAR(ROUND(SUM(bytes)/1024/1024/1024, 2))
' GB'
FROM
(
SELECT SUM(bytes) bytes
FROM dba_data_files
UNION
SELECT SUM(bytes) bytes
FROM dba_temp_files
)
/
EXIT
EOF
RETCODE=$?



if [[ ${RETCODE} != 0 ]]; then
echo "FAILURE : Calculating Database Size of Database ${dbname} On ${RUNDATE}"
continue
fi
done

# If the first argument to this script is not null, then calculate the size of the database specified as the argument
else
dbname=`echo $1 | tr "[A-Z]" "[a-z]"`
RUNDATE=`date '+%m%d%Y_%H%M%S'`


# Exit when dbname does not exist in the oratab file.
# This check is required because oraenv is called later and,
# if dbname is missing from the oratab file, the oraenv script
# will prompt for ORACLE_HOME




if [[ `grep "^${dbname}:" ${ORATABFILE} | wc -l | awk '{ print $1 }'` -eq 0 ]]; then
echo "Database Name ${dbname} does not exist in ${ORATABFILE} file"
exit 16
fi


# Exit when instance is not running
if [[ `ps -ef | grep "ora_smon_${dbname}" | grep -v 'grep' | wc -l | awk '{ print $1 }'` -eq 0 ]]; then
echo "Database Name ${dbname} is not up and running"
exit 16
fi






# Change the Oracle environment for the current dbname
ORACLE_SID=${dbname}
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES
PATH=${ORACLE_HOME}/bin:${PATH}
LIBPATH=${ORACLE_HOME}/lib




echo "Database Size of ${dbname} = \c"
sqlplus -s / <<-EOF
WHENEVER SQLERROR EXIT FAILURE
SET PAGES 0 HEA OFF FEED OFF VER OFF
--
SELECT TO_CHAR(ROUND(SUM(bytes)/1024/1024/1024, 2))
' GB'
FROM
(
SELECT SUM(bytes) bytes
FROM dba_data_files
UNION
SELECT SUM(bytes) bytes
FROM dba_temp_files
)
/
EXIT
EOF
RETCODE=$?






if [[ ${RETCODE} != 0 ]]; then
echo "FAILURE : Calculating Database Size of Database ${dbname} On ${RUNDATE}"
fi
fi
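Usage (assuming the script is saved as dbsize.ksh; the file name is illustrative):

./dbsize.ksh        # report the size of every running database in /etc/oratab
./dbsize.ksh PROD   # report the size of the PROD instance only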



Thursday, May 12, 2011

Re-execute or Delete Apply Errors and Restart the Streams Apply Process

Step 1: Check Apply status



select apply_name,status from dba_apply;


APPLY_NAME                     STATUS
------------------------------ --------
TARGET_APPLY                   ABORTED


If the status is ABORTED, check the DBA_APPLY_ERROR view


Step 2: Check Apply error

 select local_transaction_id,error_number,error_message from dba_apply_error;


LOCAL_TRANSACTION_ID  ERROR_NUMBER ERROR_MESSAGE
-------------------- ------------- ------------------------
9.59.1031 959    ORA-00959: tablespace 'CATTBS' does not exist


Step 3: Print Apply error statement and LCRs

SQL> select local_transaction_id from dba_apply_error;


If the local transaction identifier is 1.17.2485, then enter the following:


SQL> set serveroutput on size unlimited
SQL> spool error.log
SQL> EXEC print_transaction('1.17.2485');
SQL> spool off
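Note that print_transaction is the sample LCR-printing procedure from the Streams documentation and is not created by default. If it is not installed, a minimal substitute that walks the messages of the failed transaction with DBMS_APPLY_ADM.GET_ERROR_MESSAGE looks like this (the transaction id is the one from the example):

SET SERVEROUTPUT ON
DECLARE
  msg ANYDATA;
BEGIN
  FOR e IN (SELECT local_transaction_id, message_count
            FROM dba_apply_error
            WHERE local_transaction_id = '1.17.2485') LOOP
    FOR i IN 1 .. e.message_count LOOP
      -- each message of the error transaction is returned as an ANYDATA-wrapped LCR
      msg := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, e.local_transaction_id);
      DBMS_OUTPUT.PUT_LINE('Message ' || i || ' type: ' || msg.GETTYPENAME());
    END LOOP;
  END LOOP;
END;
/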



Step 4: Re-execute apply error

-- To Execute specific transaction

exec DBMS_APPLY_ADM.EXECUTE_ERROR('<local_transaction_id>');

-- To Execute all errors:

exec dbms_apply_Adm.execute_all_errors;


Step 5: Delete the apply errors

-- Check the errors in the error queue:

select apply_name, LOCAL_TRANSACTION_ID, ERROR_MESSAGE from dba_apply_error;

-- To delete specific transaction:

exec DBMS_APPLY_ADM.DELETE_ERROR('<local_transaction_id>');

-- To delete all errors:

exec DBMS_APPLY_ADM.DELETE_ALL_ERRORS('<apply_name>');


Step 6: Check Apply status


select apply_name,status from dba_apply;


APPLY_NAME                      STATUS
------------------------------ --------
TARGET_APPLY                    ABORTED


If the status is still ABORTED after the errors are cleared, restart the apply process:

exec dbms_apply_adm.start_apply('<apply_name>');







Tuesday, March 1, 2011

Stream Administration

Handy queries for administering Streams.


--- Info about apply process

select * from dba_apply;


SELECT APPLY_NAME,QUEUE_NAME,RULE_SET_NAME,DECODE(APPLY_CAPTURED,'YES', 'Captured','NO', 'User-Enqueued') APPLY_CAPTURED,STATUS
FROM DBA_APPLY;


--- Find out apply process status

select apply_name,status from dba_apply;

exec dbms_apply_adm.start_apply(apply_name=>'&apply_name');

exec dbms_apply_adm.stop_apply(apply_name=>'&apply_name');

--- Apply parameter

SELECT PARAMETER,VALUE,SET_BY_USER FROM DBA_APPLY_PARAMETERS
WHERE APPLY_NAME = '&apply_parameter';


BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(apply_name => '&apply_parameter',parameter => 'commit_serialization',value => 'none');
END;
/

-- Determine any instantiation

-- The instantiation SCN is the cutoff for an object: the apply process discards changes committed at or below it and applies changes committed above it.



SELECT SOURCE_DATABASE,SOURCE_OBJECT_OWNER,SOURCE_OBJECT_NAME,
INSTANTIATION_SCN
FROM DBA_APPLY_INSTANTIATED_OBJECTS;
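The instantiation SCN can also be set manually at the apply site; a sketch using DBMS_APPLY_ADM (the owner, table, database name and SCN are illustrative):

BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name   => 'hr.employees',
source_database_name => 'SRC.EXAMPLE.COM',
instantiation_scn    => 1234567);
END;
/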



--- Find out about any handler -- Any DML handler

SELECT OBJECT_OWNER,OBJECT_NAME,OPERATION_NAME,USER_PROCEDURE,
DECODE(ERROR_HANDLER,'Y', 'Error','N', 'DML') ERROR_HANDLER
FROM DBA_APPLY_DML_HANDLERS
WHERE APPLY_DATABASE_LINK IS NULL
ORDER BY OBJECT_OWNER, OBJECT_NAME, ERROR_HANDLER;



-- Any DDL/Message handler

SELECT APPLY_NAME, DDL_HANDLER, MESSAGE_HANDLER FROM DBA_APPLY;



-- Using key substitution for any table


-- Key substitution is useful when there is no PK for distinguishing rows.

SELECT OBJECT_OWNER, OBJECT_NAME, COLUMN_NAME, APPLY_DATABASE_LINK
FROM DBA_APPLY_KEY_COLUMNS
ORDER BY APPLY_DATABASE_LINK, OBJECT_OWNER, OBJECT_NAME;


-- Update conflict handler

SELECT OBJECT_OWNER,OBJECT_NAME,METHOD_NAME,RESOLUTION_COLUMN,
COLUMN_NAME
FROM DBA_APPLY_CONFLICT_COLUMNS
ORDER BY OBJECT_OWNER, OBJECT_NAME, RESOLUTION_COLUMN;



-- Reader servers


-- With parallel apply servers, the reader server dequeues messages, assembles them into transactions, computes dependencies, and delivers the transactions to the coordinator


SELECT DECODE(ap.APPLY_CAPTURED,'YES','Captured LCRS','NO','User-enqueued messages','UNKNOWN') APPLY_CAPTURED,
SUBSTR(s.PROGRAM,INSTR(s.PROGRAM,'(')+1,4) PROCESS_NAME,
r.STATE, r.TOTAL_MESSAGES_DEQUEUED
FROM V$STREAMS_APPLY_READER r, V$SESSION s, DBA_APPLY ap
WHERE r.APPLY_NAME = ap.APPLY_NAME AND r.SID = s.SID AND r.SERIAL# = s.SERIAL#;



-- Check out latency


-- Capture to dequeue latency


-- Latency = (dequeue time at the destination database - message creation time at the source database)


-- Creation = time the redo was generated at the source database (or the user message was enqueued)


-- Message number = number of the message that was most recently dequeued



SELECT (DEQUEUE_TIME - DEQUEUED_MESSAGE_CREATE_TIME) * 86400 LATENCY,
TO_CHAR (DEQUEUED_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATION,
TO_CHAR (DEQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') LAST_DEQUEUE,
DEQUEUED_MESSAGE_NUMBER FROM V$STREAMS_APPLY_READER a,dba_apply b
WHERE a.APPLY_NAME = b.apply_name;



--- Capture to apply latency

SELECT (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') "Event Creation",
TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
HWM_MESSAGE_NUMBER "Applied Message Number"
FROM V$STREAMS_APPLY_COORDINATOR
WHERE APPLY_NAME = '&apply_name';

-- or:

SELECT (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') "Event Creation",
TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
APPLIED_MESSAGE_NUMBER "Applied Message Number"
FROM DBA_APPLY_PROGRESS
WHERE APPLY_NAME = '&apply_name';



--- Apply coordinator

select * from v$streams_apply_coordinator;



------ Effectiveness of parallelism in apply process.

SELECT COUNT(SERVER_ID) "Effective Parallelism"
FROM V$STREAMS_APPLY_SERVER
WHERE APPLY_NAME = 'STRMADMIN_ORA9IPR_US_ORAC' AND TOTAL_MESSAGES_APPLIED > 0;

-- How many messages have been applied by each apply server

SELECT SERVER_ID, TOTAL_MESSAGES_APPLIED
FROM V$STREAMS_APPLY_SERVER
WHERE APPLY_NAME = 'STRMADMIN_ORA9IPR_US_ORAC' ORDER BY SERVER_ID;

--- Find apply errors -- determine the reason for an apply error; two options: re-execute the transaction or delete it.

select * from dba_apply_error;

-- Find the apply rule set

SELECT RULE_SET_OWNER, RULE_SET_NAME FROM DBA_APPLY WHERE APPLY_NAME = '&apply_name';



select STREAMS_NAME,STREAMS_TYPE,RULE_TYPE,RULE_NAME,
TABLE_OWNER||'.'||TABLE_NAME,RULE_OWNER,RULE_CONDITION
from "DBA_STREAMS_TABLE_RULES"
where streams_type='APPLY' and rule_name in (select rule_name from DBA_RULE_SET_RULES a, dba_apply b where a.rule_set_name=b.rule_set_name)
union all
select STREAMS_NAME,STREAMS_TYPE,RULE_TYPE,RULE_NAME,
SCHEMA_NAME,RULE_OWNER,RULE_CONDITION
from "DBA_STREAMS_SCHEMA_RULES"
where streams_type='APPLY' and rule_name in (select rule_name from DBA_RULE_SET_RULES a, dba_apply b where a.rule_set_name=b.rule_set_name)
union all
select STREAMS_NAME,STREAMS_TYPE,RULE_TYPE,RULE_NAME,
null,RULE_OWNER,RULE_CONDITION
from "DBA_STREAMS_GLOBAL_RULES"
where streams_type='APPLY' and rule_name in (select rule_name from DBA_RULE_SET_RULES a, dba_apply b where a.rule_set_name=b.rule_set_name);



-- Which table rules the apply process is using

select * from DBA_STREAMS_TABLE_RULES
where streams_type='APPLY' and rule_name in (select rule_name from DBA_RULE_SET_RULES a, dba_apply b where a.rule_set_name=b.rule_set_name and apply_name='&apply_name');

-- Which schema rules the apply process is using

select * from DBA_STREAMS_SCHEMA_RULES
where streams_type='APPLY' and rule_name in (select rule_name from DBA_RULE_SET_RULES a, dba_apply b where a.rule_set_name=b.rule_set_name and apply_name='&apply_name');

-- Which global rules the apply process is using

select * from DBA_STREAMS_GLOBAL_RULES
where streams_type='APPLY' and rule_name in (select rule_name from DBA_RULE_SET_RULES a, dba_apply b where a.rule_set_name=b.rule_set_name and apply_name='&apply_name');



Thursday, November 11, 2010

Re-Synchronize Streams objects online


Sometimes we need to re-synchronize the data between the replicating sites, for example when the apply process reports errors such as ORA-01403 or ORA-01422 because the data is out of sync and re-synchronizing manually is not feasible.

Here are the steps to re-synchronize the data between the source and target sites in a Streams environment, without stopping activity on the replicated objects at the source site.



Steps to follow

1. Stop the capture and apply processes:

-- Stop the capture process:

exec dbms_capture_adm.stop_capture('<capture_name>');

-- Make sure it is stopped:

select capture_name, status from dba_capture;



-- Stop the apply process:

exec dbms_apply_adm.stop_apply('<apply_name>');

-- Make sure it is stopped:

select apply_name, status from dba_apply;



2. Purge the Capture and apply queue tables:


- Because we are going to re-sync, the messages in the source queue that have not yet been propagated and the messages in the apply queue that have not yet been applied can both be purged. Purge the capture and apply queues only if you are doing a complete re-sync of all the Streams replication objects; if you are synchronizing only some of the objects, do not purge any of the queues or the other objects will get out of sync

- Skipping this step causes no problem, since the apply process will simply bypass all those messages; we purge only in case there are a lot of messages that would take a long time to be propagated and then skipped by the apply process.



-- Stop the propagation:

exec dbms_propagation_adm.stop_propagation('<propagation_name>');

select propagation_name, status from dba_propagation;



-- To purge perform the following for both the capture and apply queue tables:

DECLARE
options dbms_aqadm.aq$_purge_options_t;
BEGIN
options.delivery_mode:=DBMS_AQADM.BUFFERED;
DBMS_AQADM.PURGE_QUEUE_TABLE('<queue_table_name>',NULL,options);
END;
/



-- Check that the messages have been successfully purged for the capture and apply queues:

SELECT QUEUE_SCHEMA, QUEUE_NAME, (NUM_MSGS - SPILL_MSGS) MEM_MSG, SPILL_MSGS, NUM_MSGS
FROM V$BUFFERED_QUEUES
where queue_name='<queue_name>';



-- Start the propagation again:

exec dbms_propagation_adm.start_propagation('<propagation_name>');
select propagation_name, status from dba_propagation;



3. Export the source schema:

exp system/oracle owner=HR file=HR.dump log=HR.log object_consistent=Y



*object_consistent must be set to Y so that the imported data is consistent with the source.

4. Delete the apply errors:



As we will re-synchronize, we can delete the errors that are in the error queue of the apply process.

To do so, we can execute either DBMS_APPLY_ADM.DELETE_ERROR or DBMS_APPLY_ADM.DELETE_ALL_ERRORS at the apply site.



You need to make sure that you are not deleting any needed transactions, especially if you are not synchronizing all the replicated objects for that apply process.

If you are synchronizing all the data, you can execute DBMS_APPLY_ADM.DELETE_ALL_ERRORS directly.



-- Check the errors in the error queue:

select apply_name, LOCAL_TRANSACTION_ID, ERROR_MESSAGE from dba_apply_error;



-- To delete specific transaction:

exec DBMS_APPLY_ADM.DELETE_ERROR('<local_transaction_id>');



-- To delete all errors:

exec DBMS_APPLY_ADM.DELETE_ALL_ERRORS('<apply_name>');







If you need to check the contents of the transactions before deleting them, see the procedures in the documentation:

Reference :
Oracle® Streams Concepts and Administration 10g Release 2 (10.2) Part Number B14229-04



5. Import the source schema to target:
imp system/oracle file=HR.dump full=y ignore=y STREAMS_INSTANTIATION=Y


With STREAMS_INSTANTIATION=Y on import and the export taken with object_consistent=Y, the instantiation SCN for the apply process is set to the SCN at the time the export was taken, which ensures that the data at the target is consistent with the data at the source.



In the exp/imp example above we are replicating schema HR and synchronizing all of it, but we may also exp/imp specific tables; we do not have to synchronize the whole schema when doing schema-level Streams.



Remember that when you do the import, you will need to truncate the tables being re-synchronized; otherwise the import will fail with ORA-00001 "unique constraint violated" where the tables have unique keys, or produce duplicate rows where they do not.




Note:

- After doing the import, we are sure that the data is consistent, and the apply process will apply only changes from SCNs after the time of the export.

- If the exp/imp is done using Data Pump, we will not need to set any of these parameters: the imported objects are automatically instantiated with the source SCN as of the time of the export, as long as an apply process already exists at the apply site.
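For reference, a Data Pump equivalent of the exp/imp pair above might look like this (the directory, file names and flashback SCN are illustrative):

expdp system/oracle schemas=HR directory=DATA_PUMP_DIR dumpfile=HR.dmp logfile=HR_exp.log flashback_scn=1234567

impdp system/oracle directory=DATA_PUMP_DIR dumpfile=HR.dmp logfile=HR_imp.log table_exists_action=truncate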



6. Start capture and apply:

-- Start the apply process:

exec dbms_apply_adm.start_apply('<apply_name>');

-- Make sure apply is started

select apply_name, status from dba_apply;



-- Start the capture process:

exec dbms_capture_adm.start_capture('<capture_name>');

-- Make sure capture is started

select capture_name, status from dba_capture;










Thursday, September 16, 2010





Objective: Migration of a production database with minimal downtime


Hardware and Software:



1. Source Database: AIX-Based Systems (64-bit)

   OS Version: AIX 5.2


   Endian Format of Source Database: Big


2. Target Database:


   OS Version: Linux 5.6

   Endian Format of Target Database: Little






Migration Strategy:


• Transportable Tablespace using RMAN

AND

• Oracle Downstream Replication (Schema Level)


Steps:


1. Create the new production database

2. Set up the Oracle Streams environment in the source database

3. Set up the Oracle Streams environment in the target database

4. Set up a heartbeat table in both the source and target databases (see the sketch after this list)

5. Start and stop Oracle Downstream capture

6. Verify transportable tablespace prerequisites

7. Create users in the target database

8. Restart the source database in read-only mode

9. Export metadata (for the transportable tablespaces) from the source DB

10. Convert the datafiles to the target DB's endian format

11. Restart the source database in normal read/write mode

12. Record the SCN of the source database

13. Move the backup datafiles and dump file to the target database (Linux)

14. Import the dump file into the target database

15. Restart the target database in normal read/write mode

16. Create roles and directories

17. Export and import DBA objects from source to target

18. Compile invalidated objects

19. Set up the schema for downstream replication

20. Start the Streams process using the source SCN (from step 12)

21. Stop the Oracle Streams process when the heartbeat value matches the source
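The heartbeat table in step 4 is just a marker row that is updated on the source and compared on the target to confirm that apply has caught up; a minimal sketch (schema and object names are hypothetical):

-- on the source, inside the replicated schema
create table hr.heartbeat (id number primary key, ts date);
insert into hr.heartbeat values (1, sysdate);
commit;

-- refresh the marker periodically on the source
update hr.heartbeat set ts = sysdate where id = 1;
commit;

-- on the target, compare against the source value; when they match, apply has caught up
select ts from hr.heartbeat where id = 1;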

 

 

Wednesday, June 16, 2010

OS Command / Shell Script from PL/SQL


Running an OS command or shell script from PL/SQL



Step 1: Check the Java installation

SQL>
Declare
v_check varchar2(20);
begin
select count(1) into v_check from ALL_REGISTRY_BANNERS where BANNER like '%JAVA%';
if v_check <> '1' then
raise_application_error(-20000,'ERROR: Java is not installed !!!');
end if;
end;
/

Step 2: Run OS Command Setup
create or replace and compile java source named "DebaShellUtil"
as
import java.io.*;
import java.lang.*;
public class DebaShellUtil extends Object
{
public static int RunThis(String args)
{
Runtime rt = Runtime.getRuntime();
int rc = -1;
try
{
Process p = rt.exec(args);
int bufSize = 4096;
BufferedInputStream bis =
new BufferedInputStream(p.getInputStream(), bufSize);
int len;
byte buffer[] = new byte[bufSize];
// Echo back what the program spit out
while ((len = bis.read(buffer, 0, bufSize)) != -1)
System.out.write(buffer, 0, len);
rc = p.waitFor();
}
catch (Exception e)
{
e.printStackTrace();
rc = -1;
}
finally
{
return rc;
}
}
}
/
SQL> show errors

Step 3: Integrated Function Setup
SQL>
create or replace function RUN_CMD(p_cmd in varchar2) return number
as
language java
name 'DebaShellUtil.RunThis(java.lang.String) return integer';
/
SQL>show errors

Step 4: User Procedure Setup

create or replace procedure HOST(p_cmd in varchar2)
as
x number;
begin
x:= run_cmd(p_cmd);
end;
/

SQL> show errors
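If the call later fails with java.security.AccessControlException, the schema that owns the Java source needs runtime permissions first; a sketch assuming the objects live in SYS (grantee and scope are illustrative, run as a privileged user):

exec dbms_java.grant_permission('SYS','SYS:java.io.FilePermission','<<ALL FILES>>','execute');
exec dbms_java.grant_permission('SYS','SYS:java.lang.RuntimePermission','writeFileDescriptor','');
exec dbms_java.grant_permission('SYS','SYS:java.lang.RuntimePermission','readFileDescriptor','');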


Example: How to run


SQL> exec dbms_java.set_output(100000);

SQL>set serveroutput on

SQL> exec sys.host('ls -lrt');

-rwxr-xr-x 1 oracle oinstall 1461 Jan 21 13:13 fri_traitsync.sql*

-rw-r--r-- 1 oracle oinstall 85 Jan 21 13:13 doc_stats_SUM.txt
-rw-r--r-- 1 oracle oinstall 86 Jan 21 13:13 doc_stats_CARE.txt
-rw-r--r-- 1 oracle oinstall 85 Jan 21 13:13 doc_stats_BCK.txt
-rw------- 1 oracle oinstall 637402338 Jan 21 13:13 dead.letter