Thursday, November 11, 2010

Re-Synchronize Streams objects online


Sometimes we need to re-synchronize the data between the replicating sites, for example when the apply process reports errors such as ORA-01403 or ORA-01422 because the data is out of sync, and re-synchronizing manually is not feasible.

Here are the steps to re-synchronize the data between the source and target sites in a Streams environment, without stopping activity on the replicated objects at the source site.



Steps to follow

1. Stop the capture and apply processes:

-- Stop the capture process:

exec dbms_capture_adm.stop_capture('<capture_name>');

-- Make sure it is stopped:

select capture_name, status from dba_capture;



-- Stop the apply process:

exec dbms_apply_adm.stop_apply('<apply_name>');

-- Make sure it is stopped:

select apply_name, status from dba_apply;



2. Purge the Capture and apply queue tables:


- Since we are going to re-sync, the messages in the source queue that have not yet been propagated and the messages in the apply queue that have not yet been applied can both be purged. Purge the capture and apply queues only if you are doing a complete resync of all the replicated objects. If you are synchronizing only some of the objects, do not purge either queue, or the other objects will get out of sync.

- There is no problem if you skip this step; the apply process will simply skip all those messages. We purge only in case there is a large backlog of messages that would take a long time to be propagated and then skipped by the apply process.



-- Stop the propagation:

exec dbms_propagation_adm.stop_propagation('<propagation_name>');

select propagation_name, status from dba_propagation;



-- To purge, run the following for both the capture and apply queue tables:

DECLARE
  options dbms_aqadm.aq$_purge_options_t;
BEGIN
  options.delivery_mode := DBMS_AQADM.BUFFERED;
  DBMS_AQADM.PURGE_QUEUE_TABLE('<queue_table_name>', NULL, options);
END;
/



-- Check that the messages have been successfully purged from the capture and apply queues:

SELECT QUEUE_SCHEMA, QUEUE_NAME, (NUM_MSGS - SPILL_MSGS) MEM_MSG, SPILL_MSGS, NUM_MSGS
FROM V$BUFFERED_QUEUES
where queue_name = '<queue_name>';



-- Start the propagation again:

exec dbms_propagation_adm.start_propagation('<propagation_name>');
select propagation_name, status from dba_propagation;



3. Export the source schema:

exp system/oracle owner=HR file=HR.dump log=HR.log object_consistent=Y



*OBJECT_CONSISTENT must be set to Y so that the exported data is consistent with the source as of a single SCN.

4. Delete the apply errors:



Since we are re-synchronizing, we can delete the errors in the error queue of the apply process.

To do so, we can execute either DBMS_APPLY_ADM.DELETE_ERROR or DBMS_APPLY_ADM.DELETE_ALL_ERRORS at the apply site.



Make sure you are not deleting any needed transactions, especially if you are not synchronizing all the objects replicated by that apply process.

If you are synchronizing all the data, you can directly execute DBMS_APPLY_ADM.DELETE_ALL_ERRORS.



-- Check the errors in the error queue:

select apply_name, LOCAL_TRANSACTION_ID, ERROR_MESSAGE from dba_apply_error;



-- To delete a specific transaction:

exec DBMS_APPLY_ADM.DELETE_ERROR('<local_transaction_id>');



-- To delete all errors:

exec DBMS_APPLY_ADM.DELETE_ALL_ERRORS('<apply_name>');







If you need to check the contents of a transaction before deleting it, see the procedures in the documentation:

Reference :
Oracle® Streams Concepts and Administration 10g Release 2 (10.2) Part Number B14229-04
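As a hedged sketch of that inspection (the transaction ID is a placeholder taken from DBA_APPLY_ERROR; DBMS_APPLY_ADM.GET_ERROR_MESSAGE and the MESSAGE_COUNT column are the documented pieces), the LCRs inside one error transaction can be listed before deciding to delete it:

```sql
-- Sketch: walk the messages of one error transaction and print their types.
-- '<local_transaction_id>' is a placeholder from dba_apply_error.
SET SERVEROUTPUT ON
DECLARE
  v_count NUMBER;
  v_msg   ANYDATA;
BEGIN
  SELECT message_count INTO v_count
    FROM dba_apply_error
   WHERE local_transaction_id = '<local_transaction_id>';
  FOR i IN 1 .. v_count LOOP
    v_msg := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(i, '<local_transaction_id>');
    DBMS_OUTPUT.PUT_LINE('Message ' || i || ': ' || v_msg.GETTYPENAME());
  END LOOP;
END;
/
```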



5. Import the source schema to target:
imp system/oracle file=HR.dump full=y ignore=y STREAMS_INSTANTIATION=Y


When the import is done with STREAMS_INSTANTIATION=Y and the export was done with OBJECT_CONSISTENT=Y, the instantiation SCN for the apply process is set to the SCN at the time the export was taken, which ensures that the data at the target is consistent with the data at the source.



In the exp/imp example above we assume we are replicating schema HR and synchronizing all of it, but we may also exp/imp specific tables; we do not have to synchronize the whole schema when we are doing schema-level Streams.



Remember that before the import you need to truncate the tables being re-synchronized: either to avoid the import failing with ORA-00001 "unique constraint violated" in case the tables have unique keys, or to avoid duplicate rows in case they do not.




Note:

- After doing the import, we are sure the data is consistent, and the apply process will only apply changes with SCNs after the time of the export.

- If the exp/imp is done using Data Pump, we do not need to set any of these parameters: the imported objects are automatically instantiated with the source SCN at the point in time the export was taken, as long as an apply process already exists at the apply site.
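For example, a Data Pump equivalent of the exp/imp pair above might look like the following (a sketch: DPUMP_DIR is an assumed directory object that must already exist, and <scn> is the consistency SCN you choose):

```
expdp system/oracle schemas=HR directory=DPUMP_DIR dumpfile=HR.dmp logfile=HR_exp.log flashback_scn=<scn>
impdp system/oracle schemas=HR directory=DPUMP_DIR dumpfile=HR.dmp logfile=HR_imp.log table_exists_action=truncate
```

TABLE_EXISTS_ACTION=truncate also takes care of truncating the pre-existing tables, which otherwise has to be done manually as noted above.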



6. Start capture and apply:

-- Start the apply process:

exec dbms_apply_adm.start_apply('<apply_name>');

-- Make sure apply is started

select apply_name, status from dba_apply;



-- Start the capture process:

exec dbms_capture_adm.start_capture('<capture_name>');

-- Make sure capture is started

select capture_name, status from dba_capture;
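Once both processes show ENABLED, a quick health check (a sketch using the standard Streams dynamic views) can confirm that replication is flowing again:

```sql
-- Hedged post-restart checks:
SELECT capture_name, state FROM v$streams_capture;      -- e.g. CAPTURING CHANGES
SELECT apply_name, state FROM v$streams_apply_server;   -- e.g. IDLE
SELECT COUNT(*) FROM dba_apply_error;                   -- should be 0 after the resync
```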










Thursday, September 16, 2010





Objective: Migration of Production database with Minimal downtime


Hardware and Software:



1. Source Database: AIX-Based Systems (64-bit)

   OS Version: AIX 5.2


   Endian Format of Source Database: Big


2. Target Database:


   OS Version: Linux 5.6

   Endian Format of Target Database: Little






Migration Strategy:


• Transportable Tablespace using RMAN

AND

• Oracle Downstream Replication (Schema Level)


Steps:

1. Create new production database

2. Set up Oracle Streams environment in source database

3. Set up Oracle Streams environment in target database

4. Set up heartbeat table in both source and target databases

5. Start and stop Oracle downstream replication

6. Verify transportable tablespace prerequisites

7. Create users in target database

8. Restart source database in read-only mode

9. Export metadata (for transportable tablespace) from source database

10. Convert datafiles to target database's endian format

11. Restart source database in normal read/write mode

12. Collect SCN of source database

13. Move backup datafiles and dump file to target database (Linux)

14. Import dump file into target database

15. Restart target database in normal read/write mode

16. Create roles and directory

17. Export and import DBA objects from source to target

18. Compile invalid objects

19. Set up schema for downstream replication

20. Start Streams process using source SCN (from step 12)

21. Stop Oracle Streams process when heartbeat value is the same as on the source
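The heartbeat table in step 4 can be sketched as follows (all object and job names here are illustrative assumptions, not from the original procedure): a single row is updated periodically on the source, and step 21's check compares the replicated timestamp on the target.

```sql
-- On the source: a one-row table refreshed every minute.
CREATE TABLE hr.heartbeat (id NUMBER PRIMARY KEY, ts TIMESTAMP);
INSERT INTO hr.heartbeat VALUES (1, SYSTIMESTAMP);
COMMIT;

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'UPDATE hr.heartbeat SET ts = SYSTIMESTAMP WHERE id = 1; COMMIT;',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
END;
/

-- On the target: when this matches the source value, replication has caught up.
SELECT ts FROM hr.heartbeat WHERE id = 1;
```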

 

 

Wednesday, June 16, 2010

OS command /Shell Script from PL/SQL


Running an OS command or shell script from PL/SQL



Step 1 : Check Java installation

SQL>
Declare
  v_check number;
begin
  select count(1) into v_check from all_registry_banners where banner like '%JAVA%';
  if v_check = 0 then
    raise_application_error(-20000, 'ERROR: Java is not installed !!!');
    -- dbms_output.put_line('ERROR: Java is not installed !!!');
  end if;
end;
/

Step 2: Run OS Command Setup
create or replace and compile java source named "DebaShellUtil"
as
import java.io.*;
import java.lang.*;

public class DebaShellUtil extends Object
{
  public static int RunThis(String args)
  {
    Runtime rt = Runtime.getRuntime();
    int rc = -1;
    try
    {
      Process p = rt.exec(args);
      int bufSize = 4096;
      BufferedInputStream bis =
        new BufferedInputStream(p.getInputStream(), bufSize);
      int len;
      byte buffer[] = new byte[bufSize];
      // Echo back what the program spit out
      while ((len = bis.read(buffer, 0, bufSize)) != -1)
        System.out.write(buffer, 0, len);
      rc = p.waitFor();
    }
    catch (Exception e)
    {
      e.printStackTrace();
      rc = -1;
    }
    finally
    {
      return rc;
    }
  }
}
/
SQL> show errors

Step 3: Integrated Function Setup
SQL>
create or replace function RUN_CMD(p_cmd in varchar2) return number
as
language java
name 'DebaShellUtil.RunThis(java.lang.String) return integer';
/
SQL>show errors

Step 4: User Procedure Setup

create or replace procedure HOST(p_cmd in varchar2)
as
x number;
begin
x:= run_cmd(p_cmd);
end;
/

SQL> show errors


Example: How to run


SQL> exec dbms_java.set_output(100000);

SQL>set serveroutput on

SQL> exec sys.host('ls -lrt');

-rwxr-xr-x 1 oracle oinstall 1461 Jan 21 13:13 fri_traitsync.sql*

-rw-r--r-- 1 oracle oinstall 85 Jan 21 13:13 doc_stats_SUM.txt
-rw-r--r-- 1 oracle oinstall 86 Jan 21 13:13 doc_stats_CARE.txt
-rw-r--r-- 1 oracle oinstall 85 Jan 21 13:13 doc_stats_BCK.txt
-rw------- 1 oracle oinstall 637402338 Jan 21 13:13 dead.letter




Friday, March 12, 2010

Configuring Hugepages For Oracle on Linux

Implemented the following configuration changes:

1)SGA_MAX_SIZE=40G

2)SGA_TARGET = 40G

3) PGA_AGGREGATE_TARGET = 15G



Step 1: Change kernel parameter: kernel.shmmax (/etc/sysctl.conf)

kernel.shmmax should be set to SGA_TARGET + PGA_AGGREGATE_TARGET + 1G (for other processes)


Example:

SGA_TARGET + PGA_AGGREGATE_TARGET + 1G

40G + 15G + 1G (for other processes) = 56G = 60129542144 bytes


kernel.shmmax = 68719476736 (this value was left unchanged, as the current default is already higher than the calculated minimum)


Step 2: Change kernel parameter: kernel.shmall (/etc/sysctl.conf)

kernel.shmall should be set to the shmmax value divided by the page size.

Page size: 4k (the standard page size on Linux x86_64 is 4096 bytes, or 4k)

60129542144 / 4096 = 14680064

kernel.shmall = 14680064


kernel.shmall = 4294967296 (this value was left unchanged, as the current default is already higher than the calculated minimum)


Step 3: Add kernel parameter for hugepages

vm.nr_hugepages is set in /etc/sysctl.conf:

60129542144 / (2048 * 1024) = 28672

vm.nr_hugepages = 28672
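The arithmetic in Steps 2 and 3 can be double-checked with plain shell arithmetic (the byte figure is the 56G value from Step 1; a 2048 kB hugepage size is assumed):

```shell
# Recompute kernel.shmall and vm.nr_hugepages from the 56G figure.
BYTES=60129542144   # 56G, from Step 1
PAGE_KB=4           # standard x86_64 page size: 4 kB
HPG_KB=2048         # Hugepagesize reported in /proc/meminfo

echo "kernel.shmall   = $(( BYTES / (PAGE_KB * 1024) ))"   # prints 14680064
echo "vm.nr_hugepages = $(( BYTES / (HPG_KB * 1024) ))"    # prints 28672
```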



Step 4: The oracle soft memlock and hard memlock limits should be set slightly less than the total memory on the server. With 64G of memory, and units of kB, the number is 56000000. This is the total amount of memory Oracle is allowed to lock.

/etc/security/limits.conf

oracle soft memlock 56000000

oracle hard memlock 56000000



Step 5: The following database parameters are configured with the values below:

SGA_TARGET = 40G

SGA_MAX_SIZE=40G

PGA_AGGREGATE_TARGET = 15G

All other memory parameters should be disabled:

USE_LARGE_PAGES=only

MEMORY_MAX_TARGET=0

MEMORY_TARGET=0



Step 6: To verify that hugepages are being used, run this command:

[mddsuwldb01:mddprod:/home/oracle:] cat /proc/meminfo | grep Huge

HugePages_Total: 28672

HugePages_Free: XXXX

HugePages_Rsvd: XXXX

Hugepagesize: 2048 kB