Oracle DBA Tips and Techniques

By Arjun Raja

Archive for June, 2009

Script – Temporary tablespace usage

Posted by Gavin Soorma on June 30, 2009

Listing of temp segment usage per tablespace.

SELECT A.tablespace_name tablespace, D.mb_total,
       SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
       D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
     (
      SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
      FROM v$tablespace B, v$tempfile C
      WHERE B.ts# = C.ts#
      GROUP BY B.name, C.block_size
     ) D
WHERE A.tablespace_name = D.name
GROUP BY A.tablespace_name, D.mb_total;

Temp segment usage per session.

SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
       P.program, SUM (T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
       COUNT(*) statements
FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE T.session_addr = S.saddr
AND S.paddr = P.addr
AND T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
         P.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;
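
The sid_serial column is concatenated so that a session consuming excessive temp space can be killed directly. A minimal sketch; the sid and serial# values below are placeholders, not output from the query above:

SQL> alter system kill session '123,4567';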


10g Datapump – An Introduction

Posted by Gavin Soorma on June 30, 2009

In 10g you can continue to use the earlier exp and imp utilities to export and import data.

However, if you export data using exp, the import has to be done with the imp utility.

You cannot export with the old exp utility and then import the resulting dump with impdp; the file formats are not interchangeable.

The two new utilities in 10g are EXPDP and IMPDP.

EXPORT USING DATAPUMP (EXPDP).

Important pre-requisites:

Two directory objects need to be created in the database, each pointing to an existing physical path on the database server (for example, HQLINUX5).

By default, dumps and logs are created in the $ORACLE_HOME/rdbms/log area.

To avoid using the disk on which the ORACLE_HOME resides:

1. CREATE DIRECTORIES on the server, for example /u02/ORACLE/bozo/datapump and /u02/ORACLE/bozo/pumplogs.

2. CREATE DIRECTORIES IN DATABASE.

SQL> create directory dump_dir as '/u02/ORACLE/bozo/datapump'; ……… all dumps are sent to this area.

SQL> create directory log_dir as '/u02/ORACLE/bozo/pumplogs'; ……… all logs are sent to this area.

The physical directories must already exist on the unix machine; CREATE DIRECTORY does not create them or validate the path.

SQL> grant read,write on directory dump_dir to arjun; ……… the user exporting needs the write privilege and the user importing needs the read privilege.

Grant succeeded.

SQL> grant read,write on directory log_dir to scott;

Grant succeeded.
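
The directory objects can be verified from the data dictionary; a quick check using the standard dba_directories view:

SQL> select directory_name, directory_path from dba_directories
     where directory_name in ('DUMP_DIR','LOG_DIR');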

ESTIMATE SIZE OF EXPORT

expdp arjun/arjun logfile=log_dir:full1.log estimate_only=y ……… a dumpfile parameter is not allowed when using estimate_only=y.

This does not export anything; it only estimates the size of the export dump.

SCHEMA EXPORT

expdp arjun/arjun schemas=arjun dumpfile=dump_dir:schema1.dmp logfile=log_dir:full1.log parallel=2 filesize=2G

Dump_dir and log_dir are the directories created above.

FULL DATABASE EXPORT

expdp system/temp full=y dumpfile=dump_dir:full.dmp logfile=log_dir:full3.log parallel=2 filesize=2G
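
A long-running Data Pump job can also be monitored or restarted by re-attaching to it. A sketch; the job name below follows Oracle's default naming pattern for a full export and is an assumption, not output from the command above:

expdp system/temp attach=SYS_EXPORT_FULL_01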

IMPORTING USING DATAPUMP

To import into another schema, for example from arjun to scott:

impdp arjun/arjun remap_schema=arjun:scott dumpfile=dump_dir:schema.dmp logfile=log_dir:full4.log

To import into the same schema name, i.e. an export taken from arjun and imported back into arjun:

impdp system/temp schemas=arjun dumpfile=dump_dir:schema.dmp logfile=log_dir:scott1.log

The dumpfile parameter should point to the dump file in dump_dir.

FULL DATABASE IMPORT:

impdp system/temp dumpfile=dump_dir:full.dmp logfile=log_dir:full10.log


Renaming datafile on standby database

Posted by Gavin Soorma on June 30, 2009

DATAGUARD TROUBLESHOOTING

A datafile can be added on the primary under a path that is not covered by the db_file_name_convert parameter, which is what normally ensures the file is added on the standby database as well.

This documents a real-life scenario: a datafile is added in a wrong area on the primary database, one that is not in the db_file_name_convert path in the standby init.ora, and shows how to recover the standby database in this case.

STANDBY SITE

SQL> show parameter convert

NAME TYPE VALUE
———————————— ———– ——————————
db_file_name_convert string /opt/oracle/, /opt/oracle/
log_file_name_convert string /opt/oracle/, /opt/oracle/

SQL> show parameter standby

NAME TYPE VALUE
———————————— ———– ——————————
standby_file_management string AUTO

PRIMARY SITE:

If a file is created in any area other than /opt/oracle (which is in the db_file_name_convert path):

EX: SQL> create tablespace arjun datafile '/u01/ORACLE/dgtest9i_bkp/arjun.dbf' size 10m;

Tablespace created.

SQL> alter system switch logfile;

Since db_file_name_convert on the standby site only handles files created in the /opt/oracle area, the datafile cannot be added on the standby site.
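
Had the convert parameter also mapped the new path, the file would have been created on the standby automatically. A hedged sketch of such an entry in the standby init.ora; pairing the new primary path with this standby path is illustrative:

*.db_file_name_convert='/opt/oracle/','/opt/oracle/','/u01/ORACLE/dgtest9i_bkp/','/opt/oracle/oradata/dgtest9i/'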

Alert log in standby site…

Media Recovery Log /opt/oracle/dgtest9i/arch/arch235.log
File #7 added to control file as 'UNNAMED00007'. Originally created as:
'/u01/ORACLE/dgtest9i_bkp/arjun.dbf'
Recovery was unable to create the file as:
'/u01/ORACLE/dgtest9i_bkp/arjun.dbf'
MRP0: Background Media Recovery terminated with error 1274
ORACLE AUTOMATICALLY STOPS MEDIA RECOVERY
Sun Mar 5 18:49:06 2006
Errors in file /opt/oracle/admin/dgtest9i/bdump/dgtest9i_mrp0_30121.trc:
ORA-01274: cannot add datafile '/u01/ORACLE/dgtest9i_bkp/arjun.dbf' - file could not be created
ORA-01119: error in creating database file '/u01/ORACLE/dgtest9i_bkp/arjun.dbf'
ORA-27040: skgfrcre: create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
MRP0: Background Media Recovery process shutdown ……… Oracle automatically shuts down the recovery process in this case.

Check whether logs have been applied….

SQL> select max(SEQUENCE#) "LAST_LOG_GENERATED" FROM V$ARCHIVED_LOG WHERE THREAD# = 1 and ARCHIVED='YES';

LAST_LOG_GENERATED
——————
237

select max(SEQUENCE#) "LAST_LOG_APPLIED" FROM V$ARCHIVED_LOG WHERE THREAD# = 1 and APPLIED='YES';

LAST_LOG_APPLIED
—————-
234
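
If preferred, the two checks can be combined into a single statement; a convenience sketch against the same view:

SQL> select max(sequence#) "LAST_LOG_GENERATED",
            max(decode(applied, 'YES', sequence#)) "LAST_LOG_APPLIED"
     from v$archived_log where thread# = 1;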

Now check the v$datafile view in standby database…..

SQL> select name from v$datafile;

NAME
——————————————————————————–
/opt/oracle/oradata/dgtest9i/system01.dbf
/opt/oracle/oradata/dgtest9i/undotbs01.dbf
/opt/oracle/oradata/dgtest9i/users01.dbf
/opt/oracle/product9204/dbs/UNNAMED00007 ……… this is the new file.

To rename the file as required…

In standby database….

alter system set standby_file_management=manual;

SQL> alter database create datafile '/opt/oracle/product9204/dbs/UNNAMED00007' as '/opt/oracle/oradata/dgtest9i/arjun1.dbf';
Database altered.

SQL> select name from v$datafile;

NAME
——————————————————————————–
/opt/oracle/oradata/dgtest9i/system01.dbf
/opt/oracle/oradata/dgtest9i/undotbs01.dbf
/opt/oracle/oradata/dgtest9i/users01.dbf
/opt/oracle/oradata/dgtest9i/arjun1.dbf

alter system set standby_file_management=AUTO;

alter database recover managed standby database disconnect;

Now check the alert log again, and notice that recovery has started on the standby site.

ALTER SYSTEM SET standby_file_management='MANUAL' SCOPE=BOTH;
Sun Mar 5 18:56:23 2006
alter database create datafile '/opt/oracle/product9204/dbs/UNNAMED00007'
as '/opt/oracle/oradata/dgtest9i/arjun1.dbf'
Sun Mar 5 18:56:23 2006
Starting control autobackup
Control autobackup failed with following error :
ORA-00236: snapshot operation disallowed: mounted controlfile is a backup
Completed: alter database create datafile '/opt/oracle/produc
Sun Mar 5 18:57:01 2006
ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=BOTH;
Sun Mar 5 18:57:08 2006
Alter database recover managed standby database disconnect
Attempt to start background Managed Standby Recovery process
MRP0 started with pid=11
MRP0: Background Managed Standby Recovery process started
Starting datafile 1 recovery in thread 1 sequence 235
Datafile 1: '/opt/oracle/oradata/dgtest9i/system01.dbf'
Starting datafile 2 recovery in thread 1 sequence 235
Datafile 2: '/opt/oracle/oradata/dgtest9i/undotbs01.dbf'
Starting datafile 3 recovery in thread 1 sequence 235
Datafile 3: '/opt/oracle/oradata/dgtest9i/users01.dbf'
Starting datafile 4 recovery in thread 1 sequence 235
Datafile 4: '/opt/oracle/oradata/dgtest9i/arjun1.dbf'
Media Recovery Log /opt/oracle/dgtest9i/arch/arch235.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch236.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch237.log
Media Recovery Waiting for thread 1 seq# 238 (in transit)

Now check logs applied…..

SQL> select max(SEQUENCE#) "LAST_LOG_GENERATED" FROM V$ARCHIVED_LOG WHERE THREAD# = 1 and ARCHIVED='YES';

LAST_LOG_GENERATED
——————
237

select max(SEQUENCE#) "LAST_LOG_APPLIED" FROM V$ARCHIVED_LOG WHERE THREAD# = 1 and APPLIED='YES';

LAST_LOG_APPLIED
—————-
237

Therefore it is a good idea to check the alert log on a regular basis to make sure managed recovery has not crashed. Log shipping alone is not enough of a health check, since logs will continue to be shipped whether or not recovery is in progress.


Data Guard switchover checklist

Posted by Gavin Soorma on June 30, 2009

In some establishments, Data Guard switchovers are performed manually.

Please perform these pre-requisite checks before switching the standby over to the primary role.

1. ON STANDBY SITE:

SQL> select database_role from v$database;

DATABASE_ROLE
—————-
PHYSICAL STANDBY…………….PROCEED.

If you receive a reply like the one below, do not proceed; you are most probably issuing the SQL command on the primary site:

SQL> select database_role from v$database;

DATABASE_ROLE
—————-
PRIMARY…………….DO NOT PROCEED.

Again on STANDBY SITE:

SQL> select name,value from v$parameter where name in ('log_archive_dest_1','log_archive_dest_state_1','log_archive_dest_2','log_archive_dest_state_2');

NAME
—————————————————————-
VALUE
——————————————————————————–
log_archive_dest_1
LOCATION=/opt/oracle/opsdb9i/arch

log_archive_dest_2
SERVICE=opsdb9i_blade07 lgwr sync affirm nodelay ………. Make sure lgwr, and not arch, is specified here; otherwise the new primary database will not open after switchover (if the protection mode is Maximum Availability).

log_archive_dest_state_1
ENABLE

log_archive_dest_state_2
DEFER …………..DO NOT PROCEED.

While the database is functioning as a STANDBY, it is better to set log_archive_dest_state_2 to DEFER to avoid errors appearing in the alert log, but make sure it is set to ENABLE before starting a switchover.

This command can be issued to set it to ENABLE:

SQL> alter system set log_archive_dest_state_2='ENABLE' scope=both;

Now check again…

NAME
—————————————————————-
VALUE
——————————————————————————–
log_archive_dest_state_2
ENABLE…………………………………PROCEED .

SQL> select distinct status from v$datafile;

STATUS
——-
ONLINE
SYSTEM

If any file/files are in RECOVER status, DO NOT PROCEED with the switchover…

SQL> select distinct open_mode from v$database;

OPEN_MODE
———-
MOUNTED………This is the correct response ..PROCEED with switchover.

If the STANDBY database has been opened in READ ONLY mode, you will receive the following message…

SQL> select distinct open_mode from v$database;

OPEN_MODE
———-
READ ONLY……………….DO NOT PROCEED with switchover.

To confirm that logs are being shipped and, more importantly, that the last archived log has been applied:

On PRIMARY site
SQL> select max(SEQUENCE#) "LAST_LOG_GENERATED" FROM V$ARCHIVED_LOG WHERE THREAD# = 1 and ARCHIVED='YES';

LAST_LOG_GENERATED
——————
14

Now on STANDBY site.
SQL> select max(SEQUENCE#) "LAST_LOG_APPLIED" FROM V$ARCHIVED_LOG WHERE THREAD# = 1 and APPLIED='YES';

LAST_LOG_APPLIED
—————-
14

SAFE TO PROCEED.
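
For reference, once every check above passes, the switchover itself is performed with the standard role-transition commands; the WITH SESSION SHUTDOWN clause is optional and shown here as one common choice:

On the PRIMARY:
SQL> alter database commit to switchover to physical standby with session shutdown;

On the STANDBY:
SQL> alter database commit to switchover to primary;

The new primary is then opened, and the new standby is restarted in managed recovery.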


Enable block change tracking

Posted by Gavin Soorma on June 30, 2009

BLOCK CHANGE TRACKING

From Oracle 10g onwards, Oracle has provided a feature which is very useful for reducing the time taken by RMAN incremental backups.

Without block change tracking, an incremental backup has to read every single block in the database, and a block is backed up only if it has changed. This means the RMAN backup job takes nearly as long as a normal full backup, because every block has to be read regardless.

To avoid this, Oracle introduced the BLOCK CHANGE TRACKING file. When enabled, this file records which blocks have changed since the last backup. RMAN reads this file, instead of scanning every block in the database, to identify the changed blocks, and only those blocks are backed up.

This reduces backup time considerably. In Data Warehouse type databases especially, where changes happen infrequently, incremental backups can take minutes to complete as opposed to many hours.

To enable block change tracking (this can be run with the database open, as sysdba):

alter database enable block change tracking using file '/hqlinux01db05/ORACLE/test/block_change.dbf';

Database altered.

select filename,status from v$block_change_tracking;

FILENAME
——————————————————————————–
STATUS
———-
/hqlinux01db05/ORACLE/test/block_change.dbf
ENABLED
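
To measure how much the tracking file actually helps, the proportion of blocks read during incremental backups can be compared with the total blocks per datafile. A sketch against v$backup_datafile, assuming at least one incremental (level > 0) backup has run since tracking was enabled:

select file#,
       avg(datafile_blocks) blocks,
       avg(blocks_read) blocks_read,
       round(avg(blocks_read / datafile_blocks) * 100, 2) pct_read
from v$backup_datafile
where used_change_tracking = 'YES'
and incremental_level > 0
group by file#
order by file#;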


Resize standby datafile if disk runs out of space on standby site.

Posted by Gavin Soorma on June 30, 2009

PURPOSE: TO AVOID HAVING TO RECREATE THE STANDBY DATABASE WHEN A FILE CANNOT BE RESIZED ON THE STANDBY:

ITLINUXDEVBLADE07-PRIMARY

Database is DGTEST9i

[oracle@itlinuxdevblade07 dgtest9i]$ df -h .

Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 21G 2.0G 92% /opt

[oracle@itlinuxdevblade07 dgtest9i]$

2 GB of free space on the disk on the PRIMARY.

ITLINUXDEVBLADE08

[oracle@itlinuxdevblade08 oradata]$ df -h .

Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 550M 98% /opt

[oracle@itlinuxdevblade08 oradata]$

The corresponding disk on the standby site (as mapped by db_file_name_convert) has only 550 MB free.

Now create a new tablespace for testing purposes:

SQL> create tablespace dropme datafile '/opt/oracle/oradata/dgtest9i/dropme.dbf' size 200m;

Tablespace created.

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

[oracle@itlinuxdevblade07 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 22G 1.8G 93% /opt
[oracle@itlinuxdevblade07 dgtest9i]$

On STANDBY

[oracle@itlinuxdevblade08 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 350M 99% /opt
[oracle@itlinuxdevblade08 dgtest9i]$

Available space has reduced from 550 MB to 350 MB.

The logs were successfully applied on the STANDBY disk, as there was enough space.

Now, on the PRIMARY, resize the file so that it grows by more than the space available on the standby disk.

PRIMARY site..

SQL> Alter database datafile '/opt/oracle/oradata/dgtest9i/dropme.dbf' RESIZE 1024M;
Database altered.

SQL> alter system switch logfile;

System altered.

Go to the STANDBY:

[oracle@itlinuxdevblade08 dgtest9i]$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p2 25G 23G 350M 99% /opt
[oracle@itlinuxdevblade08 dgtest9i]$ ls -lrt
total 801848
-rw-r----- 1 oracle dba 209723392 May 10 14:23 dropme.dbf
-rw-r----- 1 oracle dba 7544832 May 10 14:23 control01.ctl
[oracle@itlinuxdevblade08 dgtest9i]$

The file has not been resized, as there was not enough space.

Now check the logs for errors.

On the standby, the alert log shows that media recovery failed because the datafile could not be resized, and media recovery stops.

The standby database cannot be opened now, as the files need recovery.

Media Recovery Log /opt/oracle/dgtest9i/arch/arch59.log
MRP0: Background Media Recovery terminated with error 1237
Wed May 10 14:23:04 2006
Errors in file /opt/oracle/admin/dgtest9i/bdump/dgtest9i_mrp0_10198.trc:
ORA-01237: cannot extend datafile 3
ORA-01110: data file 3: '/opt/oracle/dgtest9i/dropme.dbf'
ORA-19502: write error on file "/opt/oracle/dgtest9i/dropme.dbf", blockno 71297 (blocksize=8192)
ORA-27072: skgfdisp: I/O error
Linux-x86_64 Error: 2: No such file or directory
Additional information: 71296
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
MRP0: Background Media Recovery process shutdown

STEPS to solve this problem.

1. Shutdown both databases.

2. Create a new db_file_name_convert path, as the file in question has to be moved to a different disk where space exists.

*.db_file_name_convert='/opt/oracle/dgtest9i/','/opt/oracle/oradata/dgtest9i/','/tmp/dgtest9i/','/tmp/dgtest9i/' ……… set on both machines.

3. Copy dropme.dbf from its original area to /tmp/dgtest9i on both machines.

4. startup mount; ……… PRIMARY database.

5. alter database rename file '/opt/oracle/oradata/dgtest9i/dropme.dbf' to '/tmp/dgtest9i/dropme.dbf';

6. alter database open;

7. alter database create standby controlfile as '/tmp/control01.ctl'; ……… primary site.

8. ftp the standby controlfile to /opt/oracle/dgtest9i on the standby (the controlfile location named in the standby spfile).

9. startup nomount; ……… STANDBY site.

SQL> show parameter convert;

NAME TYPE VALUE
———————————— ———– ——————————
db_file_name_convert string /opt/oracle/oradata/dgtest9i/, /opt/oracle/dgtest9i/, /tmp/dgtest9i/, /tmp/dgtest9i/
log_file_name_convert string /opt/oracle/oradata/dgtest9i/, /opt/oracle/dgtest9i/

SQL> alter database mount standby database;

Database altered.

SQL> select name from v$datafile;

/opt/oracle/dgtest9i/system01.dbf
/opt/oracle/dgtest9i/undotbs01.dbf
/tmp/dgtest9i/dropme.dbf

Primary site: switch a few logfiles.

Now start managed recovery on standby.

SQL> recover managed standby database disconnect;

Alert log below…

Completed: ALTER DATABASE RECOVER managed standby database d
Wed May 10 14:54:30 2006
RFS: Possible network disconnect with primary database
Wed May 10 14:54:38 2006
Fetching gap sequence for thread 1, gap sequence 59-63
Trying FAL server: DGTEST9I_BLADE07
Wed May 10 14:54:41 2006
RFS: Successfully opened standby logfile 4: '/opt/oracle/dgtest9i/st01.log'
Wed May 10 14:54:56 2006
Media Recovery Log /opt/oracle/dgtest9i/arch/arch59.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch60.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch61.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch62.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch63.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch64.log
Media Recovery Log /opt/oracle/dgtest9i/arch/arch65.log
Media Recovery Waiting for thread 1 seq# 66 (in transit)

Go to /tmp/dgtest9i on the STANDBY and make sure the file has been resized to 1 GB:

[oracle@itlinuxdevblade08 dgtest9i]$ ls -lrt
total 1049612
-rw-r----- 1 oracle dba 1073750016 May 10 14:55 dropme.dbf
[oracle@itlinuxdevblade08 dgtest9i]$ pwd

SUCCESS !


Export and Import using unix pipes and compression

Posted by Gavin Soorma on June 29, 2009

EXPORT AND IMPORT USING UNIX PIPES.

Sometimes the space on disk is not enough to hold a full export dump uncompressed.

EXAMPLE: export schema ARJUN from the primary database and import it into the secondary database.

To avoid running out of space, unix pipes and compression can be used.


EXPORT IN Primary DATABASE

cd /u02/oradata/export

CREATE A UNIX PIPE IN THE AREA WHERE THE EXPORT DUMP WILL BE WRITTEN:

mknod pipe p

CREATE THE PARAMETER FILE, arjun.par:

vi arjun.par

buffer=2097152
recordlength=65535
consistent=y
owner=arjun
log=/u02/oradata/export/arjun.log
file=/u02/oradata/export/pipe

Now export schema ARJUN.

1. nohup gzip -c </u02/oradata/export/pipe > /u02/oradata/export/arjun.dmp.gz &

Immediately enter next command –

2. nohup exp \’/ as sysdba\’ parfile=/u02/oradata/export/arjun.par &

The export of the ARJUN schema completes and the compressed dump arjun.dmp.gz is created.

ftp or copy the dump file arjun.dmp.gz to the secondary database box.
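
Optionally, before transferring, the integrity of the compressed dump can be verified; this is not part of the original procedure, just a common precaution:

gzip -t /u02/oradata/export/arjun.dmp.gz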

IMPORT IN THE secondary DATABASE (this presumes the same directory structure exists on the secondary box).

Create a UNIX PIPE in the area the dump was copied to on the secondary box.

cd /u02/oradata/export

mknod import_pipe p

Create the import parameter file, imp.par:

vi imp.par

fromuser=arjun
touser=arjun
commit=y
buffer=2097152
ignore=y
file=/u02/oradata/export/import_pipe
log=/u02/oradata/export/imp_arjun.log

Enter commands –

1. nohup gzip -dc </u02/oradata/export/arjun.dmp.gz > import_pipe &

Immediately enter next command –

2. nohup imp \’/ as sysdba\’ parfile=/u02/oradata/export/imp.par

Check the logs for output of import.


Recreate online redo logs and standby logs

Posted by Gavin Soorma on June 29, 2009

In a DATAGUARD ENVIRONMENT, if the online redo logs are resized, the standby logs also need to be resized to match the online redo logs in the primary database.

STEPS TO ACHIEVE THIS:

1. Drop and recreate the online redo logs on the PRIMARY site.

2. On the primary site, create the standby controlfile:

alter database create standby controlfile as '/tmp/standby01.ctl';

3. Copy the standby control file to the standby site.

4. Cancel recovery and shutdown immediate the standby database.

5. Startup and mount the standby database using the new standby controlfile.

6. Defer recovery until the standby logs are created; do not start managed recovery.

7. Add the standby logs with the same size as the primary online redo logs (see the example after this list).

8. The online logs on the standby side will be created at switchover time, so there is no need to do anything with them now.

9. Start recovery on the standby site.

To recreate ONLY the standby logs, follow from point 2 above.
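
As an illustration of step 7, standby logs are added with ALTER DATABASE ADD STANDBY LOGFILE; the group number, path, and size below are assumptions for this environment and must match the online redo log size on the primary:

SQL> alter database add standby logfile group 4
     ('/opt/oracle/dgtest9i/st01.log') size 100m;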


Resumable space allocation.

Posted by Gavin Soorma on June 29, 2009

RESUMABLE SPACE ALLOCATION AND AFTER SUSPEND TRIGGER

A resumable statement allows you to:

1. Suspend large operations instead of receiving an error.

2. Fix the problem while the operation is suspended, rather than starting from scratch.

A session is suspended for one of the following reasons:

1. Out of space.
2. Maximum extents reached.
3. Space quota on the tablespace exceeded.

Pre-requisites:

1. The init.ora parameter RESUMABLE_TIMEOUT, e.g. RESUMABLE_TIMEOUT=3600 (1 hour); a session that enables resumable operations without specifying a timeout defaults to 7200 seconds, or 2 hours. The parameter can be changed dynamically with 'alter system set resumable_timeout=3600'.

2. Grant the RESUMABLE system privilege to the schema owner/user.

3. The user has to issue 'alter session enable resumable' before running the transaction (see the example after this list).
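
A minimal sequence covering points 2 and 3; the timeout and statement name here are illustrative:

SQL> grant resumable to arjun;

SQL> conn arjun/arjun
SQL> alter session enable resumable timeout 3600 name 'arjun bulk insert';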

EXAMPLE…

Table test in arjun schema….

SQL> create table test (col_1 number(10),col_2 date);

Table created.

Create a script called loop.sql:

vi loop.sql

declare
   x number;
begin
   x := 0;
   for i in 1 .. 5000
   loop
      insert into test values (x, sysdate);
      x := x + 1;
      commit;
   end loop;
end;
/

Save the file as loop.sql.

First, set the resumable_timeout parameter to 0, to show the immediate error when resumable mode is off:

SQL> alter system set resumable_timeout=0 scope=both;

System altered.

SQL> @loop
declare
*
ERROR at line 1:
ORA-01653: unable to extend table ARJUN.TEST by 128 in tablespace ARJUN
ORA-06512: at line 7

Now issue this command as sysdba:

SQL> alter system set resumable_timeout=3600 scope=both;

SQL> conn arjun/arjun

@loop

The session hangs after some time, but no error message is thrown (see below for the use of the AFTER SUSPEND trigger to raise an error message when the session is suspended).

From another session….

SQL> select count(*) from arjun.test;

COUNT(*)
———-
98900

SQL> /

COUNT(*)
———-
98900

SQL> /

COUNT(*)
———-
98900

Although space has run out in tablespace ARJUN, the first session does not throw an error; it simply waits, allowing you to add space to the tablespace from another session.

The first session will wait for 1 hour, the time set by resumable_timeout=3600. If you do not take corrective action, i.e. add space within that period, an error is thrown and the transaction is rolled back.

SQL> select sum(bytes/1048576) from dba_free_space where tablespace_name='ARJUN';

SUM(BYTES/1048576)
——————
.9375

Add space to the tablespace:

SQL> Alter database datafile '/u03/ORACLE/test9/arjun.dbf' resize 5m;

Database altered.

As soon as you add space, the table begins to be populated again:

SQL> select count(*) from arjun.test;

COUNT(*)
———-
119097

SQL> /

COUNT(*)
———-
136680

SQL> /

COUNT(*)
———-
145413

##################################################################

USAGE OF THE AFTER SUSPEND TRIGGER WITH RESUMABLE SPACE ALLOCATION:

Another very useful feature is the AFTER SUSPEND trigger, which results in an error message appearing in the session rather than the session simply hanging (in which case the DBA or user would never know there is an error unless they checked from another session).

This helps the DBA take corrective action once the error message is thrown.

Once corrective action is taken, the original session can be restarted.

First, create the trigger as the SYS user:

CREATE OR REPLACE TRIGGER ARJUN_RESUMABLE
AFTER SUSPEND ON DATABASE
BEGIN
DBMS_RESUMABLE.SET_TIMEOUT(60);
END;
/

This creates a trigger which sets the resumable timeout to 60 seconds (1 minute) whenever a session is suspended. This is the time allowed for corrective action before Oracle aborts the transaction.

Begin inserting into the TEST table…

@loop

Hangs after sometime….

To check whether a session is suspended, and to see the error message, select error_msg from the dba_resumable view in another session:

SQL> select error_msg from dba_resumable;

ERROR_MSG
——————————————————————————–
ORA-01653: unable to extend table ARJUN.TEST by 128 in tablespace ARJUN

SQL> select count(*) from arjun.test;

COUNT(*)
———-
353926

The original session that was running the insert hangs for 1 minute and then throws the error:

declare
*
ERROR at line 1:
ORA-30032: the suspended (resumable) statement has timed out
ORA-01653: unable to extend table ARJUN.TEST by 128 in tablespace ARJUN
ORA-06512: at line 7

Take corrective action and then rerun the insert:

@loop

SQL> select count(*) from arjun.test; ……… notice the session has resumed and the count increases:

COUNT(*)
———-
416321

SQL> /

COUNT(*)
———-
421232

SQL> /

COUNT(*)
———-
425740


Flashback query

Posted by Gavin Soorma on June 29, 2009

FLASHBACK QUERY

The FLASHBACK QUERY feature allows you to run queries against the database as of a certain time or a user-specified SCN.

The FLASHBACK VERSIONS QUERY feature allows you to use the VERSIONS clause to retrieve all versions of the rows that exist between two points in time or two SCNs.

YOU REQUIRE THE SELECT ANY TRANSACTION PRIVILEGE TO ISSUE A QUERY AGAINST FLASHBACK_TRANSACTION_QUERY.

SQL> create table test(salary number(10));

Table created.

SQL> insert into test values(1000);

1 row created.

SQL> select * from test;

SALARY
———-
1000

SQL> update test set salary=2000;

1 row updated.

SQL> commit;

Commit complete.

SQL> update test set salary=4000;

1 row updated.

SQL> commit;

Commit complete.

SQL> update test set salary=5000;

1 row updated.

SQL> commit;

Commit complete.

SQL> select salary from test versions between scn minvalue and maxvalue;

SALARY
———-
5000
4000
2000
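
The plain flashback query form (AS OF, described in the introduction) works against the same table; the five-minute offset here is purely illustrative:

SQL> select salary from test as of timestamp (systimestamp - interval '5' minute);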

Using FLASHBACK VERSIONS QUERY with explicit timestamps:

select salary from test versions between timestamp
TO_TIMESTAMP('2005-09-12 14:38:00','YYYY-MM-DD HH24:MI:SS') and
TO_TIMESTAMP('2005-09-12 14:43:00','YYYY-MM-DD HH24:MI:SS');

SALARY
———-
5000
4000
2000

Select to_char(versions_starttime,'DD-MON HH:MI') "START DATE",
to_char(versions_endtime,'DD-MON HH:MI') "END DATE", VERSIONS_XID, VERSIONS_OPERATION, salary FROM test VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE;

START DATE END DATE VERSIONS_XID V SALARY
———— ———— —————- – ———-
12-SEP 02:40 0700190004010000 U 5000
12-SEP 02:40 12-SEP 02:40 0A00290001020000 U 4000
12-SEP 02:40 12-SEP 02:40 07000C0005010000 I 2000

Using FLASHBACK TRANSACTION query:

Flashback transaction query returns, among other things, the SQL needed to undo each change (UNDO_SQL), keyed by transaction ID:

SQL> Select table_name, operation, undo_sql from flashback_transaction_query where XID='06002C00F7060000';

TABLE_NAME
——————————————————————————–
OPERATION
——————————–
UNDO_SQL
——————————————————————————–
TEST
UPDATE
update "ARJUN"."TEST" set "SALARY" = '1000' where ROWID = 'AAAC7iAAFAAAAAPAAA';

TEST
INSERT
delete from "ARJUN"."TEST" where ROWID = 'AAAC7iAAFAAAAAPAAA';


To narrow down to a point in time:

Select table_name, operation, undo_sql
From flashback_transaction_query where start_timestamp >= to_timestamp('2005-09-12 14:38:00','YYYY-MM-DD HH24:MI:SS')
AND TABLE_OWNER='ARJUN';

TABLE_NAME
——————————————————————————–
OPERATION
——————————–
UNDO_SQL
——————————————————————————–
TEST
UPDATE
update "ARJUN"."TEST" set "SALARY" = '4000' where ROWID = 'AAAC7iAAFAAAAAPAAA';

TEST
UPDATE
update "ARJUN"."TEST" set "SALARY" = '2000' where ROWID = 'AAAC7iAAFAAAAAPAAA';

Please remember that the FLASHBACK VERSIONS QUERY cannot be used to query:

1. EXTERNAL TABLES
2. TEMPORARY TABLES
3. FIXED TABLES
4. VIEWS

It also cannot span DDL commands (alter table, etc.).
