Sunday 30 July 2017

DATABASE CLONE USING RMAN BACKUP




Here we clone/refresh a full database using an RMAN backup. The high-level steps are:


  1.  Take a full database backup of the source database.
  2.  Take control file and spfile backups (if you are cloning to a new database, create a pfile from the source).
  3.  Restore the control file from the backup.
  4.  Restore the database using any of the available methods.
  5.  Recover the database.
  6.  Open the database with RESETLOGS.
  7.  Change the database/instance name.


Before we perform the clone/refresh we have to take a full backup of the source database. The steps below are for disk backup and restore; the same steps can be performed for tape with minimal changes.

Backup Source database:

Before taking the backup, don't forget to check the available space.

Check the database size using the SQL below.

As we are taking an RMAN backup, we need to know the total size of the datafiles.

SQL> select sum(bytes)/1024/1024/1024 from dba_data_files;

Estimate the backup size from the datafile total (compressed backup sets are usually considerably smaller) and confirm that sufficient space is available on the backup mount point.
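The space check can be scripted; below is a minimal sketch (the mount point and the estimated size in GB are placeholders you would take from your environment and from the query above):

```shell
#!/bin/sh
# Sketch: check that the backup mount point has enough free space before
# running the RMAN backup.  Arguments: mount point, estimated backup size
# in GB (from the dba_data_files total, allowing for compression).
check_space() {
    dir=$1
    est_gb=$2
    # df -Pk: POSIX output in 1K blocks; column 4 is the available space
    free_gb=$(df -Pk "$dir" | awk 'NR==2 {print int($4/1024/1024)}')
    if [ "$free_gb" -ge "$est_gb" ]; then
        echo "OK: ${free_gb} GB free on $dir (need ~${est_gb} GB)"
    else
        echo "WARNING: only ${free_gb} GB free on $dir (need ~${est_gb} GB)"
        return 1
    fi
}

# Example call: /tmp and 0 GB are placeholders; in practice this would be
# something like: check_space /Backup_location 120
check_space /tmp 0
```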

$ vi backup.scr

run
{
allocate channel t1 type disk;
allocate channel t2 type disk;
allocate channel t3 type disk;
allocate channel t4 type disk;
allocate channel t5 type disk;
backup as compressed backupset database tag 'BACKUP' format='/Backup_location/datafile_%d_%U';
backup as compressed backupset archivelog all tag 'BACKUP' format='/Backup_location/arch_%d_%U';
backup spfile format='/Backup_location/spfile_%d_%U';
backup current controlfile format='/Backup_location/control_%d_%U';
release channel t1;
release channel t2;
release channel t3;
release channel t4;
release channel t5;
}

The above script takes the backup on the source; allocate channels as per requirement, depending on the number of CPUs available on the server.

The script backs up the datafiles, spfile, control file and archive logs, so there is no need for separate backups.

You can run the rman script in background (nohup) using below command.

$ nohup rman target / cmdfile=backup.scr log=backup.log &

Monitor the log file for any errors.
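A quick scan of the log can be scripted as well; this is a sketch that looks for RMAN- and ORA- error codes (backup.log is the log name used in the nohup command above):

```shell
#!/bin/sh
# Sketch: scan an RMAN log for RMAN-/ORA- error codes and report the result.
scan_rman_log() {
    log=$1
    # Print any matching error lines, then a summary verdict.
    if [ -f "$log" ] && grep -E 'RMAN-[0-9]+|ORA-[0-9]+' "$log"; then
        echo "errors found in $log"
        return 1
    fi
    echo "no errors found in $log"
}

scan_rman_log backup.log
```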

Once the full backup has completed successfully, move the backup pieces to the target server (where we want to clone/refresh) by any available method, and verify that all the pieces are copied properly.

Give proper permissions to the backup files.

We assume here that the backup sets are copied to the target using SCP.
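Verifying that the pieces arrived intact can be done with checksums; a sketch using GNU coreutils md5sum (the path and file patterns are placeholders matching the backup formats above):

```shell
#!/bin/sh
# Sketch: verify copied backup pieces with checksums.
#
# On the source, next to the backup pieces:
#   cd /Backup_location && md5sum datafile_* arch_* spfile_* control_* > backup.md5
# Copy backup.md5 to the target along with the pieces, then on the target:
#   cd /Backup_location && md5sum -c backup.md5    # each piece reports OK or FAILED
#
# Self-contained demonstration with a scratch directory:
dir=$(mktemp -d)
echo "dummy backup piece" > "$dir/datafile_TEST_01"
( cd "$dir" && md5sum datafile_* > backup.md5 && md5sum -c backup.md5 )
```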

On the Target.

If you are doing the refresh on a fresh server (no database running), copy the pfile and password file from the source and make the necessary changes.

If a database is already running on the target server, create a pfile from the spfile (if the database is running on an spfile).

Here we assume that a database is already running and that we have created a pfile from the spfile.

Follow the steps below for the refresh.

  • Shut down the currently running database.
  • Start the database in restricted mode and drop it.
  • Edit the db_name in the pfile.
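The first two bullets can be sketched as follows. Note that DROP DATABASE requires the instance to be mounted exclusive in restricted mode, and it permanently deletes the datafiles, online logs and control files, so double-check you are connected to the database being refreshed:

```
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT EXCLUSIVE RESTRICT;
SQL> DROP DATABASE;
```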




After making the changes, start the database with the new pfile.

SQL> Startup nomount pfile=$ORACLE_HOME/dbs/initDBNAME.ora

With the database started in NOMOUNT mode, restore the control file from the backup using RMAN.

RMAN> RESTORE CONTROLFILE FROM '/Backup_path/Controlfile_backup_filename';

Once the control file is restored, mount the database.

RMAN> ALTER DATABASE MOUNT;

Before restoring the database on the target we have to CATALOG the backup pieces.

RMAN> catalog start with '/Backup_location';

We can even catalog individual pieces.

catalog backuppiece '/Backup_location/Backup_pieces_name';
catalog backuppiece '/Backup_location/Backup_pieces_name';
catalog backuppiece '/Backup_location/Backup_pieces_name';

We can restore backup using different ways.

Method 1: When all the datafile paths are the same (identical mount points, e.g. ASM to ASM)

run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate channel c5 type disk;
allocate channel c6 type disk;
allocate channel c7 type disk;
allocate channel c8 type disk;
set until sequence 152421;             ---------------->> take from backup log.
restore database;
switch datafile all;
switch tempfile all;
recover database;
release channel c1 ;
release channel c2 ;
release channel c3 ;
release channel c4 ;
release channel c5 ;
release channel c6 ;
release channel c7 ;
release channel c8 ;
}

Method 2: When the datafile paths are on different mount points (non-identical storage, e.g. ASM to non-ASM or vice versa)

Take the list of datafiles and edit the script accordingly.

run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
set until sequence 152421 thread 2;    ------->> for a RAC database, set the until sequence for each thread.
set until sequence 152802 thread 1;
set newname for datafile 1 TO '+DISK_GROUP';
set newname for datafile 2 TO '+DISK_GROUP';
set newname for datafile 3 TO '+DISK_GROUP';
set newname for datafile 4 TO '+DISK_GROUP';
set newname for datafile 5 TO '+DISK_GROUP';
set newname for datafile 6 TO '+DISK_GROUP';
set newname for datafile 7 TO '+DISK_GROUP';
set newname for datafile 8 TO '+DISK_GROUP';
set newname for datafile 9 TO '+DISK_GROUP';
set newname for datafile 10 TO '+DISK_GROUP';
restore database;
switch datafile all;
switch tempfile all;
recover database;
release channel c1 ;
release channel c2 ;
release channel c3 ;
release channel c4 ;
}

We can even restore the database using TAG of the Backup instead of Sequence.

run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate channel c5 type disk;
set newname for datafile 1 TO '+DISK_GROUP';
set newname for datafile 2 TO '+DISK_GROUP';
set newname for datafile 3 TO '+DISK_GROUP';
set newname for datafile 4 TO '+DISK_GROUP';
set newname for datafile 5 TO '+DISK_GROUP';
set newname for datafile 6 TO '+DISK_GROUP';
set newname for datafile 7 TO '+DISK_GROUP';
set newname for datafile 8 TO '+DISK_GROUP';
set newname for datafile 9 TO '+DISK_GROUP';
set newname for datafile 10 TO '+DISK_GROUP';
restore database from tag 'BACKUP';   ----->> take the tag from the backup log file.
switch datafile all;
switch tempfile all;
recover database;
release channel c1 ;
release channel c2 ;
release channel c3 ;
release channel c4 ;
release channel c5 ;
}


You can run the rman script in background (nohup) using below command.

$ nohup rman target / cmdfile=restore.scr log=restore.log &


Once the restore and recovery have completed by any one of the above methods, create the spfile, bounce the database, and open it with RESETLOGS.

SQL> Startup mount

SQL> alter database open resetlogs;

Change the DB name either with the DBNEWID utility (nid) or by recreating the control file.
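A minimal DBNEWID invocation looks like the sketch below (NEWDB is a placeholder; the database must be in MOUNT state, and after the change you must update DB_NAME in the pfile/spfile and open the database with RESETLOGS):

```
$ nid TARGET=SYS DBNAME=NEWDB
```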


Create the TEMP files if required.
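Tempfiles are not included in the backup sets, so recreate them after opening the database; a sketch (the tablespace name, disk group, and sizes are placeholders):

```
SQL> ALTER TABLESPACE temp ADD TEMPFILE '+DISK_GROUP' SIZE 10G
     AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;
```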

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


ERRORS:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at xx/xx/xxxx xx:xx:xx
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 154 found to restore
RMAN-06023: no backup or copy of datafile 153 found to restore
RMAN-06023: no backup or copy of datafile 152 found to restore
                                                                                                                              
SOLUTION:

Cause:

RMAN is looking for a backup piece that is not physically available.

Solution:

Crosscheck the backup pieces and delete the expired and obsolete backups in RMAN.
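The crosscheck and cleanup map to these standard RMAN commands (run them connected to the target database; DELETE OBSOLETE honours the configured retention policy):

```
RMAN> crosscheck backup;
RMAN> delete expired backup;
RMAN> delete obsolete;
```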


                                   There you go, good to go.................................................



Please subscribe for latest updates.

Thursday 12 January 2017

DELETING A NODE FROM THE CLUSTER ON RAC 11gR2


                                      In this article I elaborate how to delete a node from the cluster (RAC).
As DBAs, we may come across situations where we have to delete one or more nodes from the cluster for various reasons.
To demonstrate, I am performing the activity on a 2-node RAC configuration, where the requirement is to remove the first node from the cluster.

      I.            Prep-Checks – (To be performed before deleting the node).


1.      Check the node name

grid@node-1:/oracle/grid/product/11.2.0/grid$ olsnodes
node-1
node-2

2.      Check the VIP of nodes

grid@node-1:/oracle/grid/product/11.2.0/grid$ olsnodes -i
node-1       node-1-vip
node-2       node-2-vip

3.      Check the Cluster Name

oracle@node-1:~$ cd /oracle/grid/product/11.2.0/grid/bin
oracle@node-1:/oracle/grid/product/11.2.0/grid/bin$ ./cemutlo -n
accupdb-cluster




  

To delete a NODE (node-1) from a cluster

1.      Ensure that Grid_home correctly specifies the full directory path of the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.


2.      Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:
$ olsnodes -s -t

root@node-1:~# . oraenv
ORACLE_SID = [SID] ? +ASM1
The Oracle base has been changed from /oracle/app/oracle to /oracle/app/grid
root@node-1:~# olsnodes -s -t
node-1       Active  Unpinned
node-2       Active  Unpinned
                    
(If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.)
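If the node had been pinned, the unpin step would look like this (run as root from the Grid_home/bin directory; node-1 is the node to unpin):

```
# crsctl unpin css -n node-1
```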

3.      From any node that you are not deleting (node-2), run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

                    # crsctl delete node -n node-1

4.      On the node you want to delete (node-1), run the following command from the Grid_home/oui/bin directory as the user that installed Oracle Clusterware, where node-1 is the name of the node you are deleting:

               $ cd /oracle/grid/product/11.2.0/grid/oui/bin

$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/grid/product/11.2.0/grid "CLUSTER_NODES={node-1}" CRS=TRUE -silent -local
 
 
5.      De-install the Oracle Clusterware home from the node that you want to delete (node-1) by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

                    $ Grid_home/deinstall/deinstall -local

(Note: If you do not specify the -local flag, the command removes the Grid Infrastructure home from every node in the cluster.)

6.      On any node other than the one you are deleting (node-2), run the following command from the Grid_home/oui/bin directory, where node-2 is a comma-delimited list of the nodes that will remain part of your cluster:

$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/grid/product/11.2.0/grid "CLUSTER_NODES={node-2}" CRS=TRUE -silent
 
7.      Run the command a second time from the database home's oui/bin directory, where ORACLE_HOME is the Oracle database home and CRS=TRUE -silent is omitted from the syntax:

$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={node-2}"
 
8.      Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster.
 
                    $ cluvfy stage -post nodedel -n node_list [-verbose]
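For this example, with only node-1 deleted, the verification would be:

```
$ cluvfy stage -post nodedel -n node-1 -verbose
```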






                                                                          There you go, good to go.................................................




Please subscribe for latest updates.
