Thursday 12 January 2017

DELETING A NODE FROM THE CLUSTER IN RAC 11gR2


                                      In this article I elaborate how to delete a node from a cluster (RAC).
As DBAs, we may often come across situations where we have to delete one or more nodes from the cluster for various reasons.
To demonstrate, I am performing this activity on a 2-node RAC configuration, where my requirement is to remove the first node from the cluster.

      I.            Prep-Checks – (To be performed before deleting the node).


1.      Check the node name

grid@node-1:/oracle/grid/product/11.2.0/grid$ olsnodes
node-1
node-2

2.      Check the VIP of nodes

grid@node-1:/oracle/grid/product/11.2.0/grid$ olsnodes -i
node-1       node-1-vip
node-2       node-2-vip

3.      Check the Cluster Name

oracle@node-1:~$ cd /oracle/grid/product/11.2.0/grid/bin
oracle@node-1:/oracle/grid/product/11.2.0/grid/bin$ ./cemutlo -n
accupdb-cluster




  

    II.            Deleting a node (node-1) from the cluster

1.      Ensure that Grid_home correctly specifies the full directory path of the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.
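
A quick way to confirm the Grid home on each node is to check the local registry configuration and the central inventory. The paths below (/etc/oracle/olr.loc and /oracle/oraInventory) are assumptions based on a typical Linux install; adjust them for your environment:

```shell
# Grid home as recorded in the Oracle Local Registry configuration
grep -i crs_home /etc/oracle/olr.loc

# The Clusterware home is the inventory entry flagged CRS="true"
# (inventory location assumed; check /etc/oraInst.loc if unsure)
grep 'CRS="true"' /oracle/oraInventory/ContentsXML/inventory.xml
```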


2.      Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:
$ olsnodes -s -t

root@node-1:~# . oraenv
ORACLE_SID = [SID] ? +ASM1
The Oracle base has been changed from /oracle/app/oracle to /oracle/app/grid
root@node-1:~# olsnodes -s -t
node-1       Active  Unpinned
node-2       Active  Unpinned
                    
(If the node is pinned, run the crsctl unpin css command first. Otherwise, proceed to the next step.)
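
If the olsnodes output had shown the node as Pinned, it could be unpinned as root from the Grid_home/bin directory, for example:

```shell
# Run as root from Grid_home/bin; node-1 is the node to be unpinned
./crsctl unpin css -n node-1
```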

3.      From any node that you are not deleting (node-2), run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

                    # crsctl delete node -n node-1

4.      On the node you want to delete (node-1), run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory, where node-1 is the name of the node that you are deleting:

               $ cd /oracle/grid/product/11.2.0/grid/oui/bin

$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/grid/product/11.2.0/grid "CLUSTER_NODES={node-1}" CRS=TRUE -silent -local
 
 
5.      De-install the Oracle Clusterware home from the node that you want to delete (node-1) by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

                    $ Grid_home/deinstall/deinstall -local
 
(Note: If you do not specify the -local flag, then the command removes the Grid Infrastructure home from every node in the cluster.)

6.      On any node other than the node you are deleting (node-2), run the following command from the Grid_home/oui/bin directory, where node-2 is a comma-delimited list of the nodes that are to remain part of your cluster:

$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/grid/product/11.2.0/grid "CLUSTER_NODES={node-2}" CRS=TRUE -silent
 
7.      Run the command a second time from the Oracle Database home's oui/bin directory, this time setting ORACLE_HOME to the database home and omitting CRS=TRUE -silent from the syntax:

$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES={node-2}"
 
8.      Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster.
 
                    $ cluvfy stage -post nodedel -n node_list [-verbose]
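
With the node names used in this article, the final verification would look something like this (run from any remaining node as the grid user, assuming cluvfy is on the PATH via the Grid home):

```shell
# Post-deletion check for the removed node
cluvfy stage -post nodedel -n node-1 -verbose

# The deleted node should no longer appear in the cluster node list
olsnodes -s -t
```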






                                                                          There you go, you are good to go!



