Tuesday 15 December 2015

How to Modify SCAN Setting or SCAN Listener Port after Installation.



1. As per the documentation "Oracle Grid Infrastructure Installation Guide", Oracle strongly recommends configuring the SCAN name in either DNS or GNS, because the /etc/hosts file can resolve it to only one IP address.

The SCAN IPs must be in the same subnet as the public and VIP addresses. The steps below are for SCAN IPs in the same subnet as the public network; for a different subnet, please refer to the relevant Oracle documentation.


1a. If you intend to use /etc/hosts for SCAN name resolution, the same (and only) entry for the SCAN name must exist on all nodes.

1b. If you intend to use DNS for SCAN name resolution, remove the entries for the SCAN name from /etc/hosts on all nodes, and make sure nslookup returns a good result on all nodes, for example:




Steps to follow:

Precheck

$GRID_HOME/bin$ nslookup New_scan_name.com
Server:        xx.xxx.xx.xx
Address:        xx.xx.xx.xx#xx

Name:   New_scan_name.com
Address: xx.xx.xx.xx1
Name:   New_scan_name.com
Address: xx.xx.xx.xx2
Name:   New_scan_name.com
Address: xx.xx.xx.xx0

$GRID_HOME/bin$ srvctl config scan
SCAN name: old_scan.com, Network: 1/xx.xx.xx.xx/xxx.xxx.xxx.0/eth0
SCAN VIP name: scan1, IP: old_scan.com/xx.xx.xx.xx2
SCAN VIP name: scan2, IP: old_scan.com/xx.xx.xx.xx0
SCAN VIP name: scan3, IP: old_scan.com/xx.xx.xx.xx1
$GRID_HOME/bin$ srvctl config network
Network exists: 1/xx.xx.xx.xx/xxx.xxx.xxx.x/eth0, type static


Once #1a or #1b is configured properly, execute the following to modify:

If name resolution for the SCAN is being switched from the local hosts file to DNS, be sure to remove the SCAN name from the local hosts file on all nodes prior to executing the commands below.

2.1. To modify SCAN name or SCAN VIP addresses:

2.1.1. As grid user stop resources:

$ $GRID_HOME/bin/srvctl stop scan_listener
$ $GRID_HOME/bin/srvctl stop scan

Execution

2.1.2. As root user modify SCAN:
#$GRID_HOME/bin/srvctl modify scan -n New_scan_name.com
Once SCAN name is changed, update database init.ora/spfile parameter remote_listener to the new one.

alter system set remote_listener = 'New_scan_name.com:1521' scope=both sid='*';
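
To confirm the new value is active on every instance, a quick check (gv$parameter is a standard view, shown here as a convenience):

SQL> show parameter remote_listener
SQL> select inst_id, name, value from gv$parameter where name = 'remote_listener';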

2.1.3. As grid user modify and start resources:

$ $GRID_HOME/bin/srvctl modify scan_listener -u
$ $GRID_HOME/bin/srvctl start scan_listener

Post Check:

$ $GRID_HOME/bin/srvctl config scan
SCAN name: New_scan_name.com, Network: 1/xx.xx.xx.xx/xxx.xxx.xxx.0/eth0
SCAN VIP name: scan1, IP: New_scan_name.com/xx.xx.xx.xx1
SCAN VIP name: scan2, IP: New_scan_name.com/xx.xx.xx.xx2
SCAN VIP name: scan3, IP: New_scan_name.com/xx.xx.xx.xx0

$ $GRID_HOME/bin/srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
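
The post title also mentions the SCAN listener port. A minimal sketch of changing it, assuming the 11.2 srvctl syntax and using 1551 purely as an example port:

As grid user:
$ $GRID_HOME/bin/srvctl modify scan_listener -p 1551
$ $GRID_HOME/bin/srvctl stop scan_listener
$ $GRID_HOME/bin/srvctl start scan_listener
$ $GRID_HOME/bin/srvctl config scan_listener

Then point each database at the new port:
SQL> alter system set remote_listener = 'New_scan_name.com:1551' scope=both sid='*';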




Ref: ORACLE DOC: (Doc ID 972500.1)


There you go, good to go.................................................

Thursday 26 November 2015

REDO RESIZE ON RAC




PRECHECK

**********


Check the size of the present redo logs:
SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;
SQL> set lines 132
SQL> set trims on
SQL> select * from v$log;


SQL> col member format a35
SQL> select * from v$logfile;

Check the recommended size by running either of the queries below, supplying the peak hours of your environment.
SELECT
  (SELECT ROUND (AVG (BYTES) / 1024 / 1024, 2) FROM V$LOG) AS "Redo size (MB)",
  ROUND ( (20 / AVERAGE_PERIOD) * (SELECT AVG (BYTES) FROM V$LOG) / 1024 / 1024, 2) AS "Recommended Size (MB)"
  FROM (SELECT AVG ( (NEXT_TIME - FIRST_TIME) * 24 * 60) AS AVERAGE_PERIOD
          FROM V$ARCHIVED_LOG
         WHERE FIRST_TIME > SYSDATE - 3
           AND TO_CHAR (FIRST_TIME, 'HH24:MI') BETWEEN '16:00' AND '17:00');

SELECT
(SELECT ROUND(AVG(BYTES) / 1024 / 1024, 2) FROM V$LOG) AS "Redo size (MB)",
ROUND((20 / AVERAGE_PERIOD) * (SELECT AVG(BYTES)
FROM V$LOG) / 1024 / 1024, 2) AS "Recommended Size (MB)"
FROM (SELECT AVG((NEXT_TIME - FIRST_TIME) * 24 * 60) AS AVERAGE_PERIOD
FROM V$ARCHIVED_LOG
WHERE FIRST_TIME > SYSDATE - 3
AND TO_CHAR(FIRST_TIME, 'HH24:MI') BETWEEN
&START_OF_PEAK_HOURS AND &END_OF_PEAK_HOURS);
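
When the second query prompts for the substitution variables, supply the values including the quotes (for example '09:00' and '18:00'), or pre-define them in SQL*Plus first; the hours below are only an illustration:

SQL> define START_OF_PEAK_HOURS = "'09:00'"
SQL> define END_OF_PEAK_HOURS = "'18:00'"
-- now run the query above; the defines are substituted for &START_OF_PEAK_HOURS and &END_OF_PEAK_HOURS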

SQL> select group#,thread#,members,status from v$log;
    GROUP#    THREAD#    MEMBERS STATUS
---------- ---------- ---------- ----------------
         1          1          2 INACTIVE
         2          1          2 CURRENT
         3          2          2 CURRENT
         4          2          2 INACTIVE
         5          3          2 ACTIVE
         6          3          2 CURRENT

Check the status of the group you are going to drop every time.
Note:
To drop a group its status must be INACTIVE; if it is not, switch the logfile to force it to become inactive.

SQL> alter system switch logfile;

Continue this until the group you want to drop reaches the INACTIVE state.
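
On RAC, ALTER SYSTEM SWITCH LOGFILE only switches the log of the instance you are connected to. A commonly used alternative that acts on all threads, sketched here as a convenience:

SQL> alter system archive log current;   -- switches and archives the current log of every enabled thread
SQL> alter system checkpoint global;     -- pushes the checkpoint so ACTIVE groups can become INACTIVE
SQL> select group#, thread#, status from v$log;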

EXECUTION


**********

If each instance has only two groups, first create an additional temporary group.

In RAC, each logfile group is associated with a thread#; each instance uses one thread, and you need a minimum of two groups per thread#.

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 8 ('+DISKGROUP_NAME') SIZE 100M;

When the status of the old group is INACTIVE:

Drop the logfile
*************
SQL>  alter database drop logfile group 1;

To remove the file from the ASM disks (if it is not deleted automatically):

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
 
SQL>  ALTER DISKGROUP DISKGROUP_NAME DROP FILE '+DISKGROUP_NAME/PATH/group_1.XXXXX';


ADD the logfile with your desired size
*******************************
SQL>  ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 1 ('+DISKGROUP_NAME','+DISKGROUP_NAME') SIZE 500M;

After all the groups of this thread have been resized, drop the temporary group.

Drop the logfile
*************
SQL>  alter database drop logfile group 8;



Repeat the above steps for the redo logs of every thread/instance in the RAC, as sketched below for thread 2.
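
For example, the same cycle for thread 2 (groups 3 and 4 in the sample output above; group 9 and the diskgroup name are placeholders) would look roughly like this:

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 9 ('+DISKGROUP_NAME') SIZE 100M;   -- temporary group
SQL> alter system switch logfile;                                                 -- repeat until group 3 is INACTIVE
SQL> alter database drop logfile group 3;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 ('+DISKGROUP_NAME','+DISKGROUP_NAME') SIZE 500M;
SQL> alter database drop logfile group 4;                                         -- once it is INACTIVE
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4 ('+DISKGROUP_NAME','+DISKGROUP_NAME') SIZE 500M;
SQL> alter database drop logfile group 9;                                         -- drop the temporary group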


POST CHECK


***********


Check the size of REDO.
SQL> SELECT a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#;

Check the status

SQL> select group#,thread#,members,status from v$log;
SQL> set lines 132
SQL> set trims on
SQL> select * from v$log;


SQL> col member format a35
SQL> select * from v$logfile;




There you go, good to go.................................................

Thursday 5 November 2015

STATSPACK





                         It's Easy with AJEET,


In this section I am going to provide the steps to configure Statspack and to generate a Statspack report in Oracle.

Statspack is the predecessor of AWR; it collects details of what is going on inside the Oracle database and can be used for performance tuning.



Installation of  STATSPACK  needs the following steps:

·    Creating the perfstat tablespace
·    Creating the perfstat user (created automatically)
·    Creating the tables (created automatically)
·    Installing the packages
·    Collecting data

·    Scheduling data collection snapshots with dbms_job

Create tablespace
create tablespace statspack_data datafile '/data/oracle/database/11gR2/oradata/scratch/statspack_data01.dbf' size 2G autoextend on maxsize 5G extent management local uniform size 1M  segment space management auto;

(Note: only the tablespace should be created, not the user, as the user perfstat will be created automatically when we run spcreate.sql)


CONFIGURING THE STATSPACK PACKAGE

1.      Conn as sysdba
2.      Run the SPCREATE.SQL file
              SQL> @?/rdbms/admin/spcreate
Enter password: <give your password for the user perfstat>
            Enter default tablespace: statspack_data
            Enter temporary tablespace:temp
3.      If any error occurred, then check the following log files:
ORACLE_HOME/bin/spcusr.lis
            ORACLE_HOME/bin/spctab.lis
            ORACLE_HOME/bin/spcpkg.lis

4.      Connect as sysdba and set timed_statistics to TRUE dynamically:
              alter system set TIMED_STATISTICS=TRUE;

Collecting data
SQL> CONNECT perfstat/log           (The perfstat user is created automatically when we run spcreate.sql)
            SQL> EXECUTE statspack.snap;
            PL/SQL procedure successfully completed.



Scheduling data collection snapshots with dbms_job
Schedule a snapshot to be run on this instance every hour, on the hour
(Note: run only one of the two options below; do not run both and do not run either twice, as a snapshot job is created every time you run the .sql file or the script.)
Connect as sysdba
SQL> @?/rdbms/admin/spauto.sql
(OR)
variable jobno number;
variable instno number;
BEGIN
  SELECT instance_number INTO :instno FROM v$instance;
  DBMS_JOB.SUBMIT(:jobno, 'statspack.snap;', TRUNC(sysdate+1/24,'HH'), 'TRUNC(SYSDATE+1/24,''HH'')', TRUE, :instno);
  COMMIT;
END;
/
To check that the job is scheduled, query dba_jobs:
SQL>select * from dba_jobs;

(The above configuration must be done only once; do not repeat it. From then on, only run spreport.sql as shown below.)
Generate the Statspack report
(This generates a report from the already collected snapshots.)
            As the perfstat user:
SQL> @?/rdbms/admin/spreport
            The spreport.sql script prompts you for the following (a query for listing the available snapshot IDs is sketched after this list):
                                    •           Beginning snapshot ID
                                    •           Ending snapshot ID
                                    •           Name of the report text file to be created
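
A quick way to see which snapshot IDs exist before answering the prompts (spreport.sql also lists the completed snapshots itself):

SQL> connect perfstat/log
SQL> select snap_id, snap_time from stats$snapshot order by snap_id;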


To purge Statspack data
Important Note
Note that purging may require the use of a large rollback segment, as all data relating to each snapshot being purged will be deleted. You can get around this by issuing the SET TRANSACTION USE ROLLBACK SEGMENT command before running sppurge.sql. The example below shows a sample execution of this script:

SQL> connect perfstat/perfstat
Connected.

SQL> set transaction use rollback segment big_rbs;
Transaction set.

SQL> @$ORACLE_HOME/rdbms/admin/sppurge
 
Manual:
Use one of these two scripts: run either sptrunc.sql (to truncate the tables of the perfstat user) or sppurge.sql (to delete rows from the tables of the perfstat user).
           
To truncate use this command:
SQL> conn perfstat/log
SQL> @?/rdbms/admin/sptrunc.sql    
 (or)  
To purge use this command:
SQL> @?/rdbms/admin/sppurge
 (Alternatively, rows can be deleted directly, for example "delete from stats$snapshot where snap_id < 10000;")

Automatic:
Add the following change to sppurge.sql ($ORACLE_HOME/rdbms/admin/sppurge.sql) to purge the snapshots older than 15 days:
column min_snap_id new_val LoSnapId
column max_snap_id new_val HiSnapId
select nvl(min(s.snap_id), 0) min_snap_id, nvl(max(s.snap_id), 0) max_snap_id
  from perfstat.stats$snapshot s
     , perfstat.stats$database_instance di
 where s.dbid             = :dbid
   and di.dbid            = :dbid
   and s.instance_number  = :inst_num
   and di.instance_number = :inst_num
   and di.startup_time    = s.startup_time
   and s.snap_time        < sysdate - 14;

Add this block inside sppurge.sql, or put it in a separate copy of the script (keeping the start and end of the original file intact) and schedule it via cron.
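
A sketch of how such a purge could be scheduled from cron; the script name, path and schedule below are assumptions for illustration only:

# hypothetical crontab entry: run a purge wrapper script every night at 02:00
0 2 * * * /home/oracle/scripts/statspack_purge.sh ORCL >> /tmp/statspack_purge.log 2>&1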


Manually removing STATSPACK snapshots
 
Fortunately, STATSPACK uses foreign-key referential integrity constraints with the ON CASCADE DELETE option. This means that all information for a given snapshot can be deleted by deleting the corresponding stats$snapshot record. For example, suppose that you wanted to delete all snapshots for 2002, and these were snapshots that have a snap_id of less than 10,000. The following DELETE would remove all of these snapshots, and all subordinate detail rows:
 

SQL > delete from stats$snapshot where snap_id < 10000;
28923563 rows deleted.
 
Of course, you can selectively delete rows from STATSPACK. For example, you might want to keep all of the system statistics and delete all of the SQL statements that were more than six months old. In this case, you could selectively remove the rows from stats$sql_summary:

SQL > delete from stats$sql_summary where snap_time < sysdate - 180;
2888363 rows deleted.
 
Now let's take a look at some handy shell scripts that will make your use of STATSPACK easier.
 
Removing ranges of snapshots

The STATSPACK utility provides a batch mode purging function. To run the sppurge.sql script in batch mode, you must assign SQL*Plus variables to identify the low snapshot and the high snapshot for purging.

#!/bin/ksh
# Batch purge of STATSPACK snapshots.
# Arguments: $1 = ORACLE_SID, $2 = low snap_id, $3 = high snap_id

# First, we must set the environment . . . .
ORACLE_SID=$1
export ORACLE_SID
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

# Confirm the database, then purge the requested snapshot range as perfstat
$ORACLE_HOME/bin/sqlplus system/manager<<!

select * from v\$database;
connect perfstat/perfstat
define losnapid=$2
define hisnapid=$3
@sppurge
exit
!
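
Usage: save the script (for example as sppurge_batch.ksh; the name is an assumption), make it executable, and pass the ORACLE_SID plus the low and high snapshot IDs to purge:

$ chmod +x sppurge_batch.ksh
$ ./sppurge_batch.ksh PROD 100 200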


If the dbms_job snapshot job fails, perform the following steps.

If the job fails to run and, after checking the alert log, we see an error like the one below:
--------------------------------------------------------------------------------------
Mon May 21 06:06:08 2012
Errors in file /XXXX/bdump/PROd_j000_3976.trc:
ORA-12012: error on auto execute of job 23
ORA-00001: unique constraint (PERFSTAT.STATS$SQL_SUMMARY_PK) violated
ORA-06512: at "PERFSTAT.STATSPACK", line 1361
ORA-06512: at "PERFSTAT.STATSPACK", line 2471
ORA-06512: at "PERFSTAT.STATSPACK", line 91
ORA-06512: at line 1

--------------------------------------------------------------------------------------
As the job is scheduled through the DBMS_JOB package, it will keep retrying until it has failed 16 times and is marked broken.
(Note: please check the job number carefully before performing the following operations; confirm the job number and only then proceed.)

1.     Check the status of the Job from dba_jobs.

select JOB,LOG_USER,PRIV_USER,SCHEMA_USER,LAST_DATE,LAST_SEC,NEXT_DATE,NEXT_SEC,TOTAL_TIME,BROKEN,WHAT,FAILURES from dba_jobs where JOB=41;

The output will give all the details about the job (the job number comes from the ORA-12012 error above). Please be sure it is the Statspack job only.

(Note: once you get the error, immediately mark the job broken as shown below. If you do not mark it broken, it will run and fail 16 times, and the error will appear in the alert log 16 times.)
2.     Making Job broken manually

Before marking the job broken, be sure the job number is correct.


sqlplus perfstat/<Passwd>

SQL> exec dbms_job.broken(41,TRUE);

PL/SQL procedure successfully completed.

Now that the job is broken, it will not execute again.
Check again the status

SQL> select JOB,LOG_USER,PRIV_USER,SCHEMA_USER,LAST_DATE,LAST_SEC,NEXT_DATE,NEXT_SEC,TOTAL_TIME,BROKEN,WHAT,FAILURES from dba_jobs where JOB=41;
(After the job is marked broken it will not execute at all until we run it once manually; after that it will schedule itself again automatically. But before running it again we need to wait for some time.)
3.     Now wait for roughly 2 to 3 hours (approximate; the exact time is not known) and then execute the job again. Until the offending information is cleared from the V$ views the job cannot run successfully; if you try earlier it will fail again.

Note: if you run the job and hit the issue again, the BROKEN status of the job changes back to "N", and you need to mark it broken again so that it does not retry 16 times with the same problem.

E.g.,
Execute the job again:

sqlplus perfstat/<Passwd>

SQL> exec DBMS_JOB.RUN(41);
PL/SQL procedure successfully completed.

Check the job status again. It should show that the job is not broken, that it executed when you ran the command, and that it will run again at its next scheduled time. Please be sure it is running fine.

4.     Again check that the job is running.

SQL> select * from dba_jobs where job=41;


5.     To remove the job

SELECT * FROM dba_jobs_running;

SELECT JOB FROM DBA_JOBS WHERE JOB=41;

BEGIN
  DBMS_JOB.REMOVE(41);
  COMMIT;
END;
/

SELECT JOB FROM DBA_JOBS WHERE JOB=41;



                       That's It 


                                                             There you go, good to go.................................................



Please subscribe for latest updates.

Sunday 1 November 2015

CLUSTER DE-CONFIGURING AND RE-CONFIGURING




It's Easy with AJEET,

In the section below I have provided the complete steps to de-configure and re-configure the clusterware on a 3-node RAC.
This was done after the hostnames and IP addresses of all 3 nodes had been changed.
When the hostnames and IP addresses change, the existing cluster stops working, so it is very important to get the cluster back up so that your database works as well as it did before.

So this can be done in two ways.

1. Delete and add node (which I am going to post in another section)
2. De-configuring and Re-configuring cluster.

I prefer doing it the second way. You may ask why?

The reason is very simple: think of a RAC environment with more nodes (maybe 5 to 10);
how much time would it take to delete each node and then add it back again?


Grid Infrastructure Cluster - Entire Cluster

De-configuring and re-configuring the entire cluster rebuilds the OCR and Voting Disks; user resources (database, instance, service, listener, etc.) will need to be added back to the cluster manually after the reconfigure finishes.
Why is deconfigure needed?
Deconfigure is needed when:
  • OCR is corrupted without any good backup
  • Or the GI stack will not come up on any node due to missing Oracle Clusterware related files in /etc or /var/opt/oracle (e.g. init.ohasd missing). If GI is able to come up on at least one node, refer to the next section "B. Grid Infrastructure Cluster - One or Partial Nodes".
  • Note that $GRID_HOME must be intact, as deconfigure will NOT fix $GRID_HOME corruption


In the case below we assume the IP addresses and hostnames of all the nodes have been changed.

PRECHECK BEFORE PERFORMING DE-CONFIGURE OF CLUSTER
1.       Before de-configuring a node, ensure it is not pinned, i.e.
$GI_HOME/bin/olsnodes -s -t
Node1 Active Unpinned
node2 Active Unpinned
node3 Active Unpinned
2.       If a node is pinned, unpin it first, i.e. as root user: 
/oracle/grid/product/11.2.0/grid/bin/crsctl unpin css -n <node_name>



3.      Before de-configuring, collect the following as the grid user if possible, to generate a list of user resources to be added back to the cluster after the reconfigure finishes (a simple capture script is sketched after this list):
$GRID_HOME/bin/crsctl stat res -t
$GRID_HOME/bin/crsctl stat res -p
$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/ocrcheck
$GRID_HOME/bin/oifcfg getif
$GRID_HOME/bin/srvctl config nodeapps -a
$GRID_HOME/bin/srvctl config scan
$GRID_HOME/bin/srvctl config asm -a
$GRID_HOME/bin/srvctl config listener -l <listener-name> -a
$DB_HOME/bin/srvctl config database -d <dbname> -a
$DB_HOME/bin/srvctl config service -d <dbname> -s <service-name> -v
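
A small wrapper that saves all of the above into one timestamped file per node can make re-adding the resources easier later; this is only a sketch, and the output location and the assumption that $GRID_HOME is already set are illustrative:

#!/bin/ksh
# Capture the current cluster configuration before de-configuring (run as grid user)
OUT=/tmp/cluster_config_`hostname`_`date +%Y%m%d%H%M`.txt
{
  $GRID_HOME/bin/crsctl stat res -t
  $GRID_HOME/bin/crsctl stat res -p
  $GRID_HOME/bin/crsctl query css votedisk
  $GRID_HOME/bin/ocrcheck
  $GRID_HOME/bin/oifcfg getif
  $GRID_HOME/bin/srvctl config nodeapps -a
  $GRID_HOME/bin/srvctl config scan
  $GRID_HOME/bin/srvctl config asm -a
} > $OUT 2>&1
echo "Configuration saved to $OUT"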

TO DECONFIG THE CLUSTERWARE
  • If OCR and Voting Disks are NOT on ASM, or if OCR and Voting Disks are on ASM but there is NO user data in the OCR/Voting Disk ASM diskgroup:
On all remote nodes, as root execute:
/oracle/grid/product/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose

Once the above command finishes on all remote nodes, on local node, as root execute:
/oracle/grid/product/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode

If there is user data in OCR/Voting Disk ASM diskgroup

# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg -lastnode

We did not have any user data in the OCR/Voting Disk diskgroup, so we used:
# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -lastnode


Once the de-configuration has completed, follow the steps below before re-configuring the cluster.


Clean up the profile.xml files
The profile.xml files still contain the old IP addresses, so to get the new IP addresses into profile.xml we have to clean up the old profile.xml files under:
$GRID_HOME/gpnp/node1/profiles/peer
If we do not clean up the profile.xml files, we may face the issue below.
It will occur while executing the root.sh script.
ERROR:
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
Start of resource "ora.cluster_interconnect.haip" failed
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node1'
CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error:
Start action for HAIP aborted. For details refer to "(:CLSN00107:)" in "/oracle/grid/product/11.2.0/grid/log/node1/agent/ohasd/orarootagent_root/orarootagent_root.log".
CRS-2674: Start of 'ora.cluster_interconnect.haip' on 'node1' failed
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'node1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-4000: Command Start failed, or completed with errors.
HAIP startup failure considered fatal, terminating at /oracle/grid/product/11.2.0/grid/crs/install/crsconfig_lib.pm line 1330.
/oracle/grid/product/11.2.0/grid/perl/bin/perl -I/oracle/grid/product/11.2.0/grid/perl/lib -I/oracle/grid/product/11.2.0/grid/crs/install /oracle/grid/product/11.2.0/grid/crs/install/rootcrs.pl execution failed
****************************************************************************
TO CLEAN THE PROFILE.XML AND CHECKPOINT FILES, USE THE FOLLOWING COMMANDS
a. The de-configure steps above can be skipped on node(s) where root.sh has not been executed this time.

    b. The de-configure steps should also have removed the checkpoint file. To verify:

          $ ls -l /oracle/grid/oracle_base/Clusterware/ckptGridHA_.xml

    If it is still there, please remove it manually with the "rm" command on all nodes

    c. If the GPNP profile is different between nodes/setups, clean it up on all nodes as the grid user:

          $ find /oracle/grid/product/11.2.0/grid/gpnp/* -type f -exec rm -rf {} \;          


Clean up the OCR_VOTE disks, or whichever disks contain the voting files.

If we do not clean the OCR_VOTE disks, we may face the error below.
The error occurs at the time of executing root.sh.

ERROR:
bash: /root/.bashrc: Permission denied

Disk Group OCR_VOTE mounted successfully.

Existing OCR configuration found, aborting the configuration. Rerun configuration setup after deinstall at /oracle/grid/product/11.2.0/grid/crs/install/crsconfig_lib.pm line 10302.
/oracle/grid/product/11.2.0/grid/perl/bin/perl -I/oracle/grid/product/11.2.0/grid/perl/lib -I/oracle/grid/product/11.2.0/grid/crs/install /oracle/grid/product/11.2.0/grid/crs/install/rootcrs.pl execution failed

To clear the ASM OCR_VOTE disks, follow the steps below.

NOTE:
Before performing the steps below, make sure you are using the correct disk names.
Clear the disk headers with dd:
dd if=/dev/zero of=/dev/xvdd1 bs=1048576 count=10
dd if=/dev/zero of=/dev/xvde1 bs=1048576 count=10
dd if=/dev/zero of=/dev/xvdf1 bs=1048576 count=10
Delete the OCR_VOTE DISKS
oracleasm deletedisk OCR_VOTE3
oracleasm deletedisk OCR_VOTE2
oracleasm deletedisk OCR_VOTE1

Create the OCR_VOTE DISKS
/etc/init.d/oracleasm createdisk OCR_VOTE1 /dev/xvdd1
/etc/init.d/oracleasm createdisk OCR_VOTE2 /dev/xvde1
/etc/init.d/oracleasm createdisk OCR_VOTE3 /dev/xvdf1
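
After recreating the disks, it is worth rescanning and listing them on every node before launching config.sh (standard oracleasm commands):

# as root, on all nodes
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks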


TO CONFIGURE CLUSTER

export DISPLAY=<hostname>:0.0
$GRID_HOME/crs/config/config.sh

Follow the instructions in the GUI.
Run root.sh first on NEW_NODE1; after successful execution on NEW_NODE1, execute it on NEW_NODE2 and then NEW_NODE3.


ISSUES FACED AT THE GUI
*****************
If you face errors at the GUI as below:





Just stop at this step, execute the command below, and then click RETRY:
$GRID_HOME/oui/bin/runInstaller -nowait -noconsole -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true "CLUSTER_NODES={node1,node2,node3}" ORACLE_HOME=GRID_HOME_path
Even after executing the above command, if the error still exists, click Ignore and then Next.
Once the configuration has completed, execute the commands below to update the oraInventory.
1. Remove the old, incorrect CRS home entry from inventory.xml:
$GRID_HOME/oui/bin/runInstaller -detachHome -local ORACLE_HOME=GRID_HOME_path
 2. Rerun the failed "attachhome" command:
$GRID_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=GRID_HOME_path ORACLE_HOME_NAME=Ora11g_gridinfrahome1 "CLUSTER_NODES={node1,node2,node3}" "INVENTORY_LOCATION=/oracle/app/oraInventory" LOCAL_NODE=node1

$GRID_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=GRID_HOME_path ORACLE_HOME_NAME=Ora11g_gridinfrahome1 "CLUSTER_NODES={node1,node2,node3}" "INVENTORY_LOCATION=/oracle/app/oraInventory" LOCAL_NODE=node2

$GRID_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=GRID_HOME_path ORACLE_HOME_NAME=Ora11g_gridinfrahome1 "CLUSTER_NODES={node1,node2,node3}" "INVENTORY_LOCATION=/oracle/app/oraInventory" LOCAL_NODE=node3

3. Mark the new home as the CRS home (CRS=true):
$GRID_HOME/oui/bin/runInstaller -local -updateNodeList ORACLE_HOME=GRID_HOME_path "CLUSTER_NODES={node1,node2,node3}" CRS="true"




If you face an issue at the GUI as below:


The cluvfy failure can be ignored, as it only verifies the state of your cluster.


Mount the diskgroups
. oraenv
+ASM1
sqlplus / as sysasm
SQL> desc v$asm_diskgroup
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NUMBER                                       NUMBER
 NAME                                               VARCHAR2(30)
 SECTOR_SIZE                                        NUMBER
 BLOCK_SIZE                                         NUMBER
 ALLOCATION_UNIT_SIZE                               NUMBER
 STATE                                              VARCHAR2(11)
 TYPE                                               VARCHAR2(6)
 TOTAL_MB                                           NUMBER
 FREE_MB                                            NUMBER
 HOT_USED_MB                                        NUMBER
 COLD_USED_MB                                       NUMBER
 REQUIRED_MIRROR_FREE_MB                            NUMBER
 USABLE_FILE_MB                                     NUMBER
 OFFLINE_DISKS                                      NUMBER
 COMPATIBILITY                                      VARCHAR2(60)
 DATABASE_COMPATIBILITY                             VARCHAR2(60)
 VOTING_FILES                                       VARCHAR2(1)

SQL> select NAME,STATE from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
OCR_VOTE                       MOUNTED
DG_ARCHIVE1                    DISMOUNTED
DG_DATA1                       DISMOUNTED
DG_REDO1                       DISMOUNTED

SQL> alter diskgroup DG_ARCHIVE1 mount;

Diskgroup altered.

SQL> select NAME,STATE from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
OCR_VOTE                       MOUNTED
DG_ARCHIVE1                    MOUNTED
DG_DATA1                       DISMOUNTED
DG_REDO1                       DISMOUNTED

SQL> alter diskgroup DG_DATA1 mount;

Diskgroup altered.

SQL> alter diskgroup DG_REDO1 mount;

Diskgroup altered.

SQL> select NAME,STATE from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
OCR_VOTE                       MOUNTED
DG_ARCHIVE1                    MOUNTED
DG_DATA1                       MOUNTED
DG_REDO1                       MOUNTED


Try to start up the database:

root@NEW_node1:$ORACLE_HOME/bin# ./srvctl start database -d accup11
PRCD-1120 : The resource for database accup11 could not be found.
PRCR-1001 : Resource ora.accup11.db does not exist

If you face an issue like the one above,

check whether the database resources have been added to the cluster:

crsctl stat res -t

If no database resources have been added, try to start up the database by connecting with sqlplus.

Start up the instance on each node:

. oraenv
NEW_NODE1: ACCUP111
NEW_NODE2: ACCUP112
NEW_NODE3: ACCUP113

Startup

If it starts up without any issue,
then just add the database resource to the cluster with the following command:
srvctl add database -d accup11 -o $ORACLE_HOME -p +DG_DATA1/accup11/spfileaccup11.ora -a DG_ARCHIVE1,DG_DATA1,DG_REDO1
                                                          ($ORACLE_HOME is the path of your Oracle home)
crsctl stat res -t
ora.dbname.db
      1        OFFLINE OFFLINE
If the instances are not added to the cluster, add them using the commands below.

1. srvctl add instance -d dbname -i instance1 -n node1
2. srvctl add instance -d dbname -i instance2 -n node2
3. srvctl add instance -d dbname -i instance3 -n node3

srvctl config database -d dbname

Check again

crsctl stat res -t
ora.dbname.db
      1        OFFLINE OFFLINE
      2        OFFLINE OFFLINE
      3        OFFLINE OFFLINE

Just shut down the database by connecting to all the nodes,
then start it up using the following command:

./srvctl start database -d dbname


oracle@new_node1:/oracle/grid/product/11.2.0/grid/bin$ ps -aef |grep smon
root     18023     1  2 04:31 ?        00:01:34 /oracle/grid/product/11.2.0/grid/bin/osysmond.bin
grid     18162     1  0 04:31 ?        00:00:00 asm_smon_+ASM1
oracle   26572     1  0 05:29 ?        00:00:00 ora_smon_dbname
oracle   26835 25608  0 05:29 pts/0    00:00:00 grep smon

Check again

oracle@new_node1:$GRID_HOME/bin$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DG_ARCHIVE1.dg
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
ora.DG_DATA1.dg
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
ora.DG_REDO1.dg
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
ora.LISTENER.lsnr
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
ora.OCR_VOTE.dg
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
ora.asm
               ONLINE  ONLINE       new_node1                Started
               ONLINE  ONLINE       new_node2                Started
               ONLINE  ONLINE       new_node3                Started
ora.gsd
               OFFLINE OFFLINE      new_node1
               OFFLINE OFFLINE      new_node2
               OFFLINE OFFLINE      new_node3
ora.net1.network
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
ora.ons
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
ora.registry.acfs
               ONLINE  ONLINE       new_node1
               ONLINE  ONLINE       new_node2
               ONLINE  ONLINE       new_node3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       new_node2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       new_node3
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       new_node1
ora.accup11.db
      1        ONLINE  ONLINE       new_node1                Open
      2        ONLINE  ONLINE       new_node2                Open
      3        ONLINE  ONLINE       new_node3                Open
ora.cvu
      1        ONLINE  ONLINE       new_node1
ora.new_node1.vip
      1        ONLINE  ONLINE       new_node1
ora.new_node2.vip
      1        ONLINE  ONLINE       new_node2
ora.new_node3.vip
      1        ONLINE  ONLINE       new_node3
ora.oc4j
      1        ONLINE  ONLINE       new_node1
ora.scan1.vip
      1        ONLINE  ONLINE       new_node2
ora.scan2.vip
      1        ONLINE  ONLINE       new_node3
ora.scan3.vip
      1        ONLINE  ONLINE       new_node1

oracle@new_node1:$GRID_HOME/bin$
To stop the database and cluster resources on RAC:
$ORACLE_HOME/bin/srvctl stop database -d DBNAME
$GRID_HOME/bin/crsctl stop cluster -all



That's It 


There you go, good to go.................................................










Please subscribe for latest updates.
