Node to be removed from the cluster: rac3

Backup OCR from rac1

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup


Remove the instance by running DBCA from rac1.

[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ dbca
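The DBCA wizard can also be driven in silent mode (11.2 syntax). As a hedged sketch: the instance name dell3 follows the dell1/dell2 naming pattern and is an assumption, as is the password placeholder, so the command here is only built and echoed for review rather than executed.

```shell
# Hypothetical silent-mode equivalent of the DBCA instance-deletion wizard
# (11.2 syntax). dell3 and CHANGE_ME are assumptions -- review before running.
dbca_delete_cmd() {
    # $1 = node, $2 = global database name, $3 = instance name
    printf '%s\n' "dbca -silent -deleteInstance -nodeList $1 -gdbName $2 -instanceName $3 -sysDBAUserName sys -sysDBAPassword CHANGE_ME"
}

dbca_delete_cmd rac3 dell dell3
```

Echoing the command first lets you confirm the node, database, and instance names before anything is actually deleted.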

(The DBCA screenshots for this step did not survive. In the wizard, choose Instance Management, then Delete Instance; select the dell database and the instance running on rac3, and complete the wizard.)

Verify that the instance has been removed.

[oracle@rac1 ~]$ srvctl status database -d dell
Code:
Instance dell1 is running on node rac1
Instance dell2 is running on node rac2

Remove the Oracle database software.



From rac3, run ./runInstaller to update the Oracle inventory.

[oracle@rac3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac3}" -local
Code:
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3950 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
# The "-local" flag is used in the above command so that when we
# deinstall the Oracle software, only the local installation is
# removed. Without this flag, the Oracle binaries would be removed
# from all the nodes.

After the instance is removed, /etc/oratab on rac3 should contain no database entries, only the ASM entry.

Sample oratab file

[root@rac3 ~]# cat /etc/oratab
Code:
#Backup file is  /u01/app/oracle/product/11.2.0/dbhome_1/srvm/admin/oratab.bak.rac3 line added by Agent
#
# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by the Database Configuration Assistant when creating
# a database.

# A colon, ':', is used as the field terminator.  A new line terminates
# the entry.  Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should, "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM3:/u01/app/11.2.0/grid:N            # line added by Agent
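A quick way to confirm that only the ASM entry remains is to filter out the comments and blank lines. This is a small sketch; the file path is taken as a parameter so the helper can be exercised against a copy.

```shell
# List the active (non-comment, non-blank) entries in an oratab file.
# On rac3, only the +ASM3 line should remain at this point.
oratab_entries() {
    # $1 = path to an oratab file
    grep -E '^[^#[:space:]]' "$1"
}

[ -f /etc/oratab ] && oratab_entries /etc/oratab || true
```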

Deinstall Oracle Software.


[root@rac3 ~]# su - oracle
[oracle@rac3 ~]$
[oracle@rac3 ~]$ cd $ORACLE_HOME/deinstall
[oracle@rac3 deinstall]$ ./deinstall -local
Code:
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################## CHECK OPERATION START ########################
Install check configuration START


Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1
Oracle Home type selected for de-install is: RACDB
Oracle Base selected for de-install is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: rac3

Install check configuration END


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check4706613160153670172.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check6208371534310470930.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check.log 

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check2937.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home exists are: 
(Please input nodes seperated by ",", eg: node1,node2,...)rac3
Since -local option has been specified, the Oracle home will be 
de-installed only on the local node, 'rac3', and the global configuration will be removed.
Oracle Home selected for de-install is: /u01/app/oracle/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is: 
/u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: 
'/u01/app/oraInventory/logs/deinstall_deconfig2012-12-02_05-13-43-PM.out'
Any error messages from this session will be written to: 
'/u01/app/oraInventory/logs/deinstall_deconfig2012-12-02_05-13-43-PM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean.log 

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean3983205340911706037.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean4816433689117593893.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean2937.log
Oracle Configuration Manager clean END
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


Oracle install clean START

Clean install operation removing temporary directory '/tmp/install' on node 'rac3'

Oracle install clean END


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' 
from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle install successfully cleaned up the temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

On the remaining nodes (rac1, rac2), update the Oracle inventory.

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1,rac2}"
Code:
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3907 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Verify that the inventory.xml file on both nodes contains dbhome
entries for rac1 and rac2 only.


[oracle@rac1 ContentsXML]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
Code:
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2009, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.1.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
      <NODE NAME="rac3"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rac1"/>
      <NODE NAME="rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
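Instead of eyeballing the XML, the node list for a given home can be pulled out with a short grep/sed sketch (not a full XML parser; the file path and home name are parameters, and the guard keeps it harmless when the inventory does not exist):

```shell
# Print the NODE NAME entries registered under a given HOME in
# inventory.xml, so you can confirm rac3 is gone from the dbhome.
home_nodes() {
    # $1 = path to inventory.xml, $2 = HOME NAME attribute to inspect
    sed -n "/<HOME NAME=\"$2\"/,/<\/HOME>/p" "$1" |
        sed -n 's/.*<NODE NAME="\([^"]*\)".*/\1/p'
}

INV=/u01/app/oraInventory/ContentsXML/inventory.xml
[ -f "$INV" ] && home_nodes "$INV" OraDb11g_home1 || true
```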

Remove Clusterware from rac3.

[root@rac1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@rac1 bin]# ./olsnodes -s -t
Code:
rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Active  Unpinned

The third node should show as Unpinned. If it does not, unpin it with the following command.

[root@rac1 bin]#./crsctl unpin css -n rac3


Deconfigure Oracle Clusterware by executing the following from rac3.

[root@rac3 ~]# cd /u01/app/11.2.0/grid/crs/install/
[root@rac3 install]# ./rootcrs.pl -deconfig -force
Code:
2012-12-02 17:35:41: Parsing the host name
2012-12-02 17:35:41: Checking for super user privileges
2012-12-02 17:35:41: User has super user privileges
Using configuration parameter file: ./crsconfig_params
VIP exists.:rac1
VIP exists.: /rac1-vip/192.168.1.251/255.255.255.0/eth0
VIP exists.:rac2
VIP exists.: /rac2-vip/192.168.1.252/255.255.255.0/eth0
VIP exists.:rac3
VIP exists.: /rac3-vip/192.168.1.250/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 17994, multicast IP address 234.58.121.78, listening port 2016
ACFS-9200: Supported
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac3'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac3'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac3'
CRS-2677: Stop of 'ora.CRS.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac3' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac3'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2673: Attempting to stop 'ora.asm' on 'rac3'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac3' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2673: Attempting to stop 'ora.diskmon' on 'rac3'
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

From rac1, delete the node from the cluster and verify the removal.


[root@rac1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@rac1 bin]# ./crsctl delete node -n rac3
Code:
CRS-4661: Node rac3 successfully deleted.
[root@rac1 bin]# ./olsnodes -t -s
Code:
rac1    Active  Unpinned
rac2    Active  Unpinned

Update the Oracle inventory from rac3.

[grid@rac3 ~]$ cd $GRID_HOME/oui/bin
[grid@rac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac3}" CRS=TRUE -local
Code:
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Deinstall the Grid Infrastructure software from rac3.

[grid@rac3 ~]$ cd $GRID_HOME/deinstall
[grid@rac3 deinstall]$ ./deinstall -local
Code:
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2012-12-02_05-51-29-PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################## CHECK OPERATION START ########################
Install check configuration START


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
The following nodes are part of this cluster: rac3

Install check configuration END

Traces log file: /tmp/deinstall2012-12-02_05-51-29-PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "rac3"[rac3-vip]
 > PRESS ENTER

The following information can be collected by running ifconfig -a on node "rac3"
Enter the IP netmask of Virtual IP "192.168.1.250" on node "rac3"[255.255.255.0]
 > PRESS ENTER

Enter the network interface name on which the virtual IP address "192.168.1.250" is active
 > PRESS ENTER

Enter an address or the name of the virtual IP[]
 > PRESS ENTER


Network Configuration check config START

Network de-configuration trace file location: 
/tmp/deinstall2012-12-02_05-51-29-PM/logs/netdc_check6970228565355526821.log

Specify all RAC listeners that are to be de-configured[LISTENER,LISTENER_SCAN2]:LISTENER

At least one listener from the discovered listener list[LISTENER,LISTENER_SCAN2] is missing 
in the specified listener list[LISTENER]. The Oracle home will be cleaned up, so all the 
listeners will not be available after deinstall. If you want to remove a specific listener, please use 
Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: 
/tmp/deinstall2012-12-02_05-51-29-PM/logs/asmcadc_check1139713786370590829.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated 
by ",", eg: node1,node2,...)rac3
Since -local option has been specified, the Oracle home will be de-installed only on the local node, 
'rac3', and the global configuration will be removed.
Oracle Home selected for de-install is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: 
'/tmp/deinstall2012-12-02_05-51-29-PM/logs/deinstall_deconfig2012-12-02_05-53-05-PM.out'
Any error messages from this session will be written to: 

'/tmp/deinstall2012-12-02_05-51-29-PM/logs/deinstall_deconfig2012-12-02_05-53-05-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: 
/tmp/deinstall2012-12-02_05-51-29-PM/logs/asmcadc_clean5733666251865826553.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: 
/tmp/deinstall2012-12-02_05-51-29-PM/logs/netdc_clean2216372759318186245.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "rac3": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->
Remove the directory: /tmp/deinstall2012-12-02_05-51-29-PM on node: 
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


Oracle install clean START

Clean install operation removing temporary directory '/tmp/install' on node 'rac3'

Oracle install clean END


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware was already stopped and de-configured on node "rac3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'rac3' at the end of the session.

Oracle install successfully cleaned up the temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

As instructed by the deinstall tool, run the following commands as root on rac3.


[root@rac3 ~]# rm -rf /etc/oraInst.loc
[root@rac3 ~]# rm -rf /opt/ORCLfmap
[root@rac3 ~]# rm -rf /u01/app/11.2.0/
[root@rac3 ~]# rm -rf /u01/app/oracle/



After the deinstall, confirm that Oracle Clusterware will not start at
boot by comparing the inittab files:

[root@rac3 ~]# diff /etc/inittab /etc/inittab.no_crs


Update the Oracle inventory on the remaining nodes by running the following from rac1.


[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd $GRID_HOME/oui/bin
[grid@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={rac1,rac2}" CRS=TRUE
Code:
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3908 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.



Run the post-node-removal checks from rac1.

[grid@rac1 bin]$ cluvfy stage -post nodedel -n rac3 -verbose

Performing post-checks for node removal 

Checking CRS integrity...
The Oracle clusterware is healthy on node "rac2"
The Oracle clusterware is healthy on node "rac1"

CRS integrity check passed




Result: 
Node removal check passed

Post-check for node removal was successful.

Remove Oracle ASMLib from rac3.

[root@rac3 ~]# /usr/sbin/oracleasm exit
Code:
Unmounting ASMlib driver filesystem: /dev/oracleasm
Unloading module "oracleasm": oracleasm
[root@rac3 ~]# rpm -qa | grep asm
Code:
oracleasm-support-2.1.7-1.el5
ibmasm-3.0-9
nasm-0.98.39-3.2.2
ibmasm-xinput-2.1-1.el5
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-164.el5-2.0.5-1.el5
[root@rac3 ~]# rpm -ev oracleasmlib-2.0.4-1.el5 oracleasm-2.6.18-164.el5-2.0.5-1.el5 oracleasm-support-2.1.7-1.el5
Code:
warning: /etc/sysconfig/oracleasm saved as /etc/sysconfig/oracleasm.rpmsave

[root@rac3 ~]# rm -f /etc/sysconfig/oracleasm.rpmsave
[root@rac3 ~]# rm -f /etc/sysconfig/oracleasm-_dev_oracleasm

[root@rac3 ~]# rm -f /etc/rc.d/rc2.d/S29oracleasm

[root@rac3 ~]# rm -f /etc/rc.d/rc0.d/K20oracleasm
[root@rac3 ~]# rm -f /etc/rc.d/rc5.d/S29oracleasm
[root@rac3 ~]# rm -f /etc/rc.d/rc4.d/S29oracleasm
[root@rac3 ~]# rm -f /etc/rc.d/rc1.d/K20oracleasm
[root@rac3 ~]# rm -f /etc/rc.d/rc3.d/S29oracleasm
[root@rac3 ~]# rm -f /etc/rc.d/rc6.d/K20oracleasm
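The per-runlevel rm commands above can be collapsed into one glob-driven loop. A sketch, with the rc.d base directory taken as a parameter so the loop can be exercised safely outside /etc:

```shell
# Remove all S/K oracleasm init links across every runlevel directory.
# Equivalent to the individual rm commands above when run against /etc/rc.d.
remove_oracleasm_rc() {
    RC_BASE="${1:-/etc/rc.d}"
    for f in "$RC_BASE"/rc?.d/[SK]??oracleasm; do
        # The glob stays literal when nothing matches, so test first.
        [ -e "$f" ] && rm -f "$f"
    done
    return 0
}
```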


Remove the oracle and grid users and their groups.

[root@rac3 ~]# userdel -r grid
[root@rac3 ~]# userdel -r oracle
[root@rac3 ~]# groupdel oinstall
[root@rac3 ~]# groupdel asmadmin
[root@rac3 ~]# groupdel asmdba
[root@rac3 ~]# groupdel asmoper
[root@rac3 ~]# groupdel dba
[root@rac3 ~]# groupdel oper
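As a final sanity check, you can confirm that none of the Oracle-related groups survived the groupdel commands. A small sketch; no output means the cleanup is complete:

```shell
# Report any group from the given list that still exists on the node.
groups_still_present() {
    for g in "$@"; do
        getent group "$g" >/dev/null 2>&1 && echo "still present: $g"
    done
    return 0
}

groups_still_present oinstall asmadmin asmdba asmoper dba oper
```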