
Uninstall 11.2.0.3.0 Grid Infrastructure and RAC Database

2016-01-29 10:25
OS:

Oracle Linux Server release 5.7

 

DB:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 -
64bit Production

 

1. Log in as the oracle user and start dbca

[root@rac ~]# su - oracle

[oracle@rac ~]$ dbca
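dbca is a graphical tool, so the oracle session needs a reachable X display; the address below is only an example of pointing DISPLAY at a workstation running an X server (ssh -X forwarding works just as well):

[oracle@rac ~]$ export DISPLAY=192.168.12.1:0.0   # hypothetical X server address; adjust to your workstation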

DBCA will delete the database on every node, one node at a time.
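If a GUI session is not convenient, DBCA can also remove the database in silent mode. The following is only a sketch: the database name racdb and the SYS password are placeholders, substitute your own values:

[oracle@rac ~]$ dbca -silent -deleteDatabase -sourceDB racdb -sysDBAUserName sys -sysDBAPassword <sys_password>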

2. Log in as the oracle user, change to the $ORACLE_HOME/deinstall directory, and run the deinstall script

[root@rac ~]# su - oracle

[oracle@rac ~]$ cd $ORACLE_HOME/deinstall

[oracle@rac deinstall]$ ./deinstall

 

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START
############

######################### CHECK OPERATION START
#########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location
/u01/app/oracle/11.2.0/db_1

Oracle Home type selected for deinstall is: Oracle Real Application
Cluster Database

Oracle Base selected for deinstall is: /u01/app/oracle

Checking for existence of central inventory location
/u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home
/u01/app/grid/11.2.0

The following nodes are part of this cluster: rac,rac1,rac2

Checking for sufficient temp space availability on node(s) :
'rac,rac1,rac2'

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location:
/u01/app/oraInventory/logs/netdc_check2013-08-14_02-22-03-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location:
/u01/app/oraInventory/logs/databasedc_check2013-08-14_02-22-09-AM.log

Use comma as separator when specifying list of values as
input

Specify the list of database names that are configured in this
Oracle home []:

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location:
/u01/app/oraInventory/logs/emcadc_check2013-08-14_02-31-07-AM.log

Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START

OCM check log file location :
/u01/app/oraInventory/logs//ocm_check1556.log

Oracle Configuration Manager check END

######################### CHECK OPERATION END
#########################

####################### CHECK OPERATION SUMMARY
#######################

Oracle Grid Infrastructure Home is: /u01/app/grid/11.2.0

The cluster node(s) on which the Oracle home deinstallation will be
performed are:rac,rac1,rac2

Oracle Home selected for deinstall is:
/u01/app/oracle/11.2.0/db_1

Inventory Location where the Oracle home registered is:
/u01/app/oraInventory

No Enterprise Manager configuration to be updated for any
database(s)

No Enterprise Manager ASM targets to update

No Enterprise Manager listener targets to migrate

Checking the config status for CCR

rac : Oracle Home exists with CCR directory, but CCR is not
configured

rac1 : Oracle Home exists with CCR directory, but CCR is not
configured

rac2 : Oracle Home exists with CCR directory, but CCR is not
configured

CCR check is finished

Do you want to continue (y - yes, n - no)?
: y

A log of this session will be written to:
'/u01/app/oraInventory/logs/deinstall_deconfig2013-08-14_02-21-45-AM.out'

Any error messages from this session will be written to:
'/u01/app/oraInventory/logs/deinstall_deconfig2013-08-14_02-21-45-AM.err'

######################## CLEAN OPERATION START
########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location:
/u01/app/oraInventory/logs/emcadc_clean2013-08-14_02-31-07-AM.log

Updating Enterprise Manager ASM targets (if any)

Updating Enterprise Manager listener targets (if any)

Enterprise Manager Configuration Assistant END

Database de-configuration trace file location:
/u01/app/oraInventory/logs/databasedc_clean2013-08-14_02-31-42-AM.log

Network Configuration clean config START

Network de-configuration trace file location:
/u01/app/oraInventory/logs/netdc_clean2013-08-14_02-31-42-AM.log

De-configuring Listener configuration file on all nodes...

Listener configuration file de-configured successfully.

De-configuring Naming Methods configuration file on all
nodes...

Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all
nodes...

Local Net Service Names configuration file de-configured
successfully.

De-configuring Directory Usage configuration file on all
nodes...

Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START

OCM clean log file location :
/u01/app/oraInventory/logs//ocm_clean1556.log

Oracle Configuration Manager clean END

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/11.2.0/db_1' from the
central inventory on the local node : Done

Delete directory '/u01/app/oracle/11.2.0/db_1' on the local node
: Done

The Oracle Base directory '/u01/app/oracle' will not be removed
on local node. The directory is in use by Oracle Home
'/u01/app/grid/11.2.0'.

Detach Oracle home '/u01/app/oracle/11.2.0/db_1' from the
central inventory on the remote nodes 'rac2,rac1' : Done

Delete directory '/u01/app/oracle/11.2.0/db_1' on the remote
nodes 'rac1,rac2' : Done

The Oracle Base directory '/u01/app/oracle' will not be removed
on node 'rac2'. The directory is in use by Oracle Home
'/u01/app/grid/11.2.0'.

The Oracle Base directory '/u01/app/oracle' will not be removed
on node 'rac1'. The directory is in use by Oracle Home
'/u01/app/grid/11.2.0'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory
'/tmp/deinstall2013-08-14_02-15-33AM' on node 'rac'

Clean install operation removing temporary directory
'/tmp/deinstall2013-08-14_02-15-33AM' on node 'rac1,rac2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END
#########################

####################### CLEAN OPERATION SUMMARY
#######################

Cleaning the config for CCR

As CCR is not configured, so skipping the cleaning of CCR
configuration

CCR clean is finished

Successfully detached Oracle home '/u01/app/oracle/11.2.0/db_1'
from the central inventory on the local node.

Successfully deleted directory '/u01/app/oracle/11.2.0/db_1' on the
local node.

Successfully detached Oracle home '/u01/app/oracle/11.2.0/db_1'
from the central inventory on the remote nodes 'rac2,rac1'.

Successfully deleted directory '/u01/app/oracle/11.2.0/db_1' on the
remote nodes 'rac1,rac2'.

Oracle Universal Installer cleanup was successful.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2' at the end
of the session.

Oracle deinstall tool successfully cleaned up temporary
directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END
#############
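Before moving on, it can be worth confirming that the database home really is gone from the central inventory and from disk. A quick check, using the inventory location reported above:

[oracle@rac ~]$ grep db_1 /u01/app/oraInventory/ContentsXML/inventory.xml
[oracle@rac ~]$ ls /u01/app/oracle/11.2.0/db_1

Both should return nothing (or "No such file or directory") on every node.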

 

3. Log in as root and run /u01/app/grid/11.2.0/crs/install/rootcrs.pl
-verbose -deconfig on the cluster nodes

Note: with 3 nodes, run this on the first 2 nodes only; do not run it on the last node yet (it is handled in step 4).

[root@rac ~]# cd /u01/app/grid/11.2.0/crs/install

[root@rac install]#  ./rootcrs.pl -verbose
-deconfig

 

Using configuration parameter file: ./crsconfig_params

Network exists: 1/192.168.12.0/255.255.255.0/eth0, type
static

VIP exists: /rac-vip/192.168.12.4/192.168.12.0/255.255.255.0/eth0,
hosting node rac

VIP exists: /rac1-vip/192.168.12.5/192.168.12.0/255.255.255.0/eth0,
hosting node rac1

VIP exists: /rac2-vip/192.168.12.9/192.168.12.0/255.255.255.0/eth0,
hosting node rac2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

PRCR-1065 : Failed to stop resource ora.rac.vip

CRS-2529: Unable to act on 'ora.rac.vip' because that would require
stopping or relocating 'ora.LISTENER.lsnr', but the force option
was not specified

PRCR-1014 : Failed to stop resource ora.net1.network

PRCR-1065 : Failed to stop resource ora.net1.network

CRS-2529: Unable to act on 'ora.net1.network' because that would
require stopping or relocating 'ora.scan1.vip', but the force
option was not specified

PRKO-2380 : VIP rac is still running on node: rac

CRS-2791: Starting shutdown of Oracle High Availability
Services-managed resources on 'rac'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac'

CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'rac'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac'

CRS-2673: Attempting to stop 'ora.CRM.dg' on 'rac'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac'

CRS-2673: Attempting to stop 'ora.FLUSH.dg' on 'rac'

CRS-2673: Attempting to stop 'ora.oc4j' on 'rac'

CRS-2673: Attempting to stop 'ora.cvu' on 'rac'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on
'rac'

CRS-2677: Stop of 'ora.cvu' on 'rac' succeeded

CRS-2672: Attempting to start 'ora.cvu' on 'rac1'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac' succeeded

CRS-2673: Attempting to stop 'ora.rac.vip' on 'rac'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac'
succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac'

CRS-2676: Start of 'ora.cvu' on 'rac1' succeeded

CRS-2677: Stop of 'ora.rac.vip' on 'rac' succeeded

CRS-2672: Attempting to start 'ora.rac.vip' on 'rac1'

CRS-2677: Stop of 'ora.scan1.vip' on 'rac' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'

CRS-2676: Start of 'ora.rac.vip' on 'rac1' succeeded

CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on
'rac2'

CRS-2677: Stop of 'ora.FLUSH.dg' on 'rac' succeeded

CRS-2677: Stop of 'ora.DATA.dg' on 'rac' succeeded

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
succeeded

CRS-2677: Stop of 'ora.oc4j' on 'rac' succeeded

CRS-2672: Attempting to start 'ora.oc4j' on 'rac1'

CRS-2677: Stop of 'ora.CRM.dg' on 'rac' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac'

CRS-2677: Stop of 'ora.asm' on 'rac' succeeded

CRS-2676: Start of 'ora.oc4j' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'rac'

CRS-2677: Stop of 'ora.net1.network' on 'rac' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'rac' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac' succeeded

CRS-2673: Attempting to stop 'ora.crf' on 'rac'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac'

CRS-2673: Attempting to stop 'ora.asm' on 'rac'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac'

CRS-2677: Stop of 'ora.mdnsd' on 'rac' succeeded

CRS-2677: Stop of 'ora.crf' on 'rac' succeeded

CRS-2677: Stop of 'ora.evmd' on 'rac' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'rac'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac'
succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac'

CRS-2677: Stop of 'ora.cssd' on 'rac' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac'

CRS-2677: Stop of 'ora.gipcd' on 'rac' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac'

CRS-2677: Stop of 'ora.gpnpd' on 'rac' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'rac' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle clusterware stack on this node
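Before repeating this on the next node, you can check that the stack really is down on the node just deconfigured, for example:

[root@rac install]# /u01/app/grid/11.2.0/bin/crsctl check crs
[root@rac install]# ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep

crsctl should report that it cannot contact Cluster Ready Services / Oracle High Availability Services, and the ps listing should be empty.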

 

4. On the last node, run "/u01/app/grid/11.2.0/crs/install/rootcrs.pl
-verbose -deconfig -force -lastnode" as root; this command wipes the OCR and the voting disks
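Because this step wipes the OCR and the voting disks, it can be useful to note where they currently live before running the command. Two optional checks, run as root on the still-running last node (paths assume the grid home used throughout this article):

[root@rac2 ~]# /u01/app/grid/11.2.0/bin/ocrcheck
[root@rac2 ~]# /u01/app/grid/11.2.0/bin/crsctl query css votedisk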

[root@rac2 install]# ./rootcrs.pl -verbose -deconfig -force
-lastnode

 

Using configuration parameter file: ./crsconfig_params

CRS resources for listeners are still configured

Network exists: 1/192.168.12.0/255.255.255.0/eth0, type
static

VIP exists: /rac-vip/192.168.12.4/192.168.12.0/255.255.255.0/eth0,
hosting node rac

VIP exists: /rac1-vip/192.168.12.5/192.168.12.0/255.255.255.0/eth0,
hosting node rac1

VIP exists: /rac2-vip/192.168.12.9/192.168.12.0/255.255.255.0/eth0,
hosting node rac2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'rac2'

CRS-2673: Attempting to stop 'ora.CRM.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac2'

CRS-2673: Attempting to stop 'ora.FLUSH.dg' on 'rac2'

CRS-2677: Stop of 'ora.DATA.dg' on 'rac2' succeeded

CRS-2677: Stop of 'ora.FLUSH.dg' on 'rac2' succeeded

CRS-2677: Stop of 'ora.CRM.dg' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'

CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'rac2' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'

CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'rac2'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2'
succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'

CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded

CRS-4402: The CSS daemon was started in exclusive mode but found an
active CSS daemon on node rac1, number 2, and is terminating

Unable to communicate with the Cluster Synchronization Services
daemon.

CRS-4000: Command Delete failed, or completed with errors.

crsctl delete for vds in CRM ... failed

CRS-2791: Starting shutdown of Oracle High Availability
Services-managed resources on 'rac2'

CRS-2673: Attempting to stop 'ora.crf' on 'rac2'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'

CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'

CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded

CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'

CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'rac2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle clusterware stack on this node

 

5. On any node, run the deinstall script as the Grid Infrastructure owner (the grid user)
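The script will prompt, node by node, for each VIP address, its netmask, and the interface it sits on; in the session below the defaults offered in brackets were simply accepted. If you need to look the values up first, the tool itself suggests ifconfig:

[grid@rac ~]$ /sbin/ifconfig -a

Run this on each node (the *-vip entries in /etc/hosts are another common place to check) before answering the prompts.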

[root@rac ~]# su - grid

[grid@rac ~]$ cd /u01/app/grid/11.2.0/deinstall

[grid@rac deinstall]$ ./deinstall

 

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2013-08-14_03-42-20AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START
############

######################### CHECK OPERATION START
#########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location
/u01/app/grid/11.2.0

Oracle Home type selected for deinstall is: Oracle Grid
Infrastructure for a Cluster

Oracle Base selected for deinstall is: /u01/app/oracle

Checking for existence of central inventory location
/u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: rac,rac1,rac2

Checking for sufficient temp space availability on node(s) :
'rac,rac1,rac2'

## [END] Install check configuration ##

Traces log file:
/tmp/deinstall2013-08-14_03-42-20AM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node
"rac"[rac-vip]

 >

The following information can be collected by running
"/sbin/ifconfig -a" on node "rac"

Enter the IP netmask of Virtual IP "192.168.12.4" on node
"rac"[255.255.255.0]

 >

Enter the network interface name on which the virtual IP address
"192.168.12.4" is active

 >

Enter an address or the name of the virtual IP used on node
"rac1"[rac1-vip]

 >

The following information can be collected by running
"/sbin/ifconfig -a" on node "rac1"

Enter the IP netmask of Virtual IP "192.168.12.5" on node
"rac1"[255.255.255.0]

 >

Enter the IP netmask of Virtual IP "192.168.12.5" on node
"rac1"[255.255.255.0]

 >

Enter the network interface name on which the virtual IP address
"192.168.12.5" is active[rac-vip]

 >

Enter an address or the name of the virtual IP used on node
"rac2"[rac2-vip]

 >

The following information can be collected by running
"/sbin/ifconfig -a" on node "rac2"

Enter the IP netmask of Virtual IP "192.168.12.9" on node
"rac2"[255.255.255.0]

 >

Enter the network interface name on which the virtual IP address
"192.168.12.9" is active[rac-vip]

 >

Enter an address or the name of the virtual IP[]

 >

Network Configuration check config START

Network de-configuration trace file location:
/tmp/deinstall2013-08-14_03-42-20AM/logs/netdc_check2013-08-14_04-31-25-AM.log

Specify all RAC listeners (do not include SCAN listener) that
are to be de-configured [LISTENER,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location:
/tmp/deinstall2013-08-14_03-42-20AM/logs/asmcadc_check2013-08-14_04-31-31-AM.log

ASM configuration was not detected in this Oracle home. Was ASM
configured in this Oracle home (y|n)
: y

Is OCR/Voting Disk placed in ASM y|n
: y

Enter the OCR/Voting Disk diskgroup name []:

Specify the ASM Diagnostic Destination [ ]:

Specify the diskstring []:

Specify the diskgroups that are managed by this ASM instance
[]:

######################### CHECK OPERATION END
#########################

####################### CHECK OPERATION SUMMARY
#######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home deinstallation will be
performed are:rac,rac1,rac2

Oracle Home selected for deinstall is: /u01/app/grid/11.2.0

Inventory Location where the Oracle home registered is:
/u01/app/oraInventory

Following RAC listener(s) will be de-configured:
LISTENER,LISTENER_SCAN1

ASM instance will be de-configured from this Oracle home

Do you want to continue (y - yes, n - no)?
: y

A log of this session will be written to:
'/tmp/deinstall2013-08-14_03-42-20AM/logs/deinstall_deconfig2013-08-14_03-43-56-AM.out'

Any error messages from this session will be written to:
'/tmp/deinstall2013-08-14_03-42-20AM/logs/deinstall_deconfig2013-08-14_03-43-56-AM.err'

######################## CLEAN OPERATION START
########################

ASM de-configuration trace file location:
/tmp/deinstall2013-08-14_03-42-20AM/logs/asmcadc_clean2013-08-14_04-32-20-AM.log

ASM Clean Configuration START

ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location:
/tmp/deinstall2013-08-14_03-42-20AM/logs/netdc_clean2013-08-14_04-32-26-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

De-configuring listener: LISTENER

    Stopping listener: LISTENER

    Listener stopped successfully.

Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1

    Stopping listener: LISTENER_SCAN1

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

De-configuring Naming Methods configuration file on all
nodes...

Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all
nodes...

Local Net Service Names configuration file de-configured
successfully.

De-configuring Directory Usage configuration file on all
nodes...

Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...

Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all
the remote nodes. Execute the command on  the
local node after the execution completes on all the remote
nodes.

Run the following command as the root user or the administrator
on node "rac1".

/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl
-I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib
-I/tmp/deinstall2013-08-14_03-42-20AM/crs/install
/tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl
-force  -deconfig -paramfile
"/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator
on node "rac2".

/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl
-I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib
-I/tmp/deinstall2013-08-14_03-42-20AM/crs/install
/tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl
-force  -deconfig -paramfile
"/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator
on node "rac".

/tmp/deinstall2013-08-14_03-42-20AM/perl/bin/perl
-I/tmp/deinstall2013-08-14_03-42-20AM/perl/lib
-I/tmp/deinstall2013-08-14_03-42-20AM/crs/install
/tmp/deinstall2013-08-14_03-42-20AM/crs/install/rootcrs.pl
-force  -deconfig -paramfile
"/tmp/deinstall2013-08-14_03-42-20AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
-lastnode

Press Enter after you finish running the above commands

<----------------------------------------

Run the commands shown above as root on the indicated nodes, then press Enter so that deinstall continues.

## [START] Oracle install clean ##

Clean install operation removing temporary directory
'/tmp/deinstall2013-08-14_03-42-20AM' on node 'rac'

Clean install operation removing temporary directory
'/tmp/deinstall2013-08-14_03-42-20AM' on node 'rac1,rac2'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END
#########################

####################### CLEAN OPERATION
SUMMARY #######################

ASM instance was de-configured successfully from the Oracle
home

Following RAC listener(s) were de-configured successfully:
LISTENER,LISTENER_SCAN1

Oracle Clusterware is stopped and successfully de-configured on
node "rac1"

Oracle Clusterware is stopped and successfully de-configured on
node "rac"

Oracle Clusterware is stopped and successfully de-configured on
node "rac2"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/u01/app/grid/11.2.0' from the
central inventory on the local node.

Successfully deleted directory '/u01/app/grid/11.2.0' on the local
node.

Successfully deleted directory '/u01/app/oraInventory' on the local
node.

Successfully detached Oracle home '/u01/app/grid/11.2.0' from the
central inventory on the remote nodes 'rac2,rac1'.

Successfully deleted directory '/u01/app/grid/11.2.0' on the remote
nodes 'rac1,rac2'.

Successfully deleted directory '/u01/app/oraInventory' on the
remote nodes 'rac2'.

Successfully deleted directory '/u01/app/oraInventory' on the
remote nodes 'rac1'.

Failed to delete directory '/u01/app/oracle' on the remote nodes
'rac1'.

Failed to delete directory '/u01/app/oracle' on the remote nodes
'rac2'.

Oracle Universal Installer cleanup completed with
errors.

Run 'rm -rf /etc/oraInst.loc' as root on
node(s) 'rac,rac2,rac1' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on
node(s) 'rac,rac1,rac2' at the end of the session.

Oracle deinstall tool successfully cleaned up temporary
directories.

#######################################################################

############# ORACLE DEINSTALL & DECONFIG
TOOL END #############
After deinstall finishes, it reminds you to run 'rm -rf
/etc/oraInst.loc' and 'rm -rf /opt/ORCLfmap' as root on the indicated nodes.
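If password-less root SSH between the nodes is available, both clean-up commands can be issued in one loop from any node; this is only a convenience sketch, otherwise run them locally on each node as root. The log above also shows that /u01/app/oracle could not be removed automatically on rac1 and rac2, so it is added here for a fully clean tree:

[root@rac ~]# for h in rac rac1 rac2; do ssh $h 'rm -rf /etc/oraInst.loc /opt/ORCLfmap /u01/app/oracle'; done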

 

 