
Oracle 11g RAC 修改IP

2015-03-31 11:57
In a RAC environment, changing IP addresses requires a cluster outage, so plan the IP layout carefully before installing and configuring RAC. Host names, by contrast, cannot be changed at all: renaming a host means reinstalling the CRS.

Environment:
  OS:          RedHat EL 5.5
  Clusterware: Grid Infrastructure 11g
  Database:    Oracle 11.2.0.1

Procedure:
1. Disable and stop the related resources

1) Disable and stop the VIP resources
[root@node1 ~]# srvctl disable vip -h
Disable the VIPs from Oracle Clusterware management.
Usage: srvctl disable vip -i <vip_name> [-v]
   -i <vip_name>            VIP name
   -h                       Print usage
   -v                       Verbose output

[root@node1 ~]# srvctl disable vip -i "node1-vip"
[root@node1 ~]# srvctl disable vip -i "node2-vip"

[root@node1 ~]# srvctl stop vip -h
Stop the specified VIP or VIPs on a node.
Usage: srvctl stop vip { -n <node_name>  | -i <vip_name> } [-f] [-r] [-v]
   -n <node_name>           Node name
   -i <vip_name>            VIP name
   -r                       Relocate VIP
   -f                       Force stop
   -h                       Print usage
   -v                       Verbose output
[root@node1 ~]# srvctl stop vip -n node1
[root@node1 ~]# srvctl stop vip -n node2


2) Disable and stop the listeners
[root@node1 ~]# srvctl disable listener
[root@node1 ~]# srvctl stop listener

3) Disable and stop the SCAN VIP and SCAN listener
[root@node1 ~]# srvctl disable scan_listener
[root@node1 ~]# srvctl stop scan_listener
[root@node1 ~]# srvctl disable scan
[root@node1 ~]# srvctl stop scan
[root@node1 ~]#
4) Stop CRS on all nodes (run as root on each node)
[root@node1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.DG2.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.DG1.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.RCY1.dg' on 'node1'
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.RCY1.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DG2.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DG1.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.eons' on 'node1'
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2677: Stop of 'ora.eons' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

2. Change the IP addresses at the OS level (all nodes)

[root@node1 ~]# cat /etc/hosts     (original addresses)

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1                localhost
192.168.8.11  node1
192.168.8.13  node1-vip
10.10.10.11  node1-priv
192.168.8.12  node2
192.168.8.14  node2-vip
10.10.10.12   node2-priv
192.168.8.15   rac_scan

[root@node1 ~]# vi /etc/hosts     (addresses after the change)
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1                localhost
192.168.8.111  node1
192.168.8.113  node1-vip
10.10.10.11  node1-priv
192.168.8.112  node2
192.168.8.114  node2-vip
10.10.10.12   node2-priv
192.168.8.115   rac_scan
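The renumbering above follows one pattern: each public 192.168.8.1x address becomes 192.168.8.11x, while the 10.10.10.x private addresses are untouched. As a hypothetical helper (not part of the original transcript), that mapping can be applied with a single sed rule; the sketch below demonstrates it on a scratch file, and the same command could be run against /etc/hosts on every node after taking a backup.

```shell
#!/bin/sh
# Build a scratch copy of the relevant /etc/hosts entries.
cat > hosts.demo <<'EOF'
192.168.8.11  node1
192.168.8.13  node1-vip
10.10.10.11   node1-priv
192.168.8.15  rac_scan
EOF
# Map 192.168.8.11..15 to 192.168.8.111..115. The GNU sed \> word-boundary
# anchor ends the match at the address, so 192.168.8.11 cannot rematch
# inside an already-rewritten 192.168.8.111.
sed -i 's/^192\.168\.8\.1\([1-5]\)\>/192.168.8.11\1/' hosts.demo
cat hosts.demo
```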
[root@node1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
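On RHEL 5 the interface address lives in the ifcfg file. A sketch of what node1's ifcfg-eth0 might look like after the change (the values are assumed from the plan above; keep your own DEVICE/HWADDR/GATEWAY lines), followed by `service network restart` on each node to apply it:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 on node1 (illustrative values)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.8.111
NETMASK=255.255.255.0
```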

3. Restart CRS
[root@node1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@node1 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

The resources disabled earlier remain offline:
[root@node1 ~]# crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.DG1.dg     ora....up.type OFFLINE   OFFLINE              
ora.DG2.dg     ora....up.type ONLINE    ONLINE    node2      
ora....ER.lsnr ora....er.type OFFLINE   OFFLINE              
ora....N1.lsnr ora....er.type OFFLINE   OFFLINE              
ora....VOTE.dg ora....up.type ONLINE    ONLINE    node1      
ora.RCY1.dg    ora....up.type ONLINE    ONLINE    node2      
ora.asm        ora.asm.type   ONLINE    ONLINE    node1      
ora.eons       ora.eons.type  ONLINE    ONLINE    node1      
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              
ora....network ora....rk.type ONLINE    ONLINE    node1      
ora....SM1.asm application    ONLINE    ONLINE    node1      
ora....E1.lsnr application    OFFLINE   OFFLINE              
ora.node1.gsd  application    OFFLINE   OFFLINE              
ora.node1.ons  application    ONLINE    ONLINE    node1      
ora.node1.vip  ora....t1.type OFFLINE   OFFLINE              
ora....SM2.asm application    ONLINE    ONLINE    node2      
ora....E2.lsnr application    OFFLINE   OFFLINE              
ora.node2.gsd  application    OFFLINE   OFFLINE              
ora.node2.ons  application    ONLINE    ONLINE    node2      
ora.node2.vip  ora....t1.type OFFLINE   OFFLINE              
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE              
ora.ons        ora.ons.type   ONLINE    ONLINE    node1      
ora.prod.db    ora....se.type OFFLINE   OFFLINE              
ora....taf.svc ora....ce.type OFFLINE   OFFLINE              
ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1      
ora.scan1.vip  ora....ip.type OFFLINE   OFFLINE              
rac_web        application    ONLINE    ONLINE    node1      
web_vip        application    ONLINE    ONLINE    node1      

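When the `crs_stat -t` listing is long, it helps to filter it down to the resources whose Target is OFFLINE, which at this point should be exactly the ones disabled in step 1. This is a hypothetical helper, not from the original article; it is demonstrated here on a captured sample of the output rather than a live cluster.

```shell
#!/bin/sh
# Captured sample of `crs_stat -t` output (abbreviated).
cat > crs_stat.out <<'EOF'
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DG1.dg     ora....up.type OFFLINE   OFFLINE
ora.asm        ora.asm.type   ONLINE    ONLINE    node1
ora.node1.vip  ora....t1.type OFFLINE   OFFLINE
EOF
# Skip the two header lines; print names where column 3 (Target) is OFFLINE.
awk 'NR > 2 && $3 == "OFFLINE" { print $1 }' crs_stat.out
```

On a live node the same awk filter can be fed directly from `crs_stat -t`.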
4. Change the cluster network configuration with oifcfg
[root@node1 ~]# oifcfg
Name:
       oifcfg - Oracle Interface Configuration Tool.
Usage:  oifcfg iflist [-p [-n]]
       oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
       oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
       oifcfg delif [{-node <nodename> | -global} [<if_name>[/<subnet>]]]
       oifcfg [-help]
       <nodename> - name of the host, as known to a communications network
       <if_name>  - name by which the interface is configured in the system
       <subnet>   - subnet address of the interface
       <if_type>  - type of the interface { cluster_interconnect | public }
[root@node1 ~]# oifcfg iflist
eth0  192.168.8.0
eth1  10.10.10.0
[root@node1 ~]# oifcfg iflist -p
eth0  192.168.8.0  PRIVATE
eth1  10.10.10.0  PRIVATE
[root@node1 ~]# oifcfg iflist -p -n
eth0  192.168.8.0  PRIVATE  255.255.255.0
eth1  10.10.10.0  PRIVATE  255.255.255.0
[root@node1 ~]# oifcfg getif -global
eth0  192.168.8.0  global  public
eth1  10.10.10.0  global  cluster_interconnect

Delete the existing interface definitions:
[root@node1 ~]# oifcfg delif
[root@node1 ~]# oifcfg getif
[root@node1 ~]# oifcfg getif -global


Reconfigure the interfaces:
[root@node1 ~]# oifcfg setif -global eth0/192.168.8.0:public
[root@node1 ~]# oifcfg getif
eth0  192.168.8.0  global  public
[root@node1 ~]# oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@node1 ~]# oifcfg getif -global
eth0  192.168.8.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
[root@node1 ~]# oifcfg iflist
eth0  192.168.8.0
eth1  10.10.10.0
[root@node1 ~]# oifcfg iflist -p -n
eth0  192.168.8.0  PRIVATE  255.255.255.0
eth1  10.10.10.0  PRIVATE  255.255.255.0
Check and reconfigure the VIPs:
[root@node1 ~]# srvctl config vip -n node1
VIP exists.:node1
VIP exists.: /node1-vip/192.168.8.113/255.255.255.0/eth0
[root@node1 ~]# srvctl config vip -n node2
VIP exists.:node2
VIP exists.: /node2-vip/192.168.8.114/255.255.255.0/eth0
[root@node1 ~]# srvctl modify nodeapps -h
Modifies the configuration for a node application.
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-m <multicast-ip-address>] [-p <multicast-portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
   -A <addr_str>            Node level Virtual IP address
   -S <subnet>/<netmask>/[if1[|if2...]]  NET address spec for network
   -m <multicast-ip-address>   The multicast IP address for eONS
   -p <multicast-portnum>    The port number for eONS
   -e <eons-listen-port>     Local listen port for eONS daemon (Default port number is 2016)
   -l <ons-local-port>      ONS listening port for local client connections
   -r <ons-remote-port>     ONS listening port for connections from remote hosts
   -t <host>[:<port>][,<host>[:<port>]...]  List of remote host/port pairs for ONS daemons outside this cluster
   -h                       Print usage
   -v                       Verbose output

[root@node1 ~]# srvctl modify nodeapps -A 192.168.8.113/255.255.255.0/eth0 -n node1
[root@node1 ~]# srvctl modify nodeapps -A 192.168.8.114/255.255.255.0/eth0 -n node2
[root@node1 ~]# srvctl config vip -n node1
VIP exists.:node1
VIP exists.: /node1-vip/192.168.8.113/255.255.255.0/eth0
[root@node1 ~]# srvctl config vip -n node2
VIP exists.:node2
VIP exists.: /node2-vip/192.168.8.114/255.255.255.0/eth0
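As a sanity check, the address srvctl now reports for each VIP should match the entry in /etc/hosts. The cross-check below is an assumption, not part of the original article, and is demonstrated on captured copies of both outputs; on a live node the same parsing works on `srvctl config vip -n node1` and /etc/hosts directly.

```shell
#!/bin/sh
# Captured srvctl output and the matching hosts entry.
cat > vip.cfg <<'EOF'
VIP exists.: /node1-vip/192.168.8.113/255.255.255.0/eth0
EOF
cat > hosts.sample <<'EOF'
192.168.8.113  node1-vip
EOF
# Field 3 of the /-separated srvctl line is the VIP address.
vip_cfg=$(awk -F/ '/VIP exists/ { print $3 }' vip.cfg)
vip_hosts=$(awk '$2 == "node1-vip" { print $1 }' hosts.sample)
if [ "$vip_cfg" = "$vip_hosts" ]; then
    echo "node1-vip consistent: $vip_cfg"
else
    echo "MISMATCH: srvctl says $vip_cfg, /etc/hosts says $vip_hosts" >&2
fi
```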
Check and reconfigure the SCAN:
[root@node1 ~]# srvctl config scan
SCAN name: rac_scan, Network: 1/192.168.8.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /192.168.8.15/192.168.8.15
[root@node1 ~]# srvctl modify scan -h
Modifies the SCAN name.
Usage: srvctl modify scan -n <scan_name>
   -n <scan_name>           Domain name qualified SCAN name
   -h                       Print usage
[root@node1 ~]# srvctl modify scan -n rac_scan
[root@node1 ~]# srvctl config scan
SCAN name: rac_scan, Network: 1/192.168.8.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac_scan/192.168.8.115
[root@node1 ~]#
If the private (interconnect) IP also needs to change, reconfigure eth1 with oifcfg in the same way.

5. Configuration complete: re-enable and start the services and resources
Enable and start the VIPs, listeners, SCAN and SCAN listener, and the database:
[root@node1 ~]# srvctl enable listener      
[root@node1 ~]# srvctl enable vip -i "node1-vip"
[root@node1 ~]# srvctl enable vip -i "node2-vip"
[root@node1 ~]# srvctl enable scan_listener
[root@node1 ~]# srvctl enable scan
[root@node1 ~]# srvctl enable database -d prod
[root@node1 ~]# srvctl start listener      
[root@node1 ~]# srvctl start vip -n node1
[root@node1 ~]# srvctl start vip -n node2
[root@node1 ~]# srvctl start scan_listener
[root@node1 ~]# srvctl start scan
[root@node1 ~]# srvctl start database -d prod


This article originally appeared on the "天涯客的blog" blog: http://tiany.blog.51cto.com/513694/1378083