
Silent-Mode RAC Deployment and Installation

2017-05-09 12:00
I. Prepare Two Linux Machines (two NICs each, several shared disks)

1. Linux version

$ uname -a

Linux rac1 2.6.32-642.15.1.el6.x86_64 #1 SMP Fri Feb 24 14:31:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

2. Obtain the Oracle installation packages

Official download page: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

p13390677_112040_Linux-x86-64_1of7.zip

p13390677_112040_Linux-x86-64_2of7.zip

p13390677_112040_Linux-x86-64_3of7.zip

II. Pre-installation Preparation

1. Install the required packages with yum (for reference only; run on every node), as follows:

$ cd /etc/yum.repos.d/

$ mv CentOS-Base.repo CentOS-Base.repo.bak

Switch the repository to the Aliyun mirror:

$ wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
Clear the yum cache:

$ yum clean all

Cache the repository metadata locally to speed up package searches and installs:

$ yum makecache

$ yum install -y binutils* compat-libstdc* compat-libcap* compat-libcap1 elfutils-libelf* \
    gcc* glibc* glibc-kernheaders* ksh* libaio* libaio-devel* libgcc* libstdc* libcap* \
    libXp* make* sysstat* unixODBC* unixODBC-devel* pdksh* rsh* cvuqdisk*

Note: the GI and Oracle installer prerequisite checks will later report any packages still missing; download and install them at that point. A package may conflict with one already installed — remove the old one or force the install.

$ rpm -vih pdksh-5.2.14-37.el5_8.1.x86_64.rpm 

warning: pdksh-5.2.14-37.el5_8.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID e8562897: NOKEY

error: Failed dependencies:

pdksh conflicts with ksh-20120801-33.el6.x86_64

$ rpm -e ksh-20120801-33.el6.x86_64

$ rpm -vih pdksh-5.2.14-37.el5_8.1.x86_64.rpm 

warning: pdksh-5.2.14-37.el5_8.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID e8562897: NOKEY

Preparing... ########################################### [100%]

1:pdksh ########################################### [100%]

The cvuqdisk-1.0.9-1.rpm package installs successfully only after the oinstall group has been created:


[root@localhost ~]# rpm -vih cvuqdisk-1.0.9-1.rpm

Preparing... ########################################### [100%]

Using default group oinstall to install package

Group oinstall not found in /etc/group

oinstall : Group doesn't exist.

Please define environment variable CVUQDISK_GRP with the correct group to be used

error: %pre(cvuqdisk-1.0.9-1.x86_64) scriptlet failed, exit status 1

error: install: %pre scriptlet failed (2), skipping cvuqdisk-1.0.9-1

If the following problem occurs:

Another app is currently holding the yum lock; waiting for it to exit...

The other application is: PackageKit

Memory : 57 M RSS (365 MB VSZ)

Started: Fri Mar 10 03:42:13 2017 - 30:41 ago

Kill the process holding the lock, or remove the stale lock file:

$ rm -f /var/run/yum.pid

Double-check the required packages in one pass:

$ yum install binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel gcc gcc-c++ libaio-devel libaio libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel pdksh ksh compat-libcap1

2. Disable the firewall (run on every node)

Stop it for the current session (lost on reboot):

$ service iptables stop

Disable it permanently:

$ chkconfig iptables off

3. Disable SELinux (run on every node)

$ sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config

Apply immediately:

$ setenforce 0

4. Set the hostname

rac1-> sed -i "s/HOSTNAME=localhost.localdomain/HOSTNAME=rac1/" /etc/sysconfig/network

Apply immediately:

rac1-> hostname rac1

On the second node:

rac2-> sed -i "s/HOSTNAME=localhost.localdomain/HOSTNAME=rac2/" /etc/sysconfig/network

Apply immediately:

rac2-> hostname rac2

5. Modify /etc/pam.d/login (run on every node)

This makes pam_limits enforce the /etc/security/limits.conf settings at login and prevents local logins from bouncing back to the login prompt:

$ echo "session required pam_limits.so" >> /etc/pam.d/login

6. Edit /etc/sysctl.conf (run on every node). This machine's defaults were:

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

$ cat >> /etc/sysctl.conf <<EOF

fs.file-max = 6815744

fs.aio-max-nr = 1048576

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 4194304

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

EOF

Notes on the parameters:

shmmax: the maximum size of a single shared memory segment, in bytes. The default (32 MB) is far too low for Oracle; set it larger than SGA_MAX_SIZE.

shmall: the total amount of shared memory the system may use, in pages. With the 4 KB Linux page size, the default is 2097152.

Note: shmall caps the total shared memory allowed system-wide; shmmax caps a single segment. Both can be set to about 90% of physical memory. For example, with 16 GB of RAM: 16 × 1024 × 1024 × 1024 × 90% = 15461882265 bytes, so shmall = 15461882265 / 4 KB page size (obtainable via getconf PAGESIZE) = 3774873 pages.
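The sizing arithmetic above can be sketched directly in shell; the 16 GB figure is the worked example from the text, and in practice the page size would come from getconf PAGESIZE:

```shell
# Sketch of the shmmax/shmall sizing rule (16 GB example from the text).
mem_bytes=$((16 * 1024 * 1024 * 1024))   # total physical memory in bytes
shmmax=$((mem_bytes * 90 / 100))         # ~90% of RAM for shmmax
page_size=4096                           # normally: $(getconf PAGESIZE)
shmall=$((shmmax / page_size))           # the same budget expressed in pages
echo "kernel.shmmax = $shmmax"           # kernel.shmmax = 15461882265
echo "kernel.shmall = $shmall"           # kernel.shmall = 3774873
```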

shmmin: minimum shared memory segment size

shmmni: system-wide maximum number of shared memory segments; the default is 4096

shmseg: maximum number of shared memory segments per process

fs.aio-max-nr: maximum number of concurrent asynchronous I/O requests

fs.file-max: maximum number of file handles the system can have open at once; this directly bounds the maximum number of concurrent connections

sem: semaphore settings

net.ipv4.ip_local_port_range: the local port range used for TCP and UDP connections, i.e. the IPv4 ports available to applications

net.core.rmem_max: maximum socket receive buffer size

net.core.wmem_max: maximum socket send buffer size

Apply the settings:

$ sysctl -p

7. Create users (run on every node)

7.1 Create the groups and users

$ /usr/sbin/groupadd -g 501 oinstall

$ /usr/sbin/groupadd -g 502 dba

$ /usr/sbin/groupadd -g 507 oper

$ /usr/sbin/groupadd -g 504 asmadmin

$ /usr/sbin/groupadd -g 505 asmoper

$ /usr/sbin/groupadd -g 506 asmdba

$ /usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle

$ /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

$ cat /etc/group

7.2 Set the passwords:

$ passwd grid

$ passwd oracle

$ id oracle

$ id grid

7.3 Create the directories and set ownership

$ mkdir -p /u01/app/grid

$ mkdir -p /u01/app/11.2.0/grid

$ mkdir -p /u01/app/oraInventory

$ chown -R grid:oinstall /u01/app/oraInventory

$ chown -R grid:oinstall /u01/app

$ mkdir -p /u01/app/oracle

$ chown -R oracle:oinstall /u01/app/oracle

$ chmod -R 775 /u01

8. Append the following to the end of /etc/profile (run on every node)

$ vi /etc/profile

if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then

if [ "$SHELL" = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi

export GRID_HOME=/u01/app/11.2.0/grid

export PATH=$GRID_HOME/bin:$PATH

9. Add the following to /etc/security/limits.conf (run on every node)

$ vi /etc/security/limits.conf

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 2048

grid hard nofile 65536

grid soft stack 10240

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 2048

oracle hard nofile 65536

oracle soft stack 10240

Note: soft is the limit in effect for the current session; hard is the maximum value that may be set.

grid soft nproc 2047 # soft limit: user grid may have at most 2047 processes

grid hard nproc 16384 # hard limit: user grid may have at most 16384 processes

nofile -- maximum number of open files

stack -- maximum stack size

nproc -- maximum number of processes
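Since the grid and oracle entries above are identical, a small helper can generate them. emit_limits is a name invented for this sketch (not part of the original procedure), and the values mirror the table above:

```shell
# emit_limits USER — print the five limits.conf lines from the table above for one user.
# (Helper name is illustrative; it is not part of the original article.)
emit_limits() {
    for line in "soft nproc 2047" "hard nproc 16384" \
                "soft nofile 2048" "hard nofile 65536" "soft stack 10240"; do
        echo "$1 $line"
    done
}

# As root, entries for both users could be appended with:
# { emit_limits grid; emit_limits oracle; } >> /etc/security/limits.conf
emit_limits grid
```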

10. Create the disks (VMware)

10.1 Create the disk files

Open a Windows command prompt:

$ cmd

cd to the directory containing vmware-vdiskmanager.exe and run:

E:\> cd vm

E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr1.vmdk

Creating disk 'E:\CentOS1\sharedisk\ocr1.vmdk'

Create: 100% done.

Virtual disk creation successful.

E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr2.vmdk

E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr3.vmdk

E:\vm> vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\ocr4.vmdk

E:\vm> vmware-vdiskmanager.exe -c -s 5000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\data1.vmdk

E:\vm> vmware-vdiskmanager.exe -c -s 5000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\data2.vmdk

E:\vm> vmware-vdiskmanager.exe -c -s 5000Mb -a lsilogic -t 2 E:\CentOS1\sharedisk\fra.vmdk

10.2 Add the disks to the virtual machines

Edit VM 1's settings -> Add -> Hard Disk -> Next -> Use an existing virtual disk -> select the disk files created above -> Finish -> Keep existing format

Copy node 1's disk configuration to the other node's .vmx file: vi E:\vm\CentOS2.vmx

scsi1:0.present = "TRUE"

scsi1:0.fileName = "E:\CentOS1\sharedisk\ocr1.vmdk"

scsi1:1.present = "TRUE"

scsi1:1.fileName = "E:\CentOS1\sharedisk\ocr2.vmdk"

scsi1:2.present = "TRUE"

scsi1:2.fileName = "E:\CentOS1\sharedisk\ocr3.vmdk"

scsi1:3.present = "TRUE"

scsi1:3.fileName = "E:\CentOS1\sharedisk\ocr4.vmdk"

scsi1:4.present = "TRUE"

scsi1:4.fileName = "E:\CentOS1\sharedisk\data1.vmdk"

scsi1:5.present = "TRUE"

scsi1:5.fileName = "E:\CentOS1\sharedisk\fra.vmdk"

scsi1:6.present = "TRUE"

scsi1:6.fileName = "E:\CentOS1\sharedisk\data2.vmdk"

disk.locking="false"

diskLib.dataCacheMaxSize = "0"

diskLib.dataCacheMaxReadAheadSize = "0"

diskLib.DataCacheMinReadAheadSize = "0"

diskLib.dataCachePageSize = "4096"

diskLib.maxUnsyncedWrites = "0"

Edit VM 2's settings -> Add -> Hard Disk -> Next -> Use an existing virtual disk -> select the disk files created above -> Finish -> Keep existing format

10.3 Partition the disks; they do not need to be mounted (run on every node)

$ fdisk -l

$ fdisk /dev/sdb

$ fdisk /dev/sdc

$ fdisk /dev/sdd

$ fdisk /dev/sde

$ fdisk /dev/sdf

$ fdisk /dev/sdg

$ fdisk /dev/sdh

Key sequence: m -> n -> p -> 1 -> Enter -> Enter -> p -> w

-------- this can be replaced with the loop below

for i in b c d e f g h
do
fdisk /dev/sd$i <<EOF
n
p
1


p
w
EOF
done

(The two empty lines in the here-document accept the default first and last sectors.)

11. Configure and verify the network (add one extra network adapter to each server, in host-only mode, for the private interconnect heartbeat; run on all nodes)

11.1 Update the hosts configuration (run on every node)

cat >> /etc/hosts <<EOF

192.168.91.140 rac1.burton.com rac1

192.168.214.130 rac1-priv.burton.com rac1-priv

192.168.91.152 rac1-vip.burton.com rac1-vip

192.168.91.142 rac2.burton.com rac2

192.168.214.131 rac2-priv.burton.com rac2-priv

192.168.91.153 rac2-vip.burton.com rac2-vip

192.168.91.154 scan-ip.burton.com scan-ip

EOF
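A quick sanity check that every cluster name appears in the hosts file can catch typos early. check_hosts and its file argument are invented for this sketch so it can be pointed at a test copy:

```shell
# check_hosts FILE NAME... — verify each NAME appears in a hosts-format FILE.
# (Illustrative helper; not part of the original procedure.)
check_hosts() {
    file="$1"; shift
    missing=0
    for name in "$@"; do
        # match the name as a whole word on a non-comment line
        grep -v '^#' "$file" | grep -qw "$name" || { echo "missing: $name"; missing=1; }
    done
    return $missing
}

# Typical use on each node:
# check_hosts /etc/hosts rac1 rac1-priv rac1-vip rac2 rac2-priv rac2-vip scan-ip
```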

11.2 Edit the NIC configuration (on the other node, only IPADDR and HWADDR need to change)

$ ifconfig

$ vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"

BOOTPROTO="static"

BROADCAST=192.168.91.255

IPADDR=192.168.91.140

NETMASK=255.255.255.0

ONBOOT="yes"

TYPE="Ethernet"

GATEWAY=192.168.91.2

HWADDR="00:0C:29:2B:0C:0B"

USERCTL="no"

IPV6INIT="no"

PEERDNS="yes"

DNS1=114.114.114.114

DNS2=8.8.8.8

$ vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE="eth1"

BOOTPROTO="static"


BROADCAST=192.168.214.255

IPADDR=192.168.214.130

NETMASK=255.255.255.0

ONBOOT="yes"

TYPE="Ethernet"

HWADDR="00:0C:29:2B:0C:15"

USERCTL="no"

IPV6INIT="no"

PEERDNS="yes"


DNS1=114.114.114.114


DNS2=8.8.8.8

GATEWAY: take from the output of route -n

HWADDR: take from cat /etc/udev/rules.d/70-persistent-net.rules

11.3 Restart networking (run on every node)

$ service network restart

12. Set up SSH user equivalence (for both oracle and grid)

12.1 Generate the SSH keys (run on every node)

$ su - grid

$ mkdir -p ~/.ssh

$ cd ~/.ssh

$ ssh-keygen -t rsa

$ ssh-keygen -t dsa

The following needs to run on the first node only.

Public keys are collected in the authorized_keys file; write the local keys first:

$ cat ~/.ssh/id_rsa.pub>>~/.ssh/authorized_keys

$ cat ~/.ssh/id_dsa.pub>>~/.ssh/authorized_keys

$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

The authenticity of host 'rac2 (192.168.91.142)' can't be established.

RSA key fingerprint is bb:41:f5:d0:5f:84:8a:0d:90:a5:29:cb:0c:b1:12:cf.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac2,192.168.91.142' (RSA) to the list of known hosts.

$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

grid@rac2's password: 

$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

grid@rac2's password: 

authorized_keys 100% 1980 1.9KB/s 00:00

12.2 Verify from both nodes

$ ssh rac1 date

$ ssh rac2 date

$ ssh rac1-priv date

$ ssh rac2-priv date

Do the same as the oracle user:

su - oracle

.....

13. Install oracleasm (run on every node)

Upload and install the corresponding rpm packages.

A dependency error you may hit: error: Failed dependencies:

oracleasm >= 1.0.4 is needed by oracleasmlib-2.0.4-1.el6.x86_64

$ yum install kmod-oracleasm* -y

$ yum install oracleasm* -y

$ yum localinstall -y kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm

$ yum localinstall -y oracleasmlib-2.0.4-1.el6.x86_64.rpm

$ yum localinstall -y oracleasm-support-2.1.8-1.el6.x86_64.rpm

$ rpm -vih oracleasm*

Verify the installation:

$ rpm -qa|grep oracleasm

kmod-oracleasm-2.0.8-13.el6_8.x86_64

oracleasmlib-2.0.4-1.el6.x86_64

oracleasm-support-2.1.8-1.el6.x86_64

14. Configure the shared disks

14.1 Run on both nodes (as root)

Rebooting first is recommended; otherwise ASM may fail because some kernel parameters have not yet taken effect.

$ oracleasm configure -i    (or: /etc/init.d/oracleasm configure)

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n): y

Scan for Oracle ASM disks on boot (y/n) [y]: y

14.2 Start the driver (if it reports Failed, reboot the system and try again)

$ /etc/init.d/oracleasm restart

Check the log:

$ cat /var/log/oracleasm

14.3 Label the shared disks

rac1-> service oracleasm createdisk FRA /dev/sdb1

rac1-> service oracleasm createdisk DATA1 /dev/sdc1

rac1-> service oracleasm createdisk DATA2 /dev/sdd1

rac1-> service oracleasm createdisk OCR_VOTE1 /dev/sde1

rac1-> service oracleasm createdisk OCR_VOTE2 /dev/sdf1

rac1-> service oracleasm createdisk OCR_VOTE3 /dev/sdg1

rac1-> service oracleasm createdisk OCR_VOTE4 /dev/sdh1
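The seven createdisk calls follow one pattern, so they can be generated from a name:device table. The mapping is the one above; this sketch prints the generated lines for review rather than executing them:

```shell
# Build the createdisk commands from a NAME:DEVICE table (mapping from above).
cmds=""
for pair in FRA:/dev/sdb1 DATA1:/dev/sdc1 DATA2:/dev/sdd1 \
            OCR_VOTE1:/dev/sde1 OCR_VOTE2:/dev/sdf1 \
            OCR_VOTE3:/dev/sdg1 OCR_VOTE4:/dev/sdh1; do
    name=${pair%%:*}    # text before the ':'
    dev=${pair#*:}      # text after the ':'
    cmds="${cmds}service oracleasm createdisk $name $dev
"
done
printf '%s' "$cmds"     # review, then run each line as root on rac1
```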

On the other node:

rac2-> oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "FRA"

Instantiating disk "DATA1"

Instantiating disk "DATA2"

Instantiating disk "OCR_VOTE1"

Instantiating disk "OCR_VOTE2"

Instantiating disk "OCR_VOTE3"

Instantiating disk "OCR_VOTE4"

Help:

$ oracleasm -h

List the disks:

$ /etc/init.d/oracleasm listdisks

DATA1

DATA2

FRA

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3

OCR_VOTE4

15. Disable the NTP server (run on every node)

Stop the NTP time-sync service; proper time synchronization is a new prerequisite check in 11gR2.

$ /sbin/service ntpd stop

$ chkconfig ntpd off

$ mv /etc/ntp.conf /etc/ntp.conf.bak

$ rm /var/run/ntpd.pid

Note: this install uses the CTSS clock synchronization that ships with Oracle RAC. For an implementation record of converting from CTSS to NTP synchronization, see:
http://blog.itpub.net/25116248/viewspace-1152989/
Set both servers' time zones and synchronize their clocks beforehand.

16. Adjust /dev/shm (optional, depending on your situation)

Handling insufficient /dev/shm shared memory.

Fix:

For example, to enlarge /dev/shm, change this line in /etc/fstab from the default:

tmpfs /dev/shm tmpfs defaults 0 0

to:

tmpfs /dev/shm tmpfs defaults,size=2048m 0 0

The size parameter also accepts G as a unit: size=2G.

Remount /dev/shm to apply:

$ mount -o remount /dev/shm

or:

$ umount /dev/shm

$ mount -a

Check the change immediately with "df -h".

17. Set environment variables

17.1 oracle user environment variables

As the oracle user (on node 2, set ORACLE_SID=burton2):

$ su - oracle

$ vi ~/.bash_profile

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

export ORACLE_SID=burton1

export PATH=/usr/sbin:$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8    # or AMERICAN_AMERICA.ZHS16GBK, matching the database character set

umask 022

$ source ~/.bash_profile

17.2 grid user environment variables

As the grid user (on node 2, set ORACLE_SID=+ASM2):

$ su - grid

$ vi ~/.bash_profile

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"

export THREADS_FLAG=native

export PATH=$ORACLE_HOME/bin:$PATH

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8


umask 022

$ source ~/.bash_profile

18. Authorize the oracle and grid users for the X display (where possible, log in directly as the executing user rather than switching from root)

As root:

$ xhost +SI:localuser:grid

$ xhost +SI:localuser:oracle

III. Install the GI Software and Create the Disk Groups (run on the primary node)

1. Validate the pre-installation environment with the CVU tool:

rac1-> cd /home/grid/grid

rac1-> ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -r 11gR2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac1"

Destination Node Reachable?

rac2 yes

rac1 yes

Result: Node reachability check passed from node "rac1"

.............

Checking user equivalence...

Check: Time zone consistency

Result: Time zone consistency check passed

Pre-check for cluster services setup was successful.

Note: you may hit issue 1: File "/etc/resolv.conf" is not consistent across nodes — see the final section for details.

2. Edit the response file.

2.1 In the Grid Infrastructure installation media, find the response directory and edit grid_install.rsp, adjusting it according to its inline prompts.

Below are the values changed in grid_install.rsp for this install (for reference only; do not copy verbatim):

rac1-> cd /home/grid/grid/response

rac1-> cp ./grid_install.rsp ./gi_install.rsp

Full settings (# marks entries left at their defaults):

rac1-> vi ./gi_install.rsp


oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0

ORACLE_HOSTNAME=rac1

INVENTORY_LOCATION=/u01/app/oraInventory


SELECTED_LANGUAGES=en

oracle.install.option=CRS_CONFIG

ORACLE_BASE=/u01/app/grid

ORACLE_HOME=/u01/app/11.2.0/grid

oracle.install.asm.OSDBA=asmdba

oracle.install.asm.OSOPER=asmoper

oracle.install.asm.OSASM=asmadmin

oracle.install.crs.config.gpnp.scanName=scan-ip.burton.com

oracle.install.crs.config.gpnp.scanPort=1521

oracle.install.crs.config.clusterName=rac-cluster


oracle.install.crs.config.gpnp.configureGNS=false

oracle.install.crs.config.clusterNodes=rac1:rac1-vip,rac2:rac2-vip

oracle.install.crs.config.networkInterfaceList=eth0:192.168.91.0:1,eth1:192.168.214.0:2

oracle.install.crs.config.storageOption=ASM_STORAGE


oracle.install.crs.config.useIPMI=false

oracle.install.asm.SYSASMPassword=oracle4U

oracle.install.asm.diskGroup.name=OCRVOTE


oracle.install.asm.diskGroup.redundancy=NORMAL


oracle.install.asm.diskGroup.AUSize=1

oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3

oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*

oracle.install.asm.monitorPassword=oracle4U


oracle.install.asm.upgradeASM=false

oracle.installer.autoupdates.option=SKIP_UPDATES

Note:
http://blog.chinaunix.net/xmlrpc.php?r=blog/article&id=4681351&uid=29655480 (comparing against the GUI installer makes the options clearer)
http://blog.itpub.net/22128702/viewspace-730567/ (detailed explanation of the settings)

(1) Example: oracle.install.crs.config.networkInterfaceList=bnxe2:192.168.129.192:1,bnxe3:10.31.130.200:2

Suffix 1 marks the public network, 2 the private network, and 3 a NIC not used by the cluster; bnxe2 and bnxe3 are NIC device names, visible with ifconfig -a.

(2) Example: if the public/private NIC device names differ between the two machines, rename the network device (e.g. eth0 -> eth2)

a. Edit /etc/udev/rules.d/70-persistent-net.rules (eth0 -> eth2):

$ cat /etc/udev/rules.d/70-persistent-net.rules

# PCI device 0x15ad:0x07b0 (vmxnet3)

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:2c:b4:4a", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

# PCI device 0x8086:0x100f (e1000)

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:2c:b4:40", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

b. Rename the corresponding ifcfg file and update the DEVICE entry inside it (eth0 -> eth2):

$ mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth2

$ vi /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2

c. Reboot:

$ reboot

2.2 Review the effective settings

rac1-> cat /home/grid/grid/response/gi_install.rsp | grep -v ^# | grep -v ^$

3. Silently install the Grid Infrastructure software.

As the grid user, change to the Grid Infrastructure media directory and start the silent install:

rac1-> cd /home/grid/grid/

rac1-> /home/grid/grid/runInstaller -responseFile /home/grid/grid/response/gi_install.rsp -silent -ignorePrereq -showProgress

.................................

As a root user, execute the following script(s):

1. /u01/app/oraInventory/orainstRoot.sh

2. /u01/app/11.2.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:

[rac1, rac2]

Execute /u01/app/11.2.0/grid/root.sh on the following nodes:

[rac1, rac2]

.................................................. 100% Done.

Execute Root Scripts successful.

As install user, execute the following script to complete the configuration.

1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=
Note:

1. This script must be run on the same host from where installer was run.

2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).


Successfully Setup Software.

Note: run ./runInstaller -help for detailed parameter information.

3.1 Run orainstRoot.sh and root.sh on each node, in node order:

$ /u01/app/oraInventory/orainstRoot.sh

$ /u01/app/11.2.0/grid/root.sh

Run tail -f /u01/app/11.2.0/grid/install/root_rac1_2017-03-06_14-52-36.log to follow the detailed output.

3.2 Finally, on the installing node, run the command that completes the configuration:

rac1-> su - grid

rac1-> cd /u01/app/11.2.0/grid/cfgtoollogs


Create a properties file holding the ASM passwords; the configuration assistants need it:

rac1-> vi cfgrsp.properties

oracle.assistants.asm|S_ASMPASSWORD=oracle4U

oracle.assistants.asm|S_ASMMONITORPASSWORD=oracle4U

rac1-> chmod 700 cfgrsp.properties

rac1-> ./configToolAllCommands RESPONSE_FILE=./cfgrsp.properties

Setting the invPtrLoc to /u01/app/11.2.0/grid/oraInst.loc

perform - mode is starting for action: configure

perform - mode finished for action: configure

......

You can see the log file: /u01/app/11.2.0/grid/cfgtoollogs/oui/configActions2017-03-06_03-47-00-PM.log

4. Add disk groups (on RAC1). The OCR disk group was already created during the GI install; this step adds the other two. To demonstrate a different syntax, one disk is added separately with -addDisk. (Run as grid.)

rac1-> asmca -silent -createDiskGroup -sysAsmPassword oracle4U -diskString '/dev/oracleasm/disks/*' -diskGroupName FRA -diskList '/dev/oracleasm/disks/FRA' -redundancy EXTERNAL -compatible.asm 11.2 -compatible.rdbms 11.2

Disk Group FRA created successfully.

rac1-> asmca -silent -createDiskGroup -sysAsmPassword oracle4U -diskString '/dev/oracleasm/disks/*' -diskGroupName DATA -diskList '/dev/oracleasm/disks/DATA1' -redundancy EXTERNAL -compatible.asm 11.2 -compatible.rdbms 11.2

Disk Group DATA created successfully.

rac1-> asmca -silent -addDisk -sysAsmPassword oracle4U -diskGroupName DATA -diskList '/dev/oracleasm/disks/DATA2'

Disks added successfully to disk group DATA

Alternatively, create the disk groups in SQL (as grid):

rac1-> sqlplus / as sysasm

SQL> create diskgroup FRA external redundancy disk '/dev/oracleasm/disks/FRA';

SQL> create diskgroup DATA external redundancy disk '/dev/oracleasm/disks/DATA1';

SQL> alter diskgroup DATA add disk 'ORCL:DATA2';

5. Check the Clusterware environment:

rac1-> crsctl check cluster -all

rac1:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

rac2:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

IV. Install the Database Software

1. Pre-install validation (RAC1)

Validate with the cluvfy tool before the actual install. Since only the GI software is installed at this point, use the copy under the grid home for now.

Redirecting the result to a file makes it easier to read.

rac1-> cd /u01/app/11.2.0/grid/bin

rac1-> ./cluvfy stage -pre dbinst -n rac1,rac2 -verbose

You may hit ERROR: PRVF-4657 / PRVF-4664; see the final section for details.

2. Configure the response file (the install media is under the home directory)

rac1-> su - oracle

rac1-> cd /home/oracle/database/response

rac1-> cp db_install.rsp db_in.rsp

rac1-> chmod 755 db_in.rsp

rac1-> sed -i "s/oracle.install.option=/oracle.install.option=INSTALL_DB_SWONLY/" ./db_in.rsp

rac1-> sed -i "s/ORACLE_HOSTNAME=/ORACLE_HOSTNAME=rac1/" ./db_in.rsp

rac1-> sed -i "s/UNIX_GROUP_NAME=/UNIX_GROUP_NAME=oinstall/" ./db_in.rsp

rac1-> sed -i "s|INVENTORY_LOCATION=|INVENTORY_LOCATION=/u01/app/oraInventory|" ./db_in.rsp

rac1-> sed -i "s|ORACLE_HOME=|ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1|" ./db_in.rsp

rac1-> sed -i "s|ORACLE_BASE=|ORACLE_BASE=/u01/app/oracle|" ./db_in.rsp

rac1-> sed -i "s/oracle.install.db.InstallEdition=/oracle.install.db.InstallEdition=EE/" ./db_in.rsp

rac1-> sed -i "s/oracle.install.db.DBA_GROUP=/oracle.install.db.DBA_GROUP=dba/" ./db_in.rsp

rac1-> sed -i "s/oracle.install.db.OPER_GROUP=/oracle.install.db.OPER_GROUP=oper/" ./db_in.rsp

rac1-> sed -i "s/oracle.install.db.CLUSTER_NODES=/oracle.install.db.CLUSTER_NODES=rac1,rac2/" ./db_in.rsp

rac1-> sed -i "s/oracle.install.db.isRACOneInstall=/oracle.install.db.isRACOneInstall=false/" ./db_in.rsp

rac1-> sed -i "s/SECURITY_UPDATES_VIA_MYORACLESUPPORT=/SECURITY_UPDATES_VIA_MYORACLESUPPORT=false/" ./db_in.rsp

rac1-> sed -i "s/DECLINE_SECURITY_UPDATES=/DECLINE_SECURITY_UPDATES=true/" ./db_in.rsp

rac1-> sed -i "s/oracle.installer.autoupdates.option=/oracle.installer.autoupdates.option=SKIP_UPDATES/" ./db_in.rsp
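Each sed above silently does nothing if its pattern is absent, so an empty value can slip through. A small check that every edited key has a value helps; check_rsp is a name invented for this sketch:

```shell
# check_rsp FILE KEY... — report any KEY in FILE whose value is still empty.
# (Illustrative helper; not part of the original procedure.)
check_rsp() {
    file="$1"; shift
    bad=0
    for key in "$@"; do
        # require at least one character after "KEY="
        grep -q "^${key}=..*" "$file" || { echo "unset: $key"; bad=1; }
    done
    return $bad
}

# Typical use:
# check_rsp ./db_in.rsp oracle.install.option ORACLE_HOSTNAME UNIX_GROUP_NAME \
#     ORACLE_HOME ORACLE_BASE oracle.install.db.CLUSTER_NODES
```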

3. Review the configuration

rac1-> cat /home/oracle/database/response/db_in.rsp | sed -n '/^[^#]/p'

4. Install the database software

Note: the -responseFile argument must be an absolute path.

rac1-> cd /home/oracle/database/

rac1-> ./runInstaller -silent -force -showProgress -ignorePrereq -responseFile /home/oracle/database/response/db_in.rsp

5. Finally, run the following script as root on each node, in node order:

/u01/app/oracle/product/11.2.0/dbhome_1/root.sh

V. Create the Database

1. Unless stated otherwise, every step in this section runs as the oracle user.

Pre-install validation (RAC1):

rac1-> cd /u01/app/oracle/product/11.2.0/dbhome_1/bin

rac1-> ./cluvfy stage -pre dbcfg -n rac1,rac2 -d $ORACLE_HOME

Performing pre-checks for database configuration

Checking node reachability...

Node reachability check passed from node "rac1"

Checking user equivalence...

User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth1"

Node connectivity passed for interface "eth1"

TCP connectivity check passed for subnet "192.168.10.0"

Check: Node connectivity for interface "eth2"

Node connectivity passed for interface "eth2"

TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "10.0.0.0".

Subnet mask consistency check passed for subnet "192.168.10.0".

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Total memory check passed

Available memory check passed

Swap space check passed

Free disk space check passed for "rac2:/u01/app/oracle/product/11.2.0/dbhome_1"

Free disk space check passed for "rac1:/u01/app/oracle/product/11.2.0/dbhome_1"

Free disk space check passed for "rac2:/tmp"

Free disk space check passed for "rac1:/tmp"

Check for multiple users with UID value 501 passed

User existence check passed for "oracle"

Group existence check passed for "oinstall"

Group existence check passed for "dba"

Membership check for user "oracle" in group "oinstall" [as Primary] passed

Membership check for user "oracle" in group "dba" passed

Run level check passed

Hard limits check passed for "maximum open file descriptors"

Soft limits check passed for "maximum open file descriptors"

Hard limits check passed for "maximum user processes"

Soft limits check passed for "maximum user processes"

System architecture check passed

Kernel version check passed

Kernel parameter check passed for "semmsl"

Kernel parameter check passed for "semmns"

Kernel parameter check passed for "semopm"

Kernel parameter check passed for "semmni"

Kernel parameter check passed for "shmmax"

Kernel parameter check passed for "shmmni"

Kernel parameter check passed for "shmall"

Kernel parameter check passed for "file-max"

Kernel parameter check passed for "ip_local_port_range"

Kernel parameter check passed for "rmem_default"

Kernel parameter check passed for "rmem_max"

Kernel parameter check passed for "wmem_default"

Kernel parameter check passed for "wmem_max"

Kernel parameter check passed for "aio-max-nr"

Package existence check passed for "make"

Package existence check passed for "binutils"

Package existence check passed for "gcc(x86_64)"

Package existence check passed for "libaio(x86_64)"

Package existence check passed for "libaio-devel(x86_64)"

Package existence check passed for "glibc(x86_64)"

Package existence check passed for "compat-libstdc++-33(x86_64)"

WARNING:

PRVF-7584 : Multiple versions of package "elfutils-libelf" found on node rac2: elfutils-libelf(x86_64)-0.152-1.el6,elfutils-libelf(x86_64)-0.164-2.el6

WARNING:

PRVF-7584 : Multiple versions of package "elfutils-libelf" found on node rac1: elfutils-libelf(x86_64)-0.152-1.el6,elfutils-libelf(x86_64)-0.164-2.el6

Package existence check passed for "elfutils-libelf(x86_64)"

Package existence check passed for "elfutils-libelf-devel"

Package existence check passed for "glibc-common"

Package existence check passed for "glibc-devel(x86_64)"

Package existence check passed for "glibc-headers"

Package existence check passed for "gcc-c++(x86_64)"

Package existence check passed for "libgcc(x86_64)"

Package existence check passed for "libstdc++(x86_64)"

Package existence check passed for "libstdc++-devel(x86_64)"

Package existence check passed for "sysstat"

Package existence check passed for "pdksh"

Package existence check passed for "expat(x86_64)"

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)

VIP node application check passed

Checking existence of NETWORK node application (required)

NETWORK node application check passed

Checking existence of GSD node application (optional)

GSD node application is offline on nodes "rac2,rac1"

Checking existence of ONS node application (optional)

ONS node application check passed

Time zone consistency check passed

Pre-check for database configuration was successful.

The validation succeeds, with two warnings caused by having two versions of the elfutils-libelf package installed.

2. Silent-install configuration (RAC1): review and modify, or separately create, the silent-install response file dbca.rsp.

2.1 Inspect the parameters in dbca.rsp:

rac1-> cat /home/oracle/database/response/dbca.rsp | grep -v ^# | grep -v ^$

2.2 Copy and edit the response file, keeping only the needed parameters; the biggest difference from a single-instance install is the NODELIST parameter.

rac1-> cd /home/oracle/database/response

rac1-> vi racdbca.rsp

[GENERAL]

RESPONSEFILE_VERSION = "11.2.0"

OPERATION_TYPE="createDatabase"

[CREATEDATABASE]

GDBNAME="burton"

SID="burton"

TEMPLATENAME="General_Purpose.dbc"

NODELIST=rac1,rac2

SYSPASSWORD="oracle4U"

SYSTEMPASSWORD="oracle4U"

STORAGETYPE=ASM

DISKGROUPNAME=DATA

RECOVERYGROUPNAME=FRA

CHARACTERSET="AL32UTF8"

NATIONALCHARACTERSET="UTF8"

rac1-> chmod 750 racdbca.rsp

3. Silently create the database (RAC1)

Note: the -responseFile argument must be an absolute path.

rac1-> cd $ORACLE_HOME/bin

rac1-> $ORACLE_HOME/bin/dbca -silent -responseFile /home/oracle/database/response/racdbca.rsp

VI. Create the Listener

rac1-> $ORACLE_HOME/bin/netca /silent /responseFile /home/oracle/database/response/netca.rsp

Parsing command line arguments:

Parameter "silent" = true

Parameter "responsefile" = /home/oracle/database/response/netca.rsp

Done parsing command line arguments.

Oracle Net Services Configuration:

Profile configuration complete.

Profile configuration complete.

Default listener "LISTENER" is already configured in Grid Infrastructure home: /u01/app/11.2.0/grid

Oracle Net Services configuration successful. The exit code is 0

Note: after running netca there is no listener.ora under the database home; it lives in the grid home, at /u01/app/11.2.0/grid/network/admin/listener.ora.

Dynamic listener registration is used:

SQL> show parameter listener

NAME TYPE VALUE

listener_networks string

local_listener string (ADDRESS=(PROTOCOL=TCP)(HOST= 192.168.91.152)(PORT=1521))

remote_listener string scan-ip.burton.com:1521

VII. Enable Archive Logging

1. Create the archive directory

rac1-> su - grid

rac1-> asmcmd

ASMCMD> lsdg

State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name

MOUNTED EXTERN N 512 4096 1048576 9992 8250 0 8250 0 N DATA/

MOUNTED EXTERN N 512 4096 1048576 4996 4666 0 4666 0 N FRA/

MOUNTED NORMAL N 512 4096 1048576 2997 2071 999 536 0 Y OCRVOTE/

ASMCMD> cd data/burton

ASMCMD> mkdir arch

ASMCMD> cd arch

ASMCMD> pwd

+data/burton/arch

2. Set the archive destination parameters

rac1-> su - oracle

rac1-> sqlplus / as sysdba

SQL> alter system set log_archive_dest_1='location=+data/burton/archivelog' scope=spfile sid='*';

SQL> alter system set log_archive_format='arch_%t_%s_%r.arc' scope=spfile sid='*';

3. Switch to archivelog mode

3.1 Shut down the database on all nodes (can be done from any one node):

rac1-> srvctl stop database -d burton -o immediate

(One instance will then be started to mount state.)

3.2 Verify the database is down

rac1-> srvctl status database -d burton

Instance burton1 is not running on node rac1

Instance burton2 is not running on node rac2

3.3 Start the first instance in mount state

rac1-> srvctl start instance -d burton -i burton1 -o mount

Enable archivelog mode and open the database:

rac1-> sqlplus / as sysdba

SQL> alter database archivelog;

SQL> alter database open;

Check the status:

SQL> archive log list;

Start the other node's instance; because the control file is on shared ASM storage, the other instance picks up the modified control file:

rac1-> srvctl start instance -d burton -i burton2

Check the cluster status (run as root):

rac1-> crsctl stat res -t

VIII. Configure Database Parameters (to be completed)

1. Modify parameters

$ sqlplus / as sysdba

alter system set audit_sys_operations=TRUE scope=spfile;

alter system set audit_trail=DB_EXTENDED scope=spfile;

alter system set deferred_segment_creation=false;

alter system set processes=1500 scope=spfile;

alter system set disk_asynch_io=true scope=spfile;

alter system set filesystemio_options=setall scope=spfile;

alter system set db_files=2000 scope=spfile;

alter system set control_file_record_keep_time=60;

alter system set resource_limit=true;

alter system set log_checkpoints_to_alert=true;

alter tablespace undotbs1 retention guarantee;

alter system set db_flashback_retention_target = 1440;

alter system set db_recovery_file_dest = '+FRA';

alter system set db_recovery_file_dest_size = 4440M;

alter database flashback on;

2. Restart the database

srvctl stop database -d burton -o immediate

srvctl start database -d burton -o open

[oracle@rac1 admin]$ srvctl start database -d burton -o open

Issue 1:

PRCR-1079 : Failed to start resource ora.burton.db

CRS-5017: The resource action "ora.burton.db start" encountered the following error:

ORA-01078: failure in processing system parameters

ORA-00838: Specified value of MEMORY_TARGET is too small, needs to be at least 1468M

. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/rac1/agent/crsd/oraagent_oracle/oraagent_oracle.log".

CRS-5017: The resource action "ora.burton.db start" encountered the following error:

ORA-01078: failure in processing system parameters

ORA-00838: Specified value of MEMORY_TARGET is too small, needs to be at least 1468M

. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/rac2/agent/crsd/oraagent_oracle/oraagent_oracle.log".

CRS-2674: Start of 'ora.burton.db' on 'rac1' failed

CRS-2674: Start of 'ora.burton.db' on 'rac2' failed

CRS-2632: There are no more servers to try to place resource 'ora.burton.db' on that would satisfy its placement policy

Fix: reduce the SGA size, or increase MEMORY_TARGET

create pfile='/tmp/pfile.ora' from SPFILE='+DATA/burton/spfileburton.ora';

startup pfile='/tmp/pfile.ora';

Edit MEMORY_TARGET in /tmp/pfile.ora; if the value is still below the required minimum, startup keeps failing:

SQL> startup pfile='/tmp/pfile.ora';

ORA-00838: Specified value of MEMORY_TARGET is too small, needs to be at least 1468M

ORA-01078: failure in processing system parameters

Issue 2:

SQL> startup nomount pfile='/tmp/pfile.ora';

ORA-00845: MEMORY_TARGET not supported on this system

Fix:

1. The initialization parameters MEMORY_TARGET and MEMORY_MAX_TARGET cannot exceed the size of shared memory (/dev/shm); to resolve this, enlarge /dev/shm:

mount -t tmpfs shmfs -o size=3g /dev/shm
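Note that the mount command above lasts only until the next reboot. To make the larger /dev/shm permanent, a tmpfs entry can be added to /etc/fstab (a config sketch; the 3g value matches this example, and the size must stay above MEMORY_MAX_TARGET or ORA-00845 returns after the next boot):

```
# /etc/fstab -- keep /dev/shm at 3 GB across reboots
tmpfs  /dev/shm  tmpfs  defaults,size=3g  0 0
```

After editing, `mount -o remount /dev/shm` applies the change and `df -h /dev/shm` confirms the new size.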

Check cluster status

su - root

$ crsctl stat res -t   (or: crs_stat -t -v)

$ srvctl status listener

$ ps -ef | grep lsnr |grep -v grep

$ ps -ef | grep crs |grep -v grep

$ ps -ef | grep smon |grep -v grep

Verify on each node that the parameters were changed:

show parameter control_file_record_keep_time

show parameter log_archive_format

select flashback_on from gv$database;

Optional parameters:

alter system set db_block_checking=true; -- equivalent to FULL


Guards mainly against corruption in memory and on disk. Because the checks are logical, the overhead is comparatively high and can reach about 10%.

alter system set db_block_checksum=true; -- enabling this is recommended by Oracle


Guards mainly against I/O hardware and I/O subsystem errors. A checksum is computed from the block's byte values, so the algorithm is simple and the overhead is usually only 1%-2%.

alter system set resource_limit=true ;

create profile profile_name limit idle_time 10 ;

create profile profile_res_lmt limit connect_time 180 idle_time 10 CPU_PER_SESSION 10 ;

--alter user scott profile profile_name; -- assign the profile to the user being restricted

--alter profile default limit failed_login_attempts 100;


--drop profile profile_name [CASCADE] ;

Create a logon trigger that sets a resumable timeout, so sessions can suspend and resume on space errors instead of raising an exception immediately

CREATE OR REPLACE TRIGGER trg_work_log

AFTER LOGON ON DATABASE

declare

v_program_name varchar2(200);

v_username varchar2(100);

v_ip_address varchar2(18);

begin

-- look up the current session's username, program and client IP

select username,program,SYS_CONTEXT('USERENV','IP_ADDRESS')

into v_username,v_program_name,v_ip_address

from v$session where AUDSID = SYS_CONTEXT('USERENV', 'SESSIONID');

if upper(v_username)='TEST' then

execute immediate 'alter session enable resumable timeout 3600';

end if;

END;

/

IX. Basic cluster operations

1. Oracle 11g RAC shutdown order

1.1 Stop the EM service

su - oracle

export ORACLE_UNQNAME=rac1db

emctl status dbconsole

emctl stop dbconsole

emctl status dbconsole

1.2 Stop the database

srvctl stop database -d burton -o immediate

Stop a single instance:

srvctl stop instance -d burton -i burton2 -o immediate

srvctl stop asm -n rac2 -i +ASM2

1.3 Stop the listeners

srvctl status listener

srvctl stop listener

srvctl status listener

Stop a single listener:

srvctl stop listener -n rac2

1.4 Stop CRS (run on every node)

su - root

/u01/app/11.2.0/grid/bin/crsctl stop crs

1.5 Check processes

ps -ef | grep lsnr |grep -v grep

ps -ef | grep crs |grep -v grep

ps -ef | grep smon |grep -v grep

2. Startup order

2.1 Start CRS (run on both nodes; if the OS is being rebooted this step can be skipped, since CRS starts automatically)

su - root

cd /u01/app/11.2.0/grid/bin

./crsctl start crs

./crs_stat -t -v

2.2 Start the database

su - oracle

srvctl start database -d burton -o open

Start a single instance:

srvctl start instance -d burton -i burton1

2.3 Check database and listener status

srvctl status database -d burton

srvctl status listener

srvctl start listener -n rac2

2.4 Check CRS

/u01/app/11.2.0/grid/bin/crs_stat -t -v

2.5 Check processes

ps -ef | grep lsnr |grep -v grep

ps -ef | grep crs |grep -v grep

ps -ef | grep smon |grep -v grep
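The three ps pipelines recur throughout this section, so they can be wrapped in one small helper (a sketch; `count_procs` is a hypothetical name, and on a host without Oracle every count is simply 0):

```shell
#!/bin/sh
# Count live processes whose ps entry matches a pattern,
# excluding the grep process itself (same idea as the pipelines above).
count_procs() {
  ps -ef | grep "$1" | grep -v grep | wc -l
}

# Listener, CRS and SMON process counts, one per line:
for p in lsnr crs smon; do
  printf '%-5s %s\n' "$p" "$(count_procs "$p")"
done
```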

Problems encountered during operation

ERROR 1: the crfclust.bdb file grows too large (Bug 20186278)

== check

cd /u01/app/11.2.0/grid/bin

./crsctl stat res ora.crf -init -t

== stop

./crsctl stop res ora.crf -init

== delete

cd /u01/app/11.2.0/grid/crf/db/rac1

rm -rf *.bdb

== start

cd /u01/app/11.2.0/grid/bin

./crsctl start res ora.crf -init
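Before deleting the .bdb files it is worth confirming they really are oversized. A hedged helper (the /u01 path is this install's node directory and `bdb_size_mb` is a made-up name; point it at your own node's crf/db directory):

```shell
#!/bin/sh
# Total size, in MB, of the Cluster Health Monitor .bdb files
# under a directory (defaults to this install's rac1 path).
bdb_size_mb() {
  dir=${1:-/u01/app/11.2.0/grid/crf/db/rac1}
  # du -c appends a "total" line; awk keeps only that sum (KB -> MB)
  du -ck "$dir"/*.bdb 2>/dev/null | awk 'END { print int($1 / 1024) }'
}

bdb_size_mb "$@"
```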

Issue 1:

1.1 When an error occurs, redirect the log output to a file to make it easier to track down.

Symptom:

Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...

search entry in file "/etc/resolv.conf" is consistent across nodes

Checking DNS response time for an unreachable node

Node Name Status

rac2 failed

rac1 passed

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: rac2

File "/etc/resolv.conf" is not consistent across nodes

1.2 Fix:

(1) On the DNS server, edit /etc/named.conf and point the root zone's file at "/dev/null":

zone "." IN {

type hint;

// file "named.ca";

file "/dev/null";

};

(2) On each RAC node, add the following settings:

vi /etc/resolv.conf


search prudentwoo.com

nameserver 114.114.114.114

nameserver 8.8.8.8

options rotate

options timeout:2

options attempts:5
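As a rough sanity check on these settings: the glibc resolver's worst-case stall is approximately nameservers × attempts × timeout, so the file above still allows about 2 × 5 × 2 = 20 s when the first nameserver is dead, over CVU's 15 s limit; lowering timeout/attempts brings it under 15000 ms. The arithmetic as a throwaway sketch (`worst_case_ms` is a hypothetical helper):

```shell
#!/bin/sh
# Rough worst-case DNS stall (ms): every attempt can wait `timeout`
# seconds on every nameserver before the resolver gives up.
worst_case_ms() {
  ns=$1 attempts=$2 timeout_s=$3
  echo $(( ns * attempts * timeout_s * 1000 ))
}

worst_case_ms 2 5 2   # the resolv.conf above: prints 20000
```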

Issue 2:

Redirecting the output to a file reveals the following errors

ERROR:

PRVG-1101 : SCAN name "scan-ip.burton.com" failed to resolve

SCAN Name IP Address Status Comment

scan-ip.burton.com 192.168.91.154 failed NIS Entry

ERROR:

PRVF-4657 : Name resolution setup check for "scan-ip.burton.com" (IP address: 192.168.91.154) failed

ERROR:

PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip.burton.com"

Verification of SCAN VIP and Listener setup failed

Checking VIP configuration.

This error can appear when no DNS (or GNS) server is configured for the SCAN name (root cause to be confirmed)