11g RAC

2021-06-07  苏水的北
11g RAC new features

1. Software packaging changes between 10g and 11g RAC:
1.1. 10g: database software (database + ASM) and clusterware software (cluster stack / CRS).
Installing 10g clusterware:
OCR disk ---> /dev/raw/raw1
Voting disk ---> /dev/raw/raw2
1.2. 11g: database software (database only) and grid software (cluster stack + ASM).
Installing 11g grid:
OCR disk / voting disk ---> must be placed in the same disk group (dg).
2. User changes:
2.1. 10g: oracle user; dba group.
2.2. 11g:
database software ---> oracle user; oinstall, dba, asmdba groups.
grid software ---> grid user; oinstall, dba, asmdba, asmadmin, asmoper groups.
3. SCAN IP:
11g introduces the SCAN IP as the client entry point, taking over the role the VIPs played for client connections.
The VIPs have no direct client-facing use in 11g, but they must still be specified when building the RAC.
rac1 IP plan: rac1 (public), rac1priv (private), rac1vip (VIP), scanip (SCAN)
rac2 IP plan: rac2 (public), rac2priv (private), rac2vip (VIP)
4. Parameter setup differences:
4.1. In 11g the shell limit changes can be applied by the generated fixup.sh script ---> simplifies installation.
4.2. 11g ships a script that configures SSH user equivalence automatically: sshUserSetup.sh (in the grid installation media).
5. Single-instance RAC: a single-instance RAC is not the same as a standalone database.
single-instance RAC = cluster + one instance (kept clustered so it can scale out later)
standalone database = one instance, no cluster
Note: in 11g you can choose a single-instance RAC directly when building the cluster.
6. 11g RAC requires consistent time across the cluster nodes.
The traditional way to keep server clocks consistent is NTP time synchronization.
Oracle also provides its own Cluster Time Synchronization Service (CTSS) to keep node clocks consistent within the cluster.
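CTSS decides between active mode (it synchronizes the clocks itself) and observer mode (it defers to NTP) by checking whether an NTP configuration is present on the node. A minimal sketch of that decision, assuming the usual Linux config paths (the real check is done by the octssd daemon, and `crsctl check ctss` reports the mode):

```shell
#!/bin/sh
# Sketch of the CTSS mode decision: if an NTP configuration exists,
# CTSS runs in observer mode (NTP owns the clock); otherwise it runs
# in active mode and synchronizes the cluster nodes itself.
# The directory is parameterized so the check can be exercised anywhere.
ctss_mode() {
    etc_dir="${1:-/etc}"
    if [ -f "$etc_dir/ntp.conf" ] || [ -f "$etc_dir/chrony.conf" ]; then
        echo "observer"
    else
        echo "active"
    fi
}
```

On a node you would call `ctss_mode /etc`; on this tutorial's hosts, where NTP is configured, it would report observer mode.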
7. Installation order differences:
7.1. 10g: clusterware software first (vipca), then database software (netca, dbca for ASM, dbca to create the database).
7.2. 11g: grid software first (vipca, netca, asmca for ASM), then database software (dbca to create the database).

Building an 11g RAC:

1. Base environment preparation:

1. Stop the firewall and disable it at boot (run on both rac1 and rac2):

[root@localhost ~]# service iptables stop
[root@localhost ~]# service ip6tables stop
[root@localhost ~]# service iptables status
Firewall is stopped.
[root@localhost ~]# chkconfig iptables off
[root@localhost ~]# chkconfig ip6tables off
[root@localhost ~]# chkconfig  --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off

2. Disable SELinux (run on both rac1 and rac2):

[root@localhost ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
[root@localhost ~]# getenforce
Disabled

3. Configure the yum repository (run on both rac1 and rac2):

[root@localhost ~]# mount /dev/cdrom  /mnt   # mount the installation DVD on /mnt
[root@localhost ~]# cat  /etc/yum.repos.d/rhel-debuginfo.repo  # the local yum repo definition
[server]
name=server
enabled=1
gpgcheck=0
baseurl=file:///mnt/Server
[root@localhost ~]# yum repolist   # check the yum repo from the mounted media
Loaded plugins: rhnplugin, security
This system is not registered with RHN.
RHN support will be disabled.
repo id                          repo name                       status
server                           server                          enabled: 3,040
repolist: 3,040

4. Set the hostnames and plan the IP addresses (run on both rac1 and rac2):

rac1: 192.168.100.10 (public), 192.168.100.100 (VIP), 10.10.10.10 (private), 192.168.100.111 (SCAN)
rac2: 192.168.100.20 (public), 192.168.100.200 (VIP), 10.10.10.20 (private)

[root@localhost ~]# hostname rac1
[root@localhost ~]# vim /etc/sysconfig/network
[root@localhost ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=rac1

5. Add the hostname mappings (run on both rac1 and rac2):

vim /etc/hosts
192.168.100.10    rac1
192.168.100.20    rac2
10.10.10.10       rac1priv
10.10.10.20       rac2priv
192.168.100.100   rac1vip
192.168.100.200   rac2vip
192.168.100.111   racscanip
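A quick way to confirm that every planned name is present is to check the hosts entries programmatically. A sketch, written against hosts-format text on stdin so it can be checked offline (on a real node you would pipe /etc/hosts in, or use `getent hosts <name>` instead of the awk lookup):

```shell
#!/bin/sh
# Print any required hostname that is missing from hosts-file text
# fed on stdin; prints nothing when all names resolve.
missing_hosts() {
    hosts_text=$(cat)
    for name in "$@"; do
        if ! printf '%s\n' "$hosts_text" | awk -v n="$name" '
            {for (i = 2; i <= NF; i++) if ($i == n) found = 1}
            END {exit found ? 0 : 1}'; then
            echo "$name"
        fi
    done
}
```

Usage: `missing_hosts rac1 rac2 rac1priv rac2priv rac1vip rac2vip racscanip < /etc/hosts`.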

6. Install the dependency packages (run on both rac1 and rac2):

[root@rac1 ~]# yum install -y  binutils-* compat-libstdc++-33-* elfutils-libelf-* elfutils-libelf-devel-* gcc-* gcc-c++-* glibc-* glibc-common-* glibc-devel-* glibc-headers-* ksh-* libaio-* libgcc-* libstdc++-*  make-* sysstat-* unixODBC-*  unixODBC-devel-*    libXt.so.6 libXtst.so.6  libXp.so.6  glibc-devel glibc.i686 libgcc.i686 glibc-devel.i686 libgcc

7. Create the users and groups (run on both rac1 and rac2):
Note: 10g RAC installs two pieces of software:

clusterware (cluster software)
database (database software)
Only the oracle user is needed.
Note: 11g RAC also installs two pieces of software, but with separate owners:
grid (cluster software) ---> grid user
database (database software) ---> oracle user
Create the users and groups (run on both rac1 and rac2):

groupadd -g 1000 oinstall
groupadd -g 1100 asmadmin
groupadd -g 1200 dba
groupadd -g 1300 asmdba
groupadd -g 1301 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid grid
useradd -u 1101 -g oinstall -G dba,asmdba,asmadmin -d /home/oracle oracle
[root@rac1 ~]# echo 123456 | passwd --stdin grid
Changing password for user grid.
passwd: all authentication tokens updated successfully.
[root@rac1 ~]# echo 123456 | passwd --stdin oracle
Changing password for user oracle.
passwd: all authentication tokens updated successfully.
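After creating the accounts, it is worth confirming the secondary groups with `id grid` and `id oracle`. A small helper, sketched here, checks an `id`-style output line against the expected group list (the group names are the ones created above):

```shell
#!/bin/sh
# Check that every expected group name appears in an `id`-style output
# line, e.g. "uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),...".
# Prints each missing group; prints nothing when the account is correct.
check_groups() {
    id_line=$1; shift
    for grp in "$@"; do
        case "$id_line" in
            *"($grp)"*) ;;                 # group present
            *) echo "missing: $grp" ;;
        esac
    done
}

# On a real node:
#   check_groups "$(id grid)" oinstall asmadmin asmdba asmoper
#   check_groups "$(id oracle)" oinstall dba asmdba asmadmin
```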

8. Create the software installation paths (run on both rac1 and rac2):
Path plan (these paths also go into the environment variables below):

Create the inventory directory:
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
Create the grid user's ORACLE_HOME and ORACLE_BASE:
mkdir -p /u01/11.2.0/grid
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/11.2.0/grid
chmod -R 775 /u01/11.2.0/grid
Create the oracle user's ORACLE_BASE:
mkdir -p /u01/app/oracle
mkdir /u01/app/oracle/cfgtoollogs
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
Create the oracle user's ORACLE_HOME:
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
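The same layout can be driven from a table. The sketch below creates the directories under a configurable root (so it can be rehearsed unprivileged) and prints the chown/chmod commands instead of running them; the owners and modes match the steps above:

```shell
#!/bin/sh
# Create the /u01 tree under ${1} and print (rather than run) the
# ownership/permission commands, so this can be dry-run as a non-root
# user. Entry format: path:owner:group:mode
make_tree() {
    root="${1%/}"
    for entry in \
        /u01/app/oraInventory:grid:oinstall:775 \
        /u01/11.2.0/grid:grid:oinstall:775 \
        /u01/app/grid:grid:oinstall:775 \
        /u01/app/oracle:oracle:oinstall:775 \
        /u01/app/oracle/product/11.2.0/db_1:oracle:oinstall:775
    do
        path=${entry%%:*}; rest=${entry#*:}
        owner=${rest%%:*}; rest=${rest#*:}
        group=${rest%%:*}; mode=${rest##*:}
        mkdir -p "$root$path"
        echo "chown -R $owner:$group $root$path"
        echo "chmod -R $mode $root$path"
    done
}
```

Running `make_tree ""` as root would build the real tree; pipe the printed commands through `sh` to apply ownership.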

9. Adjust the kernel parameters (run on both rac1 and rac2):

[root@rac1 ~]# vim /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
[root@rac1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
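Note that `sysctl -p` echoes every line of the file, and the stock RHEL sysctl.conf already sets kernel.shmmax and kernel.shmall, which is why both keys appear twice above; when a key is duplicated, the last occurrence wins. A small duplicate-key check, sketched over sysctl.conf-style text on stdin:

```shell
#!/bin/sh
# Print sysctl keys that appear more than once in sysctl.conf-style
# text on stdin. With duplicates, the last value wins.
dup_keys() {
    awk -F= '/^[[:space:]]*[^#[:space:]]/ {
        key = $1
        gsub(/[[:space:]]/, "", key)
        if (++seen[key] == 2) print key
    }'
}
```

Usage: `dup_keys < /etc/sysctl.conf`.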

10. Set the shell limits (run on both rac1 and rac2):

[root@rac1 ~]# vim /etc/security/limits.conf
grid             soft    nproc   2047
grid             hard    nproc   16384
grid             soft    nofile  1024
grid             hard    nofile  65536
oracle           soft    nproc   2047
oracle           hard    nproc   16384
oracle           soft    nofile  1024
oracle           hard    nofile  65536

11. Per the official documentation, add resource-limit logic for the oracle user to /etc/profile as root:

[root@rac1 ~]# vim /etc/profile
if [ $USER = "oracle" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
        else
              ulimit -u 16384 -n 65536
        fi
fi
[root@rac1 ~]# source /etc/profile

12. Configure the grid/oracle user environment variables (run on both rac1 and rac2):
12.1. grid user environment on rac1:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vim .bash_profile
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
[grid@rac1 ~]$ source .bash_profile

12.2. oracle user environment on rac1:

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vim .bash_profile
export ORACLE_SID=racdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac1 ~]$ source .bash_profile

12.3. grid user environment on rac2:

[root@rac2 ~]# su - grid
[grid@rac2 ~]$ vim .bash_profile
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
[grid@rac2 ~]$ source .bash_profile

12.4. oracle user environment on rac2:

[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ vim .bash_profile
export ORACLE_SID=racdb2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
[oracle@rac2 ~]$ source .bash_profile

13. Configure SSH user equivalence (run as root on rac1):
Note: upload the grid software to rac1 and use the sshUserSetup.sh script shipped inside it to configure user equivalence.

[root@rac1 tmp]# unzip p10404530_112030_Linux-x86-64_3of7.zip
[root@rac1 tmp]# cd  /tmp/grid/sshsetup/
[root@rac1 sshsetup]# ./sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -noPromptPassphrase
[root@rac1 sshsetup]# ./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -advanced -noPromptPassphrase

Verify (run on both rac1 and rac2):

su - grid
ssh rac1 date
ssh rac2 date
ssh rac1priv date
ssh rac2priv date

su - oracle
ssh rac1 date
ssh rac2 date
ssh rac1priv date
ssh rac2priv date
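The eight checks above can be looped. The sketch below only prints the commands (a dry run) so the matrix can be reviewed first; drop the `echo` to actually execute the checks as root:

```shell
#!/bin/sh
# Dry run of the SSH-equivalence verification matrix: every user on
# every host alias must return a date with no password prompt.
# Remove the `echo` to actually execute the checks.
ssh_matrix() {
    for user in grid oracle; do
        for host in rac1 rac2 rac1priv rac2priv; do
            echo "su - $user -c 'ssh $host date'"
        done
    done
}
```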

14. Configure the raw devices:
Note: in 11g, the OCR and voting disks must be kept in the same disk group ---> prepare disk groups for the cluster.
14.1. Check the disks (shared storage, so run on rac1 only):

[root@rac1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14         274     2096482+  82  Linux swap / Solaris
/dev/sda3             275        2610    18763920   83  Linux

Disk /dev/sdb: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

14.2. The shared disk planned earlier is sdb; partition it into sdb1 (1 GB) and sdb2 (5 GB) (shared storage, so run on rac1 only):
sdb1: mapped to raw1, used to create griddg
sdb2: mapped to raw2, used to create datadg

[root@rac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-783, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-783, default 783): +1G

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (124-783, default 124):
Using default value 124
Last cylinder or +size or +sizeM or +sizeK (124-783, default 783):
Using default value 783

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk -l /dev/sdb       # verify the partitioning

Disk /dev/sdb: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         123      987966   83  Linux
/dev/sdb2             124         783     5301450   83  Linux
[root@rac2 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         123      987966   83  Linux
/dev/sdb2             124         783     5301450   83  Linux

14.3. Map the raw devices (run on both rac1 and rac2):

[root@rac1 ~]# vim /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
KERNEL=="raw*",OWNER="grid",GROUP="asmadmin",MODE="0660"
[root@rac1 ~]# start_udev
Starting udev:                                             [  OK  ]

14.4. Re-read the partition table on both rac1 and rac2:

[root@rac1 ~]# partprobe
Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.

14.5. Check the raw device mappings (run on both rac1 and rac2):

[root@rac1 ~]# ll /dev/raw
total 0
crw-rw---- 1 grid asmadmin 162, 1 Jun  5 22:29 raw1
crw-rw---- 1 grid asmadmin 162, 2 Jun  5 22:29 raw2
[root@rac2 dev]# ll /dev/raw
total 0
crw-rw---- 1 grid asmadmin 162, 1 Jun  5 22:12 raw1
crw-rw---- 1 grid asmadmin 162, 2 Jun  5 22:12 raw2

15. Environment preparation is complete. Run the cluster verification script to check it.



Run the script as the grid user on rac1:

[grid@rac1 grid]$ ./runcluvfy.sh stage  -pre crsinst -n rac1,rac2 -verbose
Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac2              536870912     536870912     1054529536    failed        Current value too low. Configured value too low.
  rac1              536870912     536870912     1054529536    failed        Current value too low. Configured value too low.
Result: Kernel parameter check failed for "shmmax"
Note: raise kernel.shmmax in /etc/sysctl.conf on both nodes (and re-apply with sysctl -p), then re-run the check.
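cluvfy expects shmmax to be at least half of physical memory; the required value above, 1054529536 bytes, is consistent with a MemTotal of about 2 GB on these VMs. A sketch of that calculation from a meminfo-style file (on a real node, just read /proc/meminfo):

```shell
#!/bin/sh
# Compute the minimum kernel.shmmax cluvfy expects: half of physical
# RAM in bytes. MemTotal in /proc/meminfo is reported in kB.
min_shmmax() {
    awk '/^MemTotal:/ {printf "%d\n", $2 * 1024 / 2}' "${1:-/proc/meminfo}"
}
```

Put the printed value (or larger) into /etc/sysctl.conf as kernel.shmmax.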
2. Install the grid software:
(The grid installation proceeds through the OUI graphical installer; the step-by-step screenshots are omitted.)
3. Install the database software as the oracle user

Note: unzip the database software as root (rac1 only; after installation the software is pushed from rac1 to rac2, so rac2 does not participate in this step):
1. Unzip the database software packages:

[root@rac1 tmp]# unzip p10404530_112030_Linux-x86-64_1of7.zip
[root@rac1 tmp]# unzip p10404530_112030_Linux-x86-64_2of7.zip
[root@rac1 tmp]# ll
drwxr-xr-x 8 root root           4096 Sep 22  2011 database

2. Install the database software as the oracle user on rac1:

[oracle@rac1 ~]$ cd /tmp/database/
[oracle@rac1 database]$ ll
total 64
drwxr-xr-x 12 root root  4096 Sep 19  2011 doc
drwxr-xr-x  4 root root  4096 Sep 22  2011 install
-rwxr-xr-x  1 root root 28122 Sep 22  2011 readme.html
drwxr-xr-x  2 root root  4096 Sep 22  2011 response
drwxr-xr-x  2 root root  4096 Sep 22  2011 rpm
-rwxr-xr-x  1 root root  3226 Sep 22  2011 runInstaller
drwxr-xr-x  2 root root  4096 Sep 22  2011 sshsetup
drwxr-xr-x 14 root root  4096 Sep 22  2011 stage
-rwxr-xr-x  1 root root  5466 Aug 23  2011 welcome.html
[oracle@rac1 database]$ ./runInstaller  -ignoresysprereqs

3. Begin the database software installation:


(The database software installation proceeds through the OUI graphical installer; screenshots omitted.)

After the installation completes, check the installed version:

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sun Jun 6 12:36:50 2021

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>
4. Configure the ASM disk groups:

Note: flashback is left disabled for now; only +DATADG is configured before creating the database. Log in to the graphical desktop as the grid user and run asmca to configure the ASM disk groups.


(asmca disk group creation screenshots omitted.)

Check the disk group creation:

[root@rac1 tmp]# su - grid
[grid@rac1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Sun Jun 6 12:50:28 2021

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
GRIDDG                         MOUNTED
DATADG                         MOUNTED
5. Create the database with dbca:

Log in to the graphical desktop as the oracle user and run dbca to create the database.


(dbca database creation screenshots omitted.)

10g RAC cluster administration:

1. crs_stat: prints the status of the cluster resources (the resource definitions and states are stored on the OCR disk).

NAME column: resource name in the cluster
TARGET column: intended state
STATE column: current state
HOST column: node currently hosting the resource

[oracle@rac1 ~]$ crs_stat
NAME=ora.rac1.ASM1.asm   # resource name in the cluster: rac1 is the node name, ASM1 the instance name (the ASM instance is ora.rac1.ASM1.asm to the cluster and +ASM1 on node rac1)
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.rac1.LISTENER_RAC1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.rac1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.rac2.ASM2.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.rac2.LISTENER_RAC2.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.rac2.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.rac2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.racdb.db
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.racdb.racdb1.inst
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.racdb.racdb2.inst
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2
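The long NAME/TARGET/STATE listing above is easy to condense. A sketch that turns `crs_stat` key=value output into one line per resource (fed from a string here; on a node you would pipe `crs_stat` straight in):

```shell
#!/bin/sh
# Condense `crs_stat` key=value output into "NAME TARGET STATE" lines,
# one per resource record.
crs_summary() {
    awk -F= '
        $1 == "NAME"   {name = $2}
        $1 == "TARGET" {target = $2}
        $1 == "STATE"  {print name, target, $2}
    '
}
```

Usage: `crs_stat | crs_summary`.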

2. Fixing srvctl when it fails to run:
The error output looks like this:

[oracle@rac1 ~]$ srvctl
/u01/app/oracle/product/10.2/crs/jdk/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

Fix: add `unset LD_ASSUME_KERNEL` (unset the variable) near the top of the srvctl script:
[oracle@rac1 ~]$ vim /u01/app/oracle/product/10.2/crs/bin/srvctl
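The one-line edit can also be applied non-interactively. A sketch (GNU sed assumed) that inserts `unset LD_ASSUME_KERNEL` right after the shebang; run it against a backup copy of srvctl first:

```shell
#!/bin/sh
# Insert `unset LD_ASSUME_KERNEL` after line 1 of a srvctl-style
# wrapper script, so the glibc workaround variable never reaches java.
# Operates in place on the file given as $1 (GNU sed -i).
patch_srvctl() {
    sed -i '1a\
unset LD_ASSUME_KERNEL' "$1"
}
```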



Run srvctl again to verify (now normal):

[oracle@rac1 ~]$ srvctl
Usage: srvctl <command> <object> [<options>]
    command: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
    objects: database|instance|service|nodeapps|asm|listener
For detailed help on each command and object and its options use:
    srvctl <command> <object> -h

3. The asm objects in the cluster:
An asm object is a node's +ASM instance; each node's +ASM instance manages the disk groups.
Note: if a node's ASM instance goes offline, that node can no longer use the disk groups and therefore cannot reach the database.

[oracle@rac1 ~]$ crs_stat | grep asm
NAME=ora.rac1.ASM1.asm   # the ASM1 instance on node rac1
NAME=ora.rac2.ASM2.asm

Check the ASM instance status (i.e. check whether its processes exist):

[oracle@rac1 ~]$ ps -ef | grep +ASM
oracle    9191 27281  0 11:33 pts/2    00:00:00 grep +ASM
oracle   16447     1  0 09:30 ?        00:00:00 asm_pmon_+ASM1
oracle   16449     1  0 09:30 ?        00:00:00 asm_diag_+ASM1
oracle   16451     1  0 09:30 ?        00:00:00 asm_psp0_+ASM1
oracle   16453     1  0 09:30 ?        00:00:01 asm_lmon_+ASM1
oracle   16455     1  0 09:30 ?        00:00:00 asm_lmd0_+ASM1
oracle   16457     1  0 09:30 ?        00:00:00 asm_lms0_+ASM1
oracle   16461     1  0 09:30 ?        00:00:00 asm_mman_+ASM1
oracle   16463     1  0 09:30 ?        00:00:00 asm_dbw0_+ASM1
oracle   16465     1  0 09:30 ?        00:00:00 asm_lgwr_+ASM1
oracle   16467     1  0 09:30 ?        00:00:00 asm_ckpt_+ASM1
oracle   16469     1  0 09:30 ?        00:00:00 asm_smon_+ASM1
oracle   16471     1  0 09:30 ?        00:00:00 asm_rbal_+ASM1
oracle   16473     1  0 09:30 ?        00:00:00 asm_gmon_+ASM1
oracle   16500     1  0 09:30 ?        00:00:00 asm_lck0_+ASM1
oracle   16916     1  0 09:30 ?        00:00:00 asm_o001_+ASM1
oracle   16918     1  0 09:30 ?        00:00:00 asm_o002_+ASM1
oracle   17009     1  0 09:30 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
[oracle@rac1 ~]$ ps -ef | grep smon
oracle   10759 27281  0 11:34 pts/2    00:00:00 grep smon
oracle   16469     1  0 09:30 ?        00:00:00 asm_smon_+ASM1
oracle   16967     1  0 09:30 ?        00:00:00 ora_smon_racdb1

4. The lsnr objects in the cluster:
An lsnr object is a node's listener process; each node runs its own listener to accept client requests.

[oracle@rac1 ~]$ crs_stat | grep lsnr
NAME=ora.rac1.LISTENER_RAC1.lsnr
NAME=ora.rac2.LISTENER_RAC2.lsnr

5. The vip objects in the cluster:
A vip object is a node's VIP resource.

[oracle@rac1 ~]$ crs_stat | grep vip
NAME=ora.rac1.vip
NAME=ora.rac2.vip

6. The inst objects in the cluster:
An inst object is a node's database instance.

[oracle@rac1 ~]$ crs_stat |grep inst
NAME=ora.racdb.racdb1.inst  # racdb is the database name, racdb1 the instance name
NAME=ora.racdb.racdb2.inst
Note: any resource named ora.<db_name>.<instance_name>.inst is an instance object.

7. The db object in the cluster:
The db object is the clustered database itself; there is only one.

[oracle@rac1 ~]$ crs_stat |grep db
NAME=ora.racdb.db

8. The ons objects in the cluster:
An ons object is the Oracle Notification Service (ONS) daemon on a node; it is rarely a concern in day-to-day operation.

[oracle@rac1 ~]$ crs_stat |grep ons
NAME=ora.rac1.ons
NAME=ora.rac2.ons

9. The gsd objects in the cluster:
A gsd object is the Global Services Daemon (GSD), which exists to serve Oracle 9i clients. With no 9i clients, gsd does no work.
In Oracle 11g, gsd is disabled by default.

[oracle@rac1 ~]$ crs_stat |grep gsd
NAME=ora.rac1.gsd
NAME=ora.rac2.gsd

Expected gsd states:
In 10g RAC: TARGET=ONLINE, STATE=ONLINE ---> normal
In 11g RAC: TARGET=OFFLINE, STATE=OFFLINE ---> normal
10. Daily maintenance:
When checking resource status, what matters most is the combination:
TARGET=ONLINE
STATE=OFFLINE/UNKNOWN ---> needs handling
When a resource fails, its STATE becomes OFFLINE or UNKNOWN; the awkward case is a resource that was ONLINE turning UNKNOWN.
For example, if a listener resource shows UNKNOWN, check its process with ps -ef:
If the listener process is gone ---> start the listener through the cluster.
If the process exists, try a remote connection through that listener:
The connection works ---> the UNKNOWN state can be left alone.
The connection fails ---> restart the listener through the cluster.
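The triage above amounts to a small decision function. A sketch with states and answers as plain strings (on a real cluster the two actions map to starting or restarting the listener resource, e.g. via srvctl):

```shell
#!/bin/sh
# Decision sketch for a listener resource whose TARGET is ONLINE:
#   $1 = current STATE (ONLINE|OFFLINE|UNKNOWN)
#   $2 = does the listener process exist? (yes|no)
#   $3 = does a remote connection through it work? (yes|no)
triage_listener() {
    state=$1 proc=$2 conn=$3
    case "$state" in
        ONLINE) echo "ok" ;;
        OFFLINE|UNKNOWN)
            if [ "$proc" = no ]; then
                echo "start listener via cluster"
            elif [ "$conn" = yes ]; then
                echo "leave as is"
            else
                echo "restart listener via cluster"
            fi ;;
    esac
}
```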
11. Stop the CRS stack as root:
Note: switch to root with `su root` (without the "-"), so that after the switch root keeps the oracle user's environment.
Note: every node in the RAC runs its own clusterware stack; below it is stopped on rac1 while rac2 keeps running.

[oracle@rac1 ~]$ su root
Password:
[root@rac1 oracle]# crsctl stop crs    # stops the clusterware stack on rac1
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

Check the cluster status from rac1 as the oracle user:

[oracle@rac1 ~]$ crs_stat  -t
CRS-0184: Cannot communicate with the CRS daemon.
Note: with the clusterware stack on rac1 down, resources cannot be queried there; query from rac2 instead.

Check the cluster status from rac2 as the oracle user:

[oracle@rac2 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    OFFLINE
ora....C1.lsnr application    ONLINE    OFFLINE
ora.rac1.gsd   application    ONLINE    OFFLINE
ora.rac1.ons   application    ONLINE    OFFLINE
ora.rac1.vip   application    ONLINE    ONLINE    rac2       # VIP failover: rac1's VIP has relocated to rac2
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
ora.racdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    OFFLINE
ora....b2.inst application    ONLINE    ONLINE    rac2

12. Start the CRS stack as root:

[root@rac1 oracle]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[oracle@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
ora.racdb.db   application    ONLINE    ONLINE    rac2
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2