Table of Contents
Chapter 1  Overall Plan
1.1 Topology
1.2 Host Plan
1.3 IP Plan
1.4 Storage Plan
1.5 Database Plan
Overall Database Installation Plan
Chapter 2  OS Installation and Configuration
2.1 Create the Virtual Machines
2.2 OS Installation
2.2.1 Server Configuration Table
2.2.2 Installation Notes
2.3 OS Configuration
2.3.1 IP Address Configuration
2.3.2 hosts File Configuration
2.3.3 Create and Attach Shared Disks
2.3.4 Disable the Firewall and SELinux
2.3.5 Adjust Network Parameters
2.3.6 Adjust /dev/shm
2.3.7 Disable THP and NUMA
2.3.8 Package Installation
2.3.9 Disable the NTP and chrony Time Services
2.3.10 Adjust Kernel Parameters
2.3.11 Disable Unnecessary Services
2.3.12 Create Users and Directories
2.3.13 Environment Variables
2.3.14 Other Parameter Changes
2.3.15 Set User Resource Limits
2.3.16 Configure Time Synchronization
2.3.17 Shut Down Both Hosts and Enable the Shared Folder
2.3.18 Configure the Shared Disks with udev
2.3.19 Install the GI Software
Chapter 3  Grid Installation
3.1 Pre-installation Checks
3.2 Grid Installation
3.2.1 Run the Installer
3.2.2 Cluster Verification
Chapter 4  ASM Disk Group Management
4.1 Create Disk Groups with ASMCA
Chapter 5  Oracle Software Installation
5.1 Unpack the Software
5.2 GUI Installation
Chapter 6  Database Creation
6.1 Create the Database with DBCA
========================================================================
Chapter 1  Overall Plan
1.1 Topology
(Topology diagram omitted.)
1.2 Host Plan
| Hostname | OS      | DB       | Role   | CPU | RAM | Network |
|----------|---------|----------|--------|-----|-----|---------|
| dkf19c01 | OEL 7.9 | 19.3.0.0 | node 1 | 2*2 | 4G  | 2       |
| dkf19c02 | OEL 7.9 | 19.3.0.0 | node 2 | 2*2 | 4G  | 2       |
1.3 IP Plan

| Node     | IPADDR          | NAME          |
|----------|-----------------|---------------|
| dkf19c01 | 10.0.0.111      | dkf19c01      |
| dkf19c01 | 10.0.0.115      | dkf19c01-vip  |
| dkf19c01 | 192.168.195.111 | dkf19c01-priv |
| dkf19c02 | 10.0.0.112      | dkf19c02      |
| dkf19c02 | 10.0.0.116      | dkf19c02-vip  |
| dkf19c02 | 192.168.195.112 | dkf19c02-priv |
| cluster  | 10.0.0.117      | dkf19c-scan   |
| cluster  | 10.0.0.118      | dkf19c-scan   |
| cluster  | 10.0.0.119      | dkf19c-scan   |
1.4 Storage Plan

OS partitions:

| Partition | Size |
|-----------|------|
| /boot     | 1G   |
| /         | 10G  |
| /tmp      | 10G  |
| SWAP      | 8G   |
| /u01      | 50G  |

Shared disks:

| Shared storage | Disk group | Size | Count |
|----------------|------------|------|-------|
| Shared disk    | DATA       | 10G  | 2     |
| Shared disk    | OCR        | 1G   | 3     |
1.5 Database Plan

Software:

| RAC    | Type             | Software         | Notes |
|--------|------------------|------------------|-------|
| dkf19c | Operating system | Oracle Linux 7.9 |       |
| dkf19c | Database         | Oracle 19.3.0.0  |       |

Database server storage strategy:

| Item                  | Capacity    | Location | Storage system | Configuration       |
|-----------------------|-------------|----------|----------------|---------------------|
| Clusterware software  | local       | /u01     | file system    |                     |
| OCR and voting disks  | 3 x 1 GB    | +OCR     | ASM            | normal redundancy   |
| Database software     | local       | /u01     | file system    |                     |
| Database files (DATA) | 20 GB total | +DATA    | ASM            | external redundancy |
New groups (GIDs match the groupadd commands in section 2.3.12):

| Group     | GID   | Notes                                |
|-----------|-------|--------------------------------------|
| oinstall  | 54321 | Oracle inventory and software owner  |
| dba       | 54322 | Database administrators              |
| oper      | 54323 | DBA operators                        |
| backupdba | 54324 | Backup administrators                |
| dgdba     | 54325 | Data Guard administrators            |
| kmdba     | 54326 | Key management administrators        |
| asmdba    | 54327 | ASM database administrators          |
| asmoper   | 54328 | ASM operators                        |
| asmadmin  | 54329 | Oracle ASM administrators            |
| racdba    | 54330 | RAC administrators                   |
New users (secondary groups match the useradd commands in section 2.3.12):

| User   | UID   | Groups                                                           | Home         | Shell | Notes |
|--------|-------|------------------------------------------------------------------|--------------|-------|-------|
| oracle | 10000 | oinstall (primary); dba,oper,backupdba,dgdba,kmdba,asmdba,racdba | /home/oracle | bash  |       |
| grid   | 10001 | oinstall (primary); dba,asmdba,asmoper,asmadmin,racdba           | /home/grid   | bash  |       |
Overall Database Installation Plan

| Item          | Value                                              |
|---------------|----------------------------------------------------|
| CDB           | Yes                                                |
| PDB           | pdkf01                                             |
| Memory plan   | SGA_TARGET                                         |
| processes     | 300                                                |
| Character set | AL32UTF8                                           |
| Archive mode  | noarchivelog (switch to archivelog manually later) |
Chapter 2  OS Installation and Configuration
2.1 Create the Virtual Machines
The two virtual machines are created the same way and differ only in IP address and hostname, so only node 1 is shown.
Run the New Virtual Machine wizard. (Screenshots omitted.)
After the VM is created, add a second network adapter. (Screenshot omitted.)
Note: since both virtual machines run the same operating system, the second one can be created by cloning the first after the OS is installed.
2.2 OS Installation
2.2.1 Server Configuration Table

| Host     | Disk | RAM | IP address | User | Password     |
|----------|------|-----|------------|------|--------------|
| dkf19c01 | 80G  | 4GB | 10.0.0.111 | root | user-defined |
| dkf19c02 | 80G  | 4GB | 10.0.0.112 | root | user-defined |
2.2.2 Installation Notes
1. Use the English interface.
2. Set the Shanghai time zone and disable daylight saving time.
3. Partition the local storage as planned in section 1.4. (Screenshot omitted.)
4. Select the software packages. (Screenshot omitted.)
5. Enable the network adapters. (Screenshot omitted.)
6. Disable KDUMP.
7. Click Begin Installation.
8. Set the root password.
9. When the installation finishes, run reboot.
10. After the reboot, shut the host down cleanly and create the second virtual machine with VMware's Clone function.
At this point both virtual hosts are ready with the operating system installed; the next step is OS configuration.
2.3 OS Configuration
2.3.1 IP Address Configuration
First network adapter:
[root@dkf19c01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=9a65b8b6-244d-40fc-9c92-6da63f31a117
DEVICE=ens33
ONBOOT=yes
IPV6_PRIVACY=no
IPADDR=10.0.0.111
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
DNS1=10.0.0.1
Second network adapter:
[root@dkf19c01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=5a04719d-b6e8-48ec-862b-6d534ad45537
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.195.111
NETMASK=255.255.255.0
GATEWAY=192.168.195.1
DNS1=192.168.195.1
[root@dkf19c01 ~]#
Adjust the second host's adapters the same way (if the host was cloned, regenerate the interface UUIDs so they are unique per host). The IP configuration is:
[root@dkf19c02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=9a65b8b6-244d-40fc-9c92-6da63f31a117
DEVICE=ens33
ONBOOT=yes
IPV6_PRIVACY=no
IPADDR=10.0.0.112
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
DNS1=10.0.0.1
Second network adapter:
[root@dkf19c02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=5a04719d-b6e8-48ec-862b-6d534ad45537
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.195.112
NETMASK=255.255.255.0
GATEWAY=192.168.195.1
DNS1=192.168.195.1
[root@dkf19c02 ~]#
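Since the four ifcfg files above differ only in a few fields, they can be generated from a small helper. This is a sketch under stated assumptions: gen_ifcfg and its temp-file target are illustrative, it emits only the fields that vary plus a minimal static stanza (the full files above also carry UUID and IPv6 settings), and on a real node the output path would be /etc/sysconfig/network-scripts/ifcfg-<device>.

```shell
# Write a minimal static ifcfg file for one device.
gen_ifcfg() {
  local dev="$1" ip="$2" gw="$3" out="$4"
  cat > "$out" <<EOF
TYPE=Ethernet
BOOTPROTO=static
NAME=$dev
DEVICE=$dev
ONBOOT=yes
IPADDR=$ip
NETMASK=255.255.255.0
GATEWAY=$gw
DNS1=$gw
EOF
}

# Dry run into a temp file rather than /etc/sysconfig/network-scripts:
tmp=$(mktemp)
gen_ifcfg ens33 10.0.0.111 10.0.0.1 "$tmp"
grep "^IPADDR=" "$tmp"
```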
2.3.2 hosts File Configuration
vi /etc/hosts
Add the following entries (identical on both hosts):
#public ip
10.0.0.111? dkf19c01
10.0.0.112? dkf19c02
#private ip
192.168.195.111 dkf19c01-priv
192.168.195.112 dkf19c02-priv
#vip
10.0.0.115 dkf19c01-vip
10.0.0.116 dkf19c02-vip
#scanip
10.0.0.117 dkf19c-scan
10.0.0.118 dkf19c-scan
10.0.0.119 dkf19c-scan
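Before moving on, the hosts file can be sanity-checked mechanically, e.g. that exactly three SCAN addresses are defined. A hedged sketch: it works on a temporary copy of the entries above so it can run anywhere; on the real nodes you would point the awk command at /etc/hosts.

```shell
# Sanity-check the cluster name resolution entries on a temp copy.
hosts_copy=$(mktemp)
cat > "$hosts_copy" <<'EOF'
10.0.0.111  dkf19c01
10.0.0.112  dkf19c02
192.168.195.111 dkf19c01-priv
192.168.195.112 dkf19c02-priv
10.0.0.115 dkf19c01-vip
10.0.0.116 dkf19c02-vip
10.0.0.117 dkf19c-scan
10.0.0.118 dkf19c-scan
10.0.0.119 dkf19c-scan
EOF
# The SCAN name must resolve to exactly three addresses.
scan_count=$(awk '$2 == "dkf19c-scan"' "$hosts_copy" | wc -l)
echo "SCAN entries: $scan_count"
```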
When this is done, shut down both hosts.
2.3.3 Create and Attach Shared Disks
On the host running VMware Workstation, open a command prompt (cmd) as administrator, change to the directory that will hold the shared disks (here D:\vm\sharedisk), and create them:
C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 1GB -a lsilogic -t 4 shared-asm01.vmdk
C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 1GB -a lsilogic -t 4 shared-asm02.vmdk
C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 1GB -a lsilogic -t 4 shared-asm03.vmdk
C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 10GB -a lsilogic -t 4 shared-asm04.vmdk
C:\"Program Files (x86)"\VMware\"VMware Workstation"\vmware-vdiskmanager -c -s 10GB -a lsilogic -t 4 shared-asm05.vmdk
Once created, attach the disks to both virtual machines; the steps are identical on each host.
Attach the remaining four shared disks the same way.
Then go to each virtual machine's directory (D:\vm\dkf19c01 and D:\vm\dkf19c02) and edit its .vmx file (e.g. dkf19c01.vmx), adding the entries below. Note: the fileName and present lines are added automatically when the disks are attached in the VM settings.
disk.locking = "FALSE"
disk.EnableUUID = "TRUE"
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1.virtualDev = "lsilogic"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "D:\vm\sharedisk\shared-asm01.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "D:\vm\sharedisk\shared-asm02.vmdk"
scsi1:2.mode = "independent-persistent"
scsi1:2.deviceType = "disk"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "D:\vm\sharedisk\shared-asm03.vmdk"
scsi1:3.mode = "independent-persistent"
scsi1:3.deviceType = "disk"
scsi1:4.present = "TRUE"
scsi1:4.fileName = "D:\vm\sharedisk\shared-asm04.vmdk"
scsi1:4.mode = "independent-persistent"
scsi1:4.deviceType = "disk"
scsi1:5.present = "TRUE"
scsi1:5.fileName = "D:\vm\sharedisk\shared-asm05.vmdk"
scsi1:5.mode = "independent-persistent"
scsi1:5.deviceType = "disk"
After editing, start both virtual machines.
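The five near-identical disk stanzas can also be generated instead of typed by hand. A hedged sketch: the gen_vmx helper and its hard-coded Windows path are illustrative only, and it assumes the disks should land on scsi1:1 through scsi1:5 as in this section.

```shell
# Emit the scsi1:N stanzas for a list of shared vmdk files.
gen_vmx() {
  local n=1
  for vmdk in "$@"; do
    cat <<EOF
scsi1:$n.present = "TRUE"
scsi1:$n.fileName = "D:\vm\sharedisk\\$vmdk"
scsi1:$n.mode = "independent-persistent"
scsi1:$n.deviceType = "disk"
EOF
    n=$((n + 1))
  done
}
stanzas=$(gen_vmx shared-asm01.vmdk shared-asm02.vmdk shared-asm03.vmdk \
                  shared-asm04.vmdk shared-asm05.vmdk)
echo "$stanzas"
```

Paste the output into each VM's .vmx file below the shared-bus settings.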
2.3.4 Disable the Firewall and SELinux (run on both hosts)
Run:
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
[root@dkf19c01 ~]# systemctl status firewalld -l
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@dkf19c01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Disable SELinux:
vi /etc/selinux/config
SELINUX=disabled
Then apply it immediately:
[root@dkf19c01 ~]# setenforce 0
2.3.5 Adjust Network Parameters
Zero-configuration networking (zeroconf) can cause inter-node communication problems in an Oracle cluster, so it should be disabled. Without zeroconf, network services such as DHCP and DNS must be set up by an administrator, or each host's network settings configured manually; since this cluster uses statically configured addresses, zeroconf can safely be turned off:
[root@dkf19c01 ~]# echo "NOZEROCONF=yes"? >>/etc/sysconfig/network && cat /etc/sysconfig/network
# Created by anaconda
NOZEROCONF=yes
[root@dkf19c02 ~]# echo "NOZEROCONF=yes"? >>/etc/sysconfig/network && cat /etc/sysconfig/network
# Created by anaconda
NOZEROCONF=yes
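The plain `>>` append above duplicates the line if the step is re-run. A hedged sketch of a re-runnable variant, demonstrated on a temporary copy of /etc/sysconfig/network (the temp file and helper name are illustrative):

```shell
# Append a line only if it is not already present, so the step is idempotent.
netcfg=$(mktemp)
echo "# Created by anaconda" > "$netcfg"
append_once() {
  local line="$1" file="$2"
  grep -qxF "$line" "$file" || echo "$line" >> "$file"
}
append_once "NOZEROCONF=yes" "$netcfg"
append_once "NOZEROCONF=yes" "$netcfg"   # second call is a no-op
cat "$netcfg"
```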
2.3.6 Adjust /dev/shm
[root@dkf19c01 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             2.0G     0  2.0G   0% /dev
tmpfs                2.0G     0  2.0G   0% /dev/shm
tmpfs                2.0G  8.8M  2.0G   1% /run
tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/ol-root   10G  2.1G  8.0G  21% /
/dev/mapper/ol-home   10G   33M   10G   1% /home
/dev/mapper/ol-u01    55G   33M   55G   1% /u01
/dev/sda1           1014M  169M  846M  17% /boot
tmpfs                393M     0  393M   0% /run/user/0
[root@dkf19c01 ~]# cp /etc/fstab /etc/fstab_`date +"%Y%m%d_%H%M%S"`
[root@dkf19c01 ~]# echo "tmpfs    /dev/shm    tmpfs    rw,exec,size=4G    0 0" >> /etc/fstab
[root@dkf19c01 ~]# mount -o remount /dev/shm
[root@dkf19c01 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             2.0G     0  2.0G   0% /dev
tmpfs                4.0G     0  4.0G   0% /dev/shm
tmpfs                2.0G  8.8M  2.0G   1% /run
tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/ol-root   10G  2.1G  8.0G  21% /
/dev/mapper/ol-home   10G   33M   10G   1% /home
/dev/mapper/ol-u01    55G   33M   55G   1% /u01
/dev/sda1           1014M  169M  846M  17% /boot
tmpfs                393M     0  393M   0% /run/user/0
Adjust the second node the same way:
[root@dkf19c02 ~]# cp /etc/fstab /etc/fstab_`date +"%Y%m%d_%H%M%S"`
[root@dkf19c02 ~]# echo "tmpfs    /dev/shm    tmpfs    rw,exec,size=4G    0 0" >> /etc/fstab
[root@dkf19c02 ~]# mount -o remount /dev/shm
[root@dkf19c02 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             2.0G     0  2.0G   0% /dev
tmpfs                4.0G     0  4.0G   0% /dev/shm
tmpfs                2.0G  8.8M  2.0G   1% /run
tmpfs                2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/ol-root   10G  2.1G  8.0G  21% /
/dev/mapper/ol-home   10G   33M   10G   1% /home
/dev/mapper/ol-u01    55G   33M   55G   1% /u01
/dev/sda1           1014M  169M  846M  17% /boot
tmpfs                393M     0  393M   0% /run/user/0
2.3.7 Disable THP and NUMA
Check the current THP setting on both nodes:
[root@dkf19c02 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
Method 1 (persistent, requires a reboot):
[root@dkf19c01 ~]# sed -i 's/quiet/quiet transparent_hugepage=never numa=off/' /etc/default/grub
[root@dkf19c01 ~]# grep quiet /etc/default/grub
[root@dkf19c01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Method 2 (immediate, but does not survive a reboot):
[root@dkf19c01 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@dkf19c01 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
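The kernel reports the active THP mode in brackets (e.g. `always madvise [never]`), so a post-change check can extract and assert it. A hedged sketch: the sample string stands in for the contents of /sys/kernel/mm/transparent_hugepage/enabled.

```shell
# Extract the bracketed (active) THP mode from the kernel's status line.
thp_mode() {
  grep -o '\[[a-z]*\]' <<< "$1" | tr -d '[]'
}
mode=$(thp_mode "always madvise [never]")
echo "THP mode: $mode"
```

On a real node: `thp_mode "$(cat /sys/kernel/mm/transparent_hugepage/enabled)"`.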
2.3.8 Package Installation
2.3.8.1 Configure a Local yum Repository
[root@dkf19c01 ~]# cd /etc/yum.repos.d
# Copy oracle-linux-ol7.repo as a template:
[root@dkf19c01 ~]# cp oracle-linux-ol7.repo OS-CDROM.repo
[root@dkf19c01 ~]# vi OS-CDROM.repo
Add the following:
[CD-ROM]
name=OS-$releasever-CDROM
baseurl=file:///tmp/cd-rom
gpgcheck=0
enabled=1
Create the mount point:
[root@dkf19c01 ~]# mkdir /tmp/cd-rom
To mount the physical DVD:
[root@dkf19c01 ~]# mount /dev/cdrom /tmp/cd-rom
Or to mount an ISO image:
[root@dkf19c01 ~]# mount -o loop /tmp/Oracle-Linux-OS...iso /tmp/cd-rom
# Test:
[root@dkf19c01 ~]# yum repolist
2.3.8.2 Install the Packages Required by Oracle
yum install -y binutils
yum install -y compat-libcap1
yum install -y compat-libstdc++-33
yum install -y compat-libstdc++-33.i686
yum install -y gcc
yum install -y gcc-c++
yum install -y glibc
yum install -y glibc.i686
yum install -y glibc-devel
yum install -y glibc-devel.i686
yum install -y ksh
yum install -y libgcc
yum install -y libgcc.i686
yum install -y libstdc++
yum install -y libstdc++.i686
yum install -y libstdc++-devel
yum install -y libstdc++-devel.i686
yum install -y libaio
yum install -y libaio.i686
yum install -y libaio-devel
yum install -y libaio-devel.i686
yum install -y libXext
yum install -y libXext.i686
yum install -y libXtst
yum install -y libXtst.i686
yum install -y libX11
yum install -y libX11.i686
yum install -y libXau
yum install -y libXau.i686
yum install -y libxcb
yum install -y libxcb.i686
yum install -y libXi
yum install -y libXi.i686
yum install -y make
yum install -y sysstat
yum install -y unixODBC
yum install -y unixODBC-devel
yum install -y readline
yum install -y libtermcap-devel
yum install -y bc
yum install -y unzip
yum install -y compat-libstdc++
yum install -y elfutils-libelf
yum install -y elfutils-libelf-devel
yum install -y fontconfig-devel
yum install -y libXrender
yum install -y libXrender-devel
yum install -y librdmacm-devel
yum install -y net-tools
yum install -y nfs-utils
yum install -y python
yum install -y python-configshell
yum install -y python-rtslib
yum install -y python-six
yum install -y targetcli
yum install -y smartmontools
yum install -y nscd
2.3.9 Disable the NTP and chrony Time Services
Back up the NTP and chrony configuration files on both hosts:
[root@dkf19c01 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
[root@dkf19c01 ~]# mv /etc/chrony.conf /etc/chrony.conf.bak
2.3.10 Adjust Kernel Parameters
Apply the same changes on both hosts:
[root@dkf19c01 ~]# cp /etc/sysctl.conf /etc/sysctl.conf.bak
memTotal=$(grep MemTotal /proc/meminfo | awk '{print $2}')
shmall=$((memTotal / 4))
if [ $shmall -lt 2097152 ]; then
? shmall=2097152
fi
shmmax=$((memTotal * 1024 - 1))
if [ "$shmmax" -lt 4294967295 ]; then
? shmmax=4294967295
fi
[root@dkf19c01 ~]# cat <<EOF>>/etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = $shmall
kernel.shmmax = $shmmax
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.wmem_default = 16777216
vm.dirty_ratio=20
vm.dirty_background_ratio=3
vm.dirty_writeback_centisecs=100
vm.dirty_expire_centisecs=500
vm.swappiness=10
vm.min_free_kbytes=524288
net.core.netdev_max_backlog = 30000
net.core.netdev_budget = 600
#vm.nr_hugepages =
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
net.ipv4.ipfrag_time = 60
net.ipv4.ipfrag_low_thresh=6291456
net.ipv4.ipfrag_high_thresh = 8388608
EOF
Apply the parameters:
sysctl -p
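The shmall/shmmax arithmetic above can be packaged as one reusable function: shmall is in 4 KB pages (MemTotal/4 with a 2097152-page floor) and shmmax is in bytes (MemTotal*1024-1 with a 4 GB-1 floor). The sample MemTotal value below is an assumption standing in for /proc/meminfo on a ~4 GB host.

```shell
# Compute kernel.shmall (pages) and kernel.shmmax (bytes) from MemTotal in KB.
calc_shm() {
  local mem_kb="$1" shmall shmmax
  shmall=$((mem_kb / 4))
  [ "$shmall" -lt 2097152 ] && shmall=2097152
  shmmax=$((mem_kb * 1024 - 1))
  [ "$shmmax" -lt 4294967295 ] && shmmax=4294967295
  echo "$shmall $shmmax"
}
read shmall shmmax <<< "$(calc_shm 4045912)"   # sample ~4 GB MemTotal in KB
echo "kernel.shmall = $shmall"
echo "kernel.shmmax = $shmmax"
```

On a real node: `calc_shm "$(awk '/^MemTotal/ {print $2}' /proc/meminfo)"`.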
2.3.11 Disable Unnecessary Services
Apply the same changes on both hosts:
systemctl disable accounts-daemon.service
systemctl disable atd.service
systemctl disable avahi-daemon.service
systemctl disable avahi-daemon.socket
systemctl disable bluetooth.service
systemctl disable brltty.service
systemctl disable chronyd.service
systemctl disable colord.service
systemctl disable cups.service
systemctl disable debug-shell.service
systemctl disable firewalld.service
systemctl disable gdm.service
systemctl disable ksmtuned.service
systemctl disable ktune.service
systemctl disable libstoragemgmt.service
systemctl disable mcelog.service
systemctl disable ModemManager.service
systemctl disable ntpd.service
systemctl disable postfix.service
systemctl disable rhsmcertd.service
systemctl disable rngd.service
systemctl disable rpcbind.service
systemctl disable rtkit-daemon.service
systemctl disable tuned.service
systemctl disable upower.service
systemctl disable wpa_supplicant.service
# Stop the services:
systemctl stop accounts-daemon.service
systemctl stop atd.service
systemctl stop avahi-daemon.service
systemctl stop avahi-daemon.socket
systemctl stop bluetooth.service
systemctl stop brltty.service
systemctl stop chronyd.service
systemctl stop colord.service
systemctl stop cups.service
systemctl stop debug-shell.service
systemctl stop firewalld.service
systemctl stop gdm.service
systemctl stop ksmtuned.service
systemctl stop ktune.service
systemctl stop libstoragemgmt.service
systemctl stop mcelog.service
systemctl stop ModemManager.service
systemctl stop ntpd.service
systemctl stop postfix.service
systemctl stop rhsmcertd.service
systemctl stop rngd.service
systemctl stop rpcbind.service
systemctl stop rtkit-daemon.service
systemctl stop tuned.service
systemctl stop upower.service
systemctl stop wpa_supplicant.service
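The long disable/stop pairs can be driven from a single unit list; `systemctl disable --now` combines both steps on systemd versions shipped with OL 7.9. A hedged sketch using a representative subset of the units above, echoed as a dry run (drop the echo to execute):

```shell
# Dry-run: print one disable-and-stop command per unit.
units="accounts-daemon.service atd.service avahi-daemon.service avahi-daemon.socket \
bluetooth.service chronyd.service cups.service firewalld.service gdm.service \
ntpd.service postfix.service rpcbind.service tuned.service"
cmds=$(for u in $units; do
  echo "systemctl disable --now $u"
done)
echo "$cmds"
```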
2.3.12 Create Users and Directories
1. Create the groups and users:
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,racdba -u 10000 oracle
useradd -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba -u 10001 grid
echo "oracle" | passwd --stdin oracle
echo "grid" | passwd --stdin grid
2. Create the directories:
mkdir -p /u01/app/19.3.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle/product/19.3.0/dbhome_1
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
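The group creation above can be driven from one name:GID table, which keeps the GIDs in a single place matching section 1.5. A hedged sketch, echoed as a dry run (drop the echo to execute):

```shell
# Dry-run: print one groupadd command per name:GID pair.
asm_groups="oinstall:54321 dba:54322 oper:54323 backupdba:54324 dgdba:54325
kmdba:54326 asmdba:54327 asmoper:54328 asmadmin:54329 racdba:54330"
out=$(for g in $asm_groups; do
  echo "groupadd -g ${g#*:} ${g%%:*}"   # ${g#*:}=GID, ${g%%:*}=name
done)
echo "$out"
```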
2.3.13 Environment Variables
Grid user environment variables:
cat >> /home/grid/.bash_profile << "EOF"
################add#########################
umask 022
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.3.0/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export ORACLE_SID=+ASM1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
alias dba='sqlplus / as sysasm'
export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '
EOF
oracle user environment variables:
cat >> /home/oracle/.bash_profile << "EOF"
################ add#########################
umask 022
export TMP=/tmp
export TMPDIR=$TMP
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1
export ORACLE_HOSTNAME=oracle19c-dkf19c01
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export ORACLE_SID=orcl1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
alias dba='sqlplus / as sysdba'
export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '
EOF
2.3.14 Other Parameter Changes
1. Append the following to /etc/pam.d/login:
cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
2. Edit /etc/profile:
Add the following:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
2.3.15 Set User Resource Limits
Edit the configuration file:
cat >> /etc/security/limits.conf <<EOF
grid    soft  nproc   2047
grid    hard  nproc   16384
grid    soft  nofile  1024
grid    hard  nofile  65536
grid    soft  stack   10240
grid    hard  stack   32768
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536
oracle  soft  stack   10240
oracle  hard  stack   32768
oracle  soft  memlock 3145728
oracle  hard  memlock 3145728
EOF
2.3.16 Configure Time Synchronization
Perform the following cluster time synchronization configuration on both Oracle RAC nodes.
Oracle supports two methods of time synchronization: an operating system configured with the Network Time Protocol (NTP), or the Oracle Cluster Time Synchronization Service (CTSS). CTSS (ctssd) is designed for installations whose RAC nodes cannot reach an NTP server.
To use CTSS in this cluster, NTP and its configuration must be removed: stop the ntpd service, disable it in the init sequence, and move the ntp.conf file aside. As root, run the following on both RAC nodes:
[root@racnode1 ~]# /sbin/service ntpd stop
[root@racnode1 ~]# chkconfig ntpd off
[root@racnode1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original
Also remove the file that stores the NTP daemon's pid:
[root@racnode1 ~]# rm /var/run/ntpd.pid
When the installer detects that NTP is inactive, CTSS is installed in active mode and synchronizes the time across all nodes. If NTP is found to be configured, CTSS starts in observer mode instead, and Oracle Clusterware performs no active time synchronization in the cluster.
2.3.17 Shut Down Both Hosts and Enable the Shared Folder
Shut down both hosts and enable the VMware shared folder, which will later hold the installation media. (Screenshots omitted.)
2.3.18 Configure the Shared Disks with udev
Obtain the UUID of each shared disk:
/usr/lib/udev/scsi_id -g -u /dev/sdb
/usr/lib/udev/scsi_id -g -u /dev/sdc
/usr/lib/udev/scsi_id -g -u /dev/sdd
/usr/lib/udev/scsi_id -g -u /dev/sde
/usr/lib/udev/scsi_id -g -u /dev/sdf
[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdb
36000c299c828142efb0230db9c7a9d93
[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdc
36000c29b8c865854d447ef6c0c220137
[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdd
36000c293b90e8742bb8cc98c32d77fc6
[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sde
36000c296930fa70e2fd41c6f26af38ac
[root@dkf19c01 ~]# /usr/lib/udev/scsi_id -g -u /dev/sdf
36000c290673aefb6ad44d24b1d986e92
[root@dkf19c01 ~]#
[root@dkf19c01 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c299c828142efb0230db9c7a9d93", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_ocr01 b 8 16; chown grid:asmadmin /dev/asm/asm_ocr01; chmod 0660 /dev/asm/asm_ocr01'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29b8c865854d447ef6c0c220137", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_ocr02 b 8 32; chown grid:asmadmin /dev/asm/asm_ocr02; chmod 0660 /dev/asm/asm_ocr02'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c293b90e8742bb8cc98c32d77fc6", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_ocr03 b 8 48; chown grid:asmadmin /dev/asm/asm_ocr03; chmod 0660 /dev/asm/asm_ocr03'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c296930fa70e2fd41c6f26af38ac", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_data01 b 8 64; chown grid:asmadmin /dev/asm/asm_data01; chmod 0660 /dev/asm/asm_data01'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c290673aefb6ad44d24b1d986e92", RUN+="/bin/sh -c '/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/asm_data02 b 8 80; chown grid:asmadmin /dev/asm/asm_data02; chmod 0660 /dev/asm/asm_data02'"
Reload the udev rules and re-trigger the devices:
[root@dkf19c01 ~]# /sbin/udevadm control --reload
[root@dkf19c01 ~]# /sbin/udevadm trigger --type=devices --action=change
Verify the device bindings:
[root@dkf19c01 yum.repos.d]# ll /dev/asm*
total 0
brw-rw---- 1 grid asmadmin 8, 64 Feb 14 21:41 asm_data01
brw-rw---- 1 grid asmadmin 8, 80 Feb 14 21:40 asm_data02
brw-rw---- 1 grid asmadmin 8, 16 Feb 14 21:41 asm_ocr01
brw-rw---- 1 grid asmadmin 8, 32 Feb 14 21:41 asm_ocr02
brw-rw---- 1 grid asmadmin 8, 48 Feb 14 21:41 asm_ocr03
[root@dkf19c01 yum.repos.d]#
Repeat the same steps on node 2.
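The five nearly identical udev lines can be generated from a uuid:name table instead of hand-edited, which avoids copy-paste mistakes in the UUIDs. A hedged sketch: it assumes the sdb..sdf minor numbers step by 16 (16, 32, ...) and pairs each captured scsi_id value with a disk name; adjust the table for your own UUIDs.

```shell
# Generate one udev rule per uuid:name pair; minor numbers step by 16.
gen_rules() {
  local minor=16
  for pair in "$@"; do
    uuid=${pair%%:*}; name=${pair#*:}
    printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="%s", RUN+="/bin/sh -c '\''/usr/bin/mkdir -p /dev/asm; mknod /dev/asm/%s b 8 %d; chown grid:asmadmin /dev/asm/%s; chmod 0660 /dev/asm/%s'\''"\n' \
      "$uuid" "$name" "$minor" "$name" "$name"
    minor=$((minor + 16))
  done
}
rules=$(gen_rules \
  36000c299c828142efb0230db9c7a9d93:asm_ocr01 \
  36000c29b8c865854d447ef6c0c220137:asm_ocr02 \
  36000c293b90e8742bb8cc98c32d77fc6:asm_ocr03 \
  36000c296930fa70e2fd41c6f26af38ac:asm_data01 \
  36000c290673aefb6ad44d24b1d986e92:asm_data02)
echo "$rules"
```

Redirect the output to /etc/udev/rules.d/99-oracle-asmdevices.rules on both nodes.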
2.3.19 Install the GI Software
Switch to the grid user, go to the shared folder, and unzip the grid software package:
[grid@dkf19c01 ~]$ cd /mnt/hgfs/Oracle
[grid@dkf19c01:/mnt/hgfs/Oracle]$ ls
Oracle_grid_V982068-01.zip
Oracle_database_1903-V982063-01.zip
[grid@dkf19c01:/mnt/hgfs/Oracle]$ unzip Oracle_grid_V982068-01.zip -d $ORACLE_HOME
Configure the GUI environment by installing the X11 tools; an X11-forwarding terminal such as MobaXterm is recommended.
Install command: [root@dkf19c02 yum.repos.d]# yum install -y xorg-x11*
1. On both nodes, check whether the cvuqdisk package is installed: rpm -qa cvuqdisk
2. If it is not, install the CVU package as root on both nodes:
[root@dkf19c01 ~]# cd /u01/app/19.3.0/grid/cv/rpm
[root@dkf19c01 rpm]# ls
cvuqdisk-1.0.10-1.rpm
[root@dkf19c01 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
[root@dkf19c01 rpm]#
Node 2:
[root@dkf19c02 ~]# cd /u01/app/19.3.0/grid/cv/rpm
[root@dkf19c02 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
[root@dkf19c02 rpm]#
Chapter 3  Grid Installation
3.1 Pre-installation Checks
[root@dkf19c01 rpm]# su - grid
[grid@dkf19c01 ~]$ export CVUQDISK_GRP=oinstall
[grid@dkf19c01 ~]$ cd /u01/app/19.3.0/grid/
[grid@dkf19c01 grid]$ ./runcluvfy.sh stage -pre crsinst -n dkf19c01,dkf19c02 -verbose
3.2 Grid Installation
3.2.1 Run the Installer
[grid@dkf19c01 ~]$ cd /u01/app/19.3.0/grid/
[grid@dkf19c01 grid]$ ./gridSetup.sh
(Installer screenshots omitted.)
Add the second node, then configure grid user SSH equivalence between the two nodes. (Screenshots omitted.)
Create the OCR disk group. (Screenshots omitted.)
When prompted, run the two root scripts on each node, in order.
Node 1:
[root@dkf19c01 rpm]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@dkf19c01 rpm]#
Node 2:
[root@dkf19c02 rpm]# /u01/app/oraInventory/orainstRoot.sh
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@dkf19c02 rpm]#
Node 1, second script:
[root@dkf19c01 rpm]# /u01/app/19.3.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.3.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/dkf19c01/crsconfig/rootcrs_dkf19c01_2023-02-10_09-59-01PM.log
2023/02/10 21:59:10 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2023/02/10 21:59:10 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2023/02/10 21:59:10 CLSRSC-363: User ignored prerequisites during installation
2023/02/10 21:59:10 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2023/02/10 21:59:12 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2023/02/10 21:59:13 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2023/02/10 21:59:13 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2023/02/10 21:59:13 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2023/02/10 21:59:30 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2023/02/10 21:59:33 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2023/02/10 21:59:39 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2023/02/10 21:59:47 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2023/02/10 21:59:48 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2023/02/10 21:59:52 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2023/02/10 21:59:52 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2023/02/10 22:00:14 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2023/02/10 22:00:19 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2023/02/10 22:00:24 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2023/02/10 22:00:28 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
ASM has been created and started successfully.
[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-230210PM100058.log for details.
2023/02/10 22:01:54 CLSRSC-482: Running command: '/u01/app/19.3.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk 6312bdb7b5904f5fbfc453f557492888.
Successful addition of voting disk 451040038e734faebfbff20dbf027e21.
Successful addition of voting disk 7a8cbd0838244f73bfdd80a32c6f1599.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6312bdb7b5904f5fbfc453f557492888 (/dev/asm/asm_ocr03) [OCR]
 2. ONLINE   451040038e734faebfbff20dbf027e21 (/dev/asm/asm_ocr02) [OCR]
 3. ONLINE   7a8cbd0838244f73bfdd80a32c6f1599 (/dev/asm/asm_ocr01) [OCR]
Located 3 voting disk(s).
2023/02/10 22:03:20 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2023/02/10 22:04:55 CLSRSC-343: Successfully started Oracle Clusterware stack
2023/02/10 22:04:56 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2023/02/10 22:06:25 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2023/02/10 22:06:58 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@dkf19c01 rpm]#
Node 2, second script:
[root@dkf19c02 rpm]# /u01/app/19.3.0/grid/root.sh
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Performing root user operation.
The following environment variables are set as:
??? ORACLE_OWNER= grid
??? ORACLE_HOME=? /u01/app/19.3.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
?? Copying dbhome to /usr/local/bin ...
?? Copying oraenv to /usr/local/bin ...
?? Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/dkf19c02/crsconfig/rootcrs_dkf19c02_2023-02-10_10-11-01PM.log
2023/02/10 22:11:07 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2023/02/10 22:11:07 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2023/02/10 22:11:07 CLSRSC-363: User ignored prerequisites during installation
2023/02/10 22:11:08 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2023/02/10 22:11:09 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2023/02/10 22:11:09 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2023/02/10 22:11:09 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2023/02/10 22:11:10 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2023/02/10 22:11:11 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2023/02/10 22:11:11 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2023/02/10 22:11:20 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2023/02/10 22:11:20 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2023/02/10 22:11:21 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2023/02/10 22:11:22 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2023/02/10 22:11:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2023/02/10 22:11:43 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2023/02/10 22:11:44 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2023/02/10 22:11:46 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2023/02/10 22:11:47 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2023/02/10 22:11:55 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2023/02/10 22:12:44 CLSRSC-343: Successfully started Oracle Clusterware stack
2023/02/10 22:12:44 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2023/02/10 22:12:57 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2023/02/10 22:13:03 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@dkf19c02 rpm]#
Continue with the installation:

The prerequisite checks that fail here can be ignored.

Note: if the first installation attempt failed, the disks will still carry leftover ASM header information when Grid is installed a second time, so they must be wiped again before they become selectable:
[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-ocr1 bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0051855 s, 2.0 GB/s
[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-ocr2 bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00490229 s, 2.1 GB/s
[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-ocr3 bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00451599 s, 2.3 GB/s
[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-data1 bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00490229 s, 2.1 GB/s
[root@dkf19c01 software]# dd if=/dev/zero of=/dev/asm-data2 bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00490229 s, 2.2 GB/s
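The five dd commands above can be collapsed into one loop. A minimal sketch, assuming the /dev/asm-ocr1..3 and /dev/asm-data1..2 device names from the udev configuration earlier; pass "echo" to preview the destructive commands before really running them as root:

```shell
# Wipe the first 10 MB of each candidate ASM disk so that headers left by a
# failed Grid installation do not hide the disks from the installer.
# Pass "echo" as $1 for a dry run that only prints the commands.
wipe_asm_disks() {
    runner=${1:-}    # empty = really run dd; "echo" = dry run
    for disk in /dev/asm-ocr1 /dev/asm-ocr2 /dev/asm-ocr3 \
                /dev/asm-data1 /dev/asm-data2; do
        $runner dd if=/dev/zero of="$disk" bs=1M count=10
    done
}

# Preview the destructive commands first:
wipe_asm_disks echo
```

Review the printed commands, then call `wipe_asm_disks` without an argument to actually zero the headers.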
3.2.2 Cluster Verification
[grid@dkf19c01:/home/grid]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
ora.chad
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
ora.net1.network
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
ora.ons
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 STABLE
      2        ONLINE  ONLINE       dkf19c02                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 STABLE
      2        ONLINE  ONLINE       dkf19c02                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 Started,STABLE
      2        ONLINE  ONLINE       dkf19c02                 Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 STABLE
      2        ONLINE  ONLINE       dkf19c02                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.dkf19c01.vip
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.dkf19c02.vip
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.qosmserver
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       dkf19c01                 STABLE
--------------------------------------------------------------------------------
[grid@dkf19c01:/home/grid]$
[grid@dkf19c02:/home/grid]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@dkf19c02:/home/grid]$
[grid@dkf19c02:/home/grid]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6312bdb7b5904f5fbfc453f557492888 (/dev/asm/asm_ocr03) [OCR]
 2. ONLINE   451040038e734faebfbff20dbf027e21 (/dev/asm/asm_ocr02) [OCR]
 3. ONLINE   7a8cbd0838244f73bfdd80a32c6f1599 (/dev/asm/asm_ocr01) [OCR]
Located 3 voting disk(s).
[grid@dkf19c02:/home/grid]$
Resource group status:
[grid@dkf19c02:/home/grid]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
ora.chad
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
ora.net1.network
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
ora.ons
               ONLINE  ONLINE       dkf19c01                 STABLE
               ONLINE  ONLINE       dkf19c02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 STABLE
      2        ONLINE  ONLINE       dkf19c02                 STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 STABLE
      2        ONLINE  ONLINE       dkf19c02                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 STABLE
      2        ONLINE  ONLINE       dkf19c02                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 Started,STABLE
      2        ONLINE  ONLINE       dkf19c02                 Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 STABLE
      2        ONLINE  ONLINE       dkf19c02                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.dkf19c.db
      1        ONLINE  ONLINE       dkf19c01                 Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /dbhome_1,STABLE
      2        ONLINE  ONLINE       dkf19c02                 Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /dbhome_1,STABLE
ora.dkf19c01.vip
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.dkf19c02.vip
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.qosmserver
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       dkf19c01                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       dkf19c02                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       dkf19c02                 STABLE
--------------------------------------------------------------------------------
[grid@dkf19c02:/home/grid]$
Check the cluster nodes:
[grid@dkf19c01 ~]$ olsnodes -s
dkf19c01    Active
dkf19c02    Active
[grid@dkf19c01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the cluster time synchronization service:
[grid@dkf19c02:/home/grid]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
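With this many resources, eyeballing the crsctl listings above is error-prone. A small helper sketch (assuming the plain-text `crsctl stat res -t` layout shown above; the function name is my own) that reports every resource instance whose STATE column is not ONLINE:

```python
import re

def offline_resources(crsctl_output: str):
    """Parse `crsctl stat res -t` text and return (resource, target, state)
    tuples for every instance whose state is not ONLINE."""
    problems = []
    current = None
    for line in crsctl_output.splitlines():
        if line.startswith("ora."):           # resource name line
            current = line.strip()
            continue
        # instance lines: optional cardinality number, then TARGET and STATE
        m = re.match(r"\s+(?:\d+\s+)?(ONLINE|OFFLINE)\s+(ONLINE|OFFLINE)", line)
        if m and current:
            target, state = m.groups()
            if state != "ONLINE":
                problems.append((current, target, state))
    return problems

sample = """\
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       dkf19c01                 Started,STABLE
      2        ONLINE  ONLINE       dkf19c02                 Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
"""
print(offline_resources(sample))
# In a two-node cluster the cardinality-3 asmgroup members are expected
# to be OFFLINE, so filter those out before alerting on the rest.
```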
- Chapter 4 ASM Disk Group Management
- 4.1 Creating a Disk Group with ASMCA
Log in to node 1 as the grid user.
When Clusterware was installed, an ASM instance was created, but only the disk group holding the OCR and voting disks (+OCR here) was set up. Before going on to install the Oracle database, we need to create a DATA disk group in ASM for the data files.
The process is simple: run asmca (ASM Configuration Assistant) to open the creation dialog, create the DATA (and, if desired, FRA) disk groups, and exit.
As the grid user, run asmca to start the disk group creation wizard.
Click Create; in the dialog, enter the disk group name, select external redundancy, tick the member disks, then click OK.
- As the grid user, run asmca
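If no graphical session is available, asmca can also create the disk group silently. This is only a sketch: the flag names below should be verified against `asmca -help` on your 19c installation, and the disk paths come from the udev devices configured earlier. The command is echoed rather than executed so it can be reviewed first:

```shell
# Non-GUI alternative to the asmca wizard (sketch only -- confirm the
# -silent flag names against your 19c asmca documentation before use).
DG_NAME=DATA
DISKS='/dev/asm-data1,/dev/asm-data2'

CMD="asmca -silent -createDiskGroup -diskGroupName $DG_NAME \
-diskList $DISKS -redundancy EXTERNAL"

# Print the command so it can be reviewed, then run it as the grid user:
echo "$CMD"
```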
- Chapter 5 Oracle Software Installation
- 5.1 Unpacking the Software
Log in to the system as the oracle user:
[oracle@dkf19c01:/mnt/hgfs/Oracle]$ unzip Oracle_database_1903-V982063-01.zip -d $ORACLE_HOME
5.2 GUI Installation
[oracle@dkf19c01:/mnt/hgfs/Oracle]$ cd $ORACLE_HOME/
[oracle@dkf19c01:/u01/app/oracle/product/19.3.0/dbhome_1]$ ./runInstaller
After the installer completes, run the root script on both nodes:
[root@dkf19c01 ~]# /u01/app/oracle/product/19.3.0/dbhome_1/root.sh
Performing root user operation for Oracle 19c
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/19.3.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Installation complete.
- Chapter 6 Database Creation
- 6.1 Creating the Database with DBCA
As the oracle user, run: dbca
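dbca can also build the RAC database without the wizard. The sketch below only assembles and prints the candidate command: the global database name dkf19c, PDB name PDKF01, and disk group +DATA come from this document, but the exact flag names are an assumption to verify against `dbca -help` on 19c before running anything:

```shell
# Silent alternative to the dbca wizard (sketch only; check every flag
# against `dbca -help` on your 19c installation first).
GDB_NAME=dkf19c            # matches the ora.dkf19c.db resource above
NODES=dkf19c01,dkf19c02

CMD="dbca -silent -createDatabase \
-templateName General_Purpose.dbc \
-gdbName $GDB_NAME -sid $GDB_NAME \
-databaseConfigType RAC -nodelist $NODES \
-storageType ASM -diskGroupName +DATA \
-createAsContainerDatabase true -numberOfPDBs 1 -pdbName PDKF01"

echo "$CMD"   # review, then run as the oracle user
```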
Log in to the database to verify:
[oracle@dkf19c01:/home/oracle]$
[oracle@dkf19c01:/home/oracle]$ dba
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Feb 14 22:17:53 2023
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.? All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL>
SQL> show pbds;
SP2-0158: unknown SHOW option "pbds"
SQL> show pdbs;
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDKF01                         READ WRITE NO
SQL>
========================================================================