
Recovering from a CRSD Service Failure in an Oracle RAC Cluster Without Restarting the Cluster


Problem:

The monitoring software could not connect to node 1 of the TEST cluster; the physical IP ending in 1.80 was missing from the listener. Inspection records showed everything was still normal on May 18.

Investigation:
1. Checking around the time of the monitoring alert, the listener log showed no errors, but the listener status no longer included the 1.80 physical IP.
2. Further checks showed the cluster state was abnormal, with several resources reported OFFLINE. The cause turned out to be a failure of the CRSD service; the other cluster resources were healthy, so the database had not gone down.
3. The cluster logs showed that the CRSD resource failed because the OCR could not be read: at 12:19 on May 26 the ASM log reported problems with the OCR quorum disks, and read/write errors appeared at 13:23.

Resolution:
1. The cluster fault was caused by loss of access to the OCR (node 2 and the other DATA disk groups reported no errors).
2. The CRSD resource was down.
3. The CRSD resource was started by command (`alter diskgroup ocr mount;` then `crsctl start res ora.crsd -init`); this did not touch the other cluster resources, and the databases kept running normally throughout.
4. After the listener was stopped and restarted (`srvctl start listener -n TESTdb01`), the physical .80 IP reappeared in the listener status.
5. The customer has been asked to arrange for a storage engineer to inspect the storage.
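
The steps above can be sketched as a single shell sequence. This is a hypothetical wrapper, not the exact session from the incident: it assumes the grid user on the affected node, and by default (`DRY_RUN=1`) it only prints each command so the sequence can be reviewed before anything is executed.

```shell
#!/bin/sh
# Dry-run sketch of the recovery sequence (hypothetical wrapper).
# Run as the grid user on the node whose CRSD is down.
DRY_RUN=${DRY_RUN:-1}

run() {
  # Print the command in dry-run mode; execute it otherwise.
  if [ "$DRY_RUN" = "1" ]; then
    printf 'WOULD RUN: %s\n' "$*"
  else
    "$@"
  fi
}

recover_crsd() {
  # 1. Re-mount the OCR disk group in ASM (storage must be reachable again).
  run sh -c "echo 'alter diskgroup ocr mount;' | sqlplus -S / as sysasm"
  # 2. Restart only the CRSD resource; databases and listeners stay up.
  run crsctl start res ora.crsd -init
  # 3. Confirm all four stack components report online.
  run crsctl check crs
}

recover_crsd
```

Setting `DRY_RUN=0` would actually execute the commands; leaving the default lets the sequence be pasted and inspected safely first.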

Recovery steps:

1. Confirm the CRSD resource is down
[grid@cxhisdb01 ~]$  crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
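
The output above is the pattern that matters: CRS-4535 for Cluster Ready Services while CSS and EVM are still online means only the crsd daemon is down, so the whole stack does not need a restart. A minimal sketch of that check, using the incident's output as an inline fixture (in practice you would feed it the live `crsctl check crs` output):

```shell
#!/bin/sh
# Fixture: the 'crsctl check crs' output from the incident.
check_output='CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

# CRS-4535 plus CSS (CRS-4529) still online => only CRSD needs restarting.
if echo "$check_output" | grep -q 'CRS-4535' &&
   echo "$check_output" | grep -q 'CRS-4529.*online'; then
  echo 'only CRSD is down: try crsctl start res ora.crsd -init'
fi
```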

2. Start the CRSD resource
[grid@cxhisdb01 ~]$ crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'cxhisdb01'
CRS-2676: Start of 'ora.crsd' on 'cxhisdb01' succeeded


3. Check the CRS status
[grid@cxhisdb01 ~]$  crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[grid@TESTdb01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Thu May 27 12:12:17 2021

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> alter diskgroup ocr mount;

Diskgroup altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
[grid@TESTdb01 ~]$ ps -ef|grep crs
grid      96756  95841  0 12:13 pts/32   00:00:00 grep crs
[grid@TESTdb01 ~]$ ps -ef|grep css
grid      96877  95841  0 12:13 pts/32   00:00:00 grep css
root     144383      1  0  2020 ?        09:30:54 /u01/app/11.2.0/grid/bin/cssdmonitor
root     144401      1  0  2020 ?        09:23:19 /u01/app/11.2.0/grid/bin/cssdagent
grid     144412      1  0  2020 ?        2-05:10:35 /u01/app/11.2.0/grid/bin/ocssd.bin 
[grid@TESTdb01 ~]$ ps -ef|grep has
root       5926      1  0  2017 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
grid      96897  95841  0 12:13 pts/32   00:00:00 grep has
root     144192      1  0  2020 ?        1-14:31:42 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
[grid@TESTdb01 ~]$  crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@TESTdb01 ~]$ crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'TESTdb01'
CRS-2676: Start of 'ora.crsd' on 'TESTdb01' succeeded
[grid@TESTdb01 ~]$  crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


[grid@TESTdb01 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       TESTdb01                                    
               ONLINE  ONLINE       TESTdb02                                    
ora.LISTENER.lsnr
               ONLINE  ONLINE       TESTdb01                                    
               ONLINE  ONLINE       TESTdb02                                    
ora.OCR.dg
               ONLINE  ONLINE       TESTdb01                                    
               ONLINE  ONLINE       TESTdb02                                    
ora.SSD.dg
               ONLINE  ONLINE       TESTdb01                                    
               ONLINE  ONLINE       TESTdb02                                    
ora.asm
               ONLINE  ONLINE       TESTdb01                Started             
               ONLINE  ONLINE       TESTdb02                Started             
ora.gsd
               OFFLINE OFFLINE      TESTdb01                                    
               OFFLINE OFFLINE      TESTdb02                                    
ora.net1.network
               ONLINE  ONLINE       TESTdb01                                    
               ONLINE  ONLINE       TESTdb02                                    
ora.ons
               ONLINE  ONLINE       TESTdb01                                    
               ONLINE  ONLINE       TESTdb02                                    
ora.registry.acfs
               ONLINE  ONLINE       TESTdb01                                    
               ONLINE  ONLINE       TESTdb02                                    
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       TESTdb02                                    
ora.cvu
      1        ONLINE  ONLINE       TESTdb01                                    
ora.TESTdb01.vip
      1        ONLINE  ONLINE       TESTdb01                                    
ora.TESTdb02.vip
      1        ONLINE  ONLINE       TESTdb02                                    
ora.hospital.db
      1        ONLINE  ONLINE       TESTdb01                Open                
      2        ONLINE  ONLINE       TESTdb02                Open                
ora.oc4j
      1        ONLINE  ONLINE       TESTdb01                                    
ora.scan1.vip
      1        ONLINE  ONLINE       TESTdb02                                    
[grid@TESTdb01 ~]$ lsnrctl status
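
The `lsnrctl status` output is omitted above. To confirm the listener restart brought the physical IP back, grep the listening endpoints for it. The sketch below uses a hypothetical output fixture with made-up addresses, one of them ending in .80 to match the anonymized IP in this incident:

```shell
#!/bin/sh
# Hypothetical 'lsnrctl status' excerpt; addresses are made up for illustration.
cat > /tmp/lsnrctl_status.txt <<'EOF'
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.1.80)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.1.82)(PORT=1521)))
EOF

# In practice: lsnrctl status | grep 'HOST='
if grep -q 'HOST=10.0.1.80' /tmp/lsnrctl_status.txt; then
  echo 'physical IP is registered with the listener'
fi
```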

Analysis:

Cluster alert log:

2020-12-29 05:06:46.095: 
[/u01/app/11.2.0/grid/bin/oraagent.bin(145339)]CRS-5818:Aborted command 'check' for resource 'ora.OCR.dg'. Details at (:CRSAGF00113:) {1:61066:2} in /u01/app/11.2.0/grid/log/TESTdb01/agent/crsd/oraagent_grid/oraagent_grid.log.
2021-05-26 13:23:46.059: 
[crsd(145215)]CRS-1006:The OCR location +OCR is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:23:46.068: 
[crsd(145215)]CRS-1006:The OCR location +OCR is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:23:56.293: 
[/u01/app/11.2.0/grid/bin/oraagent.bin(66885)]CRS-5822:Agent '/u01/app/11.2.0/grid/bin/oraagent_grid' disconnected from server. Details at (:CRSAGF00117:) {0:21:18} in /u01/app/11.2.0/grid/log/TESTdb01/agent/crsd/oraagent_grid/oraagent_grid.log.
2021-05-26 13:23:56.294: 
[/u01/app/11.2.0/grid/bin/oraagent.bin(31320)]CRS-5822:Agent '/u01/app/11.2.0/grid/bin/oraagent_oracle' disconnected from server. Details at (:CRSAGF00117:) {0:19:50603} in /u01/app/11.2.0/grid/log/TESTdb01/agent/crsd/oraagent_oracle/oraagent_oracle.log.
2021-05-26 13:23:56.461: 
[/u01/app/11.2.0/grid/bin/orarootagent.bin(145347)]CRS-5822:Agent '/u01/app/11.2.0/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:5:1568} in /u01/app/11.2.0/grid/log/TESTdb01/agent/crsd/orarootagent_root/orarootagent_root.log.
2021-05-26 13:23:56.485: 
[/u01/app/11.2.0/grid/bin/scriptagent.bin(145549)]CRS-5822:Agent '/u01/app/11.2.0/grid/bin/scriptagent_grid' disconnected from server. Details at (:CRSAGF00117:) {0:9:68} in /u01/app/11.2.0/grid/log/TESTdb01/agent/crsd/scriptagent_grid/scriptagent_grid.log.
2021-05-26 13:23:56.651: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:23:58.540: 
[crsd(5795)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:23:58.548: 
[crsd(5795)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:23:58.964: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:00.374: 
[crsd(5834)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:00.382: 
[crsd(5834)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:01.010: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:02.447: 
[crsd(5886)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:02.455: 
[crsd(5886)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:03.068: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:04.457: 
[crsd(5909)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:04.465: 
[crsd(5909)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:05.102: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:06.492: 
[crsd(5937)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:06.501: 
[crsd(5937)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:07.132: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:08.517: 
[crsd(5986)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:08.525: 
[crsd(5986)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:09.162: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:10.544: 
[crsd(6015)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:10.552: 
[crsd(6015)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:11.193: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:12.581: 
[crsd(6051)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:12.589: 
[crsd(6051)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:13.223: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:14.614: 
[crsd(6070)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:14.622: 
[crsd(6070)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:15.253: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:16.643: 
[crsd(6090)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:16.650: 
[crsd(6090)]CRS-0804:Cluster Ready Service aborted due to Oracle Cluster Registry error [PROC-26: Error while accessing the physical storage
]. Details at (:CRSD00111:) in /u01/app/11.2.0/grid/log/TESTdb01/crsd/crsd.log.
2021-05-26 13:24:17.284: 
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
2021-05-26 13:24:17.284: 
[ohasd(144192)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
2021-05-26 13:24:17.315: 
[ohasd(144192)]CRS-2769:Unable to failover resource 'ora.crsd'.
2021-05-27 12:18:47.208: 
[crsd(99493)]CRS-1012:The OCR service started on node TESTdb01.
2021-05-27 12:18:47.603: 
[crsd(99493)]CRS-1201:CRSD started on node TESTdb01.
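
The log shows ohasd retrying `ora.crsd` until CRS-2771 (maximum restart attempts reached). Counting the CRS-2765 lines gives the number of failed restarts at a glance; the sketch below runs against a short fixture, and in practice you would point grep at the 11.2 cluster alert log (`$GRID_HOME/log/<node>/alert<node>.log`):

```shell
#!/bin/sh
# Short fixture of cluster alert-log lines from this incident.
cat > /tmp/crs_alert_excerpt.log <<'EOF'
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
[crsd(5795)]CRS-1013:The OCR location in an ASM disk group is inaccessible.
[ohasd(144192)]CRS-2765:Resource 'ora.crsd' has failed on server 'TESTdb01'.
[ohasd(144192)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
EOF

# Number of failed crsd restart attempts recorded in the excerpt.
grep -c 'CRS-2765' /tmp/crs_alert_excerpt.log
```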

ASM instance alert log:

Sun Jan 17 01:12:00 2021
Warning: VKTM detected a time drift.
Time drifts can result in an unexpected behavior such as time-outs. Please check trace file for more details.
Wed May 26 12:19:57 2021
WARNING: Waited 15 secs for write IO to PST disk 0 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 2 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 3 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 4 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 0 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 2 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 3 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 4 in group 3.
Wed May 26 12:19:57 2021
NOTE: process _b000_+asm1 (160488) initiating offline of disk 0.1409468596 (OCR_0000) with mask 0x7e in group 3
NOTE: process _b000_+asm1 (160488) initiating offline of disk 2.1409468594 (OCR_0002) with mask 0x7e in group 3
NOTE: process _b000_+asm1 (160488) initiating offline of disk 3.1409468595 (OCR_0003) with mask 0x7e in group 3
NOTE: process _b000_+asm1 (160488) initiating offline of disk 4.1409468592 (OCR_0004) with mask 0x7e in group 3
NOTE: checking PST: grp = 3
GMON checking disk modes for group 3 at 15 for pid 46, osid 160488
ERROR: no read quorum in group: required 3, found 1 disks
NOTE: checking PST for grp 3 done.
NOTE: initiating PST update: grp = 3, dsk = 0/0x5402c8b4, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 2/0x5402c8b2, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 3/0x5402c8b3, mask = 0x6a, op = clear
NOTE: initiating PST update: grp = 3, dsk = 4/0x5402c8b0, mask = 0x6a, op = clear
GMON updating disk modes for group 3 at 16 for pid 46, osid 160488
ERROR: no read quorum in group: required 3, found 1 disks
Wed May 26 12:19:57 2021
NOTE: cache dismounting (not clean) group 3/0xA242386E (OCR) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 160495, image: oracle@TESTdb01 (B001)
Wed May 26 12:19:57 2021
NOTE: halting all I/Os to diskgroup 3 (OCR)
Wed May 26 12:19:57 2021
NOTE: LGWR doing non-clean dismount of group 3 (OCR)
NOTE: LGWR sync ABA=15.85 last written ABA 15.85
WARNING: Offline for disk OCR_0000 in mode 0x7f failed.
WARNING: Offline for disk OCR_0002 in mode 0x7f failed.
WARNING: Offline for disk OCR_0003 in mode 0x7f failed.
WARNING: Offline for disk OCR_0004 in mode 0x7f failed.
Wed May 26 12:19:58 2021
kjbdomdet send to inst 2
detach from dom 3, sending detach message to inst 2
Wed May 26 12:19:58 2021
NOTE: No asm libraries found in the system
Wed May 26 12:19:58 2021
List of instances:
 1 2
Dirty detach reconfiguration started (new ddet inc 1, cluster inc 4)
 Global Resource Directory partially frozen for dirty detach
* dirty detach - domain 3 invalid = TRUE 
 2 GCS resources traversed, 0 cancelled
Dirty Detach Reconfiguration complete
Wed May 26 12:19:58 2021
WARNING: dirty detached from domain 3
NOTE: cache dismounted group 3/0xA242386E (OCR) 
SQL> alter diskgroup OCR dismount force /* ASM SERVER:2722248814 */ 
Wed May 26 12:19:58 2021
NOTE: cache deleting context for group OCR 3/0xa242386e
ASM Health Checker found 1 new failures
GMON dismounting group 3 at 17 for pid 47, osid 160495
NOTE: Disk OCR_0000 in mode 0x7f marked for de-assignment
NOTE: Disk OCR_0001 in mode 0x7f marked for de-assignment
NOTE: Disk OCR_0002 in mode 0x7f marked for de-assignment
NOTE: Disk OCR_0003 in mode 0x7f marked for de-assignment
NOTE: Disk OCR_0004 in mode 0x7f marked for de-assignment
NOTE:Waiting for all pending writes to complete before de-registering: grpnum 3
Wed May 26 12:20:28 2021
SUCCESS: diskgroup OCR was dismounted
SUCCESS: alter diskgroup OCR dismount force /* ASM SERVER:2722248814 */
SUCCESS: ASM-initiated MANDATORY DISMOUNT of group OCR
Wed May 26 12:20:29 2021
NOTE: diskgroup resource ora.OCR.dg is offline
Wed May 26 12:20:29 2021
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
Wed May 26 13:23:45 2021
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
WARNING: requested mirror side 1 of virtual extent 6 logical extent 0 offset 741376 is not allocated; I/O request failed
WARNING: requested mirror side 2 of virtual extent 6 logical extent 1 offset 741376 is not allocated; I/O request failed
WARNING: requested mirror side 3 of virtual extent 6 logical extent 2 offset 741376 is not allocated; I/O request failed
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_145227.trc:
ORA-15078: ASM diskgroup was forcibly dismounted
ORA-15078: ASM diskgroup was forcibly dismounted
ORA-15078: ASM diskgroup was forcibly dismounted
Wed May 26 13:23:46 2021
SQL> alter diskgroup OCR check /* proxy */ 
ORA-15032: not all alterations performed
ORA-15001: diskgroup "OCR" does not exist or is not mounted
ERROR: alter diskgroup OCR check /* proxy */
Wed May 26 13:23:56 2021
NOTE: client exited [145215]
Wed May 26 13:23:58 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 5795] opening OCR file
Wed May 26 13:24:00 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 5834] opening OCR file
Wed May 26 13:24:02 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 5886] opening OCR file
Wed May 26 13:24:04 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 5909] opening OCR file
Wed May 26 13:24:06 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 5937] opening OCR file
Wed May 26 13:24:08 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 5986] opening OCR file
Wed May 26 13:24:10 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 6015] opening OCR file
Wed May 26 13:24:12 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 6051] opening OCR file
Wed May 26 13:24:14 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 6070] opening OCR file
Wed May 26 13:24:16 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 6090] opening OCR file
Thu May 27 12:12:25 2021
SQL> alter diskgroup ocr mount 
NOTE: cache registered group OCR number=3 incarn=0xa2423bb2
NOTE: cache began mount (not first) of group OCR number=3 incarn=0xa2423bb2
NOTE: Assigning number (3,4) to disk (/dev/asm-ocr5)
NOTE: Assigning number (3,1) to disk (/dev/asm-ocr2)
NOTE: Assigning number (3,2) to disk (/dev/asm-ocr3)
NOTE: Assigning number (3,3) to disk (/dev/asm-ocr4)
NOTE: Assigning number (3,0) to disk (/dev/asm-ocr1)
Thu May 27 12:12:25 2021
GMON querying group 3 at 19 for pid 30, osid 95906
NOTE: cache opening disk 0 of grp 3: OCR_0000 path:/dev/asm-ocr1
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: cache opening disk 1 of grp 3: OCR_0001 path:/dev/asm-ocr2
NOTE: F1X0 found on disk 1 au 2 fcn 0.0
NOTE: cache opening disk 2 of grp 3: OCR_0002 path:/dev/asm-ocr3
NOTE: F1X0 found on disk 2 au 2 fcn 0.0
NOTE: cache opening disk 3 of grp 3: OCR_0003 path:/dev/asm-ocr4
NOTE: cache opening disk 4 of grp 3: OCR_0004 path:/dev/asm-ocr5
NOTE: cache mounting (not first) high redundancy group 3/0xA2423BB2 (OCR)
Thu May 27 12:12:26 2021
kjbdomatt send to inst 2
Thu May 27 12:12:26 2021
NOTE: attached to recovery domain 3
NOTE: redo buffer size is 256 blocks (1053184 bytes)
Thu May 27 12:12:26 2021
NOTE: LGWR attempting to mount thread 1 for diskgroup 3 (OCR)
NOTE: LGWR found thread 1 closed at ABA 15.85
NOTE: LGWR mounted thread 1 for diskgroup 3 (OCR)
NOTE: LGWR opening thread 1 at fcn 0.591 ABA 16.86
NOTE: cache mounting group 3/0xA2423BB2 (OCR) succeeded
NOTE: cache ending mount (success) of group OCR number=3 incarn=0xa2423bb2
Thu May 27 12:12:26 2021
NOTE: Instance updated compatible.asm to 11.2.0.0.0 for grp 3
SUCCESS: diskgroup OCR was mounted
SUCCESS: alter diskgroup ocr mount
Thu May 27 12:12:28 2021
WARNING: failed to online diskgroup resource ora.OCR.dg (unable to communicate with CRSD/OHASD)
NOTE: Attempting voting file refresh on diskgroup OCR
NOTE: Refresh completed on diskgroup OCR
. Found 5 voting file(s).
Thu May 27 12:18:41 2021
NOTE: [crsd.bin@TESTdb01 (TNS V1-V3) 99493] opening OCR file
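
In the ASM log, the "Waited 15 secs for write IO to PST" warnings at 12:19 are the first storage symptom: ASM gave up on Partner Status Table writes, lost read quorum, and force-dismounted the OCR disk group. Pulling the affected disks out of the log makes the scope obvious; the sketch below runs on a small fixture of those lines (a real check would grep the ASM alert log under the grid diag directory):

```shell
#!/bin/sh
# Fixture of ASM alert-log lines from this incident.
cat > /tmp/asm_alert_excerpt.log <<'EOF'
WARNING: Waited 15 secs for write IO to PST disk 0 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 2 in group 3.
WARNING: Waited 15 secs for write IO to PST disk 2 in group 3.
NOTE: cache dismounting (not clean) group 3/0xA242386E (OCR)
EOF

# List each disk that hit a PST write-IO wait, de-duplicated.
grep 'for write IO to PST' /tmp/asm_alert_excerpt.log |
  sed 's/.*PST disk \([0-9]*\) in group \([0-9]*\).*/group \2 disk \1/' |
  sort -u
```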

到了這里,關(guān)于ORACLE RAC集群CRSD服務(wù)異常后無需重啟集群的處理方法的文章就介紹完了。如果您還想了解更多內(nèi)容,請?jiān)谟疑辖撬阉鱐OY模板網(wǎng)以前的文章或繼續(xù)瀏覽下面的相關(guān)文章,希望大家以后多多支持TOY模板網(wǎng)!

本文來自互聯(lián)網(wǎng)用戶投稿,該文觀點(diǎn)僅代表作者本人,不代表本站立場。本站僅提供信息存儲空間服務(wù),不擁有所有權(quán),不承擔(dān)相關(guān)法律責(zé)任。如若轉(zhuǎn)載,請注明出處: 如若內(nèi)容造成侵權(quán)/違法違規(guī)/事實(shí)不符,請點(diǎn)擊違法舉報進(jìn)行投訴反饋,一經(jīng)查實(shí),立即刪除!

領(lǐng)支付寶紅包贊助服務(wù)器費(fèi)用

相關(guān)文章

  • Oracle 19c rac集群管理 -------- 集群啟停操作過程

    Oracle 19c rac集群管理 -------- 集群啟停操作過程

    首先查看數(shù)據(jù)庫的集群的db_unique_name –確認(rèn)集群的instance_name SQL select instance_name,status from gv$instance; INSTANCE_NAME STATUS p19c01 OPEN p19c02 OPEN Step 1.停止以及查看數(shù)據(jù)庫 Step 2.停止集群服務(wù) Step 3.啟動 集群服務(wù)(root): Step 4. 啟動數(shù)據(jù)庫:

    2024年01月25日
    瀏覽(20)
  • 小知識:使用oracle用戶查看RAC集群資源狀態(tài)

    正常情況按照標(biāo)準(zhǔn)配置的環(huán)境變量,只能grid用戶查看RAC集群資源狀態(tài)。 但是絕大部分操作其實(shí)都是oracle用戶來操作,比如啟停數(shù)據(jù)庫,操作完成以后就需要檢查下集群資源狀態(tài)。 看到好多DBA在現(xiàn)場操作時就是來回各種切換或開多個窗口。 其實(shí)有兩個簡單的解決方法可以實(shí)現(xiàn)

    2023年04月27日
    瀏覽(17)
  • oracle rac-歸檔滿處理

    有客戶反饋數(shù)據(jù)庫無法使用了,客戶手動啟動報錯如下 SQL startup; ORACLE instance started. Total System Global Area 2.6924E+10 bytes Fixed Size?? ??? ???? 2265984 bytes Variable Size?? ??? ? 1.3959E+10 bytes Database Buffers?? ? 1.2952E+10 bytes Redo Buffers?? ??? ??? 11202560 bytes Database mounted. ORA-16038: l

    2024年02月08日
    瀏覽(19)
  • ORACLE集群管理-19C RAC重新配置IPV6

    數(shù)據(jù)庫已經(jīng)配置和IPV6和 IPV4雙線協(xié)議,需要重新配置IPV6 1 root用戶執(zhí)行 ./srvctl stop scan_listener -i 1 ./srvctl stop scan ./srvctl stop listener -n orcldb1 ./srvctl stop listener -n orcldb2 ./srvctl stop vip -n orcldb1 ./srvctl stop vip -n orcldb2 ./oifcfg getif eno3 ?192.168.224.0 ?global ?public ens3f0 ?10.2.0.0 ?global ?cluste

    2024年02月09日
    瀏覽(23)
  • Oracle篇—單機(jī)對外訪問的IP變更為rac集群的scan ip

    Oracle篇—單機(jī)對外訪問的IP變更為rac集群的scan ip

    ? ? 因業(yè)務(wù)需要,需要修改現(xiàn)有數(shù)據(jù)庫環(huán)境中的scan ip。一般多在單機(jī)遷移到rac后,應(yīng)用不想在代碼中修改連接數(shù)據(jù)庫的ip,那么原單機(jī)的ip在rac中就變成了scan ip。 ? ? 修改為scan ip要確保原單機(jī)的ip下線,不然會沖突,scan ip可以理解為虛擬ip,所以不涉及在硬件網(wǎng)卡上修改。

    2024年02月03日
    瀏覽(19)
  • 記一次MySQL從節(jié)點(diǎn)服務(wù)器宕機(jī)重啟后,從節(jié)點(diǎn)出現(xiàn)主鍵沖突異常的處理

    MySQL 5.7 非GTID模式多線程復(fù)制。 某MySQL數(shù)據(jù)庫從節(jié)點(diǎn)因故障宕機(jī)(因故障直接宕機(jī),非正常關(guān)閉),重啟之后發(fā)現(xiàn)復(fù)制狀態(tài)異常,show slave的結(jié)果中Slave_SQL_Running為No,錯誤代碼為1062 error code,從系統(tǒng)表performance_schema.replication_applier_status_by_worker以及error log中顯示某條數(shù)據(jù)因?yàn)橐?/p>

    2024年02月19日
    瀏覽(25)
  • 重啟 Linux的Oracle服務(wù)

    要重啟 Linux 上的 Oracle 服務(wù),可以使用以下命令: 確保已登錄為 Oracle 用戶: su - oracle 進(jìn)入 Oracle 的安裝目錄: cd /u01/app/oracle/product/11.2.0/dbhome_1/bin 停止數(shù)據(jù)庫服務(wù): ./dbshut 啟動數(shù)據(jù)庫服務(wù): ./dbstart 請注意,上述命令假設(shè) Oracle 的安裝目錄為 /u01/app/oracle/product/11.2.0/dbhome_1,如果您

    2024年02月16日
    瀏覽(19)
  • oracle11g服務(wù)器重啟命令

    oracle11g服務(wù)器重啟命令

    首先省略oracle的安裝等等 1.登陸oracle并成功連接 ?2.以sysdba連接服務(wù)器,不然會顯示權(quán)限不夠 命令:connect / as sysdba 3.關(guān)閉服務(wù)器 命令:shutdown immediate或shutdown abort 4.接著再啟動就好了 命令:startup 這樣就完成了重啟,為什么要重啟呢? 答:在配置oracle服務(wù)中,只有重啟服務(wù)

    2024年02月08日
    瀏覽(93)
  • oracle異常處理

    最近在工作中遇到這么一個場景: 在同一網(wǎng)段內(nèi)存在著A庫和B庫,需要將A庫下某些表的數(shù)據(jù)同步到B庫 B庫跑著定時任務(wù),定時調(diào)用存儲過程將A庫下的數(shù)據(jù)同步到B庫。 B庫和A庫是通過建立dblink建立連接的。【關(guān)于dblink相關(guān)可能會后面單獨(dú)寫博客,先給自己挖個坑,慢慢填 哈哈

    2024年02月03日
    瀏覽(17)
  • oracle的異常處理

    oracle 提供了預(yù)定義例外、非預(yù)定義例外和自定義例外三種類型。其中: l 預(yù)定義例外用于處理常見的oracle錯誤; l 非預(yù)定義例外用于處理預(yù)定義所不能處理的oracle錯誤; l 自定義例外處理與oracle錯誤無關(guān)的其他情況。 Oracle代碼編寫過程中,如果捕捉例外則會在plsql塊內(nèi)解決運(yùn)

    2024年02月13日
    瀏覽(17)

覺得文章有用就打賞一下文章作者

支付寶掃一掃打賞

博客贊助

微信掃一掃打賞

請作者喝杯咖啡吧~博客贊助

支付寶掃一掃領(lǐng)取紅包,優(yōu)惠每天領(lǐng)

二維碼1

領(lǐng)取紅包

二維碼2

領(lǐng)紅包