Moving Grid Infrastructure Components in Oracle 18c

by Jeff Stonacek, Principal Architect

I recently ran into an interesting issue with Oracle 18c Grid Infrastructure (GI).  We were building several three-node clusters for a client, and on one of the clusters the ASM diskgroup used for GI was named differently from the ones on the other clusters.  Obviously, the diskgroup name is arbitrary and has no effect on the operation of Clusterware.  However, we decided it was a good idea to fix the naming so that all of the GI diskgroups had the same name (which worked for me, since I can be slightly OCD).

There are a couple of ways to approach this change.  The simplest is to deconfigure Grid Infrastructure on all nodes, rebuild the diskgroup with the correct name, and then reconfigure Grid Infrastructure.  However, this method requires downtime, so we decided to see what we could do to avoid downtime as much as possible.

Another option is to move all of the Grid Infrastructure components to a new diskgroup, which is the method I describe in this blog.

Move Grid Infrastructure

This blog covers moving version 18c GI components from one diskgroup to another.  Every version of Oracle introduces more components than the previous version; Oracle 18c, however, is very similar to 12c.  The following components were moved during this exercise:

  • Oracle Cluster Registry (OCR)
  • Voting Disks
  • ASM spfile
  • ASM password file
  • Grid Infrastructure Management Repository (GIMR) database


The obvious first step in this process is to create a new ASM diskgroup with the correct name.  Creating an ASM diskgroup is a routine operation, so create the new diskgroup as you would any other; a quick sketch is included below for reference.

In the following examples, the original diskgroup name is CRS.  The new diskgroup name is OCR.
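
For completeness, here is a minimal sketch of creating the new diskgroup from SQL*Plus as SYSASM.  The disk path and external redundancy are assumptions based on the device naming and single voting disk shown later in this post, so adjust both for your environment; the srvctl command at the end mounts the new diskgroup on the remaining nodes.

$ sqlplus / as sysasm

create diskgroup OCR external redundancy
  disk '/dev/asm/ASM_OCR01'
  attribute 'compatible.asm' = '18.0';

exit

$ srvctl start diskgroup -diskgroup OCR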

Oracle Cluster Registry (OCR)
Oracle provides utilities to move the Cluster Registry.  The ocrcheck and ocrconfig commands do the work for you.  Managing the OCR with ocrconfig requires root, so log in as root, set the Oracle environment to the GI Home, and then check the status of the OCR with ocrcheck.

# export ORACLE_HOME=/u01/grid/1830
# export PATH=$ORACLE_HOME/bin:$PATH

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          4
Total space (kbytes)     :     491684
Used space (kbytes)      :      84388
Available space (kbytes) :     407296
ID                       : 1573228169
Device/File Name         :       +CRS


Now add a copy of the OCR to the +OCR diskgroup and remove the original copy from the +CRS diskgroup.

# ocrconfig -add +OCR
# ocrconfig -delete +CRS


The OCR is now located in the +OCR diskgroup.

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          4
Total space (kbytes)     :     491684
Used space (kbytes)      :      84372
Available space (kbytes) :     407312
ID                       : 1573228169
Device/File Name         :       +OCR
Device/File integrity check succeeded

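One related item, offered as a suggestion rather than part of the original checklist: from 12.2 onward the automatic OCR backups can also be stored in an ASM diskgroup, so if the backup location still references +CRS it should be repointed before the old diskgroup is dropped.  As root, list the existing backups and, if needed, move the backup location to the new diskgroup.

# ocrconfig -showbackup
# ocrconfig -backuploc +OCR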

Voting Disks
Next, we’ll tackle the voting disks.  Unlike ocrconfig, the crsctl commands can be run as the GI Home owner.

First, query the registry to see where the existing voting disks are located.

$ export ORACLE_HOME=/u01/grid/1830
$ export PATH=$ORACLE_HOME/bin:$PATH

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7902fa0a196d4f13bf08dcd022d37b26 (/dev/asm/ASM_CRS01) [CRS]
Located 1 voting disk(s).


Then run the crsctl command to move the voting disk.

$ crsctl replace votedisk +OCR


The voting disks are now located in the +OCR diskgroup.

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a6ad595bd0a04f92bf3fbeef8d672e9b (/dev/asm/ASM_OCR01) [OCR]
Located 1 voting disk(s).


ASM Password File
Check to see where the password file is located for the ASM instance.

srvctl config asm
ASM home: <CRS home>
Password file: +CRS/orapwASM
Backup of Password file: +CRS/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM


Use the ASMCMD pwcopy command to copy the password files to the new diskgroup.

pwcopy +CRS/orapwASM +OCR/orapwASM
pwcopy +CRS/orapwASM_backup +OCR/orapwASM_backup


Modify the ASM instance to set the password file location.

srvctl modify asm -pwfile +OCR/orapwASM
srvctl modify asm -pwfilebackup +OCR/orapwASM_backup


The password file is now on the new diskgroup.

srvctl config asm
ASM home: <CRS home>
Password file: +OCR/orapwASM
Backup of Password file: +OCR/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM


ASM Spfile
When creating an 18c cluster, the spfile for the ASM instance is placed in ASM by default.  The following steps move the ASM spfile to the new diskgroup.

Using SQL*Plus from the GI Home, connect to the ASM instance as SYSASM, create a pfile from the running instance, and then create an spfile in the new diskgroup.

. oraenv
+ASM1

sqlplus / as sysasm

create pfile='/tmp/initasm.ora' from spfile;
create spfile='+OCR' from pfile='/tmp/initasm.ora';

exit


Creating the spfile from the running ASM instance updates the GPnP profile with the new spfile location.  Now, as the Grid owner, run gpnptool to confirm that the profile points to the spfile in the +OCR diskgroup.

gpnptool get

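As an additional quick check (not part of the original steps), the ASMCMD spget command prints the ASM spfile path registered in the GPnP profile; after the change it should report a path under the +OCR diskgroup.

$ asmcmd spget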

At this point, cluster services must be restarted for the ASM spfile change to take effect.  This can be done in a rolling fashion: as root, stop and then start cluster services on each node, one node at a time, so that cluster services remain available on the surviving nodes during the restart.

crsctl stop crs


Once Grid Infrastructure is down on the node, start it back up.

crsctl start crs


Check the status of the restart with the following command.

crsctl stat res -t


Grid Infrastructure Management Repository (GIMR) Database
The last component that needs to be moved is the GIMR database.  First, the GIMR database needs to be located.

$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node rac182


Step one in moving the GIMR database is to stop the ora.crf service on each node of the cluster.  Log in to each node as root and run the following commands from the GI Home.

# export ORACLE_HOME=/u01/grid/1830
# export PATH=$ORACLE_HOME/bin:$PATH

# crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'rac181'
CRS-2677: Stop of 'ora.crf' on 'rac181' succeeded

# crsctl modify res ora.crf -attr ENABLED=0 -init


As the GI Home owner, connect to the node where the GIMR database is running.  From the GI Home, run the following dbca command to delete the database.

$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
45% complete
48% complete
52% complete
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/u01/gridbase/cfgtoollogs/dbca/_mgmtdb/_mgmtdb0.log" for further details.


Now, recreate the GIMR container database using the new diskgroup.

dbca -silent -createDatabase -createAsContainerDatabase true \
-templateName MGMTSeed_Database.dbc \
-sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +OCR \
-datafileJarLocation /u01/grid/1830/assistants/dbca/templates -characterset AL32UTF8 \
-autoGeneratePasswords -skipUserTemplateCheck

Prepare for db operation
10% complete
Registering database with Oracle Grid Infrastructure
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
54% complete
58% complete
62% complete
Completing Database Creation
66% complete
69% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
/u01/gridbase/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
Look at the log file "/u01/gridbase/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log" for further details.


Next, create the GIMR pluggable database inside the container.

mgmtca -local

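As a quick sanity check (an extra step beyond the original procedure), srvctl can be used to confirm that the rebuilt management database is registered against the new diskgroup; the spfile and disk group it reports should now reference +OCR.

$ srvctl status mgmtdb
$ srvctl config mgmtdb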

Finally, log in to each node of the cluster as root and re-enable the ora.crf service.

# crsctl modify res ora.crf -attr ENABLED=1 -init
# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'rac181'
CRS-2676: Start of 'ora.crf' on 'rac181' succeeded


Cleanup
At this point, all components have been removed from the old +CRS diskgroup.  The diskgroup can now be removed from ASM and the disk devices decommissioned from the server.
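
A minimal sketch of the cleanup follows, assuming the old diskgroup is mounted only on the node where the drop is issued (dismount it on the other nodes first).  List anything left behind with ASMCMD, such as the superseded ASM spfile, then drop the diskgroup as SYSASM.

$ asmcmd ls +CRS

$ sqlplus / as sysasm

drop diskgroup CRS including contents;

exit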

Conclusion

In this blog, we have shown how to relocate all Grid Infrastructure components from one ASM diskgroup to another. These instructions can be used to move individual components, or to move all components and evacuate the entire diskgroup.  This method is much less invasive than deconfiguring Grid Infrastructure and rebuilding, as there is only a short, rolling outage required to restart Grid Infrastructure one node at a time.
