Renaming an Oracle Grid Infrastructure Home

posted January 7, 2019, 12:34 PM by Andy Kerber (@dbakerber), Senior Consultant

Renaming an Oracle Grid Infrastructure (GI) home is not something that needs to be done often, but I recently had occasion to change the name of a GI home without upgrading it. Oracle documentation includes instructions for this, but it leaves off a few steps and is unclear on a few others.

Our source GI home is /u01/app/grid/12.2.0/grid, and our new GI home is /u01/app/grid/18.3.0/grid.  The source GI home was installed without issue, using ASMLIB in this case. So I will forgo the steps for the initial installation.  The ORACLE_BASE for the initial installation was /u01/app/grid/12.2.0/grid_base, and for the new installation it is /u01/app/grid/18.3.0/grid_base.

The official instructions for accomplishing this can be found in the Oracle doc, Changing the Oracle Grid Infrastructure Path.

As these instructions don’t completely suit our needs, we will make corrections as we go.  Note that from this point forward, $GI_HOME will refer to /u01/app/grid/18.3.0/grid.

  1. As root, stop the clusterware running from the source GI home:
/u01/app/grid/12.2.0/grid/bin/crsctl stop crs
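
Before moving on, it is worth confirming that the stack is actually down on this node. A minimal check (the exact messages vary by version):

```shell
# Once the stack is down, crsctl typically reports CRS-4639 (could not contact OHASD)
/u01/app/grid/12.2.0/grid/bin/crsctl check crs
# No clusterware daemons should remain
ps -ef | grep '[o]hasd\|[c]rsd\|[o]cssd'
```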

  2. As the grid user, remove the current GI home from the inventory (note that no files are deleted yet):
/u01/app/grid/12.2.0/grid/oui/bin/runInstaller -silent -waitforcompletion -detachhome ORACLE_HOME='/u01/app/grid/12.2.0/grid'

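To confirm the detach worked, you can look at the central inventory directly. The inventory path below matches the INVENTORY_LOCATION used later in the clone.pl step; adjust it for your environment:

```shell
# After the detach, the old home should no longer appear among the HOME entries
grep 'HOME NAME' /u01/app/grid/12.2.0/oraInventory/ContentsXML/inventory.xml
```
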
  3. As root, create the destination directories:
mkdir -p /u01/app/grid/18.3.0/grid
mkdir -p /u01/app/grid/18.3.0/grid_base

If necessary, change the ownership of the directories to correspond to the current ownership.
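
For example, assuming the GI software owner is grid and the install group is oinstall (an assumption; substitute the owner and group actually used in your environment):

```shell
# Hypothetical ownership fix; use your actual GI owner and group
chown -R grid:oinstall /u01/app/grid/18.3.0
```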

  4. As root, copy the files from the existing GI home to the destination GI home, preserving file ownership and permissions:
cp -pR /u01/app/grid/12.2.0/grid /u01/app/grid/18.3.0/
cp -pR /u01/app/grid/12.2.0/grid_base /u01/app/grid/18.3.0/

  5. As root, unlock the destination GI home (note that rootcrs.sh is run from the source home here):
cd /u01/app/grid/12.2.0/grid/crs/install
./rootcrs.sh -unlock -dstcrshome /u01/app/grid/18.3.0/grid

  6. As grid, clone the GI home using the clone.pl script. This step relinks the GI home to the new location. The correct arguments for the clone.pl script are not listed anywhere in the Oracle documentation, and it took some experimentation to get them right, as shown below.
cd /u01/app/grid/18.3.0/grid/clone/bin
./clone.pl ORACLE_BASE=/u01/app/grid/18.3.0/grid_base \
ORACLE_HOME=/u01/app/grid/18.3.0/grid \
INVENTORY_LOCATION=/u01/app/grid/12.2.0/oraInventory crs=TRUE

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 41761 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 16378 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-12-28_01-48-13PM. Please wait ...[WARNING] [INS-32029] The Installer has detected that the Oracle Base location is not empty.
      ACTION: Oracle recommends that the Oracle Base location is empty.

You can find the log of this install session at:
/u01/app/grid/12.2.0/oraInventory/logs/cloneActions2018-12-28_01-48-13PM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.
..........
Link binaries successful.
Setup files in progress.
..........
Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.
..........
Finish Setup successful.
The cloning of OraHome1 was successful.
Please check '/u01/app/grid/12.2.0/oraInventory/logs/cloneActions2018-12-28_01-48-13PM.log' for more details.
Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.


As a root user, execute the following script(s):
1./u01/app/grid/18.3.0/grid/root.sh
..................................................   100% Done.


  7. As root, run the root.sh script shown in the installer output above.
  8. There appears to be a bug in clone.pl whereby the Oracle binaries for ASM are not linked with the RAC_ON option, even with crs=TRUE set. Thus, we need to follow the steps in Metalink note 284785.1 in order to relink the Oracle binaries with the RAC_ON option after running clone.pl:
export ORACLE_HOME=/u01/app/grid/18.3.0/grid
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on ioracle
(if /u01/app/grid/18.3.0/grid/bin/skgxpinfo | grep rds;\
then \
make -f  /u01/app/grid/18.3.0/grid/rdbms/lib/ins_rdbms.mk ipc_rds; \
else \
make -f  /u01/app/grid/18.3.0/grid/rdbms/lib/ins_rdbms.mk ipc_g; \
fi)
make[1]: Entering directory `/u01/app/grid/18.3.0/grid/rdbms/lib'
rm -f /u01/app/grid/18.3.0/grid/lib/libskgxp18.so
cp /u01/app/grid/18.3.0/grid/lib//libskgxpg.so /u01/app/grid/18.3.0/grid/lib/libskgxp18.so
make[1]: Leaving directory `/u01/app/grid/18.3.0/grid/rdbms/lib'
- Use stub SKGXN library
cp /u01/app/grid/18.3.0/grid/lib/libskgxns.so /u01/app/grid/18.3.0/grid/lib/libskgxn2.so
/usr/bin/ar d /u01/app/grid/18.3.0/grid/rdbms/lib/libknlopt.a ksnkcs.o
/usr/bin/ar cr /u01/app/grid/18.3.0/grid/rdbms/lib/libknlopt.a /u01/app/grid/18.3.0/grid/rdbms/lib/kcsm.o
chmod 755 /u01/app/grid/18.3.0/grid/bin


- Linking Oracle
rm -f /u01/app/grid/18.3.0/grid/rdbms/lib/oracle
/u01/app/grid/18.3.0/grid/bin/orald  -o
/u01/app/grid/18.3.0/grid/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/grid/18.3.0/grid/rdbms/lib/ -L/u01/app/grid/18.3.0/grid/lib/ -L/u01/app/grid/18.3.0/grid/lib/stubs/   -Wl,-E /u01/app/grid/18.3.0/grid/rdbms/lib/opimai.o /u01/app/grid/18.3.0/grid/rdbms/lib/ssoraed.o /u01/app/grid/18.3.0/grid/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv18 -Wl,--no-whole-archive /u01/app/grid/18.3.0/grid/lib/nautab.o /u01/app/grid/18.3.0/grid/lib/naeet.o /u01/app/grid/18.3.0/grid/lib/naect.o /u01/app/grid/18.3.0/grid/lib/naedhs.o /u01/app/grid/18.3.0/grid/rdbms/lib/config.o  -ldmext -lserver18 -lodm18 -lofs -lcell18 -lnnet18 -lskgxp18 -lsnls18 -lnls18  -lcore18 -lsnls18 -lnls18 -lcore18 -lsnls18 -lnls18 -lxml18 -lcore18 -lunls18 -lsnls18 -lnls18 -lcore18 -lnls18 -lclient18  -lvsnst18 -lcommon18 -lgeneric18 -lknlopt -loraolap18 -lskjcx18 -lslax18 -lpls18  -lrt -lplp18 -ldmext -lserver18 -lclient18  -lvsnst18 -lcommon18 -lgeneric18 `if [ -f /u01/app/grid/18.3.0/grid/lib/libavserver18.a ] ; then echo "-lavserver18" ; else echo "-lavstub18"; fi` `if [ -f /u01/app/grid/18.3.0/grid/lib/libavclient18.a ] ; then echo "-lavclient18" ; fi` -lknlopt -lslax18 -lpls18  -lrt -lplp18 -ljavavm18 -lserver18  -lwwg  `cat /u01/app/grid/18.3.0/grid/lib/ldflags`    -lncrypt18 -lnsgr18 -lnzjs18 -ln18 -lnl18 -lngsmshd18 -lnro18 `cat /u01/app/grid/18.3.0/grid/lib/ldflags`    -lncrypt18 -lnsgr18 -lnzjs18 -ln18 -lnl18 -lngsmshd18 -lnnzst18 -lzt18 -lztkg18 -lmm -lsnls18 -lnls18  -lcore18 -lsnls18 -lnls18 -lcore18 -lsnls18 -lnls18 -lxml18 -lcore18 -lunls18 -lsnls18 -lnls18 -lcore18 -lnls18 -lztkg18 `cat /u01/app/grid/18.3.0/grid/lib/ldflags`    -lncrypt18 -lnsgr18 -lnzjs18 -ln18 -lnl18 -lngsmshd18 -lnro18 `cat /u01/app/grid/18.3.0/grid/lib/ldflags`    -lncrypt18 -lnsgr18 -lnzjs18 -ln18 -lnl18 -lngsmshd18 -lnnzst18 -lzt18 -lztkg18   -lsnls18 -lnls18  -lcore18 -lsnls18 -lnls18 -lcore18 -lsnls18 -lnls18 -lxml18 -lcore18 -lunls18 -lsnls18 -lnls18 -lcore18 -lnls18 `if /usr/bin/ar 
tv /u01/app/grid/18.3.0/grid/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo18 -lserver18"; fi` -L/u01/app/grid/18.3.0/grid/ctx/lib/ -lctxc18 -lctx18 -lzx18 -lgx18 -lctx18 -lzx18 -lgx18 -lordimt -lclscest18 -loevm -lclsra18 -ldbcfg18 -lhasgen18 -lskgxn2 -lnnzst18 -lzt18 -lxml18 -lgeneric18 -locr18 -locrb18 -locrutl18 -lhasgen18 -lskgxn2 -lnnzst18 -lzt18 -lxml18 -lgeneric18  -lgeneric18 -lorazip -loraz -llzopro5 -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls18 -lnls18  -lcore18 -lsnls18 -lnls18 -lcore18 -lsnls18 -lnls18 -lxml18 -lcore18 -lunls18 -lsnls18 -lnls18 -lcore18 -lnls18 -lsnls18 -lunls18  -lsnls18 -lnls18  -lcore18 -lsnls18 -lnls18 -lcore18 -lsnls18 -lnls18 -lxml18 -lcore18 -lunls18 -lsnls18 -lnls18 -lcore18 -lnls18 -lasmclnt18 -lcommon18 -lcore18  -ledtn18 -laio -lons  -lfthread18   `cat /u01/app/grid/18.3.0/grid/lib/sysliblist` -Wl,-rpath,/u01/app/grid/18.3.0/grid/lib -lm    `cat /u01/app/grid/18.3.0/grid/lib/sysliblist` -ldl -lm   -L/u01/app/grid/18.3.0/grid/lib `test -x /usr/bin/hugeedit -a -r /usr/lib64/libhugetlbfs.so && test -r /u01/app/grid/18.3.0/grid/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 -lhugetlbfs`
rm -f /u01/app/grid/18.3.0/grid/bin/oracle
mv /u01/app/grid/18.3.0/grid/rdbms/lib/oracle /u01/app/grid/18.3.0/grid/bin/oracle
chmod 6751 /u01/app/grid/18.3.0/grid/bin/oracle
(if [ ! -f /u01/app/grid/18.3.0/grid/bin/crsd.bin ]; then \
getcrshome="/u01/app/grid/18.3.0/grid/srvm/admin/getcrshome" ; \
if [ -f "$getcrshome" ]; then \
crshome="`$getcrshome`"; \
if [ -n "$crshome" ]; then \
if [ $crshome != /u01/app/grid/18.3.0/grid ]; then \
oracle="/u01/app/grid/18.3.0/grid/bin/oracle"; \
$crshome/bin/setasmgidwrap oracle_binary_path=$oracle; \
fi \
fi \
fi \
fi\
);
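
As the ar commands in the output above suggest, you can verify that the binaries are now RAC-enabled by checking which object libknlopt.a contains: kcsm.o indicates rac_on, while ksnkcs.o indicates rac_off.

```shell
# kcsm.o => linked rac_on; ksnkcs.o => linked rac_off
ar t /u01/app/grid/18.3.0/grid/rdbms/lib/libknlopt.a | grep -E 'kcsm\.o|ksnkcs\.o'
```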


  9. Next, clean up and lock the GI home. Be sure to run these steps as root; they change the ownership of files in the GI home to what the clusterware requires:
cd $GI_HOME/rdbms/install
./rootadd_rdbms.sh
cd $GI_HOME/crs/install
./rootcrs.sh -lock


  10. Next, update the registry for the new GI home, once again running as root. This command will also start the clusterware:
cd $GI_HOME/crs/install
./rootcrs.sh -move -dstcrshome /u01/app/grid/18.3.0/grid
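
Once the move completes, a quick way to verify that the stack came up from the new home (a minimal sketch):

```shell
# All components should report online
/u01/app/grid/18.3.0/grid/bin/crsctl check crs
# The running ohasd daemon should resolve to the new home
ps -ef | grep '[o]hasd.bin'
```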

  11. At this point, the clusterware home and ORACLE_BASE have been moved to the new location. You can delete the old directory structure if you wish.
  12. After completing the above steps on the first node, verify that the clusterware is up and running, then repeat these steps on the remaining nodes.


Common Problem Encountered

The most common problem you will encounter is the Oracle binaries not being linked with the RAC_ON option. The symptom is that everything starts up except Cluster Ready Services (CRS):

[root@oelafd1 ~]# /u01/app/grid/18.3.0/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


In the CRS alert log (/u01/app/grid/18.3.0/grid_base/diag/crs/oelafd2/crs/trace/alert.log), there will be messages like:

2018-12-28 15:37:34.150 [OCTSSD(13860)]CRS-2401: The Cluster Time Synchronization Service started on host oelafd2.
2018-12-28 15:37:44.756 [ORAROOTAGENT(11281)]CRS-5019: All OCR locations are on ASM 
disk groups [OCR], and none of these disk groups are mounted. Details are at "(:CLSN00140:)" in 
"/u01/app/grid/18.3.0/grid_base/diag/crs/oelafd2/crs/trace/ohasd_orarootagent_root.trc".
2018-12-28 15:39:15.076 [OLOGGERD(14053)]CRS-8500: Oracle Clusterware OLOGGERD process is starting with operating system process ID 14053


To resolve the problem, rerun the relinking process from step 8 above to relink the binaries with the RAC_ON option.

Conclusion

In this post, we have discussed the steps for moving an Oracle Grid Infrastructure home to a new location while keeping the same version. While this is a fairly complex process that should be attempted only by experienced Oracle administrators, the corrected steps listed in this post are a good place to start.

