Why?
Assume the database compute server has crashed and only the ASM disks on the storage appliance are still available, or consider a data-replication setup done by replicating the ASM disk storage itself. The latter is not a recommended approach, but I will elaborate here on how you can recover the database this way if it is ever needed.
How?
In my case the original DB is a two-node RAC, and I will create a new
single compute node (version 19c) that uses the same ASM disks and starts the
database from them.
1- Install both
Grid and DB homes as software only, at the same version and patch level as the
original ones:
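The software-only installs can be scripted. The sketch below is my own illustration, not the exact commands from this recovery: the response-file names and paths are placeholders, and it assumes the 19c media is already unzipped into the homes shown.

```shell
# Sketch only: software-only installs on the new node. Paths, response
# files, and groups are assumptions; match your original homes exactly.

# Grid Infrastructure for a standalone server (Oracle Restart), as grid:
/u01/app/19.0.0/grid/gridSetup.sh -silent -waitForCompletion \
    -responseFile /tmp/grid_sw_only.rsp   # contains oracle.install.option=HA_SWONLY

# Database home, as oracle:
/u01/app/oracle/product/19.0.0/dbhome_1/runInstaller -silent -waitForCompletion \
    -responseFile /tmp/db_sw_only.rsp     # contains oracle.install.option=INSTALL_DB_SWONLY

# Then bring both homes to the original patch level (e.g. with opatch).
```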
2- As
"root", configure and start the HAS stack for Oracle Restart:
. oraenv
/u01/app/19.0.0/grid/perl/bin/perl -I/u01/app/19.0.0/grid/perl/lib/ -I/u01/app/19.0.0/grid/crs/install/ /u01/app/19.0.0/grid/crs/install/roothas.pl
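A quick sanity check at this point (my own addition, run as the grid owner) is to confirm the HAS stack is up before going further:

```shell
# Verify the Oracle Restart (HAS) stack came up after roothas.pl:
crsctl check has            # expect: CRS-4638: Oracle High Availability Services is online
crsctl stat res -t -init    # lists the -init resources such as ora.cssd and ora.evmd
```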
3- Using the "grid"
user, start the CSSD resource as below:
crsctl start res ora.cssd -init
4- From the old nodes:
get the ASM parameter file and change whatever
needs to be changed;
get the Oracle database
parameter file and change whatever needs to be changed (set cluster_database=false);
then check the CRS
status and the diskgroups:
crsctl stat res -t
ASMCMD> lsdg
start ASM either using sqlplus / as sysasm
or,
after adding the ASM resource (see step 6), using:
srvctl start asm
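For reference, here is a minimal sketch of what the edited ASM pfile and the manual start could look like. The diskstring and diskgroup names are assumptions for illustration, not values from my system; adjust them to where the replicated LUNs are presented:

```shell
# Write a minimal init+ASM.ora (all values below are placeholders):
cat > /u01/app/19.0.0/grid/dbs/init+ASM.ora <<'EOF'
*.instance_type='asm'
*.asm_diskstring='/dev/oracleasm/disks/*'
*.asm_diskgroups='DATA','FRA'
*.asm_power_limit=1
EOF

# Manual start path with sqlplus:
sqlplus / as sysasm <<'EOF'
startup pfile='/u01/app/19.0.0/grid/dbs/init+ASM.ora';
EOF
```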
5- Start the DB using the pfile:
$ sqlplus / as sysdba
SQL> startup nomount pfile='initorcla.ora';
SQL> alter database mount;
SQL> alter database open;
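Once the database opens cleanly, an optional follow-up (my suggestion; the diskgroup name here is an assumption) is to persist the edited pfile as an spfile inside ASM so later restarts pick it up:

```shell
# Create an spfile in ASM from the edited pfile:
sqlplus / as sysdba <<'EOF'
create spfile='+DATA' from pfile='initorcla.ora';
EOF
```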
6- Add the needed resources via SRVCTL so they become part of Grid:
srvctl add listener -l LISTENER -p "TCP:1541" -o $ORACLE_HOME
srvctl add asm -l LISTENER -p $ORACLE_HOME/dbs/init+ASM.ora
srvctl add database -d <dbname> -o $ORACLE_HOME -p /export/home/oracle/pfile_<dbname>.ora -pwfile +DATA/ICM/PASSWORD/pwd<dbname>.257.1132500007
crsctl stat res -t
The references below may help with troubleshooting and further
details:
https://eleoracle.wordpress.com/2015/01/23/move-asm-diskgroups-between-server/
https://www.br8dba.com/cssd-wont-start-automatically/