Preparation for the Exam: Oracle 19c OCP

My OCP journey started with 9i, 10g, 11g, 12c, and 12c R2.

Now I’m certified as a 19c OCP: Oracle Database Administration 2019 Certified Professional.

Exam Number: 1Z0-083
https://education.oracle.com/oracle-database-administration-ii/pexam_1Z0-083

Prior Certification Requirements: 1Z0-082, or 10g / 11g / 12c / 12c R2 OCP certified

Study guide – the most important resource for the exam
https://oracle.com/a/ocom/docs/dc/ww-ou-5297-database2019-studyguide-5.pdf

The exam contains 85 questions covering Oracle Database 18c and 19c, and the passing score is 57%. Even with that low passing score, it is the toughest OCP exam I have ever taken.

Oracle University Recommendations

• Oracle Database: Deploy, Patch and Upgrade Workshop
• Oracle Database: Backup and Recovery Workshop
• Oracle Database: Managing Multitenant Architecture
• Oracle Database Administration: Workshop
• Oracle Database 19c: New Features for Administrators
• Oracle Database 18c: New Features for Administrators (for 10g and 11g OCAs and OCPs)
• Oracle Database 12c R2: New Features for 12c R1 Administrators (12c R1 OCAs and OCPs)
• Oracle Database 11g: New Features for Administrators (for 10g OCAs and OCPs)

OU Learning Subscription
https://learn.oracle.com/ols/learning-path/oracle-database-administrator/38560/54112

19c Database Administrator’s Guide
https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/

Oracle® Multitenant Administrator’s Guide
https://docs.oracle.com/en/database/oracle/oracle-database/19/multi/index.html

Backup and Recovery User’s Guide
https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/index.html

19c New Features
https://docs.oracle.com/en/database/oracle/oracle-database/19/newft/new-features.html
https://docs.oracle.com/en/database/oracle/oracle-database/18/newft/new-features.html

Oracle 19c Documentation
https://docs.oracle.com/en/database/oracle/oracle-database/19/index.html

I shared some of the important notes at https://twitter.com/oraclelearn

https://www.youracclaim.com/badges/f2dc6809-79e8-4fdf-b6af-9905021b46f2

Thank you for visiting this blog 🙂

 


HOL for Installing a 2-Node 19c RAC on OL 7.6

I am sharing my learning experience installing a 2-node RAC database (19.3) on Oracle Linux 7.6 (64-bit), deployed on VirtualBox 6.0.

I faced two issues during the 19c RAC installation, which I cover in this article.

1.     Implement Platform Prerequisites
Create groups and users

[root@racnode1 ~]# groupadd oinstall
[root@racnode1 ~]# groupadd dba
[root@racnode1 ~]# groupadd oper
[root@racnode1 ~]# groupadd asmdba
[root@racnode1 ~]# groupadd asmadmin
[root@racnode1 ~]# groupadd asmoper
[root@racnode1 ~]# groupadd backupdba
[root@racnode1 ~]# groupadd dgdba
[root@racnode1 ~]# groupadd kmdba
[root@racnode1 ~]# groupadd racdba


# usermod -g oinstall -G dba,oper,asmdba,asmadmin,asmoper,backupdba,dgdba,kmdba,racdba oracle
# chown -R oracle:oinstall /home/oracle
# chmod -R 776 /home/oracle
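Before moving on, it is worth confirming that the oracle user actually landed in all of these groups. A minimal sketch (the group list mirrors the commands above; `missing_groups` is a helper name of my own, not an Oracle tool):

```shell
#!/bin/sh
# Report which required groups a user is NOT a member of.
# The required list mirrors the groupadd/usermod commands above.
required="oinstall dba oper asmdba asmadmin asmoper backupdba dgdba kmdba racdba"

missing_groups() {                  # $1 = space-separated group list for a user
    have=" $1 "
    missing=""
    for g in $required; do
        case "$have" in
            *" $g "*) ;;            # group is present
            *) missing="$missing $g" ;;
        esac
    done
    echo "$missing" | sed 's/^ *//'
}

# On a real node:  missing_groups "$(id -Gn oracle)"
missing_groups "oinstall dba oper asmdba asmadmin backupdba dgdba kmdba racdba"
# prints: asmoper
```

An empty result means the usermod above covered every required group.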

Create directories and assign ownership and permissions

[root@racnode1 ~]# df -kh
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             2.4G     0  2.4G   0% /dev
tmpfs                2.5G     0  2.5G   0% /dev/shm
tmpfs                2.5G  9.2M  2.5G   1% /run
tmpfs                2.5G     0  2.5G   0% /sys/fs/cgroup
/dev/mapper/ol-root   50G  5.0G   45G  10% /
/dev/sda1           1014M  212M  803M  21% /boot
/dev/mapper/ol-home   25G   38M   25G   1% /home
SOFTWARES            550G  144G  407G  27% /media/sf_SOFTWARES
tmpfs                495M   12K  495M   1% /run/user/42
tmpfs                495M     0  495M   0% /run/user/0

[root@racnode1 ~]# mkdir -p /u01/app/19.3.0/grid
[root@racnode1 ~]# chown -R oracle:oinstall /u01/app/19.3.0/grid/
[root@racnode1 ~]# chown -R oracle:oinstall /u01/
[root@racnode1 ~]# chmod 776 -R /u01/app/
[root@racnode1 ~]# chmod 776 -R /u01/app/19.3.0/grid/
[root@racnode1 ~]# chmod 776 -R /u01/

Configure limits

vi /etc/security/limits.conf
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
oracle   hard   memlock    134217728
oracle   soft   memlock    134217728
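These limits only take effect for new login sessions. A quick sketch to sanity-check them after re-logging in as oracle (`check_limit` is a helper name of my own; the minimum values mirror the table above):

```shell
#!/bin/sh
# check_limit VALUE MIN — succeed if VALUE (a ulimit reading) meets MIN.
check_limit() {
    [ "$1" = "unlimited" ] && return 0   # "unlimited" always satisfies a minimum
    [ "$1" -ge "$2" ]
}

# After re-login as oracle, for example:
#   check_limit "$(ulimit -Sn)" 1024   && echo "soft nofile OK"
#   check_limit "$(ulimit -Hn)" 65536  && echo "hard nofile OK"
#   check_limit "$(ulimit -Su)" 16384  && echo "soft nproc OK"
check_limit 65536 1024 && echo "sample check passed"
```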

Install mandatory RPMs as per the Oracle documentation

Check which of the required packages are already installed:

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' bc binutils compat-libcap1 compat-libstdc++ elfutils-libelf elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio libaio-devel libX11 libXau libXi libXtst libXrender-devel libXrender libgcc librdmacm-devel libstdc++ libstdc++-devel libxcb make

Also required:

nfs-utils (for Oracle ACFS)
net-tools (for Oracle RAC and Oracle Clusterware)
python (for Oracle ACFS Remote)
python-configshell (for Oracle ACFS Remote)
python-rtslib (for Oracle ACFS Remote)
python-six (for Oracle ACFS Remote)
smartmontools
sysstat
targetcli (for Oracle ACFS Remote)
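To act on the package list above, here is a small sketch that reports missing packages. The probe command is pluggable so the logic can be checked off-box; `missing_rpms` is my own helper name. On a live node you could feed the output to `yum install -y`:

```shell
#!/bin/sh
# Print required packages that the probe command reports as absent.
# A shortened package list from the section above; extend as needed.
required_rpms="bc binutils glibc glibc-devel ksh libaio libaio-devel \
make net-tools smartmontools sysstat"

missing_rpms() {                    # $1 = command that tests one package
    for p in $required_rpms; do
        $1 "$p" >/dev/null 2>&1 || echo "$p"
    done
}

# Real use on the node:
#   missing_rpms "rpm -q" | xargs -r yum install -y
```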

Configure the oracle user profile

vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=racnode1
export ORACLE_UNQNAME=acsdb
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/home/oracle/19.3.0/grid
export DB_HOME=$ORACLE_BASE/product/19.3.0/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=cdb1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

-------------------------------------------------
vi /home/oracle/grid_env
export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
---------------------------------------------------
vi /home/oracle/db_env
export ORACLE_SID=cdb1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Format the storage partitions

[root@racnode1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x9f87b470.

Command (m for help): u
Changing display/entry units to cylinders (DEPRECATED!).

Command (m for help): p
Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x9f87b470

   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First cylinder (1-3916, default 1):
Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-3916, default 3916):
Using default value 3916
Partition 1 of type Linux and of size 30 GiB is set
Command (m for help): p

Disk /dev/sdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x9f87b470

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3916    31454246   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

[root@racnode1 ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xcedd7490.

Command (m for help): u
Changing display/entry units to cylinders (DEPRECATED!).

Command (m for help): p
Disk /dev/sdc: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xcedd7490

   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First cylinder (1-10443, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-10443, default 10443):
Using default value 10443
Partition 1 of type Linux and of size 80 GiB is set

Command (m for help): p

Disk /dev/sdc: 85.9 GB, 85899345920 bytes, 167772160 sectors
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xcedd7490

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       10443    83882373+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Configure UDEV rules for storage partitions

[root@racnode1 ~]# vi /etc/scsi_id.config
options=-g
--------------
Find the WWID:
--------------
[root@racnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb1
1ATA_VBOX_HARDDISK_VB94584d4c-2eb8a9ee
[root@racnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc1
1ATA_VBOX_HARDDISK_VB65e8a8d7-38d946d8

--------------
Add the Rules:
--------------
vi /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB94584d4c-2eb8a9ee", SYMLINK+="oracleasm/asm-disk1", OWNER="oracle", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB65e8a8d7-38d946d8", SYMLINK+="oracleasm/asm-disk2", OWNER="oracle", GROUP="asmadmin", MODE="0660"

------------------------------------------------
Load updated block device partition tables.
------------------------------------------------
# /sbin/partprobe /dev/sdb1
# /sbin/partprobe /dev/sdc1

-------------------------------------------------
Test the rules are working as expected.
-------------------------------------------------
/sbin/udevadm test /block/sdb/sdb1
/sbin/udevadm test /block/sdc/sdc1
-------------------------------------------------
Reload the UDEV rules.
-------------------------------------------------
/sbin/udevadm control --reload-rules


[root@racnode1 ~]# ls -al /dev/oracleasm/*
lrwxrwxrwx 1 root root 7 Jul 18 16:00 /dev/oracleasm/asm-disk1 -> ../sdb1
lrwxrwxrwx 1 root root 7 Jul 18 16:00 /dev/oracleasm/asm-disk2 -> ../sdc1

[root@racnode1 ~]# ls -al /dev/sd*1
brw-rw---- 1 root   disk 8,  1 Jul 18 14:52 /dev/sda1
brw-rw---- 1 oracle asmadmin  8, 17 Jul 18 16:00 /dev/sdb1
brw-rw---- 1 oracle asmadmin  8, 33 Jul 18 16:00 /dev/sdc1


Clone Virtual Machine1 as Machine2

Step-1 Shutdown Machine1
# shutdown -h now

Step-2 Create a directory for Machine2
Machine1: F:\Local Lab

New Machine2:
Create directory D:\ASELAB\RACNODE2

Step-3 Run the cloning command
cd C:\Program Files\Oracle\VirtualBox
VBoxManage clonehd "D:\ASELAB\RACNODE1\RACNODE1\RACNODE1.vdi" "D:\ASELAB\RACNODE2\RACNODE2\RACNODE2.vdi"

C:\Program Files\Oracle\VirtualBox>VBoxManage clonehd "D:\ASELAB\RACNODE1\RACNODE1\RACNODE1.vdi" "D:\ASELAB\RACNODE2\RACNODE2\RACNODE2.vdi"
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone medium created in format 'VDI'. UUID: 2548f99c-3f42-416f-8b97-b817e46720d4

Step-4 Create Machine2 using the cloned disk image
Create Machine2 with the cloned image of Machine1

Step-5 Add network adapters
Attach the Network adapter

Step-6 Share storage with Machine2
Attach the shared storage disks

Step-7 Start Machine2

Step-8 Configure Machine2
Change the hostname
Change the IP addresses for the public and private interfaces
Restart Machine2 and confirm the changes
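The step-8 changes can be made from the shell. This is a sketch only: the connection/interface names (enp0s8, enp0s9) and the private subnet are assumptions about my VirtualBox setup, not values taken from the lab output above.

```shell
# Run as root on the clone. Interface names and the private subnet are assumed.
hostnamectl set-hostname racnode2
nmcli con mod enp0s8 ipv4.addresses 192.168.56.12/24    # public interface
nmcli con mod enp0s9 ipv4.addresses 192.168.10.12/24    # private interconnect (assumed subnet)
reboot   # restart and confirm the changes
```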

2. 19c GI Installation
Extract GI Software 

-bash-4.2$ cd source/
-bash-4.2$ ls
LINUX.X64_193000_grid_home.zip
-bash-4.2$
-bash-4.2$ ll
total 2821472
-rwxrwxrw- 1 oracle oinstall 2889184573 Jul 18 17:09 LINUX.X64_193000_grid_home.zip
-bash-4.2$
-bash-4.2$
-bash-4.2$ unzip -qq LINUX.X64_193000_grid_home.zip -d /home/oracle/19.3.0/grid
-bash-4.2$
The GUI was unable to set up passwordless SSH connectivity

This was the first issue I hit during the installation. I worked around it by running the sshUserSetup.sh script manually:
[root@racnode1 deinstall]# ./sshUserSetup.sh -user oracle -hosts "racnode1 racnode2" -noPromptPassphrase -confirm -advanced
The output of this script is also logged into /tmp/sshUserSetup_2019-07-18-17-27-21.log
Hosts are racnode1 racnode2

user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING racnode1 (192.168.56.11) 56(84) bytes of data.
64 bytes from racnode1 (192.168.56.11): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from racnode1 (192.168.56.11): icmp_seq=2 ttl=64 time=0.020 ms
64 bytes from racnode1 (192.168.56.11): icmp_seq=3 ttl=64 time=0.018 ms
64 bytes from racnode1 (192.168.56.11): icmp_seq=4 ttl=64 time=0.021 ms
64 bytes from racnode1 (192.168.56.11): icmp_seq=5 ttl=64 time=0.020 ms
--- racnode1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4178ms
rtt min/avg/max/mdev = 0.015/0.018/0.021/0.005 ms
PING racnode2 (192.168.56.12) 56(84) bytes of data.
64 bytes from racnode2 (192.168.56.12): icmp_seq=1 ttl=64 time=0.416 ms
64 bytes from racnode2 (192.168.56.12): icmp_seq=2 ttl=64 time=0.357 ms
64 bytes from racnode2 (192.168.56.12): icmp_seq=3 ttl=64 time=0.401 ms
64 bytes from racnode2 (192.168.56.12): icmp_seq=4 ttl=64 time=0.343 ms
64 bytes from racnode2 (192.168.56.12): icmp_seq=5 ttl=64 time=0.370 ms
--- racnode2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4124ms
rtt min/avg/max/mdev = 0.343/0.377/0.416/0.032 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1 racnode2.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 2

The script will setup SSH connectivity from the host racnode1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host racnode1
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
Confirmation provided on the command line
The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /root/.ssh/config, it would be backed up to /root/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host with empty passphrase
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:4MMvcII5TYmP7HQA9md9/yOl++lg9bb1jfq7IwuYf5w root@racnode1
The key's randomart image is:
+---[RSA 1024]----+
|..               |
|.... ..          |
|  o.oo.. .       |
| . Ooo .. .      |
|  B * = S  . o   |
| o o + o o  = .  |
|  .   . + .* + o.|
|       . ...E =.=|
|          .o=O==o|
+----[SHA256]-----+

Creating .ssh directory and setting permissions on remote host racnode1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode1.
Warning: Permanently added 'racnode1,192.168.56.11' (ECDSA) to the list of known hosts.
oracle@racnode1's password:
Done with creating .ssh directory and setting permissions on remote host racnode1.
Creating .ssh directory and setting permissions on remote host racnode2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode2. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode2.
Warning: Permanently added 'racnode2,192.168.56.12' (ECDSA) to the list of known hosts.
oracle@racnode2's password:
Done with creating .ssh directory and setting permissions on remote host racnode2.
Copying local host public key to the remote host racnode1
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1.
oracle@racnode1's password:
Done copying local host public key to the remote host racnode1
Copying local host public key to the remote host racnode2
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode2.
oracle@racnode2's password:

Done copying local host public key to the remote host racnode2
Creating keys on remote host racnode1 if they do not exist already. This is required to setup SSH on host racnode1.
Creating keys on remote host racnode2 if they do not exist already. This is required to setup SSH on host racnode2.
Updating authorized_keys file on remote host racnode1
Updating known_hosts file on remote host racnode1
Updating authorized_keys file on remote host racnode2
Updating known_hosts file on remote host racnode2
SSH setup is complete.
===================
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.

The possible causes for failure could be:
The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
The server may have disabled public key based authentication.
The client public key on the server may be outdated.
~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
If there is output in addition to the date, but no password is asked,it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
--racnode1:--
Running /usr/bin/ssh -x -l oracle racnode1 date to verify SSH connectivity has been setup from local host to racnode1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Thu Jul 18 17:28:38 IST 2019
--racnode2:--
Running /usr/bin/ssh -x -l oracle racnode2 date to verify SSH connectivity has been setup from local host to racnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Thu Jul 18 17:28:38 IST 2019
Verifying SSH connectivity has been setup from racnode1 to racnode1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Thu Jul 18 17:28:39 IST 2019
Verifying SSH connectivity has been setup from racnode1 to racnode2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Thu Jul 18 17:28:39 IST 2019
-Verification from complete-
SSH verification complete.
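If sshUserSetup.sh is not available, the same SSH equivalence can be set up by hand with standard OpenSSH tools. A sketch, run as the oracle user on racnode1 (and mirrored on racnode2):

```shell
# Generate a key pair with an empty passphrase and push it to both nodes.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id oracle@racnode1
ssh-copy-id oracle@racnode2

# Verify: both commands must print the date without prompting for a password.
ssh racnode1 date
ssh racnode2 date
```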

The second issue I faced was during root.sh execution on node1, covered below.
[root@racnode1 ~]# sh /u01/app/19.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode1/crsconfig/rootcrs_racnode1_2019-07-19_10-44-23PM.log
2019/07/19 22:44:32 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/07/19 22:44:32 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/07/19 22:44:32 CLSRSC-363: User ignored prerequisites during installation
2019/07/19 22:44:32 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/07/19 22:44:34 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/07/19 22:44:35 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/07/19 22:44:35 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/07/19 22:44:36 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/07/19 22:45:15 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/07/19 22:45:17 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/07/19 22:45:20 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/07/19 22:45:35 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/07/19 22:45:35 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/07/19 22:45:41 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/07/19 22:45:41 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/07/19 22:46:06 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/07/19 22:46:12 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/07/19 22:46:18 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/07/19 22:46:24 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.


ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-190719PM104658.log for details.
2019/07/19 22:47:50 CLSRSC-482: Running command: '/u01/app/19.3.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk 2110c8fa2f3e4f05bf3b54ea3708b20b.
Successfully replaced voting disk group with +OVDATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
  1. ONLINE 2110c8fa2f3e4f05bf3b54ea3708b20b (/dev/sdb1) [OVDATA]
Located 1 voting disk(s).
2019/07/19 22:49:26 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/07/19 22:50:38 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/07/19 22:50:38 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/07/19 22:52:07 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/07/19 22:52:35 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded


[root@racnode2 grid]# sh root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode2/crsconfig/rootcrs_racnode2_2019-07-22_04-11-43PM.log
2019/07/22 16:11:49 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/07/22 16:11:49 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/07/22 16:11:49 CLSRSC-363: User ignored prerequisites during installation
2019/07/22 16:11:49 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/07/22 16:11:51 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/07/22 16:11:51 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/07/22 16:11:51 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/07/22 16:11:51 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/07/22 16:11:53 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/07/22 16:11:53 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/07/22 16:12:03 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/07/22 16:12:03 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/07/22 16:12:05 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/07/22 16:12:06 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/07/22 16:12:19 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/07/22 16:12:31 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/07/22 16:12:32 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/07/22 16:12:34 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/07/22 16:12:35 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2019/07/22 16:12:46 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/07/22 16:13:33 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/07/22 16:13:33 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/07/22 16:13:50 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/07/22 16:13:58 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@racnode1 ~]#  /u01/app/19.3.0/grid/bin/crsctl check cluster -all
**************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode1 ~]#  /u01/app/19.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
ora.chad
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
ora.net1.network
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
ora.ons
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 STABLE
      2        ONLINE  ONLINE       racnode2                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.OVDATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 STABLE
      2        ONLINE  ONLINE       racnode2                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 Started,STABLE
      2        ONLINE  ONLINE       racnode2                 Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 STABLE
      2        ONLINE  ONLINE       racnode2                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.racnode1.vip
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.racnode2.vip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode1                 STABLE
--------------------------------------------------------------------------------
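Beyond `crsctl stat res -t`, a few quick checks I find useful at this point, run as root from the Grid home (paths match the lab above; `olsnodes` and `srvctl` ship with Grid Infrastructure):

```shell
# Cluster-wide health, node list with status, and ASM status.
/u01/app/19.3.0/grid/bin/crsctl check crs
/u01/app/19.3.0/grid/bin/olsnodes -n -s
/u01/app/19.3.0/grid/bin/srvctl status asm
```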

3.     Invoke ASMCA to create the disk group

4.     RAC DB Software Installation
5.     RAC DB Creation using DBCA

cd /home/oracle/source/
$unzip -q LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.3.0/db_1
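After extraction, the installer is launched from the new Oracle home. I used the GUI run; a silent run with a response file is also possible (the response-file path below is an example of my own, not from this lab):

```shell
cd /u01/app/oracle/product/19.3.0/db_1

# GUI install (what this lab used):
./runInstaller

# Or silent, driven by a response file (path is an example):
# ./runInstaller -silent -responseFile /home/oracle/db_install.rsp
```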
6.     19c DB Express

7.     Verify the Cluster and DB Status

[root@racnode1 ~]# /u01/app/19.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
ora.chad
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
ora.net1.network
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
ora.ons
               ONLINE  ONLINE       racnode1                 STABLE
               ONLINE  ONLINE       racnode2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 STABLE
      2        ONLINE  ONLINE       racnode2                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 STABLE
      2        ONLINE  ONLINE       racnode2                 STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.OVDATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 STABLE
      2        ONLINE  ONLINE       racnode2                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 Started,STABLE
      2        ONLINE  ONLINE       racnode2                 Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       racnode1                 STABLE
      2        ONLINE  ONLINE       racnode2                 STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       racnode1                 Open,HOME=/u01/app/o
                                                            racle/product/19.3.0
                                                             /db_1,STABLE
      2        ONLINE  ONLINE       racnode2                 Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /db_1,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.racnode1.vip
      1        ONLINE  ONLINE       racnode1                 STABLE
ora.racnode2.vip
      1        ONLINE  ONLINE       racnode2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode1                 STABLE
--------------------------------------------------------------------------------

SQL> select * from v$active_instances;

INST_NUMBER INST_NAME                      CON_ID
----------- ------------------------------ ------
          1 racnode1:cdb1                       0
          2 racnode2:cdb2                       0

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ACS                            READ WRITE NO

Thank you for visiting this blog...

19c RAC : Dynamic Cluster Services

High availability of database services has been a feature of Oracle Real Application Clusters for many releases. Basically, when a database instance fails, a service that has that instance as a preferred instance fails over to another available instance.
Unfortunately, the service did not fail back to the original instance once that instance was up again.

The administrator had to relocate the service manually,

OR configure a FAN callout script.

You can refer to my old blog post – fan-callouts-for-rac
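For reference, here is a minimal callout sketch for the pre-19c approach. Grid Infrastructure executes every script placed in $GRID_HOME/racg/usrco, passing the FAN event payload as arguments, so a script there can detect the preferred instance coming up and relocate the service back. All names, paths, and the log location below are illustrative assumptions, not a tested production callout:

```shell
#!/bin/bash
# Sketch of a FAN callout (pre-19c failback automation). Grid Infrastructure
# invokes every executable in $GRID_HOME/racg/usrco with the FAN event payload
# as arguments, e.g.:
#   SERVICEMEMBER VERSION=1.0 service=serv19c database=cdb instance=cdb1 status=up
# All names and paths below are illustrative assumptions.

FAN_LOG=${FAN_LOG:-/tmp/fan_callout.log}   # hypothetical log location

handle_fan_event() {
  local event_type=$1 service="" instance="" status="" pair
  shift
  # Only service-membership events are interesting for failback
  [ "$event_type" = "SERVICEMEMBER" ] || return 0
  for pair in "$@"; do
    case $pair in
      service=*)  service=${pair#service=} ;;
      instance=*) instance=${pair#instance=} ;;
      status=*)   status=${pair#status=} ;;
    esac
  done
  echo "$event_type service=$service instance=$instance status=$status" >> "$FAN_LOG"
  if [ "$status" = "up" ] && [ "$instance" = "cdb1" ]; then
    # The preferred instance is back: a real callout would relocate here, e.g.
    #   srvctl relocate service -db cdb -service "$service" -oldinst cdb2 -newinst cdb1
    echo "would relocate $service back to $instance" >> "$FAN_LOG"
  fi
}

# A deployed callout script would simply end with:
#   handle_fan_event "$@"
```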

This has changed with Oracle Database 19c.

Starting with Oracle Database release 19.3, if you specify yes for the -failback attribute of a service,
then, after failing over to an available instance when the last preferred instance went down, the service
transfers back to a preferred instance when one becomes available. For earlier releases, you can automate
fail back to the preferred instance by using FAN callouts.

Dynamic Services Fallback Option
For a dynamic database service that is placed using “preferred” and “available” settings,
you can now specify that this service should fall back to a “preferred” instance when it becomes
available if the service failed over to an available instance.

The Dynamic Services Fallback Option allows for more control in placing dynamic database services
and ensures that a given service is available on a preferred instance as long as possible.

#Create the serv19c service inside the acs pluggable database.

srvctl add service -db cdb -pdb acs -service serv19c -preferred cdb1 -available cdb2 -failback YES
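If the service already exists, the same attribute can be set in place rather than recreating it; a sketch using the same srvctl syntax as above:

```shell
# Add the failback attribute to an existing service (sketch)
srvctl modify service -db cdb -service serv19c -failback YES
# Confirm the change:
srvctl config service -db cdb -service serv19c | grep -i failback
```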

#Review the configuration of the service

-bash-4.2$ srvctl config service -db cdb -service serv19c
Service name: serv19c
Server pool:
Cardinality: 1
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
Failover retries:
Failover delay:
Failover restore: NONE
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: acs
Hub service:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Failback : true
Replay Initiation Time: 300 seconds
Drain timeout:
Stop option:
Session State Consistency: DYNAMIC
GSM Flags: 0
Service is enabled
Preferred instances: cdb1
Available instances: cdb2
CSS critical: no

#Review the status of service

-bash-4.2$ srvctl status service -db cdb -service serv19c
Service serv19c is not running.

#Starting the service
-bash-4.2$ srvctl start service -db cdb -service serv19c

#Review the status of service

-bash-4.2$ srvctl status service -db cdb -service serv19c
Service serv19c is running on instance(s) cdb1

#From Node1, reboot the machine

#reboot

#From Node2 review the status of service

-bash-4.2$ srvctl status service -db cdb -service serv19c
Service serv19c is running on instance(s) cdb2

#Once Node1 comes back into the cluster, the service automatically fails back. 🙂

-bash-4.2$ srvctl status service -db cdb -service serv19c
Service serv19c is running on instance(s) cdb1

Thank you for visiting this blog.