Oracle RAC Installation – Pre-Installation steps (Part 2)

Installation of Oracle Enterprise Linux 5 Update 1 operating system
Install the operating system on both nodes of the cluster (myrac1, myrac2).

Verifying your installation for the existence of the following required packages (if any
are missing, please install them)
To verify the existence of a required package, execute the command rpm -qa. For
example, to check whether the package binutils-2.17.50.0.6-2.el5 exists, execute
«rpm -qa | grep binutils». The following list contains all the required package
versions (or later); a loop that checks the whole list at once is sketched after it:
• binutils-2.17.50.0.6-2.el5
• compat-libstdc++-33-3.2.3-61
• elfutils-libelf-0.125-3.el5
• elfutils-libelf-devel-0.125
• gcc-4.1.1-52
• gcc-c++-4.1.1-52
• glibc-2.5-12
• glibc-common-2.5-12
• glibc-devel-2.5-12
• glibc-headers-2.5-12
• libaio-0.3.106
• libaio-devel-0.3.106
• libgcc-4.1.1-52
• libstdc++-4.1.1
• libstdc++-devel-4.1.1-52.el5
• make-3.81-1.1
• sysstat-7.0.0
• unixODBC-2.2.11
• unixODBC-devel-2.2.11
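A quick way to check the whole list at once is a small loop like the following (a
sketch; it checks package names only, not versions):

for pkg in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
           gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers \
           libaio libaio-devel libgcc libstdc++ libstdc++-devel \
           make sysstat unixODBC unixODBC-devel; do
  rpm -q "$pkg" >/dev/null || echo "MISSING: $pkg"   # report any absent package
done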

Configuring the Linux Kernel Parameters
• kernel.shmall = 2097152 (minimum: 512 * PROCESSES)
• kernel.shmmax = 2147483648
• kernel.shmmni = 4096
• kernel.sem = 250 32000 100 128
• fs.file-max = 65536
• net.ipv4.ip_local_port_range = 1024 65000
• net.core.rmem_default = 4194304
• net.core.rmem_max = 4194304
• net.core.wmem_default = 262144
• net.core.wmem_max = 262144
Insert the above parameters into the file /etc/sysctl.conf and then execute
# /sbin/sysctl -p
to enable the kernel parameters, or reboot.
In OEL5U1 (Oracle Enterprise Linux 5 Update 1) and later, the default values of
kernel.shmall and kernel.shmmax are already above the minimum required.
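After running sysctl -p you can confirm that the values are in effect (a sketch):

/sbin/sysctl -n kernel.shmmax                  # expect 2147483648
/sbin/sysctl -n kernel.sem                     # expect 250 32000 100 128
/sbin/sysctl -n net.ipv4.ip_local_port_range   # expect 1024 65000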

Network Configuration for the RAC installation


[Screenshot: network configuration of node myrac2, showing Ethernet adapters eth0 and eth1]

The network configuration must match the screenshot above. The Ethernet adapter
eth0 carries the «public» IP address, whereas eth1 carries the «private» one. The
screenshot shows the settings of the second node; set the corresponding parameters
on the first node «myrac1», using the address 192.168.10.11 for the «public» IP and
10.0.0.11 for the «private» IP.
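For reference, the corresponding interface files under
/etc/sysconfig/network-scripts on myrac2 might look like the sketch below (the /24
netmasks are an assumption of this example):

# ifcfg-eth0 («public» interface)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.12
NETMASK=255.255.255.0
ONBOOT=yes

# ifcfg-eth1 («private» interface)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.12
NETMASK=255.255.255.0
ONBOOT=yes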

Modify the file /etc/hosts accordingly:
[root@myrac2 etc]# more hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.10.11 myrac1.gr.oracle.com myrac1
192.168.10.12 myrac2.gr.oracle.com myrac2
10.0.0.11 myrac1-priv
10.0.0.12 myrac2-priv
192.168.10.21 myrac1-vip
192.168.10.22 myrac2-vip
::1 localhost6.localdomain6 localhost6
[root@myrac2 etc]#
After editing the file (/etc/hosts) with a text editor, use the scp command to copy it
to the other node, e.g. scp /etc/hosts myrac1:/etc/. Type in the root password of the
other node when you are asked for it.

Confirm the IPs
From both nodes of the RAC (myrac1, myrac2) execute:
ping myrac1.gr.oracle.com
ping myrac1
ping myrac1-priv
ping myrac2.gr.oracle.com
ping myrac2
ping myrac2-priv
Note: You will not be able to ping the virtual IPs that you defined in hosts as myrac1-
vip and myrac2-vip, at least not until the installation of CRS has finished.
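The same checks can be scripted as a loop (a sketch; -c 2 sends two packets per host
so the loop terminates on its own):

for h in myrac1 myrac1.gr.oracle.com myrac1-priv \
         myrac2 myrac2.gr.oracle.com myrac2-priv; do
  ping -c 2 $h
done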

Create the Oracle Groups and User Account
Logged in as root on both nodes myrac1 and myrac2, we create the groups for the
oracle user:
/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
then we create the oracle user
useradd -g oinstall -G dba -m -s /bin/bash -d /home/oracle -r oracle
Then we set the password for the user
passwd oracle
Changing password for user oracle.
New UNIX password: <enter password> (we use «manager1» for this exercise)
Retype new UNIX password: <enter password>
passwd: all authentication tokens updated successfully.

Increase shell limits
With a text editor we edit the file /etc/security/limits.conf, adding the following
lines at the end:
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
Also, we add the line session required /lib/security/pam_limits.so to the end of the
file /etc/pam.d/login.
In the file /etc/profile we add the following lines to change the shell limits for the
oracle user:
if [ "$USER" = "oracle" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
We do the same on both nodes.
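To verify the new limits, open a fresh session as the oracle user and check them (a
sketch; the expected values follow from the settings above):

su - oracle
ulimit -u   # max user processes, expect 16384
ulimit -n   # max open files, expect 65536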

Configuring the common disks and block devices
As root you should appropriately configure the shared disks of the cluster (cluster
registry, voting disk and ASM).
So, we must create the partitions for the OCR and VOTE disks:
# /sbin/fdisk devicename (sdc in our case for CRS and sdd for VOTE)
We use the command p to list the partitions of the device
We use the command n to create a new partition on the device.
When creating the partitions use the default values (choose the type p for primary
and 1 for the partition number, since only one partition will be created, using the
maximum available space of the device).
After the creation of the partition use the w command to write the partition table to
the device. For more detailed information refer to the fdisk manual (man fdisk).
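For /dev/sdc, for example, the interactive session looks roughly like this (a sketch;
press Enter at the cylinder prompts to accept the defaults):

# /sbin/fdisk /dev/sdc
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-307, default 1): <Enter>
Last cylinder or +size or +sizeM or +sizeK (1-307, default 307): <Enter>
Command (m for help): w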
So, after the completion of this step we should have the following partitions on
devices sdc and sdd:
sdc
[root@myrac1 log]# fdisk /dev/sdc
Command (m for help): p
Disk /dev/sdc: 322 MB, 322122240 bytes
64 heads, 32 sectors/track, 307 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 307 314352 83 Linux
Command (m for help):
sdd
[root@myrac1 log]# fdisk /dev/sdd

Command (m for help): p
Disk /dev/sdd: 322 MB, 322122240 bytes
64 heads, 32 sectors/track, 307 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 307 314352 83 Linux
Command (m for help):
After checking the partitions on the first node, we do the same on the second node
(myrac2); if we see the same partitions on the corresponding devices, we execute the
w command to finalize the partitions on the second node as well.
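If the new partitions are not immediately visible on myrac2, the kernel can be forced
to re-read the partition tables without a reboot (a sketch; partprobe is provided by
the parted package):

partprobe /dev/sdc
partprobe /dev/sdd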
For the fourth and fifth shared devices (sde and sdf) we create the corresponding
partitions (to be used as the DATA and RECO disk groups through ASM).
So, after the completion of this step we should have the following partitions on the
devices sde and sdf:
sde
[root@myrac1 log]# fdisk /dev/sde
Command (m for help): p
Disk /dev/sde: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 522 4192933+ 83 Linux
Command (m for help):
sdf
[root@myrac1 log]# fdisk /dev/sdf
Command (m for help): p
Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 261 2096451 83 Linux
Command (m for help):
As in the previous step we execute fdisk on the second node (myrac2) as well, and if
we find the same partitions on devices sde and sdf we execute the w command to
finalize the partitions on the second node.
Raw devices (character devices) will not be supported in future Linux releases. For
this reason, in version 11g of the Oracle database the CRS files (OCR and voting
disk) can be placed directly on block devices. Since block devices are buffered by the
operating system, Oracle opens such devices with the O_DIRECT flag, so that the
kernel bypasses the Linux buffer cache and writes directly to disk. Block devices are
also not restricted in the maximum number of allowed devices (raw devices could
not exceed 255 devices per node). In addition, unlike raw devices, block devices can
retain their permissions across reboots of the cluster nodes. (In 10g the persistency
of the permissions was accomplished through chown and chmod commands in the
rc.local file.) The persistency of permissions is configured through a file in the path
«/etc/udev/rules.d». We name the file 99-raw.rules (the name itself plays no role,
since start_udev executes all the rule files located in «/etc/udev/rules.d» in
numerical order). The contents of the file are shown below:
[root@myrac1 rules.d]# more 99-raw.rules
#OCR disks
KERNEL=="sdc1", GROUP="oinstall", MODE="640"
# Voting disks
KERNEL=="sdd1", OWNER="oracle", GROUP="oinstall", MODE="660"
Then we copy the file to the second node of the cluster, using the scp (secure copy)
command.
[root@myrac2 rules.d]# scp myrac1:/etc/udev/rules.d/99-raw.rules .
The authenticity of host 'myrac1 (192.168.10.11)' can't be
established.
RSA key fingerprint is
03:a2:a3:be:b8:78:2c:64:5e:ec:f7:e5:8d:8a:f6:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'myrac1,192.168.10.11' (RSA) to the
list of known hosts.
root@myrac1's password:
99-raw.rules                                 100%  182   0.2KB/s   00:00
[root@myrac2 rules.d]# ls
05-udev-early.rules 60-net.rules 90-alsa.rules
bluetooth.rules
40-multipath.rules 60-pcmcia.rules 90-hal.rules
50-udev.rules 60-raw.rules 95-pam-console.rules
51-hotplug.rules 60-wacom.rules 99-raw.rules


Assigning privileges to the common disks for the oracle user
The persistency of permissions for the CRS and voting disks is handled by the block
device configuration (see Configuring the common disks and block devices).
The block devices that will be used for ASM are handled by Oracle through ASMLib.
ASMLib runs a service at the O/S level which is responsible for identifying the disks
that participate in ASM and for maintaining the persistency of the access
permissions on these disks.
For additional information on the installation and configuration of ASMLib, see the
chapters Installation of ASMLibs and Configuration of ASMLib.


Configuration of ssh (secure shell) for the oracle user
Log in as oracle directly (not by first logging in as root and switching with su -
oracle). The Oracle installation is performed from one node, and for it to be
propagated to the other node we have to configure ssh appropriately for the oracle
user. First, we have to check that the user id and the groups to which the oracle user
belongs are identical on both nodes. So, we execute the id command on both nodes.
The results should be the following:



The information that must be identical on both nodes is the uid, gid and group list
shown below:
[oracle@myrac2 ~]$ id
uid=100(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)

Then you must generate RSA and DSA public and private keys for the 2 nodes
(myrac1, myrac2).
Execute the following on both nodes:
/usr/bin/ssh-keygen -t rsa
Accept the default location for the creation of the file.
Press Enter when you are asked for a passphrase (in a real installation you may
want to give a passphrase, but for ease of the exercise we will not).
and
/usr/bin/ssh-keygen -t dsa
Accept the default location for the creation of the file.
Press Enter when you are asked for a passphrase (as above).
The screen during the configuration should look like the output below.
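For example, generating the RSA key produces output roughly like the following (a
sketch; the fingerprint is a placeholder and will differ on your systems):

[oracle@myrac1 ~]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): <Enter>
Enter passphrase (empty for no passphrase): <Enter>
Enter same passphrase again: <Enter>
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx oracle@myrac1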


Now copy /home/oracle/.ssh/id_rsa.pub and /home/oracle/.ssh/id_dsa.pub from
node myrac2 to node myrac1. Give the oracle user's password for the other node
when you are asked for it.
From node myrac1 we are connected as oracle and we execute:
cd
cd .ssh
scp myrac2:/home/oracle/.ssh/id_rsa.pub id_rsa.pub2
scp myrac2:/home/oracle/.ssh/id_dsa.pub id_dsa.pub2


The first time you execute the scp command, the keys necessary for the secure
communication of the two systems will be exchanged. Answer yes to the question
«Are you sure you want to continue connecting (yes/no)?».
Concatenate the RSA and DSA public keys of both nodes into one file named
authorized_keys:
cat id_rsa.pub id_rsa.pub2 id_dsa.pub id_dsa.pub2 >
authorized_keys
Copy the authorized_keys file to the myrac2 node:
scp authorized_keys myrac2:/home/oracle/.ssh/.
Verify that everything is OK by connecting to node myrac1 and node myrac2 and
executing the following commands:
ssh myrac1 date
ssh myrac1.gr.oracle.com date
ssh myrac2 date
ssh myrac2.gr.oracle.com date
If no password is requested, then the ssh configuration was successful.
Another way to check the success of the ssh configuration is to inspect the
known_hosts file in the folder /home/oracle/.ssh/ on each node of the cluster,
where you will find the four entries for myrac1, myrac2, myrac1.gr.oracle.com and
myrac2.gr.oracle.com along with the corresponding fingerprints.
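The same four checks can also be run as a short loop (a sketch):

for h in myrac1 myrac1.gr.oracle.com myrac2 myrac2.gr.oracle.com; do
  ssh $h date
done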

6. Initialize cluster registry and voting disk
Initialize (zero out) the partitions you created for the OCR and voting disk by
executing the following:
dd if=/dev/zero of=/dev/sdc1 bs=1M count=300
dd if=/dev/zero of=/dev/sdd1 bs=1M count=300

Installation of ASMLibs
It is good practice to check whether newer versions of the ASMLib packages exist on
the Oracle site before you install them on your operating system.
The download link is:
http://www.oracle.com/technology/software/tech/linux/asmlib/rhel5.html
Be very careful to install the RPMs appropriate for your kernel and operating
system. Also, take into consideration whether you use PAE (Physical Address
Extension), in order to install the appropriate RPM. In this workshop all necessary
installation files are provided in the Media CD image.
Install the oracleasmlib package. Install ASM by using the rpm command as the
root user.


You must follow the correct order during the installation, in order to satisfy the
prerequisites of all RPM packages.
So, first oracleasm-support-2.0.4-1.el5.i386.rpm, then
oracleasm-2.6.18-53.el5-2.0.4-1.el5.i686.rpm, and finally
oracleasmlib-2.0.3-1.el5.i386.rpm.
Attention! The correct ASMLib packages must be installed, so verify the kernel
version of the operating system.
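A sketch of the installation, assuming the three RPM files from the Media CD image
are in the current directory:

# Verify that the running kernel matches the oracleasm kernel module version:
uname -r
# Install in dependency order:
rpm -Uvh oracleasm-support-2.0.4-1.el5.i386.rpm
rpm -Uvh oracleasm-2.6.18-53.el5-2.0.4-1.el5.i686.rpm
rpm -Uvh oracleasmlib-2.0.3-1.el5.i386.rpm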

Configuration of ASMLib
Configure the ASMLib as root on both nodes (myrac1, myrac2).
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM
library driver. The following questions will determine whether
the driver is loaded on boot and what permissions it will have.
The current values will be shown in brackets ('[]'). Hitting <ENTER> without
typing an answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
This way ASMLib is configured to load the libraries during the boot procedure and
to grant the appropriate privileges.

ASM disks creation
Create ASM disks in any node of the cluster as root.
# /etc/init.d/oracleasm createdisk DATA /dev/sde1
Marking disk "/dev/sde1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk RECO /dev/sdf1
Marking disk "/dev/sdf1" as an ASM disk: [ OK ]
Verify that the ASM disks are visible from all nodes of the cluster
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
# /etc/init.d/oracleasm listdisks
DATA
RECO
Alternatively, we can execute oracleasm-discover:
[root@myrac1 rules.d]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.3 (KABI_V2)]
Discovered disk: ORCL:DATA [8385867 blocks (4293563904 bytes),
maxio 256]
Discovered disk: ORCL:RECO [4192902 blocks (2146765824 bytes),
maxio 256]
The oracleasm utility also exposes the devices that were «marked» by the oracleasm
createdisk procedure under the path /dev/oracleasm/disks, where an ls shows the
ASM devices:
[root@myrac1 disks]# pwd
/dev/oracleasm/disks
[root@myrac1 disks]# ls
DATA RECO
[root@myrac1 disks]#
The /dev/sdf1 partition can be used in a later step as a recovery area, the place
where archived logs and other recovery files are stored.

7. Final check by using «runcluvfy» 
Provided along with the Oracle Clusterware installation software you will find the
runcluvfy tool. Execute it with the following parameters:
[oracle@myrac1 clusterware]$ ./runcluvfy.sh stage -pre crsinst -n myrac1,myrac2
The runcluvfy tool will perform pre-checks on nodes myrac1 and myrac2, verifying:
• Node reachability
• User equivalence
• Administrative privileges
o User existence (for the oracle user)
o Group existence (for the oinstall and dba groups)
o Membership check (for the user)
• Node connectivity (for all subnets public, private)
o Interfaces check (for all subnets that were discovered)
• System requirements
o Memory
o Swap space
o Free disk space
o Architecture
o Kernel version
o Packages prerequisites
[oracle@myrac1 clusterware]$ ./runcluvfy.sh stage -pre crsinst
-n myrac1,myrac2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "myrac1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as
Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "192.168.10.0" with
node(s) myrac2,myrac1.
Node connectivity check passed for subnet "10.0.0.0" with
node(s) myrac2,myrac1.
Interfaces found on subnet "10.0.0.0" that are likely
candidates for VIP:
myrac2 eth1:10.0.0.12
myrac1 eth1:10.0.0.11
Interfaces found on subnet "192.168.10.0" that are likely
candidates for a private interconnect:
myrac2 eth0:192.168.10.12
myrac1 eth0:192.168.10.11
Node connectivity check passed.
Checking system requirements for 'crs'...
Total memory check failed.
Check failed on nodes:
myrac2,myrac1

Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "make-3.81".
Package existence check passed for "binutils-2.17.50.0.6".
Package existence check passed for "gcc-4.1.1".
Package existence check passed for "libaio-0.3.106".
Package existence check passed for "libaio-devel-0.3.106".
Package existence check passed for "libstdc++-4.1.1".
Package existence check passed for "elfutils-libelf-devel-
0.125".
Package existence check passed for "sysstat-7.0.0".
Package existence check passed for "compat-libstdc++-33-3.2.3".
Package existence check passed for "libgcc-4.1.1".
Package existence check passed for "libstdc++-devel-4.1.1".
Package existence check passed for "unixODBC-2.2.11".
Package existence check passed for "unixODBC-devel-2.2.11".
Package existence check passed for "glibc-2.5-12".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
System requirement failed for 'crs'
Pre-check for cluster services setup was unsuccessful on all
the nodes.
[oracle@myrac1 clusterware]$
The total memory check failure that you might encounter (see above) is related to
the limited memory of the VMware machines used during this workshop.

