How to deploy Oracle 12c Release 1 on RHEL 7/Oracle Linux 7



1. Software and Hardware Requirements

1.1. Hardware Requirements

  • Oracle requires at least 4 GB of physical memory
  • Swap space is proportional to the amount of RAM allocated to the system
RAM                      Swap Space
Between 4 GB and 16 GB   Equal to the size of RAM
More than 16 GB          16 GB

Table 1: Hardware Requirements

NOTE: If you enable HugePages, then you should deduct the memory allocated to HugePages from the available RAM before calculating swap space
  • The following table describes the disk space required for an Oracle installation
Software Installation Location   Minimum Disk Space Requirements
Grid Infrastructure home         At least 8 GB of disk space
Oracle Database home             At least 6.4 GB of disk space
Shared storage disk space        Sizes of Database and Flashback Recovery Area

Table 2: Disk Space
  • Oracle's temporary space (/tmp) must be at least 1 GB in size
  • A monitor that supports a resolution of 1024 x 768 is required to correctly display the Oracle Universal Installer (OUI)
  • For Dell supported hardware configurations, see the Tested and Validated Matrix for each Dell Validated Component on Dell's current release page (link to be updated at release)
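
The memory, swap, and temporary space requirements above can be verified with standard commands before starting; a quick check, run as root on each node, might look like this:

free -g                             # physical memory and swap, in GB
swapon -s                           # configured swap devices
grep -i hugepages /proc/meminfo     # HugePages allocation, if enabled
df -h /tmp                          # at least 1 GB free is required in /tmp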

1.2 Network Requirements

  • It is recommended that each node contain at least three network interface cards (NICs): one NIC for the public network and two NICs for the private network, to ensure high availability of the Oracle RAC cluster. If you are going to use Automatic Storage Management (ASM) in the cluster, you need at least one Oracle ASM network. The ASM network can share a network interface with the private network
  • Public, Private and ASM interface names must be the same on all nodes. For example, if em1 is used as the public interface on node one, all other nodes require em1 as the public interface
  • All public interfaces for each node should be able to communicate with all nodes within the cluster
  • All private and ASM interfaces for each node should be able to communicate with all nodes within the cluster
  • The hostname of each node must follow the RFC 952 standard (www.ietf.org/rfc/rfc952.txt). Hostnames that include an underscore ("_") are not permitted
1.3. Operating System Requirements
  • Red Hat Enterprise Linux (RHEL) 7.x (Kernel 3.10.0-327.el7.x86_64 or higher)
  • Oracle Linux (OL) 7.x (RHEL compatible kernel 3.10.0-327.el7.x86_64 or higher)
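
The installed distribution and kernel can be confirmed against the requirements above with the following commands:

cat /etc/redhat-release     # RHEL 7.x or OL 7.x
uname -r                    # must report 3.10.0-327.el7.x86_64 or higher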

2. Preparing Servers for Oracle Installation

Before installing the Grid Infrastructure and the database, make sure to install the deployment RPMs from Dell EMC described below, which set up the environment for the Oracle database installation

2.1. Attaching systems to Red Hat Network (RHN)/Unbreakable Linux Network (ULN) Repository

All the prerequisite RPMs need to be installed before any GRID/DB installation is performed. Depending on the operating system flavor, the following channel subscriptions are necessary

Note: Skip the repository steps and go to Step 2 if no RHN/ULN subscription is available

Step 1:

ULN Repository:

  • Oracle-7-latest
  • Oracle-7.x

RHN Repository:

  • rhel-7-server-optional-rpms
  • rhel-7.x
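
For a system registered with Red Hat Subscription Management, the optional rpms channel listed above can be enabled with subscription-manager; this is a sketch, and the exact repository IDs may vary with your subscription and minor release:

subscription-manager repos --enable=rhel-7-server-optional-rpms

yum repolist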

Note: Skip Step 2 if the repository setup is successful for all the channels mentioned in RHN/ULN

Step 2:

Most of the prerequisite RPMs for the Oracle GRID/DB installation are available as part of the base ISO. However, a few RPMs, such as compat-libstdc++, are not available in the base (RH/OL) ISO file and need to be downloaded and installed manually prior to installing the preinstall RPMs provided by Dell for Red Hat and by Oracle for Oracle Linux

Set up a local yum repository to automatically install the remaining dependency RPMs needed for the GRID/DB installation

1. The recommended configuration is to serve the files over http using an Apache server (package name: httpd). This section discusses hosting the repository files on local file system storage. While other options to host repository files exist, they are outside the scope of this document. It is highly recommended to use local file system storage for speed and simplicity of maintenance.

  • To mount the DVD, insert the DVD into the server and it should auto-mount into the /media directory.
  • To mount an ISO image we will need to run the following command as root, substituting the path name of your ISO image for the field myISO.iso:

mkdir /media/myISO

mount -o loop myISO.iso /media/myISO

2. To install and configure the http daemon, configure the machine that will host the repository for all other machines to use the DVD image locally. Create the file /etc/yum.repos.d/local.repo and enter the following:

[local]

name=Local Repository

baseurl=file:///media/myISO

gpgcheck=0

enabled=0
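
To confirm that the local repository definition works before proceeding, yum can be queried against it explicitly:

yum --disablerepo="*" --enablerepo="local" repolist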

3. Now we will install the Apache service daemon with the following command which will also temporarily enable the local repository for dependency resolution:

yum -y install httpd --enablerepo=local

  • After the Apache service daemon is installed, start the service and configure it to start automatically at boot. Run the following commands as root:

systemctl start httpd

systemctl enable httpd

4. To use Apache to serve out the repository, copy the contents of the DVD into a published web directory. Run the following commands as root (make sure to replace myISO with the name of your ISO):

mkdir /var/www/html/myISO

cp -R /media/myISO/* /var/www/html/myISO
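
Before configuring clients, it can be useful to confirm that Apache is serving the copied tree; assuming the default httpd configuration allows directory listings, a request against the local server should return HTTP 200:

curl -I http://localhost/myISO/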

5. This step is only necessary if you are running SELinux on the server that hosts the repository. The following command should be run as root and will restore the appropriate SELinux context to the copied files:

restorecon -Rvv /var/www/html/

6. The final step is to gather the DNS name or IP of the server that is hosting the repository. The DNS name or IP of the hosting server will be used to configure your yum repository repo file on the client server. The following is the listing of an example configuration using the RHEL 7.x Server media and is held in the configuration file: /etc/yum.repos.d/myRepo.repo

[myRepo]

name=RHEL 7.x Base ISO DVD

baseurl=http://reposerver.mydomain.com/myISO

enabled=1

gpgcheck=0

NOTE: Replace reposerver.mydomain.com with your server's DNS name or IP address. Copy the file to /etc/yum.repos.d in all the necessary servers where GRID/DB will be installed
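
On each client server, the new repository can be verified before moving on; for example:

yum --disablerepo="*" --enablerepo="myRepo" list available | head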

7. Install the compat-libstdc++ rpms manually, using the rpm or yum command, from the directory to which the rpms were copied.

Ex: rpm -ivh <compat-libstdc++ rpm file(s)>

yum localinstall -y <compat-libstdc++ rpm file(s)>

Step 3:

Note: Skip item 1 of Step 3 if Step 2 was completed.

1. Install the compat-libstdc++ rpms by running the following commands

yum install -y compat-libstdc++.i686

yum install -y compat-libstdc++.x86_64
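
A quick check confirms that both architectures of the package are now installed:

rpm -qa | grep compat-libstdc++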

2. Download or copy the RPMs provided by Dell, by navigating to DellEMC Deployment RPMs for RH and OL, to the servers where GRID/DB installations will be performed. The list of RPMs is as follows; download only the RPMs required for your OS flavor.

RH:

dell-redhat-rdbms-12cR1-preinstall.rpm

dell-redhat-rdbms-utilities.rpm

OL:

dell-oracle-rdbms-utilities.rpm
Note:

dell-redhat-rdbms-12cR1-preinstall.rpm is designed to do the following

  • Disable transparent_hugepages in grub2.cfg
  • Disable numa in grub2.cfg
  • Create Oracle user and groups oinstall & dba
  • Set sysctl kernel parameters
  • Set user limits (nofile, nproc, stack) for Oracle user
  • Set NOZEROCONF=yes in /etc/sysconfig/network file

dell-redhat-rdbms-utilities.rpm is designed to do the following

  • Create grid user and groups asmadmin, asmdba, asmoper, backupdba, dgdba, kmdba
  • Set user limits (nofile, nproc, stack) for Grid user.
  • Set sysctl kernel parameters
  • Set RemoveIPC=no to ensure semaphores set for users are not lost after user logout

dell-oracle-rdbms-utilities.rpm is designed to do the following

  • Create grid user and groups asmadmin, asmdba, asmoper, backupdba, dgdba, kmdba
  • Set user limits (nofile, nproc, stack) for Grid user
  • Set sysctl kernel parameters
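
After the preinstall and utilities RPMs are installed in steps 3 and 4 below, the settings they apply can be spot-checked; for example (exact values depend on the RPM versions):

id oracle                                          # created with oinstall and dba groups
id grid                                            # created with asmadmin, asmdba, asmoper and related groups
sysctl kernel.sem                                  # one of the kernel parameters set by the RPMs
grep NOZEROCONF /etc/sysconfig/network             # set by the Red Hat preinstall RPM
cat /sys/kernel/mm/transparent_hugepage/enabled    # disabled in grub2.cfg by the Red Hat preinstall RPM; takes effect after a reboot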

3. RedHat OS: (skip this step if OS is OL)

  • Install dell-redhat-rdbms-12cR1-preinstall.rpm

yum localinstall -y dell-redhat-rdbms-12cR1-preinstall.rpm

Note: All dependency RPMs are installed if YUM repository is setup properly
  • Install dell-redhat-rdbms-utilities.rpm

yum localinstall -y dell-redhat-rdbms-utilities.rpm

4. Oracle Linux OS: (Skip this step if OS is RedHat)

  • Install oracle-rdbms-server-12cR1-preinstall rpm

yum install -y oracle-rdbms-server-12cR1-preinstall

Note: oracle-rdbms-server-12cR1-preinstall rpm is provided by Oracle and is available in Oracle public yum repository
  • Install dell-oracle-rdbms-utilities rpm

yum localinstall -y dell-oracle-rdbms-utilities.rpm

2.2. Setting up the Network

2.2.1. Public Network

NOTE: Ensure that the public IP address is a valid and routable IP address.

To configure the public network on each node

1. Log in as root.

2. Edit the network device file /etc/sysconfig/network-scripts/ifcfg-em#

where # is the number of the network device

DEVICE=em1

ONBOOT=yes

NM_CONTROLLED=yes

IPADDR=

NETMASK=

BOOTPROTO=static

HWADDR=

SLAVE=no

GATEWAY=

NAME="system em1"

NOTE: Ensure that the Gateway address is configured for the public network interface. If the Gateway address is not configured, the Oracle Grid installation may fail
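
A filled-in example, using hypothetical addresses, is shown below; substitute values appropriate for your public network:

DEVICE=em1
ONBOOT=yes
NM_CONTROLLED=yes
IPADDR=192.0.2.21
NETMASK=255.255.255.0
BOOTPROTO=static
HWADDR=00:11:22:33:44:55
SLAVE=no
GATEWAY=192.0.2.1
NAME="system em1"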

3. Set the hostname by using the following command

hostnamectl set-hostname <hostname>.<domain>.com

4. Type ifconfig to verify that the IP addresses are set correctly

5. To check your network configuration, ping each public IP address from a client on the LAN that is not a part of the cluster

6. Connect to each node to verify that the public network is functioning. Type ssh <public hostname> to verify that the secure shell (ssh) command is working

2.2.2. Private Network

The private network configuration consists of two network interfaces em2 and em3. The private network is used to provide interconnect communication between all the nodes in the cluster. This is accomplished via Oracle's Redundant Interconnect, also known as Highly Available Internet Protocol (HAIP), that allows the Oracle Grid Infrastructure to activate and load balance traffic on up to four Ethernet devices for private interconnect communication.

NOTE: Each of the two NIC ports for the private network must be on separate PCI buses

The example below provides step-by-step instructions on enabling redundant interconnect using HAIP on a fresh Oracle 12c Grid Infrastructure installation

1. Edit the /etc/sysconfig/network-scripts/ifcfg-emX configuration files, where X is the number of the em device, for the network adapters to be used for your private interconnect.

DEVICE=em2
BOOTPROTO=static
HWADDR=
ONBOOT=yes
NM_CONTROLLED=yes
IPADDR=192.168.1.140
NETMASK=255.255.255.0


DEVICE=em3
HWADDR=
BOOTPROTO=static
ONBOOT=yes
NM_CONTROLLED=yes
IPADDR=192.168.1.141
NETMASK=255.255.255.0

2. Once you have saved both configuration files, restart your network service using the following commands

nmcli connection reload

nmcli device disconnect em2

nmcli connection up em2

Repeat the steps for each interface that has been modified.
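
Once the interfaces are up, private connectivity can be verified from each node; using the example addresses above:

ip addr show em2

ip addr show em3

ping -c 3 <private IP of another cluster node>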

3. The steps above prepare your system to enable HAIP using the Oracle Grid Infrastructure installer. When you have completed all the Oracle prerequisites and are ready to install Oracle, you must select em2 and em3 as 'private' interfaces at the 'Network Interface Usage' screen

4. This step enables redundant interconnectivity once your Oracle Grid Infrastructure has successfully completed and is running

2.2.3. Oracle Flex ASM Network

Oracle Flex ASM can use either the same private networks as Oracle Clusterware or its own dedicated private networks. Each network can be classified as PUBLIC, ASM and PRIVATE, PRIVATE, or ASM

2.2.4. IP Address and Name Resolution Requirements

The IP addresses of the cluster nodes can be configured with one of the following options:

  • Grid Naming Service (GNS)
  • Domain Name Server (DNS)

2.2.4.1. Grid Naming Service (GNS)

To set up an Oracle 12c RAC using Oracle GNS:

  • A static IP address for the GNS VIP address.
  • A Domain Name Server (DNS) running in the network for the address resolution of the GNS virtual IP address and hostname.
  • A DNS entry to configure the GNS sub-domain delegation.
  • A DHCP server running on the same public network as your Oracle RAC cluster.

The table below describes the different interfaces, IP address settings, and the resolutions in a cluster.

Interface        Type    Resolution
Public           Static  DNS
Private          Static  Not required
ASM              Static  Not required
Node Virtual IP  DHCP    GNS
GNS Virtual IP   Static  DNS
SCAN Virtual IP  DHCP    GNS

Table 3: Interface, IP Address, Resolution


Configuring the DNS Server to support GNS


To configure changes on a DNS server for an Oracle 12cR1 cluster using a GNS:

1. Configure GNS VIP address on DNS server—In the DNS, create a name resolution entry for the GNS virtual IP address in the forward lookup file.

For example: gns-server IN A 155.168.1.2

Where gns-server is the host name for the GNS virtual IP address provided during the Oracle Grid installation. The address that you provide must be routable and must be in the public IP address range.

2. Configure the GNS sub-domain delegation - In the DNS, create an entry to establish a DNS lookup that directs the DNS resolution of the GNS sub-domain to the cluster.

Add the following to the DNS lookup file:

clusterdomain.example.com. NS gns-server.example.com.

where clusterdomain.example.com. is the GNS sub-domain (provided during the Oracle Grid installation) that you delegate, and gns-server.example.com. resolves to the GNS virtual IP address

Configuring a DNS Client

To configure the changes required on the cluster nodes for name resolution:

1. You must configure resolv.conf on the nodes in the cluster to contain name server entries that are resolvable to the DNS server. The settings can be applied through NetworkManager, for example:

nmcli connection modify <connection name> ipv4.dns <DNS server IP> ipv4.dns-search <search domain>

2. Verify the resolution order: the /etc/nsswitch.conf file controls the name service order. In some configurations, NIS can cause issues with Oracle SCAN address resolution. It is recommended that you place the NIS entry at the end of the search list.

For example, hosts: dns files nis
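
As a concrete illustration, with a hypothetical DNS server at 192.0.2.10, a connection named em1, and the example.com search domain, the commands and the resulting /etc/resolv.conf entries might look like this:

nmcli connection modify em1 ipv4.dns 192.0.2.10 ipv4.dns-search example.com

nmcli connection up em1

cat /etc/resolv.conf
search example.com
nameserver 192.0.2.10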

2.2.4.2. Domain Name Server (DNS)

To set up an Oracle 12c RAC using DNS (without GNS):

A SCAN NAME must be configured on the DNS for Round Robin resolution to three addresses (recommended) or at least one address. The SCAN addresses must be on the same subnet as Virtual IP addresses and public IP addresses.

NOTE: For high availability and scalability, it is recommended that you configure the SCAN to use Round Robin resolution to three IP addresses. The name for the SCAN cannot begin with a numeral. For installation to succeed, the SCAN must resolve to at least one address

The table below describes the different interfaces, IP address settings and the resolutions in a cluster

Interface        Type    Resolution
Public           Static  DNS
Private          Static  Not required
ASM              Static  Not required
Node Virtual IP  Static  DNS
SCAN Virtual IP  Static  DNS

Table 4: Interface, IP Address, Resolution (without GNS)

Configuring a DNS Server

To configure changes on a DNS server for an Oracle 12c cluster using DNS (without GNS):

1. Configure SCAN NAME resolution on DNS server. A SCAN NAME configured on the DNS server using the Round Robin policy should resolve to three public IP addresses (recommended), however the minimum requirement is one public IP address

For example

scancluster IN A 192.0.2.1
            IN A 192.0.2.2
            IN A 192.0.2.3

Where scancluster is the SCAN NAME provided during Oracle Grid installation.

NOTE: The SCAN IP address must be routable and must be in public range
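
Before starting the Grid installation, round-robin resolution of the SCAN name can be verified from any cluster node; assuming the example name above is registered in the example.com domain:

nslookup scancluster.example.com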

Configuring a DNS Client

To configure the changes required on the cluster nodes for name resolution:

1. You must configure resolv.conf on the nodes in the cluster to contain name server entries that are resolvable to the DNS server. The settings can be applied through NetworkManager, for example:

nmcli connection modify <connection name> ipv4.dns <DNS server IP> ipv4.dns-search <search domain>

2. Verify the resolution order: the /etc/nsswitch.conf file controls the name service order. In some configurations, NIS can cause issues with Oracle SCAN address resolution. It is recommended that you place the NIS entry at the end of the search list.

For example, hosts: dns files nis

3. Preparing Shared Storage for Oracle RAC Installation

NOTE: In this section, the terms disk(s), volume(s), virtual disk(s), LUN(s) mean the same and are used interchangeably, unless specified otherwise. Similarly, the terms Stripe Element Size and Segment Size both can be used interchangeably.

Oracle RAC requires shared LUNs for storing your Oracle Cluster Registry (OCR), voting disks, Oracle Database files, and Flash Recovery Area (FRA). To ensure high availability for Oracle RAC it is recommended that you have:

  • Three shared volumes each of 20GB in size for normal redundancy or five volumes/LUNs for high redundancy for the Oracle Clusterware.
  • Three shared volumes for normal redundancy or five volumes/LUNs for high redundancy for Database
  • Three shared volumes for normal redundancy or five volumes/LUNs for high redundancy for FRA. Ideally, the FRA space should be large enough to copy all of your Oracle data files and incremental backups.
NOTE: The use of device mapper multipath is recommended for optimal performance and persistent name binding across nodes within the cluster
NOTE: For more information on attaching shared LUNs/volumes, see the Wiki documentation found at: http://en.community.dell.com/dell-groups/enterprise_solutions/w/oracle_solutions/3-storage.aspx

3.1. Setting up Device Mapper Multipath for Compellent storage

The purpose of Device Mapper Multipath is to enable multiple I/O paths to improve performance and provide consistent naming. Multipathing accomplishes this by combining your I/O paths into one device mapper path and properly load balancing the I/O. This section will provide the best practices on how to setup your device mapper multipathing within your Dell PowerEdge server. Verify that your device-mapper and multipath driver are at least the version shown below or higher:

1. rpm -qa | grep device-mapper-multipath

device-mapper-multipath-

2. Identify your local disks, for example /dev/sda. Once your local disk is determined, run the following command to obtain its scsi_id:

scsi_id --page=0x83 --whitelisted --device=/dev/sda

360026b900061855e000007a54ea53534

3. Open the /etc/multipath.conf file and locate and comment out the section below

#blacklist {

# devnode "*"

#}

4. Once the scsi_id of your local disk has been retrieved, you must blacklist this scsi_id so that it is not used as a multipath device. In the /etc/multipath.conf file, locate, uncomment, and modify the section below

blacklist {

wwid <scsi_id of your local disk>

devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"

devnode "^hd[a-z]"

}
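
Using the sample scsi_id obtained in step 2, the completed blacklist section would look like the following; substitute the scsi_id of your own local disk:

blacklist {
wwid 360026b900061855e000007a54ea53534
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
}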

5. Uncomment your defaults section within your /etc/multipath.conf

defaults {

udev_dir /dev

polling_interval 10

selector "round-robin 0"

path_grouping_policy multibus

getuid_callout "/sbin/scsi_id -g -u -s /block/%n"

prio_callout /bin/true

path_checker readsector0

rr_min_io 100

max_fds 8192

rr_weight priorities

failback immediate

no_path_retry fail

user_friendly_names yes

}

6. Locate the multipaths section within your /etc/multipath.conf file. In this section you provide the scsi_id of each volume and an alias, in order to keep a consistent naming convention across all of your nodes. An example is shown below

multipaths {

multipath {

wwid <scsi_id of volume1>

alias alias_of_volume1

}

multipath {

wwid <scsi_id of volume2>

alias alias_of_volume2

}
}
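
For example, with hypothetical WWIDs and the OCR/DATA/FRA naming convention used later in this document, the section might read:

multipaths {
multipath {
wwid 36000d310005caf000000000000000001
alias OCR
}
multipath {
wwid 36000d310005caf000000000000000002
alias DATA
}
multipath {
wwid 36000d310005caf000000000000000003
alias FRA
}
}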

7. Restart your multipath daemon service using

systemctl restart multipathd.service

8. Verify that your multipath volumes alias are displayed properly

multipath -ll

9. Make sure the multipath service starts upon boot by using the command

systemctl enable multipathd.service

10. Repeat steps 1-9 for all nodes

3.2. Setting up EMC PowerPath for XtremIO Storage

1. Download the latest EMC PowerPath (PP) rpm for your OS flavor

Note: There are separate rpms for EMC PP in RedHat Linux and EMC PP in Oracle Linux.

2. Install the EMC PP rpm

rpm -ivh EMCPower.LINUX-6.1.0.00.00-091.RHEL7.x86_64.rpm

3. Apply the license to EMC PP

emcpreg -install

=========== EMC PowerPath Registration ===========

Do you have a new registration key or keys to enter? [No] yes

Enter the registration key(s) for your product(s),

one per line, pressing Enter after each key.

After typing all keys, press Enter again.

Key (Enter if done): XXXX-XXXX-XXXX-XXXX-XXXX-XXXX

1 key(s) successfully added.

4. Check if the license is applied properly

powermt check registration

5. Rescan the scsi bus to ensure all LUN paths are detected

rescan-scsi-bus.sh -a -i

6. Ensure EMC PP service starts automatically when OS reboots

systemctl enable PowerPath

7. Start EMC PP service to ensure multiple paths pointing to the same disks are identified with one logical disk

systemctl start PowerPath

Note: It is not necessary to configure anything in EMC PP conf file. Starting the service will take care of identifying all the physical paths pointing to the same disk and creates a logical disk accordingly under /dev/emcpower*

8. Check the multipathing disks

powermt display dev=all

9. Set the policy to rr (round robin) to ensure all paths are used for data transfer

powermt set policy=rr

Note: The above set policy is applied to all logical disks by default

10. Save the config file to ensure latest policy is applied even after OS reboots

powermt save

Note: EMC PP configuration file is /etc/powermt_custom.xml.

3.3. Setting up Device Mapper Multipath for XtremIO Storage

1. rpm -qa | grep device-mapper-multipath

device-mapper-multipath-

2. Identify your local disks, for example /dev/sda. Once your local disk is determined, run the following command to obtain its scsi_id:

scsi_id --page=0x83 --whitelisted --device=/dev/sda

360026b900061855e000007a54ea53534

3. Open the /etc/multipath.conf file and locate and comment out the section below

#blacklist {

# devnode "*"

#}

4. Once the scsi_id of your local disk has been retrieved, you must blacklist this scsi_id so that it is not used as a multipath device. In the /etc/multipath.conf file, locate, uncomment, and modify the section below

blacklist {

wwid <scsi_id of your local disk>

devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"

devnode "^hd[a-z]"

}

5. Add the following lines to your /etc/multipath.conf

defaults {

user_friendly_names yes

}

devices {

device {

vendor XtremIO

product XtremApp

path_grouping_policy multibus

path_selector "queue-length 0"

rr_min_io_rq 1

}

}

6. Locate the multipaths section within your /etc/multipath.conf file. In this section you provide the WWID of each LUN/volume and an alias, in order to keep a consistent naming convention across all of your nodes. An example is shown below

multipaths {

multipath {

wwid <WWID of volume1>

alias alias_of_volume1

}

multipath {

wwid <WWID of volume2>

alias alias_of_volume2

}

}

7. Restart your multipath daemon service using

systemctl restart multipathd.service

8. Verify that your multipath volumes alias are displayed properly

multipath -ll

9. Make sure the multipath service starts upon boot by using the command

systemctl enable multipathd.service

10. Repeat steps 1-9 for all nodes.

Note: For XtremIO storage, the WWIDs of the volumes are usually in the format '3<NAA identifier>', where the NAA identifier is the network address authority identifier of the XtremIO volume. You can get the NAA identifier of the volume from the XtremIO GUI:

For example, the following volumes and their NAA identifiers are shown in the XtremIO volume GUI:

Figure 1 : volumes and their NAA identifiers

Volume TA1_DATA2's NAA identifier is '514f0c543560003'. The corresponding WWID for this volume is

'3514f0c543560003'. Its corresponding entry in the multipath.conf file would be:

multipath {

wwid 3514f0c543560003

alias DATA2

}

3.4. Partitioning the Shared Disk

This section describes how to use Parted utility to create a single partition on a volume/virtual disk that spans the entire disk.

  • To use the Parted utility to create a partition:

# parted

(parted) select /dev/mapper/<volume alias>

(parted) mklabel gpt

(parted) mkpart primary 1 100%

(parted) print

Model: Linux device-mapper (multipath) (dm)

Disk /dev/mapper/DATA: 1100GB

Sector size (logical/physical): 512B/4096B

Partition Table: gpt

Disk Flags:

Number Start End Size File system Name Flags

1 1049KB 1023GB 1023GB primary

  • Repeat above steps for all volumes and restart multipathd on all other nodes

systemctl restart multipathd.service

  • Reboot the system if your newly created partition is not displayed properly
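
After the partitions are created and multipathd has been restarted, the new partitions should be visible under /dev/mapper on every node; depending on the multipath configuration the partition device is named <alias>1 or <alias>p1:

ls -l /dev/mapper/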

3.5. Using udev Rules for EMC PowerPath devices

Once EMC PowerPath is installed and running, all logical paths pointing to one physical LUN on the storage are automatically combined and presented as one logical device with a unique pseudo name. Red Hat Enterprise Linux 7.x/Oracle Linux 7.x can use udev rules to ensure that the system properly manages permissions of device nodes. In this case, we are referring to properly setting permissions for the LUNs/volumes discovered by the OS. It is important to note that udev rules are executed in enumerated order. When creating udev rules for setting permissions, include the prefix 20- and append .rules to the end of the filename.

  • Create a file 20-dell_oracle.rules under /etc/udev/rules.d
  • Ensure each block device has an entry in the file as shown below.

#---------------------start udev rule contents ------------------------#

SUBSYSTEM=="block", KERNEL=="emcpowera1", GROUP="asmadmin",OWNER="grid", MODE="0660"

SUBSYSTEM=="block", KERNEL=="emcpowerb1", GROUP="asmadmin",OWNER="grid", MODE="0660"

SUBSYSTEM=="block", KERNEL=="emcpowerc1", GROUP="asmadmin",OWNER="grid", MODE="0660"

#-------------------------- end udev rule contents ------------------#

  • Run "udevadm trigger" to apply the rule.
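
The resulting ownership can be confirmed on the pseudo devices; the partitions listed in the rule should now show grid:asmadmin:

ls -l /dev/emcpower*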

3.6. Using Udev Rules to Mark the Shared Disks as Candidate Disks

Red Hat Enterprise Linux 7.x/Oracle Linux 7.x have the ability to use udev rules to ensure that the system properly manages permissions of device nodes. In this case, we are referring to properly setting permissions for our LUNs/volumes discovered by the OS. It is important to note that udev rules are executed in enumerated order. When creating udev rules for setting permissions, please include the prefix 20- and append .rules to the end of the filename. An example file name is 20-dell_oracle.rules

In order to set udev rules, one must capture the multipath volumes alias of each disk to be used within your ASM

multipath -ll

This command lists all the volume aliases present on the node.

Once the multipath volumes alias have been captured, create a file within the /etc/udev/rules.d/ directory and name it 20-dell_oracle.rules. A separate KERNEL entry must exist for each storage device.

An example of what needs to be placed in the /etc/udev/rules.d/20-dell_oracle.rules file

#------------------------ start udev rule contents ------------------#

KERNEL=="dm-*", ENV{DM_NAME}=="OCRp?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

KERNEL=="dm-*", ENV{DM_NAME}=="DATAp?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

KERNEL=="dm-*", ENV{DM_NAME}=="FRAp?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

#-------------------------- end udev rule contents ------------------#

As shown above, the KERNEL entry matches all dm devices and compares each device's DM_NAME against the multipath volume aliases; when the DM_NAME matches an alias, the rule assigns the grid user as the OWNER and the asmadmin group as the GROUP of the device.

  • Run "udevadm trigger" to apply the rules.
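
The effect of the rules can be confirmed on the dm devices; devices whose DM_NAME matches one of the aliases should now be owned by grid:asmadmin:

ls -l /dev/dm-*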

3.7. Installing and Configuring ASMLib

If you are not using udev rules, you must use ASMLib. Use ULN or OTN to download the following files:

oracleasm-support

oracleasmlib

kmod-oracleasm

NOTE: If your current OS distribution is Oracle Linux, you can obtain the software from the Unbreakable Linux Network using ULN.
NOTE: Download the latest versions of oracleasm-support and kmod-oracleasm from ULN. Download the latest version of oracleasmlib; the version of the oracleasm kernel module must match the current kernel used in your system. Check this information by issuing the command uname -r. See the following link for oracleasmlib downloads:

http://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html

2. Enter the following command as root

rpm -ivh oracleasm-support-* \

oracleasmlib-* \

kmod-oracleasm-*

NOTE: Replace * with the correct version numbers of the packages (the oracleasm kernel package, if used, must match your running kernel: oracleasm-$(uname -r)-*), or leave the wildcards in place, ensuring that there are no multiple versions of the packages in the shell's current working directory.

3.8. Using ASMLib to Mark the Shared Disks as Candidate Disks

1. To configure ASM, use the init script that comes with the oracleasm-support package. The recommended method is to run the following command as root:

# /usr/sbin/oracleasm configure -i

NOTE: Oracle recommends using the oracleasm command found under /usr/sbin. The /etc/init.d path has not been deprecated, but the oracleasm binary provided by Oracle in this path is used for internal purposes.

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [ n ]: y

Fix permissions of Oracle ASM disks on boot (y/n) [ y ]: y

NOTE: In this setup, the default user is set to grid and the default group is set to asmadmin. Ensure that the oracle user is part of the asmadmin group; you can do so by using the Dell preinstall and utilities RPMs described earlier.

The boot time parameters of the Oracle ASM library are configured and a sequential text interface configuration method is displayed.

2. Set the ORACLEASM_SCANORDER parameter in

/etc/sysconfig/oracleasm

NOTE: When setting ORACLEASM_SCANORDER, specify the prefix of the device names (as they appear in /proc/partitions) that should be scanned first. For device mapper pseudo devices (for example, /dev/mapper/asm-ocr1 and /dev/mapper/asm-ocr2, which appear as dm devices), populate the parameter as ORACLEASM_SCANORDER="dm". This ensures that oracleasm scans these disks first.

3. Set the ORACLEASM_SCANEXCLUDE parameter in /etc/sysconfig/oracleasm to exclude non-multipath devices.

For example: ORACLEASM_SCANEXCLUDE="sda sdb"

NOTE: If we wanted to ensure to exclude our single path disks within /dev/ such as sda and sdb, our ORACLEASM_SCANEXCLUDE string would look like: ORACLEASM_SCANEXCLUDE="sda sdb"
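
Putting the two parameters together, the relevant lines in /etc/sysconfig/oracleasm would read as follows; the new scan settings take effect the next time oracleasm scans the disks (for example, at boot or when oracleasm scandisks is run):

ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sda sdb"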

4. To create ASM disks that can be managed and used for Oracle database installation, run the following command as root:

/usr/sbin/oracleasm createdisk DISKNAME /dev/mapper/diskpartition

NOTE: The fields DISKNAME and /dev/mapper/diskpartition should be substituted with the appropriate names for your environment, respectively
NOTE: It is highly recommended to have all of your Oracle related disks to be within Oracle ASM. This includes your OCR disks, voting disks, database disks, and flashback recovery disks.

5. Verify the presence of the disks in the ASM library by running the following command as root

/usr/sbin/oracleasm listdisks

All the instances of DISKNAME from the previous command(s) are displayed.

To delete an ASM disk, run the following command:

/usr/sbin/oracleasm deletedisk DISKNAME

6. To discover the Oracle ASM disks on other nodes in the cluster, run the following command on the remaining cluster nodes

/usr/sbin/oracleasm scandisks

4. Installing Oracle 12c Grid Infrastructure for a Cluster

This section gives you the installation information of Oracle 12c grid infrastructure for a cluster

Before you install the Oracle 12c RAC software on your system

  • Ensure that you have already configured your operating system, network, and storage based on the steps from the previous sections within this document.
  • Locate your Oracle 12c media kit.

Configure the System Clock Settings for All Nodes

To prevent failures during the installation procedure, configure all the nodes with identical system clock settings. Synchronize your node system clock with the Cluster Time Synchronization Service (CTSS), which is built into Oracle 12c. To enable CTSS, disable the operating system time synchronization service (chronyd on RHEL 7/OL 7) using the following commands in this order:

  • systemctl stop chronyd.service
  • systemctl disable chronyd.service
  • mv /etc/chrony.conf /etc/ntp.chrony.orig
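
Before starting the installer, confirm that the time service is fully disabled; both commands should report that chronyd is inactive and disabled:

systemctl is-active chronyd

systemctl is-enabled chronyd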

The following steps are for node one of your cluster environment, unless otherwise specified.

a. Log in as root.

b. If you are not in a graphical environment, start the X Window System by typing: startx

c. Open a terminal window and type: xhost +

d. Mount the Oracle Grid Infrastructure media.

e. Log in as grid user, for example: su - grid.

f. Type the following command to start the Oracle Universal Installer: <grid media mount point>/runInstaller

g. In the Select Installation Option window, select Install and Configure Grid Infrastructure for a Cluster and click Next

Figure 2 : select Install and Configure Grid

h. In the Select Cluster Type window, select Configure a Flex Cluster, and click Next.

Figure 3 :Select Cluster Type window

i. In the Select Product Languages window, select English, and click Next.

j. In the Grid Plug and Play Information window, enter the following information:

  • Cluster Name—Enter a name for your cluster.
  • SCAN Name—Enter the name registered in the DNS server, which is unique for the entire cluster. For more details on setting up your SCAN name, see "IP Address and Name Resolution Requirements".
  • SCAN Port—retain the default port of 1521.
  • Configure GNS—Check this option, select Configure nodes Virtual IPs as assigned by the Dynamic Networks, select Create a new GNS, enter the GNS VIP Address for the cluster and the GNS Sub Domain configured in the DNS server, and click Next

Figure 4 : Enter GNS Sub Domain mentioned in the DNS server and Click Next

k. In the Cluster Node Information window, click Add to add additional nodes that must be managed by the Oracle Grid Infrastructure.

  • Enter the public Hostname information for Hub and Leaf cluster member nodes
  • Enter the Role of Cluster member node
  • Repeat step ‘k’ for each node within your cluster

Figure 5 : Enter the Role of Cluster member node

l. Click SSH Connectivity and configure your passwordless SSH connectivity by entering the OS Password for the grid user and click Setup.

m. Click Ok and then click Next to go to the next window.

n. In the Specify Network Interface Usage window, make sure that the correct interface usage types are selected for the interface names. From the ‘Use for’ drop-down list, select the required interface type. The available options are Public, Private, ASM, ASM and Private and Do Not Use. Click Next.

Figure 6 : select the required interface type

o. In the Grid Infrastructure Management Repository Option window select Yes for Configure Grid Infrastructure Management and click Next.

p. In the Storage Option Information window, select Automatic Storage Management (ASM) and click Next.

q. In the Create ASM Disk Group window, enter the following information:

  • ASM diskgroup— Enter a name, for example: OCR_VOTE
  • Redundancy— For your OCR and voting disks, select High if five ASM disks are available, select Normal if three ASM disks are available, or select External if one ASM disk is available (not recommended).
NOTE: For Oracle Linux 7 (RHEL compatible kernel), if no candidate disks are displayed, click Change Discovery Path and enter ORCL:* or /dev/oracleasm/disks/*. Ensure that you have marked your Oracle ASM disks; for more information see "Using ASMLib to Mark the Shared Disks as Candidate Disks".
NOTE: For RHEL 7.x, If no candidate disks are displayed, click Change Discovery Path and enter /dev/mapper/*.

Figure 7 : Specify ASM password

r. In the Specify ASM Password window, choose the relevant option under Specify the passwords for these accounts and enter the relevant values for the password. Click Next.

s. In the Failure Isolation Support window, select Do Not use Intelligent Platform Management Interface (IPMI)

t. In the Privileged Operating Systems Groups window, select:

  • asmdba for Oracle ASM DBA (OSDBA for ASM) Group
  • asmoper for Oracle ASM Operator (OSOPER for ASM) Group
  • asmadmin for Oracle ASM Administrator (OSASM) Group

Figure 8 : Privileged Operating Systems Groups
u. In the Specify Installation Location window, specify the values of your Oracle Base and Software Location as configured within the Dell Oracle utilities RPM

NOTE: The default locations used within the Dell Oracle utilities RPM are:
  • Oracle Base -/u01/app/grid
  • Software Location - /u01/app/12.1.0/grid_1

Figure 9 : Specify installation location
v. In the Create Inventory window, specify the location for your Inventory Directory. Click Next.

Figure 10 : specify the location for your Inventory Directory

NOTE: The default location based on the Dell Oracle utilities RPM for the Inventory Directory is /u01/app/oraInventory

w. In the Root script execution configuration window, select Automatically run configuration scripts, enter the password for the root user, and click Next

Figure 11 : enter the password for root user and click Next

x. In the Summary window, verify all the settings and select Install

y. In the Install Product window check the status of the Grid Infrastructure Installation

z. After the installation is complete, click Yes in the pop-up window to allow the configuration scripts to be run by the privileged user root

Figure 11 : After the installation is complete, click Yes

In the Finish window, click Close

5. Installing Oracle 12c Database

5.1. Installing Oracle 12c Database (RDBMS) Software

The following steps are for node 1 of your cluster environment, unless otherwise specified.

1. Log in as root and type: xhost +
2. Mount the Oracle Database 12c media.
3. Log in as oracle user by typing: su - oracle
4. Run the installer script from your Oracle database media:
<CD_mount>/runInstaller
5. In the Configure Security Updates window, enter your My Oracle Support credentials to receive security updates; otherwise, click Next


Figure 12 : enter your My Oracle Support credentials to receive security updates

6. In the Select Installation Option window, select Install database software only.

Figure 13 : select Install database software only.

7. In the Grid Installation Options window, select Oracle Real Application Clusters database installation and click Next

Figure 14 : Select Oracle Real Application Clusters database installation and click Next

8. In the Select List of Nodes window, select all the Hub nodes and omit the Leaf nodes. Click SSH Connectivity and configure your passwordless SSH connectivity by entering the OS password for the oracle user and selecting Setup. Click Ok, and then click Next to go to the next window

Figure 15 : Select List of Nodes window

9. In the Select Product Languages window, select English as the Language Option and click Next

10. In the Select Database Edition window, select Enterprise Edition and click Next

Figure 16 : select Enterprise Edition and click Next

11. In the Specify Installation Location window, specify the location of your Oracle Base as configured within the Dell Oracle utilities RPM

NOTE: The default locations used within the Dell Oracle utilities RPM are as follows:
  • Oracle Base—/u01/app/oracle
  • Software Location—/u01/app/oracle/product/12.1.0/dbhome_1

Figure 17 : Specify the location of your Oracle Base configured
12. In the Privileged Operating System Groups window, select dba for the Database Administrator (OSDBA) group, dba for the Database Operator (OSOPER) group, backupdba for the Database Backup and Recovery (OSBACKUPDBA) group, dgdba for the Data Guard administrative (OSDGDBA) group, and kmdba for the Encryption Key Management administrative (OSKMDBA) group, and click Next

Figure 18 : Privileged Operating System Groups

13. In the Summary window verify the settings and select Install

Figure 19 : verify the settings and select Install

14. On completion of the installation process, the Execute Configuration scripts wizard is displayed. Follow the instructions in the wizard and click Ok

Figure 20 : Execute Configuration scripts wizard

NOTE: root.sh should be run on one node at a time

15. In the Finish window, click Close

5.2. Creating Diskgroup Using ASM Configuration Assistant (ASMCA)

This section contains procedures to create the ASM disk group for the database files and Flashback Recovery Area (FRA).

1. Log in as grid user

2. Start the ASMCA utility by typing: $<GRID_HOME>/bin/asmca

3. In the ASM Configuration Assistant window, select the Disk Groups tab

4. Click Create

Figure 21: ASM Configuration Assistant (ASMCA)

5. Enter the appropriate Disk Group Name, for example: DATA

6. Select External for Redundancy

7. Select the appropriate member disks to be used to store your database files, for example: ORCL:DATA

Figure 22 : Select the appropriate member disks

NOTE: If no candidate disks are displayed, click Change Discovery Path and type: ORCL:* or /dev/oracleasm/disks/*
NOTE: Please ensure you have marked your Oracle ASM disks. For more information, see "Using ASMLib to Mark the Shared Disks as Candidate Disks"

8. Click Show Advanced Options and select the appropriate Allocation Unit Size and specify the minimum software versions for ASM, Database and ASM volumes and click OK to create and mount the disks

Figure 23 : select the appropriate Allocation Unit Size and specify the minimum software versions

9. Repeat step 4 to step 8 to create another disk group for your Flashback Recovery Area (FRA).

NOTE: Make sure that you label your FRA disk group differently than your database disk group name. For labeling your Oracle ASM disks, see "Using ASMLib to Mark the Shared Disks as Candidate Disks"

10. Click Exit to exit the ASM Configuration Assistant.

5.3. Creating Database Using DBCA

The following steps are applicable for node 1 of your cluster environment, unless otherwise specified:

1. Login as oracle user

2. From $<ORACLE_HOME>, run the DBCA utility by typing: $<ORACLE_HOME>/bin/dbca

3. In the Welcome window, select Create Database and click Next

Figure 24 : select Create Database and click Next

4. In the Creation Mode window, select Advanced Mode, and click Next

Figure 25 : select Advanced Mode, and click Next

5. In the Database Template window, select Oracle Real Application Cluster (RAC) database as the Database type, select Admin-Managed as the Configuration Type, select a Template, and click Next

Figure 26 : Select Oracle Real Application Cluster (RAC) database

6. In the Database Identification window:

  • Enter appropriate values for Global Database Name and SID Prefix
  • Select Create As Container Database and specify number of PDBs and PDB Name Prefix
  • Click Next

Figure 27 : Database Identification window
7. In the Database Placement window select all the available Hub nodes and click Next

Figure 28 : select all the available Hub nodes and click Next

8. In the Management Options window, select Configure Enterprise Manager (EM) Database Express and Run Cluster Verification Utility (CVU) Checks Periodically and click Next

Figure 29 : select Configure Enterprise Manager (EM) Database Express and Run Cluster Verification Utility

9. In the Database Credentials window, enter the appropriate credentials for your database

Figure 30 : Database Credentials window

10. In the Storage Locations window, select:

  • Automatic Storage Management (ASM) for Storage Type.
  • Use Oracle-Managed Files for Storage Location.
  • Browse to select the ASM disk group that you created to store the database files for Database Area.
  • Select Specify Flash Recovery Area.
  • Browse and select the ASM disk group that you created for Flash Recovery Area.
  • Enter a value for Flash Recovery Area Size.
  • Select Enable Archiving.
  • Click Next

Figure 31 : Storage location window
11. In the Database Options window, click Next

Figure 32 : Database Options window, click Next

12. In the Initialization Parameters window:

  • Select Custom Setting.
  • For the Memory Management, select Automatic Shared Memory Management
  • Specify appropriate values for the SGA Size and PGA Size
  • Click Next

Figure 33: Initialization Parameters window
13. In the Creation Options window, click Finish

Figure 34 : Create Options window

14. In the Summary window, click Finish to create database

Figure 35 : click Finish to create database

NOTE: Database creation can take some time to complete

15. Click Exit on the Database Configuration Assistant window after the database creation is complete




Article ID: SLN312606

Last Date Modified: 08/28/2018 11:17 AM

