
Article Number: 000110066


How to deploy Oracle 12c Release 2 standalone Database on Red Hat Enterprise Linux 7.x

Summary: How to deploy Oracle 12c Release 2 standalone Database on Red Hat Enterprise Linux 7.x.

Article Content


Instructions

How to deploy Oracle 12c Release 2 standalone Database on Red Hat Enterprise Linux 7.x.

1. Software and Hardware Requirements

1.1. Hardware Requirements      

  • Oracle requires at least 4 GB of physical memory
  • Swap space is proportional to the amount of RAM allocated to the system
    RAM                      Swap Space
    Between 4 GB and 16 GB   Equal to the size of RAM
    More than 16 GB          16 GB

If you enable HugePages, then you should deduct the memory allocated to HugePages from the available RAM before calculating swap space
  • The following table describes the disk space required for an Oracle installation
  Software Installation Location   Minimum Disk Space Requirements
  Grid Infrastructure home         At least 8 GB of disk space
  Oracle Database home             At least 6.4 GB of disk space
  Shared storage disk space        Sizes of Database and Flashback Recovery Area
  • Oracle's temporary space (/tmp) must be at least 1 GB in size
  • A monitor that supports resolution of 1024 x 768 to correctly display the Oracle Universal Installer (OUI)
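For quick checks, the swap-sizing rule from the table above can be written as a small helper. This is only a sketch; the function name is illustrative, and, as noted, HugePages memory should be deducted from RAM before applying the rule:

```shell
# Sketch of the swap-sizing rule: swap equals RAM between 4 GB and
# 16 GB of RAM, and is capped at 16 GB beyond that.
recommended_swap_gb() {
    local ram_gb=$1
    if [ "$ram_gb" -le 16 ]; then
        echo "$ram_gb"   # 4-16 GB RAM: swap equal to RAM
    else
        echo 16          # more than 16 GB RAM: 16 GB swap
    fi
}

recommended_swap_gb 8    # prints 8
recommended_swap_gb 64   # prints 16
```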
   1.2. Network Requirements  
  • It is recommended that each node contain at least one network interface card for the public network
  • The hostname of each node must follow the RFC 952 standard (www.ietf.org/rfc/rfc952.txt). Hostnames that include an underscore ("_") are not permitted
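A candidate hostname can be checked against the RFC 952 rules before it is assigned; a minimal sketch (the function name is illustrative):

```shell
# Sketch: accept only RFC 952-style names - letters, digits, and
# hyphens, starting with a letter; underscores are rejected.
valid_hostname() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?$'
}

valid_hostname oradb01  && echo "oradb01 accepted"
valid_hostname ora_db01 || echo "ora_db01 rejected"
```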
   1.3. Operating System Requirements
  • Red Hat Enterprise Linux (RHEL) 7.x (kernel 3.10.0-693.el7.x86_64 or higher)
       1.3.1. Operating System Disk Partition

       Below is the recommended disk-partitioning scheme when installing Red Hat Enterprise Linux 7 using a kickstart file on local HDDs with at least 1.2 TB of space available

  part /boot --asprimary --fstype="xfs" --ondisk=sda --size=1024
  part pv.1 --size=1 --grow --ondisk=sda --asprimary
  volgroup rhel7 pv.1
  logvol / --name=root --fstype=xfs --vgname=rhel7 --size=51200
  logvol swap --fstype swap --name=swap --vgname=rhel7 --size=17408
  logvol /home --name=home --fstype=xfs --vgname=rhel7 --size=51200
  logvol /var --name=var --fstype=xfs --vgname=rhel7 --size=20480
  logvol /opt --name=opt --fstype=xfs --vgname=rhel7 --size=20480
  logvol /tmp --name=tmp --fstype=xfs --vgname=rhel7 --size=5120
  logvol /u01 --name=u01 --fstype=xfs --vgname=rhel7 --size=1 --grow
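As a sanity check on the layout above, the fixed logical-volume sizes (in MiB) sum as follows; everything remaining on the 1.2 TB disk grows into /u01:

```shell
# Sum of the fixed sizes from the kickstart above:
# /boot + / + swap + /home + /var + /opt + /tmp
fixed_mib=$((1024 + 51200 + 17408 + 51200 + 20480 + 20480 + 5120))
echo "fixed partitions: ${fixed_mib} MiB (~$((fixed_mib / 1024)) GiB)"
# prints: fixed partitions: 166912 MiB (~163 GiB)
```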

2. Preparing Servers for Oracle Installation

    Before installing the Grid Infrastructure and database software, make sure to install the deployment scripts below from Dell EMC, which set up the environment for an Oracle database installation

2.1. Attaching systems to Red Hat Network (RHN)/Unbreakable Linux Network (ULN) Repository

       Step 1: All the prerequisite rpms need to be installed before any GRID/DB installation is performed. The required channels are:

  • rhel-7-server-optional-rpms
  • rhel-7.x
Skip Step 2 if the repository setup is successful for all the channels mentioned in RHN/ULN

 

Step 2:
Most of the prerequisite RPMs for an Oracle GRID/DB install are available as part of the base ISO. However, a few RPMs, such as compat-libstdc++, are not available in the base RHEL ISO file and need to be downloaded and installed manually prior to installing the preinstall RPMs provided by Dell for Red Hat.

Set up a local yum repository to automatically install the remaining dependency RPMs when performing the GRID/DB install.
1. The recommended configuration is to serve the files over HTTP using an Apache server (package name: httpd). This section discusses hosting the repository files from local file system storage. Other options for hosting repository files exist but are outside the scope of this document. Local file system storage is recommended for speed and simplicity of maintenance.

  • To mount the DVD, insert the DVD into the server and it should auto-mount into the /media directory.
  • To mount an ISO image we will need to run the following command as root, substituting the path name of your ISO image for the field myISO.iso:
                      mkdir /media/myISO
                      mount -o loop myISO.iso /media/myISO

 
2. To install and configure the HTTP daemon, first configure the machine that will host the repository for all other machines so that it uses the DVD image locally. Create the file /etc/yum.repos.d/local.repo and enter the following:
 

                   [local]
                   name=Local Repository

                   baseurl=file:///media/myISO
                   gpgcheck=0
                   enabled=0 
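If you prefer to script this step, the repo file above can be written non-interactively; a sketch (the function name is illustrative, and the path is a parameter so it can be tried outside /etc):

```shell
# Sketch: write the local.repo contents shown above to a given path.
write_local_repo() {
    cat > "$1" <<'EOF'
[local]
name=Local Repository
baseurl=file:///media/myISO
gpgcheck=0
enabled=0
EOF
}

write_local_repo /tmp/local.repo
grep baseurl /tmp/local.repo   # prints baseurl=file:///media/myISO
```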

3. Now install the Apache service daemon with the following command, which also temporarily enables the local repository for dependency resolution:

         yum -y install httpd --enablerepo=local

            After the Apache service daemon is installed, start the service and enable it so that it starts again on the next reboot. Run the following commands as root:

         systemctl start httpd
         systemctl enable httpd

4. To use Apache to serve out the repository, copy the contents of the DVD into a published web directory. Run the following commands as root (replace myISO with the name of your ISO):

                 mkdir /var/www/html/myISO
                 cp -R /media/myISO/* /var/www/html/myISO

 

5. This step is only necessary if you are running SELinux on the server that hosts the repository. The following command should be run as root and will restore the appropriate SELinux context to the copied files:
       restorecon -Rvv /var/www/html/

6. The final step is to gather the DNS name or IP of the server that is hosting the repository. The DNS name or IP of the hosting server is used to configure your yum repository repo file on the client server. The following is an example configuration using the Red Hat Enterprise Linux 7.x Server media, held in the configuration file /etc/yum.repos.d/myRepo.repo:

                [myRepo]
                name=Red Hat Enterprise Linux 7.x Base ISO DVD
                baseurl=http://reposerver.mydomain.com/myISO
                enabled=1
                gpgcheck=0

Replace reposerver.mydomain.com with your server's DNS name or IP address. Copy the file to /etc/yum.repos.d on all the servers where GRID/DB will be installed

 

7. Install the compat-libstdc++ rpm manually, using the rpm or yum command, from the directory where the rpms were copied. For example:

           rpm -ivh

           yum localinstall -y

 Step 3:


 

1. Install the compat-libstdc++ rpms by running the following commands

            yum install -y compat-libstdc++.i686

            yum install -y compat-libstdc++.x86_64

2. Download the latest Dell EMC Oracle Deployment tar file (Dell Oracle Deployment RPMs for RHEL 7.x) to the servers where the GRID/DB installations will be performed.
 

The deployment RPM tar-file follows a standardized naming convention: DellEMC-Oracle-Deployment-O-D-Y.M-#.tar.gz, where O is the OS version, D is the DB version, Y is the year, M is the month, and # is the release number. E.g. DellEMC-Oracle-Deployment-RHEL7-12cR1-2017.02-1.tar.gz
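The naming convention above can be unpacked mechanically; a sketch (the function name is illustrative):

```shell
# Sketch: split a deployment tar-file name into OS, DB, date, and
# release fields per the convention above.
parse_deployment_name() {
    local base=${1%.tar.gz}
    base=${base#DellEMC-Oracle-Deployment-}
    local os db ym rel
    IFS=- read -r os db ym rel <<<"$base"
    echo "OS=$os DB=$db DATE=$ym RELEASE=$rel"
}

parse_deployment_name DellEMC-Oracle-Deployment-RHEL7-12cR2-2018.06-1.tar.gz
# prints OS=RHEL7 DB=12cR2 DATE=2018.06 RELEASE=1
```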

Untar the DellEMC deployment tar-file by executing the following command:

tar -zxvf DellEMC-Oracle-Deployment-RHEL7-12cR2-2018.06-1.tar.gz

After untarring, you can find the following rpms:

   dell-redhat-rdbms-12cR2-preinstall-2018.06-1.el7.noarch.rpm
  dell-redhat-rdbms-utilities-2018.06-1.el7.noarch.rpm

   dell-redhat-rdbms-12cR2-preinstall-2018.06-1.el7.noarch.rpm is designed to do the following
  • Disable transparent_hugepages in grub2.cfg
  • Disable numa in grub2.cfg
  • Create Oracle user and groups oinstall & dba
  • Set sysctl kernel parameters
  • Set user limits (nofile, nproc, stack) for Oracle user
  • Set NOZEROCONF=yes in /etc/sysconfig/network file

   dell-redhat-rdbms-utilities-2018.06-1.el7.noarch.rpm is designed to do the following

  • Create grid user and groups asmadmin, asmdba, asmoper, backupdba, dgdba, kmdba
  • Set user limits (nofile, nproc, stack) for Grid user.
  • Set sysctl kernel parameters
  • Set RemoveIPC=no to ensure semaphores set for users are not lost after user logout

3. Install these two rpms

             yum localinstall -y dell-redhat-rdbms-12cR2-preinstall-2018.06-1.el7.noarch.rpm

All dependency RPMs are installed automatically if the YUM repository is set up properly

   
            yum localinstall -y dell-redhat-rdbms-utilities-2018.06-1.el7.noarch.rpm
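After installing the two RPMs, a couple of quick spot checks can confirm the settings they are described as applying. This is only a sketch: the function names are illustrative, and the file path is a parameter so the check can be exercised outside /etc:

```shell
# Sketch: verify NOZEROCONF=yes in a network config file, and that a
# user exists - two of the changes the preinstall RPMs are said to make.
check_nozeroconf() { grep -q '^NOZEROCONF=yes' "$1"; }
check_user()       { id "$1" >/dev/null 2>&1; }

printf 'NOZEROCONF=yes\n' > /tmp/network.sample
check_nozeroconf /tmp/network.sample && echo "NOZEROCONF set"
check_user root && echo "user root exists"
```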

2.2. Setting up the Network

2.2.1. Public Network

Note: Ensure that the public IP address is a valid and routable IP address.

 To configure the public network:
1. Log in as root.
2. Navigate to /etc/sysconfig/network-scripts and edit the ifcfg-em# file, where # is the number of the network device:

NAME="Oracle Public"
DEVICE="em3"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=static
IPADDR=
NETMASK=
GATEWAY=

Note: When configuring Red Hat Enterprise Linux 7 as a guest OS in a VMware ESXi environment, the network device enumeration might begin with 'ens#' instead of 'em#'

3. Set the hostname via the below command, where <hostname> is the hostname that we are using for the installation:

hostnamectl set-hostname <hostname>
4. Type service network restart to restart the network service
5. Type ifconfig to verify that the IP addresses are set correctly
6. To check your network configuration, ping the public IP address from a client on the LAN
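Before filling in IPADDR, NETMASK, and GATEWAY in the ifcfg file above, the values can be format-checked; a minimal sketch (the function name and sample addresses are illustrative):

```shell
# Sketch: verify a string is a valid IPv4 dotted quad (four octets,
# each 0-255) before writing it into the ifcfg file.
valid_ipv4() {
    local IFS=.
    set -- $1
    [ $# -eq 4 ] || return 1
    local octet
    for octet in "$@"; do
        case $octet in ''|*[!0-9]*) return 1 ;; esac
        [ "$octet" -le 255 ] || return 1
    done
}

valid_ipv4 192.168.10.21  && echo "address accepted"
valid_ipv4 192.168.10.300 || echo "address rejected"
```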
3. Preparing Shared Storage for Oracle Standalone Installation

Note: In this section, the terms disk(s), volume(s), virtual disk(s), and LUN(s) mean the same and are used interchangeably, unless specified otherwise

 Oracle 12c Standalone Database installation requires LUNs for storing your Oracle Cluster Registry (OCR), Oracle Database files, and Flash Recovery Area (FRA). The following table shows the typical recommended storage volume design for Oracle 12c Database.

Database Volume Type/Purpose   No of Volumes   Volume Size
OCR/VOTE                       3               50 GB each
DATA                           4               250 GB (1) each
REDO (2)                       2               At least 50 GB each
FRA                            1               100 GB (3)
TEMP                           1               100 GB

(1) Adjust each volume size based on your database. (2) At least two REDO ASM disk groups are recommended, each with at least one storage volume. (3) Ideally, the size should be 1.5x the size of the database if storage usable capacity permits.
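The guideline in footnote 3 (FRA at roughly 1.5x the database size) is easy to compute; a sketch (the helper name is illustrative; the result is rounded up):

```shell
# Sketch: FRA size in GB as 1.5x the database size, rounded up.
fra_size_gb() { echo $(( ($1 * 3 + 1) / 2 )); }

fra_size_gb 100   # prints 150
fra_size_gb 65    # prints 98
```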

 3.1. Setting up Device Mapper Multipath for XtremIO storage
The purpose of Device Mapper Multipath is to enable multiple I/O paths to improve performance and provide consistent naming. Multipathing accomplishes this by combining your I/O paths into one device-mapper path and properly load balancing the I/O. This section provides best practices for setting up device-mapper multipathing on your Dell PowerEdge server.

Note: Skip this section if Red Hat Enterprise Linux 7 is deployed as a guest OS in a virtual environment, as multipathing is handled at the bare-metal host level

  Verify that your device-mapper and multipath driver are at least at the version shown below:

 1. rpm -qa | grep device-mapper-multipath

        device-mapper-multipath

2. Enable multipathing with: mpathconf --enable

3. Configure XtremIO multipathing by modifying /etc/multipath.conf with the following:

device {
        vendor                  XtremIO
        product                 XtremApp
        path_grouping_policy    multibus
        path_checker            tur
        path_selector           "queue-length 0"
        rr_min_io_rq            1
        user_friendly_names     yes
        fast_io_fail_tmo        15
        failback                immediate
}

4. Add appropriate user-friendly names to each volume with the corresponding scsi_id. We can get the scsi_ids with the below command:

  /usr/lib/udev/scsi_id -g -u -d /dev/sdX

5. Locate the multipaths section within your /etc/multipath.conf file. In this section you will provide the scsi_id of each volume and provide an alias in order to keep a consistent naming convention across all of your nodes. An example is shown below:

multipaths {
        multipath {
                wwid    <scsi_id_of_volume1>
                alias   alias_of_volume1
        }
        multipath {
                wwid    <scsi_id_of_volume2>
                alias   alias_of_volume2
        }
}
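With many volumes, hand-editing the multipaths section invites typos; the stanzas can instead be generated from "wwid alias" pairs. A sketch (the function name and the sample wwid are illustrative):

```shell
# Sketch: emit one multipath{} stanza per "wwid alias" pair on stdin.
emit_multipaths() {
    echo "multipaths {"
    while read -r wwid alias; do
        printf '        multipath {\n                wwid    %s\n                alias   %s\n        }\n' \
            "$wwid" "$alias"
    done
    echo "}"
}

printf '3514f0c5a51600001 C1_DATA1\n' | emit_multipaths
```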

6. Restart your multipath daemon service using:

     service multipathd restart

7. Verify that your multipath volumes alias are displayed properly

     multipath -ll

3.2 Partitioning the Shared Disk

      This section describes how to use the parted utility to create a single partition on a volume/virtual disk that spans the entire disk.

  • When Red Hat Enterprise Linux is running as a bare-metal OS, partition each database volume that was set up using device-mapper by running the following commands:

$> parted -s /dev/mapper/<volume_name> mklabel msdos

$> parted -s /dev/mapper/<volume_name> mkpart primary 2048s 100%

  • When Red Hat Enterprise Linux is running as a guest OS, partition each database volume by running the following commands:

$> parted -s /dev/sdX mklabel msdos

$> parted -s /dev/sdX mkpart primary 2048s 100%

  • Repeat this for all the required volumes
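To repeat the same partitioning across every volume, the commands can be looped. This sketch only prints the commands (a dry run) so it can be reviewed first; drop the echo to execute them. The alias names are examples:

```shell
# Sketch: dry-run parted commands for a list of multipath aliases.
for vol in C1_DATA1 C1_DATA2 C1_REDO1; do
    echo parted -s "/dev/mapper/$vol" mklabel msdos
    echo parted -s "/dev/mapper/$vol" mkpart primary 2048s 100%
done
```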
3.3. Using udev Rules for disk permissions and persistence

Red Hat Enterprise Linux 7.x has the ability to use udev rules to ensure that the system properly manages the permissions of device nodes. In this case, we are referring to properly setting permissions for our LUNs/volumes discovered by the OS. It is important to note that udev rules are executed in enumerated order. When creating udev rules for setting permissions, include the prefix 60- and append .rules to the end of the filename.

  • Create a file 60-oracle-asmdevices.rules under /etc/udev/rules.d
  • Ensure each block device has an entry in the file as shown below

3.3.1 When Red Hat Enterprise Linux is running as a bare-metal OS

           #---------------------start udev rule contents ------------------------#

KERNEL=="dm-*", ENV{DM_NAME}=="C1_OCR1p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_OCR2p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_OCR3p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA1p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA2p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA3p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA4p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_REDO1p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_REDO2p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_FRA?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_TEMP?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

           #-------------------------- end udev rule contents ------------------#
    3.3.2 When Red Hat Enterprise Linux is running as a guest OS  

Obtain the unique scsi_ids by running the following command against each database volume, and provide the value in the appropriate RESULT field below: /usr/lib/udev/scsi_id -g -u -d /dev/sdX

#---------------------start udev rule contents ------------------------#

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-ocr3", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-fra", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-temp", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data1", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data2", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data3", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data4", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-redo1", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-redo2", OWNER="grid", GROUP="asmadmin", MODE="0660"

 

#-------------------------- end udev rule contents ------------------#

  • Run "udevadm trigger" to apply the rule.
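Because the eleven rules above differ only in their RESULT and SYMLINK values, they can be generated rather than copied by hand; a sketch (the function name is illustrative, and the sample values are placeholders):

```shell
# Sketch: print one guest-OS udev rule per scsi_id/symlink pair,
# following the template above ($parent is left literal for udev).
gen_rule() {
    printf 'KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="%s", SYMLINK+="oracleasm/disks/%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
        "$1" "$2"
}

gen_rule "<scsi_id>" ora-data1
```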
 4. Installing Oracle 12c Grid Infrastructure for a standalone database

    This section provides the installation steps for Oracle 12c Grid Infrastructure for a standalone database

  • Open a terminal window and type: xhost +
  • If the /u01/app/12.2.0/grid directory does not exist, create it manually as the grid user
  • Unzip the Grid installation files to /u01/app/12.2.0/grid as the grid user:
unzip -q /home/grid/linuxx64_12201_grid_home.zip
  • cd /u01/app/12.2.0/grid
  • Run ./gridSetup.sh &
  • In the Select Configuration Option window, select Configure Grid Infrastructure for a Standalone Server (Oracle Restart) and click Next
  • In the Create ASM Disk Group window, enter the disk group name (OCR) and redundancy (Normal), select the appropriate candidate disks that are meant for OCR, uncheck Configure Oracle ASM Filter Driver, and click Next
  • In the Specify Management Option window, proceed with the default options and click Next
  • In the Privileged Operating System Groups window, select the default operating system groups and click Next
  • In the Specify Installation Location window, choose the Oracle base location and click Next
  • In the Create Inventory window, choose the default and click Next
  • In the Root script execution configuration window, uncheck Automatically run configuration scripts and click Next
  • In the Perform Prerequisite Checks window, select Fix & Check Again
  • Follow the instructions in the Fixup Script window and click OK when finished
  • After running the fixup script, review the Summary window and click Install
  • Run the root scripts whenever prompted and click OK
  • In the Finish window, click Close after the Grid installation completes successfully

5. Oracle standalone Database Software Installation
  • Mount the Oracle Database 12c media
  • Log in as the oracle user and run the installer script from the Oracle Database media:
su - oracle
./runInstaller
  • In the Configure Security Updates window, uncheck I wish to receive security updates via My Oracle Support and click Next
  • In the Select Installation Option window, select Install database software only and click Next
  • In the Select Database Installation Option window, select Single instance database installation and click Next
  • In the Select Database Edition window, select Enterprise Edition and click Next
  • In the Specify Installation Location window, specify the location of the Oracle base and click Next
Oracle base: /u01/app/oracle
Software Location: /u01/app/oracle/product/12.2.0/dbhome_2
  • In the Privileged Operating System Groups window, select the default privileges for each group and click Next
If you installed the Dell EMC Oracle preinstall deployment RPMs, the needed groups should already exist. If not, you may have to create the appropriate groups manually
  • After the Perform Prerequisite Checks complete, verify the settings in the Summary window and click Install
  • On completion of the installation process, the Execute Configuration Scripts window is displayed. Follow the instructions in the window and click OK
  • In the Finish window, click Close after the Oracle Database installation completes successfully
6. Database installation
Creating Disk Groups Using ASM Configuration Assistant (ASMCA)
  • Log in as the grid user and start asmca from /u01/app/12.2.0/grid/bin/asmca
  • Create the 'DATA' disk group with External Redundancy by selecting the appropriate candidate disks
  • Create two 'REDO' disk groups - REDO1 and REDO2 - with External Redundancy by selecting at least one candidate disk per REDO disk group
  • Create the 'FRA' disk group with External Redundancy by selecting the appropriate candidate disks
  • Create the 'TEMP' disk group with External Redundancy by selecting the appropriate candidate disks
  • Verify all required disk groups and click Exit to close the ASMCA utility
  • Change the ASM striping to fine-grained for the REDO, TEMP, and FRA disk groups as the grid user using the below commands
We must change to fine-grained striping before we run DBCA:

SQL> ALTER DISKGROUP REDO ALTER TEMPLATE onlinelog ATTRIBUTES (fine);

SQL> ALTER DISKGROUP TEMP ALTER TEMPLATE tempfile ATTRIBUTES (fine);

SQL> ALTER DISKGROUP FRA ALTER TEMPLATE onlinelog ATTRIBUTES (fine);
 

Creating Database using DBCA

  • Log in as the oracle user and run the dbca utility from ORACLE_HOME:
/u01/app/oracle/product/12.2.0/dbhome_2/bin/dbca
  • In the Select Database Operation window, select Create a database and click Next
  • In the Select Database Creation Mode window, select Advanced Configuration and click Next
  • In the Select Database Deployment Type window, select Oracle Single Instance database for the database type, select General Purpose or Transaction Processing as the template, and click Next
  • In the Specify Database Identification Details window, enter an appropriate Global database name, select Create as Container database, specify the number of PDBs and the PDB name, and click Next
Note: Creating a Container database is optional. If you would like to create a traditional Oracle database, uncheck the 'Create as Container database' option
  • In the Select Database Storage Option window, select +DATA as the database file location and click Next
  • In the Select Fast Recovery Option window, check Specify Fast Recovery Area, enter +FRA as the Fast Recovery Area, specify its size, and click Next
  • In the Specify Network Configuration Details window, select the already created listener and click Next
  • In the Select Oracle Data Vault Config Option window, leave all options unchecked (the default) and click Next
  • In the Specify Configuration Options window, specify appropriate SGA and PGA values and click Next
  • In the Specify Management Options window, leave the defaults and click Next
  • In the Specify Database User Credentials window, enter the password as oracle and click Next
  • In the Select Database Creation Option window, click Customize Storage Locations
  • Create/modify the Redo Log Groups based on the following design recommendation:

Redo Log Group Number   Thread Number   Disk Group Location   Redo Log File Size
1                       1               +REDO1                5 GB
2                       1               +REDO2                5 GB
3                       1               +REDO1                5 GB
4                       1               +REDO2                5 GB
  • In the Summary window, review the summary and click Finish
  • In the Finish window, check for database creation completion and click Close
 
 

Article Properties


Affected Product

Red Hat Enterprise Linux Version 7

Last Published Date

12 Dec 2023

Version

5

Article Type

How To