
Oracle 11 RAC on VNX NFS v3


I have read the document EMC Oracle Performance - EMC VNX, Enterprise Flash Drives, FAST Cache, VMware vSphere (attached).

I see that there is a recommendation to separate the CRS files from the redo logs and to locate both on R10.

Can you please advise the reason for separating CRS and redo?

Another question: I see there is a recommendation to use the uncached mount option for the file systems.

Is it required for the database file systems only, or for the logs and application file systems as well?

Your assistance is appreciated,

Thanks in advance,

Amir Asulin


Re: Oracle 11 RAC on VNX NFS v3


I also disagree with this recommendation. I have previously placed all of the CRS files (consisting of the voting disk and the cluster configuration file, the OCR) on the pools used to store the online redo logs. My typical configuration has always consisted of:

  • tier1_pool (Typically R5 on faster SAS or possibly flash drives, though possibly R10 depending on the workload. Use either FAST Cache or FAST VP on this pool, again depending on the nature of the workload. If using FAST VP, blocks migrate across to tier2_pool, below, based on I/O demand. This pool contains the datafiles for tablespaces with high performance requirements.)
  • tier2_pool (Typically R6 on NL-SAS with slow, high capacity drives. Contains FRA, archive log dump destination, and datafiles with low I/O performance requirements. If using FAST VP, this would be the place where the low I/O demand blocks get migrated to.)
  • There may be either a tier0_pool or a tier3_pool as well, depending on requirements.
  • log1_pool (R10 on fast SAS. Contains one copy of the online redo logs, CRS files and controlfiles.)
  • log2_pool (R10 on fast SAS. Contains the second copy of the online redo logs, CRS files and controlfiles.)

Everything starts with this basic design. As you see, I put the online redo logs, CRS files and controlfiles together on the same pools. Since Oracle provides soft mirroring of these files across different storage objects, that works well. Also, this basic design works regardless of the storage protocol. I only typically use either ASM or dNFS for storage-layer management within Oracle. This design also works well in either a physically-booted or virtualized environment.
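To illustrate how this layout might be presented to the database hosts over NFS v3 (the Data Mover name, export names, and mount points below are hypothetical, and this is a sketch rather than a definitive recommendation), the mounts could look something like this, using the mount options commonly cited in Oracle's guidance for database files on Linux NFS:

```
# /etc/fstab sketch -- hypothetical server name, exports, and mount points
# One mount point per pool; options per common Oracle-on-Linux-NFS guidance
vnx-dm2:/tier1_pool  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
vnx-dm2:/tier2_pool  /u03/fra      nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
vnx-dm2:/log1_pool   /u02/oralog1  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
vnx-dm2:/log2_pool   /u02/oralog2  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```

With log1_pool and log2_pool on separate mount points, Oracle's own multiplexing of the redo logs, controlfiles and CRS files gives you the soft mirroring across storage objects described above.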

I am honestly not sure what the story is on the document you identified, but I will investigate further.




Re: Oracle 11 RAC on VNX NFS v3


In Oracle 11g Release 2, Oracle ASM and Oracle Clusterware were integrated into a single set of binaries named Oracle Grid Infrastructure (GI), completely separate from the Oracle Database binaries. GI now provides all the cluster and storage services required to run an Oracle RAC database.

The CRS files, i.e. the OCR and voting files, are configured as part of the GI installation and, as you may know, the OCR contains cluster-specific data, e.g. the IP addresses and hostnames of the member nodes.

Keeping the CRS files separate maintains the logical separation of Clusterware and Database files, removing any dependency between the GI and Database installs. Secondly, keeping Clusterware and Database files separate simplifies remote replication or movement of the database to another cluster with its own GI.

If this were an ASM/block solution, these Oracle Clusterware files would reside in their own disk group.

On VNX with block, given the relatively insignificant I/O of the OCR (2-disk mirrored) and voting disks (2 separate disks), it would be possible to create an R1 group from the vault drives and create 5 fixed LUNs from that RAID group.

Also, I can see no recommendation of an "uncached mount option" for the file systems in that document. FAST Cache is disabled for the redo logs. The NFS file systems would have been mounted in line with the Oracle support document "Configure Direct NFS Client (DNFS) on Linux" [ID 762374.1].
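For reference, configuring dNFS per that note comes down to linking in the dNFS ODM library (on 11gR2, `make -f ins_rdbms.mk dnfs_on` in $ORACLE_HOME/rdbms/lib) and describing the NFS servers in an oranfstab file. A minimal sketch, with a hypothetical server name, addresses, and export/mount paths:

```
# $ORACLE_HOME/dbs/oranfstab sketch -- all names and addresses hypothetical
server: vnx-dm2
path: 192.168.10.21
local: 192.168.10.11
export: /tier1_pool  mount: /u02/oradata
export: /log1_pool   mount: /u02/oralog1
export: /log2_pool   mount: /u02/oralog2
```

The kernel NFS mounts stay in place as a fallback; dNFS simply takes over the I/O path for the exports it finds described here.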

Can you clarify what you mean and I will get you the answer?



Re: Oracle 11 RAC on VNX NFS v3

Hi Allan,

Thank you very much for your response. You are right: uncached is not specified in this document, but in a document presented by USPEED last quarter.

I will try to follow it up with the owner of the PPT.

One last point: we are currently not using Direct NFS, because the customer's DBA is neither ready nor willing to make any changes to his setup; this objection is yet to be handled.

