dynamox
2 Intern
20.4K Posts
Oracle 11g with ASM - disk layout strategy
Hello folks,
We are looking at starting to virtualize our Oracle 11g systems. Some systems are RAC and some are standalone. Currently we use ASM with ASMLib to manage disks. I am trying to get some feedback on what other folks are doing in terms of their disk strategy. Here are my questions:
1) Do you dedicate datastores for data files and a separate datastore for log files?
2) Do you use paravirtualized disks?
3) Do you simply present a big vmdk file and then fdisk it and "oracleasm createdisk", or do you use LVM and then "oracleasm createdisk"?
4) What other questions should I be asking?
I've found some random papers online but nothing specific.
Thank you
jeff_browning
256 Posts
October 27th, 2011 12:00
Although some crosstalk is inevitable, I should point out that a more appropriate forum for this question would be the Everything Oracle at EMC community, of which Sam Lucido and I are co-administrators. Having said that, when we become aware of a question on another forum such as this one, we are happy to respond.
I would say that the answer by Alan Robertson, above, is very helpful. As Alan points out, in an ASM context there are a couple of high-level architectural decisions:
You have another option as well: NFS using Direct NFS. You indicated that you want to use ASM, so I will not cover that further here.
All of these options, as well as some detailed best practices and recommendations regarding storage layout can be found in the Virtualizing Oracle 11g RAC on vSphere technical session that Sam and I co-presented at VMworld 2011. Feel free to take a look at this presentation, and please let Sam or me know if you have any questions on the content.
LouisLu
161 Posts
October 25th, 2011 18:00
Dynamox,
We are not sure whether customers have Data Guard protection in their environment. If so, RAC-to-RAC and standalone-to-standalone failover should be done respectively. We have taken care of similar cases before.
Louis
dynamox
2 Intern
20.4K Posts
October 25th, 2011 19:00
Louis,
What about the questions I asked? Any wisdom to share?
LouisLu
161 Posts
October 25th, 2011 20:00
The answer to your #1 question is yes. Generally speaking, different datastores can satisfy the different performance requirements of different tiers.
I do not fully understand what your #2 question about paravirtualized disks means.
Your #3 question suggests to me that fdisk or LVM will add workload at the OS layer. Why not directly provide more VMDKs for the different partitions?
Louis
reseach
225 Posts
October 25th, 2011 20:00
Dynamox, I have some thoughts I'd like to share with you. They follow your questions, each starting with "A:".
1) do you dedicate datastores for data files and separate datastore for log files ?
A: I suggest leveraging separate datastores to hold the Oracle data files, because an Oracle instance behaves differently against each file type, which means the I/O profiles are different. Generally speaking: Oracle data files like RAID 5 or RAID 6 FC/SAS, redo logs like RAID 1/RAID 10 FC/SAS, archived logs like RAID 6 NL-SAS. If you would use FAST Cache to improve Oracle performance, have a look at Oracle Database on EMC VNX Best Practices: http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h8242-deploying-oracle-vnx-wp.pdf
2) do you use paravirtualized disk ?
A: This depends on whether you favor Oracle instance mobility or performance. A paravirtualized disk / RDM would stick the Oracle instance to a physical ESX server, and using a paravirtualized disk / RDM would give roughly a 10% Oracle performance improvement; if you have deployed FAST Cache, the difference is small.
3) do you simply present a big vmdk file and then fdisk it and "oracleasm createdisk" or do you use LVM and then "oracleasm createdisk"
A: I prefer "a big vmdk file", because VMFS would span all the spindles connected to the ESX servers, and the co-existence of VMFS striping and ASM striping is a bit duplicated.
Your thoughts?
Eddy
dynamox
2 Intern
20.4K Posts
October 25th, 2011 20:00
Paravirtual adapters:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010398
3) Yes, I present vmdk files to the host, but what do you do with them in the OS itself?
dynamox
2 Intern
20.4K Posts
October 25th, 2011 21:00
Eddy,
1) We only have one pool on VNX, with EFD and SAS drives in a RAID 5 config. I am not going to put NL-SAS in the same pool until EMC allows pool configurations with different RAID types (I am not using anything but RAID 6 for NL-SAS).
2) What do you mean by mobility? Are you saying I can't vMotion/Storage vMotion paravirtual controllers? News to me.
3) The question is what you do with vmdk devices presented to the Oracle box: do you fdisk that vmdk file into multiple partitions, or do you use LVM? We are going to use ASM regardless.
reseach
225 Posts
October 26th, 2011 02:00
Dynamo,
Please have a look at my thoughts following your questions.
1) we only have one pool on VNX with EFD and SAS drives in raid-5 config. I am not going to put NL_SAS in the same pool until EMC allows pool configuration with different raid types (i am not using anythign but R6 for NL-SAS)
A: According to the whitepaper I referred you to, a RAID 5 SAS pool with FAST Cache/EFD enabled is configured for Oracle data files for better performance, a RAID 1 (2+2) SAS pool is configured for Oracle online redo log files, and NL-SAS is considered for archived log files.
I suggest that devices from each VNX pool are mapped to a separate ASM disk group, i.e. devices from the RAID 5 SAS pool with FAST Cache go to the ASM disk group for data files, etc.
2) what do you mean mobility ? Are you saying i can't VMotion/sVMotion paravirtual controllers ? News to me
A: The processor is paravirtualized within VMware (that's why you can't vMotion between too-different processors); the drivers are not. If you use paravirtualized drivers (for SCSI, NIC, etc.) you will be fine.
3) the question is what you do with vmdk devices presented to the Oracle box, do you fdisk that vmdk file into multiple partitions or do you use lvm. We are going to use ASM regardless.
A: ASM manages the devices presented to the VM guest. A single vmdk is a SCSI device in the VM guest, so from that perspective no fdisk action is required; but so far I don't have any whitepaper talking about best practices for ASM on VMFS.
Thanks,
Eddy
dba_hba
63 Posts
October 26th, 2011 03:00
Dynamox,
we are looking at starting virtualizing our Oracle 11g systems. Some systems are RAC and some are standalone.
Currently we use ASM with ASMLib to manage disk. I am trying to get some feedback on what other folks are doing in terms of their disk strategy. Here are my questions :
I work in the team that builds, validates and produces many of EMC's Oracle white papers. I have put my answers and links inline.
1) do you dedicate datastores for data files and separate datastore for log files ?
This is an older paper, and things have moved on, but it shows how we:
P2V'd a database and server
Laid out ASM on VMFS for performance and compared it to RDMs
Configured ASM, including using fdisk with a 64KB offset on the partition, as ASM is a volume manager
Let me know if you do not have access.
http://powerlink.emc.com/km/live1/en_US/Offering_Basics/White_Paper/h6502-virtual-infrastructure-oracle-v-max-vsphere-psg.pdf
As far as I know, EMC does not have an official position on single vs. multiple datastores. If you are using vmdks on VMFS, then I would recommend separate datastores.
2) do you use paravirtualized disk ?
VMware recommends using the paravirtualized SCSI controller; we used it, without issue, for our first "Virtualized RAC" paper.
Once again it uses RDMs, but it also details general guidelines:
http://www.emc.com/collateral/hardware/white-papers/h8123-oracle-rac-symmetrix-fast-vp-vsphere-wp.pdf
The following papers build out RAC on VNX:
http://www.emc.com/collateral/hardware/white-papers/h8124-oracle-e-business-unified-storage-vmware-wp.pdf
https://community.emc.com/docs/DOC-12147 (It is on NFS but the rules for the underlying configuration still apply)
I believe that Eddy has already pointed you to Deploying Oracle Database Applications on EMC VNX Unified Storage
http://powerlink.emc.com/km/live1//en_US/Offering_Technical/White_Paper/h8242-deploying-oracle-vnx-wp.pdf
3) do you simply present a big vmdk file and then fdisk it and "oracleasm createdisk" or do you use LVM and then "oracleasm createdisk"
If you are using ASM, then you want multiple LUNs in your disk group, so that ASM can balance I/O and to prevent LUN queuing on the guest.
As above, ASM is a volume manager, so I would use fdisk with a 64KB offset on the partition. If you want to grow your disk group, add another LUN.
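For what it's worth, that 64KB offset works out to 128 sectors on a 512-byte-sector disk. A rough sketch of the alignment arithmetic, with the actual provisioning commands left commented out (the device `/dev/sdb` and the disk name `DATA01` are hypothetical; adjust for your environment):

```shell
#!/bin/sh
# Hypothetical device and ASM disk name; adjust for your environment.
DEV=/dev/sdb

# Express the 64 KB alignment in 512-byte sectors.
SECTOR_BYTES=512
OFFSET_BYTES=$((64 * 1024))
OFFSET_SECTORS=$((OFFSET_BYTES / SECTOR_BYTES))
echo "start the partition at sector ${OFFSET_SECTORS}"

# The provisioning itself (requires root and a real disk, so commented out):
# printf 'n\np\n1\n%s\n\nw\n' "${OFFSET_SECTORS}" | fdisk "${DEV}"
# /usr/sbin/oracleasm createdisk DATA01 "${DEV}1"
```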
4) what other questions should i be asking ?
Not an exhaustive list, but some things I think you should consider:
What size of cluster is being used, and what CPU/core count? (Oracle licensing.)
RDM or VMFS: effects on backup and replication.
Migration to/from physical: downtime and mechanisms.
Build your VM template from a fresh guest install or VMware Converter?
You don't mention the OS: RedHat vs. SuSE. VMware has an OEM deal offering licensing and support for SuSE on VMware with certain bundles.
http://blogs.vmware.com/vsphere/2011/02/suse-linux-enterprise-server-for-vmware-now-available-to-many-more-vmware-vsphere-customers-.html
Currently, I have only tested RAC with RDMs (no vMotion, since SCSI bus sharing must be set to physical) and NFS.
However, if you want to use Oracle RAC with vMotion/DRS, then you can now use vmdks on VMFS thanks to the multi-writer flag that enables sharing at the disk level; see VMware KB 1034165 and probably a future EMC whitepaper on the topic.
http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1034165&sliceId=1&docTypeID=DT_KB_1_1&dialogID=182416168&stateId=1%200%20182414901
Here are the main public EMC landing pages for virtualizing Oracle:
http://www.emc.com/solutions/samples/oracle/virtualization-oracle.htm
http://www.emc.com/solutions/application-environment/oracle/oracle-virtualization-vmware.htm
Lastly, we have a large pool of Oracle SMEs out there in the field who are able to do a deep dive on this topic with you.
Hope this helps.
Allan
dynamox
2 Intern
20.4K Posts
October 26th, 2011 03:00
Thanks Eddy. I don't feel right about having a one-to-one relationship between a vmdk and an ASM disk. If my standard is to present a 100G vmdk file to the host, I would prefer to partition it so there would be more queues in the OS, not just one big 100G device. You know what I mean?
reseach
225 Posts
October 27th, 2011 03:00
Dynamox,
From my perspective, the unit ASM manages is a disk recognized by the OS after scanning the SCSI bus; in other words, it is a LUN presented by the storage.
Here are some ASM commands for your reference:
-- Add disks.
ALTER DISKGROUP disk_group_1 ADD DISK
'/devices/disk*3',
'/devices/disk*4';
-- Drop a disk.
ALTER DISKGROUP disk_group_1 DROP DISK diska2;
And once a VMDK file is attached to a VM guest, it is a disk, which can be managed by ASM; it should not be used for anything else after being associated with an ASM disk group.
In your case, with a 100G VMDK attached to the Oracle box, after the VM starts ASM will see a 100G disk.
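On the queue point specifically: in Linux the request queue belongs to the block device, not the partition, so fdisking one big vmdk into pieces does not add queues, while presenting several smaller vmdks does. A rough back-of-envelope, assuming the pvscsi default of 64 outstanding I/Os per device (that number is configurable, so verify it in your own environment):

```shell
#!/bin/sh
# Assumed per-device queue depth; pvscsi defaults to 64, but check locally.
PER_DEVICE_DEPTH=64

# One 100G vmdk = one SCSI device = one request queue.
ONE_BIG=$((1 * PER_DEVICE_DEPTH))
# Four 25G vmdks = four SCSI devices = four request queues.
FOUR_SMALL=$((4 * PER_DEVICE_DEPTH))

echo "1 x 100G vmdk : ${ONE_BIG} outstanding I/Os"
echo "4 x 25G vmdks : ${FOUR_SMALL} outstanding I/Os"
```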
Thoughts?
Eddy
dynamox
2 Intern
20.4K Posts
October 27th, 2011 12:00
Thanks Jeff, I downloaded that presentation yesterday, very helpful. This is my issue with using RDM LUNs: even in vSphere 5.0 the limit is still 256 LUNs. My current physical Oracle boxes have a lot more than 256 LUNs. The presentation says to use fewer, bigger LUNs; I don't like that strategy, at least not on my physical boxes, where I want to give my OS a lot of disk queues.
Let's forget about the SRDF/TimeFinder requirement; let's say my DBAs will use RMAN to clone their databases. What do you see customers do in terms of datastore sizing, vmdk sizing, and LVM management once the vmdk is presented to the host?
Thanks
jeff_browning
256 Posts
October 28th, 2011 09:00
Wow! That's a lot of RDMs. Did you attend the VMworld Europe event by any chance? I ran into another customer who hit the 256 limit on RDMs and came up after my technical session there to discuss options. I thought that might be you.
At any rate, if you are using RMAN for cloning and such and are willing to live with the limitations of VMFS in terms of RAC nodes, then I would say there is certainly no compelling reason to do anything else. Certainly, the performance overhead of VMFS in a vSphere 5 context is tiny, very low single digit percent from the testing that I have seen. Not enough to worry about in other words. For a purely virtualized setup like you are describing, I have shifted my recommendation away from RDM in the direction of VMFS.
In terms of the physical layout, the following are points to keep in mind:
Please let us know if you need further help. That's what we're here for.
dynamox
2 Intern
20.4K Posts
November 1st, 2011 11:00
These are not RDMs; these are physical servers that we will be trying to virtualize at some point.
I don't know about RAID 10; RAID 5 might be sufficient for redo/archive logs. VNX does a very good job with large-block sequential I/O: it will be written as a full-stripe write, causing even less write penalty than RAID 10.
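The penalty arithmetic behind that claim, sketched with illustrative numbers for a 4+1 RAID 5 stripe (not VNX-specific):

```shell
#!/bin/sh
# Small random write on RAID 5: read data + read parity + write data +
# write parity = 4 back-end I/Os per host write.
R5_RANDOM_PENALTY=4

# RAID 10: each host write goes to both mirror members = 2 back-end I/Os.
R10_PENALTY=2

# Full-stripe write on 4+1 RAID 5: parity is computed from the new data,
# so 4 data chunks + 1 parity chunk = 5 back-end I/Os for 4 chunks of
# host data. Scaled by 100 to stay in integer arithmetic (125 = 1.25x).
DATA_DISKS=4
FULL_STRIPE_X100=$(( (DATA_DISKS + 1) * 100 / DATA_DISKS ))

echo "RAID 5 random write : ${R5_RANDOM_PENALTY}x"
echo "RAID 10 write       : ${R10_PENALTY}x"
echo "RAID 5 full stripe  : 1.$((FULL_STRIPE_X100 - 100))x"
```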
dynamox
2 Intern
20.4K Posts
November 1st, 2011 20:00
Did you have anything specific in mind? The redo log example uses 4+1 RAID 5.