January 31st, 2013 07:00

ESX HYPERVISOR ON SD CARD

Hello, 

I have a Dell R720xd server with a RAID array and one SD card slot.

Is it a better choice to install ESXi on the SD card, or to place it on the SAS RAID array?

What are the pros and cons?

9.3K Posts

January 31st, 2013 10:00

Are you referring to the single SD card on the back of the server (part of the Lifecycle Controller), or do you have a single internal SD card (which requires opening up the cover to see it)?

The SD card from the Lifecycle Controller is for a manual one-time boot and cannot be selected as a permanent boot device, so you'd have to use the hard drives anyway.

However, if you have the internal SD card, my personal preference is to use the hard drives, assuming you aren't planning to remove them to use elsewhere.

The hard drives can be hot-swapped and are probably set up in a redundant RAID type, unlike a single SD card.

Moderator • 6.2K Posts

January 31st, 2013 10:00

Hello arik010

The main reason VMware is installed on USB/SD drives is that you are not supposed to install it on the same physical disks that the datastores are on. Because of the extremely small installation footprint ESXi requires, it is a huge waste of space to install it on an HDD/SSD. Aside from the space saving there is not much benefit. Here is a quote from the 5.1 install guide:

 Due to the I/O sensitivity of USB and SD devices, the installer does not create a scratch partition on these devices. As such, there is no tangible benefit to using large USB/SD devices as ESXi uses only the first 1GB. When installing on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation. 
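As a rough sketch of that last step (the datastore name and folder below are placeholders, so adjust for your environment), /scratch can be repointed at a persistent datastore from the ESXi shell:

# create a persistent scratch folder on a datastore (names are placeholders)
mkdir /vmfs/volumes/datastore1/.locker-host01
# point /scratch at it; the change takes effect after a host reboot
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker-host01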

Thanks

Moderator • 6.2K Posts

January 31st, 2013 14:00

Is there any specific reason that I am not supposed to install ESXi on the same array as the data?

Performance. All virtualization configurations I am aware of recommend that the VMs be on separate drives from the host operating system.

http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-FF4F7C0F-FDED-4256-8331-037EC5A91A22.html

Disk location

Place all data that your virtual machines use on physical disks allocated specifically to virtual machines. Performance is better when you do not place your virtual machines on the disk containing the ESXi boot image. Use physical disks that are large enough to hold disk images that all the virtual machines use.

33 Posts

January 31st, 2013 14:00

Is there any specific reason that I am not supposed to install ESXi on the same array as the data?

33 Posts

January 31st, 2013 15:00

Thanks for the reference!

Since I don't have any redundant SD card device, my only option is to use the two small 2.5" bays for the ESXi boot partition.

I'm still not sure how that will give me any performance gain, since the ESXi partition doesn't require much I/O (given that it can be placed on slow SD cards). Is it worth the extra $600 to go for dedicated RAID 1 2x15K SAS disks for ESXi?

Moderator • 6.2K Posts

January 31st, 2013 16:00

I'm still not sure how that will give me any performance gain, since the ESXi partition doesn't require much I/O (given that it can be placed on slow SD cards). Is it worth the extra $600 to go for dedicated RAID 1 2x15K SAS disks for ESXi?

The host OS does have a lot of I/O, but it mostly loads into memory, so you don't get a lot of reads/writes to the disk it is on. I wasn't able to find a document that discusses the I/O on the host OS in detail. The scratch will have a fair amount of I/O, so if you are going to have the host and scratch on the disks then it may be worth it to put them on 15K drives. You may not see much performance difference between 15K and 7,200 RPM drives with the host and scratch. The big thing is not having the requests queued up on the same drives that the data is on.

If performance is at a premium, then I would go with the 15K drives. If a millisecond or two of delay is okay, then I would go with slower drives. I was not able to find any definitive information, but I don't think you will be able to tell any difference from spindle speed unless you are close to maxing out the server.

Thanks

33 Posts

February 1st, 2013 01:00

If the host install disk has a lot of I/O, why do VMware and Dell allow installing it on a slow SD card in many scenarios?

Moderator • 6.2K Posts

February 1st, 2013 09:00

If the host install disk has a lot of I/O, why do VMware and Dell allow installing it on a slow SD card in many scenarios?

The host OS does have a lot of I/O, but it mostly loads into memory, so you don't get a lot of reads/writes to the disk it is on.

You are not impacted by the I/O because it is in memory. The I/O requests that would go to the disk are served from memory instead.
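If you want to see this for yourself, the in-memory filesystems the host is using can be listed from the ESXi shell on 5.x (treat this as a sketch; the command namespace can vary slightly by build):

# list the ramdisks that back the running host
esxcli system visorfs ramdisk list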

33 Posts

February 2nd, 2013 00:00

Thanks, now I understand you.

I mentioned the host OS, but I meant the host OS plus the system scratch/swap space.

I still cannot find any documentation on how much I/O, if any, the scratch partition is using.

I know it is only used for logs and so on... I didn't find whether there are any serious I/O demands that could justify the purchase of dedicated RAID 1 disks.

33 Posts

February 2nd, 2013 04:00

Finally found this:

blogs.vmware.com/.../scratch

5. Are there performance considerations when choosing a location for the scratch partition?

"Performance of the scratch partition is not critical.  Scratch is used for log files, core dumps, support log bundles, and as a temporary working space.  It’s not imperative that the scratch partition be on a high performing device.  The concern is to have scratch on a reliable, persistent location."

Seems like there is almost no disk activity with the scratch and OS partitions.

If there are any other considerations or documentation on that, I will be happy to hear about it.
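For anyone who wants to check where /scratch currently points on their own host (for example, whether it ended up on the ramdisk or on a datastore), something like this from the ESXi shell should show it; treat the exact option name as an assumption on my part:

# show the scratch location currently in use
esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation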

1 Message

June 7th, 2013 01:00

Hello, I too am very confused about this question. Currently I have my ESXi installed on an SD card. However:

1. SD cards have a maximum number of read/write cycles; depending on the manufacturer of the card it can be between 10K writes and 100K reads (roughly 5 years by my estimate). So the card is going to stop working after an extended period of time - FACT. It has no redundancy or backup like a RAID array, so over a long enough period of time you are going to have a problem with it.

2. One of the benefits of using a RAID controller card is that you can separate your array into many different logical partitions. So in effect you could have one 4GB partition for your ESXi to live on and allocate the rest to storage. You wouldn't be losing any space, only gaining redundancy from the RAID set - and speed as well.

Any thoughts welcome, as I'm strongly considering changing back to storing ESXi on the RAID array unless anyone can make a good counter-argument.

4 Posts

December 15th, 2015 14:00

When ESXi boots from an SD card, there are no writes; it just loads into memory. This is the preferred method for VSAN configurations because, theoretically (and I say "theoretically" because the scratch/syslog config still needs to be local), VSAN will claim all of the local disks for communal usage. I have a little blog post on my workaround and method of implementation.
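For the syslog half of that, the gist (the datastore path below is just a placeholder, so treat this as a sketch rather than the exact steps from my post) is to point the log directory at persistent storage from the ESXi shell:

# send host logs to a persistent datastore folder (placeholder path)
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs/host01
# apply the new syslog configuration
esxcli system syslog reload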

Most of the Dell implementations use dual internal SD cards; the flash SD slot on the outside rear of the server is not meant for an OS install.

Also, the hypervisor is very small, like 300 MB. Even though my SD card is 16 GB, I think I'm only using a small portion of it.

The other "best practice" from VMware is to send the coredump to the SD card. I think this is so that if your RAID fails or something, you can still get to the SD card to retrieve the coredump.
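If it helps, the coredump target can be inspected and switched from the ESXi shell; the partition name below is only a placeholder for the SD card's diagnostic partition on your own host:

# show configured and active coredump partitions
esxcli system coredump partition list
# point the coredump at a specific partition and enable it (placeholder device name)
esxcli system coredump partition set --partition=mpx.vmhba32:C0:T0:L0:7
esxcli system coredump partition set --enable=true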


Scratch/Syslog Config 

http://vmwwarevsansoupnuts.blogspot.com/2015/12/vmware-55-scratchconfig-syslog-etc.htm

Coredump config

http://vmwwarevsansoupnuts.blogspot.com/2015/12/so-i-ran-into-issue-recently-with.html

In regard to your specific question, I would definitely set up the coredump to go to SD. If you have local disk space for the hypervisor install, I think it's really a matter of preference; plus, if you are the one maintaining it, go with what you want to maintain for the future.
