
Solved!



March 17th, 2019 04:00

Dell PowerEdge R810 Server SSD Configuration

Hello community,
I'd like to ask for your assistance.
Please forgive me if this is a 'silly' question.
I have a Dell PowerEdge R810 Server for my home lab and VMware ESXi 6.5.
I would like to install six SSD drives 2.5 inch 500GB or 1TB each and take advantage of all six disks for my VMs.
I need all disks running at 6Gbps. As far as I know, I need to purchase the Dell PERC H700 RAID controller.
At this point I don't need RAID configuration for safety/redundancy. I just need to take advantage of all disk space available for VMs.
My question is how to configure the server, what extra hardware to purchase, and where to install the ESXi software (e.g. on a partition, leaving the rest of the space for the datastore), so that I have as much space as possible available for datastores.
I greatly appreciate your help.
Thank you in advance.

2.9K Posts

March 18th, 2019 07:00

You can make RAID 0 volumes with only 1 drive. 

The IDSDM is a PCI device that lets you add a pair of SD cards to the host, typically used for booting and for crash logging for ESXi. These are normally kept in a mirrored state, so only one SD card is really needed for that option.

ESXi should support being installed to a USB drive, but I don't see an internal port on the motherboard diagram. If there isn't an internal port, your drive would be exposed at the front or rear, so I wouldn't really recommend it.

The H700 should be an excellent controller choice.

2.9K Posts

March 18th, 2019 08:00

I have 1TB drives in mine and I'm having no problems. They're not certified, but I can't remember the make and model. 

As for the Samsung EVO drives, they may work, they may not. I've seen multiple cases of the EVOs and PERCs not getting along across multiple generations. That having been said, I have seen them work a few times. Being in support, we generally only talk to people when something isn't working. It may be the case that they commonly work. I wish I could give you a better answer, but the best I can do is probably: If you can get a good deal on them from a place with a good return policy, it might be worth trying. As for capacity, the largest capacity SSD I see in our validated list is nearly 4TB, so the capacity you're looking at should not be an issue.

5 Posts

March 18th, 2019 07:00

Thank you for your reply and assistance.

If I use RAID 0 mode, do I have to use two disks to create a volume, or can I use each physical disk individually to make maximum use of the available disk space?

If I'm not taking up too much of your time, can you please tell me what the Internal Dual SD Module is?

Can I use a USB stick instead?

I'm thinking of purchasing the Dell H700 controller for 6Gbps speeds. Is this OK?

Thank you again.

5 Posts

March 18th, 2019 07:00

(If I'm not taking up too much of your time)

From your experience with your server (which is very similar to the R810), could I use consumer SSD drives (e.g. Samsung EVO) instead of the Pro series or Dell-certified drives to reduce costs?

Can I install a 1TB drive, or is there a 500GB maximum?

Once again thank you so much.

5 Posts

March 18th, 2019 07:00

Thank you for your assistance. It really helped me.

2.9K Posts

March 18th, 2019 07:00

Hello,

I also have an 11G server at home for an ESXi lab, it's an R710, though.

Anyway, one fairly common solution is to create a single-drive RAID 0 volume for each individual drive. It performs essentially the same as straight pass-through. If your controller supports pass-through or HBA mode, that passes the disks straight through to the OS by default.

As for where to install the OS, you might consider installing to an SD card using an IDSDM (Internal Dual SD Module). This would let you keep all your other storage completely dedicated to the VMs.  
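For reference, here is a rough sketch of what creating those single-drive RAID 0 volumes could look like from the command line. The PERC H700 is an LSI-based controller, so the MegaCLI utility should be able to manage it; the adapter number below is an assumption, and the same volumes can just as easily be created from the Ctrl+R controller BIOS at boot:

```shell
# List physical drives to find their enclosure/slot addresses
MegaCli64 -PDList -aALL | grep -E "Enclosure Device ID|Slot Number"

# Create one single-drive RAID 0 virtual disk per unconfigured drive
# on adapter 0 (write-back cache, read-ahead, direct I/O)
MegaCli64 -CfgEachDskRaid0 WB RA Direct -a0

# Verify the resulting virtual disks
MegaCli64 -LDInfo -Lall -a0
```

Each virtual disk then shows up to ESXi as its own device, so all six SSDs can be added as datastores individually.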

2.9K Posts

March 18th, 2019 07:00

Glad to hear it! Feel free to bump the thread if you have any other questions.

5 Posts

March 18th, 2019 08:00

Once again thank you. You are very kind. Your answers and advice helped me a lot.

July 1st, 2020 13:00

Hi,

Is it possible to initialize an NVMe disk connected to a PCIe adapter without any PERC at all?

I bought an R810 from eBay, and it seems my PERC 800 controller died. I also don't have any other drive installed, and the boot menus and iDRAC offer no option to create and initialize a new system volume.

I am looking to run ESXi on the server for a Cisco lab. Thanks a lot!

Moderator


3.1K Posts

July 2nd, 2020 02:00

Hi,

 

I'm not sure I'm understanding you correctly, but are you installing another PCIe adapter to communicate with the drive, and would like to initialize the drives?

 

Well, if so, that will depend on whether the card is compatible with the server; if it isn't, iDRAC and OMSA won't be able to communicate with the card. You will need to access the card's BIOS and use its proprietary system to initialize the drives.

July 2nd, 2020 06:00

Thanks for the reply! The thing is, I am not positive that what I want to do is possible. I have a single storage device: the NVMe SSD installed in a PCIe slot. No SATA at all. I was able to run the installer and install ESXi onto that disk, but it will never boot from there.

 
In the meantime, I was able to install ESXi on the iDRAC SD card as well, boot from there, and then (!) attach the NVMe storage card and deploy my VMs on it.
 
I hope I've explained my goal. I am not sure whether it would be a viable solution to get a PERC 700 for internal RAID, access Ctrl+R to create a virtual RAID 0, and run the server with basically a single disk. I would prefer not to install the main OS on the iDRAC SD card.
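In case it helps anyone, once ESXi is booted from the SD card, the NVMe device can usually be turned into a datastore either from the vSphere client or from the ESXi shell. A rough sketch of the shell route (the device name and end sector below are placeholders, and the partition GUID is the standard VMFS type GUID; ESXi 6.5 or later assumed for VMFS6):

```shell
# Find the NVMe device identifier (something like t10.NVMe____Samsung...)
esxcli storage core device list | grep -i nvme

# Create a GPT label and a single VMFS partition on the device
# (replace the device name and the end sector with your own values)
partedUtil mklabel /vmfs/devices/disks/t10.NVMe____EXAMPLE gpt
partedUtil setptbl /vmfs/devices/disks/t10.NVMe____EXAMPLE gpt \
  "1 2048 976773134 AA31E02A400F11DB9590000C2911D1B8 0"

# Format the partition as VMFS6 and give the datastore a label
vmkfstools -C vmfs6 -S nvme_datastore /vmfs/devices/disks/t10.NVMe____EXAMPLE:1
```

Doing it through the vSphere client's "New Datastore" wizard accomplishes the same thing without the manual partitioning step.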

Moderator


3.1K Posts

July 2nd, 2020 22:00

Hi,

 

This generation of server is unable to boot from PCIe; only the latest generation can.

 

Based on the documentation, the H700 is able to support a single-drive RAID 0 configuration.

 

Do let me know if you have any other questions.

July 3rd, 2020 00:00

Thanks a lot. I really appreciate that! That's all I needed to know about NVMe. So basically Gen 11 does not support booting an OS from it, but it works fine as a VMFS storage disk for my VMs. Booting is possible from the dual SD module or even from the iDRAC SD slot. That works just fine for a virtual lab host; I don't need any mechanical disks for storage.
