October 23rd, 2015 10:00
Samsung SSDs in Poweredge with Windows Storage Spaces
We are planning to build a Hyper-V server to host a number of VMs. I would like to buy a used/refurbished PowerEdge (prob a 710 or 720) and put in some Samsung 850 Pros or EVOs. With 64GB or more of RAM and two 6-core or 8-core processors we should have a server that can handle quite a few VMs.
I have seen a lot of different information about what is supported on which Dell PERC cards and wanted to see what the current landscape is, because I know there have been firmware changes that have affected some of the SSD-related issues. I looked at a similar project in the past, but that was when the Dell firmware did not “play nice” with 3rd party drives. I also need some recommendations on which PERC to get. The 710 options are PERC 6/i or H700. The 720 has 3 options: H310, H710, and H710P. If I had a lot more money I would look at a PE 730, which has the newer H730 PERC available…
I like the idea of using Storage Spaces and letting Windows see the raw SSDs to better manage SSD firmware updates, SMART reporting, support for TRIM, etc. I know SSDs can be “safely” used in RAID arrays at the PERC level by over-provisioning them, but I have used Storage Spaces on another server and like some of the features, such as more flexibility when adding more storage. So to me SS seems a better solution for SSDs in a Windows server environment.
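For reference, building a pool and a mirrored space from raw disks is only a few PowerShell cmdlets. This is just a sketch of what I have in mind; the pool and disk names are made up, and it assumes the PERC actually exposes the SSDs as individual physical disks to the OS:

```powershell
# List disks that are eligible for pooling (i.e., the controller passes them through)
Get-PhysicalDisk -CanPool $true

# Create a pool from all poolable disks ("SSDPool" is just an example name)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SSDPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Carve out a two-way mirrored virtual disk using all available capacity
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "VMStore" `
    -ResiliencySettingName Mirror -UseMaximumSize
```

If the drives show up with `CanPool = $false`, that usually means the controller is still presenting them as virtual disks rather than passing them through.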
Some concerns:
- Older PERCs did not support TRIM. I have seen some firmware release notes that list adding TRIM support but there was no detail. Was this in RAID mode? Non-raid mode?
- What is the best mode to use with the PERC? I have seen reference to “non-raid” and also “pass-through” and also “HBA” and also “JBOD”. Are these all the same? If not what are the differences? I have also seen people create single drive RAID 0 arrays to let the OS see each drive but then the OS does not have direct access to the drive itself as it has to go through the abstracted VD layer.
- PERC caching. Obviously better performance is desired. I have read for some PERCs the cache only is used with RAID mode VDs. So if you do any kind of “pass-through” mode the cache is not used. Is this still true with newer firmware and newer PERC models? Is this just something that Dell hasn’t completed yet or is there a fundamental reason the cache cannot be used this way?
- OS drives vs data drives. Windows Server cannot boot from Storage Spaces, so I would need different physical drives for that. The Hyper-V OS is tiny; I have even run it from an 8GB flash drive on test systems. Ideally I could create a RAID 1 mirror of two cheap drives like 36GB 10K 2.5” drives. But from what I read the backplane cannot have mixed HDD and SSD, right? So I would need to put the OS on additional SSD drives. I have two older 840 Pros lying around I could use for this.
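Whichever controller mode I end up with, I figure I can at least verify from the OS whether TRIM is enabled and whether Windows actually sees the drives as SSDs. A quick check, assuming Server 2012 R2 or later:

```powershell
# 0 = TRIM/unmap enabled, 1 = disabled (for NTFS volumes)
fsutil behavior query DisableDeleteNotify

# MediaType should show SSD if the controller is truly passing the drives through;
# single-drive RAID 0 VDs often show up as "Unspecified" instead
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, CanPool
```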
Thanks



theflash1932
October 23rd, 2015 11:00
I'll leave much of the discussion on SSDs to others who have had more experience with them, but I can tell you that OEMs have requirements for their disks so that they work seamlessly with their controllers, which lets the OEMs control the system in order to guarantee/warranty a stable and solid machine. Factory settings on retail drives are far too generic to properly communicate with specific RAID controllers, so drives are certified by either the OEM or the drive manufacturer as having been tested and validated for use with them. This is not just Dell, and it is not just OEMs - retail NAS units and controllers have lists of supported drives as well. So, in a sense, it is not the controller that doesn't play nice, it is the drive - the drive comes to the game not knowing the rules for that specific game.

That said, just because you use a generic/retail drive doesn't mean it won't work, but Dell can't guarantee it, because they haven't tested it. If you look through the forums, there are several issues, ranging from nothing at all, to drives not being recognized, to drives being recognized but randomly going offline, to the LEDs giving false status indicators. It comes down to this: use them at your own risk.
Some technical information in response to a few of your questions (assuming R7x0 servers):
Non-RAID, JBOD, and pass-through, for all intents and purposes here, all refer to the same thing - the ability of a controller to pass access through the controller to the OS, rather than managing the reads from and writes to a virtual disk made up of the managed disks. HBA generally refers to a non-RAID disk controller.
PERC 6/i:
PERC H700/H710/H730:
If you care at all about performance, I would not consider any other controller, period. Anyone who even thinks the words "SSD" or "benchmark" needs the H7x0 controllers.
You CAN mix SAS/SATA/SSD/HDD on the backplane. You CANNOT mix within a virtual disk. Most of those machines can boot to internal SD/USB though.
I don't know for sure about the use of controller cache in non-RAID mode. I would assume it is managed by the RAID functions of the controller, which disables/ignores the built-in cache on the drives in favor of the higher-capacity cache on the controller for better distribution control across member disks. In non-RAID mode, the disks' onboard cache could be utilized (and the controller probably no longer has access to properly coordinate caching for those drives). Just a guess though.
pcmeiners
October 24th, 2015 12:00
Storage Spaces can have a good showing in a 4-disk, 2-column mirror; these good results appear to be dependent on the stripe size.
A few issues I have with Storage Spaces: what happens if corruption occurs in a Storage Spaces environment, and where is the support to repair it? Another issue is that there is a good deal of reliance on disk cache being enabled in Storage Spaces; it sounds like quite a bit of corruption/damage could occur if it is enabled. Lastly, I wonder what overhead Storage Spaces places on the OS and resident programs when highly utilized.
"The stripe size is also a decisive parameter, which receives a special mention here, because it permits different assessments in a default setting compared with an optimized setting. If 64 kB stripe size for the HW RAID configuration is to be compared with 256 kB stripe size of the Storage Spaces, the Storage Spaces throughputs are almost without exception (e.g. not “Parity”) above the HW RAID results."
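For what it's worth, both the column count and the stripe size (interleave) can be set explicitly when the space is created, since the defaults are not always what you'd pick. A hypothetical 4-disk, 2-column mirror with a 256 kB interleave, roughly matching the configuration in that quote (the pool and disk names are examples only):

```powershell
# 2 columns x 2 data copies consumes all 4 disks;
# -Interleave is the per-column stripe size in bytes (262144 = 256 kB)
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "FastMirror" `
    -ResiliencySettingName Mirror -NumberOfColumns 2 -NumberOfDataCopies 2 `
    -Interleave 262144 -UseMaximumSize
```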