
December 27th, 2013 14:00

Compellent volumes RAID levels

Hi.

We are looking into buying a Dell Compellent array, but before that I'm looking into its technology.

Unfortunately the few public datasheets on the SC8000 are very brief.

I'm looking for information on how the array handles its RAID. From what I've understood, when you create a volume, some data will be placed on RAID 10, some on RAID 5, and some might end up on RAID 6 (tiering).

But how are these RAID levels defined? Are they simply made from all available disks of a certain type, say RAID 10 on SSD and RAID 6 on NL-SAS? Or are they built more like how 3PAR handles CPGs, creating mini-RAIDs out of chunks of the different disks?

HP also has a wealth of information readily available about the 3PAR arrays; I cannot find anything comparable for Compellent :(

http://h20566.www2.hp.com/portal/site/hpsc/public/psi/manualsResults?sp4ts.oid=5044394&ac.admitted=1388181376886.876444892.199480143


December 27th, 2013 14:00

OK, so really this is best done in person with someone who can demo and animate the block allocations for you.

BUT

Compellent storage is virtualized by tier, so each tier of storage (tiers are defined by disks with like I/O characteristics, such as rotational speed, flash type, etc.) uses all of the devices within it to process each I/O. Compellent allows each LUN to have multiple RAID algorithms in use at the same time. The ideal is to have write operations occur in RAID 10 (the fastest for writes) and read operations happen from RAID 5/6 (the most space efficient). RAID types do NOT reserve a specific set of physical disks; rather, they function as a method for organizing blocks from various LUNs across all of the disks within a tier.

Data Progression (tiering) extends this capability to allow volumes to exist with blocks in multiple tiers based on the age and activity of each block. The LUN is presented such that there is no difference to servers based on the placement of data, and the blocks can be shifted dynamically as criteria are met. Recently, Data Progression was expanded to allow mixing write-optimized and read-optimized SSD tiers, which changes the economics around flash arrays without resorting to using SSD as a front-end cache. Data is moved from the write-optimized drives to the read-optimized SSDs more quickly than in a normal progression, so as to keep performance levels above 100K IOPS while leveraging the cost advantages inherent to MLC drives (they are cheaper).
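To make that block-level idea concrete, here is a small toy model in Python. To be clear, this is not Compellent code; the thresholds, tier names, and page counts are made up. It only illustrates that RAID level and tier are per-block attributes of a volume rather than properties of dedicated disk groups:

```python
# Toy model only -- not Compellent's actual implementation. New writes land as
# RAID 10 in the top tier; a periodic "Data Progression" pass re-stripes and
# demotes pages based on age and activity. All thresholds here are invented.

from dataclasses import dataclass, field

@dataclass
class Page:
    tier: str            # "SSD", "15K", or "7.2K"
    raid: str            # "RAID10", "RAID5", or "RAID6"
    age_days: int = 0
    reads_last_cycle: int = 0

@dataclass
class Volume:
    name: str
    pages: list = field(default_factory=list)

def write(vol: Volume, n_pages: int) -> None:
    """New writes always land write-optimized (RAID 10) in the top tier."""
    vol.pages.extend(Page(tier="SSD", raid="RAID10") for _ in range(n_pages))

def data_progression(vol: Volume) -> None:
    """One (daily) cycle: cold pages become parity-protected and move down."""
    for p in vol.pages:
        p.age_days += 1
        if p.raid == "RAID10" and p.age_days >= 1:
            p.raid = "RAID5"                      # re-stripe read-optimized
        if p.tier == "SSD" and p.age_days >= 12 and p.reads_last_cycle == 0:
            p.tier, p.raid = "7.2K", "RAID6"      # demote untouched pages
        p.reads_last_cycle = 0                    # reset activity counter

vol = Volume("sql-data")
write(vol, n_pages=4)
for _ in range(14):
    data_progression(vol)
print({(p.tier, p.raid) for p in vol.pages})      # -> {('7.2K', 'RAID6')}
```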

http://www.dell.com/learn/us/en/qto08/shared-content~data-sheets~en/documents~dell-compellent-software-suite.pdf

One thing that really sets Compellent storage apart is that the license is NOT tied to the hardware. So an SC8000 controller can be replaced without purchasing more licensing or invalidating the original license. Licensing is perpetual and abstracted from the hardware that is being used. So if I have a Compellent I bought 5 years ago with licensing for all the features for 16 disks, I can apply that license to SC8000 controllers and SSD drives (up to 16 non-spare) and start using it.

I think if you use this site as your jump-off point - http://www.dell.com/learn/us/en/04/dcm/flash-storage - you will find plenty of information; you just have to look at the Storage Center software rather than the specific hardware building blocks.


December 27th, 2013 14:00

The first link should have been:

http://www.dellstorage.com/WorkArea/DownloadAsset.aspx?id=2717

December 28th, 2013 02:00

Thank you, Michael, for the walkthrough.

I did have a demo, but I came out a bit more confused than when I went in :)

I have a quote for an array with 6 SLC SSDs and 12 MLC SSDs, and then a third tier of either 12 x 4TB or 24 x 2TB NL-SAS. I asked for the latter myself, since I'm not sure that 12 spindles (minus hot spares etc.) will provide good performance when reading data that is not on the SSDs.

I got a bit confused, since 6 disks minus a hot spare does not make for a good RAID 10 :) but since the RAID is based on the LUN rather than the disks, I'm sure it will be OK.

Which leads me to a question: are hot spares really needed when your RAID levels are not based on disks? As long as you have available capacity, would the consistency of the volumes be OK?

Of course I prefer having a hot spare for each disk type; I just do not want to end up in EqualLogic land again, where I have 7 chassis and a total of 14 hot spares. That is a lot of wasted capacity.


December 28th, 2013 04:00

Was this configuration written by Dell directly or through a partner?


We have been doing Compellent systems for a long time; we were one of the first partners in the US. As such, let me just say that there are some design guidelines we use internally to ensure a good client experience in the short and long term. One of those guidelines is around a point you mentioned above: we do not configure a new tier unless we can put in 48 spindles (as in spinning disks). While fewer disks CAN be used, we have found that people are very unhappy if their SAN doesn't perform well. That's a guideline, and for some workloads we have bent it to allow fewer disks, such as backup or long-term archives (without end-of-year reporting requirements).

On the SSD configuration, you are talking about disks that have very high throughput, so keeping in mind that RAID is simply a method of distributing blocks, it's fine at 5 disks. Keep in mind that the way the hybrid SSD arrays work, you are always going to be writing RAID 10 to the SLC, so your daily rate of change will need to be less than or equal to 1TB. The 11 MLC drives will provide about 13TB usable (after average RAID). This is a configuration for a general-purpose SAN and not for anything workload-specific (like a million-transaction database that needs 200K IOPS sustained).
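To put rough numbers on that, here is the back-of-the-envelope version; the drive sizes and RAID efficiencies below are my assumptions for illustration, not figures from your quote:

```python
# Back-of-the-envelope capacity math for the hybrid quote above. The drive
# sizes (400 GB SLC, 1.6 TB MLC) and the RAID efficiencies are assumptions
# for illustration, not figures taken from Dell or from the quote.

def usable_tb(n_drives: int, drive_tb: float, raid_efficiency: float) -> float:
    return n_drives * drive_tb * raid_efficiency

# SLC write tier: 6 drives minus 1 spare, everything written as RAID 10 (~50%).
slc = usable_tb(5, 0.4, 0.5)    # ~1.0 TB -> the daily rate-of-change ceiling
# MLC read tier: 12 drives minus 1 spare, mostly re-striped to RAID 5.
# "After average RAID" blends RAID 10 and RAID 5 space; ~0.75 is a guess that
# lands near the ~13 TB mentioned above -- the real figure depends on the
# actual drive size and the RAID mix Data Progression settles on.
mlc = usable_tb(11, 1.6, 0.75)  # ~13.2 TB

print(f"SLC write tier: ~{slc:.1f} TB usable")
print(f"MLC read tier:  ~{mlc:.1f} TB usable")
```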

Price tends to drive configurations; you might ask about a 48 x 1TB configuration to see how that price compares to what was offered. From a Tier 3 point of view, you are in a hard place: if you have a high growth rate, then sometimes it's easier to plan for more space and push on the discounting.

Compellent best practice requires one spare per disk enclosure per disk type. This has always protected people from failures in manufacturing runs (multiple failures at the same time). Compellent can use any spare available in any enclosure if it must; it's simply a design parameter. For the SC280 (a dense enclosure with 84 x 4TB disks) there are only 4 spares for the entire enclosure. I have been pushing for new spare design parameters around SSD because of the cost of multiple spares over a set of enclosures, but as a group the Compellent folks are cautious and always side with what they can prove through data from the field. As a side note, I have found that most storage companies have a similar requirement of a spare per enclosure, since it is the safest mathematically when systems can vary in size and number of enclosures.

December 28th, 2013 10:00

It was given directly by the Danish Dell division.

I will say, though, that my request for a design and offer was more a question of getting pricing indications from Dell. I am sure there will be a more exhaustive specification when we are nearer the final configuration.

I did provide Dell with the following, though:

A DPACK, which showed our 99th percentile to be around 4,500 IOPS, our 95th percentile to be around 2,500, and a peak of 8,500 IOPS.

So all in all, our environment is not super heavily loaded or anything of that kind. We have currently allocated around 26TB in total, and that is why I asked for an offer on a 40TB usable configuration, since my trending over the last 5 years shows that is about where I will end up.

I asked Dell to spec me a system with either 3 tiers (10% SSD, 30% 15K, and 7.2K for the rest) or a mixture of SSDs and NL-SAS, which they did.

I am worried about the disk performance, since a large amount of our data will be placed on slow media, and I reckon you're right: I should probably do some sort of examination of how much of our data is touched on a daily basis. I just do not know how to even get those statistics from our EqualLogic environment.

Thanks a bunch for the answers. They've given me something to think about.


December 28th, 2013 11:00

Dell Direct will generally use the DPACK data to generate a configuration and quote, especially if it is a general environment. The DPACK would also indicate the percentage of reads and writes, which would be taken into account. If you do differential backups, you can always look at the total backed up as an indicator of how much data changes. EQL snapshots and Compellent Replays work in a way where you can look at their size (if you take them daily) to get an idea of change per day.

At 40TB usable with a peak of 8,500 IOPS, you might be better off with a spindle configuration.

I would typically do that system as a 15K / 7.2K mix (48 x 300GB 15K, 48 x 1TB 7.2K); that would meet your peak requirements and is very price-efficient.
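For what it's worth, here is the kind of rule-of-thumb math behind a spindle count like that. The per-drive IOPS figures, the 70/30 read/write split, and the RAID write penalties are generic planning assumptions, not your DPACK numbers or Dell figures:

```python
# Rule-of-thumb sizing sketch for the 48 x 15K / 48 x 7.2K mix. Per-drive IOPS
# (180 for 15K, 75 for 7.2K), the 70/30 read/write split, and the RAID write
# penalties are generic planning assumptions, not DPACK data or Dell figures.

def frontend_iops(drives: int, per_drive: int, read_pct: float, write_penalty: int) -> float:
    backend = drives * per_drive
    return backend / (read_pct + (1 - read_pct) * write_penalty)

tier1 = frontend_iops(48, 180, read_pct=0.7, write_penalty=2)   # 15K, writes in RAID 10
tier3 = frontend_iops(48, 75,  read_pct=0.7, write_penalty=6)   # 7.2K, cold data in RAID 6

print(f"15K tier:  ~{tier1:.0f} front-end IOPS")   # ~6,600
print(f"7.2K tier: ~{tier3:.0f} front-end IOPS")   # ~1,400
# Together (plus controller write cache absorbing bursts) that is in the same
# ballpark as the 8,500 IOPS peak. Capacity-wise, 48 x 1 TB 7.2K is ~48 TB raw,
# roughly 35-38 TB after RAID 6 and spares, with the 15K tier adding the rest
# toward the ~40 TB usable target.
```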

I would look at flash if something like VDI is in the cards for the future (or some other workload that benefits from "unlimited IOPS"). Do you have a workload that might fall into this definition?

December 28th, 2013 11:00

We are not planning any VDI deployments. But both our Oracle and MSSQL installations are getting bigger, especially since we have implemented SharePoint as the place to store our data. We are also looking at a 1,200-user Lync Enterprise Voice installation, where latency matters a lot, from what I've understood. I have not been happy with the latency over the last few years of our 7-array EQL installation; latency on reads has gone from the 10s to the 30-40s of milliseconds. I want to drive that down, and I am willing to make a switch back to FC and move towards SSDs to handle that issue. I have a budget that allows for some wiggle room, and the quote I got was quite acceptable, with room for negotiation. Of course, a change in configuration should not double our price or anything like that.


December 28th, 2013 14:00

Big does not always equate to storage demand. If the measured peak is capped by the iSCSI pipe and/or by latency caused by a lack of disk spindles, it can be hard to determine what your peak need might be. With your 95th and 99th percentiles at 2,500 and 4,500 IOPS, it doesn't seem like your everyday workload is being affected.


There are reference architectures for Compellent and Lync that you might want to look over to check for specific needs. If you're using 10Gb iSCSI with a HARDWARE iSCSI initiator, then it generally doesn't cost out to change to FC. If you have a converged 10Gb switch, you can do limited FC for the Compellent and FCoE out to the servers (if it's supported). Overall, the Brocade 16Gb FC switches are very economical, so if you're not on 10Gb, FC is a very affordable option (depending on the number of servers connected).

If you have the budget for flash, I would probably increase the SLC count (to 12), reduce the MLC (to 12), and then do the rest in 7.2K. OR go the other way and do the whole thing as flash (it would be two enclosures: 6 SLC per enclosure, 18 MLC in one enclosure, 12 in the other). That comes in at around 34TB usable. If you need file services, add an FS8600.

December 29th, 2013 05:00

Of course I did not intend to indicate that I believe size equals load, my apologies. We are seeing growing usage of those solutions, meaning that the load on my arrays is rising specifically on the volumes handling them.

I will look into asking for a price for 16Gbit FC instead of the 8Gbit they have included, and I will also ask for a second offer detailing all-flash. I have a feeling it is not that much more expensive.

Thank you very much for your time, Michael, and have a happy new year.
