andyboy2
5 Posts
0
August 14th, 2014 14:00
Yes
dynamox
9 Legend
•
20.4K Posts
0
August 14th, 2014 14:00
Before you tore everything apart, how was the pool configured? Did you use Flash drives for FAST Cache only, did you use them in a pool, or both? Just trying to understand where you were and what issues you experienced. Did you troubleshoot the performance issues with VMware/EMC?
dynamox
9 Legend
•
20.4K Posts
0
August 14th, 2014 14:00
Is FAST-VP licensed on this array?
rzero
58 Posts
0
August 14th, 2014 15:00
For example, when one VM was migrating to a new LUN it took forever and caused the SQL server to freeze during the migration, but if another VM was migrating to a different LUN it was quick, with no downtime during the migration.
What is the storage connectivity? This sounds more like a misconfiguration, or a saturated 1Gb iSCSI network, than an array (disk-based) performance problem. Also, I would recommend looking into a FLARE upgrade to the latest stable version if you aren't already there, while there is nothing on the array.
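Just to put numbers behind the saturated-link theory, here is a rough back-of-envelope sketch in Python. The 500 GB VMDK size and the 80% efficiency factor are purely illustrative assumptions, not measurements from this environment.

```python
# Rough back-of-envelope: how long a Storage vMotion copy takes on a given link.
# The VMDK size and efficiency factor below are illustrative assumptions only.

def migration_hours(vmdk_gb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours to copy vmdk_gb over a link of link_gbps (decimal gigabits),
    with 'efficiency' accounting for protocol overhead and competing traffic."""
    usable_mb_per_s = link_gbps * 1000 / 8 * efficiency   # usable MB/s
    return vmdk_gb * 1024 / usable_mb_per_s / 3600

for link in (1, 10):
    print(f"{link} Gb link: ~{migration_hours(500, link):.1f} h to move a 500 GB VMDK")
# A 500 GB VMDK needs well over an hour on a saturated 1 Gb path but only minutes
# on 10 Gb, which is why the same migration can feel instant on one datastore and
# glacial on another.
```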
When creating these new storage pools, why would I include NL drives if capacity isn't a factor? Can't I just create a separate RAID6 storage pool for NL drives only?
You can create separate storage pools, but why wouldn't you include NL-SAS in there? Tiering is designed to save you money because your data is not all hot enough to warrant EFD, or even SAS, levels of performance. It also saves money because your storage admins don't have to manually carve servers up into differently tiered LUNs, and don't have to monitor those LUNs to make sure they aren't exceeding (or, more likely, underperforming against) their expected I/O.
It is very difficult to generalize with a limited amount of info, but I would highly recommend you not create a storage pool of only NL-SAS drives and then try to run VMs off of it. Instead, consider something like a gold/silver pool strategy, where your gold tier is EFD/SAS and your silver tier is SAS/NL-SAS. You may also want to carve off 8-16 SAS drives into an R10 pool for tlogs. Lots of factors to consider.
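To make the gold/silver idea concrete, here is a rough sizing sketch. The per-drive IOPS figures and RAID write penalties are common rules of thumb rather than vendor guarantees, and the drive counts are hypothetical examples, not a recommendation for this array.

```python
# Rough sizing sketch for a gold (EFD/SAS) vs. silver (SAS/NL-SAS) pool strategy.
# Per-drive IOPS and RAID write penalties are rules of thumb; drive counts are
# hypothetical examples.

RULE_OF_THUMB_IOPS = {"EFD": 3500, "SAS_15K": 180, "SAS_10K": 150, "NL_SAS": 90}
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def pool_host_iops(drives: dict, raid: str, read_pct: float = 0.7) -> float:
    """Estimate the host IOPS a pool can absorb for a given read/write mix."""
    backend = sum(RULE_OF_THUMB_IOPS[t] * n for t, n in drives.items())
    # host IOPS = backend IOPS / (read% + write% * write penalty)
    return backend / (read_pct + (1 - read_pct) * WRITE_PENALTY[raid])

gold = {"EFD": 5, "SAS_15K": 30}        # hypothetical gold pool
silver = {"SAS_10K": 25, "NL_SAS": 30}  # hypothetical silver pool
print(f"Gold   (RAID5): ~{pool_host_iops(gold, 'RAID5'):,.0f} host IOPS at 70/30 r/w")
print(f"Silver (RAID6): ~{pool_host_iops(silver, 'RAID6'):,.0f} host IOPS at 70/30 r/w")
```

Treat numbers like these as starting points only; FAST-VP will still move slices between tiers based on actual skew.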
At a minimum I would recommend you read https://www.emc.com/collateral/white-papers/h12682-vnx-best-practices-wp.pdf to get a laundry list of dos and don'ts. But really, I would recommend you look into some architectural services to help guide you in making the right decisions for your environment. It is super-easy to make seemingly innocent wrong choices that paint you into a horrible corner down the road...especially on an array that is already at the slot limit.
andyboy2
5 Posts
0
August 14th, 2014 15:00
We had two pools configured: one RAID10 and one RAID5. The RAID10 pool had only SAS drives, and the RAID5 pool had a mix of SAS and NL drives. FAST was used on the RAID5 pool.
Thanks,
Andy
dynamox
9 Legend
•
20.4K Posts
0
August 14th, 2014 15:00
The drive numbers you listed above, do those exclude hot spares?
shofmannn
1 Rookie
•
104 Posts
0
August 14th, 2014 15:00
Just use "Highest Available Tier" in the LUN properties.
dynamox
9 Legend
•
20.4K Posts
1
August 14th, 2014 16:00
Unless you have very stringent / predictable I/O requirements, why even isolate into multiple pools? Let that expensive FAST-VP license do the work for you. You need to find a local USPEED guru to help you model your workload. Did you collect NAR files when you were having issues, and what were the issues anyway? When you say slow svMotion, it could be a bug in the Block OE/VAAI implementation and nothing to do with back-end performance.
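If you do pull NAR data, the archives themselves are binary, but once exported to CSV (for example from Unisphere Analyzer) a quick script can flag sustained saturation. A minimal sketch, assuming hypothetical column names in the export:

```python
# Minimal sketch: scan a CSV exported from a NAR archive for sustained
# LUN/SP saturation. The column names below are assumptions about the
# export layout, not a fixed format.
import csv
from statistics import mean

def saturation_report(path: str, util_col: str = "Utilization (%)",
                      obj_col: str = "Object Name", threshold: float = 70.0):
    """Print average and peak utilization per object, flagging anything
    that averages above `threshold` percent."""
    samples = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                samples.setdefault(row[obj_col], []).append(float(row[util_col]))
            except (KeyError, ValueError):
                continue  # skip rows without a usable utilization value
    for obj, vals in sorted(samples.items()):
        avg, peak = mean(vals), max(vals)
        flag = "  <-- investigate" if avg > threshold else ""
        print(f"{obj:30s} avg {avg:5.1f}%  peak {peak:5.1f}%{flag}")

# saturation_report("nar_export.csv")   # hypothetical export file name
```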
andyboy2
5 Posts
0
August 14th, 2014 18:00
10Gb Nexus FC switches. It can't be a network issue. Our other SAN works like a champ with no performance issues at all.
Thanks,
Andy
Roger_Wu
4 Operator
•
4K Posts
0
August 14th, 2014 18:00
Another good whitepaper:
Using EMC VNX Storage with VMware vSphere
https://www.emc.com/collateral/hardware/technical-documentation/h8229-vnx-vmware-tb.pdf
andyboy2
5 Posts
0
August 14th, 2014 18:00
Updated the FLARE about a month ago.
Thanks,
Andy