

5 Posts


October 18th, 2011 00:00

Throttling disk usage

Hi

 

We currently have a Compellent with two disk tiers and five volumes configured on it.

 

My first question: if there is high latency on one volume, will it affect the other volumes?

 

My second question: is there any way to throttle disk usage per VM, especially for VMs running SQL queries?

 

Thanks

5 Posts

October 18th, 2011 08:00

We use raw device mappings, iSCSI mappings and VMDK files, and need to try to ensure that no one VM can kill an entire tier.

48 Posts

October 18th, 2011 08:00

Do the VMs have some sort of exclusive access, or are they just using VMDK files? If they are VMDKs on a VMFS datastore, I believe Storage I/O Control is what you'd be looking at.
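If it helps, here is a rough, untested pyVmomi sketch of raising the disk shares on a single VM (the vCenter host, credentials and the VM name "sql01" are placeholders; SIOC itself is enabled per datastore in the vSphere Client):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)

# Locate the VM by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql01")
view.Destroy()

# Raise the Storage I/O Control shares on the VM's first virtual disk so it
# wins a bigger slice of datastore IOPS under contention (default is 1000).
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level="custom", shares=2000))

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)])
vm.ReconfigVM_Task(spec)
Disconnect(si)

Note that shares only arbitrate between VMs on the same datastore under contention; they are not a hard IOPS cap.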

5 Posts

October 18th, 2011 09:00

Hello,

We have a similar setup - 2 tiers, 34 disks in T1, 9 in T3.

We recently migrated from an EMC CLARiiON and currently have serious performance issues (a case is open with support), because volumes and VMs from e.g. DEV environments affect volumes with production VMs.

There was no such problem on the EMC, as volumes were simply on separate disks/enclosures, while with Compellent everything sits everywhere and affects everything... :) We banned the word "fluid" in the company...

Our way of "throttling":

- create a new storage profile - "T1, RAID 10 only" - and apply it to your critical volumes; with this, replays and even data that would otherwise be progressed to T3 stay on T1,

- if you are using storage replication to DR, unselect the option "replicate to lowest storage",

- check the settings on your FC HBA cards - in our case we modified some values to match best-practice recommendations,

- for Windows 2k3, upgrade the FC card drivers; for 2k3 & 2k8, modify registry settings to match best-practice recommendations (see the sketch after this list),

- assign more disk resources (shares) in VMware for your critical VMs
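For the registry piece mentioned above, a rough sketch in Python of the kind of change involved (the 60-second disk timeout is the value our best-practice guide recommended; verify against the current Compellent document, and run it elevated):

import winreg

# Raise the SCSI disk timeout so transient path failovers don't surface
# as I/O errors in Windows; 60 seconds is the commonly recommended value.
key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\Services\Disk",
                     0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "TimeoutValue", 0, winreg.REG_DWORD, 60)
winreg.CloseKey(key)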

Our case is still far from closed, but after the above tweaks we achieved some improvement...

Rgrds

 

5 Posts

October 18th, 2011 23:00

Hi

Thanks for the reply and your input.  

In the future we will be getting 3 tiers of disks; however, we still have a large number of SQL servers running on the same tier structure, and at the end of the day any one of them can kill the entire tier with a bad query.

Our networking setup follows the best-practice guides to ensure the best throughput to the disks.

While I agree that changing disk resources in VMware can help, it does not help with bad SQL queries. It also does not help with direct iSCSI connections from the SQL servers, or with raw device mappings.

If you find out anything from the case you logged, please update us - it would be great to hear about other workarounds.

15 Posts

October 28th, 2011 11:00

iSCSI for a production SQL environment? That is not a best practice in and of itself.

2 Posts

December 6th, 2013 07:00

Good afternoon,

I wonder if I can make a few suggestions here:

> - create a new storage profile - "T1, RAID 10 only" - and apply it to your critical volumes; with this, replays and even data that would otherwise be progressed to T3 stay on T1,

You can use the default High Priority profile, as there is no inherent benefit to keeping data in RAID 10 format when reading from the disks. RAID 10 is great for accepting new writes, as no parity processing is involved in committing data to the two mirror disks. When it comes to reading, however, the same 2 MB page that you write in RAID 10 can, under RAID 5-9, be read from nine disks in 256 KB chunks, potentially offering 9x the IOPS and throughput of RAID 10. Progressing it to Tier 3 is entirely your choice.
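A back-of-the-envelope sketch of that read fan-out (stripe geometry inferred from the numbers above; with rotating parity, all nine spindles end up servicing reads across stripes):

# Illustrative only: compares how many spindles can service a full-page
# read under RAID 10 vs. RAID 5-9, using the figures quoted above.
page_kb = 2048      # Compellent page size (2 MB)
chunk_kb = 256      # RAID 5-9 chunk size
stripe_disks = 9    # RAID 5-9: 8 data disks + 1 rotating parity

chunks = page_kb // chunk_kb    # 8 data chunks per page
print(f"RAID 10 : one {page_kb} KB page read from a single mirror member")
print(f"RAID 5-9: {chunks} x {chunk_kb} KB chunks spread across up to {stripe_disks} spindles")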

> - if you are using storage replication to DR, unselect the option "replicate to lowest storage",

This has little to no impact on the production system; however, in a DR event, "replicate to lowest storage" can lead to slightly longer access times, since data needs to be read from Tier 3.

> - check the settings on your FC HBA cards - in our case we modified some values to match best-practice recommendations,

> - for Windows 2k3, upgrade the FC card drivers; for 2k3 & 2k8, modify registry settings to match best-practice recommendations,

> - assign more disk resources (shares) in VMware for your critical VMs

Was your solution sized based on DPACK? If not, it is worth running two separate DPACK instances - one on your PRODUCTION hosts and another on your non-critical hosts - to determine the level of performance they are demanding from the SAN. Please see the attached link for DPACK.

<ADMIN NOTE: Broken link has been removed from this post by Dell>

 

2 Posts

December 6th, 2013 07:00

The questions that I would ask you are:

a) High latency at the SAN level should not occur on a Compellent that was sized correctly in the first instance. You can use the Enterprise Manager interface (Charting View) to monitor which volumes are experiencing latency and identify issues. It is also worth running DPACK on your hosts, if it was not run when sizing your solution, to identify any latency at the network/host end.

<ADMIN NOTE: Broken link has been removed from this post by Dell>

 

Any latency caused by the SAN being over-utilised can affect the whole SAN, but you can distribute workload across different tiers using different profiles. In a well-sized Compellent SAN, Tier 3 should provide about 30% of the total IOPS, so you can potentially put non-critical workloads on Tier 3 disks using the Low Priority storage profile.

b) Disk usage throttling is not available on Compellent at the moment; you can, however, put non-critical workloads on Tier 3, which stops them from interfering with business-critical applications. I would recommend engaging a Copilot resource if you run into any difficulty.

Kind regards

GB

5 Posts

December 9th, 2013 01:00

> high latency at the SAN level on Compellent should not be there if it is sized correctly in the first instance

- I cannot agree more; this was the root cause of our performance problems. The Dell engineer who ran the initial assessment "averaged" too much and sized the storage too low... in the end we bought 5 SSD disks to sort out the problem :)
