
2 Intern • 185 Posts • March 27th, 2014 01:00

PC8132 iSCSI setup

A stack of two switches for dedicated/isolated iSCSI data traffic. Would it make sense to enable or disable iSCSI optimization?

If disabled, would the normal config below be enough?

spanning-tree mode rstp
flowcontrol on
spanning-tree portfast
storm control disabled for Unicast, enabled for Broadcast & Multicast
jumbo frames 9126

I am not planning to connect any EqualLogic arrays to it (nor do I have any).
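For reference, the settings above might look roughly like the following on this switch family. This is only a sketch based on PowerConnect/N-series CLI conventions; exact command names vary by firmware version, and the interface range is an assumption:

```
spanning-tree mode rstp
interface range Te1/0/1-24
 spanning-tree portfast       ! edge ports straight to forwarding
 flowcontrol on               ! 802.3x pause frames for iSCSI bursts
 storm-control broadcast      ! broadcast storm control on
 storm-control multicast      ! multicast storm control on
 mtu 9216                     ! jumbo frames (verify your model's maximum)
exit
```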

Seb

Moderator • 9.5K Posts • March 27th, 2014 09:00

Hi Seb,

It would still be recommended to enable iSCSI optimization, as it provides the following features:

  • Ability to assign a specific QoS profile to iSCSI flows
  • Display of iSCSI session details (connections, initiator, target, and so on)
  • Identification of (self-discovered) iSCSI sessions
  • Identification of iSCSI session termination
  • Identification of non-active iSCSI sessions
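Turning the feature on is a global toggle; a hedged sketch (the exact syntax and show commands may differ by firmware):

```
console(config)# iscsi enable
console# show iscsi sessions
```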

2 Intern • 185 Posts • April 7th, 2014 15:00

I now have this dual-switch stack configured and connected to a Qnap TS-1679U-RP (with 2 x dual-port Intel X540-T) and a PowerEdge R720, also with 2 x dual-port Intel X540-T.


The speed of data writes to the Qnap is far worse than my previous 1 Gb setup (PC5424 switches, BCM5709C NICs, same Qnap configuration). It now manages only 27 Mb/sec, which is plain AWFUL.

So it looks like the PC8132 switch setup and/or the X540-T card setup is not correct.

I have already disabled VM Queue on the Intel cards; driver 3.8.35.0 from ProSet 19.

Does anybody have any idea what else can be checked? As it is now, it is unusable!

Thanks

Seb

2 Intern • 185 Posts • April 8th, 2014 04:00

I have played with most of the advanced options on the Intel X540-T2 driver; no difference.

The same tragic speed (27 Mb/sec, less than even USB 3) occurs while copying to the MD3600i, which is on a 10 Gb connection to the same PowerConnect 8132.

 

I am running out of ideas as to what is causing it and how to tackle it.

 

PE R720 2 x Intel X540-T2 --> 10Gb --> PowerConnect 8132 --> 10Gb --> MD3600i OR Qnap TS-1679U-RP

Moderator • 9.5K Posts • April 8th, 2014 08:00

Are the iSCSI ports in their own VLANs? Have you tried different cables? Here is the MD3600i deployment guide; check whether there is a difference between this setup and the one for a different storage device: ftp://ftp.dell.com/Manuals/Common/powervault-md3600i_Deployment%20Guide_en-us.pdf

 

Is the Intel NIC firmware up to date? http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=HKK1W&fileId=3355421560&osCode=WS8R2&productCode=poweredge-r720&languageCode=EN&categoryId=NI

2 Intern • 185 Posts • April 8th, 2014 09:00

Well, it was not up to date before, but it is now, and updating did not make any difference at all! Still going steady at 27 Mb/sec.

 

Seb

Moderator • 9.5K Posts • April 8th, 2014 10:00

Is it just the storage and the one server on the switch? The VLANs are not shared with any other devices, right? Can you post how it is configured: which ports are used, and which VLANs?

2 Intern • 185 Posts • April 8th, 2014 11:00

2 x N4032 with 40Gb stacking cable (and stacking modules in each switch)

2 x PE R720 each with 2 x X540-T2 (4 ports per server total, 2 ports per subnet)

Vlan 77 access - ports 1-12 on each switch - for subnet ONE

Vlan 90 access - ports 13-24 on each switch - for subnet TWO

Dedicated iSCSI setup, only MD3600i & Qnap arrays & Intel X540-T2 IPv4 only, plugged into stack (each port into corresponding Vlan ports)

The setup is so simple that there is very little that could go wrong. And it should be blazing fast.

Jumbo frames and portfast (as per my first post) are enabled all the way through.
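One quick way to verify jumbo frames end to end is a don't-fragment ping sized to the MTU minus the IP and ICMP headers; a hedged sketch with hypothetical addresses:

```shell
# MTU sanity check (hypothetical target 172.16.77.10): an unfragmentable
# ICMP payload of MTU minus 28 bytes (20-byte IP header + 8-byte ICMP
# header) only arrives if every hop honours jumbo frames.
MTU=9000
PAYLOAD=$((MTU - 28))
echo "Linux:   ping -M do -s ${PAYLOAD} 172.16.77.10"
echo "Windows: ping -f -l ${PAYLOAD} 172.16.77.10"
```

If the jumbo ping fails while a 1472-byte payload succeeds, some hop in the path is still at the default 1500-byte MTU.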

Seb

Moderator • 9.5K Posts • April 8th, 2014 11:00

Each port on the MD3600i needs to be on its own VLAN: if two ports are being used on the MD, you need two VLANs; if all four ports are connected, then four VLANs are needed.

2 Intern • 185 Posts • April 8th, 2014 14:00

I would not agree with that one. Four ports = two VLANs on the MD3600i, with one port from each controller on the same subnet (four subnets/VLANs make NO SENSE for an MPIO setup, unless one has eight dedicated ports in the server, which is unusual to say the least!)

Controller1

172.16.77.xx

172.16.90.xx

Controller2

172.16.77.yy

172.16.90.yy

I had it running this way for two years with no issues on the PC5424.

The array config is not the problem (as stated previously, I have only had problems since changing to the 8132 & X540-T2).

2 Intern • 185 Posts • April 8th, 2014 14:00

Well spotted! I ran my tests pulling the test file from a network share hosted on.... a USB 2 hard drive (temporarily!).

Of course, 27 Mb/sec was all one could expect.


Pulling from a share on a local drive I get 113 Mb/sec to both the Qnap and the MD3600i (I would still like more).

But if I pull from the server's local 10K rpm drive (used as temp space, so not a system drive) I get 765 Mb/sec (as expected).

And when I pull from one array to the other it is just under 900 Mb/sec.


So I was being a bit silly in my test after all!

Thanks Josh!

Seb

Moderator • 9.5K Posts • April 8th, 2014 14:00

What speed do you get on transfers server to server and not to storage?
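Independent of the arrays, raw server-to-server throughput can be measured with iperf3, which takes the storage out of the picture so a low number points at the NICs/switch rather than the disks. A hedged sketch; the address and port are hypothetical:

```shell
# iperf3 server-to-server throughput sketch (hypothetical values).
# -P 4 runs four parallel streams; -t 30 runs for 30 seconds.
PORT=5201
echo "on the target server: iperf3 -s -p ${PORT}"
echo "on the source server: iperf3 -c 172.16.77.20 -p ${PORT} -P 4 -t 30"
```

A single 10 GbE link should report somewhere near 9.4 Gbit/s with jumbo frames; far less suggests a NIC, driver, or switch problem.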

Moderator • 9.5K Posts • April 8th, 2014 14:00

Those speeds sound much better.

2 Intern • 185 Posts • April 9th, 2014 03:00

My test last night was not fair, as the data (just under 3 Gb) was already in memory (hence the very fast copy).

Today I ran the same test, but on a 6+ Gb file which had not been touched.

Copying from one array to the other over the 10 Gb cards, I saw the copy run at under 50 Mb/sec.

http://imgur.com/Li26xGD
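The cache effect described above can be reproduced deliberately. A minimal sketch, assuming a Linux host and a hypothetical path; on real iSCSI storage you would add `iflag=direct` (or drop caches first) so the figure reflects the storage path rather than RAM:

```shell
# Cache-aware sequential read test (Linux; path is hypothetical).
TESTFILE=/tmp/iscsi_bench.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>/dev/null   # create a 64 MiB file
sync                                                        # flush writes
dd if="$TESTFILE" of=/dev/null bs=1M 2>/dev/null            # cached read (measures RAM)
stat -c '%s' "$TESTFILE"                                    # prints 67108864
```

Running the read a second time immediately after the write, as above, is exactly the "already in memory" case; an uncached read of the same file can be an order of magnitude slower.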

Also, I can see that starting VMs is just plain painful, as reads off the iSCSI storage are far too slow.

 

So I am not really any better off than before.

 

Seb

Moderator • 9.5K Posts • April 9th, 2014 08:00

If it is copying fine from memory across the switch, the problem isn't the switch; there has to be some other bottleneck in reading from the drives. Can you run a DSET report on one or both of the servers? http://www.dell.com/support/contents/us/en/04/article/Product-Support/Self-support-Knowledgebase/enterprise-resource-center/Enterprise-Tools/dell-system-e-support-tool

 

Also get a support bundle from the MD, and send both to me: josha_craig@dell.com

2 Intern • 185 Posts • March 13th, 2016 03:00

The problem was that the Qnap TS-1679U-RP is AWFUL hardware; once it was replaced with a PowerEdge R510, all the problems "magically" went away!
