December 9th, 2011 08:00
10GB iSCSI Connectivity
Hello all,
We are installing a new VNX 5300 and are having trouble with getting connectivity on the iSCSI ports. Here is the troubleshooting I've done so far with one of our network engineers. Any ideas would be helpful. Thanks in advance. -Corinne
The four 10G uplinks on the VNX are connected to two Cisco 3750X's stacked together, via 10G SFPs and 62.5 micron multimode fiber patch cables, 2-3 meters in length. We're primarily using ICMP to test network connectivity, assuming it's not being blocked. Here's the testing we did:
- disabled 3 of the 4 10G uplinks and just picked SPA int 0
- removed IP's from all of the other interfaces
- Put IP on the active interface.
- interface was NOT configured to tag the packets, since this is just an access layer connection
- switchport was set up as an access layer port in the iSCSI VLAN. Here's the config of the port:
- switchport access vlan xxx
- spanning-tree portfast
- verified that the switch port showed as active and it showed the correct mac address of VNX attached
- From the VNX, tried to ping another IP in the iSCSI network (which is Layer 2 only).
- tried pinging iSCSI int on one of our blade servers.
- first ping was successful; no others were, no matter how many we tried
- put a laptop on a port in the 3750X stack
- assigned ip address within iscsi subnet
- was able to successfully ping a number of devices on the iSCSI subnet that were not physically attached to the 3750X's
- Could never ping the VNX interface IP
- in an attempt to eliminate mismatched SFPs as a cause, tried the following
- took one of the Finisar SFPs from one of the disabled SAN ports and installed it in the Cisco switch. Same testing and same results: no ping replies from the VNX
- tried putting a Cisco SFP on each end of the link. Same result.
- ended up putting the SFPs back the way they were
- tried tagging the packets with vlan xxx. No joy.
After no success we broke out the sniffer and spanned the traffic on the active 10G link between the VNX and the Cisco 3750X, then tried pinging several IPs in the iSCSI subnet. The packet capture shows the ARP requests from the VNX and the replies from the clients; in fact, it shows an ARP request and reply for each ping attempt, but no other ICMP traffic from the VNX. The capture only shows traffic leaving the Cisco switch toward the VNX SAN 10G interface, which doesn't mean it's reaching the OS. Is it possible we need some type of additional configuration on the switch side beyond standard access layer settings?
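For reference, the SPAN-port capture described above can be reproduced from a Linux capture host with a filter like the following; the interface name and VNX address are placeholder assumptions, not values from this thread:

```shell
# Sketch of the SPAN-port capture. IFACE is the NIC receiving the mirror
# feed and VNX_IP is the VNX iSCSI port's address - both are examples.
IFACE=eth0
VNX_IP=192.168.50.10
FILTER="arp or (icmp and host $VNX_IP)"
# -e prints source/destination MACs, which helps confirm the frames really
# come from the VNX port; -n skips DNS lookups.
# sudo tcpdump -eni "$IFACE" "$FILTER"
echo "$FILTER"
```

Seeing ARP resolve but ICMP echo replies never leave the array, as in the capture above, usually points at the array side rather than the switch.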


Rainer_EMC
6 Operator
8.6K Posts
December 9th, 2011 09:00
Are you sure you are using the correct interfaces and SFPs on the Cisco ports connecting to the VNX 10Gbit ports?
We only support 10GBASE-SR (850nm short range) with multimode fiber, like that used with the Cisco XENPAK-10GB-SR.
Rainer
Rainer_EMC
6 Operator
8.6K Posts
December 9th, 2011 10:00
I would suggest generating a procedure with the VNX Procedure Generator on Powerlink and following that.
I can't attach a doc here - I'll put it into Documents temporarily:
https://community.emc.com/docs/DOC-13150
Rainer
crnnjhnsn
8 Posts
December 9th, 2011 10:00
Hi Rainer, thanks for replying. The SFP in the switch is SFP-10G-SR - Cisco 10GBase-SR SFP Module. We did have someone from EMC verify the compatibility of it. They said it was supported.
crnnjhnsn
8 Posts
December 9th, 2011 11:00
Thank you. I'll give that a try.
kelleg
6 Operator
4.5K Posts
December 15th, 2011 10:00
Were you ever able to get this resolved?
By chance are the Management Ports and the iSCSI ports on the array using the same subnet?
Are you using Jumbo frames? 10Gb on VNX only supports 4000 MTU.
Are you using flow control on the switches?
glen
Clustor
57 Posts
December 15th, 2011 12:00
"Are you using Jumbo frames? 10Gb on VNX only supports 4000 MTU."
wait a minute... do you mean only when Jumbo frames are enabled? or in general?
Rainer_EMC
6 Operator
8.6K Posts
December 15th, 2011 12:00
Without Jumbo frames, the standard Ethernet MTU is 1500.
I assume you know that within a VLAN all nodes have to have the same Jumbo setting.
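A quick end-to-end check of a Jumbo setting (a sketch; the target address is an example) is to ping with the don't-fragment bit set and a payload of the MTU minus 28 bytes, since the 20-byte IP header and 8-byte ICMP header count against the MTU:

```shell
MTU=4000
PAYLOAD=$((MTU - 28))   # 20-byte IP header + 8-byte ICMP header
echo "ICMP payload for MTU $MTU: $PAYLOAD"
# Linux iputils syntax; -M do forbids fragmentation, so a hop with a
# smaller MTU fails immediately instead of silently fragmenting.
# ping -M do -s "$PAYLOAD" -c 3 192.168.50.10   # target address is an example
```

If this fails while a default-size ping succeeds, some node in the VLAN is not carrying the Jumbo setting.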
Clustor
57 Posts
December 15th, 2011 12:00
Ok, yes, that's clear. I thought something funny was going on with the default max MTU size of the VNX.
But still, 4000 is the max? I thought 9000 was the common configuration for Jumbo frames.
kelleg
6 Operator
4.5K Posts
December 15th, 2011 14:00
The default is 1500, the max is 4000 - that's what engineering set it to.
Also, I've seen some discussions on the ESX blogs where someone tested the advantage of 9000 MTU and found that it really did not provide that much of a benefit:
http://www.boche.net/blog/index.php/2011/01/24/jumbo-frames-comparison-testing-with-ip-storage-and-vmotion/
glen
Noogz
21 Posts
December 16th, 2011 11:00
According to the "EMC Unified Storage Best Practices for Performance and Availability -- Common Platform and Block Storage 31.0 -- Applied Best Practices" from 6/23/2011, on page 28:
"The VNX series supports 4000, 4080, or 4470 MTUs for its front-end iSCSI ports. It is not recommended to set your storage network for Jumbo frames to be any larger than these."
In version 31.5 released 10/17/2011:
"The VNX supports front-end iSCSI port MTU sizes of: 1260, 1448, 1500, 1548, 2000, 2450, 3000, 4000, 4080, 4470, 5000, 6000, 7000, 8000, 9000. It is not recommended to set your storage network for Jumbo frames to be any other than these."
With that said, I went with 4470 when we deployed our two VNX5300's a few months ago when the 31.0 document was released. So, what's the official word?
Noogz
21 Posts
December 16th, 2011 12:00
Thanks for the clarification. Surely you don't expect me to read the first page of the document instead of just Ctrl-F and search for MTU.
*cough*
kelleg
6 Operator
4.5K Posts
December 16th, 2011 12:00
I missed it also, until I searched for "jumbo"
glen
kelleg
6 Operator
4.5K Posts
December 16th, 2011 12:00
From the "EMC Unified Storage Best Practices for Performance and Availability - Common Platform and Block Storage 31.5 — Applied Best Practices.pdf", page 5: this was changed in this version of the document, so the settings in 31.5 are the ones to follow. It goes along with the changes in FLARE 05.31.000.5.5xx. If you have an older FLARE version, you need to follow the original recommendations; with the new FLARE, follow the recommendations in the 31.5 document:
glen
jps00
2 Intern
392 Posts
December 16th, 2011 12:00
>So, what's the official word?
What is the question?
Between the Block O.E. 31.0 and Block O.E. 31.5 releases of the VNX Best Practices, it was pointed out that CX-series Jumbo Frame values were being used for the VNX. That section was corrected for the VNX.
The numbers are so specific because the size increments for Jumbo Frames vary between vendors. The listed values are the ones the VNX's front-end iSCSI ports support. Correlate your iSCSI HBA's available Jumbo Frame settings with your switches' Jumbo Frame settings and the VNX's front-end iSCSI port Jumbo Frame settings.
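On a Linux initiator, that correlation can be checked and applied roughly as follows; the interface name and MTU value are assumptions, and the commands are the standard iproute2 ones, shown here as a dry run:

```shell
# Sketch for a Linux iSCSI initiator. NIC and TARGET_MTU are placeholders -
# TARGET_MTU must match both the switch ports and the VNX front-end port.
NIC=eth0
TARGET_MTU=4000
CMD="ip link set dev $NIC mtu $TARGET_MTU"
echo "$CMD"   # run this command as root to actually apply the MTU
# Verify afterwards with: ip -o link show "$NIC"   (look for "mtu 4000")
```

A mismatch at any of the three points (initiator NIC, switch, array port) is enough to break large frames, so all three should be confirmed, not just one.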
jps00
2 Intern
392 Posts
December 19th, 2011 04:00
Aw Glen, you just about re-wrote that section yourself.