Unsolved

9 Posts


November 26th, 2021 07:00

ME4012 iSCSI - Weird ping reply testing jumbo frames

Hi,

I'm installing a new ME4012 iSCSI storage.
I am experiencing some slow performance on my throughput, so I started testing to see if all equipment supports jumbo frames.

When I ping from my VM to the controller with the -f -l 8972 option, the reply states "Reply from x.x.x.x: bytes=104 (sent 8972)..."
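(For context: 8972 is the usual jumbo test size because a 9000-byte MTU minus the 20-byte IP header and 8-byte ICMP header leaves 8972 bytes of payload, and -f sets the don't-fragment flag, so the ping only succeeds if the whole path carries 9000-byte frames unfragmented.)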

What does this mean exactly, and more importantly, how can I get a normal ping reply?

I tried pinging my old EqualLogic on the same switches, and that gives the normal "Reply from x.x.x.x: bytes=8972..."

Yes, I have enabled the "Support jumbo frames" setting on the ME4012 controller.

Any clues?
Thanks!

November 26th, 2021 12:00

Hi,

Page 55 https://dell.to/3re4XJY

“a jumbo frame can contain a maximum 8900-byte payload”


Try using a slightly smaller jumbo frame. Let us know if you have any additional questions.
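For example, assuming the 8900-byte limit applies to the full ICMP packet, a test with the payload reduced accordingly would look like:

ping x.x.x.x -f -l 8872

(8872 = 8900 - 20-byte IP header - 8-byte ICMP header.)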

9 Posts

November 26th, 2021 14:00

Hi,

Although the overhead in a standard ICMP echo is only 28 bytes (20-byte IP header + 8-byte ICMP header), any value larger than 104 for the -l option results in the same message.

e.g. "ping 192.168.1.31 -f -l 105" results in "...bytes=104 (sent 105)..."

So my question basically still stands.
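To illustrate the difference side by side (the addresses are just examples from my lab):

ping 192.168.1.31 -f -l 8972   ->  Reply from 192.168.1.31: bytes=104 (sent 8972)   (ME4012)
ping 192.168.1.30 -f -l 8972   ->  Reply from 192.168.1.30: bytes=8972              (EqualLogic)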

November 26th, 2021 15:00

Is the firmware up to date? https://dell.to/3p2VeU6

9 Posts

November 26th, 2021 23:00

Yes, it is

Moderator


4K Posts

November 29th, 2021 00:00

Hello,

is jumbo-frame support enabled on all network components in the data path?

Thanks

Marco

9 Posts

November 29th, 2021 10:00

Yes, it is.

In fact, when I ping my good old EqualLogic (connected via the same NICs to the same switches) from the same server, it pings fine with 8972 bytes.
The only difference in the chain is the ME4012 vs the EqualLogic. I have even swapped the ME4012's and the EqualLogic's CAT cables at the switches.

November 29th, 2021 11:00

What switches are you using? Are they up to date and what size packets are they set to?

9 Posts

November 30th, 2021 05:00

Hi,

We're using Dell N2024 switches, with jumbo frames enabled (configured a few years ago, so I don't recall the exact setting).
The strange thing is that the EqualLogic ping returns a normal reply, so I can conclude the switch is configured correctly.


November 30th, 2021 09:00

Since the ME4012 is newer, it may need slightly different jumbo frame settings on the switch, which is why checking the setting might help. See page 454: https://dell.to/3EasU8A
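If it helps, the per-port MTU on the N-Series can be checked from the running config; the exact syntax depends on the firmware, but something along these lines:

show running-config interface gigabitethernet 1/0/1

and look for an "mtu 9216" line on the iSCSI-facing ports.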

9 Posts

December 3rd, 2021 08:00

Sorry for the delayed reply.

Just checked the MTU on the N2024 switches and confirmed that it is set to 9216 bytes.
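(9216 should be plenty: a 9000-byte IP packet plus the 14-byte Ethernet header, a possible 4-byte VLAN tag and the 4-byte FCS still comes to only 9022 bytes, so the switch MTU should not be the limiting factor.)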


Operator


2.9K Posts

December 3rd, 2021 14:00

I'm wondering if it may be operating as designed. I'm not the most familiar with this specific product and would need to look further, but it looks like you completed this step as well:

[image]

Do you know the service tag for the ME unit?


9 Posts

December 7th, 2021 01:00

Hi,

Yes, I have checked all of this. Jumbo frames are enabled on the ME4012 as well.
Service tag: -edited by Moderator-


Thanks,

J.

Apprentice


278 Posts

December 7th, 2021 03:00

Hello jaccog,


I've checked several documents trying to find possible troubleshooting steps. I found one other post on the Dell Community forum with a similar issue description, although for EqualLogic. In any case, I'd like to share it with you in case it helps resolve this issue:

https://dell.to/31zxHlw


Please let me know if you have any questions.

Maria Januszka

#IWork4Dell

Dell | Social Outreach Services - Enterprise

1 Message

December 9th, 2021 23:00

@jaccog We are facing the exact same problem with two ME4024 arrays that we purchased for two different sites. Each site has an old storage array we are migrating from: an EqualLogic at one and a PowerVault MD at the other. Both old storage arrays work 100% with jumbo frames; however, the ME4024 arrays do not.

At the first deployment site, jumbo frames initially worked as intended on the ME4024, with no latency issues at all. However, the second site never worked with jumbo frames enabled: we observed extremely high latency and fragmented packets. For some reason the first site has now ALSO started giving us the same issues, with the same symptoms of high latency and fragmented packets. You can clearly see from the output below that the array is not returning the correct packet size to the host. No changes were made, but the issue has magically appeared on this array too.

vmkping -4 -s 8890 -c 5 x.x.x.x
PING x.x.x.x (x.x.x.x): 8890 data bytes
112 bytes from x.x.x.x: icmp_seq=0 ttl=64 time=0.408 ms
112 bytes from x.x.x.x: icmp_seq=1 ttl=64 time=0.392 ms



We had a call open with Dell support for almost 3 months troubleshooting this issue, with no resolution. We tried everything possible to find the root cause and eventually had to go live with jumbo frames disabled, as that was the only suggestion Dell tech support could give us.

This seems to be an issue specifically between VMware ESXi and the ME4024. We tried everything, from using a single ESXi host to using a separate new switch. Windows and Linux hosts did not show the issue. We are running ESXi 6.7 and 1 Gb Cisco switches. I have not tried testing this on 10 Gb switches, so it would be interesting to know whether you are running 1 Gb or 10 Gb iSCSI switches.
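One caveat on the vmkping output above: without -d, vmkping allows the packet to fragment, so a reply can come back even when the path MTU is too small. A stricter test would be something along these lines (vmk1 is just an example vmkernel interface name):

vmkping -4 -d -s 8890 -I vmk1 x.x.x.x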


9 Posts

January 1st, 2022 14:00

Hi Claudio,

In our setup we are using 1 Gb copper switches (planned to be upgraded to 10 Gb soon). Also, we are running Windows Server 2016/2022 on Hyper-V (not using VMware).
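For what it's worth, on the Hyper-V hosts the NIC jumbo setting can be checked from PowerShell; something like this should list it for all adapters (the exact value, e.g. 9014, is vendor-dependent):

Get-NetAdapterAdvancedProperty -Name "*" -RegistryKeyword "*JumboPacket"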

I am currently running semi-live for testing purposes, but I have not seen extreme latency issues so far.
