August 30th, 2013 21:00

MirrorView using iSCSI - Initial Synchronization Very Slow

I'm using MirrorView between two CLARiiON storage arrays, both CX4-120s.  They are connected by 1 Gbps links with very low latency (1 ms or less), and I have deployed separate "A" and "B" Ethernet fabrics, as EMC recommends.

I have set up two LUNs for replication, one owned by SPA and one by SPB.  Both LUNs perform their initial synchronization VERY slowly.  On a 1 TB LUN, for instance, I see only about 57 Mbps on the Ethernet switch.  A Wireshark capture of the traffic between the CLARiiONs shows the arrays sending one ACK for every single data packet; they don't appear to be using proper TCP windowing at all.  I would expect arrays doing large data transfers like this to use proper windowing, and even the RFC 1323 extensions that permit window scaling.

An ACK for every single data packet is definitely hurting performance, and the initial synchronization is taking WAY too long (days).  A sync that slow poses reliability and maintainability problems for me; it cannot take this long.
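The observed rate is actually consistent with a very small effective window at this RTT.  A quick sanity check using the bandwidth-delay product (the ~57 Mbps rate and the 1 ms RTT are from my measurements above; the window sizes are illustrative assumptions):

```python
# Max TCP throughput is bounded by (bytes in flight) / RTT.
RTT = 0.001  # seconds; 1 ms round trip, as measured between the arrays

def max_throughput_mbps(window_bytes: float, rtt_s: float = RTT) -> float:
    """Throughput ceiling in Mbps for a given in-flight window and RTT."""
    return window_bytes * 8 / rtt_s / 1e6

# Working backwards from the observed ~57 Mbps:
observed_mbps = 57.0
effective_window = observed_mbps * 1e6 / 8 * RTT  # bytes in flight
print(f"implied in-flight window: {effective_window:.0f} bytes (~7 KB)")

# A full unscaled 64 KB window at the same RTT:
print(f"64 KB window ceiling:  {max_throughput_mbps(65535):.0f} Mbps")

# With RFC 1323 window scaling (say, 256 KB in flight), the ceiling
# exceeds the 1 Gbps link, so the link would become the bottleneck:
print(f"256 KB window ceiling: {max_throughput_mbps(262144):.0f} Mbps")
```

In other words, the arrays seem to be keeping only about 7 KB in flight, when even an unscaled 64 KB window would be enough to push roughly ten times the throughput over this link.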

I've tried many things to get the arrays moving faster.  I have tried:

  • destroying and re-creating the mirror
  • deleting and re-creating the iSCSI paths between the arrays, including the iSCSI login credentials
  • exhaustively examining every network port and setting in the entire path
  • testing the network's ability to move data between two Windows hosts (ROBOCOPY ran at full speed, no problems)
  • rebooting the network gear in the path
  • rebooting the SPs on both arrays
  • completely power cycling both arrays
  • opening a support case with EMC (not showing signs of promise)
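The checks above rule out the network itself, which points back at the arrays' TCP behavior.  For scale, a simple model of strict stop-and-wait (one segment sent, then a full RTT waiting for its ACK; an illustrative worst case, not a measurement, and the 1460-byte MSS is an assumption) shows how severely per-packet acknowledgment can cap throughput:

```python
MSS = 1460   # typical Ethernet TCP payload in bytes (assumed, not measured)
RTT = 0.001  # 1 ms round trip, as measured between the arrays

def stop_and_wait_mbps(mss: int = MSS, rtt: float = RTT) -> float:
    """One segment in flight at a time: send, then wait a full RTT for the ACK."""
    return mss * 8 / rtt / 1e6

def sync_days(lun_bytes: float, mbps: float) -> float:
    """How many days an initial sync of lun_bytes takes at a given rate."""
    seconds = lun_bytes * 8 / (mbps * 1e6)
    return seconds / 86400

print(f"strict stop-and-wait ceiling: {stop_and_wait_mbps():.2f} Mbps")
print(f"1 TB initial sync at 57 Mbps: {sync_days(1e12, 57):.1f} days")
```

At 57 Mbps the arrays are doing slightly better than pure stop-and-wait, but nowhere near what the link allows, and a single 1 TB LUN already takes well over a day before you account for any other traffic or additional mirrors.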

Does anyone have any pointers?  Has anyone done this with iSCSI, and did it work well?  I expected a lot more throughput from this solution.  I have also spent some time going through the NAVI-CLI commands and options (admittedly, not all of them) in hopes of fine-tuning the TCP settings on the arrays, but have found nothing useful.

It seems like basic poor TCP behavior, which is disappointing.

Any help would be appreciated.
