
May 18th, 2017 17:00

OneFS 8.x and Twinax cables

Hello,

Is there a reason Dell EMC decided to drop support for Twinax cables for 10G connectivity? As we are planning to upgrade to 8.0.x this summer, we were notified that Twinax is no longer supported. Why?!

Thanks

2 Intern • 20.4K Posts

May 31st, 2017 14:00

djthomas,

1) How soon will you be able to provide a patch for .4? I have a migration scheduled for 7/15, upgrading from 7.2.1.4.

2) What would my upgrade look like? As soon as the entire cluster is rebooted for the .4 upgrade, will RCM immediately install the patch and reboot the entire cluster again? I assume at that point the LACP configuration will be restored and all my IPs, static routes, etc. will be intact, just like before the upgrade?

These are production clusters with thousands of clients; it was extremely hard to schedule a complete outage for the 8.x upgrade, so I don't want to be surprised with busted networking.
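
To hedge against that, my plan is to dump the network settings to flat files right before the upgrade so I can diff them afterwards. A rough sketch of what I have in mind is below; the commands in it are the OneFS 8.x "isi network" forms (7.2 still uses the older "isi networks" syntax), so treat the exact subcommands as assumptions and adjust for your release.

    #!/usr/bin/env python
    """Dump network-related settings to timestamped files on /ifs so the
    pre- and post-upgrade state can be diffed. Run from any one node.

    The command list uses OneFS 8.x 'isi network ...' syntax; 7.2 uses the
    older 'isi networks ...' commands, so adjust the list for your release.
    """
    import os
    import subprocess
    import time

    # Commands whose output we want to keep (8.x forms; adjust as needed).
    COMMANDS = [
        "isi network interfaces list",
        "isi network subnets list",
        "isi network pools list",
        "netstat -rn",   # kernel routing table, catches the static routes too
    ]

    def snapshot(out_dir):
        os.makedirs(out_dir)
        for cmd in COMMANDS:
            # File name derived from the command, e.g. isi_network_pools_list.txt
            fname = cmd.replace(" ", "_").replace("-", "") + ".txt"
            with open(os.path.join(out_dir, fname), "w") as fh:
                # Capture stderr as well so a failing command is obvious in the file.
                proc = subprocess.Popen(cmd.split(), stdout=fh, stderr=subprocess.STDOUT)
                proc.wait()

    if __name__ == "__main__":
        stamp = time.strftime("%Y%m%d-%H%M%S")
        target = "/ifs/data/netconfig-%s" % stamp
        snapshot(target)
        print("Snapshot written to %s" % target)

Run it once before the upgrade and once right after; a recursive diff of the two directories then shows exactly what changed.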

Thank you for your help.

33 Posts

June 1st, 2017 05:00

When we moved to 8.0.0.4 (rushed in the first place, since we were sold NL410s with buggy firmware) we had to take 6 reboots, and it took a few hours to restore 100% stability. That list of patches included...

  • 8.0.0.4
  • Node Firmware 9.3.4
  • BMC firmware v1.0
  • BXE Kernel patch 188239
  • Drive Firmware
  • NFS patch 191603

It took a solid 2 hours to do 4 nodes.
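
If it helps anyone, a quick way to confirm every node is back on the same build after each of those reboots is something like the sketch below. It assumes isi_for_array -s prefixes each output line with the node name, the way it does on our cluster, so treat the parsing as an assumption and eyeball the raw output first.

    #!/usr/bin/env python
    """Post-reboot sanity check: confirm every node reports the same OneFS
    build, then print the overall cluster status for a human to read.

    Assumes isi_for_array, isi version and isi status are on the PATH and
    that isi_for_array -s prefixes each line with "nodename: " -- check the
    raw output once before trusting the parsing.
    """
    import subprocess

    def run(cmd):
        out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return out.decode("utf-8", "replace") if isinstance(out, bytes) else out

    if __name__ == "__main__":
        # One 'isi version' line per node, e.g. "node-1: Isilon OneFS v8.0.0.4 ..."
        lines = [l for l in run(["isi_for_array", "-s", "isi version"]).splitlines() if l.strip()]
        builds = set(l.split(":", 1)[-1].strip() for l in lines)
        if len(builds) == 1:
            print("All nodes report the same build: %s" % builds.pop())
        else:
            print("Mixed builds detected:")
            for l in lines:
                print("  " + l)
        # Overall health summary (node status, drive status, running jobs).
        print(run(["isi", "status"]))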

Needless to say, the storage group across the hall running NetApp was all but laughing at us.

2 Intern • 20.4K Posts

June 2nd, 2017 04:00

djthomas,

Can you please provide feedback on my last questions?

Thank you

33 Posts

June 2nd, 2017 13:00

I can't speak to the twinax issue specifically, but we have LACP, and in the end I didn't change any network settings during our 7.2.1.4 -> 8.0.0.4 upgrade.

6 Posts

June 2nd, 2017 15:00

We will have more confidence in the date as we approach it, but currently we are targeting a patch for 6/30/17.

7 Posts

June 6th, 2017 10:00

@dynamox,

We upgraded from 7.2.1.4 to 8.0.0.4 on both our clusters a couple of months ago. We had absolutely no issues with our DR cluster (8 x NL400); the upgrade took less than 30 minutes (simultaneous reboot), and LACP and all other networking configuration remained intact. Our production cluster (5 x X410, 5 x NL410) took a little longer, but that was because all the nodes hung on the simultaneous reboot and I had to go onsite (just down the road) and manually power-cycle them all. Once that was done, everything came back up (LACP, network configuration, etc.). I applied the BXE kernel patch a few weeks later and we've been static since then with the configuration below:

  • 8.0.0.4
  • Node Firmware 9.3.4
  • BMC firmware v1.0
  • BXE Kernel patch 188239
  • Drive Firmware

Couple of notes:

- While support was checking things out after the upgrade (because all the nodes hung, I had them take a look), they mentioned a "known issue" where all SMB configuration can be lost during the upgrade. It didn't happen to us (luckily), but you might want to inquire about that; there's a rough export sketch after these notes.

- You'll probably want to modify the alerts after the upgrade, since 8.x is way more chatty than OneFS 7.2

- I got a CMC non-responsive error on one node this past week, and support has told me I need to go to Node Firmware 10.0.1 after I power-cycle that node.

- At the time of the upgrade we had a lot of SyncIQ policies in use (due to requiring lots of quotas on the target cluster), almost 250. This worked fine with 7.2 code on the source cluster (even after the target was upgraded to 8.0.0.4). However, when the production cluster was upgraded to 8.0.0.4 we started having all kinds of failures, and it required support to change the default SyncIQ behavior for our environment. Since 8.x removes the issue with quotas on child directories of a SyncIQ target, we have been able to dramatically reduce the number of policies, so we'll likely revert our SyncIQ settings back to default (a small grouping sketch for planning that consolidation is at the end of this post).
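
On the SMB note above: before the next upgrade I'll probably dump the share definitions to a flat file on /ifs so they could be recreated by hand if that known issue ever does bite. A minimal sketch, assuming "isi smb shares list" prints the share name in the first column of its table output and that "isi smb shares view <name>" dumps a single share's settings, as they do on our clusters:

    #!/usr/bin/env python
    """Export SMB share definitions to a flat file before an upgrade, so
    shares could be recreated by hand if the SMB config were ever lost.

    Parsing assumptions: 'isi smb shares list' prints a table with the
    share name in the first column (naive -- breaks on share names that
    contain spaces), and 'isi smb shares view <name>' dumps one share's
    full settings. Check both against your own output first.
    """
    import subprocess
    import time

    def run(cmd):
        out = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return out.decode("utf-8", "replace") if isinstance(out, bytes) else out

    def share_names():
        names = []
        for line in run(["isi", "smb", "shares", "list"]).splitlines():
            line = line.strip()
            # Skip the header, separator and "Total: N" footer rows.
            if not line or line.startswith(("Share", "-", "Total")):
                continue
            names.append(line.split()[0])
        return names

    if __name__ == "__main__":
        path = "/ifs/data/smb-shares-%s.txt" % time.strftime("%Y%m%d-%H%M%S")
        with open(path, "w") as fh:
            for name in share_names():
                fh.write("##### %s\n" % name)
                fh.write(run(["isi", "smb", "shares", "view", name]))
                fh.write("\n")
        print("Share definitions written to %s" % path)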
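
And on the SyncIQ point: planning the consolidation was mostly a matter of spotting policies whose target paths share a parent directory. A rough grouping sketch is below; the --format json flag and the "name" / "target_path" field names are from memory, so treat them as assumptions and verify against your own output first.

    #!/usr/bin/env python
    """Group SyncIQ policies whose target paths share a parent directory;
    groups with more than one member are candidates for collapsing into a
    single policy one level up, now that 8.x tolerates quotas below a
    policy's target root.

    Usage (run the dump on the source cluster):
        isi sync policies list --format json > policies.json
        python group_policies.py policies.json

    The --format flag and the 'name' / 'target_path' field names are
    assumptions -- check them against your own JSON first.
    """
    import json
    import posixpath
    import sys
    from collections import defaultdict

    NAME_FIELD = "name"          # assumed field names in the JSON output
    TARGET_FIELD = "target_path"

    def main(path):
        with open(path) as fh:
            policies = json.load(fh)
        groups = defaultdict(list)
        for pol in policies:
            parent = posixpath.dirname(pol[TARGET_FIELD].rstrip("/"))
            groups[parent].append(pol[NAME_FIELD])
        for parent, names in sorted(groups.items()):
            if len(names) > 1:
                print("%s <- %d policies could collapse here: %s"
                      % (parent, len(names), ", ".join(sorted(names))))

    if __name__ == "__main__":
        main(sys.argv[1])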

2 Intern • 20.4K Posts

June 8th, 2017 19:00

Good to hear, Steve. Do you know what the "SMB issue" is all about? Do you use twinax on your clusters?

7 Posts

June 9th, 2017 06:00

We exclusively use twinax cables for both of our clusters, 10GbE with LACP. They never gave me details about the SMB “issue” but it sounds like someone else mentioned it earlier in this thread.

2 Intern • 20.4K Posts

June 9th, 2017 08:00

What Twinax cables do you use?

7 Posts

June 9th, 2017 09:00

  • CABL-CU5M-00-A01 (10GBASE-CU SFP+ Cable, 5 Meter)
  • CABL-CU3M-00-A01 (10GBASE-CU SFP+ Cable, 3 Meter)

2 Intern • 20.4K Posts

June 12th, 2017 10:00

Interesting, non-Cisco-branded twinax worked. What switches are you using?

7 Posts

June 12th, 2017 11:00

We are using Nexus 5ks

Steve Bogdanski

2 Intern • 20.4K Posts

June 21st, 2017 09:00

djthomas,

Are we still on target for 6/30/2017? Any details you can share?

Thank you

6 Posts

June 21st, 2017 21:00

The ETA slipped by a couple of weeks. The new ETA is 7/14.

2 Intern • 20.4K Posts

June 22nd, 2017 08:00

"great", and i have an upgrade scheduled on 7/15.
