
July 23rd, 2009 02:00

Connecting two PowerConnect 62xx switches w/ stacking cable vs 10GbE uplink module

Hi,

We currently have two PowerConnect 6248 switches stacked together. Among other things, we use them for our iSCSI SAN with VMware Virtual Infrastructure 3.5: each ESX server has one iSCSI port connected to the first switch and another connected to the second switch for redundancy.
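For what it's worth, the redundant paths are visible from the ESX 3.5 service console; a rough sketch (exact device names depend on the host):

    # vmkernel NICs used for iSCSI -- one per switch in our setup
    esxcfg-vmknic -l
    # storage paths -- each iSCSI LUN should list one path per switch
    esxcfg-mpath -l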

We've recently upgraded the switches to the latest firmware version 3.x (and also to the new boot code), following the procedure in the Dell release notes, and that caused both switches to reboot together.

The stack took about two minutes to come back online, and for all that time our iSCSI SAN was unreachable, bringing down the VMware VMs. Please also see my other post with more info: http://en.community.dell.com/forums/t/19284196.aspx

If I understand correctly, there is currently no way to upgrade the firmware on stacked switches while avoiding a single reboot of the whole stack (and the associated network downtime for all the equipment connected to it).
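For context, if the switches were separate, the update could be done one switch at a time; roughly like this, if I read the 62xx documentation right (the TFTP address and image file name below are just placeholders):

    console# copy tftp://192.168.1.10/PC6200_v3.x.x.x.stk image
    console# show bootvar              ! check which image is currently active
    console# boot system image2        ! activate the newly downloaded image
    console# reload                    ! reboots only this one switch

With the switches unstacked, the second switch would keep the iSCSI paths up while the first one reboots.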

So, to avoid Virtual Infrastructure downtime with our current setup when upgrading the PowerConnect firmware (or doing other maintenance on the switches), we are now considering changing the configuration and using the two switches separately rather than as a stack.

Is there any way to keep the two switches connected by the stacking cable but have them act as two separate switches (so the firmware can be updated on one of them at a time) instead of as a single logical switch?

Or would we need to use a couple of Ethernet ports on each switch to create a LAG between them? Or, better yet, could we purchase a 10GbE uplink module (dual integrated CX-4 copper ports) for each switch and use those to connect the two switches while managing them separately?
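To make the question concrete, the interconnect LAG would look something like this on each switch (only a sketch; I'm assuming the 10GbE uplink module ports enumerate as 1/xg1 and 1/xg2):

    console# configure
    console(config)# interface range ethernet 1/xg1-1/xg2
    console(config-if)# channel-group 1 mode auto    ! auto = LACP
    console(config-if)# exit
    console(config)# exit
    console# show interfaces port-channel 1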

If we stop using the two switches as a stack, besides losing the convenience of managing them together from a single console/web interface and losing the high-bandwidth connection between the two, would we lose anything else by managing them separately and connecting them through the 10GbE uplink modules?

Thanks.

108 Posts

August 24th, 2009 08:00

Anyone?

Thanks.

909 Posts

August 24th, 2009 08:00

The only thing you lose is what you mentioned:

- instead of 24Gb with stacking, you will get 20Gb with two 10GbE ports LAGged

- you would have to manage the switches separately.

 

108 Posts

August 25th, 2009 01:00

Thanks for your reply.

Just a quick question: don't you get 48Gb of bandwidth with the stacking (and not 24Gb)?

Thanks.

909 Posts

August 25th, 2009 07:00

Yes. Using marketing logic:

2 × stacking ports (12Gb each): 24Gb bi-directional, or 48Gb throughput

2 × CX4 ports (10Gb each): 20Gb bi-directional, or 40Gb throughput

108 Posts

September 3rd, 2009 07:00

We haven't done that yet; we still need to purchase the 10GbE uplink modules. I'll update this thread when we do.

Please post your experience if you do the same.

Thanks.

11 Posts

September 3rd, 2009 07:00

Hi Pzero,

Did you end up taking the switches out of the stack?  If so, did you have any issues? 

I have a pair of 6224s and I think I am going to unstack them for my iSCSI fabric too.

Ryan

11 Posts

September 4th, 2009 07:00

Pzero,

I ended up unstacking my 6224s yesterday to troubleshoot an issue, and I think I am going to leave them that way. I'm using two subnets for the iSCSI fabric, and performance seems unaffected by the unstack. Unstacked, each switch runs a single-subnet VLAN and is connected to our management network individually; when they were stacked, each switch handled half a subnet VLAN for the iSCSI fabric.

One of my 6224s was bad and would drop packets under load, causing major iSCSI issues throughout the stack. I should receive the replacement switch today. With them unstacked I think it will be easier to troubleshoot problems and do firmware updates without downtime.
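For reference, the per-switch side of that is roughly the following (a sketch; VLAN 10 and the port range are made-up stand-ins for one of the two iSCSI subnets):

    console# configure
    console(config)# vlan database
    console(config-vlan)# vlan 10
    console(config-vlan)# exit
    console(config)# interface range ethernet 1/g1-1/g24
    console(config-if)# switchport mode access
    console(config-if)# switchport access vlan 10

The other switch gets the same config with its own VLAN and subnet.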

Ryan

43 Posts

September 13th, 2009 16:00

How did you notice/determine that one 6224 was "bad"? That's pretty worrisome: on receipt of a new switch, I power it up and load the OS version we use, but I haven't hooked each switch up to an Ixia/AdTech before putting it into production.

I use a bunch of 62xx's. Do I need to sweat each unit in the lab before using it? As Bill Murray said, "this isn't the behavior you expect from a major appliance!"

 

11 Posts

September 14th, 2009 08:00

We started having issues with our iSCSI connections dropping and reconnecting. Our Windows 2008 servers that were directly connected over iSCSI were logging iScsiPrt events 7, 20, 1, and 34 in the system log. Because the bad switch would pass data fine most of the time, the initiator would reconnect within the same second and then fail again once back under load, causing major slowdowns and data corruption. After running tests on another SAN connected to the same iSCSI fabric, I determined the problem to be with the switches themselves and decided to unstack them and test them individually.

For the test I ran a hard drive benchmarking program against the iSCSI drives, and within a minute the bad switch would start to crash. The same benchmark would run endlessly on the other switch without a problem. I imagine a professional load simulator would do even better for a stress test. I don't normally stress test my new switches either, but I might start now, at least for my iSCSI switches!
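If anyone needs to reproduce this, one way to see the drops is to watch the port counters on the switch while the benchmark runs (commands from memory, so double-check the syntax on your firmware):

    ! per-port counters -- re-run and compare the error/discard columns under load
    console# show interfaces counters ethernet 1/g1
    ! summary view across all ports
    console# show interfaces counters

If a switch is dropping packets, the discard/error counters should keep climbing while the load is applied.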

With that said, I have about 20 other 6224s/6248s in production and this is the first failure I have had. So hopefully this is an isolated incident.

 

108 Posts

October 2nd, 2009 01:00

Hi,

We got the 10GbE uplink modules to "unstack" our two PowerConnect 6248 switches and manage them separately.

Can we now use the 1m stacking cable we've been using between the stacking modules to connect the 10GbE uplink modules instead, or do we need the dedicated 12m CX-4 cable for the 10GbE uplink from Dell?

Thanks.

909 Posts

October 2nd, 2009 05:00

You can use the 1m cable.

14 Posts

October 17th, 2009 08:00

Hi,

I'm wondering: if you unstack the two switches and use them separately, why do they need a 10GbE link between them?

The two paths from the ESX hosts to the storage will still work even without a link between the two switches.

 

43 Posts

October 18th, 2009 15:00

That depends on the configuration. For example, if you have a cable from each switch to an iSCSI target, the target may keep using its primary cable as long as that cable has link. If the cable from the iSCSI target to switch A is good, but the cable from switch A to the ESX host is bad, the iSCSI target may not be smart enough to fail over to the second switch; an inter-switch link lets the host still reach the target via switch B.
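If you want to test that specific failure mode, one way is to administratively down the host-facing port on switch A and watch whether the initiator actually moves to the path through switch B (a sketch; the port number is a placeholder):

    console# configure
    console(config)# interface ethernet 1/g5    ! the port facing the ESX host
    console(config-if)# shutdown
    ! ... on the ESX host, esxcfg-mpath -l should now show the path via switch B ...
    console(config-if)# no shutdown             ! restore when done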

 
