June 13th, 2011 09:00

m1000e, 4 x m8024's, 2 x NEXUS 5548P's, VMware and an EQL PS6510

Hi All,
First post here, and a good one to get my teeth into, I think. I've been scratching my head for a while trying to work out the best way to configure this!

I should say, I didn't spec the kit... I've just been given it to set up - so please, no comments saying "you shouldn't have bought that" :-D

Anyway, I have an m1000e with 4 blades, which will be running ESX 4.1. The M8024 switches are in chassis slots B1, B2, C1 and C2 (slots A1 and A2 have M6348s in them for nothing other than some 1Gb copper connectivity). The M8024s are fitted with the 4-port SFP+ modules, i.e. I have 8 x 10Gb SFP+ ports per switch.

In terms of internal connectivity, the full-height blades have 2 x 10Gb connections to each 10Gb blade switch (so each blade has 8 x 10Gb ports).

Externally we have 2 x 10Gb uplinks from each M8024 switch up to the 2 x Cisco NEXUS 5548Ps.

The NEXUS switches also connect to the PS6510 SAN (controllers currently split across the two NEXUS switches - eth0 of both SAN controllers linking to NEXUS 1 and eth1 of both linking to NEXUS 2).

I should also mention that both NEXUS switches are connected up to a 3750X stack, using 4 x 10Gb links split across the stack (this is the layer 3 core routing switch).
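For what it's worth, the sort of thing I was picturing on the NEXUS end is each M8024's pair of uplinks landing in a vPC, so the two 5548Ps look like a single switch to the blade switch. A rough sketch only - the interface numbers, vPC domain and peer-keepalive address below are placeholders, not our real config:

  feature lacp
  feature vpc
  vpc domain 1
    peer-keepalive destination 192.168.1.2
  !
  interface port-channel1
    description vPC peer-link to the other 5548P (member ports omitted)
    switchport mode trunk
    vpc peer-link
  !
  interface port-channel11
    description uplinks from the M8024 in slot B1
    switchport mode trunk
    vpc 11
  !
  interface Ethernet1/11
    description 10Gb link from M8024 B1
    switchport mode trunk
    channel-group 11 mode active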

The plan I have in my head is to have blade switches B1 and C1 (i.e. two of the mezzanines in each blade) dedicated to network communication (server VLANs etc.).

Blade switches B2 and C2 I want to use for iSCSI communication.

Everything is to flow through the NEXUS switches....with as much redundancy as possible....
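For the iSCSI side, what I have in my head per ESX 4.1 host is something like the below - purely a sketch; the vmnic/vmk numbers, the vmhba name and the addressing are made up, and the one-active-NIC-per-VMkernel-port override gets set in the vSphere client:

  # one vSwitch for iSCSI, uplinked to the two 10Gb NICs facing B2 and C2
  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -L vmnic4 vSwitch2
  esxcfg-vswitch -L vmnic5 vSwitch2

  # one VMkernel port per uplink
  esxcfg-vswitch -A iSCSI-1 vSwitch2
  esxcfg-vswitch -A iSCSI-2 vSwitch2
  esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI-1
  esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI-2

  # bind both VMkernel ports to the software iSCSI initiator for multipathing to the PS6510
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33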

Any suggestions as to how/where/what/why would be much appreciated!

Cheers

Paul

June 13th, 2011 09:00

To throw something else into the mix, the above is for a server virtualisation environment. There is another m1000e with 4 blades and another 4 x M8024s, plus a hybrid PS6010, which will be sharing the above NEXUS switches for their networking and iSCSI - but this second environment will be for VMware View.

July 4th, 2011 23:00


Paul,

Can I suggest you ditch the straight-through switches and buy 2 x M6220s. A good way to configure your chassis might be to use A1/A2 for general networking, B1/B2 for iSCSI and C1/C2 for VMotion. Keep in mind the M8024s don't stack; the M6220s do. This gives you 2 ports across two switches, so if one fails (they never do) you are still up and running. With a full-height blade you get 4 Ethernet ports per fabric, so great throughput.
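On the ESX side that layout maps to something like this per host for the VMotion fabric - a sketch only; the vmnic numbers and addressing are placeholders, and you tick the VMotion box on that VMkernel port in the vSphere client:

  esxcfg-vswitch -a vSwitch3
  esxcfg-vswitch -L vmnic6 vSwitch3
  esxcfg-vswitch -L vmnic7 vSwitch3
  esxcfg-vswitch -A VMotion vSwitch3
  esxcfg-vmknic -a -i 10.10.20.11 -n 255.255.255.0 VMotion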

You can fibre-connect the M8024 pairs together and use an LACP trunk, linking two fibres at a time between switches and your second chassis; that allows you to share disk across chassis, VMotion between them, and have some degree of redundancy. You can also stack the M6220s between the chassis so you have shared VLANs across all of Fabric A.
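The inter-switch trunk itself is just an LACP port-channel on the M8024 CLI, something like the below (a sketch only - the external port names depend on which slots your SFP+ modules sit in, so check show interfaces status on your switches):

  configure
  interface port-channel 1
  switchport mode trunk
  exit
  interface range ethernet 1/xg21-1/xg22
  channel-group 1 mode auto
  exit

If memory serves, "mode auto" is what gives you LACP rather than a static channel on the PowerConnect CLI.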

Connecting your blades to the outside world would then just require an LACP trunk of 2 or more ports back to your core switch, or just a single link back to your switch (up to you and what degree of failure you can live with).
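If the Cisco end of that uplink trunk is a Catalyst stack (you mentioned a 3750X core), the matching config is the usual cross-stack EtherChannel - again just a sketch with placeholder port numbers:

  interface Port-channel10
   description LACP trunk down to the blade enclosure
   switchport trunk encapsulation dot1q
   switchport mode trunk
  !
  interface range TenGigabitEthernet1/1/1 , TenGigabitEthernet2/1/1
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 10 mode active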


Sid
z900collector.wordpress.com