Hi All,
First post here - a good one to get your teeth into, I think! I've been scratching my head for a while trying to work out the best way to configure this.
I should say, I didn't spec the kit... I've just been given it to set up - so please, no comments saying "you shouldn't have bought that" :-D
Anyway, I have an M1000e with 4 blades, which will be running ESX 4.1. The M8024 switches are in chassis slots B1, B2, C1 and C2 (slots A1 and A2 have M6348s in them for nothing other than providing some 1Gb copper connectivity). The M8024s are fitted with the 4-port SFP modules, i.e. I have 8 x 10Gb SFP ports per switch.
In terms of internal connectivity, the full-height blades have 2 x 10Gb connections to each 10Gb blade switch (so each blade has 8 x 10Gb ports).
Externally we have 2 x 10Gb uplinks from each M8024 switch up to 2 x Cisco Nexus 5548Ps.
The Nexuses also connect to the PS6510 SAN (controllers currently split across the two Nexus switches - eth0 of both SAN controllers linking to Nexus 1, and eth1 of both linked to Nexus 2).
I should also mention that both Nexuses are connected up to a 3750-X stack, using 4 x 10Gb links split across the stack (this is the Layer 3 core routing switch).
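For the Nexus-to-3750-X side, I was assuming vPC is the way to get an active/active multi-chassis port-channel down to the stack - something along these lines (just a rough sketch; the domain ID, interface numbers and keepalive addresses are placeholders, not our actual config):

```
! On each Nexus 5548P (sketch only - IDs/addresses are placeholders)
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1  ! assumed mgmt IPs

! vPC peer-link between the two Nexuses
interface port-channel 1
  switchport mode trunk
  vpc peer-link

! Port-channel down to the 3750-X stack (2 links per Nexus, 4 total)
interface port-channel 20
  switchport mode trunk
  vpc 20
```

The 3750-X end would then just see one ordinary 4-port LACP EtherChannel spread across the stack members. Happy to be corrected if that's not how you'd do it.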
The plan I have in my head is to have blade switches B1 and C1 (i.e. two of the mezzanines in each blade) dedicated to network traffic (server VLANs, etc.).
Blade switches B2 and C2 I want to use for iSCSI traffic.
Everything is to flow through the Nexus switches, with as much redundancy as possible.
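On the ESX side, for the iSCSI mezzanines (B2/C2) I was planning the usual software-iSCSI port binding - roughly this, in ESX 4.1 syntax (the vmk/vmhba numbers are just examples, not confirmed on this kit; each vmkernel port would be pinned to a single active 10Gb uplink first):

```
# ESX 4.1 software iSCSI port binding (sketch - vmk/vmhba numbers are examples)
# Bind one vmkernel port per iSCSI NIC to the software iSCSI adapter:
esxcli swiscsi nic add --nic vmk1 --adapter vmhba33
esxcli swiscsi nic add --nic vmk2 --adapter vmhba33
# Verify the bindings:
esxcli swiscsi nic list --adapter vmhba33
```

That should give a path per iSCSI NIC for multipathing to the PS6510 - shout if there's a better approach for EqualLogic.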
Any suggestions as to how/where/what/why would be much appreciated!
Cheers
Paul