May 11th, 2009 14:00

MD3000i + 2 R710 servers + VMware Infrastructure = advice needed!

I am soon to order 2 R710 servers and an MD3000i, along with VMware Infrastructure Enterprise. I have been reading numerous documents and forums for days on end. The picture becomes clearer each day, but things are still a bit fuzzy :-)

First of all, I had Dell spec out the servers with 4 NIC ports each. However, after reading around, I think it might be best if each server has 6 NIC ports. I will be using ESXi, running from the onboard SD slot.

I have struggled over whether or not I should use my existing core switch (HP ProCurve 5308XL) for this project, or get new switches. I'd like to avoid VLANs if possible.

If I did go with new switches, would the Dell PowerConnect 5424 be a good choice? Or should I go with HP ProCurve 2810-24G switches? All of the switches on my network are HP ProCurve 26xx series, and I use the ProCurve Manager Plus software. I'd like the switches to allow the single-subnet configuration, since VMware does not support the two-subnet config. The following thread has been helpful, but I'm still confused: http://www.delltechcenter.com/thread/1963035/MD3000i+and+vmware+%3A+redundant+iscsi+setup
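For what it's worth, here is a minimal sketch of what the single-subnet layout means for addressing (all names and addresses below are made up for illustration): both MD3000i controller ports and both iSCSI vmkernel ports on each host sit in one subnet, so every port must be able to reach every other.

```python
import ipaddress

san = ipaddress.ip_network("192.168.130.0/24")   # hypothetical iSCSI subnet
ports = {
    "md3000i-ctrl0":   "192.168.130.101",
    "md3000i-ctrl1":   "192.168.130.102",
    "esx1-iscsi-vmk0": "192.168.130.11",
    "esx1-iscsi-vmk1": "192.168.130.12",
}

# Single-subnet config: all storage ports share one broadcast domain,
# which is why the two SAN switches are typically linked together.
assert all(ipaddress.ip_address(ip) in san for ip in ports.values())
```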


I have a lot of questions...I want to get the right equipment ordered and set up right the first time! Any help would be greatly appreciated!

180 Posts

May 13th, 2009 15:00

Hi Razorhog,

Have you read through the Dell VSE team's reference architecture?

http://i.dell.com/sites/content/business/solutions/whitepapers/en/Documents/virtualization_ref_architecture_v1_1.pdf



8 Posts

May 13th, 2009 16:00

Thanks for the reply. I have read through that, and while helpful, it doesn't get into the nitty gritty stuff that I need. I've decided to use 2 Dell PC5424 switches connected together, in a single subnet configuration. The devil is in the details...

180 Posts

May 15th, 2009 13:00

Razorhog,

I'm glad to hear that you're up and running. The Dell VSE RA does show a highly available config with two switches ISL'ed together.

Also, have you seen the MD3000i + ESX setup guide?
http://i.dell.com/sites/content/business/solutions/engineering-docs/en/Documents/md3000i_esx_deploy_guide.pdf

If you have further questions, please do post them.

8 Posts

May 15th, 2009 14:00

"I'm glad to hear that you're up and running. [...]"
I'm far from being up and running, I haven't ordered the equipment yet ;-)
Take a look at my Visio drawing and see what you think. I'm confused about the management and vmotion networks. http://farm4.static.flickr.com/3370/3534476366_b842e34322_o.jpg

May 18th, 2009 00:00

Looks OK to me, but I think it's recommended to add a trunk between the two PC5424s, and I would also hook up the MD3000i to the mgmt network. Some would probably also break the vmotion and mgmt LANs out from the ProCurve 5308XL to keep things separated.

8 Posts

May 18th, 2009 05:00

From what I understand, when using 2 subnets with the SAN, you don't need to connect the two switches. As far as separating mgmt and vmotion from the 5308XL - I don't have another switch to use. If the management network is different from the production LAN, how does one access it without changing the IP on his/her workstation?

May 18th, 2009 06:00

I read the previous posts of the thread and thought you were going for a single subnet; I didn't pay enough attention to spot that you had different subnets assigned in your Visio drawing. If the production, vmotion, and mgmt LANs are on different VLANs on the same switch, you are safe. It's quite common to separate things physically in an attempt to minimize human error.

8 Posts

May 18th, 2009 11:00

Would getting an additional Dell PowerConnect 5424 and using it for the VMotion and Management LANs be easier than buying a gigabit module for the 5308XL? The 5308XL is the core switch for the entire network. It does not have any gigabit ports, and a module for it is twice the cost of a Dell 5424... Thanks for the great info/suggestions!

May 18th, 2009 15:00

As far as I know, a gigabit network connection is required for VMotion. Between a gigabit module for the 5308XL and another PC5424 at half the price of the module, I would easily get the PC5424.
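As a rough back-of-the-envelope sketch (hypothetical 4 GB VM, raw line rate only, ignoring protocol overhead and memory re-dirtying during the copy), the difference is easy to see:

```python
def transfer_seconds(ram_gb: float, link_mbps: float) -> float:
    """Seconds to push ram_gb of VM memory over a link_mbps link."""
    bits = ram_gb * 8 * 1024**3          # GB -> bits
    return bits / (link_mbps * 10**6)    # Mb/s -> bits/s

for mbps in (100, 1000):
    print(f"4 GB of RAM over {mbps} Mb/s: ~{transfer_seconds(4, mbps):.0f} s")
```

Roughly 340 seconds at Fast Ethernet versus about 34 at gigabit, which is why gigabit is the stated requirement.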

8 Posts

May 18th, 2009 16:00

I was thinking the same thing, but the 5308XL doesn't have any copper gigabit ports in it at all. Can the management network be 100 Mb? If so, I'll get the PC5424 for VMotion and use regular 100 Mb ports on the 5308 for management. If the management network requires a gigabit connection, I'll get the 5308 module and use VLANs. Thoughts?

8 Posts

May 20th, 2009 10:00

Anyone? Bueller......Bueller?

May 21st, 2009 05:00

100 Mbit should be enough for management. But why not use that third PC5424 and set up a separate VLAN for management? If the only other VLAN on it would be for VMotion, you have plenty of ports.

26 Posts

May 21st, 2009 11:00

If you already have an HP 5308XL, IMHO, just get the gigabit module and implement some VLANs for the VMotion and Management. You gain more flexibility and stay with what's already familiar to you in terms of the switch configuration interface and management tools. Why introduce an extra component if it's unnecessary? And besides, the switching capacity on the backplane of the 5308XL beats the crap out of the PC5424s anyway, which is really important when doing VMotion. Just my 2 cents.

8 Posts

May 21st, 2009 12:00

"If you already have an HP 5308XL, IMHO, just get the gigabit module and implement some VLANs for the VMotion and Management. [...]"
That is what I have decided to do. Getting it all set up correctly is my next hurdle.
Thanks to everyone who replied!

May 22nd, 2009 02:00

jhboricua: The facts are that the Dell PowerConnect 5424 switch fabric can handle 48Gbps, which is wire speed when you have 24 GbE ports. Can't get better than that.

The HP 5308xl has a switch fabric that only handles 78Gbps. If you populate the 8 module slots with three 16-port GbE modules (J4907A), you are already up to 96Gbps. Throw in a couple of 24-port FE modules at 4.8Gbps a pop and you can see that the 5308xl is nowhere near wire speed in the switch fabric. The HP 5300xl series was released in 2002, and that switch fabric wasn't bad back then. In 2006 they released the 5400zl with a switch fabric capable of 345.6Gbps in the 6-module version.
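To make the arithmetic explicit (using the figures quoted above and counting each port full duplex, i.e. 2 Gb/s per GbE port and 0.2 Gb/s per FE port), a quick sketch:

```python
def port_demand_gbps(gbe_ports: int = 0, fe_ports: int = 0) -> float:
    """Aggregate full-duplex bandwidth the installed ports can demand."""
    return gbe_ports * 2 + fe_ports * 0.2

# PowerConnect 5424: 24 GbE ports against a 48 Gb/s fabric -> wire speed.
assert port_demand_gbps(gbe_ports=24) == 48

# 5308xl: three 16-port GbE modules (J4907A) already demand 96 Gb/s,
# which exceeds the quoted 78 Gb/s fabric -> oversubscribed.
print(port_demand_gbps(gbe_ports=3 * 16))
```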

Both solutions would work in your scenario, since there is no chance you would even come near the performance limits with your current setup.