
February 19th, 2014 07:00

Newbie question about cabling the MD3220i

I have been reading the documentation and watching the online video for the PowerVault MD3220i. I plan on using only one host server at this time, with NO plans (in the near future) to add additional hosts. Here are the questions I have not seen answers to:

- The cabling diagram for the IP-hosted SAN shows 4 ports on the server being used. If my server only has 4 ports, what are my options? Do I add a 4-port NIC to the server, or do I just use two of the server's 4 ports for the SAN and use the remaining two for the connection to the corporate LAN?

- If I go with the direct-attached cabling configuration and use 2 ports on the server cabled directly to the SAN, what reduction in speed will I see compared to the switch-based IP configuration?

- In the direct-attached configuration, could I add a 4-port NIC to the server and realize additional throughput by connecting 4 cables to the host?

- Will a PowerConnect 5224 work for the IP-based SAN, and what switch port configuration is needed to run the IP-based SAN?

-b

Moderator • 7.1K Posts

February 20th, 2014 09:00

Hello koomen,

The 5224 does not support what is needed for iSCSI, so you will not be able to use them. The 5224 does support VLAN segregation and can be set to a higher MTU, but it doesn't have any iSCSI prioritization, which is needed.

Please let us know if you have any other questions.

4 Operator • 9.3K Posts

February 20th, 2014 11:00

In theory, any Gigabit Ethernet switch can handle iSCSI traffic. However, performance plays a role, and the PowerConnect 5200-series isn't a good switch for iSCSI. Staying within the Dell-branded switch models, I'd suggest a 5400-series, or 6200-series and higher (avoid the 5500-series and anything lower than a 5000-series).

If you opt for direct-connect, use port 0 on one controller and port 1 on the other controller so you use 2 different subnets for iSCSI.
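
For example (the addresses below are purely illustrative, not the array's defaults), a direct-connect layout on two subnets could look like this:

Host NIC 1 (192.168.130.10/24) -> Controller 0, iSCSI port 0 (192.168.130.101/24)
Host NIC 2 (192.168.131.10/24) -> Controller 1, iSCSI port 1 (192.168.131.102/24)

Each host NIC sits in the same subnet as the array port it cables to, and the two paths never share a subnet.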

Using 2 NICs for iSCSI does give you failover (between the controllers), but no performance benefit: each virtual disk is only owned by 1 controller at a time, and therefore you'll only have a 1 Gbit/s connection to any single given virtual disk.
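
To put a rough number on that: 1 Gbit/s works out to about 125 MB/s of theoretical bandwidth, so expect on the order of 100-115 MB/s of real-world iSCSI throughput to any single virtual disk in that setup, regardless of how many host NICs are cabled.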

Adding additional network ports is probably not a bad idea if the budget permits, but keep all iSCSI NICs the same brand (so all Broadcom or all Intel). If you want, you can spread the iSCSI NICs across different cards for more overall reliability.

Moderator • 7.1K Posts

February 19th, 2014 08:00

Hello koomen,

You can use all 4 ports on the MD3220i, or just use as many as you can connect to the host. Since your host has 4 ports, I would use 2 ports to connect to the MD. If you want, you can also get additional NICs, add them to your server, and use those to connect to the MD as well. We recommend using all 4 connections in case you have a port failure on one of the MD's controllers. If you are looking at getting a switch, the PowerConnect 62XX series works well with the MD for iSCSI. Here is a link to the deployment guide in case you don't already have it: ftp://ftp.dell.com/Manuals/Common/powervault-md3200i_Deployment%20Guide_en-us.pdf
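
As a rough sketch only (the port numbering here is an example, not taken from the deployment guide), a 2-port host going through iSCSI switches could be cabled so that each host NIC can still reach a port on both controllers:

Host NIC 1 -> switch 1 -> Controller 0 port 0 and Controller 1 port 0
Host NIC 2 -> switch 2 -> Controller 0 port 1 and Controller 1 port 1

The deployment guide linked above has the supported cabling diagrams and the matching subnet scheme, so follow that for the actual layout.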

Please let us know if you have any other questions.

30 Posts

February 19th, 2014 11:00

I also forgot: we currently have some PowerConnect 5224 switches, so I was hoping we could use what we have instead of buying more equipment. Is there any way to check whether the 5224 can be used?

30 Posts

February 19th, 2014 11:00

Thanks for your quick reply.

So, other than fault tolerance, there are no speed benefits to going from 2 to 4 ports?
