Hi all,
I was approached by some friends to help set up Hyper-V servers on some refurbished hardware. The company is in the process of buying the following:
RFB - HP DL580 Gen7 server, 4 x CPU @ 2.4 GHz, 512 GB DDR, 2 x 146 GB HDD, 4 x 1200 W PSU - 2 servers (the reseller also states these servers come with an extra 10Gb card)
RFB - Dell EqualLogic PS6510 storage, 38 x 900 GB HDD, 2 x 10Gb - 1 storage unit
RFB - Cisco Catalyst 2960S, 48 GigE PoE 370 W, 4 x SFP, LAN Base (WS-C2960S-48LPS-L) - 17 switches
and also an RFB Cisco 4506 switch with two gigabit line cards, one copper and one SFP.
The reseller told them they can connect the HP servers directly to the storage unit without an extra SAN switch. As far as I can see, every manual for this type of storage says you need a switch (at least one, preferably two). I browsed some forums: a few posters claim direct connection can be done (without explaining how, e.g. whether all iSCSI connections sit in the same subnet, or each connection is its own /30 subnet, or something else), while others claim it cannot be done with this storage. I know that even if a direct connection is possible, it is not recommended from a setup standpoint (packet loss in burst traffic periods, a single point of failure, possibly VMs freezing while you rescan for an additional storage disk you are presenting, etc.). I cannot ask Dell directly through the support page because the equipment is not in my company's possession, and the company buying it does not have the serial numbers yet (and no support will be included in this kind of purchase, just the hardware).
So my question is: will directly connected servers work at all with this storage? If not, what is the bare minimum (read: cheap) switch model they need to buy to have a working system?
I did find this website and have read the config guides here.
For proper operation and failover you must have a switch; all the Dell docs list that as a requirement. Going direct means one port on one controller module (CM) and one port on the other, so no MPIO during normal operations. Switches also provide buffering for when server ports fill up.
You don't have enough ports for both the array and the servers, so you're looking at something like a Dell 8xxx-series switch. There are really no "cheap" 10GbE switches suitable for iSCSI.
If the load isn't going to be too high, you could put the array on the 10GbE ports of the 4506 and the servers on GbE. If these are backup or video servers, the sustained reads could lead to retransmits and poor performance. A lot depends on the Supervisor Engine and line cards in the 4506.
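For what it's worth, once the switches are in place, MPIO on the Hyper-V hosts is typically enabled along these lines (a sketch for Windows Server 2012 or later; verify the commands against your version's documentation). Note that without a support contract you won't get the Dell HIT kit and its DSM, so the built-in Microsoft DSM is the fallback:

```
# Install the MPIO feature (run from an elevated PowerShell prompt)
Enable-WindowsOptionalFeature -Online -FeatureName MultipathIO

# Claim iSCSI-attached devices for the Microsoft DSM using the built-in
# mpclaim tool: -r reboots when done, -i installs, -d takes the bus string
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
```

After the reboot, each iSCSI session you add to the target shows up as an additional path instead of a duplicate disk.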
Re: the PS6510 drives: there should be 48 drives, not 38. Is that a typo?
Is this going to be production data? Make sure you are using RAID6 for maximum redundancy.
I suspect the EQL firmware is old too.
EQL always needs a switch pair with an ISL.
How do you expect to get ASM and EQL firmware without a Dell support contract for your array?
Two switches is preferred for maximum redundancy. However, it's not a requirement for connectivity.
They won't get the firmware or software, but again, that's not required for connectivity.
That's why I suggested only running RAID6 and have verified backups at all times.
Running production data on storage w/o support is extremely risky and not best practice of course.
Thanks all for the support.
After this insight, all parties agreed on buying two 10G switches:
Dell PowerConnect 8132F, 24 x 10 Gigabit SFP+.
They will be used for the iSCSI network (on a dedicated VLAN) and as an aggregation point for the access switches (uplinks from other buildings and floors) carrying various VLANs.
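Roughly, the plan for each iSCSI-facing port looks something like this (a sketch in PowerConnect-style CLI; the VLAN ID and port numbers are placeholders, and exact syntax varies by firmware version, so I will check it against the 8100-series CLI guide):

```
configure
vlan 100                       ! dedicated iSCSI VLAN (placeholder ID)
exit
interface te1/0/1              ! port facing an EQL 10GbE controller port
description "iSCSI - EQL eth0"
switchport mode access
switchport access vlan 100
mtu 9216                       ! jumbo frames, only if enabled end-to-end
exit
```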
That sounds great! I am glad that I was able to assist you.
Are you going to have two dedicated 10GbE ports just for iSCSI, or try to route up to another switch and back down? The latter tends not to work well long term; the added latency can spike depending on how busy the other switches are. So if at all possible, run a dedicated link between the two switches, and then you can uplink other ports anywhere you like.
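A two-port LAG between the pair keeps the inter-switch iSCSI path dedicated. Something like this (again just a sketch; the port numbers are placeholders and the syntax depends on your firmware, so check the CLI guide):

```
configure
interface range te1/0/23-24    ! two 10GbE ports reserved for the ISL
channel-group 1 mode active    ! LACP
exit
interface port-channel 1
switchport mode trunk          ! carry the iSCSI VLAN between the switches
exit
```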
Please upgrade the switches to the current firmware, and here is the best-practice configuration guide for the 8100 series.
You need to disable Data Center Bridging (DCB), otherwise standard flow control won't work. DCB is enabled on the switch by default.
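On the 8132F that is roughly the following (a sketch; confirm the exact commands for your firmware in the 8100-series best-practice guide):

```
configure
no dcb enable                  ! DCB is on by default; disable it so
                               ! standard 802.3x flow control can work
interface range te1/0/1-24
flowcontrol                    ! enable flow control on the iSCSI ports
exit
```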