dwilliam62
4 Operator
•
1.5K Posts
1
February 21st, 2019 13:00
Hello,
For proper operation and failover you must have a switch; all the Dell docs list that as a requirement. Going direct means one port on one CM and one port on the other, so there is no MPIO during normal operations. Switches also provide buffering for when server ports fill up.
You don't have enough ports for both the array and the servers, so you're looking at something like a Dell 8xxx-series switch. There are really no "cheap" 10GbE switches suitable for iSCSI.
If the load isn't going to be too high, you could put the arrays on the 10GbE ports of the 4506 and the servers on GbE. If these are backup or video servers, though, the sustained reads could lead to retransmits and poor performance. A lot depends on the Supervisor Engine and line cards in the 4506.
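The mixed-speed concern above can be put in rough numbers. This is a hypothetical back-of-the-envelope sketch (the function names and the 2 MB burst figure are mine, not from the thread): a 10GbE array replying at line rate to a 1GbE server port offers ten times what the port can drain, and whatever the switch can't buffer gets dropped, triggering TCP retransmits.

```python
# Back-of-the-envelope oversubscription check (illustrative only):
# a 10GbE array port sending to a 1GbE server port.

def oversubscription_ratio(source_gbps: float, sink_gbps: float) -> float:
    """Ratio of offered load to what the receiving port can drain."""
    return source_gbps / sink_gbps

def buffer_drain_time_ms(burst_mb: float, sink_gbps: float) -> float:
    """Time to drain a buffered burst (in MB) through the slower port."""
    burst_bits = burst_mb * 8 * 1e6
    return burst_bits / (sink_gbps * 1e9) * 1e3

print(oversubscription_ratio(10.0, 1.0))  # 10.0 -- the GbE port absorbs 1/10th
print(buffer_drain_time_ms(2.0, 1.0))     # roughly 16 ms to drain a 2 MB burst
```

During a sustained read there is no idle gap for the buffer to drain, which is why deep switch buffers (or keeping both ends at the same speed) matter for iSCSI.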
Re: 6510 drives. There should be 48 drives, not 38; is that a typo?
Is this going to be production data? Make sure you are using RAID6 for maximum redundancy.
I suspect the EQL firmware is old too.
Regards,
Don
Origin3k
4 Operator
•
2.4K Posts
0
February 21st, 2019 13:00
EQL always needs a switch pair with an ISL.
How do you think you will get ASM and EQL FW without a Dell support contract for your array?
Regards
Joerg
dwilliam62
4 Operator
•
1.5K Posts
1
February 21st, 2019 16:00
Hello,
Two switches are preferred for maximum redundancy. However, that's not a requirement for connectivity.
They won't get the firmware or software, but again, that's not required for connectivity.
That's why I suggested only running RAID6 and keeping verified backups at all times.
Running production data on storage without support is extremely risky and not best practice, of course.
Regards,
Don
md-bam-09
1 Rookie
•
6 Posts
0
February 22nd, 2019 09:00
Thanks all for support.
After this insight, all parties agreed on buying two 10G switches:
Dell PowerConnect 8132F, 24 x 10 Gigabit SFP+
They will use them for the iSCSI network (with a dedicated VLAN for it) and as an aggregation point for the access switches (uplinks from other buildings and floors) that carry various VLANs.
dwilliam62
4 Operator
•
1.5K Posts
0
February 22nd, 2019 10:00
Hello,
That sounds great! I am glad that I was able to assist you.
Are you going to have two dedicated 10GbE ports just for iSCSI, or try to route up to another switch and back down? The latter tends not to work well long term: the added latency can spike depending on how busy the other switches are. So if at all possible, use a dedicated link between the two switches; then you can uplink other ports anywhere you like.
Please upgrade the switches to current firmware. Here is the best-practice configuration guide for the 8100 series:
https://downloads.dell.com/manuals/all-products/esuprt_software/esuprt_it_ops_datcentr_mgmt/s-solution-resources_white-papers86_en-us.pdf
You need to disable Data Center Bridging (DCB), otherwise standard flow control won't work. DCB is enabled on the switch by default.
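For orientation, a rough sketch of what that might look like in the PowerConnect 8100-series CLI. The port numbers are placeholders and the exact command syntax can vary by firmware, so treat this as an assumption and verify everything against the guide linked above before applying it:

```
! Illustrative sketch only -- confirm syntax against the 8100-series guide.
! Globally disable DCB so standard 802.3x flow control takes effect.
no dcb enable

! Dedicated LAG between the two switches as the iSCSI ISL
! (ports and LAG number are placeholders).
interface range tengigabitethernet 1/0/23-24
channel-group 1 mode active
exit

! Enable flow control on the iSCSI-facing ports.
interface range tengigabitethernet 1/0/1-8
flowcontrol
exit
```

The same change is needed on both switches; after disabling DCB, confirm flow control is actually negotiating on the array- and host-facing ports.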
Regards,
Don