jboumaliudger
April 3rd, 2014 00:00
MD3220i addressing scheme incl. Hyper-V servers
I'm trying to set up a SAN consisting of:
- Two Dell PowerVault MD3220i disk arrays
- Two Cisco Gigabit Ethernet switches
- Three (five in a few months) PowerEdge R710 Hyper-V hosts running Windows Server 2012 R2
I'm aware of the technical guides, deployment guides, etc., but all I can find is an addressing scheme for the disk arrays. I've searched these forums for similar questions, but most of them are about configuring the controllers and iSCSI ports, not about the servers.
I've connected the management ports of the disk array controllers (1 per controller) to the LAN switch (management VLAN). The eight iSCSI ports per array (4 per controller) are connected to the dedicated SAN switches: half of the ports to the first SAN switch, the other half to the second.
What I can't figure out is how to connect the Hyper-V hosts. At the moment these are three identical PowerEdge R710 servers; two more will follow in a few months. That is why I chose a cabling scheme with switches rather than connecting the hosts directly to the disk arrays.
My Hyper-V hosts all have two dedicated NICs (not teamed, jumbo frames, multipath) for iSCSI traffic. What would be the ideal addressing scheme for this setup?
I've read about using different subnets for the disk array iSCSI ports; let's call them subnets A, B, C and D. There is another subnet for the management ports, connected to the management VLAN on my LAN switches; let's call this subnet E.
In which subnets should I put the iSCSI ports of my Hyper-V hosts?
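To make this concrete, the kind of layout I have in mind looks like the PowerShell sketch below. The ranges are purely examples and the subnet letters just match my description above:

# Example ranges only; any separate private /24s would do
$subnets = @{
    A = '192.168.130.0/24'   # disk array iSCSI ports
    B = '192.168.131.0/24'   # disk array iSCSI ports
    C = '192.168.132.0/24'   # disk array iSCSI ports
    D = '192.168.133.0/24'   # disk array iSCSI ports
    E = '10.10.10.0/24'      # array management ports on the LAN management VLAN
}
# Open question: which of A-D should the two iSCSI NICs in each Hyper-V host join?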



Dev Mgr
April 4th, 2014 15:00
I would start with 8 NIC ports total in each server and, if you can afford it, even consider 10.
Use a setup like this:
- 2 Ports in a team for host management
- 2 Ports in a team for VM-to-LAN traffic
- 2 Ports in a team for heartbeat/CSV/Live Migration traffic
- 4 Ports non-teamed for iSCSI with 4 unique subnets
If you have to limit it to 8 ports on your server, you could limit host management to just 1 port and also limit the heartbeat/CSV/Live Migration traffic to 1 port.
On an extreme budget you could use 1 port for host management plus VM-to-LAN traffic and 1 port for heartbeat, but I'd reduce iSCSI to just 2 ports before going that route to get down to 6 ports.
When teaming, it's best to use 2 or more ports from the same NIC vendor (Intel or Broadcom), even if opting for Windows native teaming.
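If you go with the Windows native (LBFO) teaming on 2012 R2, creating the teams is roughly this; the adapter names are just placeholders, check Get-NetAdapter for the real ones:

# Three teams of 2 ports each; adapter names are placeholders
New-NetLbfoTeam -Name "Mgmt"   -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-NetLbfoTeam -Name "VM-LAN" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-NetLbfoTeam -Name "CSV-LM" -TeamMembers "NIC5","NIC6" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Leave the 4 iSCSI ports un-teamed; MPIO provides the redundancy there

SwitchIndependent mode works without any switch configuration; Lacp is an option instead if you set up port channels on the Cisco side.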
DELL-Sam L
Moderator
April 3rd, 2014 07:00
Hello jboumaliudger,
Based on your setup, with only 2 NICs available for iSCSI, you are only going to want to use 2 subnets. If you had 4 NICs per host for iSCSI traffic, then you would want to use 4 subnets.
Regarding your 3 x R710 hosts (5 in a few months): do you want them to access the same virtual disk, or will each have its own virtual disk? If the hosts are going to share the same virtual disk, then all of them need to be in a cluster so that a locking mechanism prevents two hosts from writing to the same block of data at the same time. If you don't put them in a cluster and still let all hosts access the same virtual disk, you will get corruption.
Lastly, you will use the Microsoft iSCSI Initiator on each of your hosts. You will also need to configure connections from each MD port to each host, so that if one connection fails, I/O fails over to another connection.
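On 2012 R2 the initiator and MPIO side can be scripted along these lines; the target addresses below are only placeholders for your MD iSCSI port IPs, and a reboot may be needed after adding the MPIO feature:

Install-WindowsFeature Multipath-IO                 # add the MPIO feature
Enable-MSDSMAutomaticClaim -BusType iSCSI           # let MPIO claim iSCSI disks
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR  # round robin across paths

# Add one portal per MD iSCSI port this host can reach, then connect all paths
New-IscsiTargetPortal -TargetPortalAddress 192.168.130.101
New-IscsiTargetPortal -TargetPortalAddress 192.168.131.101
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true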
Please let us know if you have any other questions.
jboumaliudger
April 3rd, 2014 07:00
OK, I get the 2 server NICs = 2 subnets idea.
Can I divide all controller ports evenly over the two subnets, i.e. 2 ports in subnet A and 2 ports in subnet B per controller? Or do I have to limit it to two ports per controller (1 port in subnet A, 1 port in subnet B)?
All Hyper-V hosts are in a Server 2012 R2 failover cluster. I'd like to set up clustered disks: one for quorum and one for storing all Hyper-V VHDs. So the virtual disks on the disk array are mapped to the failover cluster with the Microsoft iSCSI Initiator.
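On the cluster side this is roughly what I have in mind; the host names, cluster name and address are placeholders for my environment:

Test-Cluster -Node "hv01","hv02","hv03"
New-Cluster -Name "HVCLUSTER" -Node "hv01","hv02","hv03" -StaticAddress 10.10.10.50
Get-ClusterAvailableDisk | Add-ClusterDisk
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"   # small quorum LUN
Add-ClusterSharedVolume -Name "Cluster Disk 2"            # large LUN holding the VM VHDs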
DELL-Sam L
Moderator
April 3rd, 2014 09:00
Hello jboumaliudger,
You want each port on the MD in its own subnet, for traffic shaping and for better throughput and error correction. If you want to test putting 2 MD ports in the same subnet and see whether you run into issues, that is fine, but I wouldn't recommend doing it that way in your production environment.
Your host setup will be correct: that way the cluster controls which host has access to the storage and the quorum, without any issues.
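As an illustration only (substitute your own addressing), the usual layout puts port N of both controllers in the same subnet, giving 4 iSCSI subnets for the 8 MD ports:

# Example mapping; addresses and host NIC assignments are placeholders
$portSubnets = [ordered]@{
    'Subnet A 192.168.130.0/24' = 'controller 0 port 0, controller 1 port 0, host iSCSI NIC 1'
    'Subnet B 192.168.131.0/24' = 'controller 0 port 1, controller 1 port 1, host iSCSI NIC 2'
    'Subnet C 192.168.132.0/24' = 'controller 0 port 2, controller 1 port 2 (unused until the hosts get more iSCSI NICs)'
    'Subnet D 192.168.133.0/24' = 'controller 0 port 3, controller 1 port 3 (unused until the hosts get more iSCSI NICs)'
}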
Please let us know if you have any other questions.
jboumaliudger
April 4th, 2014 00:00
So, to make use of all the iSCSI data ports on my controllers, I'll have to use four NICs for iSCSI in each server.
In that case I have to rethink the NIC strategy in my servers. I have no experience with clustering at all, so I've read a lot of documentation about it. I was advised to use the following layout for the 8 NICs in each server:
NIC em1: team member LAN (VM traffic)
NIC em2: team member LAN (VM traffic)
NIC em3: spare
NIC em4: management port
NIC p1n0: Hyper-V Cluster heartbeat
NIC p1n1: Hyper-V Live Migration
NIC p1n2: iSCSI data port with MPIO
NIC p1n3: iSCSI data port with MPIO
In this layout I have one spare NIC, but to get four iSCSI data ports I would have to combine some of the other roles onto fewer NICs. What layout would you recommend?
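Whatever layout I end up with, my plan for each iSCSI data port is roughly the following; the adapter name and address are placeholders, and the exact jumbo packet value depends on the NIC driver:

# Repeat per iSCSI NIC, one subnet each; no default gateway on these NICs
Set-NetAdapterAdvancedProperty -Name "p1n2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
New-NetIPAddress -InterfaceAlias "p1n2" -IPAddress 192.168.130.111 -PrefixLength 24
Set-DnsClient -InterfaceAlias "p1n2" -RegisterThisConnectionsAddress $false   # keep iSCSI NICs out of DNS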