February 26th, 2014 21:00
MD 3220i 3 node cluster IP addressing
I am setting up a new md3220i for our network at TAFE.
We are using 3 new dell servers with 4 nics each as hosts. The hosts are labeled HV-1, HV-2 and HV-3.
I have tried a few IP schemes, but I can't fully wrap my head around it, as I am only a beginner. This is what I am trying to achieve.
On the SAN, iSCSI port 0 on each RAID controller is disabled; since we only have 3 hosts, we thought we would use ports 1, 2 and 3 respectively.
On the servers:
NIC1 is used to connect to a switch which is connected to the domain and internet.
NIC2 is connected to the same switch using a different subnet, this is going to be a private network for migration.
NIC3 is connected to the SAN's first controller.
NIC4 is connected to the SAN's second controller.
At the moment the SAN's iSCSI ports are set to the default IPs.
Controller 0 Port 0 - -----------------------------------------------------
Controller 0 Port 1 - 192.168.131.101/24 Jumbo Frames 9K
Controller 0 Port 2 - 192.168.132.101/24 Jumbo Frames 9K
Controller 0 Port 3 - 192.168.133.101/24 Jumbo Frames 9K
Controller 1 Port 0 - -----------------------------------------------------
Controller 1 Port 1 - 192.168.131.102/24 Jumbo Frames 9K
Controller 1 Port 2 - 192.168.132.102/24 Jumbo Frames 9K
Controller 1 Port 3 - 192.168.133.102/24 Jumbo Frames 9K
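For what it's worth, the listed defaults follow a simple pattern: third octet = 130 + port number, last octet = 101 for controller 0 and 102 for controller 1. A minimal Python sketch that regenerates the table (the pattern is inferred from the listing itself, not taken from Dell documentation):

```python
# Pattern inferred from the listing above (an assumption, not from Dell docs):
# third octet = 130 + port number, last octet = 101 + controller number.
for ctrl in (0, 1):
    for port in (1, 2, 3):
        print(f"Controller {ctrl} Port {port} - 192.168.{130 + port}.{101 + ctrl}/24")
```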
HV-1 - NIC1 - 10.60.64.31/23 (Domain)
HV-1 - NIC2 - 192.168.10.1/24 (migration)
HV-1 - NIC3 - (controller 0 port 1)
HV-1 - NIC4 - (controller 1 port 1)
HV-2 - NIC1 - 10.60.64.32/23 (domain)
HV-2 - NIC2 - 192.168.10.2/24 (migration)
HV-2 - NIC3 - (controller 0 port 2)
HV-2 - NIC4 - (controller 1 port 2)
HV-3 - NIC1 - 10.60.64.33/23 (domain)
HV-3 - NIC2 - 192.168.10.3/24 (migration)
HV-3 - NIC3 - (controller 0 port 3)
HV-3 - NIC4 - (controller 1 port 3)
I hope this is understandable. Also, if anyone can list the iSCSI sessions, that would be awesome. Please tell me if I need to change anything; I am still learning.
Thanks, Scott.



Dev Mgr
February 27th, 2014 07:00
This will not work.
One of the basic networking/IP rules is that a server should not (in practice, cannot) talk to two isolated networks that share the same subnet.
A direct-attached cable is effectively a micro network. Therefore you cannot have a server connect to controller 0 port 1 (on 192.168.131.101) as well as controller 1 port 1 (on 192.168.131.102) from two different NICs at the same time without going through a switch.
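To make the conflict concrete, here is a minimal Python sketch (stdlib `ipaddress` only; the target IPs are the ones from the SAN port listing earlier in the thread) that checks which subnets each host's two iSCSI NICs would land on under the original plan:

```python
import ipaddress

# Original plan: each host's two iSCSI NICs cabled to the same port number
# on both controllers (IPs from the SAN port listing above)
plan = {
    "HV-1": ["192.168.131.101", "192.168.131.102"],  # ctrl 0 port 1, ctrl 1 port 1
    "HV-2": ["192.168.132.101", "192.168.132.102"],  # ctrl 0 port 2, ctrl 1 port 2
    "HV-3": ["192.168.133.101", "192.168.133.102"],  # ctrl 0 port 3, ctrl 1 port 3
}

for host, targets in plan.items():
    nets = {ipaddress.ip_network(f"{ip}/24", strict=False) for ip in targets}
    # Two isolated direct-attach cables must not share one subnet on a host
    status = "OK" if len(nets) == 2 else "conflict: both NICs in one subnet"
    print(host, sorted(str(n) for n in nets), status)
```

Running this flags all three hosts, since both controller ports with the same port number sit in one /24.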
The easy fix is this:
Server 1:
NIC 1: LAN (management + VM-to-LAN)
NIC 2: live migration + cluster heartbeat (should not be on the same switches as your LAN; get a decent 1Gbit switch that you can dedicate to live migration and the cluster heartbeat)
NIC 3: 192.168.130.10 -> connected to controller 0 port 0 (192.168.130.101)
NIC 4: 192.168.131.10 -> connected to controller 1 port 1 (192.168.131.102)
Then server 2 goes like this:
NIC 1: LAN (management + VM-to-LAN)
NIC 2: live migration + cluster heartbeat (should not be on the same switches as your LAN; get a decent 1Gbit switch that you can dedicate to live migration and the cluster heartbeat)
NIC 3: 192.168.130.20 -> connected to controller 1 port 0 (192.168.130.102)
NIC 4: 192.168.131.20 -> connected to controller 0 port 1 (192.168.131.101)
And server 3 would be like this:
NIC 1: LAN (management + VM-to-LAN)
NIC 2: live migration + cluster heartbeat (should not be on the same switches as your LAN; get a decent 1Gbit switch that you can dedicate to live migration and the cluster heartbeat)
NIC 3: 192.168.132.30 -> connected to controller 0 port 2 (192.168.132.101)
NIC 4: 192.168.133.30 -> connected to controller 1 port 3 (192.168.133.102)
I used the "10", "20", and "30" just as example IPs to help clarify. They can really be anything you want other than 101 and 102.
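The fixed plan can be double-checked, and the resulting iSCSI sessions listed (which the original post asked about), with a short stdlib-only Python sketch; 3260 is the standard iSCSI target port:

```python
import ipaddress

# Fixed plan: (host, initiator NIC IP, controller port IP it is cabled to)
plan = [
    ("HV-1", "192.168.130.10", "192.168.130.101"),  # controller 0 port 0
    ("HV-1", "192.168.131.10", "192.168.131.102"),  # controller 1 port 1
    ("HV-2", "192.168.130.20", "192.168.130.102"),  # controller 1 port 0
    ("HV-2", "192.168.131.20", "192.168.131.101"),  # controller 0 port 1
    ("HV-3", "192.168.132.30", "192.168.132.101"),  # controller 0 port 2
    ("HV-3", "192.168.133.30", "192.168.133.102"),  # controller 1 port 3
]

seen = {}  # host -> set of /24 subnets already used on that host
for host, nic, target in plan:
    net = ipaddress.ip_network(f"{nic}/24", strict=False)
    # Each NIC must be in the same /24 as the port it is cabled to
    assert ipaddress.ip_address(target) in net, (host, nic, target)
    # No host may put two NICs into one subnet
    assert net not in seen.setdefault(host, set()), (host, str(net))
    seen[host].add(net)

# One iSCSI session per NIC/target pair:
for host, nic, target in plan:
    print(f"{host}: {nic} -> {target}:3260")
```

Both assertions pass for this layout, so each host gets two independent paths (one per controller) in two distinct subnets.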
Note that the example I provided works, but when doing virtualization with an iSCSI SAN I typically recommend starting with at least 6 network ports, and preferably 8:
- 2 for LAN management (team)
- 2 for VM-to-LAN (team) -> when you only have 6 total network ports, combine this with the 2 for LAN management
- 2 for the heartbeat/cluster shared volume/live migration (team)
- 2 for iSCSI (not teamed)
DELL-Sam L
February 27th, 2014 06:00
Hello hackney92,
Not sure if you were following our deployment guide for setting up your MD3220i or not but here is our deployment guide if not. ftp://ftp.dell.com/Manuals/Common/powervault-md3200i_Deployment%20Guide_en-us.pdf Now if you look on page 19 it shows how we would recommend setting up a 4 node cluster direct attached to the MD3220i. I know you stated that your cluster is only going to be 3 nodes but you can still use it as a guide. Also I would not use port 3 on both controllers instead of not using port 0 on both controllers. The rest of your setup for each HV host looks pretty good and should work without many issues. If you look at page 35 of the deployment guide you can see how we suggest how to setup each host. In the deployment guide it lists out 4 ISCSi ports per Host but you don’t have to use all 4 ISCSi connections.
Please let us know if you have any other questions.
hackney92
February 27th, 2014 19:00
Thank you kindly for your help; we should have some 1Gbit switches lying around. I will implement this on Monday.
Cheers,
Scott.