DELL-Sam L
Moderator
•
7.8K Posts
0
August 27th, 2014 11:00
Hello MrRedPants,
Since you are using a non-redundant setup, you will have issues trying to change paths between controllers, since each host only has a single connection to the MD3000. What you could do, if available, is get 2 more SAS cables and cable the second ports on the MD to the opposite RAID controller from the one each host is connected to currently.
Now if that is not an option, what you can do is create 2 disk groups and assign the virtual disks/LUNs used by each host to its own disk group. If you are going to give each host its own disk group, you will need to stop I/O while you are moving the virtual disks to the new disk group so that you won't have any data loss.
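As a rough illustration of that split, here is a toy Python sketch that groups LUNs by the controller slot their host is cabled to. All host, LUN, and group names here are made up; the actual disk groups and virtual disk moves are done in MDSM, with I/O stopped first:

```python
# Toy sketch: plan which virtual disks go into which disk group so that
# each host's LUNs live in a group owned by the controller it is cabled to.
# All names below are invented for illustration only.

# Which controller slot each host is cabled to (non-redundant setup).
host_to_controller = {"esx1": 0, "esx2": 1}

# Which host each virtual disk / LUN is used by.
lun_to_host = {"VD_vmfs1": "esx1", "VD_vmfs2": "esx1", "VD_vmfs3": "esx2"}

def plan_disk_groups(lun_to_host, host_to_controller):
    """Group LUNs into one disk group per controller slot."""
    groups = {}
    for lun, host in sorted(lun_to_host.items()):
        slot = host_to_controller[host]
        groups.setdefault(f"DISK_GRP_slot{slot}", []).append(lun)
    return groups

print(plan_disk_groups(lun_to_host, host_to_controller))
# -> {'DISK_GRP_slot0': ['VD_vmfs1', 'VD_vmfs2'], 'DISK_GRP_slot1': ['VD_vmfs3']}
```

This only plans the layout; the point is simply that each disk group ends up owned by the controller its host is actually cabled to.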
Please let us know if you have any other questions.
DELL-Sam L
Moderator
7.8K Posts
August 26th, 2014 10:00
Hello MrRedPants,
In most cases that error means the host doesn't have access to both controllers, which is required to change the owner of a virtual disk from one controller to the other. What I would do is check and confirm that both servers have access to both controllers; if they don't, that is why you are getting the error. Here is a link to the deployment guide for the MD3000; pages 13-15 show how you would want your system cabled for both non-redundant and redundant configurations. ftp://ftp.dell.com/Manuals/all-products/esuprt_ser_stor_net/esuprt_powervault/powervault-md3000_User%27s%20Guide9_en-us.pdf
If your cabling is redundant and it is still not allowing you to redistribute the virtual disks between the 2 controllers, then we will need to look into your MD3000 and see why it is not allowing you to redistribute the disks.
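To make that check concrete: the goal is that, on each host, every LUN shows a path to both controller slots. Here is a toy Python sketch of that logic, with made-up LUN names and path data (on a real host the path information would come from the host's multipath listing, not from code like this):

```python
# Toy sketch of the redundancy check: for each LUN, which controller slots
# does this host have a path to? The data below is invented for illustration.
paths_seen = {
    "VD_vmfs1": {0, 1},   # paths to both controllers -> ownership can move
    "VD_vmfs2": {0},      # only one controller visible -> transfer will fail
}

def luns_missing_redundancy(paths_seen, required_slots=frozenset({0, 1})):
    """Return the LUNs that do not have a path to every controller slot."""
    return sorted(lun for lun, slots in paths_seen.items()
                  if not required_slots <= slots)

print(luns_missing_redundancy(paths_seen))  # -> ['VD_vmfs2']
```

Any LUN that comes back from a check like this is one the array cannot fail over or redistribute for that host, which matches the error you are seeing.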
Please let us know if you have any other questions.
MrRedPants
5 Posts
August 26th, 2014 14:00
Sam
Thank you for responding! I actually read that document last night.
The setup is non-redundant.
ESX Host 1 direct connects via a SAS 5/E card to RAID Slot 0 on the MD3000
ESX Host 2 direct connects via its own SAS 5/E card to RAID Slot 1 on the MD3000
When looking at vCenter, both hosts can see all of the LUNs. When we initially set it up, we added all the LUNs as datastores on vCenter and it automatically populated to the other host.
That said, on the MD3000, we have only one disk group with all the LUNs within that group. When I look at the disk group tab in the storage array profile in MDSM, I see DISK_GRP_1 which is owned by one of the RAID controller modules (in this case, by slot 1).
Today, I moved all the virtual machines to a single host that is ironically physically attached to slot 0 on the back of the MD3000 (the RAID controller interface on the left when facing the back of the machine) and things are running much more smoothly now.
So, it's all confusing.
- Can the raid controllers be assigned to a specific LUN or is it only to a disk group?
- Should we just create separate groups attached to each RAID controller and go to their respective ESX host?
I hope I explained that clearly and again, I very much appreciate any information you provide.
MrRedPants
5 Posts
August 27th, 2014 12:00
Option one was my plan; however, the MD3000 only has a single input on each RAID controller.
My tentative solution for the time being is to have only one host attached, with both of its SAS 5/E connectors going to the two RAID controller inputs (Figure 2-5, Cabling a Single Host (Single-HBA) Using Redundant Data Paths).
My future solution is to see if I can acquire replacement RAID controllers with two inputs and create a redundant array that way, similar to figure 2-10 in the manual: Cabling Two Hosts (with Single HBAs) Using Redundant Data Paths. I believe this is what you are suggesting in your first paragraph.
I also have additional SAS 5/E cards and could put dual HBAs in the hosts; I'm not sure if that gains a lot or not. Similar to Figure 2-9, Cabling Two Hosts (with Dual HBAs) Using Redundant Data Paths. It seems adding a second HBA is just another safety measure in case the first card fails.
Your final opinion on that would be great! I'm not sure I understand 2-10 and 2-11 regarding two-node clusters.
DELL-Sam L
Moderator
7.8K Posts
August 27th, 2014 14:00
Hello MrRedPants,
At a minimum it is best to have some redundancy, so that if a controller were to fail you would still have access to all your data via the second controller. With that said, if you are going to keep your setup with 2 hosts, you would want to use cabling option 2-10. Figures 2-10 & 2-11 are the same, with the only difference being that 2-11 has a cluster service in the middle.
Please let us know if you have any other questions.