DELL-Sam L • Moderator • 7.8K Posts • June 4th, 2018 10:00
Hello kubaq87,
Which JBODs do you have? You didn't state them in your post. I am guessing that you have an MD1000, MD12xx, or MD14xx enclosure. If that is the case & your virtual disk spans both enclosures, then you can't remove one enclosure without harming your virtual disk. All your virtual disk information is stored on your PERC card & on the drives that make up the virtual disk. If you move the enclosure, you will need the same PERC card type it is connected to now in order to see the virtual disk on the new host.
Please let us know if you have any other questions.
kubaq87 • 1 Rookie • 12 Posts • June 5th, 2018 00:00
Hello Sam,
Thanks for your reply. Maybe I did not state this correctly, but every disk is in its own DG. So in other words, no disk group spans both enclosures. This is what the summary looks like:
# ./SMcli 172.16.75.103 -c "show storageArray summary;"
Performing syntax check...

STORAGE SUMMARY
Disk pools: 0
Virtual Disks on Disk Pools: 0
Disk groups: 120
RAID 0 Disk Groups: 120 Virtual Disks: 120
Access virtual disks: 1
Standard Virtual Disks (Used/Allowed): 120 / 512
Base: 120
Repository: 0
Thin Virtual Disks (Used/Allowed): 0 / 512

HOST MAPPINGS SUMMARY
Access virtual disk: LUN 31,31 (see Mappings section for details)
Default host OS: Windows (Host OS index 0)
Mapped virtual disks: 121
Unmapped virtual disks: 0

HARDWARE SUMMARY
Enclosures: 2
System configured to use batteries: Yes
RAID Controller Modules: 2
Consistency mode: Duplex (dual RAID controller modules)
Physical Disks: 120
Current physical disk media types: Physical Disk (120)
Current physical disk interface type(s): Serial Attached SCSI (SAS) (120)

FEATURES SUMMARY
Feature enable identifier: 37303030323930303033363554B8374F
Feature pack: SAS - MD3460
Feature pack submodel ID: 215
Additional feature information
Snapshot groups allowed per source virtual disk (see note below): 4
Virtual Disks allowed per storage partition: 256

FIRMWARE INVENTORY
MD Storage Manager®
AMW Version: 11.20.0G06.0020
Report Date: Tue Jun 05 08:54:14 CEST 2018

Storage Array
Storage Array Name: med-file-md12
Current Package Version: 08.20.05.60
Current NVSRAM Version: N2701-820890-004
Staged Package Version: None
Staged NVSRAM Version: None

RAID Controller Modules
Location: Enclosure 0, Slot 0
Current Package Version: 08.20.05.60
Current NVSRAM Version: N2701-820890-004
Board ID: 2701
Sub-Model ID: 215
Location: Enclosure 0, Slot 1
Current Package Version: 08.20.05.60
Current NVSRAM Version: N2701-820890-004
Board ID: 2701
Sub-Model ID: 215

Physical Disk
Enclosure, Drawer, Slot: Manufacturer: Product ID: Physical Disk Type: Capacity: Physical Disk firmware version: FPGA Version: (SSD only)
Enclosure 0, Drawer 0, Slot 0 SEAGATE ST4000NM0023 Serial Attached SCSI (SAS) 3,720.523 GB GS0F Not Available
Enclosure 1, Drawer 3, Slot 11 SEAGATE ST4000NM0023 Serial Attached SCSI (SAS) 3,720.523 GB GS0D Not Available
Enclosure 1, Drawer 4, Slot 0 SEAGATE ST4000NM0023 Serial Attached SCSI (SAS) 3,720.523 GB GS0D Not Available
Enclosure 1, Drawer 4, Slot 1 SEAGATE ST4000NM0023 Serial Attached SCSI (SAS) 3,720.523 GB

Script execution complete.
SMcli completed successfully.

And what I am trying to do is to disconnect the bottom enclosure (Enclosure 1) and attach it to a different server. As you can see, both enclosures belong to the same array, but there are no disk groups created across enclosures. In other words, each DG has only one physical member disk, so all drives are separate (we do RAID at the OS level).
120 physical disks and 120 separate DGs (I removed most of them from the CLI output because it was too big).
So the question is: in this case, can I disconnect Enclosure 1 without destroying the array?
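For reference, one way to double-check that no DG spans both enclosures would be to dump the full configuration profile. This is only a hedged sketch: I'm assuming the profile output lists the member disk location for every disk group, and the grep pattern is just illustrative.

./SMcli 172.16.75.103 -c "show storageArray profile;" > profile.txt
# each RAID 0 disk group section should list its single member disk as Enclosure 0 or Enclosure 1
grep -E "Disk [Gg]roup|Enclosure [01], Drawer" profile.txt | less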
DELL-Sam L • Moderator • 7.8K Posts • June 5th, 2018 06:00
Hello kubaq87,
From the output that you posted, you are running an MD3460 & an MD3060e. Based on that, no, you can't separate your MD3060e and keep the configuration on those drives. The reason is that your MD3460 controls the current configuration, & your MD3060e is just an expansion enclosure presented to it. If you remove your MD3060e without first destroying the virtual disks & disk groups, you will put both controllers into a lockdown state. In addition, when you connect your MD3060e to another host, your configuration will not be seen, because the DG & VD configuration lives on your MD3460.
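A quick way to tell whether the controllers have gone into lockdown after a change like this is to ask the array directly. A rough sketch, assuming the healthStatus command is available on this firmware as on other MD-series releases:

./SMcli 172.16.75.103 -c "show storageArray healthStatus;"
# "Optimal" means the controllers are up and not locked down; anything else means stop and review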
Please let us know if you have any other questions.
kubaq87 • 1 Rookie • 12 Posts • June 5th, 2018 23:00
Hello Sam,
Many thanks for your advice. So just one last question to summarize this. Will it work if I do it this way:
1. Destroy all Virtual Disks / Disk Groups on the MD3060e (rough SMcli sketch after this list)
2. Remove the MD3060e from the array
3. Attach the MD3060e to another server
4. Create a new array on the MD3060e
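For step 1, a rough sketch of what I assume the per-disk deletes would look like in SMcli. The names are placeholders, the delete virtualDisk / delete diskGroup syntax is taken as an assumption from the MD-series CLI guide, and the commands are destructive, so only for the 60 DGs whose single member disk sits in Enclosure 1:

# placeholder names; repeat for every virtual disk / disk group backed by an Enclosure 1 disk
./SMcli 172.16.75.103 -c 'delete virtualDisk ["vd_e1_d0_s0"];'
./SMcli 172.16.75.103 -c 'delete diskGroup ["dg_e1_d0_s0"];'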
DELL-Sam L • Moderator • 7.8K Posts • June 6th, 2018 12:00
Hello kubaq87,
What OS is the host running that your VDs are presented to? Also, what software are you using for your software RAID? Can we get a support bundle from your MD3460 so that we can review it? I can send you an email that you can reply to with the support bundle log. It is also good to have a support bundle handy; if the system ever has any issues, we can see what the config was like and fix any database issues.
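If the MD Storage Manager GUI is awkward for this, the bundle can usually also be pulled from the CLI. A hedged sketch, assuming the supportData command on your firmware, with the filename as a placeholder:

./SMcli 172.16.75.103 -c 'save storageArray supportData file="med-file-md12_supportBundle.zip";'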
The steps you listed are correct. I would also get a valid & tested backup of the data on your MDs prior to doing any of the steps, just in case something goes wrong. What would be best is to power down your MD3460 & MD3060e, disconnect the cables from your MD3060e, then power on your MD3460 & make sure that it doesn't go into a lockdown state and that you can still access the data on your MD3460. If you can still access your data on your MD3460, then you can move your MD3060e to your new host, clear the configuration, & create the new virtual disk configuration.
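For the "clear the configuration" part on the new setup, a hedged sketch would be something like the line below. It assumes the clear storageArray configuration command behaves as described in the MD-series CLI guide, it wipes every DG & VD the new controller sees, and the management IP is a placeholder:

# DESTRUCTIVE: removes all disk groups & virtual disks on the array it is run against
./SMcli <new-array-management-ip> -c "clear storageArray configuration all;"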
In most cases, when you want to remove an expansion enclosure, it is best to destroy all DGs & VDs and then recreate them, as the controller can go into a lockdown state due to the change.
Please let us know if you have any other questions.
kubaq87 • 1 Rookie • 12 Posts • June 10th, 2018 23:00
Sorry, I thought we had switched to email and forgot about this topic. I sent the support bundle via email. Regarding your questions: we are running CentOS 7.4 with a ZFS pool as the "RAID" solution.
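On the ZFS side, before and after detaching the Enclosure 1 LUNs it is worth checking what the pool reports; whether the pool even survives losing those LUNs depends entirely on how the 120 devices are arranged into vdevs. The pool name below is a placeholder:

zpool status -v tank    # shows which member devices are ONLINE / UNAVAIL / DEGRADED
zpool list tank         # capacity and overall health at a glance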