EqualLogic PS610 nospares CLI not working
Hi,
After I ran through the initial setup via the CLI, I typed this command to remove the 2x HDD spares from the RAID10 I created:
member select membername raid-policy raid10-nospares
This worked well the first time, but I have since added 4x more HDDs: 2x were added to the RAID10 pool and 2x were set as spares.
I ran the same command again, but nothing happens.
Can anyone please advise how I can remove the spares and add them to the RAID10 pool?
Is it that I'm not allowed because I now have a volume on the array?
Any advice is very welcome.
Thank you
dwilliam62
June 2nd, 2020 09:00
Hello,
There are only two supported drive configurations with EQL arrays: half populated and fully populated. You need to add more drives, and then your RAID10-nospares policy might work again. You have likely crossed the maximum number of drives for the first RAID set, so you now need enough drives to complete the second RAID set. No promise that it will work, since you are not using the array as intended.
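If it helps, a rough sketch of what that would look like from the Group Manager CLI once enough drives are present (membername is a placeholder; verify the exact syntax against the CLI Reference for your firmware):

```
> member select membername raid-policy raid10-nospares
> member select membername show
```

The show output should report the member's RAID policy, so you can confirm whether any drives are still reserved as spares.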
Regards,
Don
fred974
June 2nd, 2020 10:00
Thank you, I didn't know that.
We have 14 drives in at the moment. Is that really bad?
dwilliam62
June 2nd, 2020 11:00
Hello,
It's not tested or qualified that way, so good/bad is not predictable, especially at any given firmware revision.
I would strongly suggest you purchase enough drives to completely fill the chassis to get it back to a normal operating condition.
Regards,
Don
fred974
June 2nd, 2020 11:00
I have 14 SSDs at the moment; can I fill the rest with 15K HDDs?
dwilliam62
June 2nd, 2020 12:00
Hello,
No! Mixing SSDs and spinning drives is only supported for RAID-6 accelerated, and that requires a majority of SPINNING drives, not SSDs. There's no way to convert to RAID-6 accelerated either. Mixing drives of different RPM in the same member is also not supported, so no 10K with 15K or 7.2K RPM drives.
The balance of the drives must all be SSDs of the same size or larger.
Regards,
Don
fred974
June 4th, 2020 03:00
@dwilliam62
Thank you for helping me understand where I went wrong.
dwilliam62
June 4th, 2020 05:00
Hello,
You are very welcome! I am glad that I could help out.
Take care.
Don
fred974
December 18th, 2020 04:00
Hi again,
I finally managed to get extra 10K HDDs. Based on what you already explained, in order to get this to work I need 7x SSDs and 17x 10K HDDs.
Do I have to back up the whole array and rebuild it again to convert to RAID6 accelerated, or is there an alternative way?
Thank you
dwilliam62
December 18th, 2020 07:00
Hello,
As I mentioned earlier, you can't convert to RAID6 accelerated. You will have to back up the array and reset it.
Set up the 7x SSDs and then the rest of the spinning drives, build the array from scratch again, and then restore your data.
The only other option is if you had another array with compatible firmware and more space: you could add it to the group and remove the SSD member, which moves all the data to the new member and resets the SSD one. Then make the drive changes and add it back into the group.
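A rough sketch of that second option from the CLI (the member name is a placeholder, and the exact syntax should be checked against the Group Manager CLI Reference for your firmware; removing a member vacates its data to the rest of the group first):

```
> member delete ssd-member
```

Once the removal completes and the data has moved to the other member, the emptied array can have its drives changed and be run through initial setup and added back into the group.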
Regards,
Don
fred974
December 22nd, 2020 12:00
Hi @dwilliam62
Thank you for taking the time to reply.
The EqualLogic is currently used as a shared repository for my XCP-ng cluster, and I can easily move all the VM data to my servers' local storage for now while I sort out the disk situation. I also have backups.
Once I have moved all my data off the array, what are the steps to reconfigure it?
Can I put in 7x SSDs and 17x spinning disks and set my RAID as RAID10 like it is now? Or do I have to set up the RAID with only the 7x SSDs and then add the rest of the spinning disks later?
Can I still use RAID10, or is RAID6 accelerated the only supported option?
Thank you in advance
fred974
December 22nd, 2020 14:00
@dwilliam62
Sorry for this moment of brain freeze.
I just read your previous reply again, and you did make it clear that I need to start all over from scratch.
For now I will convert from RAID10 to RAID6 and set aside time to redo it all after the festive period.
Thank you and merry Christmas
dwilliam62
December 23rd, 2020 04:00
Hello,
You are very welcome; I am glad that I was able to help. Once you have moved all the data off, using the serial port is best: log in as grpadmin and issue the reset command. It will prompt you to confirm by typing "DeleteAllMyDataNow" (case sensitive).
If you are replacing ALL the drives, there is another option. Confirm your backups, shut down the array, remove all the drives, and properly store them in anti-static bags. Then insert the replacement drives and start from scratch. This would allow you to quickly revert to the old setup if something went wrong.
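A sketch of what that serial-console session looks like (the prompts shown are illustrative, not an exact transcript; the confirmation string is exactly what the firmware asks for, case sensitive):

```
login: grpadmin
Password: ********
> reset
Reset this array to factory defaults? All data will be lost.
To confirm, type: DeleteAllMyDataNow
> DeleteAllMyDataNow
```

The array then wipes its configuration and data and comes back up ready for initial setup.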
Another wild idea would be to buy a used chassis; it can be a different model. Then add the new drives to that and add it to the group. Having another pair of controllers and disks would provide much better performance than just converting to a hybrid array. A 24-drive 10K or 15K setup would provide a ton of IOs, especially if you are doing a large majority of writes. A hybrid will ONLY use RAID6 accelerated.
One footnote: if any of your new drives are 4K sector size, then the EQL firmware MUST be 9.1.9 or greater; otherwise the 4K drives won't show as supported.
One last reason to get a support contract for that array: a newer feature of the PSS contracts is access to the EQL firmware, so you can run the most current version. You won't be able to change the model if you did get the contract, though.
A very happy holidays to you as well!!
Regards,
Don