July 15th, 2015 09:00

EqualLogic volume offline-nospace-auto-grow, plenty of space in the pool

I have a volume that went to status "offline-nospace-auto-grow" during a period of neglect in which multiple drives in one member failed, taking that member offline.


The member was brought back online and re-introduced to the group. There is now plenty of free space in the pool (6TB). Why is the volume still "offline-nospace-auto-grow"?


I don't see a way to force the volume to go online. Increasing its size did not help. I cannot change any other settings on the volume due to the offline-nospace-auto-grow status.


The group is two PS5000's and a PS6000. The PS6000 is the one that had multiple drive failures and went offline some time during the past 18 months.
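
For reference, here is roughly what I have tried from the group CLI so far. The volume name and size below are placeholders, and I am going from memory on the exact syntax, so check the CLI reference for your firmware:

pool show                             # the pool reports roughly 6TB free
volume show                           # the volume still lists as offline-nospace-auto-grow
volume select ProblemVol size 500GB   # grew the volume; the status did not change
volume select ProblemVol online       # tried to force it back online; it refused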

July 15th, 2015 10:00

Hello,

Was the PS6000 multi-disk failure resolved, or was the array reset and brought back into the group with the same name/IP address?

What firmware revisions are on the arrays?  

Don

July 15th, 2015 11:00

Firmware is 5.1.2.

The member was repaired and brought back online with the same name and IP.

July 15th, 2015 11:00

Yes, the disk failure was resolved.

There were SIX failed drives on the member. Luckily, there were no volumes on the member. We replaced the drives and brought it back with the same name/IP as before.

Control module firmware revision is 5.1.2 on all three members.

July 15th, 2015 14:00

So it sounds like you RESET that member, replaced the drives, then re-ran setup to add it back into the group. Is that correct? If so, that is likely the crux of the problem. When you create a new member, it gets a unique ID, regardless of the name/IP combo. It sounds like the group may still be looking for the original member.
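
A quick sanity check from the group CLI would help confirm that. Something like the following, where the member and volume names are placeholders and the output details vary a bit by firmware:

member show                        # does the group list all three members, and are they online?
member select PS6000-member show   # detailed status of the re-added member
volume select ProblemVol show      # detailed status of the stuck volume, including its pool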

Don

July 15th, 2015 14:00

All the support guy did was slap the new disks in and leave. The PS6000 member came back online and recovered itself. We did not do anything.

There was no data to recover on the PS6000. All data is on the PS5000's.

I can create new volumes that span onto the PS6000, no problem. It appears to be working properly except for this one volume.

We do not have any sort of support through Dell on this array. I tried all the service tags with Dell and they were all rejected.

July 15th, 2015 14:00

Hello,

Sorry to ask again, but was the data on the failed drives recovered (the drives cloned) and put back into the PS6000?

I just have to make sure the array databases were retained, not recreated by re-running setup.

The PS5000s are EOL, so mixing them with arrays under contract isn't supported.

If the PS6000 is still under contract, please open a support case.  They'll need to review the diags to see why it's not working correctly.

Don

July 16th, 2015 07:00

One thing to try is converting that thin-provisioned volume to thick.
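
I do not have the exact CLI syntax for that conversion handy; in the GUI it should be under the volume's Modify Settings, on the Space tab. From the CLI, something along these lines may work, but treat the thin-provision option as an assumption and verify it against the CLI reference for your firmware first (the volume name is a placeholder):

volume select ProblemVol show                     # check the current provisioning and reported status
volume select ProblemVol thin-provision disable   # assumed switch back to full (thick) provisioning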

Is there still a contract on the 6000? If so, you can open a case against that asset tag. They will tell you that the PS5000s will have to be removed from the group to maintain support. What you might want to do is remove the PS6000, create a new group for it, and keep the PS5000s in their own group.

Don

July 16th, 2015 07:00

None of these are under contract.

Separating the PS5000's from the PS6000 is not really an option.
