
C2F power module has insufficient hold time PS 6100

June 8th, 2017 11:00

Dear All,

I am facing the error shown below on my storage array.

Critical hardware component failure, as shown next. C2F power module has insufficient hold time.

 


Any advice on how to resolve this issue would be appreciated.

Thanks and Best Regards,

Erik Castro

5 Practitioner • 274.2K Posts

June 12th, 2017 11:00

Hello, 

 So it appears that both the controllers will need to be replaced. 

 Don 

June 8th, 2017 13:00

Adding the relevant log output:

SP:1496946696.29:emm.c:2363:ERROR:28.4.47:Critical health conditions exist.
Correct immediately before they affect array operation.
Critical hardware component failure.
There are 1 outstanding health conditions. Correct these conditions before they affect array operation.
SP:1496946696.29:emm.c:355:ERROR:28.4.85:Critical hardware component failure, as shown next.
C2F power module has insufficient hold time.
SP:1496946696.29:cache_driver.cc:1056:WARNING:28.3.17:Active control module cache is now in write-through mode. Array performance is degraded.
Thu Jun 8 14:31:44 EDT 2017
Jun 8 14:31:44 init: kernel security level changed from 0 to 1

5 Practitioner • 274.2K Posts

June 9th, 2017 06:00

Hello Erik, 

 That error means you have a failed controller and that needs to be replaced. 

 Regards,

Don 


3 Posts

October 30th, 2018 06:00

Hi,

 

I have the same issue: both controllers have failed with a faulty battery. Is there a risk? What is the procedure to replace them if both have failed?

Thanks

1 Rookie • 1.5K Posts

October 30th, 2018 06:00

Hello, 

 The risk is that in the event of a power failure you might lose data in cache. If you don't have a support contract, you will need to find a third-party source to acquire them, or more likely new controllers.

Regards,

Don

3 Posts

October 30th, 2018 06:00

Hi Don,

 

I have already purchased two new refurbished controllers. I am worried about the procedure to replace them.

 

One controller is the active (primary) and the other is the secondary. Both have an issue with the battery.

 

Should I replace the secondary with a new one, make sure all is OK, then make the new one primary and remove the other for replacement?

 

I need to know a procedure that avoids losing data. Thanks.

 

1 Rookie • 1.5K Posts

October 30th, 2018 07:00

Hello, 

 OK. That's pretty easy. You are correct. You should properly ground yourself against static when doing this.

Since you are going to fail over, especially if you have not set the disk timeouts to 60 seconds, you may want to do this during a quiet I/O time or a maintenance window.

Remove the old passive controller. Very important: take the compact flash card from it and install it in the replacement.

Install the new controller. After a minute or so it should be booted and synced with the primary. You can see that in the GUI or CLI. If so, you can then do a restart of the array, which will fail over to that new secondary.

 Then repeat the process: remove what was the primary, swap the compact flash card from the old CM to the new one, and install it. Wait about a minute and make sure they are both synced up and green.
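
For reference, the checks look roughly like this from the Group Manager CLI (SSH to the group IP as grpadmin). The member name "eql01" below is just a placeholder, and the exact command syntax can vary between PS Series firmware releases, so treat this as a sketch and confirm against the CLI reference for your firmware version:

member select eql01 show controllers
   (check that both control modules are listed and that the secondary reports as synchronized)
member select eql01 show
   (overall member health, including the battery/C2F status)
restart
   (restarts the member, which fails I/O over to the other control module)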

 Regards,

Don 

 

3 Posts

October 30th, 2018 07:00

Thanks Don,

 

So there is no need to completely shut down the VMs and the controller to replace them? I can do it while it is still running?

 

What do you mean by the part below? Do this procedure when no one is working or in the office?

"Since you are going to fail over, especially if you have not set the disk timeouts to 60 seconds, you may want to do this during a quiet I/O time or a maintenance window."

1 Rookie • 1.5K Posts

October 30th, 2018 10:00

Hello. 

 Failing over the CM takes a certain amount of time to complete before the new active CM is handling I/O requests again. Some OSes have very short timeouts; the VMware login timeout default is 5 seconds, for example.

 Most OS disk timeouts are 15-30 seconds. So if the CM doesn't fail over, pick up the new connections, and respond to I/Os before that, the OS will generate disk errors. When that happens to a hypervisor, VMs can crash.

 If you have extended the various timeouts per Dell best practice (both on the hypervisor AND in each VM), then they will be more patient and wait for the CM failover to complete.

  When a failover is done in a lower I/O period, it will take less time anyway. Failover time is influenced by the CM hardware and especially the controller firmware as well; newer versions fail over faster than very old EQL CM firmware did.

 With the firmware downloads on the EQL support site there is a guide on setting these timeouts. For VMware there is another Tech Report, Dell TR1091, that covers it as well:

http://downloads.dell.com/solutions/storage-solution-resources/BestPracticesWithPSseries-VMware%28TR1091%29.pdf
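
If it helps, the usual 60-second settings look roughly like this; the exact registry key, sysfs path, and ESXi login-timeout command are covered in TR1091 and the host integration guides, so double-check there before applying:

Windows guest:  reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f
Linux guest:    echo 60 > /sys/block/sdX/device/timeout   (use a udev rule to make it persistent across reboots)
ESXi iSCSI:     esxcli iscsi adapter param set --adapter=vmhba## --key=LoginTimeout --value=60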

 Hope that helps explain it better? 

 Regards,

Don 

 

  

1 Rookie • 1.5K Posts

April 28th, 2023 08:00

Hello, 

  I would not be sure the issue is actually resolved. Older versions of the firmware weren't as good at monitoring the passive controller hardware. If you can, during a maintenance window I would fail back to that controller to be sure. That "battery" is actually a set of capacitors, which can leak and fail.

 Regards, 

Don 

#iworkfordell 

April 28th, 2023 08:00

I just encountered the same problem, "C2F power module has insufficient hold time", indicating Battery: Failed on the active controller. Then I accidentally discovered a possible solution: press Restart, i.e., switch the controllers (active > passive). After the switchover completed, the failed battery returned to GOOD!!! No idea why this fixed the problem, but there is no harm in trying it if you encounter the same battery-failed problem in the future.
