PowerProtect Data Protection, IDPA: Rapid Upgrade ChecKer Utility Shows Failure

Summary: This article provides remediation steps for a "firmware_readiness" check failure reported by the IDPA Rapid Upgrade ChecKer (RUCK) utility.

This article is not tied to any specific product. Not all product versions are identified in this article.

Instructions

Note: Always be sure that you are running the latest version of the RUCK tool.


The following error may be seen in Rapid Upgrade ChecKer (RUCK) for the firmware_readiness check:

Upgrade readiness status - Failed checks:
+-----------+--------------------+--------+-----------------------------------------------+------------------------------------+
| Component | Check              | Status | Message                                       | Remedy                             |
+-----------+--------------------+--------+-----------------------------------------------+------------------------------------+
| ESXi      | firmware_readiness | FAILED | Example                                       | Example                            |
+-----------+--------------------+--------+-----------------------------------------------+------------------------------------+


Details:
The firmware_readiness check verifies whether PTAgent, iSM (iDRAC Service Module), and iDRAC are ready for a firmware update. It also verifies whether the current firmware is valid and whether a two-hop upgrade is required.
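
Before rerunning RUCK, the same readiness can be spot-checked manually on an ESXi host. This is only a sketch using the service scripts referenced in the Remediation Steps later in this article; it assumes shell access to the host.

  # Check that PTAgent is running on the host.
  /etc/init.d/DellPTAgent status

  # Check that iSM (iDRAC Service Module) is running on the host.
  /etc/init.d/dcism-netmon-watchdog status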

Failure Scenarios:
 

Each case below lists: the case number, the REST API involved, the failure scenario, the error code, the error message and remedy returned by the DPATools API, the error message and remedy displayed on the ACM UI (per IDPA nomenclature), and the applicable entry in the Remediation Steps table further down in this article. The remediation returned by the DPATools API repeats the error message followed by the suggested action, so the two are listed together.

Case 1
REST API: Any API
Failure scenario: A REST API request failed due to an unknown internal server error.
Error code: 9000
DPATools API message and remedy: Internal server error: dpatools-service encountered an unexpected condition that prevented it from fulfilling the request. Check the logs for details.
ACM UI message and remedy: Internal server error: Infrastructure Management Service encountered an unexpected condition that prevented it from fulfilling the request. Contact Support.
Remediation Steps: #7

Case 2
REST API: POST /versions
Failure scenario: Failed to query the FW version because PTAgent is not running (or PTAgent and iSM/iDRAC are not available).
Error code: 9001
DPATools API message and remedy: Failed to connect to PTAgent. Check PTAgent status or IP connection.
ACM UI message and remedy: Failed to connect to Node Event service. Check Node Event service status or IP connection.
Remediation Steps: #1

Case 3
REST API: POST /versions
Failure scenario: Failed to query the FW version because iSM is not running. Note: When iSM is stopped, it may take up to 3 minutes for PTAgent to process some REST requests, such as POST /host/swinventory and POST /host/lc, so the POST /firmware/versions API is expected to return a timeout error in this case.
Error code: 9002
DPATools API message and remedy: Failed to query SW inventory from PTAgent (HttpStatus.SERVICE_UNAVAILABLE). Check iSM/iDRAC status.
ACM UI message and remedy: Failed to query software inventory from the Node Event service (HttpStatus.SERVICE_UNAVAILABLE). Check iDRAC Service Module/iDRAC status.
Remediation Steps: #2, #4, or #5

Case 4
REST API: POST /versions
Failure scenario: Failed to query the FW version because iDRAC is not ready.
Error code: 9003
DPATools API message and remedy: Failed to query SW inventory from PTAgent (HttpStatus.BAD_GATEWAY). Check iDRAC/iSM status.
ACM UI message and remedy: Failed to query software inventory from the Node Event service (HttpStatus.BAD_GATEWAY). Check iDRAC/iDRAC Service Module status.

Case 5
REST API: POST /precheck
Failure scenario: Failed to query the FW version because PTAgent is not running (or PTAgent and iSM/iDRAC are not available).
Error code: 9004
DPATools API message and remedy: Failed to connect to PTAgent. Check PTAgent status or IP connection.
ACM UI message and remedy: Failed to connect to Node Event service. Check Node Event service status or IP connection.
Remediation Steps: #1

Case 6
REST API: POST /precheck
Failure scenario: FW precheck failed because iSM is not running. Note: When iSM is stopped, it may take up to 3 minutes for PTAgent to process some REST requests, such as POST /host/swinventory and POST /host/lc, so the POST /firmware/versions API is expected to return a timeout error in this case.
Error code: 9005
DPATools API message and remedy: Failed to query SW inventory from PTAgent (HttpStatus.SERVICE_UNAVAILABLE). Check iSM/iDRAC status.
ACM UI message and remedy: Failed to query software inventory from the Node Event service (HttpStatus.SERVICE_UNAVAILABLE). Check iDRAC Service Module/iDRAC status.
Remediation Steps: #2, #4, or #5

Case 7
REST API: POST /precheck
Failure scenario: FW precheck failed because iDRAC is not ready.
Error code: 9006
DPATools API message and remedy: Failed to query SW inventory from PTAgent (HttpStatus.BAD_GATEWAY). Check iDRAC/iSM status.
ACM UI message and remedy: Failed to query software inventory from the Node Event service (HttpStatus.BAD_GATEWAY). Check iDRAC/iDRAC Service Module status.

Case 8
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because PTAgent is not running (or PTAgent and iSM/iDRAC are not available).
Error code: N/A
DPATools API message and remedy: Failed to connect to PTAgent. Check PTAgent status or IP connection.
ACM UI message and remedy: Failed to connect to Node Event service. Check Node Event service status or IP connection.
Remediation Steps: #1 or #9

Case 9
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because iSM is not running (or iSM and iDRAC are not available).
Error code: N/A
DPATools API message and remedy: Failed to query host summary from PTAgent (HttpStatus.SERVICE_UNAVAILABLE). Check iSM/iDRAC status.
ACM UI message and remedy: Failed to query host summary from Node Event Service (HttpStatus.SERVICE_UNAVAILABLE). Check iSM/iDRAC status.
Remediation Steps: #2, #4, or #5

Case 10
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because iDRAC is not ready.
Error code: N/A
DPATools API message and remedy: Failed to query host summary from PTAgent (HttpStatus.BAD_GATEWAY). Check iDRAC/iSM status.
ACM UI message and remedy: Failed to query host summary from Node Event Service (HttpStatus.BAD_GATEWAY). Check iDRAC/iSM status.
Remediation Steps: #1 or #9

Case 11
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because iDRAC is in recovery mode.
Error code: N/A
DPATools API message and remedy: Lifecycle Controller is in recovery mode. Clear the recovery mode before FW update.
ACM UI message and remedy: iDRAC is in recovery mode. Clear the recovery mode before firmware upgrade.

Case 12
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because there are pending jobs in the iDRAC job queue.
Error code: N/A
DPATools API message and remedy: There are some pending jobs in the iDRAC job queue. Clear the iDRAC job queue before FW update.
ACM UI message and remedy: There are some pending jobs in the iDRAC job queue. Clear the iDRAC job queue before firmware upgrade.
Remediation Steps: #3

Case 13
REST API: POST /update; GET /activities/{id} (> IDPA 2.4 only)
Failure scenario: The FW update job is stuck at 0% download, which causes subsequent tasks to fail.
Error code: 9012
DPATools API message and remedy: Failed to unpack firmware payload. Check if the iDRAC LC job queue is clear.
ACM UI message and remedy: Failed to unpack firmware payload. Check if the iDRAC LC job queue is clear.

Case 14
REST API: POST /update; GET /activities/{id}
Failure scenario: Failed to update FW because PTAgent is not running.
Error code: 9013
DPATools API message and remedy: Failed to connect to PTAgent. Check PTAgent status or IP connection.
ACM UI message and remedy: Failed to connect to Node Event service. Check Node Event service status or IP connection.
Remediation Steps: #1

Case 15
REST API: POST /update; GET /activities/{id} (> IDPA 2.4 only)
Failure scenario: Failed to update FW because iSM is not running. Note: When iSM is stopped, it may take up to 3 minutes for PTAgent to process some REST requests, which may return a timeout error in this case.
Error code: 9014
DPATools API message and remedy: Failed to process firmware payload with PTAgent (HttpStatus.SERVICE_UNAVAILABLE). Check iSM/iDRAC status.
ACM UI message and remedy: Failed to process firmware payload with Node Event service (HttpStatus.SERVICE_UNAVAILABLE). Check iDRAC Service Module/iDRAC status.
Remediation Steps: #2, #4, or #5

Case 16
REST API: POST /update; GET /activities/{id}
Failure scenario: Failed to update FW because iDRAC is not ready.
Error code: 9015
DPATools API message and remedy: Failed to process firmware payload with PTAgent (HttpStatus.BAD_GATEWAY). Check iDRAC/iSM status.
ACM UI message and remedy: Failed to process firmware payload with Node Event service (HttpStatus.BAD_GATEWAY). Check iDRAC/iDRAC Service Module status.

Case 17
REST API: GET /precheck
Failure scenario: FW precheck failed due to the error "No FW profile found."
Error code: 9017
DPATools API message and remedy: No FW profile found for <IDPA model>. Ensure that the correct ID Module is installed.
ACM UI message and remedy: No firmware profile was found. Ensure that the correct ID Module is installed.

Case 18
REST API: POST /postupdate; GET /activities/{id}
Failure scenario: FW postupdate failed due to a vSAN issue.
Error code: 9018
DPATools API message and remedy: Failed to retrieve vSAN status. Ensure that the vSAN is in a healthy state.
ACM UI message and remedy: Failed to retrieve vSAN status. Ensure that the vSAN is in a healthy state.

Case 19
REST API: POST /postupdate; GET /activities/{id}
Failure scenario: FW postupdate failed because PTAgent failed to process the reboot request.
Error code: 9019
DPATools API message and remedy: Failed to process reboot request with PTAgent. Check PTAgent status.
ACM UI message and remedy: Failed to process reboot request with Node Event service. Check Node Event service status.
Remediation Steps: #1

Case 20
REST API: POST /preupdate; GET /activities/{id}
Failure scenario: Failed to restart PTAgent 1.8.3 because PTAgent is not installed properly or is in an error state.
Error code: 9020
DPATools API message and remedy: Failed to restart PTAgent 1.8.3. Check if PTAgent is in an error state, and ensure it is installed properly.
ACM UI message and remedy: Failed to restart Node Event service. Check if the Node Event service is in an error state, and ensure it is installed properly.

Case 21
REST API: POST /preupdate; GET /activities/{id}
Failure scenario: Failed to perform pre-requisite tasks due to an internal error.
Error code: 9021
DPATools API message and remedy: Failed to perform pre-requisite tasks due to an internal error. Check the logs for details.
ACM UI message and remedy: Failed to perform pre-requisite tasks due to an internal error. Check the upgrade logs for details.
Remediation Steps: Check that the firmware versions are correct, including the iDRAC Lifecycle Controller firmware version. (Reference: DDOSCFD-24113)

Case 22
REST API: POST /update; GET /activities/{id}
Failure scenario: The FW payload file is not found.
Error code: 9022
DPATools API message and remedy: FW payload file is not found. Add the firmware payload path to the request body and then retry the FW update API.
ACM UI message and remedy: The firmware payload file is not found. Add the firmware payload path to the request body and then retry the firmware update API.

Case 23
REST API: POST /update; GET /activities/{id}
Failure scenario: Failed to update firmware due to an internal error.
Error code: 9023
DPATools API message and remedy: Failed to update firmware due to an internal error. Check PTAgent/iSM/iDRAC status and review logs for details.
ACM UI message and remedy: Failed to update firmware due to an internal error. Check Node Event service/iDRAC Service Module/iDRAC status and review logs for details.

Case 24
REST API: POST /preupdate; GET /activities/{id}
Failure scenario: Failed to perform prerequisite tasks because PTAgent is not running.
Error code: 9024
DPATools API message and remedy: Failed to connect to PTAgent. Check PTAgent status or IP connection.
ACM UI message and remedy: Failed to connect to Node Event service. Check Node Event service status or IP connection.
Remediation Steps: #1

Case 25
REST API: POST /postupdate; GET /activities/{id}
Failure scenario: Failed to perform postupdate tasks because PTAgent is not running.
Error code: 9025
DPATools API message and remedy: Failed to connect to PTAgent. Check PTAgent status or IP connection.
ACM UI message and remedy: Failed to connect to Node Event service. Check Node Event service status or IP connection.
Remediation Steps: #1

Case 26
REST API: POST /postupdate; GET /activities/{id}
Failure scenario: Failed to perform postupdate tasks due to an internal error.
Error code: 9028
DPATools API message and remedy: Failed to perform postupdate tasks due to an internal error. Check vSAN status and review logs for details.
ACM UI message and remedy: Failed to perform postupdate tasks due to an internal error. Check vSAN status and review upgrade logs for details.

Case 27
REST API: Any API
Failure scenario: Timeout while waiting for the REST API response.
Error code: 9029
DPATools API message and remedy: Timeout while waiting for the REST API response. Check PTAgent/iSM/iDRAC status and review logs for details.
ACM UI message and remedy: Timeout while waiting for an internal task to finish. Check Node Event service/iDRAC Service Module/iDRAC status and check logs for details.
Remediation Steps: See article 222521, "IDPA: Firmware task failed error: 9029 timeout while waiting."

Case 28
REST API: POST /postupdate; GET /activities/{id}
Failure scenario: Failed to perform postupdate tasks because the ESXi host failed to exit maintenance mode after reboot.
Error code: 9030
DPATools API message and remedy: ESXi host failed to exit maintenance mode. Check ESXi host status and ensure vSAN is in a healthy state.
ACM UI message and remedy: Hypervisor failed to exit maintenance mode. Check Hypervisor status and ensure vSAN is in a healthy state.

Case 29
REST API: POST /postupdate; GET /activities/{id}
Failure scenario: Failed to perform postupdate tasks due to a timeout while waiting for the host to be up and running.
Error code: 9031
DPATools API message and remedy: Timeout while waiting for host up and running. Check ESXi host status and ensure vSAN is in a healthy state.
ACM UI message and remedy: Timeout while waiting for host up and running. Check Hypervisor status and ensure vSAN is in a healthy state.

Case 30
REST API: POST /update; GET /activities/{id}
Failure scenario: FW update failed due to a timeout while waiting for iDRAC to be up and running.
Error code: 9032
DPATools API message and remedy: The maximum wait time for system reset is exceeded. Check iDRAC and PTAgent status.
ACM UI message and remedy: The maximum wait time for system reset is exceeded. Check iDRAC and Node Event service status.

Case 31
REST API: POST /readinesscheck
Failure scenario: Failed to verify if the current firmware is valid.
Error code: 9033
DPATools API message and remedy: Failed to verify if the current firmware is valid. Check iDRAC software inventory and dpatools log for details.
ACM UI message and remedy: Failed to verify if the current firmware is valid. Check the iDRAC software inventory and upgrade log for details.
Remediation Steps: #8

Case 32
REST API: POST /precheck
Failure scenario: The precheck task failed because the FW profile is missing.
Error code: 9034
DPATools API message and remedy: Failed to get a firmware profile. Check installed FW payload and logs for details.
ACM UI message and remedy: Failed to get a firmware profile. Check the installed firmware payload and upgrade logs for details.
Remediation Steps: #8

Case 33
REST API: POST /version
Failure scenario: Failed to query the FW version due to missing firmware profiles.
Error code: 9035
DPATools API message and remedy: Failed to get a firmware profile. Check installed dpatools-service version and logs for details.
ACM UI message and remedy: Failed to get firmware versions due to missing firmware profiles. Check installed Infrastructure Management Service version and upgrade logs for details.

Case 34
REST API: POST /update; GET /activities/{id}
Failure scenario: Failed to autoclear iDRAC pending jobs due to an internal error.
Error code: 9036
DPATools API message and remedy: Failed to clear pending jobs in iDRAC job queue. Check PTAgent/iSM/iDRAC status and review logs for details.
ACM UI message and remedy: Failed to clear pending jobs in iDRAC job queue. Check Node Event service/iDRAC Service Module/iDRAC status and check upgrade logs for details.

Case 35
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because the iDRAC version is older than 3.30.30.30 and direct upgrade to the target version is not supported. iDRAC must be updated to 3.36.103.36 first.
Error code: N/A
ACM UI message and remedy: Current iDRAC firmware is older than 3.30.30.30. Direct upgrade to the target version is not supported. iDRAC firmware must be updated to 3.36.103.36 first.
Remediation Steps: #6

Case 36
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because PTAgent is not running (or the PTAgent service is not available).
Error code: N/A
DPATools API message and remedy: PTAgent is not available and active now. Check PTAgent service status or IP connection.
ACM UI message and remedy: Node Event Service is not available and active now. Check Node Event Service status or IP connection.
Remediation Steps: #1

Case 37
REST API: POST /readinesscheck
Failure scenario: ReadinessCheck failed because iSM is not running (or the service is degraded or disabled).
Error code: N/A
DPATools API messages: The cached response with Node Event Service is disabled. / Node Event Service is in a degraded state; iDRAC Service Module is not available or active now.
ACM UI message and remedy: Node Event Service is not available and active now. Check Node Event Service status or IP connection.
Remediation Steps: Multiple known issues:
• See article 219231, "Integrated Data Protection Appliance: The Cached Response With Node Event Service Is Disabled. Node Event Service Is in Degraded Due to a Duplicate Route 169.254.0.1"
• See article 197174, "PowerProtect Data Protection Appliances, IDPA: PowerProtect Data Protection Rapid Upgrade Checker Reported a Firmware Upgrade precheck failure"
• See article 219233, "Integrated Data Protection Appliance: The Cached Response Degraded ESXi /tmp Folder Full"



Remediation Steps:
 

Step 1
1. SSH to the ESXi host and correct the PTAgent rest_ip configuration:
   • Set the PTAgent rest_ip parameter: /opt/dell/DellPTAgent/tools/pta_cfg set rest_ip=https://<host_internal_IP>:8086
   • Check the status of the PTAgent service: /etc/init.d/DellPTAgent status
   • If the service is down, start it: /etc/init.d/DellPTAgent start
   • If the service is up and running, restart it: /etc/init.d/DellPTAgent restart
2. To avoid further precheck failures, log in to all ESXi hosts and check the PTAgent service status manually. If the Dell PTAgent service is down on any host, start it manually. (A consolidated example session follows this step.)

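The checks and restarts above can be run as one short session on each ESXi host. This is a minimal sketch rather than an exact transcript: <host_internal_IP> is a placeholder for that host's internal IP, and it assumes the init script's status action returns a non-zero exit code when PTAgent is stopped.

  # Point PTAgent's REST endpoint at the host's internal IP.
  /opt/dell/DellPTAgent/tools/pta_cfg set rest_ip=https://<host_internal_IP>:8086

  # Restart PTAgent if it is running; otherwise start it.
  if /etc/init.d/DellPTAgent status; then
      /etc/init.d/DellPTAgent restart
  else
      /etc/init.d/DellPTAgent start
  fi
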
Step 2
1. Log in to the ESXi host on which the iSM issue is observed.
2. Check the service status: /etc/init.d/dcism-netmon-watchdog status
   If the service is stopped, start it: /etc/init.d/dcism-netmon-watchdog start
   (A short conditional sketch follows this step.)

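As a sketch, the check and conditional start above can be combined, again assuming the status action returns a non-zero exit code when iSM is not running:

  # Start iSM only if the watchdog reports it is not running.
  if ! /etc/init.d/dcism-netmon-watchdog status; then
      /etc/init.d/dcism-netmon-watchdog start
  fi
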
Step 3
1. SSH into iDRAC from the ACM CLI using the command ssh root@<iDRAC-IP-Address>. (To fetch the iDRAC IP address, run: enum_instances OMC_IPMIIPProtocolEndpoint root/cimv2 | grep IPv4Address)
2. Run the command: racadm jobqueue delete -i JID_CLEARALL_FORCE
3. Wait 5 minutes for iDRAC to settle down.

Example:

  racadm>>racadm jobqueue delete -i JID_CLEARALL_FORCE
  RAC1032: JID_CLEARALL_FORCE job was canceled by the user.
  racadm>>

Step 4
If the iDRAC UI shows that iSM is "Not running (TLS error)," apply the following workaround:
1. Log in to the ESXi host on which the iSM issue is observed.
2. Run the commands:
   /etc/init.d/dcism-netmon-watchdog stop
   /etc/init.d/dcism-netmon-watchdog start install

Step 5
1. Log in to the ESXi host on which the iSM issue is observed.
2. Check the service status: /etc/init.d/dcism-netmon-watchdog status
   If the status shows "iSM is active (not running)," restart iSM (the restart may take about 5 minutes):
   /etc/init.d/dcism-netmon-watchdog stop
   /etc/init.d/dcism-netmon-watchdog start
3. Reset iDRAC:
   a. SSH into iDRAC from the ACM CLI using the command ssh root@<iDRAC-IP-Address>. (To fetch the iDRAC IP address, run: enum_instances OMC_IPMIIPProtocolEndpoint root/cimv2 | grep IPv4Address)
   b. Run the command: racadm racreset soft

Step 6
1. SSH to the ACM.
2. Verify that dpatools version 2.3.0 or later is installed by running: rpm -qa | grep dpatools
   The output should show dpatools-2.3.0-0.noarch or a later version.
3. Verify that the firmware bundle IDPA-10.308-10.308.tar.gz is available at /usr/local/dpatools/bin/payload.
4. Run the command: dpacli -fwupdate /usr/local/dpatools/bin/payload/IDPA-10.308-10.308.tar.gz -skipReboot
   After this command finishes, iDRAC version 3.36.103.36 is installed on the nodes. (A consolidated example session covering items 2 through 4 follows this step.)
5. Click the Revalidate button in the Upgrade UI to run the upgrade prechecks again.

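A consolidated sketch of items 2 through 4 above, run on the ACM. The package name, payload path, and dpacli options are taken from the step itself; ls -l is used here only as an assumed way to confirm that the bundle exists.

  # Confirm the installed dpatools version (expect dpatools-2.3.0-0.noarch or later).
  rpm -qa | grep dpatools

  # Confirm the firmware bundle is present.
  ls -l /usr/local/dpatools/bin/payload/IDPA-10.308-10.308.tar.gz

  # Install iDRAC 3.36.103.36 on the nodes without rebooting.
  dpacli -fwupdate /usr/local/dpatools/bin/payload/IDPA-10.308-10.308.tar.gz -skipReboot
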
Step 7
One possible reason for this failure is that some client task scripts are missing on the ESXi hosts. This may happen if the ESXi host is not power cycled during the reimage process. To check and correct this, follow the steps below (a short sketch of the file comparison follows this list):
1. SSH to the ESXi hosts.
2. Write down the list of files present in the /scratch/dell/extern folder.
3. SSH to the ACM.
4. Write down the list of files present in the /usr/local/dpatools/bin/clienttask folder.
5. If any files in the ACM folder are not present in the ESXi folder, SCP the missing files to the ESXi hosts.
6. Retry the upgrade from the Upgrade UI.
7. If the issue persists, contact Support.

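A minimal sketch of the comparison and copy in items 1 through 5, run from the ACM. It assumes root SSH/SCP access to the ESXi host; <esxi_host> and <missing_file> are placeholders.

  # List the client task scripts shipped on the ACM.
  ls /usr/local/dpatools/bin/clienttask

  # List the scripts actually present on the ESXi host.
  ssh root@<esxi_host> ls /scratch/dell/extern

  # Copy any file that is missing on the ESXi host.
  scp /usr/local/dpatools/bin/clienttask/<missing_file> root@<esxi_host>:/scratch/dell/extern/
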
Step 8
Another possible reason for this failure is that the dpatools-service was not upgraded properly. To check the dpatools-service version, follow the steps below:
1. SSH to the ACM.
2. Verify that the dpatools-service target version is already installed: rpm -qa | grep dpatools-service

If the failure happens during the ESXi and firmware upgrade, apply the latest DPATools and follow the steps below to work around the issue (a consolidated sketch follows this step):
1. Run the DPATools CLI command to update firmware on all hosts: dpacli -fwworkflow /usr/local/dpatools/bin/payload/IDPA-<version>-<version>.tar.gz
2. Run Upgrade Sync from the Upgrade UI.
3. If the issue persists, contact Support.

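A brief sketch of the CLI portion of this step, run on the ACM. The payload file name keeps the IDPA-<version>-<version>.tar.gz placeholder from the step above.

  # Confirm the installed dpatools-service package version.
  rpm -qa | grep dpatools-service

  # Re-run the firmware workflow against the installed payload on all hosts.
  dpacli -fwworkflow /usr/local/dpatools/bin/payload/IDPA-<version>-<version>.tar.gz
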
Step 9
One possible reason for this failure is a problem with the PM1735 NVMe firmware or the PCIe riser. By default, PTAgent automatically scans devices and caches the storage device properties. PTAgent fails to find the 'PhysicalDisk' component in hwInventory because it cannot query data from the NVMe PM1735 card in slot #4.

The following workarounds can be applied to resolve this issue:

1. Upgrade the NVMe/PCIe 1735 firmware to 2.3.0 (for the NVMe 1735 drive, V2.0.2 or later is acceptable).
   a. Download the 14G FW 2021-December-Block from the Dell Support product page.
   b. Extract the package; the contents include the 1735 firmware Express-Flash-PCIe-SSD_Firmware_RP8RC_WN64_2.3.0_A03.EXE.
   c. Enable service mode on the appliance by running the following command on the ACM: dpacli -servicemode
   d. Update the NVMe 1735 firmware to 2.3.0 using iDRAC manually.
   e. Power cycle the appliance for the firmware update to be committed.
2. If the steps above do not work and the issue persists, attempt a power-off cycle sequence:
   a. On the ACM, enable service mode on the appliance: dpacli -servicemode
   b. Power down.
   c. Unplug the power cable.
   d. Hold down the power button for 10 seconds.
   e. Plug in the power cable.
   f. Power up.
3. If steps 1 and 2 above do not resolve the issue, it could be an issue with the slot or riser. Replace the PCIe riser.

Affected Products

PowerProtect DP4400, PowerProtect DP5800, PowerProtect DP8300, PowerProtect DP8800, Integrated Data Protection Appliance Family, PowerProtect DP5900, PowerProtect DP8400, PowerProtect DP8900, PowerProtect Software

Products

Integrated Data Protection Appliance Software

Article Properties
Article Number: 000191627
Article Type: How To
Last Modified: 11 Apr 2025
Version: 17