All scheduled jobs are in the "Waiting" state
Summary: Jobs are not running. All scheduled jobs are in the Waiting state.
This article applies to
This article does not apply to
This article is not tied to a specific product.
This article does not cover every product version.
Symptoms
No jobs are running. The job status shows all jobs in the Waiting state.
lifs010-13# isi job jobs list
ID   Type               State   Impact Pri Phase Running Time
-----------------------------------------------------------------
1500 AutoBalanceLin     Waiting Low    4   1/3   38d 21h 51m
1662 ShadowStoreProtect Waiting Low    6   1/1   -
1712 Collect            Waiting Low    5   1/2   2d 6h 46m
1724 SnapshotDelete     Waiting Low    2   1/2   -
1725 WormQueue          Waiting Low    6   1/1   -
1726 ShadowStoreDelete  Waiting Low    2   1/1   -
1727 QuotaScan          Waiting Low    6   1/2   -
-----------------------------------------------------------------
Total: 7
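When reviewing saved output from a large cluster, the State column can be tallied rather than read row by row. A minimal sketch (plain awk on captured text, not an isi command), using the rows shown above:

```shell
# Count jobs per State (column 3) in saved "isi job jobs list" output.
# The sample rows are copied from the listing above.
jobs='1500 AutoBalanceLin Waiting Low 4 1/3 38d 21h 51m
1662 ShadowStoreProtect Waiting Low 6 1/1 -
1712 Collect Waiting Low 5 1/2 2d 6h 46m
1724 SnapshotDelete Waiting Low 2 1/2 -
1725 WormQueue Waiting Low 6 1/1 -
1726 ShadowStoreDelete Waiting Low 2 1/1 -
1727 QuotaScan Waiting Low 6 1/2 -'

# Tally the third field of every row and print one line per state.
printf '%s\n' "$jobs" | awk '{count[$3]++} END {for (s in count) print s, count[s]}'
# Prints "Waiting 7" for the sample above.
```

A healthy cluster normally shows a mix of Running and Waiting jobs; every job stuck in Waiting is the symptom described here.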
Cause
This can occur when one of the nodes is disconnected from the Job Engine coordinator:
lifs010-102# isi job status --verbose
The job engine may temporarily delay running jobs.
Coordinator: 10
Connected: False
Disconnected Nodes: 8
Down or Read-Only Nodes: False
Statistics Ready: True
Cluster Is Degraded: False
Run Jobs When Degraded: False
Running and queued jobs:
ID Type State Impact Pri Phase Running Time
-----------------------------------------------------------------
1500 AutoBalanceLin Waiting Low 4 1/3 38d 21h 51m
1662 ShadowStoreProtect Waiting Low 6 1/1 -
1712 Collect Waiting Low 5 1/2 2d 6h 46m
1724 SnapshotDelete Waiting Low 2 1/2 -
1725 WormQueue Waiting Low 6 1/1 -
1726 ShadowStoreDelete Waiting Low 2 1/1 -
1727 QuotaScan Waiting Low 6 1/2 -
-----------------------------------------------------------------
Total: 7
Recent finished jobs:
ID Type State Time
------------------------------------------------------
1721 SnapshotDelete Succeeded 2016-04-21T11:00:20
1663 MultiScan User Cancelled 2016-04-22T15:35:08
1722 SnapshotDelete Succeeded 2016-04-22T17:25:29
1723 WormQueue Succeeded 2016-04-22T17:25:55
------------------------------------------------------
Total: 4
Resolution
Confirm the logical node number (LNN) of the disconnected node. A node's LNN does not always match its node (device) ID.
# isi_nodes %{id} %{node} %{lnn} %{address}
Example output:
lifs010-2# isi_nodes %{id} %{node} %{lnn} %{address}
1 lifs010-1 1 192.168.41.101
2 lifs010-2 2 192.168.41.102
3 lifs010-3 3 192.168.41.103
4 lifs010-4 4 192.168.41.104
5 lifs010-5 5 192.168.41.105
6 lifs010-6 6 192.168.41.106
7 lifs010-7 7 192.168.41.107
8 lifs010-8 8 192.168.41.108
9 lifs010-9 9 192.168.41.109
10 lifs010-10 10 192.168.41.110
11 lifs010-11 11 192.168.41.111
12 lifs010-13 12 192.168.41.112
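Mapping device IDs to LNNs by eye is error-prone on larger clusters, so the saved listing can be queried directly. A minimal sketch (plain awk on captured text, not an isi command); the sample rows are taken from the output above:

```shell
# Look up the LNN (column 3) for a given device ID (column 1) in saved
# "isi_nodes %{id} %{node} %{lnn} %{address}" output.
nodes='8 lifs010-8 8 192.168.41.108
12 lifs010-13 12 192.168.41.112'

lnn=$(printf '%s\n' "$nodes" | awk -v id=8 '$1 == id {print $3}')
echo "$lnn"
# Prints "8": on this cluster the LNN happens to match the device ID.
```

The same lookup with a different `id` value confirms whether the disconnected node reported by `isi job status --verbose` maps to the LNN you expect.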
Check whether the isi_mcp process is running on all nodes:
# isi_for_array -s ps auxw | grep mcp | grep -v grep
Example output (note that node 8 is not listed):
lifs010-2# isi_for_array -s ps auxw | grep mcp | grep -v grep
lifs010-1: root 1690 0.0 0.1 48708 18248 - Is Sat09 0:00.01 isi_mcp: failsafe (isi_mcp)
lifs010-1: root 1692 0.0 0.1 59968 18212 - Is Sat09 0:00.40 isi_mcp: forker (isi_mcp)
lifs010-1: root 1910 0.0 0.3 101728 31272 - Ss Sat09 44:23.35 isi_mcp: master (isi_mcp)
lifs010-2: root 1751 0.0 0.1 53060 18228 - Is 12Jun25 0:00.11 isi_mcp: failsafe (isi_mcp)
lifs010-2: root 1816 0.0 0.1 72896 18160 - Is 12Jun25 0:00.58 isi_mcp: forker (isi_mcp)
lifs010-2: root 1901 0.0 0.3 86140 31368 - Ss 12Jun25 148:00.09 isi_mcp: master (isi_mcp)
lifs010-3: root 1681 0.0 0.1 78532 18228 - Is Sat09 0:00.01 isi_mcp: failsafe (isi_mcp)
lifs010-3: root 1683 0.0 0.1 55616 18172 - Is Sat09 0:05.67 isi_mcp: forker (isi_mcp)
lifs010-3: root 1678 0.0 0.3 104324 31652 - Ss Sat09 46:12.73 isi_mcp: master (isi_mcp)
lifs010-4: root 1691 0.0 0.1 48708 18248 - Is Sat09 0:00.01 isi_mcp: failsafe (isi_mcp)
lifs010-4: root 1643 0.0 0.1 59968 18212 - Is Sat09 0:00.40 isi_mcp: forker (isi_mcp)
lifs010-4: root 1312 0.0 0.3 101728 31272 - Ss Sat09 44:23.35 isi_mcp: master (isi_mcp)
lifs010-5: root 1755 0.0 0.1 53060 18228 - Is 12Jun25 0:00.12 isi_mcp: failsafe (isi_mcp)
lifs010-5: root 1256 0.0 0.1 72896 18160 - Is 12Jun25 0:00.58 isi_mcp: forker (isi_mcp)
lifs010-5: root 1967 0.0 0.3 86140 31368 - Ss 12Jun25 148:00.09 isi_mcp: master (isi_mcp)
lifs010-6: root 3456 0.0 0.1 78532 18228 - Is Sat09 0:00.01 isi_mcp: failsafe (isi_mcp)
lifs010-6: root 2754 0.0 0.1 55616 18172 - Is Sat09 0:05.67 isi_mcp: forker (isi_mcp)
lifs010-6: root 1923 0.0 0.3 104324 31652 - Ss Sat09 46:12.73 isi_mcp: master (isi_mcp)
lifs010-7: root 1888 0.0 0.1 48708 18248 - Is Sat09 0:00.01 isi_mcp: failsafe (isi_mcp)
lifs010-7: root 3654 0.0 0.1 59968 18212 - Is Sat09 0:00.40 isi_mcp: forker (isi_mcp)
lifs010-7: root 1236 0.0 0.3 101728 31272 - Ss Sat09 44:23.35 isi_mcp: master (isi_mcp)
lifs010-9: root 1030 0.0 0.1 78532 18228 - Is Sat09 0:00.01 isi_mcp: failsafe (isi_mcp)
lifs010-9: root 1601 0.0 0.1 55616 18172 - Is Sat09 0:05.67 isi_mcp: forker (isi_mcp)
lifs010-9: root 1922 0.0 0.3 104324 31652 - Ss Sat09 46:12.73 isi_mcp: master (isi_mcp)
lifs010-10: root 1599 0.0 0.1 48708 18248 - Is Sat09 0:00.01 isi_mcp: failsafe (isi_mcp)
lifs010-10: root 1633 0.0 0.1 59968 18212 - Is Sat09 0:00.40 isi_mcp: forker (isi_mcp)
lifs010-10: root 1933 0.0 0.3 101728 31272 - Ss Sat09 44:23.35 isi_mcp: master (isi_mcp)
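Spotting the one missing hostname in a long ps listing by eye is easy to get wrong. A minimal sketch (POSIX grep on captured text, not an isi command) that compares the expected node names against the hostnames that actually returned isi_mcp lines; the sample values mirror the output above, where lifs010-8 is absent:

```shell
# Nodes that should report isi_mcp processes (from the isi_nodes listing).
all_nodes='lifs010-7
lifs010-8
lifs010-9'
# Hostnames that actually appeared in the isi_for_array ps output.
ps_nodes='lifs010-7
lifs010-9'

# grep -vxF treats each line of $ps_nodes as an exact-match pattern and
# keeps only the expected nodes that matched none of them.
missing=$(printf '%s\n' "$all_nodes" | grep -vxF "$ps_nodes")
echo "$missing"
# Prints "lifs010-8" for the sample above.
```

Any node name this prints is a candidate for the restart step that follows.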
Start isi_mcp on the node where it is not running:
# isi_for_array -n 8 isi_mcp
Verify the status of the scheduled jobs:
# isi job status --verbose
The job engine is running.

Coordinator: 2
Connected: True
Disconnected Nodes: -
Down or Read-Only Nodes: False
Statistics Ready: True
Cluster Is Degraded: False
Run Jobs When Degraded: False

Running and queued jobs:
ID   Type               State   Impact Pri Phase Running Time
-----------------------------------------------------------------
1500 AutoBalanceLin     Running Low    4   1/3   38d 21h 51m
1662 ShadowStoreProtect Waiting Low    6   1/1   -
1712 Collect            Waiting Low    5   1/2   2d 6h 46m
1724 SnapshotDelete     Running Low    2   1/2   3s
1725 WormQueue          Waiting Low    6   1/1   -
1726 ShadowStoreDelete  Running Low    2   1/1   2s
1727 QuotaScan          Waiting Low    6   1/2   -
-----------------------------------------------------------------
Total: 7
This issue can also occur when a node is split from the cluster, offline, down, read-only, or unresponsive, which disconnects it from the Job Engine coordinator. Additional troubleshooting may be required to return the node to a healthy operational state. Contact Dell Technical Support if assistance is needed.
Affected products
Isilon

Article properties
Article number: 000017115
Article type: Solution
Last modified: 10 Sep 2025
Version: 5