What to do when a node reports as Down or Offline
Summary: How to determine if a node is down and ways to connect to the node in a down state.
Instructions
Whenever a node has a problem communicating with the other nodes in the cluster, it is reported as offline. A node can be reported in this state for many reasons, ranging from hardware failures to operating system issues. The most common indicator of a down node is an event message. If a node loses connectivity to the remaining nodes in the cluster, a "node offline" event is reported:
2.21767 02/27 05:14 C 3 173520 Node 3 is offline
If you see an event similar to this, use the output of isi status to determine whether the node has recovered or is still offline.
If isi status reports all nodes as OK:
testcluster-1# isi status
Cluster Name: testcluster
Cluster Health: [ OK ]
Data Reduction: 1.33 : 1
Storage Efficiency: 0.72 : 1
Cluster Storage: HDD SSD Storage
Size: 0 (0 Raw) 16.7T (20.3T Raw)
VHS Size: 3.6T
Used: 0 (n/a) 22.0G (< 1%)
Avail: 0 (n/a) 16.7T (> 99%)
Health Ext Throughput (bps) HDD Storage SSD Storage
ID |IP Address |DASR |C/N| In Out Total| Used / Size |Used / Size
---+----------------+-----+---+-----+-----+-----+-----------------+-----------------
1|xxx.xxx.xxx.148 | OK | C | 0| 524k| 524k|(No Storage HDDs)| 6.4G/ 5.6T(< 1%)
2|xxx.xxx.xxx.149 | OK | C |962.0|23.1M|23.1M|(No Storage HDDs)| 6.4G/ 5.6T(< 1%)
3|xxx.xxx.xxx.150 | OK | C | 0| 0| 0|(No Storage HDDs)| 9.2G/ 5.6T(< 1%)
---+----------------+-----+---+-----+-----+-----+-----------------+-----------------
Cluster Totals: |962.0|23.7M|23.7M|(No Storage HDDs)|22.0G/16.7T(< 1%)
Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only
External Network Fields: C = Connected, N = Not Connected
Critical Events:
Time LNN Event
--------------- ---- -------------------------------------------------------
Cluster Job Status:
No running jobs.
No paused or waiting jobs.
No failed jobs.
Recent job results:
Time Job Event
--------------- -------------------------- ------------------------------
02/27 04:00:38 ShadowStoreProtect[518] Succeeded
02/27 02:00:14 WormQueue[517] Succeeded
In this example, all nodes report as OK. This indicates that all nodes are online and part of the cluster. Determine whether someone rebooted the node or maintenance was being performed. If you are unsure of the reason for the reboot, gather logs and open a service request.
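To help confirm when the node restarted, you can check its uptime and reboot history from the node itself. A minimal sketch using standard FreeBSD commands available in OneFS (run on the node that was reported offline; the output below is illustrative):

testcluster-3# uptime
5:32AM up 18 mins, 1 user, load averages: 0.21, 0.35, 0.28
testcluster-3# last reboot | head -2
reboot ~ Thu Feb 27 05:20
reboot ~ Mon Feb 10 11:02

To gather logs for a service request, you can run the isi_gather_info utility from any node; it collects logs from across the cluster into a single archive for support.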
If isi status reports a node at Attention:
testcluster-1# isi status
Cluster Name: testcluster
Cluster Health: [ ATTN]
Data Reduction: 1.33 : 1
Storage Efficiency: 0.72 : 1
Cluster Storage: HDD SSD Storage
Size: 0 (0 Raw) 15.0T (18.6T Raw)
VHS Size: 3.6T
Used: 0 (n/a) 21.2G (< 1%)
Avail: 0 (n/a) 15.0T (> 99%)
Health Ext Throughput (bps) HDD Storage SSD Storage
ID |IP Address |DASR |C/N| In Out Total| Used / Size |Used / Size
---+----------------+-----+---+-----+-----+-----+-----------------+-----------------
1|xxx.xxx.xxx.148 | OK | C | 2.1k|16.9k|19.0k|(No Storage HDDs)| 6.4G/ 5.5T(< 1%)
2|xxx.xxx.xxx.149 | OK | C | 1.8M|10.0M|11.9M|(No Storage HDDs)| 6.4G/ 5.5T(< 1%)
3|xxx.xxx.xxx.150 |-A-- | C | 4.0k|480.0| 4.5k|(No Storage HDDs)|10.7G/ 5.5T(< 1%)
---+----------------+-----+---+-----+-----+-----+-----------------+-----------------
Cluster Totals: | 1.8M|10.0M|11.9M|(No Storage HDDs)|21.2G/15.0T(< 1%)
Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only
External Network Fields: C = Connected, N = Not Connected
Critical Events:
Time LNN Event
--------------- ---- -------------------------------------------------------
Cluster Job Status:
Running jobs:
Job Impact Pri Policy Phase Run Time
-------------------------- ------ --- ---------- ----- ----------
FlexProtectLin[520] Medium 1 MEDIUM 4/4 0:00:34
Job Description: Working on nodes: None and drives: node3:bay1
No paused or waiting jobs.
No failed jobs.
Recent job results:
Time Job Event
--------------- -------------------------- ------------------------------
02/27 04:00:38 ShadowStoreProtect[518] Succeeded
02/27 02:00:14 WormQueue[517] Succeeded
The isi status output shows the node at Attention (-A--), which is triggered by a critical event on the cluster. A node in the Attention state is online and part of the cluster but is reporting an issue. You can use isi event list to see which critical events are being reported for the node at Attention. In this case, it was due to a FlexProtectLin job running against drive bay 1. As with the OK state, determine why the node rebooted if you can. If not, gather logs and open a service request.
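To review those critical events from the command line, you can list them from any node. A minimal sketch (output columns vary by OneFS version, and on newer releases the equivalent command is isi event events list):

testcluster-1# isi event list

Look for entries whose LNN matches the node at Attention; in the example above, that is LNN 3.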
If isi status reports a node as Down:
testcluster-1# isi status
Cluster Name: testcluster
Cluster Health: [ ATTN]
Data Reduction: 1.33 : 1
Storage Efficiency: 0.72 : 1
Cluster Storage: HDD SSD Storage
Size: 0 (0 Raw) 9.9T (13.5T Raw)
VHS Size: 3.6T
Used: 0 (n/a) 12.7G (< 1%)
Avail: 0 (n/a) 9.9T (> 99%)
Health Ext Throughput (bps) HDD Storage SSD Storage
ID |IP Address |DASR |C/N| In Out Total| Used / Size |Used / Size
---+---------------+-----+---+-----+-----+-----+-----------------+-----------------
1|xxx.xxx.xxx.148 | OK | C | 0|73.9k|73.9k|(No Storage HDDs)| 6.4G/ 5.0T(< 1%)
2|xxx.xxx.xxx.149 | OK | C | 0|11.3k|11.3k|(No Storage HDDs)| 6.4G/ 5.0T(< 1%)
3|xxx.xxx.xxx.150 |D--- | N | n/a| n/a| n/a| n/a/ n/a( n/a)| n/a/ n/a( n/a)
---+---------------+-----+---+-----+-----+-----+-----------------+-----------------
Cluster Totals: | n/a| n/a| n/a|(No Storage HDDs)|12.7G/ 9.9T(< 1%)
Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only
External Network Fields: C = Connected, N = Not Connected
Critical Events:
Time LNN Event
--------------- ---- -------------------------------------------------------
02/27 05:14:20 3 Node 3 offline
Cluster Job Status:
No running jobs.
No paused or waiting jobs.
No failed jobs.
Recent job results:
Time Job Event
--------------- -------------------------- ------------------------------
02/27 04:00:38 ShadowStoreProtect[518] Succeeded
02/27 02:00:14 WormQueue[517] Succeeded
02/27 00:00:21 ShadowStoreDelete[516] Succeeded
The isi status output shows the node as Down (D---), which indicates that the node is unable to communicate with the cluster. If the node is not down for a known reason (for example, hardware maintenance or a cluster OS upgrade), see if you can establish connectivity to the node and open a service request immediately.
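You can also confirm how the remaining nodes see cluster membership. OneFS exposes the group management protocol state through a sysctl, and down nodes are listed explicitly. A trimmed, illustrative example (the group identifier and per-node detail vary by cluster and OneFS version):

testcluster-1# sysctl efs.gmp.group
efs.gmp.group: <2,312>: { 1-2:0-5, down: 3 }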
Establishing connectivity to a down node remotely
If the node is down, it cannot communicate with the rest of the cluster. It may still be possible to connect to the node directly, however, either remotely or through a serial connection.
From another node in the cluster, you can attempt to reach the down node over the internal network. Try to ping the node by its internal hostname, which takes the form clustername-nodenumber. Using node 3 from the output above:
testcluster-1# ping testcluster-3
PING testcluster-3 (128.221.254.3): 56 data bytes
64 bytes from 128.221.254.3: icmp_seq=0 ttl=64 time=0.048 ms
64 bytes from 128.221.254.3: icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from 128.221.254.3: icmp_seq=2 ttl=64 time=0.043 ms
^C
--- testcluster-3 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
In this example, we were able to ping the node by its internal hostname even though it reports as down. The next step is to attempt an SSH connection to the node.
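For example, assuming SSH between nodes over the internal network is allowed (the default on OneFS):

testcluster-1# ssh testcluster-3

If the session hangs or is refused even though ping succeeds, the node is likely hung or still booting, and a serial connection (described below) is the next step.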
If the node has a statically assigned IP address on your public network, you may be able to connect to that address instead. To determine whether the node has a statically assigned address, use the isi network command from the cluster:
testcluster-1# isi network interfaces list | grep Static
1 25gige-1 Up - groupnet0.subnet0.pool0 Static 192.168.1.148
2 25gige-1 Up - groupnet0.subnet0.pool0 Static 192.168.1.149
3 25gige-1 Unknown - groupnet0.subnet0.pool0 Static 192.168.1.150
In this example, node 3 has a statically assigned address of 192.168.1.150. From another node in the cluster, or from a workstation with access to that network, attempt to ping the address. If the ping succeeds, attempt to SSH into the node, as in the sketch below.
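A minimal sketch from a workstation on that network (192.168.1.150 is taken from the output above; substitute your node's address, and note that SSH access as root depends on your cluster's security configuration):

workstation$ ping -c 3 192.168.1.150
workstation$ ssh root@192.168.1.150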
Establishing connectivity to a down node locally
If someone is onsite with a computer that has a serial port (or a USB-to-serial adapter) and a null modem cable (or a serial cable with a null modem adapter), they can connect directly to the node for troubleshooting purposes. Information on how to connect to the serial port on the node can be found in PowerScale: Steps for customers to connect to serial port when remote connection is not possible
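Once the serial connection is made, open a session with any terminal emulator. PowerScale nodes typically use 115200 baud, 8 data bits, no parity, and 1 stop bit. As a sketch using screen on a laptop where the adapter appears as /dev/ttyUSB0 (the device path depends on your adapter and operating system):

workstation$ screen /dev/ttyUSB0 115200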