Isilon - Maintenance on node in different storage pool while SmartFailing nodes in another
Community,
I am sure the answer is yes, but I want to confirm. Below is the status of the cluster. I already have two nodes SmartFailing out of the top pool, and I want to shut down a node in the lower pool for maintenance. I need to make sure that when I shut down the node in the lower pool, the cluster doesn't panic or go into READ-ONLY mode. The protection levels are different in the two pools, and I'd imagine I should be able to perform this task with no issues. Let me know.
Cluster Name: [CLUSTERNAME]
Cluster Health: [ ATTN]
Cluster Storage: HDD SSD Storage
Size: 5.0P (5.0P Raw) 0 (0 Raw)
VHS Size: 15T
Used: 3.2P (65%) 0 (n/a)
Avail: 1.8P (35%) 0 (n/a)
Node Group Name: n400_108tb_12gb-ram Protection: +3 w19
Pool Storage: HDD SSD Storage
Size: 3.1P (3.1P Raw) 0 (0 Raw)
VHS Size: 6.9T
Used: 2.2P (69%) 0 (n/a)
Avail: 996T (31%) 0 (n/a)
Throughput (bps) HDD Storage SSD Storage
Name Health| In Out Total| Used / Size |Used / Size
-------------------+-----+-----+-----+-----+-----------------+-----------------
1|xx.xxx.x.1 | OK | 0| 26| 26| 67T/ 97T( 69%)|(No Storage SSDs)
2|xx.xxx.x.2 | OK | 0| 22| 22| 67T/ 97T( 69%)|(No Storage SSDs)
3|xx.xxx.x.3 | OK | 2.7M| 16| 2.7M| 67T/ 97T( 69%)|(No Storage SSDs)
4|xx.xxx.x.4 | OK | 0| 20| 20| 67T/ 97T( 69%)|(No Storage SSDs)
5|xx.xxx.x.5 | OK | 0| 40| 40| 67T/ 97T( 69%)|(No Storage SSDs)
6|xx.xxx.x.6 | OK | 0| 24| 24| 67T/ 97T( 69%)|(No Storage SSDs)
7|xx.xxx.x.7 | OK | 0| 12| 12| 67T/ 97T( 69%)|(No Storage SSDs)
8|xx.xxx.x.8 | OK | 0| 16| 16| 67T/ 97T( 69%)|(No Storage SSDs)
9|xx.xxx.x.9 | OK | 0| 16| 16| 67T/ 97T( 69%)|(No Storage SSDs)
10|xx.xxx.x.10 | OK | 0| 22| 22| 67T/ 97T( 69%)|(No Storage SSDs)
15|xx.xxx.x.15 |-AS- | 0| 22| 22| 64T/ 97T( 66%)|(No Storage SSDs)
16|xx.xxx.x.16 | OK | 0| 22| 22| 67T/ 97T( 69%)|(No Storage SSDs)
17|xx.xxx.x.17 | OK | 3.0M| 786K| 3.7M| 67T/ 97T( 69%)|(No Storage SSDs)
18|xx.xxx.x.18 | OK | 65K| 22| 65K| 67T/ 97T( 69%)|(No Storage SSDs)
19|xx.xxx.x.19 | OK | 0| 0| 0| 67T/ 97T( 69%)|(No Storage SSDs)
20|xx.xxx.x.20 | OK | 207K| 19| 207K| 67T/ 97T( 69%)|(No Storage SSDs)
21|xx.xxx.x.21 | OK | 0| 48| 48| 67T/ 97T( 69%)|(No Storage SSDs)
22|xx.xxx.x.22 | OK | 129K| 6.7M| 6.8M| 67T/ 97T( 69%)|(No Storage SSDs)
23|xx.xxx.x.23 | OK | 0| 28| 28| 67T/ 97T( 69%)|(No Storage SSDs)
24|xx.xxx.x.24 | OK | 0| 22| 22| 67T/ 97T( 69%)|(No Storage SSDs)
25|xx.xxx.x.25 |-A-- | 0| 1.7M| 1.7M| 67T/ 97T( 69%)|(No Storage SSDs)
26|xx.xxx.x.26 |-AS- | 0| 22| 22| 64T/ 97T( 66%)|(No Storage SSDs)
27|xx.xxx.x.27 | OK | 173K| 24| 173K| 67T/ 97T( 69%)|(No Storage SSDs)
28|xx.xxx.x.28 | OK | 130K| 16| 130K| 67T/ 97T( 69%)|(No Storage SSDs)
29|xx.xxx.x.29 | OK | 155K| 22| 155K| 67T/ 97T( 69%)|(No Storage SSDs)
30|xx.xxx.x.30 | OK | 0| 16| 16| 67T/ 97T( 69%)|(No Storage SSDs)
31|xx.xxx.x.31 | OK | 0| 19| 19| 67T/ 97T( 69%)|(No Storage SSDs)
32|xx.xxx.x.32 | OK | 0| 24| 24| 67T/ 97T( 69%)|(No Storage SSDs)
33|xx.xxx.x.33 | OK | 0| 40| 40| 67T/ 97T( 69%)|(No Storage SSDs)
34|xx.xxx.x.34 | OK | 0| 24| 24| 67T/ 97T( 69%)|(No Storage SSDs)
35|xx.xxx.x.35 | OK | 0| 16| 16| 67T/ 97T( 69%)|(No Storage SSDs)
36|xx.xxx.x.36 | OK | 0| 22| 22| 67T/ 97T( 69%)|(No Storage SSDs)
37|xx.xxx.x.37 | OK | 0| 8| 8| 67T/ 97T( 69%)|(No Storage SSDs)
38|xx.xxx.x.38 | OK | 0| 21| 21| 67T/ 97T( 69%)|(No Storage SSDs)
39|xx.xxx.x.39 | OK | 0| 24| 24| 67T/ 97T( 69%)|(No Storage SSDs)
-------------------+-----+-----+-----+-----+-----------------+-----------------
n400_108tb_12gb-ram|---S-| 6.6M| 9.2M| 16M| 2.2P/ 3.1P( 69%)|(No Storage SSDs)
Node Group Name: n400_144tb_48gb Protection: +2:1 w18
Pool Storage: HDD SSD Storage
Size: 1.9P (1.9P Raw) 0 (0 Raw)
VHS Size: 8.1T
Used: 1.1P (57%) 0 (n/a)
Avail: 826T (43%) 0 (n/a)
Throughput (bps) HDD Storage SSD Storage
Name Health| In Out Total| Used / Size |Used / Size
-------------------+-----+-----+-----+-----+-----------------+-----------------
11|xx.xxx.x.11 | OK | 327K| 188M| 189M| 28T/ 129T( 21%)|(No Storage SSDs)
12|xx.xxx.x.12 | OK | 0| 503M| 503M| 28T/ 129T( 21%)|(No Storage SSDs)
13|xx.xxx.x.13 | OK | 0| 226M| 226M| 4.9T/ 129T( 4%)|(No Storage SSDs)
14|xx.xxx.x.14 | OK | 19M| 428M| 447M| 4.8T/ 129T( 4%)|(No Storage SSDs)
40|None | OK | 0| 3.4M| 3.4M| 102T/ 129T( 79%)|(No Storage SSDs)
41|None | OK | 129K| 28| 129K| 102T/ 129T( 79%)|(No Storage SSDs)
42|None | OK | 0| 25| 25| 102T/ 129T( 79%)|(No Storage SSDs)
43|None | OK | 0| 8| 8| 102T/ 129T( 79%)|(No Storage SSDs)
44|None | OK | 0| 22| 22| 102T/ 129T( 79%)|(No Storage SSDs)
45|None | OK | 0| 16| 16| 102T/ 129T( 79%)|(No Storage SSDs)
46|None | OK | 0| 16| 16| 102T/ 129T( 79%)|(No Storage SSDs)
47|None | OK | 0| 26| 26| 102T/ 129T( 79%)|(No Storage SSDs)
48|None | OK | 0| 48| 48| 102T/ 129T( 79%)|(No Storage SSDs)
49|None | OK | 65K| 22| 65K| 102T/ 129T( 79%)|(No Storage SSDs)
50|None | OK | 156K| 10| 156K| 28T/ 129T( 22%)|(No Storage SSDs)
-------------------+-----+-----+-----+-----+-----------------+-----------------
n400_144tb_48gb | OK | 19M| 1.3G| 1.4G| 1.1P/ 1.9P( 57%)|(No Storage SSDs)
OK = Ok, U = Too few nodes, M = Missing drives,
D = Some nodes or drives are down, S = Some nodes or drives are smartfailed,
R = Some nodes or drives need repair
Thank you,
sjones51
February 22nd, 2017 11:00
Hi chjatwork,
Based on that output, you can shut down the node in the second pool (Node Group Name: n400_144tb_48gb). As for the nodes going into READ-ONLY mode, I assume you are talking about losing quorum. That only happens when half or more of the nodes in the cluster are unavailable, since OneFS needs a majority of nodes up to keep the cluster writable. By the looks of it, you aren't at any risk there.
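As a quick sanity check, the quorum arithmetic can be sketched from the node counts in the status output above (35 nodes listed in the first pool, 15 in the second). Treating the two SmartFailing nodes as already unavailable is a deliberately conservative assumption here, since they normally stay online until the SmartFail completes:

```python
# Sketch of OneFS quorum math: the cluster stays writable only while a
# majority of nodes is up. Node counts are taken from the status output:
# 35 nodes in the first pool + 15 in the second = 50 total.

def quorum_needed(total_nodes: int) -> int:
    """Minimum number of nodes that must stay up: floor(n/2) + 1."""
    return total_nodes // 2 + 1

total = 35 + 15      # nodes across both pools
down = 2 + 1         # worst case: 2 SmartFailing nodes + 1 down for maintenance

remaining = total - down
needed = quorum_needed(total)

print(f"quorum needed: {needed}, nodes remaining: {remaining}")
print("quorum held" if remaining >= needed else "quorum lost")
# -> quorum needed: 26, nodes remaining: 47
# -> quorum held
```

Even in that worst case, 47 of 50 nodes remain, well above the 26-node quorum, so taking one node in the second pool down for maintenance should not put the cluster at risk of going read-only.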