Unsolved
ssmith2075
September 7th, 2023 19:58
Isilon A200 1 node getting full
I have a pool of 4 A200 nodes and a pool of 6 H500 nodes in one cluster.
I only have the default File Pool Policy in place.
The Cluster is mainly used for GE PACS.
For some reason, one of my A200 nodes sits at 87% full, and AutoBalance jobs will not balance its data out across the other nodes in the pool, even though they balance all the other nodes.
isi status shows all nodes are healthy, and no events are found.
Any ideas?
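For reference, a quick way to pull the capacity numbers behind this from the CLI (standard OneFS commands; output formats vary slightly by release):

    # Per-node used/total capacity and health
    isi status

    # Capacity per node pool (A200 pool vs H500 pool)
    isi storagepool nodepools list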



Gatto Sama
September 7th, 2023 20:52
When you have an Isilon cluster with unevenly distributed data across the nodes, and auto-balance jobs are not effectively balancing the data on one specific node, there are a few steps you can take to investigate and potentially address the issue:
- Check auto-balance settings: confirm the AutoBalance job is enabled and has been completing without errors.
- Review data distribution (file pool) policies.
- Check data placement rules.
- Review data usage on the affected node.
- Review data mobility policies.
- Run a manual balance job: use the isi job jobs start AutoBalance command to initiate a balance pass (see the sketch below).
- Check for errors or alerts.
- Monitor data growth.
- Contact Isilon Support if the imbalance persists.
Remember that data balancing in an Isilon cluster can take time, and it's important to monitor the situation periodically to ensure that the data distribution remains balanced across all nodes.
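A minimal sketch of the manual-balance and error-check steps (job and event commands as in recent OneFS releases; confirm the exact job names with isi job types list on your cluster):

    # Kick off a manual rebalance pass; AutoBalanceLin is the variant used
    # when file metadata lives on SSD, as it scans by LIN instead of by drive
    isi job jobs start AutoBalance

    # Confirm the job is actually running and not paused or waiting
    isi job jobs list

    # Check for errors or alerts that could explain the stuck node
    isi event events list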
Phil.Lam
September 11th, 2023 18:21
@ssmith2075, what does the default File Pool Policy say? Is the H500 tier supposed to hold hot data, with files moving to the A200 after x amount of time?
ssmith2075
September 12th, 2023 11:56
@Phil.Lam this is the only policy in place. Its SSD strategy for both data and snapshots is set to:
Use SSDs for metadata read acceleration (Recommended)
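For completeness, the whole default policy, including its storage targets, can be dumped from the CLI (commands from the isi filepool namespace in OneFS 8.x/9.x):

    # Show the default file pool policy and its data/snapshot targets
    isi filepool default-policy view

    # List any other file pool policies that might override it
    isi filepool policies list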
Phil.Lam
September 19th, 2023 00:20
@Gatto Sama, point the default storage target to the H500 tier (hot data) and then use a file pool policy to tier colder files down to the A200 (cold data).
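A sketch of that setup, assuming illustrative node pool names h500_pool and a200_pool and a 90-day accessed-time cutoff (the filter flags vary by release; check isi filepool policies create --help for the exact syntax, and note that accessed-time filters need atime tracking enabled):

    # Land new writes on the H500 tier by default
    isi filepool default-policy modify --data-storage-target h500_pool

    # Move files not accessed for 90 days down to the A200 tier
    isi filepool policies create tier_to_a200 \
        --data-storage-target a200_pool \
        --data-ssd-strategy metadata \
        --begin-filter --accessed-time=90D --operator=gt --end-filter

    # File pool policies are applied by the SmartPools job
    isi job jobs start SmartPools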
RocketMan_NYC
October 13th, 2024 05:34
@ssmith2075 did you ever find the root cause of this? Running OneFS 9.4.0.17, we have a pool with 8 A3000 nodes, and while all of them are at 66-69%, one is at 84%, with some disks on that node at 99% (5 of its 22 disks). I am running AutoBalanceLin in the hope of getting back to an even pool, but seeing your comment that the node stayed unbalanced even after AutoBalance jobs is worrying me. Anyway, I will keep you posted on my results; any feedback is appreciated.
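While the rebalance runs, its progress can be tracked with the standard job-engine commands:

    # Confirm AutoBalanceLin is running and note its numeric job ID
    isi job jobs list

    # Detailed per-phase progress (use the ID from the list output)
    isi job jobs view <job-id>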
RocketMan_NYC
October 13th, 2024 15:28
@ssmith2075 Turns out we had node 45 in a read-only state due to a BBU issue, and Dell explained that this can prevent pending deletes on the node, hence the uneven capacity. To reclaim the space on the node, we are now running a Collect job, which will be followed by SnapshotDelete, after which we can run AutoBalance.
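For future readers, the recovery sequence described above looks roughly like this from the CLI (job names per OneFS 9.x; isi batterystatus availability depends on the node platform):

    # A read-only node typically shows an R health flag here
    isi status

    # Check the NVRAM battery (BBU) state that forced the node read-only
    isi batterystatus list

    # Once the BBU is fixed: reclaim orphaned blocks, purge pending
    # snapshot deletes, then rebalance (run one job at a time, in this order)
    isi job jobs start Collect
    isi job jobs start SnapshotDelete
    isi job jobs start AutoBalance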