Because of a bug affecting all previous firmware and OneFS versions, it has been suggested that we upgrade all our clusters to OneFS 184.108.40.206 (from various 8.x versions), along with the corresponding firmware. I just wanted to check whether anyone has had issues with this upgrade similar to those seen during the 7.x-to-8.x debacle.
We have upgraded from 8.x to 220.127.116.11 with no issues reported so far. It's just that I feel the job engine is running a bit slow. The FSA job still does not complete, though.
On Thu, 11 Oct 2018 at 2:21 PM, TonyHoover21 <email@example.com> wrote:
Same thing here. We have 400 NFS clients connected to the Isilon, and several times a day NFS mounts stall when a few clients do heavy reading and writing. It seems like the Isilon runs out of resources and lowers the TCP window to 0.
Isilon OneFS v8.1.0.4 B_MR_8_1_0_4_057(RELEASE)
Client OS: CentOS 7
Clients are connected via Mellanox IB-to-Ethernet proxy gateways
Because there was no patch, we wrote a script that runs on the Isilon cluster and drops the wedged sessions. You'll find the sessions stuck in the FIN_WAIT_2 state, and you can kill them with /usr/sbin/tcpdrop.
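For anyone who wants to do something similar, here's a minimal sketch of that kind of cleanup script. It is not our actual script: the sample netstat output and the NFS-port filter are assumptions. It finds sessions to port 2049 stuck in FIN_WAIT_2 and prints the matching tcpdrop commands; review them, then pipe the output to sh on the node to actually drop the sessions.

```shell
#!/bin/sh
# Sketch: find TCP sessions on the NFS port (2049) stuck in FIN_WAIT_2
# and emit a /usr/sbin/tcpdrop command for each one.
# NOTE: canned sample data is used so this can be tried off-cluster;
# on a real node, replace the function body with: netstat -an -p tcp

netstat_output() {
    cat <<'EOF'
tcp4  0  0  10.0.0.5.2049   10.0.0.21.678   FIN_WAIT_2
tcp4  0  0  10.0.0.5.2049   10.0.0.22.891   ESTABLISHED
tcp4  0  0  10.0.0.5.22     10.0.0.9.40022  FIN_WAIT_2
EOF
}

# FreeBSD netstat prints addresses as a.b.c.d.port, so split on dots and
# peel the port off the end. tcpdrop wants: laddr lport faddr fport.
cmds=$(netstat_output | awk '$NF == "FIN_WAIT_2" && $4 ~ /\.2049$/ {
    n = split($4, l, "."); m = split($5, f, ".")
    print "/usr/sbin/tcpdrop", l[1]"."l[2]"."l[3]"."l[4], l[n],
          f[1]"."f[2]"."f[3]"."f[4], f[m]
}')

printf '%s\n' "$cmds"   # review first, then pipe to sh to drop them
```

Printing the commands instead of executing them directly gives you a dry run by default, which is worth having when the alternative is blindly dropping live NFS sessions.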
Earlier releases have the problem as well; we have clearly documented it on 22.214.171.124 and 126.96.36.199.
We need to get the patch rolled out, but now is not a convenient time for such a massive undertaking (over 100 nodes). Some days I really dislike Isilon QA.
I am not sure we are seeing this problem.
We use static NFS mounts, and so far the problem looks more like memory-mapped files are one side of it, with simultaneous reads and writes from the same client node as the other. The problem also doesn't seem to spread to other clients, even when they are serviced by the same Isilon node.
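In case it helps anyone compare symptoms, here is a rough sketch of the I/O pattern we suspect (one process rewriting a file while another streams it from the same client). This is a hypothetical repro, not a confirmed trigger: on a real client you would point FILE at a file on the NFS mount, and it reproduces only the concurrent read/write pattern, not the memory-mapped access.

```shell
#!/bin/sh
# Hypothetical repro sketch: concurrent reads and writes to the same file
# from one client node. Defaults to a temp file so it can be tried anywhere;
# set FILE to a path on the NFS mount to exercise the real case.
FILE="${FILE:-$(mktemp)}"

# Writer: rewrite a 1 MiB block in a loop.
( i=0; while [ "$i" -lt 5 ]; do
      dd if=/dev/zero of="$FILE" bs=1048576 count=1 2>/dev/null
      i=$((i + 1))
  done ) &
writer=$!

# Reader: stream the same file concurrently.
( i=0; while [ "$i" -lt 5 ]; do
      dd if="$FILE" of=/dev/null bs=1048576 2>/dev/null
      i=$((i + 1))
  done ) &
reader=$!

wait "$writer" "$reader"
echo "done"
```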
Do you BTW have a link to a document describing the FIN_WAIT_2 issue?
I just discovered today that the problem we hit is covered by this patch:
But it seems to be unavailable to customers without a support contract. Given the serious nature of this problem, I simply don't understand this policy.