Unsolved
mawi82
1 Rookie
•
37 Posts
0
January 17th, 2023 18:00
How to shutdown nodes not in cluster
So I’ve smartfailed a couple of dozen nodes. I don’t have physical access to them and it seems they stay powered on.
How do I power them off?
mawi82
1 Rookie
•
37 Posts
0
January 18th, 2023 00:00
Thanks but that’s for nodes that are part of a cluster.
The question relates to nodes that have been removed (smartfailed) from the cluster.
I can see them if I go to “Cluster management > Hardware configuration > Add a node” but I’m looking for a way to power them off without physical access.
DELL-Sam L
Moderator
•
7.7K Posts
0
January 18th, 2023 00:00
Hello mawi82,
Here is a link to a KB article that explains the steps for shutting down nodes. https://dell.to/3WjibRt
DELL-Sam L
Moderator
•
7.7K Posts
0
January 18th, 2023 00:00
Hello mawi82,
Here is the command to power off the node. You will need to open an SSH session as root to the node that you want to power off.
# shutdown -p now
NOTE: Sometimes this command fails to power down the node. On occasion, it will reboot the node instead or simply fail to work. In those situations, follow the next options in order until the node is fully powered down and remains off.
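For illustration, a minimal sketch of that sequence, assuming the node still answers on an IP you know (the address below is a placeholder, not a real value):
# ssh root@<node-ip>
# shutdown -p now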
DELL-Sam L
Moderator
•
7.7K Posts
0
January 18th, 2023 01:00
Hello mawi82,
It is best to open a support case for this. There are some support-only steps that can be tried; those commands are not available to customers.
mawi82
1 Rookie
•
37 Posts
0
January 18th, 2023 01:00
As I can see them under “add a node”, there must be some sort of communication. So I was hoping there was some way of reaching them.
I tried running “isi_dump_fabric int-a” as I assumed it would show an internal IP like the other nodes. But it just shows them as “not advertised” and “unknown”.
mawi82
1 Rookie
•
37 Posts
0
January 18th, 2023 01:00
Again, these are nodes that are not part of the cluster anymore. Hence they have no IP (at least I cannot find any), so I cannot SSH to them.
DELL-Sam L
Moderator
•
7.7K Posts
0
January 18th, 2023 01:00
Hello mawi82,
The only way to power them off is physically, if you can’t SSH to them.
Phil.Lam
3 Apprentice
•
624 Posts
0
January 23rd, 2023 10:00
@mawi82, HTH.
You can SSH to another node over the internal (backend) network. Example below, from node 2 to node 1:
(root@philler-2) Password:
Last login: Mon Jan 23 10:30:10 2023 from 10.134.138.69
Copyright (c) 2001-2022 Dell Inc. or its subsidiaries. All Rights Reserved.
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
PowerScale OneFS 9.4.0.5
philler-2#
To get the internal backend IPs:
philler-2# isi_nodes %{lnn} %{internal_a} %{internal_b}
1 128.221.152.1 128.221.153.1
2 128.221.152.2 128.221.153.2
3 128.221.152.3 128.221.153.3
philler-2# ssh 128.221.152.1
(root@128.221.152.1) Password:
Last login: Mon Nov 28 07:27:15 2022 from 10.134.209.237
Copyright (c) 2001-2022 Dell Inc. or its subsidiaries. All Rights Reserved.
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
PowerScale OneFS 9.4.0.5
philler-1#
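Putting this together with the shutdown command earlier in the thread: once you have a backend SSH session on the target node, power it off from there (a sketch; the open question in this thread is whether a smartfailed node still answers on a backend IP at all):
philler-1# shutdown -p now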
mawi82
1 Rookie
•
37 Posts
0
January 23rd, 2023 22:00
I opened a support case, but the answer was that the only way to shut them down was to do it physically.
mawi82
1 Rookie
•
37 Posts
0
January 23rd, 2023 22:00
Thanks Phil, but unfortunately that command only lists the nodes that are part of the cluster. The nodes that have been smartfailed are not listed.
Phil.Lam
3 Apprentice
•
624 Posts
0
January 24th, 2023 09:00
The newer nodes (F200, F600, F900) that have iDRAC installed can be shut down from iDRAC.
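A minimal sketch of that route, assuming the node's iDRAC port is cabled and has a reachable IP, and that the racadm utility is installed on your admin host (the address, user, and password are placeholders):
racadm -r <idrac-ip> -u <idrac-user> -p <password> serveraction powerdown
Alternatively, SSH to the iDRAC IP and run "serveraction powerdown" at the racadm prompt, or use the power controls in the iDRAC web interface.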
Phil2018
1 Rookie
•
15 Posts
0
January 30th, 2023 00:00
Could it perhaps be a solution next time to shut down the nodes first and then smartfail them?
That should also work, because a node can also break down completely, and the smartfail can always be carried out later while the node is already offline.
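A sketch of that ordering, assuming OneFS 8.x-style syntax (verify the exact smartfail command on your release; the LNN here is an example). The first command runs on the node being retired, the second from any remaining node once the target is down:
node-5# shutdown -p now
cluster-1# isi devices node smartfail --node-lnn 5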
mhorvath
5 Posts
0
November 14th, 2023 02:44
Just want to add that this is literally the only information I can find regarding excising unjoined (but still powered on and connected to the backend switch) nodes. This needs to be part of a 'tech refresh' KB for those of us out here refreshing live clusters with new nodes... there is nothing else I could find that tells us what to do after we have the cluster running on the shiny new nodes, with the old ones still powered up but not doing anything.