PowerEdge: Node not able to join existing Linux Cluster
Summary: When attempting to join a node to an existing Linux cluster, the join fails with "WARNING: csync2 run failed - some files may not be sync'd" and "ERROR: cluster.join: [Errno 2] No such file or directory: '/etc/corosync/corosync.conf'" messages ...
Symptoms
A Linux cluster was created on node01 using the crm cluster init command, and the cluster came up successfully. The cluster status on node01 shows a healthy single-node cluster:
node01:~ # crm status
Cluster Summary:
* Stack: corosync
* Current DC: node01 (version 2.1.2+20211124.ada5c3b36-150400.4.14.9-2.1.2+20211124.ada5c3b36) - partition with quorum
* Last updated: Thu Oct 26 13:44:44 2023
* Last change: Thu Oct 26 13:27:44 2023 by root via crm_node on node01
* 1 node configured
* 0 resource instances configured
Node List:
* Online: [ node01 ]
Full List of Resources:
* No resources
When node02 attempts to join the cluster, a csync2 warning is raised and the join fails with an error:
node02:~ # crm cluster join
INFO: Join This Node to Cluster:
  You will be asked for the IP address of an existing node, from which configuration will be copied. If you have not already configured passwordless ssh between nodes, you will be prompted for the root password of the existing node.
IP address or hostname of existing node (e.g.: 192.168.1.1) []node01
INFO: The user 'hacluster' will have the login shell configuration changed to /bin/bash
Continue (y/n)? y
INFO: Generating SSH key for hacluster
INFO: Configuring SSH passwordless with hacluster@node01
INFO: BEGIN Configuring csync2
WARNING: csync2 run failed - some files may not be sync'd
INFO: END Configuring csync2
INFO: Merging known_hosts
INFO: BEGIN Probing for new partitions
INFO: END Probing for new partitions
ERROR: cluster.join: [Errno 2] No such file or directory: '/etc/corosync/corosync.conf'
Then we checked the csync2 status from node01:
node01:~ # csync2 -x
Peer did provide a wrong SSL X509 cetrificate.
The normal procedure to remedy this issue is to run `csync2-rm-ssl-cert $PEERNAME` to remove the old entries. However, the error below could be seen:
node01:~ # csync2-rm-ssl-cert node2
Certificate for 'node2' not in local database.
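Before deleting anything, it can help to list the peer names actually stored in the local csync2 database, since the failed removal above suggests the certificate is recorded under a different name than 'node2'. This is a quick check, assuming the default database location under /var/lib/csync2 and the same sqlite/VERSION convention used in the Resolution below:
node01:~ # sqlite3 /var/lib/csync2/$(echo $HOSTNAME | tr '[:upper:]' '[:lower:]').db3 "SELECT peername FROM x509_cert;"
The peername value returned here is the one to use in the DELETE statement shown in the Resolution.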
Cause
A stale or incorrect SSL X509 certificate entry for the joining node is present in the csync2 database on the existing node (node01).
Resolution
Use the following command to delete the old entry (NOTE: for SLES 12 and later, VERSION=3):
# echo "DELETE FROM x509_cert WHERE peername='HOST2';" |sqlite${VERSION} /var/lib/csync2/$(echo $HOSTNAME | tr [:upper:] [:lower:]).db${VERSION}
In our example:
node01: # echo "DELETE FROM x509_cert WHERE peername='node02';" |sqlite3 /var/lib/csync2/$(echo $HOSTNAME | tr [:upper:] [:lower:]).db3
Rejoining the cluster from node02 now succeeds:
node02:~ # crm cluster join
INFO: Join This Node to Cluster:
  You will be asked for the IP address of an existing node, from which configuration will be copied. If you have not already configured passwordless ssh between nodes, you will be prompted for the root password of the existing node.
IP address or hostname of existing node (e.g.: 192.168.1.1) []node01
INFO: BEGIN Configuring csync2
INFO: END Configuring csync2
INFO: Merging known_hosts
INFO: BEGIN Probing for new partitions
INFO: END Probing for new partitions
Address for ring0 [192.168.XXX.XXX]192.168.XXX.XXX
INFO: Hawk cluster interface is now running. To see cluster status, open:
INFO:   https://192.168.XXX.XXX:7630/
INFO:   Log in with username 'hacluster', password 'XXXXX'
WARNING: You should change the hacluster password to something more secure!
INFO: BEGIN Waiting for cluster ..
INFO: END Waiting for cluster
INFO: BEGIN Reloading cluster configuration
INFO: END Reloading cluster configuration
INFO: Done (log saved to /var/log/crmsh/crmsh.log)
Check the cluster status:
node02:~ # crm status
Cluster Summary:
* Stack: corosync
* Current DC: node01 (version 2.1.2+20211124.ada5c3b36-150400.4.14.9-2.1.2+20211124.ada5c3b36) - partition with quorum
* Last updated: Thu Oct 26 14:10:32 2023
* Last change: Thu Oct 26 14:05:22 2023 by hacluster via crmd on node01
* 2 nodes configured
* 0 resource instances configured
Node List:
* Online: [ node01 node02 ]
Full List of Resources:
* No resources
Affected Products
Red Hat Enterprise Linux Version 7, Red Hat Enterprise Linux Version 8, Red Hat Enterprise Linux Version 9, SUSE Linux Enterprise Server 15
Products
PowerEdge XR2, PowerEdge C6420, PowerEdge C6520, PowerEdge C6525, PowerEdge C6615, PowerEdge C6620, PowerEdge M640, PowerEdge M640 (for PE VRTX), PowerEdge MX5016s, PowerEdge MX740C, PowerEdge MX750c, PowerEdge MX760c, PowerEdge MX840C, PowerEdge R240, PowerEdge R250, PowerEdge R260, PowerEdge R340, PowerEdge R350, PowerEdge R360, PowerEdge R440, PowerEdge R450, PowerEdge R540, PowerEdge R550, PowerEdge R640, PowerEdge R6415, PowerEdge R650, PowerEdge R650xs, PowerEdge R6515, PowerEdge R6525, PowerEdge R660, PowerEdge R660xs, PowerEdge R6615, PowerEdge R6625, PowerEdge R740, PowerEdge R740XD, PowerEdge R740XD2, PowerEdge R7415, PowerEdge R7425, PowerEdge R750, PowerEdge R750XA, PowerEdge R750xs, PowerEdge R7515, PowerEdge R7525, PowerEdge R760, PowerEdge R760XA, PowerEdge R760xd2, PowerEdge R760xs, PowerEdge R7615, PowerEdge R7625, PowerEdge R840, PowerEdge R860, PowerEdge R940, PowerEdge R940xa, PowerEdge R960, PowerEdge XE2420, PowerEdge XE7420, PowerEdge XE7440, PowerEdge XE8545, PowerEdge XE8640, PowerEdge XE9640, PowerEdge XE9680, PowerEdge XR11, PowerEdge XR12, PowerEdge XR4510c, PowerEdge XR4520c, PowerEdge XR5610, PowerEdge XR7620, PowerEdge XR8610t, PowerEdge XR8620t
...
Article Properties
Article Number: 000218952
Article Type: Solution
Last Modified: 03 Jan 2025
Version: 2