PowerEdge: Node Cannot Join an Existing Linux Cluster

Summary: When you try to join a node to an existing Linux cluster, you receive the messages "WARNING: csync2 run failed - some files may not be sync'd" and "ERROR: cluster.join: [Errno 2] No such file or directory: '/etc/corosync/corosync.conf'".


Symptoms

A Linux cluster was initialized on node01 using the crm cluster init command, and the cluster came up successfully:
node01:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node01 (version 2.1.2+20211124.ada5c3b36-150400.4.14.9-2.1.2+20211124.ada5c3b36) - partition with quorum
  * Last updated: Thu Oct 26 13:44:44 2023
  * Last change:  Thu Oct 26 13:27:44 2023 by root via crm_node on node01
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node01 ]

Full List of Resources:
  * No resources
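For reference, the cluster on node01 was bootstrapped with the crm cluster init command mentioned above. Run without arguments, it configures the first node interactively (a minimal sketch; the exact prompts depend on the crmsh version):

node01:~ # crm cluster init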

When node02 attempts to join the cluster, we get a warning that the csync2 run failed, and the join then fails with an error:
node02:~ # crm cluster join
INFO: Join This Node to Cluster:
  You will be asked for the IP address of an existing node, from which
  configuration will be copied.  If you have not already configured
  passwordless ssh between nodes, you will be prompted for the root
  password of the existing node.

IP address or hostname of existing node (e.g.: 192.168.1.1) []node01
INFO: The user 'hacluster' will have the login shell configuration changed to /bin/bash
Continue (y/n)? y
INFO: Generating SSH key for hacluster
INFO: Configuring SSH passwordless with hacluster@node01
INFO: BEGIN Configuring csync2

WARNING: csync2 run failed - some files may not be sync'd
INFO: END Configuring csync2
INFO: Merging known_hosts
INFO: BEGIN Probing for new partitions
INFO: END Probing for new partitions
ERROR: cluster.join: [Errno 2] No such file or directory: '/etc/corosync/corosync.conf'
We then checked the csync2 status from node01:
node01:~ # csync2 -x
Peer did provide a wrong SSL X509 cetrificate.
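If more diagnostic detail is needed, csync2 can be run with verbose output to show the peer connections and the files it tries to synchronize (the -v flag is a standard csync2 option; shown here as a diagnostic sketch):

node01:~ # csync2 -xv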
The normal procedure to remedy this problem is to run 'csync2-rm-ssl-cert $PEERNAME' to remove the old entries.
However, the following error was seen:
node01:~ # csync2-rm-ssl-cert node2
Certificate for 'node2' not in local database.
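To check which peer names actually have certificates stored on node01, the x509_cert table can be queried directly with sqlite3, using the same database file as in the Resolution below (a sketch assuming the SLES 15 .db3 layout; this helps explain why csync2-rm-ssl-cert reports the certificate as not in the local database):

node01:~ # echo "SELECT peername FROM x509_cert;" | sqlite3 /var/lib/csync2/$(echo $HOSTNAME | tr '[:upper:]' '[:lower:]').db3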

Cause

An incorrect (stale) SSL X509 certificate for the peer in the local csync2 database.

Resolution

Use the following command to delete the old entry (NOTE: for SLES 12, VERSION=3):

# echo "DELETE FROM x509_cert WHERE peername='HOST2';" |sqlite${VERSION} /var/lib/csync2/$(echo $HOSTNAME | tr [:upper:] [:lower:]).db${VERSION}

In our example:

node01:~ # echo "DELETE FROM x509_cert WHERE peername='node02';" | sqlite3 /var/lib/csync2/$(echo $HOSTNAME | tr '[:upper:]' '[:lower:]').db3
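To confirm that the stale entry is gone, the same pattern can be used as a verification step (an empty result means no certificate remains stored for node02):

node01:~ # echo "SELECT peername FROM x509_cert WHERE peername='node02';" | sqlite3 /var/lib/csync2/$(echo $HOSTNAME | tr '[:upper:]' '[:lower:]').db3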

Rejoining the cluster from node02 now succeeds:

node02:~ # crm cluster join
INFO: Join This Node to Cluster:
  You will be asked for the IP address of an existing node, from which
  configuration will be copied.  If you have not already configured
  passwordless ssh between nodes, you will be prompted for the root
  password of the existing node.

IP address or hostname of existing node (e.g.: 192.168.1.1) []node01
INFO: BEGIN Configuring csync2
INFO: END Configuring csync2
INFO: Merging known_hosts
INFO: BEGIN Probing for new partitions
INFO: END Probing for new partitions
Address for ring0 [192.168.XXX.XXX]192.168.XXX.XXX
INFO: Hawk cluster interface is now running. To see cluster status, open:
INFO:   https://192.168.XXX.XXX:7630/
INFO: Log in with username 'hacluster', password 'XXXXX'
WARNING: You should change the hacluster password to something more secure!
INFO: BEGIN Waiting for cluster
..                                                                      
INFO: END Waiting for cluster
INFO: BEGIN Reloading cluster configuration
INFO: END Reloading cluster configuration
INFO: Done (log saved to /var/log/crmsh/crmsh.log)
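As the join output warns, change the default hacluster password to something more secure (standard Linux command):

node02:~ # passwd hacluster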


Verify the cluster status:

node02:~ # crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node01 (version 2.1.2+20211124.ada5c3b36-150400.4.14.9-2.1.2+20211124.ada5c3b36) - partition with quorum
  * Last updated: Thu Oct 26 14:10:32 2023
  * Last change:  Thu Oct 26 14:05:22 2023 by hacluster via crmd on node01
  * 2 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ node01 node02 ]

Full List of Resources:
  * No resources
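As a final sanity check, the file whose absence caused the original join failure should now exist on node02:

node02:~ # ls -l /etc/corosync/corosync.conf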

Affected Products

Red Hat Enterprise Linux Version 7, Red Hat Enterprise Linux Version 8, Red Hat Enterprise Linux Version 9, SUSE Linux Enterprise Server 15

Products

PowerEdge XR2, PowerEdge C6420, PowerEdge C6520, PowerEdge C6525, PowerEdge C6615, PowerEdge C6620, PowerEdge M640, PowerEdge M640 (for PE VRTX), PowerEdge MX5016s, PowerEdge MX740C, PowerEdge MX750c, PowerEdge MX760c, PowerEdge MX840C, PowerEdge R240, PowerEdge R250, PowerEdge R260, PowerEdge R340, PowerEdge R350, PowerEdge R360, PowerEdge R440, PowerEdge R450, PowerEdge R540, PowerEdge R550, PowerEdge R640, PowerEdge R6415, PowerEdge R650, PowerEdge R650xs, PowerEdge R6515, PowerEdge R6525, PowerEdge R660, PowerEdge R660xs, PowerEdge R6615, PowerEdge R6625, PowerEdge R740, PowerEdge R740XD, PowerEdge R740XD2, PowerEdge R7415, PowerEdge R7425, PowerEdge R750, PowerEdge R750XA, PowerEdge R750xs, PowerEdge R7515, PowerEdge R7525, PowerEdge R760, PowerEdge R760XA, PowerEdge R760xd2, PowerEdge R760xs, PowerEdge R7615, PowerEdge R7625, PowerEdge R840, PowerEdge R860, PowerEdge R940, PowerEdge R940xa, PowerEdge R960, PowerEdge XE2420, PowerEdge XE7420, PowerEdge XE7440, PowerEdge XE8545, PowerEdge XE8640, PowerEdge XE9640, PowerEdge XE9680, PowerEdge XR11, PowerEdge XR12, PowerEdge XR4510c, PowerEdge XR4520c, PowerEdge XR5610, PowerEdge XR7620, PowerEdge XR8610t, PowerEdge XR8620t ...
Article Properties
Article Number: 000218952
Article Type: Solution
Last Modified: 03 Jan 2025
Version:  2