
February 25th, 2014 01:00

Automatically getting IP addresses from each node in a cluster / gathering logfiles from Isilon

Hi everybody,

I am writing a script to gather log files from an Isilon cluster. My problem is obtaining one IP address from each node in the cluster automatically. Because of SmartConnect the nodes' IP addresses are not fixed and change from time to time.

In my current script the user is queried for the last two digits of each node's IP address. In a cluster with 4 nodes that is still practicable: the user has to open the web administration interface, look up the IP addresses and provide them to the script.

Is there a way to gather a valid IP address from each node with a simple SSH command?

To establish the connection, automatically accept the server's RSA fingerprint, and copy the logfiles to the local computer, I use plink and pscp in a simple Windows batch script. Before I start the script I have to establish a VPN connection to the cluster.

Here is an excerpt from my batch script:

REM Set the basis of the IP address

set IP=172.10.10.

...

REM User input for Node1

ECHO Please enter the last two digits of the IP address of node1:

set /p IP1=

set IPNODE1=%IP%%IP1%

...

REM User input for Node2

ECHO Please enter the last two digits of the IP address of node2:

set /p IP2=

set IPNODE2=%IP%%IP2%

...

REM Accept the rsa keyfile automatically

ECHO yes | plink.exe -ssh -P 22 root@%IPNODE1% exit

ECHO yes | plink.exe -ssh -P 22 root@%IPNODE2% exit

...

ECHO Get the vsftp logfiles from every node

mkdir c:\temp\logs\node1

pscp.exe -r -q -batch -pw "PASSWORD" root@%IPNODE1%:/var/log/vsftpd.log c:\temp\logs\node1\

mkdir c:\temp\logs\node2

pscp.exe -r -q -batch -pw "PASSWORD" root@%IPNODE2%:/var/log/vsftpd.log c:\temp\logs\node2\

...

I would like to make the IP part of the script simpler and more user-friendly. The best solution would be a script that runs in the background without any user interaction. I know there is isi_for_array, but I have found no documentation about it or its possible subcommands and parameters.

And the very best solution would be if I could bundle all needed files (putty.exe, plink.exe, pscp.exe, getlogfile.bat, ...) into one self-running file, e.g. an *.exe.

Any ideas?

Best regards

Philipp

1.2K Posts

February 25th, 2014 02:00

A couple of things:


# isi networks list interfaces

(6.x, 7.x)

# isi_for_array --help

(this one might be useful here)

Have you considered setting up an extra pool of statically assigned addresses yet?

Log messages can be replicated to an external log server by the built-in log system. Also, others have reported that they soft-link the logfiles to some place under /ifs and export this to the log-processing host.

hth

-- Peter



1.2K Posts

February 25th, 2014 06:00

To echo Peter's sentiment - it's far, far easier to dump your Isilon logs to a syslog collector, such as Splunk.  For most uses, the free Splunk instance is ample to capture your logs.  Plus, an aggregator makes searching for messages and correlating them to network timestamps much easier (in one case, one cluster was in Central Time Zone and the other was in Eastern Time Zone).

Let us know how it goes!

Karl

2 Intern • 467 Posts

February 25th, 2014 22:00

I'd echo what Peter said.  I find it very useful to have a subnet of statically assigned IP addresses for my cluster.  I use that to check the health of SMB connections and report on that.  It helps to know which node is actually having a problem.

Also I have all my logging centralized under /ifs using symlinks. I started with the smb audit log when my isi_for_array command became too cumbersome for searching for specific user authentications.  It's worked well for me in the absence of a central syslog server.

1 Rookie • 107 Posts

February 26th, 2014 00:00

Thank you all for the hints and recommendations.

I think we do not have the option to set up an additional syslog server to collect and monitor log files. The cluster is used in an on-air AV broadcasting environment.

For a very special third-party system, FTP access to the cluster is fixed to a specific folder and restricted to it. That was done more than 10 months ago. They don't make use of an FTP configuration file as described in emc14000584; instead, every (FTP) user is restricted to that specific folder. Because of that it is not possible to change the path over FTP and collect all log files that way. I think the FTP configuration on the Isilon was a "quick-and-dirty" solution to get it up and running quickly. Maybe they will do a clean FTP configuration in the future.

Maybe they are willing to reserve some further IP addresses for setting up a static pool. But I don't know.

Anyway - I think I will follow the idea of creating symbolic links of each node /var/log/ to /ifs/data/.../logs/nodex/

Everybody who is able and has the rights to mount the /ifs/ folder can easily access and copy the log files to a local machine. Some log files are really big and you cannot open them directly over a VPN connection, so you have to copy the files to your local machine first.

It is not recommended to create and use folders/files directly under /ifs/. Does that rule also apply to log files?

I would like to use e.g. /ifs/logs/nodex for the symbolic links of each log file folder.

1.2K Posts

February 26th, 2014 02:00

Philipp:

one should avoid excessive amounts of files/dirs directly in /ifs, so do not expose it to the end users. A moderate number of dirs, rather static and controlled by admins, is okay, I've been told by Isilon.

Cheers

-- Peter

2 Intern • 467 Posts

February 26th, 2014 08:00

You could do this to get an IP address from each node...

isi_for_array -s "ifconfig |grep inet |grep broadcast |grep -v 192.168 |head -1" |awk '{print $1 $3}'

that will return nodename:firstip
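Building on that one-liner, a sketch of how its `nodename:ip` output could drive the copy step from the batch script in the original question. The here-document stands in for real cluster output, and the loop only prints the commands it would run; the `root@` login and pscp flags are carried over from the original script, not verified against any particular pscp version.

```shell
#!/bin/sh
# Hypothetical glue: turn "nodename:ip" lines (the format produced by the
# isi_for_array one-liner above) into one mkdir and one pscp command per node.
while IFS=: read -r node ip; do
    # printf avoids echo's backslash handling in some shells
    printf '%s\n' "mkdir c:\\temp\\logs\\$node"
    printf '%s\n' "pscp.exe -q -batch root@$ip:/var/log/vsftpd.log c:\\temp\\logs\\$node\\"
done <<'EOF'
node1:172.10.10.11
node2:172.10.10.12
EOF
```

In practice you would pipe the isi_for_array output into the loop instead of the here-document, and run the printed commands instead of echoing them.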

1.2K Posts

March 4th, 2014 02:00

My understanding was that the link goes in the other direction:

mv /var/log /var/log.local-old

mkdir /ifs/logfiles/node1

ln -s /ifs/logfiles/node1 /var/log

So the actual log files will live under /ifs.

You could also choose to do this for selected logfiles only, so that /var/log still exists on /var, in case it would be needed right before /ifs gets mounted at boot time...

-- Peter
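A minimal end-to-end sketch of those steps, run here against a scratch directory so nothing real is touched. On an actual node the two variables would be /var/log and /ifs/logfiles/nodeN (one target per node), with the boot-time caveat above in mind.

```shell
#!/bin/sh
# Dry run of the relocation in a temporary directory; the two variables
# stand in for the real per-node paths.
scratch=$(mktemp -d)
LOGDIR="$scratch/var/log"               # stands in for /var/log
IFSDIR="$scratch/ifs/logfiles/node1"    # stands in for /ifs/logfiles/node1
mkdir -p "$LOGDIR"                      # simulate the existing log directory

mv "$LOGDIR" "$LOGDIR.local-old"        # keep the original directory as a fallback
mkdir -p "$IFSDIR"                      # per-node target under /ifs
ln -s "$IFSDIR" "$LOGDIR"               # /var/log now points into /ifs
```

With the link in this direction, anything written to /var/log lands under /ifs, which is what makes the files reachable from an /ifs mount.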

1 Rookie • 107 Posts

March 4th, 2014 02:00

Ok, I tried to create a symbolic link to the logfile folder but failed. I was able to create the link, but I cannot access the symlinked folder from Windows. I connected to node1 and ran:

ln -s /var/log /ifs/logfiles/node1

The link was created and I can use it through SSH. But when I mount the Isilon share on my Windows machine to collect the logfiles through the symbolic link, I get an 'access denied'. I know there is a security setting in Samba (follow symlinks = yes/no), but Isilon does not use the normal smb service and there is no smb.conf.

What do I have to change to use symbolic links on an SMB mount from Windows?

2 Intern • 467 Posts

March 5th, 2014 15:00

I've done this for the smb audit log.  What you need to do is restart syslog after /ifs is mounted; otherwise you will run into issues.  To restart syslog after the mounting of /ifs, you can use an rc script.

This is the script we use, from some KB article somewhere. It was designed for the smb audit file, so it may need tweaking... But I don't think so.

---begin---

#!/bin/sh

# PROVIDE: local_audit

# REQUIRE: isi_mount_ifs syslogd

. /etc/rc.subr

name="local_audit"

start_cmd="echo 'send HUP to syslogd ...'; killall -HUP syslogd"

stop_cmd="/usr/bin/true"

load_rc_config $name

run_rc_command "$1"

/usr/bin/logger -i -p ifs.err "syslogd restarted successfully."

---end---

That goes into /etc/rc.d on each node in the cluster.  Put the file at /ifs/path/locallog and run "isi_for_array -sq 'install /ifs/path/locallog /etc/rc.d'".

Then kill syslog with "isi_for_array -s 'killall -HUP syslogd'".
