sluetze
2 Intern • 300 Posts
1
March 31st, 2016 23:00
It's a "fast shot", but you could use something like this:
1. Backup the original shares and permissions
isi smb shares list --format csv > shares_full.txt
isi smb shares list | sed '1d;$d' | sed '1d;$d' | awk '{print $1}' > shares_name.txt
for i in `cat shares_name.txt`; do isi smb shares permission list $i --format csv > Sharebackup_$i.txt; done
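The doubled sed '1d;$d' in the list-parsing pipe strips the table decoration that isi smb shares list prints around its output: the first pass drops the header row and the totals footer, the second drops the two dashed rule lines. You can check the parsing offline against mocked output (the layout below is a stand-in for the 7.x table style; the share names are made up):

```shell
#!/bin/sh
# Mocked output in the shape "isi smb shares list" prints: a header row,
# a dashed rule, the data rows, a closing rule, and a totals line.
# (Exact layout varies by OneFS version; this mirrors the 7.x style.)
mock_list() {
cat <<'EOF'
Share Name  Path
----------  ----
share1      /ifs/data/share1
share2      /ifs/data/share2
----------  ----
Total: 2
EOF
}
# First sed drops the header and totals line, second drops the two dashed
# rules, awk keeps only the first column (the share name).
mock_list | sed '1d;$d' | sed '1d;$d' | awk '{print $1}'
```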
2. Set shares to readonly
(Too lazy to write the script right now, but it comes down to parsing shares_name.txt, getting the permissions of each share from the backup file, and setting them to read-only.)
OR
delete the shares completely by parsing shares_name.txt and recreate them later from shares_full.txt
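A minimal sketch of the unwritten step 2, written as a dry run: it only prints one permission-modify command per share so you can review the list before executing it. The flag names follow the isi smb shares permission modify syntax of 7.x/8.x but should be verified against your OneFS version; the sample share names are made up.

```shell
#!/bin/sh
# Dry-run sketch: for each share name on stdin, print the command that would
# grant Everyone read-only. Review the output, then pipe it to sh to run it.
# Verify the flags for your version: isi smb shares permission modify --help
emit_readonly_cmds() {
  while read -r share; do
    echo "isi smb shares permission modify $share --wellknown=Everyone --permission-type=allow --permission=read"
  done
}
# On the cluster you would feed it the real list:
#   emit_readonly_cmds < shares_name.txt | sh
# Sample input so the sketch runs anywhere:
printf 'share1\nshare2\n' | emit_readonly_cmds
```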
3. Delete Sessions
isi_for_array -s "for i in \`isi smb sessions list | sed '1d;\$d' | sed '1d;\$d' | awk '{print \$2}'\`; do isi smb session delete-user \$i; done"
Another possibility might be to just reboot the cluster to be completely sure the sessions are down.
The commands may vary depending on the OneFS version you are using.
Furthermore, you should make sure to keep the backup files on /ifs and not local to your node (so you don't have to search for them).
It may be better to write this completely in Python or, if you have several clusters, from an external script utilizing e.g. PowerShell and the API.
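For the API route mentioned above, a rough sketch in shell: the share list is also exposed over the OneFS Platform API (HTTPS, port 8080 by default; the /platform/1/protocols/smb/shares endpoint appeared around OneFS 7.2, so verify it for your version). The live call needs a cluster, so a canned response stands in below and only the name extraction actually runs; field names and values are illustrative.

```shell
#!/bin/sh
# Live call would look roughly like (credentials and host are placeholders):
#   curl -sk -u root "https://cluster.example.com:8080/platform/1/protocols/smb/shares"
# Canned response in the PAPI JSON shape, so the parsing runs anywhere:
response='{"shares":[{"name":"share1","path":"/ifs/data/share1"},{"name":"share2","path":"/ifs/data/share2"}]}'
# Crude but dependency-free extraction of the share names from the JSON.
printf '%s' "$response" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
```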
--sluetze
Stdekart
104 Posts
1
April 1st, 2016 14:00
Effex805,
To kick everyone off SMB quickly, you can run
# isi services -a smb disable && isi_for_array killall lwio
That will remove all current SMB connections and prevent re-authentication.
Then you can run through and set the shares to read-only with a script like the one sluetze suggested.
After all shares are read-only, re-enable SMB (to allow connections again):
# isi services -a smb enable
Then all users can start accessing shares again on a read-only basis, until the offending user/client is found.
Overkill mode:
No need to even kick anyone off if you don't want to:
To lock the whole cluster into a read-only state (everything, configs included; you would not be able to make config changes to the SMB shares or anything else):
# isi_for_array "isi readonly on"
then, to turn it off (make it read/write again) once you feel comfortable:
# isi_for_array "isi readonly off"
sluetze
2 Intern • 300 Posts
0
April 3rd, 2016 23:00
Wow... I was so focused on the whole "session" thing that I didn't even think about stopping SMB.
The read-only mode is a really interesting feature. In cases where you just "panic", you can now set the cluster to read-only, create a plan, disable read-only, and run the plan.
Effex8051
3 Posts
0
April 4th, 2016 06:00
Thank you guys for your replies
arichard1
21 Posts
0
April 5th, 2016 12:00
Hi Fx,
I am wondering what tool you use to recover your data from the snapshot?
A revert is usually too large in scope, it also has to recover the client security, and Windows doesn't like files with long names!
Do you recover in the background from Unix, or in the foreground through SMB or NFS?
Effex8051
3 Posts
0
April 5th, 2016 13:00
Hello Alain!
It's really easy. We are using Windows Map Network Drive (or Net Use in cmd).
From My Computer, go to the folder you need to recover. Go to the Previous Versions tab, then open the snapshot you want to work with. Finally, go to Properties and copy the full path of the snapshot.
The next step is mapping a drive with a target pointing at the snapshot name (copied in the previous step), which includes the timestamp. Example: \\share\share1\@GMT-2016-04-05-19.55.01
Then you can browse the content like a "normal" network disk.
You can also browse the snapshots on the cluster itself: \\ipaddressofthecluster\ifs\.snapshot. This folder contains all snapshots available on the cluster.
If the paths are too long, you can map a drive to a sub-sub-sub folder...
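The @GMT token is just the snapshot timestamp reformatted. Following the pattern in the example above (dashes in the date, dots in the time), building the UNC path can be scripted; the share path and timestamp below are the ones from the example, and the net use drive letter is illustrative.

```shell
#!/bin/sh
# Build the @GMT-suffixed UNC path from a snapshot timestamp, following the
# pattern in the example above (date kept with dashes, colons in the time
# replaced by dots).
gmt_token() {
  # $1 = "YYYY-MM-DD HH:MM:SS"
  echo "$1" | awk '{ gsub(/:/, ".", $2); print "@GMT-" $1 "-" $2 }'
}
token=$(gmt_token "2016-04-05 19:55:01")
printf '%s\n' "\\\\share\\share1\\$token"
# On a Windows client you would then map it, e.g.:
#   net use S: \\share\share1\@GMT-2016-04-05-19.55.01
```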
Merci et Bonne journée,
FX Paquette
sluetze
2 Intern • 300 Posts
0
April 6th, 2016 00:00
Instead of mapping the subdrives, you could also just use robocopy or emcopy. They are also limited in path length, but the limit is 64K characters, which I haven't hit yet (and in that case you could also just map subdrives).
Since you are doing a network transfer, I would recommend doing this from a server with some good network interfaces (or, on OneFS 8, with a share that has server-side copy enabled) to speed up the process.
Rgds
--sluetze
arichard1
21 Posts
0
April 7th, 2016 12:00
Thanks, guys, for the info.
We like emcopy64. It's a fast multi-threaded copy, but it's a pain to scan for infected files and remove them first! Is there a better way to scan other than a direct "find" on the cluster CLI?
Sadly, OneFS 8 is not in our scope for now!
sluetze
2 Intern • 300 Posts
0
April 7th, 2016 23:00
how do you "scan" files with find to see if they are infected?
Peter_Sero
4 Operator • 1.2K Posts
0
April 8th, 2016 03:00
Shane
I have had "readonly" situations on single nodes before (for other reasons), but I wondered whether one can simply put all nodes, the whole cluster, into a read-only state.
OneFS uses /ifs/.ifsvar for too many things behind the scenes, so what would be the safe maneuver to bring the cluster back to normal operation?
In a simple test on a virtual single-node cluster with 8.0.0.0, the isi readonly command simply hung and timed out; I had to reboot and try again, and that was for a single-node cluster...
Cheers
-- Peter
arichard1
21 Posts
0
April 11th, 2016 11:00
Hi Sluetze,
We scan for a known file pattern like *Lucky*. We do this from the CLI, but with a large FS it takes very long. We want to find the infected zone faster.
We already cut the FS into zones to start multiple finds from different nodes (it's a bit faster), but large zones with millions of files still take a long time to scan.
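One way to squeeze more out of a single zone is to fan out one find per top-level directory inside it, a few at a time, with plain xargs. A sketch, assuming the /ifs zone root and the *Lucky* pattern from this thread; the fallback to the current directory is only there so the sketch runs outside a cluster.

```shell
#!/bin/sh
# Fan out one find(1) per top-level directory, 8 in parallel, looking for
# the known ransomware artifact pattern. ROOT is a stand-in for whichever
# zone you are scanning.
ROOT="${ROOT:-/ifs/data}"
[ -d "$ROOT" ] || ROOT=.   # fall back so the sketch runs outside the cluster
find "$ROOT" -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -P8 -I{} find {} -name '*Lucky*' -print
```

Combined with the existing zone split (one such loop per node via isi_for_array), this multiplies the number of concurrent scanners without any extra tooling.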