dynamox
9 Legend
20.4K Posts
August 6th, 2015 15:00
I don't blame them; you could be sending inappropriate images.
crklosterman
450 Posts
August 6th, 2015 15:00
Yep, I know. Check now, all fixed. Apparently that's what happens when you reply to the thread via email with embedded photos: they all get stripped out.
Brian_Coulombe_
1 Rookie
107 Posts
August 6th, 2015 16:00
@Chris THANK YOU! I need to read through this and see how I can do it with little impact. Awesome input, and we need more of that here!
@Dynamox Love you, man! I think in this case the DFS links have to be changed when the path is changed on the Isilon. Both the SHARE and the PATH are referenced in DFS links, unless I'm naive and missing something?
sluetze
2 Intern
300 Posts
August 6th, 2015 23:00
Brian,
regarding the DFS links:
If \\my.dfs.namespace\somelink refers to \\myisilon.domain\share$\path and you modify the "path" portion, then you have to modify the DFS links.
If your DFS only refers to \\myisilon.domain\share$, then you won't have to modify it.
When I have needed to modify structures involving DFS links, my procedure was as follows:
1. get a maintenance window
2. begin the maintenance
3. modify the DFS folder to target a "dummy path" that does not exist, thus preventing clients from accessing the structures
4. clear all SMB sessions on the cluster / filer
5. do the work you have to do
6. modify the DFS folder to target the correct path again, allowing clients to connect
This, of course, goes against the minimal-impact requirement.
For a "fast" modification of the DFS you could use tools like dfsutil and script something. If I dig deep enough I might find something in my script collection.
Edit:
DFS supports "active" and "inactive" link targets. I would prepare the DFS to contain both the current path and the new path, leaving the new path inactive. During the migration I would just switch the active and inactive targets, and clean up later (once everything works fine).
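The prepare-both-targets-then-flip idea can be modeled in a few lines. This is a toy sketch only: the class and method names are invented for illustration, and real changes would be made with dfsutil or the DFS management tools, not this code.

```python
# Toy model of a DFS folder with active/inactive targets.
# All names here (DfsFolder, flip, resolve) are illustrative, not a real DFS API.

class DfsFolder:
    """A DFS folder (link) with named UNC targets, each active or inactive."""

    def __init__(self, link):
        self.link = link
        self.targets = {}  # UNC path -> active flag

    def add_target(self, unc, active):
        self.targets[unc] = active

    def resolve(self):
        """Clients are referred only to active targets."""
        return [unc for unc, active in self.targets.items() if active]

    def flip(self, old, new):
        """The cutover: swap which target is active, in one step."""
        self.targets[old] = False
        self.targets[new] = True

folder = DfsFolder(r"\\my.dfs.namespace\somelink")
# Before migration: old path active, new path prepared but inactive.
folder.add_target(r"\\myisilon.domain\share$\oldpath", active=True)
folder.add_target(r"\\myisilon.domain\share$\newpath", active=False)

# During the migration window: one flip, then clean up the old target later.
folder.flip(r"\\myisilon.domain\share$\oldpath",
            r"\\myisilon.domain\share$\newpath")
print(folder.resolve())  # only the new path is handed to clients
```

The point of preparing the inactive target ahead of time is that the cutover itself is reduced to a single state change rather than a delete-and-recreate.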
Regards
--Steffen
Peter_Sero
4 Operator
1.2K Posts
August 7th, 2015 03:00
The difference with NFS is that a server-side path is used for mounting rather than a share name. Moving the export's top-level directory around doesn't affect connected clients immediately (since files are, in effect, referenced by LINs). That means clients that missed the boat on updating the mount path will find out the hard way, much later, when the server-side path change may already have been "forgotten"...
So a big caveat for NFS: double-check that all clients pick up the path change, to avoid mount failures out of the blue later.
(automount can be your friend.)
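To illustrate the automount hint: with autofs, clients mount through a map, so a server-side path change means editing one map entry rather than chasing every client's fstab. The paths, server name, and mount options below are made up for the example.

```
# /etc/auto.master -- hand the /mnt/isilon directory to the map below
# (illustrative paths; adjust to your environment)
/mnt/isilon  /etc/auto.isilon

# /etc/auto.isilon -- one line per export: key, options, location.
# When the export's server-side path moves, only this line changes.
data  -fstype=nfs,vers=3  myisilon.domain:/ifs/newpath
```

Clients then access /mnt/isilon/data, and autofs mounts the current location on demand; stale mounts age out on their own.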
Cheers
-- Peter
Brian_Coulombe_
1 Rookie
107 Posts
August 7th, 2015 04:00
@sluetze:
You are 100% correct, and that is exactly where the issue comes in. The DFS links are set up so that the share name refers to the share path. So if we change the path on the Isilon side, the DFS link will refer to a path that does not exist. We would have to change the DFS links whenever we change a path, and that is too much of a headache to make this feasible. My stance right now is to go forward with the new naming standard, and as we upgrade the hardware we will implement the new standards.
Really good info here from everyone. I am using Chris K's info going forward, along with this info, to show why we cannot do this now. It has to apply to new deployments going forward.
sluetze
2 Intern
300 Posts
August 7th, 2015 04:00
Brian,
here is a rough sketch of a method you could use to do it seamlessly:
1. create shares without the "path" portion
2. add these shares as targets for the DFS folders (they both point at the same data, so no problem)
3. disable the targets with the path portion in DFS (users will now only use the shares without the path portion)
4. wait for the client DFS cache to flush, to make sure they no longer use the shares with the path portion (depends on your config)
5. migrate the data using Chris' method, modifying every share you created / need
6. modify the targets with the path portion in DFS (delete and re-create) and enable them
7. delete the DFS targets without the path portion that you used for the migration (or keep them, as you like)
Please note it is (AFAIK) against best practice to create "subshares" underneath other shares; this can get quite confusing regarding the combination of NTFS and share permissions.
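The reason the ordering above is seamless is that at every step there is always at least one enabled DFS target serving the data. A quick simulation can check that property; the target names and step labels below are invented for illustration, and real changes would be done with dfsutil or the DFS tools.

```python
# Sketch verifying the "always at least one enabled target" property of the
# 7-step plan above. Pure simulation; target names are illustrative.

steps = [
    ("create and add path-less shares",    {"add": ["share_nopath"]}),     # steps 1-2
    ("disable path-based targets",         {"disable": ["share_path"]}),   # steps 3-4
    ("migrate the data",                   {}),                            # step 5
    ("recreate and enable path targets",   {"enable": ["share_path"]}),    # step 6
    ("remove the migration targets",       {"remove": ["share_nopath"]}),  # step 7
]

enabled = {"share_path"}  # starting state: the original path-based target
history = []
for name, ops in steps:
    enabled |= set(ops.get("add", []))
    enabled -= set(ops.get("disable", []))
    enabled |= set(ops.get("enable", []))
    enabled -= set(ops.get("remove", []))
    history.append((name, set(enabled)))
    # The whole point of the ordering: clients always have a live target.
    assert enabled, f"outage at step: {name}"

print("no step left clients without an enabled target")
```

Reordering the steps (for example, disabling the old targets before adding the new ones) makes the assertion fire, which is exactly the outage the plan is designed to avoid.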
Regards
-- Steffen
carlilek
2 Intern
205 Posts
August 7th, 2015 05:00
Hi Steffen,
We have a cluster that has existed since 2008 and does not have any of the original hardware in it (including the InfiniBand switches!).
Our process uses SmartPools heavily, and if we're doing a total replacement of node types, we do steps 1, 2b, and 3. We never do 2a. If we are not replacing the nodes, we do 1 and 2b, using tiers rather than node compatibility.
As for the actual topic of this thread, no wonder I didn't really get it... we use SMB, but we don't allow ACLs (POSIX only), and we don't use DFS. So...
Brian_Coulombe_
1 Rookie
107 Posts
August 7th, 2015 05:00
@sluetze:
In this case, it's not a matter of a simple hardware refresh. We have naming standards that we're trying to implement. However, we cannot do that because of the overhead required on the networking side. No matter how we try this, it's an outage, and we cannot take outages (at least lengthy ones) to make this sort of change.
So we'll just have to go with the new naming standards and let the old legacy stuff go the way of the dodo bird. I wish there were a way to automate this and make it transparent to the user, but at this point that is just not possible.
Brian_Coulombe_
1 Rookie
107 Posts
August 7th, 2015 05:00
True, but given that we're talking about TBs and TBs of data, the internal migration would take a long time to complete and cannot be done during a maintenance window. Plus, I do not have direct access to change DFS links; I'd have to depend on another team to do that, so there are multiple delays. This cannot be done overnight, nor can we complete it over a weekend. The outage time makes this unfeasible in production. I was aware we could do this but needed the confirmation that it's not simple, easy, or seamless. Without that combination, we just have to deal with the legacy stuff and move forward with the new standards.
Appreciate it!
sluetze
2 Intern
300 Posts
August 7th, 2015 05:00
A question off to the side (@all):
So when you get new hardware, do you build a new cluster, migrate the data to it (using SyncIQ or external tools), and put the new cluster into production?
Until now I have not had to refresh my Isilon hardware, but I'll have to in a few months. My migration method would be as follows:
1. add the new hardware to the old cluster
2a) if the new hardware is compatible (like X400 and X410), restripe the data
2b) if the new hardware is not compatible, migrate the data using file pool policies / SmartPools
3) SmartFail the old nodes
Or am I missing something?
Regards
-- Steffen
sluetze
2 Intern
300 Posts
August 7th, 2015 06:00
@Brian,
I understood that it is just not possible in your situation. The migration topic just came to mind.
@carlilek
thanks for the response. I was hoping for that answer.
Regards
-- Steffen
crklosterman
450 Posts
August 7th, 2015 08:00
Of course, with OneFS 7.2 or above you could use an NFS export alias of /ifs/oldpath pointing to /ifs/newpath. I'm not saying that's a good idea, but in a huge environment it accomplishes the goals of getting the data moved and of not interrupting client access.
Chris Klosterman
Email: chris.klosterman@emc.com
Advisory Solution Architect
Offer and Enablement Team
EMC²| Emerging Technologies Division