March 13th, 2014 14:00

Isilon folder layout - /ifs/data usage

Hi All,

1. Any suggestions on an Isilon folder structure layout, and the reasoning behind it?

2. /ifs/data - is it OK to use for regular file-sharing purposes, or is it better to leave it alone for some reason?

Please add your suggestions and comments.

Thanks,

Damal

450 Posts

March 14th, 2014 07:00

Good Morning Damal,

I've seen this same question from a number of customers, and I've told them all the same thing.  There are 2 key factors to understand before you decide on a filesystem layout.

1. Understand Isilon Refresh Cycles:

  Just about every other NAS platform on the market has a refresh cycle that looks like this:

  Other Model

    In 3-5 years (depending on your depreciation schedule), you'll replace the box.  This may mean a fairly easy on-platform migration, such as using VDM mobility or vFilers, or a very complex host-based migration.  Regardless, in the end you're left with a new NAS to manage, and it has a new configuration, albeit perhaps one copied from the old array.

  Isilon Model

    Isilon hardware, depending on your business processes and needs, is most likely still refreshed in that 3-5 year bracket.  There is one key difference, however.  The way an Isilon tech refresh works is by adding new nodes to your existing cluster, then smart-failing out the existing nodes.  That's it.  Your configuration, your data, your DNS info all stay the same.  And in the context of this discussion, the important points are that the cluster name remains the same and the filesystem structure remains the same.

2. Understand Isilon DR:

  Disaster Recovery for any NAS product consists of 3 things.

  1. Sync the data

  2. Sync the configuration (meaning shares and exports)

  3. Re-direct the clients

  Syncing the data on Isilon is done by SyncIQ.

  Syncing the configuration can be done either manually or via script; contact your account team if you need assistance with this, and they can engage EMC Professional Services.

  Re-directing the clients with Isilon is done via a DNS change, and again we can leave that to a separate discussion.

  This is all important to filesystem layout because you have to think about the fact that when you failover, you want your failover target path to be exactly the same (soup-to-nuts) from source to target.

  This is critical for 2 reasons:

  1. Any script that matches your shares and exports from source to target needs a consistent path as a baseline.

  2. Mount entries for any NFS connections must have a consistent mountpoint, in the format sczonename.domain.com:/ifs/path, so that when you fail over, clients don't have to manually edit their fstab or automount entries.
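To make the second point concrete, here is a minimal sketch of what such an fstab entry might look like. The SmartConnect zone name, path, and mount options here are hypothetical examples following the sczonename.domain.com:/ifs/path format above:

```shell
# Hypothetical /etc/fstab entry following the sczonename.domain.com:/ifs/path
# convention. Because the SmartConnect zone name and the /ifs path are the
# same on the DR target, this line survives a failover unchanged; only the
# DNS resolution of the zone name changes underneath it.
fstab_line="sczonename.domain.com:/ifs/clustera/users01 /mnt/users01 nfs rw,hard 0 0"

# Sanity check: the entry references the consistent /ifs path.
echo "$fstab_line" | grep -q ':/ifs/clustera/users01 ' && echo "path consistent"
```

If the source and target clusters used different paths, every client's fstab or automount map would need editing during a failover, which is exactly what this convention avoids.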

OK, so now, with that primer out of the way, what do I recommend?

On ClusterA:

/ifs/clustera/

On ClusterB:

/ifs/clusterb/

Because the cluster name should never change, even through refreshes, you're safe there.  Also, if you were to cross-replicate the 2 examples above, you'd end up with:

On ClusterA:

/ifs/clustera/   SyncIQ'ed to clusterb

/ifs/clusterb/

On ClusterB:

/ifs/clusterb/ SyncIQ'ed to clustera

/ifs/clustera/
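The cross-replicated layout above can be sketched with plain directory operations. This is only an illustration: it uses a scratch directory in place of /ifs, and the cluster names are the example ones from this post:

```shell
# Sketch of the cross-replicated layout, using a scratch directory in place
# of /ifs. Each cluster holds its own data under a directory named after
# itself, plus a SyncIQ replica of the other cluster's data.
root=$(mktemp -d)

# clustera's view of /ifs
mkdir -p "$root/clustera-ifs/clustera"   # local data, SyncIQ source
mkdir -p "$root/clustera-ifs/clusterb"   # replica of clusterb's data

# clusterb's view of /ifs (the mirror image)
mkdir -p "$root/clusterb-ifs/clusterb"   # local data, SyncIQ source
mkdir -p "$root/clusterb-ifs/clustera"   # replica of clustera's data

ls "$root/clustera-ifs" "$root/clusterb-ifs"
```

Because the top-level directory is the cluster name, a path like /ifs/clustera means the same thing on both sides, which is what keeps shares, exports, and mounts consistent after a failover.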

  

Beyond that, I usually recommend grouping data based upon its importance to the business, so that it's easy to establish replication schedules.  For example, you may have:

/ifs/clustera/users01/ (for all normal user homedirs, protected at N+2:1, SyncIQ set to every 4 hours)

/ifs/clustera/users99/ (for all executives, protected at 2x (mirrored), SyncIQ set to every 30 minutes)

Also, if Access Zones come into play in your environment, then I would suggest adding another layer, so:

/ifs/clustername/accesszonename/

This is to help segment off data per access zone, so that it can be compartmentalized, somewhat akin to what you would do with a VDM that had its own root filesystem on a Celerra or VNX.
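The per-zone layer can be sketched the same way. Again this uses a scratch directory in place of /ifs, and the zone names ("system", "dmz") are illustrative, not prescribed:

```shell
# Sketch of the /ifs/clustername/accesszonename/ pattern, using a scratch
# directory in place of /ifs. Each access zone gets its own container
# directory, and all data shared through that zone lives beneath it.
root=$(mktemp -d)
for zone in system dmz; do
  mkdir -p "$root/clustera/$zone"
done
ls "$root/clustera"
```

Keeping each zone's data under a single container is what makes the compartmentalization auditable: one subtree per tenant, nothing intermingled.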

Last, when you build a new cluster, there are 3 default directories under /ifs as you mentioned.

/ifs/data

/ifs/home

and

/ifs/.ifsvar

It is recommended that you do not use, or modify permissions on, any of these directories.  And never modify permissions on /ifs itself.

Again, this is not a formal best practice yet, but it has a lot of thought behind it.

Thanks,

Chris Klosterman, ICSP, ICIE, CCNA, VCP

chris.klosterman@emc.com

Twitter: @croaking

Senior Solution Architect

Offer and Enablement Team

EMC Isilon

4 Posts

March 14th, 2014 07:00

Hi,

My experience is that it is better to create your own folders beneath /ifs, and leave /ifs/data for use by OneFS and EMC support when upgrading, patching, etc.

-Shane


20.4K Posts

March 14th, 2014 10:00

Chris,

This is good stuff; I wish someone had shared these tips when we bought our cluster 3 years ago.  Can you elaborate more on access zones and how they would play into the file system layout decision?  I am very familiar with VDMs and really like them; they allow me to isolate "tenants" into their own little world so I can delegate permissions to local admins.  Are Access Zones strictly for using different authentication sources?  We use Active Directory only, for CIFS and a regular NFSv3 export list.


20.4K Posts

March 14th, 2014 13:00

Thank you Chris,

#1 - OK, so this is for shops/service providers that need to support multiple Active Directory environments (with no trust), as well as shops that need to have overlapping share names. On the point of overlapping share names: how does that work with Access Zones? On Celerra/VNX, each VDM would contain a different CIFS server, so you could have multiple shares, e.g. \\cifsserver1\home and \\cifsserver2\home. So how does that work on Isilon? Can you tie an access zone to a specific SmartConnect zone name?

#2 Not following this example; isn't it the same as #1, allowing access from AD domains with no trusts?

#3 Trying to understand this example. On VNX I am dealing with actual file systems, so I know that a file system mounted under a specific VDM will not be accessible to any other VDM (whereas before, if a file system was mounted on a physical Data Mover, anyone could get to any file system simply by mapping \\physicaldatamover\c$).

Thank you for taking the time to explain this Chris !

450 Posts

March 14th, 2014 13:00

Sure dynamox,

Filesystem layout and access zones may not seem to be related at first, and I wondered if anyone would grasp onto that point when I put it in there.  Here is the basic logic:

#1 Each access zone has a totally separate group of users, by definition, because in most cases the feature is used due to the lack of a trust relationship between domains or forests. The exception is the small subset of clusters that use the feature just because of a duplicate share name.

#2 Because there is a separate group of users, the administration of that security (primarily ACLs) is usually done per disjoint domain or forest.  For the purposes of auditing, and of maintaining separation of that administration, the data needs to stay separate per access zone, not intermingled.

#3 If you take the long view of how you might design for multi-tenancy, akin to VDMs, each container (access zone) must contain all of its data within it, much like how your VDMs today are mounted at /root_vdm_1/, with all the filesystems shared or exported through that VDM mounted underneath that path.

I hope that helps give you a little insight into that recommendation.  If you want further details, you might ask your account team for an NDA roadmap presentation, if available.

Thanks,

Chris Klosterman, ICSP, ICIE, CCNA, VCP

chris.klosterman@emc.com

Twitter: @croaking

Senior Solution Architect

Offer and Enablement Team

EMC Isilon

450 Posts

March 14th, 2014 14:00

#1 - Correct, the access zone part is only for clusters with duplicate share names or in multiple untrusted domains or forests.  The way this works on Isilon today is to use a separate static SmartConnect zone and NS delegation (so that you have a new name).  That SmartConnect zone is linked to a new access zone. So an example might be:

Access Zone   Share Name   Path                        Share Display Name

System        home1        /ifs/clustera/system/home   home

DMZ           home2        /ifs/clustera/dmz/home      home

#2 This is just a continuation of the thoughts above: you might, for regulatory or audit reasons, have to keep data that is accessed and managed by different groups completely separate.

#3 Your analogy isn't lost here. Given my examples above, you could create a roughly equivalent share to get to all data, such as sharing /ifs/ as ifs$, or /ifs/clustera as c$ if you want something closer to what you see now.  But those should only be shared with administrators.


20.4K Posts

March 14th, 2014 14:00

Thank you Chris,

#1 makes perfect sense. Do you see a lot of service providers go to this model? I guess they have to, because customer 1 says "I want to have a share called 'software-depot'", and then you have customer 2 who says "I have to have the same share name."

#2 I'm lagging on this one.  Above you said that "For the purposes of auditing, and maintaining separation of that administration, the data needs to stay separate per access zone, not intermingled."  What defines that boundary?  How is the data kept "not intermingled"?  Customer data is still scattered all over the OneFS file system, right?

I think I get the whole idea of Access Zones: create isolation in terms of SmartConnect zone name, permissions, and authentication source.  I would love to see Windows MMC integration where "tenants" can connect to "their" CIFS server and manage share permissions, auditing, etc. (similar to what can be done today with VNX File).  Today I resell storage to different departments in my company; I try to give them sufficient permissions so that they can manage CIFS ACLs, but they still have to come to me to change share permissions, create new shares, etc.  I want to give those tasks back to the departmental IT groups and empower them to do anything they want in their little "world", where I am simply in charge of maintaining the infrastructure.  That's what I did on the NS80.

14 Posts

March 16th, 2014 12:00

Thank you all for posting your replies and relevant questions

14 Posts

March 16th, 2014 12:00

Chris, thank you very much for your thoughtful reply

450 Posts

March 20th, 2014 17:00

Defining the boundary, as with everything else on Isilon, is done at the folder level.  So you might keep /ifs/clustera/system and put all the data shared out through the System access zone into that container, and likewise keep /ifs/clustera/dmz and put all data shared out through the DMZ access zone in that directory.  In the end you're certainly right: it's about separating your data and planning for multi-tenancy, though as I've mentioned, it also has some large ramifications in the DR space.
