We have OneFS version 8.
Currently our network on Isilon is composed of 2 subnets under the groupnet as listed below:
1. Mgmt Subnet
- Mgmt IP pool
2. Prod subnet
- SMB IP pool
- NFS IP pool
Each Isilon node has 2 aggregated NICs in the Prod IP pools & 2 aggregated NICs in the Mgmt pool.
We are now setting up CloudPools. The CloudPools target is being provided by a vendor that itself uses Isilon nodes to host the cloud storage.
We want to connect directly to this vendor rather than going through the firewall.
So a new VLAN is being created that will extend to the vendor's data center.
I am thinking of adding another subnet and another IP pool for the communication to Cloudpool provider.
This will use the same aggregated NICs as the "Prod subnet" IP pool.
Also, a static route will be added to ensure that communication to the CloudPools provider goes through this subnet.
Will this work for SMB shares, since those are the ones that we will archive to CloudPools?
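For what it's worth, here is a rough sketch of what that plan might look like on the OneFS 8 CLI. Every name, VLAN ID, and address below (groupnet0, cloudpool-sub, VLAN 300, 10.20.30.0/24, the vendor network 203.0.113.0/24, the interface name 40gige-agg-1) is a placeholder, not your real values, so double-check the exact flags against your OneFS 8.x release:

```shell
# New subnet on the existing groupnet, tagged with the new VLAN
isi network subnets create groupnet0.cloudpool-sub ipv4 24 \
    --gateway 10.20.30.1 --gateway-priority 20 \
    --vlan-enabled true --vlan-id 300

# IP pool on the same aggregated NICs used by the Prod pools
isi network pools create groupnet0.cloudpool-sub.cloudpool-pool \
    --ranges 10.20.30.10-10.20.30.20 \
    --ifaces 1-4:40gige-agg-1

# Static route so traffic to the vendor's network leaves via this subnet
# (OneFS 8 route format: <network>/<prefixlen>-<gateway>)
isi network pools modify groupnet0.cloudpool-sub.cloudpool-pool \
    --add-static-routes 203.0.113.0/24-10.20.30.1
```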
Are your existing subnets using VLAN tagging? I assume they are already tagged, so it's just a matter of adding another VLAN to your port-channels on the switch and then creating a brand-new subnet/pools on Isilon.
Thanks dynamox. Yeah, our network team is adding the VLAN to the port-channels on the switch, and then we will create the new subnet/pool.
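For reference, on a Cisco-style switch the change the network team is making might look something like this. The port-channel number and VLAN ID here are made up, and syntax varies by switch vendor/OS:

```
! Assumes Po10 already trunks the existing Prod/Mgmt VLANs to the nodes
vlan 300
 name ISILON-CLOUDPOOL
interface Port-channel10
 switchport trunk allowed vlan add 300
```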
I need help with a migration task, can you help me with that?
I am using robocopy to copy from NetApp to Isilon.
When I run an incremental copy without the security switch, ACLs are missing on the target.
Can you help me resolve this issue?
This should really be posted as a separate, unrelated question. And for what it's worth, it's not surprising that your ACLs are missing on the target when you run robocopy without the security switches: by default robocopy copies only data, attributes, and timestamps, leaving behind all security descriptors, SACLs, and DACLs. Robocopy is also terribly slow for any migration with high file counts.
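For the incremental passes, a sketch along these lines should carry the ACLs across (the UNC paths and log location are placeholders). The key switches are /COPYALL (equivalent to /COPY:DATSOU) to copy security along with the data, and /SECFIX to re-apply security to files that earlier passes already copied without it:

```shell
:: Run from a Windows host with rights on both source and target
robocopy \\netapp\share \\isilon\share /E /COPYALL /SECFIX /R:1 /W:1 /LOG:C:\logs\mig.log
```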
EMC also provides the EMCopy utility, which may be useful.
Also, you can use "isi_vol_copy" to move data from a NetApp filer to an Isilon cluster.
Refer to EMC support Article Number 000304513 for this.
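As a rough sketch only (the filer name, paths, and NDMP user below are placeholders, and the exact syntax depends on your OneFS and ONTAP versions, so verify against the KB article): isi_vol_copy runs on an Isilon node and pulls data over NDMP, so NDMP must be enabled on the NetApp side first.

```shell
# Initial baseline copy
isi_vol_copy netapp01:/vol/vol1/data /ifs/data/migrated -sa ndmpuser: -full

# Later incremental copy of the same path
isi_vol_copy netapp01:/vol/vol1/data /ifs/data/migrated -sa ndmpuser: -incr
```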