
February 13th, 2013 07:00

Some basic questions - before Isilon training

1. Is CTA a better way?
2. Our EMC / Isilon installer says we need to implement DNS delegation. I've looked for EMC documentation on that but haven't come up with anything. Can someone point me to where that would be?
3. Our EMC / Isilon installer keeps asking us for a full class C subnet for a 6-node X400 installation. Is that really needed?
4. Duplication of shares: we currently have multiple CIFS shares (\\ServerA\users$, \\ServerB\users$, etc.). Are multiple CIFS shares (at the Isilon level) going to be required?
5. Lastly, has anyone had any issues when migrating many CIFS names from multiple Celerras to an Isilon cluster and using CNAMEs to direct the traffic to the correct CIFS share?

2 Intern


20.4K Posts

February 13th, 2013 08:00

You can learn about DNS delegation and SmartConnect configuration from Jase's blog:

http://www.jasemccarty.com/blog/?p=2131
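Once the delegation is in place, a quick sanity check is to resolve the SmartConnect zone name a few times and watch the answers rotate across node IPs. A minimal Python sketch, assuming a hypothetical zone name of isilon.example.com (substitute your own; local DNS caching can mask the rotation):

    import socket

    # Each lookup of the SmartConnect zone name gets delegated to the
    # cluster's SmartConnect service IP, which answers with a node IP
    # chosen by the balancing policy (round robin by default), so
    # repeated queries should cycle through the pool's addresses.
    for _ in range(5):
        print(socket.gethostbyname("isilon.example.com"))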

Last I heard, EMC started using their own private 128.x.x.x network for the internal subnet (InfiniBand switches).

I had to create multiple CNAMEs when I migrated customers from Celerra to Isilon, because many of them had hard-coded the names in some database or spreadsheet.

2 Intern


20.4K Posts

February 13th, 2013 11:00

James,

You don't think they are asking for a /24 subnet for the internal network? When we deployed our cluster two years ago, the reseller asked that we provide two subnets for the InfiniBand switches.

30 Posts

February 13th, 2013 11:00

Our EMC / Isilon installer keeps asking us for a full class C subnet for a 6-node X400 installation. Is that really needed?

Depends on how you plan to use it. SmartConnect static pools require one IP address per node interface. Assuming 2x1Gb and 2x10Gb interfaces per X400 node in your cluster, that's a total of 24 interfaces. If you set up a pool with 6 interface members, you'll need 6 unique addresses for that pool. Pretty straightforward.

Dynamic pools can be set up one of three ways. If the availability of unique IP addresses isn't an issue, the general recommendation is to assign N*(N-1) addresses for optimal failover performance, where N is the number of interfaces (not nodes) in the pool. If you have a 6-interface dynamic pool, then you'd apply 6*(6-1)=30 addresses to a given pool. That way, if/when one of the pool's interfaces fails, SmartConnect can distribute its 5 addresses (and their workloads) evenly to the remaining 5 active interfaces in the pool. The downside to this approach is you'll burn through IP addresses pretty quickly at that pace.

If you don't have that many IP addresses available upfront, then you have two remaining options for dynamic pools:

1. Allocate one less IP address to a given pool than there are member interfaces. For the 6-member dynamic pool above, instead of assigning 30 addresses, you'd assign only 5. That will leave one interface idle until there's a failure event on one of the active interfaces. At that point, the failed interface's IP address and all its client connections will fail over to the idle interface. Since the total number of active connections in the pool remains the same after failover as it was before, none of the pool's client connections should see any performance degradation. That's the upside to this approach. The downside is you'll be deliberately limiting the pool's network connectivity under normal operations in order to maintain reserve connections against failure events.

2. Allocate one IP address per interface to the dynamic pool. For a 6-interface pool, assign 6 IP addresses. That way, you're using all the available network connectivity and bandwidth under normal operations. The downside to this approach is that if/when one of the pool's interfaces fails, its lone IP address will fail over to one of the remaining interfaces in the pool, bringing all its client connections with it. In other words, one of the remaining 5 interfaces in the pool will be doing the work of two, so all its connections may see significant performance degradation until the failed interface is restored and the pool's connections rebalanced.
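To put numbers on all of the above in one place, here's a small arithmetic sketch (Python, illustrative only) for the 6-interface example pool:

    # SmartConnect pool sizing, per the rules described above.
    n = 6  # interfaces in the pool

    static_pool = n            # static pool: one IP per member interface
    dyn_optimal = n * (n - 1)  # dynamic, optimal failover: N*(N-1)
    dyn_standby = n - 1        # dynamic, one interface held idle in reserve
    dyn_minimal = n            # dynamic, one IP per interface

    print(static_pool, dyn_optimal, dyn_standby, dyn_minimal)  # 6 30 5 6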

Besides Jase McCarty's awesome blog entries on SmartConnect, you can also download this white paper...:

http://simple.isilon.com/doc-viewer/1806/smartconnect-osmartconnect-optimize-scale-out-storage-performance-and-availability.pdf

...but since the Isilon website is being ported to www.emc.com/isilon, I don't know how long the paper can be downloaded from the above location.

Hope this helps.

30 Posts

February 13th, 2013 12:00

Unless there are massive expansion plans for that 6-node X400 cluster, a full /24 subnet for the internal network seems excessive. You could do up to 126 nodes, with a redundant IB subnet, on a single class C address pool by using a /25 netmask.
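If you want to double-check that /25 math, Python's ipaddress module makes it a one-liner (the range below is just a placeholder):

    import ipaddress

    # A /25 holds 128 addresses; subtract network and broadcast = 126 usable.
    net = ipaddress.ip_network("192.168.1.0/25")
    print(net.num_addresses - 2)  # 126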

My point was that it's surprisingly easy to burn through a full class C subnet's worth of addresses even on a 6-node cluster, if there are enough SC dynamic pools configured with full failover capabilities.

Incidentally, I checked with the internal mid-tier SE crew, and a lot of them ARE starting to use 128.x.y.z addresses for the internal (IB) network(s), since 192.168.x.y and 10.x.y.z-based subnets are increasingly likely to conflict with existing customer network ranges.

2 Intern


20.4K Posts

February 13th, 2013 13:00

I have 3 zones per subnet (NFS, SMB, SMB for Macs), so I am burning through IPs pretty fast.

2 Intern


20.4K Posts

February 13th, 2013 13:00

It's just easier to dedicate consecutive ranges of IPs to SmartConnect zones. For example, I have 2 zones per subnet:

Zone 1 IP pool for SMB - 10.224.4.11-17 (I have 7 nodes in the cluster)

Zone 2 IP pool for NFS - 10.224.4.50-71 (each node has 3 IPs from this range)

This will give me plenty of room for growth; anytime I add a node, I will need to make sure I have one IP for my SMB zone and 3 IPs for my NFS zone.
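A trivial sketch of that per-node IP cost, if it helps with planning (hypothetical node count; zone costs taken from the layout above):

    # IPs consumed per node for each SmartConnect zone.
    ips_per_node = {"SMB": 1, "NFS": 3}
    nodes = 8  # e.g. after adding an 8th node

    for zone, cost in ips_per_node.items():
        print(f"{zone} zone needs {nodes * cost} IPs total")
    # SMB zone needs 8 IPs total / NFS zone needs 24 IPs total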

306 Posts

February 13th, 2013 13:00

James, EMC is now using the 128.x.x.x addresses for the EMC internal network configuration for the cluster.

Dynamox, I was under the impression that EMC was asking us to provide a full class C range. When pushed on this, he responded that it was what they wanted, but not required. And BTW, I did find the Jase McCarty website. Still learning... Cannot wait for some training and seat time on this product.

February 16th, 2013 01:00

The internal subnets being used are the same as used on a Celerra/VNX unified array:

128.221.252.0/24 => Celerra/VNX eth0 / Isilon int-a

128.221.253.0/24 => Celerra/VNX eth2 / Isilon int-b (if second IB switch is purchased)

128.221.254.0/24 => Celerra/VNX eth1 / Isilon failover (if second IB switch is purchased)

128.221.0.0/16 is owned by EMC, so this will (well... should) guarantee that no one else uses it, and therefore prevents conflicts, as someone already noted. There was a time when the Celerra shipped with its internal subnets assigned from a 192.168 network.
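For what it's worth, it's easy to confirm that those internal defaults all fall inside EMC's registered block:

    import ipaddress

    emc_block = ipaddress.ip_network("128.221.0.0/16")
    for cidr in ("128.221.252.0/24", "128.221.253.0/24", "128.221.254.0/24"):
        # subnet_of() is available in Python 3.7+
        print(cidr, ipaddress.ip_network(cidr).subnet_of(emc_block))  # all True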
