

February 10th, 2015 06:00

Network Port Usage

I want to make full use of the network ports on my X200.  Can I use the 10GbE ports to uplink to the core network and the 1GbE ports to directly attach an Avid Media Composer PC?

450 Posts

February 10th, 2015 07:00

Mbun,

     You mentioned directly connecting a node to an Avid Media Composer.  Can you do this?  Yes, but connecting one node to one PC exposes your workflow to the failure of that single node.


This question is usually asked with one of two goals in mind.

If your aim is better performance:

The idea that adding network interfaces increases throughput makes sense from a total-bandwidth-to-the-node standpoint, but when you look at it from the perspective of how much I/O the node can actually push, it doesn't help.  Most X200 clusters can't saturate even a single 10GbE link; how close you get depends on a number of things (the protocol in use, the number of nodes in the cluster, the MTU, the size of the files in question, and the SmartCache settings).  Even using bonnie++ or iperf you'd be hard-pressed to push the network interfaces to the point of strain.  So my most general response to this question is that if you're seeking better performance from X200s, extra network interfaces aren't the best route to chase.

Look instead at the items listed above.  Are you using a Mac and trying to use SMB?  In general, NFSv3 will be faster.  Are you doing large file reads?  If so, a SmartCache setting of streaming on those larger files could be very helpful, because it uses a more aggressive prefetch algorithm.  Do you have jumbo frames enabled end-to-end?  The point is to tune your performance by understanding your workflow.
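Tools like iperf exercise the raw network path, but it can also help to time an actual read through the protocol you care about, before and after each tuning change.  Below is a minimal, unofficial Python sketch for timing a large sequential read from a client against a mounted export; the mount path and file name are made-up placeholders, and you would want to clear or bypass the client's page cache between runs so you measure the cluster rather than local RAM.

```python
# Minimal sketch: time a large sequential read from a client against a mounted
# export (e.g. NFS) to compare throughput before/after a tuning change such as
# enabling jumbo frames or changing the file's prefetch/SmartCache policy.
# The path below is a hypothetical placeholder, not a real mount point.
import time

def sequential_read_mbps(path, block_size=1024 * 1024):
    """Read the whole file in block_size chunks and return MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    # Hypothetical file on an NFS mount of the cluster.
    print("%.1f MB/s" % sequential_read_mbps("/mnt/isilon/large_media_file.mxf"))
```

Run it a few times and average the results; a single run can be skewed by caching on either end.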


If your goal is instead to expose the cluster to an isolated network:

This is fairly common in single-application environments such as Media and Entertainment (since you mentioned Avid above), where you want the cluster on the same dedicated media subnet as the machines doing rendering or transcoding, or in big data, where you want compute and storage as physically and logically close as possible.  At the same time, with both of these workflows you still need to be able to manage the cluster, have it send events home, and perhaps use it to store other, more general-purpose NAS data.  So in that vein, my guidance would be:

1. Connect your fastest interfaces to wherever you will get the most load.  It wouldn't make a lot of sense to use the 10GbE interfaces for 10% of the traffic and the 1GbE interfaces for 90% of it.  If your Avid Media Composer is going to generate a ton of traffic, can it connect via 10GbE to the same core switch in the same subnet?  The one extra hop might be worth it.

2. Never connect both the 10GbE and 1GbE interfaces to the same subnet.  This can cause issues where traffic comes in on the 10GbE interface but leaves via the 1GbE interface.

3. No node can have more than one active gateway at a time.  So if your second set of network connections is in a contained subnet and you don't need that second gateway, great: configure the second subnet but don't give it a gateway.  Otherwise, split the load up across nodes, so perhaps 3 nodes connected to subnet0 and 3 nodes connected to subnet1.  This changes slightly with OneFS 7.2, which adds support for SBR (source-based routing).  This is a global feature that, when turned on, causes the cluster to track which router a given packet came through and send the reply back along the same path.  Otherwise the default behavior is to make outbound routing decisions based on the routing table (see the sketch below).
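To make point 3 concrete, here is a toy Python sketch (not OneFS code) of the two behaviors: destination-based routing consults only the routing table, so a reply to a remote client always exits via the single default gateway, while source-based routing simply sends traffic back out the path it arrived on.  The routing-table entries and interface names are invented for illustration.

```python
# Conceptual sketch only: why replies can leave an unexpected interface by
# default, and what source-based routing (SBR) changes.
import ipaddress

# Hypothetical routing table: (destination network, next hop, interface)
ROUTES = [
    (ipaddress.ip_network("10.10.0.0/24"), None, "10gige-1"),      # directly attached 10GbE subnet
    (ipaddress.ip_network("192.168.50.0/24"), None, "ext-1"),      # directly attached 1GbE subnet
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.50.1", "ext-1"),  # the single default gateway
]

def destination_based(dst_ip):
    """Default behavior: longest-prefix match on the destination only."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [r for r in ROUTES if dst in r[0]]
    return max(matches, key=lambda r: r[0].prefixlen)[2]

def source_based(dst_ip, ingress_iface):
    """SBR-style behavior: the reply leaves via the path the request came in on."""
    return ingress_iface

# A client on a remote subnet reached the cluster through the 10GbE router,
# but the destination-based reply follows the default gateway out the 1GbE side:
print(destination_based("172.16.1.5"))          # -> 'ext-1'
print(source_based("172.16.1.5", "10gige-1"))   # -> '10gige-1'
```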

Andy Chung has a great blog post on this subject, with a much better explanation than my brief one:

Routing and Isilon, how to get from A to B and back again

Hopefully that answers your question.  Please let us know.

Thanks,

Chris Klosterman

Senior SA EMC Isilon Offer & Enablement Team

email: chris.klosterman@emc.com

twitter: @croaking

February 10th, 2015 07:00

Hello Chris, many thanks for your informative reply.  I will study it further to make sure I understand it; I don't want to implement something that runs contrary to the equipment's design.  Thanks also for referencing Andy's document, which helps with the explanation.

Thanks again,

Mike Buncic

Manager, Network Services

TVO

122 Posts

February 10th, 2015 07:00

Hello,

Yes, it can be done by creating network pools: one pool can contain all the 10GbE interfaces and a second pool the 1GbE interfaces.  Create a SmartConnect zone name for each pool, register the zone names in DNS, and use them accordingly.
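From the client side, that just means each pool has its own DNS name, and clients pick the 10GbE or 1GbE side simply by which name they mount.  A minimal sketch of the idea, with made-up zone names (SmartConnect answers the DNS query with an IP from the matching pool):

```python
# Client-side sketch with hypothetical zone names: resolve the per-pool
# SmartConnect zone name and mount whatever address comes back.
import socket

ZONES = {
    "media": "media-10g.cluster.example.com",   # pool of 10GbE interfaces
    "general": "nas-1g.cluster.example.com",    # pool of 1GbE interfaces
}

def resolve_zone(role):
    """Look up the zone name in DNS and return the addresses it resolves to."""
    host = ZONES[role]
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

# e.g. mount the "media" address on Avid workstations and the
# "general" address on everything else.
print(resolve_zone("media"))
```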

Thanks
