
September 29th, 2014 12:00

Internal infiniband speed of a X200 cluster?

Hey!

We've got a 6-node Isilon cluster of X200 nodes, all of them individually connected to a 10Gb HP switch. Now I'm wondering: what speed does the InfiniBand between the nodes have? The data sheet I found says dual DDR InfiniBand, but what Gb/s speed does that translate into?

Put another way: if, theoretically, we were to have full network load on all 6 of the 10Gb ports on the switch going to the Isilon cluster, how much throughput can the cluster handle? Of course I understand it depends on how we access files, etc., but a ballpark figure would be nice...

Cheers,

SebastianH


September 29th, 2014 12:00

The front end doesn't affect the backend traffic, so you are running 20Gb on the backend.

-Deontray Jones

Systems Engineer

Aerospace and Defense

(571) 269-7394


September 29th, 2014 13:00

A clarification.  Nearly all X200 nodes have dual IB - so that means 2xDDR, which is 2x20Gb/sec.  There are very few (and should be zero, truth be told) nodes with a single IB card. 

So, there is 40 Gb/sec of IB bandwidth available on the backend of each X200.  Compare this with the 22Gb/sec (2x10+2x1) frontend Ethernet bandwidth.

The backend of any Isilon cluster is not a bottleneck in terms of overall system design.  In fact, the more recent nodes (A100, S210, X410) have 2xQDR, which is 2x40Gb = 80Gb/sec on the backend. 
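The backend-vs-frontend comparison above is simple arithmetic; here is a quick sketch of it (purely illustrative, using the signalling rates quoted in this thread — real achievable throughput will be lower):

```python
# Back-of-the-envelope per-node bandwidth on an X200,
# using the link speeds quoted above (signalling rates).

IB_DDR_GBPS = 20                    # one 4x DDR InfiniBand link, Gb/s
backend_gbps = 2 * IB_DDR_GBPS      # dual DDR IB = 40 Gb/s
frontend_gbps = 2 * 10 + 2 * 1      # 2x10GbE + 2x1GbE = 22 Gb/s

print(f"backend:  {backend_gbps} Gb/s")
print(f"frontend: {frontend_gbps} Gb/s")
```

So each X200 has nearly twice as much backend IB bandwidth as frontend Ethernet bandwidth, which is why the backend is not the bottleneck.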


September 29th, 2014 13:00

You'll run out of spindles before you run out of back end network. Assuming you've got 10 HDDs in each of the nodes, you end up with around 38400 Mbit/sec max bandwidth, assuming 80MB/sec/disk. Of course, caching plays into that too. In testing, the most we've seen out of our S200s on single stream large file is ~600MB/sec/node. 7.1.1 with L3 caching should improve that vastly (Isilon claimed at EMCWorld they could saturate a 10Gb link with the right conditions), but of course ymmv. On my 7 node 10000X-SSD test cluster running 7.1.1 with L3 on, the most I've seen single node single stream is ~480MB/sec.
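The spindle math above can be sketched out explicitly (assumptions as stated: 6 nodes, 10 HDDs per node, ~80MB/sec streaming per disk; caching and protection overhead will move the real number):

```python
# Rough spindle-limited throughput for the 6-node X200 cluster
# discussed in this thread. All inputs are the thread's own
# ballpark assumptions, not measured values.

nodes = 6
disks_per_node = 10
mb_per_sec_per_disk = 80            # streaming, per HDD

total_mb_s = nodes * disks_per_node * mb_per_sec_per_disk   # 4800 MB/s
total_mbit_s = total_mb_s * 8                               # 38400 Mbit/s

print(f"{total_mb_s} MB/s = {total_mbit_s} Mbit/s")
```

That ~38.4Gbit/sec of raw disk bandwidth is below the 40Gb/sec of backend IB per node, which is the sense in which you run out of spindles before you run out of back-end network.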
