I have been a VMware user with Dell EqualLogics. An Isilon was purchased and hooked to a Cisco UCS blade server. The two datacenters where we rent rack space are connected to each other at 10 Gb, and the Isilon is plugged into a 10 Gb backbone, even though our Isilon (and I'm not that familiar with everything) has four 1 Gb connections that are supposed to load balance?
The biggest issue is speed. I can transfer a 3 GB ISO image to a local drive or to a server in the other datacenter and get near-gigabit speed, about 100 MB/s. That I can live with. However, if I transfer staff files that are much, much smaller in size, say 100 GB of data containing about 145,000 files, the speed is about 10-20 MB/s, or roughly 10% of a standard gigabit connection.
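For a rough sense of why the file count matters, here is a back-of-the-envelope sketch in Python; the 30 ms per-file overhead is an assumed figure for illustration, not something measured on this cluster:

```python
# Back-of-the-envelope sketch: a fixed per-file cost (open, metadata,
# ACL check, close) dominates once files get small. The 30 ms overhead
# below is an assumption, not a measured value.
total_bytes = 100e9          # ~100 GB of staff files
n_files = 145_000
wire_mb_s = 100              # ~1 Gb/s, as seen with the single 3 GB ISO
per_file_overhead_s = 0.030  # assumed per-file protocol cost

avg_file_mb = total_bytes / n_files / 1e6      # ~0.69 MB per file
wire_time_s = avg_file_mb / wire_mb_s          # ~7 ms on the wire
effective = avg_file_mb / (wire_time_s + per_file_overhead_s)
print(f"avg file {avg_file_mb:.2f} MB -> effective {effective:.1f} MB/s")
# prints roughly 19 MB/s, right in the observed 10-20 MB/s range
```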
That happens if I transfer a file from the share on the Isilon to the drive of an ESXi host whose datastore is also on the Isilon.
On backup jobs using dedupe from the 2nd datacenter to the 1st datacenter (the one with the Isilon) I get about 200 MB per minute. On a backup job from a building that only has a 100 Mbit link to the 2nd datacenter (the one with the EqualLogics) I can saturate the 100 Mbit connection.
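As a quick unit check on those backup numbers, 200 MB per minute is only about 27 Mb/s, far below even a single saturated gigabit link:

```python
# Quick unit conversion for the dedupe backup figure.
mb_per_min = 200
mb_s = mb_per_min / 60       # ~3.3 MB/s
mbit_s = mb_s * 8            # ~27 Mb/s
print(f"{mb_per_min} MB/min = {mb_s:.1f} MB/s = {mbit_s:.0f} Mb/s")
```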
Is there something not configured right on the Isilon, or is it just the nature of the beast?
Load-balancing across multiple NICs works with SyncIQ, but if you're uploading data using standard NFS (including vSphere datastores) or CIFS protocols, you're restricted by the protocol architecture to a single NIC on the storage cluster.
Having said that, what you're describing sounds pretty slow to me too. Besides dynamox's question, what protocol are you using to transfer the data, and are you mounting to a 1 Gb or 10 Gb interface on the target node?
Virtualization Solutions Architect
Isilon Storage Division
What I meant was that SyncIQ will transfer data between the source and target clusters using as many interfaces simultaneously as you specify, because it isn't based on either NFS or CIFS/SMB for data transfers.
With NFS datastores, though, SmartConnect will balance new client connections according to whichever policy you specify, and it will rebalance connections in the event of a NIC/path failure (by rebalancing IP addresses) but it won't do a round-robin-style distribution of data streams for existing connections.
iSCSI datastores can be configured to do that in vSphere, but the core NFS architecture doesn't allow for multipath connectivity. If you map an NFS datastore using a SmartConnect zone name, you're still mounting an ESXi host to a specific IP address, which in turn maps to a specific node interface on the Isilon cluster. Balancing a single NFS data stream across multiple physical paths requires pNFS, which isn't currently available in either OneFS or vSphere.
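To make that connection-pinning point concrete, here is a small Python sketch; the zone name is a made-up placeholder. Repeated DNS lookups against a SmartConnect zone may each return a different node IP, but a mount made once keeps the single IP it resolved at mount time:

```python
# Minimal sketch, assuming "isilon.example.com" is a SmartConnect zone
# name (a placeholder): each DNS lookup may hand back a different node
# IP, but an NFS mount made once stays pinned to the single IP it
# resolved at mount time.
import socket

ZONE = "isilon.example.com"  # hypothetical SmartConnect zone name

for i in range(4):
    ip = socket.gethostbyname(ZONE)  # new query, possibly a new node IP
    print(f"lookup {i + 1}: {ip}")
# An ESXi NFS datastore mounted via this name keeps whichever one IP it
# got -- a single cluster NIC carries that datastore's traffic.
```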
I do think there might be something misconfigured in the connection between the ESXi host in one data center and the Isilon storage cluster in the other. I just can't tell what it is from the information given in the original post.
Sorry for the confusion. Hope this clears things up a bit...
Are you saying that SyncIQ could be overloading node interfaces and causing performance issues for NFS clients? I'm not following how SyncIQ affects NFS performance.
I hate to answer a question with a question, but the following part of your post has me scratching my head...
"That happend if I tranfer a file from the share on the Isilon to the drive of an ESXi host whose datastore is also on the Isilon."
Is the ESXi host in the same datacenter as the Isilon storage cluster? Also, is the vSphere Client connection to the ESXi host also made from the same datacenter, or at least from the same side of the WAN?
If not, a copy operation using the vSphere Browse Datastore function is going to pull all the data across the WAN from the Isilon share to the vSphere Client side and then send it all back across the WAN again to the ESXi datastore/Isilon NFS. It also sounds like you are processing the namespace for 145,000 files over CIFS/SMB and then having ESXi process the same namespace again over NFS. Finally, I have not double-checked, but I believe vSphere 5 Client connections are SSL-encrypted by default. If so, we also have encapsulation and encryption overhead to account for.
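As a rough illustration of the double WAN traversal, a quick estimate in Python (the link speed is an assumed figure, and this ignores the per-file overhead entirely):

```python
# Rough estimate, assuming the copy is driven from a vSphere Client on
# the far side of the WAN: every byte crosses the WAN twice (cluster ->
# client, client -> datastore), so throughput is at best half the link.
DATA_GB = 100        # size of the staff-file copy
LINK_MBPS = 1000     # assumed 1 Gb/s effective path for this stream
wire_mb_s = LINK_MBPS / 8              # ~125 MB/s raw

one_pass_s = DATA_GB * 1000 / wire_mb_s
two_pass_s = 2 * one_pass_s            # client-mediated copy moves it twice
print(f"direct copy: ~{one_pass_s / 60:.0f} min")
print(f"via client:  ~{two_pass_s / 60:.0f} min (before per-file overhead)")
```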
This does not address all of your concerns, but if we can confirm the test parameters we may be able to sort more of this out.
The one test that I did was from the shared folder on the Isilon to the datastore of an ESXi host also on the same Isilon, all hooked up to the same Cisco switch.
I should also mention that I get the same slowness when transferring files from a physical drive on an ESXi host (I am on ESXi 5) to another ESXi host's physical drive at the other datacenter, totally bypassing the Isilon and the Dell EqualLogic at the other end, so maybe the issue is the networking.
There is a 40 gigabit connection between the two datacenters, and inside each one everything is connected at 10 Gb except the Isilon, which has four 1 Gb connections, and the EqualLogic on the other end, which has two 1 Gb connections. The EqualLogic does multipathing and evenly distributes the load between both. So far the EqualLogic with its iSCSI connections is far superior to the NFS-type share.
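One way to test the networking theory is a raw TCP throughput check between two hosts, which takes both storage arrays out of the picture entirely. A minimal Python sketch (the port and sizes are arbitrary; run receiver() on a machine in one datacenter and sender("<receiver-ip>") from the other):

```python
# Minimal sketch of a raw TCP throughput test (an iperf-like check),
# with no storage array in the data path.
import socket
import time

PORT = 5001        # arbitrary test port
CHUNK = 1 << 20    # 1 MiB per write
TOTAL = 1 << 30    # send 1 GiB in total

def receiver() -> None:
    srv = socket.create_server(("", PORT))
    conn, _ = srv.accept()
    n, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        n += len(data)
    dur = time.time() - start
    print(f"received {n / 1e6:.0f} MB in {dur:.1f} s "
          f"= {n / dur / 1e6:.1f} MB/s")

def sender(host: str) -> None:
    s = socket.create_connection((host, PORT))
    buf = b"\0" * CHUNK
    for _ in range(TOTAL // CHUNK):
        s.sendall(buf)
    s.close()
```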
EMC set up the Isilon, but we are renting rack space from a provider, so maybe there is some slowness in the network connections somehow.
We are still missing confirmation on the user interface used for the Windows-share-to-ESXi-datastore transfers and the ESXi-to-ESXi transfers. CLI versus vSphere Client can make a big difference as to where the transfer traffic actually goes.
Also, has anyone checked for a consistent Maximum Transmission Unit (MTU) size end to end in the same datacenter and end to end across datacenters? Isilon will support jumbo frames (9000-byte MTU), depending upon the OneFS version you are running. Isilon will also support Link Aggregation Control Protocol (LACP) in a way that can be compatible with Cisco switches. I do not recommend activating both LACP and jumbo frames unless you are on OneFS 6.5.5.x or later. THX
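For the MTU question, here is a minimal Python sketch that asks the kernel for its path-MTU estimate toward a node. IP_MTU_DISCOVER/IP_MTU are Linux-only socket options (the fallbacks are their Linux values), and the target IP is a made-up placeholder:

```python
# Minimal sketch (Linux-only socket options) to ask the kernel for its
# path-MTU estimate toward a storage node.
import socket

IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

def path_mtu(host: str, port: int = 9) -> int:
    """Return the kernel's current path-MTU estimate toward host."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Set "don't fragment" so the kernel tracks the real path MTU.
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.connect((host, port))   # picks the route; no packet sent yet
        try:
            s.send(b"\0" * 8972)  # oversized probe; may fail with EMSGSIZE
        except OSError:
            pass                  # failure already implies a smaller MTU
        return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    finally:
        s.close()

print(path_mtu("10.0.0.50"))  # hypothetical Isilon node interface IP
```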
Someone has changed my password on me so I cannot get into the unit but the version on the login page is
v184.108.40.206. The network fellow who set it up didn't seem interested in link aggregation, which I asked him about, or in setting the MTU to 9000. I am used to that on the EqualLogics, with the iSCSI MTU set to 9000 both on the switch and in ESXi, as well as disabling storm control and port spanning. We are renting these spaces, so we have to go through the provider to make these changes.
The way I was told to set up shares for the staff was directly on the Isilon NFS shares. Are there ever any issues with that, as opposed to building a server and sharing out that way?
And thanks for the help so far.