A colleague of mine ordered several PS6110 (40TB, 15K rpm SAS) and PS6510 (500TB, 7,200 rpm SATA) enclosures to host storage for a third-party project with ESX 5 hosts. However, it seems we may have architected the solution incorrectly and ordered the wrong kit. Basically we have a single EQL SAN containing the 6110 and 6510 enclosures, with the volumes presented to ESX as either datastores or RDMs and then attached to the relevant VMs.
However, the third-party app requires/expects the PS6510 storage to be presented to the virtual machines from a "NAS head", so that all Windows VMs can access it directly. What is the best way to recover from this scenario? Could we take the 6510s out of the current array and attach them to a separate EQL NAS server/controller?
For an integrated solution, you could buy an EqualLogic FS7600. This is a NAS front-end that uses the PS-series SAN for its storage, and its UI is fully integrated into Group Manager.
Another option is to use a (Microsoft or Linux) server or server-cluster that connects to the SAN for storage and then shares out this storage.
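As a rough sketch of that second option, a Linux server could log into an EqualLogic volume over iSCSI and re-export it to the VMs over NFS. The group IP, target IQN, device name, and export path below are placeholders, not values from this thread:

```shell
# Discover targets on the EQL group IP (placeholder address) and log in
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume -p 10.0.0.10:3260 --login

# Format and mount the new block device (the device name will vary per host)
mkfs.xfs /dev/sdb
mkdir -p /export/eqlvol
mount /dev/sdb /export/eqlvol

# Share it out to the client subnet, e.g. via NFS
echo "/export/eqlvol 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra
```

For Windows VMs an SMB share (Samba on Linux, or a Windows server using the built-in iSCSI initiator plus a normal file share) would be the usual equivalent. Bear in mind a single server doing this becomes a bottleneck and single point of failure, which is exactly what the clustered FS76x0 controllers are designed to avoid.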
If you have 10GbE EQLs all over the place, the FS7610 would be the right choice. But to be clear, an FS76x0 doesn't come with its own storage: it provides front-end networking for Windows and Linux clients and uses its back-end ports to connect to an EXISTING EqualLogic group, which provides the "NAS reserve".
We set up a PS6110XV and a PS4110E together with an FS7610 for a customer last November.
Thank you for the reply. Could we just add the FS7600 to our existing EQL group, which has 5 x 6110s and 2 x 6510s?
Yes, you can add a 1GbE FS7600 to a 10GbE EQL group; you only need the right switch between them. There wasn't a huge price gap between the FS7600 and the FS7610 (8x 10GbE SFP+) in our case, though. Is there a reason why you chose the 1GbE model instead of the 10GbE one?
A "single" FS76x0 comes as a 2 node (controller) unit which can serve up to 509TB. If you add a 2nd FS76x0 and form a "cluster" all the number are doubled which mean 1000TB. Of cource you need a lot of PS65x0E on the backend :).
Just a side note: in our case, Symantec Backup Exec's NDMP wasn't able to restore the data, while Dell Quest NetVault worked like a charm. You may want to think about how you will back up your data and test the products before spending any money.
No, I was just referring to the FS76x0 class in general. All our PS arrays and switching are 10GbE, so the FS7610 looks like the best fit, and we have some spare capacity on the iSCSI switches. Thanks again for your advice, much appreciated. This is a minefield for the generalist; I have tried, without success, to get my company to invest in some storage consultancy, as our needs are huge and this area is very specialized.
What type of 10GbE connectors are on the FS7610? Are they SFP+? Also, can they be connected to more than one EQL group? Due to the member limit, it is possible that one group will not be sufficient for the storage volumes required. What happens then?
We got ours with 4x dual-port SFP+ because we use the PC8024F for switching, which is SFP+. Your older 6510s also have 'only' SFP+, whereas the newer 10GbE EQLs come with both Base-T and SFP+. So I expect you can specify the 10GbE interface type you need when you order your FS7610.
I don't think you can add an FS to more than one group. If more capacity is needed, you have to expand your EQL group by adding more arrays.
Thanks Joerg. I think we will hit the group member limit before we reach the capacity we need (1PB+).