24 Posts
August 19th, 2008 08:00
Major N00b question
I'm actually a prospective customer looking at moving from an entry-level iSCSI SAN to possibly EMC, and I wanted to put this one out on the table. I'd appreciate it if only other customers responded (no EMC employees, please).
This seems kinda basic, but we're getting ready to purchase either an NS20 or an HP EVA4400. We're leaning toward EMC, but I wanted to ask the customers on this forum:
What do you like best about your Celerra(s)?
What's the worst thing about it/them?
Did any of you switch from HP or other vendors to EMC (or has it always been EMC in your shop)?
Just so everyone knows what kind of environment I've got: we have about 3-4 TB of data, about 25% of that on HP ESX blades, about 25% MS SQL 2000/2005 databases (on blades and 2 pizza boxes), and the rest file shares. We're thinking of taking the file shares off our file server and using CIFS on the array, and iSCSI for the VMware. I'm open to NFS, but unsure. We're also all Ethernet; we don't have high enough bandwidth needs for FC yet...
packetboy2
28 Posts
August 22nd, 2008 01:00
We looked at buying NetApps and Celerras. One of the main reasons we chose Celerras is their blade architecture. We bought 2 x NS80G and 2 x NS40G, and we wanted the boxes clustered. To do this with Celerras you only need to buy a single Celerra per site (because they are blades), but to cluster the NetApp boxes you need to buy 2 separate boxes, so instead of 4 boxes we would have had to buy 8.
Another reason is that we bought the integrated units, which are connected to our back-end CX3-80s. We thought it might be better to have EMC all the way through, so if there was a problem with the NAS heads EMC would own the issue and there would be no finger-pointing between NetApp and EMC.
We switched from HDS to EMC CLARiiONs and I must say I like Navisphere a lot more than the DAMP program that comes with HDS.
I say go with the Celerras!!
dynamox
9 Legend
20.4K Posts
August 19th, 2008 08:00
1) I like having a true unix shell that I can use to manage my NAS platform. If you have an opportunity to evaluate NetApp or others, you will see that many of them offer a clunky console where the only commands you can issue are the system commands; you can't use native shell programming to automate things. For people who only use the GUI it may not be a big deal, but simple things such as creating tree quotas for 100 users are accomplished much more easily and quickly from a unix shell.
2) For GUI users it's a very easy and intuitive interface to use. I remember looking at the NetApp interface and I had no idea where to even start to create a file system.
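To illustrate the shell-automation point, here is a minimal sketch of batch-generating tree-quota commands for a list of users. The `nas_quotas` flags and paths below are assumptions for illustration, not verified Celerra syntax, and the script only prints the commands as a reviewable dry run:

```shell
#!/bin/sh
# Hypothetical sketch: generate one tree-quota command per user.
# The nas_quotas flags are an assumption, not verified DART syntax.
gen_quota_cmds() {
  fs=$1; limit_mb=$2; shift 2
  for user in "$@"; do
    # Print rather than execute, so this is a dry run; pipe the output
    # to sh on a real Control Station to actually apply it.
    printf 'nas_quotas -edit -tree -fs %s -path /home/%s -B %s\n' \
      "$fs" "$user" "$limit_mb"
  done
}

gen_quota_cmds myfs 2048 alice bob carol
```

Scale the user list to 100 names and this is still one loop, which is exactly what a GUI-only console can't give you.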
What I dislike about Celerra:
1) Very slow to adopt new technologies; for example, the native file-system de-duplication that you see in NetApp boxes is not available in Celerra.
2) Delegation of responsibilities is lacking a bit. I have a multi-department environment where I want to delegate management of certain CIFS servers to different I.T. groups, and I can't delegate everything just yet.
jimkunysz
259 Posts
August 19th, 2008 09:00
Also, if you need a Windows share environment, you have to front the EVA with a Windows server and use iSCSI attachment. I've never done an iSCSI implementation to a Windows cluster environment, but with 2 (or more) data movers you already have redundant file-server failover capability. If all you want is iSCSI, maybe the EVA is the best option, but if there is any future need for an NFS or CIFS mount, with the Celerra those options are already available to you.
We are an EMC shop but have always entertained solutions from other vendors (Sun, HP, NetApp, Pillar); we have yet to find a compelling reason to switch storage vendors, as our support, price, and feature-set requirements (both hw and sw) have always been met by EMC.
Rainer_EMC
4 Operator
8.6K Posts
August 22nd, 2008 10:00
Let me just say that an HP EVA4400 can't hold a candle to an NS20.
The EVA just does Fibre Channel; for anything else you have to put other servers in front of it, and then you need to manage them and worry about their reliability.
The Celerra NS20, on the other hand, can do true native Fibre Channel thanks to its built-in CLARiiON CX3.
For a comparison of the FC part see http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1864_emc_clariion_hp_enterprise_virt_arrays_wp_ldv.pdf
And the NS20 does iSCSI plus NFS plus CIFS natively through its Celerra data movers.
Not just some Samba emulation or Windows Storage Server, but a full NAS product.
We can in fact replace several Windows servers with an NS20 and be faster, more reliable and more flexible.
For the SQL data you are probably best off using iSCSI because of the ease of snapshots and replication there.
If you want application integration, take a look at the EMC Replication Manager software, which works nicely with SQL Server or Exchange.
And it's got a great price point too: you are getting the NAS features for not much more than just an FC storage system.
Compared to other vendors, our Celerra Manager GUI is quite nice.
Yes, being Java it takes a bit to start, but once you are there it's easy to use, with context-sensitive help,
right-click mouse actions and wizards to make it even easier.
For VMware it really depends on your environment. Most apps at your size work well over either protocol.
NFS is nice since it's got easy provisioning, expansion, replication, virtual provisioning...
Also, you can back it up directly through NDMP.
Even if you decide to use FC or iSCSI for your datastores, it makes a lot of sense to look at using NFS for templates and ISO images.
Fibre Channel has the advantage that FC networks are usually more reliable than Ethernet networks and can provide a higher guaranteed (theoretical) performance.
iSCSI is in between: to the ESX server it's block storage like FC, but the connectivity costs are lower.
Just make sure you talk to an EMC IP Storage Specialist in your region who can walk you through the VMware connectivity pros and best practices.
Also make sure to locate our best-practices and reference-architecture documents on VMware and SQL Server on Powerlink.
hope it helps
Rainer
P.S.: the NS20 was just enlarged to a 90-drive maximum, so even from a capacity standpoint it's on par with the EVA.
ltfields
24 Posts
August 22nd, 2008 11:00
We are running up against that in our comparisons: the EVA starts to require a lot of equipment in front of it, while the NS20 doesn't. Drive size isn't a huge issue for us yet (we're only at 4 TB), but I do still have concerns about expansion. Say we only have about 300 GB of space left on the array, but we want to configure a 500 GB CIFS share. Can we just buy one more drive to put in the RAID 5 set, or are we going to need to expand in packs of drives?
jimkunysz
259 Posts
August 22nd, 2008 11:00
The standard pool configurations are 4+1 or 8+1. If you have available disk slots in an existing DAE, then just purchasing the drives will suffice; otherwise you will need to purchase a DAE as well.
jim
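A quick back-of-envelope check of what those 4+1 and 8+1 RAID 5 groups mean for usable capacity (the 300 GB drive size here is a hypothetical, chosen only to match the poster's numbers):

```shell
#!/bin/sh
# In an N+1 RAID 5 group, one drive's worth of capacity goes to parity;
# the remaining N data drives are usable.
usable_gb() {
  # $1 = data drives, $2 = parity drives (kept for readability), $3 = GB/drive
  echo $(( $1 * $3 ))
}

echo "4+1 group: $(usable_gb 4 1 300) GB usable of $(( 5 * 300 )) GB raw"
echo "8+1 group: $(usable_gb 8 1 300) GB usable of $(( 9 * 300 )) GB raw"
```

So an 8+1 group is more space-efficient per drive, at the cost of a larger minimum purchase when you expand.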
Rainer_EMC
4 Operator
8.6K Posts
August 22nd, 2008 14:00
I think technically you could expand RAID groups that are used only for SAN LUNs, but we don't encourage it since it would mean having to restripe that data.
The NAS and iSCSI side uses a volume manager, so you don't worry about individual LUNs; you can't grow them, but you don't need to.
You just add LUNs to a storage pool on the NAS side; then, when you need a new file system or to enlarge an existing one, you just specify a size and which pool.
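That pool-based workflow might look something like the following dry-run sketch. The `nas_fs` and `server_mount` invocations are written from memory of the Celerra CLI and should be treated as assumptions (which is why the script only prints the commands rather than running them):

```shell
#!/bin/sh
# Hypothetical sketch: plan a new file system carved from a storage pool.
# nas_fs / server_mount argument syntax is an assumption, not verified.
plan_fs() {
  name=$1; size=$2; pool=$3
  # Create the file system from the named pool at the requested size...
  echo "nas_fs -name $name -create size=$size pool=$pool"
  # ...then mount it on a data mover so it can be exported.
  echo "server_mount server_2 $name /$name"
}

plan_fs cifs_shares 500G clar_r5_performance
```

The point of the volume manager is visible here: you name a size and a pool, and never touch individual LUNs.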
In general, as with most other vendors, you should really buy as much of the storage you'll need as possible with the initial system purchase.
A couple more drives adds just a fraction to the price.
Rainer_EMC
4 Operator
8.6K Posts
August 22nd, 2008 14:00
The restriction is that you can't mix RAID groups within a file system, i.e. create one file system that is part RAID 1 and part RAID 5.
We enforce that in the volume manager on the NAS side. I think the SAN side lets you do it, but it isn't recommended.
Also, SATA drives need to go into a separate shelf.
Especially with the NS20FC it's quite common to use two or three protocols.
How you distribute that across the available disks is more a matter of philosophy.
If you have enough disks you can set it up so separate I/O goes to distinct RAID groups, but you can also share.
This is also a tradeoff between control and flexibility.
For power I don't think there is much of a difference. Of course you have to take into account that you aren't just getting two RAID controllers;
you also get two NAS blades and a management station.
I think once you add that to the EVA it's a wash.
Our data sheets are a bit fuzzy there; they mostly quote fully configured systems.
That's also why it's a bit bigger. The SAN part is 1U for the RAID controllers, plus 1U for the SPS and 3U for each shelf holding 15 disks.
Then another 1U for the NAS blades and 1U for the management station.
So a typical NS20 system with 30 disks is 10U.
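That rack-unit tally can be sanity-checked with a one-liner (assuming 15-disk shelves, as described above):

```shell
#!/bin/sh
# Rack-unit tally per the breakdown above: 1U RAID controllers + 1U SPS
# + 3U per 15-disk shelf + 1U NAS blades + 1U management station.
ns20_rack_units() {
  shelves=$(( $1 / 15 ))   # $1 = disk count, 15 disks per shelf
  echo $(( 1 + 1 + shelves * 3 + 1 + 1 ))
}

echo "NS20 with 30 disks: $(ns20_rack_units 30)U"   # 2 shelves -> 10U
```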
If you can wait a few days we might even be able to knock a couple of U's off.
Rainer_EMC
4 Operator
8.6K Posts
August 22nd, 2008 14:00
The DART code doesn't have a bit of XP in it; it's a custom-built microkernel that's all developed by EMC.
Yes, we do license CIFS from Microsoft, but just so that we can get all the APIs; DART is our own stuff.
DART has full Windows compatibility (CIFS, quotas, GPOs), but it isn't Microsoft code.
As far as on-access virus scanning goes, it works like other NAS vendors'. We don't run AV software directly on the NAS blades.
It runs on one server or a pool of Windows servers that the Celerra interfaces with through CAVA.
We currently support at least the top 5 AV vendors for that.