Hi, we looked at buying NetApps and Celerras. One of the main reasons we chose Celerra is the blade architecture. We bought 2 x NS80G and 2 x NS40G and wanted the boxes clustered. With Celerra you only need to buy a single unit per site (because the data movers are blades), but to cluster NetApp boxes you have to buy two separate units, so instead of 4 boxes we would have had to buy 8.
Another reason is that we bought the integrated units, which are connected to our back-end CX3-80s. We thought it would be better to have EMC all the way through, so if there was a problem with the NAS heads EMC would own the issue and there would be no finger-pointing between NetApp and EMC.
We switched from HDS to EMC CLARiiONs and must say I like Navisphere a lot more than the DAMP program that comes with HDS. I say go with the Celerras!
1) I like having a true Unix shell to manage my NAS platform. If you have an opportunity to evaluate NetApp or others, you will see that many of them offer a clunky console where the only commands you can issue are the system commands; you can't use native shell programming to automate things. For people who only use the GUI it may not be a big deal, but simple things such as creating tree quotas for 100 users are accomplished much more easily and quickly from a Unix shell.
2) For GUI users it's a very easy and intuitive interface; I remember looking at the NetApp interface and having no idea where to start just to create a file system.
What I dislike about Celerra:
1) Very slow to adopt new technologies; for example, the native file-system deduplication you see in NetApp boxes is not available on Celerra.
2) Delegation of responsibilities is lacking a bit. I have a multi-department environment where I want to delegate management of certain CIFS servers to different IT groups; I can't delegate everything just yet.
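The tree-quota point in 1) is where the shell really pays off. A minimal sketch of the idea: one plain POSIX loop emits the quota command for all 100 users at once. Note that `set_tree_quota` and the paths here are placeholders, not the real Celerra CLI; substitute your platform's actual quota command and syntax.

```shell
#!/bin/sh
# Sketch: generate one tree-quota command per user in a single loop.
# "set_tree_quota" is a made-up placeholder, NOT a real Celerra command.
count=0
i=1
while [ "$i" -le 100 ]; do
  cmd="set_tree_quota /fs1/users/user$i --soft 1G --hard 2G"
  echo "$cmd"          # pipe this into the real CLI once verified
  count=$((count + 1))
  i=$((i + 1))
done
```

In a GUI you would click through the same dialog 100 times; in the shell it's three lines plus review of the generated output before you run it.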
Being a longtime Celerra admin (8 years), I am quite familiar with the box. Recently I have had to support some NetApp filers and, like Dynamox, I don't find the NetApp GUI intuitive at all. I know a Celerra-to-NetApp comparison doesn't help you with the EVA comparison, but they are two completely different solutions. The Celerra provides CIFS, NFS, and iSCSI flexibility that an EVA can't. Also, if you need a Windows share environment, you have to front the EVA with a Windows server and use iSCSI attachment. I've never done an iSCSI implementation to a Windows cluster environment, but with 2 (or more) Data Movers you already have redundant file-server failover capability. If all you want is iSCSI, maybe the EVA is the best option, but if there is any future need for an NFS or CIFS mount, the Celerra already gives you those options.

We are an EMC shop but have always entertained solutions from other vendors (Sun, HP, NetApp, Pillar), yet we have not found a compelling reason to switch storage vendors, as support, price, and feature set (both hardware and software) have always been met by EMC.
Another thing that could help your decision is evaluating both units. EMC will be more than glad to give you an evaluation unit so you can test-drive it. I know it requires time and staff, but it would be very helpful; every product looks great in a glossy brochure, but when you get the box in house and start playing with it, that's when you see its pluses and minuses.
Let me just say that an HP EVA4400 can't hold a candle to an NS20.
The EVA just does Fibre Channel; for anything else you have to put other servers in front of it, and then you need to manage them and worry about their reliability. The Celerra NS20, on the other hand, does true native Fibre Channel through its built-in CLARiiON CX3. For a comparison of the FC part, see http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1864_emc_clariion_hp_enterprise_virt_arrays_wp_ldv.pdf
And the NS20 does iSCSI plus NFS plus CIFS natively through its Celerra Data Movers. Not just some Samba emulation or Windows Storage Server, but a full NAS product. We can in fact replace several Windows servers with an NS20 and be faster, more reliable, and more flexible.
For the SQL data you are probably best off using iSCSI because of the ease of snapshots and replication there. If you want application integration, take a look at the EMC Replication Manager software, which works nicely with SQL Server or Exchange.
And it's got a great price point too: you are getting the NAS features for not much more than a plain FC storage system.
Compared to other vendors, our Celerra Manager GUI is quite nice. Yes, being Java it takes a bit to start, but once you are there it's easy to use, with context-sensitive help, right-click mouse actions, and wizards to make it even easier.
For VMware it really depends on your environment. Most apps at your size work well over either protocol.
NFS is nice since it's got easy provisioning, expansion, replication, virtual provisioning, and so on. Also, you can back it up directly through NDMP. Even if you decide to use FC or iSCSI for your datastores, it makes a lot of sense to look at using NFS for templates and ISO images.
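To give a feel for how cheap NFS provisioning is on the ESX side: the ESX 3.x service console can attach an export as a datastore with a single `esxcfg-nas` command. The IP, export path, and label below are invented examples, and the flags are from memory, so double-check them against your ESX documentation before running anything.

```shell
#!/bin/sh
# Sketch: attach a Celerra NFS export as an ESX datastore.
# All three values below are examples, not real configuration.
DM_IP="192.168.10.20"    # Data Mover interface IP (example)
EXPORT="/iso_images"     # NFS export on the Celerra (example)
LABEL="iso_datastore"    # datastore label as it will appear in ESX (example)
cmd="esxcfg-nas -a -o $DM_IP -s $EXPORT $LABEL"
echo "$cmd"              # run this on the ESX service console
```

One line per datastore, no zoning, no HBA rescan; that is most of the provisioning argument for NFS in a nutshell.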
Fibre Channel has the advantage that FC networks are usually more reliable than Ethernet networks and can provide higher guaranteed (theoretical) performance.
iSCSI is in between: to the ESX server it's block storage like FC, but the connectivity costs are lower.
Just make sure you talk to an EMC IP Storage Specialist in your region about the VMware connectivity pros and cons and best practices. Also make sure to locate our best-practices and reference-architecture documents for VMware and SQL Server on Powerlink.
Hope it helps, Rainer. P.S.: the NS20 was just enlarged to a 90-drive maximum, so even from a capacity standpoint it's on par with the EVA.
Thanks for everyone's opinions on this topic, even Rainer's.
We are running up against that in our comparisons: the EVA starts to require a lot of equipment in front of it, while the NS20 doesn't. Drive size isn't a huge issue for us yet (we're only at 4TB), but I do still have concerns about expansion. Say we have only about 300GB of space left on the array but we want to configure a 500GB CIFS share. Can we just buy one more drive to put in the RAID5 set, or are we going to need to expand in packs of drives?
You will need to expand in "packs of drives". The standard pool configurations are 4+1 or 8+1. If you have available disk slots in an existing DAE, then just purchasing the drives will suffice; otherwise you will need to purchase a DAE as well.
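To put rough numbers on what one "pack" buys you (the drive size is an example, and real usable capacity is somewhat lower after formatting and hot-spare overhead):

```shell
#!/bin/sh
# Back-of-envelope usable capacity for one RAID5 "pack".
# A 4+1 group stores data on 4 of its 5 drives; one drive's worth
# of capacity goes to parity. Drive size below is an example.
data_drives=4
parity_drives=1
drive_gb=300
usable=$((data_drives * drive_gb))
echo "A ${data_drives}+${parity_drives} pack of ${drive_gb}GB drives adds about ${usable}GB usable"
```

So for the 500GB CIFS share in the question, one 4+1 pack of 300GB drives would comfortably cover it; a single extra drive would not.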
Thanks. Has anyone found this to be a major obstacle to the flexibility of their NS array? I suspect power and cooling have been a problem for some folks (we have noted that the NS20 we configured uses almost double the power and rack space of the EVA4400 we configured). I'm also curious whether mixed RAID groups on the array become a problem if it's housing CIFS shares, iSCSI VMware volumes, and iSCSI SQL volumes all at once. Can anyone speak to that, or do most folks have dedicated arrays for different purposes?
In terms of mixing services on the same unit (CIFS, NFS, iSCSI), the box is flexible enough that you can either let Celerra do its magic when it creates file systems (AVM) or create file systems manually if you want to isolate specific ones for performance reasons. We typically add storage one DAE at a time, and now that Celerra supports 1TB drives, that's 15TB raw capacity per DAE. Lots of storage, good density.
Yes, you would normally expand with at least a complete RAID group, e.g. 2 disks for RAID1 or 5 disks for a 4+1 RAID5.
I think technically you could expand RAID groups that are used only for SAN LUNs, but we don't encourage it since it would mean restriping that data.
The NAS and iSCSI parts use a volume manager, so you don't worry about individual LUNs; you can't grow them, but you don't need to. You just add LUNs to a storage pool on the NAS side; then, when you need a new file system or want to enlarge an existing one, you simply specify a size and a pool.
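A sketch of what that looks like from the Celerra control station. The file-system and pool names are invented, and the `nas_fs` syntax here is from memory, so verify it against the man pages on your own box before running anything.

```shell
#!/bin/sh
# Sketch: create a file system from an AVM storage pool on the Celerra CLI.
# Names and size are examples only; check "man nas_fs" for the exact syntax.
FS_NAME="cifs_share01"            # hypothetical file-system name
POOL="clar_r5_performance"        # hypothetical AVM pool name
SIZE="500G"
cmd="nas_fs -name $FS_NAME -create size=$SIZE pool=$POOL"
echo "$cmd"
```

The point is that you never specify LUNs at this level: you name a size and a pool, and the volume manager lays the file system out across the pool's LUNs for you.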
In general, as with most other vendors, you should really buy as much of the storage you will need as possible with the initial system purchase. A couple more drives adds just a fraction of the price.
Mixing RAID groups within the system isn't a problem.
The restriction is that you don't mix RAID types within a single file system, i.e. create one file system that is part RAID1 and part RAID5. We enforce that in the volume manager on the NAS side; I think the SAN side lets you do it, but it isn't recommended.
Also, SATA drives need to go into a separate shelf.
Especially with the NS20FC it's quite common to use two or three protocols. How you distribute that across the available disks is more a matter of philosophy. If you have enough disks, you can set it up so separate I/O goes to distinct RAID groups, but you can also share. It's a tradeoff between control and flexibility.
For power I don't think there is much of a difference. Of course, you have to take into account that you aren't just getting two RAID controllers; you also get two NAS blades and a management station. I think once you add the equivalent to the EVA it's a wash. Our data sheets are a bit fuzzy there; they mostly quote fully configured systems.
That's also why it's a bit bigger. The SAN part is 1U for the RAID controllers plus 1U for the SPS and 3U for each shelf holding 15 disks. Then another 1U for the NAS blades and 1U for the management station.
So a typical NS20 system with 30 disks is 10U.
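The arithmetic, spelled out (component counts taken straight from the description above):

```shell
#!/bin/sh
# Rack-unit total for a 30-disk NS20, per the component list above.
controllers=1   # RAID controllers: 1 rack unit
sps=1           # standby power supply: 1 rack unit
shelves=2       # two 15-disk shelves at 3 rack units each
blades=1        # NAS blades: 1 rack unit
mgmt=1          # management station: 1 rack unit
total=$((controllers + sps + shelves * 3 + blades + mgmt))
echo "Total: ${total} rack units"
```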
If you can wait a few days, we might even be able to knock a couple of U's off that.
Everyone has been extremely helpful, thank you. One last question, maybe geared toward the CIFS folks in particular: given that the DART OS has XPe underpinnings (even if it's just for bootstrap and memory management), has anyone had concerns, or found a good reason to get anti-virus software for Celerra? If you have, what did you get, and was it any good?
You have probably heard that from some not-so-informed competitors.
The DART code doesn't have a bit of XP in it; it's a custom-built microkernel developed entirely by EMC.
Yes, we do license CIFS from Microsoft just so that we get all the APIs, but DART is our own code.
DART has full Windows compatibility (CIFS, quotas, GPOs), but it isn't Microsoft code.
As far as on-access virus scanning goes, it works like it does for other NAS vendors. We don't run AV software directly on the NAS blades; it runs on one Windows server, or a pool of them, that the Celerra interfaces with through CAVA. We currently support at least the top five AV vendors for that.
We had two Celerras at my place, NS40s. It was a pain to perform the migration from a Windows server to CIFS (it took 6 months). We had consultants come in who basically ran out the door without completing the project. Also, the Data Movers had issues a few times and certain files became corrupt. If you buy the product, make sure you get professional services with it; doing it yourself will cause great pain. I had a bad experience from the beginning. Good luck.
Interesting. The only reliable thing I heard about AV on the Celerra was from one of the customer reference calls we had, where they mentioned it as one of the issues they encountered: they had difficulty implementing the Celerra anti-virus software with their NS40 setup. They got it running, but they struggled with it.