
July 23rd, 2010 14:00

Time to buy storage

And I'm throwing around some ideas...light reading for the weekend if you will...

I have some 500GB SATA II drives (a whole shelf) which have some "images" on them ("some" is an understatement - more like 1.6TB of them after being deduped) as well as some savvols. I have 9 drives allocated to the Celerra, which is roughly 3.6TB of space. I have about 1.3TB of "archive" stuff that for various reasons needs to stick around. The pool has about 400GB free.
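
A quick back-of-the-envelope check (the 8+1 R5 layout and the ~7% formatting overhead below are my assumptions, not confirmed numbers):

    # Rough sanity check on the SATA pool -- all figures approximate.
    # ASSUMPTION: the 9 allocated drives are one 8+1 RAID 5 group.
    DRIVE_GB = 500                            # SATA II drive size
    usable = 8 * DRIVE_GB * 0.93              # ~7% formatting overhead, assumed
    images, archive, free = 1600, 1300, 400   # GB, from the figures above
    savvols = usable - images - archive - free
    print(f"usable ~{usable:.0f} GB, implied savvols ~{savvols:.0f} GB")
    # -> usable ~3720 GB, savvols ~420 GB, consistent with ~400 GB free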

I have another "set" of images (these are worked on a little more frequently) on an 8+1 R5 on 300GB/10K drives.  The filesystem is nearly full and the pool is also nearly maxed out.

My main production data sits on 4+1 sets on 146GB/15K drives - home dirs, file-based databases (a lot of these), etc. This we won't touch; we have free space there. Plus, once Rainfinity is in place those pesky iTunes files will live on something slower...

So, to archive some more of the data from the second set of images to make room for more productions, I need some more space.

I also have a pending EmailXtender to SourceOne migration - we will want to store the volumes on the NAS so they get replicated. I don't think I want to put these on low-powered SATA drives. The EX archives are roughly 0.5TB and grow about 5-8GB a month.

I have two choices here - either I add some more SATA or some more 10K FC drives.

If I add some more 10K 600GB drives for the 8+1 pool, I'd move the existing images from the SATA to these drives and liberate the SATA drives as a target for Rainfinity. Any comments on using an R6 12+2 configuration? If it is the only SATA shelf I've got, then I'll need to keep a hot spare around, so I can't do a 3x R5 4+1. I'm assuming that once I get the savvols and file systems off the SATA drives I can yank those out of the config and reintroduce them as the R6...?
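
For what it's worth, the slot math on a 15-drive DAE (a sketch of the two layouts I'm weighing; the 15-slot shelf size is assumed):

    # One 15-slot DAE of 500 GB SATA drives: capacity is a wash,
    # the difference is protection and whether a hot spare fits.
    DRIVE_GB, SLOTS = 500, 15
    layouts = [
        ("12+2 R6 + hot spare",   12, 2, 1, "survives 2 failures per group"),
        ("3x (4+1) R5, no spare", 12, 3, 0, "survives 1 failure per group"),
    ]
    for name, data, parity, spare, note in layouts:
        assert data + parity + spare == SLOTS
        print(f"{name}: {data * DRIVE_GB} GB in data drives, {note}")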

If I go the SATA route with R6, the existing SATA drives won't pool with those (unless of course I bring them in as R5...) but I'll have a load of space available. I just don't like the idea of getting more slow drives, but this IS for archiving purposes... Also, if I need more high-performance storage I'll need to shuffle something onto the SATA or get another DAE. Plus I've got nowhere but SATA to move the Xtender archives.

I guess I have a third option - just put the SATA drives in a RAID 5, throw away the Rainfinities, move my production set of images to it, extend the filesystems, and call it a day.

Thoughts?

Dan

5 Practitioner • 274.2K Posts

July 27th, 2010 08:00

Hi Dan,

you have a lot of variables in this one...

The main components of any decision you make will be 1) cost, 2) speed, 3) complexity.

Some would say FC disks for high-demand apps, low-cost storage for your least accessed data, and that R5 is enough. But almost as many others would say that the more levels of redundancy you can provide the better, and that the reduced downtime in the event of any issues will offset the cost of the implementation (including hardware).

My suggestion would be to draw up a chart showing your tiered storage costs and move the various data sets into the respective columns. This will give you an overview of the value of the data and a better perspective for hardware decisions.

Also factor in integration with the existing infrastructure - your own point that "If I go the SATA route with R6, the existing SATA drives won't pool with those".

Although you have provided a lot of setup information here, there is a lot more needed to make a decision. I would suggest speaking with an implementation engineer about this: we can reference support matrices and quote from them, but implementation engineers have real-world experience of customer environments and the needs of those customers, and they will go into great detail with you (app latencies, tiered storage costs, etc.).

8.6K Posts

July 27th, 2010 16:00

Hi Dan,

most likely your SATA drives are now in the clarata_archive pool.

New SATA drives you could configure as either 6+1 R5, 4+1 R5, or 8+1 R5 if you want them to also be in clarata_archive.

Two 6+1 R5 groups plus a hot spare fill a DAE nicely.

Or do 4+2 R6, 6+2 R6, or 12+2 R6 - but then they would go into clarata_r6.

If you want to go RAID 6 and have only one ATA pool, I would create a 12+2 R6 for the new disks, move the file systems using Replicator, and reconfigure the old disks as a 6+2 R6.

Personally I think 6+1 R5 is reasonable with SATA disks if you have your alerting configured so that you know about broken disks immediately.
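
To put rough numbers on those group types (a sketch with 500 GB drives; raw data-drive capacity only, Flare/Celerra overhead not included):

    # Drive count and raw data yield per RAID group type, 500 GB drives
    group_types = {
        "4+1 R5": (4, 1), "6+1 R5": (6, 1), "8+1 R5": (8, 1),
        "4+2 R6": (4, 2), "6+2 R6": (6, 2), "12+2 R6": (12, 2),
    }
    for name, (data, parity) in group_types.items():
        print(f"{name}: {data + parity} drives, ~{data * 0.5:.1f} TB in data drives")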

Rainer

190 Posts

August 2nd, 2010 12:00

What are your thoughts on the 2TB drives? Would an R5 6+1 have an unreasonable rebuild time? The price/GB difference is negligible between the 1TB and 2TB (comparing a full shelf of 1TB drives against a shelf with 8 2TB drives - and the half-full shelf would give me some expansion down the road).
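
Rough rebuild-window math (the effective rebuild rate here is purely an assumed figure - actual rates vary a lot with rebuild priority and host load):

    # Feel for R5 rebuild exposure, 1 TB vs 2 TB SATA drives.
    # ASSUMPTION: ~20 MB/s effective rebuild rate under production load.
    REBUILD_MBPS = 20
    for size_tb in (1, 2):
        hours = size_tb * 1_000_000 / REBUILD_MBPS / 3600
        print(f"{size_tb} TB drive: ~{hours:.0f} h of degraded 6+1 R5")
    # -> ~14 h vs ~28 h with no parity protection left in the group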

Dan

147 Posts

August 3rd, 2010 13:00

Thanks - but that document is a bit dated :-)

2 Intern • 20.4K Posts

August 3rd, 2010 13:00

If you are bored (I think you are... it's 22:40 in Munich), this document has rebuild information:

https://community.emc.com/docs/DOC-6261

147 Posts

August 3rd, 2010 13:00

Personally I prefer to have at least two RAID groups, like 2x 4+1 R5. We do benefit from having multiple devices per file system, since we can queue more outstanding IOs - which can be more important than the sheer number of disks. I haven't seen rebuild times - maybe the Clariion forum has more info.

Rainer
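
To put the outstanding-IO point in numbers (the per-device queue depth is an illustrative assumption; the real value depends on the host and array settings):

    # Concurrency scales with the number of devices under a file system.
    QUEUE_DEPTH_PER_DEVICE = 32   # assumed illustrative value, stack-dependent
    for devices in (1, 2, 4):
        print(f"{devices} device(s) -> up to "
              f"{devices * QUEUE_DEPTH_PER_DEVICE} outstanding IOs")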

2 Intern • 20.4K Posts

August 3rd, 2010 14:00

That's all I've got - you are welcome to post some docs from the CSPEED website.

190 Posts

August 5th, 2010 06:00

After some deliberation, we've decided to go with 1TB 7200RPM drives in two R5 6+1 raid groups - it's the least painful as far as moving data around. I already have an R5 8+1 500GB set in this pool, and with some other gyrations I could have a 4+1 500GB set to add if this doesn't hold me over until I can go back for more.

One last question... how many LUNs do I need to bind on these RAID groups? In the past I remember there was a 2TB limit on LUN size - if this is still the case, would I bind 3 or 4, since a 6+1 yields roughly 5.4TB? I'm sure there is a doc out there that covers this - heading over to Powerlink to see if I can dig it up...
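
The arithmetic, assuming the 2TB cap still applies (the ~5.4TB usable figure is my rough estimate):

    import math

    # LUNs per 6+1 R5 group of 1 TB drives under a 2 TB LUN size cap
    usable_tb, lun_cap_tb = 5.4, 2.0
    min_luns = math.ceil(usable_tb / lun_cap_tb)   # -> 3
    balanced = min_luns + (min_luns % 2)           # even count for an SP split
    print(f"minimum {min_luns} LUNs; {balanced} gives an even SP split "
          f"at ~{usable_tb / balanced:.2f} TB each")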

Dan

296 Posts

August 5th, 2010 07:00

It's good to split the LUNs between SPs, but on later versions of DART you can also create a single LUN.

Sameer

59 Posts

August 5th, 2010 07:00

Make the number of LUNs even so you can split them between the SPs. 4 should work.

190 Posts

September 15th, 2010 08:00

Just to close the loop on this -

As stated before, I went with the 1TB drives, but I'm replacing the 500GB hot spare in my other shelf so I can have three 4+1 R5 groups instead of two 6+1's. I'll end up with the same space (12 drives' worth after RAID overhead). Given the shorter rebuild times and the common remark that the Celerra works "mo betta" with more RGs, I feel it was worth the (relatively) small extra expense.

Thanks for all the comments and words of wisdom.

Dan

8.6K Posts

September 15th, 2010 09:00

good choice
