
2 Intern • 20.4K Posts

July 12th, 2009 12:00

B2D and MetaLUN consideration

Hello guys,

I have a new CX4-120 with 3 x DAEs full of 1T drives and 4 x DAEs full of 450G FC drives. My customer is getting ready to deploy Microsoft DPM and would like to use the SATA drives for their backup pool. I plan on carving out the raid groups this way:

DAE0: HS ~ 8+1R5 ~ 4+1R5
DAE1: 8+1R5 ~ 5+1R5
DAE2: 4+1R5 ~ 8+1R5 ~ HS

What I was hoping to do is use the 8+1R5 raid groups exclusively for B2D and create striped MetaLUNs that span all three. If I understand correctly, with 27 spindles behind each MetaLUN I should get very good write performance: the IO size will be big, it will be sequential, and the file system will be aligned, so it should be a full stripe write when the SPs flush from memory to disk (smaller parity penalty), and with 27 spindles those flushes should be fast. Is my reasoning OK?
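For what it's worth, here is the rough arithmetic I'm basing that on (just a sketch; it assumes the default 64KB stripe element size, so the numbers shift if the raid groups are bound differently):

```python
# Rough full-stripe arithmetic for the proposed layout (assumes the default
# 64 KB stripe element size; adjust if the raid groups are bound differently).

element_kb = 64      # assumed stripe element size per data drive
data_drives = 8      # 8+1R5: eight data drives plus one parity drive
components = 3       # striped MetaLUN across three 8+1R5 groups

full_stripe_kb = element_kb * data_drives
print(f"Full stripe per 8+1R5 group: {full_stripe_kb} KB")                  # 512 KB

# A striped MetaLUN rotates across its components, so one pass over all
# 24 data spindles is three component stripes back to back.
print(f"One pass across all components: {full_stripe_kb * components} KB")  # 1536 KB

# With aligned full-stripe writes, parity is computed from data already in
# cache: one parity write per eight data writes, instead of the
# read-modify-write penalty that small random writes pay.
print(f"Parity overhead per full stripe: {1 / data_drives:.1%}")            # 12.5%
```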

I am reading "EMC CLARiiON Best Practices for Performance and Availability: 28.5" and on page 46 it says:

"In a workload characterized by sequential I/O, it is advantageous to distribute the workloads LUNs across as few RAID groups as possible"

I don't understand why this would be the case. Don't we want more spindles involved if the IO pattern is the same (read or write)?

Thank you for your feedback

34 Posts

July 12th, 2009 13:00

I am interested in what you are doing as I recently did a redesign of our backup environment. Additionally, I'll just weigh in with my thoughts knowing that some resident experts will be along shortly.

One of the requirements for striping metaLUNs is that the base LUN and component LUN(s) have to be the same size. That said, what would your design look like if you did 6+1R5 ~ 6+1R5 ~ HS per DAE? Each metaLUN being striped with half of each drive presented to the OS.

This does give you a considerably more aggressive approach to hot spares (more than best practice calls for), but to me the payoff is evenly distributed LUNs with 14 drives per logical drive.
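Just to put rough numbers on that trade-off, here's a quick sketch (it assumes 15-slot DAEs and counts raw 1 TB per drive; the summarize helper is only for illustration):

```python
# Compare raw data drives / parity / hot spares between the two layouts
# (assumes 15-slot DAEs and 1 TB raw per drive; binding and formatting
# overhead are ignored).

def summarize(name, groups, hot_spares):
    data = sum(d for d, p in groups)      # data drives across all RAID groups
    parity = sum(p for d, p in groups)    # parity drives
    print(f"{name}: {data} data drives (~{data} TB raw), "
          f"{parity} parity drives, {hot_spares} hot spares")

# Original proposal: HS + 8+1 + 4+1 / 8+1 + 5+1 / 4+1 + 8+1 + HS
summarize("8+1 / 4+1 / 5+1 layout",
          [(8, 1), (4, 1), (8, 1), (5, 1), (4, 1), (8, 1)], hot_spares=2)

# 6+1 alternative: 6+1 + 6+1 + HS in each of the three DAEs
summarize("6+1 x 6 layout", [(6, 1)] * 6, hot_spares=3)
```

So the extra hot spare costs roughly one drive's worth of raw capacity.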

One particular B2D recommendation is RAID3, but that does change the approach to your RG layout. With the RAID5 configuration, I would be interested in seeing what performance the RGs give you. I never had a RAID5 baseline, so I couldn't tell whether going to RAID3 was a performance improvement.

Hope this helps.

Bart

2 Intern • 20.4K Posts

July 12th, 2009 14:00

Hi Bart, thank you for sharing your ideas. I know it's not exactly symmetrical to have one raid group be 5+1 while everything else is either 8+1 or 4+1. I am trying to squeeze as much "usable" space as possible out of this configuration and still give it decent performance. The odd 5+1R5 group would not be used for any MetaLUNs, maybe just space for file servers, something with low IO requirements.

What did you mean by "Each metaLUN being striped with half of each drive presented to the OS"?

34 Posts

July 12th, 2009 15:00

The whitepaper I used was "EMC CLARiiON Backup Storage Solutions: Backup-to-Disk Guide with NetWorker DiskBackup Operations -- 11-2006". However, I have found one specific to my backup environment. At a quick glance, the storage side looked the same in both documents. You might search around Powerlink for one specific to your client's environment.

Hope this helps.

34 Posts

July 12th, 2009 15:00

Sorry that wasn't very clear. The picture in my head was quite clear =]

For the B2D server, assuming a 2 TB drive is being presented to the OS, it would be made up of a 2-part metaLUN with the base and component each containing half of the total drive (1 TB each). This is where, as I understand it, each component has to be the same size in order to stripe them, in addition to the other metaLUN requirements.

There is a whitepaper for B2D that was very helpful for me, but I'll have to wait until I get to work tomorrow before I can get the title.

I am learning a little bit right now about hot spots and more importantly how to prevent them. I may be way off on this, but I am under the impression that when metaLUNs are formed across uneven RGs, hot spots have the potential to occur. But like I said, I am working on a better understanding of that concept.

Bart

392 Posts

July 13th, 2009 06:00

It's a recommendation to make the RAID groups of your metaLUNs the same size and type. With multiple threads, a failure in one of the smaller RAID groups significantly affects overall meta performance.

Check your B2D I/O size. Large I/Os are going to be bypassing cache.

"In a workload characterized by sequential I/O, it is advantageous to distribute the workloads LUNs across as few RAID groups as possible"

I don't understand why this would be the case ? Don't we want to have more spindles involved if IO patters is same (read or write).


It's the next paragraph that contains the explanation. It's to keep all those RAID groups handling sequential I/O only. Many times users will create more than one LUN on a RAID group. It's not uncommon to see such a RAID group receiving a mixed I/O type, as one (or more) LUNs is characterized by random I/O and the others are sequential.

In your case, the RAID groups would be dedicated to their LUNs; all the I/O going to them would be sequential.

As an aside, you might want to look at the whitepaper "EMC® Disk Library Disk Drive Spin Down Technical Note P/N 300-007-552 REV A01 July 15, 2008". The example in the paper is for 10 x 1 TB SATA disks configured as 8+2 RAID 6 groups (RG). Each RG consists of five disks from each of two separate DAEs. This recommendation is based on placing a high value on the backup data.
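To make the trade-off behind the 8+2 suggestion concrete, here is some plain capacity/protection arithmetic (raw numbers only; rebuild times and formatting overhead are ignored):

```python
# Capacity vs. protection for the two group types being discussed
# (raw arithmetic only; ignores formatting overhead and rebuild behaviour).

configs = {
    "8+1 RAID 5": {"data": 8, "redundancy": 1},   # survives one failed drive
    "8+2 RAID 6": {"data": 8, "redundancy": 2},   # survives two failed drives
}

for name, c in configs.items():
    total = c["data"] + c["redundancy"]
    usable = c["data"] / total
    print(f"{name}: {total} drives, {usable:.1%} usable, "
          f"tolerates {c['redundancy']} failed drive(s)")
```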

2 Intern • 20.4K Posts

July 13th, 2009 06:00

Thanks John,

I don't have DPM deployed yet, so I'm not sure what the I/O size is going to be; I could not find anything on Microsoft's site either.

I plan on dedicating these raid groups to B2D only, but if there is an urgent need to provision storage from one of these RGs, say for a file server that is not active at night when backups are running, it should not cause a lot of contention, right?

I've also looked at a couple of EMC B2D papers for NetWorker and NetBackup, and they don't mention anything about using MetaLUNs. Why is that?

392 Posts

July 13th, 2009 07:00

> I plan on dedicating these raid groups to B2D only, but if there is an urgent need to provision storage from one of these RGs, say for a file server that is not active at night when backups are running, it should not cause a lot of contention, right?


Temporal I/O separation works.

> I've also looked at a couple of EMC B2D papers for NetWorker and NetBackup, and they don't mention anything about using MetaLUNs. Why is that?


Unless you have a lot of streams it's difficult to keep all the drives of a large meta active.

2 Intern • 20.4K Posts

July 13th, 2009 21:00

So I decided to run a couple of tests using IOmeter. Here are my settings:

100% sequential + 100% write
4 workers
8K request size

and here are my results:

Striped MetaLUN from 2 x 8+1R5 raid groups = 88MB/s
Striped MetaLUN from 2 x 8+1R3 raid groups = 85MB/s
Regular LUN from 1 x 8+1R5 raid group = 90MB/s

File systems were aligned to 1024, default block size.

Should I even bother with MetaLUNs for B2D?
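A quick sanity check on what those numbers imply (request size and drive counts are taken from the settings above; raw arithmetic only):

```python
# What do the IOmeter numbers above imply per data drive and in IOs per second?

request_kb = 8
results = {
    "Meta, 2 x 8+1R5 (16 data drives)": (88, 16),
    "Meta, 2 x 8+1R3 (16 data drives)": (85, 16),
    "Single LUN, 8+1R5 (8 data drives)": (90, 8),
}

for name, (mb_s, data_drives) in results.items():
    iops = mb_s * 1024 / request_kb                 # host-side 8K IOs per second
    per_drive = mb_s / data_drives
    print(f"{name}: {mb_s} MB/s -> ~{iops:,.0f} IOps, "
          f"{per_drive:.1f} MB/s per data drive")
```

The per-drive figures suggest the spindles are nowhere near saturated, so whatever is limiting throughput looks like it sits upstream of the raid groups.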

2 Intern • 5.7K Posts

July 14th, 2009 03:00

I would have expected no difference between R5 and R3, because you're writing 100% sequential and doing only full stripe writes. I did, however, expect some difference between a meta and a single LUN. Interesting to see there's none.
In older FLARE releases (R19 and before) RAID3 was recommended for ATA drives. Newer FLARE releases have an optimized RAID5, so writing to ATA is about as fast as it was on RAID3, with the advantage that random write I/Os are much faster than on RAID3.

What could cause this unexpected lack of a speed difference between the meta and the single LUN?
- Streaming data doesn't come in any faster than about 90MBps, so the full stripe writes alternate between the two components and neither component can be fed data fast enough. I guess this must be it...
- ... uhm, I can't think of anything else.

Anyone ?

2 Intern • 20.4K Posts

July 14th, 2009 08:00

CX4-120
Flare 04.28.000.5.706
3 DAEs - 37 x 1T 7200 RPM drives (DAE 3 is not full)

I tested with a 100G MetaLUN (each 50G component is in a different RG, different enclosure). There is only one bus on the CX4-120, so I can't do bus balancing.

What do you mean by 8+2?

261 Posts

July 14th, 2009 08:00

The +2 is the parity drives, so probably RAID 6, unless it's a typo.

-Ryan

392 Posts

July 14th, 2009 08:00

An 8K request size is not characteristic of backup. Re-run using 64KB. In addition, specify the bus distribution of the metas. (Each LUN should be on a separate bus.)

Five streams would be the practical maximum to use. I expect the Metas to look better as the streams increase.

You should run an 8+2 in there as an HA option. Split the RAID group across two enclosures on separate buses, five and five.

Also specify make, model of CLARiiON, FLARE rev., drive, and LUN capacity.

I like experiments. :)

392 Posts

July 14th, 2009 09:00

As Ryan wrote, a 10-disk RAID 6 (8+2).

Also, a 100GB LUN is a bit 'thin' for backup. You're getting the bandwidth of the fastest outer tracks.

Are you running off the FC front-end ports? Otherwise, you'll want to keep an eye on iSCSI port bandwidth.


2 Intern • 20.4K Posts

July 14th, 2009 09:00

RAID 6? What about that extra parity calculation overhead? I was testing with a 100G LUN because IOmeter has to "prepare" a drive and that takes forever. I am running off the FC front-end ports. Any recommendations regarding file system block size? Should I format it as 64k?

Thank you

392 Posts

July 14th, 2009 09:00

The parity calculation is a trivial amount of time.

Your block size is OK.

You might want to try running the Element Size Multiplier (ESM) for the metaLUNs at 1 or 2 as well as the default (the default is 4). This will have you going across the disks more quickly.
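As I understand the multiplier, it controls how much data lands on each metaLUN component before the stripe rotation moves to the next one (multiplier x the component's full stripe). A quick sketch, again assuming 8+1R5 components bound with the default 64KB element size:

```python
# Sketch of how the metaLUN Element Size Multiplier changes the amount of data
# written to each component before rotating to the next (my understanding of
# the multiplier; assumes 8+1R5 components bound with 64 KB stripe elements).

element_kb = 64
data_drives = 8
component_stripe_kb = element_kb * data_drives      # 512 KB full stripe

for multiplier in (1, 2, 4):                        # 4 is the default
    segment_kb = component_stripe_kb * multiplier
    label = " (default)" if multiplier == 4 else ""
    print(f"ESM {multiplier}{label}: {segment_kb} KB per component "
          f"before moving on to the next one")
```

A smaller multiplier spreads a given write across the components sooner, which is presumably what "going across the disks more quickly" refers to.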