# E20-324 Lab 7 Part 1 - Mirrorview/s Fracture Time

This is page 33 of the lab exercise:

Q. ABC Corp has 2 VNX systems used for MirrorView/S. They are connected by a 15 Mb/s dedicated link, and their data has a 2:1 compression ratio. The 4 primary mirror LUNs, each 64 GB in size, have a throughput total of 1,000 8 kB IOPs with a R/W ratio of 3:1. After the mirrors become fractured, the customer requires them to be fully synchronized within 2 hours.

What is the maximum time that the mirrors can remain fractured if the data is purely random?

**The given solution:**

LUNs are 64 GiB, therefore extent size is 128 KiB. LUNs perform a total of 250 writes/s. The link is 15 Mb/s, and data is compressible by 2:1, therefore the effective link speed is 30 Mb/s [3 MB/s]. Of this, 250 writes/s x 8 KiB = 2,000 KiB/s is used by host traffic, leaving 1,000 KiB/s available for synchronization traffic.

Q1. The LUNs are 64 GiB, which I thought was equivalent to 64 blocks, i.e. 32 KiB. Is the minimum extent size always 128 KiB?

Q2. How is 30 Mb/s converted to 3 MB/s? If you use any storage calculator to convert Mb/s to MB/s, it will yield 3.75 MB/s.

Q3. The question says 4 LUNs with a total throughput of 1,000 IOPs and a R/W ratio of 3:1. How did the solution obtain 250 writes/s? Is it 1,000/4? Can one ignore the reads? And what about the 3:1 ratio, was it ignored?

Q4. According to the solution, 2,000 KiB/s is used by host traffic and 1,000 KiB/s for sync traffic. How do we know how much is required for which traffic?

Appreciate any help. Thank you.

andre_rossouw

July 8th, 2013 11:00

Q1. Yes, the minimum extent size is always 128 kB - it applies to all LUNs <= 256 GB in size.

Q2. We divide bits/s by 10 to get bytes/s in serial environments. This is somewhat conservative, and allows for all the overhead that is present in a TCP/IP network environment. It also makes the calculation a lot simpler.

Q3. There are 1,000 I/Os with a R/W ratio of 3:1, therefore 750 reads/s and 250 writes/s. The number of LUNs doesn't matter in this case, since we're given the total IOPs.

Q4. The way this is calculated is to start with the ongoing host I/O, which is 250 writes/s @ 8 kB each, giving the 2,000 kB/s mentioned. The remainder of the bandwidth - the headroom - is left for synchronization traffic, and that's 1,000 kB/s.
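To make these numbers concrete, here is a short Python sketch of the arithmetic above. The final fracture-time step is my own extrapolation from the given figures (assuming each random write dirties a distinct 128 KiB extent), not part of the posted solution, so treat it as a sanity check rather than the official answer:

```python
# Figures from the question
total_iops = 1000                  # 8 kB I/Os per second, all 4 LUNs combined
reads, writes = 3, 1               # R/W ratio is 3:1
link_mbps = 15                     # dedicated link, in megabits/s
compression = 2                    # data is 2:1 compressible

# R/W split: 3 reads for every 1 write
writes_per_s = total_iops * writes / (reads + writes)       # 250 writes/s

# Rule of thumb from the reply: divide bits/s by 10 to get bytes/s,
# which conservatively absorbs serial-link/TCP overhead
effective_kBps = link_mbps * compression * 1000 / 10        # 3,000 kB/s

host_traffic = writes_per_s * 8                             # 2,000 kB/s
sync_headroom = effective_kBps - host_traffic               # 1,000 kB/s

# Extrapolated step (my assumption, not in the posted solution):
# each random write dirties one 128 KiB extent, and the sync of all
# dirty extents must finish within the 2-hour window
extent_kB = 128
sync_window_s = 2 * 3600
extents_syncable = sync_headroom * sync_window_s / extent_kB   # 56,250 extents
max_fracture_s = extents_syncable / writes_per_s               # 225 seconds

print(writes_per_s, sync_headroom, max_fracture_s)
```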

paul881

July 9th, 2013 02:00

Thanks for the help. I just realized that I was using the wrong ratios.

Sorry for asking you so many questions, I am taking the exam real soon.

Scenario #1: Snapshot in Module 5

-------------------------------------

My apologies, I think I made a mistake in my earlier posting. Please tell me if I did this right or wrong, because no solutions are provided.

This is what I did earlier :

--------------------------------------------------------------

LUN 1: 500 random 4 KiB IOPs, 3:1 R/W

(500 / 4) x 3 (3 writes to the RLP) = 375 IOPs - definitely wrong

No. of R5 10K SAS disks:

((read I/O) + (write I/O x WP)) / disk IOPs

(0 + (375 x 4)) / 150 (for 10K SAS) = 10 drives - wrong?

--------------------------------------------------------------------

I suppose I made a mistake with the number of writes.

It should be 125 writes/s x 4 writes, so the peak LUN load is 500 writes/s.

No. of R5 10K SAS disks:

((375 reads) + (125 writes x 4 WP)) / 150 IOPs = 5.8 - or should I round up to 2 x (4+1) R5 groups?

Someone in an earlier posting ignored the reads for both random and sequential, and calculated based on writes only.

So, I am including the 375 reads in my disk capacity calculation.

Q1. Should I include the reads in the disk capacity calculation for snapshots?

Q2. In the sizing answers, should I round up to the recommended RAID group sizes?

===============================

Scenario #2: Clone

LUN 2: 500 sequential 4 KiB IOPs, 3:1 R/W

Each source LUN is associated with a single RLP LUN.

What is the peak RLP LUN load, assuming a single session?

How many spindles for R1/0 on 10K SAS?

My answer:

(125 writes / 16) x 4 writes to the RL = 31.25 writes - the peak RLP LUN load

No. of R1/0 disks required:

((375 reads) + (31.25 writes x 2 WP)) / 150 IOPs = 2.9, or 4 disks

Q3. Is the peak RLP LUN load correct?

Q4. Should I include the 375 reads in the disk capacity calculation for clones? (Again, someone posted a solution which ignored the reads.)

Thank you.

andre_rossouw

July 9th, 2013 07:00

For Scenario #1:

No reads are performed from the RLP during a COFW, so only writes are relevant, and there are 4 of them - 3 to the RL map area, and 1 64 kB chunk write. Note that some older examples will still show 3 writes, because previously there were only 2 writes to the map, as discussed in several prior posts. Also note that these are LUN writes, so any calculation of the number of disks required must take the RAID type into account.

In your example, there are 125 host writes per second, and they cause 125 COFW/s. Each COFW causes 4 LUN writes, so there are 125 x 4 = 500 LUN writes/s. If the RL is RAID 5, the write penalty is 4, so the number of disk IOPs is 500 x 4 = 2,000. At 150 IOPs per disk, that's 13.33 disks. If we must use 4+1 R5, that'll be either 10 disks or 15 disks, depending on whether or not you are prepared to sacrifice a little performance. In this case, I'd probably round up to 15 disks.
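The Scenario #1 steps above can be sketched in a few lines of Python (my own illustration, using the figures from the thread):

```python
# Scenario #1: random 4 KiB host writes against a SnapView snapshot
host_writes_per_s = 125        # 500 IOPs at 3:1 R/W -> 125 writes/s
lun_writes_per_cofw = 4        # 3 RL map writes + 1 64 kB chunk write
raid5_write_penalty = 4        # RAID 5 small-write penalty
disk_iops = 150                # rule-of-thumb IOPs for one 10K SAS drive

rlp_lun_writes = host_writes_per_s * lun_writes_per_cofw     # 500 LUN writes/s
disk_load = rlp_lun_writes * raid5_write_penalty             # 2,000 disk IOPs
disks_needed = disk_load / disk_iops                         # 13.33 disks

print(rlp_lun_writes, disk_load, round(disks_needed, 2))
# With 4+1 R5 groups, round 13.33 up to 3 groups = 15 disks
```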

For Scenario #2:

You have 125 sequential writes/s of 4 kB each. That means that only every 16th write causes a COFW, so the number of COFWs is 125/16, and the number of LUN IOPs is (125/16) x 4 = 31.25.

For R1/0, with a write penalty of 2, that's 62.5 disk IOPs, easily achieved by a 1+1 R1/0 group. As before, ignore the reads.
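The same sketch for Scenario #2 (again my own illustration of the steps above):

```python
# Scenario #2: sequential 4 KiB host writes; a 64 kB chunk holds 16 of them,
# so only every 16th write triggers a COFW
host_writes_per_s = 125
writes_per_chunk = 64 // 4         # 16 sequential 4 KiB writes per 64 kB chunk
lun_writes_per_cofw = 4
raid10_write_penalty = 2
disk_iops = 150                    # one 10K SAS drive

cofw_per_s = host_writes_per_s / writes_per_chunk            # 7.8125 COFW/s
rlp_lun_iops = cofw_per_s * lun_writes_per_cofw              # 31.25 LUN writes/s
disk_load = rlp_lun_iops * raid10_write_penalty              # 62.5 disk IOPs

print(rlp_lun_iops, disk_load)   # well within a single 1+1 R1/0 group
```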

paul881

July 11th, 2013 01:00

Thank you for the explanation.

I noticed that for Scenario #1, we multiplied the 125 writes x 4 COFW writes, and the 4 KiB size was not used in the calculations.

However, in Scenario #2 we use the 4 KiB, since there are no COFWs for clones.

Q1. In Scenario #1, when the question asks for the peak RLP LUN load, did I forget to multiply by 4 KiB?

Q2. In Scenario #2, is the additional I/O load for clones always the no. of writes x the size of each write?

Scenario 1: snapshots
LUN 1: 500 random 4 KiB IOPs, 3:1 R/W

Peak RLP LUN load for SRC = 125 writes x 4 COFW writes = 500 <------- Do I need to multiply this by 4 KiB?

No. of spindles for R5 10K SAS = (0 + 500 x 4 WP) / 150 = 13.33 = 15 disks

Scenario 2: clone
LUN 1: 500 random 4 KiB IOPs, 3:1 R/W

Additional I/O load for each clone = 125 writes x 4 KiB = 500

No. of spindles for R5 = (0 + 500 x 4 WP) / 150 = 13.33 = 15 disks

Thank you

andre_rossouw

July 11th, 2013 08:00

In a SnapView Snapshot environment, the size of the host I/O - if it's random - does not matter provided it's <= 64 kB, since that's the size of the chunk. Of course, if the host I/O is larger than 64 kB, then it does matter, because a single host write can cause multiple COFWs. For sequential I/O, the host write size will also matter, as discussed before.

In Scenario #1, you're looking at 125 host writes/s, which is the same [for question purposes] as 125 COFWs/s. Since each COFW performs 4 writes to the RLP, it's 125 x 4 = 500 LUN IOPs. To get the required number of disks, multiply by the RAID write penalty, 4 in this case, and divide by the number of IOPs a disk can perform, 150 in this case. We therefore have [500 x 4] / 150 = 13.33 disks, which you can round up to 15 disks. At no point is the 4 kB I/O size relevant to this example.