June 20th, 2013 07:00

E20-324 VideoILT questions, modules 4 and 5, please help

Hi,

I have a few questions from modules 4 and 5 of the course. Any explanation is much appreciated.

Q1. Module 4 - Check Your Knowledge section

Which write will bypass cache if settings have default values?

I suppose the question is referring to FAST Cache.

The answer given is 2050 blocks, which is about 1.001 MiB.

I can't figure out why the answer is 2050 blocks.

Q2. Module 5, page 7:

" Very small I/Os that cause a COFW appear to be affected more than larger I/Os. If a 512 B host write causes a COFW, the ratio of host data : RLP data is between 1 : 160 (1 x 64 kB write, and 2 x 8 kB writes) and 1 : 384 (3 x 64 kB writes). If a 64 kB host I/O causes a COFW, then the ratio is 1 : 3 at worst, and it appears as though the performance impact is less "

I know that a COFW causes one 64 KiB RL write and three 8 KiB (or 64 KiB) RL writes - a total of 4 writes - but I can't figure out the following:

For 512 B:

the ratio of host data : RLP data is between 1:160 (1 x 64 kB write and 2 x 8 kB writes) and 1:384 (3 x 64 kB writes)

Q1 - If there are a total of 4 writes, why is it written as (1 x 64 KiB, 2 x 8 KiB)?

Should it not be written as (1 x 64 KiB, 3 x 8 KiB) instead?

Q2 - What is 1:160?

Q3 - What is 1:384, and (3 x 64 KiB)?

I guess I am getting confused by the 1:160 and 1:384.

Q4. So how does this differ from a 64 KB host I/O?

Module 5, page 14, examples 2 and 3

Q1. 200 sequential 4 KiB writes to the source LUN:

RLP sees 200/16 x 4 writes initially. How come it is not 200 / (64/4)?

Q2. 100 sequential 256 KiB writes to the source LUN:

RLP sees 100 x 4 x 4 writes. Why not 100 x (64/256)?

Any help is appreciated. Thank you.

June 20th, 2013 08:00

The default setting for LUN write-aside allows writes of up to 2048 blocks to hit write cache, but causes anything larger to bypass cache. A 2050-block write will therefore bypass write cache.
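To put numbers on that, here's a minimal sketch of the check, assuming 512 B blocks and the default write-aside value of 2048 blocks discussed above:

```python
BLOCK_SIZE_B = 512
WRITE_ASIDE_BLOCKS = 2048  # default: writes up to this size go to write cache

def bypasses_cache(write_size_blocks: int) -> bool:
    """A write larger than the write-aside value bypasses write cache."""
    return write_size_blocks > WRITE_ASIDE_BLOCKS

print(bypasses_cache(2048))         # False - exactly at the limit, still cached
print(bypasses_cache(2050))         # True  - the answer from the course
print(2050 * BLOCK_SIZE_B / 2**20)  # ~1.0009765625 MiB, the size in the question
```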

COFW does perform 4 writes to the RL, as you stated. Some time back, though, it was 3 writes, and that's why the question still shows 3. The ratios shown, 1:160 and 1:384, are the ratios between host write size [512B] and the size of data written to the RL. A 64 kB host I/O will still cause the same write activity to the RL, but the ratio of host write size to RL write sizes will be lower.

4 kB sequential writes to the Source perform only 1 COFW for each 16 writes, since there are 16 data pieces of 4 kB each in a 64 kB chunk. The number of writes seen by the RL is therefore 1/16 of what it would be for random host writes.

For 256 kB host writes, each write causes 4 COFWs, so the RL sees 100 writes/s x 4 COFWs/write x 4 writes/COFW.
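If it helps to see those two examples as plain arithmetic, here's a small Python sketch, assuming the 64 kB chunk size and 4 RL writes per COFW described above:

```python
CHUNK_KB = 64        # SnapView chunk size
WRITES_PER_COFW = 4  # one 64 kB data copy plus three map writes

# 200 sequential 4 kB writes/s: one COFW per (64 / 4) = 16 host writes
cofws_per_s = 200 / (CHUNK_KB / 4)    # 12.5 COFWs/s
print(cofws_per_s * WRITES_PER_COFW)  # 50.0 RL writes/s

# 100 sequential 256 kB writes/s: each host write spans 256/64 = 4 chunks
cofws_per_s = 100 * (256 / CHUNK_KB)  # 400 COFWs/s
print(cofws_per_s * WRITES_PER_COFW)  # 1600.0 RL writes/s
```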


June 20th, 2013 09:00

Hi Andre,

Thank you for your reply. I still can't figure out the 2 ratios for 512 B.

Q1. How did they get 1:160 (1 x 64 KiB, 3 x 8 KiB) and 1:384 (3 x 64 KiB)?

Why does 1:384 have (3 x 64 KiB)?

If it is 64 KB, what should the ratio be?

Q2. For the 100 sequential 256 KiB writes, why 100 x 4 x 4? If each write causes 4 COFWs, why multiply by 4 twice?

I appreciate your help, as it is challenging trying to figure things out in a VideoILT.

Thank you once again.

June 20th, 2013 10:00

If the COFW writes are 1x 64 kB and 2x 8 kB for a total of 80 kB [160 blocks], then the ratio of host write size [512B = 1 block] to COFW writes is 1 block:160 blocks = 1:160. If the COFW performed 3x 64 kB writes it would be 1 block: 384 blocks [192 kB = 3x 64 kB]. COFWs now perform 1x 64 kB and 3x 8 kB [or, very occasionally, 3x 64 kB] so the ratios would be 1:176 and 1:512. For a 64 kB host write [128 blocks], the ratio is 128:176 = 8:11. If the COFW map writes were 64 kB, the ratio would be 128:512 = 1:4. The ratios were for illustration only, are not mentioned in any other EMC documentation, and will not be found in the exam.
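Expressed in 512 B blocks, the ratios fall out of a few lines of Python - a sketch, with the COFW write mixes taken from the paragraph above:

```python
from math import gcd

BLOCK_B = 512

def host_to_rl_ratio(host_write_b: int, rl_writes_kb: list) -> tuple:
    """Ratio of host write size to total COFW write size, in 512 B blocks."""
    host_blocks = host_write_b // BLOCK_B
    rl_blocks = sum(kb * 1024 // BLOCK_B for kb in rl_writes_kb)
    g = gcd(host_blocks, rl_blocks)
    return (host_blocks // g, rl_blocks // g)

print(host_to_rl_ratio(512, [64, 8, 8]))           # (1, 160) - older 3-write COFW
print(host_to_rl_ratio(512, [64, 64, 64]))         # (1, 384) - all-64 kB case
print(host_to_rl_ratio(512, [64, 8, 8, 8]))        # (1, 176) - current 4-write COFW
print(host_to_rl_ratio(64 * 1024, [64, 8, 8, 8]))  # (8, 11)  - 64 kB host write
```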

For the sequential I/Os, note that a COFW performs 4 writes to the RL. The host I/O size is 256 kB, which is 4 chunks in size, so each host write causes 4 COFWs. The total number of writes is therefore 100 host writes x 4 COFWs per host write x 4 writes per COFW = 100 x 4 x 4. If the question asked how many COFWs are performed, the answer is 100 x 4 per second.


June 20th, 2013 10:00

Hi Andre,

Thank you so much for your clear explanation.

Best regards,

Paul


June 21st, 2013 00:00

Hi Andre,

There is an example 2 on page 20 of module 5, and I am not sure if there is a calculation error:

"200 sequential 4 KiB writes/s to the source LUN

RLP sees 200/16 x 64 KiB/s of new data initially - 0.8 MB/s"

Q1. How should I explain 16 x 64? For sequential I/O, a COFW performs 4 writes, so we take 4 x 4 KiB writes/s = 16.

So if it is 8 KiB writes, then it should be 4 x 8 KiB = 32.

Is this the right way to explain it?

And do we multiply by 64 KiB/s because each chunk is 64 KiB?

Q2. If you do the math, it should be 0.19531 KiB/s, or 0.000190 MB/s.

How did they get 0.8 MB/s?

Thank you for your help.

June 21st, 2013 06:00

The key phrase here is 'new data' - we're only looking at data [64 kB chunk] writes, and not the writes to the map area. Because this is sequential I/O with a write size of 4 kB [as I explained before], we do only 1 COFW for every 16 host writes. That's why the term '200/16' is present. The 64 is the size of the chunk - 64 kB - so the initial data rate is 200/16 x 64 kB/s = 12.5 x 64 kB/s = 800 kB/s.
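As a sketch of that 'new data' calculation (64 kB chunks, map-area writes excluded, per the explanation above):

```python
CHUNK_KB = 64  # SnapView chunk size

def new_data_rate_kb_per_s(host_writes_per_s: float, io_kb: float) -> float:
    """kB/s of chunk copies into the RLP for sequential host writes.
    One COFW per (CHUNK_KB / io_kb) host writes; each copies one 64 kB chunk."""
    cofws_per_s = host_writes_per_s / (CHUNK_KB / io_kb)
    return cofws_per_s * CHUNK_KB

print(new_data_rate_kb_per_s(200, 4))  # 800.0 kB/s = 0.8 MB/s, as in the course
print(new_data_rate_kb_per_s(200, 8))  # 1600.0 - 8 kB writes fill chunks twice as fast
```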

Don't confuse kB/s with kb/s.

In your Q2 answer, you've made a mistake with operator precedence - the division appears first, so should be performed first, and then the multiplication should be performed. In other words, 200/16*64 = (200/16) * 64 and NOT 200/(16 * 64).
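You can verify the grouping in Python, which evaluates division and multiplication left to right, just like the course's arithmetic:

```python
print(200 / 16 * 64)    # 800.0 kB/s   -> (200 / 16) * 64, the intended reading
print(200 / (16 * 64))  # 0.1953125    -> the mistaken grouping (~0.000191 MB/s)
```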


June 25th, 2013 07:00

Hi Andre,

Thank you once again. Your help and guidance are really invaluable.

I have questions on MirrorView/S extents; my questions follow the given solution.

On page 33 of the course's lab 7, there is a MirrorView/S question:

" Q.  ABC Corp has 2 VNX systems used for MirrorView/S. They are connected by a 15 Mb/s

dedicated link, and their data has a 2:1 compression ratio. The 4 primary mirror LUNs,

each 64 GB in size, have a throughput total of 1,000 8 kB IOPs with a R/W ratio of 3:1.

After the mirrors become fractured, the customer requires them to be fully

synchronized within 2 hours. "

The given solution is as follows:

" LUNs are 64 GiB, therefore extent size is 128 KiB. LUNs perform a total of 250 writes/s. Link is 15 Mb/s, and data is

compressible by 2:1, therefore effective link speed is 30 Mb/s [3 MB/s]. Of this, 250 writes/s x 8 KiB = 2,000 KiB/s is

used by host traffic, leaving 1,000 KiB/s available for synchronization traffic.


1,000 KiB/s for 2 hours is 1,000 KiB/s x 7,200s = 7,200,000 KiB [round to 7200 MiB], therefore the mirrors can fall 7,200

MiB behind, and still synchronize in 2 hours. When a mirror is fractured, each write touches an extent – a new extent

for random writes – so the number of writes [same as the number of extents] is 7,200 MiB / 128 KiB = 57,600 extents.

At a rate of 250 writes/s, it takes 57,600 / 250 seconds [230.4 seconds] to dirty 7,200 MiB of data. The mirrors can

therefore be fractured for a maximum of 230.4 s, or just under 4 minutes.

If the data was sequential, it would take 128 KiB / 8 KiB [extent size divided by write size] to dirty an extent before

moving on to the next. The number of writes is therefore 16 times as great, and the maximum allowable fracture time

also 16 times as great, or 3,686.4 s – just over an hour.

If mirrors are fractured for 9 minutes, a total of 9 x 60 x 250 [writes/s] x 128 KiB [extent size] = 17,280,000 KiB = 16,875

MiB is marked dirty. To transfer this in 7,200 s requires 16,875 / 7,200 = 2.34 MiB/s. Add to this the 2,000 KiB/s [call it 2

MiB/s] for host traffic, for a total of 4.34 MiB/s.

If mirrors are fractured for 60 minutes, the amount of data marked dirty is 60 x 60 x 250 x 128 KiB = 112,500 MiB.

To synchronize this in 2 hours requires 112,500 / 7200 = 15.6 MiB/s to which must be added the 2 MiB/s for host traffic,

for a total of 17.6 MiB/s "

My questions:

I am confused about extents because I read that 256 GiB = 128 KiB, plus 1 KiB per extra 2 GiB.

Q1. From the above lab question: "LUNs are 64 GiB, therefore extent size is 128 KiB" - how is 128 KiB obtained?

"When a mirror is fractured, each write touches an extent - a new extent for random writes - so the number of writes [same as the number of extents] is 7,200 MiB / 128 KiB = 57,600 extents"

Q2. For random writes, why is the number of writes = the number of extents?

Q3. Is there any difference for sequential writes?

Q4. From module 5, page 111: extent size is 256 blocks (LUN size is 256 GiB) = 128 KiB. Again, how is 128 KiB obtained?

Q5. Is there a default extent size?

Q6. Are clone extents the same as the MirrorView extents?


Thank you.

June 25th, 2013 08:00

The easiest way to calculate MirrorView/S and Clone extent sizes is to use the size of the LUN in GB as the extent size in blocks, as I've posted elsewhere. There is a minimum extent size of 128 kB, though, so any LUN smaller than 256 GB still has an extent size of 128 kB. There is no default extent size - it depends on the size of the LUN. That should answer Q1 and Q4-Q6.
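Here's that rule of thumb as a minimal sketch, assuming 512 B blocks:

```python
def extent_size_kb(lun_size_gb: int) -> float:
    """LUN size in GB = extent size in blocks, floored at 256 blocks (128 kB)."""
    blocks = max(lun_size_gb, 256)
    return blocks * 512 / 1024

print(extent_size_kb(64))    # 128.0 kB - the lab's 64 GiB LUNs hit the minimum
print(extent_size_kb(256))   # 128.0 kB - the threshold case
print(extent_size_kb(2048))  # 1024.0 kB - a 2 TB LUN
```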

When we deal with random writes, we assume worst-case conditions. That means that for snapshots every write touches a new chunk, and that for Clones and MirrorView/S mirrors, each write touches a new extent. For sequential writes, it takes a certain number of writes before a new extent is touched. If my extent size is 128 kB, and my sequential write size is 8 kB, then I'll do 16 writes on one extent before moving to the next. The important ratio is therefore that of extent size to I/O size.
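Tying that back to the lab, a sketch of the worst-case (random-write) fracture arithmetic from the quoted solution, keeping its figures and its loose rounding of 7,200,000 KiB to 7,200 MiB:

```python
# Effective link is ~3 MB/s (3,000 KiB/s) after 2:1 compression of 15 Mb/s.
sync_kib_per_s = 3_000 - 250 * 8               # minus host traffic = 1,000 KiB/s
backlog_mib = sync_kib_per_s * 7_200 / 1_000   # ~7,200 MiB resyncable in 2 hours
extent_kib = 128                               # 64 GiB LUN -> minimum extent size

extents = backlog_mib * 1024 / extent_kib      # 57,600 extents may be dirtied
print(extents / 250)                           # 230.4 s of random writes
print(extents / 250 * (extent_kib // 8))       # 3,686.4 s if writes are sequential
```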


June 25th, 2013 09:00

Thanks for the explanation.

If we use the size of the LUN in GB as the extent size in KB, for a LUN size of 2 TB we will have 2048 GB.

2048 GB will give us 2,147,483,648 KB, which is not the answer in your reply to someone's posting.

You replied 2048 GB = 1024 KB as the extent size.

Q1. Is this how you obtained 1024 KB?

For 256 GB, the extent size is 128 KB; thereafter, every additional 2 GB adds 1 KB.

For 2 TB:

2048 - 256 = 1792 GB

1792 GB / 2 = 896 KB, hence the total is 896 + 128 = 1024 KB. Is this the correct way to do it?

Q2. Please give me an example of random writes, and how you assume a worst-case scenario.

Thank you.

June 25th, 2013 09:00

I made a mistake in my reply, and have now corrected it. The LUN size in GB is the extent size in blocks, with a minimum of 128 kB.

An example of random writes? Many applications perform random writes. For example, in an email application used by a large number of users, writes to the database [or store, whatever it may be called] will typically be random because the location of user mailboxes will be scattered all over the LUN used for that store. The worst case sceanrio simply means that we assume that each new write touches a new extent - which is typically not what is seen in real-world environments. Put another way, the worst-case scenario assumes no locality of reference, and no rewrites.


June 25th, 2013 10:00

My apologies, I was asking for examples of random writes in the worst-case scenarios, in terms of the extent sizes that you mentioned earlier.

Is this a worst-case scenario, using 7,200 MiB / 128 KiB?

"1,000 KiB/s for 2 hours is 1,000 KiB/s x 7,200 s = 7,200,000 KiB [round to 7,200 MiB], therefore the mirrors can fall 7,200 MiB behind and still synchronize in 2 hours. When a mirror is fractured, each write touches an extent - a new extent for random writes - so the number of writes [same as the number of extents] is 7,200 MiB / 128 KiB = 57,600 extents"

If the extent size is in blocks, 2048 GB converted to blocks would be 4,294,967,296 blocks.

That is why I was asking you if my calculation method to obtain 1024 KB was correct.

Thank you

June 25th, 2013 10:00

Yes, that's an example of the calculation performed as part of a worst-case scenario.

For a LUN of 2048 GB, the extent size is 2048 blocks = 1024 kB. As I mentioned, take the LUN size in GB, and use that number as the number of blocks in an extent.


June 28th, 2013 02:00

Hi Andre,

I came across Lab 3 Part 1 - Analysis of NAR001.nar

" step 2

LUNs 500 and 501 are active. At around 48 minutes into the run, the write‐aside value

for LUN 500 is changed from 2048, the default, to 255 "

I can't find any 'write-aside value' in step 2 mentioned above. What should I be looking for?

Refer to the graph below: the only thing I see that is close to the value 255 is the write size, but it can't be at 48 minutes into the run (around the timestamp 8:46?) as it is a straight line. I must be looking at the wrong metric.

[Attachment: Nar001.jpg]

Thank you once again

June 28th, 2013 07:00

The LUN write-aside value, as discussed in the class, can be viewed and modified only with the CLI, and therefore only on a live system. Unisphere Analyzer does not show this value, which is why it was specified as part of the exercise. There's no metric related to write-aside.

If you weren't told that the write-aside value had been changed, you could also conclude [from looking at the NAR] that write cache had been disabled for the LUN. That, of course, could only be true if the NAR showed no write cache hits, and no forced flushes, for that LUN.


July 1st, 2013 00:00

Great discussion, guys!!
