
Stripe size

May 24th, 2011 10:00

Hi,

TDEV chunk size: 768 KB.

What is the stripe size for a RAID5 (7+1) TDAT, and how is the 768 KB chunk distributed across the 7 spindles?

What is the stripe size for a RAID5 (3+1) TDAT?

What is the stripe size for a RAID10 TDAT?

Which RAID type is better at distributing the 768 KB chunk for reads and writes?

I know we need to consider many parameters here, but I'm trying to understand the flow.

1.3K Posts

May 25th, 2011 07:00

That should be a good approximation.

1.3K Posts

May 24th, 2011 10:00

All TDEV allocation on a TDAT is in 768 KB chunks, no matter what the back-end protection is.

The only real difference in performance is with sequential writes, where RAID5 3+1 can potentially fill the whole RAID5 stripe, allowing for a fully optimized write. With 7+1, we only allocate 768 KB out of 1,792 KB (the RAID5 7+1 stripe width), so an optimized full-stripe write is less likely. For reads and random writes, performance doesn't depend on whether the allocation chunk matches the RAID stripe.
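
To make the stripe arithmetic concrete, here is a small Python sketch. The 256 KB stripe element per data member is an assumption inferred from the 1,792 KB figure quoted above (7 × 256 KB), not a confirmed Enginuity constant.

```python
# Sketch: does the 768 KB thin-allocation chunk line up with a full
# RAID5 stripe? The 256 KB element size is an assumption inferred from
# the 1,792 KB (7 x 256 KB) figure above, not a confirmed constant.

CHUNK_KB = 768      # TDEV allocation chunk on the TDAT
ELEMENT_KB = 256    # assumed stripe element per data member

for name, data_members in [("RAID5 3+1", 3), ("RAID5 7+1", 7)]:
    full_stripe_kb = data_members * ELEMENT_KB
    fills = CHUNK_KB % full_stripe_kb == 0
    print(f"{name}: full stripe = {full_stripe_kb} KB; 768 KB chunk "
          f"{'fills whole stripes' if fills else 'leaves a partial stripe'}")
```

This is exactly the point in the reply above: 768 KB is an exact multiple of the 3+1 full stripe, so sequential writes can be destaged as optimized full-stripe writes, while on 7+1 the chunk covers only part of the stripe.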

1.3K Posts

May 24th, 2011 11:00

Cache is allocated based on device activity and has nothing to do with the protection, or with Virtual Provisioning versus traditional thick provisioning. A TDEV has a device write-pending (WP) limit that is the same as a thick device's would be in the same system.

131 Posts

May 24th, 2011 11:00

Got it.

One more question:

What is the cache distribution in terms of reads and writes per TDEV?

Does it depend on the number of TDEVs, the number of active paths, or the size?

Is there any boundary for read cache per device?

And what is the math for write-pending tracks per device?

131 Posts

May 24th, 2011 12:00

Thank you for the answer.

I found the following details in a white paper:

"Symmetrix Enginuity has a logical volume write pending limit to prevent one volume from monopolizing writeable cache. Because each metamember gets a small percentage of cache, a striped meta is likely to offer more writeable cache to the metavolume."

I'm just trying to understand: what is the percentage here (4%)? Is there any limit for reads?

Is it the same as the Max # of Device Write Pending Slots?

I am asking about devices in general, whether thick or thin.
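
As a back-of-the-envelope illustration of the metamember point in the quote, here is a hedged sketch. The 4% per-volume figure and the cache size are taken from this thread as assumptions, not confirmed values for any particular Enginuity release.

```python
# Illustration of why a striped meta gets more writeable cache: each
# metamember is a separate volume with its own write-pending (WP) limit.
# The 4% figure and 48 GB cache are assumptions taken from this thread,
# not confirmed Enginuity values.

writeable_cache_mb = 48_000   # assumed usable cache
per_volume_wp_pct = 4.0       # assumed per-volume WP limit

per_volume_wp_mb = writeable_cache_mb * per_volume_wp_pct / 100
print(f"single volume WP ceiling: {per_volume_wp_mb:,.0f} MB")

# An 8-member striped meta is 8 volumes, so roughly 8x the WP ceiling:
members = 8
print(f"{members}-member striped meta WP ceiling: "
      f"{members * per_volume_wp_mb:,.0f} MB")
```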

1.3K Posts

May 24th, 2011 13:00

There is no limit for reads. If only one large device is active, it could fill all of the usable cache. The most active data should stay in cache because a modified LRU algorithm is used rather than a simple FIFO.
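
To illustrate the LRU-versus-FIFO point, here is a minimal sketch of plain LRU eviction. Enginuity's actual algorithm is a proprietary modified LRU, so this only shows why a re-referenced track survives eviction under LRU when it would not under FIFO.

```python
from collections import OrderedDict

# Minimal LRU illustration: re-reading a track moves it to the "recently
# used" end, so hot data survives eviction. A plain FIFO would evict the
# oldest insertion regardless of re-use. Enginuity's modified LRU is
# proprietary; this only shows the basic idea.

class LRUCache:
    def __init__(self, slots):
        self.slots = slots
        self.tracks = OrderedDict()

    def access(self, track):
        if track in self.tracks:
            self.tracks.move_to_end(track)   # re-reference: refresh recency
        else:
            if len(self.tracks) >= self.slots:
                evicted, _ = self.tracks.popitem(last=False)  # evict LRU
                print(f"evict {evicted}")
            self.tracks[track] = True

cache = LRUCache(slots=3)
for t in ["A", "B", "C", "A", "D"]:   # "A" is re-read before "D" arrives
    cache.access(t)                   # FIFO would have evicted "A" here
print(list(cache.tracks))             # ['C', 'A', 'D'] (hot track "A" kept)
```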

131 Posts

May 24th, 2011 14:00

I found the following details in an old thread:

We need 300 MB of cache for metadata per 1 TB of configured storage (DATA dev, VDEV, SAVE dev). (This was for DMX; not sure about VMAX.)

My question is:

If we have a 60 TB frame with 60 GB of cache, I can see that the available cache slots come to almost 48 GB.

If we consider this frame as a bunker frame and assume we will create 2 VDEVs per device, say 20 TB of VDEVs (just an assumption):

Should I lose 20 TB × 300 MB = 6,000 MB of cache slots?

Is this math correct or not?
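
For what it's worth, here is that arithmetic written out, under the assumption that the 300 MB/TB rule of thumb applies and that 20 TB of VDEVs is the capacity in question.

```python
# The 300 MB-per-TB metadata rule of thumb from the old thread, written
# out. The rule itself is an approximation quoted in this thread, not a
# published spec; capacities are the ones assumed above.

MB_PER_TB = 300

vdev_tb = 20
metadata_mb = vdev_tb * MB_PER_TB
print(f"{vdev_tb} TB of VDEV -> {metadata_mb:,} MB of metadata cache")  # 6,000 MB

available_mb = 48_000   # ~48 GB of available cache slots, from above
print(f"slots remaining: {available_mb - metadata_mb:,} MB")            # 42,000 MB
```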
