umichklewis
3 Apprentice
•
1.2K Posts
0
September 7th, 2016 11:00
Hello and welcome to the community!
Your question is quite common, and it's certainly good to know these details. You can find more information about how virtual provisioning works on the VNX2 series at https://support.emc.com/docu48706_Virtual-Provisioning-for-the-VNX2-Series---Applied-Technology.pdf?language=en_US.
In this document, you'll find that on the VNX and VNX OE for Block 32 and earlier, pool LUNs are made up of 1 GB slices. That is, each time you ask the pool to create a LUN for you, it will be composed of 1 GB slices drawn from all of the drives of the private RAID groups that make up the pool.
Starting in VNX OE for Block 33 (VNX2), these slices became more granular at 256 MB.
Whether you're talking about a Thin LUN, a Thick LUN, or a Classic (RAID group) LUN, all of them will be striped across all of the disks available, either in the pool or in the RAID group.
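To make the slice arithmetic concrete, here is a toy Python sketch of how a pool LUN could be carved into slices and spread across private RAID groups. The slice sizes come from the post above; the round-robin placement and the function names are illustrative assumptions, not the array's actual allocator.

```python
# Toy model of VNX pool slice allocation (illustrative only -- the real
# allocator lives in the array firmware and balances by free capacity
# and, with FAST VP, by tier). Slice sizes from this thread:
# 1 GB on VNX OE for Block 32 and earlier, 256 MB on VNX2 (Block 33).

SLICE_MB = {"VNX (Block 32)": 1024, "VNX2 (Block 33)": 256}

def slices_needed(lun_gb, release):
    """Number of pool slices consumed by a LUN of lun_gb gigabytes."""
    slice_mb = SLICE_MB[release]
    total_mb = lun_gb * 1024
    return -(-total_mb // slice_mb)  # ceiling division

def allocate(lun_gb, release, private_rgs):
    """Assign each slice to a private RAID group round-robin
    (an assumption for illustration)."""
    n = slices_needed(lun_gb, release)
    return [private_rgs[i % len(private_rgs)] for i in range(n)]

# A 10 GB LUN on VNX2 consumes 40 slices of 256 MB,
# spread over the pool's private RAID groups.
layout = allocate(10, "VNX2 (Block 33)", ["RG0", "RG1", "RG2", "RG3"])
print(len(layout))   # 40
print(layout[:5])    # ['RG0', 'RG1', 'RG2', 'RG3', 'RG0']
```

The same 10 GB LUN on the older 1 GB slice size would consume only 10 slices, which is why the VNX2 granularity change matters for small LUNs and for FAST VP tiering decisions.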
Let us know if that helps!
Karl
DWormsbecher
11 Posts
0
September 7th, 2016 13:00
Hi Karl,
thanks for your reply.
I've read the documents and I'm familiar with how the pools are built.
Maybe I haven't explained clearly what I want to know.
If I write data to a LUN in a pool, how much of that data is written to the first drive, the second, the third, and so on?
I mean, I can't imagine that the system writes 256 MB to the first drive and only then moves on to the next one.
That would mean that if 100 users each wrote a Word document, all of them would be working against just one drive.
Didi
Rainer_EMC
4 Operator
•
8.6K Posts
1
September 7th, 2016 17:00
It's not that simple.
Pools are built from private RAID groups.
I think the element size is 64 KB on VNX2 for the RAID groups.
Then on top of that there are the pool slices that Karl mentioned.
Above that you usually have either the host file system or the VNX NAS file system, which also distributes files across the LUN(s).
Even the simplest file systems don't allocate files just sequentially and contiguously.
With thin LUNs (8 KB block size), multiple tiers, and FAST VP it gets even more interesting.
In the end, though, it usually works out that I/O is distributed across the available disks in a pool.
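One way to picture those layers is to trace a single byte offset down the stack: pool slice, then private RAID group, then stripe element, then drive. The sketch below uses the numbers mentioned in this thread (256 MB slices on VNX2, 64 KB elements); the slice-to-RAID-group map and drive counts are made up, and real RAID 5/6 parity rotation is ignored.

```python
# Toy address translation through the layers described above: a LUN
# offset falls into a 256 MB pool slice, the slice lives on one private
# RAID group, and within that RAID group data is striped in 64 KB
# elements across the data drives. This is a simplified model, not the
# array's real geometry (no parity rotation, no FAST VP relocation).

SLICE = 256 * 1024 * 1024   # VNX2 pool slice, bytes
ELEMENT = 64 * 1024         # RAID group stripe element, bytes

def locate(offset, slice_to_rg, drives_per_rg):
    """Return (raid_group, drive_index, element_number) for a byte offset."""
    slice_no = offset // SLICE          # which pool slice
    rg = slice_to_rg[slice_no]          # which private RAID group owns it
    within_slice = offset % SLICE
    element_no = within_slice // ELEMENT
    drive = element_no % drives_per_rg  # plain striping, no parity rotation
    return rg, drive, element_no

# Four slices spread over two private RAID groups, 4 data drives each.
slice_map = ["RG0", "RG1", "RG0", "RG1"]
print(locate(0, slice_map, 4))               # ('RG0', 0, 0)
print(locate(ELEMENT, slice_map, 4))         # next 64 KB lands on the next drive
print(locate(SLICE + 5 * ELEMENT, slice_map, 4))
```

This also answers the 100-users worry above: sequential writes rotate to a new drive every 64 KB element, not every 256 MB slice, so many small writes naturally fan out across drives.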
DWormsbecher
11 Posts
0
September 8th, 2016 04:00
Hi Rainer,
yes, this is what I mean: it's not that easy, but I would really like to understand it.
Hope somebody can explain the whole process.
Regards
Didi
umichklewis
3 Apprentice
•
1.2K Posts
0
September 8th, 2016 06:00
Unless Rainer or another EMCer (sheesh - do I call you guys 'Dellers' now? ;-) ) posts an engineering document with more detail, I don't think we'll get anything beyond what he posted above.
Thanks!
Rainer_EMC
4 Operator
•
8.6K Posts
2
September 8th, 2016 07:00
There isn't a single document that covers this – it's different layers in the software that have changed over time.
Plus, with FAST VP moving slices between disk tiers, the initial allocation pretty quickly no longer applies.
There are a lot of optimizations in the code – in general we want to spread the data over multiple drives, but also keep enough data on one drive that we can do large, efficient I/Os and full-stripe writes.
It's also changing – with flash drives doing that many more IOPS at lower response times than magnetic drives, it is becoming more important to do efficient I/Os and less important to do "wide striping".
The "cost" of doing 1x 64 KB I/O request is a lot less than doing 8x 8 KB requests.
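The per-request cost argument can be put into numbers with a back-of-the-envelope model: every I/O carries a fixed per-request overhead (protocol handling, queuing, CPU) on top of the data transfer, so fewer, larger requests win for the same bytes moved. The overhead and bandwidth figures below are invented for illustration, not measured VNX values.

```python
# Back-of-the-envelope comparison of 1x 64 KB vs 8x 8 KB requests.
# PER_REQUEST_US and BANDWIDTH_MB_S are assumed round numbers for
# illustration, not measured VNX figures.

PER_REQUEST_US = 100     # assumed fixed cost per I/O request, microseconds
BANDWIDTH_MB_S = 500     # assumed transfer rate, MB/s

def total_cost_us(requests, size_kb):
    """Total time: fixed per-request overhead plus data transfer time."""
    transfer_us = requests * size_kb / 1024 / BANDWIDTH_MB_S * 1_000_000
    return requests * PER_REQUEST_US + transfer_us

one_big = total_cost_us(1, 64)     # one 64 KB request
eight_small = total_cost_us(8, 8)  # eight 8 KB requests, same 64 KB total
print(round(one_big), round(eight_small))  # roughly 225 vs 925 microseconds
```

Both cases transfer the same 64 KB (the transfer term is identical), so the entire gap is the seven extra fixed overheads, which is why full-stripe writes and large sequential I/Os are worth optimizing for.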
Rainer_EMC
4 Operator
•
8.6K Posts
1
September 8th, 2016 07:00
Didi,
if you are an EMC partner, I would suggest joining the partner USpeed program.
Rainer
DWormsbecher
11 Posts
0
September 12th, 2016 02:00
Rainer,
thanks, we'll look into joining as soon as possible.
I took another look at some old documents and found a small section about element size on pages 86-87.
VNX Fundamentals.pdf
Regards
Didi