December 27th, 2011 12:00

MetaLUN vs. FAST Pool

We recently purchased a VNX5300 for a small VDI deployment (500 users or less).

We ordered 38 SAS drives and 10 EFDs.  The vSpecialist at EMC is recommending:

2 EFD for FAST Cache

7 EFD in FAST Pool 1 (RAID-5)

30 SAS drives in FAST Pool 1 (RAID-5)

The remaining EFD and SAS drives are for the vault, hot spares, and so on.

I'm fine with the recommendation, but I wonder if this might be better:

8 EFD as FAST Cache

6-disk RAID-10 groups on the SAS drives (five of them)

MetaLUN across the RAID-10 groups for the replica datastore and all the desktops.  Many (if not all) of the desktops are persistent.  This will be running on vSphere 5 and View 5.

Assume 3 images and about 200 users initially, scalable to about 500.  With this many spindles, capacity is not a concern.  This array is dedicated to VDI.  What do you think?
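To sanity-check whether 30 spindles can carry that user count, here is a rough spindle-math sketch. The per-drive IOPS figure and read/write mix are hypothetical assumptions for illustration, not measurements from this environment:

```python
# Back-of-envelope host IOPS for the proposed RAID-10 layout.
# Assumed figures (hypothetical): ~180 IOPS per 15K SAS drive and a
# write-heavy 20/80 read/write mix, typical of persistent VDI.
SAS_IOPS = 180
READ_PCT, WRITE_PCT = 0.2, 0.8

def frontend_iops(drives: int, write_penalty: int) -> float:
    """Host IOPS a drive set can sustain at the given RAID write penalty."""
    backend = drives * SAS_IOPS
    return backend / (READ_PCT + WRITE_PCT * write_penalty)

# Five 6-disk RAID-10 groups striped by a MetaLUN = 30 drives, penalty 2.
print(round(frontend_iops(30, write_penalty=2)))  # ~3000 host IOPS
```

At an assumed 10 IOPS per desktop, that is roughly 300 steady-state users before FAST Cache absorbs anything, which is why measuring the real workload first matters.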


December 28th, 2011 08:00

Thanks all for the great discussion and feedback!


December 29th, 2011 04:00

Hi Vinn,

I would recommend reading these two articles before you make your final decision:

1- The biggest Linked Clone “IO” Split Study http://myvirtualcloud.net/?p=2084

2- Use Flash Drives (SSD) for Linked Clones, not Replicas http://myvirtualcloud.net/?p=2513

Also, IO-wise it turns out that a 3+1 RAID-5 set is more efficient than a RAID-10 set, thanks to write-optimization techniques on the VNX.
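The RAID-5 vs RAID-10 point can be sketched with a simple per-write model. The assumption (hypothetical, but it is the usual explanation for the VNX behavior) is that the array's write cache coalesces writes into full stripes, avoiding the classic RAID-5 read-modify-write:

```python
# Backend IOs generated per host write, comparing RAID-10 with 3+1 RAID-5.
# Model assumption: the array's write cache can coalesce writes into full
# stripes, so RAID-5 skips the read-modify-write path.

def raid10_per_write() -> float:
    return 2.0  # every write lands on both mirror halves

def raid5_small_write() -> float:
    return 4.0  # read data + read parity + write data + write parity

def raid5_full_stripe(data_disks: int = 3) -> float:
    # One parity write amortized across the whole stripe of data chunks.
    return (data_disks + 1) / data_disks

print(raid10_per_write())             # 2.0
print(raid5_small_write())            # 4.0
print(round(raid5_full_stripe(), 2))  # 1.33 -- beats RAID-10 when stripes fill
```

So a 3+1 RAID-5 group only wins when writes arrive in a pattern the cache can coalesce; purely random small writes still pay the factor-of-4 penalty.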

I've read valid points from everyone here. The most important thing now is to collect data to identify your VDI IO trend and behavior. Then you can design the storage according to your real needs and requirements.

January 19th, 2012 17:00

Erik, can you help clarify the three backend IOs per unit of data going into FAST Cache?

January 20th, 2012 00:00

Hi,

The three backend IOs required is a theoretical number. For starters, your FAST Cache should have reached the high watermark. So any IO that was promoted to FAST Cache had to be written to those disks. Next, because we need to flush data from FAST Cache, we need to read that data. Finally, we need to write it out again.

However, this does not take into account several things:

1) RAID overhead (when I say 3 backend IOs, it is really 3 actions taken);

2) The write out to disk is probably a full-stripe write, which helps a lot in RAID-5 or RAID-6 setups;

3) The fact that, if FAST Cache is sized properly, many IOs will never leave FAST Cache. As long as the IO is hot and FAST Cache is not overly filled, none of this overhead applies (which is where you want to be).
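The bookkeeping above can be sketched as a tiny model: one backend write on promotion, then a read plus a write only for blocks that actually get flushed. The numbers are illustrative assumptions, not VNX internals:

```python
# Sketch of the "three backend IOs" accounting for FAST Cache:
# 1 write on promotion, plus 1 read + 1 write per flushed block.
# The flushed_fraction figure is a hypothetical illustration.

def fast_cache_backend_ios(promoted: int, flushed_fraction: float) -> int:
    """Total backend actions for a batch of promoted blocks."""
    promote_writes = promoted
    flushes = int(promoted * flushed_fraction)
    return promote_writes + 2 * flushes

# Worst case: everything promoted is eventually flushed -> 3 IOs per block.
print(fast_cache_backend_ios(1000, 1.0))   # 3000

# Well-sized cache where only 10% of hot blocks ever leave the cache.
print(fast_cache_backend_ios(1000, 0.10))  # 1200
```

This is why the "3 backend IOs" figure is a ceiling: the better FAST Cache is sized to the working set, the smaller the flushed fraction and the less of that overhead you actually pay.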

Best regards,

Erik Zandboer

vSpecialist

PS: Sent from my Blackberry - Please excuse typos
