
November 29th, 2011 16:00

FAST Cache vs FAST VP

First off, I just wanted to start by saying hello and giving a little background.  We've had a CX3-20c for several years, which was an upgrade from a CX300 back in late 2005.  Knock on wood, I've never had a single minute of downtime on our CX, and I am very anxious to get going on our new VNX 5300.  Our VNX 5300 has 8 x 100GB SSDs, 31 x 600GB 15K SAS, and 21 x 2TB NL-SAS.  The VNX will mostly store "active" data in the form of VMware VMFS volumes that host a variety of file servers, email, web, and various app servers, and a multitude of SQL DB servers (both high and low performance). In addition, there will be a ton of "inactive" data in the form of a dozen terabytes of radiology images stored on physical Windows boxes. Most of that data will rarely get accessed.  It's currently stored in non-EMC JBODs which, after a couple of close scares, I am planning to move onto the production VNX SAN. I hate to waste the more costly VNX storage on these terabytes of rarely touched images, but we'll sleep better at night having them on more reliable hardware.  All of these hosts will be attached via FC.

Anyway, I've been reading through the VNX Best Practices document (OE 31.5). It has been very useful and I've learned a lot, but I have several questions that are confusing me. I have been planning on using the 8 x 100GB SSDs for a 2-drive FAST Cache mirror and a 4+1 RAID5 for FAST VP.  My understanding was that we could put all the SSD, SAS, and NL-SAS in a big pool and let it work its magic.  This is very appealing to me after years of manually working with RAID groups on the CX3.  I also understand I trade off some flexibility and maybe some custom performance, but with the VNX having a 6Gb SAS backend and 8Gb FC connectivity, those things alone should blow my old CX out of the water, since it was limited by the old CX300 DAEs running on a 2Gb loop.  Knowing this, I think that while my priority has always been performance and availability in the past, availability and ease of use might be a bit more important going forward. I would think that if you add FAST Cache and/or FAST VP into the mix, I'm not going to be upset losing the granularity of manually laying out RAID groups.
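For what it's worth, here's my back-of-the-envelope on the SSD split (a rough sketch; raw capacities only, ignoring formatted-capacity overhead, and I'm assuming I keep one of the 8 as a hot spare):

```python
# Rough capacity math for splitting 8 x 100GB SSDs (raw GB, no formatting overhead).
DRIVE_GB = 100

fast_cache_drives = 2   # 1+1 mirrored pair for FAST Cache
tier_drives = 5         # 4+1 RAID5 for the pool's flash tier
hot_spares = 1          # assumption on my part: keep one SSD as a spare

fast_cache_usable = (fast_cache_drives // 2) * DRIVE_GB   # RAID1: half the raw capacity
tier_usable = (tier_drives - 1) * DRIVE_GB                # RAID5 4+1: one drive's worth of parity

print(f"FAST Cache usable: {fast_cache_usable} GB")       # 100 GB
print(f"Flash tier usable: {tier_usable} GB")             # 400 GB
print(f"Drives used: {fast_cache_drives + tier_drives + hot_spares} of 8")
```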

BUT- statements like these in the BP document really confuse me:

"It is recommended that you consider using flash drives as a FAST cache with a two-tiered mechanical drive provisioned pool before provisioning flash drives to create a third tier"

and

"Note that a FAST Cache cannot be used with either pool tiers or traditional LUNs made-up of flash drives."

Is this to say that FAST Cache and FAST VP cannot work together?  That would go against everything I thought I knew before we purchased, and before reading the BP document.  If that is just plain incorrect and I'm reading things out of context somehow (like maybe it means pools made of ONLY flash drives), then am I on the right track by putting things in a giant pool with 3 tiers and having FAST Cache as well?

Thanks!

2 Intern

 • 

20.4K Posts

November 29th, 2011 16:00

FAST Cache and FAST VP work together and complement each other. I think what they are trying to say is that if you have a limited number of flash drives, it makes more sense to use them as FAST Cache than as a flash tier in your pool, because FAST Cache will complement both pool LUNs and traditional LUNs.

Here is another good read:

EMC Fast Cache whitepaper

https://community.emc.com/docs/DOC-12903

8.6K Posts

November 30th, 2011 01:00

Right

The general guideline is: if you have a limited number of flash drives, first use them for FAST Cache, since it reacts faster and at a finer granularity (64 KB vs. 1 GB) and benefits almost all scenarios.

If you then still have flash drives then use them with FAST CACHE.

The last comment just meant that data blocks already residing on flash (due to FAST VP or manual placement) won't be cached again by FAST Cache – which makes sense.
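To put the granularity difference in numbers – a quick sketch; the 64 KB and 1 GB figures are from the docs, the rest is just arithmetic:

```python
# How many independently promotable pieces fit in 100 GB of flash,
# at FAST Cache granularity (64 KB) vs. FAST VP granularity (1 GB).
GB = 1024 ** 3
flash_bytes = 100 * GB

fast_cache_chunk = 64 * 1024   # FAST Cache tracks and promotes 64 KB chunks
fast_vp_slice = 1 * GB         # FAST VP relocates 1 GB slices

print(flash_bytes // fast_cache_chunk)   # 1,638,400 chunks -> very fine-grained
print(flash_bytes // fast_vp_slice)      # 100 slices -> coarse, scheduled moves
```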

For the rarely accessed data, I would test whether it's compressible – if so, put it onto an NFS/CIFS share (and not into a VMDK) and use file compression.

Make sure you work with your EMC or partner technical contact for design.

It might make sense to use more than one pool, or a mixture of a pool and some traditional RAID groups for part of the databases.

Rainer

2 Intern

 • 

20.4K Posts

November 30th, 2011 02:00

Rainer_EMC wrote:

If you then still have flash drives then use them with FAST CACHE.

you mean FAST VP

8.6K Posts

November 30th, 2011 02:00

Yes, I meant FAST Cache first and then FAST VP.

392 Posts

November 30th, 2011 05:00

"It is recommended that you consider using flash drives as a FAST cache with a two-tiered mechanical drive provisioned pool before provisioning flash drives to create a third tier"

"Note that a FAST Cache cannot be used with either pool tiers or traditional LUNs made-up of flash drives."

To expand on Rainer's post. 

FAST Cache offers the 'biggest bang for your buck' with Flash drives.  The performance advantage of a FAST Cache can be applied to more than one pool, or to traditional LUNs.  Within a pool, a FAST Cache's performance advantage can be applied to one or more tiers made up of mechanical drives.  The performance boost of Flash drives can potentially be felt by all the LUNs based on mechanical drives on the storage system.

Putting flash drives in a Virtual Provisioning pool's Flash tier gives their performance benefit only to the LUNs, or parts of LUNs, resident in that one tier of that one pool.  A smaller fraction of the storage system's data would receive the benefit of Flash drive performance.

Finally, a FAST VP pool Flash tier and the FAST Cache are both natively Flash-drive-based logical storage objects.  Their performance is essentially the same.  There would be no additional performance benefit to FAST Caching a FAST VP Flash tier.

I recommend you browse EMC Unified Storage System Fundamentals for Performance and Availability.  As an experienced storage system administrator, you likely know most of it.  However, it contains additional information on the VNX not found in the Best Practices document that you may find helpful.  Fundamentals is downloadable from Powerlink.

38 Posts

November 30th, 2011 06:00

Thanks for all the clarification guys.

jps00 wrote:

Finally, a FAST VP pool Flash tier and the FAST Cache are both natively Flash-drive-based logical storage objects.  Their performance is essentially the same.  There would be no additional performance benefit to FAST Caching a FAST VP Flash tier.

If there is a piece of cold data living on SAS or NL-SAS storage that is part of my 3-tier pool, and there is an immediate need for that data, and it is accessed several times (is the rule 3?), won't it immediately get promoted into FAST Cache?  Wouldn't that, as this particular EMC guy's blog words it, basically create an instant "I/O turbo boost" and give you added performance right away, versus waiting for a scheduled tier move?  If so, it would seem to me that you would definitely benefit from having FAST Cache working with a 3-tier pool, even with it already having Flash drives in it.  Is that right?

As I said above, the VNX will probably destroy the performance of my CX3 that is currently running at 2Gb FC, whether I have FAST Cache/VP or not.  I just want to do this in a way that is easy to manage, and still make sure I'm not completely shooting myself in the foot with the expensive SSD drives.

392 Posts

November 30th, 2011 07:00

it would seem to me that you would definitely benefit from having FAST Cache working with a 3-tier pool, even with it already having Flash drives in it.

I think you are beginning to understand the flexibility of the VNX's provisioning.

Flash-tier-resident LUNs (and parts of LUNs) and traditional LUNs based on flash drives consistently have the highest level of performance.

FAST Cache-assisted LUNs are promoted into the FAST Cache after their initial accesses.  Once there, they have the highest level of performance.  Promotion of data into the cache is based on the number of accesses OR proximity to an address being accessed.  That is, a 'block' of data around a frequently accessed address or addresses is promoted into the cache.
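Conceptually – and this is only an illustration, not the array's actual algorithm – the promotion decision looks something like this (the hit threshold and neighbor radius are assumptions for the sketch):

```python
# Illustrative-only sketch of a FAST Cache style promotion policy.
# Shows the two triggers described above: repeated access to a 64 KB chunk,
# or proximity to a hot address. Thresholds are invented for illustration.
from collections import defaultdict

CHUNK = 64 * 1024          # 64 KB tracking granularity
PROMOTE_HITS = 3           # assumed hit threshold for this sketch
NEIGHBOR_CHUNKS = 1        # promote immediate neighbors of a hot chunk

hits = defaultdict(int)
cached = set()

def on_io(address, already_on_flash=False):
    """Record an I/O and decide whether its chunk gets promoted."""
    if already_on_flash:
        return             # data already on a flash tier is never cached again
    chunk = address // CHUNK
    hits[chunk] += 1
    if hits[chunk] >= PROMOTE_HITS and chunk not in cached:
        # promote the hot chunk plus its neighbors (the locality trigger)
        for c in range(chunk - NEIGHBOR_CHUNKS, chunk + NEIGHBOR_CHUNKS + 1):
            cached.add(c)

for _ in range(3):
    on_io(10 * CHUNK + 512)
print(sorted(cached))      # chunks 9, 10, 11 promoted
```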

Flash drives are a low capacity expensive resource.  Mechanical drives are a medium to high capacity less-expensive resource.

Figuring out whether you will get the best cost/performance benefit from a FAST Cache (or a larger FAST Cache) vs. a Flash FAST VP tier is the subject of a lot of spreadsheet analysis and guesstimation.

If I were running an OLTP workload and I knew a specific LUN or a small number of LUNs of modest size was 'hit' every time, all the time, I would fix it in a RAID group made up of Flash drives. This is the Flash tier deployment case for flash drives.  If my workload accessed a number of different LUNs of TB capacity, and subsets of these LUNs were accessed together, leaving the others unused for minutes to a small number of hours, I would use all my Flash drives in a FAST Cache.
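In crude spreadsheet terms, the decision often reduces to whether the hot working set fits in the flash you have (all numbers below are invented, just to show the shape of the analysis):

```python
# Toy cost/benefit comparison: flash as a dedicated tier vs. as FAST Cache.
# All figures are invented for illustration.
flash_usable_gb = 400          # e.g. 4+1 RAID5 of 100 GB drives as a tier
cache_usable_gb = 100          # e.g. 1+1 mirror as FAST Cache

oltp_hot_lun_gb = 80           # small LUN hit constantly
rotating_hot_set_gb = 90       # hot subset that shifts around multi-TB LUNs

# A constantly hot, modest-sized LUN fits on dedicated flash: pin it there.
print(oltp_hot_lun_gb <= flash_usable_gb)      # True -> flash tier / flash RAID group

# A shifting working set that still fits in cache favors FAST Cache,
# because the cache follows the heat wherever it moves.
print(rotating_hot_set_gb <= cache_usable_gb)  # True -> FAST Cache
```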

You know your workload.

HTH 

38 Posts

November 30th, 2011 08:00

Rainer_EMC wrote:

For the rarely accessed data, I would test whether it's compressible – if so, put it onto an NFS/CIFS share (and not into a VMDK) and use file compression.

The bulk of what is going to be the "rarely accessed data" is either already highly compressed radiology JPEG images or radiology archive data in large ZIP files, so I don't think I would benefit from compression.

But I am interested in using the VNX Celerra components and setting up CIFS shares – that's part of the reason I bought the VNX in the "unified" setup. From a cost standpoint, it didn't add much to the overall purchase, and I thought it might be useful someday.  However, for someone who has always set up file shares on Windows servers (both physical and virtual) and never used a NAS, is it going to be much of a challenge to start using the VNX CIFS capabilities?  I know this is a big question, but generally speaking, from those who have done both:  What advantages, other than not having to manage/maintain Windows servers, are there to using VNX NAS to serve out files to applications and users?  What about disadvantages?  I wonder if I would miss the ability to remote into a Windows box and directly move/manage files on it, or have scheduled tasks/scripts that run directly on that Windows box.

38 Posts

November 30th, 2011 08:00

jps00 wrote:

If I were running an OLTP workload and I knew a specific LUN or a small number of LUNs of modest size was 'hit' every time, all the time, I would fix it in a RAID group made up of Flash drives. This is the Flash tier deployment case for flash drives.  If my workload accessed a number of different LUNs of TB capacity, and subsets of these LUNs were accessed together, leaving the others unused for minutes to a small number of hours, I would use all my Flash drives in a FAST Cache.

You know your workload.

Well, I'm not sure how well I really know our workload, heh.  I do know which systems I want to really improve performance on (a particular clinical SQL DB app, for one) and would love to dump the whole thing into Flash storage, but the databases are much too large for that.  I have been hoping that FAST VP would work its magic and find the 1GB chunks that are "really hot" and move them into Flash storage for me.  But how could I know ahead of time how many 1GB chunks the VNX will move from these various SQL DB data sets until it's actually doing it?  If I knew that now, I could probably more accurately size how much goes to Cache versus the Tier.  Maybe I am thinking too much about this and should just experiment, but time is not on my side.
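The only back-of-the-envelope I can come up with is a skew guess – if I assume something like a 90/10 rule for the databases, the hot-slice count falls out like this (both the skew number and the database size here are pure guesswork on my part):

```python
# Guesstimate of how many 1 GB FAST VP slices stay "hot", assuming a skew rule.
# Both figures below are assumptions, not measurements.
db_capacity_gb = 4096     # hypothetical total size of the busy SQL data sets
hot_fraction = 0.10       # assumed: ~10% of capacity takes ~90% of the I/O

hot_slices = int(db_capacity_gb * hot_fraction)   # FAST VP slices are 1 GB
print(f"~{hot_slices} hot 1 GB slices vs. 400 GB of flash tier")
```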

So what happens if I make a big storage pool and decide later I want to change things up in regard to the flash drives?  Am I going to have to migrate data off to rebuild the pool, or can I just add/remove flash drives to that tier while data is in the pool?

8.6K Posts

November 30th, 2011 09:00

Setting up the Celerra side for CIFS or NFS is easy

If you want to do multi-protocol it gets a bit more difficult
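For basic CIFS the flow from the control station is short – sketched below as a small Python wrapper (the command syntax is from memory, so verify it against the Celerra docs; names like server_2, vnxfs01, the domain, and the share path are just examples):

```python
# Hedged sketch of the basic Celerra/VNX-file CIFS setup, driven from the
# control station. All names are examples; verify command syntax in the docs.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Start the CIFS service on Data Mover server_2.
run(["server_setup", "server_2", "-Protocol", "cifs", "-option", "start"])

# 2. Create a CIFS server on an interface (joining it to Active Directory
#    is a separate "server_cifs ... -Join" step afterwards).
run(["server_cifs", "server_2", "-add",
     "compname=vnxfs01,domain=example.local,interface=cge0"])

# 3. Export an existing file system as a CIFS share.
run(["server_export", "server_2", "-P", "cifs",
     "-name", "radiology", "/radiology_fs"])
```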

Advantages are that it's more flexible to have that “bulk file” data in a file share than inside a VMDK.

You also get instant multi-user access

In comparison to Windows servers it's less work, fewer patches to install, and more flexible.

Disadvantages would be if you need functions that only work with native Windows servers like DFS-R (FRS) or encryption

It might even make sense for part of your VMware data, like templates and installation CDs.

That way you can access it from every client via NFS/CIFS/FTP without having to go through VMware.

Obviously as an EMC employee and long-time Celerra advocate I am a bit biased there – so what do others think ?

The only thing to be aware of is to do proper capacity planning – think about how much space you want to use for SAN vs. NAS, since it's more effort to change afterwards.

Adding space isn't a problem – but reducing it is.

Rainer

38 Posts

November 30th, 2011 09:00

Thanks Rainer.  I do like the idea of not having to mess with Windows service packs, version changes, patches, AV issues, etc.  The Celerra stuff, and NAS in general, is just foreign to me.  I'll need to read up and/or do some web training videos on it once I jump in.

Rainer_EMC wrote:

The only thing to be aware of is to do proper capacity planning – think how much space you want to use for SAN vs. NAS since its more effort to change it afterwards.

Adding space isn’t a problem – but reducing

So this was going to be my next question, and hopefully you can clarify.  I don't know exactly how much I am going to use the Celerra file portion of my VNX, but I do need to proceed with carving out LUNs for all my block storage needs.  I have been playing with it this morning and reading through the forums. Can I just go ahead and create my big storage pool now, then allocate LUNs within it to assign to the Celerra file portion for use as CIFS/NFS/etc. later?

I'm just tinkering right now, and I'm sure I have the steps out of order, but for example, I'm getting a rather drab window with a message that there is no available Template Pool when I go under Storage Pools for File and try to click Create.  I created a LUN within my big 3-tier storage pool and assigned it to the ~filestorage group under the Storage Groups section.  Maybe I'm way off the path here, haha – I need to do some more reading.  I just want to make sure I don't have to determine up front how much space to exclude from my big storage pool to "save" for Celerra file use.  It would be great if you could allocate its space *from* the main storage pools.

2 Intern

 • 

20.4K Posts

December 1st, 2011 05:00

So this was going to be my next question, and hopefully you can clarify.  I don't know exactly how much I am going to use the Celerra file portion of my VNX, but I do need to proceed with carving out LUNs for all my block storage needs.  I have been playing with it this morning and reading through the forums. Can I just go ahead and create my big storage pool now, then allocate LUNs within it to assign to the Celerra file portion for use as CIFS/NFS/etc. later?

I was kind of struggling with this myself; take a look at this thread:

https://community.emc.com/thread/124591
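The net of it, as I understand it, is roughly the sequence below – sketched in Python, with command flags hedged (double-check them against the docs; the LUN numbers, pool name, and LUN name are just examples):

```python
# Hedged sketch of presenting block-pool LUNs to the Celerra file side.
# LUN/pool names and numbers are examples; verify exact flags before use.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Carve a LUN out of the existing block storage pool.
run(["naviseccli", "-h", "spa_address", "lun", "-create",
     "-type", "Thin", "-capacity", "500", "-sq", "gb",
     "-poolName", "Pool 0", "-name", "file_lun_01"])

# 2. Present it to the Data Movers via the ~filestorage storage group.
run(["naviseccli", "-h", "spa_address", "storagegroup", "-addhlu",
     "-gname", "~filestorage", "-alu", "20", "-hlu", "20"])

# 3. On the control station, rescan so the new space shows up as
#    disk volumes available for file use.
run(["nas_diskmark", "-mark", "-all"])
```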

2 Intern

 • 

20.4K Posts

December 1st, 2011 05:00

r1214 wrote:

So what happens if I make a big storage pool and decide later I want to change things up in regards to the flash drives?  Am I going to have to migrate data off to rebuild the pool or can I just add/remove flash drives to that tier while data is in the pool?

You can't "drain" and remove specific devices from a pool (something you can do on a VMAX); it would be a complete rebuild of the pool.

38 Posts

December 1st, 2011 08:00

dynamox wrote:

You can't "drain" and remove specific devices from a pool (something you can do on a VMAX); it would be a complete rebuild of the pool.

So basically I'd better be happy with my planned setup of 4+1 Flash drives for the FAST VP tier and 1+1 Flash drives for Cache?  Otherwise, if I want more drives for Cache later, I'll have to either buy more or destroy and re-create the pool?

38 Posts

December 1st, 2011 08:00

dynamox wrote:

I was kind of struggling with this myself; take a look at this thread:

https://community.emc.com/thread/124591

Some of that was over my head since I have no past Celerra experience, but it was a good read.  The basic net-net, though, is that while it may not be the most uber-optimal recommended setup, you CAN indeed create LUNs out of your big block storage pool and assign them for use by the Celerra NAS?
