December 20th, 2010 08:00

Creating FAST Storage Pools

Hello all,

I am about to start a FAST configuration for a customer that will only include FC and SATA disks. I have read the EMC Clariion FAST document, but it doesn't answer a few questions I have.

From what I read, I should create the storage pools I need and then add them to FAST.

The customer has the following disks to work with:

40 x 300GB 15K FC

15 x 146GB 15K FC

7 x 2TB SATA

My plan was to create multiple 4+1 R5 groups with the 300GB FC disks. I read that 4+1 was the optimal configuration to use in storage pools.

I was then going to use (14) 146GB FC drives and create an R1/0 pool.

For the SATA disks, I am really limited, since I need one for a HS and 6 disks is not really optimal for an R5 pool. I can make a 6-disk R6 pool but am just not sure. If I don't make the R6 pool, then I have a disk just sitting there...

I am hoping that with this config I will have 3 tiers of storage: R1/0 FC, R5 FC, and R5 SATA (since I can't have EFD, FC, and SATA), and that FAST will know to move something up from the R5 pool to the R1/0 pool if it needs to.

First question: once I create this pool, if later down the road the customer purchases EFD and wants to add it to the FAST config, can it be added without wiping everything off the current FAST pool?

I think I am missing something, since I don't understand how to make those multiple 4+1 R5 groups into one big group. Unless it is that I need to create a pool, right-click it, select Expand, and then add the other pools? I thought you could only do that with disks? This will be my first time configuring FAST, so I guess I will have to wait until I can see it in front of me.

Any advice would be great.

Thanks

474 Posts

December 20th, 2010 10:00

FASTVP in Clariion/FLARE 30 works within a single storage pool, so you would create pools that each contain multiple disk types in order to use FASTVP.  There is really no difference between a normal thin pool and a FASTVP pool, other than containing multiple disk types/tiers.  You can add more disks of each type to the pool, so if a customer orders some EFDs later, you can simply add those spindles to the pool and FASTVP will start to tier up.  You also don't need to create any RAID groups; you just create the storage pool from the physical disks.

There are a couple of limitations with FASTVP on Clariion that will affect how you configure the pools.

1.) A pool can consist of 1 to 3 disk technologies: EFD, FC, and/or SATA.  When you create the pool, you specify the number of disks of each type to use, and a single pool is created from which you can create LUNs.

2.) FASTVP does not know the difference between different speeds or sizes of the same disk technology.  To the FASTVP engine, 10K and 15K FC are the same, and 5.4K and 7.2K SATA are likewise the same as each other.  Because of this, it is recommended to use the same speed/capacity drives for each technology within a single pool.

3.) Because FASTVP operates within a single pool, all disks regardless of tier use the same RAID type.  In RAID5 it's 4+1, RAID6 uses 6+2 in pools.

The downside for your configuration is that there is no way to tier between the RAID10 FC and RAID5 FC disks automatically.  You will end up with two pools at a minimum, one for the RAID10 and the other for RAID5.  You could put a single pair of SATA drives in the RAID10 pool and put 4 of them in the RAID5 pool, leaving one for a spare.  4 disks is not an optimal disk count for a RAID5 pool since it uses 4+1 by default, but since it's the SATA tier it's not the end of the world.
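To make the two-pool split concrete, here is a rough usable-capacity sketch (illustrative Python, not an EMC sizing tool). It uses the drive counts that come up in this thread, assumes 4+1 private groups for RAID5 (80% usable) and a flat 50% usable for RAID1/0, and ignores vault drives and formatting overhead, so treat the numbers as ballpark only.

USABLE_FRACTION = {"r5": 4 / 5, "r6": 6 / 8, "r10": 1 / 2}

def usable_tb(count, drive_tb, raid):
    # Approximate usable capacity contributed by one tier of a pool.
    return count * drive_tb * USABLE_FRACTION[raid]

# RAID1/0 pool: 14 x 146GB FC plus a single pair of 2TB SATA
r10_pool = usable_tb(14, 0.146, "r10") + usable_tb(2, 2.0, "r10")

# RAID5 pool: 35 x 300GB FC plus 4 x 2TB SATA (1 SATA left as a spare)
r5_pool = usable_tb(35, 0.300, "r5") + usable_tb(4, 2.0, "r5")

print(round(r10_pool, 1))  # ~3.0 TB usable
print(round(r5_pool, 1))   # ~14.8 TB usable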

Depending on the customer's budget, if they can't afford EFDs for FASTCache AND for FAST, the recommendation is to use EFDs for FASTCache first, then buy for FAST itself.  FASTCache works across the entire array for normal LUNs as well as FASTVP and non-FASTVP pool LUNs.  So you get the best bang for your buck with FASTCache.

The Clariion VP White Paper attached discusses pools, including FAST pools, and has some screenshots of the dialogs used to create pools.


542 Posts

December 20th, 2010 11:00

Richard, thanks for the explanation.  I was thinking of something different before.  I thought that I had to create the 4+1 groups in a pool; I didn't know that the pool does it for me.  That's where I got confused.  I have read that document and I am thinking that I will configure it like this:

(35) 300GB FC disks (will create (7) 4+1 groups).  Can't do 40 drives since I need a HS, and then I wouldn't be left with a multiple of 5.

(5) 2TB SATA disks (will have 1 extra disk; maybe just make an extra HS since the disks are so big?)

I will leave the extra (14) 146GB and (4) 300GB drives for normal RGs if and when the customer needs them.  I know that I can first create the pool with the (35) disks and then expand it with (10) 146GB drives, but I already know that the space they will get from the first 35 disks is overkill.

Does the best practice for hot spares (1 for every 30 disks) still apply to large storage pools? Should I use more of my (4) extra 300GB FC disks as hot spares?

What are the possible IOPS for a 35-disk R5 pool with 15K drives? 35 x 180 = 6300, but I think I am missing something for the RAID type calculation?

542 Posts

December 20th, 2010 14:00

Great, thanks for the info.  This helps a lot.

474 Posts

December 20th, 2010 14:00

For FC disks, 1 spare per 30 disks is still best practice, but you would want to have a minimum of a couple of spares.  I don't recommend only having 1 spare for a system just because you have only 30 or 35 drives.  The easiest thing to do is use the largest FC drive you have as the global spare for any FC drives.  So in your case, use 300GB spares; those will also cover the 146GB drives.
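As a quick sanity check, here is a small sketch (illustrative Python, hypothetical helper name) of the 1-spare-per-30-drives rule of thumb, with an assumed floor of two FC spares as suggested above:

import math

def spares_needed(drive_count, ratio=30, minimum=2):
    # Rule of thumb: 1 hot spare per 30 drives of a technology, with a floor.
    return max(math.ceil(drive_count / ratio), minimum)

print(spares_needed(40 + 15))       # 55 FC drives -> 2 spares (300GB covers 146GB)
print(spares_needed(7, minimum=1))  # 7 SATA drives -> 1 spare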

IOPS is dependent on a variety of things, as I'm sure you know.  The number of threads can increase the number of IOPS per spindle, for example.

The basic calculation you already did of 180 IOPS per spindle for 15K drives x 35 drives gives you a conservative disk-level IOPS figure for small-block random IO.  That number holds in RAID5 for a 100% read environment.  Writes have more impact, and the basic assumption is 4 disk IOs per host write IO.

RAID5:  Disk IOPS = Read IOPS + (Write IOPS * 4)

RAID10: Disk IOPS = Read IOPS + (Write IOPS * 2)

RAID6:  Disk IOPS = Read IOPS + (Write IOPS * 6)

There are optimizations for RAID5 that can help if you are getting sequential or large-block writes.  For example, a 4+1 RAID5 nets a 256KB stripe size, so if you are issuing 256KB IOs to the array, those writes will not incur the 4x IO penalty.  CX can also coalesce multiple 64KB IOs into a single 256KB IO, assuming it's sequential IO.  Similar optimizations occur for RAID6 and RAID3.

Further, the 180 IOPS per spindle is a rule-of-thumb number based on 70% disk utilization, keeping latency low for DBs, etc.  I've seen much higher (over 300 IOPS per spindle) in some cases but generally the response time increases as the IOPS increase.  Write cache and prefetching will also help.
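Putting those numbers together, here is a minimal sketch (illustrative Python, using the rule-of-thumb formulas above; the 70/30 read/write mix is just an assumed example) of the back-end IOPS math and the resulting spindle count:

import math

WRITE_PENALTY = {"r5": 4, "r10": 2, "r6": 6}  # back-end IOs per host write

def disk_iops(host_iops, write_fraction, raid):
    # Back-end (disk) IOPS generated by a given host workload mix.
    reads = host_iops * (1 - write_fraction)
    writes = host_iops * write_fraction
    return reads + writes * WRITE_PENALTY[raid]

def spindles_needed(host_iops, write_fraction, raid, iops_per_spindle=180):
    return math.ceil(disk_iops(host_iops, write_fraction, raid) / iops_per_spindle)

# The 35 x 15K RAID5 pool supports roughly 35 x 180 = 6300 back-end IOPS.
# At a 70/30 read/write mix, about 3300 host IOPS fit in that budget:
print(disk_iops(3300, 0.30, "r5"))        # 6270.0 back-end IOPS
print(spindles_needed(3300, 0.30, "r5"))  # 35 spindles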

26 Posts

August 6th, 2015 07:00

Good day Rich,

Could you please explain the statement below?

  Because FASTVP operates within a single pool, all disks regardless of tier use the same RAID type.  In RAID 5 it's 4+1, RAID 6 uses 6+2 in pools.

RAID 6 6+2 means how many data and how many parity drives? Same question for RAID 5.

Why is it 6+2? We need a minimum of 3 disks in RAID 6, so can it be 3+2 as well?

Please advise.

1.2K Posts

August 6th, 2015 10:00

Shr@y wrote:

Good day Rich,

Could you please explain the statement below?

  Because FASTVP operates within a single pool, all disks regardless of tier use the same RAID type.  In RAID 5 it's 4+1, RAID 6 uses 6+2 in pools.

RAID 6 6+2 means how many data and how many parity drives? Same question for RAID 5.

Why is it 6+2? We need a minimum of 3 disks in RAID 6, so can it be 3+2 as well?

Please advise.

RAID5 4+1 means 4 data drives and 1 parity drive (not completely accurate, because data and parity are actually striped across all the members of the RAID group).  RAID6 6+2 means 6 data drives and 2 parity drives.  Unlike RAID5, RAID6 requires an even number of data drives, e.g. RAID6 4+2, 6+2, 8+2, 12+2, etc.  You cannot create a RAID6 3+2 group.
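To illustrate the point, a short sketch (illustrative Python) of data versus parity for the group widths mentioned above; RAID6 always carries two parity drives' worth of capacity, so the overhead shrinks as the group gets wider:

layouts = [("RAID5 4+1", 4, 1), ("RAID6 4+2", 4, 2), ("RAID6 6+2", 6, 2), ("RAID6 8+2", 8, 2)]

for name, data, parity in layouts:
    overhead = parity / (data + parity)
    # e.g. RAID6 4+2 -> 33% of raw capacity used for parity, 8+2 -> 20%
    print(name, "=", data, "data +", parity, "parity,", f"{overhead:.0%} of raw capacity used for parity")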

Let us know if that helps!

Karl

26 Posts

August 6th, 2015 11:00

It helped indeed, thanks Karl.
