Allen Ward
4 Operator
2.1K Posts
October 1st, 2007 11:00
Tiering storage within a single array
An interesting question came up recently for one of our applications. We recently migrated the hosts for this application from IBM ESS arrays onto a DMX3, and now they are looking to change how the backups are done for the hosts. They want to use additional low-performance disk to dump the database and then back that up to tape.
Currently our DMX is configured with all 146GB 10K rpm drives. Their request would have us adding 300GB 10K rpm drives to the same array. We looked at this closely and, due to some issues with how that would be managed, we gave them the additional space on a CLARiiON instead. This breaks an established best practice in our enterprise (avoid attaching a host to multiple arrays), but it seemed at the time like the best option.
My question is for anyone out there who is using multiple drive types (with different performance characteristics) in a single DMX3 (or other Symm for that matter). How do you manage which devices are built on which type of drive? How do you ensure that any given device you mask to a host has the required performance characteristics?
I found one EMC whitepaper that suggests mixing the drives up so that devices are built across different drive types and therefore provide a "blended" performance, but that isn't what we want. I want to be able to give a specific performance class to meet each request... and I don't want to have to track everything manually.
Is anyone doing this? Am I missing something simple? We have ECC 5.2 SP5 as well as the full suite of Solutions Enabler commands to work with, so I'm open to anything (except manually tracking this).
I appreciate any help that you have for me.
Thanks


mcd2
3 Posts
October 1st, 2007 12:00
We have separate disk groups for 146G 15K drives and 300G 10K drives (I think they have to live in distinct groups), plus an additional disk group of 8 300G disks for special purposes.
At device creation time, we use the -disk_group option of symconfigure to indicate which group to use. Unfortunately ECC (5.2 for us) doesn't seem to have a way of specifying which disk group to use when creating a device, and also has no easy way to distinguish volumes from the different disk groups. From the command line, the same -disk_group option can be used with symdev and symdisk to list disks or devices per disk group.
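For anyone following along from the command line, a quick sketch of the listing side described above (the Symmetrix ID here is an assumption for illustration):

```shell
# List a summary of the disk groups and the drives in each (SID is hypothetical)
symdisk -sid 1234 list -dskgrp_summary

# List the physical disks in disk group 1
symdisk -sid 1234 list -disk_group 1

# List the devices that were carved from disk group 1
symdev -sid 1234 list -disk_group 1
```

These require a live Symmetrix and the Solutions Enabler tools, so treat them as a sketch rather than a tested recipe.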
To help us track things in ECC we manually created groups with the devices for each tier. In other parts of the interface you can then choose to "arrange by" devices based on your tiers. This is manageable if you're adding devices in bulk via the bin file (you get large ranges for each tier), but it would certainly be a pain if you create them as you go.
Marc
dynamox
9 Legend
20.4K Posts
October 1st, 2007 13:00
We have 146G drives, 300G drives and 500G drives. For my shop's business needs we decided to create hypers of different sizes to represent different tiers of storage: my 146G drives are split into 8632MB hypers, my 300G drives are split into 17264MB hypers, and my 500G LCFC drives are split into 58G hypers. I understand that with this setup I can't do any TimeFinder operations to go from one tier of storage to another, but that's fine for my shop. We have different needs for different tiers of storage: the 500G LCFC drives are strictly used for NAS (Celerra) gateways where performance is not a requirement, the 300G drives are used for test, dev and some prod, and the 146G drives are used for prod.
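As a sketch, tier-specific hyper sizes like those above could be carved with a symconfigure command file along these lines. The SID, device counts, disk group numbers and mirrored protection are all assumptions for illustration, and depending on the Solutions Enabler version the sizes may need converting from MB to cylinders:

```shell
# Sketch only: SID, counts, disk_group numbers and protection are assumptions.
cat > tiers.cmd <<'EOF'
create dev count=16, size=8632 MB, emulation=FBA, config=2-Way-Mir, disk_group=1;
create dev count=16, size=17264 MB, emulation=FBA, config=2-Way-Mir, disk_group=2;
EOF

# Always preview a bin-file change before committing it
symconfigure -sid 1234 -f tiers.cmd preview
symconfigure -sid 1234 -f tiers.cmd commit
```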
xe2sdc
4 Operator
2.8K Posts
October 2nd, 2007 00:00
As you've read, it's possible to mix different drive types in a Symmetrix (yes, even Symms are allowed to mix different drives in the backend).
Can you please tell us where you found the whitepaper you mentioned? I'm interested in reading it, since AFAIK EMC says exactly the opposite... but maybe I'm wrong. Consider that when you build a metadevice you are FORCED to use devices with the same protection to get the best performance.
Allen Ward
4 Operator
2.1K Posts
October 11th, 2007 06:00
It sounds like this is something we could do using the command line... unfortunately we have been given direction from the "powers that be" that we will use the ECC GUI for as much as possible, to ensure simplified, consistent provisioning of storage. At least that is the excuse they are giving us!
Allen Ward
4 Operator
2.1K Posts
October 11th, 2007 06:00
It refers to a few options for tiering storage, but none of them are practical for us to manage.
Allen Ward
4 Operator
2.1K Posts
October 11th, 2007 06:00
That sounds pretty much like one of the "solutions" we put on the table, but in the end we didn't want to introduce additional device sizes... we have standardized on two sizes for open systems and don't want to add more complexity.
It is sounding more and more like this may not be something we want to pursue until EMC can manage it more effectively in their "flagship" management product.
xe2sdc
4 Operator
2.8K Posts
October 11th, 2007 07:00
"...mixing the drives up so that devices are built across different drive types and therefore provide a 'blended' performance, but that isn't what we want."
This isn't exactly what the Technote says. It simply refers to the drives in the backend and states that the preferred way of implementing tiered storage in a DMX4 is "to spread the different types of drives throughout the system with no special attention paid to intermixing drive speeds and capacities". This has little if anything to do with the devices you'll map to the hosts. The concept behind the technote is that it's better (for the storage and for your hosts) to spread the BACKEND workload across ALL AVAILABLE PROCESSORS instead of creating "hardware" partitions (as in the previous example from the same technote).
It doesn't talk about "blended" performance, since (I think) nobody wants to blend everything into a single big pool. Devices on high-performance disks will have the best possible performance; devices on slower disks will simply be slower, but will give you the best possible performance for a slower disk.
Different drive types will be in different disk groups. So you'll have different disk groups, and each and every disk group will span each and every processor. This will give you the best possible performance for every drive type. Or, in other terms: given the disk group, you'll know the performance of the drives in the DG. Different DGs will mean different performance for your drives... and for your devices.
dynamox
9 Legend
20.4K Posts
October 12th, 2007 05:00
Allen Ward
4 Operator
2.1K Posts
October 12th, 2007 06:00
You can create Storage Pools (as long as you have the ARM license) and put different devices in different pools. Unfortunately there is no way to put devices in a pool at creation time, so you still have to figure out which devices offer which performance characteristics in order to divide them up.
I think EMC needs to work on their Storage Pool implementation anyway. I can drop an entire DMX into a Storage Pool and it will put all the devices in that pool, but it can't remember that you wanted that Symm in that pool. Anything new you create after that still needs to be manually added to the correct pool.
Allen Ward
4 Operator
2.1K Posts
October 15th, 2007 08:00
If so, please point me to the right documentation so I can figure it out. If not, we're still stuck, since our processes require using ECC to provision (to maintain accountability and track who did what).
MrTS2Symm
113 Posts
October 18th, 2007 06:00
You might want to ask this in the EMC Control Center forum as a quicker reply could come about that way.
Michael
xe2sdc
4 Operator
2.8K Posts
October 18th, 2007 07:00
But if you want to use SYMCLI to solve this problem, you can create some device groups (one device group for every disk group) and put the symdevs into the device group that matches their disk group. ECC will discover the device groups and import them into its configuration.
You can also create a small shell script that deletes and recreates the device groups, rescanning the devices and putting them in the appropriate device group (depending on the disk group they belong to). ECC will later discover the device groups and you'll have what you want.
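A minimal sketch of such a script, assuming a hypothetical SID, three disk groups, and TIER*_DG naming (the awk pattern for picking device names out of the `symdev list` output is also an assumption):

```shell
#!/bin/sh
# Sketch: rebuild one device group per disk group so ECC can discover them.
# SID, disk group numbers and DG names are assumptions for illustration.
SID=1234

for grp in 1 2 3; do
    dg="TIER${grp}_DG"

    # Drop the old device group if it exists (ignore the error on first run)
    symdg delete "$dg" -force 2>/dev/null

    symdg create "$dg" -type REGULAR

    # Add every device carved from this disk group to the device group;
    # the awk pattern assumes device names are leading hex tokens.
    for dev in $(symdev -sid "$SID" list -disk_group "$grp" |
                 awk '/^[0-9A-F]+ /{print $1}'); do
        symld -g "$dg" -sid "$SID" add dev "$dev"
    done
done
```

Run from cron (or after provisioning changes) so the device groups stay in sync with the disk groups before ECC's next discovery pass.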
-s-
Allen Ward
4 Operator
2.1K Posts
October 18th, 2007 07:00
Kiran3
410 Posts
October 18th, 2007 19:00
I know not everyone with Symms uses ECC, but I suspect everyone who has Symms AND ECC will visit here. LOL
In our shop, we are planning to add 10K drives as tier-2 storage. We might not logically partition the system by reserving FAs for each tier; we already have FAs reserved for different OSs, so we plan to continue with that.
This is not in place yet, so I think I am in the same boat, still figuring out how to see it with ECC (though we prefer the CLI).
And once it's all done, I need to open a new thread to see how EMC reports tiered storage in StorageScope, because that's where we look to get billing reports. We certainly do not want to do the chargeback manually.
#sleep
Allen Ward
4 Operator
2.1K Posts
August 13th, 2008 13:00