We have separate disk groups for 146G 15K drives and 300G 10K drives (I think they have to live in distinct groups), plus an additional disk group of 8 300G disks for special purposes.
At device creation time, we use the -disk_group option of symconfigure to indicate which group to use. Unfortunately ECC (5.2 for us) doesn't seem to have a way of specifying which disk group to use when creating a device, and also has no easy way to distinguish volumes from the different disk groups. From the command line, the same -disk_group option can be used with symdev and symdisk to list disks or devices per disk group.
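For anyone following along from the CLI, the two halves of that workflow look roughly like this. This is a sketch only: the SID, count, size, protection and disk group number are made-up placeholders, and the exact create dev attributes should be checked against your Solutions Enabler version.

```shell
# Hypothetical SID and disk group; 'preview' validates without applying.
symconfigure -sid 1234 -cmd \
  "create dev count=8, size=8632, emulation=FBA, config=2-Way-Mir, disk_group=1;" \
  preview

# After committing, list the devices that live on backend disk group 1:
symdev -sid 1234 list -disk_group 1
```

Running with preview first (then prepare/commit) is the usual safety net before changing the bin file.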
To help us track things in ECC we manually created groups with the devices for each tier. In other parts of the interface you can then choose to "arrange by" devices based on your tiers. This is manageable if you're adding devices in bulk via the bin file (you get large contiguous ranges for each tier), but it would certainly be a pain if you create them as you go.
We have 146G drives, 300G drives and 500G drives. For my shop's business needs we decided to create hypers of different sizes to represent the different tiers of storage: my 146G drives are split into 8632MB hypers, my 300G drives into 17264MB hypers, and my 500G LCFC drives into 58G hypers. I understand that with this setup I can't do any TimeFinder operations from one tier of storage to another, but that's fine for my shop. We have different needs for different tiers: the 500G LCFC drives are strictly used for NAS (Celerra) gateways where performance is not a requirement, the 300G drives are used for test, dev and some prod, and the 146G drives are used for prod.
Allen, different threads seem to blend and mix together ... The answer to all your needs is DGs (Disk Groups) .. But we already talked about that in another thread, about the differences between RGs and DGs.
As you read, it's possible to mix different drive types in a Symmetrix (yes, even Symms allow mixing different devices in the backend) .. Disks of a different size or speed will be in different disk groups in the backend .. You can use the "symdev list -disk_group #" command to find the devices in a given disk group, and symdisk to list your disks and get details about every single disk in the array, even filtering "-by_diskgroup" ... You can also use device groups (hmm, a little confusing, since I'm now talking about SYMCLI device groups, not backend disk groups) to keep things separated. ECC will find the SYMCLI device groups, and this will help you track the different storage types when you mask storage to your hosts. The best thing is to create devices in big batches (but somebody already told you that).
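As a sketch of the listing side (the SID and disk group number here are placeholders):

```shell
# All physical disks, summarized per backend disk group:
symdisk -sid 1234 list -by_diskgroup

# Only the disks belonging to one backend disk group:
symdisk -sid 1234 list -disk_group 2
```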
Can you please tell us where you found the whitepaper you mentioned? I'm interested in reading it, since AFAIK EMC says exactly the opposite .. But maybe I'm wrong. Consider that when you build a metadevice you are FORCED to use devices with the same protection to get the best performance.
It sounds like this is something we could do using command line... unfortunately we have been given direction from the "powers that be" that we will use the ECC GUI for as much as possible to ensure simplified, consistent provisioning of storage. At least that is the excuse they are giving us!
That sounds pretty much like one of the "solutions" we put on the table, but in the end we didn't want to introduce additional device sizes... we have standardized on two sizes for open systems and don't want to add more complexity.
It is sounding more and more like this may not be something we want to pursue until EMC can manage it more effectively in their "flagship" management product.
I found one EMC whitepaper that suggests mixing the drives up so that devices are built across different drive types and therefore provide a "blended" performance, but that isn't what we want. It refers to a few options for tiering storage, but none of them are practical for us to manage.
This isn't exactly what the Technote says .. It simply refers to the drives in the backend and states that the preferred way of implementing tiered storage in a DMX4 is "to spread the different types of drives throughout the system with no special attention paid to intermixing drive speeds and capacities" .. This has little or nothing to do with the devices you'll map to the hosts. The concept behind the technote is that it's better (for the storage and for your hosts) to spread the BACKEND workload across ALL AVAILABLE PROCESSORS instead of creating "hardware" partitions (as in the previous example from the same technote).
It doesn't talk about "blended" performance .. since -I think- nobody wants to blend everything into a single big pool .. devices on high-performance disks will have the best possible performance .. devices on slower disks will simply be slower, but will give you the best possible performance for a slower disk ..
Different drive types will be in different disk groups. So you'll have different disk groups, and each and every disk group will span each and every processor. This will give you the best possible performance for every drive type. Or, in other terms: given the disk group, you'll know the performance of the drives in it. Different disk groups will mean different performance from your drives .. and from your devices.
You probably know ECC better than I do .. would it be possible to keep hypers all the same size, but on the ECC side put them into logical containers, keeping hypers from one disk group (one type of drive) in one container, and so on? But then, once they are provisioned to the host, would you be able to identify which container they came from? Identifying their tier from symcli would also be a little more elaborate.
There is a way to do this as well, but it still requires manual maintenance.
You can create Storage Pools (as long as you have the ARM license) and put different devices in different pools. Unfortunately there is no way to put devices in a pool at creation time, so you still have to figure out which devices offer which performance characteristics in order to divide them up.
I think EMC needs to work on their Storage Pool implementation anyway. I can drop an entire DMX on a Storage Pool and it will put all the devices in that pool, but it can't remember that you wanted that Symm in that pool. Anything new you create after that still needs to be manually added to the correct pool.
Is there a way to reference Disk Groups from within ECC?
If so, please point me to the right documentation so I can figure it out. If not, we're still stuck, since our processes require using ECC to provision (to maintain accountability and track who did what).
I don't know ECC .. so -as MrTS2Symm suggested- I think it's better to ask in the EMC Control Center forum, where a quicker reply could come ..
But if you want to use symcli to solve this problem, you can create device groups (one device group for every disk group) and put the symdevs into the device group that matches their disk group .. ECC will discover the device groups and import them into its configuration.
You can also create a small shell script that deletes and recreates the device groups, rescanning the devices and putting each one in the appropriate device group (depending on the disk group it belongs to). ECC will later discover the device groups and you'll have what you want.
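A rough sketch of such a script, under the assumption that symdev list prints one device per line starting with the hex device number. The SID, group numbers and naming scheme below are invented, and the parsing should be verified against your own Solutions Enabler output before use.

```shell
#!/bin/sh
# Rebuild one SYMCLI device group per backend disk group so that
# ECC can discover them on its next rescan. All names are hypothetical.
SID=1234

for grp in 1 2 3; do
    dg="TIER_DISKGROUP_${grp}"

    # Drop the old device group (ignore errors if it does not exist yet).
    symdg delete "$dg" -force >/dev/null 2>&1
    symdg create "$dg" -type REGULAR

    # Add every device from backend disk group $grp to the device group.
    symdev -sid "$SID" list -disk_group "$grp" |
        awk '/^[0-9A-F][0-9A-F][0-9A-F][0-9A-F] /{print $1}' |
        while read dev; do
            symld -g "$dg" -sid "$SID" add dev "$dev"
        done
done
```

Run from cron, this keeps the device groups in step with the bin file, at the cost of briefly tearing them down on each pass.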
I thought about posting there, but it made more sense to me to post it here. Not everyone who uses ECC has Symms, and this is a Symm specific question. I know not everyone with Symms uses ECC, but I suspect everyone who has Symms AND ECC will visit here.
LOL
In our shop, we are planning to add 10K drives as tier-2 storage. We might not logically partition the system by reserving FAs for each tier; we already have FAs reserved for different OSs, so we plan to continue with that.
This is not in place yet, so I think I am in the same boat ... still thinking about how to see it with ECC (though we prefer the CLI).
And once it's all done, I need to open a new thread to see how EMC reports tiered storage in StorageScope, because that's where we get our billing reports. We certainly do not want to do the chargeback manually. #sleep
I guess the answer is that there really isn't a perfect answer to this. Lots of helpful suggestions on how to work around the issue, but I guess we'll have to wait for the Control Center engineers to build this into a new version someday.