September 23rd, 2009 20:00
Optimal Disk Configuration
I have read several disk configuration documents and posts, but I am still curious about custom configurations. We currently have an NS20 with two DAEs configured this way:
DAE 0_0: ATA_RAID5_4+1_HS_8+1 (1 TB)
DAE 0_1: FC_RAID5_4+1_8+1_HS (450 GB 15K)
We are purchasing a new DAE to address performance issues due to IOPS. In a perfect world I would order 2 x 300 GB 15K FC drives and make them RAID 1 (logs), and 13 x 146 GB 15K FC drives and make them RAID 5 (data) - while using the HS from DAE 0_1. There is no template for this configuration. Is there a way to create a custom template (truly user_defined) to fit this model? If not, is my only other option to purchase two DAEs and use an R5 8+1 setup on one DAE and an R1 setup on the other? That wouldn't be cost- or space-effective.
Rainer_EMC
September 30th, 2009 02:00
Technically speaking this isn't a custom storage pool - it's merely not using the enclosure templates for RAID configuration.
A storage pool is the concept on the Celerra side of how LUNs (dvols) are grouped together.
This is fairly new - before the SPW you had no supported way to configure this on an NS Integrated without a Fibre Channel license, since you didn't have a NaviSphere license.
I can't find it spelled out in a manual - so far the Provision Storage wizard is only documented in its online help and in internal trainings.
Just point the pre-sales guy to this thread and have him contact me.
regards
Rainer
dynamox
September 23rd, 2009 21:00
Welcome to EMC Forums.
I think it's possible to achieve what you are trying to do. You will need to manually create all the different components (slice volumes, metavolumes), and for ease of administration you could add them to user_defined pools. Take a look at these documents: "Managing Celerra Volumes and File Systems Manually 5.6.44 A04" and "Managing Celerra Volumes and File Systems with Automatic Volume Management 5.6.45 A06". They can be found here:
Home > Support > Technical Documentation and Advisories > Hardware/Platforms Documentation > Celerra Network Server > Maintenance/Administration
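The manual flow those documents describe can be sketched roughly like this. The command names are real Celerra CLI, but the dvol names, sizes, and exact flags below are illustrative from memory - verify everything against the manuals above before running it:

```shell
# Sketch only - dvol names (d10...) and the stripe size are hypothetical;
# see "Managing Celerra Volumes and File Systems Manually" for exact syntax.

# 1. List the disk volumes (dvols) presented by the array, noting unused ones
nas_disk -list

# 2. Stripe a set of dvols together
nas_volume -name stv1 -create -Stripe 32768 d10,d11,d12,d13

# 3. Wrap the stripe in a metavolume
nas_volume -name mtv1 -create -Meta stv1

# 4. Group the metavolume into a user_defined storage pool
nas_pool -create -name custom_pool -volumes mtv1
```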
jamiedoherty
September 25th, 2009 07:00
Rainer_EMC
September 29th, 2009 07:00
> 2 x 300 GB 15K FC drives and make them RAID 1 (logs), and 13 x 146 GB 15K FC drives and make them RAID 5 (data) - while using the HS from DAE 0_1. There is no template for this configuration.
You don't have to stick to the templates, but you can only use the RAID group sizes and types that the Celerra supports.
For an NS20 those are:
Disk Group Type   Attach Type     Storage Profile/Pool   Default # of Disk Volumes
8+1 RAID5         Fibre Channel   clar_r5_economy        2
4+1 RAID5         Fibre Channel   clar_r5_performance    2
RAID1             Fibre Channel   clar_r1                2
4+2 RAID6         Fibre Channel   clar_r6                2
6+2 RAID6         Fibre Channel   clar_r6                2
12+2 RAID6        Fibre Channel   clar_r6                4
6+1 RAID5         CX ATA          clarata_archive        2
6+1 RAID5         CX3 ATA         clarata_archive        1
4+1 RAID5         CX3-only ATA    clarata_archive        1
8+1 RAID5         CX3-only ATA    clarata_archive        2
4+1 RAID3         ATA             clarata_r3             1
8+1 RAID3         ATA             clarata_r3             2
4+2 RAID6         ATA             clarata_r6             2
6+2 RAID6         ATA             clarata_r6             2
So if you add 15 drives and use 2 for RAID 1, it's a bit difficult to fit something supported into the remaining 13.
The only combination I can think of is 6+2 R6 plus 4+1 R5, but I don't think you want another storage pool.
I would either get more than one DAE and more disks, or:
- just get 11 or 12 disks and go for one 8+1 R5 or 2x 4+1 R5, plus your 1+1 R1
- or 2x 4+1 R5 and 2x 1+1 R1
- or create 2x 4+1 R5 on the new disks, migrate your file systems off the 8+1 R5 in the first DAE to free it up, and then create two more 4+1 R5 plus your 1+1 R1
It just depends on where you need your additional storage. I find that with too many storage pool types you tend to run out of space in the wrong pool, so I prefer systems that are all 4+1, or at least not too many pools.
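For comparison, the raw usable capacity of those options works out as below. This is simple arithmetic assuming the 146 GB drives from the original question (RAID 5 yields N-1 drives of usable space, RAID 1 yields 1 of 2), before any filesystem overhead:

```shell
# Usable capacity per layout option, hypothetical 146 GB 15K drives
drive=146
echo "one 8+1 R5 + 1+1 R1 (11 disks):    $(( 8 * drive )) GB data, $(( drive )) GB logs"
echo "2x 4+1 R5 + 1+1 R1 (12 disks):     $(( 2 * 4 * drive )) GB data, $(( drive )) GB logs"
echo "2x 4+1 R5 + 2x 1+1 R1 (14 disks):  $(( 2 * 4 * drive )) GB data, $(( 2 * drive )) GB logs"
```

Note that the first two options give the same usable data capacity; the 2x 4+1 layout spends one more disk on parity but keeps everything in the clar_r5_performance pool.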
No - in terms of setup_clariion there aren't any truly user_defined templates.
You have two choices to configure it freely:
1) Use NaviSphere - officially that's only an option if you happen to have an NS-20FC with an FC enablement license.
See Powerlink Home > Support > Product and Diagnostic Tools > Celerra Tools > NS20, NS20FC
then Step 5: Configure for Production
then Perform Common Post-CSA tasks
then Configure additional storage for your integrated system
then Fibre Channel (FC) enabled integrated configuration Celerra system (Navisphere Manager)
which should lead you to this: http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviManager.htm
2) Or, the easier way if you have DART 5.6.45, which includes the Storage Provisioning Wizard:
that way you can't make any mistakes, and you can see how much space you get in which pool before committing the configuration.
See attached.
I would recommend a DART upgrade and option 2)
> and an R1 setup on another?
No - you can set up an 8+1 R5 plus one or two 1+1 R1 in the same DAE.
1 Attachment
How_to_configure_storage_using_SPW.pdf
Rainer_EMC
September 29th, 2009 07:00
To make the LUNs visible to the Celerra it's just a server_devconfig - the hard part is creating them with the correct settings so that they are recognized.
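For reference, the scan step Rainer mentions looks something like this on the Control Station. The Data Mover name is illustrative, and the flags are from memory of the DART CLI - check the Celerra command reference before relying on them:

```shell
# Preview which new devices the Data Mover would pick up (no changes saved)
server_devconfig server_2 -probe -scsi -all

# Discover the new LUNs and save them into the Data Mover's device table
server_devconfig server_2 -create -scsi -all

# Confirm the new dvols now show up (as unused) on the Celerra side
nas_disk -list
```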
Victor2100
September 29th, 2009 09:00
Could you clarify?
jamiedoherty
September 29th, 2009 15:00
This would answer my question, but I just had a call with Pre-Sales that contradicts it. Is there any manual that might prove them wrong? They told me I needed to match the templates and order two DAEs. I would much prefer your 8+1 R5 and 1+1 R1 configuration on the same DAE, in a custom storage pool!
Thanks for your help - great reply.
Rainer_EMC
September 30th, 2009 02:00
> restrictions on the number of disks.
Thanks for pointing that out - that's definitely an error in the documentation.
The Celerra will only recognize supported RAID group sizes as listed on page 21.
I'll notify the documentation group to get it corrected.