Here is another Primus article that talks about binding disks across enclosures:
emc98039 - "Considerations when binding RAID groups across buses, disk array enclosures (DAEs), and disk processor enclosures (DPEs)"
It mentions the following: "There is absolutely no advantage to binding a RAID 1/0 group across more than two DAEs, but it certainly is not harmful in any way."
Aah... referring to Primus emc98033 - "What is the minimum and maximum number of disks that can be bound to RAID groups on CLARiiON arrays?", it does mention that you can create RAID 1/0 with 2 drives. Not sure how the data will be laid out on them in this case. Maybe they kept this so that you can create a RAID 1/0 with two drives and then expand later by adding more (even) drives; it stumps me otherwise.
Just some clarification on Binding RAID 1/0 across DAEs... This comes from the EMC CLARiiON Best Practices for Fibre Channel Storage whitepaper:
"Binding mirrored RAID groups across two busses {backend busses} increases availability to over 99.999 percent and keeps rebuild times lower"
It goes on to explain that to do this you need to use the CLI to create the RAID group so you can force the correct order of binding:
NaviCLI -h <SP-IP> createrg XX Primary0 Mirror0 Primary1 Mirror1, etc.
Where <SP-IP> is the IP of the CLARiiON SP, XX is the RAID group number, and Primary0, Mirror0, etc. represent the drives in BED format (Bus, Enclosure, Drive), like 0_1_0.
NaviCLI -h <SP-IP> createrg 10 0_1_0 1_1_0 0_1_1 1_1_1
This command would create a RAID group 10 with the Primary stripe across 0_1_0 and 0_1_1 and the Mirror stripe across 1_1_0 & 1_1_1
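The binding order logic can be sketched in a few lines of Python (a hypothetical helper for illustration, not an EMC tool):

```python
def createrg_args(rg_id, pairs):
    """Build the disk ordering for a mirrored RAID group bind.

    pairs: (primary, mirror) tuples in BED (Bus_Enclosure_Disk)
    format, with each mirror on the other back-end bus. NaviCLI
    binds disks in the order listed, so each primary is followed
    immediately by its mirror.
    """
    args = [str(rg_id)]
    for primary, mirror in pairs:
        args.extend([primary, mirror])
    return "createrg " + " ".join(args)

# Reproduces the argument order of the example command above:
print(createrg_args(10, [("0_1_0", "1_1_0"), ("0_1_1", "1_1_1")]))
# createrg 10 0_1_0 1_1_0 0_1_1 1_1_1
```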
I know if you have a 2-drive RAID 1/0 group, you will need 2 drives to expand. This is because the CLARiiON runs it as a RAID 1 until you bind the additional 2 drives; at that time it will do the stripe operations.
I have never tried to expand past 4 drives, but I think you would only have to add 2 more if you had a 4-drive config: 1 new drive for added capacity and 1 for its mirror.
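The pair-at-a-time expansion described above can be put into a quick arithmetic sketch (a hypothetical helper, assuming the even 2-16 drive limits for RAID 1/0 cited elsewhere in the thread):

```python
def drives_to_add(current, target):
    """How many drives must be added to grow a RAID 1/0 group
    from `current` drives to at least `target` drives.

    Drives go in as mirrored pairs (one for capacity, one for
    its mirror), and groups hold an even count from 2 to 16.
    """
    if current % 2 or not 2 <= current <= 16:
        raise ValueError("RAID 1/0 groups hold an even drive count, 2-16")
    if target <= current:
        return 0
    pairs = -(-(target - current) // 2)  # ceiling division: whole pairs only
    added = pairs * 2
    if current + added > 16:
        raise ValueError("RAID 1/0 groups max out at 16 drives")
    return added

print(drives_to_add(2, 4))  # 2: the group goes from mirror-only to striped mirrors
print(drives_to_add(4, 5))  # 2: one drive of capacity still needs its mirror
```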
So if later on you wanted to expand that RAID group, do you have to add 2 drives? And if you have a 4-drive RAID 1/0 group and need to expand it, do you add 4 drives?
On the question of RAID 1/0 with two drives, it's a little strange (We've been through this one with support to make sure we weren't missing anything).
When you create a RAID group with two drives you can bind a RAID 1 or RAID 1/0 LUN on it (assuming you have FLARE 19 I think). At that point there is no difference in performance or capacity of the LUNs or the RAID group.
The difference is that if you bind RAID 1 you can never expand the RAID group. If you bind the LUNs as RAID 1/0 you can add drives to the RAID group to expand it later. Once this option became available we made a change to ALWAYS bind RAID 1/0 even if we don't think we will expand. Why shoot yourself in the foot when there is no downside to the RAID 1/0?
I just found that ... and the reference to 10 drives I was remembering came from the Best Practices whitepaper as well, but it wasn't a hard limit. It was a recommendation to try to keep RAID groups no larger than 10 drives to minimize rebuild times and rotational latency (how long it takes to line up all the drives in a group for an optimal stripe read off the entire set).
Personally we don't have any databases that would ever require log file spaces that big, so it isn't an issue for us. Since the major advantage of RAID 1/0 for the log files is its performance for sequential writes, making a big RAID group and carving LUNs for too many different database systems defeats the sequentiality (OK, so maybe it isn't really a word, but it works) of the data flow.
Yes, anything that runs like a database gets RAID 1/0 for the transaction logs. Oracle, Sybase, Microsoft SQL, Exchange, and a custom app for weeding out spam before it gets into our system.
It actually makes a surprising difference in how much the DBAs whine about performance.
I keep all the versions of the Best Practices guides in a binder on my desk for quick reference. They have too much in them to absorb all at once so I go back and reread them every few months to get just that little bit more that I missed last time. And there are sometimes things that weren't relevant to our environment last time so I just skipped over them.
I highly recommend everyone managing CLARiiONs get this whitepaper and read it.
keep RAID groups no larger than 10 drives to minimize rebuild times and rotational latency
Yeah... I am no expert on maths and statistics, but having more drives in a RAID group theoretically means a higher probability of multiple drive failures within the same RAID group.
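That intuition can be put into a toy model (independent failures with a made-up per-drive probability; note that for RAID 1/0 actual data loss requires the failed drive's own mirror partner to die, so this only illustrates how exposure grows with group size):

```python
def any_second_failure(group_size, p_drive):
    """Probability that at least one of the surviving drives fails
    during the rebuild window, assuming independent failures with
    per-drive probability p_drive over that window.

    Toy model only -- it overstates the risk of real data loss in
    RAID 1/0, where only the mirror partner's failure is fatal.
    """
    survivors = group_size - 1
    return 1 - (1 - p_drive) ** survivors

# Bigger groups mean more simultaneous exposure during a rebuild:
for n in (4, 10, 16):
    print(n, any_second_failure(n, 0.001))
```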
the reference to 10 drives I was remembering came from the Best Practices whitepaper
The Best Practices whitepaper is one of the best consolidated references on this stuff. EMC puts substantial effort into building things and then documenting them at the end. My earlier posts on this thread mention a few Primus articles, and all of them were covered in Best Practices, in a jiffy!
Kiran3 · 410 Posts · March 28th, 2007 00:00
As far as ordering of disks in RAID 1/0 is concerned, there is a Primus article that describes how drives are allocated in a RAID 1/0 group: emc59462 - "Determining order in which disks are bound in RAID 1/0 group"
Allen Ward · 4 Operator · 2.1K Posts · March 28th, 2007 06:00
Although now I have to go look... I'm thinking there was a limit of 10 drives for a RAID 1/0 group, but I'm not certain.
Kiran3 · 410 Posts · March 28th, 2007 06:00
Referring to the Primus article(s) I mentioned earlier: RAID 1/0 (mirroring and striping) must consist of 2, 4, 6, 8, 10, 12, 14, or 16 disks.
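Since every disk in a RAID 1/0 group has a mirror, usable capacity is half the raw capacity. A quick sketch (a hypothetical helper, sizes in GB):

```python
def r10_usable_gb(disk_count, disk_gb):
    """Usable capacity of a RAID 1/0 group in GB.

    Every disk has a mirror, so usable space is half the raw
    capacity. Disk counts outside 2, 4, ..., 16 are rejected,
    per the limits quoted from the Primus article.
    """
    if disk_count % 2 or not 2 <= disk_count <= 16:
        raise ValueError("RAID 1/0 must use 2, 4, 6, ..., 16 disks")
    return (disk_count // 2) * disk_gb

print(r10_usable_gb(4, 146))  # 292: four 146 GB disks yield two mirrored pairs
```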
Kiran3 · 410 Posts · March 28th, 2007 06:00
I don't have any arrays left with spare disks to try.