We just purchased a CX4-120. My recommendation was to get a CX4-240, but due to budget constraints our management went with the CX4-120.
We are currently running production and development on a CX-700, with the intent to split the two environments and move production to the CX4-120. While tuning the CX4-120, I noticed that my write cache limit is about 500-600 MB, depending on how I tune the read cache. On the CX-700, my write cache limit is roughly 2999 MB (3 GB). I am wondering if it is wise to move our write-intensive production Oracle environment to the CX4-120 based on these numbers, or whether we should move development to the CX4-120 instead. I have looked on Powerlink and cannot find a comparison document for the allowable tunable cache values.
As long as the backend RAID groups can keep up with the IO you will be sending to them, the size of the write cache MAY not be a problem.
The more drives in a RAID group, the more performance that group usually has. When the cache reaches its watermarks and starts flushing, and there are no bottlenecks on the backend (which are almost always a RAID group sizing problem), the cache can flush efficiently and may never fill. If the groups are not sized to handle what is going through them (total read IO for all LUNs, plus write IO with any needed parity updates), you will most likely have issues: the write cache fills up and everything suffers.
What I suggest is lowering the watermarks from 60/80 to, say, 40/60. This will cause your cache to flush sooner, and if flushing begins at the start of a large burst of IO, the array may be able to keep up with it. Also, plan your backend as best you can. If you know how much IO you will be doing, use the Best Practices guide to size the RAID groups accordingly.
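To illustrate the watermark mechanics described above, here is a simplified sketch. The low/high watermark behavior follows the standard CLARiiON description (flushing ramps up as dirty pages cross each watermark), but the function name and percentages are mine, purely for illustration:

```python
def flush_mode(dirty_pct, low_wm=40, high_wm=60):
    """Decide how aggressively an SP flushes write cache (illustrative).

    Below the low watermark nothing needs flushing; between the
    watermarks the SP flushes steadily; above the high watermark it
    flushes aggressively; at 99% dirty pages the cache is full and
    incoming writes must wait (forced flushing).
    """
    if dirty_pct < low_wm:
        return "no flushing"
    elif dirty_pct < high_wm:
        return "watermark flushing"
    elif dirty_pct < 99:
        return "high-watermark flushing"
    else:
        return "forced flushing (cache full)"

# Lowering the watermarks from 60/80 to 40/60 makes flushing start
# sooner, leaving more headroom to absorb a burst of writes:
print(flush_mode(50, low_wm=60, high_wm=80))  # no flushing yet
print(flush_mode(50, low_wm=40, high_wm=60))  # already flushing
```

The point of the lower watermarks is the second case: at the same 50% dirty-page level, the array is already draining cache instead of waiting for the burst to push it toward full.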
Hate to say it, but sometimes you just need to let things run and see what happens. If you hit cache full (99% dirty pages on an SP), then most likely there is a RAID group that isn't sized to handle what is being sent to it.
Thank you for your prompt reply. One point I need clarified: you mention that the more disks in the RAID group, the more performance. I want to put my heavy-write mount points on RAID 1/0, and the recommendation is 8 disks per RAID 1/0 group. Should I go over that amount?
If 8 drives total, then you have only 4 primary drives and 4 secondary drives. If we say the group will do only writes (for the sake of an example), then the primary and secondary drives will be doing the same IO (making sure the data is saved on the primary and on its mirror). This means we effectively have only 4 drives of performance.
4 primary drives (assuming 10k RPM FC) will fetch you only about 560 IOPS total for the RAID group (in the Best Practices guide, multiply what one drive can do by the number of drives in the group, so 4 × 140 = 560). Bandwidth-wise for writes on 10k FC, it is 4 × 10 MB/s = 40 MB/s. With 15k RPM drives it is 4 × 180 = 720 IOPS and 4 × 12 = 48 MB/s. RAID 1/0 is completely different from other RAID types, so don't use this example for RAID 5.
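The arithmetic above can be captured in a small helper. This is just a sketch using the per-drive rule-of-thumb figures quoted in the post; the function name is my own, and the halving reflects that RAID 1/0 writes land on both a primary and its mirror:

```python
def raid10_write_capability(total_drives, iops_per_drive, mbs_per_drive):
    """Estimate RAID 1/0 write capability for a mirrored group.

    Every write hits a primary drive and its mirror, so only half
    the drives in the group contribute usable write performance.
    Returns (total IOPS, total MB/s) for the group.
    """
    effective = total_drives // 2
    return effective * iops_per_drive, effective * mbs_per_drive

# 8-drive (4+4) group of 10k RPM FC drives, per the figures above:
iops, mbs = raid10_write_capability(8, 140, 10)
print(iops, mbs)  # 560 IOPS, 40 MB/s

# The same group built from 15k RPM drives:
print(raid10_write_capability(8, 180, 12))  # (720, 48)
```

As the follow-up post notes, these are rough planning numbers: real results depend heavily on IO size and how sequential or random the workload is.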
These numbers of course vary depending on the size of the IOs and the locality of the data on the drives. You can push more IO with sequential data, or a lot less with very random data. There is no way to predict exactly what you will be doing. If this may not be enough, you will need to add more drives (maximum 16 per group). Performance is a very deep topic once you start digging into it.
Please pardon my lack of knowledge about this. This is the first time I've had to configure RAID groups on new storage; I pretty much inherited the CX-700 already configured. When I go to configure a RAID 1/0 group, I get eight disks by default, so my assumption is eight disks. I am not sure what an 8+8 is.
So according to your message, the default number of drives for a RAID 1/0 group was 8. This gives you 4 primary drives and 4 secondary drives (known as a 4+4: 4 primary, 4 secondary).
One thing to point out is that the order in which the drives appear in the create-RAID-group screen is the order in which they are assigned as primaries and secondaries. The first drive in the list is the first primary drive, and the second drive is the mirror of the first. Usually people make sure all primary drives are on one bus and all secondary drives are on a different bus (in your case I think you have only one bus, so you may want to put the secondary drives in a different enclosure if possible).
To make sure the primary drives are separated from the secondary drives, use the Manual button to choose the drives in the set: first choose the first primary drive, then its mirror, then the next primary drive, then that drive's mirror, and so on.
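Here is a quick sketch of how that manual selection order maps to mirror pairs, assuming the GUI really does pair drives in the order chosen (as the Primus article quoted later in the thread describes). The drive names and the function are hypothetical, purely for illustration:

```python
def mirror_pairs(selection_order):
    """Pair drives in selection order: 1st with 2nd, 3rd with 4th, ...

    Returns (primary, secondary) tuples, mirroring the described
    behavior of the create-RAID-group screen.
    """
    return list(zip(selection_order[0::2], selection_order[1::2]))

# Alternate enclosures (or buses, if you have more than one) as you
# pick, so each mirror's two halves land on different hardware.
# Names below are bus_enclosure_slot style, made up for the example:
order = ["0_0_5", "0_1_5", "0_0_6", "0_1_6", "0_0_7", "0_1_7", "0_0_8", "0_1_8"]
for primary, secondary in mirror_pairs(order):
    print(primary, "<->", secondary)
```

With that selection order, every primary sits in enclosure 0 and every mirror in enclosure 1, so losing one enclosure leaves each mirror pair with a surviving half.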
Ryan, when did the drive selection in the GUI for building RAID 1/0 groups begin determining the primary and secondary mirrors? All of the documentation I have read (I have not seen this in the FLARE 28 docs yet) states that to determine the primary and secondary mirrors for RAID 1/0 groups you have to use the CLI.
Happy, the Navisphere GUI will auto-select a 'best practice' number of unbound disks from the array when you start the RAID group creation process. This is usually annoying for me, as I have specific disks that I use in RAID groups, and this process requires me to first deselect all those disks and then select the disks I want for the group.
To get a nice graphical view of your enclosures and disks, generate a Navisphere report and select the Configuration > Available Storage report. This will display an enclosure-by-enclosure view of the disks in your array and will show which disks are free and which are in use (these reports also port nicely into Excel). As Ryan stated, it is a good idea to bind RAID 1/0 groups across buses for high availability. With the CX-700 you have four back-end buses, but unfortunately the CX4-120 has only one.
The CX-700 is a respectable array and a good performer. In comparison, even though the CX4-120 has less cache, it has newer processors, 64-bit operating code, and a 4 Gb back-end bus, so it has some speed advantages apart from the cache.
That said, I would lean toward running production on the CX-700, with the goal of upgrading it to a CX4-480 the next time maintenance is due on the array. The CX-700 has more cache and more back-end buses to distribute the load, and if you are using 15k FC drives and RAID 1/0 for production, you should get good performance from that array.
I am actually not sure when this was put into play, but if you take a look at Primus emc59462 it says this:
When creating a RAID 1/0 group, you can use Navisphere Manager or Navisphere CLI. The order in which you select the drives will determine the primary/secondary relationship. Even if the drive order is resorted by the GUI, the order they were added is still retained for creating the RAID Group.
The Primus article was written in 2002, and I don't even know how long ago Navisphere Manager 6 came out (before release 19).
Ryan, there was a contradictory note on that feature in one of the configuration guides, which stated that the GUI did not in fact create the primary and secondary mirrors as implied. This was pointed out to me by an EMC consultant. I will have to dig up the documentation that said you had to use the CLI to specify the mirror pairs; this was about a year or two ago, so it was relevant to release 24 and possibly 26, I can't remember. I am trying to find the EMC document that stated you had to use the CLI.
Is there a way to confirm that the GUI creates the mirrored pairs in the order of selection? After creating the group through the GUI, can we confirm the pairs through the CLI?
EMC Best Practices Guide, FLARE 26, page 41, Mirrored Groups: "When creating the RAID group use Navisphere CLI to bind across buses...."