The CX4 range replaced the CX3, and looking at the available cache levels for comparably positioned arrays, we increased the usable write cache, e.g. CX3-10 to CX4-120, CX3-20 to CX4-240, and so on. With the CX4 series we now treat the available write cache on a CX4 as the maximum amount of cache on that array (not the total memory), and then take some allocation from it for read cache. This differs from the CX3 and previous generations, as we have greater write cache availability and increased supported levels on the CX4; some "older" models had more total cache but were restricted in how much of it could be used for write cache.
ksnell
38 Posts
0
December 19th, 2008 08:00
With the CX4, several factors contributed to the increased memory required by the environment. These include the move to a 64-bit OS, the increased drive count supported by each array model, the increased replication counts supported by the array, and the greater granularity of some of the replication products, which results in significant improvements to re-sync operations for SnapView clones and MirrorView LUNs.
From the performance testing across the CX4 range, the scaling data we have observed for random operations indicates that the available cache levels are sufficient to provide a good balance of performance and predictability in line with the disk architectures used. We also have tuning options for environments that may benefit from them: cache page size, watermark settings, global allocation of cache, enabling read/write cache per LUN, and fixed or variable pre-fetch per LUN. In the majority of deployments these adjustments are not necessary and the default settings are suitable.
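As a rough illustration of what the watermark settings control: the array starts forced flushing of dirty write-cache pages to disk once the dirty level crosses the high watermark, and stops once it falls back to the low watermark. This is a toy model with hypothetical numbers, not FLARE code:

```python
def flush_decision(dirty_pct, flushing, high_wm=80, low_wm=60):
    """Toy watermark hysteresis for write-cache flushing.

    Start forced flushing when the dirty percentage reaches high_wm,
    stop once it drops to low_wm; in between, keep the current state.
    The 80/60 defaults here are illustrative, not official values.
    """
    if dirty_pct >= high_wm:
        return True
    if dirty_pct <= low_wm:
        return False
    return flushing  # between the watermarks: no change of state

# Walk a dirty-cache level up and back down again.
state = False
for dirty in (50, 70, 85, 75, 65, 55):
    state = flush_decision(dirty, state)
    print(f"dirty={dirty}% flushing={state}")
```

Raising the low watermark shortens each flush burst; lowering the high watermark starts flushing earlier, which is the kind of trade-off the tunables above expose.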
~Keith
RRR
4 Operator
•
5.7K Posts
1
December 5th, 2008 04:00
3GB per SP
2GB for the SP itself (operating system)
1GB left for Read and Write cache
But because of HA (high availability), write cache is mirrored to the other SP as well. So suppose you have 200MB of read cache: the remaining 800MB is 400MB for SPA and 400MB for SPB, so in total you effectively have only 200 + 400 (because the other 400MB is the mirrored write cache of the peer SP).
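The breakdown above can be written out as a small calculation. This is a hypothetical sketch using the per-SP figures quoted in this thread (treating 1GB as 1000MB, as the post does), not numbers from an official spec:

```python
def effective_cache(total_per_sp_mb, os_reserved_mb, read_cache_mb):
    """Return (read_mb, usable_write_mb) for one SP, in MB.

    Write cache is mirrored to the peer SP for HA, so only half of the
    memory set aside for write cache holds this SP's own dirty data.
    """
    user_pool = total_per_sp_mb - os_reserved_mb   # left over for R + W cache
    write_pool = user_pool - read_cache_mb         # remainder goes to write cache
    usable_write = write_pool // 2                 # other half mirrors the peer SP
    return read_cache_mb, usable_write

# 3GB per SP, 2GB reserved for the OS, 200MB read cache:
read_mb, write_mb = effective_cache(3000, 2000, 200)
print(read_mb, write_mb)  # 200 400
```

So of the 1000MB left after the OS reservation, only 200MB of read cache plus 400MB of write cache is effectively available to hosts, matching the arithmetic above.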
RRR
4 Operator
•
5.7K Posts
0
December 16th, 2008 12:00
Do I need to explain it in another way? I think it's pretty clear. If you think I answered your question satisfactorily, please mark the question as answered and reward the correct answer with some points.
power3
31 Posts
0
December 16th, 2008 17:00
Why does the SP need 2GB of memory? Is that for the 64-bit FLARE, or for some other purpose?
AranH1
2.2K Posts
0
December 17th, 2008 08:00
The CX4-240 only has 1000MB of write cache? That is a lot lower than I expected. The CX700 I just looked at, which is also a 240-drive array, has 3968MB of total cache. Of that, 884MB is used by the SP, 450MB for read cache, and 2634MB for write cache.
AranH1
2.2K Posts
0
December 17th, 2008 14:00
I know the CX700, being the top of the line among the second-generation CX models, is supposed to be compared with the top of the line of successive generations, but I would expect the 240-drive array to be at least its equal.
RRR
4 Operator
•
5.7K Posts
0
December 17th, 2008 14:00
A CX4-120 I checked has 100MB Read and 498MB Write cache.
power3
31 Posts
0
December 17th, 2008 17:00
That is what I wonder about. Even though the total cache of the new CX4 models has been increased over the previous generation, the usable write cache is lower. Can I say that the improved CPUs and the 64-bit FLARE have more effect on performance than cache size does?
RRR
4 Operator
•
5.7K Posts
0
December 18th, 2008 03:00
You can play with the settings, of course: simply add up the read and write cache and divide the total as you please. But yes, it's not a lot to play with, indeed.
If you want 11GB of write cache, you'll need to buy the CX4-960.
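Re-dividing the pool is exactly that trade-off: every MB given to read cache comes out of the (mirrored) write cache. A hypothetical sketch, again using the per-SP figures quoted in this thread rather than official numbers:

```python
USER_POOL_MB = 1000  # per-SP pool left for read + write cache, as quoted above

def usable_write(read_mb, pool_mb=USER_POOL_MB):
    # Write cache is mirrored to the peer SP, so only half of what
    # remains after read cache is usable for this SP's own writes.
    return (pool_mb - read_mb) // 2

for read in (0, 100, 200, 500):
    print(f"read={read}MB -> usable write={usable_write(read)}MB")
# e.g. read=200 leaves (1000 - 200) // 2 = 400MB of usable write cache
```

Shrinking read cache to zero still only frees up another 100MB of usable write cache, which is why a bigger pool means moving up the model range.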
power3
31 Posts
0
December 18th, 2008 19:00