410 Posts

May 22nd, 2007 02:00

The CX3 can be managed by ECC, but its native Navisphere can also be used...

For performance monitoring, Navisphere provides the Navisphere Analyzer component, which can help you track various performance metrics on the CX3.

You may want to look at the EMC white papers on CX3 and Oracle integration. I have seen Exchange and SQL white papers, so Oracle should also be covered.
Apart from this, there is a best-practices white paper that lists guidelines for configuring the CX3 for optimum benefit.

The major difference you will see with the change is performance. Symmetrix follows an active-active architecture, while CLARiiON is active-passive, so there are different considerations for zoning, performance tuning, etc. in each case.
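To make the architectural difference concrete, here is a rough sketch (the function and tuple layout are illustrative, not a real multipath API): on an active-active array every zoned path can carry I/O for a LUN, while on an active-passive array only paths to the LUN's current owning SP are usable, which is why zoning and path planning differ.

```python
# Hypothetical illustration of active-active vs active-passive path use.
# Names and data shapes are made up for the example.

def usable_paths(paths, owner_sp, architecture):
    """Return the paths a host can send I/O down for one LUN.

    paths        -- list of (sp, port) tuples visible after zoning
    owner_sp     -- 'SPA' or 'SPB' (ignored for active-active)
    architecture -- 'active-active' (Symmetrix-style) or
                    'active-passive' (CLARiiON-style)
    """
    if architecture == "active-active":
        return paths  # every path can serve I/O; load-balance freely
    # active-passive: only the owning SP's paths are usable until a trespass
    return [p for p in paths if p[0] == owner_sp]

paths = [("SPA", 0), ("SPA", 1), ("SPB", 0), ("SPB", 1)]
print(usable_paths(paths, "SPA", "active-active"))   # all four paths
print(usable_paths(paths, "SPA", "active-passive"))  # only the SPA paths
```

The practical upshot is that on the CLARiiON you still zone hosts to both SPs (for failover via trespass), but only half the zoned paths carry a given LUN's I/O at any one time.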

50 Posts

May 22nd, 2007 13:00

I guess my biggest concern is that people often distort reality in white papers - you never see one that says "this was terrible". I don't see how I can manage the performance issues. Even if I know what's busy and what's not from Performance Manager, I have no tools to easily move things, let alone automate the process.

2.2K Posts

May 22nd, 2007 13:00

I would expect a performance impact moving from the DMX to the CLARiiON line of arrays. If that is not acceptable, then why the move to CLARiiON?

One item that will help with the CLARiiON in FLARE 24 is Navisphere QoS. You will be able to allocate minimum performance levels for specific systems. Keep in mind, though, that you are slicing up a smaller pie than is available on a DMX.

About the CX3 series, though: I have one CX3-80 and three CX700s. Both are the top of their respective lines from the second- and third-generation CLARiiONs, and I have been impressed with how much more horsepower the CX3-80 has over the CX700. I have 28 DAEs (420 drives) on the CX3-80 with some heavy-duty SQL reporting clusters hammering the array with I/O. It holds up quite well, and I have not been able to push the system to its limits.

50 Posts

June 6th, 2007 07:00

I got about what I expected. The kind of thing I was hoping to get is a real explanation of five 9's (not counting scheduled downtime) - what does that mean exactly? What happens when component X fails? I'll keep digging.

2.2K Posts

June 6th, 2007 07:00

Bruce,
Below is a list of the HA features on the CLARiiON arrays from the CLARiiON Pocket Reference Guide. The CX3 series has a good HA reputation in my opinion: we have over five 9's on ours, and it has been in service for a year now. About the same on the CX700s as well. The one fault I had in the past year that could have created downtime was the failure of an LCC in a DAE, which caused a few drives in that enclosure to go offline. If I had not implemented RAID10 sets with the primary disks on one bus and enclosure and the secondary disks on another bus and enclosure, the host would have lost access to that LUN for a period of time.

Considering the list below, if a major component fails, the worst you will typically see is write caching disabled, which drops the performance of the array significantly until the component is replaced.


- All components are dual-redundant and hot-swappable (no single point of failure).
- Write cache is protected by a 'vault' area on disk. On a failure the contents are written to disk (de-staged or dumped); when the failure is corrected, the contents are written to the back-end disks and write cache is re-enabled.
- The de-stage process is supported by batteries during power failures. Write cache will not be re-enabled until the batteries are sufficiently recharged to support another cache de-stage.
- The following conditions must be met for write cache to be enabled:
  - A standby power supply must be present and fully charged.
  - At least 4 vault drives must be present (all 5 if the 'Non-HA' option is not selected); they cannot be faulted or rebuilding.
  - The ability to keep write cache enabled when a single vault drive fails is optional under R12 and later.
  - Both storage processors must be present and functional.
  - Both power supplies must be present in the DPE/SPE.
  - Both fan packs must be present in the DPE/SPE.
  - The DPE/SPE and all DAEs must each have two non-faulted link control cards (LCCs).
- Each data block on a CLARiiON contains 8 bytes of error-checking data, consisting of LRC, shedstamp, writestamp, and timestamp.
- SNiiFER runs in the background and continuously checks all data blocks for errors.
- Updates to the array software are non-disruptive from a host perspective, provided the host is configured for HA.
- Failure of an SP results in all LUNs owned by that SP being trespassed to the other SP (assuming PowerPath is running on the host(s) accessing those LUNs).
- Striping a RAID1 RG across multiple DAEs that include the first DAE (the one containing the vault drives) is not recommended.
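The per-block error-checking data above corresponds to the CLARiiON formatting drives at 520 bytes per sector: 512 bytes of user data plus the 8 bytes of checking data (LRC and stamps). A rough sketch of the arithmetic (the sector sizes match the list above; the drive-capacity figure is illustrative only, and real bind sizes also subtract vault and FLARE space):

```python
# Per-sector layout on a CLARiiON-formatted drive: 512 data bytes plus
# the 8 bytes of error-checking data (LRC, shedstamp, writestamp,
# timestamp) described in the HA feature list.
DATA_BYTES = 512
CHECK_BYTES = 8
SECTOR_BYTES = DATA_BYTES + CHECK_BYTES  # 520-byte sectors

overhead = CHECK_BYTES / SECTOR_BYTES
print(f"Per-sector checking overhead: {overhead:.2%}")

# Illustrative only: data-bearing fraction of a nominal 146 GB drive,
# before vault/FLARE reservations are subtracted.
raw_gb = 146
print(f"~{raw_gb * DATA_BYTES / SECTOR_BYTES:.1f} GB available for data")
```

In other words, a bit over 1.5% of every sector goes to integrity metadata, which is part of why usable capacity is always noticeably below the raw drive size.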