50 Posts
May 21st, 2007 12:00
replace a DMX with CX3
I'm still light on details, but we are considering replacing a DMX-3000 (2 series) with some number of CX3s; configuration and model are TBD. These would be attached to Red Hat Linux servers running an Oracle 10g RAC cluster. The idea is to dedicate a set of disks to an Oracle instance. In some cases a whole array may be dedicated to a single Oracle instance.
My question is: how do I manage this? Do I still use ECC, as I would on the Symms, for performance numbers, configuration, and reporting? I really need to be able to see the whole SAN somehow. It also appears there is no equivalent to Symmetrix Optimizer, so would I look to virtualization like Invista to manage hot spots? The whole thing sounds like a lot of manual configuration on a regular basis.
Anyone doing anything like this?
Kiran3
410 Posts
May 22nd, 2007 02:00
For performance monitoring, Navisphere provides the Navisphere Analyzer component, which can help you track various parameters on the CX3.
You may want to look at the EMC white papers on CX3 and Oracle integration. I have seen Exchange and SQL Server white papers, so Oracle should be covered as well.
Apart from this, there is a best-practices white paper that lists guidelines for configuring the CX3 for optimum benefit.
The major difference you will see with the change is performance. Symmetrix follows an active-active architecture while CLARiiON is active-passive, so there are different considerations for zoning, performance tuning, etc. in each case.
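The active-passive ownership model described above can be sketched in a few lines of Python. Everything here (the `Lun` class, the SP labels, the `fail_sp` helper) is illustrative only, not an EMC API; it just models the idea that each LUN has one owning SP at a time and is trespassed to the peer on failure:

```python
# Hypothetical sketch of active-passive LUN ownership, not EMC software.

class Lun:
    def __init__(self, name, default_owner):
        self.name = name
        self.default_owner = default_owner   # SP that normally serves this LUN's I/O
        self.current_owner = default_owner

    def trespass(self, surviving_sp):
        """Move ownership to the peer SP, as happens on an SP failure."""
        self.current_owner = surviving_sp

def fail_sp(luns, failed_sp, surviving_sp):
    """On an SP failure, every LUN owned by the failed SP is trespassed."""
    for lun in luns:
        if lun.current_owner == failed_sp:
            lun.trespass(surviving_sp)

# Balanced default ownership: alternate LUNs between SP A and SP B.
luns = [Lun(f"LUN{i}", "SPA" if i % 2 == 0 else "SPB") for i in range(4)]

fail_sp(luns, "SPA", "SPB")
print([lun.current_owner for lun in luns])  # all "SPB" after SP A fails
```

On an active-active Symmetrix, by contrast, any director can service I/O to any device, so there is no ownership to balance or trespass; that is why zoning and path layout are planned differently for the two architectures.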
kirkb00
50 Posts
May 22nd, 2007 13:00
AranH1
2.2K Posts
May 22nd, 2007 13:00
One item that will help with the CLARiiON in FLARE 24 is Navisphere QoS: you will be able to allocate minimum performance levels to specific systems. Keep in mind, though, that you are slicing up a smaller pie than what is available on a DMX.
About the CX3 series, though: I have one CX3-80 and three CX700s. Both are the top of their respective lines from the second- and third-generation CLARiiONs, and I have been impressed with how much more horsepower the CX3-80 has over the CX700. I have 28 DAEs (420 drives) on the CX3-80, with some heavy-duty SQL reporting clusters hammering the array with I/O, and it holds up quite well; I have not been able to push the system to its limits.
kirkb00
50 Posts
June 6th, 2007 07:00
AranH1
2.2K Posts
June 6th, 2007 07:00
Below is a list of the HA features on the CLARiiON arrays, taken from the CLARiiON Pocket Reference Guide. The CX3 series has a good HA reputation in my opinion; we have over five nines on ours, and it has been in service for a year now. About the same on the CX700s as well. The one fault I had in the past year that could have created downtime was the failure of an LCC in a DAE, which caused a few drives in that enclosure to go offline. If I had not implemented RAID10 sets with the primary disks on one bus and enclosure and the secondary disks on another bus and enclosure, the host would have lost access to that LUN for a period of time.
Considering the list below, if a major component fails, the worst you will typically see is write caching being disabled, which drops the performance of the array significantly until the component is replaced.
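The RAID10 placement described above (primary mirror members on one bus/enclosure, secondaries on another) can be sketched roughly as follows. The helper functions and the enclosure names (`"0_1"`, `"1_1"`) are hypothetical, not output from any EMC tool; the point is only that a mirror survives an enclosure loss when its two members live in different enclosures:

```python
# Illustrative sketch of cross-enclosure RAID10 placement, not a real tool.
# A disk is modeled as (slot, enclosure).

def raid10_pairs(primaries, secondaries):
    """Pair primary and secondary disks into mirrors, one from each list."""
    assert len(primaries) == len(secondaries)
    return list(zip(primaries, secondaries))

def survives_enclosure_loss(pairs, lost_enclosure):
    """True if every mirror keeps at least one member outside the lost enclosure."""
    return all(
        p[1] != lost_enclosure or s[1] != lost_enclosure
        for (p, s) in pairs
    )

# Primaries on bus 0 enclosure "0_1", secondaries on bus 1 enclosure "1_1".
primaries = [(slot, "0_1") for slot in range(4)]
secondaries = [(slot, "1_1") for slot in range(4)]
pairs = raid10_pairs(primaries, secondaries)

print(survives_enclosure_loss(pairs, "0_1"))  # True: secondaries still online
```

Had both halves of each mirror been placed in the same DAE, the same LCC failure would have taken the LUN offline, which is exactly the scenario the bus/enclosure split avoids.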
All components are dual-redundant and hot swappable (no single point of failure).
Write cache is protected by a 'vault' area on disk. On a failure the contents are written to disks (de-staged or dumped). When the failure is corrected the contents are written to the back-end disks and write cache is re-enabled.
The de-stage process is supported by batteries during power failures. Write cache will not be re-enabled until the batteries are sufficiently recharged to support another cache de-stage.
The following conditions must be met for write-cache to be enabled:
- There must be a standby power supply present, and it must be fully charged.
- At least 4 vault drives must be present (all 5 if 'Non-HA' option is not selected); they cannot be faulted or rebuilding.
- The ability to keep write cache enabled when a single vault drive fails is optional under R12 and later.
- Both storage processors must be present and functional.
- Both power supplies must be present in the DPE/SPE.
- Both fan packs must be present in the DPE/SPE.
- The DPE/SPE and all DAEs must have two non-faulted link control cards (LCC) each.
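As a rough illustration, the conditions above can be encoded as a single check. The dictionary keys and the helper function are hypothetical (this is not how FLARE or Navisphere represents array state); it simply restates the list as one boolean expression:

```python
# Hypothetical encoding of the write-cache enable conditions; not EMC software.

def write_cache_allowed(array):
    """Return True only if every listed condition for enabling write cache holds."""
    # 4 healthy vault drives suffice only when the 'Non-HA' option is selected.
    vault_needed = 4 if array["non_ha_vault_option"] else 5
    return (
        array["sps_charged"]                      # standby power present and fully charged
        and array["healthy_vault_drives"] >= vault_needed
        and array["sps_functional"] == 2          # both storage processors
        and array["psus_present"] == 2            # both DPE/SPE power supplies
        and array["fan_packs_present"] == 2       # both DPE/SPE fan packs
        and array["faulted_lccs"] == 0            # every LCC non-faulted
    )

array = {
    "sps_charged": True,
    "healthy_vault_drives": 5,
    "non_ha_vault_option": False,
    "sps_functional": 2,
    "psus_present": 2,
    "fan_packs_present": 2,
    "faulted_lccs": 0,
}
print(write_cache_allowed(array))  # True

array["healthy_vault_drives"] = 3   # two vault drives faulted/rebuilding
print(write_cache_allowed(array))  # False
```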
Each data block on a CLARiiON contains 8 bytes of error-checking data, consisting of an LRC, shed stamp, write stamp, and time stamp.
SNiiFER runs in the background and continuously checks all data blocks for errors.
Updates to the array SW are non-disruptive from a host perspective given that the host is configured for HA.
Failure of an SP results in all LUNs owned by that SP being trespassed to the other SP (assuming PowerPath is running on the host(s) accessing those LUNs).
Striping a RAID1 RG across multiple DAEs that include the first DAE (the one containing the vault drives) is not recommended.