AranH1
4 Ruthenium

Re: cx-700 vs cx4-120 write cache and Oracle

Ryan,
I found what I was looking for. In all the CLARiiON Best Practices for Performance and Availability guides (even the one for FLARE 28), the section on binding across back-end buses says to use the CLI to designate the primary and mirror pair for mirrored groups (RAID 1 and 1/0). So it wasn't quite what I stated: I read that as a requirement only when binding across buses. If we are not binding across buses, can we use the GUI?

Aran
RyanP2
3 Argentum

You can use the GUI for all disk selections. It won't restrict you from selecting disks across buses.

-Ryan
AranH1
4 Ruthenium

So that section in the best practices guide is not accurate? The way it states to use the CLI for creating mirrored RAID groups is pretty misleading, then.
kelleg
5 Rhenium

I think they highlighted the CLI method because it is easier to explain. The GUI method has worked at least as far back as release 12, as I recall, but I always used the CLI method because I could then be sure I had the pairing correct - the GUI gives no feedback on whether it is correct or not.

glen
AranH1
4 Ruthenium

Thanks Glen, I was misinformed then on that process.

Is there a way to view the properties of a RAID group through the CLI or the GUI to confirm the pairing of the mirrors?
RyanP2
3 Argentum

Looking at the getrg CLI output for a RAID group, it re-organizes the list so that all the primary drives come first, and the second half of the list is the mirror (secondary) drives. If you have a 4-drive 1/0, the first two are 1P 2P, then 1S 2S (first primary and second primary, then first mirror and second mirror).
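To make that ordering concrete, here is a small sketch that pairs primaries with their mirrors under the convention Ryan describes (primaries in the first half of the list, mirrors in the same order in the second half). The Bus_Enclosure_Slot disk IDs below are hypothetical examples, not output from a real array:

```python
# Pair primaries with mirrors from a getrg-style drive list, assuming
# (per Ryan's description) the list shows all primaries first, then the
# mirrors in the same order. Disk IDs are made-up Bus_Enclosure_Slot names.
def mirror_pairs(disks):
    half = len(disks) // 2
    primaries, mirrors = disks[:half], disks[half:]
    return list(zip(primaries, mirrors))

# A 4-drive RAID 1/0 listed as 1P 2P 1S 2S:
pairs = mirror_pairs(["0_0_4", "0_0_5", "1_0_4", "1_0_5"])
# pairs -> [("0_0_4", "1_0_4"), ("0_0_5", "1_0_5")]
```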

-Ryan
AranH1
4 Ruthenium

Thanks Ryan, good stuff. I did not know that.
happyadm
3 Silver

One thing that I forgot to mention is that we are using 2Gb switches. Am I correct to assume that, even with the faster 4Gb bus and disks, I will not be able to take full advantage of that because of our switches?
RRR
6 Indium

Data transfers from host to switch to array run at 2Gb max, but internally the CLARiiON works at 4Gb, so moving data between cache and disks runs at 4Gb!
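As rough arithmetic for what those link speeds mean in throughput, a common Fibre Channel rule of thumb (from the 8b/10b line encoding) is that 1Gb of link rate carries roughly 100 MB/s of payload. A minimal sketch, using that approximation:

```python
# Approximate FC payload rate from nominal link speed, using the usual
# 8b/10b rule of thumb: ~100 MB/s of data per 1Gb of link rate.
def fc_payload_mb_s(link_gb):
    return link_gb * 100  # MB/s, approximate

front_end = fc_payload_mb_s(2)  # 2Gb host/switch links -> ~200 MB/s per link
back_end = fc_payload_mb_s(4)   # 4Gb internal bus      -> ~400 MB/s per bus
```

So the 2Gb switches cap each host link at roughly half what the 4Gb back end can move between cache and disk.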
kelleg
5 Rhenium

One extra point: when you first select to create a new RAID group, the array will put up a default number of disks - you can change this if you need more or fewer disks. On R10, you need to add disks in twos - one for the primary and one for the mirror. So you could take the default, 8 disks, which will give you a 4+4 R10 - as Ryan notes.

If you need more disks to handle more IO, you can manually select the disks when you create the RAID group.
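Glen's "disks in twos" rule is easy to sanity-check before creating the group. A minimal sketch of that check (the 4-disk minimum for a useful R10 is an assumption on my part, not from the guide):

```python
# Sanity-check a manual R10 disk selection per Glen's rule: disks are
# added in twos (one primary + one mirror), so the count must be even.
# The minimum of 4 disks for a striped mirror is an assumption here.
def valid_r10_selection(disks):
    return len(disks) >= 4 and len(disks) % 2 == 0

valid_r10_selection(["0_0_4", "0_0_5", "1_0_4", "1_0_5"])  # True: a 2+2 R10
valid_r10_selection(["0_0_4", "0_0_5", "1_0_4"])           # False: odd count
```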

You really need to determine what the current workload on the CX700 is for the applications - the bandwidth, IO/s, IO size, etc. If you have Navisphere Analyzer licensed on the CX700, you should run it for a week or so to gather Archives and see where and what the workload is on the CX700. Start Analyzer with an Archive Interval of 600 seconds - this will give you one archive for each 24-26 hours - one complete day. If you see times when the workload is very busy, you can change the Archive Interval to a shorter time - say 120 seconds - which would create an archive containing about 5 hours of data, with more detail. Check the On-Line Help for a more complete description of using Analyzer.
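The interval-vs-coverage tradeoff Glen describes can be sketched with simple arithmetic. Back-solving from his two figures (600s -> roughly a day, 120s -> ~5 hours) suggests each archive holds a roughly fixed number of samples, around 150 - that sample count is my inference, not a documented Analyzer constant:

```python
# Interval vs. archive coverage, assuming each Analyzer archive holds a
# roughly fixed number of samples (~150, back-solved from Glen's figures:
# 600s interval -> ~25h of data, 120s -> ~5h). The constant is an assumption.
SAMPLES_PER_ARCHIVE = 150

def archive_hours(interval_seconds):
    return interval_seconds * SAMPLES_PER_ARCHIVE / 3600.0

archive_hours(600)  # ~25 hours: one complete day per archive
archive_hours(120)  # ~5 hours: shorter window, finer detail
```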

glen