1thumb
2 Iron

CX4-480 R1_0 4+4 Performance


Array = CX4-480 with Flare 6.28.21.0.39

Application = Mimosa NearPoint (Exchange archive application)

DRIVES =

24 450GB 15K FC drives, carved into 4 RAID 1/0 RAID groups of 3+3.

56 600GB 10K FC drives, carved into 7 RAID 1/0 RAID groups of 4+4.

58 1TB SATA drives, carved into 11 RAID 5 RAID groups of 4+1.

For purposes of testing the I/O capabilities, we used the 450GB drives and built the LUNs and MetaLUNs on them. I used the following scripts to build the RAID groups, carve the LUNs, and expand them into MetaLUNs:

naviseccli -h  {ip address of array} createrg 23 2_2_0 3_2_0 2_2_1 3_2_1 2_2_2 3_2_2  -rm no
naviseccli -h  {ip address of array} createrg 24 2_2_3 3_2_3 2_2_4 3_2_4 2_2_5 3_2_5 -rm no
naviseccli -h  {ip address of array} createrg 25 2_2_6 3_2_6 2_2_7 3_2_7 2_2_8 3_2_8 -rm no
naviseccli -h  {ip address of array} createrg 26 2_2_9 3_2_9 2_2_10 3_2_10 2_2_11 3_2_11 -rm no


naviseccli -address  {ip address of array} bind r1_0 100 -rg 23 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 101 -rg 24 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 102 -rg 25 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 103 -rg 26 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 104 -rg 23 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 105 -rg 24 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 106 -rg 25 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 107 -rg 26 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 108 -rg 23 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 109 -rg 24 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 110 -rg 25 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 111 -rg 26 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 112 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 113 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 114 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 115 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 116 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 117 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 118 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 119 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 120 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 121 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 122 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 123 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 124 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 125 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 126 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 127 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb


naviseccli -h  {ip address of array} metalun -expand -base 100 -lus 101 102 103 -name LTCFISWSQLNP01_MetaLUN_1700GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 104 -lus 105 106 107 -name LTCFISWSQLNP01_MetaLUN_1300GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 108 -lus 109 110 111 -name LTCFISWSQLNP02_MetaLUN_1300GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 112 -lus 113 114 115 -name LTCFISWSQLNP01_MetaLUN_100GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 116 -lus 117 118 119 -name LTCFISWSQLNP01_MetaLUN_100GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 120 -lus 121 122 123 -name LTCFISWSQLNP02_MetaLUN_100GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 124 -lus 125 126 127 -name LTCFISWSQLNP02_MetaLUN_100GB -defaultowner A -type S -sq mb -o

We are using the Iometer tool to test the I/O capabilities. While I would expect to get 1,920 IOPS from the MetaLUNs (EMC said to use 160 IOPS per drive for calculations), during testing we only get about 1,400.
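For reference, a back-of-the-envelope version of that calculation — a sketch only, assuming 160 IOPS per 15K drive (the EMC rule of thumb), a RAID 1/0 write penalty of 2, and the 30/70 read/write mix mentioned later in this thread:

```python
# Back-of-the-envelope host IOPS for one MetaLUN striped across
# 4 RAID 1/0 (3+3) groups of 15K FC drives.
# Assumptions: 160 IOPS per drive (EMC rule of thumb), RAID 1/0 write
# penalty of 2, 30/70 read/write mix, 100% random small-block I/O.

drives = 4 * 6                 # 4 RAID groups x (3+3) drives each
iops_per_drive = 160
read_frac, write_frac = 0.30, 0.70
write_penalty = 2              # each host write costs 2 disk writes on RAID 1/0

backend_iops = drives * iops_per_drive                        # 3840 disk IOPS
host_iops = backend_iops / (read_frac + write_frac * write_penalty)
print(round(host_iops))        # ~2259 host IOPS at disk saturation
```

The 1,920 figure corresponds to 12 spindles × 160; counting all 24 spindles and applying the write penalty suggests the disks could sustain roughly 2,260 host IOPS at this mix, so an observed 1,400 would mean the spindles are not yet saturated.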

Hosts are HP DL580 G7s with 32 cores and dual Emulex HBAs, and an HP DL380 G7 with 16 cores and dual Emulex HBAs. HEAT reports have been run and come back clean.

The NearPoint application has a write size of 64K, which should match the CLARiiON stripe element size for the RAID groups. Can anyone think of anything I am overlooking or have misconfigured? Any help would be appreciated, as it may keep the client's crosshairs focused elsewhere.

Regards

Shannon


Accepted Solutions
SKT2
4 Beryllium

Re: CX4-480 R1_0 4+4 Performance


Considering the 30/70 read/write ratio, did switching to RAID 1/0 help compared to RAID 5 in further tests?
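For anyone comparing the two layouts: RAID 5 pays roughly 4 disk I/Os per host write (read old data, read old parity, write data, write parity) versus 2 for RAID 1/0, so at 70% writes the difference is large. A minimal sketch, assuming the same spindle count and the 160 IOPS/drive rule of thumb from this thread:

```python
# Compare achievable host IOPS for the same spindles under RAID 1/0 vs RAID 5
# at a 30/70 read/write mix. Spindle count and per-drive IOPS are illustrative.
def host_iops(spindles, iops_per_drive, read_frac, write_penalty):
    backend = spindles * iops_per_drive
    return backend / (read_frac + (1 - read_frac) * write_penalty)

r10 = host_iops(24, 160, 0.30, 2)   # RAID 1/0: write penalty 2
r5  = host_iops(24, 160, 0.30, 4)   # RAID 5:   write penalty 4
print(round(r10), round(r5))
```

On the same 24 spindles at this mix, RAID 1/0 sustains roughly 80% more host IOPS than RAID 5 in this model.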

21 Replies
dynamox
7 Thorium

Re: CX4-480 R1_0 4+4 Performance


What's the read-to-write ratio?

dynamox
7 Thorium

Re: CX4-480 R1_0 4+4 Performance


Also, is that 1,400 IOPS when you are running Iometer against one LUN, or against all of them at once?

1thumb
2 Iron

Re: CX4-480 R1_0 4+4 Performance


30% read, 70% write, 100% random, 64K I/O size.

The database LUNs are the ones suffering: LUNs 100, 105, and 110.
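As a quick sanity check on the bandwidth side, 1,400 IOPS at a 64K I/O size works out to:

```python
# Convert the observed IOPS at a 64K I/O size into throughput.
iops, io_kb = 1400, 64
mb_per_s = iops * io_kb / 1024          # KiB/s -> MiB/s
print(round(mb_per_s, 1))               # ~87.5 MB/s across the MetaLUNs
```

That is well below typical FC link bandwidth, so raw throughput is unlikely to be the limit here.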

dynamox
7 Thorium

Re: CX4-480 R1_0 4+4 Performance


What do you get when you run Iometer against LUN 100 only? Does the number of IOPS go up as you add more workers?

1thumb
2 Iron

Re: CX4-480 R1_0 4+4 Performance


Yes, as more threads were added, the I/O went up. EMC support thinks the host is not pushing enough I/O during the test. We are running tests now to see whether we can push enough I/O from the host.

kelleg
5 Rhenium

Re: CX4-480 R1_0 4+4 Performance


How many workers do you have configured? Start with 2. What about the "# of Outstanding I/Os" setting? Try 2 to begin with. Also, try 4KB as the I/O size and see what you get. Remember, the disk values in the White Paper are based on small-block I/O (less than 32KB); the smaller the I/O size, the more IOPS you get.
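The reasoning behind raising workers and outstanding I/Os is Little's Law: achievable IOPS is roughly the number of I/Os in flight divided by the average service time, so with too little concurrency the host simply cannot issue enough I/O to saturate the spindles. A hypothetical illustration (the 6 ms service time is an assumption, not a measured value):

```python
# Little's Law: achievable IOPS = outstanding I/Os / average service time.
# 6 ms per I/O is an illustrative figure for random I/O on 15K drives.
latency_s = 0.006
for qd in (1, 2, 4, 8, 16):
    iops = qd / latency_s
    print(f"queue depth {qd}: ~{iops:.0f} IOPS")  # scales until the disks saturate
```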

glen

1thumb
2 Iron

Re: CX4-480 R1_0 4+4 Performance


How can I change their I/O size? The application (NearPoint) will determine that, won't it?

dynamox
7 Thorium

Re: CX4-480 R1_0 4+4 Performance


I don't know if you can tweak their application (typically you can't), but with Analyzer you can see what I/O sizes they are actually using and then try to tune the host/storage to match.

1thumb
2 Iron

Re: CX4-480 R1_0 4+4 Performance


Thank you, I will take a look.
