August 18th, 2011 12:00

CX4-480 R1_0 4+4 Performance

Array: CX4-480 running FLARE 6.28.21.0.39

Application: Mimosa NearPoint (an Exchange archiving application)

Drives:

24 x 450 GB 15K FC drives, carved into four RAID 1/0 groups of 3+3.

56 x 600 GB 10K FC drives, carved into seven RAID 1/0 groups of 4+4.

58 x 1 TB SATA drives, carved into eleven RAID 5 groups of 4+1 (the remaining three drives are presumably hot spares).

For purposes of testing the I/O capabilities, we used the 450 GB drives and built the LUNs and MetaLUNs on them. I used the following commands to build the RAID groups, carve the LUNs, and expand them into MetaLUNs:

# Create four RAID 1/0 groups of 3+3 (RGs 23-26); drives are listed in
# mirrored pairs, pairing bus 2 against bus 3
naviseccli -h  {ip address of array} createrg 23 2_2_0 3_2_0 2_2_1 3_2_1 2_2_2 3_2_2  -rm no
naviseccli -h  {ip address of array} createrg 24 2_2_3 3_2_3 2_2_4 3_2_4 2_2_5 3_2_5 -rm no
naviseccli -h  {ip address of array} createrg 25 2_2_6 3_2_6 2_2_7 3_2_7 2_2_8 3_2_8 -rm no
naviseccli -h  {ip address of array} createrg 26 2_2_9 3_2_9 2_2_10 3_2_10 2_2_11 3_2_11 -rm no


# Bind the LUNs with read and write cache enabled (-rc 1 -wc 1) and auto-assign
# off (-aa 0), alternating the default SP: four 425 GB LUNs (100-103), eight
# 325 GB LUNs (104-111), and sixteen 25 GB LUNs (112-127)
naviseccli -address  {ip address of array} bind r1_0 100 -rg 23 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 101 -rg 24 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 102 -rg 25 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 103 -rg 26 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 104 -rg 23 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 105 -rg 24 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 106 -rg 25 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 107 -rg 26 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 108 -rg 23 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 109 -rg 24 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 110 -rg 25 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 111 -rg 26 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 112 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 113 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 114 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 115 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 116 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 117 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 118 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 119 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 120 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 121 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 122 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 123 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 124 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 125 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 126 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 127 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb


# Stripe-expand (-type S) each base LUN with three component LUNs, one from
# each of the other three RAID groups
naviseccli -h  {ip address of array} metalun -expand -base 100 -lus 101 102 103 -name LTCFISWSQLNP01_MetaLUN_1700GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 104 -lus 105 106 107 -name LTCFISWSQLNP01_MetaLUN_1300GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 108 -lus 109 110 111 -name LTCFISWSQLNP02_MetaLUN_1300GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 112 -lus 113 114 115 -name LTCFISWSQLNP01_MetaLUN_100GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 116 -lus 117 118 119 -name LTCFISWSQLNP01_MetaLUN_100GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 120 -lus 121 122 123 -name LTCFISWSQLNP02_MetaLUN_100GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 124 -lus 125 126 127 -name LTCFISWSQLNP02_MetaLUN_100GB -defaultowner A -type S -sq mb -o

We are using Iometer to test the I/O capability, and while I would expect about 1,920 IOPS from the MetaLUNs (EMC said to use 160 IOPS per drive for the calculations), during testing we only get about 1,400.
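
For reference, the arithmetic behind that expectation (using EMC's 160 IOPS/drive figure and the standard RAID 1/0 write penalty of 2; each MetaLUN spans all four RAID groups, i.e. 24 spindles):

4 RAID groups x 6 drives x 160 IOPS/drive = 3,840 back-end IOPS
At 100% writes: 3,840 / 2 = 1,920 host IOPS
At a 30/70 read/write mix: 3,840 / (0.3 x 1 + 0.7 x 2) = 3,840 / 1.7 ≈ 2,260 host IOPS

So the 1,920 figure corresponds to an all-write assumption; at the tested 30/70 mix the theoretical ceiling is a bit higher, which makes the measured 1,400 look even further off.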

Hosts are HP DL580 G7s with 32 cores and dual Emulex HBAs, plus an HP DL380 G7 with 16 cores and dual Emulex HBAs. HEAT reports have been run and came back clean.

The NearPoint application has a 64 KB write size, which should match the CLARiiON stripe element size for these RAID groups. Can anyone think of anything I am overlooking or have misconfigured? Any help would be appreciated, as it may keep the client's crosshairs focused elsewhere.
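
Since the bind commands above don't set -elsz, the stripe element size should be the default 128 blocks (64 KB per disk). One way to sanity-check the alignment is to watch the Stripe Crossing(s) counter in the LUN properties (statistics logging has to be enabled for it to populate); it should stay near zero if the 64 KB writes are landing aligned:

naviseccli -h {ip address of array} getlun 100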

Regards

Shannon

1.3K Posts

September 17th, 2011 12:00

Considering the 30/70 read/write ratio, did switching to RAID 1/0 help compared to RAID 5 on further tests?
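
For context on why that switch matters at 70% writes (using the standard write penalties: 4 back-end I/Os per write for RAID 5, 2 for RAID 1/0):

RAID 5: 0.3 x 1 + 0.7 x 4 = 3.1 back-end I/Os per host I/O
RAID 1/0: 0.3 x 1 + 0.7 x 2 = 1.7 back-end I/Os per host I/O

On the same spindles, that works out to roughly 3.1 / 1.7 ≈ 1.8x the host IOPS for RAID 1/0 on this workload.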

1 Rookie • 20.4K Posts

August 18th, 2011 13:00

What's the read-to-write ratio?

1 Rookie • 20.4K Posts

August 18th, 2011 13:00

What do you get when you run Iometer against LUN 100 only? Does the number of IOPS go up as you add more workers?

40 Posts

August 18th, 2011 13:00

30% read / 70% write, 100% random, 64 KB write size.

The database LUNs are the ones suffering: LUNs 100, 105, and 110.

40 Posts

August 18th, 2011 13:00

Running against the three database LUNs at the same time (LUNs 100, 105, and 110): 30% read / 70% write, 100% random, 64 KB write size.

40 Posts

August 18th, 2011 13:00

Yes, as more workers were added the I/O went up. EMC support thinks the host is not pushing enough I/O during the test. We are running tests now to see whether we can push enough I/O from the host.

1 Rookie • 20.4K Posts

August 18th, 2011 13:00

Also, is that 1,400 IOPS when you are running Iometer against one LUN, or against all of them at once?

1 Rookie • 20.4K Posts

August 18th, 2011 14:00

I don't know if you can tweak their application (typically you can't), but with Analyzer you can see what it is actually doing and try to tune the host/storage side to match.
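
If Analyzer is licensed on the array, a capture can be started from the CLI (a sketch; the analyzer switches below are from the FLARE-era naviseccli feature set, so check your release's help output):

naviseccli -h {ip address of array} analyzer -start
naviseccli -h {ip address of array} analyzer -status
naviseccli -h {ip address of array} analyzer -stop

The resulting .nar archive can then be opened in Analyzer to compare what the host thinks it is sending with what the array actually sees (I/O size, queue depth, cache behavior).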

1 Rookie • 20.4K Posts

August 18th, 2011 14:00

Also, have you tried formatting the NTFS file system with a 64 KB file allocation unit size to see if that does anything?
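
For reference, the allocation unit size is set at format time from a Windows prompt (F: is a placeholder for the MetaLUN's drive letter, and formatting is destructive, so only do this on a scratch volume):

format F: /FS:NTFS /A:64K /Q

Partition alignment matters as well on CLARiiON; diskpart's "create partition primary align=1024" puts the partition on a 1 MB boundary (the align value is in KB).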

4.5K Posts

August 18th, 2011 14:00

How many workers do you have configured? Start with 2. What about the "# of Outstanding I/Os" setting? Try 2 to begin with. Also, use 4 KB as the I/O size and see what you get. Remember, the disk values in the white paper are based on small-block I/O (less than 32 KB); the smaller the I/O size, the more IOPS you get.
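
If it helps to script the sweeps, Iometer can also run unattended from the command line (the .icf file name below is a placeholder; save the config from the GUI with the worker, outstanding-I/O, and access-spec settings above):

iometer /c 64k_random_30r70w.icf /r results.csv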

glen

40 Posts

August 18th, 2011 14:00

They have set that for this next test round. I will let you know what the results are tomorrow. I appreciate the quick responses.

4.5K Posts

August 18th, 2011 14:00

In Iometer you can set the I/O size; I'm not sure about the application. I just wanted to show the difference between 4 KB and 32 KB in the number of I/Os.

glen

1 Rookie • 20.4K Posts

August 18th, 2011 14:00

When you configure your workers, crank up "number of outstanding I/Os per target" to 10, and make sure to set it for each worker.

40 Posts

August 18th, 2011 14:00

How can I change what their I/O size is? The application (NearPoint) will determine that, won't it?

40 Posts

August 19th, 2011 04:00

Thank you, I will take a look.
