
August 18th, 2011 12:00

CX4-480 R1_0 4+4 Performance

Array = CX4-480 with Flare 6.28.21.0.39

Application = Mimosa NearPoint (Exchange archive application)

DRIVES =

24 x 450GB 15K FC drives, carved into 4 RAID 1_0 RAID groups of 3+3.

56 x 600GB 10K FC drives, carved into 7 RAID 1_0 RAID groups of 4+4.

58 x 1TB SATA drives, carved into 11 RAID 5 RAID groups of 4+1.

For purposes of testing the I/O capabilities, we used the 450GB drives and built the LUNs and MetaLUNs on them. I used the following scripts to build the RAID groups, carve the LUNs, and expand them into MetaLUNs:

naviseccli -h  {ip address of array} createrg 23 2_2_0 3_2_0 2_2_1 3_2_1 2_2_2 3_2_2  -rm no
naviseccli -h  {ip address of array} createrg 24 2_2_3 3_2_3 2_2_4 3_2_4 2_2_5 3_2_5 -rm no
naviseccli -h  {ip address of array} createrg 25 2_2_6 3_2_6 2_2_7 3_2_7 2_2_8 3_2_8 -rm no
naviseccli -h  {ip address of array} createrg 26 2_2_9 3_2_9 2_2_10 3_2_10 2_2_11 3_2_11 -rm no


naviseccli -address  {ip address of array} bind r1_0 100 -rg 23 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 101 -rg 24 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 102 -rg 25 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 103 -rg 26 -rc 1 -wc 1 -aa 0 -cap 435200 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 104 -rg 23 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 105 -rg 24 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 106 -rg 25 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 107 -rg 26 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 108 -rg 23 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 109 -rg 24 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 110 -rg 25 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 111 -rg 26 -rc 1 -wc 1 -aa 0 -cap 332800 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 112 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 113 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 114 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 115 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 116 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 117 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 118 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 119 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb

naviseccli -address  {ip address of array} bind r1_0 120 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 121 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 122 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 123 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb

naviseccli -address  {ip address of array} bind r1_0 124 -rg 23 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 125 -rg 24 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb
naviseccli -address  {ip address of array} bind r1_0 126 -rg 25 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp a -sq mb
naviseccli -address  {ip address of array} bind r1_0 127 -rg 26 -rc 1 -wc 1 -aa 0 -cap 25600 -r Medium -v Low -sp b -sq mb


naviseccli -h  {ip address of array} metalun -expand -base 100 -lus 101 102 103 -name LTCFISWSQLNP01_MetaLUN_1700GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 104 -lus 105 106 107 -name LTCFISWSQLNP01_MetaLUN_1300GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 108 -lus 109 110 111 -name LTCFISWSQLNP02_MetaLUN_1300GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 112 -lus 113 114 115 -name LTCFISWSQLNP01_MetaLUN_100GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 116 -lus 117 118 119 -name LTCFISWSQLNP01_MetaLUN_100GB -defaultowner A -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 120 -lus 121 122 123 -name LTCFISWSQLNP02_MetaLUN_100GB -defaultowner B -type S -sq mb -o
naviseccli -h  {ip address of array} metalun -expand -base 124 -lus 125 126 127 -name LTCFISWSQLNP02_MetaLUN_100GB -defaultowner A -type S -sq mb -o

We are using the Iometer tool to test the I/O capabilities, and while I would expect to get 1920 IOPS from the MetaLUNs (EMC said to use 160 IOPS per drive for calculations), during testing we only get about 1400.
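As a rough sketch of where that 1920 figure comes from (simple arithmetic using the 160 IOPS/drive rule of thumb; the read/write mix values below are assumptions, not taken from the actual test):

# Back-of-the-envelope estimate for one MetaLUN striped across the four
# 3+3 RAID 1_0 groups (RG 23-26), assuming 160 IOPS per 15K FC drive.
drives = 4 * 6                      # 4 RAID groups x (3+3) drives
per_drive = 160                     # assumed small-block random IOPS per drive
backend = drives * per_drive        # 3840 back-end IOPS across the spindles

write_penalty = 2                   # RAID 1_0: each host write lands on two drives

def host_iops(read_fraction):
    # host IOPS sustainable for a given random read fraction
    return backend / (read_fraction + (1 - read_fraction) * write_penalty)

print(host_iops(0.0))               # 100% writes -> 1920.0, the figure above
print(host_iops(0.5))               # 50/50 mix   -> 2560.0 (illustration only)

Note that 160 IOPS per drive is a small-block rule of thumb; at a 64 KB transfer size each drive will usually deliver somewhat less, which could account for part of the gap between 1920 and 1400.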

Hosts are HP DL580 G7s with 32 cores and dual Emulex HBAs, plus an HP DL380 G7 with 16 cores and dual Emulex HBAs. HEAT reports have been run and come back clean.

The NearPoint application has a write size of 64 KB, which should match the CLARiiON stripe element size for these RAID groups. Can anyone think of anything I am overlooking or have misconfigured? Any help would be appreciated, as it may keep the client's crosshairs focused elsewhere.

Regards

Shannon

40 Posts

August 19th, 2011 04:00

I will ask them to change the I/O size in Iometer before the next test. Unfortunately, we do not have access to the tool or the servers, so I must rely on our Wintel team to make the changes, run the test, and hand off the results, which the company Autonomy parses to give me the numbers. I am going to pull SP collects and the .NAR file to see how the array performed during the test. I am just not sure how to compare apples to apples between the Iometer results and the .NAR file results.

4.5K Posts

August 19th, 2011 12:00

The easiest way is to look at the LUN under test, determine the speed of the disks (7,200 rpm, 10K, or 15K) and how many disks are in the RAID group (2+2, 4+4, etc.), then look at the type of test: is it all reads or all writes? Then determine what the RAID group can handle. See the Best Practices guide, in the "Size the Storage Requirements" section; there are a number of formulas for determining the capacity of the RAID groups.

EMC CLARiiON Performance and Availability Release 30 Firmware Update Applied Best Practices.pdf

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/h5773-clariion-best-practices-performance-availability-wp.pdf

EMC CLARiiON Storage System Fundamentals for Performance and Availability

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1049_emc_clariion_fibre_channel_storage_fundamentals_ldv.pdf
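As a minimal restatement of the sizing arithmetic those papers describe (my own sketch, not copied from either document; the write penalties of 2 for RAID 1_0 and 4 for RAID 5 apply to small random writes):

# Sketch: translate a host workload into back-end disk IOPS and a drive count.
import math

WRITE_PENALTY = {"raid10": 2, "raid5": 4}        # small random writes

def drives_needed(host_iops, read_fraction, raid_type, per_drive_iops):
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    backend = reads + writes * WRITE_PENALTY[raid_type]
    return math.ceil(backend / per_drive_iops)

# Example: sustaining 1400 host IOPS of pure random writes on 15K FC drives
print(drives_needed(1400, 0.0, "raid10", 160))   # -> 18 drives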

glen

4.5K Posts

August 29th, 2011 06:00

Please remember to mark the question as Answered when you get the correct or best answer. Also, please award points to the person providing the best answer.

glen

40 Posts

August 29th, 2011 14:00

I am still battling with the application owner over the testing parameters they are using with Iometer. The FC LUNs are performing as desired, but when we also add the SATA LUNs into the test we fill up cache and then performance is affected everywhere. We believe their testing parameters are not set correctly, and our local EMC rep and the application owner are meeting to discuss the Iometer settings. The I/O footprint for the SATA LUNs is a 4 KB block size with 30% random writes, times four servers. We may have to change the SATA from RAID 5 4+1 to RAID 1_0 4+4. I will update when the next test completes. All of the answers thus far have been helpful in narrowing in on a fix. Thank you.
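For what it is worth, here is a quick sketch of why the RAID level matters for that SATA write mix (this assumes the 4 KB profile works out to 70% reads / 30% random writes and roughly 80 IOPS per 7,200 rpm SATA drive; both figures are assumptions, not measurements from the test):

# Compare the host IOPS one group can sustain under the assumed 4 KB profile:
# RAID 5 (4+1) versus RAID 1_0 (4+4) on SATA drives.
per_drive = 80                                   # assumed IOPS per 7,200 rpm SATA drive
read_fraction, write_fraction = 0.70, 0.30       # assumed mix

def host_iops(drives, write_penalty):
    backend = drives * per_drive
    return backend / (read_fraction + write_fraction * write_penalty)

print(host_iops(5, 4))    # RAID 5 (4+1), penalty 4   -> ~211 host IOPS per group
print(host_iops(8, 2))    # RAID 1_0 (4+4), penalty 2 -> ~492 host IOPS per group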

Shannon

159 Posts

August 30th, 2011 20:00

As an addition, you can take your actual disk specs and put them into Wmarow's calculator, with data matching your Iometer and host tests, to determine approximately what IOPS you should expect:

http://www.wmarow.com/strcalc/

Ted

2 Intern • 1.3K Posts

September 17th, 2011 12:00

Look at "white paper EMC clariion metaluns A details review`.. I see you ahve all the base LUNs (meta head) on sam e RG/23 which is against the best practice. You have to distribute the head across RG; Like one   meta head  on different RG
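To make that concrete, here is a small sketch (an illustration, not an official procedure) that picks a different component LUN as each MetaLUN's base so the heads rotate across RG 23-26, and prints the corresponding naviseccli commands. The LUN numbers follow the bind script earlier in the thread, the -name and -defaultowner options are omitted for brevity, and rebuilding the MetaLUNs this way would destroy their current contents.

# Illustration only: rotate each MetaLUN head across RG 23-26 instead of
# binding every head in RG 23, and print the resulting expand commands.
quads = [
    [100, 101, 102, 103],   # components live on RG 23, 24, 25, 26 respectively
    [104, 105, 106, 107],
    [108, 109, 110, 111],
    [112, 113, 114, 115],
    [116, 117, 118, 119],
    [120, 121, 122, 123],
    [124, 125, 126, 127],
]
for i, quad in enumerate(quads):
    base = quad[i % 4]                           # head rotates: RG 23, 24, 25, 26, ...
    components = [lun for lun in quad if lun != base]
    print("naviseccli -h {array} metalun -expand -base %d -lus %s -type S -sq mb -o"
          % (base, " ".join(str(lun) for lun in components)))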
