4 Operator

 • 

4.5K Posts

September 8th, 2009 12:00

Either way will work. Using the expand will take time and will affect the performance of the LUNs in the raid group. Creating a new RG and a new LUN and then migrating the existing LUN to the new LUN will also take time, probably about the same amount.

glen

2 Intern

 • 

234 Posts

August 24th, 2009 22:00

Hi All,

Any suggestions for the RAID layout would be helpful.

regards,
Samir

9 Legend

 • 

20.4K Posts

August 25th, 2009 04:00

Are the SAP application servers or the database having the performance issues? Oracle or Informix?

4 Operator

 • 

2.1K Posts

August 25th, 2009 08:00

It would be helpful if you could provide an overview of your configuration layout. This would give us a baseline for comment/recommendations.

2 Intern

 • 

234 Posts

August 25th, 2009 22:00

Hi All,

Please find the RAID group layout for these SAP servers (MS SQL Server 2005) as follows:

LUN 76   1000 MB    RAID 1_0   Raid Group 9
LUN 77   81920 MB   metaLUN    Raid Group N/A
LUN 78   40960 MB   RAID 1_0   Raid Group 9
LUN 79   25600 MB   RAID 5     Raid Group 1
LUN 81   200 MB     RAID 1_0   Raid Group 111

regards,
Samir

4 Operator

 • 

2.1K Posts

August 26th, 2009 07:00

Could you also provide:

* the layout of the RAID groups used? We just need the number and type of drives in each group.
* the composition of metaLUN 77: striped or concatenated, and which RAID groups its components are on.
* what each of the LUNs is used for in your config (e.g. LUN 76 appears to be transaction logs).

I know this is a bit of a pain, but performance is not a simple topic and the more info you can provide, the more accurate the answer can be.

2 Intern

 • 

234 Posts

August 27th, 2009 00:00

Hi Allen,

Please find details as follows:-
=============================================================
RG 9 1_1_0,1_1_1,1_1_2,1_1_3,1_1_4,1_1_5 300gb FC disks LUN 73, 76, 78
RG 1 0_1_2,0_1_3,0_1_4,1_0_3,1_0_4,1_0_5,2_0_3,2_0_4 300gb FC disks LUN 52, 79
RG 111 0_0_2,0_0_3,0_0_4,0_0_5 73gb FC disks LUN 65, 71, 81
=============================================================

My mistake: LUN 76 and LUN 77 are metaLUNs.
LUN 76 was expanded by concatenation; its components are LUN 76 and LUN 59.

LUN 59 raid 1_0 RG12 300gb FC disks disk=2_1_8,2_1_9,2_1_10,2_1_11,2_1_12,2_1_13,2_1_14,2_2_0
=============================================================

LUN 77 is also concatenated; its components are LUN 77 and LUN 58.

LUN 77 is a private LUN, disks = 0_0_6, 0_0_7, 0_0_8, 0_0_9, 0_0_10, 0_0_11, 0_0_12, 0_0_13, from RG 6, RAID 1_0, 146 GB FC disks.

LUN 58 is a private LUN, disks = 0_1_0,0_1_1,1_0_0,1_0_1,1_0_2,2_0_0,2_0_1,2_0_2, from RG 0, RAID 5, 300 GB disks.
============================================================


In total we have 6 x 73 GB FC 15K RPM, 9 x 146 GB FC 10K RPM, 36 x 300 GB FC 10K RPM, 46 x 300 GB FC 15K RPM, and 30 x 500 GB ATA 7.2K RPM hard disks on this CX700.
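
For reference, the raw capacity of that inventory works out as below (a quick sketch; the counts and sizes are simply the figures listed above, before any RAID overhead):

```python
# Drive inventory on the CX700, as listed in the post: (count, size in GB)
inventory = [
    (6, 73),    # FC 15K RPM
    (9, 146),   # FC 10K RPM
    (36, 300),  # FC 10K RPM
    (46, 300),  # FC 15K RPM
    (30, 500),  # ATA 7.2K RPM
]

# Raw (unprotected) capacity: count * size summed over all drive types
total_gb = sum(count * size for count, size in inventory)
print(f"Total raw capacity: {total_gb} GB (~{total_gb / 1024:.1f} TB)")
```

Usable capacity will of course be lower once RAID 1_0 mirroring and RAID 5 parity are taken out.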

I hope this information is helpful.

regards,
Samir

4 Operator

 • 

4.5K Posts

August 27th, 2009 10:00

Is Navisphere Analyzer installed on your array?

If so, have you collected Analyzer archives that cover the times you're experiencing the problems?

What FLARE version is running on the CX700?

What are your read and write caches set to (memory allocated)? Check to ensure that caching is enabled for all the LUNs and both SPs.

glen

2 Intern

 • 

234 Posts

August 29th, 2009 22:00

Hi Glen,

Analyzer is installed on this array and I've gathered NAR files, which show LUN 76 at 99% utilization. LUN 76 is part of RG 9 (RAID 1_0) and is also shared by other servers. If you need them, I can send the NAR files to you; I'd appreciate any steps you can suggest to correct this.

The array is running release 19. Read cache is set at 1889 MB and write cache at 1254 MB, and caching is enabled for all LUNs and both SPs.

regards,
Samir

2 Intern

 • 

234 Posts

August 31st, 2009 00:00

Hi All,

I've gathered the following details and would appreciate guidance on resolving this problem.

===============================================================
LUN 76          Utilization (%)   Total Throughput (IO/s)   Queue Length
Latest          68.55405          729.448                    26.01217
Average         44.30802          457.1073                   12.91021
Maximum         100               1602.339                   43.96342
Minimum         0                 0.922646                   0
===============================================================

LUN 59          Utilization (%)   Total Throughput (IO/s)
Latest          50                0
Average         47.82             33.054
Maximum         100               338.04
Minimum         0                 0
===============================================================

Raid Group 9    Response Time (ms)   Utilization (%)
Latest          15.46930372          53.36298
Average         15.02573385          29.64063
Maximum         86.44497259          99.83819
Minimum         0                    0
===============================================================
Raid Group 9 Disk Components Performance Details
Utilization %   1_1_0   1_1_1   1_1_2   1_1_3   1_1_4   1_1_5
Latest          41.89   49.11   41.75   48.97   42.85   50.30
Average         23.81   28.68   23.67   28.52   25.21   30.11
Maximum         99.19   99.35   99.68   99.83   99.19   99.53
Minimum         0       0       0       0       0       0
===============================================================

I see that LUN 59, part of the concatenated metaLUN with LUN 76, shows 100% utilization.

Appreciate suggestions from all.

regards,
Samir

4 Operator

 • 

4.5K Posts

August 31st, 2009 07:00

For performance issues, you would be best served by opening a case with EMC; attempting to diagnose a performance issue via chat is very difficult.

Some things that you can do when you open a case with EMC:

1. Review Knowledgebase article emc218359. This article provides links to a number of other articles about using Analyzer and related issues.

2. Look at emc161922, "How to gather the necessary information for a CLARiiON performance analysis SR" (there is a link to it near the bottom of emc218359). You will need, at a minimum, the SPcollects, the Analyzer archives (NAR files), and the host grab from the host experiencing the problem, along with a description of the problem, the times it occurs, the LUNs and hosts involved, and any other data that helps point to when and where the problem occurs.

3. When viewing the data in the NAR files, ignore the values for metaLUN components - always look at either the actual metaLUN (called the metaLUN head) or the disks within the metaLUN - you want to look for those objects that have high queue length.

4. review the following article, pay special attention to the metaLUN section:

EMC CLARiiON Best Practices for Fibre Channel Storage: FLARE Release 26 Firmware Update - Best Practices Planning

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H2358_clariion_best_prac_fibre_chnl_wp_ldv.pdf

5. review the metaLUN Best Practice guide:

http://powerlink.emc.com/km/live1/en_US/Offering_Technical/White_Paper/H1024.1_clariion_metaluns_cncpt_wp_ldv.pdf

glen

2 Intern

 • 

234 Posts

August 31st, 2009 10:00

Hi Al,

From the NAR files I see LUN 76 and the disks under Raid Group 9 showing 99% utilization, and LUN 59, which forms part of the LUN 76 metaLUN, is also badly hit. But the disks under LUN 59 are only 50% utilized.

What I plan to do is expand Raid Group 9 from the current 6 disks to 10. Kindly suggest whether this would be the best approach.

regards,
Samir

4 Operator

 • 

4.5K Posts

August 31st, 2009 11:00

Rather than using Utilization, look at Total Throughput (IOPS) - this will give you a good idea if the disks are being overloaded.

When you create a metaLUN, if the component LUNs are in raid groups that contain other non-metaLUN components, the non-metaLUN LUNs can interfere with their IO.

If the disks are 10K, then you should add disks if the Total IOPS for the disks is consistently over 120 IOPS. Use 180 IOPS for 15K disks.

Look in the Section of the Best Practices guide called "Sizing the Storage Requirements" - this section will explain how to figure out the number of disks to use for a given IO load.
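
As a rough sketch of that check, assuming per-disk Total Throughput figures exported from a NAR file and the rule-of-thumb limits above (120 IOPS for 10K FC, 180 for 15K); the disk names and IOPS numbers below are hypothetical, not Samir's actual data:

```python
# Rule-of-thumb per-disk IOPS limits by spindle speed
IOPS_LIMIT = {"10K": 120, "15K": 180}

def overloaded_disks(disk_iops, speed):
    """Return the disks whose observed Total Throughput exceeds the
    rule-of-thumb limit for their rotational speed.
    disk_iops maps disk name -> observed IOPS (e.g. NAR averages)."""
    limit = IOPS_LIMIT[speed]
    return {disk: iops for disk, iops in disk_iops.items() if iops > limit}

# Hypothetical averages for the six disks in RG 9, assumed 10K FC here
rg9 = {"1_1_0": 95, "1_1_1": 140, "1_1_2": 92,
       "1_1_3": 150, "1_1_4": 101, "1_1_5": 160}
print(overloaded_disks(rg9, "10K"))  # disks consistently over 120 IOPS
```

Any disks this flags are candidates for spreading the load over more spindles.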

glen

2 Intern

 • 

234 Posts

August 31st, 2009 11:00

I've gathered a lot of stats for this and have checked the throughput for the raid group.

But I couldn't come to a conclusion about what to change to improve performance for these servers.

Glen, if the throughput is high for the disks under these RGs, then what should be done? I've looked at the performance summary for RG 9 and it shows around 35,000 on average.

rgds,
Samir

4 Operator

 • 

4.5K Posts

August 31st, 2009 14:00

Please look at the disks in the Raid Group. If the Total IOPS at the disks exceeds the limit for the speed of the disk, multiply the number of disks in the raid group by the IOPS each disk is receiving; this gives the total IOPS load on the RG.

ex. if you have a 4+1 R5 raid group and each disk is receiving 200 IOPS, then the total for the Raid Group is 5 * 200 = 1000 IOPS. If the disks are 10K FC, each disk can handle about 120 IOPS, so divide 1000 / 120 = 8.3. This is the number of 10K disks you need to handle the IO load, so you would need a 7+1 or 8+1 R5.
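
That arithmetic can be sketched as a small helper (the 4+1 R5 figures below are Glen's worked example, not Samir's actual configuration):

```python
import math

def disks_needed(disk_count, observed_iops_per_disk, iops_per_disk_limit):
    """Total RG load = number of disks * observed IOPS per disk;
    divide by what one disk of the target speed can sustain to size
    the raid group that would absorb the same load."""
    total_load = disk_count * observed_iops_per_disk
    return total_load / iops_per_disk_limit

# 4+1 R5 (5 disks) each receiving 200 IOPS, re-laid on 10K FC (~120 IOPS/disk)
n = disks_needed(5, 200, 120)
print(f"{n:.1f} disks")                       # 8.3, so round up
print(math.ceil(n), "disks total, i.e. a 7+1 or 8+1 R5")
```

Rounding up to the next whole spindle count is what leads to the 7+1 or 8+1 R5 recommendation.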

glen