MAX values for I/O when viewing in Celerra Monitor

May 19th, 2011 10:00

I need to have some reference when viewing the I/O within Celerra Monitor.

For example if the Front End shows 2200 I/O is that a lot, a little?  Is there a Maximum value?

I understand 0, but I also need to understand how much it can handle.


May 20th, 2011 06:00

Difficult to say, since it doesn't say the block size or which I/Os those are.

May 20th, 2011 10:00

Continuing my shameless promotion of the server_stats command: on later DART releases, you can see all of your I/O breakdowns on the disk volumes.

The store.diskVolume statpath has a pile of possibly useful stats.  You can export this to .csv and chart it to see your I/O performance.  While it doesn't tell you the limits, it can characterize the data you see in Celerra Monitor and put it all in context.
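If it helps, here is a minimal sketch (my own illustration, not a Celerra tool) of charting such an export in Python. It assumes you have saved the stats to a CSV named diskvol_stats.csv with columns like timestamp, dvol, read_ops and write_ops - those file and column names are placeholders for whatever your export actually produces:

import pandas as pd
import matplotlib.pyplot as plt

# Load the exported server_stats data (filename and column names are assumptions).
df = pd.read_csv("diskvol_stats.csv", parse_dates=["timestamp"])

# Plot read/write ops per second over time for a single disk volume.
vol = df[df["dvol"] == "d9"]
vol.plot(x="timestamp", y=["read_ops", "write_ops"], title="d9 I/O over time")
plt.ylabel("ops/s")
plt.show()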

Karl


May 20th, 2011 12:00

Hello

A generic 1Gb NIC, a TOE NIC or an iSCSI HBA is capable of servicing 150,000 or more IOPS.

So 2200 IOPS is a very small amount of load.
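For a rough sense of scale, here is a back-of-the-envelope sketch in Python (my own numbers; the 8 KiB average I/O size is an assumption, not something from Celerra Monitor):

iops = 2200
io_size_bytes = 8 * 1024              # assumed 8 KiB average I/O

throughput_mb_s = iops * io_size_bytes / 1_000_000
link_capacity_mb_s = 1_000 / 8        # a 1 Gb/s link is roughly 125 MB/s before overhead

print(f"{iops} IOPS at 8 KiB is about {throughput_mb_s:.1f} MB/s, "
      f"or {throughput_mb_s / link_capacity_mb_s:.0%} of a 1 Gb link")

At an 8 KiB I/O size that works out to roughly 18 MB/s, a small fraction of even a single 1 Gb interface.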

Best regards,

Alex

June 22nd, 2011 14:00

This is production output from my Celerra.  Can you give me an example for store.diskVolume?

server_stats server_2 -table fsvol -interval 10 -count 6
server_2       FS name          MetaVol        Read        Read      FS     Avg Read     Write      Write      FS    Avg Write
Timestamp                                      KiB/s    Requests/s  Read      Size       KiB/s    Requests/s  Write     Size  
                                                                    Reqs%    Bytes                            Reqs%    Bytes  
           View_NFS_Hosp4   v178                     2           0      0       24576        135          23      -        6063
           View_NFS_Hosp3   v171                     7           0      1       24576         55          10      -        5923
           View_NFS_Hosp2   v164                   395          16     39       25136        176          24      -        7406
           View_NFS_Hosp1   v157                     0           0      0           -         11           2      -        6752
           SNBS1            SNBS1                    0           0      0           -          0           0      -         512
           SNBS5            SNBS5                    0           0      0           -         18           1      -       13166
           SNBS6            SNBS6                    0           0      0           -          2           1      -        3200
           NonPro...S_Hosp  v144                     4           0      1        8192        848         120      -        7263
           View_NFS_Hosp    v141                    23           3      7        8192        123           9      -       13863
           VM_NFS_Hosp      v96                    350          21     51       17047       9364        1405      -        6825

Thank you

June 22nd, 2011 19:00

On more recent versions of DART (5.6, 6.0 and 7.0), you can list the various statpaths that can be monitored.

Try this:  server_stats server_2 -info store.diskVolume

This command will list a variety of statistics within the store.diskVolume statpath.  server_stats server_2 -info (with no statpath) will list all statpaths and variables, which you might want to pipe into a file to capture all of the current statistics groups on your Celerra or VNX.

June 24th, 2011 10:00

Thank you

I tried server_stats server_2 -info store.diskVolume and it gave me what looks like a list of disk volume names.


name            = store.diskVolume
description     = Per disk volume statistics
type            = Set
member_stats    = store.diskVolume.ALL-ELEMENTS.currentQueueDepth,store.diskVolume.ALL-ELEMENTS.reads,store.diskVolume.ALL-ELEMENTS.readBytes,store.diskVolume.ALL-ELEMENTS.readSizeAvg,store.diskVolume.ALL-ELEMENTS.writes,store.diskVolume.ALL-ELEMENTS.writeBytes,store.diskVolume.ALL-ELEMENTS.writeSizeAvg,store.diskVolume.ALL-ELEMENTS.util
member_elements = store.diskVolume.root_disk,store.diskVolume.root_ldisk,store.diskVolume.NBS5,store.diskVolume.NBS6,store.diskVolume.d7,store.diskVolume.d8,store.diskVolume.d9,store.diskVolume.d10,store.diskVolume.d11,store.diskVolume.d16,store.diskVolume.d12,store.diskVolume.d17,store.diskVolume.d13,store.diskVolume.d18,store.diskVolume.d29,store.diskVolume.d19,store.diskVolume.d31,store.diskVolume.d20,store.diskVolume.d33,store.diskVolume.d30,store.diskVolume.d35,store.diskVolume.d32,store.diskVolume.d37,store.diskVolume.d34,store.diskVolume.d39,store.diskVolume.d36,store.diskVolume.d40,store.diskVolume.d38,store.diskVolume.d41,store.diskVolume.d42,store.diskVolume.d47,store.diskVolume.d48
member_of       =

June 24th, 2011 15:00

Yes - this is the proper output of the command.  It shows all of the per-volume statistics available on your Celerra.  You can work through all of the various monitor groups in server_stats and get more information.  Since the variables change based on how many filesystems, disks and metavolumes you have, you have to run server_stats server_2 -info <stat_path> to get all of the possible variables and combinations.
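If you want to script against that list, here is a minimal sketch (assuming you have saved the -info output shown above to a file named diskvolume_info.txt; the filename and helper function are my own, not part of server_stats):

# Parse the "key = value" lines of a saved "server_stats ... -info" output.
def parse_info(path):
    info = {}
    with open(path) as fh:
        for line in fh:
            if "=" in line:
                key, _, value = line.partition("=")
                info[key.strip()] = value.strip()
    return info

info = parse_info("diskvolume_info.txt")
members = info.get("member_elements", "")
elements = members.split(",") if members else []

print(f"{len(elements)} disk volumes found")
for element in elements:
    print(element.replace("store.diskVolume.", ""))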

Here's some output for store.diskVolume:

$ server_stats server_2 -monitor store.diskVolume.ALL-ELEMENTS -interval 1 -te no

server_2        dVol          Queue      Read      Read      Avg Read    Write     Write    Avg Write     Util %
Timestamp                     Depth      Ops/s     KiB/s       Size      Ops/s     KiB/s       Size
                                                              Bytes                           Bytes
17:47:02   root_ldisk                0        0          0           -        9         12        1308           0
           d55                       0        1          8        8192        0          0           -           1
           d9                        0        1          2        1536        3         20        6997           2
           d103                      0        6         48        8192        2         56       28672           2
           d110                      0        3         24        8192        0          0           -           2
           d60                       0        1          8        8192        4         26        6656           0
           d61                       0        1          8        8192        0          0           -           1
           d15                       0        1          8        8192        4         32        8192           2
           d66                       0        2         14        7424        5         32        6554           2
           d19                       0        1          8        8192        5         32        6554           2
           d68                       0        0          0           -        4         30        7552           1
           d70                       0        0          0           -        8         58        7360           1
           d25                       0        1          8        8192        6         48        8277           2
           d76                       0        0          0           -        5         32        6554           1
           d29                       0        0          0           -        4         30        7680           0
           d31                       0        2         16        8192        2         16        8192           2
           d82                       0        8         64        8192        7         64        9362           0
           d37                       0        1          8        8192        1          8        8192           2
           d88                       0        7         56        8192        2         16        8192           0
           d92                       0       37        304        8413        1          8        8192          18
           d47                       0       32        256        8192        1          8        8192          23
           d98                       0       37        304        8413        3         32       10923          19
           d53                       0       41        328        8192        1          8        8192          26
           d121                      0        4         32        8192        1         32       32768           2
           d116                      0        4         32        8192        0          0           -           2
           d127                      0        4         32        8192        1         32       32768           2
           d134                      0        1         16       16384        0          0           -           0
           d145                      0        6         48        8192        2         48       24576           1
           d140                      0        8         64        8192        1          8        8192           4

The word-wrapping kills the output, but you're looking at the read/write performance for each disk device used by the Celerra.  It's hard to see the columns, but the "Util %" column is wrapped around to the first spot.  This is the utilization of the particular disk device.  Notice the Queue Depth column?  It's "0" for every disk volume - there are no outstanding I/Os in the queue waiting to go to disk.  On your own system, you can take this output, export it in .csv format, import it into Excel and graph the results.  If you take this output during peak usage, you can see how the backend disk of the Celerra is handling your I/O requests.
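As a starting point, here is a rough sketch of turning the monitor text above into a .csv (the filenames are placeholders, and you may need to adjust the column list if your DART version prints a different layout):

import csv
import re

columns = ["dvol", "queue_depth", "read_ops", "read_kib", "avg_read_bytes",
           "write_ops", "write_kib", "avg_write_bytes", "util_pct"]

with open("diskvol_monitor.txt") as src, open("diskvol_monitor.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(columns)
    for line in src:
        fields = line.split()
        # The first data row starts with a timestamp such as 17:47:02 - drop it.
        if fields and re.match(r"^\d{2}:\d{2}:\d{2}$", fields[0]):
            fields = fields[1:]
        # Keep only rows that look like "dNN ..." or "root_..." with all nine columns.
        if len(fields) == len(columns) and fields[0].startswith(("d", "root")):
            writer.writerow(fields)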

store.diskVolume is one of many statistic paths.  If you really want to see what you can collect, try server_stats server_2 -info store > store.statpaths.txt; cat store.statpaths.txt.  This will write all of the storage statpaths on your system out to a text file.  Go through each statpath with server_stats server_2 -monitor to see what statistics you can observe.  A good approach is to collect data during peak and off-peak usage, and use the results to gauge the "envelope" of your system.

Please let us know if this helps!

Karl

June 27th, 2011 06:00

Thank you Karl, but I just need one clarification.

"If you take this output during peak usage, you can see how the backend disk of the Celerra is handling your I/O requests."

Does this mean that what you are seeing is the actual IOPS, and that I do not need to worry about the write penalty?

Sincerely,

Viral

June 27th, 2011 07:00

If the backend is having difficulty handling your I/O, you'll start to see queued I/O showing up in the aforementioned output.  Write penalties from RAID5 and RAID6 parity operations generally aren't visible here unless the backend is having difficulty - again, you'll see it as queued I/O.  Unless the backend is really misconfigured - unbalanced SP usage, overloaded buses, mixed drive types in RAID groups, etc. - RAID parity writes should never be an issue.  More often, it's likely that the backend does not have enough disks to absorb the IOPS.  Sometimes, people mistake this for a parity issue.
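For anyone who does want to sanity-check the parity overhead, here is a back-of-the-envelope sketch using the standard write-penalty rule of thumb (roughly 4 backend I/Os per random write on RAID5, 6 on RAID6); the numbers below are made up for illustration, not taken from this thread:

read_iops = 1500
write_iops = 700
raid5_write_penalty = 4

# Every random write on RAID5 costs roughly four backend I/Os (read data, read
# parity, write data, write parity), so the backend sees more IOPS than the host.
backend_iops = read_iops + write_iops * raid5_write_penalty   # 1500 + 2800 = 4300

# Compare against a rough per-spindle ceiling, e.g. ~180 IOPS for a 15k FC drive.
spindles = 30
per_spindle_iops = 180
print(f"Estimated backend load: {backend_iops} IOPS "
      f"vs roughly {spindles * per_spindle_iops} IOPS of spindle capacity")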

Some of the other statpaths, store.diskVolume.ALL-ELEMENTS.util and store.diskVolume.ALL-ELEMENTS.avgServiceTime, provide more detail on backend performance relative to the disks used by the Celerra.  During idle times, you can characterize utilization and average service time, then compare them under peak load.
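Here is a small sketch of that comparison (assuming you have captured two CSVs the same way as the earlier example, one idle and one at peak, each with dvol and util_pct columns; the filenames and column names are placeholders):

import pandas as pd

idle = pd.read_csv("diskvol_idle.csv").set_index("dvol")["util_pct"]
peak = pd.read_csv("diskvol_peak.csv").set_index("dvol")["util_pct"]

# Largest jumps in utilization between the idle and peak captures.
delta = (peak - idle).sort_values(ascending=False)
print("dVols with the biggest utilization increase under load:")
print(delta.head(10))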

Hope this helps!

Karl


November 2nd, 2011 11:00

Hi, does this command only work on Celerra OS 6.0?  I tried it on 5.6 but it came back with an error.  Also, are these real IOPS/sec figures for the dVols - from cache on the array, or from the spindles?


November 2nd, 2011 14:00

Yes, server_stats has been enhanced over the last couple of releases, and not everything has been back-ported.

Take a look at "man server_stats" to see if the option you are using is implemented in your DART version.
