SunshineM
2 Iron

Test CIFS/Pool load capability

Hello,

We have a CIFS share on RAID 6 (6 + 2) NL-SAS disks.

My theoretical calculations for what this pool should be able to handle are below:

1) IOPS per disk * total disks = 80 * 8 = 640 IOPS

2) MB/s per disk * total disks = 80 * 16 = 128 MB/s

But I want to know how I can generate a test load on this CIFS share on this pool, so I can measure the real statistics this pool is capable of handling.

Thanks.


Accepted Solutions
kelleg
4 Ruthenium

Re: Test CIFS/Pool load capability

When you run IOmeter and select a disk, that disk on the array is one device (at least if this is a Block configuration - the LUN which is in a Pool), and the IOPS (187.82) is what the LUN can handle, not the individual physical disks under the LUN that make up the Pool. For File it's a bit different, as File takes a LUN (or LUNs), creates a file system on the array, and then presents that to a host as a device (I think this is correct - I'm not a File person). That file system may behave differently than a LUN on the Block side.

If you have 8 physical disks in the Pool using RAID 6, then you have a 6+2 configuration. Each individual disk can handle about 80 IOPS, so the total Pool can handle about 8 (disks) * 80 IOPS = 640 IOPS. The 80 IOPS is based on a small-block (less than 32KB IO Size), random IO (combination of reads and writes, 70/30) workload. If you increase the IO Size, then IOPS will be lower, but bandwidth increases. Check page 115 in the "EMC Unified Storage Best Practices for Performance and Availability - Common Platform and Block Storage 31.5 — Applied Best Practices.pdf" document that I attached in my last post.
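
To make the IO Size trade-off concrete, here is a rough sketch with assumed per-disk caps (illustration only, not numbers taken from the Best Practices document):

```python
# Rough illustration of the IO Size vs IOPS/bandwidth trade-off for a single
# NL-SAS disk. The 80 IOPS and 8 MB/s caps are assumed rules of thumb, not
# measured values for this array.
IOPS_CAP = 80.0   # small-block random IOPS per disk (assumption)
MBPS_CAP = 8.0    # per-disk bandwidth ceiling in MB/s (assumption)

for io_kb in (8, 32, 64, 256):
    # The disk can't exceed either its IOPS cap or its bandwidth cap.
    iops = min(IOPS_CAP, MBPS_CAP * 1000.0 / io_kb)
    mbps = iops * io_kb / 1000.0
    print(f"{io_kb:>4} KB IOs: ~{iops:6.1f} IOPS, ~{mbps:4.1f} MB/s per disk")
```

Small IOs hit the IOPS cap first (many IOs, little bandwidth); large IOs hit the bandwidth cap first (fewer IOs, full bandwidth).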

glen

17 Replies
SunshineM
2 Iron

Re: Test CIFS/Pool load capability

Jump to solution

Awaiting a reply from anyone, please.

Peter_EMC
3 Silver

Re: Test CIFS/Pool load capability

Jump to solution

Sorry, I do not understand your calculations.

80 * 16 is not 128

Isn't the bandwidth highly dependent on the size of the IOs?

And shouldn't there be a difference between reads and writes, especially when using RAID-6?

SunshineM
2 Iron

Re: Test CIFS/Pool load capability

Jump to solution

Sorry, it should be 1280 MB/s.

Basically those are the calculations I did based on the IOPS and MB/s each disk can handle, multiplied by the total disks in the RAID group.

But the thing is, I need to generate load on the NAS pool and then capture performance statistics, so I am looking for how to do it.

Is there any such tool available to generate load for test purposes?

Thanks

kelleg
4 Ruthenium

Re: Test CIFS/Pool load capability

Jump to solution

For the NL-SAS disks:

About 80 IOPS per disk, so your first calculation is correct: 80 * 8 = 640 IOPS.

About 8 MB/s per disk: 8 * 8 = 64 MB/s for the RAID group.

Both of these calculations are dependent on the IO size, the IO type (reads or writes), and the pattern (random or sequential).
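
If it helps, here is the same arithmetic as a small Python sketch. The per-disk figures are the rules of thumb above, and the RAID-6 write penalty of 6 in the last step is a common rule of thumb I'm adding for illustration, not something measured on your array:

```python
# Back-of-the-envelope pool capability from the per-disk rules of thumb above.
DISKS = 8            # 6+2 RAID-6 configuration
IOPS_PER_DISK = 80   # small-block random IOPS (rule of thumb)
MBPS_PER_DISK = 8    # MB/s per NL-SAS disk (rule of thumb)

backend_iops = DISKS * IOPS_PER_DISK   # 640 back-end IOPS
pool_mbps = DISKS * MBPS_PER_DISK      # 64 MB/s

# Rough host-visible IOPS for a 70/30 read/write mix, assuming the usual
# RAID-6 write penalty of 6 back-end IOs per host write (an assumption for
# illustration, not something measured on this array).
read_pct, write_penalty = 0.70, 6
host_iops = backend_iops / (read_pct + (1 - read_pct) * write_penalty)

print(backend_iops, pool_mbps, round(host_iops))   # 640 64 256
```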

You might try IOmeter for testing from the host. On the array you need to enable Data Logging. Set the Archive Interval to 60 seconds, enable Periodic Archiving, set Stop After to 1 day, then start Data Logging. While Data Logging is running you must collect at least 10 data points before an archive can be created. That means you must let Data Logging run for a minimum of 11 minutes (each data poll is one minute).

Set your test to run for at least 30 minutes, then stop the test, then stop Data Logging. A new archive will be created when you stop Data Logging and will contain the data points from that point back 2.6 hours, or back to when you started the test.

For more information on using Data Logging, see KB article 12289: https://support.emc.com/kb/12289

Other articles referenced in KB 12289:

9196 "How do I enable Navisphere Analyzer monitoring in FLARE Release 24 and later?"

14039 "How do I use Navisphere or Unisphere UI to collect NAR or NAZ files?"

8677 "How do I automate the collecting of .NAR files?"


glen

SunshineM
2 Iron

Re: Test CIFS/Pool load capability

Jump to solution

OK, I will try IOmeter.

How can I test Throughput (MB/s) and Latency (ms)?

Thanks.

kelleg
4 Ruthenium

Re: Test CIFS/Pool load capability

Jump to solution

The FAQ for IOmeter explains how to run it. If you want to test for throughput (IOPS), use a small block IO Size - say 4KB or 8KB. If you want to test for bandwidth (MB/s), use a larger IO Size - say 32KB or 64KB. Start with 100% Sequential Reads or 100% Sequential Writes, then try 100% Random Reads or 100% Random Writes. Generally 100% Sequential Writes should be the fastest, followed by 100% Random Writes, then 100% Sequential Reads, and last 100% Random Reads.

You can also increase the number of queued IOs to get better performance - I've found that 10 queued IOs seems to provide the best results. You can also set more users (each "User" is a thread), as multiple threads lead to better performance. Try a single User first, then add more to see what you get.
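
A rough way to see why more queued IOs and more workers help: by Little's Law, the IOPS you can drive is roughly the number of outstanding IOs divided by the response time. A minimal sketch, purely illustrative (the array, disks and network set the real ceiling):

```python
# Little's Law: outstanding IOs = IOPS * response time, so
# IOPS ~= outstanding_ios / response_time.
def achievable_iops(outstanding_ios, avg_response_s):
    return outstanding_ios / avg_response_s

print(round(achievable_iops(1, 0.0065)))    # ~154 IOPS with 1 outstanding IO at 6.5 ms
print(round(achievable_iops(10, 0.0065)))   # ~1538 IOPS, if latency stayed flat (it usually won't)
```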

IOmeter will show the response times while it is running - you can set the "update" rate to 1 second for the fastest refresh. IOmeter will just send out IO as fast as it can; it does not throttle based on the response times, only on how fast it can transmit data, so at the fastest rate the latency should be high.

What this will show is the raw performance of the array and the RAID group that you will be using. It will also show the effect of pre-fetching for the 100% Sequential Reads. The Writes will show the effect of the amount of Write cache configured on the array. You want to set the "Maximum Disk Size" (in sectors) to a size that is greater than the size of the Write cache, otherwise the Write tests can run entirely in the Write cache and never really test the underlying disks (the Write cache only flushes to disk when it gets full). If your Write cache is 8GB, then the test file should be at least 2x that size (16GB). In IOmeter you could set the "Maximum Disk Size" to 64,000,000 sectors, which creates roughly a 32GB file on the LUN under test. Not sure how you do this on the File side of the array though. I've only run this on the Block side with a host connected to Block, not the File side.
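
To size that value yourself, divide the target file size by the sector size. A quick sketch, assuming 512-byte sectors:

```python
# Convert a target test-file size (decimal GB) into IOmeter's
# "Maximum Disk Size" value, which is counted in sectors
# (512-byte sectors assumed here).
SECTOR_BYTES = 512

def max_disk_size_sectors(file_size_gb):
    return int(file_size_gb * 1_000_000_000 // SECTOR_BYTES)

print(max_disk_size_sectors(16))   # 31,250,000 sectors for a 16 GB file
print(max_disk_size_sectors(32))   # 62,500,000 sectors, in the same ballpark as the 64,000,000 above
```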

glen

SunshineM
2 Iron

Re: Test CIFS/Pool load capability

Jump to solution

Glen, you are the champ!!!!!!!!!!!!!

You pointed me in the direction I was looking for.

I installed and ran IOmeter with 512KB 100% read and 100% write tests, and also with a 512KB 70/30 read/write ratio.

The results are attached below, but I could not interpret them.

How do I interpret or understand these values? Please help.

IOmeter 1.1.0 results (target: the mapped network drive on the CIFS share; 512KB (524,288-byte) IOs, 100% random, 1 worker, queue depth 1, Maximum Disk Size 1,024,000 sectors):

512K_Write (100% writes), started 2015-06-23 06:22:56
IOPS 152.24 | 79.82 MB/s (76.12 MiB/s) | avg response 6.56 ms | max response 221.09 ms | host CPU ~18.5%

512K_Read (100% reads), started 2015-06-23 06:41:14
IOPS 134.20 | 70.36 MB/s (67.10 MiB/s) | avg response 7.45 ms | max response 1393.26 ms | host CPU ~18.1%

Read_Write_70_30 (70% reads / 30% writes), started 2015-06-23 07:33:32
IOPS 187.82 (131.23 read / 56.59 write) | 98.47 MB/s (93.91 MiB/s) | avg response 5.32 ms (5.07 read / 5.90 write) | max response 1500.36 ms | host CPU ~18.1%

Thanks.

SunshineM
2 Iron

Re: Test CIFS/Pool load capability

Hello Glen,

Awaiting your reply, please.

Thanks.

kelleg
4 Ruthenium

Re: Test CIFS/Pool load capability

The interpretation of the results is pretty much up to you. The values for IOPS (IOs per second) and response times are what you're looking for. IOPS is what the array is providing for the configuration you used in the test (the Pool or RAID group and the LUN). The response time is the average across all the IOs sent to the array - how fast the array is responding to each IO sent to it.
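
If you save the run to IOmeter's CSV results file, you can also pull out just the headline numbers with a short script instead of reading the raw rows. A rough sketch - the file name is hypothetical and the column labels are taken from the output you pasted, so adjust them if your IOmeter version names them differently:

```python
# Pull the headline numbers (IOPS, MB/s, response times) out of an IOmeter
# results CSV by locating the results header row and summarizing each "ALL" row.
import csv

def summarize(path):
    header = None
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if row and row[0].lstrip("'") == "Target Type":
                header = row   # column names for the result rows that follow
            elif header and row and row[0] == "ALL":
                rec = dict(zip(header, row))
                print("{}: {} IOPS, {} MB/s, avg {} ms, max {} ms".format(
                    rec.get("Access Specification Name"),
                    rec.get("IOps"),
                    rec.get("MBps (Decimal)"),
                    rec.get("Average Response Time"),
                    rec.get("Maximum Response Time")))

summarize("results.csv")   # hypothetical file name - point it at your results file
```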

This may help:

Ask the Expert: Performance Calculations on Clariion/VNX

http://storagesavvy.com/2011/03/30/performance-analysis-for-clariion-and-vnx-part-1/

glen
