Intel's data sheet[14] claims 3,300 IOPS for writes and 35,000 IOPS for reads; about 5,000 IOPS are measured for a mixed workload. The Intel X25-E G1 has around 3 times higher IOPS than the Intel X25-M G2.[15]
SandForce-1200 based SSDs with enhanced firmware are stated to reach up to 50,000 IOPS, but benchmarking of this particular drive shows roughly 25,000 IOPS for random reads and roughly 15,000 IOPS for random writes.[16]
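As a rough cross-check of the Intel read/write/mixed figures above, a simple harmonic-weighting model gives a ballpark mixed-workload number. This is only an illustrative sketch with an assumed 50/50 read/write mix, not how the 5,000 IOPS figure was measured.

```python
# Rough sanity check: harmonic weighting of read and write IOPS for a mixed workload.
# Assumptions (not from the source): a 50/50 read/write mix, and that each I/O simply
# costs 1/IOPS seconds of device time.

def mixed_iops(read_iops: float, write_iops: float, read_fraction: float) -> float:
    """Estimate mixed-workload IOPS as a mix-weighted harmonic mean."""
    time_per_io = read_fraction / read_iops + (1.0 - read_fraction) / write_iops
    return 1.0 / time_per_io

# Intel X25-E data-sheet numbers quoted above: 35,000 read IOPS, 3,300 write IOPS.
print(round(mixed_iops(35_000, 3_300, 0.5)))  # ~6030, versus ~5,000 measured for a mix
```

The gap between the modeled ~6,000 IOPS and the measured ~5,000 IOPS is a reminder that simple per-I/O models only approximate real mixed workloads.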
Below is an ESTIMATE of the IOPS of a drive based on its rotation speed:

Disk type    Disk capacity      RPM     IOPS
Flash        100GB, 200GB       N/A     6000
SAS          300GB, 600GB       15k     170-180
SAS          900GB              10k     125
NL-SAS       1TB, 2TB, 3TB      7200    75

Note: The above stats were calculated in a controlled lab environment and are only for providing a general reference. These should in no way be used as a benchmark.
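For the rotating-disk rows, these estimates line up with the classic rule of thumb IOPS ≈ 1 / (average seek time + average rotational latency), which is also quoted later in this thread. Below is a minimal sketch of that arithmetic; the average seek times per drive class are typical assumed values, not figures taken from the table.

```python
# Estimate random IOPS for a rotating disk: IOPS ~= 1 / (avg seek + avg rotational latency).
# Average rotational latency is half a revolution. The seek times below are assumed
# typical values for each drive class, not numbers taken from the table above.

def hdd_iops(rpm: int, avg_seek_ms: float) -> float:
    avg_rotational_ms = 0.5 * 60_000 / rpm      # half a revolution, in milliseconds
    return 1000.0 / (avg_seek_ms + avg_rotational_ms)

for label, rpm, seek_ms in [("15k SAS", 15_000, 3.8),
                            ("10k SAS", 10_000, 5.0),
                            ("7.2k NL-SAS", 7_200, 9.0)]:
    print(f"{label}: ~{hdd_iops(rpm, seek_ms):.0f} IOPS")
```

With those assumed seek times the sketch gives roughly 172, 125, and 76 IOPS, close to the 170-180, 125, and 75 figures in the table; flash does not follow this formula at all, hence the separate 6000 IOPS row.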
User 1:
I ran through the Iometer tests on the vmktree ISO, and below are the results from an NFS share. The first set of results is with the NFS share cached, which is the wrong behavior; the second set is after the MR1 install with the cache turned off. On the Max Throughput-50% Read test it is now taxing the 1 Gb connection. My numbers are not overly scientific, given that there were 20 VMs running on the 3100 as I ran the test, but the performance difference is substantial for those of you fighting with NFS performance.
First run (NFS share cached):

Test name                   Latency (ms)   Avg IOPS   Avg MB/s   CPU load
Max Throughput-100%Read     16.07          3694       115        3%
RealLife-60%Rand-65%Read    128.02         435        3          10%
Max Throughput-50%Read      25.20          2352       73         2%
Random-8k-70%Read           150.09         362        2          10%

Second run (after the MR1 install, cache off):

Test name                   Latency (ms)   Avg IOPS   Avg MB/s   CPU load
Max Throughput-100%Read     16.47          3596       112        4%
RealLife-60%Rand-65%Read    29.68          1882       14         18%
Max Throughput-50%Read      13.16          4505       140        2%
Random-8k-70%Read           30.74          1783       13         28%
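A quick way to sanity-check rows like these is that average throughput should be roughly average IOPS multiplied by the request size. The sketch below assumes the 32 KB and 8 KB request sizes commonly used by these Iometer test profiles; the post itself does not state the block sizes.

```python
# Sanity check: average MB/s should be roughly avg IOPS * request size.
# Request sizes are assumed from the Iometer profiles usually behind these test names
# (32 KB for the Max Throughput runs, 8 KB for the RealLife and Random-8k runs).

def throughput_mb_s(iops: float, block_kb: int) -> float:
    return iops * block_kb / 1024.0   # KB/s -> MB/s

print(f"{throughput_mb_s(3694, 32):.0f} MB/s")  # ~115, matches the cached 100% read row
print(f"{throughput_mb_s(4505, 32):.0f} MB/s")  # ~141, close to the 140 MB/s 50% read row
print(f"{throughput_mb_s(1882, 8):.0f} MB/s")   # ~15, close to the 14 MB/s RealLife row
```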
User 2:
These are my Iometer tests.
VNXe config:
VNXe 3100 (2 SPs) + I/O module with 4 x 1 Gb/s Ethernet ports
R5 4+1, 600 GB 15K SAS
R6 4+2, 1 TB NL-SAS
HS: 1 x 600 GB 15K SAS
Networking:
1 LACP group: eth2 + eth3
1 LACP group: eth10 + eth11 + eth12 + eth13
Tested on a VMFS volume:
1 x 500 GB iSCSI VMFS on the 600 GB 15K disks
Iometer run on a Windows 2003 32-bit VM with 2 GB RAM and an X5660 CPU
DELL-Leo
Community Manager
7.1K messages
December 12, 2013, 17:00
Hey Lin,
As far as I know, there is currently no tool on the VNXe for analyzing IOPS performance.
At the moment, the Unisphere -> System -> System Performance page only shows activity for the CPU, network, volumes, and so on.
In practice, IOPS figures are usually obtained by testing under your actual conditions, because the results vary with disk type (SAS, NL-SAS, SSD or flash), disk capacity, RAID type, and so on. In other words, real-world measurement is king.
I found an online IOPS calculator here that you can try against your actual environment.
The site is quite good: it supports the common RAID levels and many drive models, including SSDs. You may want to take a look.
http://www.wmarow.com/strcalc/
Besides the calculator above, the most commonly used IOPS benchmark tools are Iometer, IOzone, and similar utilities, which can measure disk IOPS under a variety of conditions. The disk IOPS figures below come from http://en.wikipedia.org/wiki/IOPS and should give you a basic reference.
Examples
Some commonly accepted averages for random IO operations, calculated as 1/(seek + latency) = IOPS:
Solid State Devices
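To illustrate the earlier point that RAID type changes the achievable IOPS (the kind of arithmetic an online calculator like the one linked above performs), here is a minimal sketch. The per-disk IOPS value and the write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) are commonly quoted rules of thumb assumed for illustration, not numbers taken from this thread or from that calculator.

```python
# Rough front-end IOPS estimate for a RAID group, using the common rule of thumb that
# every front-end write costs `penalty` back-end I/Os (parity read-modify-write).
# Per-disk IOPS and the penalty values are generic assumptions, not from this thread.

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def front_end_iops(disks: int, iops_per_disk: float, read_fraction: float, raid: str) -> float:
    backend_capacity = disks * iops_per_disk
    # Back-end I/Os consumed per front-end I/O: reads pass through, writes are amplified.
    backend_per_frontend = read_fraction + (1.0 - read_fraction) * WRITE_PENALTY[raid]
    return backend_capacity / backend_per_frontend

# Example: a 4+1 RAID 5 group of 15K SAS disks (like the one described later in this
# thread), assuming ~175 IOPS per disk and a 70% read workload.
print(round(front_end_iops(5, 175, 0.7, "RAID5")))  # ~460 front-end IOPS
```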
Ato_lin
14 messages
December 12, 2013, 17:00
Got it!!
I could not find any related method on support.emc or Powerlink myself either.
For now it seems I can only see CPU, network, and volume R/W under the Unisphere VNXe system performance page.
Thanks, Leo
DELL-Leo
Community Manager
7.1K messages
December 12, 2013, 18:00
You're welcome.
I also found some IOPS-related figures for the VNXe that you can use as a reference: the estimated-IOPS-by-rotation-speed table quoted above. As the note under that table says, those stats were calculated in a controlled lab environment, are only a general reference, and should in no way be used as a benchmark.
DELL-Leo
Community Manager
7.1K messages
December 12, 2013, 20:00
Hey Lin,
I found some results from users who tested VNXe performance with IOMeter in their actual environments; these may be even more helpful to you. For the details, see the thread below.
IO Meter Performance Stats?
User 1:
I ran through the Iometer tests on the vmktree ISO, and below are the results from an NFS share. The first set of results is with the NFS share cached, which is the wrong behavior; the second set is after the MR1 install with the cache turned off. On the Max Throughput-50% Read test it is now taxing the 1 Gb connection. My numbers are not overly scientific, given that there were 20 VMs running on the 3100 as I ran the test, but the performance difference is substantial for those of you fighting with NFS performance.
User 2:
These are my Iometer tests.
VNXe config:
VNXe 3100 (2 SPs) + I/O module with 4 x 1 Gb/s Ethernet ports
R5 4+1, 600 GB 15K SAS
R6 4+2, 1 TB NL-SAS
HS: 1 x 600 GB 15K SAS
Networking:
1 LACP group: eth2 + eth3
1 LACP group: eth10 + eth11 + eth12 + eth13
Tested on a VMFS volume:
1 x 500 GB iSCSI VMFS on the 600 GB 15K disks
Iometer run on a Windows 2003 32-bit VM with 2 GB RAM and an X5660 CPU
Test: 100% read, 100% sequential, 32K
Result: 3450 I/Os, 19 ms response, 107 MB/s throughput

Test: 65% read, 40% sequential, 8K
Result: 1100 I/Os, 50 ms response, 9 MB/s throughput
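For anyone who wants a quick scriptable cross-check of numbers like these without setting up Iometer, below is a minimal random-read probe in Python. It is only a rough sketch under stated assumptions: the test file path, the 8 KB request size, and the 10-second duration are arbitrary choices, it issues a single outstanding I/O at a time, and it does not bypass the OS file cache, so results can look inflated, much like the cached NFS run quoted earlier.

```python
# Minimal random-read IOPS probe (a rough sketch, not an Iometer replacement).
# Reads random 8 KB blocks from an existing test file for a fixed duration and reports
# IOPS and MB/s. The file cache is NOT bypassed, so use a file much larger than RAM.
import random
import time

TEST_FILE = "testfile.bin"   # pre-created test file on the datastore (hypothetical path)
BLOCK = 8 * 1024             # 8 KB, matching the Random-8k style tests above
DURATION = 10.0              # seconds

with open(TEST_FILE, "rb", buffering=0) as f:
    f.seek(0, 2)                           # seek to the end to learn the file size
    blocks = f.tell() // BLOCK
    ios = 0
    start = time.monotonic()
    while time.monotonic() - start < DURATION:
        f.seek(random.randrange(blocks) * BLOCK)
        f.read(BLOCK)                      # one random 8 KB read, single outstanding I/O
        ios += 1
    elapsed = time.monotonic() - start

print(f"{ios / elapsed:.0f} IOPS, {ios * BLOCK / elapsed / 1024 / 1024:.1f} MB/s")
```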