
December 12, 2013 08:00

VNXe performance: IOPS?

Hi all,

Sorry to bother everyone with another question!

Regarding VNXe performance: is there any tool for analyzing metrics such as IOPS? How do I get more detailed reports, collect logs, and so on? Something along the lines of VNX Analyzer.

Thanks in advance!

Community Manager

 • 

6.1K messages

December 12, 2013 17:00

Hey Lin,

As far as I know, there is no tool for analyzing IOPS performance on the VNXe yet.

Currently, the Unisphere -> System -> System Performance page only shows activity for CPU, network, volumes, and so on.

In practice, IOPS figures are usually obtained by testing in your actual environment, because the results vary with disk type (SAS, NL-SAS, SSD/flash, etc.), disk capacity, RAID type, and more. In short, real measurements are king.

I did find an online IOPS calculator that you can try against your actual configuration. It is quite good: it supports the common RAID levels and many drive models, including SSDs. Worth a look:

http://www.wmarow.com/strcalc/

Besides the calculator above, the most commonly used IOPS benchmark tools are Iometer, IOzone, and the like, which can measure disk IOPS under a variety of workloads. The disk IOPS figures below come from http://en.wikipedia.org/wiki/IOPS, as a basic reference.
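As a rough illustration of what an Iometer-style random-read test does under the hood, here is a toy sketch in Python. All sizes and the duration are arbitrary choices, and a small scratch file like this will mostly hit the OS page cache, so the number it prints vastly overstates real disk IOPS; for meaningful results use Iometer or IOzone as suggested above.

```python
import os
import random
import tempfile
import time

BLOCK_SIZE = 4096              # 4 KB, the block size most IOPS figures assume
FILE_SIZE = 16 * 1024 * 1024   # 16 MB scratch file (small: results reflect cache, not disk)
DURATION = 1.0                 # seconds to run the test

# Create a scratch file filled with random data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

# Issue random 4 KB reads until the deadline and count completions.
ops = 0
deadline = time.monotonic() + DURATION
with open(path, "rb") as f:
    while time.monotonic() < deadline:
        f.seek(random.randrange(0, FILE_SIZE - BLOCK_SIZE))
        f.read(BLOCK_SIZE)
        ops += 1

print(f"~{ops / DURATION:.0f} random-read ops/s at {BLOCK_SIZE} B blocks (cached!)")

os.remove(path)
```

Real benchmark tools additionally bypass the cache (direct I/O), control queue depth, and mix reads/writes per an access specification.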

Examples

Some commonly accepted averages for random I/O operations, calculated as 1/(seek + latency) = IOPS:

| Device | Type | IOPS | Interface |
|---|---|---|---|
| 7,200 rpm SATA drives | HDD | ~75-100 IOPS[2] | SATA 3 Gbit/s |
| 10,000 rpm SATA drives | HDD | ~125-150 IOPS[2] | SATA 3 Gbit/s |
| 10,000 rpm SAS drives | HDD | ~140 IOPS[2] | SAS |
| 15,000 rpm SAS drives | HDD | ~175-210 IOPS[2] | SAS |

Solid State Devices

| Device | Type | IOPS | Interface | Notes |
|---|---|---|---|---|
| Simple SLC SSD | SSD | ~400 IOPS[citation needed] | SATA 3 Gbit/s | |
| Intel X25-M G2 (MLC) | SSD | ~8,600 IOPS[11] | SATA 3 Gbit/s | Intel's data sheet[12] claims 6,600/8,600 IOPS (80 GB/160 GB version) and 35,000 IOPS for random 4 KB writes and reads, respectively. |
| Intel X25-E (SLC) | SSD | ~5,000 IOPS[13] | SATA 3 Gbit/s | Intel's data sheet[14] claims 3,300 IOPS and 35,000 IOPS for writes and reads, respectively. 5,000 IOPS are measured for a mix. Intel X25-E G1 has around 3 times higher IOPS compared to the Intel X25-M G2.[15] |
| G.Skill Phoenix Pro | SSD | ~20,000 IOPS[16] | SATA 3 Gbit/s | SandForce-1200 based SSD drives with enhanced firmware state up to 50,000 IOPS, but benchmarking shows for this particular drive ~25,000 IOPS for random read and ~15,000 IOPS for random write.[16] |
| OCZ Vertex 3 | SSD | Up to 60,000 IOPS[17] | SATA 6 Gbit/s | Random Write 4 KB (Aligned) |
| Corsair Force Series GT | SSD | Up to 85,000 IOPS[18] | SATA 6 Gbit/s | 240 GB drive, 555 MB/s sequential read & 525 MB/s sequential write, Random Write 4 KB Test (Aligned) |
| OCZ Vertex 4 | SSD | Up to 120,000 IOPS[19] | SATA 6 Gbit/s | 256 GB drive, 560 MB/s sequential read & 510 MB/s sequential write, Random Read 4 KB Test 90K IOPS, Random Write 4 KB Test 85K IOPS |
| Texas Memory Systems RamSan-20 | SSD | 120,000+ Random Read/Write IOPS[20] | PCIe | Includes RAM cache |
| Fusion-io ioDrive | SSD | 140,000 Read IOPS, 135,000 Write IOPS[21] | PCIe | |
| Virident Systems tachIOn | SSD | 320,000 sustained Read IOPS and 200,000 sustained Write IOPS using 4 KB blocks[22] | PCIe | |
| OCZ RevoDrive 3 X2 | SSD | 200,000 Random Write 4K IOPS[23] | PCIe | |
| Fusion-io ioDrive Duo | SSD | 250,000+ IOPS[24] | PCIe | |
| Violin Memory Violin 3200 | SSD | 250,000+ Random Read/Write IOPS[25] | PCIe / FC / InfiniBand / iSCSI | Flash Memory Array |
| WHIPTAIL ACCELA | SSD | 250,000/200,000+ Write/Read IOPS[26] | Fibre Channel, iSCSI, InfiniBand/SRP, NFS, CIFS | Flash Based Storage Array |
| DDRdrive X1 | SSD | 300,000+ (512 B Random Read IOPS) and 200,000+ (512 B Random Write IOPS)[27][28][29][30] | PCIe | |
| SolidFire SF3010/SF6010 | SSD | 250,000 4 KB Read/Write IOPS[31] | iSCSI | Flash Based Storage Array (5RU) |
| Texas Memory Systems RamSan-720 Appliance | SSD | 500,000 Optimal Read, 250,000 Optimal Write 4 KB IOPS[32] | FC / InfiniBand | |
| OCZ Single SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 500,000 IOPS[33] | PCIe | |
| WHIPTAIL INVICTA | SSD | 650,000/550,000+ Read/Write IOPS[34] | Fibre Channel, iSCSI, InfiniBand/SRP, NFS | Flash Based Storage Array |
| Violin Memory Violin 6000 | 3RU Flash Memory Array | 1,000,000+ Random Read/Write IOPS[35] | FC / InfiniBand / 10Gb (iSCSI) / PCIe | |
| Texas Memory Systems RamSan-630 Appliance | SSD | 1,000,000+ 4 KB Random Read/Write IOPS[36] | FC / InfiniBand | |
| Fusion-io ioDrive Octal (single PCI Express card) | SSD | 1,180,000+ Random Read/Write IOPS[37] | PCIe | |
| OCZ 2x SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 1,200,000 IOPS[33] | PCIe | |
| Texas Memory Systems RamSan-70 | SSD | 1,200,000 Random Read/Write IOPS[38] | PCIe | Includes RAM cache |
| Kaminario K2 | Flash/DRAM/Hybrid SSD | Up to 1,200,000 SPC-1 IOPS with the K2-D (DRAM)[39][40] | FC | |
| Fusion-io ioDrive2 | SSD | Up to 9,608,000 IOPS[41] | PCIe | |
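The 1/(seek + latency) formula quoted above is easy to check directly. The average seek times used below are typical published figures assumed for illustration, not measurements:

```python
# Theoretical random IOPS of a rotating disk:
#   IOPS = 1 / (avg seek time + avg rotational latency)
# Average rotational latency is half a revolution: 0.5 * 60,000 / rpm, in ms.

def disk_iops(avg_seek_ms: float, rpm: int) -> float:
    rotational_latency_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# 7,200 rpm SATA with an assumed ~9 ms average seek: latency ≈ 4.17 ms
print(round(disk_iops(9.0, 7200)))    # 76, inside the ~75-100 range above

# 15,000 rpm SAS with an assumed ~3.5 ms average seek: latency = 2 ms
print(round(disk_iops(3.5, 15000)))   # 182, inside the ~175-210 range above
```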

14 messages

December 12, 2013 17:00

Got it!!

I couldn't find anything related on support.emc or powerlink either.

For now it seems we can only see CPU, network, and volume R/W under System Performance in Unisphere for VNXe.

Thanks, Leo

Community Manager

 • 

6.1K messages

December 12, 2013 18:00

You're welcome.

I also found some IOPS-related figures for the VNXe that you can use as a reference. As the note below says, they are for reference only.

Below is an ESTIMATE of IOPS of a drive based on their rotation speed:

| Disk Type | Disk Capacity | RPM | IOPS |
|---|---|---|---|
| Flash | 100GB, 200GB | N/A | 6000 |
| SAS | 300GB, 600GB | 15k | 170-180 |
| SAS | 900GB | 10k | 125 |
| NL-SAS | 1TB, 2TB, 3TB | 7200 | 75 |

Note: The above stats were calculated in a controlled lab environment and are only for providing a general reference. These should in no way be used as a benchmark.


Community Manager

 • 

6.1K messages

December 12, 2013 20:00

Hey Lin,

I found some results from users who tested VNXe performance with Iometer in their real environments, which may help you even more. For details, see the thread below.

IO Meter Performance Stats?

User 1:

I ran through the Iometer tests on the vmktree ISO, and below are the results from an NFS share. The first set is with the NFS share cached, which is the wrong behavior; the second set is after the MR1 install with the cache turned off. On the Max Throughput-50%Read test it now saturates the 1 Gb connection. My numbers are not overly scientific, given that 20 VMs were running on the 3100 while I ran the test, but the performance difference is substantial for those of you fighting with NFS performance.

| Test name | Latency | Avg IOPS | Avg MBps | CPU load |
|---|---|---|---|---|
| Max Throughput-100%Read | 16.07 | 3694 | 115 | 3% |
| RealLife-60%Rand-65%Read | 128.02 | 435 | 3 | 10% |
| Max Throughput-50%Read | 25.20 | 2352 | 73 | 2% |
| Random-8k-70%Read | 150.09 | 362 | 2 | 10% |
| Max Throughput-100%Read | 16.47 | 3596 | 112 | 4% |
| RealLife-60%Rand-65%Read | 29.68 | 1882 | 14 | 18% |
| Max Throughput-50%Read | 13.16 | 4505 | 140 | 2% |
| Random-8k-70%Read | 30.74 | 1783 | 13 | 28% |

User 2:

These are my Iometer tests:

VNXe config

VNXe 3100 (2 SPs) + I/O module with 4 × 1 Gb/s Ethernet ports

R5 4+1 SAS 15K 600GB

R6 4+2 NL-SAS 1TB

HS 1 SAS 15K 600GB

networking

1 lacp eth2 + eth3

1 lacp eth10 + eth11 + eth12 + eth13

Tested on  VMFS volume

1 iSCSI  500GB VMFS on 600GB 15K

Iometer test on a Windows 2003 32-bit VM with 2 GB RAM and an X5660 CPU

Test: 100% read, 100% sequential, 32K
Result: I/Os 3450, response 19 ms, throughput 107 MB/s

Test: 65% read, 40% sequential, 8K
Result: I/Os 1100, response 50 ms, throughput 9 MB/s
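As a quick sanity check on results like these, throughput should roughly equal IOPS × block size:

```python
# Throughput implied by an IOPS figure at a given block size, in MiB/s.
def throughput_mbps(iops: int, block_kb: int) -> float:
    return iops * block_kb / 1024

# 3450 I/Os at 32 KB → ~108 MB/s, matching the reported 107 MB/s
print(round(throughput_mbps(3450, 32)))    # 108
# 1100 I/Os at 8 KB → ~8.6 MB/s, close to the reported 9 MB/s
print(round(throughput_mbps(1100, 8), 1))  # 8.6
```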
