2 Bronze

VNXe Performance IOPS?


Hi everyone,

Sorry to bother you all with another question!

For the VNXe, is there any tool for performance analysis, such as an IOPS analysis tool?

And how should I collect logs to get more detailed reports?

Something like the VNX Analyzer software.

Thanks in advance!

1 Solution

Accepted Solution
MOD

Re: VNXe Performance IOPS?


Hey Lin,

As far as I know, there is currently no tool on the VNXe for analyzing IOPS performance.

At present, the Unisphere -> System -> System Performance page only shows activity for CPU, Network, and Volume.

In practice, IOPS figures are usually obtained by testing in your actual environment, because the results vary with disk type (SAS, NL-SAS, SSD/flash), disk capacity, RAID type, and so on. In short, real-world measurement is king.

I did find an online IOPS calculator that you can try against your actual environment.

It is quite good: it supports the common RAID levels and a variety of drive models, including SSDs. You may find it a useful reference.

http://www.wmarow.com/strcalc/

Besides that, the most commonly used IOPS benchmark tools are Iometer, IOzone, and the like; they can be used to test disk IOPS under a variety of workloads. The disk IOPS figures below come from http://en.wikipedia.org/wiki/IOPS and are a basic reference.
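Iometer or IOzone is what you would use for a proper run, but as an illustration of what such a random-read test does, here is a minimal Python sketch (POSIX-only). The test file path, block size, and duration are assumptions for your environment, and without O_DIRECT the OS page cache will inflate the numbers, so treat it as a rough probe rather than a benchmark.

import os
import random
import time

# Minimal random-read IOPS probe (POSIX-only; not a substitute for Iometer/IOzone).
# TEST_FILE is a hypothetical path to a large, pre-created file on the volume
# under test; adjust it for your environment. Results include OS page-cache
# effects unless the file is opened with O_DIRECT.
TEST_FILE = "/mnt/vnxe_lun/testfile.bin"
BLOCK_SIZE = 4096      # 4 KB random reads
DURATION = 10          # seconds

def random_read_iops(path, block_size, duration):
    size = os.path.getsize(path)
    blocks = size // block_size
    fd = os.open(path, os.O_RDONLY)
    ops = 0
    deadline = time.time() + duration
    try:
        while time.time() < deadline:
            offset = random.randrange(blocks) * block_size
            os.pread(fd, block_size, offset)   # one random read
            ops += 1
    finally:
        os.close(fd)
    return ops / duration

if __name__ == "__main__":
    print("Random %d-byte read IOPS: %.0f"
          % (BLOCK_SIZE, random_read_iops(TEST_FILE, BLOCK_SIZE, DURATION)))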

Examples

Some commonly accepted averages for random IO operations, calculated as 1/(seek + latency) = IOPS:

Device | Type | IOPS | Interface
7,200 rpm SATA drives | HDD | ~75-100 IOPS[2] | SATA 3 Gbit/s
10,000 rpm SATA drives | HDD | ~125-150 IOPS[2] | SATA 3 Gbit/s
10,000 rpm SAS drives | HDD | ~140 IOPS[2] | SAS
15,000 rpm SAS drives | HDD | ~175-210 IOPS[2] | SAS
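
To see where these rotational-drive figures come from, here is the 1/(seek + latency) rule of thumb worked out in Python. The average seek times used are typical vendor figures (assumptions, not measurements); rotational latency is half a revolution.

def hdd_iops(avg_seek_ms, rpm):
    # Average rotational latency = time for half a revolution, in ms.
    rotational_latency_ms = 0.5 * 60000.0 / rpm
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

print(round(hdd_iops(avg_seek_ms=8.5, rpm=7200)))    # ~79, within the ~75-100 range above
print(round(hdd_iops(avg_seek_ms=3.5, rpm=15000)))   # ~182, within the ~175-210 range above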

Solid State Devices

Device | Type | IOPS | Interface | Notes
Simple SLC SSD | SSD | ~400 IOPS[citation needed] | SATA 3 Gbit/s |
Intel X25-M G2 (MLC) | SSD | ~8,600 IOPS[11] | SATA 3 Gbit/s | Intel's data sheet[12] claims 6,600/8,600 IOPS (80 GB/160 GB version) and 35,000 IOPS for random 4 KB writes and reads, respectively.
Intel X25-E (SLC) | SSD | ~5,000 IOPS[13] | SATA 3 Gbit/s | Intel's data sheet[14] claims 3,300 IOPS and 35,000 IOPS for writes and reads, respectively. 5,000 IOPS are measured for a mix. Intel X25-E G1 has around 3 times higher IOPS compared to the Intel X25-M G2.[15]
G.Skill Phoenix Pro | SSD | ~20,000 IOPS[16] | SATA 3 Gbit/s | SandForce-1200 based SSD drives with enhanced firmware, states up to 50,000 IOPS, but benchmarking shows for this particular drive ~25,000 IOPS for random read and ~15,000 IOPS for random write.[16]
OCZ Vertex 3 | SSD | Up to 60,000 IOPS[17] | SATA 6 Gbit/s | Random Write 4 KB (Aligned)
Corsair Force Series GT | SSD | Up to 85,000 IOPS[18] | SATA 6 Gbit/s | 240 GB drive, 555 MB/s sequential read & 525 MB/s sequential write, Random Write 4 KB Test (Aligned)
OCZ Vertex 4 | SSD | Up to 120,000 IOPS[19] | SATA 6 Gbit/s | 256 GB drive, 560 MB/s sequential read & 510 MB/s sequential write, Random Read 4 KB Test 90K IOPS, Random Write 4 KB Test 85K IOPS
Texas Memory Systems RamSan-20 | SSD | 120,000+ Random Read/Write IOPS[20] | PCIe | Includes RAM cache
Fusion-io ioDrive | SSD | 140,000 Read IOPS, 135,000 Write IOPS[21] | PCIe |
Virident Systems tachIOn | SSD | 320,000 sustained READ IOPS using 4KB blocks and 200,000 sustained WRITE IOPS using 4KB blocks[22] | PCIe |
OCZ RevoDrive 3 X2 | SSD | 200,000 Random Write 4K IOPS[23] | PCIe |
Fusion-io ioDrive Duo | SSD | 250,000+ IOPS[24] | PCIe |
Violin Memory Violin 3200 | SSD | 250,000+ Random Read/Write IOPS[25] | PCIe / FC / Infiniband / iSCSI | Flash Memory Array
WHIPTAIL, ACCELA | SSD | 250,000/200,000+ Write/Read IOPS[26] | Fibre Channel, iSCSI, Infiniband/SRP, NFS, CIFS | Flash Based Storage Array
DDRdrive X1 | SSD | 300,000+ (512B Random Read IOPS) and 200,000+ (512B Random Write IOPS)[27][28][29][30] | PCIe |
SolidFire SF3010/SF6010 | SSD | 250,000 4KB Read/Write IOPS[31] | iSCSI | Flash Based Storage Array (5RU)
Texas Memory Systems RamSan-720 Appliance | SSD | 500,000 Optimal Read, 250,000 Optimal Write 4KB IOPS[32] | FC / InfiniBand |
OCZ Single SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 500,000 IOPS[33] | PCIe |
WHIPTAIL, INVICTA | SSD | 650,000/550,000+ Read/Write IOPS[34] | Fibre Channel, iSCSI, Infiniband/SRP, NFS | Flash Based Storage Array
Violin Memory Violin 6000 | 3RU Flash Memory Array | 1,000,000+ Random Read/Write IOPS[35] | FC / Infiniband / 10Gb (iSCSI) / PCIe |
Texas Memory Systems RamSan-630 Appliance | SSD | 1,000,000+ 4KB Random Read/Write IOPS[36] | FC / InfiniBand |
Fusion-io ioDrive Octal (single PCI Express card) | SSD | 1,180,000+ Random Read/Write IOPS[37] | PCIe |
OCZ 2x SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 1,200,000 IOPS[33] | PCIe |
Texas Memory Systems RamSan-70 | SSD | 1,200,000 Random Read/Write IOPS[38] | PCIe | Includes RAM cache
Kaminario K2 | Flash/DRAM/Hybrid SSD | Up to 1,200,000 SPC-1 IOPS with the K2-D (DRAM)[39][40] | FC |
Fusion-io ioDrive2 | SSD | Up to 9,608,000 IOPS[41] | PCIe |

4 Replies
2 Bronze

Re: VNXe Performance IOPS?


Got it!!

I couldn't find anything on this on support.emc or Powerlink myself either.

For now it seems I can only see CPU, network, and volume R/W under the VNXe's Unisphere System Performance page.

Thanks, Leo

MOD

Re: VNXe Performance IOPS?


You're welcome.

I also found some IOPS-related data for the VNXe that you can use as a reference. As the note below says, these numbers are estimates only.

Below is an ESTIMATE of a drive's IOPS based on its rotation speed:

Disk Type | Disk Capacity | RPM | IOPS
Flash | 100GB, 200GB | N/A | 6000
SAS | 300GB, 600GB | 15k | 170-180
SAS | 900GB | 10k | 125
NL-SAS | 1TB, 2TB, 3TB | 7200 | 75

Note: The above figures were measured in a controlled lab environment and are provided only as a general reference. They should in no way be used as a benchmark.
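
To turn these per-drive figures into a rough pool-level estimate (the kind of calculation the wmarow calculator mentioned earlier automates), you can apply the standard RAID write penalties (RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6). A minimal sketch; the 70/30 read/write mix is an assumption you would replace with your own workload profile:

def pool_frontend_iops(drive_iops, drive_count, write_penalty, read_ratio):
    # Back-end capability of the RAID group.
    backend_iops = drive_iops * drive_count
    # Each front-end write costs `write_penalty` back-end I/Os.
    return backend_iops / (read_ratio + (1 - read_ratio) * write_penalty)

# Example: 4+1 RAID 5 of 15k SAS (~175 IOPS per drive), 70% read workload.
print(round(pool_frontend_iops(175, 5, write_penalty=4, read_ratio=0.7)))   # about 460 front-end IOPS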

MOD

Re: VNXe Performance IOPS?


Hey Lin,

I found some results from users who tested VNXe performance with Iometer in their actual environments; these may be even more helpful to you. For the details, see the following thread.

IO Meter Performance Stats?

User 1:

I ran through the Iometer tests on the vmktree ISO, and below are the results from an NFS share. The first set of results is with the NFS share cached, which is the wrong behavior; the second set was taken after the MR1 install with the cache turned off. On the Max Throughput-50% Read test it is now taxing the 1 Gb connection. My numbers are not overly scientific, given that there were 20 VMs running on the 3100 while I ran the test, but the performance difference is substantial for those of you fighting with NFS performance.

Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load

With NFS share cached:
Max Throughput-100%Read | 16.07 | 3694 | 115 | 3%
RealLife-60%Rand-65%Read | 128.02 | 435 | 3 | 10%
Max Throughput-50%Read | 25.20 | 2352 | 73 | 2%
Random-8k-70%Read | 150.09 | 362 | 2 | 10%

After MR1, cache off:
Max Throughput-100%Read | 16.47 | 3596 | 112 | 4%
RealLife-60%Rand-65%Read | 29.68 | 1882 | 14 | 18%
Max Throughput-50%Read | 13.16 | 4505 | 140 | 2%
Random-8k-70%Read | 30.74 | 1783 | 13 | 28%
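
One way to sanity-check a table like this is that throughput should roughly equal IOPS times the I/O size: the Max Throughput rows above work out to ~32 KB I/Os and the RealLife/Random-8k rows to 8 KB (the block sizes are inferred from the numbers, not stated in the post).

def mbps(iops, block_kib):
    # Throughput in MB/s for a given IOPS rate and I/O size in KiB.
    return iops * block_kib / 1024.0

print(round(mbps(3596, 32)))   # ~112, matches the Max Throughput-100%Read row after MR1
print(round(mbps(1882, 8)))    # ~15, close to the 14 MBps RealLife row after MR1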

User 2:

These are my Iometer test results.

VNXe config

vnxe 3100 (2 SP) + i/o module 4 eth 1Gb/s

R5 4+1 SAS 15K 600GB

R6 4+2 NL-SAS 1TB

HS 1 SAS 15K 600GB

networking

1 lacp eth2 + eth3

1 lacp eth10 + eth11 + eth12 + eth13

Tested on VMFS volume

1 iSCSI 500GB VMFS on 600GB 15K

Iometer test on a Windows 2003 32-bit VM with 2 GB RAM on an X5660 CPU

Test: 100% read, 100% sequential, 32K
I/Os: 3450
Response: 19 ms
Throughput: 107 MB/s

Test: 65% read, 40% sequential, 8K
I/Os: 1100
Response: 50 ms
Throughput: 9 MB/s
