
May 11th, 2015 22:00

[VNX 5400] Unused disks in a storage pool

Hi there,

My company purchased an EMC VNX 5400 array two months ago. We created a storage pool with deduplicated LUNs.

After migrating data to this new pool, we are facing performance issues. I analyzed performance with the VNX monitoring tool (and also Unisphere Analyzer) and noticed that:

- 3 out of 5 Flash drives (Extreme Performance) are seen as "unused" (0% utilization and 0 IOPS)

- 4 out of 5 SAS drives (Performance) are displayed as "unused"

- 8 NL-SAS drives (Capacity) are all intensively used

We then added 5 new SAS drives to the pool, and activity spread out across the added disks. The load on the NL-SAS drives is decreasing ... however, the initially "unused" drives still remain "unused".

I contacted EMC support a few weeks ago; they asked us for SP collects and NAR files but nothing more, and the service request is still open ...

I just want to know if any of you have already faced this issue and have an idea about this abnormal behavior. To avoid further performance issues, we have suspended the data migration until a reliable solution is found.

For now we don't have enough spare drives to create a new pool and migrate the data from the current pool to the new one; that would have let us check the "failed" drives.

Thanks for your help.

VNX_5400_Issue_20150512.png


May 18th, 2015 07:00

Your question is whether a pool using the drives below can handle more than 6000 IOPS. The answer depends on many factors: IO size, read/write ratio, sequential or random access. This configuration might handle more than 6000 IOPS, but I'm not in a position to say that it will or won't. The local EMC team has a tool they use to size a configuration based on the workload, but I don't have access to that.

2 SSD (1+1 R1)

6 SAS (5+1 R5)

8 NL-SAS (6+2)

In general we have what are called "Rule of Thumb" performance numbers for different drive types, and you can use those to get a rough estimate of what a particular configuration can handle. For example, a 15K RPM SAS drive can handle about 180 IOPS when the workload is small-block (IO size less than 32KB), random IO (a mix of reads and writes). If you have a 4+1 R5 configuration of 15K SAS disks, multiply the number of disks in the raid group by 180 IOPS (5 * 180) and you get about 900 IOPS for that 4+1 R5. If the IO size is larger, the number of IOPS per drive will decrease.

For NL-SAS we use 90 IOPS as the Rule of Thumb, so a 6+2 R6 would handle about 8 * 90 = 720 IOPS. On top of that, each RAID level's parity calculation adds write overhead.

For R5 the formula is:

Drive IOPS = Read IOPS + (4 * Write IOPS)

For R6:

Drive IOPS = Read IOPS + (6 * Write IOPS)
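The rule-of-thumb math above can be sketched in Python. Note this is just an illustration of the post's arithmetic: the helper names are my own, and the SSD figure is an assumption (the post only gives numbers for 15K SAS and NL-SAS).

```python
import math

# Rule-of-thumb per-drive IOPS for small-block random IO.
# "ssd" is an assumed figure, NOT from this thread.
ROT_DRIVE_IOPS = {"ssd": 3500, "sas15k": 180, "nlsas": 90}

# Back-end I/Os generated per host write (RAID write penalty).
RAID_WRITE_PENALTY = {"r1": 2, "r5": 4, "r6": 6}

def backend_iops(host_iops, read_ratio, raid):
    """Drive IOPS = Read IOPS + (penalty * Write IOPS)."""
    reads = host_iops * read_ratio
    writes = host_iops - reads
    return reads + RAID_WRITE_PENALTY[raid] * writes

def drives_needed(host_iops, read_ratio, raid, drive):
    """Drives whose rule-of-thumb IOPS cover the back-end load."""
    return math.ceil(backend_iops(host_iops, read_ratio, raid) / ROT_DRIVE_IOPS[drive])

# The post's example: a 4+1 R5 of 15K SAS handles about 5 * 180 back-end IOPS.
print(5 * ROT_DRIVE_IOPS["sas15k"])              # 900
# A hypothetical 1000 host IOPS at 70% reads on R5: 700 + 4 * 300 back-end IOPS.
print(backend_iops(1000, 0.7, "r5"))             # 1900.0
print(drives_needed(1000, 0.7, "r5", "sas15k"))  # 11
```

The same workload on R6 would need more spindles, since each host write costs 6 back-end I/Os instead of 4.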


I've attached a performance white paper that is useful for determining the number of disks needed for a particular workload. See Chapter 5 (page 95).



May 27th, 2015 05:00

Did you get this sorted?

May 28th, 2015 22:00

Hi all,

Thank you Glen for the information about disk performance.

The SR has been escalated; it turns out this is a known issue that is still under investigation by Engineering SMEs:

"This is only a reporting issue, the lack of stats does not mean that the drives are not being used. Drive usage may be estimated from other drives in the same pool and tier which are reporting properly."

We got new SSD drives and created another, identical pool (with deduplication disabled), and now all the drives are reporting activity. Tomorrow I will manually launch the auto-tiering process (which doesn't perform well on the old pool). We won't use the deduplication feature anymore because of its performance impact and various issues.


May 28th, 2015 23:00

Thanks for the update.

The VNX2 implementation of block dedup is certainly raising eyebrows, and I won't be enabling it on anything mission-critical for some time. Disappointing, really; it should be a non-event.
