
May 17th, 2018 07:00

Storage Latency - Shifting Bottleneck

Enterprise SSDs are delivering around 500+ MB/s, NVMe SSDs are roughly 10x faster, and 3D XPoint is claimed to be up to 1000x faster than NAND.

Originally, SANs were designed with hard disk drives in mind, which meant the bottleneck was the spinning disks. With SSDs becoming exponentially faster, where does the bottleneck shift to? The controller's CPU?
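As a rough back-of-envelope (the throughput figures below are illustrative assumptions, not vendor specs), it only takes a handful of flash devices to saturate a single front-end port, which is why attention shifts to the controller and the fabric:

# Rough sketch: how quickly flash saturates a front-end link.
# All figures are assumptions for illustration only.

SATA_SSD_MBPS = 550      # typical enterprise SATA SSD throughput
NVME_SSD_MBPS = 5500     # roughly 10x, per the figures above
FC16_LINK_MBPS = 1600    # approx. usable bandwidth of a 16 Gb/s FC port

for name, dev_mbps in (("SATA SSD", SATA_SSD_MBPS), ("NVMe SSD", NVME_SSD_MBPS)):
    drives_to_saturate = FC16_LINK_MBPS / dev_mbps
    print(f"{name}: ~{drives_to_saturate:.1f} drive(s) to saturate one 16GFC port")

# Output (approx.):
#   SATA SSD: ~2.9 drive(s) to saturate one 16GFC port
#   NVMe SSD: ~0.3 drive(s) to saturate one 16GFC port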

Also, is the diagram below accurate? How much latency is introduced at each layer of this stack?

[Diagram: host-to-array storage stack]

465 Posts

May 18th, 2018 17:00

As the speed of storage heads closer to local host memory speed, the SCSI protocol becomes an increasing percentage of the time cost of performing an IO. NVMe is the protocol that addresses the capabilities of the newer storage technologies. In the above diagram, host-to-array communication is still SCSI over FC. NVMe over Fabrics is part of the NVMe architecture, so you can imagine that at some point in the future NVMe will replace SCSI as the protocol of choice for host-to-array communication to exploit the faster storage technologies.

Having said that, some bottlenecks are immovable in our universe. The speed of light will always limit synchronous IO distances. Your diagram shows a local SAN only, with no reference to disaster recovery. So imagine another one of those SANs 100 km away, with synchronous replication between the two. In terms of the overall host write response time, what do you think the biggest component is in that scenario, even with NVMe and the fastest available disk technology?
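For a sense of scale, here is a minimal sketch of the propagation floor, assuming light travels at roughly 200,000 km/s in optical fibre and an illustrative 20 microsecond media latency:

# Speed-of-light floor on a synchronous write over 100 km.
# Both numbers below are assumptions for illustration.

FIBRE_KM_PER_MS = 200.0        # ~200 km per millisecond in fibre (~2/3 of c)
distance_km = 100

one_way_ms = distance_km / FIBRE_KM_PER_MS
round_trip_ms = 2 * one_way_ms          # the remote array must acknowledge the write

nvme_media_latency_ms = 0.02            # assumed ~20 us for fast flash media
print(f"Propagation round trip : {round_trip_ms:.2f} ms")
print(f"Assumed media latency  : {nvme_media_latency_ms:.2f} ms")

# The ~1 ms of fibre round trip dwarfs the ~0.02 ms media time,
# so distance, not the drive, dominates the write response time.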

2 Posts

May 21st, 2018 06:00

I didn't create that diagram; it is originally from this post.

With all-flash arrays, is the SCSI protocol now the bottleneck? Can you send me some references so I can understand this better? This article mentions that there is only a single SCSI queue that processes commands one at a time, whereas NVMe has 64K queues, each able to handle 64K simultaneous commands.

Since each SCSI command is executed one at a time, am I right in assuming the controller CPU is now the limiting factor? Even with SCSI commands executed one at a time, processing a single SCSI IO still consumes CPU resources on the storage processors, right?
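A minimal Little's Law sketch (queue depths and latency below are purely assumed) shows why the number of outstanding commands the interface allows, together with per-IO latency, caps achievable IOPS:

# Little's Law: achievable IOPS <= outstanding commands / average latency.
# This is an interface-level ceiling, not what a real device delivers.

def iops_ceiling(queue_depth, num_queues, latency_s):
    """Upper bound on IOPS when every queue slot is kept busy."""
    return (queue_depth * num_queues) / latency_s

latency_s = 100e-6   # assume 100 microsecond per-IO latency

scsi_bound = iops_ceiling(queue_depth=32, num_queues=1, latency_s=latency_s)
nvme_bound = iops_ceiling(queue_depth=128, num_queues=8, latency_s=latency_s)

print(f"Single SCSI queue ceiling   : {scsi_bound:,.0f} IOPS")
print(f"Modest NVMe config ceiling  : {nvme_bound:,.0f} IOPS")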

465 Posts

May 21st, 2018 16:00

NVMe has the 64K queues as a maximum in the architecture; a given implementation may or may not exploit all 64K. There is a cost to establish and manage that many queues, so the NVMe architect will need to determine the right number based on the cost and benefit of a certain number of queues for the application. You are right that SCSI is a single queue per device, so in that sense an NVMe implementation with even a small number of queues is vastly superior to SCSI. The NVMe protocol itself is also much more efficient than SCSI, which means less CPU cost per IO, or more IOs for the same CPU cost as SCSI. I don't have any papers for you, but I have read that a halving of the CPU cost with NVMe is realistic.
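As a rough illustration of that last point (the cycle counts below are assumed, not measured), halving the per-IO protocol cost roughly doubles the IOPS a single core can drive:

# Sketch of "less CPU cost per IO". Cycle counts are illustrative assumptions.

CORE_HZ = 2.5e9               # a 2.5 GHz core
SCSI_CYCLES_PER_IO = 30_000   # assumed protocol + driver cost per SCSI IO
NVME_CYCLES_PER_IO = SCSI_CYCLES_PER_IO / 2   # the "halving" mentioned above

for name, cycles in (("SCSI", SCSI_CYCLES_PER_IO), ("NVMe", NVME_CYCLES_PER_IO)):
    print(f"{name}: ~{CORE_HZ / cycles:,.0f} IOPS per core")

# Same core budget, roughly twice the IOPS once the per-IO protocol cost halves.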
