NVMe Storage Technology Introduction

A major evolution in hardware has reshaped the data center. In compute alone, the latest server CPUs deliver more processing power than the legacy storage arrays that once handled the input/output (I/O) load of every server in an enterprise. Connecting servers to disks over Peripheral Component Interconnect Express (PCIe) buses with the Non-Volatile Memory Express (NVMe) protocol dramatically increases I/O bandwidth, and the evolution of flash and caching technology has created a new breed of all-flash arrays and high-performance servers. This article discusses this hardware evolution and how it affects the data center.

The NVMe controller is embedded in the storage device itself, eliminating the need for a separate I/O controller between the CPU and the storage device. The result of this design is faster access to storage, significantly higher bandwidth, and a solution that is much easier to troubleshoot.
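Because each drive presents its own controller directly to the operating system, NVMe devices appear as first-class PCIe endpoints that can be enumerated without going through a host bus adapter. The sketch below is a minimal, Linux-only illustration of this, assuming the /sys/class/nvme sysfs layout exposed by the kernel's NVMe driver; the attribute names read here (model, firmware_rev, address) are assumptions based on that layout, and anything missing is simply skipped.

```python
import glob
import os

def list_nvme_controllers():
    """Enumerate NVMe controllers exposed by the Linux NVMe driver (sketch)."""
    controllers = []
    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
        def read_attr(name):
            # Each controller directory exposes simple text attributes;
            # tolerate missing ones rather than failing.
            try:
                with open(os.path.join(ctrl, name)) as f:
                    return f.read().strip()
            except OSError:
                return "n/a"
        controllers.append({
            "controller": os.path.basename(ctrl),      # e.g. nvme0
            "model": read_attr("model"),
            "firmware": read_attr("firmware_rev"),
            "pcie_address": read_attr("address"),      # PCIe address of the drive's own controller
        })
    return controllers

if __name__ == "__main__":
    for c in list_nvme_controllers():
        print(c)
```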

NVMe Latency

NVMe SSDs tend to service requests in the 10-20 microsecond range, whereas SATA/SAS-based SSDs typically service requests in the 50-100 microsecond range. NVMe devices also tend to have a more consistent latency profile as queue depth increases, making them ideal for cases where multiple disks share a single SSD as a journal device. Hard drives lag far behind, with latencies measured in milliseconds, although they remain fairly consistent as the I/O size increases.

It should be obvious that for small, high-performance workloads, hard drive latency would dominate the total latency figures, so SSDs, and preferably NVMe devices, should be used for them.
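To see these differences on a real system, one can time small random reads issued directly against the block device. The following is a rough measurement sketch, not a rigorous benchmark: it assumes Linux, root privileges, Python 3.7+ (for os.preadv), and a device path such as /dev/nvme0n1, which is only an example here. O_DIRECT is used so the page cache does not hide the device's latency.

```python
import mmap
import os
import random
import time

BLOCK = 4096  # 4 KiB reads, a common unit for latency tests

def read_latency_us(path, samples=1000):
    """Time random 4 KiB reads against a raw block device (rough sketch)."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        buf = mmap.mmap(-1, BLOCK)  # anonymous mapping: page-aligned, as O_DIRECT requires
        latencies = []
        for _ in range(samples):
            offset = random.randrange(size // BLOCK) * BLOCK
            start = time.perf_counter()
            os.preadv(fd, [buf], offset)
            latencies.append((time.perf_counter() - start) * 1e6)  # microseconds
        latencies.sort()
        return {
            "median_us": latencies[len(latencies) // 2],
            "p99_us": latencies[int(len(latencies) * 0.99)],
        }
    finally:
        os.close(fd)

# Example (requires root): print(read_latency_us("/dev/nvme0n1"))
```

Running the same sketch against a SATA SSD and a hard drive makes the latency gap described above directly visible, and the p99 figure gives a rough sense of how consistent each device is.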

Comparing the NVMe Protocol to AHCI

NVMe is a high-performance storage protocol that runs over PCIe. It was developed by manufacturers of SSDs to overcome the limitations of older protocols used with SATA drives, such as the Advanced Host Controller Interface (AHCI). AHCI is a hardware mechanism that allows software to talk to SATA devices; it specifies how data is transferred between system memory and a SATA device through the host bus adapter. AHCI offers, among other things, Native Command Queuing (NCQ), which reorders read/write commands to reduce the head movement of an HDD. Although AHCI served HDDs well, its single command queue, limited to 32 outstanding commands, is inadequate for SSDs and PCIe interconnects, which NVMe addresses with up to 64K queues of up to 64K commands each.

It is important to note that NVMe is not an interface but rather a specification that runs over the PCIe interface, and it replaces the AHCI protocol. NVMe is a protocol for accessing non-volatile memory, such as SSDs connected over PCIe buses, and it standardizes the register set, feature set, and command set for non-volatile memory devices.
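One practical consequence of that standardized command set is that the same Identify Controller admin command works against any NVMe device. The sketch below illustrates this under several assumptions: Linux, the NVMe driver's admin passthrough ioctl (NVME_IOCTL_ADMIN_CMD), root privileges, and a controller node such as /dev/nvme0 used purely as an example. The field offsets decoded at the end (serial, model, firmware) come from the Identify Controller data structure defined by the NVMe specification; in practice the nvme-cli tool (nvme id-ctrl /dev/nvme0) performs the same operation.

```python
import ctypes
import fcntl
import os
import struct

NVME_IOCTL_ADMIN_CMD = 0xC0484E41  # _IOWR('N', 0x41, struct nvme_admin_cmd), a 72-byte struct
NVME_ADMIN_IDENTIFY = 0x06         # Identify opcode from the NVMe admin command set

def identify_controller(dev="/dev/nvme0"):
    """Send Identify Controller (CNS=1) and decode a few ASCII fields (sketch)."""
    data = ctypes.create_string_buffer(4096)  # Identify data structure is 4 KiB
    # struct nvme_admin_cmd: opcode, flags, rsvd1, nsid, cdw2, cdw3, metadata,
    # addr, metadata_len, data_len, cdw10..cdw15, timeout_ms, result
    cmd = struct.pack(
        "=BBHIIIQQII" + "I" * 6 + "II",
        NVME_ADMIN_IDENTIFY, 0, 0, 0, 0, 0,
        0, ctypes.addressof(data), 0, 4096,
        1, 0, 0, 0, 0, 0,  # cdw10 = 1 selects CNS 1 (Identify Controller)
        0, 0)
    fd = os.open(dev, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, NVME_IOCTL_ADMIN_CMD, cmd)
    finally:
        os.close(fd)
    raw = data.raw
    return {
        "serial": raw[4:24].decode("ascii", "replace").strip(),
        "model": raw[24:64].decode("ascii", "replace").strip(),
        "firmware": raw[64:72].decode("ascii", "replace").strip(),
    }

# Example (requires root): print(identify_controller("/dev/nvme0"))
```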

NVMe vs Older Standards

Among the early consumer NVMe drives was Samsung's 960 PRO, and companies such as Intel later joined the market with drives such as the 900 series. NVMe drives are much faster than traditional SSDs because they communicate with the system's CPU over PCIe using the NVMe interface (originally named NVMHCI) rather than through SATA and AHCI.

In contrast, legacy SSDs are significantly slower because their commands pass through older protocol stacks such as SCSI, which must be translated into ATA commands that can travel over SATA before they reach the drive. Due to NVMe's popularity, there are hundreds of options to choose from; for drive recommendations, see Techstat's roundup of the best NVMe SSDs.
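A quick way to see which interface a given block device uses is to inspect how the kernel exposes it in sysfs: NVMe namespaces hang off the NVMe controller hierarchy, while SATA/SAS drives appear under the SCSI/ATA hierarchy. The snippet below is a heuristic sketch assuming a typical Linux sysfs layout; the substring checks on the resolved device path are assumptions, not a definitive classification, and the rotational flag simply distinguishes HDDs from SSDs.

```python
import glob
import os

def classify_block_devices():
    """Roughly classify block devices as NVMe vs SATA/SAS vs other via sysfs (sketch)."""
    devices = {}
    for blk in sorted(glob.glob("/sys/block/*")):
        name = os.path.basename(blk)
        real = os.path.realpath(blk)  # resolves into the PCI/SCSI device tree
        if "/nvme/" in real:
            transport = "nvme"
        elif "/ata" in real or "/scsi" in real or "/host" in real:
            transport = "sata/sas (scsi stack)"
        else:
            transport = "other (loop, virtio, md, ...)"
        try:
            with open(os.path.join(blk, "queue", "rotational")) as f:
                rotational = f.read().strip() == "1"
        except OSError:
            rotational = None
        devices[name] = {"transport": transport, "rotational": rotational}
    return devices

if __name__ == "__main__":
    for name, info in classify_block_devices().items():
        print(name, info)
```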