Overview

As a cloud provider, Reperio needed a way to do functional disk testing to get the best service out of our hardware.  There are many layers to this, from hardware manufacturer specifications, to systems design, to applied systems design; all while considering the cost of each part and the total cost of the system.  When we searched the internet for data and explanations of how to compose hardware systems for cloud workloads (at the disk level), we came up empty.

Cost for performance

Reperio, like any other business, has a fiscal responsibility to produce the most performance for the least money.  This seems obvious, but when interacting with systems providers, there is no shortage of “highest performance systems”, and a distinct shortage of cost-for-performance analysis.  This is even more complicated when hosting a cloud workload: it is very difficult to know what your customers will want to run on your platform beforehand.  We are going to try to elaborate on this point as best, and as openly, as we can.

Disk performance

Just trying to define and measure single-disk performance is quite difficult.  We are going to define disk performance as a multi-dimensional matrix: IOPS (I/O operations per second), throughput, and latency.  We are also going to measure at multiple block sizes, further extending the dimensions.  In this iteration of testing, we are focused on IOPS and throughput at many block sizes, and not on latency.
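
These dimensions are related: throughput is simply IOPS multiplied by block size, so a drive sustaining 10,000 IOPS at a 4k block size is only moving about 40MB/s, while the same 10,000 IOPS at 128k would be about 1.3GB/s.  As a rough sketch of how one could measure a single point in this matrix with fio (the device path below is a placeholder, and this is not our exact job definition):

    # Random 4k reads against a raw device for 60 seconds; fio reports
    # both IOPS and throughput for the run.
    fio --name=randread-4k \
        --filename=/dev/rdsk/c0t0d0 \
        --rw=randread --bs=4k \
        --ioengine=psync --direct=1 \
        --numjobs=1 --time_based --runtime=60 \
        --group_reporting

    # Sweeping --bs (4k, 8k, ... 1m) and --rw (read, write, randread,
    # randwrite) fills out the other dimensions of the matrix.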

What started all of this?

First, illumos merged native ZFS encryption.  Since Reperio hosts our cloud with Triton Data Center, all nodes run the customized SmartOS distribution on the illumos kernel/userland.  This is an amazing addition to the ecosystem.  Offering native on-disk, at-rest encryption is essential to meeting customers' needs as well as security and compliance certifications.
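
For a sense of what the feature looks like in practice, this is roughly how one creates an encrypted dataset; the pool and dataset names here are made up for illustration:

    # Create a natively encrypted dataset; file data and most metadata
    # are encrypted at rest, and the key must be loaded before mounting.
    zfs create -o encryption=aes-256-gcm \
               -o keyformat=passphrase \
               tank/secure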

That being said, we already had a number of nodes deployed with Triton Data Center, and to offer this feature as soon as possible, we needed to add a secondary (encrypted) zpool to the existing nodes.  The easiest way to do this is through an external storage chassis.  The biggest questions were which card, chassis, and drives to use; a sketch of what such a pool could look like follows.
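
As a minimal sketch, adding such a pool could look something like the following; the pool name, vdev layout, and device names are hypothetical, not our production configuration:

    # Build a second, encrypted pool from drives in the external chassis.
    # -O sets properties on the pool's root dataset, so child datasets
    # inherit encryption by default.
    zpool create -O encryption=aes-256-gcm \
                 -O keyformat=passphrase \
                 tank2 mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0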

Two other design parameters existed when we embarked on this project.  The first was that illumos had no support for hot-swapping NVMe drives.  The second was that the 1TB Samsung 860 EVO was retailing at around $130 a drive.  Given Samsung's published specifications, that price seemed amazing.  We thought: “Hey, maybe we can get some amazing performance for cheap!”  There was only one way to find out for sure.

The testing hardware

(1) Supermicro – X10DRI-T-B (motherboard)

(2) Intel – Xeon E5-2640 v4 (CPU)

(16) Hynix – HMA84GR7CJR4N-VK (DDR4 RDIMM)

(2) LSI/Avago/Broadcom – 9300-8i (internal SAS3 HBA)

(1) LSI/Avago/Broadcom – 9300-8e (external SAS3 HBA)

(1) Supermicro – MCP-220-82616-0N (rear 2.5" drive kit)

(1) Supermicro – BPN-SAS-826A (direct-attach SAS backplane)

(1) Supermicro – BPN-SAS3-216EL1 (SAS3 expander backplane)

The drives

Manufacturer   Model           Type   Capacity   Interface
Seagate        ST4000NM003A    HDD    4000GB     SAS 6Gb/s
Samsung        MZ-76E1T0B/AM   SSD    1024GB     SATA 6Gb/s
HGST           Z4RZF3D-8UCS    SSD    8GB        SAS 6Gb/s
Seagate        ST400FM0323     SSD    400GB      SAS 12Gb/s
