SSD SAS NL-SAS Benchmark

A recent thread I was replying to reminded me about throughput and SSD versus NL-SAS and SAS.

After recent experiences with SSDs in RAID I am reminded of the Blade Runner Replicant school of performance versus lifespan.

The conversation was led regarding virtualization and SAN… in which, imho, SSD has no place, short of some kind of hot data / cache store, for a number of reasons: cost per gigabyte, and a predisposition for sudden and catastrophic failure. However – this being said – use in a standalone system delivering non-enterprise content quickly – really quickly – with the confidence of a granular CDP backup: sign me up. This is where we are currently using them.

So – in this highly non-laboratory test we have a Dell PowerEdge R220 (quad-core hyper-threaded Xeon), with an LSI-based Dell Perc310 PCIe RAID controller, and two drives in a RAID 1 (mirrored) configuration. The test tool was the Ubuntu Disks application running its benchmark in read-only mode: 1000 samples for latency, and 100 samples for throughput. Figures are averages over that test (a rough scripted approximation of this methodology is sketched after the figures below):

SSD – 512GB – (Crucial/Micron Business Grade) – 1.1GB/s read, latency 0.04 msec;

SAS – 300GB – (Toshiba 10k) – 150MB/s read, latency 7.23 msec;

NL-SAS – 500GB – (Seagate Constellation.2 7.2k) – 138MB/s read, latency 11.08 msec.
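For anyone wanting to reproduce the shape of this without the GUI, below is a minimal Python sketch of the same idea: sequential reads sampled across the disk for throughput, random seeks with small reads for access time. The device path is a placeholder, it needs root, and because it skips O_DIRECT the Linux page cache will flatter repeat runs (the real Disks benchmark bypasses the cache) – a sketch, not the actual tool.

```python
# Rough approximation of the Disks read benchmark: throughput sampled at evenly
# spaced offsets, access time from random seeks. DEVICE is a placeholder.
import os, random, time

DEVICE = "/dev/sdX"          # placeholder - point at the drive under test
THROUGHPUT_SAMPLES = 100     # matches the 100 throughput samples above
LATENCY_SAMPLES = 1000       # matches the 1000 latency samples above
CHUNK = 10 * 1024 * 1024     # 10MiB sequential read per throughput sample

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

# Sequential read throughput, sampled across the whole device.
rates = []
for i in range(THROUGHPUT_SAMPLES):
    os.lseek(fd, (size // THROUGHPUT_SAMPLES) * i, os.SEEK_SET)
    t0 = time.perf_counter()
    os.read(fd, CHUNK)       # may return fewer bytes; good enough for a sketch
    rates.append(CHUNK / (time.perf_counter() - t0) / 1e6)   # MB/s

# Access time: seek to a random offset and read a single 4KiB block.
lats = []
for _ in range(LATENCY_SAMPLES):
    os.lseek(fd, random.randrange(0, size - 4096), os.SEEK_SET)
    t0 = time.perf_counter()
    os.read(fd, 4096)
    lats.append((time.perf_counter() - t0) * 1000)            # msec

os.close(fd)
print(f"avg read {sum(rates)/len(rates):.0f} MB/s, avg access {sum(lats)/len(lats):.2f} ms")
```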

SATA and SAS differ in the protocol used, and SAS support is most clearly indicated by the extra pins on the connector, bridging the area where a SATA drive would have a gap. SAS hardware fills the traditional SCSI gap in the market: highest tolerance, best performance, higher rotational speeds, lower capacity.

In the case of the 10k drive in comparison to the 7.2k drive, you are seeing roughly a 35% decrease in latency (7.23 msec versus 11.08 msec). The heads are in place, and the disk has physically rotated ready for the read, that much sooner. In an environment where you are seeing non-contiguous block reads, that is a no-brainer.
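Spindle speed alone predicts most of that gap – average rotational latency is simply the time for half a revolution. A quick back-of-the-envelope using nothing beyond the rotation speeds quoted above:

```python
# Average rotational latency = time for half a revolution at the given spindle speed.
for rpm in (10_000, 7_200):
    half_rev_ms = 0.5 / (rpm / 60) * 1000
    print(f"{rpm:>6} rpm: ~{half_rev_ms:.1f} ms average rotational latency")
# 10,000 rpm -> ~3.0 ms, 7,200 rpm -> ~4.2 ms; add seek time on top and you land
# in the neighbourhood of the measured 7.23 ms and 11.08 ms access times.
```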

The SSD does not suffer from the mechanical constraints of moving a read/write head to the right track and then waiting for the sector it is looking for to pass under the head. An oversimplification of the logic: it is essentially a grid reference, X by Y, and you have it. There is no move-the-head-and-wait-for-the-data-to-pass-by (like buses, right?).
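Purely as a toy illustration of that "grid reference" idea – real flash translation layers are far more involved, and the geometry below is entirely made up – a logical block resolves to a physical location with a constant-time lookup, no head to move and no platter to wait on:

```python
# Toy illustration only (not how any real SSD controller works): a logical block
# address maps straight to a (die, plane, page) "grid reference" by arithmetic.
PAGES_PER_PLANE = 4096    # made-up geometry
PLANES_PER_DIE = 4

def grid_reference(lba: int) -> tuple[int, int, int]:
    """Map a logical block address directly to a physical location."""
    page = lba % PAGES_PER_PLANE
    plane = (lba // PAGES_PER_PLANE) % PLANES_PER_DIE
    die = lba // (PAGES_PER_PLANE * PLANES_PER_DIE)
    return die, plane, page

print(grid_reference(123_456))   # -> (7, 2, 576): same cost wherever the data lives
```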

The SSD is also using the SATA protocol.

When looking at the output from the benchmark, what really, REALLY shows up is the cache dependency for sustained operations:

– The SSD churns away between 1100MB/sec and 1000MB/sec across the benchmark;

– The SAS drive has a linear decay – seeing it drop from 200MB/sec to 90MB/sec;

– The NL-SAS drive has a non-linear drop-off – seeing it decrease from 180MB/sec down to 50MB/sec.

The latter lands at the assumed sustained throughput of a SATA drive, around 40-50MB/sec – the gene line of the mechanics gives you away, old chap.
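The shape of those decay curves is largely geometry: on a spinning disk the outer tracks pass more sectors under the head per revolution than the inner ones, so sequential throughput falls as the benchmark works from the outer edge inward. A back-of-the-envelope sketch of that effect – using made-up platter radii, not drive specs, and the measured 200MB/sec starting rate – lands close to the observed end point; the SSD has no such geometry, which is why its line stays flat:

```python
# Back-of-the-envelope: sequential throughput on a spinning drive scales with
# track circumference, so the outer edge moves more data per revolution than
# the tracks near the spindle. Radii are illustrative guesses, not drive specs.
OUTER_MM, INNER_MM = 46.0, 20.0   # outermost / innermost usable track radius (guessed)
outer_rate = 200.0                # MB/s seen at the start of the SAS drive's run

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):   # 0 = outer edge, 1 = innermost track
    radius = OUTER_MM - frac * (OUTER_MM - INNER_MM)
    print(f"{frac:4.0%} of the way in: ~{outer_rate * radius / OUTER_MM:.0f} MB/s")
# Ends around 200 * 20/46 ~= 87 MB/s - the same ballpark as the measured 200 -> 90 drop.
```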

So – for low-capacity storage, low latency, high throughput, and a short life, step up, SSD:

You will likely see around seven times the throughput (1.1GB/s versus 150MB/s here);

You will likely see a couple of hundred times lower latency for non-sequential operations (0.04 msec versus 7-11 msec here);

Sustained operations have next to no effect on throughput (the drop here was 1100MB/sec to 1000MB/sec);

To the point where it is worth looking into whether caching on the card or drive is even worthwhile enabling.
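For what it's worth, those headline ratios fall straight out of the averaged figures earlier in the post:

```python
# Ratios derived from the measured averages quoted above.
ssd_rate, sas_rate = 1100.0, 150.0                  # MB/s
ssd_lat, sas_lat, nlsas_lat = 0.04, 7.23, 11.08     # msec

print(f"throughput vs SAS:  ~{ssd_rate / sas_rate:.1f}x")        # ~7.3x
print(f"latency vs SAS:     ~{sas_lat / ssd_lat:.0f}x lower")     # ~180x
print(f"latency vs NL-SAS:  ~{nlsas_lat / ssd_lat:.0f}x lower")   # ~277x
```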

This being said – as an enterprise storage method – it is better suited to hot data, caching, and similar.

Where the technology goes from here – well, that really would change things up.

Either a return to hybrid drives with SSDs for hot data, or software support for using different media for large-scale hot data. Who knows.

Anyway – some figures to muse on nonetheless.
