HPC in the Cloud, a new and highly-recommended website, has a recent article, “The I/O Bottleneck and Solid-State Drives (SSD),” describing the I/O bottleneck and how solid-state storage can overcome it.

In a nutshell, the I/O Bottleneck is a performance problem in computer hardware as a result of storage technology falling behind the times. Server performance is largely determined by three factors – processor speed, networking speed and storage drive speed. As the above article describes, processor technology (such as the new Intel Xeon 5600 and AMD Opteron 6100 multi-core processors that we have been following on this blog) and networking innovations (such as InfiniBand) have outpaced storage technology, which has been hamstrung by its reliance on spinning hard disk drives (HDD).

Solid-state drives (SSD) with no moving parts are poised to replace HDDs and allow storage to catch up with processors and networking standards. There are several advantages that SSD storage has over HDD. With the growth of cloud computing and Web 2.0, the way data is accessed from drives has become more sporadic and random. According to the HPC in the Cloud article,

Enterprise servers, running applications in the datacenter ranging from Web 2.0 to HPC to business analytics, can generate hundreds of thousands of random I/O operations per second (IOPS). In these environments, the HDDs available today can only perform thousands of IOPS combined. HDDs are great for capacity and large blocks of sequential data but are not very good at delivering small pieces of random data at a high IOPS rate. The physical characteristics and power envelope of the HDD make it an expensive option for increasing application throughput. Consequently, the CPUs are under-utilized as they wait for data.

The scenario above illustrates well why HDDs are thought of as a bottleneck – with inadequate storage performance, the powerful CPUs of today are forced to sit and wait for the slower HDDs. Many data hosting companies solve this problem by just purchasing more servers with HDD storage, whereas they could just as well boost performance by running less servers (which would save space, energy, and maintenance fees) with SSDs.
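As a rough sanity check on the numbers in that quote, a spinning disk's random-IOPS ceiling can be estimated from its mechanical latencies. This is a back-of-the-envelope sketch, not a benchmark; the 3.5 ms seek figure is a typical published spec for a 15K RPM enterprise drive, and the function name is ours:

```python
# Rough random-IOPS ceiling for a spinning disk: each random I/O pays
# an average seek plus, on average, half a rotation before data moves.
def hdd_random_iops(avg_seek_ms, rpm):
    rotational_latency_ms = 0.5 * 60_000 / rpm  # half a revolution, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms  # I/Os per second

# Typical 15K RPM enterprise drive with ~3.5 ms average seek:
print(round(hdd_random_iops(3.5, 15_000)))  # prints 182
```

Even a fast 15K RPM drive tops out below ~200 random IOPS, so serving the hundreds of thousands of IOPS described above with HDDs alone would take hundreds of spindles.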

There are several factors to consider when purchasing SSD storage for servers. First, there are volatile (DRAM) and non-volatile (flash) SSDs. Volatile SSDs lose their data if power is cut, while non-volatile flash drives – modified versions of which are used in portable USB drives, iPods, and mobile phones – do not.

Within the flash SSD category, there is single-level cell (SLC) memory and multi-level cell (MLC) memory. Most consumer products like cameras and phones use MLC, which stores multiple bits per cell to cut cost but endures far fewer program/erase cycles before wearing out. It should not be used in write-heavy enterprise environments unless you want to replace your storage drives every few weeks. If you are looking for flash SSD drives for servers, definitely get SLC flash memory.
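The endurance gap can be sketched with simple arithmetic. The cycle counts below are ballpark figures commonly quoted for the era (roughly 100,000 P/E cycles for SLC, a few thousand for MLC), the workload is hypothetical, and the sketch assumes perfect wear leveling with no write amplification:

```python
# Back-of-the-envelope flash lifetime: total writable data is roughly
# capacity x P/E cycles (assuming ideal wear leveling, no write amplification).
def lifetime_days(capacity_gb, pe_cycles, writes_gb_per_day):
    total_writes_gb = capacity_gb * pe_cycles
    return total_writes_gb / writes_gb_per_day

# A 100 GB drive absorbing 2 TB of writes per day (a busy database server):
slc = lifetime_days(100, 100_000, 2_000)  # ~100,000 cycles for SLC
mlc = lifetime_days(100, 5_000, 2_000)    # ~5,000 cycles for MLC
print(round(slc / 365, 1), "years vs", round(mlc), "days")  # 13.7 years vs 250 days
```

Under the same write load, the SLC drive outlasts the MLC drive by the ratio of their cycle counts – which is why SLC was the safe choice for servers despite its higher price per gigabyte.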

A new class of storage, called Tier-0, is gaining popularity. These flash and PCI-E SSD solutions let data centers take full advantage of their increasing processor performance, reduce maintenance costs because there are fewer moving parts in the server, and handle the random I/O demanded by contemporary applications and cloud computing far more efficiently. Once the cost of SSD storage comes down, the decision to switch from HDDs will be a no-brainer. For now, the intrepid hosting companies that do decide to adopt SSD will be reaping the benefits before everybody else.
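To make the consolidation argument concrete, here is a hypothetical sizing sketch. The per-device IOPS figures are illustrative round numbers, not benchmarks, and the calculation ignores RAID, controller, and capacity overheads:

```python
import math

# Drives needed to serve a random-IOPS target, ignoring RAID/controller overhead.
def drives_needed(target_iops, iops_per_drive):
    return math.ceil(target_iops / iops_per_drive)

TARGET = 100_000  # random IOPS, in line with the datacenter workloads above
print(drives_needed(TARGET, 180))     # 15K RPM HDDs at ~180 IOPS: 556 spindles
print(drives_needed(TARGET, 20_000))  # flash SSDs at ~20,000 IOPS: 5 drives
```

For random-I/O-bound workloads, a handful of SSDs can stand in for racks of spindles – though, as a commenter notes below, the math changes when raw capacity rather than IOPS is the constraint.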

  • Servers and liquid cooling – International Computer Concepts Blog 14.06.2010

    […] liquid cooled hard drives that cut down on the sound created by their spinning parts (you can also replace HDDs with SSDs to achieve the same […]

  • How a SAS switch can improve storage management | International Computer Concepts Blog 19.10.2010

    […] new solutions such as a DAS cluster configuration with a 6Gb/s SAS switch are helping overcome the various I/O bottlenecks that hamper computing performance. This entry was posted in Clustering, Data Centers, LSI, […]

  • Our solution for an NCSA high-performance storage prototype | International Computer Concepts Blog 24.02.2011

    […] some of Einstein’s theories about the universe and the way the storage system gets around the I/O bottleneck problem. I’ll provide a brief overview here (see the case study for greater […]

  • Nathan 09.03.2012

    Seems a bit incomplete to say “…whereas they could just as well boost performance by running less servers (which would save space, energy, and maintenance fees) with SSDs.”

    In fact using SSDs would only be less servers if the amount of storage you had was insignificant to your architecture. If you have a san and want the benefits of SSD, you’re talking about at least 4 times the amount of servers.

    So that point really applies to things like web servers, etc, that don’t need heavy storage.


  • Dude Max 09.10.2014

    Dude, the word is “fewer” not “less”! It hurt me every time I read it!