By Henry Newman
Data storage has become the weak link in enterprise applications, and without a concerted effort on the part of storage vendors, the technology is in danger of becoming irrelevant. The I/O stack just isn’t keeping pace with advances in silicon, and it could find itself replaced by new technologies like phase change memory (PCM) that promise unfettered access to data.
The problem is simple: Memory bandwidth and CPU performance continue to grow much faster than disk performance, bus bandwidth, and disk channel speed. Combined with a limited I/O interface (POSIX), the result is an I/O bottleneck that only gets worse with time.
A look at the performance increases for various elements of the storage stack over the last five years paints a clear picture:
* Memory bandwidth: Intel has gone from 4.3 GB/sec in 2004 to 40 GB/sec, while AMD has gone from 5.3 GB/sec to 25.6 GB/sec, an increase of 9.3 times for Intel and 4.8 times for AMD.
* CPU performance: Moore's Law says transistor count doubles every 18 months; assuming, generously, that this translates directly into performance (it does not), that implies a greater than tenfold improvement over five years.
* PCIe bus bandwidth: PCIe has increased from 250 MB/sec per lane in 2004 to 500 MB/sec per lane in 2008, with 1 GB/sec expected next year, an increase of two (and soon four) times.
* Disk channel speed: Per-channel speed has increased by 50 percent, from 4Gb Fibre Channel to 6Gb SAS.
* Disk performance: SATA has improved from 68 MB/sec to 84 MB/sec, only about 24 percent, while FC/SAS has gone from 125 MB/sec to 194 MB/sec, a modest gain roughly in line with the 50 percent increase in channel speed.
* Disk density: FC/SAS has doubled, from 300 GB to 600 GB, while SATA has seen an eightfold increase from 250 GB to 2 TB.
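To make the gap concrete, here is a small Python sketch that computes the growth multiples implied by the figures above (the dictionary keys are just illustrative labels, not anything from a real benchmark suite); dividing the end-of-period value by the 2004 value for each layer of the stack shows how far storage has fallen behind memory:

```python
# Rough 2004-to-2009 growth multiples for each layer of the I/O stack,
# using the figures cited in the article. Units differ per component,
# but they cancel when taking the ratio.
growth = {
    # component: (2004 value, 2009 value)
    "memory_bw_intel_gbs": (4.3, 40.0),
    "memory_bw_amd_gbs": (5.3, 25.6),
    "pcie_per_lane_mbs": (250.0, 500.0),
    "disk_channel_gbit": (4.0, 6.0),
    "sata_disk_mbs": (68.0, 84.0),
    "fc_sas_disk_mbs": (125.0, 194.0),
}

for name, (start, end) in growth.items():
    print(f"{name}: {end / start:.1f}x")
```

Running this prints roughly 9.3x for Intel memory bandwidth against 1.2x for SATA disk performance, which is the whole argument in two numbers.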