
Storm on Demand has offered SSD storage for a while now, with pretty solid IOPS numbers too. I'll run a benchmark comparison tonight.


Ran some benchmarks; really insane IOPS on this plan, but pretty average CPU performance (they're using fairly old Xeon E5620s).

http://blog.serverbear.com/post/27553311076/hi1-4xlarge-benc...


I just ran the numbers on the new EC2 instance, and I'm pretty skeptical about the benchmarks above. I'm not sure that, for example, a half-second dd from /dev/zero really tells us much.
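(For a quick-and-dirty number that means a bit more, you at least want the run to last long enough and to force the data out of the page cache. A minimal sketch; the target path and size are placeholders:)

```shell
# Sketch: a dd write test that forces data to disk before reporting a rate.
# TARGET is a placeholder; point it at a file on the filesystem under test.
TARGET=${TARGET:-/tmp/ddtest.bin}

# conv=fsync flushes the file to the device before dd prints its summary,
# so a short run isn't just measuring the page cache.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
```

For anything serious you'd reach for a real tool like fio and multiple passes, but fsync alone removes the worst of the cache distortion.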

When interpreting any benchmarks on EC2, it's important to understand that there is a 5-10% read/write performance hit on first use, because AWS lazily wipes blocks between customer instance launches. See http://www.youtube.com/watch?v=IedaYaKsb-4#t=29m49s (it should pre-cue; if not, skip to 29:49). This is referenced in the docs, but it's easy to miss: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/In...
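The practical upshot is to "pre-warm" each ephemeral device once before benchmarking, which is exactly what the zero-wipe further down does. A runnable sketch, using a scratch file as a stand-in (on the instance you'd point DEV at /dev/xvdf and drop count so dd writes the whole device):

```shell
# Pre-warm a device by writing it end to end once, so later benchmarks
# measure steady-state (second-write) performance instead of the
# first-use lazy-wipe penalty.
# DEV is a stand-in here; on the instance it would be /dev/xvdf.
DEV=${DEV:-/tmp/fake-xvdf.img}

# count=32 keeps the sketch small; against a real device, omit count
# and let dd run until "No space left on device".
dd if=/dev/zero of="$DEV" bs=1M count=32 conv=fsync
```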

So here you go, for hi1.4xlarge:

*

Summary for the impatient - After initialization (i.e., on the second write), quasi-realistic I/O on the new SSD EC2 instances sustains writes @ 420 MB/sec, with cached reads @ ~6 GB/sec (buffered disk reads come in around 374 MB/sec). The entire 8.6 GB / filesystem copied over to SSD in 21 seconds.

Not bad.

*

    # df -h

    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1            8.0G  1.1G  6.9G  14% /
    tmpfs                  30G     0   30G   0% /dev/shm
    /dev/xvdf            1023G   16G  957G   2% /media/ephemeral0

    (Note: /dev/xvdf and /dev/xvdg are just soft links to /dev/sdf and /dev/sdg respectively)

    Crude stats on first-use:

    # hdparm -tT /dev/xvdf

    /dev/xvdf:
    Timing cached reads:   14788 MB in  1.99 seconds = 7446.69 MB/sec
    Timing buffered disk reads:  1066 MB in  3.00 seconds = 355.04 MB/sec

    Wipe the device:

    dd if=/dev/zero of=/dev/xvdf bs=1M & pid=$!
    while kill -USR1 $pid 2>/dev/null; do sleep 4; done
     [...]
    dd: writing `/dev/xvdf': No space left on device

    1048567+17 records in
    1048566+17 records out
    1099511627776 bytes (1.1 TB) copied, 1955.42 s, 562 MB/s
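(Aside: the kill -USR1 loop is the classic way to coax progress out of dd; with GNU coreutils 8.24 or newer, status=progress does the same thing in-line. A small sketch against a scratch file:)

```shell
# status=progress makes GNU dd print a live byte-count/throughput line,
# replacing the kill -USR1 trick (GNU coreutils >= 8.24).
dd if=/dev/zero of=/tmp/progress-demo.img bs=1M count=32 status=progress
```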

    Stats after zero-wipe (dd /dev/zero) to device:

    hdparm -tT /dev/xvdf

    /dev/xvdf:
    Timing cached reads:   13260 MB in  1.99 seconds = 6673.05 MB/sec
    Timing buffered disk reads:  1124 MB in  3.01 seconds = 374.02 MB/sec

    hdparm -tT /dev/xvdf

    /dev/xvdf:
    Timing cached reads:   11188 MB in  1.99 seconds = 5624.17 MB/sec
    Timing buffered disk reads:  1122 MB in  3.00 seconds = 373.99 MB/sec

    hdparm -tT /dev/xvdf

    /dev/xvdf:
    Timing cached reads:   12930 MB in  1.99 seconds = 6505.78 MB/sec
    Timing buffered disk reads:  1124 MB in  3.00 seconds = 374.15 MB/sec

    Confirming the effect on the un-wiped device (/dev/xvdg, first use):

    hdparm -tT /dev/xvdg

    Timing cached reads:   11796 MB in  1.99 seconds = 5931.68 MB/sec
    Timing buffered disk reads:  1038 MB in  3.00 seconds = 345.87 MB/sec

    hdparm -tT /dev/xvdg

    /dev/xvdg:
    Timing cached reads:   12658 MB in  1.99 seconds = 6367.41 MB/sec
    Timing buffered disk reads:  1050 MB in  3.00 seconds = 349.47 MB/sec

    hdparm -tT /dev/xvdg

    /dev/xvdg:
    Timing cached reads:   12856 MB in  1.99 seconds = 6468.39 MB/sec
    Timing buffered disk reads:  1066 MB in  3.00 seconds = 354.80 MB/sec

    Post-wipe (/dev/xvdf) vs. first-use (/dev/xvdg) performance: 373.6 MB/sec vs. 349.3 MB/sec (a 6-7% improvement after initialization)
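That percentage checks out from the raw numbers:

```shell
# Improvement of post-wipe (373.6 MB/s) over first-use (349.3 MB/s) throughput
awk 'BEGIN { printf "%.1f%%\n", (373.6 - 349.3) / 349.3 * 100 }'
# prints 7.0%
```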

    Somewhat more real-world numbers:

    dd if=/dev/sda1 of=/dev/xvdf bs=1M
    8192+0 records in
    8192+0 records out
    8589934592 bytes (8.6 GB) copied, 19.7876 s, 434 MB/s

    dd if=/dev/sda1 of=/dev/xvdf bs=1M
    8192+0 records in
    8192+0 records out
    8589934592 bytes (8.6 GB) copied, 20.0365 s, 429 MB/s

    dd if=/dev/sda1 of=/dev/xvdf bs=1M
    8192+0 records in
    8192+0 records out
    8589934592 bytes (8.6 GB) copied, 21.4193 s, 401 MB/s
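The reported rates are just bytes over seconds in decimal megabytes (dd uses SI units), which is easy to double-check:

```shell
# Reproduce dd's reported rate for the first copy: bytes / seconds, SI MB
awk 'BEGIN { printf "%.0f MB/s\n", 8589934592 / 19.7876 / 1e6 }'
# prints 434 MB/s
```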

*Edit: formatting



