Amazon EC2 Disk Performance

Update (3/3/2010): a better measure of RAID performance is available here.

While considering different options for a database server, I decided to do some digging into Amazon Web Services (AWS) as an alternative to dedicated servers from an ISP. I was most curious about the I/O performance of Elastic Block Storage (EBS) on the Elastic Compute Cloud (EC2). I tested a number of different file systems (EXT3, JFS, XFS, ReiserFS) as single block devices, and then several software RAID configurations using JFS. The tests were run with Bonnie++.

The configuration was vanilla: no special tuning was done, just the default values assigned by the tools. I used Fedora Core 9 from the default Amazon AMI and used “yum install” to acquire the necessary utilities (more on that below). I expect that with further tuning, some additional performance could still be obtained. I used the small instance for cost reasons, which includes “moderate” I/O performance; running on a large or extra-large standard instance, with “high” I/O performance, should do even better. You can get all the instance specifications from Amazon.

First I wanted to determine what the EBS devices would compare to in the physical world. I ran Bonnie against a few entry-level boxes provided by a number of ISPs and found the performance roughly matched a locally attached SATA or SCSI drive when formatted with EXT3. I also found that JFS, XFS and ReiserFS performed slightly better than EXT3 in most tests except block writes.

The Numbers

Again, let me reiterate that these numbers may not accurately reflect what you’ll see in your production environment. Amazon states that small instances have “moderate” I/O availability. Presumably, if you’re running a production DB you’ll want to consider a large or extra-large instance for the memory, so you should see slightly better performance from your configuration. Also note that the drives I allocated were rather small (to keep testing costs low), so you may experience different results with larger capacities.

Note: The graph below is in KB, not bytes as titled.

[Graph: Bonnie++ disk performance on EC2]

Size (Filesystem)    Output Per Char   Output Block   Output Re-write   Input Per Char   Input Block
4x5Gb RAID5 (JFS)    22,349            58,672         39,149            25,332           84,863
4x5Gb RAID0 (JFS)    24,271            99,152         43,053            26,086           96,320
10Gb (XFS)           20,944            43,897         24,386            25,029           65,710
10Gb (ReiserFS)      22,864            57,248         17,880            21,716           44,554
10Gb (JFS)           23,905            47,868         21,725            24,585           55,688
10Gb (EXT3)          22,986            57,840         22,100            24,317           48,502

All values are in K/sec.

As expected, RAID 0 does best on read/write speed, and RAID 5 does very well on reads (input block) as well. For InnoDB, the re-write and block input (read)/output (write) numbers are the most critical. Longer bars in the graph mean better values. To better understand what the test is doing, be sure to read the original Bonnie description of each field.

Making Devices

The process for making a device is simple. There are many tutorials on how to make this persistent, and you can certainly build this into your own AMI when you’re done; this is not a tutorial on how to do that. To get a volume up and running you’ll follow these basic steps (a sketch of the allocation commands follows the list):

  1. Determine what you want to create – capacity, filesystem type etc.
  2. Allocate EBS storage
  3. Attach the EBS storage to your EC2 instance
  4. If using RAID, create the volume.
  5. Format the filesystem
  6. Create the mount point on the instance filesystem
  7. Mount the storage
  8. Add any necessary entries to mount storage at boot time.
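
To make steps 2 and 3 concrete, here is a rough sketch using the EC2 API command-line tools. This assumes the tools are installed and configured with your credentials; the size, zone and IDs below are placeholders, not values from my setup.

# allocate a 10Gb EBS volume in your instance’s availability zone (step 2)
ec2-create-volume -s 10 -z us-east-1a

# attach the new volume to your instance as /dev/sdf (step 3); IDs are placeholders
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf

The formatting and mounting commands (steps 5 through 7) are shown in the sections below.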

Single Disk Images

Remember, the speed and efficiency of a single EBS device is roughly comparable to a modern SATA or SCSI drive. Using a different filesystem (other than EXT3) can improve different aspects of drive performance, just as it would with a physical hard drive. This isn’t a comparison of the pros and cons of the different filesystems; I’m simply sharing my findings from testing.

JFS: yum install jfsutils
XFS: yum install xfsprogs
ReiserFS: yum install reiserfs-utils
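
Once the matching utilities are installed, formatting an attached volume with one of these filesystems is a one-liner. A quick sketch, assuming the volume is attached as /dev/sdf as in the example below:

mkfs -t jfs /dev/sdf       # JFS
mkfs -t xfs /dev/sdf       # XFS
mkfs -t reiserfs /dev/sdf  # ReiserFS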

I didn’t test any other filesystems such as ZFS, because I’ve read that some of them are unstable on Linux, and since I’ll be running production on Linux the extra time for those tests seemed unnecessary. I am interested in other alternatives that could increase performance, so if you have any to share I’d love to hear about them.

You can quickly get a volume set up with the following:

mkfs -t ext3 /dev/sdf
mkdir /vol1
mount /dev/sdf /vol1

Next time you mount the volume, you won’t need to use “mkfs” because the drive is already formatted.
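
To cover step 8 from the list above and have the volume mount automatically at boot, an /etc/fstab entry along these lines should do it (a sketch assuming the same device and mount point; noatime is an optional tweak, not something I tested):

# /etc/fstab entry for the EBS volume
/dev/sdf   /vol1   ext3   defaults,noatime   0   0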


Software RAID

The default AMI already includes support for RAID, but if you need to add it to your yum-enabled system, it’s “yum install mdadm”. On the Fedora Core 9 test rig I was using, RAID 0, 1, 5 and 6 were supported; YMMV.

To create a 4 disk RAID 0 volume, it’s simply:

mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mkfs -t ext3 /dev/md0
mkdir /raid
mount /dev/md0 /raid

To create a 4 disk RAID 5 volume instead, it’s simply:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mkfs -t ext3 /dev/md0
mkdir /raid
mount /dev/md0 /raid
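
For step 8 with a RAID volume, you’ll also want mdadm to be able to reassemble the array at boot. A minimal sketch:

# record the array definition so it can be reassembled automatically
mdadm --detail --scan >> /etc/mdadm.conf

# and mount it at boot like any other filesystem
echo "/dev/md0   /raid   ext3   defaults   0   0" >> /etc/fstab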

This example assumes you have 4 EBS volumes attached to the system. AWS shows 7 possible mount points (/dev/sdf – /dev/sdl) in the web console; however, the documentation states you can use devices through /dev/sdp, which is 11 EBS volumes in addition to the non-persistent storage. At the 1Tb-per-volume maximum, that’s a theoretical limit of 10TB of RAID 5 or 11TB of RAID 0 storage!

Checking in on things…

  • cat /proc/mdstat
    is a great way to check in on the RAID volume. If you run it directly after creating a mirroring or striping array, you’ll also be able to see the scrubbing process and how far along it is.
  • mount -l
    shows the currently mounted devices and any options specified.
  • df
    (“disk free”) provides a nice list of device mounts and their total, available and used space.
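
If you want to watch the initial sync or a rebuild as it happens, something like the following works (a sketch):

# refresh the RAID status every few seconds
watch -n 5 cat /proc/mdstat

# detailed state of a specific array
mdadm --detail /dev/md0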


The numbers make it clear that software RAID offers a real performance advantage over a single EBS volume. Since with EBS you pay per Gb, not per disk, it’s certainly cost-effective to create a robust RAID volume. The question that remains is how careful you need to be with your data. RAID 0 offered blistering-fast performance but, like a traditional array, no redundancy. You can always set it up as RAID 5, RAID 6 or RAID 10, but this of course requires more unusable space to handle the redundancy.
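
For reference, a 4 disk RAID 10 volume follows the same pattern as the RAID 0 and RAID 5 examples above. A sketch, assuming the same four attached volumes and that your kernel’s md driver supports the raid10 personality (my test rig only listed 0, 1, 5 and 6):

mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mkfs -t ext3 /dev/md0
mkdir /raid
mount /dev/md0 /raid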

Since the volumes on EBS are theoretically invincible, it may be okay to run unprotected by a mirror or parity drive; however, I haven’t found anyone who would recommend this in production. If anyone knows of a good reason to ignore the safety of RAID 10, RAID 6 or RAID 5, I’d love to hear the reasoning.

I am also curious if these drives maintain a consistent throughput over the full capacity of the disk or will they slow down as the drive fills like a traditional drive? I did not test this. It remains open for another test (and subsequent blog post). Should anyone run ZCAV against a 100Gb+ drive and figure that out, please let me know.
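
If someone does take that on, ZCAV ships with Bonnie++ and the invocation is roughly along these lines (a sketch; check the man page for your version, and note that the read pass itself will rack up I/O request charges):

# read the device from start to finish and log throughput by position
zcav /dev/sdf > sdf-zcav.txt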

Fine Print – The Costs

Storage starts at a reasonable $0.10/GB-month and is prorated, so you pay only for the time you use it. A 1Tb RAID 0 volume made of 10x100Gb volumes would cost only $1,200 per year. Good luck getting performance per dollar like that for 1Tb from any SAN solution at a typical ISP. There are, however, some hidden costs in the I/O that you’ll need to pay attention to. Each time you read or write a block to disk there’s an incremental cost. The pricing is $0.10 per million I/O requests, which seems cheap, but just running the simple Bonnie++ tests I consumed almost 2 million requests in less than 3 hours of instance time. If you have a high number of reads or writes, which you likely do if you’re reading this, you’ll need to factor these costs in.
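
To make the arithmetic explicit with the prices quoted above:

10 volumes x 100Gb x $0.10/GB-month = $100/month, or about $1,200/year
2,000,000 I/O requests x $0.10 per million = $0.20 for a benchmark run like mine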

The total AWS cost for running these tests was $0.71, of which $0.19 was storage related. The balance was the machine instances and bandwidth.



26 Responses to Amazon EC2 Disk Performance

  1. Ben Strawson says:

    Great to see someone taking the time to do a methodical comparison of the different setups. It would be nice to see a comparison with the results of your tests on real physical boxes, as that is what a lot of people will be comparing EC2/EBS to.

    Regarding different RAID levels, I see no reason to use anything that provides redundancy. Amazon describes EBS as “highly available, highly reliable” and also says that volumes are “automatically replicated on the backend (in a single Availability Zone)”. I take this to mean that they use RAID, so I see no reason to add anything above that. Of course you will still want to back them up – but that is what snapshots and S3 are for.

    Finally, regarding RAID 0: as you say, the costs are the same. However, taking a backup becomes a bit more difficult, as you would need to freeze the filesystem on your volume (e.g. run xfs_freeze) and then snapshot each of the EBS volumes.

  2. Erik says:

    Thanks Ben. I was curious myself how it related to comparable hardware systems. I don’t have any great data, which is why I didn’t include it here, but I can tell you a single block storage device is roughly equivalent to what comes bundled in a GoDaddy dedicated server with a single drive, or an entry-level server from Peer1, which I believe are SATA.

    Regarding the redundancy: if zone failures are a potential problem, then spreading a RAID 10 across multiple zones might prove beneficial, but I don’t think EBS has been around long enough to know for sure what types of failures we can expect, or whether placing volumes in us-east-1a, us-east-1b and us-east-1c is sufficiently independent that you’d be unaffected by a localized failure.

  3. Pingback: BotchagalupeMarks for March 1st - 09:46 | IT Management and Cloud Blog

  4. Pingback: Amazon EC2 Volume Types and Performance Testing

  5. GR says:

    Perhaps you would also want to test using the Solaris AMIs and a ZFS raid across multiple EBS volumes. I would be very interested in the results of this test. This also has the benefit of not requiring a freeze of any kind on the ZFS pool when you want to take snapshots of the entire raid array. (ZFS snapshots happen in constant time and are almost instantaneous.)

  6. It might be important to note that EBS has billed itself as really starting to show its strengths when you get into more random I/O workloads: Bonnie++ is a sequential benchmark.

    There is a post in the AWS forums from an Amazon engineer advising someone to use the ephemeral storage over EBS in a highly sequential workload, and anecdotal evidence from various people has confirmed that to me, but I don’t have actual numbers. “A whole shitload faster” isn’t very scientific 😉

  7. Pingback: Cloud Computing Links March 10, 2009 at Cloud Curious

  8. The graph title says “(Measured in Bytes)”. That would mean EBS performs at less than 100 kB/s. Terrible! 😉

  9. Erik says:

    @GR I’ve been curious about ZFS… I’ll put that on my list of things to check out. As I’m new to EC2, I haven’t spent much time playing with the different AMIs out there and have trended towards what I’m familiar with: CentOS, Ubuntu and Fedora.

  10. Mullany says:

    Amazon’s published reliability guidelines for EBS are the following:


    “As an example, volumes that operate with 20 GB or less of modified data since their most recent Amazon EBS snapshot can expect an annual failure rate (AFR) of between 0.1% – 0.5%, where failure refers to a complete loss of the volume.”

  11. Great article, I really enjoyed it. I recently built out our infrastructure on Amazon EC2. I am going to do a bit of experimentation with our database servers running on RAID. I sure wish Amazon would add metadata to the volumes so I could make heads or tails out of the names they use. ElasticFox has the metadata but I want something more.

  12. Erik says:

    @brandon – I know what you mean! It would be nice if we could assign some descriptor so it’s easier to make heads or tails of the different drives and instances. I just started using ElasticFox the other day and find it handy because it lets me visually manage more than one set of keys, so I can manage my clients’ servers too.

  13. Pingback: Bookmarks for Monday, March 30th — Trevor Fitzgerald

  14. Neil says:

    I ran some tests on one of our servers with bonnie++.

    The server is a Dell Poweredge SC1435 with 2x 15K RPM SAS Hard Drives in RAID 0. OS is Ubuntu 2.6.27-7-server. Filesystem is ext3.

    The version of bonnie++ is 1.03c, with the default options.

    Results are as follows:

    Output per char: 52,963 K/sec
    Output block: 61,226 K/sec
    Output rewrite: 56,688 K/sec
    Input per char: 52,033 K/sec
    Input block: 228,544 K/sec
    Random seeks: 687/sec

    Your results imply that EBS is many thousands of times worse. Are your results definitely in bytes / sec as the graph implies, not K/sec?

  15. Erik says:


    You are correct, the numbers are in kilobytes, I just haven’t had a chance to update the graph. Thank you for the comparison numbers.


  16. Pingback: Amazon Elastic Block Store Geschwindigkeitsbenchmark | Server in den Wolken

  17. Pingback: Amazon EC2 Disk Performance and Why RAID 10 is bad for EBS

  18. Linto says:

    Does hard drive throughput change depending on instance size (large, x-large, etc.)?

  19. Erik says:

    Yes, performance does appear to change based on the instance size. Amazon used to actually call this out on their service descriptions. You can see that exact question answered here

  20. Pingback: Freelance CTO – John Shiple | Amazon EC2 Disk Performance and Why RAID 10 is bad for EBS

  21. Zen says:

    Hello Erik, I’m playing with EBS atm, and I was able to attach 16 drives like so:

    /dev/sdf1 – /dev/sdf8
    /dev/sdh1 – /dev/sdh8

    Seems like it can be more than 11TB per instance. Do you know what is the limit of EBS volumes I can attach this way and is it correct to do it like this?

  22. Erik says:

    Zen, I’ve never needed to go to that many volumes, but yes, your approach looks correct.

  23. NAV says:

    I am trying to do this on this AMI (ami-41814f28), but I get an error when I try reassembling the RAID array.

    /sbin/mdadm --assemble --verbose /dev/md0 /dev/sdj /dev/sdk /dev/sdl
    mdadm: looking for devices for /dev/md0
    mdadm: cannot open device /dev/sdj: Device or resource busy
    mdadm: /dev/sdj has no superblock - assembly aborted

    Any help is appreciated.


  24. Pingback: Amazon EC2 Disk Performence - Water is…..

  25. Pingback: When the Cloud Fails ... | Bogdan Bocse

  26. Pingback: Easy RAIDer « Coding Fit