Honesty Box: EBS Performance Revisited

As part of my work on Honesty Box, I’ve been reviewing EBS disk performance once again. This was a great opportunity to expand on the research from last year. After re-reading what I posted then, along with the wealth of data that has been compiled since, I realized I still didn’t have sufficient information to answer two key questions.

  1. How does the number of EBS volumes impact the performance of RAID 0?
  2. Does the instance size make a significant difference in RAID performance?

As before, I used Bonnie++ to measure the results. You can read about the full method I used below.

Results

  • RAID 0 performed better with an even number of EBS volumes.
  • RAID 0 performed best with 8 volumes for writes and random seek.
  • RAID 0 performed poorly for reads!
  • Larger instances perform significantly better than smaller instances.
  • The ephemeral store has very good overall performance.

Data

The titles of the Bonnie tests can be confusing for folks removed from the programming process. Be sure to read the full explanation of what each test is doing.

Sequential Output is a measure of the write performance to the drive. Higher bars are better. With RAID 0 it appears that an even number of drives performs significantly better than an odd number.

Sequential Create measures the rate at which Bonnie creates files sequentially. Higher values are better. Tests that complete too quickly return no values; that is the cause of the missing bars for Read/sec above. You can safely consider that value too fast to measure.

Sequential Input is a measure of the read performance from the drive. Higher values are better. This is concerning because block read performance declines steadily as the number of volumes increases. This may have to do with the time of day these tests were run and really warrants more investigation. It should also be noted that this is a measure of sequential performance, so unless you're reading contiguous files off the disk, this number may be irrelevant to you.

Random Create measures how quickly files are created and deleted in random order. Higher values are better. Again, tests that complete too quickly are discarded, which explains why Read/sec has no values.

Random Seeks should scale consistently with the number of EBS volumes added. Higher values are better. However, that did not appear to be the case, and a limit appeared to be reached at 8 volumes.

Effect of CPU

To test the impact of CPU capacity, I selected the 4 volume array and compared it with the tests run last year. Both used a 4 volume EBS RAID 0 with XFS file systems and the noop IO scheduler. The underlying OS did change from Fedora to Ubuntu, and a year has passed.
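For reference, the scheduler can be checked and changed per underlying device through sysfs; the device name below is illustrative and the change would be repeated for each member of the array:

    # Show the available schedulers for one EBS device (current one in brackets)
    cat /sys/block/sdf/queue/scheduler

    # Switch that device to the noop scheduler
    echo noop > /sys/block/sdf/queue/scheduler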

Sequential Output: taller is better. Clearly the additional IO capacity in the larger instance makes a big difference in the performance of the volumes. I would expect smaller increments in CPU capacity to result in smaller differences.

Sequential Input: taller is better. Clearly the m1.large instance outperforms the smaller m1.small instance.

Thoughts and Next Steps

After reviewing the performance of the native ephemeral storage, I wonder whether partitioning the ephemeral store and assembling a RAID array from it might be the best route for high speed storage. Backup would be a potential issue, of course, but XFS snapshotting may be able to mitigate that. For future tests I would like to study the impact of the -b flag, which causes Bonnie++ to flush every write to disk. I also think larger volume sets, as suggested by these tests, and different I/O schedulers may yield different results.
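As a rough sketch of what that future run might look like (the mount point and report name below are placeholders), the -b flag is simply appended to the normal invocation:

    # -b disables write buffering: an fsync() follows every write, which should
    # expose the real cost of committing data to the EBS volumes
    bonnie++ -d /mnt/raid -u nobody -m ebs-raid0-sync -b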

Method

As before, I used Bonnie++ to measure disk performance; its limitations are fairly well understood, and it gives us a metric that can be compared with other results. You can read the full explanation of what each value actually means here. Armed with 16 EBS stores mapped to an unused m1.large instance, I began running tests. The process was as follows (a rough command sketch appears after the list):

  1. Create a new RAID set using a chunk size of 256
  2. Use XFS to format the drives
  3. Mount the filesystem w/ Ubuntu defaults
  4. Capture Bonnie results
  5. Disassemble the RAID set
  6. Rinse and repeat
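A minimal sketch of one iteration, assuming four volumes attached as /dev/sdf through /dev/sdi and /mnt/raid as the mount point (device names and paths are placeholders, not necessarily the exact ones used):

    # 1. Create a new RAID 0 set with a 256 KB chunk size
    mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 \
        /dev/sdf /dev/sdg /dev/sdh /dev/sdi

    # 2. Format the array with XFS
    mkfs.xfs /dev/md0

    # 3. Mount the filesystem with the Ubuntu defaults
    mkdir -p /mnt/raid
    mount /dev/md0 /mnt/raid

    # 4. Capture the Bonnie++ results (run as a non-root user)
    bonnie++ -d /mnt/raid -u nobody -m ebs-raid0-4vol

    # 5. Disassemble the RAID set before building the next configuration
    umount /mnt/raid
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sdf /dev/sdg /dev/sdh /dev/sdi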

I did this for 2-10 volumes and then one additional test with 16 volumes. For comparison, I also ran the test with the ephemeral store and a single EBS volume. Those are the results represented in each of the graphs above. I reran the 6 volume test 3 times over the course of a day and took an average value for the graphs.
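For the repeated runs, Bonnie++ can also repeat itself and emit CSV, which makes averaging easier; the -x flag is part of Bonnie++, while the file name below is just an illustration:

    # Run the same test three times; with -x the per-run results are written as CSV
    bonnie++ -d /mnt/raid -u nobody -m ebs-raid0-6vol -x 3 > 6vol-runs.csv

    # Turn the CSV into a readable table (bon_csv2txt ships with bonnie++)
    bon_csv2txt < 6vol-runs.csv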


3 Responses to Honesty Box: EBS Performance Revisited

  1. ThePush says:

    I don’t think software RAID should be allowed on Amazon.

  2. e-zuka says:

    why do you setup RAID with “chunk size of 256”.

    is there any recommendation?

  3. Pingback: Preliminary EBS performance tests on Amazon Compute Cluster cc1.4xlarge instance types : BioTeam Inc.