
This has been a long time coming, but AWS has consistently been improving their service (as long as you can ignore the particularly bad reliability as of late).

It's telling that they have only enabled this for a huge (quadruple extra large) instance type. It's probably hard to make this work for someone who just wants a 10GB disk with great IO. The problem at the low end is that the physical disks are large and would have to be divided up among many tenants to make proper use of them, leading to IO contention.

The high IO options will probably only ever be available for pretty large instances.



It's telling that they have only enabled this for a huge (quadruple extra large) instance type.

My guess (not based on any knowledge of EC2 internals) is that they don't have any way to do fair I/O sharing between guests. If they did, they could split these boxes into 32 small instances with 1 ECU, 1.7 GB RAM, and a 60 GB disk with 2500 random reads / 250-4000 random writes per second.
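The comment's per-guest figures imply a straightforward 32-way division of the full box. A toy sketch of that arithmetic, where the full-box totals are back-solved from the per-guest numbers above (2,500 reads, 250-4,000 writes) and are not official AWS specs:

```python
# Toy arithmetic for splitting one full box into 32 equal guests.
# Full-box totals are back-solved from the comment's per-guest figures,
# not taken from any AWS published spec.

GUESTS = 32

full_box = {
    "ecu": 32,                       # assumed: 1 ECU per guest * 32
    "ram_gb": 54.4,                  # assumed: 1.7 GB per guest * 32
    "disk_gb": 1920,                 # assumed: 60 GB per guest * 32
    "read_iops": 80_000,             # back-solved: 2,500 per guest * 32
    "write_iops": (8_000, 128_000),  # back-solved: 250-4,000 per guest * 32
}

def per_guest(total, n=GUESTS):
    """Divide a full-box resource evenly among n guests."""
    if isinstance(total, tuple):
        return tuple(t // n for t in total)
    return total / n

shares = {k: per_guest(v) for k, v in full_box.items()}
print(shares["read_iops"])   # 2500.0
print(shares["write_iops"])  # (250, 4000)
```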


Xen offers easy ways of doing fair I/O sharing between guests. These servers they're using are most likely multi-tenant systems with 256-512GB of RAM and 6-12TB of SSD storage. Providers don't like keeping expensive systems around that aren't making money, especially when demand changes every hour, so I expect that they have at least 4 instances sharing the I/O of each host (especially when they mention broad ranges of expected I/O).
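"Fair sharing" in this context usually means proportional shares: each guest gets bandwidth in proportion to an assigned weight. A minimal sketch of that arithmetic, not of Xen's actual I/O scheduling machinery, with hypothetical tenant weights:

```python
# Minimal proportional-share sketch: each guest receives IOPS in
# proportion to its weight. This illustrates the idea of fair I/O
# sharing between guests, not Xen's real implementation.

def fair_shares(total_iops: int, weights: dict) -> dict:
    """Split total_iops among guests in proportion to their weights."""
    total_weight = sum(weights.values())
    return {guest: total_iops * w // total_weight
            for guest, w in weights.items()}

# Hypothetical host: 120,000 IOPS shared by four tenants, one of which
# has double weight (e.g. paying for a larger instance).
print(fair_shares(120_000, {"a": 1, "b": 1, "c": 1, "d": 2}))
# {'a': 24000, 'b': 24000, 'c': 24000, 'd': 48000}
```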

The most likely reason for not slicing these systems up into smaller instances is that they want to maintain consistent, high-performance I/O.


AFAIK, the largest tier in any AWS instance type has always been the full box, i.e. an m1.xlarge is the whole box, an m2.4xlarge is a whole box, etc.


I would agree with you, but them listing such broad write IOPS ranges makes me think otherwise. I could be wrong though.


There's a technical reason for the range, explained in the blog post:

> Why the range? Write IOPS performance to an SSD is dependent on something called the LBA (Logical Block Addressing) span. As the number of writes to diverse locations grows, more time must be spent updating the associated metadata. This is (very roughly speaking) the SSD equivalent of seek time for a rotating device, and represents per-operation overhead.
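The quoted effect can be illustrated with a toy model: the SSD keeps a logical-to-physical mapping table, and each write dirties the metadata page covering its LBA. Writes confined to a narrow span keep hitting the same few pages, while writes spread over a wide span touch a distinct page almost every time. All parameters below are made up for illustration, not real FTL internals:

```python
import random

# Toy model of the LBA-span effect: count how many distinct mapping
# (metadata) pages a sequence of random writes touches. Narrow spans
# reuse a few pages; wide spans dirty nearly one page per write.
# ENTRIES_PER_MAP_PAGE is a hypothetical figure, not a real drive's.

ENTRIES_PER_MAP_PAGE = 1024

def dirty_map_pages(lbas):
    """Count distinct mapping pages touched by a sequence of LBA writes."""
    return len({lba // ENTRIES_PER_MAP_PAGE for lba in lbas})

random.seed(0)
narrow = [random.randrange(0, 8 * 1024) for _ in range(10_000)]
wide = [random.randrange(0, 256 * 1024 * 1024) for _ in range(10_000)]

print(dirty_map_pages(narrow))  # only 8 mapping pages exist in an 8K-LBA span
print(dirty_map_pages(wide))    # close to one distinct page per write
```

More distinct metadata pages per write means more per-operation overhead, which is (very roughly) why the advertised write IOPS is a range rather than a single number.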


It seems like they could spread a bunch of smaller instances over an array of SSDs, maybe just offer less space at a higher price? SSDs are naturally better at concurrent access, so while you might not get 150k+ IOPS, you could get at least 10k or so, with the expected low random access times. Imagine an SSD-backed EBS.



