Hacker News

For background: in their Heartbleed announcement, Akamai said they didn't think any sensitive information[0] had been disclosed by the bug, because they had wrapped OpenSSL's malloc to create two heaps: a secure heap for sensitive data like keys, and an ordinary heap for everything else:

https://blogs.akamai.com/2014/04/heartbleed-update.html

It generated a lot of interest in their solution, which prompted them to release the patch.

edit: [0] by "sensitive information" we mean keys; if I understand the implementation correctly, you could still overrun into web server allocations and get back HTTP headers.



We definitely leaked session cookies and passwords, until we patched. This only protected SSL private keys.


You could do better with a probabilistically secure allocator instead: http://people.cs.umass.edu/~emery/pubs/ccs03-novark.pdf

Randomized allocation makes it nearly impossible to forge pointers or locate sensitive data in the heap, and makes reuse unpredictable.

This is strictly more powerful than ASLR, which does nothing to prevent Heartbleed. Moving the base of the heap doesn't change the relative addresses of heap objects under a deterministic allocator. A randomized allocator does change these offsets, which makes it nearly impossible to exploit a heap buffer overrun (and quite a few other heap errors).


That paper only seems to mention heap overflows for purposes of writing to a target object that will later be used for indirection or execution. I don't see how it makes Heartbleed any better to extract a shuffled heap instead of a sorted one. What am I missing?


It's not just a shuffled heap, it's also sparse. Section 4.1 covers heap overflow attacks, with an attacker using overflows from one object to overwrite entries in a nearby object's vtable. Because the objects could be anywhere in the sparse virtual address space, the probability of overwriting the desired object is very low (see section 6.2).

The same reasoning applies to reads. If sensitive objects are distributed throughout the sparse heap, the probability of hitting a specific sensitive object is the same as the probability of overwriting the vtable in the above attack. The probability of reading out any sensitive object depends on the number of sensitive objects and the sparsity of the heap.

There are also guard pages sprinkled throughout the sparse heap. Section 6.3.1 shows the minimum probability of a one byte overflow (read or write) hitting a guard page. This probability increases with larger objects and larger overflows. You can also increase sparsity to increase this probability, at a performance cost.


An attack that reads everything is different from an attack that writes everything; section 4.1 doesn't seem to draw that distinction. The latter will just crash the machine like some kind of Core Wars champ, while the former can copy out the whole heap. So a writing attacker has to worry about crashing the server or getting caught; a reading attacker can just loop, then run.

The guard pages help, I believe, but random guard pages just mean I won't know quite what's protected and what is not. This past week I benefited quite a bit from being able to reconstruct year-old server memory layouts precisely.

In this case, I want a marginal chance of compromise no worse than 2^-192, well beyond the ~112-bit strength of RSA-2048.


From reading that blog post it is not clear to me whether their wrapping code existed prior to the discovery of the Heartbleed bug. The post summary says:

> Akamai patched the announced Heartbleed vulnerability prior to its public announcement. We, like all users of OpenSSL, could have exposed passwords or session cookies transiting our network from August 2012 through 4 April 2014. Our custom memory allocator protected against nearly every circumstance by which Heartbleed could have leaked SSL keys.

So: did the custom memory allocator already exist in August 2012? From reading the post, this looks to be the case. Could it be that someone at Akamai took a look at the heartbeat (or other OpenSSL) code, decided that it could lead to memory leaks, and wrote their own memory allocator wrapper to guard against this?

Edit: SSL -> OpenSSL


From the original post:

> This patch is a variant of what we've been using to help protect customer keys for a decade.

So it protects keys against Heartbleed, but not other HTTP-related data (URLs, cookies, headers, etc.).


My reading is that this code already protected their private keys, but the Heartbleed bug still disclosed other private details (such as submitted user data).


Akamai's patch predates the Heartbleed issue in OpenSSL, so it wasn't a response to Heartbleed, and they were still vulnerable for non-key data.


Keys are not passwords and session cookies.



