> Perhaps the most famous example is figuring out passwords by measuring the time it takes for a program to reject candidate passwords. The longer rejection takes, the longer password prefix can be deduced
That's odd. Wouldn't a password check be performed on the hashed version of the plaintext password? Words that share prefixes shouldn't produce hashes with the same prefix. There shouldn't be any difference in the time it takes whether the prefix is wrong or correct.
I was really hoping this post would include a demonstration of type-level safety against such attacks given the mention in the intro. It's not obvious to me how the demo would be extended to cover such timing leaks. I suspect that even with type-level safety, you're still at the mercy of the optimizer and language runtime to maintain those security properties. Can you specify constant time codegen or other constraints with GHC?
Sorry, I didn't mean to mislead. I was exemplifying ways of subtly leaking data. In the next paragraph, I clarify what we cover with the following:
> Namely, we implement a type system that regulates explicit and implicit dataflows as well as non-termination as a covert channel.
Yes, there is work on preventing timing attacks using static types. One example is "A Hardware Design Language for Timing-Sensitive Information-Flow Security", which addresses exactly this problem. The second line of work is resource analysis: there are type systems that can specify the complexity of a program. Check out, for example, relational cost analysis [0]. This can be used for privacy and security purposes.
Needless to say, this is considerably more sophisticated than what I covered.
That codahale article you linked is surprising. Is that just a toy example or is it actually viable? I wouldn't think a "cryptographic" hash of one input sharing any prefix with another input would leak any information about the plaintext due to the avalanche effect. I thought that was possible for encrypted values but not hashed values.
chrismorgan's comment about plaintext only is what I would've assumed the attack was limited to for passwords.
It's fundamentally the same attack as the plaintext case, although here it's breaking signature verification.
Here, the computation is

    real_sig := Sign(msg)
    Cmp(real_sig, provided_sig)
For a fixed message, Sign(msg) is constant time. So, with enough attempts you can create a high-resolution timer to tease apart subtle timing differences in Cmp.
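To make the leak in Cmp concrete, here's a minimal sketch (illustrative only, not real crypto code) of the kind of naive comparison that's vulnerable: it bails out at the first mismatching byte, so a guess sharing a longer correct prefix takes measurably longer to reject. The function name is made up for illustration.

```python
def naive_cmp(real_sig: bytes, provided_sig: bytes) -> bool:
    """Byte-by-byte comparison with an early exit -- the timing leak."""
    if len(real_sig) != len(provided_sig):
        return False
    for a, b in zip(real_sig, provided_sig):
        if a != b:
            # Early return: runtime grows with the length of the
            # matching prefix, which is exactly what the attacker measures.
            return False
    return True
```

With enough repeated attempts per candidate byte, that per-iteration difference can be averaged out of the noise.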
(But this relies upon transmitting the plaintext password, you might say! If you transmit the hash directly, then you're again hosed. One option I've seen previously is transmitting a hash of the password, then hashing + salting it once more, which relies on avalanche effect as you suggest. I provide no guarantees re: the security of such an approach, though -- talk to an actual cryptographer!)
> One option I've seen previously is transmitting a hash of the password, then hashing + salting it once more, which relies on avalanche effect as you suggest.
If the client-side hash is fixed, this is no more secure than transmitting the plaintext password, since as far as the server is concerned, it is the password. You could, however, send a nonce to the client, send back H(nonce++password), calculate H(random++Hsent), and compare that to H(random++H(nonce++password)) calculated on the server.
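A rough sketch of that nonce scheme, with SHA-256 standing in for H and byte concatenation for ++ (both are my assumptions, and this also assumes the server can compute H(nonce++password), e.g. from stored material; treat it as an illustration, not vetted protocol design):

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Server issues a fresh nonce for this login attempt.
nonce = secrets.token_bytes(16)

# Client: bind the password to the nonce; only this hash is transmitted.
password = b"hunter2"
H_sent = H(nonce + password)

# Server: blind both sides with a fresh random value before comparing,
# so any timing signal from the comparison is decorrelated from H_sent
# by the avalanche effect.
random_blind = secrets.token_bytes(16)
expected = H(random_blind + H(nonce + password))
received = H(random_blind + H_sent)

# Even a naive equality check here leaks nothing useful about H_sent,
# since the attacker can't predict random_blind.
ok = (received == expected)
```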
> talk to an actual cryptographer
I am not your lawyer^Wcryptographer and this is not legal^Wcryptographic advice.
Yes. You could (and probably should) add an additional password = PBKDF(real_password) step if that's a problem (e.g. because of password reuse), but it's not an essential feature from a cryptographic perspective.
I brought it up mostly because I was hoping someone would point out how typing can easily fix such timing attacks, but yeah that was a legitimate issue. [1] shows a walkthrough of one attack.
In the particular case of password verification, I don't think it'd be that bad. It simply gives attackers more information than they should have. The issues are obvious if the hash doesn't have perfect diffusion. It also allows you to begin reconstructing the hash and performing offline attacks e.g. to avoid (already too-lax) rate limits.
One thing that bothered me from an initial skim of that medium post:
> Even if the difference is only 0.0011 nanoseconds, the test is reproducible and each gives similar results.
From the graphs, this should read 0.0011 _milliseconds_. 0.0011 ns is eons faster than any CPU clocks out there (roughly 900 GHz), and would take a heck of a lot more samples :) Meanwhile, 0.0011 ms is on the order of a microsecond, which is very doable.
When comparing hashes, you can still have timing differences. You have to actually compare the whole string, drag the comparison result along and evaluate it at the very end. And make sure the compiler is not "helping". But this would give no knowledge of how long the matching prefix is. The hash should indeed mask that.
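The "drag the result along" pattern looks roughly like this (a sketch; in Python you'd normally just use the standard library's hmac.compare_digest, and even hand-rolled versions are at the mercy of the compiler/JIT, as noted):

```python
import hmac

def ct_equal(a: bytes, b: bytes) -> bool:
    """Compare every byte and evaluate the accumulated result only at
    the end -- no early exit, so runtime doesn't depend on where the
    first mismatch occurs."""
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y  # accumulate differences; never branch on them
    return acc == 0

# Standard-library equivalent, preferred in practice:
# hmac.compare_digest(a, b)
```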