I stand corrected - though, as another reply said, it makes little difference if you can't actually use a forked server in practice.
I don't know what I could say to convince you I'm just an ordinary person concerned about my privacy, but ultimately it doesn't matter: you should definitely consider the possibility that I'm a bad actor and take nothing on faith. Equally, you shouldn't trust that Marlinspike hasn't been compromised either.
A little thought experiment: Put yourself in the NSA's position in 2013. GPG has been out there for years and, despite your best efforts, you can't break it directly when users follow proper security practices. (You have to compromise those users' computers instead, and that's vastly more expensive; every time you use one of your rootkits or exploits you run the risk of burning it, so they're reserved for high-value targets). The world is suddenly a lot more interested in privacy, and while popular culture doesn't grasp the intricacies of key exchange or forward secrecy, there are enough cryptography experts around that any obvious downgrade from GPG will be noticed and picked up on (this is just after the conclusive failure of your Dual_EC_DRBG efforts). What do you do? How do you get the public to accept something easier to compromise?
My answer is: you find a different front to attack GPG from. You talk up different kinds of attackers. You dangle a new, desirable security property that GPG doesn't have, and a theoretically clean construction - and then you compromise the metadata subtly, down in the weeds of usability features, letting you identify the higher-value targets. You get people used to using a closed-source build that auto-updates, and have a canned exploit ready (a compromised PRNG or similar) to use on those targets. And you get people to enter their phone numbers so that you can always track their location and what hardware they're running if you do have to attack their device more directly.
Maybe I'm being paranoid, but it seems distinctly odd that we see such a push behind an app that abandons so many features previously thought essential to security, just as the push for encryption is finally gaining momentum.
GPG has an infinitesimally small user base. Many tech-savvy users still struggle to use it correctly. Moxie has explicitly stated that his aim is not to build the perfect secure messenger app, but a messenger app that provides the greatest amount of security to the greatest number of users. He has acknowledged making design decisions that slightly compromise the ultimate security of Signal, but that are necessary to establish a wide user base and to avoid traps where user error could drastically undermine the security model.
Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.
> GPG has an infinitesimally small user base. Many tech-savvy users still struggle to use it correctly. Moxie has explicitly stated that his aim is not to build the perfect secure messenger app, but a messenger app that provides the greatest amount of security to the greatest number of users.
But what's the threat model where Signal makes sense? For a less-than-nation-state attacker, basic TLS as virtually all messengers support is surely adequate. For a nation-state attacker, phone-number-as-ID is a bigger vulnerability than anything Signal helps with, and central servers mean that Signal can simply be blocked outright in any case. If we're talking about, say, Turkey cracking down on protesters, they would probably rather those protesters were using Signal (where arresting one means you get the phone numbers - and therefore locations - of all their friends) than the likes of Facebook or Discord or what-have-you.
> Signal is not designed for you. Highly sophisticated, highly paranoid users already have a variety of options for securing their communications. Signal is designed to provide the greatest possible amount of security to the greatest possible number of users, which necessarily requires that some tradeoffs are made in the interests of ease-of-use.
I'd be fine with that if Marlinspike didn't also trash-talk those more secure tools.
There are nation-state attackers and nation-state attackers. Most oppressive regimes don't have the technological resources to perform complex attacks on relatively tough systems, but they can perform pervasive monitoring on layer 1, use dodgy certificates to undermine TLS, and bribe or coerce corporate actors. Most people in functioning democracies aren't particularly worried about becoming the target of the full might of a three-letter agency, but they might be worried about bulk collection via layer-1 taps or PRISM intercepts.
Signal is a vast improvement over SMS, plaintext email or any commercial messaging application, but it's no more difficult to use. It's relatively foolproof, in that user error can't fatally undermine the security model in most cases. It's not perfect, but it's easily the most secure chat app that I could confidently persuade non-techies to actually use. A highly secure app that you don't know how to use offers you no security at all.
> they can perform pervasive monitoring on layer 1
Indeed (particularly as telecoms are often state-owned in those regimes), which is what makes phone-number-as-ID such a bad idea.
> use dodgy certificates to undermine TLS
Difficult in these days of certificate transparency and HPKP.
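For context on that claim: certificate transparency forces CAs to publish issued certificates in auditable logs, and HPKP lets a site pin the hashes of its own public keys so a browser rejects certificates chaining through any other key, even one signed by a coerced CA. A minimal sketch of how a pin is derived (the header shown in the comments is illustrative, not from any real site):

```shell
# HPKP worked by having the site send a header of roughly this shape
# (pin values here are placeholders):
#   Public-Key-Pins: pin-sha256="<base64 SPKI hash>"; max-age=5184000; includeSubDomains
# Each pin is the base64-encoded SHA-256 of the certificate's Subject
# Public Key Info (SPKI). Compute one for a freshly generated key,
# no network access required:
openssl genrsa -out /tmp/demo-key.pem 2048 2>/dev/null
openssl rsa -in /tmp/demo-key.pem -pubout -outform der 2>/dev/null \
  | openssl dgst -sha256 -binary \
  | base64
```

Because the pin covers the key rather than the certificate, a dodgy certificate from a compromised CA fails the pin check even though it would otherwise validate.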
> bribe or coerce corporate actors
If that's your worry surely you want to rely on a big corporation rather than Signal. Look at e.g. Brazil having to block WhatsApp entirely because Facebook wouldn't play ball with them. Facebook has deep pockets that mean they can afford to do that kind of thing.
> Signal is a vast improvement over SMS, plaintext email
Agreed
> or any commercial messaging application,
Not convinced that there's a significant improvement here. Plenty of commercial messaging applications have encryption. If the server is under an attacker's control then you're vulnerable, and I'm not convinced Signal is immune to that either.
The threat model is "the backend server has a security flaw and gets exploited, dumping a bunch of information about my chats" or "the backend server is run by a company that wants to use the contents of my messages for analytics and I don't want that" or "a rogue employee with access to databases but not enough access to ship rogue code wants to read my messages".