If the leaker visits this page from a regular browser to copy the onion URL before opening the Tor Browser, the whole thing is only as safe as SSL, since there will be a trail of the SSL connection just before the visit to SecureDrop. And they don't even explain how to avoid it.
(Securedrop dev here) This is a really good point. Unfortunately, we're "as safe as SSL" no matter what, unless the source has a separate way to verify the .onion address on the SSL-protected page. They can use the SecureDrop directory for that (and we're working on other schemes as well), but it's not automated so only a handful of very cautious sources would likely do this.
I'm not sure how we could explain how to avoid it - where would the explanation go? Visiting that page would be just as much of a correlation, no? It's kind of a chicken-and-egg problem, unless the source is already using Tor.
Avoiding the "trail of the SSL connection" also suggests we should be doing something to combat website fingerprinting, which we have discussed but do not have a clear solution for yet.
Our current thinking is that just visiting the landing page is not enough to prosecute a source. We can do better, and are working on it, but it's difficult.
> Include an iframe for all (or a random subset of) visitors, loading this particular url (hidden).
Or, since the content of this page is mostly text, it could be included in the HTML of all washingtonpost.com home page requests with very small overhead, and shown with a non-tracked javascript action (link/button), so it is all client-side and indistinguishable from a normal request to the home page.
Definitely! The challenge is getting the news orgs to change their entire site, which often involves a lot of complex, entrenched infrastructure and sometimes involves reluctant third parties such as ad networks.
We're working on a best practices guide for deployments [0]. I'll make sure these suggestions go in there. Feel free to take a look and comment if you're interested!
We've been working on this with some of our deployment partners for a while now :D Great idea! I didn't know anybody else did it, it's cool to hear about c't.
> I'm not sure how we could explain how to avoid it - where would the explanation go?
You could put the instructions on pages that many people visit regularly, true security through obscurity. For example, put the instructions in abbreviated form in a box in the footer of your front page (or in the footer of every page).
Print a QR Code for SecureDrop in every issue of the newspaper. Hell, feature it as part of a story announcing SecureDrop the first time you print it. Then just print it in a consistent position with minimal explanation from then on.
This may be one of the rare cases where the use of a QR Code is justified.
Only if they visit the page just beforehand. It seems plausible they would read about it, set it up, and then drop their documents at a later date as their default behavior.
I agree it would probably be a good idea to put a warning about such a problem though.
There's this hard tradeoff that most people are willing to make, between making things more 'secure' and making things usable by the general public. I just wish that more attention would be paid to the security side of things.
Ultimately, we can write descriptive documentation, but getting it read and understood is hard. Cryptoparties are, again, a great idea, but getting the non-technical user involved is damned hard.
IMHO these things always come down to "how do we make it easy for the public, whilst keeping it REALLY secure". How does security become a general piece of education, much akin to math, or at least history?
I don't see how that would help. The threat model here, the reason to use Tor, is that the site could be compromised and forced to log; through Tor, they would not know the leaker's IP.
You only need the two "leak at time X, IP Y loaded this page at time X-5" datapoints to break this.
Either you misunderstood me, or I don't quite understand how that would not help.
My suggestion is to embed an iframe to the posted URL on every page on www.washingtonpost.com. Every article, everything. I'd assume this would blast the logs enough that if you look at "time X-5" you'll have too many data points to actually make something out of it. Because everyone who reads an article on wapo will have also visited that page. So yes, that embedded page would be loaded by every single viewer of any page on washingtonpost.com.
Edit: I just realized there is a huge, unfixable flaw in this approach. The request for an article will always show up in the logs shortly before the request for the SecureDrop page. Even if you iframed a random article on the SecureDrop page too, you could see from the logs that it was loaded before the actual article, essentially rendering this whole thing useless :/
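A toy sketch of the correlation problem discussed above, with made-up log entries (the IPs, paths, and timestamps are all invented for illustration). Even if decoy iframe loads hide most /securedrop requests, a direct visit with no preceding article request stands out:

```python
from datetime import datetime, timedelta

# Toy access log: (timestamp, client_ip, path). All data is made up.
log = [
    (datetime(2014, 6, 5, 12, 0, 0), "10.0.0.1", "/article/foo"),
    (datetime(2014, 6, 5, 12, 0, 1), "10.0.0.1", "/securedrop"),  # decoy iframe load
    (datetime(2014, 6, 5, 12, 4, 0), "10.0.0.2", "/article/bar"),
    (datetime(2014, 6, 5, 12, 4, 1), "10.0.0.2", "/securedrop"),  # decoy iframe load
    (datetime(2014, 6, 5, 12, 9, 0), "10.0.0.3", "/securedrop"),  # direct visit
]

def suspicious_ips(log, window=timedelta(seconds=30)):
    """Flag IPs whose /securedrop request has no recent preceding article request."""
    flagged = []
    for ts, ip, path in log:
        if path != "/securedrop":
            continue
        covered = any(
            other_ip == ip and other_path != "/securedrop"
            and ts - window <= other_ts < ts
            for other_ts, other_ip, other_path in log
        )
        if not covered:
            flagged.append(ip)
    return flagged

print(suspicious_ips(log))  # only the direct visitor is flagged
```

This is exactly the ordering leak the edit describes: the decoy traffic only provides cover when the request pattern of a real source is indistinguishable from that of a decoy.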
(Securedrop dev here) We often suggest ideas like this to deployment operators, and others as well. For example, we encourage deployments to mirror the Tor Browser Bundle so sources don't have to go to Tor's (monitored) website to get it. We encourage them to use SSL everywhere so the "trail to the landing page" is harder to spot. We encourage the exact "hidden iframes" idea you propose here. And we encourage them to deploy on a path, not on a subdomain (because hostnames are visible even with TLS). At least WaPo is doing the last one right!
Generally, it is very difficult to convince the operators of sites like the Washington Post to do things like this, but we're working on it!
Uuuh, hi there! Thanks for the effort you all put into making leaking safer for sources.
Other possible approach: load the landing page everywhere and show it with Javascript when the user clicks their way to it. I think it's an improvement on the iframe without drawbacks. How does it sound?
It shouldn't matter where you're downloading the TBB binary, since you're going to verify the signature before trusting it, right? Surely you wouldn't just assume it was legitimate, and then install it.
How about some simple cookie tracking and an iframe that loads a random number of seconds after the page loads (say, 10-60)? That might spam the logs randomly enough that it couldn't be tracked. However, I think measures such as serving the SecureDrop page as a path on the root domain, SSL-only, would be the simplest solution in this case.
Wouldn't matter: the GET for the article from a particular IP would still show up before the GET for SecureDrop. The actual timing is irrelevant here if there's always an article visit followed by a SecureDrop request.
I guess you could randomize if you load the iframe or not. Then you couldn't be sure if a visit was an actual visit or an iframe that was randomly triggered (with a random delay).
But for this to be useful you'd still need to instruct sources to randomly browse the page before going to SecureDrop. Which might work if you force them to click a link on the main-page to get to the SecureDrop page.
But if they go directly to /securedrop it will fail again because the GET /securedrop will show up as the first request from that IP, giving away that the visit was intentional.
So my current idea would be to randomly generate the actual /securedrop path in a non-predictable manner per client. Maybe something simple like securedrop-sha1(...). Then link to that from WaPo's main page, forcing everyone to go through WaPo.com.
But then you still have the problem that you must make sure sources don't access this link from history or something.
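A minimal sketch of the per-client path idea above. The helper name, the session token, and the use of SHA-1 are all hypothetical illustration of the commenter's suggestion, not anything SecureDrop actually does:

```python
import hashlib
import secrets

def securedrop_path_for(session_id: str) -> str:
    """Derive a per-visitor SecureDrop path so a bare 'GET /securedrop'
    never appears as an obviously intentional first request in the logs."""
    digest = hashlib.sha1(session_id.encode()).hexdigest()[:16]
    return f"/securedrop-{digest}"

# Each visitor gets a fresh session identifier and therefore a unique link.
session = secrets.token_hex(16)
print(securedrop_path_for(session))
```

Note this only helps with the "first request" giveaway; as the comment points out, a source who revisits the link from browser history still produces a suspicious log entry.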
Please correct me if I'm wrong but, right now, at home, I visited that site. Hardly suspicious at all, since it's on HN front page. I could write down the .onion url on a piece of paper (or just print the page, as reference) and then later follow the instructions posted there, at a semi-anonymous Internet cafe, without having to visit that page, right?
That's like saying John Smith went to a bank and withdrew money at 1pm on Jan 1, then the bank was robbed at 1:10pm on Jan 1; therefore John Smith robbed the bank.
I don't think you can connect visiting the info page and the very next SecureDrop file upload.
The threat here isn't only proof that is acceptable in court:
* Your actions could put you on a shortlist of people to be more thoroughly investigated.
* Your actions could tip off the people whom your information threatens; maybe they stop communicating with you (or worse) to shut off the leak.
* Per the Snowden release, the NSA tracked the communications of people within something like 3 degrees of their targets. With standards that low, it's not a stretch to think someone would track everyone visiting the Washington Post's secure drop box.
That is a poor analogy for the threat. Basically, the problem is about attracting adversarial resources: any suspicious activity will attract more attention and thus make it more likely the adversary will find real evidence.
A Tor user at Harvard was successfully tracked when he sent a bomb threat, since he was the only user on the Harvard LAN using Tor at the time the threat was issued.
That wasn't proof, of course, but it didn't need to be proof, just a good lead for law enforcement to kick-start their investigation.
If memory serves, there were several people who had been or were using Tor at the time the threat was sent. When he was questioned by the police, however, he confessed.
That's possible, but it doesn't really change the point. By bootstrapping associations between identity-masking technologies and possible identities, you allow "normal" law enforcement investigative techniques to unmask the identity.
I worry that the Washington Post has unintentionally created a honeypot for leakers. I wonder if the Post has the resources to sufficiently secure it:
The requirement for security is to make successful attacks more expensive than they are worth for the attackers. (There is no perfect security, of course.)
How much is information leaked to the WP worth? It's information that can change the course of history; it could make war or peace; it could be worth billions or even trillions of dollars; it could simply change the course of the stock market or of one stock and be worth billions to an individual.
If I ran a state intelligence service, with the fate of my nation and all my citizens in my hands, I would be irresponsible not to invest in monitoring the Washington Post (and the NY Times, and others') "secure" tip line. If I ran an unscrupulous business, it would be worth it, if only for the information relevant to the stock market.
EDIT: Also, the information can change the course of elections and be a target of unscrupulous politicians.
I find it hard to believe that the Washington Post or any news organization has the resources to protect assets that valuable.
Very refreshing to see a big, red warning in the screenshot about the fact that Javascript is enabled! Usually you see the same thing when Javascript is disabled, asking you to enable it.
(SecureDrop dev here) Glad you like it! It's hard to tell people who get excited about fun UX ideas that they can't use JS, but from my experience as a browser security engineer, eliminating JavaScript (and plugins, which the TBB does already) dramatically reduces the browser's (unfortunately enormous) attack surface.
Agreed with you completely. Every time a new web app is posted to HN and it doesn't work without enabling Javascript, a small circle of security-conscious people complain about it. The responses from other people are along the lines of:
"Are there really people that browse the internet without enabling Javascript in 2014?"
"Well, 0.01% of your users have Javascript disabled, you can safely ignore them"
"Javascript is an important part of the web, if you have it disabled, you have no right to complain"
We need more people like you to advocate secure browsers without using Javascript.
This is a different deployment of the same product [1]. Which, incidentally, was originally created by Aaron Swartz. The Wikipedia page[2] has a list of well-known deployments.
Thanks for pointing that out. I just watched "The Internet's Own Boy", the documentary about Aaron, and it is positively incredible how many projects Aaron created or played a critical role in creating. An unthinkable shame that he left us so soon; one can only imagine all the things he had left to create.
Does anyone know what the codenames are like? If they are easy enough to remember, then they may be easy enough to brute-force?
I think this is a great concept, yet perhaps too little, too late (journalists should know PGP, and drop boxes like these should have been common already). I also worry a bit because of the Washington Post's track record with leaks; off the top of my head:
- Washington Post was Snowden's first choice, but they put up enough demands for Snowden to move to The Guardian. [1]
- Washington Post, according to Assange, had access to the "Collateral Murder" video a whole year before WikiLeaks published their edited video. [2]
- Washington Post employs op-ed columnists that call for assassination of "criminally dangerous" leakers like Assange [3]
Securedrop dev here. We tried to balance the memorizability of codenames (aka Diceware passphrases) with their length. The current minimum length is 8 words from a list of 6969 words, so you get math.log(6969**8, 2) ≈ 102 bits of entropy, which is quite good. Additionally, the codenames are stretched with scrypt, which affords an extra (approximately) 14 bits of entropy at our current work factor.
We are continuing to discuss and debate this trade-off. Other ideas welcome!
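As a sanity check on the numbers above (assuming the stated list size, minimum length, and scrypt work factor), the entropy works out like this:

```python
import math

# Entropy of an 8-word codename drawn from a 6969-word Diceware-style list.
WORDLIST_SIZE = 6969
NUM_WORDS = 8

codename_bits = math.log2(WORDLIST_SIZE ** NUM_WORDS)  # ~102.1 bits

# scrypt stretching at the stated work factor adds work roughly
# equivalent to 14 extra bits (about 2**14 more effort per guess).
SCRYPT_BITS = 14
effective_bits = codename_bits + SCRYPT_BITS

print(f"codename entropy: {codename_bits:.1f} bits")
print(f"effective cost vs. brute force: {effective_bits:.1f} bits")
```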
> Does anyone know what the codenames are like? If they are easy enough to remember, then they may be easy enough to brute-force?
I don't know what they're like, but if you take a list of 5000 common words and use 4 random entries for each codename, there are 625,000,000,000,000 possible combinations. Brute-forcing the entire space at 100,000 tries per second would take ~200 years.
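The arithmetic above checks out; here it is in a few lines (the 5000-word list, 4-word codenames, and 100,000 guesses per second are the parent comment's assumptions, not SecureDrop's actual parameters):

```python
# Rough brute-force estimate for 4-word codenames from a 5000-word list.
WORDS = 5000
LENGTH = 4
GUESSES_PER_SEC = 100_000

keyspace = WORDS ** LENGTH              # 625,000,000,000,000 combinations
seconds = keyspace / GUESSES_PER_SEC
years = seconds / (365 * 24 * 3600)     # ~198 years to exhaust the space

print(f"{keyspace:,} combinations, ~{years:.0f} years at {GUESSES_PER_SEC:,}/s")
```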
The wordlist is just a random sampling of English nouns (I couldn't find a quick source of common nouns long enough). It may contain profanity, watch out!
Tor hidden services are not bulletproof. Just as a really simple example, you can do network traffic analysis to find network nodes with one-way traffic to hosts without a correlated public service and deduce if a hidden service is nearby.
There are several exploits which have been used in the past to expose Tor hidden services, and several papers on theoretical ways to expose them. Many of these attacks can be used in reverse to expose the origin of a connection to a hidden service.
In the [not so] extreme case, the govt can always issue a National Security Letter to WaPo and scoop up any data it wants directly from the hidden service servers, similar to its Silk Road and Freedom Hosting takedowns.
If all Post correspondents used SecureDrop to submit their stories that would be a start.
One would have to assume that all the traffic going to the server is logged by the NSA and anyone else who can manage it. If the traffic volume is low then timing correlation with even a large pool of suspects is simple. An active attacker can differentiate between the SSL connection from a web browser and one from a tor node, so the background SSL traffic to the Post would not provide cover.
I think it could be improved by using a mix network (eg mixminion) accessed over tor, rather than just tor.
Unfortunately the mixmaster/mixminion networks are currently too small to provide meaningful complexity. Large scale adoption by, eg, newspapers, is not technically hard and would significantly complicate the adversary problem.
This is brilliant, and a smart move for the WP, despite some of the criticisms below. I think it's a much-needed, if romantic, idea that harkens back to the transparency of Wikileaks, and gives WP a nice head start over some of the other papers. I wouldn't be surprised to see the others follow suit soon.
Sometime in the near future, I predict that the US will require some form of photo ID before using an internet kiosk. As usual, the spin will be that it protects the children.
USA is pretty low on the list of countries I could imagine implementing something like this. Given Russia's, China's, and a large portion of SEA countries' internet censorship track records...
I'd put the USA pretty high on that list. They've implemented plenty of their take-downs over the past year, and are more capable of introducing something like this than any SEA state.
That's not the point at all. The USA claims to be a bastion of democracy and freedom. Therefore it has significantly higher standards to live up to than countries like Russia and China.
I have a better idea. Make it so that some traffic receives higher priority than others, and force content providers to have to pay to play. Then limit competition at the ISP level so that to succeed you have to pay a monopoly to carry your traffic in a timely manner.
No need for something as heavy as what you propose.
They've done a pretty good job of scaring people into securing their APs (which is also a legitimate thing in most cases); just publishing some stories about people having their ISP service cut off because of freeloaders doing bad stuff would probably be enough; they wouldn't even need to prosecute anyone.
>They've done a pretty good job of scaring people into securing their APs
How is this even remotely a bad thing? It's trivial to MITM people on unsecured networks - I can't think of a single consumer router that actually does DHCP snooping to prevent it either.
I think the technology confuses two things:
1. Encrypted traffic between device and wireless hotspot
2. Restricted access to the wireless hotspot (you need a password or it won't give you service)
I want to allow anonymous access, but let the traffic be encrypted. Is there a technical reason why this is not implemented?
I'm very sad by the culture (and moreso, the legal necessity) of restricting wireless access. I want to share, and have at times relied on anonymous wifi to help me get home.
You can run an access point with all the benefits of WPA2/AES, but make the password really simple. Setting your SSID to "PasswordIsBacon" or just using the same SSID and password is a fairly easy way to share access, without running a completely insecure, unencrypted network.
That's "easy to share" which is a much greater hurdle than "publicly accessible". I want strangers to be able to use my wifi in the middle of the night from outside my home. I want devices to connect without any questions or hassle.
A short walk with Wigle shows literally dozens of networks with WPS on, and usually 4-5 with WEP, plus a couple of open ones that aren't paid hotspots. WPS is a massive gaping vulnerability as long as you can stay nearby for a few hours, while WEP gives the illusion of security to clueless people but is worthless (yay RC4, worst algorithm ever).
No, for your convenience, you only need to identify yourself in the case that you exit the kiosk without using any sort of web service account that can be used to identify you ;)
Wow, Tor is still a thing? We have confirmation that security agencies have taken over exit nodes and injected spyware before to track targets. I'm surprised anyone uses it. It's like a security lottery.
The NSA leaks reveal that for the most part, Tor is still secure if you're using a sufficient number of intermediary nodes.
If anything, the real concern here is the implicit encouragement to use local library computers, which would be much easier for a government agency (or cybercriminal) to infect with malware and observe.
(Securedrop dev) That's not an implicit encouragement, though I can see how you might read it that way. Library computers, in my experience, typically do not allow you to install software on them, such as the Tor Browser Bundle, which is needed to access SecureDrop.
The explicit encouragement that is clearly written on the landing page is to use a personal computer (not a work computer) and a public network (e.g. a coffee shop).
“The American Library Association (ALA) opposes any use of governmental power to suppress the free and open exchange of knowledge and information or to intimidate individuals exercising free inquiry…ALA considers that sections of the USA PATRIOT ACT are a present danger to the constitutional rights and privacy rights of library users.”
Tor isn't some magic wand you can wave to get security, but it helps. The core Tor software's job is to conceal your identity from your recipient, and to conceal your recipient and your content from observers on your end. By itself, Tor does not protect the actual communications content once it leaves the Tor network. This can make it useful against some forms of metadata analysis, but this also means Tor is best used in combination with other tools.

https://blog.torproject.org/blog/prism-vs-tor
OPSEC is hard.