FWIW this came after a long, long period of the DNSOP working group throwing hate and venom at this kind of hitherto non-normative DNS use case. This was a very hot topic, and for a long time it seemed like absolutely no progress was going to be made, due to the DNSOP WG.
I don't have good citations (I would love to gather them again), but it was only after other IETF members started questioning the absurd rejectionist behavior of the DNSOP working group that this changed, and the .onion draft was allowed to get due status.
I can't think of any other case in IETF history of a draft being so hotly denied and rejected. Very interesting story; I look forward to someone telling it better than I can.
Man, sometimes I wonder if in 20-30 years there will be anyone left who is this passionate about such seemingly obscure topics. The Internet owes much of its success to these people.
Of course. In Japan for example if you can name a topic or activity, no matter how obscure, there will be an otaku (enthusiast) or even a club of otaku for it. What makes you think there will be less interest in obscure topics in the future?
And here in the West everything has a subreddit. I can only imagine the future holds even better, resilient ways for obscure topics to gather people like distributed forums and such.
Much of the time the people on IETF WGs have been on them for years through multiple companies. There are exceptions, but often this rejectionist-type behavior isn't because of some kind of corporate influence, it is because the new thing doesn't match what these people think should be done.
Also, they were originally trying to get several already-in-use pseudo-TLDs recognized, including .onion, .i2p, .bit and .gns, but they had to compromise.
About time (even though it was in 2015). I was afraid that someone was going to buy .onion and use it to deanonymize people who thought they were on Tor when they were not.
THIS is the important point of this post. It might be "cool" that they now recognize the TLD, but I can't imagine the chaos it would cause if .onion were purchased.
There were plans for that contingency. Tor would have moved to some other TLD (e.g. .onionland). The Tor Browser Bundle would update bookmarks and probably issue a warning for .onion addresses. An annoyance, but not chaos.
DDG themselves used to link to that URL in the instant answer box if you searched for "duckduckgo onion", and I don't think anyone who isn't familiar with TLDs and Tor would realise what is happening there.
Many web browsers have predictive services built-in to the address bar. Some are even enabled by default. Google has undoubtedly seen .onion addresses submitted.
The idea that we're treating name resolution differently in different applications is... crazy. I can't even begin to imagine all the ways that is going to fail.
The idea that users would have to be careful not to enter a .onion domain into non-Tor software is also crazy. Of course they're going to do this, and they're going to do it often.
And relying on applications not to publicly ask for resolution of .onion domains is laughable. It's going to happen a lot.
As others have noted, they should have implemented new protocol names instead. All existing software that doesn't follow these rules would simply do nothing (or return an error, as is appropriate) and only applications with actual support would attempt to do anything.
There exist several other, older top-level domains which should never be publicly resolved. .localhost, .example, .invalid, and .test are a few domains which most resolvers (and even operating systems) know not to try to resolve. .local is a newer one, which zeroconf is primarily responsible for resolving.
Going to the extreme of having no such reserved domains would be crazy, and way too much existing software would break spectacularly if anyone tried. RFC 6761 is simply an update to RFC 2606, extending the concept of reserved names like .localhost to names like .local or .onion.
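In practice, the "know not to try to resolve" behavior amounts to a membership check on the rightmost label before any query leaves the host. A minimal sketch (the function name and TLD set here are illustrative, not taken from any real resolver):

```python
# Special-use TLDs a stub resolver should never forward to public DNS
# (drawn from RFC 2606, RFC 6761, RFC 6762, and RFC 7686).
SPECIAL_USE_TLDS = {"localhost", "example", "invalid", "test", "local", "onion"}

def should_resolve_publicly(hostname: str) -> bool:
    """Return False for names that must be handled locally or rejected,
    never sent to the public DNS hierarchy."""
    labels = hostname.rstrip(".").lower().split(".")
    return labels[-1] not in SPECIAL_USE_TLDS

assert should_resolve_publicly("example.com")
assert not should_resolve_publicly("duckduckgo.onion")
assert not should_resolve_publicly("printer.local")
```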
Of course this sounds like the right engineering approach, but perhaps fitting that into the browser use case was not possible in a manner suitable for the Tor project? After all, the primary interface to the Tor network for most users is their browser.
[edit] meant to add: of course you could add a new protocol, but then that would imply browser modifications, right?
Bundled with the .onion application were originally .i2p and .gnu applications as well.
Only after the Tor project unbundled the non .onion TLDs was any progress made.
After the .onion TLD had been accepted, the process by which these special (reserved) TLDs were registered at the IETF was removed.
The IETF bent over backwards not to include anyone, and when they did, they only allowed a single TLD to be reserved before closing the special (reserved) TLD application process.
Mind you that .gnu and .i2p applied on the exact same grounds and even on the exact same application originally.
I have never been in contact with a more dysfunctional and bureaucratic organization.
Abstractly, something on layer 7 of the OSI model shouldn't care exactly how the data is being routed, but the security considerations of using Tor break the abstraction. A naïve client could be made to leak identifying information through any number of side channels. Since the lower layers can't have a whitelist of acceptable applications allowed to connect to .onion addresses, the applications themselves must voluntarily refrain from connecting.
This is like the prevailing wisdom about checking malloc's return value, not storing plaintext passwords in the database, not rolling your own crypto, etc., except that in this case it actually got included in the standard. Unfortunately, like all those other things, there's nothing that actually prevents a programmer from breaking the rules.
Only applications that actually access URLs would need to generate an error. In practice, your code doesn't need to know about .onion domains - but network libraries like libcurl do.
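As a sketch of what that rule looks like inside a hypothetical URL-fetching library (the `fetch` function and `OnionLeakError` exception are made up for illustration; the check follows RFC 7686's SHOULD-generate-an-error requirement):

```python
from urllib.parse import urlsplit

class OnionLeakError(Exception):
    """Raised instead of performing a DNS lookup, per RFC 7686 section 2."""

def fetch(url: str):
    # A library that does not speak Tor SHOULD error on .onion names
    # rather than resolve them through public DNS.
    host = (urlsplit(url).hostname or "").lower().rstrip(".")
    if host == "onion" or host.endswith(".onion"):
        raise OnionLeakError(f"refusing to resolve {host} via public DNS")
    ...  # normal resolution and request would happen here
```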
What is this nonsense? A domain resolves to an IP address (and some other things). DNS does not and should not care about what protocols are going to be used with said IP address.
I can use an .onion domain for unencrypted Telnet as root with no password if I want. It's stupid but that's not something that should be restricted at a DNS level.
At first I agreed with you, but realized that my preferred solution was essentially what they recommended and just with different wording. My thought process was:
- Absolutely, DNS resolvers should not care or have knowledge of the protocol that will be used to access that address.
- What they *should* do is just say that normal DNS resolvers shouldn't ever resolve .onion addresses.
- (And then Tor should include a special DNS resolver that does anyway.)
- Oh, that's compatible with what they said.
I think some of the confusion comes from their use of "applications".
> Tor should include a special DNS resolver that does anyway
Would be pointless, given that the spec says:
> Applications that do not implement the Tor protocol SHOULD generate an error upon the use of .onion and SHOULD NOT perform a DNS lookup.
So according to this spec, even if you did implement a special DNS resolver, only Tor-aware applications would be able to use it, and that's pointless since Tor-aware applications can connect to `.onion` services without using DNS at all.
I'd think, for the most part, it would act like a typical invalid URL and throw an error. Special cases would be, say, Chrome rejecting .onions instead of doing a search.
Because https://*.onion and ftps://*.onion are different. And it lets you do things like tor2web by just appending a DNS domain to the end of the onion domain, without requiring any special client support. And more things break on unexpected protocols than unexpected names.
That makes sense. I wasn't aware of the details, but looking into it now, "onion"/hidden services are a way of establishing tor circuits which are then used normally. Thanks!
As I understand it, currently _any_ protocol can be routed to a `.onion` Tor hidden service using a SOCKS proxy. So services don't necessarily even need to be aware of Tor, and can just use it as an underlying transport, so long as they're capable of routing traffic and DNS requests through SOCKS. You wouldn't be able to do that if you just started making up protocol names like that; your FTP client doesn't know how to handle the `ftps+onion://` protocol. (But it can handle `ftps://somedomainhash.onion/` over a Tor-aware SOCKS proxy just fine.)
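The "DNS requests through SOCKS" part works because SOCKS5 (RFC 1928) lets a client hand the proxy a raw domain name instead of an IP address, so the name is never resolved locally. A minimal sketch of that wire format (the .onion hostname below is just an example):

```python
import struct

def socks5_connect_request(hostname: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request using the DOMAINNAME address
    type (RFC 1928). The hostname travels to the proxy verbatim, which
    is how a Tor-unaware app can reach .onion services through Tor's
    SOCKS port without ever touching DNS itself."""
    name = hostname.encode("ascii")
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name),
    # then a length-prefixed name and a 2-byte big-endian port.
    return struct.pack("!BBBBB", 5, 1, 0, 3, len(name)) + name + struct.pack("!H", port)

req = socks5_connect_request("somedomainhash.onion", 80)
assert req[3] == 3          # address type: domain name, not an IP
assert b".onion" in req     # the raw name is handed to the proxy
```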
This advantage is entirely negated, though, by this line in the new spec: "Applications that do not implement the Tor protocol SHOULD generate an error upon the use of .onion and SHOULD NOT perform a DNS lookup." So you definitely have a valid point. If they don't want to allow applications that don't explicitly support Tor to connect to `.onion` addresses, why not just make up new protocols that existing applications don't support?
> "Applications that do not implement the Tor protocol SHOULD generate an error upon the use of .onion and SHOULD NOT perform a DNS lookup."
The RFC is still a "Proposed Standard" and that sentence is probably a mistake.
Even applications that are designed to use Tor like the Tor Browser don't "implement the Tor protocol", tor implements the Tor protocol and other applications use tor via SOCKS. Or not even that now that there are things like torify.
The problem is there is a trade off between a) applications that know nothing of Tor leaking the onion name lookups from people who actually need anonymity into the public DNS when Tor is not installed or configured correctly, and b) breaking things for people who want the widest variety of application support via Tor, who aren't interested in anonymity and are only using it for e.g. NAT traversal. (The Tor people like to encourage the second group because more users improves overall anonymity.)
A possible solution is Tor providing a particular innocent name that will resolve via Tor but not via public DNS, and then applications that can't resolve that name should not try to resolve any other onion names, and DNS caches like dnsmasq should by default return NXDOMAIN for that name without forwarding the query.
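For the dnsmasq part specifically, refusing to forward .onion queries is a one-line config stanza; `local=/onion/` is the documented dnsmasq syntax for "answer this domain from local data only, never forward upstream", which yields NXDOMAIN when no local record exists:

```conf
# /etc/dnsmasq.conf
# Never forward .onion queries to upstream resolvers; with no local
# records configured, clients get NXDOMAIN instead of leaking the name.
local=/onion/
```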
This committee recognizes what already exists. onion://hash is not used because .onion domains are not different protocols (they're still HTTP) but are a specially allocated space of hostnames representing the "addresses" of Tor servers.
It is extremely useful for an onion address to just define a plain old TCP socket, rather than imply HTTP. An onion address is just that: an anonymous connection to some server, somewhere in the world. While you might be unaware of where the service is operated from, it could be the exact opposite: maybe you just started an ssh server on your desktop and would like to access it remotely where ssh might be monitored or blocked; despite ssh itself being encrypted, making a connection and the lifespan of the connection tell quite a lot about your habits. Or maybe you decide to visit China for a weekend. Hell, or maybe you're buried behind multiple NATs/firewalls and just need an easy way to poke in from the outside (I've done this in corporate environments where you can run your own VMs and such).
I'm not sure why so many people seem to think that I'm implying anything about HTTP. "onion" would be fine as a URI scheme I think. Different programs would interpret it differently, which is obviously bad, and using a special-use domain name lets it slot seamlessly into the usual URI system as just a different sort of host specifier, which is obviously better, lacking something like nested URIs (http://www.it.uc3m.es/muruenya/papers/nested_uris_euromicro0...), which never became a thing.
It was a long time ago that Tor chose to do so. I also think a TLD makes sense anyway. Tor is a network, and the protocol of the traffic that is relayed through it is independent of the transport.
Relevant: a draft RFC for an X.alt TLD, so in the future the next thing like X.onion would be X.onion.alt instead, with X.onion not expected to change due to backwards compatibility: https://tools.ietf.org/html/draft-wkumari-dnsop-alt-tld-00
This is really cool, and I really hope that resolvers adopt this, which basically means dropping any queries for `.onion` to avoid leaking that a client attempted to resolve such a domain.
Yep. The author, Appelbaum, is no longer with Tor after falling victim to one of the many successful culture-based attacks on privacy and free software groups in 2015/2016.
I don't know any of the people involved, but this part always struck me as being pretty odd:
====
On 10 June, Jill Bähring, a woman whom three witnesses claimed to have seen being abused,[92] flatly denied the abuse allegations.[101] In a statement released by Gizmodo journalist William Turton, Bähring wrote: "Reading this highly distorted version of my experience, which is being used as one of the 'bulletproof examples' of Jacob's alleged misbehavior, I can’t help but wonder. Wonder about all the stories that have been published the last days. Wonder not only about mob justice on twitter, caused by rumors and speculation, but also about the accounts repeated by those who call themselves journalists. Wonder about how many other stories have been willingly misinterpreted. Wonder about the witnesses in all these stories, who coincidentally always seem to consist of the same set of people. Wonder about their motive to speak on my behalf without my consent."[102][103]
It has been theorized that a state actor orchestrated an attack on Appelbaum to oust him from the Tor project (1) to discredit him as an authoritative journalistic source [1: Appelbaum was one of the individuals given access to the Snowden documents]; and (2) to gain more control of the Tor project to introduce vulnerabilities.
The events leading up to his ejection had a striking resemblance to the JTRIG manipulation tactics.
It's very possible that Appelbaum was a bad actor, exploiting his credibility, but it's also highly suspect that these accusations came to light so late. It seems that, prior to the release of the accusations, someone would have at least suggested that he was sexually exploitative. Yet there doesn't seem to exist even a peep of anything close to misconduct.
It's why these manipulation tactics are so surprisingly effective. Who should be believed? There's a strong moral dogma that suggests we should immediately assume that the accusations are the truth.
I don't know Mr Appelbaum at all, but I do know trial by Twitter, mob justice, unsubstantiated claims, and authority claiming to have secret evidence and pronouncing someone guilty when I see them. I don't find any of that at all helpful in assessing someone's guilt or innocence.
I don't have an opinion on guilt or innocence in the absence of evidence at all. I do note that at least some of the unsubstantiated accusations do seem to now have found substance as being made without the alleged victim's knowledge or consent and the alleged victim has denied the accusation in its entirety. The second time that happened I felt there is something not quite right going on. There's a distinct smell. Just what that is and why is a matter for speculation.