The thing I like least about DNS is that you are always at the mercy of a lot of other parties. You can never really own a domain, only rent it. You depend on the owner of the TLD you use to keep honoring your contract, and they depend on the owner of the root zone. Prices can simply rise, because changing domains involves a lot of friction. Through no fault of your own you can lose access to a domain (see .ga, .af). And that is especially problematic in systems that use the domain as an identifier (see ActivityPub). I wish there were a better way, but I can't think of one.
Property is dependent on your being able to defend it, which, if you aren't a state, you delegate to the state where you possess the thing. This includes your body, your mind, and your life.
Losing a couple percent of cash every year to inflation is a lot more predictable than losing a domain suddenly and having someone else put up a malware-infested copy of my own site taken from an archive.
I think it's a fine compromise. You own your domain inasmuch as you can own a home. With both, you pay protection money to a central authority who ensures you're allowed to keep owning them.
I would be curious to know what Paul Vixie thinks of things like DKIM and DMARC. Both of them turn DNS into a database of convenience for every major email provider on the planet, while neither seems to make any useful impact on spam itself, as the former is as often ignored as the latter is misconfigured. For relaying to Google, one must take a prescriptive and religious approach, accepting both of these gods as a precondition of delivery.
DKIM and DMARC are not intended to address spam (a flood of low-quality commercial and scam emails).
They are intended, along with SPF, to address impersonation, which is a distinct problem (although some spam does employ impersonation).
They work just as well for minor email providers as they do for major ones. Even better, in some cases, because many small email providers are corporate IT departments, which tend to filter aggressively on those signals.
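To make that concrete, here is a minimal sketch (assuming the dnspython package; example.com stands in for a real domain) of the records a receiving server consults. The SPF and DMARC policies are just TXT records the domain owner publishes:

    import dns.resolver

    def txt_strings(name):
        # Collect all TXT strings at a name; NoAnswer/NXDOMAIN mean "no policy published".
        try:
            return [b"".join(r.strings).decode() for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []

    domain = "example.com"  # stand-in domain
    spf = [s for s in txt_strings(domain) if s.startswith("v=spf1")]
    dmarc = [s for s in txt_strings("_dmarc." + domain) if s.startswith("v=DMARC1")]
    print("SPF:  ", spf)    # e.g. ['v=spf1 include:_spf.example.com -all']
    print("DMARC:", dmarc)  # e.g. ['v=DMARC1; p=reject; rua=mailto:reports@example.com']

Note that neither record says anything about the content of the mail; they only let a receiver check that the sending host and signature are authorized for the domain, which is the impersonation problem, not the spam problem.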
> DKIM and DMARC are not intended to address spam […]
> They are intended, along with SPF, to address impersonation, which is a distinct problem (although some spam does employ impersonation).
I would wager that the vast majority of true spam[0] currently being sent, based on what my mail server sees, involves impersonation, because it is mostly messages that are not intended to be replied to by mail. You are in most cases instead told to go to a website or call a phone number if it is a “business opportunity”, or to send money to a crypto-wallet in the case of a scam.
This means that DKIM and SPF are increasingly effectively an anti-spam measure even if you argue that this is not their primary direct intent, because of the large overlap.
I would argue that spam is one of their primary targets: junk mailers started faking addresses from real domains, where they would otherwise have just included a completely fake address, in response to recipient mail servers discarding mail that didn't appear to come from at least a valid domain. Some junk mail uses impersonation more deliberately, to convince the end recipient that it comes from the impersonated individual/group, but much of it does so just to convince the mail server to actually deliver the message.
Yes, some genuine junk doesn't use impersonation (hosting providers that buy mailing lists from other hosts as they close down, other businesses doing the same, and damned recruiters, to name a few) – but not the majority IME.
--
[0] In “true” spam, I'm discounting newsletters and mailing lists that people forgot they signed up for, or signed up for to get a discount but don't really want, etc.
The inventors of DNS originally envisioned a lot more email-related “catalog of convenience” data in the DNS. Go look up the MD, MF, MAILA, MB, MG, MR, MINFO, and MAILB records in the original spec[0].
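A quick way to poke at those fossils (a sketch, assuming dnspython; nearly every modern zone will answer NODATA, and MAILA/MAILB were query-only meta-types that most resolvers won't even accept anymore):

    import dns.resolver

    for rdtype in ("MB", "MG", "MR", "MINFO"):  # obsolete mailbox RR types from RFC 1035
        try:
            answer = dns.resolver.resolve("example.com", rdtype)
            print(rdtype, [r.to_text() for r in answer])
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(rdtype, "-> no data, as expected these days")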
Better a silly underscore subdomain than a plethora of TXT records, which was once a real problem, and threatened to overwhelm the apex domain. But yes, the abandonment of the SPF record type was disappointing.
It was clear that the dedicated record type significantly hindered deployment, which is why it was abandoned. If initial implementations hadn't been so strict about enforcing a small subset of types, things might be different.
Also, the web ‘public suffix list’ should be a DNS property instead.
Unfortunately the public suffix list has serious security implications, so it isn't safe to distribute it via unauthenticated DNS.
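To make the stakes concrete: the PSL defines the registrable-domain boundary that browsers use for cookie scoping, so a tampered copy moves that boundary. A sketch using the tldextract package (a third-party library that bundles a vetted PSL copy):

    import tldextract  # ships with a bundled copy of the public suffix list

    for host in ("forums.bbc.co.uk", "user.github.io"):
        ext = tldextract.extract(host)
        # If an attacker could strip "github.io" from the list in transit,
        # evil.github.io could set cookies for every other github.io site.
        print(host, "-> public suffix:", ext.suffix,
              "| registrable domain:", ext.domain + "." + ext.suffix)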
It also has very serious availability implications: if you sign it with the WebPKI-maximum validity of 90 days and those signatures somehow expire, you either get major vulnerabilities if you fail open, or an entire TLD (like ".com") breaks if you fail closed. You get to play this game every 90 days. Sounds like fun.
https://www.digicert.com/blog/chromes-proposed-90-day-certif...
This is the fundamental problem with trying to have DNSSEC replace DNS (instead of augmenting it): signature revocation is Byzantine-Generals-complete, which means that you either sign something forever (like ssh keys) or for a fixed time period and treat nonrenewal as revocation. Nonrenewal-as-revocation sorta works for signing single domains; if one website's cert expires it isn't the end of the world. When you start trying to sign larger and larger fractions of the Domain Name space, moving up to ccTLDs and then the root, the stakes are just way too high.
This is why DNSSEC will never displace unauthenticated DNS.
The choices are to stick with unauthenticated DNS, or maintain both systems in parallel forever. There is no third option where unsigned DNS records just go away.
> This is the fundamental problem with trying to have DNSSEC replace DNS (instead of augmenting it):
DNSSEC does not replace DNS. DNSSEC augments DNS.
> [stuff about revocation]
Much of what you wrote there is erroneous:
> [..] you either sign something forever (like ssh keys) or for a fixed time period and treat nonrenewal as revocation.
In DNSSEC you revoke by publishing new public keys in a DS record at the delegation and deleting the old public keys, also at the delegation. No need to worry about expiration -- the DS record has none. That isn't "you sign something forever", though, because the DS record is signed by an RRSIG record that has an expiration.
Zone operators have to re-sign RRSIG records, but that's not really a problem.
So revocation in DNSSEC is easy, and you can do it as often as the TTL on the DS RR allows.
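You can watch this machinery from the outside. A sketch (assuming dnspython; ietf.org is just a convenient signed zone, and 8.8.8.8 an ECS-unrelated public resolver): the DS rdata carries key digests but no dates, while the RRSIG covering it has an expiration that the parent keeps refreshing.

    import dns.message, dns.query, dns.rdatatype

    query = dns.message.make_query("ietf.org.", dns.rdatatype.DS, want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    for rrset in response.answer:
        for rdata in rrset:
            if rrset.rdtype == dns.rdatatype.DS:
                print("DS   key tag", rdata.key_tag, "-- no expiration field at all")
            elif rrset.rdtype == dns.rdatatype.RRSIG:
                print("RRSIG expires", rdata.expiration, "-- re-signed by the parent")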
Revocation in PKI is much harder, unless one just goes for short-lived certificates. By automating certificate renewal, Let's Encrypt is doing the world a favor, as it will allow servers to have very short-lived certificates, obviating the need for revocation (and keeping CRLs small if revocation is still needed).
> [...] if one website's cert expires it isn't the end of the world. When you start trying to sign larger and larger fractions of the Domain Name space, moving up to ccTLDs and then the root, the stakes are just way too high.
Now this is true, but it shouldn't be any more likely than other whole-domain outage failure modes of DNS. If you publish the wrong DS records at the delegation, your domain will be out -- don't do that! That's like publishing the wrong NS records at the delegation. The delegator can also break the delegation on their own, but they can do that without DNSSEC too, so you're no more and no less at their mercy with or without DNSSEC. And if you fail to re-sign your zones, you'll have some outages. In practice the right way to handle the last is to sign dynamically in your DNS servers, using ECC and some caching, naturally.
> This is why DNSSEC will never displace unauthenticated DNS.
>
> The choices are [...]
The predicate is erroneous, so the conclusion doesn't follow from it. It might follow from other issues, but not the ones you stated.
> > No need to worry about expiration -- the DS record has none.
> > the DS record is signed by an RRSIG record that has an expiration
> You've just contradicted yourself.
No contradiction there. I said that the DS RR doesn't have an expiration, and that's correct.
The RRSIG RR has an expiration, but because the custom is that the zone operator re-signs the contents periodically, it's a non-issue. The child delegation's operator doesn't have to take special action to have their DS "renewed"; they only have to take special action to rotate the child zone's public keys.
You don't have to worry about expiration because the parent should keep re-signing their zone. The expiration allows you to "revoke" by telling the parent the new keys and then deleting the DS RRs for the old keys at a convenient time.
It's a shame that SRV records were not included in DNS from the beginning; the landscape could look really different today. Notably, it would also have liberated port numbers from their semi-fixed status: you could run any service on any port completely transparently, with no need to worry about port collisions and whatnot.
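For anyone who hasn't used them: an SRV record names the port alongside the target, so clients never have to assume one. A sketch (assuming dnspython; the record shown is hypothetical):

    import dns.resolver

    # Hypothetical record: _imaps._tcp.example.net. 3600 IN SRV 10 5 9993 mail.example.net.
    for rr in dns.resolver.resolve("_imaps._tcp.example.net.", "SRV"):
        print("connect to %s:%d (priority %d, weight %d)"
              % (rr.target, rr.port, rr.priority, rr.weight))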
It's even weirder and more frustrating when you look at RR types like WKS and HINFO that were included in the original spec. Rather than being able to say "I run this service/variant on this port", they included a way to specify which well-known services were running (at the DNS configuration level), but on the wire it is just a bitmap of putatively open ports.
a globally accessible, replicated database that should be writeable by anybody and which contains authoritative data about which number (IP) maps to which name?
seems like a use case for a blockchain???? runs for cover
Does an MX record make it a “database of convenience”? You don’t have to serve any of those records. I’m not saying there isn’t a problem with email but just that it’s not a problem with DNS.
the "Stupid DNS Tricks" section says using dns to map clients to a nearby pop is a trick. they predicted this trick would be used for decades and it seems like they were right. i know cloudfront uses this in some fancy form.
i don't know if i'd call it a trick though. if you have multiple pops, dns feels like a natural place to control what traffic goes to which pop. you will need resolvers to be well behaved, which will never universally be the case. not all will respect ttls or send the client subnet extension, but a lot do. dns gives you a nice knob and hooks to apply rules to control the traffic to each pop. this paper i think describes the idea well https://www.sigcomm.org/sites/default/files/ccr/papers/2015/....
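for the curious, a little sketch of ecs in action (assuming dnspython; the subnets are documentation ranges and 8.8.8.8 is just one ecs-aware resolver) -- same question, different client hints, potentially different answers:

    import dns.edns, dns.message, dns.query

    for subnet in ("203.0.113.0", "198.51.100.0"):  # two pretend client locations
        ecs = dns.edns.ECSOption(subnet, srclen=24)
        query = dns.message.make_query("www.example.com.", "A", use_edns=0, options=[ecs])
        response = dns.query.udp(query, "8.8.8.8", timeout=5)
        print(subnet, "->", [r.to_text() for rrset in response.answer for r in rrset])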
if using dns like this is a trick, what is the right way to map a client to 1 of multiple pops? anycast?
I work for a CDN that uses anycast for routing. It does work really well and is robust to failures... it auto-heals when routes go down, and is immune to issues with DNS caching or people using DNS servers that aren't near their actual location.
There are downsides, though. Control is not very fine-grained, meaning you can only move fairly large chunks of traffic at a time. It is also a method better suited to fewer, larger POPs instead of many smaller POPs, which has its own limitations.
Another option that I have seen used for large download distributions (e.g. game downloads) is to use HTTP redirects... the first request hits a server whose only job is to choose where the actual download will come from and return a 301 redirect pointing to the actual content, targeted to a specific POP or server. This works well because you can choose exactly where traffic goes without the downsides of DNS redirection, but you do get the downside of needing two requests for each client request, as well as requiring client support for redirects (which not all traffic supports).
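The whole front door can be tiny. A toy sketch (the hostnames and the region logic are made up; a real deployment would do a geoip lookup):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def pick_pop(client_ip):
        # Placeholder logic; a real redirector would consult a geoip database here.
        return "eu.dl.example.net" if client_ip.startswith("10.") else "us.dl.example.net"

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            pop = pick_pop(self.client_address[0])
            self.send_response(301)  # as described above; see the 302-vs-301 caching note below
            self.send_header("Location", "https://%s%s" % (pop, self.path))
            self.end_headers()

    HTTPServer(("", 8080), Redirector).serve_forever()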
surprised you’ve seen this redirect method. i thought it was patented by google. the most obvious search only turns up an edgio patent. i’ll have to search harder
yeesh, i didn't realize there was a patent on that. your link lines up with the author's predictions of cdns coming up with and patenting different ways to accomplish this. i guess it makes sense and maybe it isn't a bad thing, but i never thought about it. i wonder how anyone infringing on that is even caught? is something like that ever enforced?
edit: i was blind and missed that it is marked as abandoned "ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION". i never look at these so maybe that's a normal ending, but it is interesting to me.
this guy sounds uhhh exactly the same https://patents.google.com/patent/US11706292B1/en...? reading these brings up the duration of a connection as a relevant dimension. in my bubble i've only dealt with relatively short-lived conns. long-lived conns must be more likely to get caught up in routing changes, so a redirect to unicast can help.
The thing about know-how is that it is a secret. But the moment you file a patent, it is not a secret anymore. And clearly the HTTP-redirector concept based on a geoip DB lookup of the client is as old as the internet... We did this in early 2000, just using the public IANA IP address split across the RIRs as a naive approach, without a proper geoip DB.
Not a bad approach. Minor nitpick to your hypothetical (lol) -- in this case one may want to use a 302 redirect since you might want to be consulted again for a subsequent download, rather than have your response cached (potentially on behalf of multiple users).
I admire the work Paul Vixie has done over the years; I've been using postfix and bind since 2000.
However, I'm a fan of the geodns concept in general, and we have the ECS standard as well to help with better answers. This allows small players or individuals to do what the big companies do, but without running an anycast network, which is very expensive at every step of the journey: getting an ASN, then PI address space, paying yearly fees, then getting BGP peering and operating it efficiently. Thousands of dollars per month, plus the required expertise. Compare that with just spinning up 5 cheap VPSes and putting a smart DNS in front, accepting all the tradeoffs, for $50 per month.
I do agree, however, that some (marginal) operational cost is shifted to other parties because of the small TTLs needed for this to work sensibly. But still, the majority of traffic goes to the authoritative DNS servers that I operate, as the higher zones have a big TTL anyway. You can't geodns a glue record; it is static and cached properly.
So who bears this cost? End users pay for the bandwidth, so the extra packets for a lookup are not an issue. ISP resolvers? A bit, yes, but they can enforce a policy of a minimum TTL if this is too expensive for the ISP... Clearly it is not. At the end of the day, the customer (the ISP's customer) will pay the bill if the ISP needs to spin up more resolvers.
This “solves” the “problem” of using DNS for a purpose for which it was never meant. It is rare for me to say anything positive about Cloudflare, but I absolutely respect their current position of not passing ECS through their 1.1.1.1 resolver.
I am not sure whether you are aware of how many recursively self-deprecating RFCs you have to read to be able to implement a DNS resolver.
(Hint: it's more than 20)
At this point it's time to take a step back and think about what problems DNS causes and how it would be possible to mitigate them, and not iterate on literally the first mistake on the internet, which was meant to replace the /etc/hosts file.
Most security problems are caused by DNS in one way or another. Heck, even HTTPS and HSTS are botched, because CAs are an inherent requirement only because DNS is unreliable in terms of both the transport layer and handshake/discovery.
If DNS were based on cryptographic identity, we wouldn't even need CAs, and we wouldn't need that giga zip file of universal trust in foreign organizations called ca-certificates.
I'd love to see a real alternative to DNS, but until there's a real contender my focus is on incrementally improving what we have.
My take on the CA system [1] is that it looks crazy at first, and looks worse the more you look, but there are enough checks in place that, in practice, it generally works.
> If DNS were based on cryptographic identity, we wouldn't even need CAs, and we wouldn't need that giga zip file of universal trust in foreign organizations called ca-certificates.
Typically the domain name of the service is derived from the public key being used (i.e., it is the key's fingerprint). This is unfortunately painful to use, so I don't know of anyone using it at scale outside Tor's onion services.
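For flavor, roughly what "name = key" looks like (a simplified sketch using the cryptography package; real schemes like Tor's v3 onion addresses add a version byte and a checksum to the encoding):

    import base64, hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    key = Ed25519PrivateKey.generate()
    pub = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    # Derive a self-certifying label from a hash of the public key.
    label = base64.b32encode(hashlib.sha256(pub).digest()[:20]).decode().lower()
    print(label + ".example")  # provable ownership, but hopeless to remember or dictate

The label proves ownership of the key without any third party, which is exactly the usability trade-off: nobody can type or remember it.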