I see a lot of confusion here, probably Cloudflare should have included an explanation of how ECH works in TFA instead of referring to their other article[1].
The difference between ECH and SNI is that while SNI includes the hostname in the ClientHello (the first TLS record indicating connection initiation), ECH includes an encrypted section in the ClientHello called ClientHelloInner, and the hostname is moved inside it.
The ClientHelloInner is encrypted using a public key published in DNS, which clients fetch via DNS-over-HTTPS providers such as Google or Cloudflare; plaintext DNS is avoided in order to prevent a MITM on the ClientHelloInner key.
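The key is published as an `ech` SvcParam in an HTTPS (type 65) DNS record, per RFC 9460. As a rough sketch, here is how a client might pull it out of a record in presentation format (the payload here is a placeholder, not a real ECHConfigList, and real records have quoting/escaping rules this ignores):

```python
import base64

# Placeholder payload; a real record carries a serialized ECHConfigList.
ech_b64 = base64.b64encode(b"toy ECHConfigList").decode()

# Presentation format of an HTTPS record, roughly what `dig HTTPS example.com`
# prints: priority, target name, then SvcParams as key=value pairs.
record = f'1 . alpn="h2,h3" ech={ech_b64}'

priority, target, *param_tokens = record.split()
params = {}
for token in param_tokens:
    key, _, value = token.partition("=")
    params[key] = value.strip('"')

ech_config = base64.b64decode(params["ech"])  # what the client encrypts to
```

If that DNS answer travels over plaintext UDP, an on-path attacker can swap the `ech` value; fetching it over DoH is what closes that hole.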
Doing so prevents ISPs and governments from analyzing your traffic. However, a CDN operator such as Cloudflare terminates TLS for your website, and thus traffic would be visible to them either way.
Now, to the non-technical part of it: while ECH provides a significant privacy improvement, I personally am against its implementation. Most ISPs enforce country-specific orders to block domains using a combination of DNS packet interception and SNI inspection. The legitimacy or sanity of such laws is a separate matter - countries will want to block websites that violate their laws.
If we take away this last resort from governments, they would react by enforcing client side blocklisting and DRMization as suggested in France[2], or force root certificate installation using legislation[3], or blocking large swathes of the internet as is the case with China.
> plaintext DNS is avoided in order to prevent a MITM on the ClientHelloInner key.
Is MITM possible unless the attacker is in possession of a sufficiently advanced quantum computer? What's published as the HTTPS/SVCB record is the public part of the key. AFAIK, DNSSEC isn't even a requirement for zones publishing HTTPS/SVCB ECH records?
> Doing so prevents ISPs and governments from analyzing your traffic.
Don't think traffic analysis is thwarted. ECH plugs inspection of the only plaintext bits in TLS, which are not only abused by censors but also by various 'well-meaning' middleware that made bringing upgrades to TLSv1.3 a nightmare.
> If we take away this last resort from governments...
You mention GFW, and we got GFW without ECH. Governments will government, nothing will stop them. That shouldn't stop us from shoring up our side of the equation. Because by that logic, Meta shouldn't e2ee WhatsApp and browsers shouldn't OCSP / CRL. In fact, not having those sounds scary to me, because governments aren't the only power-hungry actor around. It wasn't long ago that Meta was caught spying on the users of its data-saver VPN (Onavo) for half a decade, if not more.
I think the remark is because: MITM would allow them to spoof the DNS response entirely, so they can replace it with whatever key they want. Doesn't matter what level of security the key claims at that point, it's attacker-controlled and they can just read whatever you send next.
I think the DNS key is only used for the handshake; the server still has to present a CA-issued certificate to authenticate the actual connection. Without a valid certificate for that second part, all a spoofed DNS key would get the attacker is which website you were trying to visit.
Can someone chime in with how quantum computers effectively at least double the output of traditional "classical" computers? I legitimately don't understand why it's not easier, and in some bottom-line sense cheaper, to just double up your (classical) computing power. What is it about QC that's so damn special when you could theoretically achieve the same maximal idealized output with a simple increase in classical inputs?
It seems with QC there's this collective magical thinking that they offer some magical free lunch that is inaccessible via more traditional means. I don't get what tradeoffs or optimizations are being made to achieve this...
>Can someone chime in with how quantum computers effectively at least double the output of traditional "classical" computers?
In general: Grover's algorithm square-roots the search effort (halving the key-length exponent), it doesn't just double your speed - so for example a quantum computer needs ~2^128 operations to brute-force a 2^256 keyspace [1]
In the case of non-quantum-resistant algorithms (for example RSA and most other popular public-key algorithms today): there are quantum algorithms that offer exponential speedup, where even a slow (in terms of operations/second) quantum computer will easily outperform a classical computer
[1]: Grover's algorithm. I'm oversimplifying a bit.
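Concretely, the quadratic speedup halves the exponent, which is why the usual advice for post-quantum margins is to double symmetric key sizes. A back-of-the-envelope sketch:

```python
import math

def grover_effective_bits(key_bits: int) -> int:
    """Security bits left against an ideal Grover attacker: sqrt(2^n) = 2^(n/2)."""
    return key_bits // 2

for n in (128, 256):
    classical = 2 ** (n - 1)        # expected classical brute-force tries
    quantum = math.isqrt(2 ** n)    # ~Grover oracle calls, sqrt(2^n)
    print(f"{n}-bit key: ~2^{n-1} classical tries vs ~2^{n//2} Grover calls")
```

So a 256-bit key under Grover still costs ~2^128 work, which is why AES-256 is considered quantum-resistant in practice.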
I'm speaking in very abstract terms as a non-expert, but the critical distinction in this case is that prime factorisation (the basic underpinning of RSA and similar encryption) has no known polynomial-time algorithm for a classical computer (the best known are sub-exponential) but is polynomial for a quantum computer.
To achieve the same computing power, a classical computer would need to be exponentially more powerful - this is because (abstracting heavily) the quantum computer can explore all possible inputs to a function in superposition, where a classical computer must test them one at a time.
Weren't some of these widespread conventions in some sense strategically designed or implemented so as to ensure backdoors and/or contrived vulnerabilities? Something about purposefully smaller key sizes, or special "weaker" constants than was practicable, or other trickery that always ostensibly has an economic or other seemingly justifiable underpinning, but introduces unacceptable security compromises that surface later, and predictably.
Historically there have been restrictions on key size (and bad algorithms) - to my knowledge, neither is currently the case: there is no known way to break a 2048-bit RSA key (although if there was, we probably wouldn't know about it)
Grover's algorithm is a general solution to the search problem. Given an arbitrary (computable) function f: X -> Y, and a desired value y in the codomain, find a value x such that f(x) = y. For the sake of simplicity, assume that x is unique (although Grover's algorithm can be extended to drop this constraint).
Let N be the size of the domain X. On a classical computer, without any additional information, the optimal solution is to simply iterate through X trying inputs until you find the correct one. On average you will need to evaluate f on half the possible inputs, which is N/2. In the worst case, you need to try all N.
Using Grover's algorithm, you can solve the problem with only sqrt(N) invocations of f, which is a quadratic speed up compared with classical computers.
Applying this to encryption keys: for a key length of n, we have 2^n possible keys. Classically speaking, you would need to try decrypting (2^n)/2 = 2^(n-1) times on average. But with Grover's algorithm, you can find the correct key after just sqrt(2^n) = 2^(n/2) decryptions, which is the classical worst case for a key of half the length. Note that this is still exponential in the size of the key, so it is not a fundamental game changer.
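A toy illustration of the query counts on a small keyspace - the classical count comes from actually running the search, while Grover's ~(pi/4)*sqrt(N) is just the textbook bound, not a simulation:

```python
import math

N = 2 ** 16                  # toy 16-bit keyspace
secret = 0xBEEF              # the key we are brute-forcing (arbitrary choice)

def oracle(k: int) -> bool:
    # Stand-in for "does this candidate key decrypt the ciphertext correctly?"
    return k == secret

calls = 0
for k in range(N):
    calls += 1
    if oracle(k):
        break

# Grover needs roughly (pi/4) * sqrt(N) oracle invocations.
grover_calls = math.ceil((math.pi / 4) * math.sqrt(N))
print(calls, grover_calls)
```

Here the classical search burns tens of thousands of oracle calls where the Grover bound is a couple of hundred - and the gap widens as sqrt(N) for larger keyspaces.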
This has nothing to do with prime factoring or the discrete log problem. Those have even better quantum algorithms (Shor's) that can solve them in polynomial time, which would be a major game changer if they ever become practical.
Algorithms have been developed for quantum computers that could be potentially effective at breaking public key cryptosystems whose security relies on the difficulty of solving certain math problems on classical computers. In particular, the discrete logarithm problem and integer factorization of very large numbers: https://en.wikipedia.org/wiki/Shor%27s_algorithm.
There are no quantum computers large enough to even come close to attempting this, and there's a question about whether it's ever going to be physically possible to build a quantum computer large enough to actually attack something like a TLS key exchange in real time. Classical computers are equally fast or faster at other tasks.
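The number-theoretic reduction at the heart of Shor's algorithm can be run classically on toy numbers; the quantum part is only the period-finding step, done below by brute force (which is hopeless at real key sizes):

```python
from math import gcd

def factor_via_order(N: int, a: int):
    """Shor's reduction: find the order r of a mod N, then gcd(a^(r/2) +- 1, N)."""
    assert gcd(a, N) == 1
    # Period finding by brute force -- this is the step a quantum computer
    # does in polynomial time and a classical computer cannot.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:
        return None                  # odd order: retry with another a
    half = pow(a, r // 2, N)
    p = gcd(half - 1, N)
    if 1 < p < N:
        return p, N // p
    return None

print(factor_via_order(15, 7))   # order of 7 mod 15 is 4, yielding factors 3 and 5
```

Once you can find the order r quickly, factoring N (and thus breaking RSA) falls out with a couple of gcds; that is the entire threat.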
Is it a fair conjecture that in some sense there's an issue with cryptographers trying to push the field forward in terms of the efficacy and robustness of encryption, while governments continually work all manner of trickery to hamper these efforts in subtle and not-so-subtle but gag-order-enforced ways?
I feel like there's this constant ridiculous pushback on any digital product, protocol, or service being air-tight cryptographically and implementation-wise, when they can basically already build air-tight cases via parallel construction with the help of the infinite resources available upon (often not even requiring) receipt of a warrant.
It's very strange. The obsession is always on completely neutering/compromising the technology, and never on actually doing the damn police work they are empowered to do, approaching cases laterally like they did before typewriters and telephone wire-taps, instead of bending the provisions of the constitutions they swore to protect and enforce until they're a simulacrum of their original concept.
Like, it always comes across as they feel that their entire case is lost if they can only prove something 5 different ways instead of 6. It wouldn't be so problematic if humans weren't so human and law enforcement wasn't emphatically staffed by humans who are liable to abuse things to maximize their money, power, and prestige and have the absolute or qualified immunity to get away with it at least once regardless of how it damages the targets of their misconduct.
>Like, it always comes across as they feel that their entire case is lost if they can only prove something 5 different ways instead of 6.
There are a lot of crimes these days without a complainant. The sale and consumption of illegal drugs for instance. Surveillance can be very important in discovering the crime in the first place.
It's like asking someone to help you with your diet and then becoming upset when they invade your privacy by looking in your kitchen cupboards and fridge. We have in a sense brought this on ourselves by passing laws intended to protect us from ourselves.
Did anybody ask governments to “help with diet” though? It feels more like governments in their ever expanding power struggle decided diets needed to be fixed even though nobody complained about them and then decided that the ends justify any means.
> while ECH provides a significant privacy improvement, I personally am against its implementation. Most ISPs enforce country-specific orders to block domains using a combination of DNS packet interception and SNI inspection. The legitimacy or sanity of such laws are a separate matter - countries would want to block websites that violate their laws.
That's exactly why I'm in favor of it: it makes effective censorship impossible.
> If we take away this last resort from governments, they would react by enforcing client side blocklisting and DRMization as suggested in France[2], or force root certificate installation using legislation[3]
Note that those plans both thankfully failed.
> or blocking large swathes of the internet as is the case with China.
I know a lot of Americans talk that way about America, but America tolerates quite a bit of government censorship (DMCA/SLAPP), privacy violations (NSA/TSA), and civil rights violations (prison slaves, war on drugs).
We will almost certainly block ECH at my work, as we already block DoH. I expect any sane network to do the same.
The natural alternative if blocking these protocols becomes unsustainable will absolutely be to require full decryption at our security edge. And that will provide drastically more information to us than we have now and will absolutely feel invasive.
The idea of uninspectable client traffic is somewhat unhinged, and is already heavily used by malicious actors.
Huh? What? Ever had to administer a corporate network for non-tech staff? Malware is everywhere. People are stupid. AND my assumption on a corporate network has always been that you have no expectation of privacy - it's a work network, don't use it for personal stuff! Pretty simple.
I don’t think morals factor in to the decision to allow / disallow traffic on a private network. That’s definitely morally neutral and to suggest otherwise is immoral.
ISPs are also private networks, unless operated by a government entity. And I would absolutely uphold the same expectation from an ISP as an organizational or personal network.
> And I would absolutely uphold the same expectation from an ISP as an organizational or personal network.
No. ISPs provide internet service to customers who pay them for it. The entire situation is different. The ISPs are under no obligation to break and inspect encrypted traffic. Owners and operators of home and corporate networks have an entirely different (and justifiable) set of concerns.
SNI monitoring is a reasonable compromise, and I think a healthy one: Your ISP doesn't need to deep inspect your traffic to Microsoft because it accepts that Microsoft is doing something reasonable with it. It allows delegating authority which at least gives a path for investigation or blocking if necessary without seeking an extreme amount of transient information.
I would say if ECH is implemented, the correct response would unfortunately be to MITM it, or, if too many providers implement it, to just block it entirely. I suspect large companies won't force it, to maintain a wide customer base, and again, any reasonable network operator should just block anyone who does.
ISPs absolutely have all sorts of regulatory needs and network performance reasons to classify traffic. It's an unpopular view, but it's reality. (And I would encourage you to investigate who pays the people telling you otherwise, before someone links Mike Masnick here.)
I think they mean malware. Ransomware for example. Nothing wrong about locking a network down in general. A work network doesn't have to be an open, all-things-go network just so employees can instagram. Heck, there are organizations with air-gapped networks for very good reasons.
Instagram on your phone while at work. Don't expect privacy on managed work machines. Your company can see your emails. They can install a CA on your managed machine and MITM your https traffic.
I hate the trend of also managing personal devices. But I get that it’s a complicated subject.
I'll be worried when I hear the first case of an employee getting in trouble for doing something innocent on corpnet. Right now I assume they can see everything while using one of their devices or on their network but they never bug us about it. They seem mostly interested in stopping malware/leaks, which seems quite reasonable.
As the grandparent of this thread, I can definitely say I don't care about personal browsing on work computers. I consider that an HR issue (if it impacts their ability to do their job), not an IT issue. The ability to manage network traffic comes entirely from maintaining a secure and functional network.
MITM can't prevent leakage for a determined spy or insider trader. I worked at a bank, they blocked FB etc with a blacklist, but it was very easy to circumvent. Whitelist might work, but it would create too much management headache.
The problem is that what starts as a strong desire to conduct copyright infringement (or sure, some light personal web browsing on work computers) along with a dash of free speech extremism, has turned into a militant expectation of very handy passageways for actual crimes.
> If we take away this last resort from governments, they would react by enforcing client side blocklisting and DRMization as suggested in France[2], or force root certificate installation using legislation[3], or blocking large swathes of the internet as is the case with China.
Not every government has the leverage, capability or power to do this. Client side blacklisting will be trivially circumvented if it actually goes into effect. The browser vendors all rejected the proposed Kazakh root certificate. And plenty of countries with pervasive Internet censorship don't have enough of a "domestic internet" to block large numbers of websites without a lot of people getting upset.
> If we take away this last resort from governments, they would react by enforcing client side blocklisting and DRMization as suggested in France[2], or force root certificate installation using legislation[3], or blocking large swathes of the internet as is the case with China.
You left out the fourth option, which is give up on their censorship aspirations. For most countries, censorship isn't important, and making it too hard will simply make it not worth it. China is of course different.
Most countries made entirely ineffective laws and didn't bother following up.
> Doing so prevents ISPs and governments from analyzing your traffic. However, a CDN operator such as Cloudflare terminates TLS for your website, and thus traffic would be visible to them either way.
Cloudflare adopts the same predatory posture as Google: "we really really care about your privacy - nobody should spy on you but us". Creepy!!
They're going to resort to blocking IPs without caring about collateral damage, and force CDNs and anyone running multi-tenant hosting to turn the targeted site off or move it onto a separate, easily blockable pool of IPs.
Do you think for a second anyone in power in China, Russia, or North Korea care if some random 20-year-old kid can access English-language sites or not?
> Now, to the non-technical part of it: while ECH provides a significant privacy improvement, I personally am against its implementation. Most ISPs enforce country-specific orders to block domains using a combination of DNS packet interception and SNI inspection. The legitimacy or sanity of such laws are a separate matter - countries would want to block websites that violate their laws.
You don't even have to start with governments. ECH can just as well be used by trackers/ads/app telemetry etc to stop you from blocking them via Pihole.
No it can't. If it's in a browser, then extensions can still block it, and if it's in an app, then they could make it unblockable without needing DoH or ECH just by putting the trackers/ads/app telemetry on the same domain as the rest of what the app uses.
"I see a lot of confusion here, probably Cloudflare should have included an explanation of how ECH works in TFA instead of referring to their other article[1]."
Even better would be to explain why SNI exists. This in turn explains why CDNs like Cloudflare are interested in it. SNI allows CDNs to operate as intermediaries on a www where HTTPS is increasingly mandatory for every website. With respect to the "privacy" of TLS it is peculiar to put ISPs and governments in one category and CDNs in another. If ISPs (without ECH) or CDNs (with or without ECH) are getting a list of every domain the user visits, then governments can get the same list by requesting it from the ISP or CDN.
The question, IMHO, is whether there are other possible technical solutions besides TLS SNI to allow multiple HTTPS sites to be hosted on the same IP. Is it possible to obviate the need to use a third party such as a CDN to solve this problem. Is it possible to obviate the need for duct tape solutions like ECH. ECH is proposed as a solution for the "plaintext domain names over the wire" problem. But that problem only exists because it's created by the use of SNI. And SNI exists to enable one to host multiple HTTPS sites on the same IP. Today, CDNs are the primary users and beneficiaries of SNI: they host tens or hundreds of thousands of HTTPS sites on a limited number of IPs.
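The server side of SNI-based virtual hosting is visible in, for example, Python's stdlib: the TLS stack hands the server the plaintext hostname mid-handshake so it can pick the matching certificate. A minimal sketch (hostnames are placeholders; a real setup would call `load_cert_chain` on each context):

```python
import ssl

# One SSLContext per hosted site; each would load its own certificate.
contexts = {
    "site-a.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
    "site-b.example": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
}
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def sni_callback(ssl_socket, server_name, initial_context):
    # Called during the handshake with the (plaintext!) SNI value.
    ctx = contexts.get(server_name)
    if ctx is not None:
        ssl_socket.context = ctx   # switch to the matching certificate
    # Returning None lets the handshake continue.

default_ctx.sni_callback = sni_callback
```

The fact that `server_name` arrives before any encryption is negotiated is exactly the leak ECH is meant to close.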
For example, consider this prototype, which some believe inspired the advertising-company-sponsored "QUIC":
"An ISP or site administrator can easily run a huge number of CurveCP servers on a single global IPv4 address, even if the servers are independently operated with separate long-term public keys. This feature is provided by a simple extension mechanism in CurveCP addresses.
CurveCP servers are inherently anti-aliased, providing automatic virtual hosting and fixing some of the deficiencies in the "same-origin" policy in web browsers. This feature is provided by a simple domain-name mechanism in CurveCP addresses.
If a site has two server addresses, and one server is down, a CurveCP client will quickly connect to the other address.
A CurveCP connection remains fully functional even if the client changes IP address.
CurveCP is fully compatible with existing NAT (network address translation) mechanisms; none of the above features require clients or servers to know the global addresses of their gateways."
Other people already explained that there's a bootstrap problem which prohibits "just encrypt that section of the handshake".
But the two SNIs exist specifically because of GREASE. They could be avoided if you didn't want GREASE, but we definitely want GREASE.
The idea of GREASE is: whenever we might someday want to do something that's visible to third parties, we must sometimes do it anyway (at least as far as they can tell), so that they learn to tolerate it, and when we really do want it, it works.
For example, GREASE extensions are just nonsense extensions you propose in a TLS connection today. Chrome does this. "Hey server, can you do BINGLE BONGLE?" and of course your server doesn't do BINGLE BONGLE, because that's nonsense, but to a "security device" that is "inspecting TLS connections as part of our Next Generation Firewall Technology" it seems like maybe we want to do BINGLE BONGLE. How should it react? Well, it should do nothing, because the specification is clear that if you don't understand what is said you should ignore it. But without GREASE, we know that in SSL, TLS 1.0, TLS 1.1 and TLS 1.2 these devices would freak out at each new extension.
1.6GB of customer personal information in a CSV file POSTed to a competitor's Dropbox? Fine. Your web browser wanted to use NEW FEATURE? Alert! Attack detected - lock down the network, summon armed guards! So hence the invention of GREASE.
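The reserved nonsense values are pinned down in RFC 8701: for 16-bit TLS codepoints (extension types, cipher suites) they are the sixteen values of the form 0xNANA, deliberately sprinkled through the registry:

```python
# RFC 8701 GREASE values for 16-bit TLS codepoints: 0x0A0A, 0x1A1A, ..., 0xFAFA.
GREASE_16 = [(0x0A + 16 * k) * 0x0101 for k in range(16)]

def is_grease(codepoint: int) -> bool:
    # Both bytes equal, low nibble 0xA, per the RFC 8701 pattern.
    hi, lo = codepoint >> 8, codepoint & 0xFF
    return hi == lo and (lo & 0x0F) == 0x0A

# A middlebox that chokes on any of these values is violating the spec's
# "ignore what you don't understand" rule -- which is what GREASE flushes out.
```

Because clients sprinkle these in routinely, a middlebox cannot tell a GREASE probe from a genuinely new extension, so it is forced to tolerate both.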
For ECH, GREASE works by just always pretending we're doing ECH. If we want to talk to old-web-site.example we send an outer SNI of old-web-site.example and it doesn't matter what our inner SNI is, it can be random nonsense encrypted to nobody, because old-web-site doesn't use ECH anyway.
If we want to talk to ech-enabled-site.example our browser discovers oh, here's a key for ECH for ech-enabled-site.example and it says we should ask to talk to boring.example, so the browser encrypts the inner SNI of ech-enabled-site.example with the key, and provides an outer SNI of boring.example.
In both cases this looks the same to snoops, there's an outer SNI they can read and an inner SNI they can't read. Which is genuine? No way for them to know. But the server knows easily.
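A sketch of why the two cases look alike on the wire. The "encryption" here is a toy hash-XOR stand-in for HPKE (which real ECH uses), and all the hostnames and keys are made up:

```python
import hashlib

def toy_seal(public_key: bytes, plaintext: bytes) -> bytes:
    # Toy stand-in for HPKE: XOR against a hash-derived keystream,
    # padded to a fixed 32-byte length so all payloads look identical.
    stream = hashlib.sha256(public_key).digest()
    padded = plaintext.ljust(32, b"\x00")[:32]
    return bytes(p ^ s for p, s in zip(padded, stream))

def build_client_hello(target: str, ech_config=None) -> dict:
    if ech_config is not None:
        # Real ECH: outer SNI is the decoy name from the DNS record;
        # the true target travels sealed to the published key.
        decoy, key = ech_config
        return {"outer_sni": decoy, "ech_payload": toy_seal(key, target.encode())}
    # GREASE ECH: no key known, so the real name goes in the outer SNI,
    # plus a dummy payload shaped exactly like a genuine one.
    dummy = hashlib.sha256(target.encode()).digest()  # deterministic filler
    return {"outer_sni": target, "ech_payload": dummy}

real = build_client_hello("ech-enabled-site.example",
                          ech_config=("boring.example", b"published-ech-key"))
grease = build_client_hello("old-web-site.example")
```

Either way a snoop sees a plausible outer SNI plus an opaque, fixed-shape blob; only the server holding the private key can tell which case it is.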
Bootstrapping the encryption is a problem: until you've run the handshake, you don't have a key with which you could encrypt the handshake. And you don't want the key to live for too long, so folk are going to end up trying to use expired keys.
Between the article and the linked introduction, all of what you are looking for is explained.
How do you "just encrypt it"? Encryption in TLS starts after you have verified who you are talking to via the certificate, otherwise you might just be doing encryption with whoever happens to MITM you. However to do the verification, the server must know what domain you are actually trying to reach - hence the SNI. This is why there is the DNS side channel.
> If we take away this last resort from governments, they would [...]
Appeasement doesn't work. Maybe it's easier to remember that if you're British and so you had to watch Chamberlain's "Peace for our time" news reel in history class. The British Prime Minister, Neville Chamberlain, negotiated a deal with rising star German Chancellor Adolf Hitler, you've heard of him. Hitler agreed that Germany wouldn't start a massive European war in exchange for the British turning a blind eye to smaller wars he'd already started.
You may never have heard of Chamberlain, because it turns out that piece of paper with Hitler's signature on it was worth precisely what you'd expect, and we needed an actual War Prime Minister soon enough.
Now, the argument for appeasement is that sure, it doesn't actually work but it buys time. This is wrong because your opponents have the same extra time and they know precisely what they're doing whereas you're expending resources pretending (even if you correctly believe the appeasement won't work) that appeasement works.
[1] https://blog.cloudflare.com/encrypted-client-hello/
[2] https://www.article19.org/resources/france-proposed-internet...
[3] https://en.wikipedia.org/wiki/Kazakhstan_man-in-the-middle_a...