What you can't say (apenwarr.ca)
60 points by sophiebits on April 3, 2011 | hide | past | favorite | 26 comments


When confronted with disagreement, claiming your assertion seems to be "Something you can't say" is incredibly weak. The article does not address any of the counterarguments raised in http://news.ycombinator.com/item?id=2377109, but merely restates the assertion. It would be "something you can't say" if all the counterarguments were grasping at straws, showing that people are unwilling or incapable of understanding or accepting the assertion.

It's the same argument every crackpot invokes to explain disagreement. It is the very last line of defense that everyone can invoke, independent of what the other has argued. It's basically equivalent to "I'm right; you just don't understand". But the article hasn't even tried to refute any of the counterarguments and is in no position to make such claims.


The difference between merely unpopular and What You Can't Say is how people respond to it. Merely unpopular views get thoughtful, rational rebuttal. What You Can't Say gets moral outrage, dismissal, and similar responses. Someone who's merely wrong can be corrected with good argument taken in good faith--someone who's broached a taboo is beyond simple argument and faces opposition on a social rather than a rhetorical level.


The discussion also tends to be steered away from the taboo and towards nitpicking. The basic argument here is that IPv6 is less desirable than a series of hacks on top of IPv4, but very few people were arguing about that. Instead they refused to even admit the possibility and focused on refuting individual elements of the argument. Every single argument in the article could be refuted and the basic tenet could still be correct. I've read through the discussion and I still don't know whether IPv6 is a good or bad idea.

Anyway, practicality always wins. If the IPv6 proponents can't make the transition succeed soon enough, hacks+IPv4 will win by virtue of being the only practical solution _right now_.


The basic argument that most people responded to was the claim that the IPv6 transition was practically impossible. The central point of the counterarguments is that a lot has changed since djb's article, and there now actually are sensible transition plans. Even djb has acknowledged that fact.

The remainder of the arguments were mostly about taste. He accepts NAT; many others argue that NAT is horrible, and he dismisses those arguments with straw men, like this edit to the original article:

  (Update 2011/04/02: A lot of people have criticized this 
  article by talking about how nasty it is to require NAT
  everywhere. If we had too much NAT, the whole world would
  fall apart, [..]
Moreover, his arguments included technical errors that limit their applicability. All in all he was simply far from convincing. Getting into a discussion where you defend the virtues of NAT is not saying something "you can't say". It's saying something unpopular that needs good arguments, because there are good reasons it's unpopular.


I'm glad that he brought up "What you can't say" because I've never read that particular Paul Graham essay.


I was shocked at the time that some people actually think Postel's Law is wrong, but now I understand. Some people believe the world must be purified; hacky workarounds are bad; they must be removed.

Parsers that refuse to parse, Internet Protocol versions that don't work with 95% of servers on the Internet, and programming languages that are still niche 50+ years later... sometimes you just have to relax and let it flow.

Just "letting it flow" is what we've been doing; Postel's law is exactly that. And look at the mess that's gotten us into. Letting people off the hook by being liberal in what we accept, working around their shitty "interoperable" implementations, doesn't help in the long run, and IMO it naively assumes that eventually everyone will come around. It's short-term thinking.

It's not about purity, it's about sanity.

And for the record, I'm a proponent of Postel's Law, because it's pragmatic, but not blindly. One must learn from one's mistakes. The problem is when the 800lb Gorilla decides to be liberal in what they generate because they know that everyone else will be liberal in what they accept--Postel's Law being effective pretty much "requires immediate total cooperation from everybody at once" (which you might recognize as one of the possibilities on the "Universal Crackpot Spam Solution Rebuttal"). There's a time and place for Postel's Law, and the hard part is finding the line where on one side it makes sense to be liberal and on the other it makes sense to be conservative.


The main problem with "pure" implementations is that they deny a core aspect of humanity: we make mistakes. The problem is not that parsers have to deal with invalid syntax--that's a given--it's that they have a notion of invalid syntax at all. It's not that hard to design a spec in such a way that all input will be parsed in a predictable way, maximally extracting semantics. This is what I like about HTML5 parsing: it doesn't have a concept of unparseable input, yet all parsers can implement the same standardized parsing algorithm.
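This never-rejecting behavior is easy to demonstrate. Python's stdlib `html.parser` is merely tolerant (html5lib is the library that actually implements the HTML5 spec's parsing algorithm), but even it shows the idea: malformed markup produces a predictable event stream, never an error:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Record every parse event; no input is ever rejected."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_endtag(self, tag):
        self.events.append(("end", tag))
    def handle_data(self, data):
        self.events.append(("data", data))

p = TagCollector()
p.feed("<p>unclosed <b>bold")   # unclosed tags: no exception raised
p.close()                        # flush any buffered trailing text
```

The unclosed `<p>` and `<b>` still yield well-defined start-tag and data events; what differs between tolerant parsers is only which tree they build, which is exactly the part the HTML5 algorithm standardizes.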


they deny a core aspect of humanity: we make mistakes

Exactly. Postel's Law is meant to work around that. One might call it being robust (one might also call "accepting the input and doing something sane rather than trying to guess" robust). There are two holes in it, however: 1) it doesn't encourage people to actually fix their "mistakes", and 2) it encourages exploitation of those who are liberal with their input.

We must be liberal, but not necessarily too liberal, in what we accept. Postel's Law has specific applications. One shouldn't be liberal in their acceptance of tyrants, for example.


1. Why do people have to fix their mistakes if automation can solve the problem for them? If we can assume that mistakes will be made, and we can find an automated way to solve those mistakes, then why should we force humans to jump through hoops?

2. Why can't a parser be strictly standardized and liberal with its input at the same time? If the spec provides error recovery behavior, what is wrong with that?

My point is that there's no such thing as too liberal as long as all parsers implement the same exact kind of liberal parsing. Our low-level communication protocols have no concept of invalid input, they can recover from any random burst of garbage input, and we think this is normal. But then at a higher level of communication, like XML, suddenly error recovery is a bad thing? It makes no sense to me.
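The resynchronization described above can be sketched with a toy frame format--magic byte, length, payload, mod-256 checksum. The format is entirely hypothetical; the point is only to show how a receiver skips garbage and locks back onto valid frames:

```python
MAGIC = 0xA5  # hypothetical frame-start marker

def extract_frames(stream: bytes):
    """Scan a byte stream for [MAGIC][len][payload][checksum] frames,
    silently skipping any garbage between them."""
    frames, i = [], 0
    while i + 3 <= len(stream):          # minimal frame is 3 bytes
        if stream[i] != MAGIC:
            i += 1                       # garbage byte: skip and resync
            continue
        length = stream[i + 1]
        end = i + 2 + length
        if end + 1 > len(stream):
            break                        # incomplete frame: wait for more data
        payload = stream[i + 2:end]
        if stream[end] == sum(payload) % 256:
            frames.append(payload)
            i = end + 1                  # consume the whole frame
        else:
            i += 1                       # bad checksum: resync from next byte
    return frames
```

Feed it `garbage + frame + garbage + frame` and it returns both payloads; feed it pure garbage and it returns an empty list rather than raising. That is the sense in which low-level protocols have no "invalid input".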


1. Why do people have to fix their mistakes if automation can solve the problem for them?

Postel's Law isn't about automation, it's about where to apply effort. Automatically fixing mistakes at the time of their creation would be great, but just like real life, there are a million ways something can be interpreted wrong after the fact (and thus the wrong "fix" applied) and only one way to interpret it right.

why should we force humans to jump through hoops?

Humans have to jump through hoops to create the robust error recovery. Rather than the effort being evenly distributed among all parties when everyone is conservative on both the production and acceptance side, the producers can be really lazy and the acceptors have to jump through hoops to accept all the lazy people's output. There is no automation here, someone has to write the code that liberally accepts things, which is often a hard task because of the many different ways things can be interpreted when they are not specific and explicit.

2. Why can't a parser be strictly standardized and liberal with its input at the same time? If the spec provides error recovery behavior, what is wrong with that?

A parser that is liberal with its input and provides robust error recovery begets tag soup. The only people who like tag soup are those who want to be lazy when producing it. It's more work, over all, to accept all input and try to figure out what was intended than it is to just say "I can't interpret this" and tell the generator that they need to be more conservative in what they generate (the other part of Postel's Law).

My point is that there's no such thing as too liberal as long as all parsers implement the same exact kind of liberal parsing.

It's not the liberal parsing that is necessarily the problem, it's the second-order effect of liberal interpretation. If people can be liberal in their parsing, then they can be liberal in their interpretation, and if we accept that, then as users we have to settle for very little robust interoperability.

Our low-level communication protocols have no concept of invalid input, they can recover from any random burst of garbage input

"Recover"? If you don't do the TCP handshake in very specific ways, not only does no other server talk to you, but you may end up breaking some of the guarantees that TCP is supposed to provide. Random garbage that sets the RST bit in a TCP packet closes the connection, it doesn't "recover" from that.

Now, obviously, I'm not advocating that things should outright crash when given bad input: that's the worst. They should produce decent error messages as soon as possible so the producer can increase their conservative nature of generation. Consider serving a web page as application/xhtml+xml, which in Firefox (at least back when I was doing a lot of this) would fail to accept the file and would tell you where it was structured wrong. By accepting any old ambiguous format, you'd never see this error and you wouldn't know that you weren't being conservative in your output. And since different browsers treated malformed content differently (either accepting it, or not accepting it, or trying to guess and often getting it wrong or different from what other browsers guessed), you end up with a mess where the "liberal" accepting side gets tagged as deficient if it doesn't jump through all the hoops thrown at it.


The problem with Postel's law isn't that we make mistakes, that's a given.

The problem is that, under it, invalid input still works. Drive a car too fast around a corner and you are thrown off the road; write invalid XML and you get an error.


But if your car can correct your cornering for you, why shouldn't it? Should we disable all our car's electronics just so we can learn to make fewer mistakes the hard way?


The problem with Postel's law in terms of webdev is that there is no reference browser implementation. For the longest time, end-user clients were the only tools web devs had. This led to a huge disconnect between the theoretical standard and the practical standards, so that even when validators came around, the gap was so large that almost no one bothered ensuring their HTML was valid.

What would have been helpful, and still would be, is the ability to toggle a browser into "strict" rendering mode during development testing.


Every HTML version dating back to http://www.w3.org/MarkUp/draft-ietf-iiir-html-01.txt had a DTD. Validation was available long before the W3C made theirs trivial to use, the problem was widespread ignorance on the part of authors of early web pages and especially tutorials.


In many ways, the IPv6 debate is similar to the x86 vs RISC debates. While everyone agrees that the Intel architecture is bad, in practice it doesn't matter all that much on big desktop chips.

Linus Torvalds had a small rant about that when the Itanium was introduced: http://www.theinquirer.net/inquirer/news/1008015/linus-torva...

The sad fact is that the x86 architecture survived because it was good enough, and there was no incentive to replace it with a "purer" design. Only in the mobile / microcontroller space has there been widespread success for more RISCy designs (ARM/AVR). This wasn't because HTC, Samsung and Apple decided that ARM was "purer", it was mostly because there was no x86 processor that could compete with the speed and power and chip real estate afforded by ARM designs.

Likewise, the incentives for IPv6 aren't there, and in fact, it introduces a host of new problems for most users.

For instance, I spent quite some time trying to figure out why my NTP daemon would not work. The reason? It was IPv6 enabled, as was my operating system. The NTP server was also IPv6 enabled and had an AAAA record. But my ISP didn't route IPv6, so no communication took place.

Now, I could blame the NTPD creators for not thinking about this and writing some specific code for it with an IPv4 fallback (perhaps after 30 seconds just to make life interesting). Or I could blame the ISP for being slow to implement IPv6 (this was in 2004 after all). Or I could blame myself for being incompetent and just using the default config files provided by the operating system.

But the fact is, this is just what happens when you have one protocol that always works, and another new protocol that sometimes works but you really want to make the new one the default, preferred protocol.
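The missing fallback can be sketched without assuming anything about ntpd's actual code: iterate over every address `getaddrinfo` returns (AAAA and A records alike) with a short per-attempt timeout, so an unroutable IPv6 address costs a couple of seconds instead of hanging forever. Roughly this idea was later standardized as "Happy Eyeballs" (RFC 6555); the helper below is a minimal sequential version:

```python
import socket

def connect_with_fallback(host, port, per_attempt_timeout=2.0):
    """Try each resolved address in order (typically IPv6 first),
    moving on after a short timeout instead of hanging forever."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(per_attempt_timeout)
        try:
            s.connect(addr)
            s.settimeout(None)   # restore blocking mode for the caller
            return s
        except OSError as e:
            last_err = e         # remember the failure, try the next address
            s.close()
    raise last_err or OSError(f"no usable address for {host}")
```

With an ISP that doesn't route IPv6, the AAAA attempt times out after two seconds and the A record is tried next, instead of the daemon silently never syncing.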


> Likewise, the incentives for IPv6 aren't there

Maybe IPv4 + hacks still works great in the US and in Europe, but developing nations (think: China, India, Brazil, ...) have lots of incentives to use IPv6.

Just because you have no incentives doesn't mean the rest of the world doesn't have any.


Sure, and just because the x86 instruction set works for desktop PCs, it doesn't mean it works well for cellphones. Which was exactly my point: It will spread because of incentives, not because it's the "better" solution. And if another solution works well enough and is cheaper to implement, it will slow the adoption of IPv6.


I had a similar feeling with HTTPS [0] when suggesting a transition to making encrypted connections the norm on websites, so that networks can't sniff activity. The point was lost on most posters, who want absolute security for every website that uses HTTPS--despite the unfriendly UI warnings shown even for an otherwise-valid certificate that expired 5 minutes ago [1]. Not just banks, but websites like Wikipedia too.

[0] http://news.ycombinator.com/item?id=2376548

[1] http://news.ycombinator.com/item?id=2376183


One could also argue that the point was lost on you that "there is no such thing as partial security".


Internet Protocol versions that don't work with 95% of servers on the Internet, and programming languages that are still niche 50+ years later... sometimes you just have to relax and let it flow

Progress requires hard work and perseverance. This is like saying that the Wright brothers shouldn't have tried to build an airplane, because people have been trying to fly for as long as human civilization but never managed to. Maybe it was time to relax and let it slide...

Being pragmatic is good, and I agree that not everything can (or needs to be) 'pure' and 'perfect'. Some things are OK as they are now. But I'm happy there are people striving to make things better.


IPv6 has been the future for quite a while (2002), and it will probably remain the future for quite a while because we have good partial solutions:

* The HTTP Host: header means that you don't need a public IP address for every site. Heck, it would even be possible to have a single public IP for a load balancer that redistributes requests to a data center full of servers that serve requests for thousands of sites. If you think this can't be done efficiently, think about these Cnoection: and oCnnection: headers that pop up with load balancers that only rewrite part of the header.

* NAT means that we only have one IP per Internet-connected household, not one per computer or other device. With mounting pressure, mobile operators will increasingly put their users behind NAT and maybe some people in address-starved regions of the world will only be able to get NATted connections from their provider. Software like Skype knows how to get around NAT pretty well, and UPnP works pretty well for other server-like programs that people have.
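The Host:-based multiplexing in the first bullet fits in a few lines. This `route_by_host` helper is hypothetical--real balancers also normalize casing, strip :port suffixes, and handle absolute-form request targets--but it shows why one public IP can front thousands of sites:

```python
def route_by_host(request: bytes, backends: dict):
    """Pick a backend for a raw HTTP/1.1 request by its Host: header.
    Toy sketch: no port stripping, no validation, first match wins."""
    for line in request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            return backends.get(host, backends.get("*"))
    return backends.get("*")   # HTTP/1.0 requests may lack Host entirely
```

A balancer doing only this (plus the partial header rewriting mentioned above) never needs more than one public address, no matter how many sites sit behind it.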

Which means that 99% of the people will be perfectly happy with multiple-occupancy HTTP service and UPnP/firewall piercing and not care at all whether IPv4 addresses run out or not. The remaining 1% will have to push very hard, and pay a significant fraction of the cost, to get IPv6 off the ground all while the incumbents sit in a corner and watch.


The problem is that we have fewer IPv4 addresses than we have consumers who wish to consume IPv4 content.

Take our current situation: we still have large swaths of China and India without Internet connectivity, yet the address space is basically entirely allocated, with demand for IPv4 addresses increasing.

V6 used to be hard to implement because older routers/switches implemented their support in software rather than in ASICs. This is no longer an issue, which is why you see IPv6 connectivity sprouting up around the place. I would expect that when push comes to shove, we will see a fast v6 transition once it becomes a large enough deal for there to be > 0 consumer demand for it.


My prediction is that, if large swaths of China and India get Internet connectivity, they, or their internet providers, will have to choose between

* NATted IPv4 connectivity (maybe with one IP per city and not one per household) and

* buggy but non-NATted IPv6 connectivity.

with the former being cheaper to implement. Most likely, IPv6 will only see universal adoption after the old gear that has buggy or too-slow IPv6 support has been thrown out. Then again, if a large government (say, China) puts itself behind IPv6 (say, because having one IP address per citizen is more convenient for keeping tabs on everyone), a fast transition would be more likely.

Don't get me wrong here: I think that the transition to IPv6 will happen eventually, but not as fast as the shrinking number of remaining IPv4 address blocks would suggest.


From what I've heard, carrier-grade NAT is stupidly expensive and rather hard to scale nicely at reasonable speeds.

It would be fairly easy to implement, say, IPv6 in China, as it would be a closed system with everybody getting new gear and everybody in the same "we have no IPs" boat.

To be honest, as the internet ecosystem in places such as China is mostly closed, I'd be surprised if they didn't go IPv6-only internally well before the rest of the world switches (what's the point in having a v4 address if everything you care about is on native v6 and you have a NAT64 gateway for everything that isn't).

Going to be interesting to see.


Sure, people are currently happy with the current state of affairs.

That's different from hoping that IPv6 will fail, like the original poster did.

btw:

- Tricks like HTTP host header partial rewriting cannot be done with HTTPS. Sure, there's SNI, but that doesn't change the bottleneck problem.

- One IPv4 address per household will be possible in the USA, but other countries have far fewer IPv4 addresses. There's just not enough of them, and artificial scarcity makes no sense at all.


I think it's more like saying it's not worth the cost to scrap all existing airplanes for a new airplane design with a 10% gain in fuel efficiency, if a modification to existing ones will make them slightly uglier, heavier and still give you an 8% gain.

Some "purists" would much rather see all-new planes flying around. They would point to any number of deficiencies in the old airplanes to make their point and they would try to minimize the extra cost of replacing all the old airplanes (you're going to have to replace them in 10 years anyway, so why not start now?).



