Hacker News

Shouldn't using common CDNs solve this problem?


Using CDNs lowers the hit rate because the average user's browser cache ends up holding each library once for every combination of CDN and version in widespread use: the same bytes are duplicated across (number of CDNs) × (number of versions) cache entries. A package manager could improve on this by letting a site declare “any version of jQuery between 1.0 and 2.0 is fine”, since most sites don't depend on point releases.
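A minimal sketch of the range matching such a package manager could do (the `satisfies` helper and its semantics are hypothetical, not any existing browser API):

```typescript
// Hypothetical sketch: check whether an already-cached library version
// satisfies a site's declared range, e.g. ">=1.0.0 and <2.0.0".
type Version = [major: number, minor: number, patch: number];

function parse(v: string): Version {
  const [maj, min = 0, pat = 0] = v.split(".").map(Number);
  return [maj, min, pat];
}

function cmp(a: Version, b: Version): number {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] - b[i];
  }
  return 0;
}

// "Any version of jQuery between 1.0 and 2.0 is fine"
function satisfies(cached: string, min: string, maxExclusive: string): boolean {
  const v = parse(cached);
  return cmp(v, parse(min)) >= 0 && cmp(v, parse(maxExclusive)) < 0;
}

console.log(satisfies("1.12.4", "1.0.0", "2.0.0")); // true: cache hit
console.log(satisfies("2.2.4", "1.0.0", "2.0.0"));  // false: must fetch
```

The point of the range is exactly the cache-fragmentation problem above: any one of many cached point releases can serve the request.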

The Subresource Integrity spec's strong content hashes could improve cache hits by letting the browser reuse a cached copy fetched from a different URL when the hash matches, but everyone wants to avoid that turning into a massive security / privacy hit — see e.g. https://hillbrad.github.io/sri-addressable-caching/sri-addre...
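For reference, an SRI integrity value is just an algorithm label plus a base64-encoded digest of the resource bytes; a sketch of computing one in Node (the `sriDigest` helper name and sample input are illustrative):

```typescript
import { createHash } from "crypto";

// Compute an SRI-style integrity value ("<alg>-<base64 digest>", per the
// Subresource Integrity spec) over a script's content.
function sriDigest(
  content: string,
  alg: "sha256" | "sha384" | "sha512" = "sha384"
): string {
  const hash = createHash(alg).update(content).digest("base64");
  return `${alg}-${hash}`;
}

// The browser checks this value against the bytes it fetches:
//   <script src="https://cdn.example/jquery.min.js"
//           integrity="sha384-..." crossorigin="anonymous"></script>
console.log(sriDigest("console.log('hi');"));
```

Because the digest identifies the content rather than the URL, it is what would let a browser serve a hash-addressed cache hit regardless of which CDN the page named, which is exactly the idea the linked document explores (along with the privacy problems it raises).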


I understand the idea of supporting ranges of libraries from a development perspective, but when you're running something in production, is "use any of these configurations" ever desirable?

Presumably there's a reason that you'd _want_ them to use 1.12.4 over 1.0.0 (or you'd want to at least check that they didn't break anything that you rely on in 1.13 when that comes out)?


It's desirable if you're really focused on performance – if you're not a site like Google/Facebook that people access all the time, it's safe to assume that none of your resources are cached, so shaving n milliseconds of load time is a healthy gain (less so now that HTTP/2 is widespread). In many cases it's more likely that you'd know of a bug fixed in a new release – where you'd say libfoo >= 1.12.4 – than that some future 1.13 has a huge problem.


Using a CDN doesn't lower the hit rate over not using a CDN. I assume you mean using a CDN has a potentially lower hit rate than using a package manager within the browser.


One factor to consider: what's the cost of the extra DNS lookups and connection latency to a new CDN host vs. your main host, especially in the post-HTTP/2 era? If you had something like SRI-based caching where the browser might not even connect at all, that's a pure win, but otherwise it's quite easy to find real pages where the extra 500-5000 milliseconds to connect to the CDN is greater than the cost of serving a modest-sized file yourself.

(That's not an exaggeration: I've measured uncached latency for DNS + connection for a CDN host in the high end of that range, especially over cellular connections or outside of the U.S.)
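Those per-host setup costs can be pulled apart with the browser's Resource Timing entries; a sketch of the breakdown (the interface mirrors a subset of `PerformanceResourceTiming`, and the sample numbers are made up):

```typescript
// Split a PerformanceResourceTiming-style entry into phases, so the
// DNS + connect (incl. TLS) cost of a third-party CDN host can be
// compared against the time actually spent transferring the asset.
interface TimingEntry {
  domainLookupStart: number;
  domainLookupEnd: number;
  connectStart: number;
  connectEnd: number;
  requestStart: number;
  responseStart: number;
  responseEnd: number;
}

function phases(e: TimingEntry) {
  return {
    dnsMs: e.domainLookupEnd - e.domainLookupStart,
    connectMs: e.connectEnd - e.connectStart, // TCP + TLS
    waitMs: e.responseStart - e.requestStart,
    transferMs: e.responseEnd - e.responseStart,
  };
}

// Made-up values in the shape you might see for an uncached
// cross-origin CDN request on a high-latency link:
const sample: TimingEntry = {
  domainLookupStart: 10, domainLookupEnd: 240,
  connectStart: 240, connectEnd: 690,
  requestStart: 690, responseStart: 850, responseEnd: 880,
};
console.log(phases(sample));
```

In a real page, `performance.getEntriesByType("resource")` yields these entries (cross-origin hosts need a `Timing-Allow-Origin` header for the detailed fields); when `dnsMs + connectMs` dwarfs `transferMs`, the extra host is costing more than it saves.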


South Korea here. Every time I find random asset-loading delays in my clients' websites, it's caused by one of the major CDNs such as Google web fonts and code.jquery.com.

International connectivity out of Korea is pretty congested, especially at peak hours. Tokyo is 30-40ms away in the morning but randomly jumps to 150ms+ in the evening. It can take more than 1 second for the DNS lookup, TCP and SSL handshakes, let alone the actual transfer. So unless the CDN in question has a physical presence in Korea, it is almost always faster to load assets directly from the web server.
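A back-of-envelope sketch of why a 150ms+ path blows up like that before any bytes arrive (round-trip counts assume fresh DNS, TCP, and a TLS 1.2 handshake; the numbers are illustrative and ignore retries and packet loss, which make it worse):

```typescript
// Each setup step to a fresh HTTPS host costs round trips before the
// first response byte: DNS (~1 RTT), TCP handshake (1 RTT),
// TLS 1.2 handshake (2 RTTs), then the HTTP request itself (1 RTT).
function timeToFirstByteMs(rttMs: number, tlsRoundTrips = 2): number {
  const dns = 1, tcp = 1, http = 1;
  return rttMs * (dns + tcp + tlsRoundTrips + http);
}

console.log(timeToFirstByteMs(35));  // morning, ~35ms RTT to Tokyo
console.log(timeToFirstByteMs(150)); // congested evening, 150ms+ RTT
```

At 150ms per round trip that's 750ms of pure setup before the transfer starts, and one retransmitted packet pushes it past a second; the same handshakes to the local web server's already-resolved, already-connected origin cost nothing extra.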

I suspect that many regions outside of US/EU are in a similar situation. Using a POP in another country 2000km away does jack shit for local websites, and only harms companies that fall for aggressive CDN marketing.


Sure, if your target market is outside the area covered by your CDN's POPs then you're not going to get anything out of it. But a CDN that's appropriately located can be a huge win, particularly when it keeps traffic from crossing certain country boundaries.


Fair point. It's true that hosting CDN assets on a separate domain at least requires more thought than it used to. Although there's also the fact that if you're using a common CDN, the browser may already have its DNS cached. Plus it can still be a win if the CDN is well-located with respect to your target audience and you have a fair number of assets.


“may have DNS cached” turns out to be less common than I thought – even with some fairly popular CDN providers, the RUM DNS latency outliers are surprisingly high, including for clients in areas with CDN edge nodes.


I think they are more like a band-aid than a solution.



