Hacker News

This article touches on "Request Coalescing" which is a super important concept - I've also seen this called "dog-pile prevention" in the past.

Varnish has this built in - good to see it's easy to configure with NGINX too.

One of my favourite caching proxy tricks is to run a cache with a very short timeout, but with dog-pile prevention baked in.

This can be amazing for protecting against sudden unexpected traffic spikes. Even a cache timeout of 5 seconds will provide robust protection against tens of thousands of hits per second, because request coalescing/dog-pile prevention will ensure that your CDN host only sends a request to the origin a maximum of once every five seconds.
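For nginx, that micro-cache setup might look like the following sketch (zone name, cache path, and upstream are placeholders, not from the article):

```nginx
# in the http {} block
proxy_cache_path /var/cache/nginx keys_zone=micro:10m;

server {
    location / {
        proxy_pass            http://origin;   # placeholder upstream
        proxy_cache           micro;
        proxy_cache_valid     200 5s;          # very short TTL
        proxy_cache_lock      on;              # coalesce concurrent misses
        proxy_cache_use_stale updating;        # serve stale while refreshing
    }
}
```

With `proxy_cache_lock on`, only one request per cache key is allowed through to the origin at a time; the others wait for the cached response.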

I've used this on high traffic sites and seen it robustly absorb any amount of unauthenticated (hence no variety on a per-cookie basis) traffic.



Back when I was just getting started, we were doing a lot of WordPress stuff. A client contacted us: "oh yeah, later today we're probably going to have 1000x the traffic because of a popular promotion". I had no idea what to do, so I thought, I'll just set the Varnish cache TTL to 1 second; that way WordPress would only get a maximum of one request per second per page. It worked pretty much flawlessly, and taught me a lot about the importance of request coalescing and how caches work.


I'll echo what Simon said; we share some experiences here. There's a potential footgun, though, that anyone getting started with this should know about:

Request coalescing can be incredibly beneficial for cacheable content, but for uncacheable content you need to turn it off! Otherwise you'll cause your cache server to serialize requests to your backend for it. Let's imagine a piece of uncacheable content takes one second for your backend to generate. What happens if your users request it at a rate of twice a second? Those requests are going to start piling up, breaking page loads for your users while your backend servers sit idle.

If you are using Varnish, the hit-for-miss concept addresses this. However, it's easy to implement wrong when you start writing your own VCL. Be sure to read https://info.varnish-software.com/blog/hit-for-miss-and-why-... and related posts. My general answer to getting your VCL correct is writing tests, but this is a tricky behavior to validate.

I'm unsure how nginx's caching handles this, which would make me nervous using the proxy_cache_lock directive for locations with a mix of cacheable and uncacheable content.


And to add the last big one from the trifecta:

Know how to deal with cacheable data. Know how to deal with uncacheable data. And above all, know how to keep them apart.

Accidentally caching uncacheable data has led to some of the ugliest and most avoidable data leaks and compromises in recent times.

If you go down the "route everything through a CDN" route (that can be as easy as ticking a box in the Google Cloud Platform backend), make extra sure to flag authenticated data as cache-control: private / no-cache.


no-cache does not mean content must not be cached - in fact, it specifies the opposite!

no-cache means that the response may be stored in any cache, but cached content MUST be revalidated before use.

public means that the response may be cached in any cache even if the response was not normally cacheable, while private restricts this to only the user agent's cache.

no-store specifies that this response must not be stored in any cache. Note that this does not prevent previously cached responses from being used.

max-age=0 can be added to no-store to also invalidate old cached responses, should one have accidentally sent a cacheable response for this resource. No other directives have any effect when using no-store.
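Putting those directives together, some typical combinations look like this (the parentheticals are my annotations, not header syntax):

```http
Cache-Control: no-cache             (store anywhere, but revalidate before every use)
Cache-Control: private, max-age=60  (browser cache only, fresh for 60 seconds)
Cache-Control: public, max-age=300  (any shared cache, fresh for 5 minutes)
Cache-Control: no-store, max-age=0  (never store; also expire earlier cached copies)
```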


That’s the best synopsis of the cache options I’ve ever read. It’s one of those things I have to pull documentation on every time I use it, but the way you just explained it makes so much sense that I might just memorize it now.

Edit: And now I see that you just copied bits from the Moz Dev page. I'll have to start using those more. I think the MS docs always come up first in Google.


MDN docs are quite good at times. And yes, certain parts were copy pasted in, as I didn't want to accidentally end up spreading misinformation.

Also note that I only mentioned the usual suspects - there are many more options, like must-revalidate.


Speaking of non-cacheable data:

https://arstechnica.com/gaming/2015/12/valve-explains-ddos-i...

Caching is HARD.


In varnish, if you have some requirements flexibility you can enable grace mode in order to serve stale responses but update from the origin, and avoid long requests every [5] seconds.

Not quite the same layer, but in node.js I’m a fan of the memoize(fn)->promise pattern where you wrap a promise-returning function to return the _same_ promise for any callers passing the same arguments. It’s a fairly simple caching mechanism that coalesces requests and the promise resolves/rejects for all callers at once.


I've implemented this manually in some golang web applications I've written. It really helps when you have an expensive cache-miss operation, as it can stack the specific requests so that once the original request is served, all of the stacked requests are served with the cached copy.


"Thundering herd" problem is how I have always heard it called.


The thundering herd problem isn't really about high levels of traffic. To the extent that that's a problem, it's just an ordinary DOS.

The thundering herd problem specifically refers to what happens if you coordinate things so that all your incoming requests occur simultaneously. Imagine that over the course of a week, you tell everyone who needs something from you "I'm busy right now; please come back next Tuesday at 11:28 am". You'll be overwhelmed on Tuesday at 11:28 am regardless of whether your average weekly workload is high or low, because you concentrated your entire weekly workload into the same one minute. You solve the thundering herd problem by not giving out the same retry time to everyone who contacts you while you're busy.


Hmm. I think of thundering Herd being about retries.

All your failing requests batch up when your retry strategy sucks; then you end up with really high traffic on every retry wave, and very little in between.


Retries without jitter are indeed a common source of thundering herd problems. Even with exponential backoff, if all the clients are retrying simultaneously, they'll hammer your servers over and over. By adding jitter (a random amount of extra delay that's different for every client and retry), the retries get staggered and the requests are spread out.
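One common recipe is "full jitter": draw the whole delay uniformly at random below the exponential ceiling. A sketch (function name and defaults are illustrative):

```python
import random

def backoff_with_jitter(attempt, base=0.5, cap=30.0):
    """Exponential backoff with "full jitter": the delay is drawn
    uniformly from [0, min(cap, base * 2**attempt)], so clients
    that failed together do not retry together."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))
```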


What do you do when you’re an API SaaS, and it’s your clients’ apps that are making thundering-herd requests?

Imagine you’re a service like Feedly, and one of your “direct customer” API clients — some feed-reader mobile client — has coded their apps such that all of their connected clients will re-request the specific user’s unique feed at exact, crontab-like 5-minute offsets from the start of the hour. So every five minutes, you get a huge burst of traffic, from all these clients—and it’s all different traffic, with nothing coalescable.

You don’t control the client in this case, but nor can you simply ban them—they’re your paying customers! (Yes, you can “fire your customer”, but this would be most of your customers…)

And certainly, you can try to teach the devs of your client how to write their own jitter logic—but that rarely works out, as often it’s junior frontend devs who wrote the client-side code, and it’s hard to have a non-intermediated conversation with them.


If you have no control at all over the client, then ultimately, you have to just take it and build your service to handle that amount of traffic. Adding jitter is a technique that you use when writing clients. That's why I mentioned it in the context of retries. If you are writing a CDN per the article, at some point your CDN has to make requests back to the origin. If one of those requests fails and you retry, you add jitter there to avoid DoSing yourself. If you are working in a microservices architecture, you add jitter on retries between your services.

The best you can do with clients that are out of your control is to publish a client library/SDK for your API that is convenient for your customers to use and implements best practices like exponential backoff, jitter, etc. If you have documentation with code snippets that junior devs are likely to copy and paste, include it in those.

If you've painted yourself into a corner like you describe and are seeing extremely regular traffic patterns, you might be able to pre-cache. Ie, it's 12:01 and you know that a barrage is coming at 12:05. Start going down the list of clients/feeds that you know are likely to be requested based on recent traffic patterns and generate the response, putting it in your cache/CDN with a five minute TTL. Then at least a good portion of the requests should be served straight from there and not add load to the origin. There are obviously drawbacks/risks to that approach, but it might be all you can really do.


If you're extremely desperate, you can start adding conditional jitter (somewhere within 5ms - 200 ms) to your load balancer/reverse proxy, such as your NGINX/Envoy/Apache box, which sits in front of your API. You can make the jitter conditional on count of concurrent requests or on latency spikes. It's an extreme last resort, and may require a bit of custom work via custom module or extension, but it is possible.

In general, try to avoid having no control over the client. If you must lack that control (such as when you're a pure SaaS company selling a public API), you can apply jitter based on API key in addition to the other metrics I mentioned above.

As better engineers than I used to say at a previous engagement: "if it's not in the SLA, it's an opportunity for optimization"


I like the “jitter based on API key” idea.

It’s somewhat hard in our case, as our direct customers (like the mobile app I mentioned) have API keys with us, but they don’t tell us about which user of theirs is making the request. And often they’ll run an HTTP gateway (in part so that they don’t have to embed their API key for our service in their client app), so we don’t even get to see the originating user IPs for these requests, either. We just get these huge spikes of periodic traffic, all from the same IP, all with the same API key, all about different things, and all delivered over a bunch of independent, concurrent TCP connections.

I’ve been considering a few options:

- Require users that have such a “multiple users behind an API gateway” setup, to tag their proxied requests with per-user API sub-keys, so we can jitter/schedule based on those.

- Since these customers like API gateways so much, we could just build a better API gateway for them to run; one that benefits us. (E.g. by Nagle-ing requests together into fewer, larger batch requests.) Requests that come as a single large batch request, could be scheduled by our backend at an optimal concurrency level, rather than trying to deal with huge concurrency bursts as we are now.

- Force users to rewrite their software to “play nice”, by introducing heavy-handed rate-limiting. Try to tune it so that the only possible way to avoid 429s is to either do gateway-side request queuing, or to introduce per-client schedule offsets (i.e. placing users on a hash ring by their ID, so for a periodic-per-5-minutes request, equal numbers of client apps are set to make the request at T+0, vs. T+2.5.)

- Introduce a middleware / reverse-proxy that holds an unbounded-in-size no-expire request queue, with one queue per API key, where requests are popped fairly from each queue (or prioritized according to the plan the user is paying for). Ensure backends only select(1) requests out from the middleware’s downstream sockets as quickly as they’re able to handle them. Require API requests to have explicit TTLs — a time after which serving the request would no longer be useful. If a backend pops a request and finds that it’s past its TTL, it discards it, answering it with an immediate 504 error.
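The per-client schedule offset from the hash-ring option can be as simple as hashing the (sub-)key into the period. A rough sketch (names and the 5-minute period are mine):

```python
import hashlib

def schedule_offset(user_id: str, period: float = 300.0) -> float:
    """Deterministically spread each client's periodic request across
    the period [0, period), based on a hash of its ID, so the bursts
    at T+0 flatten into a steady trickle."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return (h % 10_000) / 10_000 * period
```

Clients would then fire at period_start + offset instead of all at T+0; the same ID always maps to the same offset, so each client's cadence stays regular.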


Jitter is one way to solve it. Request coalescing is another.

It depends on the request type. Is it cacheable? Do you require a per-client side effect? ...


Request coalescing in a shared cache does not solve thundering herd, it just reduces propagation to backend services. Your cache is still subject to a thundering herd, and may be unable to keep up.

The only way to solve thundering herd - which is that a large share of requests arrives within a short timespan - is to distribute those requests over a larger timespan.

Reducing your herd size by having fewer requests does not solve thundering herd, but may make it bearable.


Retries tend to amplify it, but a more common cause is scheduled tasks in clients/end user devices.

E.g. all clients checking for an update at 10:00 UTC every day, all clients polling for new data at fixed times, etc.


Where does your perspective differ from what I said above?


Thundering herd is about mitigating a problem with backpressure scenarios. If you have a backoff and a delayed queue of requests, letting them all proceed at once when the backpressure scenario resolves is likely to recreate it/create a new one. Staggering them so they proceed slightly off in time avoids that.


unrelated to CDNs but IIRC vitess did/does query coalescing too -- if it starts to serve a query for "select * from users where id = 123" and then another 20 connections all want the same query result, vitess doesn't send all 21 select queries to the backend, it sends the first one and then has all the connections wait on the backend response, then serves the same response to them all.


Vitess still does this. It can also do similar with writes on hot rows where someone is incrementing a counter for example.


Wait, how can it do it with writes/increments? Does it keep track of which rows are hot and add a short but stochastically distributed delay to writes to try to coalesce more updates into a single hit to the DB?

I would think you'd need to do it that way, you wouldn't want to reply "done" to the first increment if that operation is going to be batched up with other ops; you'd want to keep that connection hanging until all the increments you're going to aggregate have all been committed by the backend.

In the select coalescing case, except for bookkeeping overhead, none of the queries are slower (it's a big net win all around because not only do clients get their answers on average somewhat sooner, but the DB doesn't have to parse those queries, check for them in the query cache, or marshal N responses).

But in the increment/write case, it seems like in order to spare some DB resources, some clients will perceive increased write delays (or does it still net a win because the DB backend doesn't have to deal with the contention?).


Do you know if varnish's request coalescing allows it to send partial responses to every client? For example, if an origin server sends headers immediately then takes 10 minutes to send the response body at a constant rate, will every client have half of the response body after 5 minutes?

Thanks!


I don’t know about Varnish, but having worked on other implementations, you would usually have a timeout on the initial lock (semaphore) to prevent a slow connection from impacting all clients.

But this is much, much harder to do once you are already streaming the response - if the time to first byte (TTFB) is quick, but the connection is low-throughput, you can’t do much at this point. But nearly all modern implementations stream the bytes to all clients immediately; they don’t try to fill the cache first (they do it simultaneously).

Some implementations might avoid fanning in too much - maintaining a smaller pool of connections rather than trying to get to ”1”, but that’s ultimately a trade-off at each layer of the onion, as they can still add up.

(I worked at both Cloudflare and Google, and it was a common topic: request coalescing is a big deal for large customers)


I think the nginx that members of the public can get from their package manager does not have this feature, and will force each client other than the first to either wait for the entire body to be downloaded or wait for a timeout and hit the origin in a non-cacheable request.


I don't know for certain, but my hunch is that it streams the output to multiple waiting clients as it receives it from the origin. Would have to do some testing to confirm that though.


Varnish has defaulted to streaming responses since varnish 4. I think it gets used for a lot of video streaming use cases.


Is this the same idea as `stale-while-revalidate`?



