
As someone already replied, message queues under sustained load can grow without bound if consumers can't keep up with producers.

> If I synchronously hit some service then I need to build facilities into the service to say that it's too heavily loaded, and perhaps handle those responses on the client.

Most of the time you need timeouts anyway when calling a synchronous API across the network, so the "too heavily loaded" response is just one more case the client already has to handle.
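A minimal sketch of that timeout handling, assuming a toy server that accepts connections but never replies (all names here, like `slow_server` and `call_with_timeout`, are mine, not from any real library beyond the stdlib `socket` module):

```python
import socket
import threading

def slow_server(host="127.0.0.1"):
    # Accepts a connection but never responds, simulating an overloaded service.
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        threading.Event().wait(5)  # hold the connection open, send nothing
        conn.close()

    threading.Thread(target=run, daemon=True).start()
    return port

def call_with_timeout(port, timeout=0.2):
    # Synchronous request with a deadline; the caller decides what
    # "no answer in time" means (retry, shed load, fail the request).
    client = socket.create_connection(("127.0.0.1", port), timeout=timeout)
    client.settimeout(timeout)
    try:
        client.sendall(b"request")
        return client.recv(1024)   # blocks until data arrives or timeout
    except socket.timeout:
        return None                # treat None as "service too slow"
    finally:
        client.close()
```

The key point is that the timeout lives on the client, so an overloaded service doesn't need to do anything special for the client to notice.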

In a large system, ideally you'd want to reflect the level of load back to the input source so the source can slow down sending data. Think of TCP: its flow control works exactly this way on a small scale. One way to solve the problem is to leverage that directly: open a TCP stream and send the data over it, and the sender will slow down automatically as the receiver's buffers fill.
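You can see TCP's flow control push back on a sender with a small loopback experiment: if the receiver never reads, the kernel buffers fill and further sends are refused. This is a sketch under the assumption of a POSIX-like stack (the function name `tcp_backpressure_demo` is mine):

```python
import socket

def tcp_backpressure_demo():
    # Loopback TCP pair where the receiver never reads. Once the send
    # and receive buffers fill, TCP flow control stops accepting data,
    # which surfaces to a non-blocking sender as BlockingIOError.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    sender = socket.create_connection(srv.getsockname())
    receiver, _ = srv.accept()
    # Shrink the buffers so the demo fills them quickly.
    sender.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
    receiver.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
    sender.setblocking(False)
    sent = 0
    try:
        while True:
            sent += sender.send(b"x" * 4096)
    except BlockingIOError:
        pass  # buffers full: the kernel is telling the sender to slow down
    finally:
        sender.close()
        receiver.close()
        srv.close()
    return sent  # finite, even though the loop had no explicit limit
```

A blocking sender would simply stall in `send()` at the same point, which is exactly the "source slows down" behavior described above.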

Now, if you know that your load is not constant, i.e. there are periods of high activity when inputs are generated, then you can try to absorb (and amortize) those high-volume peaks using asynchronous queues.
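A bounded queue gives you both behaviors at once: it absorbs a burst up to its capacity, and beyond that it blocks the producer instead of growing without bound. A minimal sketch with the stdlib `queue` module (the demo function name is mine):

```python
import queue
import threading
import time

def run_burst_demo():
    # Bounded queue: absorbs bursts up to maxsize, then q.put() blocks
    # the producer (backpressure) rather than growing without bound.
    q = queue.Queue(maxsize=8)
    processed = []

    def consumer():
        while True:
            item = q.get()
            if item is None:   # sentinel: no more work
                break
            processed.append(item)
            time.sleep(0.001)  # simulate slow downstream work

    t = threading.Thread(target=consumer)
    t.start()
    for i in range(100):       # a burst much larger than the queue
        q.put(i)               # blocks whenever the queue is full
    q.put(None)
    t.join()
    return processed
```

Choosing `maxsize` is the interesting design decision: it bounds memory use during a peak and sets how long a burst can run before the producer feels backpressure.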


