I disagree with many of the posters here saying that the difference between GET and POST is irrelevant or a social construct or something like that.
Intermediate proxies and browsers apply different caching rules to GET vs POST responses. This can have huge performance and security implications.
Along the lines of security, browsers impose stricter security mechanisms on POST requests. Cross-site POST calls require a CORS pre-flight OPTIONS request, and the response of this request is scrutinized. Choosing GET vs POST can be the difference between having CSRF protections and having none.
Sometimes I feel that some of the opinions here are not very useful without the context of the environments in which they are deployed. A few weeks ago I saw someone write that logging is an anti-pattern because correct code doesn't need it. That may work for a small project, just like using fat GET requests may be fine for most projects. But when you are building complex software at scale, the importance of following the normal practices becomes very evident.
> Cross site POST calls require a CORS pre-flight options request, and the response of this request is scrutinized.
Not in all cases. Certain POST requests count as "simple" requests and no preflight check is performed. For example, a POST with application/x-www-form-urlencoded data and no extra headers doesn't need a preflight, because you could make an equivalent request with an HTML form.
And continuing on that note, non-simple cross-origin GET requests initiated via JS also require a pre-flight OPTIONS request (actually any non-simple HTTP request would require this). See https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...
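The "simple request" rules above can be sketched as a predicate. This is a simplified reading of the Fetch spec's conditions (the real algorithm also checks header values and a few rarer cases), so treat it as an approximation:

```python
# Sketch of the CORS "simple request" test from the Fetch spec
# (simplified: the real spec also inspects header values and some
# less common conditions).

SAFELISTED_METHODS = {"GET", "HEAD", "POST"}
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SAFELISTED_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, headers):
    """Return True if a cross-origin request with this method and these
    headers would trigger an OPTIONS preflight."""
    if method.upper() not in SAFELISTED_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFELISTED_HEADERS:
            return True
        if name.lower() == "content-type":
            if value.split(";")[0].strip() not in SAFELISTED_CONTENT_TYPES:
                return True
    return False

# A form-style POST counts as "simple": no preflight.
print(needs_preflight("POST", {"Content-Type": "application/x-www-form-urlencoded"}))  # False
# A JSON POST is not simple: preflight required.
print(needs_preflight("POST", {"Content-Type": "application/json"}))  # True
# Even a GET with a custom header needs a preflight.
print(needs_preflight("GET", {"X-Request-Id": "abc"}))  # True
```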
No logging might "work" for a small project, but only in the way that not wearing clothes might "work" if you're only running outside to grab your mail then running back inside.
This, or having a body on a GET request, is the best option due to caching implications, which are often opinionated (not according to any spec) along the entire request/response path.
It's always bugged me having to use POST for complex query requests that have "outgrown" URL parameters. Had no idea a new QUERY verb was in the works - that sounds ideal.
Yes! It can make a big difference! For instance: in a single-writer many-read-replica deployment, GETs might be routed automatically to replicas, while POSTs might have to be routed to the writer. You can design around it (for instance, route everything to replicas, catch exceptions on writes, and replay to the writer), but it's simpler and marginally faster to simply distinguish based on the verb.
> route everything to replicas, catch exceptions on writes, and replay to the writer),
I am curious about the specifics here. Do you do it at the application level or proxy level?
Does doing it at the app level require that all requests can be routed to any DB instance, regardless of whether it's the master or a replica? A write to the master is passed through. A read on the master is routed to the replica LB. A read on a replica is passed through. A write on a replica is routed to the master.
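For illustration, the verb-based routing being described can be sketched in a few lines. This assumes the verb faithfully signals read vs. write, which is exactly the property being debated in this thread:

```python
# Minimal sketch of verb-based routing for a single-writer,
# many-read-replica deployment. Assumes GET/HEAD/OPTIONS never
# mutate state (the whole point of the GET/POST distinction).

READ_METHODS = {"GET", "HEAD", "OPTIONS"}

def route(method):
    """Pick a backend pool for a request based solely on its verb."""
    if method.upper() in READ_METHODS:
        return "replica"   # any read replica can serve this
    return "master"        # writes must reach the single writer

print(route("GET"))   # replica
print(route("POST"))  # master
print(route("head"))  # replica
```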
There's a bunch of different ways to do this, and upthread I just gave a sketch. I can tell you what we do at Fly.io --- or rather, what we suggest our users do:
* We have an HTTP header you can set in a reply that says "replay this request elsewhere".
* We boot up clusters of Postgres servers in read-replica configurations --- a single writer, lots of readers.
* If you try to write to a read replica, you get an error from Postgres, and your framework passes that error up to you.
* We suggest our users catch the exception and set the "replay elsewhere" header to redirect the request to the write master.
And that's pretty much it. Most apps are read-heavy. Reads get serviced from replicas (and, usually, close to their users --- that's the point of the service we built) and writes go to the single write master. You don't write any serious code to make that work.
But if apps could reliably say "a POST isn't just a complicated GET, but is almost certainly changing mutable serverside state", this would be an even easier problem; you'd just have the CDN route POSTs to the write master region, and GETs to the nearest region, and you'd be done.
You can architect your apps like this today, I guess, using QUERY or abusing PUT or something. My point is just that there's significant value in being able to shift all the read-only operations out of POST.
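A minimal sketch of the catch-and-replay pattern described above. The exception class and header name here are stand-ins, not Fly.io's actual API:

```python
# Sketch of "catch the write error on a replica, ask the edge to
# replay at the writer". ReadOnlyError and the "replay-elsewhere"
# header are illustrative stand-ins.

class ReadOnlyError(Exception):
    """Stand-in for the error a Postgres driver raises when you try to
    write through a read replica ("cannot execute INSERT in a
    read-only transaction")."""

def handle(request, run_query):
    """Run the query locally; on a read-only error, return a response
    that tells the edge proxy to replay the request at the writer."""
    try:
        body = run_query(request)
        return 200, {}, body
    except ReadOnlyError:
        return 409, {"replay-elsewhere": "region=writer"}, b""

def read_posts(request):
    return b"rows"

def write_against_replica(request):
    raise ReadOnlyError()

status, headers, _ = handle("GET /posts", read_posts)
print(status, headers)  # 200 {}
status, headers, _ = handle("POST /posts", write_against_replica)
print(status, headers)  # 409 {'replay-elsewhere': 'region=writer'}
```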
GETs are idempotent and pure, at least in theory. Any replica can serve them, give an identical result, and change no data.
POST, on the other hand, is expected to mutate data. It needs to go to the master node, unless you have bidirectional replication, which is way more complex.
This is why a load balancer / proxy can spread GET requests and cache their results, but can't do so with POSTs.
QUERY is like POST in that it has a body, but is like GET in that it's idempotent, and should return the same result given the same URL and body. Load-balancing and caching work again.
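One way to picture why QUERY re-enables caching: the cache key just has to incorporate the body. A sketch (the key layout is illustrative, not from any spec):

```python
# If QUERY is safe and idempotent like GET but carries a body, a cache
# must key on the body as well as the method and URL. Illustrative only.

import hashlib

def cache_key(method, url, body=b""):
    m = method.upper()
    if m == "GET":
        return (m, url)             # body is not significant for GET
    if m == "QUERY":
        digest = hashlib.sha256(body).hexdigest()
        return (m, url, digest)     # same URL + same body -> same entry
    return None                     # POST etc.: not cacheable by default

# Two identical QUERYs share a cache entry; different bodies do not.
k1 = cache_key("QUERY", "/search", b'{"q": "http"}')
k2 = cache_key("QUERY", "/search", b'{"q": "http"}')
k3 = cache_key("QUERY", "/search", b'{"q": "rest"}')
print(k1 == k2, k1 == k3)  # True False
```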
> GETs are idempotent and pure, at least in theory.
No, GETs are safe (do not induce any client-responsibility state changes) and idempotent (do not induce any additional client-responsibility change of state for a second identical request with no intervening action after the first), but not pure (pure would mean not depending on any state outside of the request).
That takes two steps, needs authorization, and needs queries created that way to be stable so that when you execute them with GET you know what you're executing.
This is a lot of application logic to add just to avoid adding a QUERY method. It's just too much.
while that could be preferred, here are some reasons why it may not be
1. pre-existing behavior. maybe queries have only started getting big enough to where this mattered, but all of our existing clients communicate the original way and won't migrate
2. more complex behavior on clients and services as you've now introduced a new stateful resource between client and service. should clients store query ids, or does the service handle idempotency? what db does the service use to store queries? how do we limit stored queries? do clients have to manage number of queries stored? does the service?
If you can only use POST, then the query is by definition not safe in the HTTP sense even if it would be with a QUERY method: POST is defined to be not-safe.
So an HTTP framework would not retry, moving retry logic into the application.
If you can only use POST, then you also can't cache the result because POST is not safe: you've no idea if the same POST is intended to produce the same response.
POSTs are cacheable with a cache response header. POST MAY NOT be idempotent. That doesn't mean they MUST NOT be idempotent. If you have the cache header, you do know what the intention is.
Popular browsers limit the size of the URL (not specifically the query string, the whole URL). If frameworks limit qs size, it’s most likely due to upstream browser limitations.
CloudFront does as well. Bit us when Adyen didn't support a POST-workflow for our case, and at the time sent a GET request a thousand kilometers long instead.
AFAIK Chrome et al. do too, at a couple thousand characters last I searched for an answer.
I did write some tracking code that ended up having really long strings and a ton of parameters and wrote it to break into multiple requests at like 1800.
also things like proxies/load balancers. I remember being happy that I figured out how to configure my framework to allow a long url, but then sad that something in between wasn't allowing it.
There's definitely a lot of misunderstood uses of the verbs, and I've maybe read about them only a couple of times, long long ago. I know POST gets bastardized a lot in attempts to avoid query strings attached to the end of GETs. Lots of people promote that in tutorials/blogs/etc.
I have had to bastardize POST in a situation where I had to support an HTTP client that only implemented GET and POST. So the desired action, e.g. DELETE, was included in the POST data.
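That workaround is common enough to have conventions: an override header (often X-HTTP-Method-Override) or a form field like _method, as some frameworks use. A sketch of how a server might honor it (the allow-list and field names are assumptions, adjust to taste):

```python
# Sketch of "method override" for clients that only speak GET and POST:
# the real verb is tunneled inside a POST, via a header or a form field.
# Only overriding POST, and only to a small allow-list, limits abuse.

ALLOWED_OVERRIDES = {"PUT", "PATCH", "DELETE"}

def effective_method(method, headers, form=None):
    """Resolve the verb the client actually intended."""
    if method.upper() != "POST":
        return method.upper()      # only POST may be overridden
    override = headers.get("X-HTTP-Method-Override") or (form or {}).get("_method")
    if override and override.upper() in ALLOWED_OVERRIDES:
        return override.upper()
    return "POST"

print(effective_method("POST", {"X-HTTP-Method-Override": "DELETE"}))  # DELETE
print(effective_method("POST", {}, {"_method": "put"}))                # PUT
print(effective_method("GET", {}))                                     # GET
```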
The problem with "exotic" verbs is that proxies in the middle will often misbehave when faced with them. There is some old network infrastructure out there.
It's being imported into schools in the UK and parts of mainland Europe. Cory Doctorow reckons that there's a trend, where things like this start in places like schools and then gradually end up in all our institutions.
This brings back memories. I once failed an interview because an interviewer asked me how a certain API could be implemented, and I suggested that you could send a body with a GET request. My thinking at the time was that, even though unusual, Elasticsearch does this (2016: https://github.com/elastic/elasticsearch/issues/16024#issuec...) and it might be easier to cache than POST, but I could tell that the interviewer wasn't happy with my response.
Glad that a QUERY method will be provided instead, and I learnt my lesson about giving against-the-grain responses to interviewers.
The QUERY method is definitely the way to go. GET with a body is iffy. If you owned the entire stack it might be doable. But once you don't have control of the many HTTP proxies and firewalls that a GET with a body has to travel through, it becomes a problem. I think the RFC does not explicitly define the behavior of GET with a body, so proxies or firewalls can just drop the message and still be completely HTTP compliant.
Not passing judgement - you may have very well qualified your statement. But having dealt with the above issue some years back (as in being the person on the debugging run) I do sympathize with the unhappiness. :)
Yeah, your point about needing to own the entire stack in this case makes perfect sense, and to be honest, I didn't qualify what I said well at all at the time. When I left the interview I had that sense of "damn, why did I say something so strange with no explanation of where I'd seen this or argument against it". There could have been a conversation about it, but my response just made the interviewer tick the 'no' box.
Yeah, that's on the interviewer for not asking follow up questions.
I've had plenty of intelligent answers that seemed weird to me before I followed up. Or "wrong" answers because I didn't explain the question well or there were different assumptions. Red flags shouldn't be red flags until you probe and give the candidate a chance to explain (or dig a deeper hole).
I've had a similar interview where I was interviewing and this came up, the interviewee didn't pass but it wasn't because they said GETs could have bodies. That was just one of the reasons, and they would've passed if that was the only "wrong" answer.
If they had explained that you could indeed have a body in a GET request, even if it went against the spec and you'd probably have to modify your existing body parser to comply, I would have accepted the answer as "correct". What matters is if you can explain your answer, not if you can answer yes or no questions.
In the end they didn't explain their reasoning and I just said that GETs can indeed have a body, but it goes against the spec, so it's gonna be hard unless "you own the entire stack", like a parallel comment said here. This way they have a base to explore more info later and improve for their next interview.
Sort of reminds me of how at one point, the Ruby HTTP client wouldn't let you send query parameters in a POST request. Somewhere out there there's an issue where someone reports that, and the maintainer is appalled that someone would even want to do so!
But those same proxies/firewalls probably won't understand QUERY either, right? And if they are updated to add support, why can't they be updated to add support for GET with request bodies?
Oh damn... I've seen this one too. As always, this is doing a REST API without understanding the nuances of HTTP and its headers. The application service in this case wasn't setting the correct cache-control. A proxy some layers above happily cached it. Adding the cache-control didn't make a difference. The proxy had to be restarted (the cache was within its TTL so didn't get changed).
Ideally an api gateway should have managed the headers but ours didn't. Not at that time at least.
The same advice applies for university exams. (Note: particularly in essay-based subjects.) Don't provide against-the-grain answers, no matter how smart or well thought through. Your university professor wants to hear how clever he/she is and wants you to write opinions that agree with their point of view. It's the best way to get marks. Just copy the party line. I was told this by a very clever university professor. It still makes me smile lol
This is only kind of true. If you are asked "why is X true?" and you go on to critique the epistemological basis for saying that something of type X could be true or not, you may be giving an in-depth and intelligent answer that is responsive to the prompt. But you are also not demonstrating the knowledge that the question was designed to probe; i.e., that you have learned and retained the particular arguments for the truth of X that were covered in class.
These discussions sometimes go off the rail because it's a bunch of STEM folks critiquing the humanities, so I'll share a STEM example that gets the point across. A too-clever friend of mine got marks off in our Analysis course because he presented a constructive proof of a theorem on a test, and that was not the proof technique that the test was designed to interrogate.
Superficially his answer was "correct". But then, as his professor pointed out, on an even deeper level it really was incorrect, because our write-ups of proofs are really high-level descriptions of formal derivations, and the student was describing a proof in a different axiomatic system than the one assumed by the question. So, to be totally correct, he would have had to embed a proof that his constructive-analysis proof mapped over into normal analysis. Which would have been quite a bit of work that the student, at the time, was unable to do (and probably no one could do in an exam setting).
But, again, the real point is that this part of the debate is pedantic because what my clever "friend" really didn't understand was WHY the question was being asked. The answer is supposed to demonstrate a certain piece of knowledge in a certain context; it's an exam answer in a closed curriculum, not a Treatise.
Besides, you are ALWAYS writing to an audience and context; if you want to write to a "pure" audience, you may do so, but consider saving it for Sunday morning prayers.
Which, since that friend isn't paid to write analysis proofs these days, were perhaps important lessons ;)
> I learnt my lesson about giving against-the-grain responses to interviewers.
Generally speaking there's nothing wrong with against-the-grain responses if they're well argued for, especially if you can compare them to the other, more common, options and make your solution sound the best. "Elastic search does this, and it might be easier to cache" don't sound like the best arguments to me.
One issue, for me, is that as soon as you step out of the realm of GET and POST you start needing to contend with proxy servers that are doing stupid things and misconfigured - you also bump into the issue that for some inconceivable reason HTML forms are still restricted to only supporting GET and POST methods. I've written GET requests with bodies and, while they're normally not that useful, if you've got a big chunk of serialized data in the request (e.g. some random blob of JSON with config settings) then they're a reasonable solution. QUERY is nice, but GET is known and, when talking about the web, old standards are a lot more trustworthy.
I had to build a CLI HTTP client in Java and went with OkHttp [1], as it was widely used on Android. I had to add support for GET requests with a body, but I wasn't able to do it with OkHttp. The library was rather opinionated and you couldn't add a body to a GET request back then [2]. I was rather surprised, because I thought the HTTP specs allowed it while discouraging it, so an HTTP library should allow this kind of usage. I went back to Apache HttpComponents, which is less fancy than OkHttp but also less opinionated, and was able to complete my client. Things may have changed; it was a few years ago.
Yes, OkHttp is somewhat opinionated. You probably shouldn't include an entity body with a GET request unless you really know what you're doing. I wouldn't recommend it anyhow. Years of tradition make this a risky choice.
If i understand the common critique from the comments here it's that GET with a body, while not bad on its face, is nevertheless discouraged because you're gambling with any number of intermediate systems that may strip the body from the request.
But what about DELETE with body? It's similarly unspecified, but it seems to me to be of a different breed. The point made about GET is that, under the assumption of a theoretically idempotent and immutable operation, various caching systems can kick in and serve a request from any potential world-wide server. But a DELETE is by nature a mutating event, and will always need to be directed to some kind of principal server, meaning it should probably escape the caching the same way POST requests do.
Is it just as much a gamble to do DELETE with body as it is GET with body?
> If i understand the common critique from the comments here it's that GET with a body, while not bad on its face, is nevertheless discouraged because you're gambling with any number of intermediate systems that may strip the body from the request.
That's a problem.
> But what about DELETE with body? It's similarly unspecified,
And faces the same risk that intermediate systems might reject it or strip the body, called out in RFC 7231.
> But a DELETE is by nature a mutating event,
DELETE is, more to the point, and despite being idempotent, expressly not cacheable, so there is no risk of getting served the wrong cached response based on the body being stripped. But, aside from the request not fitting the defined semantics of what DELETE means, you still have the risk of rejection or the request having the body stripped.
I took a deepish dive into these kinds of issues. My interpretation is that no one uses REST properly (ie, HATEOAS). Fielding has even said so - if you aren’t designing something as big as HTTP itself, then “true” REST isn’t for you.
Now there’s this thing called REST that we all commonly understand and debate over. And I have run into so many issues that I basically abandoned calling it REST and now just call it an API.
Use GET for everything that can be cached and repeated without side-effects, and POST for everything else.
You can delete the rest of the spec and clients can connect without headaches, but only if you can put up with the fanatics on your team trying to trace every bug to not following REST, occasionally giving you no option other than trying patch after patch to make everything REST-compliant.
My interpretation is that REST is such a complicated definition that no one bothers to understand it. It's just too complicated (insert reference to Fielding's dissertation). At least, whatever Fielding wrote, the terms and language are too esoteric for people, including me, to bother with.
REST is now in practice an API using JSON over HTTP with the appropriate verbs. And it's 'stateless' in that the server doesn't have to track the client's prior requests to understand the current request. That's about it, I think...
> My interpretation is that no one uses REST properly (ie, HATEOAS).
Agreed that everyone ignores the HATEOAS clause of REST, even when they call things "RESTful", but that's because the HATEOAS clause is tremendously stupid. I don't know why it's so common for narrow-minded intellectuals to decide that the reason the world is so complicated is because nobody has sat down yet and decided to make it simple. No, the world is complicated because reality has a surprising amount of detail [1][2]. When one person (or worse, a committee) sits down and decides "this is the way all of the world's information will be organized from now on" (HATEOAS, the "semantic web", etc.) they envision automated tools that can browse information the way Web Browsers browse the web. But as complicated as HTTP, HTML, CSS, JS, CORS, SVG, and now WebAssembly are, they are dozens of orders of magnitude simpler than "all the information anyone might ever want to make available over an API". Writing hypertext that can be understood by a tool to digest how your API should be consumed is just not possible. It doesn't work for people who are new to the API (they need to read the docs no matter what), and it's useless overhead for the people who use the API all the time (the hypertext is delivered with every request!?)
People don't use HATEOAS because HATEOAS is stupid.
> My motivation came from GraphQL as it responds everything with 200s, and so far it has been humming along fine.
This is slightly annoying though, as other consumers (like `fetch` in the JavaScript world, for example) behave differently if there is an error. If every status is 200, suddenly you need to manually check the response inside the returned promise, while if you answer with a correct status code when the server encountered an error (5xx), then it'll jump to the `.catch` part of the promise chain instead, which you're probably already handling anyway.
Similarly with curl if I'm not mistaken (don't have it in front of me). A 2xx would make the process return 0, while a 5xx would make it return non-0, so you can handle errors without having to read the actual response.
That must be something fetch specific, per the HTTP specs IIRC, any status code should be passed to the application - and the way I read it - unfiltered. It is neither an error nor an exception, the request is successful. A DNS or server connection issue would not be.
Similar with curl: 502 returns non-zero, 302 does not. 4xx, IIRC, is also zero unless --fail is in use. The default makes it sensible/useful for smoke tests.
Perhaps GraphQL was aware of such client-side "bugs", which is why it chose to go "OK" first of all.
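For what it's worth, at the HTTP level a 5xx is still a successful exchange: the status is handed to the application as data, and any raising or non-zero exiting is client policy layered on top. Python's http.client makes that visible, since it never raises on an error status:

```python
# Demonstration that an HTTP error status is delivered as data, not as
# an exception: http.client happily returns a 500 response. Uses a
# throwaway local server on an ephemeral port.

import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlwaysError(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(500)
        self.send_header("Content-Length", "5")
        self.end_headers()
        self.wfile.write(b"oops!")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), AlwaysError)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port, timeout=5)
conn.request("GET", "/")
resp = conn.getresponse()   # no exception, despite the 500
body = resp.read()
print(resp.status, body)    # 500 b'oops!'
server.shutdown()
```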
That part of GQL drives me nuts. How the heck do people do error handling on the frontend with API requests through GQL? Instead of `try catch`, do you do `if (response.error) {}`?
APIs should be giving back specified error messages regardless as 4xx and 5xx errors can still be too generic.
For example, if you're rate limited (429), a robust API will still give you back data on how many requests you've made and how many requests you're allowed to make, so you're still going to have to check the payload regardless.
The combination of specific error status codes along with error messages has been redundant in my experience.
Furthermore, in Axios, checking for `error.response.data` isn't terribly far from `error.response.status`, so I'm unsure of this "worst client experience" you're talking about.
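To illustrate the 429 case: even with a specific status code, a client still reads headers for the specifics. The header names below (Retry-After, a fallback delay) are common conventions, not something every API guarantees:

```python
# Sketch of acting on a 429 using the status code plus rate-limit
# headers. "Retry-After" is a standard header; the fallback of 60s is
# an arbitrary illustrative choice.

def backoff_seconds(status, headers):
    """How long to wait before retrying, based on the response."""
    if status != 429:
        return 0
    retry_after = headers.get("Retry-After")
    if retry_after and retry_after.isdigit():
        return int(retry_after)
    return 60  # server gave no hint; pick a conservative default

print(backoff_seconds(429, {"Retry-After": "17"}))  # 17
print(backoff_seconds(429, {}))                     # 60
print(backoff_seconds(200, {}))                     # 0
```

The point of keying on the status first is that this one wrapper works across APIs, while the payload shape still varies per API.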
I don't want to introspect response bodies when I can introspect the status. If I'm given 429, I can automatically do something based on that. If I need to introspect the response body I have to say "okay, look inside Joe's api and look for key Foo. But in Sally's api, look for key Bar. And in Mike's api look for key Baz. And...."
I think it's objectively worse that I, as the engineer, need to handle all of them differently when they all could have returned 429 instead and I could write a general wrapper for that.
Because your API returns it as error.response.data while Joe's API might put it in a different key and Mike might put it in yet another.
If there's a way we could say "this specific key will be this specific value in order to mean this specific scenario", I'd support that! But then, isn't that just the status code I'm talking about?
I never said the error message can't or shouldn't be introspected and totally agree we should send our clients as much info as we can to make it clear on what they can do to fix any errors!
I just really really like being able to say "this specific field tells you, across all apis, what happened with this request" and don't see the gain in pushing that data into good luck finding out what key.
But you're already doing this for actual non-error responses...
I'm currently dealing with an endpoint where it sends anywhere from 2-4mb worth of JSON data. Do I expect my clients to traverse every key and every deeply nested field to find what they're expecting for?
Absolutely not. because there's an internally standardized way of dealing with things and that is also precisely how we also deal with errors.
Now if you have an external API that faces many clients that you don't know about, then maybe there is consideration for usage of specific status codes.
But even then, your api should have documentation for it.
Sorry, it seems like you're dug in on this, but you're forcing clients to deserialize enormous object graphs just to figure out if there was a failure or not. This is "surprising" behavior. A better designed system would use the HTTP status code, status reason, and perhaps an additional header to convey important high-level information.
Actively abusing/not using the status code is not okay; this is something you see in "written by the intern" web APIs. Use the HTTP status code, please, and include other helpful details in the status message and entity body if appropriate.
We have a standardized way of dealing with errors in which we will send 400/500
and have an error.data.err and an error.data.message in which err will give a title of the error, and the message elaborating the error in all of our errors across our APIs
The reason being, we don't need our developers to memorize dozens of generic status codes, and we can be extremely specific in our error.data.err in a way status codes can't.
I get so exhausted listening to people talk about http semantics, and arguing about the restfulness of different approaches.
From an implementation point of view, the difference between a POST, GET, and QUERY with a body is...trivial. I look forward to QUERY because it'll finally stop arguing about having to send a body on GET, or do a POST for something like a search.
It's gotten to the point where it all feels like navel-gazing to me. I'd rather talk about what the API does rather than how amazingly RESTful it is.
> I get exhausted by people talking about integers, floats, and strings and arguing about the "types" of different things. From an implementation point of view, the difference between an integer, a float with or without a mantissa, or a string with numbers and decimal symbols is trivial. It has gotten to the point that...
Things have meanings, and in computer science and programming those meanings can mean a performance difference or a difference in results, so I think it is very important that we start to pay attention to the "navel-gazing" aspects of these things and not use a GET with a body, since there is a QUERY, and other specs which might be kinda navel-gazey.
We are in a career that requires a pretty high level of strictness and we're always complaining about bugs in this or that software; then we argue that there is no meaning in certain things (only because in practice, people hadn't been strict about something like a GET request) which then leads to more bugs!
I think you may be misinterpreting my exhaustion --
I certainly agree that things have meaning. They have such a meaning that we standardize them.
Thanks to the standardization and proliferation of the technology, there's generally an obvious correct choice, and a few poor arguments for others. In other words, it's generally not hard to make these things follow the Principle of Least Astonishment[0]. Every now and then there might be an interesting constraint that makes for a different choice (e.g., don't store zip code as an integer, because it may have a leading zero).
The amount of verbose discussion that I observe any time rest semantics come up still baffles me.
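The zip-code aside above is a nice two-line demonstration of why representation choices carry meaning:

```python
# Why a ZIP code isn't an integer: the leading zero is data.
zip_as_int = int("02134")   # a Boston-area ZIP
print(zip_as_int)           # 2134 -- the leading zero is gone
zip_as_str = "02134"
print(len(zip_as_str))      # 5  -- the string keeps all five digits
```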
Some things have fundamental meaning. Like integers, floats, and strings are clearly very different things.
Whether you allow this or that via PUT vs. POST or whatever is just a convention, and one that you can freely change as long as it is fully documented.
At the end of the day, I don't care. If the API I am talking to says they respond to certain information sent via specific method, that's how I send it. Yes, we could argue till the cows come home about levels of appropriateness, but I've got better things to do with that time. Like process the data I just received from GET request sent with a body!
I thought "REST" was primarily successful at first because it was a reaction to the overblown XML mess in the early 2000s (that's the "X" in XHR, even though nobody uses it for that any more).
I never understood why it became dogmatic though. Arguing for practicality over dogma seems like a losing battle, even in this industry where people are supposed to be relatively practical. I guess that's human nature :-/
Exactly. If I need to access some api, I'll just read their docs on which methods they want you to use. I never go in thinking "oh if there's a get request for this resource, there must be a post request for this resource like this".
Actually, I would test the request first, and only if it doesn't work would I reach for the documentation, then test again after reading it. If the documentation is even needed.
> I get so exhausted listening to people talk about http semantics, and arguing about the restfulness of different approaches.
I'm with you this far.
> From an implementation point of view, the difference between a POST, GET, and QUERY with a body is...trivial.
Sadly the difference is not trivial at all. I suppose if you're implementing an application-specific HTTP server that directly understands the application's semantics then the difference between these is trivial enough. But for the client the difference is not trivial at all, and the clients are bound to be off-the-shelf in many cases. And for an off-the-shelf HTTP server framework the difference may not be so trivial either.
I've stared at enough http where the only difference to me is
GET /foo HTTP/1.1\r\nHost: example.com\r\n\r\n{"foo":"bar"}
POST /foo HTTP/1.1\r\nHost: example.com\r\n\r\n{"foo":"bar"}
QUERY /foo HTTP/1.1\r\nHost: example.com\r\n\r\n{"foo":"bar"}
If it's a possibility that potential clients might refuse to do something like a GET with a body...don't use that as part of your api. Or make the endpoint respond identically to a GET with a body as a POST with a body. Similarly if you're working in a http server framework that makes such a thing difficult.
At the end of the day, I care a hell of a lot more about how well the API is documented than I do about any of its particular semantics.
> From an implementation point of view, the difference between a POST, GET, and QUERY with a body is...trivial
That said, I'm sure you're right, and there are proxies that don't support a GET with a body. It's an inadvisable thing to implement if there's a chance that your clients may not be able to support it. It's also really easy to just send a POST request with a body instead.
Where I get exhausted is when people start complaining that POSTing a body for something like a search query isn't restful. No, it's not, but it's not worth a heck of a lot of discussion either. In that respect, I'm happy that QUERY is being added, so it can hopefully resolve the distaste that the dogmatic experience. But that's really the only thing that makes me excited about it. Other than hushing the dogmatic, it doesn't really add much value.
In the absence of QUERY, the only way to do large queries is by POSTing them. Anyone who complains that that is not RESTful can go get in the time machine and implement QUERY in 1993.
> From an implementation point of view, the difference between a POST, GET, and QUERY with a body is...trivial.
I'm quite certain that if I were implementing a caching proxy (or a caching client, like a browser), I'd find the difference rather important. That said, I'm also sure I'd have to consider special exceptions for popular software that got the details wrong.
Have you ever needed a cache? The difference between GET and POST starts to look very important.
Even without a cache, GET and POST describe "what the API does" in detail: will my request change state on the server, or is it safe to repeat? Considering them to be equivalent is like saying the SQL statements SELECT, INSERT and DELETE are all the same. The distinction is absolutely critical.
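The check a caching intermediary has to make can be sketched like this, loosely following RFC 9111 semantics (heavily simplified: a real cache also consults Cache-Control, Vary, authorization headers, and more):

```python
# Which responses may a shared cache store? Method is the first gate.
CACHEABLE_BY_DEFAULT = {"GET", "HEAD"}

# Subset of status codes that are heuristically cacheable per RFC 9110.
HEURISTICALLY_CACHEABLE_STATUS = {200, 203, 301, 404, 410}

def may_cache(method: str, status: int) -> bool:
    # POST responses are only cacheable with explicit freshness
    # information, which this sketch ignores entirely.
    return method in CACHEABLE_BY_DEFAULT and status in HEURISTICALLY_CACHEABLE_STATUS

assert may_cache("GET", 200)
assert not may_cache("POST", 200)
assert not may_cache("DELETE", 200)
```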
Seems like HTTP has been baked for so long that whatever new methods are added, adoption will be very slow and only a small fraction of web APIs will expose those methods.
In other words, aside from <1% of developers, no one will care.
Also, GraphQL seems to address many of the issues that manifest in the current form of GET. My bet is that GraphQL gets adopted more widely (and faster) than a new QUERY method.
Nobody uses QUERY, so it will have surprising consequences, no less than a GET with a request body would. Firewalls will block you for "hacking". It's absolutely insane and idiotic that this is even a question. HTTP, which is a protocol that has no stated purpose, could have had no method at all and it would have made no difference. They could have just shoved whatever caching-concern bullshit rationale (which isn't sound either) into more header lines. HTTP is a Turing tarpit for protocols. Please close your websites; the users hate it and so do the engineers.
Standards are nice and all, but I've implemented a generic web-client library and had to support payloads on all HTTP verbs, because, well, some servers require it.
Clients of a service seldom get to meaningfully complain - use the service, don't use the service, whatever... So whatever the server wants, the server gets...
Although ironically I've also implemented servers, and gotten all kinds of complaints from clients: it's too hard to set a cookie or header, can't we put tokens inside the JSON, etc.
Yeah, nobody is saying start using QUERY today for all your requests; it's a suggestion to design new APIs with more semantic verbs that follow the spec.
I really, really, hate REST prescriptivism. I'm going to return 200 from POSTs and want to use request bodies for GET requests to do filtering. We may as well use POST for updates. REST doesn't work for my use-case or probably most use-cases but people seem to treat it like an article of faith even when it forces a completely bass-ackwards design.
You can do that (adding a body to GET), but you might run into restrictions on the wire. Example from the article:
> When working with HTTP, there’s servers but also load balancers, proxies, browsers and other clients that all need to work together. The behavior isn’t just undefined server-side, a load balancer might choose to silently drop bodies or throw errors. There’s many real-world examples of this. fetch() for example will throw an error.
Sorry, I wasn't clear. The lack of defined behaviour for bodies in GET is an example of harms arising from REST obsession. It's clearly a useful thing, but because it goes against the intended design it's treated as a bad idea. It just feels like REST discussions are where the strongest divide between pragmatism and bikeshedding occurs.
While REST obsession is a thing, the "lack of defined behaviour for bodies in GET" arises from the fact that it's defined to not have a request body, therefore there exists a lot of code that assumes it has no request body and acts accordingly, leading to interoperability problems.
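Even Python's stdlib illustrates the "code assumes GET has no body" point: urllib infers POST from the mere presence of a body unless you explicitly override the method.

```python
# urllib switches the method to POST when data is supplied,
# precisely because a body on a GET is assumed not to happen.
from urllib.request import Request

r1 = Request("http://example.com/search", data=b'{"q": "x"}')
assert r1.get_method() == "POST"   # method inferred from the body's presence

r2 = Request("http://example.com/search", data=b'{"q": "x"}', method="GET")
assert r2.get_method() == "GET"    # allowed, but intermediaries may drop the body
```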
You have to separate HTTP, the spec, from REST the philosophy.
REST says you should use HTTP status codes. HTTP doesn't obligate you to use the full range of its status codes -- you can code up a resource that always returns 200 for POSTs, and that's fine as far as HTTP goes, and not as far as REST goes.
No, it doesn't, though leveraging the semantics of the underlying communications protocol (such as response codes when running over HTTP) is a convenient way of satisfying the “self-describing messages” constraint of REST.
> you can code up a resource that always returns 200 for POSTs, and that's fine as far as HTTP goes, and not as far as REST goes.
The only reason it would be even slightly problematic for REST is if it were inconsistent with the semantics of HTTP. Which, of course, returning 200 for anything but success would be.
In my view, REST is how HTTP is supposed to be used. If you're going to POST and 200 everything, you're probably re-inventing things that HTTP already has well-tested and already implemented infrastructure for. If HTTP already has a clear solution for what you're doing, you should use that instead of building it yourself on top of HTTP.
It's not even just that, but by avoiding the well-known standards of how HTTP behaves, you're going to find yourself fighting with any tool/system that is based on those standards.
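The practical cost of POST-and-200-everything can be sketched as follows (the outcome names and dict are invented for illustration): once everything is 200, error signaling has to be reinvented inside the payload, invisible to any tool that speaks HTTP.

```python
# Semantic responses reuse HTTP's existing vocabulary...
OUTCOME_TO_STATUS = {
    "created":   201,
    "ok":        200,
    "not_found": 404,
    "conflict":  409,
}

def semantic_response(outcome):
    return OUTCOME_TO_STATUS[outcome], {}

# ...while an always-200 API buries the outcome in the body, so caches,
# proxies, and client libraries can't tell success from failure without
# parsing the payload.
def always_200_response(outcome):
    return 200, {"status": outcome}

assert semantic_response("not_found")[0] == 404
assert always_200_response("not_found")[0] == 200
```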
For REST api design I've always kind of been pissed that GET doesn't come with a body. (Yeah I know you can do it anyway.) Just because encoding complicated parameters into the URL is pretty sucky at times.
I suppose a QUERY would really solve the problem. Maybe it's the best compromise.
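A quick illustration of why stuffing complicated parameters into a URL gets sucky (the filter shape here is made up): a query string forces flattening conventions, while a body keeps the structure intact.

```python
import json
from urllib.parse import urlencode

filt = {"status": "open", "tags": ["urgent", "billing"], "created_after": "2023-01-01"}

# Query-string form: repeated keys, ad-hoc list conventions, URL length limits.
qs = urlencode(filt, doseq=True)
assert qs == "status=open&tags=urgent&tags=billing&created_after=2023-01-01"

# Body form (what QUERY would carry): the structure survives round-tripping.
body = json.dumps(filt)
assert json.loads(body) == filt
```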
An interviewer once asked me what the difference between GET and POST was. My answer was along the lines of "well, technically just four letters in the header, but they're used differently", and the interviewer was absolutely adamant that I was wrong and that there was some fundamental difference between GET and POST requests, to the point that I tried numerous times to steer it back toward how they're used.
Eventually I gave up and just ended the interview.
GET is supposed to be safe and idempotent: it shouldn't have side effects. Still a good idea, in my opinion. Browsers know they can repeat GET requests, but shouldn't repeat POST requests without explicit confirmation.
If I'm interviewing someone who claims to be familiar with HTTP, I expect them to know this. They might choose otherwise in specific situations for specific reasons, but they should know the convention.
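The convention in question boils down to the safe/idempotent table from RFC 9110, which can be sketched as a lookup (simplified; the spec covers more methods and nuances):

```python
# Per-method properties per RFC 9110: (safe, idempotent).
METHODS = {
    "GET":    (True,  True),
    "HEAD":   (True,  True),
    "PUT":    (False, True),
    "DELETE": (False, True),
    "POST":   (False, False),
}

def can_retry_automatically(method: str) -> bool:
    # Clients and proxies may repeat idempotent requests without
    # asking the user; POST gets the "resubmit form?" dialog instead.
    return METHODS[method][1]

assert can_retry_automatically("GET")
assert can_retry_automatically("PUT")
assert not can_retry_automatically("POST")
```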
I'm not sure that's the case. The resource receiving the GET request isn't producing any side effects in reality. The caching server/service in front of it is.
The underlying resource (behind the cache) doesn't (shouldn't?) even know about the cache itself.
And the cache doesn't mutate any data points, just may store a version of it.
There actually is a difference between the two. From the RFC 2616 spec [1]:
> GET: "means retrieve whatever information (in the form of an entity) is identified by the Request-URI" [2]
> POST: "used to request that the origin server accept the entity enclosed in the request as a new subordinate of the resource identified by the Request-URI" [3]
The name of the request is the only difference from the perspective of HTTP. But it's definitely worth mentioning that POST requests trigger a bunch of browser safety features that aren't triggered by GETs.
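One of those safety features is the CORS preflight. A rough sketch of the Fetch spec's "simple request" rules (simplified: the real rules also cover header-name safelists, ReadableStream bodies, etc.):

```python
# Does a cross-origin request need a preflight OPTIONS check?
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, content_type=None):
    if method not in SIMPLE_METHODS:
        return True
    if content_type is not None and content_type not in SIMPLE_CONTENT_TYPES:
        return True
    return False

assert not needs_preflight("GET")
assert not needs_preflight("POST", "application/x-www-form-urlencoded")
assert needs_preflight("POST", "application/json")
assert needs_preflight("QUERY")   # new methods are never "simple"
```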
True, but your garden-variety command-line utility might not be able to send a POST (like wget, but apparently curl can).
Both of these concerns are valid and need to be taken together. GET can be run just like a normal URL and thus is more versatile, but as you probably understand, less secure.
I hate that. For some reason being the interviewer fills people with an unfounded confidence in their own knowledge. Some act like they're infallible in interviews. The result is that as a candidate you have to avoid harming their egos while at the same time not contradicting them. It can be like walking on eggshells.
Last time I interviewed I pushed back but in a "are you sure? can we look it up?" kind of way and that seemed most palatable to them.
If it's a piece of base64 that a random web visitor uploaded and then another one picked up such that it was activated as a request in their browser, the name which applies is: "cross site scripting attack".