It strikes me that this is exactly the same scenario, in reverse, that we have with people being convicted of "hacking" a website by entering a URL that wasn't supposed to be exposed. Things like "consent" and "authorization" are murky when we delegate our will to computer programs like browsers and servers.
If URL "hacking" is illegal, then we have decided as a society that persuading a piece of software to do something does not equate to informed consent on the part of the person operating it (and by extension that we're meant to make some sort of guess as to what they do intend).
I strongly disagree with this interpretation. In my view, a person maintaining infrastructure is in a position of power and should therefore be held to specific standards in how they treat users.
Users, on the other hand, are just people. Being a user of a service does not grant you power over other users (not out of the box, anyway). Sure, you can scan for vulns and/or follow a public link to a top-secret document: in my view there's nothing wrong with that. Now reverse the situation: why should a remote server administrator dictate the computing performed on your machine?! Is it ethical for website operators to start scanning your local network?
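That local-network scanning isn't hypothetical, by the way: any page you visit can probe private addresses from inside your browser, because even when CORS blocks a script from reading a response, the error type and latency still leak whether a host answered. A minimal sketch of the idea (function names and the subnet are illustrative; assumes a fetch-capable runtime such as a browser or Node 18+):

```javascript
// Probe one private IP. We can't read the response cross-origin,
// but whether the request fails fast ("refused" = host exists) or
// times out ("nothing there") is itself a signal.
async function probeHost(ip, timeoutMs = 500) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  const start = Date.now();
  try {
    await fetch(`http://${ip}/`, { signal: controller.signal });
    return { ip, alive: true, ms: Date.now() - start };
  } catch (e) {
    // Aborted (timed out) or errored: record the latency either way;
    // here we conservatively treat any failure as "not alive".
    return { ip, alive: false, ms: Date.now() - start };
  } finally {
    clearTimeout(timer);
  }
}

// Sweep a /24 the way a hostile page could, in parallel.
async function scanSubnet(prefix = "192.168.1") {
  const probes = [];
  for (let i = 1; i <= 254; i++) {
    probes.push(probeHost(`${prefix}.${i}`));
  }
  return (await Promise.all(probes)).filter((r) => r.alive);
}
```

The point isn't the exact code, it's that this runs with zero user awareness the moment JS executes.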
A service operator knows about threats, hopefully has countermeasures in place, and can always ban you (or specific requests) if it comes to that. A user is mostly helpless, especially when it comes to computations performed by a script unknowingly downloaded and executed from a server. How many users are even aware of what RCE means, or that a web browser with JS enabled is essentially RCE-as-a-service?