Wow, if I'm understanding that correctly, R's function arguments can have default values that depend on the values that variables inside the function have at the moment the argument is first used. That seems insane!
It's a sort of lisp-y idea. Arguments passed to functions get "quoted", so the function can inspect or modify the expression and its scope/environment before evaluating it. Like others have mentioned, it's what makes R quite good for developing DSLs like R's formula language or dplyr. (And other conveniences, like auto-labeled plots, etc.) But much like lisp macros, it can produce unpleasant surprises if not used wisely.
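A minimal sketch of that quoting behaviour (the function name `label_of` is my own, just for illustration): a function can recover the unevaluated expression behind an argument via `substitute()`, which is how plots end up auto-labeled with the caller's expression rather than its value.

```r
y <- 10

# substitute(x) retrieves the unevaluated expression held by the
# promise for argument x; deparse() turns it back into source text.
label_of <- function(x) deparse(substitute(x))

label_of(10 * y)   # the expression itself, not its value: "10 * y"
```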
If you look at attempts to do this stuff in python---e.g. patsy, which emulates R's formula DSL, and another project whose name I don't recall that emulates dplyr---you see they have to resort to parsing and eval'ing strings instead of working on expressions (language objects that represent ASTs), which is not nearly as nice or safe.
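For contrast, a rough illustration of what "working on expressions" looks like on the R side: a formula is already a language object, so you can walk or rewrite its AST directly, with no string parsing involved. (The rewrite below is just a toy.)

```r
f <- y ~ x + z

# A formula is a language object: a call to `~` with two arguments.
class(f)   # "formula"
f[[1]]     # the operator: `~`
f[[3]]     # the right-hand side as an AST: x + z

# Rewriting the AST directly, no string munging:
rhs <- f[[3]]
rhs[[1]] <- as.name("*")   # turn x + z into x * z
rhs                        # x * z
```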
Edit: But just to confirm your surprise -- yes, delayed evaluation can definitely bite you in many contexts if you're used to more traditional languages.
> y <- 10
> wat <- function(x=10*y) { y = -y; x }
> wat()
[1] -100
But (1) good library writers don't play these kinds of tricks, so it doesn't come up too often in practice; and (2) when writing and debugging my own code, I haven't found it too hard to reason about, anticipate, and avoid these effects. Forcing an argument's evaluation up front makes the behaviour predictable:
> y <- 10
> less_wat <- function(x=10*y) { force(x); y = -y; x }
> less_wat()
[1] 100
It's odd, but it enables a lot of useful things (e.g. magrittr's pipe operators). It's even possible to write functions that change their behaviour depending on the name they were called by.
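One way that last trick can work (a sketch, assuming the function is invoked by name rather than anonymously; `whoami` and `alias` are made-up names): `sys.call()` hands the function the unevaluated call that invoked it, including the name it was invoked under.

```r
whoami <- function() {
  # sys.call() returns the call that invoked this function;
  # its first element is the name it was called by.
  as.character(sys.call()[[1]])
}

alias <- whoami
whoami()   # "whoami"
alias()    # "alias"
```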