I'm sure some of us who are out of the loop might be wondering: what about the magrittr pipe operator (%>%) that we all know and love?
Luke Tierney explains the move from %>% to a native pipe |> here [1]. The native pipe aims to be more efficient and to address issues with the magrittr pipe, such as complex stack traces.
Turns out the |> syntax is also used in Julia, JavaScript, and F#.
The lambda syntax (\(x) x + 1) is similar to Haskell's (\x -> x + 1).
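As a minimal sketch of the two new pieces of syntax (assuming R >= 4.1; note the right-hand side of |> has to be written as a function call):

```r
# Native pipe: the right-hand side must be a call, e.g. sqrt()
c(1, 4, 9) |> sqrt()       # 1 2 3

# Backslash lambda: shorthand for function(x) x + 1
sapply(1:3, \(x) x + 1)    # 2 3 4
```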
Of course, I should note this is the history for the pipe-forward operator for chaining (reverse) function application used in a programming language. The general concept is even earlier, as attested by the shell syntax for chaining anonymous pipes https://en.wikipedia.org/wiki/Pipeline_(Unix)#History.
Metanote: I was surprised I was unable to find an answer to who invented the (|>) pipe syntax through Google. I could only find this Elixir forum thread https://elixirforum.com/t/which-language-first-introduced-th... which got close but did not have the answer. I am therefore writing this here to hopefully surface it for future searches and "question answering AIs".
And given that I'm currently staring at Isabelle code most of the day for my Master's thesis at the chair of Prof. Nipkow, it's slightly surreal to learn about this here, heh.
The reason for announcing the new lambda syntax at the same time seems to be to enable certain workflows that the magrittr pipe supports. The %>% operator, by default, pipes to the first argument of a function. If you want to pipe to a different argument, you can do:
a %>% func(x, arg2 = .)
It seems like the native pipe doesn't support a placement argument, but you can use the new, more concise lambda operator:
a |> \(d) func(x, arg2 = d)
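A concrete instance of that pattern, with hypothetical strings (note that in released R 4.1 the lambda on the right of |> has to be wrapped in parentheses and called):

```r
# Pipe a character vector into gsub()'s third argument, x
c("a_b", "c_d") |> (\(v) gsub("_", "-", v))()
#> [1] "a-b" "c-d"
```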
A little more verbose, but it's not a very common use case, it's more general, and I'd happily trade a little more verbosity for the rest of the improvements. (That said, I haven't played around with the magrittr 2.0 improvements yet, so maybe the difference is going to end up being less than the presentation suggests.)
The use of "." as an argument is actually probably one of my most common wtf's with pipes in general.
I tend to use it a lot if I'm just piping a vector to base functions (gsub, for instance, takes x as its third argument).
This syntax looks like it makes that a little harder, but the new error messages are going to make everything so much better that I'm totally fine with it.
already means "regress the variable y on all other columns in `my_dataframe`." For big, interactive regressions, it's really natural to write
my_original_dataframe %>%
do_a_bunch_of_transformations() %>%
select(...) %>% # Pull out just the columns you want
lm(y ~ ., data = .)
and god knows how that last line is going to be interpreted. So disambiguating through some mechanism is necessary anyway. A lambda is much better than some temporary variable that just holds the formula `y ~ .`.
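With a lambda, the ambiguity goes away because the piped data frame gets an explicit local name. A sketch using the built-in mtcars data (assuming R >= 4.1, where the lambda must be wrapped in parentheses and called on the right of |>):

```r
# "." in the formula still means "all other columns"; d is the piped data
mtcars |>
  (\(d) lm(mpg ~ ., data = d))()
```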
The zfit package is intended to address this issue, with zlm() and comparable functions that are very thin wrappers around lm() and friends. The only thing they do is flip the argument order so the data comes first, making exactly this use case much simpler. So you can do:
Tbh, I would 1000% rather my coworkers write a lambda function or closure where it's necessary than add a new package dependency just to change the order of arguments in widely used functions.
Plus, I still wouldn't trust the code
cars %>% zlm(dist ~ .)
to necessarily work the way I want, or to work the same way across package versions.
Admittedly, the `|>` JavaScript syntax is complicated by unclear async behavior.
I'm excited for it, though, and if the partial application syntax `func(a, ?)` gets ratified then we'll have a nice concise way of describing operations.
The anonymous function change is probably a (small) mistake.
function(x) {x + 1}
is already logically equivalent to, and from some perspectives arguably a syntactic improvement on,
\(x) x + 1
Giving everyone two ways of doing one thing just means the tutorials will be fragmented and beginners even more confused.
Tierney mentioned that the tidyverse found function(x) too verbose and uses formula syntax instead. Given how often the tidyverse uses the "y ~ x" formula notation, this might actually be picking up deficiencies in R's macro system rather than in the function notation, and the problem got misdiagnosed.
Formula syntax is this stuff [0, 1]. The tidyverse uses it to accomplish some non-model stuff. The one that leaps to my mind is faceting [2]. I'd expect that sort of thing to be handled by macros.
And all the rest of the comment I wouldn't have typed except I'm already replying, since I know this is one of those two-types-of-people-who-don't-change-opinions situations. But...
> not only saves a few keystrokes.
R is secretly a lisp. People can define whatever they want to be whatever they want. Pipes were already implemented in a library (try doing that in Python). Make your own library or bind \ to a keyboard macro or something if your fingers are on the point of crumbling under the stress of those 7 keystrokes.
Defaults that use real words to describe things are good. The function to create a function being function() is eminently reasonable. \() is meaningless and about as useful as a one-letter variable name.
> It will also produce shorter, and clearer, lines of code.
Opinions are very much divided. Code length is only a proxy for load on a reader's short-term memory, which is what matters. \ is going to put more burden on someone if they aren't very familiar with R. Most R coders are not full-time programmers and not very good at R.
I know what formulas are, I just don’t see what’s the connection with the proposed change:
‘\(x) x + 1’ is parsed as ‘function(x) x + 1’
I also know that R has some vestigial Scheme under the hood, but the syntax is not Lisp (it was taken from S). In Common Lisp one could easily use a macro or reader macro, but in R a change in the parser is needed so "\" can be used instead of "function".
Note that the existing “f <- function(...) ...” syntax is not being removed. But I write a lot of code like
wenc's comment (currently top) links to a video where Luke Tierney explains why the magrittr pipe is not optimal, so they are looking for a native solution.
In R all functions are anonymous functions. Functions are created anonymously and then (usually, but not always) assigned to a variable using the assignment operator.
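That is, `function(...)` always produces an unnamed function value, and binding it to a name is a separate step. A tiny illustration:

```r
double <- function(x) x * 2   # create an anonymous function, then bind it
(function(x) x * 2)(10)       # 20: call one without ever naming it
```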
It is, and it's not the first language to use it as such.
But for many programmers it always triggers the 'escape' alarm in the mind, and it will always cause slight discomfort seeing it used in the raw.
IMO it should be like Fennel and just support `lambda` and `λ`. The latter is not even that hard to type, in virtually any free (libre) OS you can just
$ echo '
<Multi_key> <Y> <Y> : "Λ" # GREEK CAPITAL LETTER LAMDA
<Multi_key> <y> <y> : "λ" # GREEK SMALL LETTER LAMDA
' >> ~/.XCompose
If you’re limited by a nonfree OS you should be able to patch the problem with free duct tape like Karabiner or AutoHotKey.
Now imagine doing that same thing for every system you're programming on. And then imagine having to do it for a million different symbols. I'm sure as hell glad you're not in charge of any of this.
> Now imagine doing that same thing for every system you're programming on.
As a matter of fact, I already do, the XCompose way.
yadm clone /path/to/dotfiles/repo.git¹
As for the vim way, why would I use a system without vim²?
> And then imagine having to do it for a million different symbols.
I don’t know other symbols as useful as this, but sure, either of my solutions scales fine (`yadm clone` shouldn’t get bogged down by any repo smaller than the Linux kernel’s). My system's /usr/share/X11/locale/en_US.UTF-8/Compose already has a section for APL’s symbols out of the box.
¹yadm is dumb by the way, any symlink manager + git/hg is probably better
²As it so happens, I do: because vim is bloat and https://sr.ht/~martanne/vis/ fits much better in the ramdisk my OS always runs in — but I’d bet 99.99% ±0.009% of programmers install their OS on an HDD/SSD and don’t care.
These will both come in pretty handy. The first is just a formalization of something that's already available in packages, but the lambda syntax will clean up code a fair bit and make realtime analysis easier to type.
Question by someone who is ignorant but interested in functional programming: what is the closest equivalent to these functions in Python? (Or correct me if I'm asking the wrong question).
I used to love using lambdas in Python, along with map/reduce/filter, but for whatever reason the Python community has turned against it. Map and filter can now be nicely done with list comprehensions, although I still haven't found a decent one-line equivalent for reduce (other than importing functools).
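For reference, the functools import is a one-liner away; a minimal reduce sketch:

```python
from functools import reduce

# Product of a list in one line; 1 is the initial accumulator value
product = reduce(lambda acc, x: acc * x, [1, 2, 3, 4], 1)
print(product)  # 24
```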
There is none and after having used Elixir for a year going back to Python for anything non-trivial input/output parsing feels really cumbersome now.
Other comments have mentioned that functional idioms make code harder to read for devs unfamiliar with the concepts, but the pipe operator IMHO has no downsides (I am not even sure what it really has to do with functional programming, other than that it happens to be used in more functional languages).
> Other comments have mentioned that functional idioms make code harder to read for devs unfamiliar with the concepts, but the pipe operator IMHO has no downsides (I am not even sure what it really has to do with functional programming, other than that it happens to be used in more functional languages).
What it has to do with functional programming is, first, that its right-hand operand is a function, and, second, that it's a technique for unrolling the deeply nested function calls common in expression-oriented functional programming without resorting to intermediate assignments, which are natural in statement-oriented imperative programming but less so for single-use values that aren't independently semantically important in expression-oriented functional code.
I'm not sure what specific "functions" you're talking about, but Python generally encourages a procedural style of programming as opposed to functional. The rationale is that functional code can be really difficult to read if you aren't already familiar with the idioms and terminology, whereas it's pretty easy to mentally parse and understand a `for` loop. So in that sense, list comprehensions are about as far as Python goes in that direction; there is no syntactic equivalent to the pipe operator, and no way to write reduce or similar operations as succinctly as you can in functional languages.
Yup, it's a terrible shame that pandas started off as a base-R clone in Python.
Now, the only time I write base-R like code is in Python, which is pretty weird.
It's also strange as sklearn is beautiful, and in general python libraries are nicer than the equivalents in R, but pandas is a large, warty exception.
And just in brief, here's what the pipe generally does when the syntax isn't available. Suppose you have functions x and y that each take one argument, and |> is the piping syntax. Then
y(1) |> x would be equivalent to writing x(y(1)) in Python.
A pipe merely "pipes" the output of one function as an input to another. For example, | in bash. In Python this can be done the trivial way (by composing) or by using decorators.
Yes, it can be done the trivial way in most languages. For deep nesting, that's ugly and awkward, which is why some languages have piping/composition operators [or threading macros] (sometimes more than one). Python has no close equivalent of a piping operator or threading macro (decorators don't seem helpful at all here).
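The closest Python gets is probably a hand-rolled helper; a minimal sketch of a hypothetical pipe function (not a standard library facility):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread value through each function in turn, left to right."""
    return reduce(lambda acc, f: f(acc), funcs, value)

# Equivalent to str.lower(str.strip("  Hello World "))
print(pipe("  Hello World ", str.strip, str.lower))  # hello world
```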
I wish more languages gave us a "|>" operator. Too many languages settle with dot notation, which confuses encapsulation/method calling with syntactical convenience.
This, along with inferior metaprogramming affordances, is the biggest reason pandas will never be as productive an analyst tool as R's dplyr. In R you can pipe anything into anything. In pandas you're stuck with the methods pandas gives you.
This is also why pandas API is so bloated. A lot of pandas special-case functionality, like dropping duplicate rows, can be replicated in R by chaining together more flexible and orthogonal primitives.
As someone who is now bouncing back & forth between Python and R on a weekly basis, I've been surprised (after making fun of R sometimes) how much I miss the piping when I leave R for Python. Pandas seems so inflexible by comparison, so nitpicky for little gain. I've been surprised again and again how much dplyr supports near-effortless fluency and productivity.
Never really thought I'd be writing that in a public forum.
Personally, I think R as a language is absolutely beautiful. The libraries have tons of warts, and many different styles. And there are perhaps too many object orientation systems built on it. But that you could even build multiple object oriented systems points to how powerful the language is.
I think its functional orientation and the way that for loops are neglected make it seem completely insane to many people. But this is a far smaller fraction of people today than it was in 2000. Back then, Java and C++ didn't even have lambdas. Since then, procedural languages have gained a lot of the "mind-breaking" functional features of Lisp-derived languages. Python and JavaScript have become far more common. All the things that made the language of R "weird" and unusable to the Java/C++ crowd have been adopted elsewhere.
You should try using R; the real-world performance is actually better than Julia's in every way that matters. Yes, Julia has a better data frame object; that's why we use data.table.
Also Wes probably got the idea for row indices from R.
Tbh R's performance is only an issue when you deal with really big datasets. Most of the time R does just fine, and has a lot more libraries than Julia can ever hope to have.
I have used Julia a bit and really enjoyed it. The only reason I do not use it for work is lack of libraries. I know I could 'be the change I want to see in the world' and contribute, but given the pace of things at work I cannot fit that in on the company dime at this time...
Could you describe what kind of libraries you found lacking in Julia? I did get a feeling that lots of long-tail stuff was missing, when I was looking through Julia packages some time ago, but only in a vague "this doesn't seem that exhaustive" sense. Knowing what specifically has been found lacking would be useful.
I think the obvious limitations of Python are a big reason, but probably not the main reason, why pandas isn't orthogonal. The reason pandas is such an ungodly mess is that it must be, in order to be even halfway efficient. When you do try to compose things, or even have the audacity to use a Python lambda or an if statement or whatever, you suddenly suffer a 100x slowdown in performance.
Julia doesn't have these problems, and I've found it so much nicer to use for data analysis. You can even call Python libs, if you really have to.
I'm sure they wanted to not replace magrittr pipes. R is introspective, and some reckless people (like myself) will mess with functions' guts in a few scripts. Replacing the `%>%` function with a syntax symbol will break those scripts. Even with scripts that don't metaphorically shove their hands down the garbage disposal, programmers might've relied on certain behaviors of the magrittr pipe.
The R Core team is very cautious about breaking anything, including universally acknowledged terrible defaults (I never thought `stringsAsFactors = TRUE` would ever go away). They know the majority of R programmers are not experts in programming. These users just want to write a script, debug it, and then use it for years with complete trust in the results.
It seems to have been worth the caution. R has a great reputation for stability. The contrast between my experiences with R and with Python data science tools is stark. Pandas syntax has changed wildly since I started learning it, but R and the tidyverse haven't really changed at all. Admittedly pandas was in rapid and early development at the time.
I think the R reputation for stability is entirely driven by CRAN. If your package doesn't build on the latest version of R, it is marked unavailable. This means that people can build on R-current, in a way that simply isn't possible with the state of python packaging.
Maybe Python just needs a bigger repository with more stringent rules for what will be allowed?
CRAN is definitely one of the best "features" of R. A very strict and official repository that requires human approval. Most often, that human is part of or close to the R Core team. Most guides I've read for getting a package on CRAN have to mention not wasting the time of somebody who's likely a busy statistics professor.
I thought %>% was some sort of macro expansion and that is how magrittr creates pipes. But browsing through the magrittr GitHub repo, it looks like they just define %>% and the other pipes as functions [1].
I don't actually know the relation between magrittr and the RStudio shortcut, but I've always assumed a shortcut for typing the pipe characters exists because RStudio employs Hadley Wickham, who in turn is really big on tidyverse and pipes.
Would %>% mean anything in R if you didn't import magrittr?
All %fun% constructs are just simple functions that can be created by the user and can be used as infix operators. Base R has a few of those: %in%, %o%, %*%, %x%, %%, %/%.
magrittr created %>% which, when used in infix: x %>% f() calls the function on the right side with the argument on the left side f(x).
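Defining one yourself is a one-liner; a sketch of a hypothetical %then% operator:

```r
# Any function whose name is wrapped in % % can be used infix
`%then%` <- function(lhs, rhs) rhs(lhs)
4 %then% sqrt       # 2
"abc" %then% nchar  # 3
```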
You can't do that if you want it to be a symbol of its own. The wrapping "%" syntax is native functionality, so making `%>%` into a standalone symbol would break the consistency of that.
I'm pretty happy that they didn't, because silently replacing one operation with a similar operation that inevitably has different bugs and different ways of handling edge cases would be pretty frustrating. Letting both live side-by-side as people transition would be my preference.
This sounds promising, but how do we type that pipe easily if we're going to be using it all the time? I actually like %>% because it's easy to reach the keys and hammer it out. Agreed on the ugly stack traces.
I've literally written something similar 4 times in the last month, I hate having to load the temp function to be sure the next line runs. This is such a useful addition.
I've seen them used with data.table, but I don't use them myself. Reason being that I don't want to load a lib just to make my chains look a bit better. I usually have chains short enough that it's okay doing
Well, afaik it isn't really a pipe but syntactic sugar. A pipe streams data from one output stream to an input stream. This rewrites the code as if the input were passed as an argument.
That's exactly right. This is useful in the context of a large portion of R's most popular data wrangling packages known as the "Tidyverse". These packages used an equivalent pipe function that was non-native and had some perf and understandability issues.
There is a big difference between what is allowed from within a package and what is possible in native implementation. As an example - currently this pipe operator seems to be implemented at the parser level. The parser simply takes the pipe expression and translates it into standard nested function call f(g(x)). Which means - there will be almost no cost in speed (%>% was notoriously slow). In addition the user will get the usual error stack in case something within this pipe fails.
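You can see the parser-level rewriting directly, since quote() returns the parsed expression (assuming R >= 4.1):

```r
# The pipe never exists in the parsed code; it is already a nested call
quote(x |> f() |> g())
#> g(f(x))
```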
A lambda, at least as understood in a functional programming context, is pure.
Whereas what is proposed here is simply syntactic sugar for creating an anonymous function; from the little that is said in the announcement there is no reason to think this syntax would provide any guarantees that state changes due to lexical scoping won't affect the function's output.
Personally, I'm surprised R is still in active development, when the main use case for R (at least when I was using it) was statistical analysis. Python with its libraries (a lot of them, I believe, ported from R) just does it nicer, and faster.
R vs. Python flamewars always strike me as a Budweiser vs. Miller kind of argument. Neither is really a "craft beer" of programming languages. Neither is super remarkable as a programming language. Both made a bunch of pragmatic tradeoffs to appeal to large audiences that share similar values; both are "average joe" beers.
Python has comparative advantages over R in production roles. R has comparative advantage in statistical libraries, visualization, and meta programming. Neither are exemplars for production deployment or meta programming (R is an exemplar for stats libraries however).
Nah, it's not nicer. dplyr is way better than pandas. But there is no end to the supply of Python fanbois who only know Python and assume that whatever's in Python just has to be better
I don't mind pandas so much, although dplyr is quite nice IMO (feels like natural language and declarative/SQL like, whereas pandas ends up with lots of procedural idioms).
ggplot is something that I don't think matplotlib is comparable to at all, though. I am so much faster at iterating on a visualization with R/ggplot than Python/matplotlib. Maybe it is my tooling, though. How about others who have used both? What are your experiences?
No, same here. I tried to recreate some covid rate graphs in Python. The ggplot code did faceting and fitted a LOESS to the data. Nothing ground-breaking, but it really hit the limits of what seaborn was able to do, and I wasn't able to tinker with it much further. It got to the point where, to make it look good, I needed to calculate all the curves manually.
Pandas is used in some top 10 banks for analytics. Its performance is abysmal at the scale used there. Nobody wants to invest resources in training analysts to write high performance code so here we are. I have never viewed SQL more highly after seeing the mess that analysts make when writing imperative code.
No surprise there - pandas encourages ugly, inefficient code with its bloated, unintuitive API.
Once I was a lead on a new project and asked the intern to write some basic ETL code for data in some spreadsheets. I said she could write it in Python if she wanted, because "Python is good for ETL", right?
This intern was not dumb by any means, but she wrote code that took 5 minutes to do something that can be done in <1 second with the obvious dplyr approach.
Also, if your bank analysts pick up dplyr, they can use dbplyr to write SQL for them :)
R’s meta programming facilities are head and shoulders above Python’s, which I think explains the brilliance of dplyr and dbplyr. But I feel like with R you have to scrape back a bunch of layers to get to the Schemey parts. I’ve always wondered what Hadley and Co would have done with dplyr and dbplyr had they had something like Racket at their disposal.
I was offended the first time I encountered R's nonstandard evaluation, but it didn't take long to accept it. Now I wonder why anyone would want to write `mytable.column` a million times when it's obvious from context what `column` refers to, and the computer can reliably figure it out for you with some simple scoping rules. It's a superior notation that facilitates focus on the real underlying problem, and data analysts love that.
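Base R's subset() is a small example of that kind of nonstandard evaluation:

```r
# cyl and mpg are resolved inside mtcars, not in the caller's environment
subset(mtcars, cyl == 6 & mpg > 20)

# the standard-evaluation spelling of the same thing
mtcars[mtcars$cyl == 6 & mtcars$mpg > 20, ]
```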
IMO they should just bite the bullet and learn proper SQL. I say this as a data scientist who learned SQL later than C, Matlab, R, Python/Pandas (though earlier than PySpark).
R’s data.table package is faster at these things out of the box than any single instance of a database server I’ve encountered. This is frustrating because I’m trying to explain some systemic issues we suffer by not using a relational database, but it’s really hard to make my case when data.table is one install.packages away and a version upgrade from Postgres 9 to something a little faster is gatekept by bureaucracy. I’ve been trying for months!
Thanks, I’m checking it out, it seems pretty interesting to keep an eye on. Lots of properties that would be useful in our shared computing environment like not requiring root or Docker.
Pandas/python is amazingly prevalent at trading firms. And everyday, we bitch about the performance, we bitch about the stupid API, we bitch about the GIL, the lack of expressiveness. The list goes on and on. But for some braindead reason, we never switch to Julia. It's masochistic.
I do think Julia is a far better language for numerics than python, but compared to DataFrames.jl, pandas can be quite fast. I know, "but it's easier to make it faster in Julia". Last I checked `sort(df, :col)` was significantly slower than `df[sortperm(df[:col])]`. Someone actually has to go through and make these libraries fast.
Second issue, in my field (bioinformatics) the script is still a pretty common unit of code. Without cached compilation being a simple flag, Julia often is slower.
Yeah, that's a good point. DataFrames.jl starts to really shine when the cookie-cutter pandas functions aren't adequate for what you need to do. DataFrames.jl can certainly be slower in some cases, but you should expect a consistent level of performance no matter what you do. This is a far cry from pandas, which tanks by large factors when you start calling Python code vs C code.
In regards to Julia's compilation problem, you can use https://github.com/JuliaLang/PackageCompiler.jl to precompile an image, allowing you to avoid paying the JIT performance penalty over and over again.
About 8 years ago I agreed with this point, but with the development of tidyverse, R has become far superior to Python for anything involving dataframes.
I teach classes involving data analysis, some in Python and some in R (different topics). The amount of time the Python students spend fighting pandas---looking up errors, trying to parse the docs, trying out new arcane indexing strategies---is obscene. On the other hand, the R students progress rapidly. I'd move everything to R if I could, but Python is still better for NLP pipelines.
I know R because that's what we used at my first company. I would love to switch to Python/Pandas but I'm comfortable with R and it does everything I need it to with one exception over ten years of heavy use.
Python is wonderful but the cognitive load for switching in industry and academia without a clear cost benefit isn't worth it to most people I know in my shoes. I encourage new coders to learn Python but discounting R feels a bit asinine.
Hadley is still actively doing work for R, which has led to a graphing package that is substantially better than anything in Python (last I checked). I have no doubt that Python will steal it and implement it eventually (as they should), but R is still doing firsts that Python hasn't (note the native implementation of piping; they're late to the party on lambda functions, obviously).
I made the switch years ago and there is lots that python does better. I really, really wish for a perfect port of dplyr and ggplot2. Those are what I truly miss, everything else I'm pretty happy with.
R already has a better lambda than Python, simply by virtue of having first class functions. This is just a bit shorter notation for something that already existed.
Yeah, basically this. I assume HN has a higher number of people who work in ML jobs in fields like finance etc. If you're working in any sort of social/public health research, then most new methods seem to be implemented as R packages. I'm thinking of things like new methods for propensity score, sequential trial designs etc. Also seems to be the preferred language on the Stats Stack Exchange posts.
Any sort of statistical or econometric estimator is typically published as an R package.
So for example, I recently saw a paper with a quite complex estimator based on dynamic panels and network (or spatial) interdependence that could identify missing network ties.
For that, an R package exists.
If you want to use it in Python, you'd have to replicate a whole estimation infrastructure yourself, starting by extending the basic models in statsmodels.
That example is quite typical in my opinion.
Like I said, really like to code in Python and I don't like R all that much.
But if someone says: "Why would you use R, Python is better", then we can confidently say the person does not know what R is actually used for.
[1] https://www.youtube.com/watch?v=X_eDHNVceCU&feature=youtu.be...